\section{The nuclear mean field} The cross sections involving neutrons and protons impinging on nuclei display a marked energy variation generally interpreted as an interference between the incident and the transmitted waves. This occurrence in turn implies a mean free path for collisions between the nuclear constituents that is large compared not only to the internucleonic distance, but even, sometimes, to the dimensions of the nucleus itself. This finding strongly suggests a mean field approach to the structure of the nucleus, especially as far as its ground state is concerned. The nuclear mean field is implemented at the empirical level with the shell model. At the theoretical level a natural treatment resorts to the Hartree--Fock (HF) self--consistent theory, in a frame viewing nuclei as self--bound composite systems of nucleons interacting via a static two--body potential and governed by non--relativistic quantum mechanics. Care is however required in handling the HF theory for atomic nuclei. Indeed any realistic nucleon--nucleon (NN) force $V_{NN}$ embodies so much repulsion that the expectation value of the nuclear Hamiltonian in the HF ground state is far from being sufficiently attractive. In other words the HF wave function does not prevent two nucleons from coming close to each other, where they experience a violent repulsion. The Brueckner theory provides a remedy to this flaw. Indeed it yields an effective interaction $G$ between two nucleons in a nucleus such that the interaction of two ``uncorrelated'' nucleons through $G$ equals the interaction of two ``correlated'' nucleons through $V_{NN}$. Formally this result is achieved in the framework of perturbation theory by summing up (with the Bethe--Goldstone equation) the infinite set of ladder diagrams representing the scattering of two nucleons in the medium out of the Fermi sea, the other nucleons remaining passive.
Does the Brueckner--Hartree--Fock (BHF) theory account satisfactorily for the nuclear ground state properties? While the answer to this question is negative (the BHF mean field yielding too large a nuclear central density), an important lesson has nevertheless been learned from BHF. It amounts to recognizing that the perturbative corrections to the BHF mean field, necessary to reconcile theory and experiment, should not be computed by expanding in $G$, but rather by grouping together diagrams involving three, four, etc. nucleons repeatedly interacting among themselves through $G$ ({\it hole line} expansion). Although a proof of the convergence of the hole line expansion has never been provided, three-- and four--hole line contributions have actually been computed in nuclear matter; to achieve the same goal in finite nuclei, however, has proved to be an almost impossible task. Accordingly shortcuts have been sought to incorporate the effects going beyond BHF while remaining in a self--consistent mean field framework for the nuclear ground state.\footnote{We do not mention, for brevity, relativistic mean field approaches which also look promising in reproducing the nuclear ground state properties.} In this connection a successful approach allows for a density dependence of the interaction $G$, in addition to the one naturally induced by the Pauli operator in the Bethe--Goldstone equation. Indeed this procedure conveniently simulates the energy dependence and non--locality of $G$. Moreover it explains quite successfully the experimentally well established fact that in a nucleus the single particle orbits below the Fermi level are not 100\% occupied, as the HF (but not the BHF) approach would imply. Clearly the price to be paid for the simplicity (and the success) is the introduction of a certain amount of phenomenology empirically fixing the local density dependence of $G$.
In the above outlined scheme, Negele\cite{Negele} has been able to impressively reproduce over a wide range of momentum transfer the data of elastic electron scattering on several nuclei (like $^{16}$O, $^{40}$Ca and $^{208}$Pb). These findings strongly support the view that most of the physics of the nuclear ground state can indeed be embodied in a ``mean field''. Should this be the case, then we would actually know and understand the nucleons' distribution in nuclei. However, before drawing this conclusion, one should recall that electrons only probe the {\it proton} distribution. It would therefore be desirable to test the mean field scheme on other observables, the most natural one being the neutron distribution. Although experimental information on the latter has been gathered in the past, e.g. with pion scattering on nuclei, our knowledge of it remains rather poor. Even the recurring question of whether or not the nuclear surface is neutron rich cannot presently be answered with certainty. Furthermore the present workshop stresses the urgency of reaching an accurate knowledge of the neutron distribution in order to achieve a precise interpretation of the atomic parity--violating (PV) experiments. Thus in the following we propose a method to measure the neutron distribution in the nuclear ground state, which is based on PV polarized electron--nucleus scattering.
\section{The formalism of PV electron scattering} The helicity asymmetry, as measured in the scattering of right-- and left--handed electrons off nuclei, is defined as follows \begin{eqnarray} {\cal A} &&= \frac{d^2\sigma^+ - d^2\sigma^-} {d^2\sigma^+ + d^2\sigma^-} \nonumber\\ &&={\cal A}_0\frac{v_L R^L_{AV}(q,\omega) + v_T R^T_{AV}(q,\omega) + v_{T'} R^{T'}_{VA}(q,\omega)} {v_L R^L(q,\omega) + v_T R^T(q,\omega)} \label{asym} \end{eqnarray} where \begin{equation} v_L = \left(\frac{Q^2}{{\vec q}^2}\right)^2, \label{vL} \end{equation} \begin{equation} v_T = \frac{1}{2}\left\vert \frac{Q^2}{{\vec q}^2}\right\vert + \tan^2\frac{\theta}{2} \label{vT} \end{equation} and \begin{equation} v_{T'} = \sqrt{\left\vert \frac{Q^2}{{\vec q}^2}\right\vert + \tan^2\frac{\theta}{2}} \tan\frac{\theta}{2} \label{vTp} \end{equation} are the usual lepton factors, $\theta$ is the electron scattering angle and $Q^2=\omega^2-{\vec q}^2 < 0$ is the space--like squared four--momentum transfer of the vector boson carrying the electromagnetic ($\gamma$) or the weak neutral ($Z_0$) interaction. In (\ref{asym}) the nuclear and nucleon structure is embedded both in the parity conserving (electromagnetic) longitudinal and transverse ($R^L$ and $R^T$) response functions and in the parity violating (weak neutral) ones. The latter likewise split into vector longitudinal and transverse ($R^L_{AV}$, $R^T_{AV}$) and axial transverse ($R^{T'}_{VA}$) responses, the first (second) index in the subscript referring to the vector/axial nature of the weak neutral leptonic (hadronic) current. Finally the scale of the asymmetry is set by \begin{equation} {\cal A}_0 = \frac{\sqrt{2}G m^2_{\scriptscriptstyle N}}{\pi\alpha} \frac{|Q^2|}{4m_{\scriptscriptstyle N}^2} \approx 6.5\times10^{-4}\tau \qquad \left(\tau=\frac{|Q^2|}{4m_{\scriptscriptstyle N}^2} \right) \label{A0} \end{equation} in terms of the electromagnetic ($\alpha$) and Fermi ($G$) coupling constants ($m_{\scriptscriptstyle N}$ is the nucleon mass).
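For orientation, the kinematical factors (\ref{vL})--(\ref{vTp}) and the scale (\ref{A0}) are easily evaluated numerically. The following sketch uses assumed standard values of the constants; it is an illustration, not code from the original analysis:

```python
import math

# Assumed physical constants (GeV units); illustrative values only.
ALPHA = 1.0 / 137.036      # fine-structure constant
G_FERMI = 1.16637e-5       # Fermi coupling constant [GeV^-2]
M_N = 0.9389               # nucleon mass [GeV]
HBARC = 0.19733            # hbar*c [GeV fm], converts q from fm^-1 to GeV

def lepton_factors(q, omega, theta):
    """Lepton factors v_L, v_T, v_T' of eqs. (vL)-(vTp); q, omega in GeV,
    theta in radians; Q^2 = omega^2 - q^2 is space-like (negative)."""
    Q2_over_q2 = (omega**2 - q**2) / q**2
    t = math.tan(theta / 2.0)
    v_L = Q2_over_q2**2
    v_T = 0.5 * abs(Q2_over_q2) + t**2
    v_Tp = math.sqrt(abs(Q2_over_q2) + t**2) * t
    return v_L, v_T, v_Tp

def A0(Q2_abs):
    """Scale of the asymmetry, eq. (A0): sqrt(2) G m_N^2/(pi alpha) * tau."""
    tau = Q2_abs / (4.0 * M_N**2)
    return math.sqrt(2.0) * G_FERMI * M_N**2 / (math.pi * ALPHA) * tau

# Elastic scattering at q = 1.75 fm^-1 (omega ~ 0 for a heavy target):
q = 1.75 * HBARC
print(lepton_factors(q, 0.0, math.radians(30.0)))
print(f"A0 = {A0(q**2):.2e}")
```

The prefactor evaluates to roughly $6.3\times10^{-4}$, consistent with the $6.5\times10^{-4}$ quoted in (\ref{A0}).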
To gain information on the structure of nuclei and nucleons the standard Coulomb, electric and magnetic multipole decompositions \begin{equation} R^L(q,\omega) = \sum_{J\ge 0} F_{CJ}^2 (q) \label{RL} \end{equation} and \begin{equation} R^T(q,\omega) = \sum_{J\ge 1}\left\{ F_{EJ}^2(q) + F_{MJ}^2(q) \right\} \label{RT} \end{equation} for the parity conserving (electromagnetic) and \begin{eqnarray} R^L_{AV}(q,\omega) &=& a_A\sum_{J\ge 0} F_{CJ}(q){\widetilde F}_{CJ}(q) \label{RLAV}\\ R^T_{AV}(q,\omega) &=& a_A\sum_{J\ge 1} \left\{ F_{EJ}(q){\widetilde F}_{EJ}(q) + F_{MJ}(q){\widetilde F}_{MJ}(q)\right\} \label{RTAV}\\ R^{T'}_{VA}(q,\omega) &=& -a_V\sum_{J\ge 1} \left\{ F_{EJ}(q){\widetilde F}_{MJ_5}(q) + F_{MJ}(q){\widetilde F}_{EJ_5}(q)\right\} \label{RTPAV} \end{eqnarray} for the parity violating (weak neutral) responses are performed. In the above formulas $\omega$ is assumed to be fixed, so that the responses actually become functions of $q$ only. Furthermore the Standard Model for the electroweak interaction is assumed: thus the vector and axial--vector leptonic couplings at tree level read \begin{eqnarray} a_V &=& 4\sin^2\theta_W - 1 \label{av}\\ a_A &=& -1 \label{aa} \end{eqnarray} in terms of the Weinberg angle. Finally the form factors, chosen to be real, with (without) a tilde are the weak--neutral (electromagnetic) ones. They, of course, split into isoscalar and isovector components according to the isospin decomposition \begin{equation} J_{\mu}= \xi^{(0)}\left(J_\mu\right)_{00} + \xi^{(1)}\left(J_\mu\right)_{10} \label{current} \end{equation} of the hadronic current. Again, on the basis of the Standard Model at tree level, one has \begin{equation} \xi^{(0)}=\xi^{(1)} = 1 \label{csiem} \end{equation} in the electromagnetic sector.
In the weak neutral sector, instead, one has \begin{eqnarray} \xi^{(0)}&=&\beta_V^{(0)}= -2\sin^2\theta_W \simeq -0.461, \label{csiw1}\\ \xi^{(1)}&=&\beta_V^{(1)}= 1-2\sin^2\theta_W \simeq 0.538 \label{csiw2} \end{eqnarray} for the vector coupling and \begin{equation} \xi^{(0)}=\beta_A^{(0)}=0, \qquad \xi^{(1)}=\beta_A^{(1)}=1 \label{csiwax} \end{equation} for the axial one. All the above formulas remain valid in the approximation of exchanging only one vector boson between the lepton and the hadron, and up to a possible parity admixture in the nucleon itself (anapole moment of the nucleon) or in the nuclear states. The latter would stem from parity--violating components in the nucleon--nucleon interaction. These items will not be dealt with here. For a comprehensive treatment we refer the interested reader to the specialized literature on the subject. \section{Elastic polarized electron scattering from spin zero, isospin zero nuclei} As pointed out a long time ago by Feinberg and Walecka\cite{Feen}, formula (\ref{asym}) applied to the elastic scattering of polarized electrons from a spin--zero, isospin--zero nuclear target leads to the simple expression \begin{equation} {\cal A} = {\cal A}_0 2 \sin^2\theta_W \label{asymel} \end{equation} for the asymmetry. It thus seemed that the opportunity was there to address the physics of the Standard Model in the low energy regime, since (\ref{asymel}) is a ``model--independent'' expression. However things are not so simple, because the isospin purity of the nuclear states, on which (\ref{asymel}) relies, is not realized in nature: indeed the proton and neutron quantum states are different in a nucleus.
Accordingly, to account for the isospin breaking, (\ref{asymel}) should be recast as follows \begin{eqnarray} {\cal A}&&={\cal A}_0 a_A\beta_V^{(0)} \left\{ {\displaystyle \frac {1+ {\displaystyle\frac{\beta_V^{(1)}}{\beta_V^{(0)}}} {\displaystyle\frac{<J_i |{\hat M}_{0;10}| J_i>} {<J_i |{\hat M}_{0;00}| J_i> }} } {1+ {\displaystyle\frac{<J_i |{\hat M}_{0;10}| J_i>} {<J_i |{\hat M}_{0;00}| J_i> }} }} \right\} \label{nuclasym}\\ &&\simeq {\cal A}_0 2\sin^2\theta_W \left\{ 1 + \left(\frac{\beta_V^{(1)}}{\beta_V^{(0)}} -1\right) {\displaystyle \frac {<J_i |{\hat M}_{0;10}| J_i> }{<J_i |{\hat M}_{0;00}| J_i>} } \right\} \nonumber \\ &&= {\cal A}_0 2\sin^2\theta_W \left\{ 1 - \frac{1}{2\sin^2\theta_W} {\displaystyle \frac {<J_i |{\hat M}_{0;10}| J_i> }{<J_i |{\hat M}_{0;00}| J_i>} } \right\}\, . \nonumber \end{eqnarray} The above formula should of course be used ``cum grano salis''. Indeed the expansion in the arguably small ground state matrix element of the {\it isovector} monopole operator ${\hat M}_{0;10}$ is warranted except where the ground state matrix element of the {\it isoscalar} monopole operator ${\hat M}_{0;00}$ is itself vanishing or very small, which happens, of course, at or close to the diffraction minima of the elastic cross--sections. Thus, barring these small domains, one would like to estimate, in the formula for the asymmetry \begin{equation} {\cal A}={\cal A}_0 2\sin^2\theta_W \left[ 1 + \Gamma(q)\right]\ , \label{asym2} \end{equation} the impact of the nuclear dependent term $\Gamma(q)$, whose definition follows by comparing (\ref{nuclasym}) with (\ref{asym2}). Before specifically addressing this issue, let us first answer the question: given that $\Gamma(q)$ is not zero, is it still small enough to make a measurement of $\sin^2\theta_W$ with parity violating electron scattering worth attempting? Possibly this was the case a decade ago.
Indeed around 1990 the value of the Weinberg angle quoted in the literature read \begin{equation} \sin^2 \theta_W = 0.227\pm 0.005 \, , \label{Wein} \end{equation} namely it was given with a precision of 2.2\%. In such a condition a meaningful determination of $\sin^2\theta_W$ would accordingly have required, on the one hand, a measurement of the asymmetry to an accuracy of $1\div 2\%$ and, on the other, the assumption that the isospin impurity in a nucleus lies below such a level. Let us now see how these figures translate into a kinematical constraint. Given that an accuracy of $10^{-7}$ could be reached in measuring ${\cal A}$, a test of the Standard Model, as expressed by (\ref{Wein}), would have had a good chance to be performed provided ${\cal A}\ge 10^{-5}$, in turn implying [from (\ref{A0})] $q \ge 1.75$~fm$^{-1}$. Now, again from (\ref{A0}), it follows that ${\cal A}$ grows with $q$, but, at the same time, the elastic form factor falls off with $q$: one would thus choose the range \begin{equation} 1.75 \le q \le 3.5 \,{\mathrm fm}^{-1} \label{range} \end{equation} as a reasonable compromise between these two opposite requirements. The question to be addressed was (and is) then: how large is $\Gamma (q)$ in the range (\ref{range})? Is $\Gamma(q) \le 10^{-2}$? Whatever the answer to these questions might be, today the above argument is of course untenable, since now \begin{equation} \sin^2\theta_W = 0.23055 \pm 0.00041 \, , \label{Weinlast} \end{equation} i.e., the Weinberg angle is known with an accuracy of $0.18\%$. Yet for other relevant observables, for example the strangeness content of the nucleon, a reliable theoretical handling of $\Gamma (q)$ remains crucial, although a generalization of the expression (\ref{asym2}) for the asymmetry is required. This issue will be addressed in the next Section.
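The kinematical constraint quoted above can be checked in a few lines. The sketch below assumes elastic kinematics with $|Q^2|\simeq {\vec q}^2$ and standard values of the constants (an illustration, not the original estimate):

```python
import math

# Assumed constants (GeV units); illustrative sketch only.
ALPHA, G_FERMI, M_N, HBARC = 1.0 / 137.036, 1.16637e-5, 0.9389, 0.19733
SIN2_W = 0.227                     # the ca.-1990 value of eq. (Wein)

prefactor = math.sqrt(2.0) * G_FERMI * M_N**2 / (math.pi * ALPHA)  # ~6.5e-4
# For elastic scattering A = prefactor * tau * 2 sin^2(theta_W),
# with tau = q^2/(4 m_N^2); demanding A >= 1e-5 fixes the minimum q.
tau_min = 1.0e-5 / (prefactor * 2.0 * SIN2_W)
q_min = math.sqrt(4.0 * M_N**2 * tau_min) / HBARC   # in fm^-1
print(f"q_min ~ {q_min:.2f} fm^-1")
```

With these inputs the minimum momentum transfer comes out close to the $1.75$~fm$^{-1}$ quoted in the text.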
\section{A simple model for isospin breaking} The isospin symmetry is broken in atomic nuclei by the Coulomb force, which pushes the proton orbits outward with respect to the neutron ones. On the other hand the strong proton--neutron interaction tries to equalize the proton and neutron Fermi energies, thus acting in the direction of restoring the isospin symmetry: the balance between these two effects is believed to leave a generally modest symmetry breaking in nuclei, which thus lends itself to a perturbative treatment. To explore this physics Donnelly {\it et al.}\cite{Donn} worked out a simple model characterized by two monopole states ($J^\pi =0^+$), one isoscalar and one isovector, mixed by the isospin breaking interaction, in particular the Coulomb one. In this scheme the dominantly isospin $T = T_0$ ground state is actually represented by the superposition \begin{equation} |``T_0''> = \cos\chi\,|T_0> + \sin\chi\, |T_0+1>\, , \label{T0} \end{equation} and the dominantly $T_1 = T_0 + 1$ excited state by the orthogonal combination \begin{equation} |``T_1''> = -\sin\chi\,|T_0> + \cos\chi\, |T_0+1>\, . \label{T1} \end{equation} The associated ground state matrix elements of the isoscalar and isovector monopole Coulomb operators then read, respectively, \begin{eqnarray} &&<``T_0''|{\hat M}_{0;00}(q)|``T_0''> \nonumber \\ &&\qquad = \cos^2\chi\,<T_0|{\hat M}_{0;00}|T_0> + \sin^2\chi\, <T_0+1|{\hat M}_{0;00}|T_0+1> \label{M00me} \end{eqnarray} and \begin{eqnarray} &&<``T_0''|{\hat M}_{0;10}(q)|``T_0''> = \cos^2\chi\,<T_0|{\hat M}_{0;10}|T_0> \label{M10me} \\ && + \sin^2\chi\, <T_0+1|{\hat M}_{0;10}|T_0+1> + 2\sin\chi\cos\chi\, <T_0+1|{\hat M}_{0;10}|T_0>\, . \nonumber \end{eqnarray} Let us then consider $N = Z$ nuclei.
The Clebsch--Gordan (CG) coefficients entering into the reduction of the above matrix elements in isospace are 1 and $1/\sqrt{3}$, respectively, in formula (\ref{M00me}), whereas in (\ref{M10me}) only the third term survives and the associated CG is $1/\sqrt{3}$. One thus obtains for the asymmetry, in leading order of the mixing angle $\chi$, the expression \begin{eqnarray} {\cal A} &&= {\cal A}_0 a_A {\displaystyle\frac {\beta_V^{(0)}<0^+;0\|{\hat M}_{0;00}\|0^+;0> + \beta_V^{(1)}2\chi\frac{1}{\sqrt{3}}<0^+;1\|{\hat M}_{0;10}\|0^+;0>} { <0^+;0\|{\hat M}_{0;00}\|0^+;0> + 2\chi\frac{1}{\sqrt{3}}<0^+;1\|{\hat M}_{0;10}\|0^+;0>} } \nonumber \\ &&\quad \simeq {\cal A}_0 2\sin^2\theta_W \{1 +\Gamma(q)\} \label{asym3} \end{eqnarray} where \begin{equation} \Gamma(q) = 2\left(\frac{\beta_V^{(1)}}{\beta_V^{(0)}} - 1\right) \chi {\cal R}(q) = -\frac{1}{\sin^2\theta_W}\chi {\cal R}(q) \label{Gamma} \end{equation} and \begin{equation} {\cal R}(q) = \frac{1}{\sqrt{3}} \frac{<0^+;1\|{\hat M}_{0;10}\|0^+;0>}{<0^+;0\|{\hat M}_{0;00}\|0^+;0>} = \frac{F_{C0}(0^+;``1'';``0''\to 0^+)} {F_{C0}(0^+;``0'';``0''\to 0^+)}\, . \label{Erre} \end{equation} In the above the double bar matrix elements are meant to be reduced in isospace. Note that ${\cal R}(q)$ simply represents the ratio between the inelastic and the elastic form factors associated with the two monopole states of our $N = Z$ nucleus. Explicit calculations of $\Gamma(q)$ for a few nuclei have been performed by Donnelly {\it et al.}\cite{Donn} in a Woods--Saxon single--particle wave function basis with various effective interactions and in different configuration spaces. We display in Fig.~1a and 1b their results for $\Gamma(q)$ in C$^{12}$ and Si$^{28}$. The singularities in the curves should be disregarded since they do not have, as previously discussed, any physical significance.
Although the results of ref.\cite{Donn} are model dependent (they change significantly according to the effective interaction employed), they convey the message that for light $N = Z$ nuclei like C$^{12}$ the isospin breaking remains tiny indeed, below 1\% over the whole range (\ref{range}) of $q$. \begin{figure}[t] \mbox{\epsfig{file=fig1.ps,width=0.9\textwidth}} \vskip 0.1cm \caption[Fig.~\ref{fig1}]{\label{fig1} The nuclear--structure--dependent part of the parity--violating asymmetry as defined by eqs.~(\ref{Gamma}) and (\ref{Erre}). {\it Left:} calculations for elastic scattering from $^{12}$C using Woods--Saxon single--particle wave functions. The dotted line at $|\Gamma(q)|=10^{-2}$ indicates the level above which the structure--dependent effects would have confused the interpretation of the asymmetry as a test of electroweak theories, when these were known with the precision given by (\ref{Wein}). {\it Right:} the same as in the left panel, but for $^{28}$Si. For this nucleus Harmonic Oscillator single--particle wave functions and the shell model amplitudes as given by W.C. Haxton (unpublished) have been used. } \end{figure} Note also that in reaching this result the mixing angle $\chi$ has been extracted, with a quite conservative attitude, from the perturbative formula \begin{equation} \sin^2\chi = \frac{\left|<T_0+1|H_{CSV}|T_0>\right|^2} {\left(E_{T_0+1}-E_{T_0}\right)^2} \label{sinchi} \end{equation} ($H_{CSV}$ is the charge symmetry violating part of the nuclear hamiltonian). In fact, while the experiments indicate for the matrix element appearing in the numerator of (\ref{sinchi}) a value ranging, in C$^{12}$, between 150 and 300~keV, the latter value has been adopted in obtaining Fig.~1a and 1b (moreover $\Delta E = E_{T_0+1}-E_{T_0}\simeq 17$~MeV according to the simple two--level model). These findings are supported by the results of a quite sophisticated calculation of Ramavataram {\it et al.}\cite{Ramava}.
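The order of magnitude of the resulting isospin--breaking term is easy to reproduce from (\ref{sinchi}) and (\ref{Gamma}). In the sketch below the form factor ratio $|{\cal R}|\sim 0.1$ is an illustrative assumption, not a computed value:

```python
import math

H_CSV = 0.300    # MeV: conservative value of <T0+1|H_CSV|T0> quoted for C-12
DELTA_E = 17.0   # MeV: two-level splitting E_{T0+1} - E_{T0}

sin2_chi = (H_CSV / DELTA_E)**2          # eq. (sinchi)
chi = math.asin(math.sqrt(sin2_chi))     # small mixing angle
print(f"sin^2(chi) = {sin2_chi:.2e}, chi = {chi:.4f} rad")

# Gamma(q) = -chi * R(q) / sin^2(theta_W), eq. (Gamma); with an assumed
# illustrative |R| ~ 0.1 the isospin-breaking term indeed stays below 1%:
gamma_est = chi * 0.1 / 0.227
print(f"|Gamma| ~ {gamma_est:.1e}")
```

Even with the conservative 300~keV matrix element the estimate stays below the $10^{-2}$ line of Fig.~1.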
Ramavataram {\it et al.}, using a state--of--the--art variational wave function originally due to Pandharipande {\it et al.}\cite{Pandha}, obtain in He$^4$ a $\Gamma(q)$ always well below 1\% in the whole range (\ref{range}). Of course this outcome also reflects the particularly rigid structure of He$^4$, whose first excited state lies at 20.1~MeV (the first excited state of C$^{12}$ is at 4.44~MeV). It should however be pointed out that, for $N = Z$ but heavier nuclei like Si$^{28}$, the isospin breaking, while small, grows and reaches a few \% in the range of $q$ given by (\ref{range}). \section{The case of $N \ne Z$ nuclei} For nuclei with $N \ne Z$ a novel (and important) feature appears. Although it will be illustrated in the specific case of nuclei with $N = Z + 2$, and therefore with third isospin component $M_T = -1$, it remains valid in all cases. Sticking to the two--level model\cite{Donn} we consider then a nucleus with a dominant isospin $T_0 = 1$ component in the ground state. Let in addition the nucleus have an excited state with a dominant isospin $T_1 = T_0 + 1 = 2$. Both states are further characterized by having $J^\pi = 0^+$. Proceeding as in the case of a $Z = N$ nucleus, one arrives at an asymmetry still given by an expression like (\ref{asym3}), but with the model dependent nuclear term $\Gamma(q)$ now reading as follows \begin{eqnarray} &&\Gamma(q) = \frac{1}{2}\beta_V^{(1)} \left\{ {-\frac{1}{\sqrt{6}}<0^+;1\|{\hat M}_{0;1}\|0^+;1> + 2\sqrt{\frac{1}{10}}\chi <0^+;2\|{\hat M}_{0;1}\|0^+;1> } \right\} \nonumber\\ &&\qquad\quad\times\left\{ \sqrt{\frac{1}{3}} <0^+;1\|{\hat M}_{0;0}\|0^+;1> -\frac{1}{\sqrt{6}} <0^+;1\|{\hat M}_{0;1}\|0^+;1> + \right. \nonumber\\ &&\qquad\qquad\qquad\qquad \left.
+ 2\sqrt{\frac{1}{10}}\chi <0^+;2\|{\hat M}_{0;1}\|0^+;1> \right\}^{-1} \label{Gamma1} \end{eqnarray} which can again be recast according to \begin{equation} \Gamma(q) = \frac{1}{2}\beta_V^{(1)} {\displaystyle\frac {F_{C0}(0^+;``1'';``1''\to 0^+)_{\mathrm isovector}} {F_{C0}(0^+;``1'';``1''\to 0^+)_{\mathrm total}} }\, . \label{Gamma2} \end{equation} Namely $\Gamma(q)$ turns out to be, as before, proportional to the ratio between the inelastic (isovector) and the elastic (this time both isoscalar and isovector) form factors. Now from (\ref{Gamma1}) a new feature is immediately apparent: unlike the $N = Z$ case, where $\Gamma(q)$ was found to be proportional to the mixing parameter $\chi$, here $\Gamma(q)$, in addition to terms proportional to $\chi$, also embodies terms {\it independent} of it. As a consequence, $\Gamma(q)$ now turns out to be much larger, as is clearly observed in Fig.~2a and 2b, where the results of ref.\cite{Donn} are displayed for C$^{14}$ and Si$^{30}$. \begin{figure}[t] \mbox{\epsfig{file=fig2.ps,width=0.9\textwidth}} \vskip 0.1cm \caption[Fig.~\ref{fig2}]{\label{fig2} The structure--dependent part of the parity--violating asymmetry for elastic scattering as defined in eqs.~(\ref{tildegam}) and (\ref{betap}). The results for $^{14}$C using a 1p--shell model space (solid line) and a $2\hbar\omega$--space (dashed line) are displayed. Also displayed are the results for $^{30}$Si in the extreme single--particle shell model (solid line) and in a full 2s1d shell--model calculation (dashed line). No isospin mixing effects are included.
} \end{figure} Actually what is shown in figures 2a and 2b is not $\Gamma(q)$, but ${\widetilde\Gamma} (q)$, since the $N \ne Z$ nuclei are more appropriately discussed in terms of protons and neutrons rather than in terms of isospin; accordingly, the expression (\ref{asym2}) for the asymmetry is now more conveniently recast into the form: \begin{equation} {\cal A}={\cal A}_0 a_A\left(\beta_V^p+\frac{N}{Z}\beta_V^n\right) \left[ 1 + {\widetilde\Gamma}(q)\right] \label{asym4} \end{equation} where \begin{eqnarray} \beta_V^p &=& \frac{1}{2}\left(\beta_V^{(0)} +\beta_V^{(1)}\right) =0.038\, , \label{betavp}\\ \beta_V^n &=& \frac{1}{2}\left(\beta_V^{(0)} -\beta_V^{(1)}\right) = -\frac{1}{2} \label{betavn} \end{eqnarray} and \begin{equation} {\widetilde\Gamma}(q) = \frac{1}{2}{\tilde\beta}'_V {\displaystyle\frac {\left\langle 0^+\left|\frac{1}{2}\left(1-\frac{Z}{N}\right){\hat M}_{0;00} +\frac{1}{2}\left(1+\frac{Z}{N}\right){\hat M}_{0;10}\right|0^+ \right\rangle} {<0^+\left| {\hat M}_{0;00} + {\hat M}_{0;10}\right|0^+>} }\, , \label{tildegam} \end{equation} where \begin{equation} {\tilde\beta}'_V = 4\frac{N}{Z}\frac{\beta_V^n}{\beta_V^p + {\displaystyle\frac{N}{Z}}\beta_V^n}\, . \label{betap} \end{equation} It is immediately checked that by setting $N = Z$ in the above formulas one gets ${\widetilde\Gamma}(q)\to \Gamma(q)$ and ${\tilde\beta}'_V = 1/\sin^2\theta_W$. Let us further observe that, although in figures 2a and 2b ${\widetilde\Gamma}(q)$ indeed appears to be markedly model dependent, the basic message previously anticipated clearly stands out: in the range of momentum transfers (\ref{range}) ${\widetilde\Gamma}(q)$ assumes values typically ranging between 10 and 50\%.
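The numerical values of the couplings (\ref{betavp})--(\ref{betap}) and the stated $N=Z$ limit can be verified directly (tree--level values; a sketch, not taken from ref.\cite{Donn}):

```python
sin2w = 0.23055                     # Weinberg angle, eq. (Weinlast)
beta0 = -2.0 * sin2w                # beta_V^(0), eq. (csiw1)
beta1 = 1.0 - 2.0 * sin2w           # beta_V^(1), eq. (csiw2)
beta_p = 0.5 * (beta0 + beta1)      # eq. (betavp): ~0.038
beta_n = 0.5 * (beta0 - beta1)      # eq. (betavn): exactly -1/2

def beta_prime(N, Z):
    """tilde-beta'_V of eq. (betap)."""
    r = N / Z
    return 4.0 * r * beta_n / (beta_p + r * beta_n)

# For N = Z the general expression must collapse to 1/sin^2(theta_W):
print(beta_p, beta_n, beta_prime(6, 6), 1.0 / sin2w)
```

The $N=Z$ reduction is exact because $\beta_V^p+\beta_V^n=\beta_V^{(0)}$ and $4\beta_V^n=-2$, so ${\tilde\beta}'_V=-2/\beta_V^{(0)}=1/\sin^2\theta_W$.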
In conclusion, from the previous two Sections it follows that for light $N = Z$ nuclei, like He$^4$ and C$^{12}$, \begin{itemize} \item{} a test of the electroweak theory with parity--violating polarized elastic electron scattering experiments is out of the question or, perhaps, only marginally, if at all, possible at very low momentum transfer in He$^4$; \item{} however many investigations show that elastic (and also quasielastic, not addressed here) parity violating polarized electron scattering experiments can be usefully exploited to unravel the strange form factor of the nucleon. \end{itemize} For medium--heavy and heavy nuclei $\Gamma(q)$ grows with the mass number $A$, especially when $N \ne Z$ (large terms not proportional to $\chi$ appear!). Therefore parity--violating polarized electron scattering can be advantageously used to measure the amount of isospin breaking in a nucleus or, better yet, the neutron distribution. \section{The neutron distribution} To appreciate how the neutron distribution can be measured with polarized elastic electron scattering it helps to revisit the previous concepts with a somewhat different language. Let us thus observe that for $T = 0$ nuclei ($N = Z$), if isospin is a ``good symmetry'', then standard unpolarized electron scattering measures the {\it isoscalar} nuclear density. In these conditions the latter also fixes the polarized elastic electron scattering, which is thus independent of any further nuclear structure information. If, however, isospin is slightly broken, then the {\it isovector} nuclear density also enters, as a small perturbation, into the elastic polarized electron scattering, thus introducing an additional model dependence. Quite on the contrary, for $T \ne 0$ ($N \ne Z$) nuclei both the {\it isoscalar} and the {\it isovector} densities enter into the scattering process, regardless of whether isospin is a perfect symmetry.
In this instance, as already pointed out, it is preferable to use the neutron--proton language. Accordingly the ground state matrix element of the Coulomb monopole operator \begin{equation} <0^+\left|{\hat M}_0(q)\right|0^+> = \frac{1}{\sqrt{4\pi}}\int d{\vec x} j_0(qx)\rho({\vec x})\, , \label{monopole} \end{equation} where $j_0$ is the zeroth order spherical Bessel function and $\rho({\vec x})$ is the matter density of the nucleus, will display both an isoscalar and an isovector component, according to the expressions \begin{equation} <0^+\left|{\hat M}_{0;00}(q)\right|0^+> = \frac{1}{\sqrt{4\pi}}\int d{\vec x} j_0(qx) \frac{\rho_p({\vec x})+\rho_n({\vec x})}{2} \label{isosca} \end{equation} and \begin{equation} <0^+\left|{\hat M}_{0;10}(q)\right|0^+> = \frac{1}{\sqrt{4\pi}}\int d{\vec x} j_0(qx) \frac{\rho_p({\vec x})- \rho_n({\vec x})}{2}\, . \label{isovec} \end{equation} In the above $\rho_p$ and $\rho_n$ are the proton and neutron densities, respectively. When (\ref{isosca}) and (\ref{isovec}) are inserted into the formula (\ref{asym4}) it is then an easy matter to obtain for the asymmetry the expression\footnote{Note that comparison with (\ref{asym4}) yields \begin{equation} {\widetilde\Gamma}(q) = \frac{N}{Z} \frac{\beta_V^n}{\beta_V^p + {\displaystyle\frac{N}{Z}}\beta_V^n} \left[ 1 - {\displaystyle\frac {Z \int d{\vec x} j_0(qx)\rho_n({\vec x})} {N \int d{\vec x} j_0(qx)\rho_p({\vec x})} } \right] \, .
\label{Gamma3} \end{equation} } \begin{equation} {\cal A} = {\cal A}_0 a_A \left\{ \beta_V^p + \beta_V^n {\displaystyle\frac {\int d{\vec x} j_0(qx)\rho_n({\vec x})} {\int d{\vec x} j_0(qx)\rho_p({\vec x})} } \right\}\, . \label{asym5} \end{equation} Since, according to the Standard Model, \begin{equation} \beta_V^p = 0.038 \qquad{\mathrm and}\quad \beta_V^n = - 0.5 \,, \label{boh} \end{equation} (\ref{asym5}) shows that {\it the asymmetry in the parity--violating elastic polarized electron scattering represents an almost direct measurement of the Fourier transform of the neutron density}, the analogous quantity for the protons being fixed by the elastic unpolarized electron scattering. In particular a rigorous $q$--independence of ${\cal A}/{\cal A}_0$ would imply \begin{equation} Z \rho_n({\vec x}) = N \rho_p({\vec x}), \end{equation} i.e. pure isospin symmetry! Of course, as previously discussed, in nuclei the distribution of neutrons differs from that of the protons. To gain a first insight into how this difference would be perceived in a parity violating elastic electron scattering experiment, Donnelly {\it et al.} have computed the asymmetry (\ref{asym5}) and the ${\widetilde\Gamma}(q)$ of eq.~(\ref{Gamma3}) for Ca$^{40}$, Ca$^{48}$ and Pb$^{208}$ using phenomenological proton densities which account well for the elastic unpolarized electron scattering. For example, for Ca$^{40}$ they employ the well--known three--parameter Fermi distribution: \begin{equation} \rho({\vec r})=\rho_0{\displaystyle\frac{1+\omega{\displaystyle\frac{r^2}{R^2}}} {1+e^{{\displaystyle (r-R)/a}}} }\, . \label{Fermi3} \end{equation} For the neutrons they use the same densities, heuristically enlarging however, for exploratory purposes, the radius parameter by 0.2~fm. Their results are displayed in Fig.~3.
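A rough numerical rendering of eq.~(\ref{asym5}) is instructive. The sketch below Fourier transforms Fermi--type densities with the neutron radius enlarged as described in the text, and also asks how much the asymmetry moves under a 1\% change of the neutron radius; all density parameters are illustrative assumptions (roughly Ca$^{40}$-- and Pb$^{208}$--like), not the fitted values of ref.\cite{Donn}:

```python
import numpy as np

# Sketch of eq. (asym5) with (three-parameter) Fermi densities.
BETA_P, BETA_N = 0.038, -0.5

def transform(q, R, a, w=0.0, rmax=25.0, n=5000):
    """integral of j0(qr) * rho_3pF(r) * r^2 dr (unnormalised Riemann sum)."""
    r = np.linspace(1e-6, rmax, n)
    rho = (1.0 + w * r**2 / R**2) / (1.0 + np.exp((r - R) / a))
    return float(np.sum(np.sin(q * r) / (q * r) * rho * r**2) * (r[1] - r[0]))

def reduced_asym(q, Rp, Rn, a, w, N_over_Z):
    """A/(A0*aA) of eq. (asym5); each density normalised per nucleon."""
    fn = transform(q, Rn, a, w) / transform(1e-4, Rn, a, w)
    fp = transform(q, Rp, a, w) / transform(1e-4, Rp, a, w)
    return BETA_P + BETA_N * N_over_Z * fn / fp

# Ca-40-like case: assumed parameters, neutron radius enlarged by 0.2 fm.
for q in (0.25, 0.5, 0.75):   # fm^-1, below the first diffraction zero
    print(f"Ca-40-like, q = {q:.2f}: {reduced_asym(q, 3.60, 3.80, 0.52, -0.10, 1.0):+.4f}")

# Pb-208-like case (assumed two-parameter Fermi, w = 0): sensitivity of A
# to a 1% change in the neutron radius near q ~ 0.5 fm^-1.
A1 = reduced_asym(0.5, 6.62, 6.82, 0.546, 0.0, 126.0 / 82.0)
A2 = reduced_asym(0.5, 6.62, 1.01 * 6.82, 0.546, 0.0, 126.0 / 82.0)
print(f"Pb-208-like: 1% shift in R_n -> {100.0 * abs(A2 - A1) / abs(A1):.1f}% shift in A")
```

With these assumed shapes the Pb--like sensitivity comes out of the same order as the few--percent figure discussed in connection with Fig.~3, though the precise number depends on the adopted densities.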
To grasp the significance of this figure it helps to expand ${\widetilde\Gamma}(q)$ as follows: \begin{equation} {\widetilde\Gamma}(q) = \frac{\beta_V^n}{\beta_V^p + {\displaystyle\frac{N}{Z}}\beta_V^n} \left[ \frac{N}{Z} - {\displaystyle\frac {N\left(1 - \frac{1}{6}q^2 R^2_n\right)} { \int d{\vec x} j_0(qx)\rho_p({\vec x})} } + \dots \right] \, . \label{Gamma4} \end{equation} It is thus clear that the region around the first dip/peak carries information on the neutron radius, whereas the fourth moment of the neutron distribution will be observed near the second dip/peak, and so on. On the basis of this expansion one finds that in the case of Pb$^{208}$ at $q\simeq 0.5$~fm$^{-1}$, with ${\cal A}\simeq 8\times 10^{-8}$, a 1\% change in the neutron radius is reflected in a change of about 6\% in the asymmetry. \begin{figure}[t] \mbox{\epsfig{file=fig3.ps,width=0.9\textwidth}} \vskip 0.1cm \caption[Fig.~\ref{fig3}]{\label{fig3} The parity--violating asymmetry (upper row) and the structure--dependent part of the asymmetry as defined in eq.~(\ref{Gamma4}) (lower row) for elastic electron scattering from $^{40}$Ca, $^{48}$Ca and $^{208}$Pb. Displayed are the calculations with $\rho_n(r)/N=\rho_p(r)/Z$ [solid line, corresponding to ${\widetilde\Gamma}(q)=0$] and for $\rho_n(r)/N\ne \rho_p(r)/Z$ (dashed line). The density parameterizations are discussed in the text and specified in ref.\cite{Donn}. } \end{figure} It might be interesting to observe that formula (\ref{Gamma4}) can be generalized, in the sense that the Fourier transform of the pure Fermi distribution can be expressed analytically to quite good accuracy, although not exactly.
Indeed for this density the form factor turns out to be\cite{noi1} \begin{equation} F_{\mathrm Fermi}(q) =\rho_0 (2\pi)^2R^2 a \left\{j_1(qR){\tilde y}_0(\pi qa) - {\tilde j}_1(\pi qa)y_0(qR)\right\} \frac{\pi qa}{{\tilde j}_0(\pi qa)} \label{FForm} \end{equation} where the $j$ ($y$) and the ${\tilde j}$ (${\tilde y}$) are the ``spherical'' and the ``modified spherical'' Bessel functions of the first (second) kind, respectively \cite{Abram}. The form factor $F_3$ of the three--parameter Fermi distribution is then expressed in terms of $F_{\mathrm Fermi}$ according to: \begin{equation} F_3(q) \simeq F_{\mathrm Fermi}(q) -\frac{\omega}{R^2} \frac{d^2}{dq^2}\left[ q F_{\mathrm Fermi}(q)\right]\, . \label{F3} \end{equation} When inserted into (\ref{asym5}) and (\ref{Gamma3}), the above formula allows one to recover analytically the results of Donnelly for Ca$^{40}$. \section{Flaws and hopes} The results of the previous section are flawed essentially by two shortcomings. The first relates to the factorization of the single--nucleon physics, which has been assumed in deducing (\ref{asym5}) from (\ref{asym4}). Especially when $q$ is large, and thus relativistic effects become substantial, such a procedure almost certainly becomes unwarranted. It is clear that this point must be more carefully addressed in future research. The second problem relates to the approximation of considering just the Fourier transform of the charge and neutron distributions, which corresponds to considering a single vector boson exchange between the impinging electron and the target, the electron being described by plane waves. This is clearly insufficient, especially in heavy nuclei, where, in fact, the electron wave is quite distorted by the nuclear Coulomb field. To account for this effect a heavy computational effort is presently being carried out by an MIT--Indiana University collaboration.
A more modest, but perhaps also useful, approach resorts to a kind of eikonal approximation to describe the distortion of the electron wave. We feel confident that these difficulties will in the end be overcome. It will thus become possible to measure the neutron distribution in the ground state of atomic nuclei with parity-violating {\it elastic} scattering experiments. This most remarkable occurrence goes in parallel with the finding of Alberico {\it et al.}\cite{noi2} for {\it quasi--elastic} polarized electron--nucleus scattering. Indeed these authors show that, since the $Z_0$ is almost ``blind'' to protons, the PV longitudinal response of an uncorrelated system of protons and neutrons, like the relativistic Fermi gas, to a polarized beam of electrons is almost vanishing. Departures from this expectation will thus signal the effect of neutron--proton correlations in nuclei, which are expected to be especially relevant in the isoscalar channel. Thus polarized electron scattering experiments appear to open a window on crucial, and until now insufficiently explored, aspects of nuclear structure. A final remark is in order: parity--violating {\it nuclear} electron scattering experiments, initially conceived as a tool for exploring the Standard Model at the nuclear level, will turn out, in the end, to provide the information that the {\it atomic} parity-violating experiments need to accurately test the Standard Model at the atomic level: namely, the neutron distribution. Indeed, the precision on the measured energy levels of the Cesium atoms required to test the Standard Model is so high that it cannot be reached without also controlling the neutron distribution in nuclei.
\section{\label{sec:Intro}Introduction} Nuclei in the mass $\sim$ 130 region are known to exhibit a rich variety of shapes and structures. In this region, interesting phenomena of shape co-existence \cite{coe1, coe2}, strongly deformed \cite{hd2} to superdeformed \cite{sd1, sd2} shapes, chiral doublet bands \cite{cdb1, cdb2} and $\gamma$-bands built on quasiparticle states \cite{xee,xe,xe1} have been observed. This is the heaviest mass region with valence neutrons and protons occupying the same intruder orbital, $1h_{11/2}$. For the neutron-deficient isotopic chains in this mass region, protons occupy the low-$\Omega$ orbitals, whereas the neutron occupancy changes from mid-$\Omega$ to high-$\Omega$ orbitals of $1h_{11/2}$. Due to the competing shape-polarising effects of the low-$\Omega$ and high-$\Omega$ orbitals, the neutron-deficient nuclei in this region are expected to have, in general, triaxial shapes \cite{AG96,RW02}. \begin{table*} \caption{Axial and triaxial quadrupole deformation parameters $\epsilon$ and $\epsilon'$ employed in the TPSM calculation. The corresponding triaxiality angle $\gamma$ (in degrees) is also listed. } \begin{tabular}{|ccccccccccccccc|} \hline & $^{125}$Pr & $^{127}$Pr & $^{129}$Pr &$^{131}$Pr & $^{133}$Pr& $^{135}$Pr & $^{137}$Pr &$^{127}$Pm & $^{129}$Pm &$^{131}$Pm &$^{133}$Pm & $^{135}$Pm & $^{137}$Pm & $^{139}$Pm \\ \hline $\epsilon$ &0.300 & 0.283 & 0.267 & 0.234 & 0.194 & 0.150 & 0.150 & 0.300 & 0.300 & 0.300 & 0.292 & 0.230 & 0.200 & 0.190 \\ \hline $\epsilon'$ &0.100 & 0.100 & 0.100 & 0.101 & 0.090 & 0.080 & 0.080 & 0.110 & 0.110 & 0.110 & 0.120 & 0.110 & 0.100 & 0.090 \\ \hline $\gamma$ & 18.4 & 19.5 & 20.5 & 23.3 & 24.8 & 28.1 & 28.1 & 20.1 & 20.1 & 20.1 & 22.3 & 25.6 & 26.5 & 25.3 \\ \hline \end{tabular}\label{tab1} \end{table*} \begin{figure*}[htb] \vspace{0cm} \includegraphics[totalheight=14cm]{fig1.pdf} \caption{(Color online) Projected energies are shown before diagonalization of the shell model Hamiltonian for $^{125}$Pr. 
The bands are labelled by three quantities: the group structure, the energy, and the K-quantum number of the quasiparticle state. For instance, $(1\pi,1.54,3/2)$ designates a one-quasiproton state with intrinsic energy 1.54 MeV and K$=3/2$. The two signature bands of the low-K states are depicted separately, since the energy splitting between the two branches is large and the plots become quite cluttered when drawn as a single curve. In the legend, bands are labelled by their $\alpha=-1/2$ states; the $\alpha=+1/2$ states use the same symbols, but the corresponding curves are dashed. } \label{bd1} \end{figure*} \begin{figure*}[htb] \vspace{0cm} \includegraphics[totalheight=15cm]{fig2.pdf} \caption{(Color online) Band diagrams for the odd-proton $^{127-137}$Pr isotopes. The bands are labelled as in Fig.~\ref{bd1}, with solid lines representing $\alpha = -1/2$ and dashed curves $\alpha = +1/2$. Only the portions of the diagrams that encompass the band crossings are displayed.} \label{bd3} \end{figure*} The interplay between proton and neutron configurations also plays an important role in the elucidation of the high-spin band structures observed in this mass region. Band structures have been observed up to quite high spin, and the band crossing features have attracted considerable attention \cite{ml06,sz00,qx08}. In particular, the nature of the band crossings in odd-proton Pr- and Pm-isotopes has been extensively studied in recent years \cite{fs21,an02,es11,ad12,bcr}. It has been shown that the standard cranked shell model (CSM) approach, with fixed pairing and deformation fields, can describe the band crossing features reasonably well for the heavier Pr- and Pm-isotopes. However, for the lighter isotopes $^{127}$Pr and $^{131}$Pm, the gain in alignment is substantially underpredicted in this approach \cite{cm98}. 
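Incidentally, the triaxiality angles $\gamma$ quoted in Table~\ref{tab1} follow from the deformation parameters through the standard TPSM relation $\gamma=\tan^{-1}(\epsilon'/\epsilon)$, as a short numerical check confirms (values transcribed from Table~\ref{tab1}):

```python
import math

# (epsilon, epsilon', quoted gamma in degrees) for a few entries of Table 1.
table = {"125Pr": (0.300, 0.100, 18.4), "131Pr": (0.234, 0.101, 23.3),
         "135Pr": (0.150, 0.080, 28.1), "133Pm": (0.292, 0.120, 22.3),
         "139Pm": (0.190, 0.090, 25.3)}

for nucleus, (eps, eps_p, gamma_quoted) in table.items():
    gamma = math.degrees(math.atan(eps_p / eps))   # gamma = atan(eps'/eps)
    assert abs(gamma - gamma_quoted) < 0.15, (nucleus, gamma)
```

All quoted angles agree with the recomputed ones to within the rounding of the table.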
The band crossings in these nuclei have also been investigated using the extended version of the total Routhian surface (TRS) approach \cite{cm98}, in which the pairing and deformation fields are determined self-consistently. The observed band crossing features have been reproduced in this more realistic approach, and it has been demonstrated that the nature of the first band crossing is quite different from that predicted by the standard CSM approach. In particular, for the lighter isotopes the first band crossing has a dominant contribution from the neutron configuration. This is in contradiction to the standard CSM results, which predict the proton BC crossing to occur earlier than the neutron AB crossing for these nuclei. Further, the band crossing features in odd-proton isotopes have been investigated using the projected shell model (PSM) approach. In this model, the basis states are constructed from the solutions of the Nilsson potential with axial symmetry \cite{ysm}. In the study of odd-proton nuclei, the basis space in the PSM comprises one-proton and one-proton coupled to two-neutron configurations. It has been shown with this approach that the band crossing features of the lighter Promethium isotopes could be described well. However, for the heavier isotopes discrepancies were observed between the PSM predictions and the experimental data. The major reason for this discrepancy is the neglect of the proton aligning configurations in the basis space of the PSM, since it is evident from the CSM analysis \cite{cm98} that the proton contribution becomes more dominant for the heavier Pr- and Pm-isotopes. In order to elucidate the band crossing features of these isotopes, it is imperative to include both neutron- and proton-aligning configurations in the basis space. In the present work, we have generalized the basis space of the projected shell model for odd-proton systems by including proton aligning configurations in addition to the neutron states. 
The generalised basis configuration space has been implemented in the three-dimensional version of the projected shell model, now referred to as the triaxial projected shell model (TPSM), as most of the Pr- and Pm-isotopes discussed in the present work are predicted to have triaxial shapes. We have also included five-quasiparticle configurations in the basis space, which makes it possible to investigate the second band crossing observed in some of these isotopes. \begin{figure*}[htb] \vspace{1cm} \includegraphics[totalheight=14cm]{fig3.pdf} \caption{(Color online) Projected energies before diagonalization are shown for $^{127}$Pm. The labelling of the bands follows the Fig.~\ref{bd1} description.} \label{bd2} \end{figure*} \begin{figure*}[htb] \vspace{0cm} \includegraphics[totalheight=15cm]{fig4.pdf} \caption{(Color online) Band diagrams for $^{129-139}$Pm isotopes. The figure is similar to that of Fig.~\ref{bd3}.} \label{bd4} \end{figure*} In recent years, the TPSM approach has turned out to be a useful tool to investigate the high-spin band structures in deformed and transitional nuclei \cite{GH14,an17,sb19,Vs17}. The model has provided new insights into the nature of the high-spin band structures in even-even and odd-odd nuclei in the mass $\sim$ 130 region. The chiral doublet bands observed in this mass region have been well described using the TPSM approach \cite{GH14,JG12}. For a few even-even Ce- and Nd-isotopes, it has been demonstrated that some of the observed excited band structures are $\gamma$-bands built on two-quasiparticle states \cite{JG09}. There has been an anomaly in the g-factor measurements for the band heads of the s-bands observed in these nuclei \cite{zemel,wyss1,KS87,RW88,RW8,sj17}: the g-factors of the two observed s-bands are either both positive or both negative, implying that both s-bands have either proton or neutron character. 
In the mass $\sim$ 130 region, the Fermi surfaces of neutrons and protons are close in energy and it is, therefore, expected that neutrons and protons will align almost simultaneously. From this perspective, the two observed s-bands should have neutron and proton structures, respectively, one with a positive and one with a negative g-factor. The observation of two s-bands with g-factors of the same sign is thus ruled out in this standard picture. \begin{figure*}[htb] \vspace{1cm} \includegraphics[totalheight=17cm]{fig5.pdf} \caption{(Color online) TPSM band head energies after configuration mixing for odd-proton $^{125-137}$Pr isotopes. The dominant intrinsic configuration is specified for each state. } \label{bhe1} \end{figure*} \begin{figure*}[htb] \vspace{1cm} {\includegraphics[totalheight=17cm]{fig6.pdf}} \caption{(Color online) TPSM band head energies after configuration mixing for odd-proton $^{127-139}$Pm isotopes. The dominant intrinsic configuration is specified for each state.} \label{bhe2} \end{figure*} \begin{figure*}[htb] \vspace{0cm} {\includegraphics[totalheight=16cm]{fig7.pdf}} \caption{(Color online) Dominant probability contributions of various projected configurations in the wave functions of the band head structures shown in Fig.~\ref{bhe1}.} \label{wf1} \end{figure*} \begin{figure*}[htb] \vspace{0cm} {\includegraphics[totalheight=16cm]{fig8.pdf}} \caption{(Color online) Dominant probability contributions of various projected configurations in the wave functions of the band head structures shown in Fig.~\ref{bhe2}. } \label{wf2} \end{figure*} \begin{figure*}[htb] \vspace{0cm} \includegraphics[totalheight=16cm]{fig9.pdf} \caption{(Color online) TPSM energies for the lowest two bands after configuration mixing are plotted along with the available experimental data for $^{125,127,129}$Pr isotopes. 
Data is taken from \cite{Pr2529}.} \label{expe1} \end{figure*} \begin{figure*}[htb] \vspace{0cm} \includegraphics[totalheight=15cm]{fig10.pdf} \caption{(Color online) TPSM energies for the lowest two bands after configuration mixing are plotted along with the available experimental data for $^{131-135}$Pr isotopes. Data is taken from \cite{Pr33,Pr35}.} \label{expe11} \end{figure*} \begin{figure}[htb] \centerline{\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.6\textwidth,clip]{fig11.pdf}} \caption{(Color online) TPSM energies for the lowest two bands after configuration mixing are plotted along with the available experimental data for $^{137}$Pr isotope. Data is taken from \cite{Pr37}.} \label{expe2} \end{figure} \begin{figure*}[htb] \vspace{0cm} \includegraphics[totalheight=14cm]{fig12.pdf} \caption{(Color online) TPSM energies for the lowest two bands after configuration mixing are plotted along with the available experimental data for $^{127-133}$Pm isotopes. Data is taken from \cite{cm98}.} \label{expe101} \end{figure*} \begin{figure*}[htb] \vspace{0cm} \includegraphics[totalheight=16cm]{fig13.pdf} \caption{(Color online) TPSM energies for the lowest two bands after configuration mixing are plotted along with the available experimental data for $^{135-139}$Pm isotopes. Data is taken from \cite{ad12,139pm}.} \label{expe22} \end{figure*} \begin{figure}[htb] \centerline{\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.58\textwidth,clip]{fig14.pdf}} \caption{(Color online) Comparison of the aligned angular momentum, $i_x=I_x(\omega)-I_{x,ref}(\omega)$, where $\hbar\omega=\frac{E_{\gamma}}{I_x^i(\omega)-I_x^f(\omega)}$, $I_x(\omega)= \sqrt{I(I+1)-K^2}$ and $I_{x,ref}(\omega)=\omega(J_0+\omega^{2}J_1)$, deduced from the measured energy levels and from the TPSM results for $^{125-137}$Pr nuclei. The Harris parameters of the reference band are $J_0=23$ and $J_1=90$. 
}\label{ali1} \end{figure} \begin{figure}[htb] \centerline{\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.58\textwidth,clip]{fig15.pdf}} \caption{(Color online) Comparison between experimental and calculated dynamic moment of inertia, J$^{(2)} = \frac{4}{E_{\gamma}(I)-E_{\gamma}(I-2)}$, of the yrast band for $^{125-137}$Pr isotopes. }\label{ali2} \end{figure} \begin{figure}[htb] \centerline{\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.58\textwidth,clip]{fig16.pdf}} \caption{(Color online) Comparison of the measured and calculated aligned angular momentum ($i_x$) for $^{127-139}$Pm nuclei. }\label{ali3} \end{figure} \begin{figure}[htb] \centerline{\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.55\textwidth,clip]{fig17.pdf}} \caption{(Color online) Comparison between experimental and calculated dynamic moment of inertia (J$^{(2)}$) of the yrast band for $^{127-139}$Pm isotopes. }\label{ali4} \end{figure} \begin{figure}[htb] \centerline{\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.52\textwidth,clip]{fig18.pdf}} \caption{(Color online) Probability of various projected K-configurations in the wave functions of the yrast band after diagonalization for $^{127}$Pr and $^{131}$Pm isotopes.} \label{wf3} \end{figure} It has been shown using the TPSM approach that each quasiparticle state has a $\gamma$-band built on it \cite{scr}, and the second excited s-band is, as a matter of fact, a $\gamma$-band built on the two-quasiparticle state. As the intrinsic structures of the two-quasiparticle band and the $\gamma$-band built on it are the same, the predicted g-factors of the two bands should be similar. This explains why both observed s-bands have either positive or negative g-factor values \cite{sj17,zemel}. As Cerium and Neodymium are the even-even cores of the odd-mass Praseodymium and Promethium nuclei, it is expected that these odd-mass nuclei should also display $\gamma$-bands built on quasiparticle states. 
$\gamma$-bands in some odd-mass nuclei have already been identified \cite{sh10}. To delineate $\gamma$-bands in the high-spin band structures of odd-mass Pr- and Pm-isotopes is one of the objectives of the present work. The manuscript is organised in the following manner. In the next section, the extended TPSM approach is briefly presented. In section III, the TPSM results obtained for the Pr- and Pm-isotopes are compared with the experimental data, wherever available. Finally, the summary and conclusions of the present study are outlined in section IV. \section{Triaxial Projected Shell Model Approach} The inclusion of a multi-quasiparticle basis space in the TPSM approach has made it feasible to study not only the ground-state properties, but also the high-spin band structures in deformed and transitional nuclei \cite{sj17,scr,ns19}. Using the TPSM approach, odd-proton systems have been studied earlier with a model space of one-proton and one-proton coupled to two-neutron quasiparticle states. However, in order to investigate the high-spin spectroscopy of these systems, the basis space needs to be extended by including proton aligning configurations in addition to the neutron states. In the present work, this extended basis space has been implemented, and the complete basis space of the generalized approach is given by: \begin{equation} \begin{array}{r} ~~\hat P^I_{MK}~ a^\dagger_{\pi_1} \ack\Phi\ket;\\ ~~\hat P^I_{MK}~a^\dagger_{\pi_1}a^\dagger_{\nu_1}a^\dagger_{\nu_2} \ack\Phi\ket;\\ ~~\hat P^I_{MK}~a^\dagger_{\pi_1}a^\dagger_{\pi_2}a^\dagger_{\pi_3} \ack\Phi\ket;\\ ~~\hat P^I_{MK}~a^\dagger_{\pi_1}a^\dagger_{\pi_2} a^\dagger_{\pi_3}a^\dagger_{\nu_1} a^\dagger_{\nu_2} \ack\Phi\ket, \label{intrinsic} \end{array} \end{equation} where $\ack\Phi\ket$ is the triaxially-deformed quasiparticle vacuum state. 
$\hat P^I_{MK}$ is the three-dimensional angular-momentum-projection operator, given by \cite{RS80}: \begin{equation} \hat P ^{I}_{MK}= \frac{2I+1}{8\pi^2}\int d\Omega\, D^{I}_{MK} (\Omega)\,\hat R(\Omega), \label{Anproj} \end{equation} with the rotation operator \begin{equation} \hat R(\Omega)= e^{-i\alpha \hat J_z}e^{-i\beta \hat J_y} e^{-i\gamma \hat J_z}.\label{rotop} \end{equation} Here, $\Omega$ represents the set of Euler angles ($\alpha, \gamma \in [0,2\pi]$, $\beta \in [0, \pi]$) and the $\hat J$'s are the angular-momentum operators. The angular-momentum projection operator in Eq.~(\ref{Anproj}) projects out not only the good angular-momentum, but also states having good $K$-values, by specifying a value for $K$ in the rotation matrix $D$ in Eq.~(\ref{Anproj}). The projected basis of Eq.~(\ref{intrinsic}) is then used to diagonalise the shell model Hamiltonian, consisting of the harmonic oscillator single-particle Hamiltonian and a residual two-body interaction comprising quadrupole--quadrupole, monopole pairing and quadrupole pairing terms. These terms represent the specific correlations considered essential for describing low-energy nuclear phenomena \cite{BarangerKumar}. The Hamiltonian has the following form: \begin{eqnarray} \hat H = \hat H_0 - {1 \over 2} \chi \sum_\mu \hat Q^\dagger_\mu \hat Q^{}_\mu - G_M \hat P^\dagger \hat P - G_Q \sum_\mu \hat P^\dagger_\mu\hat P^{}_\mu . \label{hamham} \end{eqnarray} In the above equation, $\hat H_0$ is the spherical single-particle part of the Nilsson potential \cite{Ni69}. The QQ-force strength, $\chi$, in Eq. 
(\ref{hamham}) is related to the quadrupole deformation $\epsilon$ through the self-consistent HFB condition, and the relation is given by \cite{KY95}: \begin{equation} \chi_{\tau\tau'} = {{{2\over3}\epsilon\hbar\omega_\tau\hbar\omega_{\tau'}}\over {\hbar\omega_n\left<\hat Q_0\right>_n+\hbar\omega_p\left<\hat Q_0\right>_p}},\label{chi} \end{equation} where $\omega_\tau = \omega_0 a_\tau$, with $\hbar\omega_0=41.4678 A^{-{1\over 3}}$ MeV, and the isospin-dependence factor $a_\tau$ is defined as \begin{equation} a_\tau = \left[ 1 \pm {{N-Z}\over A}\right]^{1\over 3},\nonumber \end{equation} with $+$ $(-)$ for $\tau =$ neutron (proton). The harmonic oscillator parameter is given by $b^2_\tau=b^2_0/a_\tau$ with $b^2_0=\hbar/{(m\omega_0)}=A^{1\over 3}$ fm$^2$. The monopole pairing strength $G_M$ (in MeV) has the standard form \begin{eqnarray} G_M = {{G_1 \mp G_2{{N-Z}\over A}}\over A}, \label{pairing} \end{eqnarray} where the minus (plus) sign applies to neutrons (protons). In the present calculation, we choose $G_1$ and $G_2$ such that the calculated gap parameters reproduce the experimental mass differences. This choice of $G_M$ is appropriate for the single-particle space employed in the present calculation, where three major oscillator shells are used for each type of nucleon ($N=3,4,5$ for both neutrons and protons). The quadrupole pairing strength $G_Q$ is assumed to be proportional to $G_M$, with the proportionality constant fixed, as usual, at 0.16. These interaction strengths, although not exactly the same, are consistent with those used earlier in TPSM calculations \cite{JG12,bh14,sh10}.\\ Using the angular-momentum projected states as the basis, the shell model Hamiltonian of Eq.~(\ref{hamham}) is diagonalized following the Hill-Wheeler approach \cite{KY95}. 
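To make the parameter prescriptions concrete, the sketch below evaluates the oscillator and pairing quantities defined above for $^{125}$Pr ($Z=59$, $N=66$). The values $G_1=21.24$ and $G_2=13.86$ are typical of the PSM literature and are used here purely for illustration; they are not the strengths fitted to mass differences in the present work:

```python
# Oscillator and pairing parameters for 125Pr (Z = 59, N = 66), following the
# formulas in the text; G1 and G2 are illustrative values, not the fitted ones.
Z, N = 59, 66
A = Z + N

hw0 = 41.4678 * A ** (-1.0 / 3.0)           # hbar*omega_0 in MeV
a_n = (1.0 + (N - Z) / A) ** (1.0 / 3.0)    # isospin factor, + for neutrons
a_p = (1.0 - (N - Z) / A) ** (1.0 / 3.0)    # - for protons
b0_sq = A ** (1.0 / 3.0)                    # b_0^2 in fm^2
b_n_sq, b_p_sq = b0_sq / a_n, b0_sq / a_p   # b_tau^2 = b_0^2 / a_tau

G1, G2 = 21.24, 13.86                       # MeV; illustrative only
G_M_n = (G1 - G2 * (N - Z) / A) / A         # minus sign for neutrons
G_M_p = (G1 + G2 * (N - Z) / A) / A         # plus sign for protons
G_Q_n, G_Q_p = 0.16 * G_M_n, 0.16 * G_M_p   # quadrupole pairing strengths
```

Note that the neutron pairing strength comes out smaller than the proton one for $N>Z$, as the sign convention in the monopole pairing formula dictates.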
The generalized eigenvalue equation is given by \begin{equation} \sum_{\kappa^{'}K^{'}}\{\mathcal{H}_{\kappa K \kappa^{'}K^{'}}^{I}-E\mathcal{N}_{\kappa K \kappa^{'}K^{'}}^{I}\}f^{I}_{\kappa^{'} K^{'}}=0, \label{a15} \end{equation} where the Hamiltonian and norm kernels are given by \begin{eqnarray*} && \mathcal{H}_{\kappa K \kappa^{'}K^{'}}^{I} = \langle \Phi_{\kappa}|\hat H\hat P^{I}_{KK^{'}}|\Phi_{\kappa^{'}}\rangle ,\\ &&\mathcal{N}_{\kappa K \kappa^{'}K^{'}}^{I}= \langle \Phi_{\kappa}|\hat P^{I}_{KK^{'}}|\Phi_{\kappa^{'}}\rangle . \end{eqnarray*} The Hill-Wheeler wavefunction is given by \begin{equation} |\psi_{IM}\ket = \sum_{\kappa K} a_{\kappa K}^{I}\hat {P}_{MK}^{I} |\Phi_{\kappa} \ket,\label{Anprojaa} \end{equation} where the $ a_{\kappa K}^{I}$ are the variational coefficients and the index $\kappa$ designates the basis states of Eq.~(\ref{intrinsic}). \section{Results and Discussion} TPSM calculations have been performed for the odd-mass $^{125-137}$Pr and $^{127-139}$Pm isotopes using the axial and non-axial deformations listed in Table~\ref{tab1}. These deformation values have been adopted from earlier studies of these nuclei \cite{fs21,ysm,MN95}. The intrinsic states obtained from the solution of the triaxial Nilsson potential with these deformation parameters are projected onto good angular-momentum states, as discussed in the previous section. For each system, about 40 to 50 intrinsic states around the Fermi surface are selected, for which the angular-momentum projection is performed. The angular-momentum projected states close to the yrast line are depicted in Fig.~\ref{bd1} for $^{125}$Pr. We would like to mention that all the projected states near the Fermi surface are employed in the final diagonalization of the shell model Hamiltonian, but for clarity only the projected states close to the yrast line are shown in Fig.~\ref{bd1}. 
This diagram, referred to as the band diagram, is quite instructive as it reveals the intrinsic structures of the observed band structures \cite{KY95}. The lowest projected band in Fig.~\ref{bd1} originates from the one-quasiproton state having K $=3/2$, with a quasiparticle energy of 0.86 MeV. Although a triaxial quasiparticle state does not have a well-defined angular-momentum projection quantum number, the three-dimensional projection operator projects out not only the angular-momentum quantum number but also its projection along the intrinsic z-axis, referred to as the K-quantum number in the literature \cite{lamme,boeker}. The K-value specified in all the diagrams of the present work refers to this projected quantum number. The two signature branches of the low-K bands are shown separately, as the splitting between the two states is quite large for these configurations. The signature splitting of the lowest K $=3/2$ band increases, as expected, with increasing angular-momentum. It is noted from Fig.~\ref{bd1} that the three-quasiparticle band, comprised of the one-proton and two-neutron aligned configuration with K $=1/2$, crosses the ground-state band and becomes yrast at I $= 41/2$. This crossing is between the $\alpha=-1/2$ states of the two bands; the $\alpha=+1/2$ branch is quite high in excitation energy as compared to its signature partner band. What is interesting to note from Fig.~\ref{bd1} is that above the band crossing, the $\alpha=+1/2$ states of the yrast band originate from the $\gamma$-band built on the three-quasiparticle state. In the TPSM analysis, $\gamma$-bands are built on each quasiparticle state \cite{scr}, and apart from the $\gamma$-band based on the ground-state, $\gamma$-bands built on two-quasiparticle states have been identified in several even-even nuclei \cite{156dy,zemel,sj17,kum14}. These bands have $K=K_0+2$, where $K_0$ is the K-quantum number of the parent band. 
In the present work, the $\gamma$-band based on the ground-state band (K $=3/2$) has K $=7/2$ and is located at an excitation energy of $\sim 1.0$ MeV above the ground-state band at I $=11/2$. The $\gamma$-band built on the aligned three-quasiparticle state with K $=1/2$ has K $=5/2$ and is located at an excitation energy of $\sim 2.5$ MeV at I $=11/2$. This band is noted to cross the $\gamma$-band based on the ground-state band at I $=35/2$ and becomes the lowest band of the $\alpha=+1/2$ signature branch. The reason it becomes lowest is that it is built on the three-quasiparticle state with K $=5/2$ and therefore displays less signature splitting than its parent band. It is also noted from Fig.~\ref{bd1} that a five-quasiparticle state with K $=1/2$, having both two protons and two neutrons aligned, crosses the three-quasiparticle state at a higher spin, I $=59/2$. The band diagrams of the other studied Pr-isotopes are similar to that of $^{125}$Pr, except that the nature of the band crossing changes with increasing neutron number. In Fig.~\ref{bd3}, only the band crossing portion of the band diagrams is depicted for the Praseodymium isotopes ranging from A=127 to 137. The first band crossing in $^{127}$Pr is again due to the alignment of two neutrons, but occurs at I $=39/2$, slightly lower than in $^{125}$Pr. For $^{129}$Pr, the nature of the first band crossing has changed: it is now due to the alignment of two protons rather than two neutrons, as was the case for the two lighter isotopes. The three-proton configuration having K $=3/2$ becomes lower than the one-quasiparticle ground-state band at I $=35/2$. For $^{131}$Pr, the band crossing occurs at I $=35/2$, as for $^{129}$Pr, but for the other studied isotopes it is observed at a higher angular-momentum of I $=43/2$. These band crossing features shall be discussed in detail later, when comparing the alignment and moment of inertia obtained from the TPSM results with those deduced from the experimental data. 
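The alignment and moment-of-inertia comparisons use the standard prescriptions quoted in the captions of Figs.~\ref{ali1} and \ref{ali2}. A minimal sketch of the extraction from a $\Delta I=2$ band is given below; spins are stored as $2I$ so that half-integer values stay exact, and the Harris parameters default to the $J_0=23$, $J_1=90$ quoted above:

```python
import math

def gamma_energies(levels):
    """E_gamma(I) = E(I) - E(I-2) for a dI = 2 band; `levels` maps 2I -> E (MeV)."""
    return {I2: levels[I2] - levels[I2 - 4] for I2 in levels if I2 - 4 in levels}

def aligned_ix(levels, K, J0=23.0, J1=90.0):
    """Aligned angular momentum i_x = I_x(w) - I_x,ref(w), with
    I_x = sqrt(I(I+1) - K^2) and I_x,ref = w*(J0 + w^2*J1)."""
    Ix = lambda I2: math.sqrt((I2 / 2) * (I2 / 2 + 1) - K * K)
    out = {}
    for I2, Eg in gamma_energies(levels).items():
        w = Eg / (Ix(I2) - Ix(I2 - 4))             # hbar*omega in MeV
        out[I2] = Ix(I2) - w * (J0 + w * w * J1)   # subtract the Harris reference
    return out

def dynamic_J2(levels):
    """Dynamic moment of inertia J(2) = 4 / (E_gamma(I) - E_gamma(I-2))."""
    Eg = gamma_energies(levels)
    return {I2: 4.0 / (Eg[I2] - Eg[I2 - 4]) for I2 in Eg if I2 - 4 in Eg}
```

As a quick consistency check, for a rigid rotor ($E\propto I(I+1)$) the transition-energy differences are constant, so $J^{(2)}$ reduces to the static moment of inertia at every spin.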
The band structures of $^{127}$Pm are displayed in Fig.~\ref{bd2}, and again only the configurations important for describing the near-yrast spectroscopy are plotted. The ground-state band, having K $=3/2$, is built on the one-quasiproton Nilsson state with an energy of $0.93$ MeV. The $\gamma$-band based on the ground-state band, having K $=7/2$, lies at an excitation energy of $\sim 1$ MeV at I $=11/2$. It is observed from the figure that the three-quasiparticle state having K $=1/2$, with one proton coupled to two aligned neutrons, crosses the ground-state band at I $=43/2$. It is also noted that the $\gamma$-band built on this three-quasiparticle state, having K $=5/2$, crosses the ground-state band at I $=47/2$. This almost simultaneous crossing will lead to a forking of the ground-state band into two s-bands, as is known to occur in many even-even systems \cite{sj17, wyss1}. In the $A \sim 130$ region, some even-even isotopes of Ba, Ce and Nd are known to have several s-bands \cite{wyss1}. As the neutron and proton Fermi surfaces are close in energy for these isotopes, the forking of the ground-state band into two s-bands is expected, with one s-band having neutron character and the other originating from protons. However, this traditional picture cannot explain the magnetic moment measurements for the band heads, the I $=10^+$ states, of the two s-bands in $^{134}$Ce, where the g-factors of both states have a neutron character \cite{zemel}. This long-standing puzzle was addressed using the TPSM approach, and it was shown \cite{JG09} that the second s-band in $^{134}$Ce is a $\gamma$-band based on the two-neutron aligned state; since the intrinsic configurations of the two s-bands are the same in this interpretation, their g-factors are expected to be similar \cite{sj17}. It was also predicted that the two s-bands observed in the $^{136,138}$Nd nuclei should both have positive g-factors, the aligning particles being protons \cite{sj17}. 
Further, $\gamma$-bands built on two-quasiparticle states have been observed in the $^{70}$Ge \cite{kum14} and $^{156}$Dy \cite{156dy} nuclei. In the present work, we shall examine whether it would be feasible to identify $\gamma$-bands built on quasiparticle states. $\gamma$-bands in odd-mass nuclei are quite rare: such bands built on the ground-state have been identified in $^{103,105}$Nb \cite{sh10,hj13}, $^{107,109}$Tc \cite{gu10}, and very recently in the $^{155,157}$Dy nuclei \cite{snt20}. The problem is that in odd-mass nuclei, $\gamma$-configurations compete with one-quasiparticle states and contain strong admixtures from these states. This shall be addressed later in the presentation of the band head energies of the various band structures after diagonalization of the shell model Hamiltonian. The band diagrams of the other studied Pm-isotopes are displayed in Fig.~\ref{bd4}, depicting only the important band crossing regions. The band crossing for the $^{129}$Pm and $^{131}$Pm isotopes occurs at I $=39/2$ and is due to the alignment of two neutrons. For the other Pm-isotopes, the band crossing occurs at lower angular-momenta and is due to the alignment of two protons. It is also noted from Fig.~\ref{bd4} that the five-quasiparticle state, which contains the three-proton plus two-neutron aligned configuration, crosses the three-quasiparticle state at higher angular-momenta. Therefore, the present calculations predict that the odd-Pm isotopes studied in the present work should display a second crossing at high spin. The angular-momentum projected states depicted in the band diagrams, Figs.~\ref{bd1}, \ref{bd3}, \ref{bd2} and \ref{bd4}, and many more in the vicinity of the Fermi surface, are employed to diagonalize the shell model Hamiltonian of Eq.~(\ref{hamham}). 
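Because the projected states are not mutually orthogonal, this diagonalization is the generalized eigenvalue problem of Eq.~(\ref{a15}), $\mathcal{H}f = E\,\mathcal{N}f$. A minimal sketch for a $2\times 2$ case is given below; the kernel values are invented purely for illustration, and in practice the matrices have dimension equal to the number of projected configurations:

```python
import math

def hill_wheeler_2x2(H, N):
    """Energies solving det(H - E*N) = 0 for symmetric 2x2 kernels H and N,
    i.e. the 2x2 version of the generalized eigenvalue problem."""
    (h11, h12), (_, h22) = H
    (n11, n12), (_, n22) = N
    a = n11 * n22 - n12 * n12                       # coefficient of E^2
    b = -(h11 * n22 + h22 * n11 - 2.0 * h12 * n12)  # coefficient of E
    c = h11 * h22 - h12 * h12                       # constant term
    d = math.sqrt(b * b - 4.0 * a * c)
    return sorted([(-b - d) / (2.0 * a), (-b + d) / (2.0 * a)])

# Two projected states with overlap 0.5 (the norm kernel is not the identity).
H = [[1.0, 0.3], [0.3, 2.0]]
N = [[1.0, 0.5], [0.5, 1.0]]
E_low, E_high = hill_wheeler_2x2(H, N)
```

The non-trivial norm kernel shifts both roots relative to the orthonormal case; for realistic basis sizes one would of course use a library routine for the generalized symmetric eigenproblem rather than the closed-form quadratic.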
As already mentioned, the present approach is similar to the traditional spherical shell model (SSM) approach, with the exception that angular-momentum states projected from the deformed Nilsson configurations are employed as the basis states instead of the spherical configurations. The nuclei studied in the present work are beyond the reach of the SSM approach, as the dimensionality of the spherical basis space becomes too prohibitive to manage with the existing computational facilities. In the TPSM approach, optimal deformed basis states are chosen to describe the properties of deformed systems, and the number of basis states required is quite minimal. In most studies, it has been demonstrated that 40 to 50 basis states are sufficient to describe the properties of deformed systems. The additional work required in the TPSM approach is that the deformed basis needs to be projected onto states of good angular-momentum in order to diagonalize the spherical shell model Hamiltonian of Eq.~(\ref{hamham}). For the studied Pr-isotopes, the lowest projected states after diagonalization are depicted in Fig.~\ref{bhe1} for angular-momentum I $=11/2$, which is the ground-state of the negative-parity bands observed in these isotopes. The states in Fig.~\ref{bhe1} are labelled with the projected intrinsic state that is most significant in the wavefunction. It is noted that the ground-state of all the studied isotopes originates from the one-quasiparticle state having K $=3/2$. The $\gamma$-band based on the ground-state, having K $=7/2$, is located at about $1$ MeV excitation energy above the ground-state in all the isotopes. The main problem in identifying these bands in odd-mass systems is that they are mixed with the single-particle states; indeed, it is evident from the figure that several single-quasiparticle states lie in the vicinity of the $\gamma$-bands. This figure also displays three-quasiparticle states and the $\gamma$-bands built on them. 
These three-quasiparticle states become favoured in energy at high-spin and cross the ground-state band, as illustrated in the band diagrams, Figs.~\ref{bd1}, \ref{bd3}, \ref{bd2} and \ref{bd4}. It is also noted from Fig.~\ref{bhe1} that these three-quasiparticle states become lower in energy for the $^{135}$Pr and $^{137}$Pr isotopes, and it might be feasible to populate the low-spin members of these states. In particular, the most interesting prediction is the possibility of observing two almost identical three-quasiparticle bands: one the normal three-quasiparticle band having K $=1/2$, and the other the $\gamma$-band, having K $=5/2$, built on the three-quasiparticle state. These two states should have similar electromagnetic properties, like g-factors, since they originate from the same intrinsic quasiparticle configuration. The band head energies for the studied Pm-isotopes are displayed in Fig.~\ref{bhe2} and have a pattern similar to that for the Pr-isotopes, shown in Fig.~\ref{bhe1}. The only difference between the two figures is that the band head energies of the three-quasiparticle band structures are slightly lower for the Pm-isotopes. In particular, for $^{139}$Pm, the three-quasiparticle state and the $\gamma$-band based on this state are quite low in energy, which makes it the best candidate for identifying these structures in future experimental studies. The dominant components in the wavefunctions of the band head states discussed above are depicted in Figs.~\ref{wf1} and \ref{wf2} for I $=11/2$. It is observed from these figures that all the states are mixed; even the ground-state band head has small admixtures from the other one-quasiparticle states and also from the $\gamma$-band. We would like to remind the reader that projection from a triaxial intrinsic state gives rise to several bands having different values of the K-quantum number.
The shell model Hamiltonian is diagonalized with all these projected states, which results in mixing among them. The $\gamma$-band, which happens to be the first excited band, also has admixtures from the ground-state and other configurations. Although all the states are mixed, it is possible to identify them at low-spin values as they have one predominant component. For high-spin states, the bands are highly mixed, and it is difficult to identify them. The complete band structures for the lowest and the first excited state, obtained after diagonalization of the shell model Hamiltonian, are depicted in Figs.~\ref{expe1}, \ref{expe11}, and \ref{expe2} for the Pr-isotopes and in Figs.~\ref{expe101} and \ref{expe22} for the Pm-isotopes, and are compared with the experimental data, wherever available. For most of the nuclei, except for $^{127}$Pm and $^{129}$Pm, the ground-state band is known up to quite high-spin, and the TPSM energies are noted to be in good agreement with these known level energies. Some preliminary TPSM results for $^{135}$Pm were presented in the experimental work \cite{fs21}, and it was shown that the results agreed remarkably well with the data. The energy values for all the states have been specified in Figs.~\ref{expe1} - \ref{expe22}, which shall be useful for comparisons with future experimental measurements, as well as with other theoretical studies. We now turn our discussion to the band crossing features observed in the studied Pr- and Pm-isotopes. As already stated in the introduction, the observed band crossing features in some of these isotopes could not be explained using the standard CSM approach, and it was necessary to employ the self-consistent TRS approach to shed light on the anomalous band crossing features. Further, in the earlier PSM study of the odd-proton isotopes, only neutron-aligned states were considered in the basis space \cite{ysm}.
However, it was evident from the earlier analysis \cite{cm98} that neutron and proton alignments compete, and it is imperative to include both two-neutron and two-proton aligned configurations in the basis space. In the present work, both these configurations have been included, and in the following we shall present the results for the alignments and moments of inertia. The alignment $i_x$ and the dynamic moment of inertia $J^{(2)}$ have been evaluated using the standard expressions \cite{RB97}. These quantities are displayed in Figs.~\ref{ali1} and \ref{ali2} for the Pr-isotopes. For $^{125}$Pr, $^{127}$Pr and $^{129}$Pr, both the experimental and the TPSM-deduced $i_x$ depict an increasing trend with spin, and band crossing is not evident from this plot. As we shall see below, $J^{(2)}$, which is more sensitive to changes in the alignment, depicts band crossing features. The $i_x$ plots for $^{131}$Pr, $^{133}$Pr, $^{135}$Pr and $^{137}$Pr show upbends, which is indicative of a band crossing having a large interaction strength between the ground-state and the aligned band. It is evident from the figure that the TPSM results agree fairly well with those deduced from the experimental data. For the $^{125-133}$Pr isotopes, $J^{(2)}$ in Fig.~\ref{ali2} depicts upbends between the spin values of I $=31/2$ and $35/2$. The upbend is a clear indication of a change in the configuration along the yrast band and is a signature of the band crossing phenomenon. For $^{135}$Pr, the upbend in $J^{(2)}$ is noticed at a higher angular-momentum of I $=41/2$, and in the case of $^{137}$Pr, the discontinuity in $J^{(2)}$ is observed at a lower angular-momentum. The $J^{(2)}$ evaluated from the measured energies depicts a larger enhancement as compared to the TPSM-predicted value. For the isotopes of Pm from A=127 to 135, $i_x$ plotted in Fig.~\ref{ali3} depicts an increasing trend with spin.
For A=127 and 129, experimental quantities are not available, but for A=131, 133 and 135, the TPSM values are in good agreement with the data. For $^{137}$Pm and $^{139}$Pm, two upbends are predicted by the TPSM calculations, and for $^{139}$Pm, both upbends are observed in the experimental data. The $J^{(2)}$ calculated for the studied Pm-isotopes are depicted in Fig.~\ref{ali4} and are noted to be in good agreement with the known experimental quantities, except that for the isotopes $^{135}$Pm and $^{137}$Pm, the TPSM-calculated upbends are smoother in comparison to the experimental quantities. It has been demonstrated using the self-consistent TRS model \cite{cm98} that the alignments for the studied odd-proton Pr- and Pm-isotopes are quite complicated, with considerable mixing between neutron and proton configurations. In the band diagrams of Figs.~\ref{bd3} and \ref{bd4}, the alignment is either due to protons or due to neutrons, as the energies are plotted before configuration mixing. In order to investigate the mixing between the neutron and proton configurations, the wavefunction amplitudes are depicted in Fig.~\ref{wf3} for $^{127}$Pr and $^{131}$Pm, which were studied in detail in Ref.~\cite{cm98}. It is quite evident from the figure that the band crossing is not entirely due to the alignment of two neutrons, but also has a significant contribution from the aligned proton configuration. For the heavier isotopes, the situation is reversed, with the proton contribution larger than the neutron one. Therefore, the present work substantiates the TRS prediction that the alignments for the odd-proton Pr- and Pm-isotopes are quite complicated, with mixing between the proton and neutron aligned configurations. This is primarily because both the aligned protons and neutrons occupy the same intruder orbital, $1h_{11/2}$.
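For reference, the extraction of such band-crossing indicators from a level scheme can be sketched in a few lines. This is only an illustrative sketch with invented level energies (not data for any Pr or Pm isotope), assuming the common finite-difference definition $J^{(2)}(I)=4/[E(I+2)-2E(I)+E(I-2)]$ in units of $\hbar^{2}$/MeV, and $I_x=\sqrt{(I+1/2)^2-K^2}$; the experimental alignment $i_x$ additionally subtracts a rotating-reference (Harris) contribution, omitted here for brevity.

```python
# Illustrative extraction of band-crossing indicators from a level scheme.
# The level energies (MeV) and K value below are INVENTED for the example;
# they are not data for any of the studied Pr or Pm isotopes.
import math

def dynamic_moi(levels, dI=2):
    """J^(2)(I) = 4 / [E(I+dI) - 2 E(I) + E(I-dI)], in units of hbar^2/MeV."""
    out = {}
    for I, E in levels.items():
        if I - dI in levels and I + dI in levels:
            d2E = levels[I + dI] - 2.0 * E + levels[I - dI]
            out[I] = 4.0 / d2E
    return out

def angular_momentum_on_axis(I, K):
    """I_x = sqrt((I + 1/2)^2 - K^2); the experimental alignment i_x further
    subtracts a rotating-reference (Harris) term, omitted in this sketch."""
    return math.sqrt((I + 0.5) ** 2 - K ** 2)

band = {5.5: 0.000, 7.5: 0.350, 9.5: 0.800, 11.5: 1.350}  # I -> E(I), MeV
j2 = dynamic_moi(band)  # here j2[7.5] is 4/0.10 = 40 hbar^2/MeV (up to rounding)
```

A sudden rise of $J^{(2)}$ computed this way over a short spin interval is the upbend signature discussed above.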
\section{Summary and conclusions} In the present work, the TPSM approach for odd-proton nuclei has been generalized to include three-proton and three-proton coupled to two-neutron quasiparticle configurations. This extension allows the application of the TPSM approach to the high-spin band structures observed in odd-proton systems. In the earlier version, only one-proton and one-proton coupled to two-neutron configurations were considered, and this limited the applicability of the TPSM approach. In some odd-proton Pr- and Pm-isotopes, anomalous band crossing features were reported using the standard CSM analysis. It was demonstrated, using a more realistic TRS approach in which the pairing and deformation properties were obtained self-consistently, that the first band crossing in some Pr- and Pm-isotopes also contained a large contribution from the proton configuration. Normally, it is expected that in an odd-proton system the proton crossing is blocked for the yrast band and the first crossing is due to the alignment of two neutrons. It has been shown using the extended basis space that for the lighter Pr- and Pm-isotopes, the band crossing is dominated by the alignment of neutrons. However, for the heavier isotopes, it has been shown that the first band crossing has a dominant contribution from the aligned protons. The present work also confirms the TRS prediction that the band crossings in the Pr- and Pm-isotopes have mixed neutron and proton aligned configurations. Further, we have explored the possibility of observing $\gamma$-bands in the studied odd-mass systems. $\gamma$-bands are quite scarce in odd-mass systems and have been observed only in a few nuclei. In comparison, the $\gamma$-bands in even-even systems have been observed not only based on the ground-state, but have also been identified built on two-quasiparticle excited configurations.
In the even-even Ce- and Nd-isotopes, several s-bands are observed, and it was shown that some of these s-bands are, as a matter of fact, $\gamma$-bands built on two-quasiparticle states. Since the Ce- and Nd-isotopes are the even-even cores of the Pr- and Pm-isotopes, it is expected that the latter should also depict some features of these even-even cores. It has been shown in the present work that the heavier Pr- and Pm-isotopes are the best candidates for observing the $\gamma$-bands based on three-quasiparticle configurations. We have provided the excitation energies of the band heads of these structures, which shall be helpful in identifying them in future experimental studies. \section{ACKNOWLEDGEMENTS} The authors would like to acknowledge the Department of Science and Technology (Govt. of India) for providing financial assistance under Project No. CRG/2019/004960 to carry out a part of the present research work.
\subsection{Introduction} \noindent It has been shown many years ago [1-6] that the main prerequisites used to prove the Bell inequalities (BI) [7,8], or the Clauser, Horne, Shimony, Holt (CHSH) inequalities [9], such as the use of a common probability space, joint probability distributions for non-commuting observables etc., were inappropriate for the description of the spin polarization correlation experiments (SPCE). Therefore the violation of BI-CHSH in these experiments [10,11] gave neither information about the nonlocality of QT nor about its completeness. In spite of this, the testing of BI-CHSH continued, and several loopholes were indicated which could explain the apparent, but not existing, violation of BI-CHSH by the imperfection of the experimental set-ups [12]. Precise new experiments [13,14] made it possible to close several loopholes, confirmed the QT predictions for the correlation functions and even allowed the detection of strange anomalies in various detection rates, reported recently by Adenier and Khrennikov [15]. Instead of looking for new loopholes it would be more interesting, in our opinion, to use the existing data in order to find anomalies similar to those reported in [15], or to search for some fine structure in the data with the help of various purity tests proposed by us several years ago [4,16,17]. Only the discovery of a fine structure in the data, not accounted for by QT, would provide a decisive proof that the statistical description of these data provided by QT is incomplete, ending the debate started by EPR [18] over 75 years ago. The limitations and inapplicability of the BI, CHSH and GHZ inequalities [19] to SPCE have by now been demonstrated in numerous publications, e.g. [20-27,46-50]. Several local models [2,3,28,29,41] were able to reproduce the QT predictions. BI-CHSH are also violated in the macroscopic experiments discussed by Aerts [30,31] and in the computer experiments of Accardi and Regoli [21].
Strong arguments in favor of the statistical and contextual character of spin projections were given by Allahverdyan, Balian and Nieuwenhuizen [32]. Instead of rejoicing that there is no contradiction between QT and locality [25], many members of the physics community continue to marvel at the picture of two perfect random dice giving completely correlated outcomes etc. One may wonder why this is so. One reason could be that they do not understand the implications of the use of a common probability space and of joint probability distributions for the protocol of a random experiment. Perhaps the authors criticizing BI were using too technical and/or too condensed a language, e.g.: ``The main hypothesis needed to prove Bell-type inequalities is the assumption that the probabilities estimated in various SPCE can be calculated from one sample space (probability space) by conditionalization.'' [5]. The other reason could be that after Harry Potter and other science fiction stories everything seems to be possible, and magic explanations for the correlations are much more attractive than those based on a common cause or common sense. Similar reasons may perhaps explain the fact, known for years and explained recently in detail by Ballentine [34], that: ``Once acquired, the habit of considering an individual particle to have its own wave function is hard to break\ldots though it has been demonstrated strictly incorrect.'' Fortunately we are at the fourth V\"{a}xj\"{o} conference and, in my opinion, a consensus is slowly starting to build up around the statistical contextual interpretation (SCI) of QT [20,27,33-35]. According to this interpretation, the information gathered in the measurements in all the different experimental contexts provides the only reliable contextual information about a ``state'' of the identically prepared physical systems whose properties are measured.
It is not an easy process, because several discussions during the conference showed that people attribute different meanings to words such as: probability, contextual, observables, measurement, photon and local realism, and they have their own mental images of the invisible sub-phenomena underlying every physical observable phenomenon or experiment. In this paper we will give additional arguments in favor of SCI. The paper is organized as follows. In Section 2 we define the meaning which we attach to terms and notions such as: phenomena, sub-phenomena, physical reality, filters, contextual, observables, probabilities etc. In Section 3 we briefly recall the EPR paradox and the explanation given by SCI. In Section 4 we explain why, according to SCI, there are no strict anti-correlations of ``measured'' spin projections for each ``individual EPR pair'' in the singlet state. In Section 5 we criticize, in a historical perspective, various proofs of BI-CHSH, including the most recent one given by Larsson and Gill [36], and we explain why the prerequisites used in these proofs are not valid in SPCE. In Section 6 we discuss how the predictable completeness of QT could be tested. \subsection{IMPORTANT NOTIONS} The main assumption in physics is that there exists a material world (physical reality) in which we can observe various phenomena and with which we may interact by performing various repeatable experiments. Another assumption is the existence of physical laws which are responsible for the richness of phenomena and for the regularities observed in our experiments. We have no doubt that the Moon exists when we do not look at it, but of course when we look at it we can perceive it in different colors, in different phases etc. We do not have intuitive images of electrons and photons, but we have the abstract mathematical model given by QT able to describe quantitatively the various phenomena they participate in.
The importance of the interplay of ontic and epistemic realities in QT was recently discussed in some detail by Atmanspacher and Primas [37] and Emch [38]. Mathematical models are always of limited validity and apply to particular phenomena. To describe the motion of the Moon around the Earth we use a model of a material point following a well defined trajectory obtained by solving Newton's differential equations of motion. To explain its phases the Moon is modeled as a sphere. To describe the details of the formation of a crater on the Moon, when a meteorite hits it, we need a much more detailed and complicated mathematical model. This is why we will probably discover one day that at very short distances the extendedness of the hadrons and of other elementary particles plays a more important role [39] than in the standard model we are using today. Therefore the statement that a given theory, for example QT, provides the most complete description of individual physical systems lacks humility, since as Bohr said [40]: ``The main point to realize is that the knowledge presents itself within a conceptual framework adapted to account for previous experience and that any such frame may prove too narrow to comprehend new experiences.'' \subsubsection{Phenomena and Sub-Phenomena} Phenomena produce observable effects. Sub-phenomena are invisible. For example, we can see a track left by a single charged high energy elementary particle on a picture from a bubble chamber. To describe its trajectory we can use a model of a point-like particle moving according to the laws of classical physics, but of course there is an underlying microscopic invisible sub-phenomenon leading to the track formation, and if we wanted to explain it in detail we should go beyond classical electrodynamics.
If this high energy elementary particle collides with the proton of a hydrogen atom in a bubble chamber, then we see many outgoing tracks from the collision point, which is a new phenomenon: the creation of several new particles during the collision. To describe this phenomenon we have to go beyond quantum electrodynamics. We have to prepare in an accelerator a collimated beam of ``identical elementary particles'' and to observe several collision events in the bubble chamber. The only reproducible regularities, besides the conservation laws we assumed to be valid, are of a statistical nature and can be predicted by some abstract mathematical model providing the probabilities for the different possible outcomes of the collision. An intuitive description of the invisible sub-phenomena taking place during the collision is not provided. The phenomena described by QT are all of this nature. Any experimental set-up can be divided into three parts: a source preparing the ensemble of ``identical'' physical systems, an interaction/filtering part, and the detectors/counters part which produces time-series of observable events: clicks on various detectors, dots on a screen, tracks in a bubble chamber etc. One may have the impression that nowadays we are able to perform experiments on individual physical systems such as an electron or an ion in some trap. However, in order to find any statistical regularity in the data from these experiments we have to reset the initial conditions in the trap and repeat these experiments several times, obtaining again an ensemble of measurements performed on ``identical'' physical systems. For an extensive discussion of general experimental set-ups see [42]. QT gives probabilistic predictions for the distribution of outcomes obtained in various phenomena without providing intuitive models of the invisible sub-phenomena. One encounters paradoxes only if incorrect models are used to describe these sub-phenomena.
A source of light is not a gun and photons are not small bullets etc. \subsubsection{Attributes and Contextual Properties} An attributive property is a constant property of an object which can be observed and measured at any time and which is not modified by the measurement, e.g.: inertial mass, electric charge etc. A contextual property is a property revealed only in specific experiments and characterizes the interaction of the object with the measuring apparatus. Let us mention Accardi's chameleon, which is green on a leaf and brown on the bark of a tree [22,25]. In classical physics we assume that measurements of the various attributive properties possessed by an individual physical system are compatible, which means that they can be measured simultaneously or in any sequence, always giving the same results. In quantum physics the contextual properties are known after the measurements, e.g. as a click on a detector placed behind a polarization filter. The measurements of incompatible properties cannot be performed simultaneously, and the measurement of one of them destroys the information about the other one. Various sequences of these measurements lead to various probability distributions of the outcomes [42]. The measurements of attributive properties are called by Accardi [25] passive dynamical systems and those of contextual properties active dynamical systems. In QT the contextual properties of individual systems are of a statistical character, because they may only be deduced from the properties of the pure ensembles they are members of [41]: ``...a value of a physical observable, here a spin projection, associated with a pure quantum ensemble and in this way with an individual physical system, being its member, is not an attribute of the system revealed by a measuring apparatus; it turns out to be a characteristic of this ensemble created by its interaction with the measuring device.
In other words the QM is a contextual theory in which the values of the observables assigned to a physical system have only meaning in a context of a particular physical experiment.'' Another argument in favor of SCI of QT comes from probability theory. The probabilities are only the ``properties'' of random experiments [5]: ``talking about the probabilities we should always indicate the random experiment needed to estimate their values''. QT provides the algorithms to find various probabilities, therefore it is a contextual theory. Contextuality in this sense is a fully objective property of Nature. Even if nobody observes the collisions of high energy protons, they are described by the same probability distributions of the possible outcomes no matter where they happen. The probabilities found in QT do not describe one particular random experiment but a whole class of equivalent random experiments, which are assumed to be repeatable as many times as needed. \subsubsection{Probabilities, Correlations and Causality} It is not obvious how to define probability, what randomness is etc. These important topics were discussed recently in detail by Khrennikov in his stimulating book [20]. We illustrate these difficulties here by two simple examples; a more detailed discussion may be found in [20,22,42]. 1) Let us consider a random experiment which can give only two outcomes: 1 or -1. We repeat this experiment 2n times and we obtain a time-series of the results: 1, -1, 1, -1, \ldots, 1, -1. By increasing the value of n the relative frequency of getting 1 can approach 1/2 as closely as we wish, suggesting that the probability of getting 1 in each experiment is equal to 1/2. Of course this is incorrect, because if we analyze the time-series of outcomes in detail we see that we have a succession of couples of two deterministic experiments, or one deterministic experiment keeping memory of the previous result, which is called a periodic two-dimensional Markov chain.
2) Let us now toss a fair coin, assigning 1 for ``head'' and -1 for ``tail''. We again get a time-series: 1, 1, -1, -1, -1, 1, -1, \ldots with relative frequencies which tend to 1/2. There is no apparent structure in this series, so the hypothesis of independent and identical repetitions of the same experiment seems to be satisfied, and we say that a complete description of this experiment is provided by a single number: the probability of getting 1, which is equal to 1/2. Of course we believe that if we knew all the parameters describing the sub-phenomena of the tossing experiment we could predict each individual result using the laws of classical physics. The randomness here is only due to the lack of control of these parameters. Statisticians and probabilists have invented many tests in order to test the randomness and independence in such time-series, but the conclusions from any statistical study are valid only at a given level of confidence. Moreover, the series formed by the consecutive decimals of the number $\pi$ passed all the tests of randomness, in spite of the fact that each consecutive decimal is strictly determined. Without any completely conclusive measure of randomness we have to limit ourselves, for any practical applications, to generators of pseudo-random numbers which pass the known tests of randomness. One of the postulates of the Copenhagen interpretation was that in a measurement process a measured value of a physical observable is chosen among all the possible values of this observable with a given probability and in a completely random way. It was also believed that this indeterministic behavior of quantum ensembles could not be explained by the lack of control of some hidden variables describing deterministic interactions of the individual members with the measuring devices.
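The two toy time-series above can be contrasted in a short simulation (purely illustrative, not taken from any cited work): both give a relative frequency of +1 tending to 1/2, yet a lag-1 serial correlation exposes the deterministic structure of the first series at once.

```python
# Toy comparison: the deterministic alternating series and a fair-coin series
# both give relative frequency ~1/2 for the outcome +1, but a lag-1 serial
# correlation separates them immediately.
import random

def rel_freq_plus_one(xs):
    return xs.count(1) / len(xs)

def lag1_correlation(xs):
    # average product of consecutive outcomes: -1 for strict alternation,
    # close to 0 for independent fair tosses
    return sum(a * b for a, b in zip(xs, xs[1:])) / (len(xs) - 1)

n = 100_000
alternating = [(-1) ** k for k in range(n)]   # 1, -1, 1, -1, ...
rng = random.Random(0)
coin = [rng.choice((1, -1)) for _ in range(n)]

# alternating: frequency exactly 0.5, lag-1 correlation exactly -1
# coin:        frequency near 0.5,   lag-1 correlation near 0
```

The point of the example: the single number 1/2 is a complete description only when independence tests of this kind (and stronger ones) find no structure.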
This intrinsically indeterministic behavior of individual quantum ``particles'' was believed to provide a new standard of randomness, which could serve to produce the unbreakable keys in quantum cryptography. The fact that the strong correlations created by the source in SPCE survive the filtration and measurement processes is a strong argument against a purely random behavior of the individual systems during these processes. The individual systems have to carry a memory of their preparation, coded in some parameters, and a measuring device, described by its own uncontrollable microscopic parameters, has to act in some deterministic way to produce an observable outcome without completely destroying the memory of how the systems were prepared at the source. It is well known that a strong correlation between two random variables has nothing to do with a causal relation between these variables. For example, the average price of oil in a given year is correlated with the average salary of Anglican priests in the same year, due to the common cause which is inflation. The existence of strong correlations between non-interacting physical systems which interacted in the past was analyzed for the first time by EPR [18] and led them to conclude that the description of these phenomena provided by QT is incomplete. In the next section we discuss their paper shortly. \subsection{EPR-PARADOX} EPR consider two systems I and II which are permitted to interact from $t=0$ to $t=T$ and which evolve freely and independently afterwards. The state of I or II for $t\geq T$ can be found only by the reduction of the wave packet. Let $a_{1}, a_{2}, a_{3},\ldots$ be the eigenvalues of some physical observable $A$ to be measured on the system I and $u_{1}, u_{2}, u_{3},\ldots$ a complete set of corresponding orthogonal eigenfunctions.
At the moment of measurement $T_{1}\geq T$ of the observable $A$ on the system I, the wave function $\Psi(x_{1},x_{2})$ of the system I+II is given by \begin{equation} \Psi(x_{1},x_{2})=\sum_{n}\psi_{n}(x_{2})u_{n}(x_{1}) \tag{1} \end{equation} If the measurement of $A$ gives $a_{k}$, then the wave function is reduced to $c_{k}\psi_{k}(x_{2})u_{k}(x_{1})$, where $\psi_{k}(x_{2})$ is, up to a normalization constant, the wave function of the system II immediately after the measurement of $A$ on the system I has been completed and the result $a_{k}$ is known. If instead of $A$ we decided at $t=T_{1}$ to measure on I another non-commuting observable $B$ with the eigenvalues $b_{1}, b_{2}, b_{3},\ldots$ and a complete set of orthogonal eigenfunctions $v_{1}, v_{2}, v_{3},\ldots$, then instead of the formula (1) we would have \begin{equation} \Psi(x_{1},x_{2})=\sum_{s}\varphi_{s}(x_{2})v_{s}(x_{1}) \tag{2} \end{equation} If $b_{s}$ was obtained, then the wave function of the system II immediately after the measurement of $B$ on the system I would have been, up to a normalization constant, $\varphi_{s}(x_{2})$. In their paper EPR conclude: ``Thus it is possible to assign two different wave functions (in our example $\psi_{k}(x_{2})$ and $\varphi_{s}(x_{2})$) to the same reality (the second system after the interaction with the first)''. In each case the functions are assigned with certainty and without disturbing the system II. If one assumes that the wave functions are in one-to-one correspondence with the states of individual physical systems, one obtains a contradiction called the EPR Paradox. Of course, according to SCI there is no paradox, because the wave functions $\psi_{k}(x_{2})$ and $\varphi_{s}(x_{2})$ describe only different sub-ensembles of the ensemble of the particles II.
The eigenvalue expansions (1) and (2), being mathematical identities, describe different incompatible experiments. They imply the existence of long-range correlations between the measurements performed on the non-interacting, separated physical systems. The state $\Psi(x_{1},x_{2})$ is not factorized and is an example of the entangled states so popular nowadays. The discussion following the EPR paper was recently reviewed in detail in [27]. Some arguments of EPR were rejected, but nobody was able to prove that QT provided a complete description of individual physical systems. \subsection{SINGLET STATE} The most studied example of an entangled state is the singlet spin state. A spin version of the EPR experiment was proposed by Bohm [43]. Using SCI we analyze here only the predictions of QT for SPCE. A detailed discussion of the EPR-B paradox in the spirit of SCI may be found in [27]. The singlet spin state vector for a system of two particles has the form: \begin{equation} \Psi_{0}=\frac{1}{\sqrt{2}}\left( \mid+\rangle\otimes\mid-\rangle-\mid-\rangle\otimes\mid+\rangle\right) \tag{3} \end{equation} where the single-particle vectors $\mid+\rangle$ and $\mid-\rangle$ denote ``spin up'' and ``spin down'' with respect to some common coordinate system.
If we ``measure'' the spin of the particle \#1 along the unit direction vector $\mathbf{a}$ and the spin of the particle \#2 along the unit direction vector $\mathbf{b}$, the results will be correlated, and for the singlet state the correlations are described by the correlation function: \begin{equation} E(\mathbf{a},\mathbf{b})=\left\langle \Psi_{0}\right| \sigma_{\mathbf{a}}\otimes\sigma_{\mathbf{b}}\left| \Psi_{0}\right\rangle =-\cos\theta_{\mathbf{ab}} \tag{4} \end{equation} where $\sigma_{\mathbf{a}}=\mathbf{\sigma}\cdot\mathbf{a}$ and $\sigma_{\mathbf{b}}=\mathbf{\sigma}\cdot\mathbf{b}$ denote the components of the Pauli spin operator in the directions of the unit vectors $\mathbf{a}$ and $\mathbf{b}$ respectively, and $\theta_{\mathbf{ab}}$ is the angle between the directions $\mathbf{a}$ and $\mathbf{b}$. Since $E(\mathbf{a},\mathbf{b})=-1$ for $\theta_{\mathbf{ab}}=0$, it was concluded that the results of the spin projection measurements for each individual couple of particles are strictly anti-correlated. It was pointed out in [41] that this conclusion is unjustified, since according to SCI the state vector $\Psi_{0}$ only allows one to find the statistical distribution of outcomes, without giving a deterministic prediction for any individual outcome. The reason is that sharp directions and angles do not exist in Nature. Fuzzy measurements in QT have been studied for years and many important results have been obtained [44,45]. Each spin polarization correlation experiment (A,B) is defined by two macroscopic orientation vectors $\mathbf{A}$ and $\mathbf{B}$, being some average orientation vectors of the analyzers [41,26,27].
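As a purely numerical illustration of this point (the 5-degree cone half-angle and the uniform-in-solid-angle cone distribution are assumptions made only for this sketch), one can average the prediction $-\cos\theta_{\mathbf{ab}}$ of (4) over microscopic directions scattered around a common macroscopic axis and check that the resulting correlation stays strictly above $-1$ even for coinciding settings:

```python
# Illustrative Monte Carlo estimate of a smeared singlet correlation: the
# quantum prediction -cos(theta_ab) is averaged over microscopic analyzer
# directions scattered in a small cone about a common macroscopic axis A = B.
# The half-angle and the cone distribution are assumptions for this sketch.
import math
import random

def random_vector_in_cone(half_angle, rng):
    """Unit vector, uniform in solid angle, within a cone about the z-axis."""
    cos_t = 1.0 - rng.random() * (1.0 - math.cos(half_angle))
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * rng.random()
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

def smeared_E_AA(half_angle, n=100_000, seed=1):
    """Monte Carlo value of E(A, A) = <-cos theta_ab> for fuzzy directions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a = random_vector_in_cone(half_angle, rng)
        b = random_vector_in_cone(half_angle, rng)
        total -= a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    return total / n

E_AA = smeared_E_AA(math.radians(5.0))
# E_AA is close to -1 but strictly greater: no perfect anti-correlation
```

The smearing effect is tiny for a narrow cone, in line with the remark below that the quantitative effect can be very small while still excluding strict anti-correlations.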
More precisely, the analyzer A is defined by a probability distribution $d\rho_{A}(\mathbf{a})$, where \textbf{a} are microscopic direction vectors, $\mathbf{a}\in O_{A}$ and $O_{A}=\left\{ \mathbf{a}\in S^{(2)};\left| 1-\mathbf{a}\cdot\mathbf{A}\right| \leq\varepsilon_{A}\right\}$. Similarly the analyzer B is defined by its probability distribution $d\rho_{B}(\mathbf{b})$. Therefore even if the detectors and filters were perfect (no detection loophole), the idealized QT prediction for the correlation function $E(\mathbf{A},\mathbf{B})$ would be given not by the formula (4) but by a smeared formula: \begin{equation} E(\mathbf{A},\mathbf{B})=\int_{O_{A}}\int_{O_{B}}-\cos\theta_{\mathbf{ab}}\,d\rho_{A}(\mathbf{a})\,d\rho_{B}(\mathbf{b}) \tag{5} \end{equation} The quantitative effect of the smearing of $\cos\theta_{\mathbf{ab}}$ in the formula (5) can be very small, but $E(\mathbf{A},\mathbf{A})\neq-1$ and there are no strict anti-correlations between the measured polarization projections. Of course in SPCE the formulas (4) and (5) have to include additional factors to account for the efficiencies of the detectors, various transmission coefficients etc. The formula (5) and similar formulas for the joint probabilities of detection [41,26,27,32] confirm only the fundamental contextuality of QT, due to which a spin projection on a given axis is not a predetermined attribute of an individual physical system recorded by a measuring device but is created in the interaction of the system with this device. As we already mentioned, the correlations between far away measurements suggest the existence of supplementary parameters keeping the memory of the preparation stage and describing the invisible sub-phenomena during the measurement process. \subsection{BELL INEQUALITIES} Let us describe a typical SPCE in the language of observed phenomena.
A pulse from a laser hitting a non-linear crystal produces two correlated physical fields propagating with constant velocities towards far away detectors. Each of these fields has the property that it produces clicks when hitting a photon detector. We place two polarization analyzers A and B in front of the detectors on both sides, and after the interaction of the fields with the analyzers we obtain two correlated time series of clicks on the detectors. Each analyzer is characterized by its macroscopic direction vector, which may be changed at any time. By changing the direction vectors we obtain various coincidence experiments labeled by (A,B), where \textbf{A} and \textbf{B} are the macroscopic direction vectors of the analyzers A and B respectively. In QT the crystal is described as a source of couples of photons in a spin singlet state, and the ensemble of these couples is described approximately by the state vector (3). It is well known that individual photons are neither localizable nor visible, and they do not behave as point-like particles following classical trajectories. Nevertheless the mental picture of correlated photon pairs travelling across the experimental set-up and carrying their own unknown spins (intrinsic magnetic moments), whose projections on any direction are predetermined by the source and recognized by the polarization analyzers, is commonly used in discussions of SPCE. Knowing that such a description of the sub-phenomena is inaccurate, Bell tried to formulate the most abstract probabilistic local hidden variable model in order to explain the spin polarization correlations in a singlet state predicted by QT. As noted above, in the experiment (A,B) two time series of outcomes are produced, each outcome being 1 or -1. The main assumptions of the so-called local realistic hidden variable (LRHV) model proposed by Bell [7] are: 1. Individual outcomes are produced locally by the corresponding analyzers A and B. 2.
There are some uncontrollable hidden variables $\lambda\in\Lambda$ determining the values of individual outcomes. In the experiment (A,B) the outcomes are obtained as values of some bi-valued functions on $\Lambda$ such that $A(\lambda,\mathbf{a})=\pm1$ and $B(\lambda,\mathbf{b})=\pm1$ respectively, where \textbf{a} and \textbf{b} denote the settings of the analyzers (we keep here on purpose the original Bell notation \textbf{a} and \textbf{b} instead of \textbf{A} and \textbf{B} used above). 3. The probability space $\Lambda$ and the probability distribution $\rho(\lambda)$ do not depend on \textbf{a} and \textbf{b}. Since the probability distribution of hidden variables is prepared at the source far away from the detectors, Bell wrongly believed that assumption 3 is another consequence of locality. The assumptions 1-3 allow one to write the correlation function $E(\mathbf{a},\mathbf{b})$ as: \begin{equation} E(\mathbf{a},\mathbf{b})=\int_{\Lambda}A(\lambda,\mathbf{a})B(\lambda,\mathbf{b})\rho(\lambda)\,d\lambda \tag{6} \end{equation} Using the formula (6) for any couple of directions of the analyzers, the BI and CHSH inequalities below can be proven. \begin{equation} \left| E(\mathbf{a},\mathbf{b})-E(\mathbf{a},\mathbf{b}^{\prime })\right| +\left| E(\mathbf{a}^{\prime},\mathbf{b}^{\prime})+E(\mathbf{a}^{\prime},\mathbf{b})\right| \leq2 \tag{7} \end{equation} In 1976 we met John Bell in Geneva, and we left him a few handwritten pages with our comments concerning the limitations of his proof. In particular we pointed out that if the hidden variables $\lambda$ describing each pair of ``particles'' were couples of bi-valued, strictly correlated, spin functions S$_{1}$ and S$_{2}$ on a sphere, such that the measured outcomes for each pair were the values of S$_{1}$(\textbf{a}) and S$_{2}$(\textbf{b}), then one could not use the integration over the set of all of these functions as is done in the formula (6).
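A Monte Carlo sketch makes the content of (6) and (7) concrete. The outcome functions below are a made-up toy choice (any bi-valued local functions would do); for every model of the form (6), the CHSH combination stays below 2 for arbitrary settings, up to the sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden variables: unit vectors lambda uniform on the sphere; a toy choice of
# bi-valued outcome functions A(lambda,a)=sign(lambda.a), B(lambda,b)=-sign(lambda.b)
lams = rng.normal(size=(200_000, 3))
lams /= np.linalg.norm(lams, axis=1, keepdims=True)

def E(a, b):
    """Monte Carlo estimate of formula (6) for this model."""
    return np.mean(np.sign(lams @ a) * (-np.sign(lams @ b)))

def chsh(a, a2, b, b2):
    """Left-hand side of the CHSH inequality (7)."""
    return abs(E(a, b) - E(a, b2)) + abs(E(a2, b2) + E(a2, b))

# arbitrary analyzer settings (random unit vectors)
settings = rng.normal(size=(4, 3))
settings /= np.linalg.norm(settings, axis=1, keepdims=True)
lhs = chsh(*settings)
print(lhs)   # stays below 2, as (7) requires for any model of the form (6)
```

Note that this model reproduces the strict anti-correlation $E(\mathbf{a},\mathbf{a})=-1$ exactly, yet it cannot reach the singlet correlations of (4) at all angles, which is the content of Bell's theorem.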
Moreover we indicated that in this case one should try to prove BI by using estimates of the correlation functions: the empirical averages obtained by averaging the sums of the products S$_{1}$(\textbf{a})S$_{2}$(\textbf{b}) over all pairs in long runs of the corresponding experiments. If E(\textbf{a},\textbf{b}) is replaced by its estimate, the proof of (7) can never be rigorous, because the error bars have to be included, and one also has to assume that the sets of spin functions describing the couples in the runs of different experiments are exactly the same, which is highly improbable due to the richness of the uncountable set of spin functions on a sphere. These ideas, in their final more mature form, were only published later in a series of papers [4,5,16,41]. In the meantime Pitovsky [2,3] constructed spin functions on the sphere for which the integral (6) could not be defined and proposed a local hidden variable model based on these functions, able to reproduce the predictions of QT and violating BI. Aerts [6,30], inspired by Accardi's paper [1], showed that BI can also be violated in macroscopic experiments. De Baere [46,47] pointed out that BI might be violated due to the non-reproducibility of a set of hidden variables. The Pitovsky model was difficult to understand. We simplified it and rendered it fully contextual in [41]. The spin functions were strictly correlated, but due to the smearing over the microscopic directions there were no strict anti-correlations. In [5] we gave several arguments why BI could not be proven rigorously using the empirical averages, and we showed that the use of a unique probability space implied an experimental protocol incompatible with SPCE. In conclusion we wrote: ``The various SPCE cannot be replaced by one random experiment of the type discussed above and in our opinion this is the reason why the Bell inequalities do not hold.
The various probabilities appearing in their proofs are counter-factual and have nothing to do with the measured ones.'' Let us clarify which single random experiment we were talking about. The k-variate random variable $X=(X_{1},...,X_{k})$ and the joint probability distribution on a unique probability space $\Lambda$ were invented in order to describe the following random experiment: we take a large random sample of members from some population and we measure the values of $X_{i}$, $i=1,...,k$, on each member in the sample, obtaining an individual outcome of the experiment as a set of k numbers $(x_{1},...,x_{k})$. The empirical joint distribution of the frequencies of these outcomes gives information about the joint probability distribution characterizing the whole population. From this joint probability distribution one may obtain by conditionalization the marginal probability distributions for any single random variable $X_{i}$ or for any group of them. This is exactly the protocol of the random experiment implied by the formula (6): pick up a pair described by $\lambda$, measure the spin projections for this pair in all directions etc., which is of course impossible. We thought that the arguments presented against BI were convincing enough to stop further speculations concerning their violation, but we were wrong. Probably some papers were simply unknown or not understood. With the growing interest in quantum information and speculations about faster than light communications in EPR experiments, it was necessary to provide an up-to-date refutation of BI and of the nonlocality of QT. It was done e.g. by Accardi et al. [22,25], Khrennikov [20], Hess and Phillip [24], Kracklauer [28] and by myself [22,26,27]. We already explained why the formula (6) does not apply to SPCE.
The correct formula which may be used to describe locally the sub-phenomena in any particular experiment SPCE for the couple of analyzers (A,B) is: \begin{equation} E(\mathbf{A},\mathbf{B})=\int_{\Lambda_{AB}}A(\lambda_{1},\mathbf{a})B(\lambda_{2},\mathbf{b})\rho(\lambda_{1},\lambda_{2})\,d\rho_{A}(\mathbf{a})\,d\rho_{B}(\mathbf{b})\,d\lambda \tag{8} \end{equation} where $\Lambda_{AB}=\Lambda_{1}\times\Lambda_{2}\times\Lambda_{A}\times \Lambda_{B}$ with $(\lambda_{1},\lambda_{2},\mathbf{a},\mathbf{b})\in\Lambda_{AB}$, and $d\lambda$ is a shorthand notation for the measure on $\Lambda_{1}\times\Lambda_{2}$ for which the integral makes sense. One of the most recent reformulations of the CHSH theorem was given by Larsson and Gill [36]. Instead of $\Lambda_{AB}$ they use a unique probability space $\Lambda$ as in formula (6), saying: ``...the particles travelling from the source carry some information about what the result would be of each possible measurement at the detectors. This information is denoted $\lambda$ above, and can consist of anything, from a simple `absolute' polarization to some complicated recipe of what each result of measurement will be, for each setting of the detector parameter. What it is exactly is not important; the very existence of such information will be referred to as `Realism'; the $\lambda$, in a sense, is `the element of reality' that determines the measurement result.'' Without writing explicitly the formula (6), they conclude that under these prerequisites the CHSH theorem cannot be violated and therefore Local Realistic hidden variable models are impossible. Next the authors give an interesting discussion of how various experimental factors such as visibility, efficiency etc.
may prevent the violation or prevent the application of the theorem, obtaining a formula (9) which we rewrite here in a simplified form: \begin{equation} \left|E(\mathbf{a},\mathbf{b}|\Lambda(A,B))-E(\mathbf{a},\mathbf{b}^{\prime}|\Lambda(A,B^{\prime}))\right|+\left| E(\mathbf{a}^{\prime},\mathbf{b}^{\prime}|\Lambda(A^{\prime},B^{\prime}))+E(\mathbf{a}^{\prime},\mathbf{b}|\Lambda(A^{\prime},B))\right| \leq4-2\delta \tag{9} \end{equation} where $\Lambda(C,D)\subset\Lambda$ denotes a subset of $\Lambda$ describing the experiment (C,D) and $\delta$ is some minimum probability of the overlap of the subsets used in the formula (9). In spite of the fact that the authors did not list the very strong assumption concerning the existence of the joint distributions on the unique probability space $\Lambda$ in their proofs of (7) and (9), we completely agree with them that if ``Realism'' is understood as a strict predetermination of the experimental outcomes at the source, then Local Realistic hidden variable models are impossible. All local models able to reproduce the QT predictions [21,41,25,28,29] and to violate the inequalities (7) are contextual, as are all SPCE. The correct formula is (8), and there is no overlap between $\Lambda_{AB}$ and $\Lambda_{CD}$ if $C\neq A$ or $B\neq D$. Therefore BI or CHSH cannot be proven. The only formula which can be proven is the formula (9) with $\delta=0$, which of course does not violate the QT predictions. Using the language of Larsson and Gill, the information $\lambda$ does not determine the future measurement results.
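For comparison, the QT correlations (4) themselves exceed the bound 2 of (7) at the well-known optimal settings, which is exactly what the contextual models above must reproduce. A one-line check:

```python
import numpy as np

# QT correlation (4), E = -cos(theta_ab), at the standard settings
# a = 0, a' = 90, b = 45, b' = 135 degrees in a common plane
def E_qt(alpha, beta):
    return -np.cos(np.radians(alpha - beta))

lhs = abs(E_qt(0, 45) - E_qt(0, 135)) + abs(E_qt(90, 135) + E_qt(90, 45))
print(lhs)   # = 2*sqrt(2) > 2
```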
The hidden information, carried by the particles at the moment of the measurement, about the preparation at the source is stored in $(\lambda_{1},\lambda_{2})$. The information which predetermines the outcome of the measurement for the analyzer A is stored in $(\lambda_{1},\mathbf{a})$, where \textbf{a} describes a microscopic state of the analyzer A at the moment of measurement. The information about what the outcome would be is not created at the source and decoded with mistakes by the analyzer, but is created in the interaction with the analyzer and known only after the measurement is completed. The hidden variable model of the underlying sub-phenomena given by (8) is intuitive, local and contextual. According to Accardi's terminology [25] such a probabilistic model describes an adaptive dynamical system. Another simple hidden variable model of SPCE has been proposed recently by Matzov [29]. We see that testing BI-CHSH cannot help us to check the completeness of QT. We should test instead the predictable completeness of QT. \subsection{PURITY TESTS AND PREDICTABLE COMPLETENESS} Let us consider an experiment in which we have a stable source producing a beam of ``identical invisible particles'' whose intensity is measured by the clicks on some detector. When we pass this beam through some quantum filter F we obtain a beam having different properties and reduced intensity. The detailed discussion of why quantum filters are not selectors of preexisting properties is given in [42]. If by repeating our experiment several times we discover that the relative frequencies converge to some number p(F), we may interpret it as the probability that an individual particle from the beam will pass the filter F.
According to SCI, the claim that QT gives the complete description of an individual system being a member of some pure quantum ensemble may only be understood in the sense that the probabilistic predictions of QT provide a complete description of the ensembles of the outcomes of all possible measurements performed on this pure quantum ensemble. The standard interpretation of QT did recognize the importance of a pure quantum state and defined it as a state of a physical system which passed through a maximal filter or on which a complete set of commuting observables was measured. The immediate question was what to do if we did not have a maximal filter, or how could we know that the filter used was a maximal one? We found this definition highly unsatisfactory, and we analyzed in 1973 various general experimental set-ups containing sources of some hypothetical particle beams, detectors (counters), filters, transmitters and instruments [42]. This analysis led us to various conclusions which are pertinent to the topic of this paper: 1) Properties of the beams depend on the properties of the devices and vice versa, and are defined only in terms of the observed interactions between them. For example a beam b is characterized by the statistical distribution of outcomes obtained by passing several replicas of this beam through all available devices d$_{i}$. A device d is defined by the statistical distribution of the results it produces for all available beams b$_{i}$. All observables are contextual, and the physical phenomena observed depend on the richness of the beams and of the devices. 2) In different runs of the experiments we observe the beams b$_{k}$, each characterized by its empirical probability distribution. Only if an ensemble \ss\ of all these beams is a pure ensemble of pure beams can we associate estimated probability distributions of the results with the beams b$\in$\ss\ and eventually with the individual particles forming these beams.
3) A pure ensemble \ss\ of pure beams b is characterized by probability distributions s(r) which remain approximately unchanged: (i) for the new ensembles \ss$_{i}$ obtained from the ensemble \ss\ by the application of the i-th intensity reduction procedure to each beam b$\in$\ss; (ii) for all rich sub-ensembles of \ss\ chosen in a random way. In order to test the validity of the Optical Theorem we decided to test whether the initial two-hadron states prepared for the high energy collision are mixed with respect to the impact parameter. Therefore we reviewed in a series of papers several non-parametric statistical purity tests which could be used [17], and together with Gajewski [51] we performed the purity tests for $\pi^{-}d$ charge multiplicity distributions using the raw data from the Cambridge-Cracow-Warsaw collaboration, in which a deuterium filled bubble chamber was exposed to a $\pi^{-}$ beam of momentum 21 GeV/c. We wanted to find significant differences between the data obtained in the different accelerator runs. If the initial state is pure, the different channels should be randomly distributed in time with some fixed probabilities of appearance. If one concentrates on the appearance of two groups of channels, one obtains time ordered sequences of 0 and 1 such as: 1000110000... The randomness of these sequences can be tested in different ways. In one of the tests the hypothesis that the distribution was random could be rejected at a significance level as low as 0.0014. Since we considered our paper mainly as an illustration of various testing methods, we did not insist on the importance of this result, hoping that it would be confirmed by others.
In 1984 we noticed that the purity tests could also be used to test the completeness of QT because: ``The main feature of any theory with supplementary parameters is that the quantum pure ensembles become mixed ensembles of the individual systems characterized by the different values of these new parameters. There is a principal difference between a pure statistical ensemble and a mixed one. The pure ensemble is homogeneous, a mixed one should reveal a fine structure'' [16]. If the source is producing a pure beam of particles, all runs of the experiment should be highly compatible. If the source is producing a mixed ensemble, the mixture could vary slightly from one run to another. We could also hope to change its composition by using some intensity reduction procedures. The purity test may be defined more rigorously as follows. Let O be a stable source of particles and $\gamma$ a measuring device of some physical observable $\gamma$X. A set S=\{x$_{k}$;\ $k=1,...,m$\}, where x$_{k}$ denote the measured values of $\gamma$X, is a sample drawn from some statistical population of the random variable X associated with the observable $\gamma$X. If b$_{i}$ is a beam of m$_{i}$ particles produced by the source O in the time interval [t$_{i}$, t$_{i}$+$\Delta$t], we obtain a sample S$_{i}$ when $\gamma$X is measured on the beam b$_{i}$. By using the j-th beam intensity reduction procedure applied to the beam b$_{i}$ we obtain a family of new beams b$_{i}$(j), j=1,...,n. Measuring $\gamma$X on the beams b$_{i}$(j) we obtain n new samples S$_{i}$(j). We state that the beams produced by the source O are pure only if we cannot reject the hypothesis H$_{0}$: H$_{0}$: All the samples S$_{i}$ and S$_{i}$(j) for different values of t$_{i}$ and $\Delta$t are drawn from the same unknown statistical population. To test H$_{0}$ one has to use statistical non-parametric compatibility tests such as: the Wilcoxon-Mann-Whitney test, normal scores test, rank or run tests [17].
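As an illustration of one such test, here is a minimal sketch of the Wald-Wolfowitz runs test (the run test mentioned above) applied to a binary channel sequence of the kind discussed earlier; the two sequences are made up for the example.

```python
import math

def runs_test(seq):
    """Wald-Wolfowitz runs test for randomness of a binary (0/1) sequence.
    Returns the z-statistic: under H0 (random ordering) it is approximately
    standard normal, so a large |z| makes the ordering suspicious."""
    n1 = sum(seq)
    n0 = len(seq) - n1
    n = n0 + n1
    runs = 1 + sum(1 for i in range(1, n) if seq[i] != seq[i - 1])
    mu = 2.0 * n1 * n0 / n + 1.0
    var = 2.0 * n1 * n0 * (2.0 * n1 * n0 - n) / (n * n * (n - 1.0))
    return (runs - mu) / math.sqrt(var)

# A well-mixed sequence versus a strongly clustered one of the same composition
mixed = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
clustered = sorted(mixed)   # all 0s then all 1s: only 2 runs
print(runs_test(mixed), runs_test(clustered))
```

The clustered ordering produces a large negative z (too few runs) and would be rejected as random, while the mixed ordering is compatible with H$_0$.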
These tests can be used to analyze any existing experimental data, in particular the data from SPCE. Of course the rejection of H$_{0}$ proves only the impurity of the ensembles which were incorrectly described in QT as pure ensembles, but it could also be an indication that QT is not predictably complete. It is not easy to show that QT is not predictably complete, because the mathematical language it uses is very rich and flexible [27,42], allowing a good fit to the experimental data. To prove the predictable incompleteness of QT we need something more. The results of any experiment may be represented as a time series of various possible outcomes. If there are k different outcomes possible, QT describes this time series as a sample drawn at random from some particular multinomial probability distribution. The outcomes should therefore appear randomly in time with given probabilities. If one could detect some temporal fine structure in this time series, or find a stochastic model able to explain it, then it would mean that QT does not provide a complete description of the experimental data obtained in this experiment. Several methods are used to study and to compare empirical time series: frequency or harmonic analysis, periodograms etc. [52,58]. Due to the limited efficiencies of the detectors and other imperfections of the experimental set-ups, one always faces a dilemma whether and how one should correct the data in order to obtain a ``fair sample'' of outcomes to be compared with the theoretical model of the phenomena. Adenier and Khrennikov recently analyzed the data from the SPCE of Greg Weihs et al. [13] and found several interesting anomalies which survived various data correction attempts and which were not accounted for by the current description provided by QT.
To elucidate these anomalies one would like to have more information about the calibration tests performed by the group before the experiment, such as the numbers of counts on the detectors: without spin analyzers on one side and on both sides, with the coincidence circuitry on and off, with the singlet source replaced by another source producing two beams having known spin polarization, etc. \subsection{CONCLUSIONS} In spite of the experimental imperfections, we do not believe that the violation of BI in SPCE is due to unfair sampling. The main reason is that the probabilistic models used to prove BI are not valid for SPCE. We hope that the arguments presented in this paper will cut short all speculations about the nonlocality observed in SPCE and will promote the statistical contextual interpretation (SCI) of QT. SCI is free of paradoxes and shares Einstein's conviction that the probabilistic description of the phenomena is due to the lack of knowledge and control of the underlying sub-phenomena. At the same time SCI agrees with Bohr that the measured value of a physical observable is not predetermined and that it has meaning only in a specific experimental context, which must always be included in any model aiming to describe the sub-phenomena. SCI agrees with Bohr that QT gives only probabilistic predictions for the phenomena, but SCI does not say that a more detailed description of the underlying sub-phenomena is impossible. According to SCI, the mysterious long range correlations in SPCE are due to the memory of the preparation at the source preserved in the sub-phenomena. Several local descriptions of sub-phenomena based on this idea, e.g. [2,3,41,28,29], were able to reproduce the predictions of QT. Similar long range correlations exist in the macroscopic world.
For example, a violent earthquake in the middle of the ocean causes strong correlations between random variables such as: the force and height of the tsunami waves, the force of the winds, the number of victims etc. on far away shores [22]. Also in statistical physics there exist long range correlations between the coordinates and coarse-grained velocities of Brownian particles which interacted in the past, as was proven by Allahverdyan, Khrennikov and Nieuwenhuizen [53]. In view of this, any speculation about how the measurement performed on one photon influences the behavior of the other photon from the EPR pair is completely unfounded. A general belief was that the language of Hilbert spaces and probability amplitudes used by QT could not be deduced from the classical theory of probability. This belief was shown to be incorrect by Khrennikov [54], who developed a probability calculus of conditional probabilities depending explicitly on experimental contexts and was able to reconstruct as a special case the probabilistic formalism of nonrelativistic quantum mechanics. In another paper [55] he showed that quantum averages $\langle O\rangle=\mathrm{Tr}(\widehat{O}\rho)$ can be obtained as approximations of expectation values in some prequantum classical statistical field theory. One cannot obtain a proof of the incompleteness of QT by constructing ad hoc local hidden variable models able to reproduce some predictions of QT. A convincing proof of incompleteness would require the construction of a general model providing a consistent description of all phenomena described by QT and able to give more detailed predictions of these phenomena than those given by QT, which seems to be a formidable task. Testing the predictable completeness of QT seems easier and more promising. The outcomes of physical experiments can be represented by numerical time series.
QT takes for granted the randomness of these time series and gives only the probabilities of the appearance of the various outcomes. Any significant deviation from randomness, or the discovery of some reproducible fine structure in these series not explained by QT, would prove that QT is not predictably complete and that a more detailed description of the phenomena is needed. For example, to describe effectively the behavior of cold trapped ions, the continuous quantum evolution had to be supplemented by quantum jumps obeying some stochastic L\'{e}vy process, as was demonstrated by Claude Cohen-Tannoudji and collaborators [56]. Similarly it seems plausible that some new description may be needed in order to describe in detail the data from the beautiful experiments with ultra slow propagation of coherent light pulses in Bose-Einstein condensates reported recently by Lene Vestergaard Hau and collaborators [57]. The discovery of new fine structures in the data would be important by itself, but it would also give an additional clear argument against treating the quantum state vectors as attributes of individual physical systems which can be manipulated instantaneously. Such an interpretation and instantaneous manipulations of qubits are often used in the domain of quantum computing [59]. Bohr considered QT as a theory of quantum phenomena and insisted on the ``wholeness'' of such phenomena [40]. If we have to talk about invisible sub-phenomena, which is inevitable, we have to use the most precise language in order to avoid paradoxes and confusion. \subsection{ACKNOWLEDGMENTS} The author would like to thank Andrei Khrennikov for the warm hospitality extended to him during this enjoyable and interesting conference. \subsection{REFERENCES}
\section{Set-up} We assume that in the market there is a single risky asset at discrete times $t=1,\dotso,T$. Let $S=(S_t)_{t=1}^T$ be the canonical process on the path space $\mathbb{R}_+^T$, i.e., for $(s_1,\dotso,s_T) \in \mathbb{R}_+^T$ we have that $S_i(s_1,\dotso,s_T)=s_i$. The random variable $S_i$ represents the price of the risky asset at time $t=i$. We denote the current spot price of the asset by $S_0=s_0$. In addition, we assume that in the market there is a finite number of options $g_i:\ \mathbb{R}_+^T\rightarrow\mathbb{R},\ i=1,\dotso,N$, which can be bought or sold at time $t=0$ at price $g_i^0$. We assume that $g_i$ is continuous and $g_i^0=0$. Let $$\mathcal{M}:=\{\mathbb{Q} \text { probability measure on } \mathbb{R}_+^T:\ S=(S_i)_{i=1}^T \text{ is a } \mathbb{Q}\text{-martingale};$$ $$\text{ for } i=1,\dotso,N,\ \mathbb{E}_\mathbb{Q}g_i=0\}.$$ We make the standing assumption that $\mathcal{M}\neq\emptyset$. Let us consider the semi-static trading strategies consisting of the sum of a static option portfolio and a dynamic strategy in the stock. We will denote by $\Delta$ the predictable process corresponding to the holdings in the stock. More precisely, the semi-static strategies generate payoffs of the form: $$x+\sum_{i=1}^N h_i g_i(s_1,\dotso,s_T)+\sum_{j=1}^{T-1}\Delta_j(s_1,\dotso,s_j)(s_{j+1}-s_j)=:x+h\cdot g+(\Delta\cdot S)_T,\ s_1,\dotso,s_T\in\mathbb{R}_+,$$ where $x$ is the initial wealth, $h=(h_1,\dotso,h_N)$ and $\Delta=(\Delta_1,\dotso,\Delta_{T-1})$.
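A minimal sketch of how such a payoff is evaluated along a single path may be useful; the option, its price offset and all the numbers below are hypothetical, chosen only for the illustration. Note that $\Delta_j$ is allowed to depend only on the prices observed up to time $j$ (predictability).

```python
def semistatic_payoff(x, h, g, Delta, s):
    """Payoff x + h.g + (Delta . S)_T of a semi-static strategy along one
    price path s = (s_1, ..., s_T); Delta_j sees only (s_1, ..., s_j)."""
    payoff = x + sum(hi * gi(s) for hi, gi in zip(h, g))
    for j in range(len(s) - 1):                 # j = 1, ..., T-1 (0-based here)
        payoff += Delta[j](s[: j + 1]) * (s[j + 1] - s[j])
    return payoff

# toy example: one call-like claim minus a constant, a stand-in for a
# zero-price option g_1, and a buy-and-hold position Delta_j = 1
s = [100.0, 105.0, 95.0, 110.0]
g = [lambda path: max(path[-1] - 100.0, 0.0) - 7.0]
h = [2.0]
Delta = [lambda past: 1.0] * (len(s) - 1)
payoff = semistatic_payoff(5.0, h, g, Delta, s)
print(payoff)   # x + h.g + (S_T - S_1) = 5 + 6 + 10 = 21
```

With a constant holding the dynamic term telescopes to $s_T-s_1$, which is a convenient sanity check for the implementation.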
We will assume that $U$ is a function defined on $\mathbb{R}_+$ that is bounded, strictly increasing, strictly concave, continuously differentiable and satisfies the Inada conditions $$U'(0)=\lim_{x\rightarrow 0}U'(x)=\infty,$$ $$U'(\infty)=\lim_{x\rightarrow\infty}U'(x)=0.$$ We also assume that $U$ has asymptotic elasticity strictly less than 1, i.e., $$AE(U)=\limsup_{x\rightarrow\infty}\frac{xU'(x)}{U(x)}<1.$$ Let $\mathcal{P}$ be a set of probability measures on $\mathbb{R}_+^T$, which represents the possible beliefs for the market. We make the following assumptions on $\mathcal{P}$:\\ \textbf{Assumption P:} \begin{itemize} \item[(1)] $\mathcal{P}$ is convex and weakly compact. \item[(2)] For any $\mathbb{P}\in\mathcal{P}$, there exists a $\mathbb{Q}\in\mathcal{M}$ that is equivalent to $\mathbb{P}$. \end{itemize} Note that the second condition is natural in the sense that every belief in the market model is reasonable with respect to no-arbitrage, e.g., see \cite{Nutz2}. We consider the robust utility maximization problem $$\hat u(x)=\sup_{(\Delta, h)}\inf_{\mathbb{P}\in\mathcal{P}}\mathbb{E}_\mathbb{P}U\left(x+(\Delta\cdot S)_T+h\cdot g\right).$$ \section{Main result} Theorem~\ref{theorem1} and Theorem~\ref{theorem2} are the main results of this paper. We will first introduce some spaces and value functions concerning the duality. Let \begin{equation} V(y)=\sup_{x>0}[U(x)-xy],\ \ \ y>0, \notag\end{equation} and \begin{equation} I:=-V'=(U')^{-1}. \notag\end{equation} For any $\mathbb{P}\in\mathcal{P}$, we define some spaces as follows, where the (in)equalities are in the sense of $\mathbb{P}$-a.s.
\begin{itemize} \item $\mathfrak{X}_\mathbb{P}(x,h)=\{X:\ X_0=x,\ x+(\Delta\cdot S)_T+h\cdot g\geq 0,\ \text{for some }\Delta\} $ \item $\mathcal{Y}_\mathbb{P}(y)=\{Y\geq 0:\ Y_0=y,\ XY \text{ is a $\mathbb{P}$-super-martingale, }\forall X\in\mathfrak{X}_\mathbb{P}(1,0)\}$ \item $\mathfrak{Y}_\mathbb{P}(y)=\{Y\in\mathcal{Y}_\mathbb{P}(y):\ \mathbb{E}_\mathbb{P}\left(Y_T(X_T+h\cdot g) \right)\leq xy, \forall X\in\mathfrak{X}_\mathbb{P}(x,h)\}$ \item $\mathcal{C}_\mathbb{P}(x,h)=\{c\in L_+^0(\mathbb{P}):\ c\leq X_T+h\cdot g, \text{ for some } X\in\mathfrak{X}_\mathbb{P}(x,h)\}$ \item $\mathcal{C}_\mathbb{P}(x)=\bigcup_h\mathcal{C}_\mathbb{P}(x,h)$ \item $\mathcal{D}_\mathbb{P}(y)=\{d\in L_+^0(\mathbb{P}):\ d\leq Y_T,\ \text{for some } Y\in\mathfrak{Y}_\mathbb{P}(y)\}$ \end{itemize} Denote \begin{equation}\label{cd} \mathcal{C}_\mathbb{P}=\mathcal{C}_\mathbb{P}(1),\ \mathcal{D}_\mathbb{P}=\mathcal{D}_\mathbb{P}(1). \end{equation} It is easy to see that for $x>0,\ \mathcal{C}_\mathbb{P}(x)=x\mathcal{C}_\mathbb{P},\ \mathcal{D}_\mathbb{P}(x)=x\mathcal{D}_\mathbb{P}$. Define the value of the optimization problem under $\mathbb{P}\in\mathcal{P}$: \begin{equation} u_\mathbb{P}(x)=\sup_{c\in\mathcal{C}_\mathbb{P}(x)}\mathbb{E}_\mathbb{P}U(c),\ \ \ v_\mathbb{P}(y)=\inf_{d\in\mathcal{D}_\mathbb{P}(y)}\mathbb{E}_\mathbb{P}V(d). \notag\end{equation} Then define \begin{equation} u(x)=\inf_{\mathbb{P}\in\mathcal{P}}u_\mathbb{P}(x),\ \ \ v(y)=\inf_{\mathbb{P}\in\mathcal{P}}v_\mathbb{P}(y). \notag\end{equation} Below are the main results of this paper. \begin{theorem}\label{theorem1} Under \textbf{Assumption P}, we have \begin{equation}\label{exchangeability} u(x)=\hat u(x)=\inf_{\mathbb{P}\in\mathcal{P}}\sup_{(\Delta, h)}\mathbb{E}_\mathbb{P}U\left(x+(\Delta\cdot S)_T+h\cdot g\right),\ x>0. \end{equation} Besides, the value functions $u$ and $v$ are conjugate, i.e., \begin{equation} u(x)=\inf_{y>0}(v(y)+xy),\ \ \ v(y)=\sup_{x>0}(u(x)-xy).
\notag\end{equation} \end{theorem} \begin{theorem}\label{theorem2} Let $x_0>0$. Under \textbf{Assumption P}, there exists a probability measure $\hat{\mathbb{P}}\in\mathcal{P}$, an optimal strategy $(\hat\Delta, \hat h)$ with terminal wealth $\hat X_T=x_0+(\hat\Delta\cdot S)_T+\hat h\cdot g\geq 0$, and $\hat Y_T\in\mathfrak{Y}_{\hat{\mathbb{P}}}(\hat y)$ with $\hat y=u'_{\hat{\mathbb{P}}}(x_0)$ such that \begin{itemize} \item[(i)] $u(x_0)=u_{\hat{\mathbb{P}}}(x_0)=\mathbb{E}_{\hat{\mathbb{P}}}[U(\hat X_T)],$ \item[(ii)] $v(\hat y)=u(x_0)-\hat yx_0,$ \item[(iii)] $v(\hat y)=v_{\hat{\mathbb{P}}}(\hat y)=\mathbb{E}_{\hat{\mathbb{P}}}[V(\hat Y_T)],$ \item[(iv)] $\hat X_T=I(\hat Y_T)$ and $\hat Y_T=U'(\hat X_T)$, $\hat{\mathbb{P}}$-a.s., and moreover $\mathbb{E}_{\hat{\mathbb{P}}}[\hat X_T\hat Y_T]=x_0\hat y$. \end{itemize} \end{theorem} \section{Proof of the main results} This section is devoted to the proof of the main results, Theorem~\ref{theorem1} and Theorem~\ref{theorem2}. \begin{proof}[Proof of Theorem~\ref{theorem1}] For $\mathbb{P}\in\mathcal{P}$ and any measurable function $f$ defined on $\mathbb{R}_+^j$, there exists a sequence of continuous functions $(f_n)_{n=1}^\infty$ converging to $f$ $\mathbb{P}$-a.s. (see, e.g., page 70 in \cite{Doob 1}). By a truncation argument, $f_n$ can be chosen to be bounded. Therefore, we have \begin{equation} \sup_{(\Delta, h)}\mathbb{E}_\mathbb{P}U\left(x+(\Delta\cdot S)_T+h\cdot g\right)=\sup_{(\Delta, h),\ \Delta\in C_b}\mathbb{E}_\mathbb{P}U\left(x+(\Delta\cdot S)_T+h\cdot g\right), \notag\end{equation} where $\Delta\in C_b$ means that each component $\Delta_j$ is a continuous bounded function on $\mathbb{R}_+^j,\ j=1,\dotso,T-1$.
Hence, \begin{eqnarray} \hat u(x)&=&\sup_{(\Delta, h)}\inf_{\mathbb{P}\in\mathcal{P}}\mathbb{E}_\mathbb{P}U\left(x+(\Delta\cdot S)_T+h\cdot g\right)\notag\\ &\geq&\sup_{(\Delta, h),\ \Delta\in C_b}\inf_{\mathbb{P}\in\mathcal{P}}\mathbb{E}_\mathbb{P}U\left(x+(\Delta\cdot S)_T+h\cdot g\right)\notag\\ \label{minimax}&=&\inf_{\mathbb{P}\in\mathcal{P}}\sup_{(\Delta, h),\ \Delta\in C_b}\mathbb{E}_\mathbb{P}U\left(x+(\Delta\cdot S)_T+h\cdot g\right)\\ &=&\inf_{\mathbb{P}\in\mathcal{P}}\sup_{(\Delta, h)}\mathbb{E}_\mathbb{P}U\left(x+(\Delta\cdot S)_T+h\cdot g\right)\notag\\ &\geq&\hat u(x),\notag \end{eqnarray} where \eqref{minimax} follows from the minimax theorem. This proves \eqref{exchangeability}. The rest of this theorem can be proved following the arguments in the proofs of Lemmas 7 and 8 in \cite{Denis1}. \end{proof} \begin{theorem}\label{theorem 3.1} Under \textbf{Assumption P}(2), for any given $\mathbb{P}\in\mathcal{P}$, the sets $\mathcal{C}_\mathbb{P}$ and $\mathcal{D}_\mathbb{P}$ defined in \eqref{cd} satisfy the properties in Proposition 3.1 in \cite{Schachermayer2}. \end{theorem} \begin{proof} It is obvious that (i) $\mathcal{C}_\mathbb{P}$ and $\mathcal{D}_\mathbb{P}$ are convex and solid, (ii) $\mathcal{C}_\mathbb{P}$ contains the constant function $1$, and (iii) for any $c\in\mathcal{C}_\mathbb{P}, d\in\mathcal{D}_\mathbb{P}$, we have $\mathbb{E}_\mathbb{P}[cd]\leq 1$. We complete the proof by establishing the following four lemmas, where we use the notation $d\mathbb{Q}/d\mathbb{P}$ to denote both the Radon-Nikodym process and the Radon-Nikodym derivative on the whole space $\mathbb{R}_+^T$, whenever $\mathbb{Q}\sim\mathbb{P}$. \begin{lemma} $\mathcal{C}_\mathbb{P}$ is bounded in $L^0(\mathbb{P})$. \end{lemma} \begin{proof} By \textbf{Assumption P}(2), there exists $\mathbb{Q}\in\mathcal{M}$ that is equivalent to $\mathbb{P}$.
Then \begin{equation} \sup_{c\in\mathcal{C}_\mathbb{P}}\mathbb{E}_\mathbb{P}\left[\frac{d\mathbb{Q}}{d\mathbb{P}}c\right]=\sup_{c\in\mathcal{C}_\mathbb{P}}\mathbb{E}_\mathbb{Q}[c]\leq 1. \notag\end{equation} Hence, \begin{eqnarray} \sup_{c\in\mathcal{C}_\mathbb{P}}\mathbb{P}(c>K)&=&\sup_{c\in\mathcal{C}_\mathbb{P}}\mathbb{P}\left(\frac{d\mathbb{Q}}{d\mathbb{P}}c>\frac{d\mathbb{Q}}{d\mathbb{P}}K\right)\notag\\ &\leq&\sup_{c\in\mathcal{C}_\mathbb{P}}\left[\mathbb{P}\left(\frac{d\mathbb{Q}}{d\mathbb{P}}\leq\frac{1}{\sqrt{K}}\right)+ \mathbb{P}\left(\frac{d\mathbb{Q}}{d\mathbb{P}}c>\sqrt{K}\right)\right]\notag\\ &\leq&\mathbb{P}\left(\frac{d\mathbb{Q}}{d\mathbb{P}}\leq\frac{1}{\sqrt{K}}\right)+\frac{1}{\sqrt{K}}\sup_{c\in\mathcal{C}_\mathbb{P}}\mathbb{E}_\mathbb{P}\left[\frac{d\mathbb{Q}}{d\mathbb{P}}c\right]\notag\\ &\leq&\mathbb{P}\left(\frac{d\mathbb{Q}}{d\mathbb{P}}\leq\frac{1}{\sqrt{K}}\right)+\frac{1}{\sqrt{K}}\rightarrow 0,\ \ \ K\rightarrow\infty.\notag \end{eqnarray} \end{proof} \begin{lemma}\label{lemma2} For $c\in L_+^0(\mathbb{P})$, if $\mathbb{E}_\mathbb{P}[cd]\leq 1,\ \forall d\in\mathcal{D}_\mathbb{P}$, then $c\in\mathcal{C}_\mathbb{P}$. \end{lemma} \begin{proof} It can be shown that for any $\mathbb{Q}\in\mathcal{M}(\mathbb{P}):=\{\mathbb{Q}\in\mathcal{M}:\ \mathbb{Q}\sim\mathbb{P}\}$, the process $\frac{d\mathbb{Q}}{d\mathbb{P}}$ is in $\mathfrak{Y}_\mathbb{P}(1)$. Then for any $c\in L_+^0(\mathbb{P})$ satisfying the hypothesis of the lemma, \begin{equation} \sup_{\mathbb{Q}\in\mathcal{M}(\mathbb{P})}\mathbb{E}_\mathbb{Q}[c]=\sup_{\mathbb{Q}\in\mathcal{M}(\mathbb{P})}\mathbb{E}_\mathbb{P}\left[\frac{d\mathbb{Q}}{d\mathbb{P}}c\right]\leq 1. \notag\end{equation} Applying the superhedging theorem on page 6 of \cite{Nutz2}, there exists a trading strategy $(\Delta, h)$, such that $1+(\Delta\cdot S)_T+h\cdot g\geq c,\ \mathbb{P}$-a.s., and thus $c\in\mathcal{C}_\mathbb{P}$.
\end{proof} \begin{lemma} For $d\in L_+^0(\mathbb{P})$, if $\mathbb{E}_\mathbb{P}[cd]\leq 1,\ \forall c\in\mathcal{C}_\mathbb{P}$, then $d\in\mathcal{D}_\mathbb{P}$. \end{lemma} \begin{proof} Let $d\in L_+^0(\mathbb{P})$ satisfy $\mathbb{E}_\mathbb{P}[cd]\leq 1,\ \forall c\in\mathcal{C}_\mathbb{P}$. Then applying Proposition 3.1 in \cite{Schachermayer2} (here the space $\mathcal{C}_\mathbb{P}$ is larger than $\mathcal{C}$ defined in (3.1) in \cite{Schachermayer2}), we have that there exists $\tilde Y\in\mathcal{Y}_\mathbb{P}(1)$, such that $0\leq d\leq \tilde Y_T$. Define $$ Y_k = \left\{ \begin{array}{rl} \tilde Y_k, &\mbox{ $k=0,\dotso,T-1,$} \\ d,\ &\mbox{ $k=T.$} \end{array} \right. $$ Then it is easy to show that $Y\in\mathfrak{Y}_\mathbb{P}(1)$, and therefore $d\in\mathcal{D}_\mathbb{P}$ since $d=Y_T$. \end{proof} \begin{lemma} $\mathcal{C}_\mathbb{P}$ and $\mathcal{D}_\mathbb{P}$ are closed in the topology of convergence in measure. \end{lemma} \begin{proof} Let $\{c_n\}_{n=1}^\infty\subset\mathcal{C}_\mathbb{P}$ converge to some $c$ in probability with respect to $\mathbb{P}$. By passing to a subsequence, we may without loss of generality assume that $c_n\rightarrow c\geq 0,\ \mathbb{P}$-a.s. Then for any $d\in\mathcal{D}_\mathbb{P}$, \begin{equation} \mathbb{E}_\mathbb{P}[cd]\leq\liminf_{n\rightarrow\infty}\mathbb{E}_\mathbb{P}[c_nd]\leq 1, \notag\end{equation} by Fatou's lemma. Then from Lemma~\ref{lemma2} we know that $c\in\mathcal{C}_\mathbb{P}$, which shows that $\mathcal{C}_\mathbb{P}$ is closed in the topology of convergence in measure. Similarly, we can show that $\mathcal{D}_\mathbb{P}$ is closed. \end{proof} \noindent This completes the proof of Theorem~\ref{theorem 3.1}. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem2}] We use Theorem~\ref{theorem 3.1} to show the second equalities in (i) and (iii), as well as (iv), by applying Theorems 3.1 and 3.2 in \cite{Schachermayer2}.
The rest of the proof is purely convex analytic and can be carried out exactly as in the proofs of Lemmas 9--12 in \cite{Denis1}. \end{proof} \section{An example of $\mathcal{P}$} In this section we give an example of $\mathcal{P}$ satisfying \textbf{Assumption P}. We assume that there exists $M>0$, such that \begin{equation} \mathcal{M}_M:=\{\mathbb{Q}\in\mathcal{M}:\ \mathbb{Q}(||S||_\infty>M)=0\}\neq\emptyset. \notag\end{equation} \begin{remark} The assumption above is not restrictive. For example, if we are given a finite set of prices of European call options over a finite range of strike prices at each time period, and the prices are consistent with an arbitrage-free model, then the model can be realized on a finite probability space; see \cite{Davis1} for details. \end{remark} Fix $\alpha\in(0,1)$ and $\beta\in(1,\infty)$. Let \begin{equation}\label{egP} \mathcal{P}:=\left\{\mathbb{P}:\ \mathbb{P}\sim\mathbb{Q}\text{ and }\alpha\leq\frac{d\mathbb{P}}{d\mathbb{Q}}\leq\beta,\ \text{for some }\mathbb{Q}\in\mathcal{M}_M\right\}. \end{equation} \begin{remark} From the financial point of view, the boundedness condition on the Radon-Nikodym derivative $d\mathbb{P}/d\mathbb{Q}$ means that the physical measures, which represent personal beliefs, should not be too far away from the martingale measures. \end{remark} \begin{theorem} $\mathcal{P}$ defined in \eqref{egP} satisfies \textbf{Assumption P}. \end{theorem} \begin{proof} It is obvious that $\mathcal{P}$ is convex, nonempty, and satisfies \textbf{Assumption P}(2). Let $(\mathbb{P}_n)_{n=1}^\infty\subset\mathcal{P}$. For each $n$, there exists $\mathbb{Q}_n\in\mathcal{M}_M$ that is equivalent to $\mathbb{P}_n$ satisfying $\alpha\leq d\mathbb{P}_n/d\mathbb{Q}_n\leq\beta$. Since $(\mathbb{P}_n)$ and $(\mathbb{Q}_n)$ are supported on $[0,M]^T$, they are tight, and thus relatively weakly compact by Prokhorov's theorem.
By passing to subsequences, we may without loss of generality assume that there exist probability measures $\mathbb{P}$ and $\mathbb{Q}$ supported on $[0,M]^T$, such that $\mathbb{P}_n\xrightarrow{\mbox{\tiny{w}}}\mathbb{P}$ and $\mathbb{Q}_n\xrightarrow{\mbox{\tiny{w}}}\mathbb{Q}$. Since the measures have compact support, it can be shown using the monotone class theorem that $\mathbb{Q}\in\mathcal{M}_M$. Let $f$ be any nonnegative, bounded and continuous function. Then \begin{equation} \alpha\mathbb{E}_{\mathbb{Q}_n}[f]\leq\mathbb{E}_{\mathbb{P}_n}[f]\leq\beta\mathbb{E}_{\mathbb{Q}_n}[f]. \notag\end{equation} Letting $n\rightarrow\infty$, we have \begin{equation} \alpha\mathbb{E}_\mathbb{Q}[f]\leq\mathbb{E}_\mathbb{P}[f]\leq\beta\mathbb{E}_\mathbb{Q}[f]. \notag\end{equation} Hence, $\mathbb{P}\in\mathcal{P}$, which completes the proof. \end{proof} \section{An extension} Now instead of assuming that the market has a finite number of options, we can assume that our model is calibrated to a continuum of call options with payoffs $(S_i-K)^+,\ K\in\mathbb{R}_+$ at each time $t=i$, and prices $$\mathcal{C}(i,K)=\mathbb{E}^\mathbb{Q}\left[(S_i-K)^+\right].$$ It is well-known that knowing the marginal distribution of $S_i$ is equivalent to knowing the prices $\mathcal{C}(i,K)$ for all $K\geq0$; see \cite{BL78}. Hence, we can assume that the marginals of the stock price $S=(S_i)_{i=1}^T$ are given by $S_i\sim\mu_i$, where $\mu_1,\dotso,\mu_T$ are probability measures on $\mathbb{R}_+$.
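To illustrate the Breeden--Litzenberger relation behind this equivalence, the sketch below recovers the density of a marginal from its call-price curve by a second central finite difference in the strike. The lognormal marginal and all numerical values here are illustrative assumptions, not part of the model above.

```python
import math

# Assumed illustrative marginal: S = exp(X) with X ~ N(m, s^2).
m, s = 0.0, 0.25

def Phi(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call(K):
    # Closed form for C(K) = E[(exp(X) - K)^+] when X ~ N(m, s^2).
    d1 = (m + s * s - math.log(K)) / s
    return math.exp(m + 0.5 * s * s) * Phi(d1) - K * Phi(d1 - s)

def density(K):
    # Lognormal density of S at K, for comparison.
    return math.exp(-(math.log(K) - m) ** 2 / (2.0 * s * s)) / (K * s * math.sqrt(2.0 * math.pi))

K, h = 1.0, 1e-3
# Breeden-Litzenberger: the second derivative of the call-price curve
# in the strike equals the marginal density at that strike.
density_fd = (call(K + h) - 2.0 * call(K) + call(K - h)) / h ** 2
assert abs(density_fd - density(K)) < 1e-3
```

In practice only finitely many strikes are quoted, which is the setting of the preceding sections; a continuum of strikes, as assumed here, is what pins down the marginals $\mu_i$ exactly.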
Let $$\mathcal{M}:=\{\mathbb{Q} \text{ a probability measure on } \mathbb{R}_+^T:\ S=(S_i)_{i=1}^T \text{ is a } \mathbb{Q}\text{-martingale};$$ $$\text{ for } i=1,\dotso,T,\ S_i \text{ has marginal } \mu_i \text{ and mean } s_0\}.$$ If we make assumptions on the utility function and $\mathcal{P}$ similar to those above, and if the superhedging theorem in \cite{Nutz2} can be generalized to the case of infinitely many options (we only need the version with a single probability measure here, which is a much weaker result than the full generalization of \cite{Nutz2}), then the corresponding results carry over. Indeed, this can be seen from the proof of Lemma~\ref{lemma2}. \bibliographystyle{siam}
\section{Introduction} \label{sec:introduction} Thermonuclear detonations are common to all current likely models of Type Ia supernovae (SNe Ia), but how they are actually generated in progenitor systems is still an open question. Different models predict different locations for the detonation and different mechanisms for initiating the event. Common to all of the cases is a severe lack of numerical resolution in the location where the detonation is expected to occur. The length and time scale at which a detonation forms is orders of magnitude smaller than the resolution that typical multi-dimensional hydrodynamic simulations can achieve. The mere presence of a detonation (or lack thereof) in a simulation is therefore only weak evidence regarding whether a detonation would truly occur. In this study we examine the challenges associated with simulating thermonuclear detonations. The inspiration for this work comes from the literature on head-on collisions of WDs, which can occur, for example, in certain triple star systems \citep{thompson:2011,hamers:2013}. WD collisions rapidly convert a significant amount of kinetic energy into thermal energy and thus set up conditions ripe for a thermonuclear detonation. Since they are easy to set up in a simulation, they are a useful vehicle for studying the properties of detonations. Early studies on WD collisions \citep{rosswog:2009,raskin:2010,loren-aguilar:2010, hawley:2012,garcia-senz:2013} typically had effective spatial resolutions in the burning region of 100--500 km for the grid codes, and 10--100 km for the SPH codes, and observed detonations that convert a large amount of carbon/oxygen material into iron-group elements. These studies varied in methodology (Lagrangian versus Eulerian evolution, nuclear network used) and did not closely agree on the final result of the event (see Table 4 of \cite{garcia-senz:2013} for a summary). There is mixed evidence for simulation convergence presented in these studies. 
\cite{raskin:2010} claim that their simulations are converged in nickel yield up to 2 million (constant mass) particles, but the nickel yield still appears to be trending slightly upward with particle count. The earlier simulations of \cite{raskin:2009} are not converged up to 800,000 particles, where the smoothing length was kept constant instead of the particle mass. \cite{hawley:2012} do not achieve convergence over a factor of 2 in spatial resolution. \cite{garcia-senz:2013} claim at least qualitative (though not strict absolute) convergence, but their convergence test is only over a factor of 2 in particle count, which is a factor of $2^{1/3} \approx 1.26$ in spatial resolution (for constant mass particles). \cite{kushnir:2013} test convergence over an order of magnitude in spatial resolution, and find results that appear to be reasonably well converged for one of the two codes used (VULCAN2D), and results that are not converged for the other code used (FLASH). \cite{papish:2015} claim convergence in nuclear burning up to 10\% at a resolution of 5--10 km, but do not present specific data demonstrating this claim or precisely define what is being measured. \cite{loren-aguilar:2010} and \cite{rosswog:2009} do not present convergence studies for their work. \cite{kushnir:2013} argued that many of these simulations featured numerically unstable evolution, ultimately caused by the zone size being significantly larger than the length scale over which detonations form. The detonation length scale can vary widely based on physical conditions \citep{seitenzahl:2009,garg:2017} but is generally not larger than 10 km. \citeauthor{kushnir:2013} argue that this numerically unstable evolution is the primary cause of convergence difficulties. They further argue that it is possible to apply a burning limiter to achieve converged results, which was used in their work and later the simulations of \cite{papish:2015}. We investigate this hypothesis in \autoref{sec:unstable_burning}.
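The count-to-resolution conversion quoted above follows from the scaling of the SPH smoothing length with particle mass: at fixed total mass and constant-mass particles,
\[
\Delta x \propto \left(\frac{m_{\mathrm{part}}}{\rho}\right)^{1/3} \propto N^{-1/3},
\qquad
\frac{\Delta x(N)}{\Delta x(2N)} = 2^{1/3} \approx 1.26,
\]
so doubling the particle count sharpens the effective spatial resolution by only about 26\%.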
In this paper, we attempt to find what simulation length scale is required to achieve converged thermonuclear ignitions. The inspiration for this work comes from our simulations of WD collisions using the reactive hydrodynamics code \texttt{CASTRO}\ \citep{castro, astronum:2017}. We have done both 2D axisymmetric and 3D simulations of collisions of $0.64\ \mathrm{M}_\odot$ carbon/oxygen WDs, and we were unable to achieve converged simulations at any resolution we could afford to run (the best was an effective zone size of 0.25 km, using adaptive mesh refinement, for the 2D case). We were therefore forced to turn to 1D simulations, where we can achieve much higher resolution (at the cost, of course, of not being able to do a test that can be directly compared to multi-dimensional simulations). We believe the simulations presented below help show why we and others had difficulty achieving convergence at the resolutions achievable in multi-dimensional WD collision simulations. \section{Test Problem} \label{sec:collisions} Our test problem is inspired by \cite{kushnir:2013}, and very loosely approximates the conditions of two $0.64\ \mathrm{M}_\odot$ WDs colliding head-on. The simulation domain is 1D with a reflecting boundary at $x = 0$. For $x > 0$ there is a uniform fluid composed (by mass) of $50\%\, ^{12}$C, $45\%\, ^{16}$O, and $5\%\, ^{4}$He. The fluid is relatively cold, $T = 10^7$ K, has density $\rho = 5 \times 10^6$ g/cm$^3$, and is traveling toward the origin with velocity $-2 \times 10^8$ cm/s. A uniform constant gravitational acceleration is applied, $g = -1.1 \times 10^8$ cm/s$^{2}$. This setup causes a sharp initial release of energy at $x = 0$, and the primary question is whether a detonation occurs promptly near this contact point, or occurs later (possibly at a distance from the contact point).
The simulated domain has width $1.6384 \times 10^9$ cm, and we apply inflow boundary conditions that keep feeding the domain with material that has the same conditions as the initial fluid. Simulations are performed with the adaptive mesh refinement (AMR) code \texttt{CASTRO}. For the burning we use the alpha-chain nuclear network \texttt{aprox13}. Release 18.12 of the \texttt{CASTRO}\ code was used. The \texttt{AMReX}\ and \texttt{Microphysics}\ repositories that \texttt{CASTRO}\ depends on were also on release 18.12. The problem is located in the \texttt{Exec/science/Detonation} directory, and we used the \texttt{inputs-collision} setup. The simulation is terminated when the peak temperature on the domain first reaches $4 \times 10^9$ K, which we call a thermonuclear ignition (for reference, the density at the location where the ignition occurs is approximately $1.4\times 10^7\ \text{g / cm}^3$). This stopping criterion is a proxy for the beginning of a detonation. Reaching this temperature does not guarantee that a detonation will begin, and in this study we do not directly address the question of whether an ignition of this kind always leads to a detonation. Nor are we commenting on the physics of the ignition process itself. Rather, the main question we investigate here is whether this ignition is numerically converged, and for this purpose this arbitrary stopping point is sufficient, since in a converged simulation the stopping point should be reached at the same time independent of resolution. A converged ignition is a prerequisite to having a converged detonation. We measure two diagnostic quantities: the time since the beginning of the simulation required to reach this ignition criterion, and the distance from the contact point of the peak temperature. The only parameter we vary in this study is the spatial resolution used for this problem. For low resolutions we vary only the base resolution of the grid, up to a resolution of 0.25 km.
For resolutions finer than this, we fix the base grid at a resolution of 0.25 km, and use AMR applied on gradients of the temperature. We tag zones for refinement if the temperature varies by more than 50\% between two zones. Timesteps are limited only by the hydrodynamic stability constraint, with CFL number 0.5. Although this leads to Strang splitting error in the coupling of the burning and hydrodynamics for low resolution, we have verified that the incorrect results seen at low resolution do not meaningfully depend on the timestep constraint (both by applying a timestep limiter based on nuclear burning, and by using the spectral deferred corrections driver in \texttt{CASTRO}, which directly couples the burning and hydrodynamics). At very high resolution, the splitting error tends to zero as the CFL criterion decreases the timestep. \begin{figure}[ht] \centering \includegraphics[scale=0.30]{{{amr_ignition_self-heat}}} \caption{Distance from the contact point of the ignition (solid blue), and time of the ignition (dashed green), as a function of finest spatial resolution. \label{fig:self-heat-distance}} \end{figure} \autoref{fig:self-heat-distance} shows our main results. The lowest resolution we consider, 256 km, is typical of the early simulations of white dwarf collisions, and demonstrates a prompt ignition near the contact point. As the (uniform) resolution increases, the ignition tends to occur earlier and nearer to the contact point. This trend is not physically meaningful: all simulations with resolution worse than about 1 km represent the same prompt central ignition, and as the resolution increases, there are grid points physically closer to the center that can ignite. However, when the resolution is better than 1 km, the situation changes dramatically: the prompt central ignition does not occur, but rather the ignition is delayed and occurs further from the contact point. 
When we have finally reached the point where the curves start to flatten and perhaps begin to converge, the ignition occurs around 900 km from the contact point, about 1 second after contact (contrast to less than 0.05 seconds for the simulation with 1 km resolution). Even at this resolution, it is not clear if the simulation is converged. We were unable to perform higher resolution simulations to check convergence due to the length of time that would be required. \begin{figure}[ht] \centering \includegraphics[scale=0.30]{{{amr_ignition_co}}} \caption{Similar to \autoref{fig:self-heat-distance}, but with pure C/O material. Note the different vertical axis scale. \label{fig:self-heat-distance-co}} \end{figure} We also tested a similar configuration made of pure carbon/oxygen material (equal fraction by mass). This is closer to the configuration used in the $0.64\ \mathrm{M}_\odot$ WD collision simulations that previous papers have focused on. However, for the setup described above, pure carbon/oxygen conditions do not detonate at all. This is not particularly surprising, since the 1D setup is a very imperfect representation of the real multi-dimensional case, and is missing multi-dimensional hydrodynamics that could substantially alter the dynamical evolution. So the small amount of helium we added above ensured that the setup ignited. (Of course, there will likely be a small amount of helium present in C/O white dwarfs as a remnant of the prior stellar evolution.) However, we can prompt the C/O setup to ignite by starting the initial temperature at $10^9$ K instead of $10^7$ K. This loosely mimics the effect from the first test where helium burning drives the temperature to the conditions necessary to begin substantial burning in C/O material. But since no helium is present in this case, it allows us to test whether it is easier to obtain convergence for pure C/O burning, even though the test itself is artificial. 
The only other change relative to the prior test is that we refined on relative temperature gradients of 25\% instead of 50\%. The results for this case are shown in \autoref{fig:self-heat-distance-co}. In this case, the ignition is central at all resolutions, but the simulation is still clearly unconverged at resolutions worse than 100 m, as the ignition becomes significantly delayed at high resolution. This story contains two important lessons. First, the required resolution for even a qualitatively converged simulation, less than 100 m, is out of reach for an analogous simulation done in 3D. Second, the behavior for resolutions worse than 1 km qualitatively appears to be converged, and one could perhaps be misled into thinking that there was no reason to try higher resolutions, which is reason for caution in interpreting reacting hydrodynamics simulations. With that being said, our 1D tests are not directly comparable to previous multi-dimensional WD collision simulations. The 1D tests should not be substituted for understanding the actual convergence properties of the 2D/3D simulations, which may have different resolution requirements for convergence. Our tests suggest only that it is plausible that simulations at kilometer-scale (or worse) resolution are unconverged. This observation is, though, consistent with the situation described in \autoref{sec:introduction}, where our 2D WD collision simulations (not shown here) are unconverged, and many of the previous collision simulations presented in the literature have relatively weak evidence for convergence. \section{Numerically Unstable Burning} \label{sec:unstable_burning} \citet{kushnir:2013} observe an important possible failure mode for reacting hydrodynamics simulations. 
Let us define $\tau_{\rm e} = e / \dot{e}$ as the nuclear energy injection timescale, and $\tau_{\rm s} = \Delta x / c_{\rm s}$ as the sound-crossing time in a zone (where $\Delta x$ is the grid resolution and $c_{\rm s}$ is the speed of sound). When the sound-crossing time is too long, energy builds up in a zone faster than it can be advected away by pressure waves. This effect generalizes to Lagrangian simulations as well, where $\tau_{\rm s}$ should be understood as the timescale for transport of energy to a neighboring fluid element. This is of course a problem inherent only to numerically discretized systems as the underlying fluid equations are continuous. This can lead to a numerically seeded detonation caused by the temperature building up too quickly in the zone. The detonation may be spurious in this case. If $\tau_{\rm s} \ll \tau_{\rm e}$, we can be confident that a numerically seeded detonation has not occurred. In practice, we quantify this requirement as: \begin{equation} \tau_{\rm s} \leq f_{\rm s}\, \tau_{\rm e} \label{eq:burning_limiter} \end{equation} and require that $f_{\rm s}$ be sufficiently smaller than one. \citet{kushnir:2013} state that $f_{\rm s} = 0.1$ is a sufficient criterion for avoiding premature ignitions. \citeauthor{kushnir:2013} enforced this criterion on their simulations by artificially limiting the magnitude of the energy release after a burn, and claimed that this resulted in more accurate WD collision simulations. We find that for our test problem (and also the WD collisions we have simulated) we do observe $\tau_{\rm s} > \tau_{\rm e}$; typically the ratio is a factor of 2--5 at low resolution (see \autoref{fig:self-heat-ts_te}). This means that an ignition is very likely to occur for numerical reasons, regardless of whether it would occur for physical reasons. At low resolution, adding more resolution does not meaningfully improve the ratio of $\tau_{\rm s}$ to $\tau_{\rm e}$ at the point of ignition.
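To make these timescales concrete, the following sketch evaluates \autoref{eq:burning_limiter} for two of the zone sizes considered above. The sound speed, specific internal energy, and energy generation rate are assumed round numbers chosen for illustration, not values taken from our simulations:

```python
# Hypothetical zone conditions, chosen for illustration only.
c_s  = 5.0e8    # sound speed [cm/s], assumed
e    = 1.0e17   # specific internal energy [erg/g], assumed
edot = 1.0e19   # nuclear energy generation rate [erg/g/s], assumed
f_s  = 0.1      # stability threshold suggested by Kushnir et al. (2013)

tau_e = e / edot                    # energy injection timescale [s]

results = []
for dx in (2.56e7, 1.0e4):          # zone sizes: 256 km and 100 m, in cm
    tau_s = dx / c_s                # sound-crossing time of the zone [s]
    stable = tau_s <= f_s * tau_e   # the criterion above
    # A limiter in the spirit of Kushnir et al. would rescale the energy
    # release by this factor so the criterion holds by construction:
    limiter = min(1.0, f_s * tau_e / tau_s)
    results.append((dx, tau_s / tau_e, stable, limiter))
    print(f"dx = {dx:.2e} cm: tau_s/tau_e = {tau_s / tau_e:.2e}, "
          f"stable = {stable}, limiter factor = {limiter:.3f}")
```

With these assumed numbers the 256 km zone sits at $\tau_{\rm s}/\tau_{\rm e} \approx 5$, squarely in the unstable regime described above, while the 100 m zone satisfies the criterion with a wide margin.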
The ignition timescale is so short that almost all of the energy release occurs in a single timestep even though the timestep gets shorter due to the CFL limiter. It is only when the resolution gets sufficiently high that we can simultaneously resolve the energy release over multiple timesteps and the advection of energy across multiple zones. Even at the highest resolution we could achieve for the test including helium, about 50 cm, $\tau_{\rm s} / \tau_{\rm e}$ was 0.8 at ignition, which is not sufficiently small to be confident of numerical stability. Note that merely decreasing the timestep (at fixed resolution) does not help here either, as the instability criterion is, to first order, independent of the size of the timestep. \begin{figure}[ht] \centering \includegraphics[scale=0.30]{{{amr_ignition_self-heat_ts_te}}} \caption{Ratio of the sound-crossing timescale to the energy injection timescale for the simulations in \autoref{fig:self-heat-distance}. \label{fig:self-heat-ts_te}} \end{figure} We thus investigate whether limiting the energy release of the burn (we will term this ``suppressing'' the burn), as proposed by \citeauthor{kushnir:2013}, is a useful technique for avoiding the prompt detonation. Since the limiter ensures the inequality in \autoref{eq:burning_limiter} holds by construction, the specific question to ask is whether the limiter achieves the correct answer and is converged in cases where the simulation would otherwise be incorrect or unconverged. Before we examine the results, consider a flaw in the application of the limiter: a physical detonation may \textit{also} occur with the property that, in the detonating zone, $\tau_{\rm s} > \tau_{\rm e}$. For example, consider a region of WD material at uniformly high temperature, say $5 \times 10^9\ \text{K}$, with an arbitrarily large size, say a cube with side length 100 km. This region will very likely ignite, even if it is surrounded by much cooler material.
By the time the material on the edges can advect heat away, the material in the center will have long since started burning carbon, as the sound crossing time scale is sufficiently large compared to the energy injection time scale. This is true regardless of whether the size of this cube corresponds to the spatial resolution in a simulation. Suppression of the burn in this case is unphysical: if we have a zone matching these characteristics, the zone should ignite. When the resolution is low enough, there is a floor on the size of a hotspot, possibly making such a detonation more likely. This is an unavoidable consequence of the low resolution; yet, it may be the correct result of the simulation that was performed. That is, even if large hotspots are unphysical because in reality the temperature distribution would be smoother, if such a large hotspot \textit{were} to develop (which is the implicit assumption of a low resolution simulation), then it would likely ignite. If the results do not match what occurs at higher resolution, then the simulation is not converged and the results are not reliable. However, it may also be the case that a higher resolution simulation will yield similar results, for example because even at the higher resolution, the physical size of the hotspot stays the same. For this reason, an appeal to the numerical instability criterion alone is insufficient to understand whether a given ignition is real. \begin{figure}[ht] \centering \includegraphics[scale=0.30]{{{amr_ignition_suppressed}}} \caption{Similar to \autoref{fig:self-heat-distance}, but for simulations with the suppressed burning limiter applied (\autoref{eq:burning_limiter}). \label{fig:suppressed-distance}} \end{figure} \autoref{fig:suppressed-distance} shows the results we obtain for our implementation of a ``suppressed'' burning mode. In a suppressed burn, we limit the changes to the state so that \autoref{eq:burning_limiter} is always satisfied. 
This is done by rescaling the energy release and species changes from a burn by a common factor such that the equality in \autoref{eq:burning_limiter} is satisfied. (If the inequality is already satisfied, then the integration vector is not modified.) We find that the suppressed burn generally does not yield correct results for low resolutions. The 64 km resolution simulation happens to yield approximately the correct ignition distance, but the ignition does not occur at the right time, and in any case the incorrectness of the results at neighboring resolutions suggests that this is not a robust finding. The suppressed burning simulation reaches qualitative convergence at around the same 100 m resolution as the normal self-heating burn. Because of both the theoretical reasons discussed above and the empirical finding that the burning suppression does not make low resolution simulations any more accurate, we do not believe that the suppressed burning limiter should be applied in production simulations. \section{Conclusion} \label{sec:conclusion} Our example detonation problem demonstrates, at least for this class of hydrodynamical burning problem, a grid resolution requirement much more stringent than 1 km. This test does not, of course, represent all possible WD burning conditions. However, the fact that it is even possible for burning in white dwarf material to require a resolution better than 100 m should suggest that stronger demonstrations of convergence are required. This is especially true bearing in mind our observation that the numerical instability can result in simulations that appear qualitatively converged when the resolution is increased by one or two orders of magnitude but not by three.
This study does not directly address the problem of how, in the detailed microphysical sense, a detonation wave actually begins to propagate, as we cannot resolve this length scale even in our highest resolution simulations. Rather, we are making the point that for simulations in which a macroscopic detonation wave appears self-consistently, this is only a valid numerical result if the resolution is sufficiently high. This convergence requirement does not imply that the detonation itself is physically realistic; but, it does imply that we are not even correctly solving the fluid equations we intend to solve when the convergence requirement is not met. We believe that our test case can be useful in the future for testing algorithmic innovations that hope to improve the realism of burning at low resolutions. \acknowledgments This research was supported by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook. An award of computer time was provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. This research used resources of the Oak Ridge Leadership Computing Facility located in the Oak Ridge National Laboratory, which is supported by the Office of Science of the Department of Energy under Contract DE-AC05-00OR22725. Project AST106 supported use of the ORNL/Titan resource. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The authors would like to thank Stony Brook Research Computing and Cyberinfrastructure, and the Institute for Advanced Computational Science at Stony Brook University for access to the high-performance LIred and SeaWulf computing systems, the latter of which was made possible by a \$1.4M National Science Foundation grant (\#1531492). 
The authors thank Chris Malone and Don Willcox for useful discussions on the nature of explosive burning, and Doron Kushnir for providing clarification on the nature of the burning limiter used in \cite{kushnir:2013}. This research has made use of NASA's Astrophysics Data System Bibliographic Services. \facilities{OLCF, NERSC} \software{\texttt{CASTRO}\ \citep{castro, astronum:2017}, \texttt{AMReX}\ \citep{boxlib-tiling}, \texttt{yt}\ \citep{yt}, \texttt{matplotlib}\ \citep{matplotlib} } \bibliographystyle{aasjournal} \input{paper.bbl} \end{document}
\section{\uppercase{Introduction}} \label{sec:Introduction} Recent advances in collecting and analyzing large datasets have led to data being naturally represented as higher-order tensors. Tensor Decomposition transforms input tensors to a reduced latent space which can then be leveraged to learn salient features of the underlying data distribution. Tensor Decomposition has been successfully employed in many fields, including machine learning, signal processing, and network analysis~\cite{mondelli2019connection,cheng2020novel,wen2020tensor}. Canonical Polyadic Decomposition (CPD) is the most popular means of decomposing a tensor into a low-rank tensor model, and it has become the standard tool for unsupervised multiway data analysis. The Matricized Tensor Times Khatri-Rao Product (MTTKRP) kernel is the most computationally intensive kernel in CPD. Since real-world tensors are sparse, specialized hardware accelerators are an attractive means of improving the compute efficiency of sparse tensor computations. However, external memory access time is the bottleneck due to the irregular data access patterns of the sparse MTTKRP operation. Field Programmable Gate Arrays (FPGAs) are an attractive platform to accelerate CPD due to the vast inherent parallelism and energy efficiency they can offer. Since sparse MTTKRP is memory bound, improving the sustained memory bandwidth and latency between the compute units on the FPGA and the external DRAM can significantly reduce the MTTKRP compute time. FPGAs facilitate near-memory computing with custom adaptive hardware due to their reconfigurability and large on-chip BlockRAM memory \cite{xilinxalveo}.
It enables the development of memory controllers and compute units specialized for specific data formats; such customization is not supported on CPUs and GPUs. The key contributions of this paper are: \begin{itemize} \item We investigate possible sparse MTTKRP compute patterns and possible pitfalls while adapting sparse MTTKRP computation to FPGA. \item We scrutinize the importance of an FPGA-based memory controller design in reducing the total memory access time of MTTKRP. Since MTTKRP on FPGA is a memory-bound operation, this leads to significant acceleration of the total MTTKRP compute time. \item We explore possible hardware solutions for Memory Controller design with memory modules (e.g., DMA controller and cache) that can be used to reduce the overall memory access time. \end{itemize} The rest of the paper is organized as follows: Section \ref{sec:Background} focuses on the background of tensor decomposition and spMTTKRP. Section \ref{sec:Tensor_Formats} and Section \ref{mttkrp_access_patterns} investigate the compute patterns and memory access patterns of spMTTKRP. Section \ref{sec:Memory_Controller_Requirements} discusses the properties of a configurable Memory Controller design. Finally, we discuss the work in progress in Section \ref{discussion}. \section{\uppercase{Background}} \label{sec:Background} \subsection{Tensor Decomposition} \label{sec:TD} Canonical Polyadic Decomposition (CPD) decomposes a tensor into a sum of rank-one tensors~\cite{kolda2009tensor}. For example, it approximates a 3D tensor $\mathcal{X} \in \mathbb{R}^{I_0 \times I_1 \times I_2}$ as \[ \mathcal{X} \approx \sum_{r=1}^R \lambda_r \cdot \mathbf{a}_r \otimes \mathbf{b}_r \otimes \mathbf{c}_r =: [\![ {\lambda} ; \mathbf{A}, \mathbf{B}, \mathbf{C} ]\!], \] where $R \in \mathbb{Z}_+$ is the rank, $\mathbf{a}_r \in \mathbb{R}^{I_0}$, $\mathbf{b}_r \in \mathbb{R}^{I_1}$, and $\mathbf{c}_r \in \mathbb{R}^{I_2}$ for $r=1, \ldots, R$.
The components of the above summation can be expressed as factor matrices, i.e., $\mathbf{A} = [\mathbf{a}_1, \ldots, \mathbf{a}_R] \in \mathbb{R}^{I_0 \times R}$, and similarly for $\mathbf{B}$ and $\mathbf{C}$. We normalize these vectors to unit length and store the norms in $\lambda = [\lambda_1, \ldots, \lambda_R] \in \mathbb{R}^R$. Since the problem is non-convex and has no closed-form solution, existing methods for this optimization problem rely on iterative schemes. The Alternating Least Squares (ALS) algorithm is the most popular method for computing the CPD. Algorithm~\ref{cp-als} shows a common formulation of ALS for 3D tensors. In each iteration, each factor matrix is updated by fixing the other two; e.g., $\mathbf{A} \gets \mathcal{X}_{(0)}(\mathbf{B} \odot \mathbf{C})$. This Matricized Tensor Times Khatri-Rao Product (MTTKRP) is the most expensive kernel of ALS.
\begin{algorithm} \DontPrintSemicolon Input: A tensor $\mathcal{X} \in \mathbb{R}^{I_0 \times I_1 \times I_2}$, the rank $R \in \mathbb{Z}_+$ \; Output: CP decomposition $[\![ {\lambda} ; \mathbf{A}, \mathbf{B}, \mathbf{C} ]\!]$, $\lambda \in \mathbb{R}^{R}$, $\mathbf{A} \in \mathbb{R}^{I_0 \times R}$, $\mathbf{B} \in \mathbb{R}^{I_1 \times R}$, $\mathbf{C} \in \mathbb{R}^{I_2 \times R}$ \; \While{\emph{stopping criterion not met}}{ $\mathbf{A} \gets \mathcal{X}_{(0)}(\mathbf{B} \odot \mathbf{C})$ \; $\mathbf{B} \gets \mathcal{X}_{(1)}(\mathbf{A} \odot \mathbf{C})$ \; $\mathbf{C} \gets \mathcal{X}_{(2)}(\mathbf{A} \odot \mathbf{B})$ \; Normalize $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ and store the norms as $\lambda$ \; } \caption{{\sc CP-ALS for 3D tensors}} \label{cp-als} \end{algorithm}
\begin{figure}[h!] \centering \includegraphics[width=\linewidth]{figures/mttkrp.png} \caption{An illustration of MTTKRP for a 3D tensor $\mathcal{X}$ in mode $0$} \label{fig:mttkrp} \end{figure}
Figure~\ref{fig:mttkrp} illustrates the update process of MTTKRP for mode $0$.
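The mode-$0$ update just illustrated can be sketched in a few lines of NumPy. This is a minimal reference sketch, not the accelerator implementation described later; the function and array names are ours.

```python
import numpy as np

def spmttkrp_mode0(ind_i, ind_j, ind_k, vals, B, C, I0):
    """Mode-0 sparse MTTKRP for a COO tensor: for every non-zero x at
    (i, j, k), add x * (B[j, :] * C[k, :]) into row i of the output."""
    A_tilde = np.zeros((I0, B.shape[1]))
    for z in range(len(vals)):
        i, j, k = ind_i[z], ind_j[z], ind_k[z]
        # Hadamard product of the two input factor rows, scaled by the value
        A_tilde[i, :] += vals[z] * B[j, :] * C[k, :]
    return A_tilde
```

The per-non-zero loop body is exactly the row update described in the text: fetch one row of each input factor matrix, take their Hadamard product, scale by the non-zero value, and accumulate into the output row.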
With a sparse tensor stored in the coordinate format, sparse MTTKRP (spMTTKRP) for mode $0$ can be performed one non-zero element at a time. For each non-zero element $x$ in a sparse 3D tensor $\mathcal{X} \in \mathbb{R}^{I_0 \times I_1 \times I_2}$ at $(i, j, k)$, the $i$th row of $\mathbf{A}$ is updated as follows: the $j$th row of $\mathbf{B}$ and the $k$th row of $\mathbf{C}$ are fetched, and their Hadamard product is computed and scaled with the value of $x$. The main challenge for efficient computation is how to access the factor matrices and non-zero elements for the spMTTKRP operation. We will use a hypergraph model to capture these dependencies in the spMTTKRP operation in Section~\ref{sec:Tensor_Formats}. Algorithm \ref{gpu_paper_ref_mttkpr} shows the sequential spMTTKRP approach for third-order tensors in Coordinate (COO) format, where indI, indJ, and indK correspond to the coordinate vectors of each non-zero tensor element. ``nnz'' refers to the number of non-zero values inside the tensor.
\begin{algorithm} \DontPrintSemicolon \KwIn{indI[nnz], indJ[nnz], indK[nnz], vals[nnz], $\mathbf{B}[J][R]$, $\mathbf{C}[K][R]$} \KwOut{$\mathbf{\tilde{A}}[I][R]$ } \For{$z = 0 \emph{ to nnz}-1$}{ $i =$ indI[\textit{z}] \; $j =$ indJ[\textit{z}] \; $k =$ indK[\textit{z}] \; \For{$r = 0 \emph{ to } R-1$}{ $\mathbf{\tilde{A}}[i][r]$ += vals[\textit{z}] $\cdot$ $\mathbf{B}[j][r]$ $\cdot$ $\mathbf{C}[k][r]$ \; } } \Return $\mathbf{\tilde{A}}$ \caption{{\sc Single iteration of COO based spMTTKRP for third order tensors}} \label{gpu_paper_ref_mttkpr} \end{algorithm}
\subsection{FPGA Technologies} \label{sec:FPGA_Technologies} FPGAs are especially suitable for accelerating memory-bound applications with irregular data accesses that require custom hardware. The logic cells on state-of-the-art FPGA devices consist of Look-Up Tables (LUTs), multiplexers, and flip-flops. FPGA devices also have access to a large on-chip memory (BRAM).
High-bandwidth interfaces to external memory can be implemented on FPGA. Current FPGAs are composed of multiple Super Logic Regions (SLRs), where each SLR is connected to one or several DRAMs using a memory interface IP. High Bandwidth Memory (HBM) is used in state-of-the-art FPGAs, as its high-bandwidth interconnections particularly benefit FPGAs \cite{8916363}. The combination of high-bandwidth access to large banks of memory from the logic layers makes 3DIC architectures attractive for new approaches to computing, unconstrained by the memory wall. Cache Coherent Interconnects support shared memory and cache coherency between the processor (CPU) and the accelerator. Both the FPGA and the processor have access to shared memory in the form of external DRAM, while the cache coherency protocol ensures that any modifications to a local copy of the data in either device are visible to the other device. Protocols such as CXL \cite{CXL_web} and CCIX \cite{CCIX_web} are being developed to realize coherent memory. \section{\uppercase{Sparse MTTKRP Compute patterns}} \label{sec:Tensor_Formats} The spMTTKRP operation for a given tensor can be represented using a hypergraph. For illustrative purposes, we consider a 3-mode sparse tensor $\mathcal{X} \in \mathbb{R}^{I_0 \times I_1 \times I_2}$, where $(i_0, i_1, i_2)$ denote the coordinates of a non-zero element $x$ in $\mathcal{X}$. Here $I_0$, $I_1$, and $I_2$ represent the size of each tensor mode. Note that the following approaches can be applied to tensors with any number of modes. For a given tensor $\mathcal{X}$, we can build a hypergraph $H = (V, E)$ with the vertex set $V$ and the hyperedge set $E$ as follows: vertices correspond to the tensor indices in all the modes and hyperedges represent its non-zero elements. For a 3D sparse tensor $\mathcal{X} \in \mathbb{R}^{I_0 \times I_1 \times I_2}$ with $M$ non-zero elements, its hypergraph $H = (V,E)$ consists of $|V| = I_0 + I_1 + I_2$ vertices and $|E| = M$ hyperedges.
A hyperedge $\mathcal{X}(i, j, k)$ connects the three vertices $i$, $j$, and $k$, which correspond to the indices of rows of the factor matrices. Figure~\ref{hypergraph} shows an example of the hypergraph of a sparse tensor. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{figures/hypergraph.png} \caption{A hypergraph example of a sparse tensor} \label{hypergraph} \end{figure} Our goal is to determine a mapping of $\mathcal{X}$ into memory for each mode so that the total time spent on (1) loading tensor data from external memory, (2) loading input factor matrix data from the external memory, (3) storing output factor matrix data to the external memory, and (4) element-wise computation for each non-zero element of the tensor is minimized. Considering the current works in the literature \cite{8735529} \cite{8821030} \cite{10.1109/SC.2018.00022} \cite{alto_paper}, for a given mode, there are two ways to perform sparse MTTKRP: \begin{itemize} \item Approach 1: Output-mode direction computation \item Approach 2: Input-mode direction computation \end{itemize} Algorithm~\ref{mttkrp_appr_1_1} and Algorithm~\ref{mttkrp_appr_2} show the mode $0$ MTTKRP of a tensor with three modes using Approach 1 and Approach 2, respectively.
\begin{algorithm} \DontPrintSemicolon Input: A sparse tensor $\mathcal{X} \in \mathbb{R}^{I_0 \times I_1 \times I_2}$, dense factor matrices $\mathbf{{B}} \in \mathbb{R}^{I_1 \times R}$, $\mathbf{{C}} \in \mathbb{R}^{I_2 \times R}$ \; Output: Updated dense factor matrix $\mathbf{A} \in \mathbb{R}^{I_0 \times R}$ \; \For{each $i_0$ output factor matrix row in $\mathbf{A}$}{ $\mathbf{A}(i_0, :) = 0 $ \; \For{each nonzero element in $\mathcal{X}$ at $(i_0,i_1,i_2)$ with $i_0$ coordinates}{ Load($\mathcal{X}(i_0, i_1, i_2)$) \; Load($\mathbf{B}(i_1,:)$) \; Load($\mathbf{C}(i_2,:)$) \; \For{$r=1, \ldots, R$}{ $\mathbf{A}(i_0, r) += \mathcal{X}(i_0, i_1, i_2) \times \mathbf{B}(i_1,r) \times \mathbf{C}(i_2,r)$ \; } } Store($\mathbf{A}(i_0,:)$) \; } \Return $\mathbf{{A}}$ \caption{{\sc Approach 1 for mode 0 of a tensor with 3 modes}} \label{mttkrp_appr_1_1} \end{algorithm}
\begin{algorithm} \DontPrintSemicolon Input: A sparse tensor $\mathcal{X} \in \mathbb{R}^{I_0 \times I_1 \times I_2}$, dense factor matrices $\mathbf{{B}} \in \mathbb{R}^{I_1 \times R}$, $\mathbf{{C}} \in \mathbb{R}^{I_2 \times R}$ \; Output: Updated dense factor matrix $\mathbf{A} \in \mathbb{R}^{I_0 \times R}$ \; \For{each $i_1$ input factor matrix row in $\mathbf{B}$}{ Load($\mathbf{B}(i_1,:)$) \; \For{each nonzero element in $\mathcal{X}$ at $(i_0,i_1,i_2)$ with $i_1$ coordinates}{ Load($\mathcal{X}(i_0, i_1, i_2)$) \; Load($\mathbf{C}(i_2,:)$) \; \For{$r=1, \ldots, R$}{ $\mathbf{p_A}(i_0, r) = \mathcal{X}(i_0, i_1, i_2) \times \mathbf{B}(i_1,r) \times \mathbf{C}(i_2,r)$ \; } Store($\mathbf{p_A}(i_0, :)$) \; } \For{each $i_0$ output factor matrix row in $\mathbf{A}$}{ $\mathbf{A}(i_0,:) = 0$ \; \For{each partial element $\mathbf{p_A}$ with $i_0$ coordinates}{ \For{$r=1, \ldots, R$}{ Load($\mathbf{p_A}(i_0, :)$) \; $\mathbf{A}(i_0, r) += \mathbf{p_A}(i_0, r)$ } } Store($\mathbf{A}(i_0, :)$) \; } } \Return $\mathbf{{A}}$ \caption{{\sc Approach 2 for mode 0 of a tensor with 3 modes}}
\label{mttkrp_appr_2} \end{algorithm} We use the hypergraph model of the tensor to describe the different approaches. The main difference between these two approaches is the hypergraph traversal order. Hence, we denote the two approaches based on the order of hyperedge traversal. In Approach 1, all hyperedges that share the same vertex of the output mode are accessed consecutively. For each hyperedge, all the input vertices are traversed to access their corresponding rows of the input factor matrices. In Approach 2, all hyperedges that share the same vertex of one of the input modes are accessed sequentially. For each vertex, all its incident hyperedges are iterated consecutively. For each hyperedge, the rest of the input vertices of the hyperedge are traversed to access the rows of the remaining input factor matrices. This is followed by the element-wise multiplication and addition. In Approach 1, since the hyperedge order depends on the output mode, the output factor matrix can be calculated without generating intermediate partial sums (Algorithm~\ref{mttkrp_appr_1_1}: line 10). However, in Approach 2, since the hyperedges are ordered according to the input mode coordinates, we need to store the partial sums (Algorithm~\ref{mttkrp_appr_2}: line 9) in the FPGA external memory. This requires accumulating the partial sums to generate the output factor matrix (Algorithm~\ref{mttkrp_appr_2}: lines 11-17). The total computation of both approaches is the same: for a general sparse tensor with $|T|$ non-zero elements, $N$ modes, and factor matrices with rank $R$, since every hyperedge is traversed once, and there are $N-1$ multiplications and one addition for computing MTTKRP, the total computation per mode is $N \times |T| \times R$.
However, their total external memory accesses are different: both approaches require $|T|$ load operations for all the hyperedges, and the total number of factor matrix elements transferred per mode is $(N-1) \times |T| \times R$, which corresponds to accessing the input factor matrices of the vertices in the hypergraph model. However, in Approach 2, the partial values need to be stored in memory (Algorithm~\ref{mttkrp_appr_2}: line 10), which requires an additional $|T| \times R$ of external memory storage. Let $I_{out}$ and $I_{in}$ represent the lengths of the output mode and the input mode, respectively. Then the total amount of data transferred is $|T| + (N-1) \times |T| \times R + I_{out}\times R$ for Approach 1 and $|T| + N \times |T| \times R + I_{in}\times R$ for Approach 2. Therefore, Approach 1 benefits from avoiding loading and storing partial sums. Table \ref{appr_properties} summarizes the properties of the two approaches.
\begin{table}[ht] \caption{Comparison of the Approaches} \begin{center} \resizebox{\columnwidth}{!}{ \begingroup \setlength{\tabcolsep}{6pt} \renewcommand{\arraystretch}{1.1} \begin{tabular}{ |c|c|c|c| } \hline \textbf{Approach} & \textbf{Total Computations} & \textbf{Total external memory accesses} & \textbf{Size of total partial sums} \\ \hline\hline 1 & $N \times|T|\times R$ & $|T| + (N-1) \times |T| \times R + I_{out}\times R$ & $0$ \\ \hline 2 & $N \times|T|\times R$ & $|T| + N \times |T| \times R + I_{in}\times R$ & $|T| \times R$ \\ \hline \end{tabular} \endgroup } \label{appr_properties} \end{center} \end{table}
In the following, we discuss these in detail and identify the memory access characteristics. \subsection{spMTTKRP on FPGA} \label{FPGA_mttkrp_ii} In this paper, we consider large-scale decomposition of very large tensors. Hence, the FPGA stores the tensor and the factor matrices in its external DRAM. Therefore, we need to optimize the FPGA memory controller for the DRAM technology.
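The transfer-count formulas compared above can be illustrated with a short script. The tensor and mode sizes below are illustrative placeholders, not measurements from any benchmark.

```python
def traffic_approach1(T, N, R, I_out):
    # |T| tensor-element loads + (N-1)*|T|*R input-factor elements
    # + I_out*R output-factor stores
    return T + (N - 1) * T * R + I_out * R

def traffic_approach2(T, N, R, I_in):
    # the extra |T|*R term comes from writing the partial sums
    # back to external memory
    return T + N * T * R + I_in * R

# Illustrative sizes: 100M non-zeros, 3 modes, rank 16, 10M-long modes
T, N, R, I = 100_000_000, 3, 16, 10_000_000
print(traffic_approach2(T, N, R, I) / traffic_approach1(T, N, R, I))
```

The ratio printed at the end shows how much more external memory traffic Approach 2 generates than Approach 1 for the same problem, before even counting the extra $|T| \times R$ of storage for partial sums.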
In this section, we first explain the DRAM timing model and the memory access patterns of sparse MTTKRP. Figure \ref{overall_arch} shows the conceptual overall design. Approach 2 is not practical on FPGA due to the large external memory requirement to store the partial sums during the computation. Hence, in this paper we focus on Approach 1. For Approach 1, the tensor is sorted according to the coordinates of the output mode. Typically, spMTTKRP is calculated for all the modes. To adapt Approach 1 to compute the factor matrices corresponding to all the modes, we can either (1) use multiple copies of the tensor, each sorted according to the coordinates of a different tensor mode, or (2) re-order the tensor in the output direction before computing spMTTKRP for each mode. Using multiple copies of a tensor is not a practical solution due to the limited external memory of the FPGA. Hence, our memory solution focuses on remapping the tensor in the output direction before computing spMTTKRP for a mode. This enables performing spMTTKRP using Approach 1 for each tensor mode. Algorithm \ref{mttkrp_appr_1} summarizes Approach 1 with remapping. The algorithm focuses on computing the spMTTKRP of mode 1. Initially, we assume the sparse tensor is ordered according to the coordinates of mode 0, after computing the factor matrix of mode 0. Before starting the spMTTKRP for mode 1, we remap the tensor according to the mode 1 coordinates (lines 3 - 6). After remapping, all the non-zero values with the same output mode coordinates are brought to the compute unit consecutively (line 9). For each non-zero value, the corresponding rows of the input factor matrices are brought into the compute units, followed by the element-wise multiplication and addition. Since the tensor elements with the same output mode coordinates are brought together, the processing unit can calculate the output factor matrix without storing the partial values in FPGA external memory.
Once a row of the output factor matrix is computed, it is stored in the external memory. Here, Load/Store corresponds to loading/storing an element from/to the external memory. Also, ``$:$'' refers to performing an operation on an entire factor matrix row. The proposed approach introduces several implementation overheads: \\ \textbf{Additional external memory accesses:} The remapping requires an additional external memory load and store (Algorithm \ref{mttkrp_appr_1}: lines 4 and 6). The total number of external memory accesses is increased by $2 \times |T|$ for a tensor of size $|T|$. The communication overhead per mode is: $$\dfrac{2 \times |T|}{|T| + (N-1) \times |T| \times R + I_{out}\times R}$$ $$\approx \dfrac{2}{1 + (N-1) \times R}$$ For a typical scenario ($N$ = 3-5 and $R$ = 16-64), the total external memory communication only increases by about $6\%$ or less. \\ \textbf{Additional external memory space:} During the remapping process, an additional space equal to the size of the tensor ($|T|$) is required to store the remapped tensor elements in memory. \textbf{Excessive memory address pointers to store the remapped tensor:} \label{track_me} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/Architecture_Update.png} \caption{Conceptual overall design} \label{overall_arch} \end{figure} The remapping brings the tensor elements with the same output mode coordinate together (Algorithm \ref{mttkrp_appr_1}: line 5). To achieve this, the memory controller needs to track the memory location (i.e., memory address) at which the next tensor element with each output coordinate is to be stored. This requires memory address pointers, which track the memory address at which to store a tensor element depending on its output mode coordinate. Algorithm \ref{mttkrp_appr_1} requires a number of such memory pointers proportional to the size of the output mode of a given tensor. The number of address pointers may not fit in the FPGA internal memory for a large tensor.
For example, a tensor whose output mode has 10 million coordinate values requires 40 MB to store the memory address pointers (assuming 32-bit memory addresses), which does not fit in the FPGA on-chip memory. Hence, the address pointers should be stored in the external memory, which introduces an additional external memory access for each tensor element. Also, the number of tensor elements with the same output mode coordinate value differs for each output coordinate due to the sparsity of the tensor. This complicates the memory layout of the tensor. An ideal memory layout should guarantee that: (1) the number of memory address pointers required for remapping fits in the internal memory of the FPGA, and (2) each tensor partition contains the same number of tensor elements.
\begin{algorithm} \DontPrintSemicolon Input: A sparse tensor $\mathcal{X} \in \mathbb{R}^{I_0 \times I_1 \times I_2}$ sorted in mode 0, dense factor matrices $\mathbf{{A}} \in \mathbb{R}^{I_0 \times R}$, $\mathbf{{C}} \in \mathbb{R}^{I_2 \times R}$ \; Output: Updated dense factor matrix $\mathbf{B} \in \mathbb{R}^{I_1 \times R}$ \; \For{each non-zero element in $\mathcal{X}$ at $(i_0,i_1,i_2)$ with $i_0$ coordinates}{ Load($\mathcal{X}(i_0, i_1, i_2)$) \; pos$_{i_1}$ = Find(Memory address of $i_1$) \; Store($\mathcal{X}(i_0, i_1, i_2)$ at memory address pos$_{i_1}$) \; } \For{each $i_1$ output factor matrix row in $\mathbf{B}$}{ $\mathbf{B}(i_1, :) = 0 $ \; \For{each non-zero element in $\mathcal{X}$ at $(i_0,i_1,i_2)$ with $i_1$ coordinates}{ Load($\mathcal{X}(i_0, i_1, i_2)$) \; Load($\mathbf{A}(i_0,:)$) \; Load($\mathbf{C}(i_2,:)$) \; \For{$r=1, \ldots, R$}{ $\mathbf{B}(i_1, r) += \mathcal{X}(i_0, i_1, i_2) \times \mathbf{A}(i_0,r) \times \mathbf{C}(i_2,r)$ \; } } Store($\mathbf{B}(i_1,:)$) \; } \Return $\mathbf{{B}}$ \caption{{\sc Approach 1 with remapping for mode 1 of a tensor with 3 modes}} \label{mttkrp_appr_1} \end{algorithm}
\section{\uppercase{Sparse MTTKRP memory access patterns}}
\label{mttkrp_access_patterns} The proposed sparse MTTKRP computation has five main actions: (1) load a non-zero tensor element, (2) load the corresponding factor matrix rows, (3) perform the spMTTKRP operation, (4) store remapped tensor elements, and (5) store the final output. The objective of the memory controller is to decrease the total DRAM memory access time. To identify opportunities to reduce the memory access time, we analyze the memory access patterns of the proposed tensor format and memory layout. The summary of memory access patterns is as follows: \begin{enumerate} \item The tensor elements can be loaded as streaming accesses while remapping and computing spMTTKRP. \item Each remapped tensor element can be stored element-wise. \item The accesses to different rows of each input factor matrix are random. \item Each row of the output factor matrix can be stored as a streaming memory access. \end{enumerate} Accessing the data in bulk (i.e., a large chunk of data stored sequentially) can reduce the total memory access time, due to the characteristics of DRAM. DMA (Direct Memory Access) is the standard method to perform bulk memory transfers. Further, the random accesses can be performed as element-wise memory accesses through a cache to exploit the temporal and spatial locality of the accesses, which can improve the total access time. Thus, the memory transfer types are as follows: \begin{enumerate} \item \textbf{Cache transfers}: Support random memory accesses. Load/store individual requests with minimum latency. Access patterns with high spatial and temporal locality are transferred using cache lines. \item \textbf{DMA stream transfers}: Support streaming accesses. Load/store all requested data from memory with minimum latency. \item \textbf{DMA element-wise transfers}: Transfer the requested data element-wise. This method is used for data with no spatial or temporal locality.
\end{enumerate} \section{\uppercase{Towards Configurable Memory Controller}} \label{sec:Memory_Controller_Requirements} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/MC.png} \caption{Proposed Memory Controller} \label{MC} \end{figure} To support these memory accesses, we propose a programmable memory controller as shown in Figure \ref{MC}. It consists of a Cache Engine, a Tensor Remapper, and a DMA Engine. We evaluate the impact of using caches and DMAs as intermediate buffering techniques to reduce the total execution time of sparse MTTKRP. The modules inside the memory controller (e.g., Cache Engine, Tensor Remapper, and DMA Engine) can be developed as configurable hardware. These modules are programmable at FPGA synthesis time. For example, the Cache Engine can be configured at synthesis time with a different number of cache lines and a different associativity. Also, the resource utilization of each module heavily depends on the configuration. On the other hand, the FPGA contains limited on-chip resources. Hence, the FPGA resources should be distributed among the different modules optimally, such that the overall memory access time is minimized (see Section \ref{DSE}). \subsection{Memory Controller Architecture} \subsubsection{Cache Engine} \label{sub_cache} The Cache Engine can be used to satisfy a single memory request with minimum latency. It is used to exploit the spatial and temporal locality of the requested data. We intend to use the Cache Engine to exploit the locality of the input factor matrices. When a tensor computation requests a row of a factor matrix, the memory controller first looks up the Cache Engine. If the requested factor matrix row is already in the Cache Engine due to prior requests, the row is forwarded to the tensor computation from the Cache Engine. Otherwise, the factor matrix row is loaded into the Cache Engine from the FPGA external memory.
A copy of the matrix row is then forwarded to the computation while it is stored in the cache. \subsubsection{DMA Engine} \label{sub_dma} The DMA Engine can process bulk transfers between the compute units inside the FPGA and the FPGA external memory. A DMA Engine can have several DMA buffers inside. The main advantages of having a DMA Engine are: (a) a DMA request can cover more than one element at once, unlike the Cache Engine, reducing the input traffic of the memory controller, (b) the DMA Engine can access data without polluting the cache inside the Cache Engine, and (c) DMA transfers can utilize the external memory bandwidth for bulk transfers. \subsubsection{Tensor Remapper} The Tensor Remapper includes a DMA buffer and the proposed remapping logic discussed in Section \ref{sec:Tensor_Formats}. It loads each partition of the tensor as a bulk transfer, similar to the DMA Engine. Afterwards, it stores each tensor element element-wise, at a location depending on its output mode coordinate value. \\\\ \textbf{Required memory consistency of the memory controller:} The memory controller suggested above should have a weak memory consistency model with the following properties: \begin{itemize} \item \textbf{Consistency within the DMA Engine, Cache Engine, and Tensor Remapper:} Each module processes its requests on a first-in-first-out basis. \item \textbf{Consistency between the Cache Engine, Tensor Remapper, and DMA Engine:} A first-in first-served basis is maintained. Since the same memory location is not accessed by the Cache Engine, Tensor Remapper, and DMA Engine at the same time, weak consistency is maintained. \end{itemize} \subsection{Programmable Parameters} The Cache Engine and DMA Engine use on-chip FPGA memory (i.e., BRAM and URAM). These resources need to be shared among the modules optimally to achieve significant improvements in memory access time. The resource requirements of the Cache Engine and DMA Engine depend on their configurable parameters, discussed below.
\subsubsection{Memory Controller Parameters} \label{cache_sub} \label{dma_sub} Cache Engine parameters include the cache line width, the number of cache lines, and the associativity of the cache. The design parameters of the DMA Engine are: the number of DMAs, the number of DMA buffers per DMA, and the size of the DMA buffers. The design parameters of the Tensor Remapper include: (1) the size of the DMA buffer, (2) the width of a tensor element, and (3) the maximum number of address pointers the Tensor Remapper can track.
\begin{table} \caption{Characteristics of sparse tensors in the FROSTT Repository} \begin{center} \resizebox{0.8\columnwidth}{!}{ \begingroup \setlength{\tabcolsep}{6pt} \renewcommand{\arraystretch}{1.2} \begin{tabular}{ |c|c| } \hline \textbf{Metric} & \textbf{Value} \\ \hline\hline Length of a tensor mode & $17$-$39$ M \\ \hline Width of a factor matrix $(R)$ & $8 - 32$ (Typical = 16) \\ \hline Number of non-zeros & $3$-$144$ M \\ \hline Number of modes & $3$, $4$, $5$ \\ \hline Tensor size & $\le 2.25$ GB \\ \hline Size of a factor matrix & $< 4.9$ GB \\ \hline \end{tabular} \endgroup } \label{summary} \end{center} \end{table}
\subsection{Exploring the Design Space} \label{DSE} Tensor datasets can have different characteristics depending on the domain from which the dataset is extracted. Table \ref{summary} shows the characteristics of the tensors in the Formidable Repository of Open Sparse Tensors and Tools (FROSTT) \cite{frosttdataset}, which is commonly used in the high-performance computing community to benchmark custom accelerator designs for sparse MTTKRP. Tensor datasets from separate application domains have different characteristics such as sparsity, size of the modes, and the number of modes. Hence, datasets extracted from different applications achieve the lowest memory access time with different configurations of the memory controller. Therefore, performance estimator software is required to estimate the optimal configurable parameters for the datasets of a domain.
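Such a parameter estimation step amounts to sweeping the controller configuration space under an on-chip memory budget. The sketch below is schematic: the candidate parameter values, the BRAM usage model, and the cost-model callback are placeholders of our own, not parameters of the proposed controller.

```python
from itertools import product

def best_config(estimate_time, bram_budget_bits):
    """Pick the cache/DMA configuration with the lowest estimated access
    time among those that fit in the on-chip memory budget."""
    cache_lines = [256, 512, 1024]       # candidate values are placeholders
    line_width_bits = [256, 512]
    associativity = [1, 2, 4]
    dma_buf_words = [1024, 2048]
    best = None
    for cfg in product(cache_lines, line_width_bits, associativity, dma_buf_words):
        lines, width, assoc, buf = cfg
        on_chip = lines * width + buf * 64   # crude BRAM usage model (bits)
        if on_chip > bram_budget_bits:
            continue                          # configuration does not fit
        t = estimate_time(*cfg)               # domain-specific cost model
        if best is None or t < best[0]:
            best = (t, cfg)
    return best
```

A real Performance Model Simulator would replace `estimate_time` with a timing model built from the dataset's access patterns and the DRAM characteristics, but the structure of the search, enumerate, filter by resources, minimize estimated time, is the same.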
We introduce the features of a Performance Model Simulator (PMS) software to estimate the total execution time of spMTTKRP for a given dataset. It can be used with multiple datasets from the same domain to estimate the average execution time ($t_{avg}$) for a selected domain. It should also estimate the total FPGA on-chip memory requirement for a given set of programmable parameters to make sure the memory controller fits in the FPGA device. We will explore the possible inputs required for a PMS concerning: (1) available FPGA resources (i.e., total BRAMs and URAMs of the selected FPGA and data width of the memory interface), (2) size of data structures (e.g., size of an input tensor element, size of an input factor matrix element, and rank of the input factor matrices), and (3) parameters of the memory controller (i.e., DMA buffer sizes, number of cache lines, associativity of the cache, and number of factor matrices shared by a cache). A module-by-module (e.g., Cache Engine and DMA Engine) exhaustive parameter search can be used to identify the optimal parameters for the memory controller. \section{\uppercase{Discussion}} \label{discussion} In this paper, we investigated the characteristics of a custom memory controller that can reduce the total memory access time of sparse MTTKRP on FPGAs. Sparse MTTKRP is a memory-bound operation. It has two types of memory access patterns that can be optimized to reduce the total memory access time. A memory controller design that can be configured at compile/synthesis time depending on the application and targeted hardware is required. We are developing a configurable memory controller and a memory layout for sparse tensors to reduce the total memory access time of the sparse MTTKRP operation. Since synthesizing an FPGA design can take a long time, optimizing the memory controller parameters for a given application can be a time-consuming process. 
Hence, we are developing a Performance Model Simulator (PMS) software to identify the optimal parameters for a given application on a selected FPGA. \section*{ACKNOWLEDGEMENTS} This work was supported by the U.S. National Science Foundation (NSF) under grants NSF SaTC \#2104264 and PPoSS \#2119816. \bibliographystyle{apalike} {\small
\section{Introduction} Since its invention in \cite{ChTk81,Tk81}, the method of integration by parts has become the commonly used platform for the reduction of loop integrals to basic (master) integrals. The distinguishing feature of this method is the rapidly increasing volume of symbolic algebraic computation needed for manipulating recurrence relations (RR) when the number of internal and external lines or/and loops increases, especially in the presence of such physical parameters as masses, external momenta and the space-time dimension. Over the last years a number of algorithmic ideas, approaches and software packages have been reported~\cite{Tar98,Lap00,Tar04,AnLa04} to reduce, given a set of RR, loop integrals to a minimal set of master integrals. There are two main aspects of an algorithmic approach to this problem: (i) universality, i.e. applicability to the most general form of RR, and (ii) computational efficiency. Most practical computations are based on the approach of Laporta~\cite{Lap00}, since it deals directly with RR and provides a rather efficient reduction procedure based on Gaussian elimination. However, this elimination is restricted to RR with specified values of indices. Once such a specification is made, only partial information is extracted from the RR. Moreover, in typical multiloop calculations it may lead to an enormous number of intermediate RR~\cite{AnLa04}, which can destroy the computational efficiency. Another approach, suggested by Tarasov~\cite{Tar98,Tar04}, appeals to potentially much more universal algorithmic techniques based on the computation of {\em Gr\"obner bases} (GB). In that approach, however, the RR are transformed into partial differential equations. Such a transformation can only be done for internal lines endowed with different masses, which are independent variables for the differential equations. Insertion of these extra variables may increase substantially the running time and storage space required for computing GB. 
Furthermore, if one considers the indices of RR as (discrete) symbolic variables for the differential system, then together with the extra independent variables (masses) this tends to increase the volume of computation so much that it becomes hardly feasible for problems of interest in modern multiloop calculations. Thus, in practice, one can treat differential systems for loop integrals only with fixed numerical powers of propagators. In this paper we consider RR generated by the integration by parts method as linear finite-difference equations for loop integrals whose coefficients can depend (polynomially) on the indices and all relevant physical parameters such as masses, scalar products of external momenta and the space-time dimension. We present basic concepts of the GB technique (Sect.2) as the most universal algorithmic tool for RR with symbolic indices. GB allow one to analyze algorithmically the complete set of algebraic restrictions imposed on loop integrals by RR. Specifically, we apply (Sect.3) the GB technique to find a minimal set of basic (master) integrals which are not fixed by the RR, and to reduce any other loop integral to these basic ones. We illustrate this technique by a simple example of a one-loop propagator type integral taken from~\cite{Tar98}. We also briefly discuss computational issues of the GB approach applied to RR (Sect.4). The concept of GB was invented by Buchberger in 1965 for systems of multivariate commutative polynomials and then extended to systems of linear partial differential equations, noncommutative polynomials and other algebraic structures (see, for example, the book~\cite{GBA}). Due to their algorithmic universality, GB have found numerous fruitful applications. Special program modules for computing them in the case of commutative polynomials are built into all modern computer algebra systems such as Maple, Mathematica and others. 
In addition, Maple has special library functions for computing GB for differential equations, which were used in~\cite{Tar98,Tar04}, and for noncommutative Ore algebras~\cite{Ch98}. \section{Gr\"obner Bases for Recurrence Relations} Since any tensor multi-loop Feynman integral can be reduced to scalar integrals by shifting the space-time dimension~\cite{Tar96}, we shall, without loss of generality, consider the following scalar integrals \begin{equation} {\cal{I}_\nu}=\int d^dk_1\cdots d^dk_L \frac{1}{\prod_{i=1}^n P_i^{\nu_i}} \label{integral} \end{equation} with $n$ internal lines. Here the propagators $P_i$ have symbolic exponents $\nu_i$ which we collect together into the {\em multi index} $\nu=\{\nu_1,\nu_2,\ldots,\nu_n\}$. RR derived from integration by parts form a homogeneous system of linear finite-difference equations in the integral $\cal{I}_\nu$ as a function of its indices. Let $\nu_i-\lambda_i$ be the minimal value of the $i$-th index entering the RR system. Here $\lambda_i\geq 0$ are explicitly known integers. The set of all multi indices with nonnegative components will be denoted by $N^n_{\geq 0}$, and, hence, $\lambda\in N^n_{\geq 0}$. Let $\mu=\{\nu_1-\lambda_1,\nu_2-\lambda_2,\ldots,\nu_n-\lambda_n\}$ and let the integral ${\cal{I}}_\mu\neq 0$ be considered as a function of the indices $\nu$. Then the RR are a set of equalities $f_j=0$ involving finite sums of the form \begin{equation} f_j=\sum_{\alpha} b_\alpha^j \, D^\alpha {\cal{I}}_\mu\,, \qquad j=1,\ldots,p \label{fde} \end{equation} Here $D^\alpha=D_1^{\alpha_1}\cdots D_n^{\alpha_n}$ with $\alpha=\{\alpha_1,\ldots,\alpha_n\}\in N^n_{\geq 0}$, and each $D_i$ is the right-shift operator for the $i$-th index, i.e., $D_i{\cal{I}}_\mu={\cal{I}}_{\mu_1,\ldots,\mu_i+1,\ldots,\mu_n}$. The coefficients $b_\alpha^j$ depend polynomially on the indices $\{\nu_1,\ldots,\nu_n\}$ and on physical parameters, e.g., masses, scalar products of external momenta and the space-time dimension $d$. 
Hereafter we shall call the integrals $D^\alpha {\cal{I}}_\mu$ and the sums $f_j$ in (\ref{fde}) (difference) {\em terms} and (linear difference) {\em polynomials}, respectively. The coefficients $b_\alpha^j$ of the polynomials $f_j$ will be considered as elements of the field $Q(\nu)$ of rational functions in the indices, whose coefficients in turn are parametric rational functions. Consider now the set of all linear polynomials \begin{equation} <F>=\{\ \sum_{\beta} b_\beta \, D^\beta (f)\mid f\in F,\ b_\beta\in Q(\nu)\ \} \label{ideal} \end{equation} generated by the polynomial set $F=\{f_1,\ldots,f_p\}$ defined in~(\ref{fde}). Any element $g\in \, <F>$ yields the finite-difference equation $g=0$ which is a consequence of the initial RR, and $<F>$ accumulates all the consequences. The set $F$ of difference polynomials is called {\em a basis} of $<F>$. Let $\succ$ be a total order on the terms $D^\alpha {\cal{I}}_\mu$ such that for any $\alpha,\beta,\gamma \in N^n_{\geq 0}$ the following holds \\[-0.2cm] (i) $D^\alpha {\cal{I}}_\mu \succ {\cal{I}}_\mu \Longleftrightarrow \alpha$ contains nonzero indices, \\[-0.2cm] (ii) $D^\gamma D^\alpha {\cal{I}}_\mu\succ D^\gamma D^\beta {\cal{I}}_\mu\Longleftrightarrow$ $D^\alpha {\cal{I}}_\mu\succ D^\beta {\cal{I}}_\mu$. \\[-0.2cm] Such term orders are nothing else than {\em admissible term orders} in commutative polynomial algebra~\cite{Buch98}, if one compares the multi indices of the terms. As typical examples of admissible orders we mention the ``lexicographical'' order $\succ_{\mathop{\mathrm{lex}}\nolimits}$ with $\alpha \succ_{\mathop{\mathrm{lex}}\nolimits} \beta$ when $\alpha_1 > \beta_1$ or when $\alpha_1 = \beta_1$ and $\alpha_2 > \beta_2$, etc., and the ``total degree'' order $\alpha \succ_{\mathop{\mathrm{tdeg}}\nolimits} \beta$ when $\sum \alpha_i > \sum \beta_i$ or $\sum \alpha_i = \sum \beta_i$ and $\alpha \succ_{\mathop{\mathrm{lex}}\nolimits} \beta$. 
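The two admissible orders can be made concrete on multi indices with a small illustrative helper (not tied to any particular package):

```python
# Admissible term orders on multi indices (tuples of nonnegative ints).

def lex_gt(alpha, beta):
    """alpha >_lex beta: compare components left to right."""
    return alpha > beta  # Python tuple comparison is exactly lexicographic

def tdeg_gt(alpha, beta):
    """alpha >_tdeg beta: compare total degrees, break ties by lex."""
    return (sum(alpha), alpha) > (sum(beta), beta)

print(lex_gt((1, 0, 5), (0, 9, 9)))   # True: first component decides
print(tdeg_gt((1, 1), (0, 2)))        # True: equal degree, lex tie-break
print(tdeg_gt((1, 0), (0, 2)))        # False: total degree 1 < 2
```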
Given a term order $\succ$, one can extract the {\em leading term} $\mathop{\mathrm{lt}}\nolimits(f)$ from every difference polynomial $f$. Its coefficient is called {\em the leading coefficient} and will be denoted by $\mathop{\mathrm{lc}}\nolimits(f)$. Now we can define a GB for~(\ref{ideal}) as a finite polynomial set $G$ satisfying \begin{enumerate} \item $<G>=<F>$. \\[-0.6cm] \item For every $f\in \,<F>$ there is $g\in G$ and $\gamma\in N^n_{\geq 0}$ such that $\mathop{\mathrm{lt}}\nolimits(f)=D^\gamma (\mathop{\mathrm{lt}}\nolimits(g))$. \end{enumerate} Note that conditions (i) and (ii) of admissibility for $\succ$ guarantee the existence of a GB and the termination of an algorithm for its construction, exactly as they do in commutative polynomial algebra~\cite{Buch98,CLO}. If the term order $\succ$ is considered as a certain ``simplicity relation'' between integrals, then condition (i) means that the integral ${\cal{I_\mu}}$ is ``simpler'' than that obtained from ${\cal{I_\mu}}$ by shifting up some indices. Condition (ii) shows that such a simplicity relation between two integrals is stable under identical shifting of their corresponding indices. For two multi indices $\alpha$ and $\beta$, $\alpha$ is called {\em a divisor} of $\beta$ and $\beta$ is called {\em a multiple} of $\alpha$ if $\beta-\alpha\in N^n_{\geq 0}$. The {\em least common multiple} of $\alpha$ and $\beta$ will be denoted by $\mathop{\mathrm{lcm}}\nolimits(\alpha,\beta)$. Thus, property 2 means that the leading multi index of any consequence of the RR, as an element of $<F>$, has a divisor among the multi indices of the leading terms in the GB. Property 1 implies that the sets of RR defined by $F$ and $G$ are fully equivalent, since any consequence of one set is a consequence of the other and vice versa. A term whose multi index divides (is a multiple of) that of another term is said to be a divisor (a multiple) of this term. An important notion of the GB techniques is {\em reduction}. 
A polynomial $f$ is said to be {\em reducible modulo a polynomial} $g$ if $f$ has a term $u$ with a coefficient $b\in Q(\nu)$ such that $u$ is a multiple of $\mathop{\mathrm{lt}}\nolimits(g)$, i.e., $u=D^\gamma \mathop{\mathrm{lt}}\nolimits(g)$ for some $\gamma \in N^n_{\geq 0}$. In this case {\em an elementary reduction step} is given by \begin{equation} f\stackrel{g}{\longrightarrow} f^\star=\frac{1}{b}\,f-D^\gamma \left(\frac{1}{\mathop{\mathrm{lc}}\nolimits(g)}\,g\right)\,.\label{step} \end{equation} If $f^\star$ in (\ref{step}) is still reducible modulo $g$, then a second reduction step can be performed, etc., until an irreducible polynomial is obtained. By property (i) of the term order $\succ$, the number of elementary reductions is finite~\cite{CLO}. Similarly, one can {\em reduce} a polynomial $f$ {\em modulo a finite polynomial set $F$} by doing elementary reductions of $f$ modulo individual polynomials in $F$. The reduction process terminates with a polynomial $\bar{f}$ irreducible modulo $F$. In this case we say that $\bar{f}$ is in the {\em normal form modulo} $F$ and write $\bar{f}=NF(f,F)$. The above described procedure gives an algorithm for computing the normal form. From the above definition of a GB $G$ for the set $F$, it follows that $NF(f,G)=0$ for any polynomial $f$ in $<F>$. Therefore, if $G$ is known, one can algorithmically detect the dependency of an extra recurrence relation $r$ on the set $F$: it suffices to verify whether $NF(r,G)$ vanishes or not. It is remarkable that there is an algorithm for the construction of a GB for any finite set of linear difference polynomials and any admissible term order. This algorithm was discovered by Buchberger in 1965 for commutative algebraic polynomials. Below we present the Buchberger algorithm in its simplest form~\cite{Buch98,CLO}, adapted to linear finite-difference polynomials. First, we define the {\em $S$-polynomial} $S(f,g)$ for a pair of (nonzero) difference polynomials $f$ and $g$ as follows. 
Let $\mathop{\mathrm{lt}}\nolimits(f)$ and $\mathop{\mathrm{lt}}\nolimits(g)$ have multi indices $\alpha$ and $\beta$, respectively, and let $\gamma=\mathop{\mathrm{lcm}}\nolimits(\alpha,\beta)$. Then $$ S(f,g)=D^{\gamma - \alpha}\left(\frac{1}{\mathop{\mathrm{lc}}\nolimits(f)}\,f\right) - D^{\gamma - \beta} \left(\frac{1}{\mathop{\mathrm{lc}}\nolimits(g)}\,g\right)\,. $$ {\em Buchberger algorithm}: \vskip 0.1cm Start with $G:=F$. For a pair of polynomials $f_1,f_2\in G$: \hspace*{0.5cm} Compute $S(f_1,f_2)$. \hspace*{0.5cm} Compute $h:=NF(S(f_1,f_2),G)$. \hspace*{0.5cm} If $h=0$, consider the next pair. \hspace*{0.5cm} If $h\neq 0$, add $h$ to $G$ and iterate. \vskip 0.1cm \noindent Inter-reduction of the output GB, that is, reduction of every polynomial $g\in G$ modulo $G\setminus \{g\}$, gives a {\em reduced} GB. By normalizing (dividing by) the leading coefficients, we obtain the {\em monic} reduced GB, which is uniquely defined~\cite{Buch98,CLO} by the input polynomial set $F$ and the term order $\succ$. \section{Generic Master Integrals} Having computed a GB for the set (\ref{ideal}) generated by the initial set (\ref{fde}), one can algorithmically, just as in commutative algebra~\cite{CLO}, find the maximal set of difference monomials (integrals) which are independent modulo the set (\ref{ideal}). In other words, there is no nonzero polynomial in (\ref{ideal}) composed of those monomials, and any other monomial (integral) can be reduced to (expressed by means of) these monomials. Thus, the maximal independent set of monomials (integrals) is exactly the collection of basic integrals. They may be called {\em generic master integrals} to emphasize their relevance to RR with symbolic indices. The reduction of any integral to ``masters'' can also be done algorithmically (Sect.2) by means of GB. By keeping track of all intermediate elementary reductions~(\ref{step}), one can obtain an explicit expression of any loop integral in terms of (generic) master integrals and their multiples. 
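When the coefficients $b_\alpha$ are constant, the shift $D^\gamma$ acts as plain monomial multiplication, and the algorithm above coincides with Buchberger's algorithm for commutative polynomials. The sketch below implements only that special case (index-dependent coefficients would require shifting the coefficients as well, i.e., the Ore-algebra setting, which this toy code does not handle):

```python
from fractions import Fraction
from itertools import combinations

# A linear difference polynomial sum_a b_a D^a I is stored as a dict
# mapping each multi index a (a tuple) to its rational coefficient b_a.
# Lex order on multi indices is used throughout.

def lead(p):
    return max(p)  # leading multi index under lex order

def shift(p, g):
    """Apply D^g: add g componentwise to every multi index."""
    return {tuple(a + b for a, b in zip(m, g)): c for m, c in p.items()}

def scale(p, c):
    return {m: c * v for m, v in p.items()}

def add(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, Fraction(0)) + c
        if r[m] == 0:
            del r[m]
    return r

def normal_form(p, G):
    """Full reduction of p modulo G: cancel every term whose multi
    index is a multiple of some leading multi index in G."""
    p, result = dict(p), {}
    while p:
        m = lead(p)
        c = p[m]
        for g in G:
            lg = lead(g)
            if all(a >= b for a, b in zip(m, lg)):
                quot = tuple(a - b for a, b in zip(m, lg))
                p = add(p, scale(shift(g, quot), -c / g[lg]))
                break
        else:                # m is irreducible: move it to the result
            result[m] = c
            del p[m]
    return result

def s_poly(f, g):
    lf, lg = lead(f), lead(g)
    lcm = tuple(max(a, b) for a, b in zip(lf, lg))
    sf = shift(scale(f, 1 / f[lf]), tuple(a - b for a, b in zip(lcm, lf)))
    sg = shift(scale(g, 1 / g[lg]), tuple(a - b for a, b in zip(lcm, lg)))
    return add(sf, scale(sg, -1))

def buchberger(F):
    G = [dict(p) for p in F]
    pairs = list(combinations(range(len(G)), 2))
    while pairs:
        i, j = pairs.pop()
        h = normal_form(s_poly(G[i], G[j]), G)
        if h:                # nonzero remainder: enlarge the basis
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(h)
    return G

# Toy system in two shift operators D1, D2 (multi indices (a1, a2)):
# f1 = D1^2 - 1, f2 = D1*D2 - 1.
f1 = {(2, 0): Fraction(1), (0, 0): Fraction(-1)}
f2 = {(1, 1): Fraction(1), (0, 0): Fraction(-1)}
G = buchberger([f1, f2])
# D1^3 - D2^3 is a consequence of f1 and f2: its normal form vanishes.
print(normal_form({(3, 0): Fraction(1), (0, 3): Fraction(-1)}, G))  # {}
```

Here a vanishing normal form plays the role of the membership test $NF(r,G)=0$, and the multi indices irreducible modulo the leading terms of $G$ correspond to the master integrals.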
As defined in Sect.2, a multiple of an integral is obtained by applying $D^\gamma$ with $\gamma \in N^n_{\geq 0}$. It is clear, however, that $D^\gamma{\cal{I}}_{\mu}$ can be evaluated only if the dependence of ${\cal{I}}_{\mu}$ on the indices is explicitly known. But how does one obtain the set of master integrals? It can be easily detected from the leading monomials of the GB. Namely, all those monomials that are not multiples of any leading monomial in the GB are independent (masters). This fact is very well known in the theory of commutative Gr\"obner bases~\cite{CLO} and is equally applicable to linear finite-difference polynomials. Clearly, any multiple of a leading term in the GB is reducible to (expressible in terms of) a linear combination of monomials that are not multiples of any leading term in the GB, with coefficients from $Q(\nu)$. Though the particular form of the master integrals depends on the choice of the term order $\succ$, their number is independent of $\succ$ and fully determined by the initial RR. In fact, if one accepts the interpretation of $\succ$ as a simplicity relation (Sect.2), the set of master integrals detected from the GB contains the ``simplest'' integrals\footnote{For numerically fixed values of indices and particular integrals one sometimes uses other simplicity relations which do not satisfy (i) and (ii)~\cite{AnLa04}.}. Usually a ``total degree'' order is preferable (cf.~\cite{Lap00,AnLa04}) from the computational point of view, since total degree GB are typically computed much faster than lexicographical ones. For purely illustrative purposes, consider the RR for the one-loop propagator type integral ${\cal{I}}_{\nu_1\nu_2}$ taken from paper~\cite{Tar98} and put $m^2=q^2=0$ in those relations, thus keeping the space-time dimension $d$ as the only parameter \vskip 0.1cm \noindent $ \nu_1{\cal{I}}_{\nu_1-1\,\nu_2+1}-(d-\nu_1-2\nu_2){\cal{I}}_{\nu_1\nu_2}=0, \\[0.05cm] \nu_1{\cal{I}}_{\nu_1+1\,\nu_2-1}-\nu_2{\cal{I}}_{\nu_1-1\,\nu_2+1}+(\nu_2-\nu_1){\cal{I}}_{\nu_1\nu_2}=0. 
\\ $ \\[-0.3cm] Their form (\ref{fde}) for ${\cal{I}}_{\nu_1-1,\nu_2-1}\neq 0$ is given by \vskip 0.1cm \noindent $[\nu_1D_2^2-(d-\nu_1-2\nu_2)]\,{\cal{I}}_{\nu_1-1,\nu_2-1}=0, \\[0.05cm] [\nu_1D_1^2-\nu_2D_2^2+(\nu_2-\nu_1)]\,{\cal{I}}_{\nu_1-1\,\nu_2-1}=0, $ \vskip 0.1cm \noindent where $D_1$, $D_2$ are the right-shift operators for the indices $\nu_1$ and $\nu_2$, respectively. A total degree Gr\"obner basis with $D_1\succ D_2$, computed by Maple\footnote{The algebra of shift operators is a particular case of the Ore algebra~\cite{Ch98}, and one can use it for computing GB for RR, as already mentioned in~\cite{Tar04}. However, this way of computation is very expensive and can be applied only to small problems (Sect.4).}, gives the following GB form of the RR \vskip 0.1cm \noindent $[(d-\nu_1-2\nu_2)D_1D_2-\nu_1D_2^2]\,{\cal{I}}_{\nu_1-1\,\nu_2-1}=0, \\[0.05cm] [\nu_1(d-\nu_1-2\nu_2)D_1^2+\\[0.05cm] (\nu_1(2\nu_2-\nu_1)+\nu_2(2\nu_2-d))D^2_2]\,{\cal{I}}_{\nu_1-1\,\nu_2-1}=0,\\[0.05cm] D^3_2\,{\cal{I}}_{\nu_1-1\,\nu_2-1}=0. $ \vskip 0.1cm \noindent An immediate consequence of this GB is the equality ${\cal{I}}_{\nu_1-1\,\nu_2+2}=0$, which follows from the last element of the GB. This implies $ D^\gamma {\cal{I}}_{\nu_1-1\,\nu_2+2}=0$ for $\gamma\in N^n_{\geq 0}\,. $ Similarly, the order $D_2\succ D_1$ gives ${\cal{I}}_{\nu_1+2\,\nu_2-1}=0$ and $D^\gamma {\cal{I}}_{\nu_1+2\,\nu_2-1}=0$ for $\gamma\in N^n_{\geq 0}$, which can also be verified by calculating the normal form $NF(D^3_1\,{\cal{I}}_{\nu_1-1\,\nu_2-1},GB)=0$. The leading terms in the above GB are represented by the operators $ D_1D_2,\,D_1^2,\,D_2^3$, which generate four master integrals ${\cal{I}}_{\nu_1-1\,\nu_2-1},\,{\cal{I}}_{\nu_1\,\nu_2-1},\,{\cal{I}}_{\nu_1-1\,\nu_2},\, {\cal{I}}_{\nu_1-1\,\nu_2+1}\,.$ \section{Computational Aspects} First of all, it should be emphasized that the above analysis is based only on the RR themselves, without any use of the ``integral structure'' of ${\cal{I}}_{\mu}$. 
If there are extra restrictions which follow from this structure, they should be incorporated into the RR. For example, if a loop integral admits a symmetry under some permutation of indices, one has to use the RR in the properly symmetrized form. Thus, in the example of Sect.3, ${\cal{I}}_{\nu_1\,\nu_2}={\cal{I}}_{\nu_2\,\nu_1}$~\cite{Tar98}, and if one adds the RR with permuted indices, then the GB becomes $D_1^2{\cal{I}}_{\nu_1-1\,\nu_2-1},\,D_1D_2{\cal{I}}_{\nu_1-1\,\nu_2-1},\,D_2^2{\cal{I}}_{\nu_1-1\,\nu_2-1}$. It should also be noted that the GB technique is applicable to RR with specified indices. Provided with an appropriate ordering on the integrals (priority criteria~\cite{Lap00,AnLa04}), the GB algorithm is just Gaussian elimination~\cite{CLO}. Hence, it is conceptually identical to the Laporta algorithm. In the above illustrative and very simple example (Sect.3) we performed the GB calculation over a field. Generally, however, one has to be careful in manipulating the arbitrary indices occurring in the coefficients of difference polynomials. In calculating GB over a coefficient field, the leading coefficients of difference polynomials are treated generically, and division by them can be done at intermediate steps of the calculation. A GB computed this way may lose its GB properties under some specializations of the parameters. This happens when, in the course of reduction, a division by a leading coefficient was performed and this coefficient vanishes under the specialization. To avoid this trouble, one can compute GB over a coefficient ring rather than over a field (i.e., without division). But computation over the ring may lead to growth of intermediate coefficients and make the computation much more tedious. As is well known (see~\cite{CLO} and references therein), the running time and storage space required by GB algorithms tend to be exponential and superexponential in the number of variables (indices of the RR). 
Besides, the presence of parameters and symbolic indices in the coefficients increases substantially the volume of computation. Therefore, to be practically applicable to modern multiloop calculations, the GB algorithms need an efficient implementation specially tuned to finite-difference polynomials of type~(\ref{fde}). To our knowledge, there are no GB packages available for multivariate finite-difference polynomials, except those designed for the (noncommutative) skew-polynomial algebras or Ore algebras~\cite{Ch98}. Our experimentation with the Maple package implementing the Buchberger algorithm for skew polynomials revealed its very low efficiency for RR with parameters. Thus, Maple on a 2 GHz PC with 512 MB RAM was not able to construct (in reasonable time) a GB for the full (but still rather small) one-loop example from~\cite{Tar98} when both $m^2$ and $q^2$ are nonzero and present in the RR. In our opinion, this is because of the extra computational costs caused by the internal noncommutative settings of the underlying built-in operations. We find it more promising to adapt our involutive algorithms and software packages~\cite{GB98,GBY01} to linear multivariate finite-difference polynomials. The involutive algorithms also compute GB, but they are based on a course of intermediate computation different from that in the Buchberger algorithm. Our implementation of involutive algorithms has already shown their higher efficiency~\cite{GBY01} for conventional commutative polynomials. In addition, involutive algorithms admit effective and natural parallelism, which may be crucial for managing the multiloop calculations needed in modern high energy physics. \section{Acknowledgements} This work was partially supported by the SFB/CPP-TR 9 grant and also by grants 04-01-00784 from the Russian Foundation for Basic Research and 2339.2003.2 from the Russian Ministry of Science and Education. 
I am especially grateful to O.~Tarasov for longstanding useful discussions on algorithmic aspects of manipulating systems of differential equations arising in loop integral reduction. I also want to thank K.~Chetyrkin, M.~Kalmykov and V.~Smirnov for their helpful remarks.
\section{Introduction} Robinson's Joint Consistency Theorem \cite{rob} gives a sufficient condition in the context of first-order logic for two theories to have a common model. This result was originally proved by A. Robinson with the aim of providing a new, purely model-theoretic proof of the Beth definability property. Robinson's theorem was a historical forerunner of Craig's celebrated interpolation theorem, to which it is famously classically equivalent. In the context of classical first-order logic, it has been known since Lindstr\"om's work~\cite{lind78} that, in the presence of compactness, the Robinson Consistency Property is a consequence of the Omitting Types Theorem. Following in Lindstr\"om's footsteps, we use an Omitting Types Theorem for many-sorted hybrid-dynamic first-order logics established in~\cite{gai-hott} to obtain a Robinson Consistency Theorem. However, our results rely on compactness, so they apply to any star-free fragment of this logic. In \cite{ArecesBM01}, Areces et al. solved the interpolation problem positively for hybrid propositional logic, and in \cite{ArecesBM03}, they established a similar result for hybrid predicate logic with constant domains (also called here rigid domains). Our results, although similar, do not follow from theirs. For one thing, the framework of \cite{ArecesBM03} is limited to constant domain quantification, whereas we allow variable domains. Moreover, as usual in the area of algebraic specification, we work in a many-sorted setting and, importantly, we consider arbitrary pushouts of signatures (see, e.g.,~\cite{tar-bit}), not only inclusions. This seemingly small change splits one-sorted interpolation and many-sorted interpolation apart. One can have the former but not the latter, as Example \ref{ex:counter-2} shows. The same holds for Robinson consistency. 
Our approach is based on institution theory, an abstract framework introduced in~\cite{gog-ins} for reasoning about properties of logical systems from a meta-perspective. The institutional setting achieves a generality appropriate for the development of abstract model theory, yet it is geared towards applications, particularly applications to the specification and verification of systems. However, for the sake of simplicity, we work in a concrete example of hybrid logic, applying the modularization principles advanced by institution theory, which have at their core the notion of signature morphism and the satisfaction condition (truth is invariant under change of notation). This brings about certain peculiarities, such as regarding variables as special constants, and the omnipresence of signature morphisms, which are simply maps between signatures. For more on institution theory, we refer the reader to~\cite{dia-book}. The article is structured in the usual way. Section~\ref{sec:HFOL} introduces Hybrid First-Order Logic (HFOL) with rigid symbols in an institutional setting, giving the expected definitions of Kripke structures, reducts, and the (local and global) satisfaction relation. Sections~\ref{sec:basics} and~\ref{sec:relat} give some preliminary results, most importantly the Lifting Lemma~\ref{lemma:lifting} and Proposition~\ref{prop:amalg}. The bulk of the work is in Section~\ref{sec:rc} where, unsurprisingly, we prove the Robinson Joint Consistency Theorem for HFOL. Our proof is modelled after Lindstr\"om's~\cite{lind78}, and uses our earlier result from~\cite{gai-hott} establishing an Omitting Types Theorem for HFOL. \section{Hybrid First-Order Logic with rigid symbols (${\mathsf{HFOL}}$)}\label{sec:HFOL} In this section, we present hybrid first-order logic with rigid symbols. 
\paragraph{Signatures} The signatures are of the form $\Delta=(\Sigma^\mathtt{n},\Sigma^\mathtt{r},\Sigma)$: \begin{itemize} \item $\Sigma=(S,F,P)$ is a many-sorted first-order signature such that (a)~$S$ is a set of sorts, (b)~$F$ is a set of function symbols of the form $\sigma:\mathtt{ar}\to s$, where $\mathtt{ar}\in S^*$ is called the \emph{arity} of $\sigma$ and $s\in S$ is called the \emph{sort} of $\sigma$, and (c)~$P$ is a set of relation symbols of the form $\pi:\mathtt{ar}$, where $\mathtt{ar}\in S^*$ is called the arity of $\pi$.~\footnote{$S^*$ denotes the set of all strings with elements from $S$.} \item $\Sigma^\mathtt{r}=(S^\mathtt{r},F^\mathtt{r},P^\mathtt{r})$ is a many-sorted first-order signature of \emph{rigid} symbols such that $\Sigma^\mathtt{r}\subseteq \Sigma$. \item $\Sigma^\mathtt{n}=(S^\mathtt{n},F^\mathtt{n},P^\mathtt{n})$ is a single-sorted first-order signature such that $S^\mathtt{n}=\{\mathtt{n}\}$, $F^\mathtt{n}$ is a set of constants called \emph{nominals}, and $P^\mathtt{n}$ is a set of unary or binary relation symbols called \emph{modalities}. \end{itemize} We usually write $\Delta=(\Sigma^\mathtt{n},\Sigma^\mathtt{r}\subseteq \Sigma)$ rather than $\Delta=(\Sigma^\mathtt{n},\Sigma^\mathtt{r},\Sigma)$. Throughout this paper, we let $\Delta$ and $\Delta^i$ range over signatures of the form $(\Sigma^\mathtt{n},\Sigma^\mathtt{r}\subseteq\Sigma)$ and $(\Sigma_i^\mathtt{n},\Sigma_i^\mathtt{r}\subseteq\Sigma_i)$, respectively. 
A \emph{signature morphism} $\chi \colon \Delta \to \Delta^1$ consists of a pair of first-order signature morphisms $\chi^{\mathtt{n}} \colon \Sigma^{\mathtt{n}} \to \Sigma_1^{\mathtt{n}}$ and $\chi \colon \Sigma \to \Sigma_1$ such that $\chi(\Sigma^{\mathtt{r}}) \subseteq \Sigma_1^{\mathtt{r}}$.~\footnote{ A first-order signature morphism $\chi:(S,F,P)\to(S_1,F_1,P_1)$ is a triple $(\chi:S\to S_1, \chi:F\to F_1,\chi:P\to P_1)$ which maps each function symbol $\sigma:s_1\dots s_n\to s\in F$ to $\chi(\sigma):\chi(s_1)\dots \chi(s_n)\to\chi(s)\in F_1$ and each relation symbol $\pi:\mathtt{ar}\in P$ to $\chi(\pi):\chi(\mathtt{ar})\in P_1$.} \begin{fact} ${\mathsf{HFOL}}$ signature morphisms form a category $\mathtt{Sig}^{\mathsf{HFOL}}$ under the component-wise composition as first-order signature morphisms. \end{fact} \paragraph{Kripke structures} For every signature $\Delta$, the class of Kripke structures over $\Delta$ consists of pairs $(W,M)$, where \begin{itemize} \item $W$ is a first-order structure over $\Sigma^\mathtt{n}$, called a \emph{frame}, with the universe $|W|$ consisting of a non-empty set of possible worlds, and \item $M\colon|W|\to |\mathtt{Mod}^{\mathsf{FOL}}(\Sigma)|$ is a mapping from the universe of $W$ to the class of first-order $\Sigma$-structures such that the rigid symbols are interpreted in the same way across worlds: ${M_{w_1}\!\upharpoonright\!_{\Sigma^\mathtt{r}}}={M_{w_2}\!\upharpoonright\!_{\Sigma^\mathtt{r}}}$ for all $w_1,w_2\in |W|$, where $M_{w_i}$ denotes the first-order $\Sigma$-structure corresponding to $w_i$, and $M_{w_i}\!\upharpoonright\!_{\Sigma^\mathtt{r}}$ is the reduct of $M_{w_i}$ to the signature $\Sigma^\mathtt{r}$. 
\end{itemize} A \emph{homomorphism} $h \colon (W, M) \to (V,N)$ over a signature $\Delta$ is also a pair \begin{center} $(W\stackrel{h}\to V, \{M_{w}\stackrel{h_{w}}\to N_{h(w)}\}_{w \in |W|})$ \end{center} consisting of first-order homomorphisms such that the mappings corresponding to rigid sorts are shared across the worlds, that is, $h_{w_{1}, s} = h_{w_{2}, s}$ for all possible worlds $w_{1}, w_{2} \in |W|$ and all rigid sorts $s \in S^\mathtt{r}$. \begin{fact} For any signature $\Delta$, the $\Delta$-homomorphisms form a category $\mathtt{Mod}^{\mathsf{HFOL}}(\Delta)$ under the component-wise composition. \end{fact} \paragraph{Reducts} Every signature morphism $\chi \colon \Delta \to \Delta^1$ induces appropriate \emph{reductions of models}: every $\Delta^1$-model $(V, N)$ is reduced to a $\Delta$-model $(V,N) \!\upharpoonright\!_{\chi}$ that interprets every symbol $x$ in $\Delta$ as $(V, N)_{\chi(x)}$. When $\chi$ is an inclusion, we usually denote $(V, N) \!\upharpoonright\!_\chi$ by $(V, N) \!\upharpoonright\!_\Delta$ -- in this case, the model reduct simply forgets the interpretation of those symbols in $\Delta^1$ that do not belong to $\Delta$. \begin{fact} The map $\mathtt{Mod}^{\mathsf{HFOL}}$ from $\mathtt{Sig}^{\mathsf{HFOL}}$ to $\mathbb{C}at^{op}$, defined on each signature morphism $\chi\colon\Delta\to\Delta'$ and each Kripke structure $(W,M)$ over $\Delta'$ by $\mathtt{Mod}^{\mathsf{HFOL}}(\chi)(W,M) = (W,M)\!\upharpoonright\!_\chi$, is a functor. 
\end{fact} \paragraph{Hybrid terms} For any signature $\Delta$, we make the following notational conventions: (a)~$S^\mathtt{e}\coloneqq S^\mathtt{r}\cup\{\mathtt{n}\}$ the extended set of rigid sorts, where $\mathtt{n}$ is the sort of nominals, (b)~$S^\mathtt{f} \coloneqq S \setminus S^{\mathtt{r}}$ the subset of flexible sorts, (c)~$F^\mathtt{f}\coloneqq F\setminus F^\mathtt{r}$ the subset of flexible function symbols, (d)~$P^\mathtt{f}\coloneqq P\setminus P^\mathtt{r}$ the subset of flexible relation symbols. The \emph{rigidification} of $\Sigma$ with respect to $ F^\mathtt{n}$ is the signature $@\Sigma=(@S,@F,@P)$, where (a)~$@S\coloneqq \{\at{k} s \mid k\in F^\mathtt{n} \mbox{ and } s\in S\}$, (b)~$@F\coloneqq\{\at{k}\sigma\colon \at{k}\mathtt{ar} \to \at{k} s \mid k\in F^\mathtt{n} \mbox{ and } (\sigma\colon \mathtt{ar}\to s) \in F \}$, and (c)~$@P\coloneqq \{\at{k} \pi\colon \at{k} \mathtt{ar} \mid k\in F^\mathtt{n} \mbox{ and }(\pi\colon\mathtt{ar})\in P\}$.~\footnote{$\at{k} (s_1\ldots s_n) \coloneqq \at{k} s_1\ldots\at{k} s_n$ for all arities $s_1\ldots s_n$.} Since rigid symbols have the same interpretation across worlds, we let $\at{k} x = x$ for all nominals $k\in F^\mathtt{n}$ and all rigid symbols $x$ in $\Sigma^\mathtt{r}$. The set of \emph{rigid $\Delta$-terms} is $T_{@\Sigma}$, while the set of \emph{open $\Delta$-terms} is $T_\Sigma$. The set of \emph{hybrid $\Delta$-terms} is $T_{\overline\Sigma}$, where $\overline\Sigma=(\overline{S},\overline{F},\overline{P})$, $\overline{S}=S\cup @S^\mathtt{f}$, $\overline{F}=F\cup @F^\mathtt{f}$, and $\overline{P}=P\cup @P^\mathtt{f}$. 
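To fix intuitions, here is a minimal hypothetical instance of rigidification (our own illustration, not one of the paper's running examples): a signature with a single nominal $k$, a single flexible sort $s$, a flexible constant $c\colon\,\to s$ and a flexible function symbol $f\colon s\to s$.

```latex
% Rigidification with respect to the single nominal k:
\[
  @S = \{\, \at{k} s \,\}, \qquad
  @F = \{\, \at{k} c \colon\; \to \at{k} s,\;\;
            \at{k} f \colon \at{k} s \to \at{k} s \,\}, \qquad
  @P = \emptyset .
\]
% The hybrid signature \overline{\Sigma} combines both layers:
%   sorts      \overline{S} = \{ s, \at{k} s \},
%   functions  \overline{F} = \{ c, f, \at{k} c, \at{k} f \},
% so T_{\overline{\Sigma}} contains the open term f(c) of sort s
% as well as the rigid term (\at{k} f)(\at{k} c) of sort \at{k} s.
```

Note that since $s$ is flexible, $\at{k} s$ is a genuinely new sort, so the terms $f(c)$ and $(\at{k} f)(\at{k} c)$ have different sorts and cannot be equated directly.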
The interpretation of the hybrid terms in Kripke structures is uniquely defined as follows: for any $\Delta$-model $(W,M)$, and any possible world $w\in|W|$, \begin{enumerate}[1)] \item $M_{w,\sigma(t)} = M_{w,\sigma}(M_{w,t})$, where $(\sigma\colon\mathtt{ar}\to s)\in F$, and $t\in T_{\overline\Sigma,\mathtt{ar}}$, \footnote{$M_{w,(t_1,\ldots,t_n)}\coloneqq M_{w,t_1},\ldots,M_{w,t_n}$ for all tuples of hybrid terms $(t_1,\ldots,t_n)$.} \item $M_{w,(\at{k} \sigma)(t)} = M_{w',\sigma} (M_{w,t})$, where $(\at{k} \sigma\colon\at{k} \mathtt{ar}\to\at{k} s)\in @F^\mathtt{f}$, $t\in T_{\overline\Sigma,\at{k}\mathtt{ar}}$ and $w'=W_k$. \end{enumerate} \paragraph{Sentences} The simplest sentences defined over a signature $\Delta$, usually referred to as atomic, are given by \begin{center} $\rho \Coloneqq k \mid \varrho \mid t_{1} = t_{2} \mid \varpi(t)$ \end{center} where (a)~$k \in F^\mathtt{n}$ is a nominal, (b)~$(\varrho:\mathtt{n})\in P^\mathtt{n}$ is a unary modality, (c)~$t_i \in T_{\overline\Sigma,s}$ are hybrid terms of the same hybrid sort $s\in \overline{S}$, (d)~$\varpi:\mathtt{ar}\in\overline{P}$ and $t\in T_{\overline\Sigma,\mathtt{ar}}$. We call \emph{hybrid equations} sentences of the form $t_1=t_2$, and \emph{hybrid relations} sentences of the form $\varpi(t)$.
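For instance (a hypothetical signature of our own, with a nominal $k$, a flexible sort $s$, a flexible constant $c\colon\,\to s$ and a flexible function $f\colon s\to s$), the two interpretation clauses specialize as follows:

```latex
% Clause 1): open terms are evaluated in the current world w,
\[
  M_{w,\,f(c)} \;=\; M_{w,f}\big(M_{w,c}\big),
\]
% Clause 2): rigidified terms are evaluated in the world named by k,
% independently of the current world w,
\[
  M_{w,\,(\at{k} f)(\at{k} c)} \;=\; M_{w',f}\big(M_{w',c}\big),
  \qquad\text{where } w' = W_k .
\]
```

In particular, the hybrid equation $(\at{k} f)(\at{k} c) = \at{k} c$ has the same truth value at every world, whereas $f(c) = c$ may vary from world to world.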
The set $\mathtt{Sen}^{\mathsf{HFOL}}(\Delta)$ of \emph{full sentences} over $\Delta$ is given by the following grammar: \begin{center} $\varphi \Coloneqq \rho \mid \at{k} \varphi \mid \lnot \varphi \mid \textstyle\vee \Phi \mid \store{z} \varphi' \mid \Exists{X} \varphi'' \mid \pos{\lambda} \varphi $ \end{center} where (a)~$\rho$ is an atomic sentence, (b)~$k \in F^\mathtt{n} $ is a nominal, (c)~$\Phi$ is a finite set of sentences over $\Delta$, (d)~$z$ is a nominal variable for $\Delta$ and $\varphi'$ is a sentence over the signature $\Delta(z)$ obtained from $\Delta$ by adding $z$ as a new constant to $ F^\mathtt{n}$, (e)~$X$ is a set of variables for $\Delta$ of sorts from the extended set of rigid sorts $S^\mathtt{e}$ and $\varphi''$ is a sentence over the signature $\Delta(X)$ obtained from $\Delta$ by adding the variables in $X$ as new constants to $ F^\mathtt{n}$ and $F^{\mathtt{r}}$, according to their sorts, and (f)~$(\lambda:\mathtt{n}~\mathtt{n})\in P^\mathtt{n}$ is a binary modality. Sentences of the first kind are called \emph{atoms}; we refer to the remaining sentence-building operators as \emph{retrieve}, \emph{negation}, \emph{disjunction}, \emph{store}, \emph{existential quantification} and \emph{possibility}, respectively. Other Boolean connectives and the universal quantification can be defined as abbreviations of the above sentence-building operators. Each signature morphism $\chi\colon\Delta\to\Delta^1$ induces \emph{sentence translations}: any $\Delta$-sentence $\varphi$ is translated to a $\Delta^1$-sentence $\chi(\varphi)$ by replacing, in an inductive manner, the symbols in $\Delta$ with symbols from $\Delta^1$ according to $\chi$. \begin{fact} $\mathtt{Sen}^{\mathsf{HFOL}}$ is a functor $\mathtt{Sig}^{\mathsf{HFOL}} \to \mathbb{S}et$ which maps each signature $\Delta$ to the set of sentences over $\Delta$.
\end{fact} \paragraph{Local satisfaction relation} Given a $\Delta$-model $(W, M)$ and a world $w \in |W|$, we define the \emph{satisfaction of $\Delta$-sentences at $w$} by structural induction as follows: \noindent\emph{For atomic sentences}: \begin{itemize} \item $(W, M) \models^{w} k $ iff $W_k = w$; \item $(W,M)\models^w \varrho$ iff $w\in W_\varrho$; \item $(W, M) \models^{w} t_{1} = t_{2}$ iff $M_{w, t_1} = M_{w,t_2}$; \item $(W, M) \models^{w} \varpi(t)$ iff $M_{w,t} \in M_{w, \varpi}$. \end{itemize} \noindent\emph{For full sentences}: \begin{itemize} \item $(W, M) \models^{w} \at{k} \varphi$ iff $(W, M) \models^{w'} \varphi$, where $w' = W_{k}$; \item $(W, M) \models^{w} \neg \varphi$ iff $(W, M) \not\models^{w} \varphi$; \item $(W, M) \models^{w} \vee \Phi$ iff $(W, M) \models^{w} \varphi$ for some $\varphi \in \Phi$; \item $(W, M) \models^{w} \store{z}{\varphi}$ iff $(W^{z \leftarrow w}, M) \models^{w} \varphi$, where $(W^{z \leftarrow w}, M)$ is the unique $\Delta(z)$-expansion of $(W, M)$ that interprets the variable $z$ as $w$; \footnote{An expansion of $(W, M)$ to $\Delta(X)$ is a Kripke structure $(W', M')$ over $\Delta(X)$ that interprets all symbols in $\Delta$ in the same way as $(W, M)$.} \item $(W, M) \models^w \Exists{X}{\varphi}$ iff $(W', M') \models^w \varphi$ for some expansion $(W', M')$ of $(W, M)$ to the signature $\Delta(X)$; \footnotemark[\thefootnote] \item $(W, M) \models^{w} \pos{\lambda} \varphi$ iff $(W, M) \models^{w'} \varphi$ for some $w' \in |W|$ s.t. $(w, w') \in W_{\lambda}$. \end{itemize} The following \emph{satisfaction condition} can be proved by induction on the structure of $\Delta$-sentences. The proof is essentially identical to those developed for several other variants of hybrid logic presented in the literature (see, e.g.~\cite{dia-qvh}). \begin{proposition}[Local satisfaction condition] \label{prop:sat-cond} Let $\chi \colon \Delta \to \Delta^1$ be a signature morphism. 
Then $(W, M) \models^{w} \chi(\varphi)$ iff $(W, M) \!\upharpoonright\!_{\chi} \models^{w} \varphi$, for all Kripke structures $(W,M)$ over $\Delta^1$, all possible worlds $w\in|W|$, and all sentences $\varphi$ over $\Delta$.~\footnote{By the definition of reducts, $(W, M)$ and $(W, M) \!\upharpoonright\!_{\chi}$ have the same possible worlds, which means that the statement of Proposition~\ref{prop:sat-cond} is well-defined.} \end{proposition} \paragraph{Global satisfaction relation} The global satisfaction relation is defined by \begin{center} $(W,M)\models \varphi$ iff for each possible world $w\in|W|$ we have $(W,M)\models^w\varphi$ \end{center} for all signatures $\Delta$, all Kripke $\Delta$-structures $(W,M)$ and all $\Delta$-sentences $\varphi$. The global consequence relation between sentences is defined by \begin{center} $\varphi\models \psi$ iff $(W,M)\models \varphi$ implies $(W,M)\models\psi$ for all Kripke structures $(W,M)$, \end{center} for all sentences $\varphi$ and $\psi$ over the same signature. The global consequence relation can be extended to sets of sentences in the usual way. We adopt the terminology used in the algebraic specification literature. A pair $(\Delta,\Phi)$ consisting of a signature $\Delta$ and a set of sentences $\Phi$ over $\Delta$ is called a \emph{presentation}. We let $\Phi^\bullet$ denote $\{\varphi\in\mathtt{Sen}(\Delta)\mid \Phi\models \varphi\}$, the closure of $\Phi$ under the global consequence relation. A \emph{presentation morphism} $\chi:(\Delta,\Phi)\to (\Delta^1,\Phi^1)$ consists of a signature morphism $\chi:\Delta\to\Delta^1$ such that $\Phi^1\models\chi(\Phi)$. Any presentation $(\Delta,T)$ such that $T=T^\bullet$ is called a \emph{theory}. A \emph{theory morphism} is just a presentation morphism between theories. \paragraph{Examples} Fragments of ${\mathsf{HFOL}}$ have been studied extensively in the literature. We give a few examples.
\begin{example}(Rigid First-Order Hybrid Logic (${\mathsf{RFOHL}}$) \cite{DBLP:conf/wollic/BlackburnMMH19})\label{ex:RFOHL} This logic is obtained from ${\mathsf{HFOL}}$ by restricting the signatures $\Delta=(\Sigma^\mathtt{n},\Sigma^\mathtt{r}\subseteq\Sigma)$ such that (a)~$\Sigma^\mathtt{n}$ has only one binary modality, (b)~$\Sigma$ is single-sorted, (c)~the unique sort is rigid, (d)~there are no rigid function symbols except variables (regarded here as special constants), and (e)~there are no rigid relation symbols. \end{example} \begin{example}(Hybrid First-Order Logic with user-defined Sharing (${\mathsf{HFOLS}}$)) \label{ex:HFOLS} This logic has the same signatures and Kripke structures as ${\mathsf{HFOL}}$. The sentences are obtained from atoms constructed with open terms only, that is, if $\Delta=(\Sigma^\mathtt{n},\Sigma^\mathtt{r}\subseteq\Sigma)$, all (ground) equations over $\Delta$ are of the form $t_1=t_2$, where $t_1,t_2\in T_\Sigma$, and all (ground) relations over $\Delta$ are of the form $\varpi(t)$, where $(\varpi:\mathtt{ar})\in P$ and $t\in T_{\Sigma,\mathtt{ar}}$. A version of ${\mathsf{HFOLS}}$ is the underlying logic of the H system~\cite{cod-h}. Other variants of ${\mathsf{HFOLS}}$ have been studied in \cite{martins,dia-msc,dia-qvh}. \end{example} \begin{example}(Hybrid Propositional Logic (${\mathsf{HPL}}$)) \label{ex:HDPL} This is the most common form of multi-modal hybrid logic (e.g. \cite{ArecesB01}). ${\mathsf{HPL}}$ is obtained from ${\mathsf{HFOL}}$ by restricting the signatures $\Delta=(\Sigma^\mathtt{n},\Sigma^\mathtt{r}\subseteq\Sigma)$ such that $\Sigma^\mathtt{r}$ is empty and the set of sorts in $\Sigma$ is empty. Notice that if $\Sigma=(S,F,P)$ and $S=\emptyset$ then $P$ contains only propositional symbols. \end{example} \paragraph{Reachability} Let $(W,M)$ be a Kripke structure over a signature $\Delta$.
\begin{itemize} \item A possible world $w\in|W|$ is called \emph{reachable} if it is the denotation of some nominal, that is, $w= W_k$ for some nominal $k\in F^\mathtt{n}$. \item Let $w\in|W|$ be a possible world and $s\in S$ a sort. An element $e\in M_{w,s}$ is called \emph{reachable} if it is the denotation of some hybrid term, that is, $w=W_k$ and $e=M_{w,t}$ for some nominal $k\in F^\mathtt{n}$ and rigid hybrid term $t\in T_{@\Sigma,@_k s}$. \item $(W,M)$ is reachable by an $S^\mathtt{e}$-sorted set $C$ of nominals and rigid hybrid terms if (a)~its set of possible worlds consists of denotations of nominals in $C$, and (b)~its carrier sets for the rigid sorts consist of denotations of rigid hybrid terms from $C$. \item $(W,M)$ is reachable if $(W,M)$ is reachable by the set of all nominals and rigid hybrid terms. \end{itemize} The notion of reachability is connected to quantification, which is why a Kripke structure is considered reachable when its elements of rigid sorts are denotations of terms, while elements of flexible sorts are disregarded. This notion is semantic, and it also makes sense for fragments of ${\mathsf{HFOL}}$ whose sentences do not contain rigid hybrid terms, such as ${\mathsf{RFOHL}}$, ${\mathsf{HFOLS}}$ or ${\mathsf{HPL}}$. In institution theory, the notion of reachability was originally defined in \cite{Petria07} at an abstract level, and it played an important role in proving several proof-theoretic results~\cite{gai-com,gai-cbl,gai-bir} as well as model-theoretic properties~\cite{gai-int,gai-dls,gai-her,tutu-iilp}. \section{Basic definitions and results} \label{sec:basics} In this section, we establish the terminology and state some foundational results necessary for the present study. We start by noticing that rigid quantification cannot refer to unreachable elements of flexible sorts. \begin{lemma} \label{lemma:reach-equiv} Let $(W,M)$ be a reachable Kripke structure over a signature $\Delta$.
Let $(W,N)$ be a Kripke structure obtained from $(W,M)$ by (a)~replacing unreachable elements of flexible sorts by some new elements, (b)~preserving the interpretation of function and relation symbols on the elements inherited from $(W,M)$, and (c)~interpreting function symbols arbitrarily on the new arguments. Then $(W,M)$ and $(W,N)$ are elementarily equivalent, in symbols, $(W,M)\equiv(W,N)$ (that is, $(W,M)\models\varphi$ iff $(W,N)\models\varphi$, for all $\varphi\in\mathtt{Sen}(\Delta)$). \end{lemma} The proof of the lemma above is straightforward by induction on the structure of sentences. We recall the Robinson consistency property as stated in institution theory (see, for example,~\cite{DBLP:journals/sLogica/GainaP07}). \begin{definition} \label{def:rob} Consider the following square $\mathcal{S}$ of signature morphisms. \begin{center} \begin{tikzcd} \Delta^2 \ar[r,"\upsilon_2"] & \Delta'\\ \Delta \ar[u,"\chi_2"] \ar[r,swap,"\chi_1"] & \Delta^1 \ar[u,swap,"\upsilon_1"] \end{tikzcd} \end{center} $\mathcal{S}$ is a \emph{Robinson square} if for all consistent theories $T^1 \subseteq \mathtt{Sen}(\Delta^1)$ and $T^2 \subseteq \mathtt{Sen}(\Delta^2)$ and every complete theory $T\subseteq \mathtt{Sen}(\Delta)$ such that $\chi_1:(\Delta,T)\to(\Delta^1,T^1)$ and $\chi_2:(\Delta,T)\to(\Delta^2,T^2)$ are theory morphisms, the set $\upsilon_1(T^1) \cup \upsilon_2(T^2)$ is consistent. \end{definition} As shown in~\cite{gai-godel}, ${\mathsf{HFOL}}$ is compact, which means that interpolation is equivalent to the Robinson consistency property. \begin{proposition} The following are equivalent for a commutative square $\mathcal{S}$ of signature morphisms as depicted in the diagram of Definition~\ref{def:rob}: \begin{enumerate}[1)] \item $\mathcal{S}$ is a Robinson square. \item For all consistent theories $T^1 \subseteq \mathtt{Sen}(\Delta^1)$ and $T^2 \subseteq \mathtt{Sen}(\Delta^2)$ such that $\chi_1^{-1}(T^1)\cup\chi_2^{-1}(T^2)$ is consistent, the set $\upsilon_1(T^1)\cup \upsilon_2(T^2)$ is consistent.
\item $\mathcal{S}$ is a Craig Interpolation (CI) square, that is, for every $\Phi^1\subseteq\mathtt{Sen}(\Delta^1)$ and $\Phi^2\subseteq \mathtt{Sen}(\Delta^2)$ such that $\upsilon_1(\Phi^1)\models \upsilon_2(\Phi^2)$ there exists $\Phi\subseteq \mathtt{Sen}(\Delta)$ such that $\Phi^1\models \chi_1(\Phi)$ and $\chi_2(\Phi)\models\Phi^2$. \end{enumerate} \end{proposition} The equivalence of the first two statements can be proved similarly to \cite[Proposition 6]{DBLP:journals/sLogica/GainaP07}, while the equivalence of the first and last statements can be shown using ideas from~\cite[Corollary 3.1]{tar-bit}. \begin{example} \label{ex:counter-1} Let $\Delta^1\stackrel{\chi_1}\leftarrow \Delta\stackrel{\chi_2}\to\Delta^2$ be a span of signature morphisms such that \begin{itemize} \item (a)~$\Delta$ has three nominals $\{k_1,k_2,k_3\}$, three flexible sorts $\{s_1,s_2,s_3\}$ and three flexible constants $\{c_1:\to s_1,c_2:\to s_2,c_3:\to s_3\}$; (b)~$\Delta^1$ has two nominals $\{k,k_3\}$, one flexible sort $\{s\}$ and two flexible constants $\{c:\to s,c_3:\to s\}$; (c)~$\Delta^2$ has two nominals $\{k,k_1\}$, two flexible sorts $\{s,s_2\}$ and three flexible constants $\{c_1:\to s,c_2:\to s_2,c_3:\to s\}$; \item (a)~on nominals $\chi_1(k_1)=\chi_1(k_2)=k$, $\chi_1(k_3)=k_3$, on sorts $\chi_1(s_1)=\chi_1(s_2)=\chi_1(s_3)=s$, on function symbols $\chi_1(c_1:\to s_1)=\chi_1(c_2:\to s_2)=c:\to s$, $\chi_1(c_3:\to s_3)=c_3:\to s$; (b)~on nominals $\chi_2(k_1)=k_1$, $\chi_2(k_2)=\chi_2(k_3)=k$, on sorts $\chi_2(s_1)=\chi_2(s_3)=s$, $\chi_2(s_2)=s_2$, on function symbols $\chi_2(c_1:\to s_1)=c_1:\to s$, $\chi_2(c_2:\to s_2)=c_2:\to s_2$, $\chi_2(c_3:\to s_3)=c_3:\to s$.
\end{itemize} Let $\Delta^1\stackrel{\upsilon_1}\to \Delta' \stackrel{\upsilon_2}\leftarrow\Delta^2$ be a pushout of the above span such that \begin{itemize} \item $\Delta'$ has one nominal $\{k\}$, one flexible sort $\{s\}$ and two flexible constants $\{c:\to s, c_3:\to s\}$; \item (a)~$\upsilon_1(k)=\upsilon_1(k_3)=k$, $\upsilon_1(c:\to s)= c:\to s$ and $\upsilon_1(c_3:\to s)=c_3:\to s$; (b)~$\upsilon_2(k)=\upsilon_2(k_1)=k$, $\upsilon_2(c_1:\to s)=c:\to s$, $\upsilon_2(c_2:\to s_2)=c:\to s$ and $\upsilon_2(c_3:\to s)=c_3:\to s$. \end{itemize} \end{example} According to the following lemma, interpolation does not hold in ${\mathsf{HFOL}}$ in general. \begin{lemma}\label{lemma:counter-1} The pushout described in Example~\ref{ex:counter-1} is not a CI square. \end{lemma} \begin{proof} Let $\Phi^1\coloneqq\{\at{k_3}(c=c_3)\}$ and $\Phi^2\coloneqq\{\at{k_1}(c_1 = c_3)\}$. Obviously, $\upsilon_1(\Phi^1)\models \upsilon_2(\Phi^2)$. Suppose towards a contradiction that there exists an interpolant $\Phi$ over $\Delta$ such that $\Phi^1\models \chi_1(\Phi)$ and $\chi_2(\Phi)\models \Phi^2$. Let $(W^1,M^1)$ be the Kripke structure over $\Delta^1$ defined as follows: $W^1$ consists of one possible world $w$, and $M^1_{w}$ is the single-sorted algebra such that $M^1_{w,s}=\{d,e\}$ and $M^1_{w,c}=M^1_{w,c_3}=d$. We have $(W^1,M^1)\models\Phi^1$, and since $\Phi^1\models \chi_1(\Phi)$, we get $(W^1,M^1)\models\chi_1(\Phi)$. By the satisfaction condition, $(W^1,M^1)\!\upharpoonright\!_{\chi_1}\models \Phi$. Let $(V,N)$ be the Kripke structure over $\Delta$ obtained from $(W^1,M^1)\!\upharpoonright\!_{\chi_1}$ by changing the interpretation of $c_3:\to s_3$ from $d$ to $e$, which implies that $N_{w,c_3}=e$. There exists an isomorphism $h:(V,N)\to (W^1,M^1)\!\upharpoonright\!_{\chi_1}$ such that $h_{w,s_1}$ and $h_{w,s_2}$ are identities, while $h_{w,s_3}(d)=e$ and $h_{w,s_3}(e)=d$. It follows that $(V,N)\models\Phi$. There exists a $\chi_2$-expansion $(V^2,N^2)$ of $(V,N)$.
By the satisfaction condition, $(V^2,N^2)\models \chi_2(\Phi)$. Since $N^2_{w,c_1}=N_{w,c_1}=d$ and $N^2_{w,c_3}=N_{w,c_3}=e$, we have $(V^2,N^2)\not \models \Phi^2$, contradicting $\chi_2(\Phi)\models \Phi^2$. \end{proof} We are interested in characterizing a span of signature morphisms whose pushout is a CI square. For this purpose, it is necessary to restrict one of the arrows of the underlying span according to the following definition. \begin{definition} \label{def:flex} A signature morphism $\chi:\Delta\to \Delta^1$ \emph{preserves flexible symbols}~if \begin{enumerate}[1)] \item $\chi$ preserves flexible sorts, that is, $\chi(s)\in S_1^\mathtt{f}$ for all $s\in S^\mathtt{f}$, and \item $\chi$ adds no new flexible operations on `old' flexible sorts, that is, for all flexible sorts $s\in S^\mathtt{f}$ and all function symbols $\sigma_1:\mathtt{ar}_1\to \chi(s)\in F_1^\mathtt{f}$ there exists $\sigma:\mathtt{ar}\to s\in F^\mathtt{f}$ such that $\chi(\sigma:\mathtt{ar}\to s)=\sigma_1:\mathtt{ar}_1\to\chi(s)$. \end{enumerate} If, in addition, $\chi$ is injective on flexible sorts and on flexible function and relation symbols that have at least one flexible sort $s\in S^\mathtt{f}$ in the arity, then we say that $\chi$ \emph{protects flexible symbols}. \end{definition} If $\chi:\Delta\to \Delta^1$ is an inclusion that preserves flexible sorts and adds no new function symbols $\sigma:\mathtt{ar}\to s$ with $s\in S^\mathtt{f}$ on $\Delta$, then $\chi:\Delta\to \Delta^1$ protects flexible symbols. If $\Delta$ has no flexible sorts, then $\chi:\Delta\to\Delta^1$ protects flexible symbols. In particular, if $\Delta$ is an ${\mathsf{HPL}}$ or ${\mathsf{RFOHL}}$ signature, then $S^\mathtt{f}=\emptyset$, which means that $\chi$ protects flexible symbols.
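The two conditions of Definition~\ref{def:flex} can be illustrated on a small hypothetical inclusion (our own example, not one from the paper): let $\Delta$ have one flexible sort $s$ and one flexible constant $c\colon\,\to s$.

```latex
% An inclusion that protects flexible symbols: add only rigid material.
\[
  \Delta \;\hookrightarrow\; \Delta^1 \coloneqq
  \Delta \text{ extended with a rigid sort } r
  \text{ and a rigid constant } a\colon\; \to r .
\]
% By contrast, the inclusion of \Delta into the signature obtained by
% adding a new flexible constant d : -> s does NOT preserve flexible
% symbols: d is a new flexible operation targeting the `old' flexible
% sort s, violating condition 2) of Definition~\ref{def:flex}.
```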
In applications, $\chi:\Delta\to\Delta^1$ from Definition~\ref{def:flex} is appropriate for hiding information, which makes ${\mathsf{HFOL}}$ an instance of the abstract completeness result for structured specifications proved in \cite{DBLP:journals/tcs/Borzyszkowski02}. \begin{lemma}[Lifting Lemma] \label{lemma:lifting} Consider the following: \begin{enumerate}[1)] \item a signature morphism $\chi:\Delta\to \Delta^1$ which is injective on sorts and nominals, and protects flexible symbols; \item a set $C^1$ of new nominals and new rigid constants for $\Delta^1$; \item $(V^1,N^1)\in|\mathtt{Mod}(\Delta^1(C^1))|$ reachable by $C^1$ and $(W,M)\in|\mathtt{Mod}(\Delta(C))|$ reachable by $C$ such that $(V^1,N^1)\!\upharpoonright\!_{\chi^C}\equiv (W,M)$, where \begin{itemize} \item $C$ is the reduct of $C^1$ across $\chi$, i.e. $C\coloneqq\{c:\to s\mid c:\to \chi(s)\in C^1\}$, and \item $\chi^C:\Delta(C)\to\Delta^1(C^1)$ is the extension of $\chi$ that maps each constant $c:\to s\in C$ to $c:\to \chi(s)\in C^1$. \end{itemize} \end{enumerate} Then $(W^1,M^1)\equiv(V^1,N^1)$ for some $\chi^C$-expansion $(W^1,M^1)$ of $(W,M)$. \end{lemma} \begin{proof} Let $(V^1,N^1)\in|\mathtt{Mod}(\Delta^1(C^1))|$ and $(W,M)\in|\mathtt{Mod}(\Delta(C))|$ be Kripke structures such that $(V^1,N^1)\!\upharpoonright\!_{\chi^C}\equiv (W,M)$. To keep notation consistent, we let $(V,N)$ be the reduct $(V^1,N^1)\!\upharpoonright\!_{\chi^C}$ of $(V^1,N^1)$, and we will construct an expansion $(W^1,M^1)$ of $(W,M)$ such that $(W^1,M^1)\equiv(V^1,N^1)$ in three steps. \begin{enumerate}[1)] \item We construct an isomorphism $h:(W,M)\to (V,R)$, where $(V,R)$ is obtained from $(V,N)$ by replacing all unreachable elements by the unreachable elements from $(W,M)$. Firstly, we define $h$ as a function, which implicitly means that we define the universe of $(V,R)$. Let $k\in C_\mathtt{n}$ be a nominal, $v\coloneqq V_k$ and $w\coloneqq W_k$.
\noindent\textsc{Case $s\in S^\mathtt{r}$:} We define the set $R_{v,s}\coloneqq N_{v,s}$, and the function $h_{w,s}:M_{w,s}\to R_{v,s}$ by $h_{w,s}(M_{w,c})=N_{v,c}$ for all constants $c:\to s\in C$. Since both $(W,M)$ and $(V,N)$ are reachable by $C$ and $(W,M)\equiv(V,N)$, the function $h_{w,s}:M_{w,s}\to R_{v,s}$ is bijective. \noindent\textsc{Case} $s\in S^\mathtt{f}$: Let $R_{v,s}$ be the set obtained from $N_{v,s}$ by removing all unreachable elements and adding all unreachable elements from $M_{w,s}$. We define $h_{w,s}:M_{w,s}\to R_{v,s}$ by $h_{w,s}(M_{w,t})=N_{v,t}$ for all rigid hybrid $\Delta(C)$-terms $t$ of sort $@_k s$, and $h_{w,s}(e)=e$ for all unreachable elements $e\in M_{w,s}$. Since $(W,M)\equiv(V,N)$, the function $h_{w,s}:M_{w,s}\to R_{v,s}$ is bijective. Secondly, we interpret the function and relation symbols from $\Delta(C)$ in $(V,R)$. Let $k\in C_\mathtt{n}$ be a nominal, $v\coloneqq V_k$ and $w\coloneqq W_k$. \noindent\textsc{Case} $\sigma:\mathtt{ar}\to s\in F(C)$: We define $R_{v,\sigma}:R_{v,\mathtt{ar}}\to R_{v,s}$ by $R_{v,\sigma}(e)=h_{w,s}(M_{w,\sigma}(h_{w,\mathtt{ar}}^{-1}(e)))$ for all elements $e\in R_{v,\mathtt{ar}}$. Since $(W,M)\equiv(V,N)$, we have $R_{v,\sigma} (e) = N_{v,\sigma}(e)$ for all reachable elements $e\in N_{v,\mathtt{ar}}\cap R_{v,\mathtt{ar}}$. \noindent\textsc{Case} $\varpi:\mathtt{ar}\in P$: We define $R_{v,\varpi}\coloneqq h_{w,\mathtt{ar}}(M_{w,\varpi})$. Since $(W,M)\equiv(V,N)$, we have $e\in R_{v,\varpi}$ iff $e\in N_{v,\varpi}$ for all reachable elements $e\in N_{v,\mathtt{ar}}\cap R_{v,\mathtt{ar}}$. By construction, $h:(W,M)\to(V,R)$ is a homomorphism, and since it is bijective, $h:(W,M)\to(V,R)$ is an isomorphism. \item We define an expansion $(V^1,R^1)$ of $(V,R)$ along $\chi$ such that $(V^1,R^1)\equiv(V^1,N^1)$. Roughly, $(V^1,R^1)$ is obtained from $(V^1,N^1)$ by replacing all unreachable elements of sorts in $\chi(S^\mathtt{f})$ with unreachable elements of flexible sorts from $(V,R)$.
Concretely, $(V^1,R^1)$ is obtained from $(V^1,N^1)$ as follows: \noindent\textsc{Case} $s_1\in \chi(S^\mathtt{f})$: $R^1_{v,s_1}\coloneqq R_{v,\chi^{-1}(s_1)}$ for all $v\in|V^1|$, which is well-defined since $\chi$ is injective on sorts. \noindent\textsc{Case} $\sigma_1:\mathtt{ar}_1\to s_1\in \chi(F^\mathtt{f})$, where $\mathtt{ar}_1$ contains at least one sort from $\chi(S^\mathtt{f})$: For all $v\in|V^1|$, $R^1_{v,\sigma_1}\coloneqq R_{v,\chi^{-1}(\sigma_1)}$. Since $\chi$ protects flexible symbols, $\chi^{-1}(\sigma_1)$ is unique, which means that $R^1_{v,\sigma_1}$ is well-defined. Also, we have $N^1_{v,\sigma_1}(e)=N_{v,\chi^{-1}(\sigma_1)}(e)=R_{v,\chi^{-1}(\sigma_1)}(e)$ for all reachable elements $e\in N^1_{v,\mathtt{ar}_1}$. \noindent\textsc{Case} $\pi_1:\mathtt{ar}_1\in \chi(P^\mathtt{f})$, where $\mathtt{ar}_1$ contains at least one sort from $\chi(S^\mathtt{f})$: For all $v\in|V^1|$, $R^1_{v,\pi_1}\coloneqq R_{v,\chi^{-1}(\pi_1)}$. Since $\chi$ protects flexible symbols, $\chi^{-1}(\pi_1)$ is unique, which means that $R^1_{v,\pi_1}$ is well-defined. Also, we have $e\in N^1_{v,\pi_1}=N_{v,\chi^{-1}(\pi_1)}$ iff $e\in R_{v,\chi^{-1}(\pi_1)}=R^1_{v,\pi_1}$ for all reachable elements $e\in N^1_{v,\mathtt{ar}_1}$. \noindent\textsc{Case} $\sigma_1:\mathtt{ar}_1\to s_1\in F_1^\mathtt{f}\setminus \chi(F^\mathtt{f})$, where $\mathtt{ar}_1$ has at least one sort from $\chi(S^\mathtt{f})$: For all $v\in|V^1|$, the function $R^1_{v,\sigma_1}: R^1_{v,\mathtt{ar}_1}\to R^1_{v,s_1}$ is defined by \begin{itemize} \item $R^1_{v,\sigma_1}(e)=N^1_{v,\sigma_1}(e)$ for all elements $e\in R^1_{v,\mathtt{ar}_1}\cap N^1_{v,\mathtt{ar}_1}$ and \item $R^1_{v,\sigma_1}(e)$ is an arbitrary value in $R^1_{v,s_1}$ for all unreachable $e\in R^1_{v,\mathtt{ar}_1}\setminus N^1_{v,\mathtt{ar}_1}$. 
\end{itemize} \noindent\textsc{Case} $\pi_1:\mathtt{ar}_1\in P_1^\mathtt{f}\setminus \chi(P^\mathtt{f})$, where $\mathtt{ar}_1$ contains at least one sort from $\chi(S^\mathtt{f})$: For all possible worlds $v\in|V^1|$, let $R^1_{v,\pi_1}\coloneqq\{e\in N^1_{v,\pi_1}\mid e \text{ is reachable}\}$. Now, since $\chi$ protects flexible symbols, $\chi$ preserves flexible symbols, which means that for all possible worlds $v\in |V|$ and all sorts $s\in S$, \begin{center} $e\in N_{v,s}$ is unreachable iff $e\in N^1_{v,\chi(s)}$ is unreachable. \end{center} It follows that the reachable sub-structures of $(V^1,N^1)$ and $(V^1,R^1)$ coincide. By Lemma~\ref{lemma:reach-equiv}, $(V^1,N^1)\equiv(V^1,R^1)$. \item We define an isomorphism $h^1:(W^1,M^1)\to(V^1,R^1)$ by expanding $h:(W,M)\to (V,R)$ along $\chi$. Firstly, we define $h^1$ as a function. Let $h^1:W\to V$ be $h:W\to V$, which is bijective. Assume $k$ is a nominal in $C$ and let $w=W_k$. \noindent\textsc{Case} $s_1\in \chi(S)$: Let $h^1_{w,s_1}\coloneqq h_{w,\chi^{-1}(s_1)}$, which is bijective. \noindent\textsc{Case} $s_1\in S_1\setminus \chi(S)$: $M^1_{w,s_1}\coloneqq R^1_{w,s_1}$ and $h^1_{w,s_1}:M^1_{w,s_1}\to R^1_{w,s_1}$ is the identity. Secondly, we interpret the function and relation symbols from $\Delta^1(C^1)$ in $(W^1,M^1)$. For any nominal or modality $x$ in $\Delta^1(C^1)$, we define $W^1_x\coloneqq (h^1)^{-1}(V^1_x)$. Take a nominal $k$ in $C^1$, and let $w\coloneqq W^1_k$ and $v\coloneqq V^1_k$. \noindent\textsc{Case} $\sigma_1:\mathtt{ar}_1\to s_1\in F_1(C^1)$: We define $M^1_{w,\sigma_1}:M^1_{w,\mathtt{ar}_1}\to M^1_{w,s_1}$ by $M^1_{w,\sigma_1}(e)=(h^1_{w,s_1})^{-1}(R^1_{v,\sigma_1}(h^1_{w,\mathtt{ar}_1}(e)))$ for all elements $e\in M^1_{w,\mathtt{ar}_1}$. \noindent\textsc{Case} $\pi_1:\mathtt{ar}_1\in P_1$: We define $M^1_{w,\pi_1}\coloneqq (h^1_{w,\mathtt{ar}_1})^{-1}(R^1_{v,\pi_1})$.\\ Since $h:(W,M)\to (V,R)$ is an isomorphism, $h^1:(W^1,M^1)\to (V^1,R^1)$ is an isomorphism too.
\end{enumerate} It follows that $(W^1,M^1)\equiv(V^1,R^1)$. Since $(V^1,R^1)\equiv(V^1,N^1)$, we get $(W^1,M^1)\equiv(V^1,N^1)$. \end{proof} Definition~\ref{def:flex} provides a general criterion for proving the Robinson consistency property, while Lemma~\ref{lemma:lifting} is essential for completing the proof of the Robinson consistency theorem. \section{Relativization} \label{sec:relat} Relativization is a well-known method in classical model theory for defining substructures and their properties~\cite{DBLP:books/daglib/0080659}. The substructures are usually characterized by some unary predicate, and the technique is necessary in the absence of sorts when dealing with modular properties (such as putting together models defined over different signatures) which implicitly involve signature morphisms. For ${\mathsf{HFOL}}$, relativization is necessary to prove the Robinson consistency property from the omitting types property, since the signatures of nominals are single-sorted. It is worth mentioning that relativization is not necessary to prove Robinson consistency for many-sorted first-order logic. \begin{definition} \label{def:amalg} The \emph{relativized union} of any signatures $\Delta^1$ and $\Delta^2$ is a presentation $(\Delta^\diamond,\Phi^\diamond)$ defined as follows: \begin{enumerate}[1)] \item $\Delta^\diamond$ is the signature obtained from $\Delta^1\coprod\Delta^2$ by adding two nominals $o_1$ and $o_2$, and two unary modalities $(\pi_1:\mathtt{n})$ and $(\pi_2:\mathtt{n})$, \item $\Phi^\diamond\subseteq\mathtt{Sen}(\Delta^\diamond)$ consists of $\pi_1\vee\pi_2$ and all sentences of the form $\at{k_i}\pi_i$, where $i\in\{1,2\}$ and $k_i\in F^\mathtt{n}_i\cup\{o_i\}$. \end{enumerate} \end{definition} Let $\mathtt{inj}_i:\Delta^i\to \Delta^1\coprod\Delta^2$ be the canonical injection, for each $i\in\{1,2\}$. Let $\theta:\Delta^1\coprod\Delta^2\hookrightarrow \Delta^\diamond$ be the inclusion.
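As a small hypothetical instance of Definition~\ref{def:amalg} (our own illustration), take disjoint propositional-like signatures whose only symbols are the nominals $F^\mathtt{n}_1=\{k_1\}$ and $F^\mathtt{n}_2=\{k_2\}$. Then $\Delta^\diamond$ adds $o_1,o_2$ and the unary modalities $\pi_1,\pi_2$, and

```latex
\[
  \Phi^\diamond \;=\; \{\;
    \pi_1 \vee \pi_2,\;\;
    \at{k_1}\pi_1,\;\; \at{o_1}\pi_1,\;\;
    \at{k_2}\pi_2,\;\; \at{o_2}\pi_2
  \;\}.
\]
% Every model of (\Delta^\diamond, \Phi^\diamond) covers its worlds by
% the denotations of \pi_1 and \pi_2; each denotation is non-empty and
% contains the world named by the nominal of the corresponding side.
```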
Here, we are interested more in the vertex $\Delta^\diamond$ and less in the arrows $(\mathtt{inj}_i;\theta)$, where $i\in\{1,2\}$. The presentation $(\Delta^\diamond,\Phi^\diamond)$ is meant to define Kripke structures obtained from the union of a Kripke structure over $\Delta^1$ and a Kripke structure over $\Delta^2$. The new nominals $o_1$ and $o_2$ together with the sentences $\at{o_1} \pi_1$ and $\at{o_2}\pi_2$ ensure that the domains of $\pi_1$ and $\pi_2$ are not empty. For each nominal $k_1\in F_1^\mathtt{n}$, the sentence $\at{k_1}\pi_1$ ensures that the interpretation of $k_1$ belongs to the denotation of $\pi_1$. A similar remark holds for any sentence $\at{k_2}\pi_2$ with $k_2\in F_2^\mathtt{n}$. For the sake of simplifying the notation, we assume without loss of generality that $\Delta^1$ and $\Delta^2$ are disjoint, which means that $\Delta^1\coprod \Delta^2=\Delta^1\cup \Delta^2$. \begin{definition} Let $\Delta^1$ and $\Delta^2$ be two disjoint signatures. For each $i\in\{1,2\}$, the \emph{relativized reduct} $\!\upharpoonright\!_{\pi_i}:\mathtt{Mod}(\Delta^\diamond,\Phi^\diamond)\to\mathtt{Mod}(\Delta^i)$ is defined as follows: \begin{enumerate}[1)] \item For each $(W,M)\in|\mathtt{Mod}(\Delta^\diamond,\Phi^\diamond)|$, the Kripke structure $(W,M)\!\upharpoonright\!_{\pi_i}$, denoted $(W^i,M^i)$, is defined by (a)~$|W^i|= W_{\pi_i}$, (b)~$W^i_k =W_k$ for all nominals $k\in F^\mathtt{n}_i$, (c)~$W^i_\varrho=W_\varrho\cap W_{\pi_i}$ for all unary modalities $(\varrho:\mathtt{n})\in P_i^\mathtt{n}$, (d)~$W^i_\lambda=\{(w,v)\in W_\lambda \mid w,v\in W_{\pi_i} \}$ for all modalities $(\lambda:\mathtt{n}~\mathtt{n})\in P^\mathtt{n}_i$, (e)~$M^i= M|_{W^i}$ and $M^i_{w,x}=M_{w,x}$ for all possible worlds $w\in |W^i|$ and all sort/function/relation symbols $x$ in $\Sigma_i$.
\item For each $h:(W,M)\to (W',M')\in \mathtt{Mod}(\Delta^\diamond,\Phi^\diamond)$, the homomorphism $h\!\upharpoonright\!_{\pi_i}:(W,M)\!\upharpoonright\!_{\pi_i}\to (W',M')\!\upharpoonright\!_{\pi_i}$ is defined by $(h\!\upharpoonright\!_{\pi_i})_w=h_w$ for all $w\in W_{\pi_i}$. \end{enumerate} \end{definition} \begin{definition} The \emph{relativized translation} $\mathtt{rt}(\pi_i):\mathtt{Sen}(\Delta^i)\to \mathtt{Sen}(\Delta^\diamond)$, where $i\in\{1,2\}$, is defined by induction on the structure of sentences, simultaneously, for all disjoint signatures $\Delta^1$ and $\Delta^2$: \begin{enumerate}[1)] \item $\mathtt{rt}(\pi_i)(k)\coloneqq \pi_i\Rightarrow k$ for all nominals $k\in F^\mathtt{n}_i$. \item $\mathtt{rt}(\pi_i)(\varrho)\coloneqq \pi_i\Rightarrow \varrho$ for all unary modalities $(\varrho:\mathtt{n})\in P_i^\mathtt{n}$. \item $\mathtt{rt}(\pi_i)(t_1=t_2)\coloneqq \pi_i\Rightarrow t_1 = t_2$ for all ground equations $t_1=t_2$ over $\Delta^i$. \item $\mathtt{rt}(\pi_i)(\varpi(t_1,\dots,t_n))\coloneqq \pi_i\Rightarrow \varpi(t_1,\dots,t_n)$ for all ground relations $\varpi(t_1,\dots,t_n)$ over $\Delta^i$. \item $\mathtt{rt}(\pi_i)(\at{k}\gamma)\coloneqq \pi_i\Rightarrow \at{k}\mathtt{rt}(\pi_i)(\gamma)$ for all nominals $k\in F^\mathtt{n}_i$ and all sentences $\gamma\in\mathtt{Sen}(\Delta^i)$. \item $\mathtt{rt}(\pi_i)(\pos{\lambda}\gamma)\coloneqq \pi_i\Rightarrow \pos{\lambda}(\pi_i\wedge \mathtt{rt}(\pi_i)(\gamma))$ for all modalities $(\lambda:\mathtt{n}~\mathtt{n}) \in P_i^\mathtt{n}$ and all sentences $\gamma\in \mathtt{Sen}(\Delta^i)$. \item $\mathtt{rt}(\pi_i)(\neg \gamma) \coloneqq \pi_i \Rightarrow \neg\mathtt{rt}(\pi_i)(\gamma)$ for all $\gamma\in\mathtt{Sen}(\Delta^i)$. \item $\mathtt{rt}(\pi_i)(\gamma_1\vee\gamma_2)\coloneqq \mathtt{rt}(\pi_i)(\gamma_1)\vee \mathtt{rt}(\pi_i)(\gamma_2)$ for all $\gamma_1,\gamma_2\in\mathtt{Sen}(\Delta^i)$.
\item \label{it:rt-10} $\mathtt{rt}(\pi_i)(\store{z}\gamma)\coloneqq \pi_i\Rightarrow \store{z} \mathtt{rt}(\pi_i)(\gamma)$ for all sentences $\store{z}\gamma\in \mathtt{Sen}(\Delta^i)$, where $z$ is a nominal variable.\footnote{Notice that $\Delta^\diamond(z)$ is obtained from the relativized union of $\Delta^i(z)$ and $\Delta^j$, where $i,j\in\{1,2\}$ and $i\neq j$; therefore, $\mathtt{rt}(\pi_i)(\gamma)$ is well-defined.} \item $\mathtt{rt}(\pi_i)(\Exists{x}\gamma)\coloneqq \pi_i\Rightarrow \Exists{x} \at{x} \pi_i \wedge \mathtt{rt}(\pi_i)(\gamma)$ for all sentences $\Exists{x}\gamma\in \mathtt{Sen}(\Delta^i)$ with $x$ a nominal variable. \item $\mathtt{rt}(\pi_i)(\Exists{y}\gamma) \coloneqq \pi_i\Rightarrow \Exists{y} \mathtt{rt}(\pi_i)(\gamma)$ for all sentences $\Exists{y}\gamma\in \mathtt{Sen}(\Delta^i)$ with $y$ a rigid variable. \end{enumerate} \end{definition} Simultaneous induction over all disjoint signatures is necessary for the case of quantified sentences. Unlike in first-order logic, relativization is applied not only to quantifiers but already to atomic sentences. Notice that locally the antecedent $\pi_i$ of each implication is redundant, but globally it is not. \begin{proposition} [Satisfaction condition] \label{prop:sat-cond} For all disjoint signatures $\Delta^1$ and $\Delta^2$, all Kripke structures $(W,M)\in|\mathtt{Mod}(\Delta^\diamond,\Phi^\diamond)|$ and all sentences $\gamma\in\mathtt{Sen}(\Delta^i)$, where $i\in\{1,2\}$, the following \emph{satisfaction conditions} hold: \begin{itemize} \item For all worlds $w\in W_{\pi_i}$, we have $(W,M)\models^w \mathtt{rt}(\pi_i)(\gamma)$ iff $(W,M)\!\upharpoonright\!_{\pi_i} \models^w \gamma$. \item For all worlds $w\in|W|\setminus W_{\pi_i}$, we have $(W,M)\models^w\mathtt{rt}(\pi_i)(\gamma)$. \item $(W,M)\models \mathtt{rt}(\pi_i)(\gamma)$ iff $(W,M)\!\upharpoonright\!_{\pi_i}\models \gamma$.
\end{itemize} \end{proposition} The first two statements of Proposition~\ref{prop:sat-cond} are straightforward by induction on the structure of sentences. The third statement, which corresponds to the global satisfaction condition, is a consequence of the first two. \begin{proposition} \label{prop:amalg} For all disjoint signatures $\Delta^1$ and $\Delta^2$, and all Kripke structures $(W^1,M^1)$ and $(W^2,M^2)$ over $\Delta^1$ and $\Delta^2$, respectively, there exists $(W,M)\in|\mathtt{Mod}(\Delta^\diamond,\Phi^\diamond)|$, called the \emph{relativized union} of $(W^1,M^1)$ and $(W^2,M^2)$, such that $(W,M)\!\upharpoonright\!_{\pi_1}=(W^1,M^1)$ and $(W,M)\!\upharpoonright\!_{\pi_2}=(W^2,M^2)$. \end{proposition} The relativized union of Kripke structures is not unique. \section{Robinson consistency} \label{sec:rc} The Robinson consistency property is derived from the Omitting Types Theorem, which was proved in \cite{gai-hott} for ${\mathsf{HFOL}}$. We start by defining the semantic opposite of a sentence. In first-order logic, the semantic opposite of a sentence is its negation. \begin{definition} Given a sentence $\psi$ over a signature $\Delta$, we let (a)~$+\psi$ denote the sentence $\Forall{z^\circ}\at{z^\circ}\psi$, (b)~$-\psi$ denote the sentence $\Exists{z^\circ}\at{z^\circ}\neg\psi$, and (c)~$\pm\psi$ range over $\{+\psi,-\psi\}$, where $z^\circ$ is a distinguished nominal variable for $\Delta$. \end{definition} The proof of the following lemma is straightforward. \begin{lemma} \label{lemma:+} For all Kripke structures $(W,M)$ and all sentences $\psi$ over a signature $\Delta$, we have (a)~$(W,M) \models \psi$ iff $(W,M)\models +\psi$ iff $(W,M)\models^w+\psi$ for some possible world $w\in|W|$, and (b)~$(W,M)\not\models \psi$ iff $(W,M)\models -\psi$. \end{lemma} By Lemma~\ref{lemma:+}, the satisfaction of $+\psi$ does not depend on the possible world where $+\psi$ is evaluated, and the same holds for $-\psi$.
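As a concrete illustration (our addition, not part of the original development), the following example instantiates the relativized translation and the semantic opposites on simple sentences; the nominal $k$ and the modality $\lambda$ are assumed to belong to $\Delta^1$.

```latex
% Illustrative example (assumption: $k\in F^\mathtt{n}_1$ and
% $(\lambda:\mathtt{n}~\mathtt{n})\in P^\mathtt{n}_1$).
\begin{example}
By clauses 1) and 6) of the definition of the relativized translation,
\[
  \mathtt{rt}(\pi_1)(\pos{\lambda}k)
  \;=\;
  \pi_1\Rightarrow \pos{\lambda}\pi_1\wedge(\pi_1\Rightarrow k),
\]
so the relativization already affects the atomic subsentence $k$, not only
the modal operator. For the semantic opposites, taking $\psi=\pos{\lambda}k$,
\[
  +\psi=\Forall{z^\circ}\at{z^\circ}\pos{\lambda}k
  \qquad\text{and}\qquad
  -\psi=\Exists{z^\circ}\at{z^\circ}\neg\pos{\lambda}k,
\]
and by Lemma~\ref{lemma:+}, $(W,M)\models+\psi$ iff $\psi$ holds at every
possible world of $(W,M)$, while $(W,M)\models-\psi$ iff $\psi$ fails at
some possible world.
\end{example}
```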
\subsection{Framework} We set up the framework in which the Robinson consistency property is proved. Let $(\Delta^1,\Phi^1)\stackrel{\chi_1}\leftarrow(\Delta,\Phi)\stackrel{\chi_2}\to(\Delta^2,\Phi^2)$ be a span of presentation morphisms such that (a)~$\chi_2$ is injective on sorts and nominals, and protects flexible sorts, (b)~$\Phi$ is maximally consistent over $\Delta$, and (c)~$\Phi^i$ is consistent over $\Delta^i$ for each $i\in\{1,2\}$. Assume a set of new rigid constants $C$ for $\Delta$ such that $\mathtt{card}(C_s)=\alpha$ for all rigid sorts $s\in S^\mathtt{r}$, where $\alpha\coloneqq \max\{\mathtt{card}(\mathtt{Sen}(\Delta^1)),\mathtt{card}(\mathtt{Sen}(\Delta^2))\}$. For each $i\in\{1,2\}$, let $C^i$ be the set of new constants for $\Delta^i$ obtained by renaming the translation of the constants in $C$ along $\chi_i$ and by adding a set of new constants $C^i_{s^i}$ of cardinality $\alpha$ for each rigid sort $s^i\in S^\mathtt{r}_i$ outside the image of $\chi_i$: \begin{center} $C^i\coloneqq \{c^i:\to\chi_i(s) \mid c:\to s\in C\} \cup (\bigcup_{s^i\in S^\mathtt{r}_i \setminus \chi_i(S^\mathtt{r})} C^i_{s^i})$ \end{center} where (a)~each constant $c^i:\to\chi_i(s)$ is the renaming of a constant $c:\to s\in C$, and (b)~$C^i_{s^i}$ is a set of new constants of sort $s^i$ for all rigid sorts $s^i\in S^\mathtt{r}_i \setminus \chi_i(S^\mathtt{r})$. Let $\chi_i^C:\Delta(C)\to\Delta^i(C^i)$ be the extension of $\chi_i:\Delta\to\Delta^i$ to $\Delta(C)$ which maps each constant $c:\to s \in C$ to its renaming $c^i:\to \chi_i(s)\in C^i$. Without loss of generality, we assume that $\Delta^1(C^1)$ and $\Delta^2(C^2)$ are disjoint.
\begin{figure}[h] \centering \begin{tikzcd} \Delta(C) \ar[rr,"\chi_1^C"] \ar[rd,"\chi_2^C "] & & \Delta^1(C^1) \ar[rd,dotted,"(\_)^{\pi_1}"] & \\ & \Delta^2(C^2) \ar[rr,dotted,"(\_)^{\pi_2}", near start] & & (\Delta^\diamond(C^\diamond),\Phi_C^\diamond) \\ & & & \\ \Delta \ar[uuu,hook] \ar[rr,"\chi_1" near start,dashed] \ar[rd,swap, "\chi_2"] & & \Delta^1 \ar[uuu,dashed,hook] \ar[dr,dotted,"(\_)^{\pi_1}"]&\\ & \Delta^2 \ar[uuu,hook] \ar[rr,dotted,"(\_)^{\pi_2}"] & & (\Delta^\diamond,\Phi^\diamond) \ar[uuu,hook] \\ \end{tikzcd} \caption{} \label{fig:frame} \end{figure} In Figure~\ref{fig:frame}, $(\Delta^\diamond,\Phi^\diamond)$ is the relativized union of $\Delta^1$ and $\Delta^2$, while $(\Delta^\diamond(C^\diamond),\Phi_C^\diamond)$ is the relativized union of $\Delta^1(C^1)$ and $\Delta^2(C^2)$. The definitions of $C$, $C^1$ and $C^2$ are unique up to isomorphism, and they are essential for the proof of the Robinson consistency theorem. \begin{notation}[Semantics] For each $(W,M)\in|\mathtt{Mod}(\Delta^\diamond,\Phi^\diamond)|$, we let \begin{itemize} \item $(W^1,M^1)$ and $(W^2,M^2)$ denote $(W,M)\!\upharpoonright\!_{\pi_1}$ and $(W,M)\!\upharpoonright\!_{\pi_2}$, respectively; \item $(W^a,M^a)$ and $(W^b,M^b)$ denote $(W^1,M^1){\!\upharpoonright\!_{\chi_1}}$ and $(W^2,M^2){\!\upharpoonright\!_{\chi_2}}$, respectively. \end{itemize} Moreover, let $(V,N)$ be a Kripke structure over $\Delta^\diamond(C^\diamond)$ and adopt a similar convention as above for its reducts. \end{notation} \begin{notation}[Syntax] Let $i\in\{1,2\}$ and $\psi\in\mathtt{Sen}(\Delta(C))$. \begin{itemize} \item Let $\psi^i$ and $\psi^{\pi_i}$ denote $\chi_i^C(\psi)$ and $\mathtt{rt}(\pi_i)(\chi_i^C(\psi))$, respectively. \item Let $\Phi^{\pi_i}$ denote $\mathtt{rt}(\pi_i)(\Phi^i)$. \end{itemize} \end{notation} \subsection{Results} This section contains the main results, namely the Robinson consistency theorem and its corollaries.
The following lemma is crucial for subsequent developments; it is a consequence of Proposition~\ref{prop:sat-cond} and Lemma~\ref{lemma:+}. \begin{lemma}\label{lemma:bicond} For all $(V,N)\in|\mathtt{Mod}(\Delta^\diamond(C^\diamond))|$ and all $\psi\in\mathtt{Sen}(\Delta(C))$, \begin{center} $(V,N)\models +(+\psi)^{\pi_1} \Leftrightarrow +(+\psi)^{\pi_2} \text{ iff } \Big((V^a,N^a)\models \psi \text{ iff }(V^b,N^b) \models \psi \Big)$. \end{center} \end{lemma} Lemma~\ref{lemma:bicond} says that $(V,N)$ globally satisfies $+(+\psi)^{\pi_1} \Leftrightarrow +(+\psi)^{\pi_2}$ for all $\Delta(C)$-sentences~$\psi$ iff $(V^a,N^a)$ and $(V^b,N^b)$ are elementarily equivalent. \begin{notation} We define the following set of sentences over $\Delta^\diamond(C^\diamond)$: \begin{center} $T_C\coloneqq \Phi_C^\diamond \cup\Phi^{\pi_1} \cup \Phi^{\pi_2} \cup \{ +(+\psi)^{\pi_1} \Leftrightarrow +(+\psi)^{\pi_2} \mid \psi\in\mathtt{Sen}(\Delta(C))\}$. \end{center} \end{notation} Notice that $T_C$ describes the Kripke structures $(V,N)$ over $\Delta^\diamond(C^\diamond)$ obtained from the relativized union of some Kripke structures $(V^1,N^1)\in|\mathtt{Mod}(\Delta^1(C^1),\Phi^1)|$ and $(V^2,N^2)\in|\mathtt{Mod}(\Delta^2(C^2),\Phi^2)|$ such that $(V^a,N^a)$ and $(V^b,N^b)$ are elementarily equivalent. \begin{proposition} \label{prop:cons} $T_C$ is consistent.
\end{proposition} \begin{figure}[h] \begin{tikzcd} \pm\Psi \ar[r,dotted,no head]& \Delta(D) \ar[rr,"\chi_1^D"] \ar[rd,"\chi_2^D"] & & \Delta^1(D^1) \ar[rd,dotted,"\pi_1"] & \\ & & \Delta^2(D^2) \ar[rr,dotted,"\pi_2",near start] & & (\Delta^\diamond(D^\diamond),\Phi_D^\diamond) \\ & & & & \\ \Exists{D}\bigwedge\pm\Psi \ar[r,dotted,no head] & \Delta \ar[uuu,hook] \ar[rr,"\chi_1" near start,dashed] \ar[rd,swap, "\chi_2"] & & \Delta^1 \ar[uuu,dashed,hook] \ar[dr,dotted,"\pi_1"]&\\ & & \Delta^2 \ar[uuu,hook] \ar[rr,dotted,"\pi_2"] & & (\Delta^\diamond,\Phi^\diamond) \ar[uuu,hook] \\ \end{tikzcd} \caption{} \end{figure} \begin{proof} Let $\Psi$ be a finite set of $\Delta(C)$-sentences. Let $D\subseteq C$ be the finite subset of all constants from $C$ that occur in $\Psi$. We define $D^1\coloneqq \chi_1^C(D)$ and $D^2\coloneqq \chi_2^C(D)$. For each $i\in\{1,2\}$, we let $\chi_i^D$ denote the restriction of $\chi_i^C$ to $\Delta(D)$. We show that $\Phi_D^\diamond \cup\Phi^{\pi_1} \cup \Phi^{\pi_2} \cup \{ + (+\psi)^{\pi_1} \Leftrightarrow +(+\psi)^{\pi_2} \mid \psi\in\Psi\}$ is consistent, where $(\Delta^\diamond(D^\diamond),\Phi_D^\diamond)$ is the relativized union of $\Delta^1(D^1)$ and $\Delta^2(D^2)$. \begin{itemize} \item[] Since $\Phi^1$ is consistent, $(W^1,M^1)\models \Phi^1$ for some Kripke structure $(W^1,M^1)$ over $\Delta^1$. Let $(V^1,N^1)$ be an arbitrary expansion of $(W^1,M^1)$ to $\Delta^1(D^1)$. Let $\pm \Psi\coloneqq \{ \pm \psi \mid \psi \in \Psi \text{ and } (V^a,N^a) \models \pm \psi \}$, where $(V^a,N^a)=(V^1,N^1)\!\upharpoonright\!_{\chi_1^D}$. Since $(V^a,N^a)\models \pm\Psi$, $(W^a,M^a)\models \Exists{D}\bigwedge\pm\Psi$.\footnote{Since $D$ is not a set of variables, $\Exists{D}\bigwedge\pm\Psi$ is not a sentence in our language, but there exists a $\Delta$-sentence semantically equivalent to it.} Since $\Phi^2$ is consistent, $(W^2,M^2)\models \Phi^2$ for some Kripke structure $(W^2,M^2)$ over $\Delta^2$. 
Since $\chi_1(\Phi)\subseteq \Phi^1$ and $\chi_2(\Phi)\subseteq \Phi^2$, by satisfaction condition, $(W^a,M^a)\models \Phi$ and $(W^b,M^b)\models \Phi$. Since $\Phi$ is maximally consistent, $(W^a,M^a)\equiv (W^b,M^b)$. Since $(W^a,M^a)\models \Exists{D}\bigwedge\pm\Psi$, $(W^b,M^b)\models \Exists{D}\bigwedge \pm \Psi$. It follows that $(V^b,N^b)\models \pm\Psi$ for some expansion $(V^b,N^b)$ of $(W^b,M^b)$ to $\Delta(D)$. Since $\{\Delta(D)\hookleftarrow \Delta\stackrel{\chi_2}\to\Delta^2,\Delta(D)\stackrel{\chi_2^D}\to \Delta^2(D^2)\hookleftarrow \Delta^2\}$ is a pushout and ${(V^b,N^b)\!\upharpoonright\!_\Delta}=(W^b,M^b)={(W^2,M^2)\!\upharpoonright\!_{\chi_2}}$, there exists an expansion $(V^2,N^2)$ of $(W^2,M^2)$ to $\Delta^2(D^2)$ such that $(V^2,N^2)\!\upharpoonright\!_{\chi_2^D}=(V^b,N^b)$. Let $(V,N)$ be a relativized union of $(V^1,N^1)$ and $(V^2,N^2)$. Since $(V^1,N^1)\models \Phi^1$ and $(V^2,N^2)\models \Phi^2$, by Proposition~\ref{prop:sat-cond}, $(V,N)\models \Phi_D^\diamond\cup \Phi^{\pi_1}\cup\Phi^{\pi_2}$. By Lemma~\ref{lemma:bicond}, $(V,N)\models \{+(+\psi)^{\pi_1} \Leftrightarrow +(+\psi)^{\pi_2} \mid \psi\in\Psi\}$. \end{itemize} Hence, by compactness, $T_C$ is consistent. \end{proof} Recall that $\mathtt{n}$ denotes the sort of nominals. \begin{notation} [Nominal type] We define a type in one nominal variable $z$: \begin{center} $\Gamma_\mathtt{n}\coloneqq\{\at{z}\pi_i \Rightarrow z\neq c^i\mid i=\overline{1,2} \text{ and } c:\to \mathtt{n}\in C\}$ \end{center} where $c^i$ denotes $\chi_i(c)$ for all nominals $c:\to \mathtt{n}\in C$, and $z\neq c^i$ denotes $\neg\at{z}c^i$. \end{notation} Notice that a Kripke structure $(V,N)$ which satisfies $\pi_1\vee\pi_2$ and omits $\Gamma_\mathtt{n}$ has the set of possible worlds reachable by the nominals in $C^\diamond$. 
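The omission condition for $\Gamma_\mathtt{n}$ admits a simple finitary reading: every possible world in the denotation of $\pi_1$ or $\pi_2$ must be named by some nominal constant. As an informal illustration (a sketch of ours, with made-up world and constant names, not part of the formal development), this check can be phrased on a finite frame as follows:

```python
# Sketch (our illustration, not from the paper): on a finite Kripke frame,
# a structure omits the nominal type Gamma_n exactly when every possible
# world in the denotation of pi_1 or pi_2 is the interpretation of some
# nominal constant from C^1 or C^2.

def omits_nominal_type(worlds_pi1, worlds_pi2, nominal_interp):
    """Return True iff every world satisfying pi_1 or pi_2 is named,
    i.e. every such world realizes the negation of some formula of
    Gamma_n = { @_z pi_i => z != c^i | ... }."""
    named_worlds = set(nominal_interp.values())
    return (worlds_pi1 | worlds_pi2) <= named_worlds

# Hypothetical toy frame: pi_1 holds at w0, w1 and pi_2 holds at w2.
pi1, pi2 = {"w0", "w1"}, {"w2"}

# Every world is named by some constant, so Gamma_n is omitted.
print(omits_nominal_type(pi1, pi2, {"c1": "w0", "c2": "w1", "c3": "w2"}))  # True

# World w1 is unnamed, so Gamma_n is realized at w1, hence not omitted.
print(omits_nominal_type(pi1, pi2, {"c1": "w0", "c3": "w2"}))  # False
```

The subset test `<=` mirrors the quantification over worlds in the type: omission fails exactly when some world in $W_{\pi_1}\cup W_{\pi_2}$ realizes every formula of $\Gamma_\mathtt{n}$, i.e. differs from every named world.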
\begin{proposition} \label{prop:omit-n} $T_C$ $\alpha$-omits $\Gamma_\mathtt{n}$, that is, for each set of sentences $p\subseteq \mathtt{Sen}(\Delta^\diamond(C^\diamond,z))$ of cardinality strictly less than $\alpha$ such that $T_C\cup p$ is consistent, we have $T_C\cup p\not\models\Gamma_\mathtt{n}$. \end{proposition} \begin{proof} Let $p\subseteq\mathtt{Sen}(\Delta^\diamond(C^\diamond,z))$ be a set of sentences of cardinality strictly less than $\alpha$ such that $T_C\cup p$ is consistent. Let $D_\mathtt{n}$ be the set of all nominals $c\in C_\mathtt{n}$ such that either $c^1$ or $c^2$ occurs in $p$. Since $\mathtt{card}(p)<\alpha$, we have $\mathtt{card}(D_\mathtt{n})<\alpha$. Let $D$ be the set of constants obtained from $C$ by removing all nominals from $C_\mathtt{n}\setminus D_\mathtt{n}$. We define $D^i\coloneqq \chi_i^C(D)$ for each $i\in\{1,2\}$. It follows that $p\subseteq \mathtt{Sen}(\Delta^\diamond(D^\diamond,z))$, where $(\Delta^\diamond(D^\diamond),\Phi_D^\diamond)$ is the relativized union of $\Delta^1(D^1)$ and $\Delta^2(D^2)$. Let $T_D$ be the set of all sentences from $T_C$ which contain only constants from $D^1$ and $D^2$. Since $T_C\cup p$ is consistent, its subset $T_D\cup p$ is consistent too. Let $(V,N)$ be a Kripke structure over $\Delta^\diamond(D^\diamond)$ such that $(V,N)\models T_D$, and let $v\in |V|$ such that $(V^{z\leftarrow v},N)\models p$. Since $(V,N)\models \pi_1\vee \pi_2$, we have $v\in \pi_1^V$ or $v\in \pi_2^V$. We assume that $v\in \pi_1^V$, as the case $v\in \pi_2^V$ is symmetrical. According to our conventions, $(V^1,N^1)= (V,N)\!\upharpoonright\!_{\pi_1}$, $(V^a,N^a)= (V,N)\!\upharpoonright\!_{\chi_1^D}$, $(V^2,N^2)= (V,N)\!\upharpoonright\!_{\pi_2}$ and $(V^b,N^b)= (V,N)\!\upharpoonright\!_{\chi_2^D}$, where $\chi_i^D$ denotes the restriction of $\chi_i^C$ to $\Delta(D)$ for each $i\in\{1,2\}$.
\begin{figure}[h]\small \begin{tikzcd} \pm\Psi \ar[r,dotted,no head]& \Delta(D,c) \ar[rr] \ar[rd] & & \Delta^1(D^1,c^1) \ar[rd,dotted,"\pi_1"] & \\ & & \Delta^2(D^2,c^2) \ar[rr,dotted,"\pi_2",near start] & & \Delta^\diamond(D^\diamond,c^1,c^2) \\ & & & & \\ \Exists{c}\bigwedge\pm\Psi \ar[r,dotted,no head] & \Delta(D) \ar[uuu,hook] \ar[rr,"\chi_1^D" near start,dashed] \ar[rd,swap, "\chi_2^D"] & & \Delta^1(D^1) \ar[uuu,dashed,hook] \ar[dr,dotted,"\pi_1"]&\\ & & \Delta^2(D^2) \ar[uuu,hook] \ar[rr,dotted,"\pi_2"] & & \Delta^\diamond(D^\diamond) \ar[uuu,hook] \\ \end{tikzcd} \caption{} \end{figure} Since $\mathtt{card}(D_\mathtt{n})<\alpha=\mathtt{card}(C_\mathtt{n})$, there exists a nominal $c\in C_\mathtt{n}\setminus D_\mathtt{n}$. Let $\Psi\subseteq \mathtt{Sen}(\Delta(D,c))\setminus\mathtt{Sen}(\Delta(D))$ be a finite set of sentences. Then $T_D \cup p \cup \{(\at{c^1}\pi_1),(\at{c^2}\pi_2), (z = c^1) \} \cup \{ +(+\psi)^{\pi_1} \Leftrightarrow + (+\psi)^{\pi_2} \mid \psi\in\Psi\}$ is consistent: \begin{itemize} \item[] We define $\pm \Psi\coloneqq\{ \pm\psi\mid \psi\in\Psi \text{ and } ((V^a)^{c\leftarrow v},N^a) \models \pm \psi\}$. Since $((V^a)^{c\leftarrow v},N^a) \models \pm\Psi$, $(V^a,N^a)\models \Exists{c}\bigwedge \pm\Psi$. Since $(V,N)\models \{ +(+\varphi)^{\pi_1} \Leftrightarrow + (+\varphi)^{\pi_2} \mid \varphi\in\mathtt{Sen}(\Delta(D))\}$, by Lemma~\ref{lemma:bicond}, $(V^a,N^a) \equiv (V^b,N^b)$. It follows that $(V^b,N^b)\models \Exists{c}\bigwedge \pm\Psi$. By semantics, $((V^b)^{c\leftarrow u},N^b) \models\pm\Psi$ for some $u\in|V^b|$. We get $((V^a)^{c\leftarrow v},N^a)\models \psi$ iff $((V^b)^{c\leftarrow u},N^b)\models \psi$ for all $\psi\in\Psi$. By Lemma~\ref{lemma:bicond}, $((V)^{(c^1,c^2)\leftarrow (v,u)},N)\models \{ +(+\psi)^{\pi_1} \Leftrightarrow + (+\psi)^{\pi_2} \mid \psi\in\Psi\} $. By satisfaction condition, $((V)^{(z,c^1,c^2)\leftarrow (v,v,u)},N)\models \{ +(+\psi)^{\pi_1} \Leftrightarrow + (+\psi)^{\pi_2} \mid \psi\in\Psi\} $.
By satisfaction condition, since $(V,N)\models T_D$, $((V)^{(z,c^1,c^2)\leftarrow (v,v,u)},N)\models T_D$. Since $(V^{z\leftarrow v},N)\models p$, by satisfaction condition, $((V)^{(z,c^1,c^2)\leftarrow (v,v,u)},N)\models p$. Since $v\in \pi_1^V$, $u\in \pi_2^V$ and the interpretations of $z$ and $c^1$ are $v$, we obtain $((V)^{(z,c^1,c^2)\leftarrow (v,v,u)},N)\models \{(\at{c^1}\pi_1),(\at{c^2}\pi_2),(z= c^1)\}$. \end{itemize} By compactness, $T_{D\cup\{c\}} \cup p \cup \{\at{z} \pi_1 \wedge z = c^1 \}$ is consistent, where $T_{D\cup\{c\}}$ is the set of all sentences from $T_C$ which contain only constants from $D^1\cup D^2 \cup \{c^1:\to \mathtt{n}, c^2:\to\mathtt{n}\}$. Now let $E\subset C$ be any proper subset which includes $D$. We define $E^i\coloneqq \chi_i^C(E)$ for each $i\in\{1,2\}$. Assuming that $T_E\cup p\cup \{\at{z}\pi_1\wedge z=c^1\}$ is consistent, we prove that $T_{E\cup\{k\}}\cup p \cup \{\at{z}\pi_1\wedge z = c^1 \}$ is consistent for any $k\in C_\mathtt{n}\setminus E_\mathtt{n}$. The proof is similar to the one above. By compactness, $T_C\cup p \cup \{\at{z}\pi_1\wedge z = c^1 \}$ is consistent. Since $p\subseteq\mathtt{Sen}(\Delta^\diamond(C^\diamond,z))$ is an arbitrary set of cardinality strictly less than $\alpha$ consistent with $T_C$, it follows that $T_C$ $\alpha$-omits $\Gamma_\mathtt{n}$. \end{proof} \begin{notation}[Rigid types] Let $z^1$ be a variable of sort $s^1\in S^\mathtt{r}_1$ and $z^2$ be a variable of sort $s^2\in S^\mathtt{r}_2$. For each $i\in\{1,2\}$, we define a type in the variable $z^i$: \begin{center} $\Gamma_{s^i}\coloneqq \{ z^i\neq c^i \mid c^i:\to s^i\in C^i\}$. \end{center} \end{notation} Notice that in a Kripke structure $(V,N)$ over $\Delta^\diamond(C^\diamond)$ which omits $\Gamma_{s^i}$, the carrier sets corresponding to the sort $s^i$ are reachable by the constants of sort $s^i$ in $C^i$, where $i\in\{1,2\}$. \begin{proposition} \label{prop:omit-r} $T_C$ $\alpha$-omits both types $\Gamma_{s^1}$ and $\Gamma_{s^2}$.
\end{proposition} \begin{proof} We show that $T_C$ $\alpha$-omits $\Gamma_{s^1}$, since showing that $T_C$ $\alpha$-omits $\Gamma_{s^2}$ is similar. Moreover, we focus on the case when $s^1\in\chi_1(S^\mathtt{r})$, since the case $s^1\not\in\chi_1(S^\mathtt{r})$ is easy. Let $p\subseteq\mathtt{Sen}(\Delta^\diamond(C^\diamond,z^1))$ be a set of sentences such that $\mathtt{card}(p)<\alpha$ and $T_C\cup p$ is consistent. We define the subset of constants $D\subseteq C$ as follows: (a)~for all rigid sorts $s\in \chi_1^{-1}(s^1)$, the set $D_s$ consists of all constants $c:\to s\in C$ such that either $c^1$ or $c^2$ occurs in $p$, and (b)~for all rigid sorts $s\not \in \chi_1^{-1}(s^1)$, we have $D_s\coloneqq C_s$. For each $i\in\{1,2\}$, we define $D^i\coloneqq \chi_i(D)\cup \{c: \to s\in C^i \mid s\in S_i^\mathtt{r}\setminus \chi_i(S^\mathtt{r}) \}$. It follows that $p\subseteq \mathtt{Sen}(\Delta^\diamond(D^\diamond,z^1))$, where $(\Delta^\diamond(D^\diamond,z^1),\Phi_D^\diamond)$ is the relativized union of $\Delta^1(D^1)$ and $\Delta^2(D^2)$. Let $T_D$ be the set of all sentences from $T_C$ which contain only constants from $D^1$ and $D^2$. Since $T_C\cup p$ is consistent, its subset $T_D\cup p$ is consistent too. Let $(V,N)$ be a Kripke structure over $\Delta^\diamond(D^\diamond)$ such that $(V,N)\models T_D$ and $(V,N^{z^1\leftarrow e})\models p$ for some possible world $v\in V_{\pi_1}$ and element $e\in N_{v,s^1}$. According to our conventions, $(V^1,N^1)= (V,N)\!\upharpoonright\!_{\pi_1}$, $(V^a,N^a)= (V,N)\!\upharpoonright\!_{\chi_1^D}$, $(V^2,N^2)= (V,N)\!\upharpoonright\!_{\pi_2}$ and $(V^b,N^b)= (V,N)\!\upharpoonright\!_{\chi_2^D}$, where $\chi_i^D$ denotes the restriction of $\chi_i^C$ to $\Delta(D)$ for each $i\in\{1,2\}$. \begin{enumerate}[1)] \item \label{prop:omit-r1} Let $s\in \chi_1^{-1}(s^1)$. Since $\mathtt{card}(D_s)<\alpha=\mathtt{card}(C_s)$, there exists $c \in C_s\setminus D_s$.
Let $\Psi\subseteq \mathtt{Sen}(\Delta(D,c))\setminus\mathtt{Sen}(\Delta(D))$ be a finite set of sentences. We show that $T_D \cup p \cup \{ z^1 = c^1 \} \cup \{ +(+\psi)^{\pi_1} \Leftrightarrow + (+\psi)^{\pi_2} \mid \psi\in\Psi\}$ is consistent, where $c^1:\to \chi_1(s)$ is the translation of $c:\to s$ along $\chi_1^C$. The proof is similar to the first part of the proof of Proposition~\ref{prop:omit-n}. By compactness, $T_{D\cup\{c\}} \cup p \cup \{ z^1 = c^1 \}$ is consistent, where $T_{D\cup\{c\}}$ is the set of all sentences from $T_C$ which contain only constants from $D^1\cup\{c^1:\to \chi_1(s) \}$ and $D^2\cup\{c^2:\to \chi_2(s)\}$. \item \label{prop:omit-r2} Now let $E\subseteq C$ be an arbitrary subset of constants which includes $D$. We define $E^i\coloneqq \chi_i^C(E)\cup \{c: \to s\in C^i \mid s\in S_i^\mathtt{r}\setminus \chi_i(S^\mathtt{r}) \}$ for each $i\in\{1,2\}$. Assuming that $T_E\cup p\cup \{ z^1=c^1\}$ is consistent, we prove that $T_{E\cup\{d\}}\cup p \cup \{ z^1 = c^1 \}$ is consistent for any $d:\to s\in C\setminus E$. The proof is similar to the one above. \end{enumerate} From (\ref{prop:omit-r1}) and (\ref{prop:omit-r2}), by compactness, $T_C\cup p \cup \{ z^1 = c^1 \}$ is consistent. Since $p\subseteq\mathtt{Sen}(\Delta^\diamond(C^\diamond,z^1))$ is an arbitrary set of cardinality strictly less than $\alpha$ and consistent with $T_C$, it follows that $T_C$ $\alpha$-omits $\Gamma_{s^1}$. \end{proof} All the preliminary results for proving the Robinson consistency property are now in place. \begin{theorem}[Robinson consistency] \label{th:robinson} Recall that $\chi_2$ is injective on sorts and nominals. In addition, assume that $\chi_2$ protects flexible symbols. Let $\Delta^1\stackrel{\upsilon_1}\to \Delta'\stackrel{\upsilon_2}\leftarrow\Delta^2$ be the pushout of $\Delta^1\stackrel{\chi_1}\leftarrow \Delta\stackrel{\chi_2}\to\Delta^2$. Then $\upsilon_1(\Phi^1)\cup \upsilon_2(\Phi^2)$ is consistent.
\end{theorem} \begin{figure}[h]\centering \begin{tikzcd} \Delta(C) \ar[rr,"\chi_1^C"] \ar[rd,"\chi_2^C "] & & \Delta^1(C^1) \ar[rd,dotted,"(\_)^{\pi_1}"] & \\ & \Delta^2(C^2) \ar[rr,dotted,"(\_)^{\pi_2}", near start] & & \Delta^\diamond(C^\diamond) \\ & & & \\ \Delta \ar[uuu,hook] \ar[rr,"\chi_1" near start,dashed] \ar[rd,swap, "\chi_2"] & & \Delta^1 \ar[uuu,dashed,hook] \ar[dr,"\upsilon_1"]&\\ & \Delta^2 \ar[uuu,hook] \ar[rr,"\upsilon_2"] & & \Delta' \\ \end{tikzcd} \caption{} \end{figure} \begin{proof} By Proposition~\ref{prop:omit-n}, $T_C$ $\alpha$-omits $\Gamma_\mathtt{n}$. By Proposition~\ref{prop:omit-r}, $T_C$ $\alpha$-omits $\Gamma_{s^1}$ and $\Gamma_{s^2}$ for all rigid sorts $s^1\in S^\mathtt{r}_1$ and $s^2\in S^\mathtt{r}_2$. By~\cite[Extended Omitting Types Theorem]{gai-hott}, there exists $(V,N)\in|\mathtt{Mod}(\Delta^\diamond(C^\diamond))|$ such that $(V,N)\models T_C$ and $(V,N)$ omits $\Gamma_\mathtt{n}$, $\Gamma_{s^1}$ and $\Gamma_{s^2}$ for all rigid sorts $s^1\in S^\mathtt{r}_1$ and $s^2\in S^\mathtt{r}_2$. Since $(V,N)\models \Phi^{\pi_1}\cup \Phi^{\pi_2}$, by satisfaction condition, $(V^1,N^1)\models \Phi^1$ and $(V^2,N^2)\models \Phi^2$. Since $\Phi^1\models \chi_1(\Phi)$ and $\Phi^2\models \chi_2(\Phi)$, by satisfaction condition, $(V^a,N^a)\models\Phi$ and $(V^b,N^b)\models\Phi$. Since $\Phi$ is maximally consistent, $(V^a,N^a)\equiv(V^b,N^b)$. Since $(V,N)$ omits $\Gamma_\mathtt{n}$ and $\Gamma_{s^1}$ for all rigid sorts $s^1\in S^\mathtt{r}_1$, $(V^1,N^1)$ is reachable by $C^1$. Since $(V,N)$ omits $\Gamma_\mathtt{n}$ and $\Gamma_{s^2}$ for all rigid sorts $s^2\in S^\mathtt{r}_2$, $(V^2,N^2)$ is reachable by $C^2$. Since $(V^a,N^a)\equiv(V^b,N^b)$, $(V^a,N^a)$ is reachable by $C$ and $(V^2,N^2)$ is reachable by $C^2$, by Lemma~\ref{lemma:lifting}, $(U^2,R^2)\equiv (V^2,N^2)$ for some $\chi_2^C$-expansion $(U^2,R^2)$ of $(V^a,N^a)$.
We define $(W^1,M^1)\coloneqq (V^1,N^1)\!\upharpoonright\!_{\Delta^1}$ and $(W^2,M^2)\coloneqq (U^2,R^2)\!\upharpoonright\!_{\Delta^2}$. Since $(V^1,N^1)\models \Phi^1$ and $(U^2,R^2)\models \Phi^2$, by satisfaction condition, $(W^1,M^1)\models \Phi^1$ and $(W^2,M^2)\models \Phi^2$. Since $\Delta^1\stackrel{\upsilon_1}\to \Delta'\stackrel{\upsilon_2}\leftarrow\Delta^2$ is the pushout of $\Delta^1\stackrel{\chi_1}\leftarrow \Delta\stackrel{\chi_2}\to\Delta^2$ and $(W^1,M^1)\!\upharpoonright\!_{\chi_1}=(W,M)=(W^2,M^2)\!\upharpoonright\!_{\chi_2}$, by \cite[Example 3.5]{dia-msc}, there exists a unique Kripke structure $(W',M')\in|\mathtt{Mod}(\Delta')|$ such that ${(W',M')\!\upharpoonright\!_{\upsilon_1}}=(W^1,M^1)$ and $(W',M')\!\upharpoonright\!_{\upsilon_2}=(W^2,M^2)$. By satisfaction condition, $(W',M')\models \upsilon_1(\Phi^1)\cup \upsilon_2(\Phi^2)$. \end{proof} In many-sorted first-order logic, if one of the signature morphisms in the span is injective on sorts, then the corresponding pushout is a CI square~\cite{DBLP:journals/sLogica/GainaP07}. Lemma~\ref{lemma:counter-1} shows that in ${\mathsf{HFOL}}$, this condition is also necessary. The following two examples focus on the second condition, the protection of flexible symbols. \begin{example}\label{ex:counter-2} Let $\Delta^1\stackrel{\chi_1}\hookleftarrow \Delta\stackrel{\chi_2}\hookrightarrow\Delta^2$ be a span of inclusions such that \begin{itemize} \item $\Delta$ has one nominal $\{k\}$, one flexible sort $\{s\}$ and one constant $\{c:\to s\}$; \item $\Delta^1$ has two nominals $\{k,k_1\}$, one rigid sort $\{s\}$ and one flexible constant $\{c:\to s\}$; \item $\Delta^2$ has two nominals $\{k,k_2\}$, one flexible sort $s$, and two flexible constants $\{c:\to s, c_2:\to s\}$.
\end{itemize} Let $\Delta^1\stackrel{\upsilon_1}\hookrightarrow \Delta' \stackrel{\upsilon_2}\hookleftarrow\Delta^2$ be a pushout of the above span such that \begin{itemize} \item $\Delta'$ consists of three nominals $\{k,k_1,k_2\}$, one rigid sort $\{s\}$, and two flexible constants $\{c:\to s, c_2:\to s\}$. \end{itemize} \end{example} In Example~\ref{ex:counter-2}, the signature morphism $\chi_2$ adds a new constant $c_2:\to s$ on the flexible sort $s$, which means that flexible symbols are not protected. The following lemma shows that the pushout constructed above is not a CI square. \begin{lemma}\label{lemma:counter-2} The pushout described in Example~\ref{ex:counter-2} is not a CI square. \end{lemma} \begin{proof} Let $\Phi_1\coloneqq\{\Forall{y}c=y\}\subseteq\mathtt{Sen}(\Delta^1)$ and $\Phi_2\coloneqq\{c_2=c\}\subseteq\mathtt{Sen}(\Delta^2)$. It is straightforward to show that $\Phi_1\models \Phi_2$. Suppose towards a contradiction that there exists $\Phi\subseteq\mathtt{Sen}(\Delta)$ such that $\Phi_1\models\Phi$ and $\Phi\models\Phi_2$. Let $(W^1,M^1)$ be the Kripke structure over $\Delta^1$ that consists of one possible world $w$, which means that $W^1_k=W^1_{k_1}=w$, and $M^1_w$ is the single-sorted algebra consisting of one element $M^1_{w,s}=\{e\}$, which means that $M^1_{w,c}=e$. Obviously, $(W^1,M^1)\models \Forall{y}c=y$, and since $\Phi_1\models\Phi$, we get $(W^1,M^1)\models\Phi$. By the satisfaction condition, $(W^1,M^1)\!\upharpoonright\!_\Delta\models \Phi$. Let $(W,M)$ be the Kripke structure over $\Delta$ obtained from $(W^1,M^1)\!\upharpoonright\!_\Delta$ by adding a new flexible element $d$ of sort $s$. Since $d$ is an unreachable element, by Lemma~\ref{lemma:reach-equiv}, $(W,M)\equiv(W^1,M^1)\!\upharpoonright\!_\Delta$. It follows that $(W,M)\models\Phi$. Let $(W^2,M^2)$ be the expansion of $(W,M)$ to $\Delta^2$ which interprets $c_2:\to s$ as $d$. By the satisfaction condition, $(W^2,M^2)\models\Phi$.
Since $\Phi\models \Phi_2$, we have $(W^2,M^2)\models \Phi_2$, which is a contradiction, as $M^2_{w,c}=e\neq d = M^2_{w,c_2}$. \end{proof} We give another example of a pushout which is not a CI square. \begin{example}\label{ex:counter-3} Let $\Delta^1\stackrel{\chi_1}\hookleftarrow \Delta\stackrel{\chi_2}\hookrightarrow\Delta^2$ be a span of inclusions such that \begin{itemize} \item~$\Delta$ has one nominal $\{k\}$, one flexible sort $\{Nat\}$ and two flexible function symbols $\{0:\to Nat,succ:Nat \to Nat\}$; \item~$\Delta^1$ has two nominals $\{k,k_1\}$, one rigid sort $\{Nat\}$ and three flexible function symbols $\{0: \to Nat, succ:Nat \to Nat, \texttt{\_+\_}: Nat~Nat\to Nat \}$; \item~$\Delta^2$ has two nominals $\{k,k_2\}$, two rigid sorts $\{Nat,List\}$ and four flexible function symbols $\{0:\to Nat, succ:Nat \to Nat, nil:\to List, \texttt{\_|\_}: Nat~List\to List\}$. \end{itemize} Let $\Delta^1\stackrel{\upsilon_1}\hookrightarrow \Delta' \stackrel{\upsilon_2}\hookleftarrow\Delta^2$ be a pushout of the above span such that \begin{itemize} \item $\Delta'$ has three nominals $\{k,k_1,k_2\}$, two rigid sorts $\{Nat,List\}$ and five flexible function symbols $\{0:\to Nat, succ:Nat\to Nat, \texttt{\_+\_}: Nat~Nat\to Nat, nil:\to List, \texttt{\_|\_}:Nat~List\to List\}$. \end{itemize} \end{example} In Example~\ref{ex:counter-3}, the signature morphism $\chi_2$ does not preserve the flexible sort $Nat$. \begin{lemma}\label{lemma:counter-3} The pushout described in Example~\ref{ex:counter-3} is not a CI square. \end{lemma} \begin{proof} Let $\Phi^1$ be the set of $\Delta^1$-sentences which consists of \begin{itemize} \item $\Forall{{x:Nat}}succ(succ(x))=x$, \item $\Forall{{x:Nat}}0 + x = x$, and \item $\Forall{{x:Nat},{y:Nat}} succ(y) + x = succ(y + x)$. \end{itemize} Let $\Phi^2\coloneqq \{\Forall{x:Nat}succ(succ(x))=x \}$. Suppose towards a contradiction that there exists a set of $\Delta$-sentences $\Phi$ such that $\Phi^1\models\Phi$ and $\Phi\models\Phi^2$.
Let $(W^1,M^1)$ be a Kripke structure over $\Delta^1$ that consists of two possible worlds $\{w_1,w_2\}$, where both $M^1_{w_1}$ and $M^1_{w_2}$ are the quotient algebra $\mathbb{Z}_2$. Obviously, $(W^1,M^1)\models \Phi^1$. Since $\Phi^1\models \Phi$, $(W^1,M^1)\models \Phi$. By the satisfaction condition, $(W^1,M^1)\!\upharpoonright\!_\Delta\models\Phi$. Let $(W,M)$ be the Kripke structure obtained from $(W^1,M^1)\!\upharpoonright\!_\Delta$ by adding a new element $\widehat{2}$ of sort $Nat$ such that $M_{w_i,succ}(\widehat{2})=\widehat{0}$ for each $i\in\{1,2\}$. Since $\widehat{2}$ is an unreachable element, by Lemma~\ref{lemma:reach-equiv}, $(W,M)\equiv(W^1,M^1)\!\upharpoonright\!_\Delta$. It follows that $(W,M)\models\Phi$. Let $(W^2,M^2)$ be the expansion of $(W,M)$ to $\Delta^2$ which interprets $List$ in both worlds as the set of all lists with elements from $\{\widehat{0},\widehat{1},\widehat{2}\}$. By the satisfaction condition, $(W^2,M^2)\models \Phi$. Since $M_{w_1,succ} (M_{w_1,succ}(\widehat{2})) = M_{w_1,succ}(\widehat{0}) = \widehat{1} \neq \widehat{2}$, we have $(W^2,M^2)\not \models \Phi^2$, which contradicts $\Phi\models\Phi^2$. \end{proof} \section{Conclusions} Lemma~\ref{lemma:counter-2} and Lemma~\ref{lemma:counter-3} show that not only injectivity on sorts but also protection of flexible symbols is necessary for interpolation in ${\mathsf{HFOL}}$. Recall that ${\mathsf{RFOHL}}$ signatures form a subcategory of ${\mathsf{HFOL}}$ signatures. It is not difficult to check that the subcategory of ${\mathsf{RFOHL}}$ signatures is closed under pushouts. It follows that Theorem~\ref{th:robinson} is applicable to ${\mathsf{RFOHL}}$. Since intersection-union squares of signature morphisms are, in particular, pushouts, our results cover the ones obtained in \cite{ArecesBM03}. Similarly, ${\mathsf{HPL}}$ signatures form a subcategory of ${\mathsf{HFOL}}$ signatures. It is not difficult to check that the subcategory of ${\mathsf{HPL}}$ signatures is closed under pushouts. It follows that Theorem~\ref{th:robinson} is applicable to ${\mathsf{HPL}}$.
Since Theorem~\ref{th:robinson} is derived from the Omitting Types Theorem, our results rely on quantification over possible worlds. Therefore, the present work does not cover the interpolation result from \cite{ArecesBM01}, which is applicable to hybrid propositional logic without quantification. ${\mathsf{HFOL}}$ and ${\mathsf{HFOLS}}$ have the same signatures and Kripke structures. By \cite[Lemma 2.20]{gai-godel}, ${\mathsf{HFOLS}}$ and ${\mathsf{HFOL}}$ have the same expressive power. The relationship between ${\mathsf{HFOL}}$ and ${\mathsf{HFOLS}}$ is similar to the relationship between first-order logic and unnested first-order logic, which allows only terms of depth one~\cite{DBLP:books/daglib/0080659}. Therefore, a square of ${\mathsf{HFOL}}$ signature morphisms is a CI square in ${\mathsf{HFOL}}$ iff it is a CI square in ${\mathsf{HFOLS}}$. Proposition~\ref{def:rob} does not give an effective construction of an interpolant; finding one remains an open problem. \bibliographystyle{aiml22}
1,941,325,220,296
arxiv
\section{#1}\setcounter{equation}{0}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{displaymath}}{\begin{displaymath}} \newcommand{\end{displaymath}}{\end{displaymath}} \newcommand{\alpha}{\alpha} \newcommand{$B \to X_s e^+ e^-$ }{$B \to X_s e^+ e^-$ } \newcommand{$b \to s e^+ e^-$ }{$b \to s e^+ e^-$ } \newcommand{$b \to c e \bar\nu $ }{$b \to c e \bar\nu $ } \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{itemize}}{\begin{itemize}} \newcommand{\end{itemize}}{\end{itemize}} \newcommand{{\cal O}}{{\cal O}} \newcommand{{\cal O}}{{\cal O}} \newcommand{\frac}{\frac} \newcommand{\tilde{C}}{\tilde{C}} \newcommand{K^+\rightarrow\pi^+\nu\bar\nu}{K^+\rightarrow\pi^+\nu\bar\nu} \newcommand{K^+\rightarrow\pi^+\nu\bar\nu}{K^+\rightarrow\pi^+\nu\bar\nu} \newcommand{K_{\rm L}\rightarrow\pi^0\nu\bar\nu}{K_{\rm L}\rightarrow\pi^0\nu\bar\nu} \newcommand{K_{\rm L}\rightarrow\pi^0\nu\bar\nu}{K_{\rm L}\rightarrow\pi^0\nu\bar\nu} \newcommand{K_{\rm L} \to \mu^+\mu^-}{K_{\rm L} \to \mu^+\mu^-} \newcommand{K_{\rm L} \to \mu^+ \mu^-}{K_{\rm L} \to \mu^+ \mu^-} \newcommand{K_{\rm L} \to \pi^0 e^+ e^-}{K_{\rm L} \to \pi^0 e^+ e^-} \def\frac{\as}{4\pi}{\frac{\alpha_s}{4\pi}} \def\gamma_5{\gamma_5} \newcommand{\IM\lambda_t}{{\rm Im}\lambda_t} \newcommand{\RE\lambda_t}{{\rm Re}\lambda_t} \newcommand{\RE\lambda_c}{{\rm Re}\lambda_c} \renewcommand{\baselinestretch}{1.3} \textwidth=17.5cm \textheight=23.3cm \oddsidemargin -0.3cm \topmargin -1.5cm \baselineskip -3.5cm \parskip 0.3cm \tolerance=10000 \parindent 0pt \def\ltap{\raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$}} \def\gtap{\raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$}} \DeclareUnicodeCharacter{2212}{-} \begin{document} \vskip 30pt \begin{center} {\Large \bf Reappraisal of the minimal flavoured $Z^{\prime}$ scenario} \\ \vspace*{1cm} \renewcommand{\thefootnote}{\fnsymbol{footnote}} {{\sf ~Tirtha 
Sankar Ray\footnote{email: [email protected]}}, {\sf ~Avirup Shaw\footnote{email: [email protected]}} }\\ \vspace{10pt} {{\em Department of Physics, Indian Institute of Technology Kharagpur, Kharagpur 721302, India}} \normalsize \end{center} \begin{abstract} \noindent Recent results from the intensity frontier indicate the tantalizing possibility of a violation of lepton flavour universality. In light of this we revisit the minimal phenomenological $Z'$ model, taking into account both vectorial and axial-vectorial flavour violating couplings to the charged leptons. We make a systematic study to identify the minimal framework that can simultaneously explain the recent results on the anomalous magnetic moments of the muon and the electron while remaining in consonance with $R_{K^{(*)}}$, $B^0_s-\bar{B^0_s}$ mixing and the angular observables in the $B^+\to K^{+*} \mu^+\mu^-$ channel reported by the LHCb collaboration. We demonstrate that the neutrino trident data imply a further ${\rm SU(2)}_L$ violation in the leptonic couplings of the exotic $Z'$. \vskip 5pt \noindent \end{abstract} \setcounter{footnote}{0} \renewcommand{\thefootnote}{\arabic{footnote}} \section{Introduction}\label{intro} With continued improvement in resolution, the consistency of anomalous results from the intensity frontier may be the most significant indication of Beyond Standard Model (BSM) physics that we have today in hard experimental data. Recently, the measurement of $(g-2)_\mu$ at the Fermi National Accelerator Laboratory (FNAL) \cite{Muong-2:2021ojo} has added to this intrigue. The dramatic improvement in the resolution of the recent results has pushed the deviation from the Standard Model (SM) prediction to $4.2~\sigma$, with $\Delta a_{\mu} = a_\mu^{\rm exp} - a_\mu^{\rm SM} \sim (251 \pm 59)\times 10^{-11}.$ Notwithstanding the recent lattice results \cite{Borsanyi:2020mff}, this deviation has withstood scrutiny for some time now.
Interestingly, if one compares this with the status of the $(g-2)_e$ measurement, based on the Lawrence Berkeley National Laboratory (LBNL) determination of the fine structure constant using cesium \cite{Parker:2018vye}, one obtains a moderate deviation from the SM at $2.4~\sigma$, with the opposite pull, $\Delta a_e \sim (-8.8 \pm 3.6)\times 10^{-13}$ \cite{Hanneke:2008tm}. This sign flip, if true, cannot be explained by the simple mass scaling $\Delta a_e /\Delta a_{\mu} \sim m_e^2/m_\mu^2$ and should be construed as an indication of Lepton Flavour Universality Violation (LFUV) in any underlying New Physics (NP). One can trace the imprint of this violation of lepton flavour universality in the rare decays of the $B$-mesons, giving a more compelling experimental basis for the LFUV hypothesis. Indications of such LFUV can be seen in the flavour changing neutral current (FCNC) induced decays involving the $b\to sl^+l^-$ transitions. These are indeed easy hunting grounds in flavour physics searches for NP, owing to the Glashow--Iliopoulos--Maiani (GIM) suppression of the tree-level contributions to them within the SM. In this context the observables $R_{K^{(*)}}$ are of significance, as they are relatively independent of the form factor uncertainties \cite{SHIFMAN1979385, Colangelo:2000dp}. SM predictions \cite{Descotes-Genon:2015uva,Bordone:2016gaq,Capdevila:2017bsm} of these parameters exhibit $2-3~\sigma $ deviations from the corresponding experimental results reported by LHCb \cite{LHCb:2021trn, Aaij:2017vbb}, tying in nicely with the paradigm of LFUV in the underlying physics.
Thus it is not surprising that there exists an extensive literature studying various motivated BSM frameworks that try to explain these anomalous results individually or in combinations \cite{Gauld:2013qba,Glashow:2014iga,Bhattacharya:2014wla, Crivellin:2015mga, Crivellin:2015era,Celis:2015ara,Sierra:2015fma,Belanger:2015nma,Gripaios:2015gra, Allanach:2015gkd,Fuyuto:2015gmk,Chiang:2016qov,Boucenna:2016wpr,Boucenna:2016qad,Celis:2016ayl, Altmannshofer:2016jzy,Bhattacharya:2016mcc,Crivellin:2016ejn,Becirevic:2016zri,GarciaGarcia:2016nvr,Bhatia:2017tgo,Ko:2017yrd,Chen:2017usq,Baek:2017sew, King:2017anf, King:2018fcg, Dasgupta:2018nzt, Biswas:2019twf, Dwivedi:2019uqd, CarcamoHernandez:2019ydc, Bodas:2021fsy, Biswas:2021dan, Hiller:2014yaa,Biswas:2014gga,Gripaios:2014tna,Sahoo:2015wya,Becirevic:2015asa, Alonso:2015sja,Calibbi:2015kma, Huang:2015vpt,Pas:2015hca,Bauer:2015knc,Fajfer:2015ycq,Barbieri:2015yvd, Sahoo:2015pzk, Dorsner:2016wpm,Sahoo:2016nvx,Das:2016vkr,Chen:2016dip,Becirevic:2016oho,Becirevic:2016yqi,Sahoo:2016pet,Barbieri:2016las,Cox:2016epl, Ma:2001md, Baek:2001kca, Heeck:2011wj, Harigaya:2013twa, Altmannshofer:2016brv, Biswas:2016yan, Biswas:2016yjr, Banerjee:2018eaf, Huang:2020ris, Dinh:2020inx, Chakraborti:2021kkr, Chakraborti:2021dli}. Moreover, after the announcement of the updated results \cite{LHCb:2021trn, Muong-2:2021ojo}, several authors and collaborations have published articles on various existing and/or new BSM scenarios. Among the different categories of BSM frameworks, scenarios with a non-standard massive neutral boson ($Z'$) are very popular and effective for the combined explanation of $R_K$ (including other $b\to s ll$ anomalies) and $(g-2)_\mu$ \cite{Arcadi:2021cwg, Alvarado:2021nxy, Davighi:2021oel, Darme:2021qzw, Lee:2021ndf, Greljo:2021npi,Wang:2021uqz, Navarro:2021sfb, Bause:2021prv, Ko:2021lpx}.
In the present paper we revisit one of the simplest of these frameworks, which extends the SM to include an additional massive abelian gauge boson $Z'$ with flavour violating couplings. With Occam's razor in hand we build a bottom-up model of the $Z'$ guided solely by the experimental data, remaining agnostic to any UV completion. Our approach is to pare down to the minimal version of the model that is simultaneously in consonance with the various experimental results at the low energy intensity frontier, with focus on observables that indicate LFUV. Our notion of simplicity will be guided by economy of new parameters rather than by any symmetry or embedding considerations. Starting with the simplest two parameter model of $Z'$ mass and universal couplings, we systematically build this phenomenological model of the $Z'$ from the bottom up, adding new parameters as the experimental data demand. In this context, we demonstrate that the simultaneous inclusion of flavour violating vectorial and axial-vectorial couplings to the leptons is a prudent choice in constructing such minimal models. Once we establish a \textit{data driven} minimal phenomenological $Z'$ model for LFUV, we explore the region of parameter space of such a setup that is in agreement with the relevant experimental results, including $B^0_s - \bar{B}^0_s$ mixing \cite{Zyla:2020zbs}, recent LHCb results on the angular observables in the $B^+ \to K^{+*}\mu^+\mu^-$ channel \cite{LHCb:2020gog} and some other constraints related to $b\to s$ transitions \cite {LHCb:2020pcv, LHCb:2013pra, LHCb:2014vgu, Belle:2017oht}. We have also taken into account the constraints from collider and fixed target experiments \cite{Zyla:2020zbs, Mishra:1991bv}. The article is organised as follows. In Sec.~\ref{model} we explore different effective $Z'$ scenarios in the context of the recent data on $(g-2)_\mu$ and $(g-2)_e$.
Then in Sec.~\ref{btosll_ano} we address the different anomalies related to $b\to s ll$ transitions in a particularly economical effective $Z'$ scenario. In Sec.~\ref{angular} we discuss the new LHCb results on the angular observables of the decay $B^+\to K^{+*}\mu^+\mu^-$. We discuss our numerical results in Sec.~\ref{neu_res}. Then in Sec.~\ref{excons} we comment on some relevant constraints. In Sec.~\ref{plus_g2e} we re-examine the scenarios considered in this article in the context of the latest experimental value of $(g-2)_e$, for which $\Delta a_e$ is positive. Finally, in Sec.~\ref{concl} we conclude. \section{Resolution of the anomalous magnetic moment of charged leptons in minimal $Z'$ scenario(s)}\label{model} The magnetic moment $\vec{\mathbb{M}}$ of a charged lepton ($l$) can be defined in terms of its spin $\vec{\mathbb{S}}$ and gyromagnetic ratio ($g_{l}$) using the Dirac equation as follows \begin{eqnarray} \vec{\mathbb{M}}= g_{l} \dfrac{e}{2\,m_l} \vec{\mathbb{S}}\,. \label{mug2} \end{eqnarray} At tree level $g_{l}$ is equal to 2; quantum corrections provide a marginal shift within the SM. In order to quantify this deviation of $g_{l}$ from its tree-level value one defines the parameter, \begin{eqnarray} a_{l} = \dfrac{g_{l}-2}{2}\,.
\end{eqnarray} \begin{figure}[H] \begin{center} \includegraphics[height=4.5cm,width=5cm,angle=0]{g2_Zp} \caption{Relevant Feynman diagram that contributes to $(g-2)_l$ in addition to the SM.} \label{muong2} \end{center} \end{figure} We consider the extension of the SM incorporating a massive $Z'$ of mass $M_{Z'}$ with flavoured vectorial and axial-vectorial couplings, as given by the generic effective Lagrangian, \begin{equation}\label{aZp_bZp_same} \mathcal{L}\in \bar{l}\gamma^\alpha(a^l_{Z^{\prime}}+\gamma^5 b^l_{Z^{\prime}})l~Z'_{\alpha}\,, \end{equation} where $a^l_{Z^{\prime}}$ and $b^l_{Z^{\prime}}$ (with $l\in \{e, \mu\}$) are the vectorial and axial-vectorial couplings of the light charged leptons to the $Z'$ boson. This leads to an additional contribution to the anomalous magnetic moment of the charged lepton, depicted in Fig.~\ref{muong2}, given by \cite{Gninenko:2001hx, Baek:2001kca, Biswas:2019twf, Biswas:2021dan}, \begin{equation}\label{g2_Z} \Delta a_l^{Z'} = \frac{1}{8\pi^2}~ \left((a^l_{Z'})^2F_{a^l_{Z^{\prime}}}(R_{Z'})-(b^l_{Z'})^2F_{b^l_{Z^{\prime}}}(R_{Z'})\right)\,, \end{equation} with $R_{Z'}\equiv M^2_{Z'}/m^2_{l}$, where $m_l$ is the mass of the charged lepton. The loop functions corresponding to the vectorial and axial-vectorial interactions are given as follows, \begin{eqnarray} F_{a^l_{Z^{\prime}}}(R_{Z'}) &=& \int_0^1 dx\, \frac{2x(1-x)^2}{(1 -x)^2+R_{Z'} x} \;, \\ F_{b^l_{Z^{\prime}}}(R_{Z'}) &=& \int_0^1 dx\,\frac{2x(1-x)(3+x)}{(1 -x)^2+R_{Z'} x}\;. \end{eqnarray} \subsection{The Minimal Model} We now embark on identifying the minimal flavoured model of the $Z'$ that can explain the anomalous magnetic moments of the charged leptons.
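Before proceeding, the loop integrals above can be checked numerically. The following Python sketch (not part of the original analysis; all coupling and mass values are illustrative) evaluates $F_{a}$, $F_{b}$ and $\Delta a_l^{Z'}$ by simple midpoint integration, and makes explicit that a purely vectorial coupling pushes $\Delta a_l$ positive while a purely axial one pushes it negative:

```python
import math

def _midpoint(f, n=20000):
    """Midpoint-rule integral of f over [0, 1]."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

def F_a(R):
    """Vectorial loop function, R = M_Z'^2 / m_l^2."""
    return _midpoint(lambda x: 2*x*(1 - x)**2 / ((1 - x)**2 + R*x))

def F_b(R):
    """Axial-vectorial loop function."""
    return _midpoint(lambda x: 2*x*(1 - x)*(3 + x) / ((1 - x)**2 + R*x))

def delta_a(a, b, M_Zp, m_l):
    """Delta a_l of Eq. (g2_Z): vectorial coupling a, axial coupling b."""
    R = (M_Zp / m_l)**2
    return (a**2 * F_a(R) - b**2 * F_b(R)) / (8 * math.pi**2)

m_mu, m_e = 0.10566, 0.000511            # lepton masses in GeV
# Both loop functions are positive, so the vectorial term raises Delta a_l
# while the axial term lowers it -- the handle used below to generate the
# opposite pulls in the muon and electron sectors.
print(delta_a(1e-3, 0.0, 100.0, m_mu) > 0)   # purely vectorial: positive
print(delta_a(0.0, 1e-3, 100.0, m_e) < 0)    # purely axial: negative
```

In the heavy-$Z'$ limit ($R \gg 1$) one can also verify $F_a(R) \to 2/(3R)$, so the contribution decouples as $1/M_{Z'}^2$.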
Our hunt will be guided by the recent experimental data on the anomalous magnetic moments of the charged leptons, which we now briefly summarise: \begin{itemize} \item In the case of the muon ($\mu$), over the last two decades there has been an enduring deviation ($\gtrapprox 3.5 \sigma$) between the theoretical prediction and the corresponding experimental data. Recently, a measurement of $(g-2)_\mu$ has been reported by FNAL \cite{Muong-2:2021ojo}, \begin{eqnarray} a_{\mu}^{\rm exp} = (116592061\pm 41) \times 10^{-11}\,. \label{mug2exp} \end{eqnarray} The state of the art SM theoretical prediction is \cite{Aoyama:2020ynm} \begin{eqnarray} a_{\mu}^{\rm SM} = (116591810\pm 43) \times 10^{-11}\,. \label{mug2th} \end{eqnarray} This amounts to a disagreement between the SM prediction and experiment at $4.2 \sigma$, parametrised by \begin{eqnarray} \Delta a_{\mu} = a_{\mu}^{\rm exp} - a_{\mu}^{\rm SM} =(251\pm 59) \times 10^{-11}\,. \label{mug2delta} \end{eqnarray} \item Similarly, there exists a milder disagreement if we compare the theoretical and experimental values of $(g-2)$ in the electronic sector. The SM prediction of $(g-2)_e$ \cite{Aoyama:2017uqe} was determined with the value of the fine structure constant evaluated at the Berkeley laboratory by a high precision measurement using cesium atoms~\cite{Parker:2018vye}. This value moderately deviates ($\simeq 2.4\sigma$) from the corresponding experimental result \cite{Hanneke:2008tm} and is parametrised as, \begin{equation} \Delta a_e = a_e^{\rm exp} - a_e^{\rm SM} =(-8.8\pm 3.6)\times 10^{-13}\,. \label{eg2negative} \end{equation} Interestingly, the deviations in the anomalous magnetic moment for the electronic and muonic sectors have opposite signs. This is difficult to account for by a simple mass scaling ($\Delta a_e / \Delta a_\mu \sim m^2_e/m^2_\mu\sim 10^{-5}$).
This can be construed as evidence of lepton flavour non-universality in any underlying NP\footnote{For an explanation of the anomalous magnetic moment of leptons within lepton flavour universality see \cite{Hiller:2019mou}.}. In this paper we systematically study the minimal flavoured $Z'$ model that can simultaneously explain the deviations in the anomalous magnetic moment in the electronic and muonic sectors. \end{itemize} \begin{enumerate} \item First we consider a $Z'$ with a universal vectorial interaction with the charged leptons, with an effective Lagrangian of the form, \begin{equation}\label{aZp_mu1} \mathcal{L}\in \bar{l}\gamma^\alpha(a_{Z^{\prime}})l~Z'_{\alpha}\,, \end{equation} where $l\in (e,\mu)$. This scenario has two free parameters, $M_{Z^{\prime}}$ and $a_{Z^{\prime}}$. With this setup one can tune $a_{Z^{\prime}}$ to explain the anomalous magnetic moment of the muon; however, there is no possibility to explain the relative sign difference between the two leptonic generations. \item We now consider non-zero vectorial ($a_{Z^{\prime}}$) and axial-vectorial ($b_{Z^{\prime}}$) couplings to both the muon and the electron. The effective interaction is given by, \begin{equation}\label{aZp_bZp_same1} \mathcal{L}\in \bar{l}\gamma^\alpha(a_{Z^{\prime}}+\gamma^5 b_{Z^{\prime}})l~Z'_{\alpha}. \end{equation} This extends the number of free parameters to three, viz.\ $a_{Z^{\prime}}$, $b_{Z^{\prime}}$ and $M_{Z^{\prime}}$. This is still unable to simultaneously explain the measured values of $\Delta a_\mu$ and $\Delta a_e$. The regions of parameter space allowed by $\Delta a_\mu$ and $\Delta a_e$ individually are shown in Fig.~\ref{fig:aZbZ} in the $a_{Z^{\prime}}$ vs $b_{Z^{\prime}}$ plane for different values of $M_{Z'}$ in the range $[10^{-3},10^{3}]$ GeV. We obtain no overlap at all. \begin{figure}[t!]
\begin{center} \subfloat[]{\label{fig:aZbZ}\includegraphics[height=7.5cm,width=8.5cm,angle=0]{e_mu}} \subfloat[]{\label{fig:e_mu_a0e_b0mu}\includegraphics[height=7.5cm,width=9.5cm,angle=0]{e_mu_a0e_b0mu}} \caption{Left panel: 1$\sigma$ allowed parameter space from $(g-2)_\mu$ (blue) and $(g-2)_e$ (red) in the $a_{Z^{\prime}}$ vs $b_{Z^{\prime}}$ plane. Right panel: 1$\sigma$ allowed parameter space in the $a^\mu_{Z^{\prime}}$ vs $b^e_{Z^{\prime}}$ plane, with the allowed values of $M_{Z^{\prime}}$ (in GeV).} \end{center} \end{figure} \item Keeping with three parameter models, we now explore the possibility of addressing the anomalous magnetic moments using an interaction of the following form, \begin{equation}\label{aZp_mu} \mathcal{L}\in \bar{\mu}\gamma^\alpha(a^\mu_{Z^{\prime}})\mu~Z'_{\alpha}+\bar{e}\gamma^\alpha(\gamma^5 b^e_{Z^{\prime}})e~Z'_{\alpha}\,. \end{equation} With this combination one can simultaneously explain the $(g-2)_{\mu}$ and $(g-2)_{e}$ data. We show the allowed parameter space in the $a^\mu_{Z^{\prime}}$ vs $b^e_{Z^{\prime}}$ plane, with the values of $M_{Z^{\prime}}$ indicated by the colour code. Expectedly, with increasing values of $a^\mu_{Z^{\prime}}$ and $b^e_{Z^{\prime}}$ a larger $M_{Z^{\prime}}$ is required to reduce the loop effects by suppressing the propagator, as can be read off from Fig.~\ref{fig:e_mu_a0e_b0mu}. \end{enumerate} \section{LFUV From \boldmath$B$-sector}\label{btosll_ano} A synergy of experimental results in the decays of the $B$-meson provides further credence to the emergent paradigm of LFUV.
For example, $R_{K^{(*)}}$ is defined as \cite{Hiller:2003js}, \begin{align}\label{Rth} R_{K^{(*)}} \equiv \frac{\int^{q^2_{\rm max}}_{q^2_{\rm min}} \frac{d\Gamma\left({B} \rightarrow K^{(*)} \mu^+ \mu^-\right)}{d q^2} d q^2} {\int^{q^2_{\rm max}}_{q^2_{\rm min}} \frac{d\Gamma\left({B} \rightarrow K^{(*)} e^+ e^-\right)}{d q^2} d q^2}\;, \end{align} where $q^2$ represents the dilepton invariant mass squared with the limits $q^2_{\rm max}= (m_B-m_{K^{(*)}})^2$, $q^2_{\rm min}= 4m^2_l$, and $m_B$ represents the mass of the $B$-meson. When QED corrections are included in the SM, these ratios are close to one \cite{Isidori:2020acz}. We summarise some of the relevant FCNC related experimental parameters in Table \ref{rkrkstdata}. The imprint of NP in these data is becoming increasingly apparent. \begin{table}[!h] \centering \begin{tabular}{|c|cr|cr|cr|} \hline Observable & SM prediction & & Measurement & & Deviations &\\ \hline $R_K : q^2 = [1.1,6] \, \text{GeV}^2$ & $1.00 \pm 0.01 $& \cite{Descotes-Genon:2015uva,Bordone:2016gaq} & $0.846^{+0.042+0.013}_{-0.039-0.012}$ & \cite{LHCb:2021trn} & 3.1$\sigma$ &\\ \hline $R_{K^*} ^{\rm low}: q^2 = [0.045,1.1] \, \text{GeV}^2$ & $0.92 \pm 0.02$ & \cite{Capdevila:2017bsm} & $0.660^{+0.110}_{-0.070} \pm 0.024$ & \cite{Aaij:2017vbb} & $2.1\sigma-2.3\sigma$ &\\ \hline $R_{K^*}^{\rm central} : q^2 = [1.1,6] \, \text{GeV}^2$ & $1.00 \pm 0.01 $& \cite{Descotes-Genon:2015uva,Bordone:2016gaq} & $0.685^{+0.113}_{-0.069} \pm 0.047$ & \cite{Aaij:2017vbb} & $2.4\sigma-2.5\sigma$ &\\ \hline \end{tabular} \caption{The experimental values of $R_K$ and $R_{K^*}$ along with their SM predictions for different ranges of $q^2$.} \label{rkrkstdata} \end{table} Considering this, we now proceed to check the compatibility of the minimal $Z'$ model discussed in the previous section with the experimental data related to $b\to sll$ transitions.
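As a quick cross-check of the first row of Table~\ref{rkrkstdata}, one can form a naive pull by symmetrising the experimental errors and adding them in quadrature with the SM uncertainty. LHCb's quoted $3.1\sigma$ comes from the full likelihood, so the back-of-the-envelope Python estimate below is only indicative:

```python
import math

# R_K in q^2 = [1.1, 6] GeV^2: SM prediction vs. LHCb measurement.
rk_sm, err_sm = 1.00, 0.01
rk_exp        = 0.846
err_exp = math.hypot(0.042, 0.013)      # stat and syst errors, symmetrised

pull = (rk_sm - rk_exp) / math.hypot(err_exp, err_sm)
print(round(pull, 1))                   # ~3.4 with this crude symmetrisation
```

The slight difference from the quoted 3.1$\sigma$ reflects the asymmetric errors and the non-Gaussian likelihood used in the experimental analysis.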
With this non-universal nature of the couplings of the $Z'$ to $e$ and $\mu$, it is expected that the decay widths for $b\to s\mu^+\mu^-$ and $b\to s e^+e^-$ will differ. This opens up the possibility of exploiting this property to resolve the above mentioned LFUV anomalies. In order to couple to the quark sector we introduce a single new flavour off diagonal interaction with coupling $\textsl{g}_{bs}$, \begin{equation}\label{btos_int} \Delta\mathcal{L}\in \textsl{g}_{bs}(\bar b \gamma^\alpha P_L s) Z'_{\alpha}\,,\end{equation} in addition to the Lagrangian given in Eq.~\ref{aZp_mu}. In the rest of this paper we refer to this minimally flavoured scenario as the MFS. The effective $bsZ'$ interaction implies a new contribution to ${B^0_s}-\bar{B^0_s}$ oscillation at tree level. This can provide a stringent constraint on the parameter space, and thus we incorporate the ${B^0_s}-\bar{B^0_s}$ oscillation constraint in our analysis. In the analysis that follows, our approach is to reconstruct the flavour observables from their proper definitions within the MFS and implement them in our numerical analysis. This may be {\it contrasted} with the approach where fit values of the Wilson Coefficients (WCs) are utilised. \subsection{The \boldmath$B\to K^{(*)}l^+l^-$ transition} The hadronic decay $B\to K^{(*)}l^+l^-$ is driven by the underlying transition $b\to s l^+l^-$ at the quark level.
The effective Hamiltonian at the hadronic scale $Q\sim m_b$ is given by \cite{Buchalla:1995vs} \begin{equation} \centering {\cal H}_{\rm eff}(b\to s l^+l^-) = {\cal H}_{\rm eff}(b\to s\gamma) - \frac{G_{\rm F}}{\sqrt{2}}\frac{\alpha_{\rm em}}{\pi} V_{ts}^* V_{tb} \left[ C_{9}(Q) \mathcal{O}_{9}+ C_{10}(Q) \mathcal{O}_{10} \right]\,, \label{Heff2_at_mu} \end{equation} where \begin{equation}\label{Q9V} \mathcal{O}_{9} = (\bar{s}\gamma^\alpha P_L b) (\bar{l}\gamma_\alpha l)\,, \qquad \mathcal{O}_{10} = (\bar{s}\gamma^\alpha P_L b) (\bar{l}\gamma_\alpha\gamma_5 l)\,, \end{equation} are the two crucial operators for this transition. In the first part of the Hamiltonian (see Eq.~\ref{Heff2_at_mu}) there is no NP contribution, while in the remaining part, along with the SM contribution, there is a NP contribution from tree-level $Z'$ exchange (see Fig.~\ref{Zptree}). Since the $Z'$ is assumed to couple only to the left handed quark current, the corresponding chirality flipped operators are not generated. The WCs corresponding to the operators $\mathcal{O}_{9}$ and $\mathcal{O}_{10}$ contain the tree-level NP contributions from the phenomenological $Z'$ defined in Eqs.~\ref{aZp_mu} and \ref{btos_int}, and are given by \cite{Buras:2012dp}, \begin{eqnarray} C^{\rm NP}_{9} &=& -\frac{\sqrt{2}\pi}{G_F \alpha_\text{em} V_{ts}^* V_{tb}} \frac{\textsl{g}_{bs} a^\mu_{Z^{\prime}}}{M^2_{Z^{\prime}}}\,,\label{C9NP} \\ C^{\rm NP}_{10} &=& -\frac{\sqrt{2}\pi}{G_F \alpha_\text{em} V_{ts}^* V_{tb}}\frac{\textsl{g}_{bs} b^e_{Z^{\prime}}}{M^2_{Z^{\prime}}}\,, \label{C10NP} \end{eqnarray} where $G_F$ is the Fermi constant, $\alpha_\text{em}$ represents the fine structure constant and $V_{ij}$ stands for the Cabibbo--Kobayashi--Maskawa (CKM) matrix elements. \begin{figure}[t!]
\begin{center} \includegraphics[height=3.5cm,width=7cm,angle=0]{btosll} \caption{Tree-level Feynman diagram that contributes to the $b\to sl^+l^-$ transition mediated by the $Z'$ boson.} \label{Zptree} \end{center} \end{figure} \subsubsection{\boldmath$R_K$}\label{ddwk} In light of the recent result on the differential distribution of the $B^+\to K^+l^+l^-$ transition from LHCb \cite{LHCb:2021trn}, in this section we briefly discuss the decay distribution for this transition. The differential branching fraction for $B^+\to K^+l^+l^-$ is written as \cite{Altmannshofer:2014rta} \begin{align}\label{btokdw} \frac{d\Gamma(B^+\to K^+l^+l^-)}{dq^2} = \frac{G_F^2\alpha^2_\text{em}|V_{tb}V_{ts}^*|^2}{2^{10} \pi^5 m_B^3} \lambda^{3/2}(m_B^2,m_{K}^2,q^2) \left( |F_V|^2+|F_A|^2 \right), \end{align} where, \begin{align} \lambda(a,b,c)&= a^2 +b^2 + c^2 - 2 (ab+ bc + ac) \,,\\ F_V(q^2) &= C_{9}^\text{eff}(q^2) f_+(q^2) + \frac{2m_b}{m_B+m_K}C_{7}^\text{eff} f_T(q^2) \,, \\ F_A(q^2) &= C_{10}f_+(q^2) \,. \end{align} For completeness we have given the relevant expressions for the WCs in Appendix \ref{NDR}. $f_+$ and $f_T$ are the relevant form factors; the corresponding details are given in Appendix~\ref{sec:ftfprel}. Using Eqs.~\ref{btokdw} and \ref{Rth} we can evaluate $R_K$. \subsubsection{\boldmath$R_{K^{*}}$}\label{ddwkst} Similarly to the earlier section, in order to determine $R_{K^{*}}$ we calculate the differential decay rate of $B\to K^{0*}l^+l^-$ with respect to $q^2$, which is given as follows \cite{Bobeth:2008ij,Matias:2012xw} \begin{eqnarray} \label{ddwrkst} \frac{d\Gamma(B^0\to K^{0*}l^+l^-)}{dq^2}&=&\frac{1}{4}(3I_1^c+6I_1^s-I_2^c-2I_2^s)\;. \end{eqnarray} Using the above expression and Eq.~\ref{Rth} we can determine the observable $R_{K^{*}}$.
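To get a feel for the sizes involved, the tree-level shifts of Eqs.~\ref{C9NP} and \ref{C10NP} can be evaluated directly. In the Python sketch below, the numerical inputs ($\alpha_{\rm em}$ at the $b$ scale and $|V_{ts}^* V_{tb}|$, magnitude only) and the benchmark couplings are illustrative assumptions, not values fitted in this work:

```python
import math

G_F     = 1.1664e-5     # Fermi constant in GeV^-2
alpha_e = 1 / 133.0     # alpha_em near mu ~ m_b (assumed input)
vts_vtb = 0.04          # |V_ts^* V_tb|, magnitude only (assumed input)

_PREF = -math.sqrt(2) * math.pi / (G_F * alpha_e * vts_vtb)

def C9_NP(g_bs, a_mu, M_Zp):
    """Tree-level Z' shift of C_9 (muonic), following Eq. (C9NP)."""
    return _PREF * g_bs * a_mu / M_Zp**2

def C10_NP(g_bs, b_e, M_Zp):
    """Tree-level Z' shift of C_10 (electronic), following Eq. (C10NP)."""
    return _PREF * g_bs * b_e / M_Zp**2

# Hypothetical benchmark: couplings of a few 10^-3 and M_Z' = 180 GeV give
# an O(1) negative shift of C_9, the size favoured by the b -> s mu mu data.
print(round(C9_NP(5e-3, 5e-3, 180.0), 2))
```

Note the $1/M_{Z'}^2$ scaling: doubling either coupling must be compensated by a factor $\sqrt{2}$ increase in $M_{Z'}$ to leave the shift unchanged.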
The angular coefficients $I^{c,s}_{1,2}$ involved in Eq.~\ref{ddwrkst} are defined as follows \cite{Altmannshofer:2008dz} \begin{eqnarray} I_1^s &=& \frac{(2+\beta_l^2)}{4} \left[|{A_\perp^L}|^2 + |{A_\parallel^L}|^2 + (L\to R) \right] + \frac{4 m_l^2}{q^2} \text{Re}\left({A_\perp^L}^{}{A_\perp^R}^* + {A_\parallel^L}^{}{A_\parallel^R}^*\right), \label{I1s} \\ I_1^c &=& |{A_0^L}|^2 +|{A_0^R}|^2 + \frac{4m_l^2}{q^2} \left[|A_t|^2 + 2\text{Re}({A_0^L}^{}{A_0^R}^*) \right],\label{I1c}\\ I_2^s &=& \frac{ \beta_l^2}{4}\left[ |{A_\perp^L}|^2+ |{A_\parallel^L}|^2 + (L\to R)\right], \label{I2s}\\ I_2^c &=& - \beta_l^2\left[|{A_0^L}|^2 + (L\to R)\right], \label{I2c} \end{eqnarray} with $\beta_l=\sqrt{1-4m_l^2/q^2}$. The functions $A_{L/R,i}$ are the transversity amplitudes, which can be decomposed in terms of the appropriate WCs and form factors. Details of the relevant form factors are provided in Appendix \ref{BSZ:ff}. The WCs ($C_9$ and $C_{10}$) contain the NP contributions from $Z'$ exchange (see Eqs.~\ref{C9NP} and \ref{C10NP}). The complete expressions for the transversity amplitudes are given in Appendix \ref{trvnampli}. \subsection{Constraint from ${B^0_s}-\bar{B^0_s}$ mixing due to tree-level contributions of the $Z^\prime$}\label{bbar_con} In this section we discuss the constraint from the mass difference ($\Delta M_s$) between the ${B^0_s}$ meson mass eigenstates, arising from ${B^0_s}-\bar{B^0_s}$ mixing. The SM contribution to this $\Delta B=2$ transition proceeds through the top quark mediated box diagram \cite{Buras:1990fn, Urban:1997gw} and is numerically given as \cite{Lenz:2019lvd} $(\Delta M_s)_{\rm SM}=(18.77\pm 0.86){\rm ps}^{-1}$, in good agreement with the experimental value \cite{Zyla:2020zbs} $(\Delta M_s)_{\rm exp}=(17.749\pm 0.019\pm 0.007){\rm ps}^{-1}$. \begin{figure}[htbp!]
\begin{center} \includegraphics[height=3.5cm,width=7cm,angle=0]{bs_bs} \caption{Tree-level Feynman diagram that contributes to the ${B^0_s}-\bar{B^0_s}$ transition mediated by the $Z'$ boson.} \label{bs_bs} \end{center} \end{figure} In the minimal flavoured $Z'$ model there exists a tree-level $Z'$ contribution, depicted in Fig.~\ref{bs_bs}, which can be written as \cite{Buras:2012dp} \begin{equation}\label{Zprime1} \Delta M_s(Z^\prime)= \left[\frac{g_{bs}}{V^*_{ts}V_{tb}}\right]^2 \frac{4\tilde r \pi \sin^2\theta_W}{\sqrt{2}G_F\alpha_\text{em} M^2_{Z^\prime}}, \end{equation} with \begin{equation} \tilde r=\frac{C_1^{\rm VLL}(M_{Z^\prime})}{0.985} \eta_6^{6/21}\left[1+1.371\frac{\alpha_s^{(6)}(m_t)}{4\pi}(1-\eta_6)\right], \label{rtilde} \end{equation} where, \begin{equation}\label{equ:WilsonZ} C_1^\text{VLL}(Q)= 1+\frac{\alpha_s}{4\pi}\left(-2\log\frac{M_{Z'}^2}{Q^2}+\frac{11}{3}\right)\;. \end{equation} The above quantity encodes the ${\cal O}(\alpha_s)$ QCD corrections to the tree-level $Z'$ exchange \cite{Buras:2012fs}, and the two factors containing \begin{equation} \eta_6=\frac{\alpha_s^{(6)}(M_{Z^\prime})}{\alpha_s^{(6)}(m_t)}\;, \end{equation} together designate the NLO QCD renormalisation group evolution from the top quark mass ($m_t$) to $M_{Z^\prime}$, as given in \cite{Buras:2001ra}. In our scan we restrict the value of ${(\Delta M_s)_{\rm exp}}/{(\Delta M_s)_{\rm SM}}$ within the $2\sigma$ allowed range $(0.946\pm 0.086)$. \section{Angular observables for the \boldmath$B^+\to K^{+*}\mu^+\mu^-$ transition}\label{angular} The LHCb collaboration recently reported results for the angular observables of the $B^0\to K^{0*}\mu^+\mu^-$ channel \cite{LHCb:2020lmf}. Interestingly, this observation~\cite{LHCb:2020lmf} of the $B^0\to K^{0*}\mu^+\mu^-$ decay channel is in tension with the SM prediction for the observable $P'_5$.
Similarly, the latest LHCb data with 9~${\rm fb}^{-1}$ of luminosity \cite{LHCb:2020gog} provide the first measurement of the full set of CP-averaged angular observables in the isospin partner decay $B^+\to K^{+*}\mu^+\mu^-$. In this case the $K^{+*}$-meson is reconstructed through the decay chain $K^{+*}\to K^0_S\pi^+$ with $K^0_S\to \pi^+\pi^-$. For the angular observable $P_2$ in the $6.0-8.0$ ${\rm GeV}^2$ interval there is the largest local disagreement with the SM prediction, a deviation of around $3\sigma$. Considering this, using the definitions given in \cite{Descotes-Genon:2013vna} we evaluate the CP-averaged angular observables $P_2$ and $P'_5$. With these observables, we further impose constraints on the parameter space of the MFS. We consider the CP-averaged binned data (for the different $q^2$ bins given in \cite{LHCb:2020gog}) for the angular observables. The expressions for the observables $P_2$ and $P'_5$ containing the NP contribution from the MFS can be written as, \begin{eqnarray} P_2&=&\beta_l\frac{I^s_6}{8I_2^s}~~~{\rm with}~~~I_6^s=2\beta_l\left[\text{Re} ({A_\parallel^L}^{}{A_\perp^L}^*) - (L\to R) \right] \;,\\ \label{P2ex} P_{5}^\prime&=&\frac{I_{5}}{2\sqrt{-I_2^c I_2^s}}~~~{\rm with}~~~I_5 = \sqrt{2}\beta_l\left[\text{Re}({A_0^L}^{}{A_\perp^L}^*) - (L\to R)\right]\;, \label{P5pex} \end{eqnarray} where $I^{s}_2$ and $I^{c}_2$ are given in Eqs.~\ref{I2s} and \ref{I2c} respectively. With the given data and the corresponding theoretical predictions in the MFS we compute the $\chi^2$ per degree of freedom. Using this, we retain the region of parameter space allowed at 95\% C.L.\ by the given data for $P_2$ and $P'_5$.
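The $\chi^2$ selection described above can be sketched as follows. The binned central values and errors in this Python snippet are placeholders standing in for the LHCb $P_2$/$P'_5$ bins; only the shape of the acceptance cut is the point:

```python
def chi2_per_dof(pred, meas, err):
    """Gaussian chi^2 per degree of freedom over q^2 bins."""
    assert len(pred) == len(meas) == len(err)
    chi2 = sum(((p - m) / e) ** 2 for p, m, e in zip(pred, meas, err))
    return chi2 / len(pred)

# Placeholder bins (NOT the LHCb numbers): model prediction, measurement,
# and total uncertainty per q^2 bin.
pred = [0.10, -0.20, -0.45, -0.50, -0.40]
meas = [0.05, -0.30, -0.60, -0.70, -0.60]
err  = [0.15,  0.15,  0.20,  0.20,  0.25]

# 95% C.L. for 5 bins: chi^2_95 = 11.07, i.e. the cut is chi^2/dof < 11.07/5.
accepted = chi2_per_dof(pred, meas, err) < 11.07 / 5
print(accepted)   # this placeholder point passes
```

In the actual scan this acceptance test is evaluated per parameter point, with the predictions recomputed from the transversity amplitudes at each point.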
Here, we would like to mention that in our analysis we have not considered the $q^2$ bin $[6.0, 8.0]$ GeV$^2$, since it is known to suffer from long distance $c\bar{c}$ corrections close to the open charm threshold. \section{Numerical results}\label{neu_res} \begin{figure}[t!] \begin{center} \hspace*{-0.5cm}\subfloat[]{\label{ab}\includegraphics[height=8cm,width=9.5cm,angle=0]{a_b_comb_new}} \subfloat[]{\label{MZpgbs}\includegraphics[height=8cm,width=9cm,angle=0]{MZp_gbs_comb_new}} \caption{Left panel: allowed parameter space in the $a^\mu_{Z'}$ vs $b^e_{Z'}$ plane, with the allowed values of the $Z'$ boson mass (in GeV) indicated by the colour code. Right panel: allowed parameter space in the $M_{Z'}$ (in GeV) vs $\textsl{g}_{bs}$ plane. For both panels we have imposed the current experimental results for $(g-2)_\mu$ and $(g-2)_e$ (with negative $\Delta a_e$), the latest data on $R_K$, $R_{K^*}$ and the leading angular observables ($P_2$ and $P'_5$) of the $B^+\to K^{+*}\mu^+\mu^-$ decay mode, as well as the constraint imposed by ${B^0_s}-\bar{B^0_s}$ mixing. If we relax the angular observable conditions the allowed parameter space is enlarged, as reflected by the grey coloured region in both panels.} \label{fig:all} \end{center} \end{figure} We have performed an extensive numerical scan, sampling over $10^7$ points in the free parameters of the MFS. From this scan, about four thousand ($0.04\%$) points survive, satisfying the latest measurement of $\Delta a_\mu$ at Fermilab \cite{Muong-2:2021ojo}, $\Delta a_e$ derived from the Berkeley laboratory measurement \cite{Parker:2018vye}, $R_K$ published by the LHCb collaboration \cite{LHCb:2021trn} and the up to date results for $R_{K^{*}}$ in both the lower and central $q^2$ bins \cite{Aaij:2017vbb}. Moreover, we incorporate the relevant constraint from the ${B^0_s}-\bar{B^0_s}$ oscillation data \cite{Zyla:2020zbs}.
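The ${B^0_s}-\bar{B^0_s}$ acceptance window used in this scan, $(\Delta M_s)_{\rm exp}/(\Delta M_s)_{\rm SM} = 0.946 \pm 0.086$ at $2\sigma$, follows from the numbers quoted in Sec.~\ref{bbar_con}. The short Python check below rebuilds it by naive Gaussian error propagation (an assumption of this sketch; the SM error dominates):

```python
import math

dms_sm, e_sm = 18.77, 0.86                  # (Delta M_s)_SM in ps^-1
dms_exp      = 17.749
e_exp        = math.hypot(0.019, 0.007)     # stat and syst, combined

ratio   = dms_exp / dms_sm
e_ratio = ratio * math.hypot(e_exp / dms_exp, e_sm / dms_sm)
print(round(ratio, 3), round(2 * e_ratio, 3))   # -> 0.946 0.087
```

Up to rounding, this reproduces the quoted $2\sigma$ band $(0.946\pm 0.086)$; the tiny residual difference reflects the propagation scheme.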
Additionally, we have imposed the constraints from the leading angular observables ($P_2$ and $P'_5$) of the $B^+\to K^{+*}\mu^+\mu^-$ decay mode \cite{LHCb:2020gog}. In the left panel (\ref{ab}) of Fig.~\ref{fig:all} we show the allowed parameter space in the $a^\mu_{Z'}$ vs $b^e_{Z'}$ plane, with the allowed values of the $Z'$ boson mass indicated by the colour code. This pattern can be explained by an argument similar to that given for Fig.~\ref{fig:e_mu_a0e_b0mu}. However, in the case of Fig.~\ref{fig:all}, apart from $(g-2)_l$ we must also consider the NP contributions to the WCs $C_9$ and $C_{10}$ for the $B$-meson decays. From Eqs.~\ref{C9NP} and \ref{C10NP} it is evident that the NP contributions to $C_9$ and $C_{10}$ are proportional to ${a^\mu_{Z'}}/{M^2_{Z'}}$ and ${b^e_{Z'}}/{M^2_{Z'}}$ respectively. Therefore, if the values of $a^\mu_{Z'}$ and $b^e_{Z'}$ increase, then in order to keep the predicted observables within their allowed ranges the value of $M_{Z'}$ must also increase so as to suppress the propagator, as depicted in the left panel (\ref{ab}) of Fig.~\ref{fig:all}. By the same reasoning, and again from Eqs.~\ref{C9NP} and \ref{C10NP}, with increasing values of $\textsl{g}_{bs}$ the value of $M_{Z'}$ must also increase, as reflected in the right panel (\ref{MZpgbs}) of Fig.~\ref{fig:all}. If we relax the constraints from the angular observables of the $B^+\to K^{+*}\mu^+\mu^-$ decay mode, we expectedly obtain an enlarged allowed parameter space; these additional points are depicted in grey in both panels of Fig.~\ref{fig:all}. \subsection{Some relevant constraints regarding $b\to s$ decay transitions} Some comments on processes related to $b\to s$ transitions are now in order.
In our minimal scenario the constraint from Br($B_s\to\mu^+\mu^-$) is not relevant, because this decay is dominated by the WC $C_{10}$, which receives no NP contribution for muons. In this scenario, the WC $C_9$ receives the NP contribution for the $b\to s\mu^+\mu^-$ transition, whereas the WC $C_{10}$ receives the NP contribution for the $b\to s e^+e^-$ transition due to the presence of a non-zero value of $b^e_{Z'}$. Therefore, we have computed Br$(B_s\to e^+e^-)$ for the allowed parameter points shown in Fig.~\ref{fig:all}. We find that, due to the NP contribution, there is a substantial enhancement of Br$(B_s\to e^+e^-)$ with respect to the corresponding SM prediction $8.6\times 10^{-14}$ \cite{LHCb:2020pcv}. This can be construed as a testable prediction of this framework. {For example, if we take $M_{Z'}=180$ GeV, then within the allowed region of parameter space the model prediction for Br$(B_s\to e^+e^-)$ can be as large as $2.43\times 10^{-12}$, which is well below the experimental upper limit $9.4\times 10^{-9}$ \cite{LHCb:2020pcv}. Moreover, we have found that within the allowed values of $M_{Z'}$ (e.g., between 100 GeV and 200 GeV), the largest value of Br$(B_s\to e^+e^-)$ remains almost the same.} Further, we have checked that the parameter space (presented in Fig.~\ref{fig:all}) is also in consonance with the experimental results for the decays $B\to K^{(*)}e^+e^-$ \cite{LHCb:2013pra, LHCb:2014vgu}. \section{Other experimental constraints}\label{excons} We now turn our attention to other relevant bounds\footnote{In our analysis, we have not considered the constraints from flavour violating processes like $\mu\to e\gamma$ or $\tau\to 3\mu$ as there is no mixing in the charged lepton sector of the SM.} on the effective $Z'$ scenario from searches at both low energy and high energy collider experiments. In most UV complete models where an exotic $Z'$ couples to the charged leptons, an interaction with the corresponding neutrinos is naturally expected.
In fact, in the limit of preserved ${\rm SU(2)}_L$ symmetry we expect identical couplings between the left handed charged leptons and their isospin partner neutrinos. Additional constraints that arise due to the coupling of the $Z'$ to the neutrinos are summarised below: \begin{enumerate} \item The CCFR experiment has put stringent bounds on the muon neutrino-nucleus scattering cross section ($\nu_{\mu} (\overline{\nu_{\mu}}) + N \rightarrow \nu_{\mu} (\overline{\nu_{\mu}}) + \mu^+ \mu^- + N$) that can constrain the parameter space of interest. The neutrino trident production cross section measured at the CCFR experiment is ${\sigma_{\rm CCFR}}/{\sigma_{\rm SM}} = 0.82\pm 0.28$ \cite{Mishra:1991bv}. \item The measurement of Br($B\to K^{(*)}\nu\bar{\nu}$) by the Belle collaboration \cite{Belle:2017oht} can also potentially constrain the $Z'-\nu$ couplings. \end{enumerate} It is known that the neutrino trident production cross section excludes the $(g-2)_\mu$ allowed region above the GeV scale for ${\rm SU(2)}_L$ invariant couplings \cite{Altmannshofer:2014pba, Altmannshofer:2019zhy}. Continuing with our bottom up approach, we introduce a generic coupling of the $Z'$ with the neutrinos aligned with the charged lepton coupling introduced in Sec.~\ref{model}. We keep the couplings of the $Z'$ to the charged leptons and the corresponding neutrinos independent, and their ratio is parametrised by $a_{Z'}^{\nu_l} / a_{Z'}^l =\varepsilon_l$. $\varepsilon_l$ is a measure of the isospin violation in the couplings, and $\varepsilon_l=1$ represents the ${\rm SU(2)}_L$ invariant limit. In passing, we note that UV complete models with ${\rm SU(2)}_L$ violating $Z'$ couplings are not very common. It is possible to construct scenarios where the ${\rm SU(2)}_L$ violating couplings of SM leptons to the $Z'$ entirely originate from a linear mixing with exotic vector like lepton partners.
Provided the charged lepton partner has a different ${\rm U(1)}'$ quantum number compared to the corresponding neutrino partner, the effective $Z'$ couplings with the SM leptons will violate isospin after electroweak symmetry breaking. These frameworks can possibly be embedded in UV complete scenarios. For example, see ref.\;\cite{Li:2019sty} for an $E_6$ GUT scenario where several exotic scalar fields, which obtain vacuum expectation values as $E_6$ breaks to the SM gauge group, drive a linear mixing between the SM matter fields and their vector like partners. While the focus in \cite{Li:2019sty} is on the isospin violating coupling in the quark sector, a generalisation to the leptonic sector with a Dirac like neutrino mass is straightforward. For other approaches to isospin violation in the quark sector see, for example, the discussion in the context of dark matter phenomenology in refs.\;\cite{Frandsen:2011cg, Li:2022qrl}, which can potentially be extended to the leptonic sector. Following \cite{Altmannshofer:2014rta} we evaluate the neutrino trident production cross section assuming $\varepsilon_\mu(\neq 1)$ and compare the parameter space of interest with the CCFR results. Further, we utilise {\tt flavio} \cite{Straub:2018kue} to numerically evaluate Br($B\to K^{(*)}\nu\bar{\nu}$) \cite{Buras:2014fpa} in the parameter space that is consistent with $(g-2)_\mu$, $(g-2)_e$, $R_{K^{(*)}}$, the leading angular observables of the decay $B^+\to K^{+*}\mu^+\mu^-$, ${B^0_s}-\bar{B^0_s}$ mixing and the CCFR data\footnote{An additional constraint from $\nu_e e$ scattering \cite{Bellini:2011rx, Borexino:2013zhu, Borexino:2017rsf} can also play a role for an MeV scale $Z'$ model. However, the effect for $Z'$ masses greater than 1.5 GeV is negligible \cite{Lindner:2018kjo}. As we concentrate on a weak scale $Z'$ that is relevant for the $B$-meson sector, we do not consider this limit in our analysis.}. \begin{figure}[t!]
\begin{center} \includegraphics[height=9cm,width=12cm,angle=0]{trident_ep_All} \caption{Allowed parameter space in the $M_{Z'}-a^\mu_{Z'}$ plane with the allowed values of the isospin violating parameter $\varepsilon_\mu$. Different values of $\varepsilon_\mu$ are indicated by different colour codes. The parameter space has been obtained by considering the current experimental results of $(g-2)_\mu$, $(g-2)_e$ (with negative value of $\Delta a_e$), the latest data on $R_K$ and $R_{K^*}$, the leading angular observables ($P_2$ and $P'_5$) of the $B^+\to K^{+*}\mu^+\mu^-$ decay mode and ${B^0_s}-\bar{B^0_s}$ oscillation data. Further, we have imposed the CCFR data for neutrino trident production. For the allowed parameter points the branching ratios for $B\to K^{(*)}\nu\bar{\nu}$ are found to be consistent with the experimental upper bound $1.6 (2.7)\times 10^{-5}$ and remain one order of magnitude below this limit in the entire region of the parameter space of interest. If we relax the constraint from the neutrino trident production cross section, the allowed parameter space is enlarged, which is reflected by the grey coloured region. The purple coloured vertical line represents the LEP collider bound on $M_{Z'}$.} \label{fig:CCFR} \end{center} \end{figure} In Fig.~\ref{fig:CCFR} we present the parameter space which is allowed by the experimental data considered in Sec.~\ref{neu_res} and which, additionally, is in agreement with the CCFR data for neutrino trident production and the limits on Br($B\to K^{(*)}\nu\bar{\nu}$). Expectedly, the ${\rm SU(2)}_L$ invariant couplings are excluded by the CCFR data. This necessitates the introduction of non-trivial isospin violation in the $Z'$ couplings, parametrised by $\varepsilon_l \neq 1$. In Fig.~\ref{fig:CCFR} the allowed parameter points in the $M_{Z'}-a^\mu_{Z'}$ plane have values of $\varepsilon_\mu$ represented by different colours. The black line represents the CCFR exclusion limit for $\varepsilon_\mu=1$.
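This exclusion can be understood with a back-of-the-envelope estimate. The Python sketch below adapts the heavy-$Z'$ limit of the trident cross section ratio from \cite{Altmannshofer:2014pba}, with the vector coupling product replaced by $\varepsilon_\mu (a^\mu_{Z'})^2$; the input values ($\sin^2\theta_W$, the couplings and masses) are illustrative assumptions, not the full computation behind Fig.~\ref{fig:CCFR}.

```python
import math

SW2 = 0.2313  # sin^2(theta_W), assumed input value
VEV = 246.0   # electroweak vacuum expectation value in GeV

def trident_ratio(a_mu, eps_mu, m_zp):
    """sigma/sigma_SM for neutrino trident production in the heavy-Z'
    limit, adapting the expression of Altmannshofer et al. with the
    coupling product a^nu * a^mu = eps_mu * a_mu**2.
    A back-of-the-envelope sketch, not the full calculation."""
    delta = 2.0 * VEV**2 * eps_mu * a_mu**2 / m_zp**2
    num = 1.0 + (1.0 + 4.0 * SW2 + delta) ** 2
    den = 1.0 + (1.0 + 4.0 * SW2) ** 2
    return num / den

# CCFR: sigma/sigma_SM = 0.82 +/- 0.28, i.e. roughly < 1.38 at 2 sigma.
# An SU(2)_L invariant coupling (eps_mu = 1) is excluded for an O(1)
# muon coupling, while strong isospin violation survives:
assert trident_ratio(a_mu=1.0, eps_mu=1.0, m_zp=200.0) > 1.38
assert trident_ratio(a_mu=1.0, eps_mu=0.05, m_zp=200.0) < 1.38
```

The sketch reproduces the qualitative trend of the figure: for a weak scale $Z'$ the CCFR bound forces $\varepsilon_\mu$ well below one.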
As can be read off from the plot, the region of parameter space consistent with $(g-2)_\mu$, the $B$-physics observables and CCFR requires $\varepsilon_\mu < 0.2$. For the allowed parameter points the branching ratios for $B\to K^{(*)}\nu\bar{\nu}$ are found to be consistent with the experimental upper bound $1.6 (2.7)\times 10^{-5}$ \cite{Belle:2017oht} and remain one order of magnitude below this limit in the entire region of the parameter space of interest. A few comments about the direct collider bounds on the $Z'$ model considered here are now in order. The most stringent collider bound on the effective framework arises from the LHC searches in the $pp\to Z\to 4\mu$ channel and is relevant in the range $5\lesssim M_{Z'}\lesssim 70\,\mathrm{GeV}$ \cite{Falkowski:2018dsl,Altmannshofer:2014cfa,Altmannshofer:2014pba,Altmannshofer:2016jzy}, which is not of concern for the parameter space presented in Fig.~\ref{fig:CCFR}. Given the coupling between the electron and the $Z'$ in the MFS, the most relevant constraint from LEP \cite{Zyla:2020zbs} (indicated by the purple coloured vertical dashed line in Fig.~\ref{fig:CCFR}) excludes the parameter space with $M_{Z'}< 209$ GeV. Fig.~\ref{fig:CCFR} clearly indicates that some of the sampled parameter points survive all the constraints considered in this study, provided an isospin violating coupling is assumed between the exotic $Z'$ and the lepton doublets. \section{LKB measurement of $(g-2)_{e}$}\label{plus_g2e} Before we conclude, we would like to remark on a recent measurement at the Laboratoire Kastler Brossel (LKB) with rubidium atoms, which reported a new value for the fine structure constant \cite{Morel:2020dww}. Using this measurement, the SM prediction of $(g-2)_e$ shifts and is estimated to be $1.6\sigma$ lower than the experimental value \cite{Hanneke:2008tm}, with \begin{equation} \Delta a_e = a_e^{\rm exp} - a_e^{\rm SM} = (4.8\pm 3.0)\times 10^{-13}.
\label{eg2positive} \end{equation} A discussion of the minimal flavoured $Z'$ scenario in view of this recent result is now in order. \begin{figure}[htbp!] \begin{center} \subfloat[]{\label{fig:aZMZ_pluse}\includegraphics[height=7.5cm,width=9cm,angle=0]{MZp_aZp_plus_g2e}} \subfloat[]{\label{fig:aZbZ_plus_g2}\includegraphics[height=7.5cm,width=9.9cm,angle=0]{aZp_bZp_plus_g2e}}\\ \subfloat[]{\label{fig:aZeaZmu_plus_g2}\includegraphics[height=7.5cm,width=9.8cm,angle=0]{MZp_aZe_aZmu_g2e}} \hspace*{-0.2cm}\subfloat[]{\label{fig:aZeaZmuMZpgbs_plus_g2}\includegraphics[height=7.5cm,width=9cm,angle=0]{plus_inset}} \caption{Upper left panel: 1$\sigma$ allowed parameter space in the $M_{Z^{\prime}}$ (in MeV) vs $a_{Z^{\prime}}$ plane satisfying both the $(g-2)_{\mu}$ and $(g-2)_{e}$ data. Upper right panel: 1$\sigma$ allowed parameter space in the $a_{Z^{\prime}}$ vs $b_{Z^{\prime}}$ (same for muon and electron) plane with the allowed values of $M_{Z^{\prime}}$ (in GeV), satisfying both the $(g-2)_{\mu}$ and $(g-2)_{e}$ data. Lower left panel: 1$\sigma$ allowed parameter space in the $a^\mu_{Z^{\prime}}$ vs $a^e_{Z^{\prime}}$ plane with the allowed values of $M_{Z^{\prime}}$ (in GeV). Lower right panel: parameter space in the $a^\mu_{Z^{\prime}}$ vs $a^e_{Z^{\prime}}$ plane with the allowed values of $M_{Z^{\prime}}$ (in GeV), allowed by $\Delta a_\mu$ from Fermilab, the result on $\Delta a_e$ given by LKB, the latest data on $R_K$ published by the LHCb collaboration, up to date values of $R_{K^{*}}$ for both the lower and central bins of $q^2$ and the leading angular observables of the decay mode $B^+\to K^{+*}\mu^+\mu^-$.
The inset shows the region in the $M_{Z^{\prime}}$ (in GeV) vs $\textsl{g}_{bs}$ plane allowed by the above mentioned experimental results.} \label{psoitive_I_II} \end{center} \end{figure} \begin{enumerate} \item With positive values of both $\Delta a_e$ and $\Delta a_\mu$, one can hope to explain the two simultaneously in a scenario where both the electron and the muon have identical vectorial couplings with the $Z'$, as defined in Eq.~\ref{aZp_mu1}. The corresponding $1\sigma$ allowed parameter space is shown in the $M_{Z^{\prime}}$ vs $a_{Z^{\prime}}$ plane in Fig.~\ref{fig:aZMZ_pluse}. The preferred $M_{Z^{\prime}}$ in the MeV scale is too restricted to explain the LFUV in the $B$-meson sector. \item Another possibility is that both the electron and the muon have vectorial as well as axial-vectorial couplings (of the same strength) with the $Z'$. In such a case the most recent data on $(g-2)_e$ with positive value of $\Delta a_e$ and the recent data on $(g-2)_{\mu}$ can be explained simultaneously. In Fig.~\ref{fig:aZbZ_plus_g2}, the $1\sigma$ allowed parameter space is shown in the $a_{Z^{\prime}}$ vs $b_{Z^{\prime}}$ plane for the allowed values of $M_{Z^{\prime}}$. Again, in this case the allowed values of the independent parameters are restricted to very small values. \item In passing, we note that in the other possible four parameter model, where $a^\mu_{Z^{\prime}}\neq 0$, $b^\mu_{Z^{\prime}}= 0$ (for the muon) but $a^e_{Z^{\prime}}= 0$, $b^e_{Z^{\prime}}\neq 0$ (for the electron), it is not possible to explain both $(g-2)_{\mu}$ and $(g-2)_e$ simultaneously, because in such a scenario $(g-2)_e$ with a positive value of $\Delta a_e$ cannot be explained. \item We further consider a four parameter scenario in which both the electron and the muon have independent vectorial couplings with the $Z'$.
In Fig.~\ref{fig:aZeaZmu_plus_g2} we show the 1$\sigma$ allowed parameter space satisfying both $(g-2)_\mu$ and $(g-2)_e$ with positive value of $\Delta a_e$. From this figure it is clear that the mass of the $Z'$ can be increased substantially with respect to the previous scenarios, making this scenario more favourable for explaining the LFUV in the $B$-sector. We compute the LFUV observables $R_{K^{(*)}}$, the ${B^0_s}-\bar{B^0_s}$ mass difference and the leading angular observables of the decay mode $B^+\to K^{+*}\mu^+\mu^-$ with the motivation of finding the region of parameter space that satisfies the corresponding experimental results simultaneously. The result of our analysis is depicted in Fig.~\ref{fig:aZeaZmuMZpgbs_plus_g2}. The inset shows the corresponding allowed region in the $M_{Z^{\prime}}$ (in GeV) vs $\textsl{g}_{bs}$ plane. We expectedly find that the identification of the most optimistic flavoured $Z'$ model depends on the relative sign of $\Delta a_\mu$ and $\Delta a_e$. \end{enumerate} {The numerical results presented here from our in-house implementation of the $B$-physics observables have been extensively validated against the results obtained from the publicly available package {\tt flavio} \cite{Straub:2018kue}. We reproduce Fig.~\ref{ab}, Fig.~\ref{MZpgbs}, Fig.~\ref{fig:CCFR} and Fig.~\ref{fig:aZeaZmuMZpgbs_plus_g2} using the package {\tt flavio}. A detailed quantitative comparison of our results with {\tt flavio} is presented in Appendix \ref{flavio}.} \section{Conclusion}\label{concl} A synergy of experimental results in the measurements of $R_K$ (with $3.1\sigma$ deviation) and $R_{K^{*}}$ by the LHCb collaboration, $(g-2)_\mu$ (with $4.2\sigma$ deviation) by Fermilab and $(g-2)_e$ provides a tantalizing hint of lepton flavour universality violation and hence of Beyond Standard Model physics in the flavour sector.
{In this paper, instead of conforming to a specific UV complete scenario, we survey data driven phenomenological effective models with vectorial and axial-vectorial leptonic couplings for the $Z'$}. We systematically identify the minimal flavoured $Z'$ model that can simultaneously explain these experimental evidences of lepton flavour universality violation while remaining in consonance with the correlated ${B^0_s}-\bar{B^0_s}$ oscillation. We explore the parameter space that is allowed by these observables, taking into account the leading angular observables of the decay mode $B^+\to K^{+*}\mu^+\mu^-$. From our systematic study we observe that the models are very sensitive to the relative sign between $\Delta a_\mu$ and $\Delta a_e$. For example, we find that a $Z'$ that couples vectorially to the muon while having a purely axial-vectorial coupling to the electron can explain the data on the anomalous magnetic moments of the leptons (muon and electron). An off diagonal coupling to the quarks can simultaneously explain the $B$-physics observables for a weak scale $Z'$. An improvement in the resolution of the measurements of the anomalous magnetic moments of the leptons in the future will provide a handle for identifying specific scenarios of flavoured $Z'$ models. On the other hand, we find that the minimal model with flavour specific vectorial couplings to the leptons suits the measurement of $\Delta a_e$ using the LKB data. Interestingly, the CCFR data for the neutrino trident production cross section exclude an ${\rm SU(2)}_L$ invariant coupling between the exotic $Z'$ and the leptonic doublets for models that simultaneously satisfy the $B$-physics and $(g-2)_l$ constraints. This implies the uncomfortable reality of isospin violating couplings for the $Z'$ along with flavour violation. {\bf Acknowledgements} We would like to thank Chirashree Lahiri and Rohan Pramanick for computational and technical support.
AS acknowledges the financial support from Department of Science and Technology, Government of India through SERB-NPDF scholarship with grant no.:PDF/2020/000245. TSR acknowledges Department of Science and Technology, Government of India, for support under grant agreement no.:ECR/2018/002192 [Early Career Research Award].\\ {\Large{\bf Appendix}} \begin{appendix} \renewcommand{\thesection}{\Alph{section}} \renewcommand{\theequation}{\thesection-\arabic{equation}} \setcounter{equation}{0} \section{Relevant Wilson Coefficients for the \boldmath$b\to s l^+l^-$ transitions}\label{NDR} In this appendix we collect all the relevant Wilson Coefficients that are useful in constructing the observables related to the $b\to s l^+l^-$ transitions. The operator $\mathcal{O}_{10}$ does not evolve under QCD renormalisation and its coefficient is independent of energy scale $Q$ and can be expressed in the following way \begin{equation} C_{10}(Q) = - \frac{Y(x_t )}{\sin^2\theta_{W}}+C^{\rm NP}_{10}\;, \end{equation} where $\theta_W$ is the Weinberg angle. Unlike $C_{10}$, $C_9$ varies with energy scale and using the results of NLO QCD corrections to $C^{\rm eff}_{9}(Q)$ in the SM \cite{Misiak:1992bc, Buras:1994dj} we can readily obtain this coefficient in the NP scenario under the naive dimensional regularisation (NDR) renormalisation scheme as \begin{eqnarray}\label{c9_eff} C_9^{\rm eff}(q^2)&=&C_9^{\rm NDR}\tilde{\eta}\left(\frac{q^2}{m^2_b}\right)+h\left(z,\frac{q^2}{m^2_b}\right)\left(3C_1+C_2+3C_3+C_4+3C_5+C_6\right) \\ \nonumber &&-\frac 12 h\left(1,\frac{q^2}{m^2_b}\right)\left(4C_3+4C_4+3C_5+C_6\right)-\frac 12 h\left(0,\frac{q^2}{m^2_b}\right)\left(C_3+4C_4\right) \\ \nonumber &&+\frac 29\left(3C_3+C_4+3C_5+C_6\right), \end{eqnarray} where, \begin{equation}\label{C9tilde} C_9^{\rm NDR} = P_0^{\rm NDR} + \frac{Y(x_t )}{\sin^2\theta_{W}}+C^{\rm NP}_9 -4 Z(x_t ) + P_E E(x_t )\;. 
\end{equation} The value\footnote{The analytic formula for $P_0^{\rm NDR}$ has been given in \cite{Buras:1994dj}.} of $P_0^{\rm NDR} (P_E)$ is set at $2.60\pm 0.25$ \cite{Buras:2003mk} (${\cal O}(10^{-2})$ \cite{Buras:1994dj}). The functions $Y(x_t)$, $Z(x_t)$ and $E(x_t)$ are the usual Inami-Lim functions \cite{Buras:1994dj, Buchalla:1995vs}. The function $\tilde{\eta}$ (given in Eq.\;\ref{c9_eff}) represents single gluon corrections to the matrix element of $\mathcal{O}_{9}$ and takes the form \cite{Buras:1994dj} \begin{eqnarray} \tilde{\eta}\left(\frac{q^2}{m^2_b}\right)=1+\frac{\alpha_s}{\pi}\omega\left(\frac{q^2}{m^2_b}\right), \end{eqnarray} where $\alpha_s$ is the strong coupling constant. The functional forms of $\omega$ and $h$ are given by \cite{Buras:1994dj} \begin{eqnarray} \omega\left(\frac{q^2}{m^2_b}\right)&=&-{2\over9}\pi^2-{4\over3}{\rm Li}_{_2}\left(\frac{q^2}{m^2_b}\right)-{2\over3}\ln \left(\frac{q^2}{m^2_b}\right)\ln\left(1-\frac{q^2}{m^2_b}\right) \\ \nonumber &&-{5+4\frac{q^2}{m^2_b}\over3\bigg(1+2\frac{q^2}{m^2_b}\bigg)}\ln\left(1-\frac{q^2}{m^2_b}\right) -{2\frac{q^2}{m^2_b}\bigg(1+\frac{q^2}{m^2_b}\bigg)\bigg(1-2\frac{q^2}{m^2_b}\bigg)\over3\bigg(1-\frac{q^2}{m^2_b}\bigg)^2\bigg(1+2\frac{q^2}{m^2_b}\bigg)}\ln \left(\frac{q^2}{m^2_b}\right) \\ \nonumber &&+{5+9\frac{q^2}{m^2_b}-6\left(\frac{q^2}{m^2_b}\right)^2\over6\bigg(1-\frac{q^2}{m^2_b}\bigg)\bigg(1+2\frac{q^2}{m^2_b}\bigg)}\;, \end{eqnarray} and \begin{eqnarray} \label{hz} h\left(z,\frac{q^2}{m^2_b}\right)&=&{8\over27}-{8\over9}\ln{m_{_b}\over\mu}-{8\over9}\ln z+{16z^2m^2_b\over9q^2} \\ \nonumber &&-{4\over9}\left(1+{2z^2m^2_b\over q^2}\right)\sqrt{\bigg|1-{4z^2m^2_b\over q^2}\bigg|} \left\{\begin{array}{ll}\ln\Bigg|{\sqrt{1-\frac{4z^2m^2_b}{q^2}}+1\over\sqrt{1-\frac{4z^2m^2_b}{q^2}}-1}\Bigg|-i\pi, &{\rm if}\: \frac{4z^2m^2_b}{q^2}<1\\ 2\arctan{1\over\sqrt{\frac{4z^2m^2_b}{q^2}-1}},&{\rm if}\: \frac{4z^2m^2_b}{q^2}>1\end{array}\right.\;.
\end{eqnarray} The Wilson Coefficients {$C_1\ldots C_6$} are defined as \cite{Buras:1994qa} \begin{eqnarray} \label{c1} C_1(M_W)&=&\frac{11}{2}\frac{\alpha_s(M_W)}{4\pi}\;, \\ \label{c2} C_2(M_W)&=&1-\frac{11}{6}\frac{\alpha_s(M_W)}{4\pi}\;,\\ \label{c3_4} C_3(M_W)&=&-\frac 13 C_4(M_W)=-\frac{\alpha_s(M_W)}{24\pi}\widetilde{E}(x_t )\;,\qquad \widetilde{E}(x_t )=E(x_t )-\frac 23\;,\\ C_5(M_W)&=&-\frac 13 C_6(M_W)=-\frac{\alpha_s(M_W)}{24\pi}\widetilde{E}(x_t )\;. \label{c5_5} \end{eqnarray} The formula for the branching ratio of the decay $B\to K^{(*)} l^+l^-$ involves another effective Wilson Coefficient, namely $C_{7}^{{\rm eff}}$, for which there is no NP contribution in our chosen scenario. Within the SM, $C_{7}^{{\rm eff}}$ can be written as \cite{Buras:1994dj} \begin{eqnarray} \label{C7eff} C_{7}^{{\rm eff}} & = & -\frac{1}{2} \eta^\frac{16}{23} D'(x_t )-\frac{1}{2} \frac{8}{3} \left(\eta^\frac{14}{23} - \eta^\frac{16}{23}\right) E'(x_t ) + C_2(M_W)\sum_{i=1}^8 h_i \eta^{a_i}, \end{eqnarray} with \begin{equation} \eta = \frac{\alpha_s(M_W)}{\alpha_s(m_b)},~~~\alpha_s(m_b) = \frac{\alpha_s(M_Z)}{1 - \frac{23}{3} \frac{\alpha_s(M_Z)}{2\pi} \, \ln(M_Z/m_b)}. \label{eq:asmumz} \end{equation} The values of $a_i$ and $h_i$ can be obtained from \cite{Buras:1994dj}. $D'(x_t )$ and $E'(x_t )$ are the Inami-Lim functions \cite{Buras:1994dj, Buchalla:1995vs} that represent the SM contributions (at LO) to the photonic and gluonic magnetic dipole moment operators. \section{Form Factor for the \boldmath$B\to K^{(*)} l^+l^-$ transitions} In this appendix we briefly summarise the $B\to K^{(*)}$ form factors related to the rare $B$-meson decays considered in our analysis.
\subsection{\boldmath Details of form factors for $B^+\to K^+l^+l^-$ transitions} \label{sec:ftfprel} The long-distance effects for the hadronic dynamics of the $B^+\to K^+l^+l^-$ decay are represented by the following matrix elements \cite{Bartsch:2009qp} \begin{eqnarray} \langle K^+(p')|\bar s\gamma^\mu b|B^+(p)\rangle &=& f_+(s)\, (p+p')^\mu +[f_0(s)-f_+(s)]\,\frac{m^2_B-m^2_K}{q^2}q^\mu\,, \label{fpf0def}\\ \langle K^+(p')|\bar s\sigma^{\mu\nu}b|B^+(p)\rangle &=& i\frac{f_T(s)}{m_B+m_K}\left[(p+p')^\mu q^\nu - q^\mu (p+p')^\nu\right]\,. \label{ftdef} \end{eqnarray} Here, the form factors are $f_+$, $f_0$ and $f_T$. Further, $q=p-p'$ and $s=q^2/m^2_B$. The $f_0$ term drops out of the expression for the differential decay width (see Eq.~\ref{btokdw}) due to the smallness of the lepton masses. Using the approach given in ref.~\cite{Buras:2014fpa} we implement the following expression for $f_+$, \begin{equation} f_+(q^2) = \frac{1}{1-q^2/m_+^2}\left[ \alpha_0 + \alpha_1 z(q^2) + \alpha_2 z^2(q^2)+\frac{z^3(q^2)}{3}(-\alpha_1+2\alpha_2) \right], \end{equation} with a simplified series expansion (SSE) \begin{equation} z(t) = \frac{\sqrt{t_+-t}-\sqrt{t_+-t_0}}{\sqrt{t_+-t}+\sqrt{t_+-t_0}}\,, \end{equation} where $t_\pm=(m_B\pm m_K)^2$ and $t_0=t_+(1-\sqrt{1-t_-/t_+})$. The resonance mass is given by $m_+=m_B+0.046$\,GeV. The values of the parameters $\alpha_0$, $\alpha_1$, and $\alpha_2$ are given below \cite{Buras:2014fpa} \begin{align} \alpha_0 &= 0.432 \pm 0.011 \,,& \alpha_1 &= -0.664 \pm 0.096 \,,& \alpha_2 &= -1.20 \pm 0.69 \,. \end{align} The corresponding expression for $f_T$ is extracted from the following ratio, \begin{equation}\label{ftfp} \frac{f_T(s)}{f_+(s)}=\frac{m_B+m_K}{m_B}\,. \end{equation} This ratio is independent of unknown hadronic quantities in the domain of interest \cite{Charles:1998dr,Beneke:2000wa, Wise:1992hn,Burdman:1992gh,Falk:1993fr,Casalbuoni:1996pg,Buchalla:1998mt}.
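The parametrisation above can be evaluated directly. The following Python sketch implements $z(t)$ and $f_+(q^2)$ with the quoted central values of $\alpha_{0,1,2}$; the meson masses are assumed inputs.

```python
import math

M_B, M_K = 5.2793, 0.4937          # B+ and K+ masses in GeV (assumed inputs)
A0, A1, A2 = 0.432, -0.664, -1.20  # central values of alpha_{0,1,2}
M_PLUS = M_B + 0.046               # resonance mass m_+

T_PLUS = (M_B + M_K) ** 2
T_MINUS = (M_B - M_K) ** 2
T_0 = T_PLUS * (1.0 - math.sqrt(1.0 - T_MINUS / T_PLUS))

def z(t):
    """SSE expansion variable z(t)."""
    a, b = math.sqrt(T_PLUS - t), math.sqrt(T_PLUS - T_0)
    return (a - b) / (a + b)

def f_plus(q2):
    """B -> K vector form factor f_+(q^2) as parametrised above."""
    zz = z(q2)
    series = A0 + A1 * zz + A2 * zz**2 + (zz**3 / 3.0) * (-A1 + 2.0 * A2)
    return series / (1.0 - q2 / M_PLUS**2)

# z vanishes at t_0 by construction, f_+(0) comes out close to 0.3,
# and the form factor grows towards the resonance pole:
assert abs(z(T_0)) < 1e-12
assert 0.25 < f_plus(0.0) < 0.40
assert f_plus(6.0) > f_plus(1.0) > f_plus(0.0)
```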
\subsection{Details of form factors for $B^0\to K^{0*}l^+l^-$ transitions}\label{BSZ:ff} The matrix elements of the relevant operators for $B^0(p)\to K^{0*}(k)$ transitions, in terms of momentum transfer ($q^\mu = p^\mu - k^\mu$) dependent form factors, can be written as \cite{Altmannshofer:2008dz} \begin{eqnarray} \lefteqn{ \langle K^{0*}(k) | \bar s\gamma_\mu(1-\gamma_5) b | B^0(p)\rangle = -i \epsilon^*_\mu (m_B+m_{K^*}) A_1(q^2) + i (2p-q)_\mu (\epsilon^* \cdot q)\, \frac{A_2(q^2)}{m_B+m_{K^*}}}\hspace*{2.8cm}\nonumber\\ && {}+ i q_\mu (\epsilon^* \cdot q) \,\frac{2m_{K^*}}{q^2}\, \left[A_3(q^2)-A_0(q^2)\right] + \epsilon_{\mu\nu\rho\sigma}\epsilon^{*\nu} p^\rho k^\sigma\, \frac{2V(q^2)}{m_B+m_{K^*}},\hspace*{0.5cm}\label{eq:SLFF} \end{eqnarray} and \begin{eqnarray} \lefteqn{\langle K^{0*}(k) | \bar s \sigma_{\mu\nu} q^\nu (1+\gamma_5) b | B^0(p)\rangle = i\epsilon_{\mu\nu\rho\sigma} \epsilon^{*\nu} p^\rho k^\sigma \, 2 T_1(q^2)}\nonumber\\ & {} + T_2(q^2) \left[ \epsilon^*_\mu (m_B^2-m_{K^*}^2) - (\epsilon^* \cdot q) \,(2p-q)_\mu \right] + T_3(q^2) (\epsilon^* \cdot q) \left[ q_\mu - \frac{q^2}{m_B^2-m_{K^*}^2}(2p-q)_\mu \right].\nonumber\\[-10pt]\label{eq:pengFF} \end{eqnarray} Here, $\epsilon_\mu$ represents the polarization vector of the $K^*$. The form factors $A_i$ and $V$ are scale independent. On the other hand, the $T_i$ depend on the renormalisation scale. The form factors in the light cone sum rules (LCSR) scheme can be generically written as \cite{Bharucha:2015bzk} \begin{equation} F_i(q^2) = P_i(q^2) \sum_k \alpha_k^i \,\left[z^*(q^2)-z^*(0)\right]^k\,, \label{eq:SSE} \end{equation} with an SSE, \begin{equation} z^*(t) = \frac{\sqrt{t_+-t}-\sqrt{t_+-t_0}}{\sqrt{t_+-t}+\sqrt{t_+-t_0}} \;, \end{equation} where $t_\pm \equiv (m_B\pm m_{K^*})^2$ and $t_0\equiv t_+(1-\sqrt{1-t_-/t_+})$. Here, $P_i(q^2)=(1-q^2/m_{R,i}^2)^{-1}$ represents a simple pole corresponding to the first resonance in the spectrum.
Appropriate resonance masses $m_{R,i}$ and the coefficients $\alpha^i_k$ can be extracted from \cite{Bharucha:2015bzk}. \section{Transversity amplitudes} \label{trvnampli} The expressions for the transversity amplitudes (up to corrections of $\mathcal{O}(\alpha_s)$) in terms of the appropriate Wilson Coefficients and form factors are given as follows \cite{Altmannshofer:2008dz} \begin{equation} A_{\perp L,R} = N \sqrt{2} \lambda^{1/2} \bigg[ \left[ C_9^{\text{eff}} \mp C_{10} \right] \frac{ V(q^2) }{ m_B + m_{K^*}} + \frac{2m_b}{q^2} C_7^{\text{eff}} T_1(q^2) \bigg], \label{Applr} \end{equation} \begin{equation} A_{\parallel L,R} = - N \sqrt{2}(m_B^2 - m_{K^*}^2) \bigg[ \left[ C_9^{\text{eff}} \mp C_{10} \right] \frac{A_1(q^2)}{m_B-m_{K^*}} +\frac{2 m_b}{q^2} C_7^{\text{eff}} T_2(q^2) \bigg], \label{Aprlr} \end{equation} \begin{eqnarray} A_{0L,R} = - \frac{N}{2 m_{K^*} \sqrt{q^2}} \bigg\{ \left[ C_9^{\text{eff}} \mp C_{10} \right] \bigg[ (m_B^2 - m_{K^*}^2 - q^2) ( m_B + m_{K^*}) A_1(q^2) -\lambda \frac{A_2(q^2)}{m_B + m_{K^*}} \bigg] \nonumber\\ + {2 m_b}C_7^{\text{eff}} \bigg[ (m_B^2 + 3 m_{K^*}^2 - q^2) T_2(q^2) -\frac{\lambda}{m_B^2 - m_{K^*}^2} T_3(q^2) \bigg] \bigg\},\label{A0lr} \end{eqnarray} \begin{equation} A_t = \frac{N}{\sqrt{q^2}}\lambda^{1/2}2 C_{10}A_0(q^2), \label{At} \end{equation} with \begin{equation} N= V_{tb}^{\vphantom{*}}V_{ts}^* \left[\frac{G_F^2 \alpha^2_{\rm em}}{3\cdot 2^{10}\pi^5 m_B^3} q^2 \lambda^{1/2} \beta_l \right]^{1/2}, \end{equation} where $\lambda= m_B^4 + m_{K^*}^4 + q^4 - 2 (m_B^2 m_{K^*}^2+ m_{K^*}^2 q^2 + m_B^2 q^2)$ and $\beta_l=\sqrt{1-4m_l^2/q^2}$. Moreover, $L$ and $R$ refer to the chirality of the leptonic current. Here the particular amplitude $A_t$ is related to the time-like component of the virtual gauge boson, and it does not contribute in the case of massless leptons. Therefore, it can be neglected if the lepton mass is small in comparison to the invariant mass of the lepton pair.
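As a numerical cross-check of the kinematic factors entering these amplitudes, the short Python sketch below evaluates $\lambda$ and $\beta_l$ as defined above; the meson and lepton masses are assumed inputs.

```python
import math

M_B, M_KST = 5.2797, 0.8956  # B0 and K*0 masses in GeV (assumed inputs)

def kallen_lambda(q2):
    """lambda = m_B^4 + m_K*^4 + q^4
              - 2 (m_B^2 m_K*^2 + m_K*^2 q^2 + m_B^2 q^2)."""
    mb2, mk2 = M_B**2, M_KST**2
    return mb2**2 + mk2**2 + q2**2 - 2.0 * (mb2 * mk2 + mk2 * q2 + mb2 * q2)

def beta_l(q2, m_l):
    """Lepton phase-space factor beta_l = sqrt(1 - 4 m_l^2 / q^2)."""
    return math.sqrt(1.0 - 4.0 * m_l**2 / q2)

# lambda is positive in the physical region, and beta_l -> 1 much
# faster for electrons (0.511 MeV) than for muons (105.7 MeV):
assert kallen_lambda(4.0) > 0.0
assert beta_l(4.0, 0.000511) > beta_l(4.0, 0.1057)
assert beta_l(4.0, 0.000511) > 0.999
```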
\vspace*{-0.5cm} { \section{Relative comparison with flavio}\label{flavio} In this appendix we present a detailed comparison of the numerical results obtained from our in-house implementation and from the publicly available package {\tt flavio} \cite{Straub:2018kue}. \begin{figure}[htbp!] \begin{center} \hspace*{-0.5cm}\subfloat[]{\label{ab_flavio}\includegraphics[height=7.5cm,width=8.5cm,angle=0]{a_b_comb_flavio_f2}} \subfloat[]{\label{MZpgbs_flavio}\includegraphics[height=7.5cm,width=8cm,angle=0]{MZp_gbs_comb_flavio_f2}}\\ \subfloat[]{\label{fig:flavio_CCFR}\includegraphics[height=7.5cm,width=8cm,angle=0]{trident_ep_All_flavio}} \subfloat[]{\label{fig:flavio_pos}\includegraphics[height=7.5cm,width=8.5cm,angle=0]{plus_inset_flavio}} \caption{The plots are generated from the results of the package {\tt flavio}. Similar plots, obtained from the results of our in-house code, have been presented in Fig.~\ref{fig:all}, Fig.~\ref{fig:CCFR} and Fig.~\ref{fig:aZeaZmuMZpgbs_plus_g2}, respectively.} \label{fig:flavio_all} \end{center} \end{figure} Using the package {\tt flavio} the relevant plots have been generated and presented in Figs.~\ref{ab_flavio}, \ref{MZpgbs_flavio}, \ref{fig:flavio_CCFR} and \ref{fig:flavio_pos}. These may be compared with Figs.~\ref{ab}, \ref{MZpgbs}, \ref{fig:CCFR} and \ref{fig:aZeaZmuMZpgbs_plus_g2}, respectively. Admittedly, there are numerical differences, but they remain below $\sim 7\%$ in the region of interest, and the qualitative nature of the results remains consistent between the two implementations. } \end{appendix}
\section{Introduction} \label{sec: introduction} Environment perception in Autonomous Vehicles (AVs) is a challenging problem. With the current approach of using only on-board sensors to solve the perception problem, it is impossible to sense occluded areas and mitigate the effects of sensor outages. Complex traffic intersections with buildings close to the curb may minimize the field of view of an AV's sensors. Integration of smart infrastructure nodes (sensing and compute) on roads where AVs operate can help overcome these challenges. The elevated and static view-point of the smart sensors enables them to observe the environment, detect more objects in the scene, and communicate that information to AVs. AVs can fuse that information with their own sensor measurements and augment their situational awareness. Fisheye cameras are well known for their low cost and wide Field of View (FoV), making them suitable for such smart infrastructure based sensing applications. However, the fisheye camera needs to provide this information in the same coordinate frame as the vehicle. For this reason, it needs to be localized or registered within the same map that is used by the AV for navigation. In this work, we propose a method to localize a downward looking static smart infrastructure fisheye camera in a prior map consisting of a metric satellite image and a co-registered LiDAR map of ground points with their LiDAR reflectivity values. An overview of the approach is shown in Figure \ref{fig: cameraregistrationinpriormap}. \begin{figure}[!ht] \centering \includegraphics[width=0.4\textwidth]{figures/camera_registration_inpriormap.png} \caption{Overview of our two step approach to camera localization. The coordinates shown are in meters, with respect to the origin of the map.
We start with a noisy GPS initialization (cyan), then perform feature matching between the satellite image and the rectified fisheye image to obtain an initial camera pose - PnP Localization (green), and finally maximize the Mutual Information between the LiDAR map and the fisheye image to obtain a refined camera localization estimate.} \label{fig: cameraregistrationinpriormap} \vspace{-20pt} \end{figure} \section{Related Work} \label{sec: relatedwork} There are several contributions on the localization of camera images in prior maps (satellite imagery or LiDAR generated 3D maps). Satellite maps can be procured easily from third party sources \cite{vora2020aerial, maxar}, and with the widespread use of LiDARs in autonomous driving, we now have several high definition map providers which provide dense 3D maps of the environment \cite{vora2019high}. This has made localization of cheaper sensors like cameras in prior maps an active area of research, with an ultimate aim of using cheaper sensors on-board an AV. \cite{Yu2020MonocularCL} presents monocular perspective camera localization in pre-built 3D LiDAR maps using 2D-3D line correspondences. This method shows promise only in structured environments where lines can be easily detected in both the camera image and the LiDAR map. \cite{Wolcott2014VisualLW} presents LiDAR map based monocular camera localization in urban environments. Unlike \cite{Yu2020MonocularCL}, which depends on the detection of geometric primitives like lines in both sensing modalities, \cite{Wolcott2014VisualLW} uses a dense appearance based approach which works in unstructured environments. In \cite{Wolcott2014VisualLW}, given an initial belief of the camera pose, they generate several synthetic views of the environment by projecting the LiDAR map points using a perspective camera model, and compare these synthetic views against the live camera feed.
The synthetic view which maximizes the Normalized Mutual Information (NMI) between the real image gray scale values and the projected points' LiDAR reflectivity values is the solution to the localization problem. \cite{Wolcott2014VisualLW} draws inspiration from work done on 3D-LiDAR Camera extrinsic calibration described in \cite{MIGP}, which uses maximization of Mutual Information (MI) for calibrating a 3D-LiDAR Camera pair. \cite{Viswanathan} presents a camera localization technique which matches ground imagery obtained by cameras onboard an AV to the available satellite imagery. The camera images are warped to obtain a bird's eye view (BEV) of the ground. Next, the BEV image is matched with the given satellite imagery using SIFT \cite{Lowe:2004:DIF:993451.996342} features. While the above mentioned approaches provide solutions for perspective cameras, our focus is to localize (estimate $[\mathbf{^{C}R_W}, \mathbf{^{C}t_W}]$ in Equation \ref{eqn: fisheyeprojectionmodel}) a downward looking static fisheye camera (Figure \ref{fig: fisheyeimage}) in a prior map (Figure \ref{fig: priormap}) which consists of 2D satellite imagery with metric information (Figure \ref{fig: satimage}) and a 3D LiDAR map of ground points (Figures \ref{fig: intensityimage}, \ref{fig: zheightimage}). We initialize the camera localization using feature matching as done in \cite{Viswanathan}, and refine the camera localization using maximization of an MI based cost function as done in \cite{Wolcott2014VisualLW} and \cite{MIGP}. \section{Overview} \label{sec: overview} In this section we provide an overview of the various components of our implementation.
\subsection{Fisheye Camera} \label{sec: fisheyecamera} \vspace{-30pt} \begin{figure}[H] \centering \subfloat[Fisheye image captured at a traffic intersection in our simulator]{\includegraphics[width=0.225\textwidth]{figures/fisheyesampleimage.png}\label{fig: fisheyeimage}} \quad \subfloat[Rectification of fisheye image to perspective image]{\includegraphics[width=0.225\textwidth]{figures/perspectifiedfisheyeimage.png}\label{fig: perspectifiedfisheyeimage}} \caption{Fisheye Image (Figure \ref{fig: fisheyeimage}) and its perspective rectification (Figure \ref{fig: perspectifiedfisheyeimage})} \label{fig: fisheyevsperspectiveprojection} \vspace{-10pt} \end{figure} We use the fisheye camera projection model proposed in \cite{MeiRives} to project a 3D point $\mathbf{P_{W}} = [X_W, Y_W, Z_W]$ defined in the prior map coordinate frame $\mathbf{W}$ to a 2D point $\mathbf{p}$ on the fisheye camera image plane using Equation \ref{eqn: fisheyeprojectionmodel}. \begin{align} \textbf{p} &= \mathbf{\Pi}(K, D, \xi, [\mathbf{^{C}R_W}, \mathbf{^{C}t_W}], \mathbf{P_{W}}) \label{eqn: fisheyeprojectionmodel} \end{align} Here $\mathbf{\Pi}()$ is the projection function, $K=\begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$, $D = [k_1, k_2, p_1, p_2]$ and $\xi$ are the camera intrinsics, and $[\mathbf{^{C}R_W}$, $\mathbf{^{C}t_W}]$ are the camera extrinsics. $\mathbf{^{C}R_W}$ $\in SO(3)$ is an orthonormal rotation matrix and $\mathbf{^{C}t_W}$ $\in R^{3}$ is a 3D vector. The goal of this work is to estimate the unknown camera extrinsics $[\mathbf{^{C}R_W}$, $\mathbf{^{C}t_W}]$ in the prior map frame $\mathbf{W}$.
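As a concrete illustration, the projection $\mathbf{\Pi}(\cdot)$ of Equation \ref{eqn: fisheyeprojectionmodel} can be sketched with the standard unified-model chain of \cite{MeiRives}: world-to-camera transform, projection onto the unit sphere, a perspective projection from a viewpoint shifted by $\xi$, radial/tangential distortion, and the intrinsic matrix $K$. This is a minimal sketch with illustrative names, not our exact implementation:

```python
import numpy as np

def project_mei(P_w, K, D, xi, R, t):
    """Project (N, 3) world points onto the fisheye image plane using the
    unified (Mei-Rives) camera model. R, t are ^C R_W and ^C t_W."""
    k1, k2, p1, p2 = D
    # 1. World frame -> camera frame.
    P_c = P_w @ R.T + t
    # 2. Project onto the unit sphere.
    P_s = P_c / np.linalg.norm(P_c, axis=1, keepdims=True)
    # 3. Perspective projection from a point shifted by xi along the z-axis.
    x = P_s[:, 0] / (P_s[:, 2] + xi)
    y = P_s[:, 1] / (P_s[:, 2] + xi)
    # 4. Radial (k1, k2) and tangential (p1, p2) distortion.
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    # 5. Intrinsic matrix K: focal lengths f_x, f_y, skew s, principal point.
    u = K[0, 0] * x_d + K[0, 1] * y_d + K[0, 2]
    v = K[1, 1] * y_d + K[1, 2]
    return np.stack([u, v], axis=1)
```

For $\xi = 0$ the model reduces to an ordinary pinhole camera with distortion, which is why a single formulation covers both the fisheye image and its perspective rectification.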
\subsubsection{Intrinsic Calibration} \label{sec: intrinsiccalibration} We estimate the intrinsic parameters $K$, $D$ and $\xi$ by collecting several images of a large checkerboard at different poses, and feeding those images to the omnidirectional camera calibrator in OpenCV\cite{opencv_library}, which provides an implementation\footnote{\url{https://docs.opencv.org/4.5.2/dd/d12/tutorial_omnidir_calib_main.html}} of the intrinsic calibration technique presented in \cite{boli}. \subsubsection{Rectification} \label{sec: perspectiverectification} The intrinsic camera calibration parameters are used to rectify the fisheye image (Figure \ref{fig: fisheyeimage}) into its corresponding perspective image (Figure \ref{fig: perspectifiedfisheyeimage}) utilizing OpenCV's rectification routines. Although perspective rectification results in a loss of field of view, it makes it possible to apply computer vision algorithms developed for perspective images to fisheye imagery. \subsection{Prior Map} \label{sec: priormap} \begin{figure*}[!ht] \centering \subfloat[Satellite Map]{\includegraphics[width=0.3\textwidth]{figures/satellite_image_new.png}\label{fig: satimage}} \quad \subfloat[LiDAR Map: Reflectivity of Ground Points]{\includegraphics[width=0.3\textwidth]{figures/intensity_image_new.png}\label{fig: intensityimage}} \quad \subfloat[LiDAR Map: Height of Ground Points]{\includegraphics[width=0.3\textwidth]{figures/height_image_new.png}\label{fig: zheightimage}} \caption{\textbf{Prior Map:} Figures \ref{fig: satimage}, \ref{fig: intensityimage} $\&$ \ref{fig: zheightimage} show the components of our prior map. The full map is not shown in the interest of space.} \label{fig: priormap} \end{figure*} The prior map (Figure \ref{fig: priormap}) consists of two important components which are registered and expressed in the frame of reference $\mathbf{W}$.
They are: \subsubsection{Satellite Map} \label{sec: satmap} The satellite map is a metric Bird's Eye View (BEV) satellite image (Figure \ref{fig: satimage}). In our case, a pixel on the image corresponds to 0.1 m on the ground. \subsubsection{LiDAR Map} \label{sec: LiDARmap} The prior LiDAR map is built using an offline mapping process described in \cite{Wolcott2014VisualLW}. Broadly, a survey vehicle equipped with several 3D LiDAR scanners and a high-end inertial navigation system is manually driven through the environment we want to map while sensor data is collected. Next, an offline pose-graph optimization SLAM (Simultaneous Localization and Mapping) problem is solved to obtain the accurate global pose of the vehicle. Finally, a dense ground point mesh is constructed from the optimized pose graph using region growing techniques, which yields a dense 3D point cloud map. The ground points from this dense 3D cloud are used to generate the LiDAR ground reflectivity image (Figure \ref{fig: intensityimage}) and ground height image (Figure \ref{fig: zheightimage}). The LiDAR Map (Figures \ref{fig: intensityimage} and \ref{fig: zheightimage}) is aligned with the satellite imagery (Figure \ref{fig: satimage}) using the GPS measurements from the inertial navigation system. \section{Problem Formulation} \label{sec: problemformulation} \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{figures/FisheyeRegistrationBlockDiagram.png} \caption{\textbf{Overview of the method:} The block diagram shows the two steps involved in our approach.
In Step 1, we match features between the perspective projection of the fisheye image and a cropped satellite map to initialize the camera pose, and in Step 2 we refine this initialization by maximization of Mutual Information between the fisheye image and the prior 3D-LiDAR map.} \label{fig: fisheyeregistrationblockdiagram} \vspace{-20pt} \end{figure*} The goal of this work is to estimate the unknown camera pose $[\mathbf{^{C}R_W}$, $\mathbf{^{C}t_W}]$ in the prior map frame $\mathbf{W}$. We assume that we have a noisy estimate of the fisheye camera's (GPS) position (no orientation) in the map, which helps us reduce the search space in the prior map. We follow a two step approach to register the camera in the prior map, the details of which are presented in Sections \ref{sec: initsparsefeaturematching} and \ref{sec: refinement_of_localization}, and a broad overview is provided in Figure \ref{fig: fisheyeregistrationblockdiagram}. \subsection{Initialization using sparse feature matching} \label{sec: initsparsefeaturematching} Traditional feature detection, description and matching techniques are usually suitable for perspective images only. Therefore, we rectify the fisheye image into the corresponding perspective image as explained in Section \ref{sec: perspectiverectification}, and use SuperGlue \cite{sarlin20superglue}, a pre-trained deep learning based feature matching algorithm, for matching features (Figure \ref{fig: featurematches}) between the rectified fisheye image and the cropped satellite image (cropped using the GPS initialization, see Figure \ref{fig: fisheyeregistrationblockdiagram}). The matched features are used to solve a Perspective-n-Point (PnP) problem \cite{P3P, RANSAC} to estimate the initial pose of the camera (also called the PnP estimate) in the prior map reference frame $\mathbf{W}$. As we know the metric scale of the satellite image (1 pixel = 0.1 m), we obtain the camera pose in metric units.
\begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{figures/superglue_matches.png} \caption{SuperGlue \cite{sarlin20superglue} Matching between Satellite Image (Left) $\&$ perspective projection of fisheye image (Right)} \label{fig: featurematches} \vspace{-15pt} \end{figure} \subsection{Refinement of camera localization using Maximization of Mutual Information} \label{sec: refinement_of_localization} Mutual Information (MI) has been used in several fields for registering data from multi-modal sensors \cite{paulviola, MaesMI}. We refine the initial camera pose estimate from Section \ref{sec: initsparsefeaturematching} by maximizing the Mutual Information (MI) between the LiDAR reflectivity of ground points and the fisheye grayscale values at the pixel locations onto which the LiDAR points are projected using the camera pose $[\mathbf{^{C}R_W}, \mathbf{^{C}t_W}]$. \subsubsection{Theory} MI (Equation \ref{eqn: mutualinformationdefinition}) provides a way to statistically measure the mutual dependence between two random variables $X$ and $Y$. \begin{equation} MI(X, Y) = H(X) + H(Y) - H(X, Y) \label{eqn: mutualinformationdefinition} \end{equation} where $H(X)$ and $H(Y)$ are the Shannon entropies of the random variables $X$ and $Y$ respectively, and $H(X, Y)$ is the joint Shannon entropy of the two random variables: \begin{align} H(X) &= - \sum_{x \in X} p_{X}(x) \log p_{X}(x) \label{eqn: marginalX}\\ H(Y) &= - \sum_{y \in Y} p_{Y}(y) \log p_{Y}(y) \label{eqn: marginalY}\\ H(X, Y) &= - \sum_{x \in X} \sum_{y \in Y} p_{X, Y}(x, y) \log p_{X, Y}(x, y) \label{eqn: jointXY} \end{align} The entropy $H(X)$ of a random variable $X$ denotes the amount of uncertainty in $X$, whereas $H(X, Y)$ is the amount of uncertainty when the random variables $X$ and $Y$ are co-observed.
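These entropies can be estimated directly from a normalized joint histogram of the two 8-bit intensity channels. A minimal sketch, assuming base-2 logarithms (entropy in bits; names are illustrative):

```python
import numpy as np

def mutual_information(X, Y, bins=256):
    """MI(X, Y) = H(X) + H(Y) - H(X, Y) for 8-bit samples X (LiDAR
    reflectivities) and Y (grayscale values), estimated from their
    normalized joint histogram."""
    joint, _, _ = np.histogram2d(X, Y, bins=bins, range=[[0, 256], [0, 256]])
    p_xy = joint / joint.sum()     # joint probability p_{X,Y}
    p_x = p_xy.sum(axis=1)         # marginal p_X
    p_y = p_xy.sum(axis=0)         # marginal p_Y

    def entropy(p):
        p = p[p > 0]               # convention: 0 log 0 = 0
        return -np.sum(p * np.log2(p))

    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
```

Refining the pose then amounts to re-projecting the map points for each candidate pose, resampling the grayscale values $Y$, and keeping the pose with the largest MI.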
The formulation of MI in Equation \ref{eqn: mutualinformationdefinition} shows that maximization of $MI(X, Y)$ is achieved by minimization of the joint entropy $H(X, Y)$, which coincides with minimization of the dispersion of the two random variables' joint histogram. \subsubsection{Mathematical Formulation} \label{sec: mathematicalformulation} Let $\{\mathbf{P_{W_{i}}}; i = 1,2, \dots, n\}$ be the set of 3D points whose coordinates are known in the prior map reference frame $\mathbf{W}$ and let $\{X_{i}; i = 1,2, \dots, n\}$ be the corresponding reflectivity values for these points ($X_i \in [0, 255]$). Equation \ref{eqn: fisheyeprojectionmodel} presents the relationship between $\mathbf{P_{W_{i}}}$ and its image projection $\mathbf{p_{i}}$ as a function of $[\mathbf{^{C}R_W}, \mathbf{^{C}t_W}]$. Let $\{Y_i; i = 1, 2, \dots, n\}$ be the grayscale intensities of the pixels $\mathbf{p_{i}}$ that the ${\mathbf{P_{W_{i}}}}$ project onto, such that: \begin{align} Y_i = I(\mathbf{p_{i}}) \end{align} where $Y_i \in [0, 255]$ and $I$ is the grayscale fisheye image. Therefore, $X_i$ is an observation of the random variable $X$, and for a given $[\mathbf{^{C}R_W}, \mathbf{^{C}t_W}]$, $Y_i$ is an observation of the random variable $Y$. The marginal ($p_{X}(x)$, $p_{Y}(y)$) and joint ($p_{X, Y}(x, y)$) probabilities of the random variables $X$ and $Y$, required for calculating MI (Equation \ref{eqn: mutualinformationdefinition}), can be estimated using a normalized histogram (Equation \ref{eqn: pdfdefinition}): \begin{align} \hat{p}(X = k) = \frac{x_k}{n}, k \in [0, 255] \label{eqn: pdfdefinition} \end{align} where $x_k$ is the observed count of the intensity value $k$. \subsubsection{Global Optimization} $\mathbf{^CR_W} \in SO(3)$ is an orthonormal rotation matrix which can be parameterized by the Euler angles $[\phi, \theta, \psi]^{\top}$ and $\mathbf{^{C}t_{W}} = [x, y, z]^{\top}$ is a Euclidean 3-vector. $\psi$ is the rotation of the camera about its principal axis.
In our context, the fisheye camera is facing vertically downward, so we do not refine $\phi$ and $\theta$ but leave them at the values determined by the feature matching based technique (Section \ref{sec: initsparsefeaturematching}), which are very close to 0. Therefore, as far as the rotation variables are concerned, we refine only $\psi$. We represent all the variables to be optimized together as $\Theta = [x, y, z, \psi]^{\top}$. The optimization is posed as a maximization problem: \begin{align} \hat{\Theta} = \argmax_{\Theta} MI(X, Y; \Theta) \label{eqn: globaloptimization} \end{align} \section{Experiments and Results} \label{sec: experimentsandresults} This section describes the experiments performed to evaluate the proposed technique using data obtained from both our simulator and a real world sensor. \subsection{Simulation Studies} \label{sec: simulationstudies} We first validate our approach on a simulator which is built using data from real sensors. The MathWorks tool RoadRunner \cite{roadrunner} is used to generate 2D features like lane geometry and lane markings with the satellite map used as a reference. The 3D structures are created using the Unreal Engine Editor \cite{unrealeditor} with the help of real satellite and 3D LiDAR maps. Since the simulated environment is created using the prior map components, it can safely be assumed that the simulator aligns with the real world to a high degree of accuracy. In order to generate the fisheye images, we model a fisheye camera in Unreal Engine using the equidistant model with a field of view of 180 degrees. We demonstrate our approach in simulation for the fisheye image shown in Figure \ref{fig: fisheyeimage}. The fisheye image is first rectified (Section \ref{sec: perspectiverectification}) to generate Figure \ref{fig: perspectifiedfisheyeimage}, which is used for estimating the initial camera pose using the approach in Section \ref{sec: initsparsefeaturematching}.
Next, the initialization is refined using maximization of MI (Section \ref{sec: refinement_of_localization}). \vspace{-20pt} \begin{figure}[!ht] \centering \subfloat[]{\includegraphics[width=0.225\textwidth]{figures/loc1_ht1_pnp_marked_cyan.pdf}\label{fig: loc1_ht1_pnp_marked}} \quad \subfloat[]{\includegraphics[width=0.225\textwidth]{figures/loc1_ht1_mi_cyan.pdf}\label{fig: loc1_ht1_mi}} \caption{Two step approach for camera localization in the prior map: Figure \ref{fig: loc1_ht1_pnp_marked} shows the projection of 3D-LiDAR ground points (cyan) using the initial estimate obtained using feature matching (PnP Estimate from Section \ref{sec: initsparsefeaturematching}), and Figure \ref{fig: loc1_ht1_mi} shows the projection of 3D-LiDAR ground points using the refined camera pose estimate from maximization of MI (from Section \ref{sec: refinement_of_localization}). The misalignment visible in Figure \ref{fig: loc1_ht1_pnp_marked} is absent in Figure \ref{fig: loc1_ht1_mi} (best viewed digitally).} \label{fig: improvementwithMIbasedlocalization} \vspace{-15pt} \end{figure} We qualitatively validate the camera localization estimate ($[\mathbf{^{C}R_W}, \mathbf{^{C}t_W}]$, Figure \ref{fig: improvementwithMIbasedlocalization}) by projecting points from the 3D-LiDAR map (Figures \ref{fig: intensityimage} and \ref{fig: zheightimage}) onto the fisheye image (Figure \ref{fig: fisheyeimage}). As shown in Figure \ref{fig: loc1_ht1_pnp_marked}, the projection of LiDAR map points on the fisheye image obtained using the initial camera pose is not well aligned. When we plot (Figure \ref{fig: xperturbatioMI}) the MI around the initial camera pose, we observe that it is not at its maximum at the initial estimate (also called the PnP Estimate), thus holding promise for further improvement. Similarly, Figure \ref{fig: costfunction2d} presents the surface plot of MI, which shows the presence of a global maximum in each sub-plot.
Therefore, on solving the optimization problem posed in Equation \ref{eqn: globaloptimization}, we obtain a camera pose estimate which maximizes the MI between the two modalities and results in negligible misalignment of the projected LiDAR map points in Figure \ref{fig: loc1_ht1_mi}. \begin{figure}[!ht] \centering \includegraphics[width = 0.4\textwidth]{figures/x_MI.png} \caption{Plot of MI around the PnP estimate (from Section \ref{sec: initsparsefeaturematching}). The plot shows that the MI is not at its maximum at the PnP estimate; therefore, the maximization of MI may reduce the misalignment in the projection of 3D-LiDAR points visible in Figure \ref{fig: loc1_ht1_pnp_marked}}\label{fig: xperturbatioMI} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.4\textwidth]{figures/costsurface.png} \caption{\textbf{Plot of MI by perturbing two degrees of freedom around the PnP Estimate} (i.e. the initial estimate from Section \ref{sec: initsparsefeaturematching}). The cost function from a single image-LiDAR scan pair is not differentiable at several points. Hence, we use an exhaustive grid search around the initialization point to arrive at a solution where the MI is maximized.} \label{fig: costfunction2d} \vspace{-10pt} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.4\textwidth]{figures/perturbation_collage.png} \caption{Performance of MI based fisheye camera localization refinement for different initial conditions. Here we perform 100 independent trials. \textcolor{red}{+} is the initialization, \textcolor{green}{*} is the final result.} \label{fig: mi_randominitializaton} \vspace{-10pt} \end{figure} We run 100 independent trials to evaluate the robustness of the MI based refinement method (Section \ref{sec: refinement_of_localization}) to changes in initialization (Figure \ref{fig: mi_randominitializaton}). The translation parameters show greater variance when compared to the yaw $\psi$.
The higher variance of the translation variables can be attributed to the fact that the MI based cost function is less sensitive to changes in the translation variables, especially in the outdoor scenario where most of the points lie in the far field. In the limiting case, far away points are considered to be points at infinity, represented as $[x, y, z, 0]^{T}$, which, under camera projection (Equation \ref{eqn: fisheyeprojectionmodel}), render the translation variable ($\mathbf{^{C}t_{W}}$) in the optimization problem (Equation \ref{eqn: globaloptimization}) unobservable. This result is also presented in \cite{MIGP}, specifically when discussing sensor registration in an outdoor environment using only a single image-LiDAR scan pair, which is similar to our situation. \subsection{Real World Experiments} In order to demonstrate the validity of our algorithm in realistic situations, we conducted experiments with data collected from a real fisheye camera (Figure \ref{fig: realfisheyedatacollection}). \subsubsection{System Description} We use a Fujinon FE185C057HA fisheye lens for the 2/3 inch sensor format, which provides $185^{\circ}$ of vertical and horizontal FoV. Our camera is a 5MP Sony IMX264. We mount our sensor on a tripod (Figure \ref{fig: realfisheyedatacollection}), looking vertically down, and capture images of the environment. Our ultimate goal is to mount these cameras at challenging intersections for navigation of autonomous vehicles, and use the proposed method to register them in a prior map. We use an iPhone to provide an approximate GPS location (without orientation) of the fisheye camera, which is used to limit the search space in the prior map. We use a high accuracy RTK-GPS (uBlox ZED F9P GNSS + uBlox antenna ANN-MB-00) unit to measure the GPS coordinates of distinctive corners on road markings that can be used for quantifying the accuracy of the camera localization (Figure \ref{fig: reprojectionErrors}).
\vspace{-10pt} \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{figures/real_fisheye_data_collection.pdf} \caption{Collecting data for real experiments using a tripod mounted downward looking fisheye camera. Our ultimate goal is to mount these cameras, along with our smart infrastructure nodes, at challenging intersections for navigation of autonomous vehicles.} \label{fig: realfisheyedatacollection} \vspace{-15pt} \end{figure} \subsubsection{Results} We present results with real world data from two different locations in Figure \ref{fig: misalignmentvsalignmentsimulateddata_rdloc}, which qualitatively demonstrate the incremental improvement in camera localization using our two step approach. While the projection of LiDAR points onto the fisheye image using the initial camera localization appears misaligned in Figures \ref{fig: misalignmentinprojection_rdloc1} and \ref{fig: misalignmentinprojection_rdloc2}, the misalignment is reduced when the LiDAR points are projected using the refined camera localization in Figures \ref{fig: properalignment_rdloc1} and \ref{fig: properalignment_rdloc2}. We quantify the accuracy of the camera localization by measuring the average reprojection error for points on the fisheye image whose GPS locations we have measured using the high accuracy RTK-GPS unit. We manually mark these points on the fisheye image, and measure the difference between them and the reprojection of the corresponding 3D point in the prior map onto the fisheye image, using the estimated camera localization. The results presented in Figure \ref{fig: reprojectionErrors} show that the reprojection error on the fisheye image plane reduces when we refine the initial camera localization by maximizing the mutual information.
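The error metric itself reduces to an average Euclidean pixel distance between the annotated and reprojected points; a minimal sketch (names are illustrative):

```python
import numpy as np

def mean_reprojection_error(marked_px, projected_px):
    """Average Euclidean distance (in pixels) between hand-annotated
    corner points on the fisheye image and the reprojections of their
    RTK-GPS surveyed 3D positions under the estimated camera pose."""
    marked = np.asarray(marked_px, dtype=float)
    projected = np.asarray(projected_px, dtype=float)
    return float(np.mean(np.linalg.norm(marked - projected, axis=1)))
```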
\begin{figure*}[!ht] \centering \subfloat[Location 1 - Projection of LiDAR points (cyan) using PnP Estimate]{\includegraphics[width=0.45\textwidth]{figures/pnp_loc1_cyan.pdf}\label{fig: misalignmentinprojection_rdloc1}} \quad \subfloat[Location 1 - Projection of LiDAR points (cyan) using MI Estimate]{\includegraphics[width=0.45\textwidth]{figures/max_mi_loc1_cyan.pdf}\label{fig: properalignment_rdloc1}} \subfloat[Location 2 - Projection of LiDAR points (cyan) using PnP Estimate]{\includegraphics[width=0.45\textwidth]{figures/pnp_loc2_cyan.pdf}\label{fig: misalignmentinprojection_rdloc2}} \quad \subfloat[Location 2 - Projection of LiDAR points (cyan) using MI Estimate]{\includegraphics[width=0.45\textwidth]{figures/max_mi_loc2_cyan.pdf}\label{fig: properalignment_rdloc2}} \caption{\textbf{Real World Experiments: }Projection of 3D-LiDAR ground points (cyan) onto the fisheye image using the initial camera pose (PnP Estimate) (Figures \ref{fig: misalignmentinprojection_rdloc1}, \ref{fig: misalignmentinprojection_rdloc2}) and the MI based refinement of the initial camera pose (Figures \ref{fig: properalignment_rdloc1}, \ref{fig: properalignment_rdloc2}).
The misalignment of LiDAR points visible in the highlighted areas in Figures \ref{fig: misalignmentinprojection_rdloc1} and \ref{fig: misalignmentinprojection_rdloc2} is reduced in Figures \ref{fig: properalignment_rdloc1} and \ref{fig: properalignment_rdloc2}} \label{fig: misalignmentvsalignmentsimulateddata_rdloc} \vspace{-15pt} \end{figure*} \begin{figure*}[!ht] \centering \subfloat[Location 1 - Average Reprojection Error with PnP = \textcolor{red}{20.52} pixel, and with Maximization of MI = \textcolor{blue}{9.12} pixel]{\includegraphics[width=0.45\textwidth]{figures/repErr_loc1.pdf}\label{fig: repErrLoc1}} \quad \subfloat[Location 2 - Average Reprojection Error with PnP = \textcolor{red}{26.40} pixel, and with Maximization of MI = \textcolor{blue}{13.11} pixel]{\includegraphics[width=0.45\textwidth]{figures/repErr_loc2.pdf}\label{fig: repErrLoc2}} \caption{\textcolor{green}{Green Circle} - Hand annotated corner point whose GPS location was measured using a high accuracy RTK-GPS unit, \textcolor{red}{Red Circle} - Projection of the corner point's position onto the Fisheye Image using the PnP estimate (Section \ref{sec: initsparsefeaturematching}), \textcolor{blue}{Blue Circle} - Projection of the corner point's position onto the Fisheye Image using the MI estimate (Section \ref{sec: refinement_of_localization}).} \label{fig: reprojectionErrors} \vspace{-20pt} \end{figure*} \section{Discussion $\&$ Conclusion} \label{sec: conclusion} We present an approach to localize a smart infrastructure node equipped with a fisheye camera. The downward facing fisheye image is registered to a prior map comprising a co-registered satellite image and a ground reflectivity/height map from LiDAR-SLAM. Our two-step approach uses feature matching between the rectified fisheye image and the satellite imagery to get an initial camera pose, followed by maximization of MI between the fisheye image and the 3D LiDAR map to refine the initial camera localization.
Since we have only a single camera image to register against (the smart infrastructure node is static), the cost surface may not always be smooth \cite{MIGP} and therefore not differentiable, leading to the failure of gradient descent methods. Hence, we use an exhaustive grid search method to find the optimal camera pose. Such a search may be time-consuming (depending on the number of 3D LiDAR map points used for calculating MI (Equation \ref{eqn: mutualinformationdefinition}), the interval of the exhaustive grid search and the available compute power), and is not suitable for real-time operation. This is acceptable for our application, because we need to localize the smart infrastructure node only once, at installation time, and this can be an offline process. Moreover, this method can be accelerated by the use of GPUs.
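The exhaustive grid search described above can be sketched as follows (a minimal illustration around the PnP initialization; the function name, search intervals and grid resolution are illustrative, not our exact implementation):

```python
import numpy as np
from itertools import product

def grid_search_pose(mi_of, theta0, deltas, steps=5):
    """Exhaustive grid search around the PnP initialization
    theta0 = [x, y, z, psi]. mi_of(theta) evaluates the MI cost for a
    candidate pose; deltas give the half-width of the search interval
    for each parameter."""
    axes = [np.linspace(t - d, t + d, steps) for t, d in zip(theta0, deltas)]
    best_theta, best_mi = None, -np.inf
    for theta in product(*axes):          # steps**4 candidate poses
        mi = mi_of(np.array(theta))
        if mi > best_mi:
            best_mi, best_theta = mi, np.array(theta)
    return best_theta, best_mi
```

Because every candidate pose is evaluated independently, this loop parallelizes trivially, which is what makes the GPU acceleration mentioned above straightforward.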
\section{Introduction} The human epidermal growth factor receptor 2~(HER2) amplification status is an important tumor marker in breast and gastric cancer. It indicates a more aggressive disease with a greater rate of recurrence and mortality. Hence, it influences the choice of an appropriate therapy~\cite{Mitri2012}. The amplification of HER2 is detected by assessing fluorescence \textit{in situ} hybridization~(FISH) images: Fluorescence signals are counted in at least 20 interphase nuclei from tumor regions and are then graded into a HER2 negative or positive status with a high or low amplification~\cite{Wolff2018}. To optimize the diagnostics in terms of speed, accuracy, objectivity and interpretability, we are developing a comprehensible, multi-step deep learning~(DL)-based pipeline~(\figureref{fig:flow}A). It mimics the pathologist's evaluation steps and integrates the decision processes into a report for transparency~(\figureref{fig:flow}A.6). The pipeline independently evaluates each nucleus twice by different networks, creating a second opinion. The pre-selection of nuclei reduces the risk of isolating overlapping nuclei parts~(e.g.~signals) or artifacts which could alter the nucleus-specific classification. The first component is an instance segmentation network designed for cell containing data sets, called StarDist~\cite{Schmidt2018}, to detect and extract all individual nuclei~(not only 20) from the entire FISH slide~(\figureref{fig:flow}A.1). The retrieved nuclei are then classified by a custom image classification convolutional neural network~(CNN) and a RetinaNet-based FISH signal detector system~\cite{Lin2017}~(\figureref{fig:flow}A.2 and 4). Visualizations, such as class activation maps~(CAMs)~\cite{Zhou2016}, display the decision making of the pipeline. \begin{figure}[htb] % % \floatconts {fig:flow} {\caption{\textbf{(A)}~Workflow for the automated evaluation of the HER2 gene amplification testing from FISH images.
\textbf{(B)}~Utilized architectures of the pipeline components. Adapted from \citet{Lin2017}; \citet{Schmidt2018}. \textbf{(C)}~Graphical user interface for interactive access to the pipeline results for pathologists.}} {\includegraphics[width=\linewidth]{figure/fig_1}} \end{figure} \section{Methods} Selected tumor samples for our data sets originated from breast cancer tissues preserved in formalin-fixed paraffin-embedded~(FFPE) blocks from clinical institutions all over Germany and were archived at the Institute of Pathology of the Carl Gustav Carus Hospital of TU Dresden. FISH slides were produced and digitized as described in \citet{Zakrzewski2019}. These routinely processed slides often display diverse artifacts, low signal-to-noise ratios and color disparities as a consequence of sample fixation and imaging. All pipeline components were implemented in Python 3 using the Keras framework~(version 2.2.4, \citet{keras}) and TensorFlow~(version 1.12.0, \citet{Tensorflow}) as back-end. Training was performed on a single Nvidia GeForce GTX 1080 Ti without pre-training. The data set for the StarDist-based\footnote{\url{https://github.com/mpicbg-csbd/stardist}}~\cite{Schmidt2018} \textit{Nucleus Detector}~(\figureref{fig:flow}B, first panel) comprises randomly chosen images from 62 FISH slides from 2015 to 2018~(representing 62 patients, $\sim$7,440 nuclei, JPEG format) of the size 1,600~$\times$~1,200~px. The nuclei were manually outlined by a pathologist using Fiji~\cite{FIJI} and were exported as label masks\footnote{~\url{https://forum.image.sc/t/creating-labeled-image-from-rois-in-the-roi-manager/4256/2}}. Training was performed for 250 epochs with 1,000 steps per epoch using standard configurations and a learning rate scheduler. A validation split of 0.15 was used and the images were scaled down to half their size to fit the nuclei dimensions to the receptive field of the architecture.
Additionally, augmentation operations including axis-aligned rotations and horizontal as well as vertical flips were utilized to increase the available number of training images. The test set included randomly chosen images~(1,600~$\times$~1,200~px) from 10 FISH slides~($\sim$810 nuclei). The \textit{Nucleus Classifier} is a VGG-like~\cite{VGG} customized CNN of five convolutional and five fully connected layers interleaved with max pooling and batch normalization~(\figureref{fig:flow}B, second panel). The Adam optimizer~\cite{Adam} and a weighted categorical cross entropy loss were used. The data set consists of 8,313 single nucleus images~(image dimensions varied around 80~$\times$~80~px) randomly extracted from the \textit{Nucleus Detector's} data set. These nucleus images were manually classified by a pathologist into artifact~(277 images), background~(6,439 images), HER2 normal expression~(HER2 negative, 224 images) and HER2 low~(HER2 positive, 362 images) or high amplification~(HER2 positive, 224 images). The CNN was trained for 300 epochs with a batch size of 16 due to GPU memory limitations. The validation split was 0.2~(39 images, $\sim$316 signals). Images were not scaled down but augmented using the same operations as mentioned above. The RetinaNet-based\footnote{~\url{https://github.com/fizyr/keras-retinanet}}~\cite{Lin2017} \textit{Signal Detector's}~(\figureref{fig:flow}B, third panel) data set is a subset of 397 single nucleus images ($\sim$5,955 signals) of the \textit{Nucleus Classifier's} set. The FISH signals within the nuclei were manually annotated by a pathologist into three classes: HER2~(single signal), HER2-Cluster~(indeterminate amount of signals) or chromosome enumeration probe 17~(CEP17; reference centromeric satellite DNA, single signal). The training was performed with a ResNet50~\cite{resnet} backbone and a validation split of 0.1 for 130 epochs using standard configurations.
The images were not scaled down but augmented with brightness and contrast changes in addition to the operations above. Network performances were estimated on validation and test sets using precision, recall and average precision~(AP) scores. \section{Results} The first component, called the \textit{Nucleus Detector}~(\figureref{fig:flow}A.1), was trained to detect interphase nuclei in the FISH image. The shape prediction of nuclei as star-convex polygons excludes classification-altering artifacts and overlapping nuclei parts. It achieves a precision score of 0.76 and a recall score of 0.65 on the test set, since the differentiation of small and adjacent nuclei~(often predicted as a single nucleus) remains challenging. The \textit{Nucleus Classifier}~(\figureref{fig:flow}A.2) was used to classify the extracted nuclei into artifact, background, HER2 normal expression and HER2 low or high amplification. The filter classes~(artifact, background) ensure that images with no or more than one nucleus will not be considered for the HER2 amplification testing. On the validation set, the network achieves a precision and recall score of 0.98. CAMs~(\figureref{fig:flow}A.2) are used to elucidate the classifications and demonstrate that the classes were recognized based on FISH signal presence and number. The third component, the \textit{Signal Detector}~(\figureref{fig:flow}A.4), was trained to localize and classify FISH signals within a single nucleus into HER2, HER2-cluster and CEP17. Thus, each nucleus is classified a second time~(second opinion). The bounding boxes provide details regarding the number and position of FISH signals per nucleus. The \textit{Signal Detector} achieves a mean AP of 0.73 on the validation set. The CEP17 signals~(AP: 0.94) are detected very well but the detection and distinction of HER2~(AP: 0.65) and HER2-cluster signals~(AP: 0.60) remains challenging.
Misclassifications mainly concern crowded HER2 and weak FISH signals, which are either not identified at all or are detected as multiple, differently classified boxes. The nucleus- and FISH image-wide HER2 amplification status is inferred via different ratios~(\figureref{fig:flow}A.3 and 5) and thresholds as described in \citet{Zakrzewski2019}. The classification thresholds can be adapted to the needs of any clinical purpose. All steps of the pipeline are documented in a report file, which can be used to evaluate the classifications made by the individual pipeline components~(\figureref{fig:flow}A.6 and 7). Our DL-based system is more advanced and more comprehensible than the previous version of \citet{Zakrzewski2019}, as it mimics the evaluation process of pathologists more closely and includes interpretability steps. Moreover, it is capable of processing low-quality FISH images characterized by high background noise, a large number of artifacts, a low signal-to-noise ratio, weak signals, major differences in nuclei shape or overlapping nuclei, all of which often occur in daily routine. \section{Conclusion} Our proposed pipeline is a step towards the development of a DL-based assisting tool for pathologists to cope with the growing number of cancer cases during clinical routine. It can be individually~(re-)trained on lab-specific images to reach optimal performance. While increasing the speed of the evaluation, the pipeline additionally enhances objectivity, provides the maximum amount of information for medical reports and can be applied to any FISH-based analysis~(e.g.~BCR/ABL, BCL/IGH fusions; MYC, BCL6, ALK translocations). To integrate such novel analysis requirements, it will be necessary to train our pipeline on the respective FISH protocol-specific images. Potential implementation into clinical routine will be achieved using an intuitive interface~(\figureref{fig:flow}C, web-based, currently work in progress) to enable usage on a variety of devices~(e.g., computer and tablet).
The interface needs to be as easy to use as possible, so that pathologists require a minimum of extra training. We are also optimizing our system to be applicable to whole slide images up to a size of $\text{100k}\times\text{100k}$~px or more. The source code for our pipeline can be found on GitLab\footnote{~\url{https://gitlab.com/Avya/deepher2}}. \input{paper.bbl} \end{document}
\section{Asset taxonomy} \label{appendix:a} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{Figures/Figure_Asset_Taxonomy.png} \end{figure} \section{Classification of cash (green) and bitcoin (orange)} \label{appendix:b} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{Figures/Figure_Classification_of_cash_and_bitcoin.png} \end{figure} \section{Classification of Ether} \label{appendix:c} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{Figures/Figure_Classification_of_ether.png} \end{figure} \section{Classification of a Crowdlitoken token} \label{appendix:d} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{Figures/Figure_Classification_of_crowdlitoken.png} \end{figure} \section{Classification of a CryptoKitties token} \label{appendix:e} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{Figures/Figure_Classification_of_cryptokitties.png} \end{figure} \section{Classification of a traditional share} \label{appendix:f} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{Figures/Figure_Classification_of_a_traditional_share.png} \end{figure} \section{Conclusion}\label{chap:Conclusion} Various classification frameworks for traditional and cryptographic assets already exist and are applied in practice. However, a universal approach linking the two worlds has not yet been developed. In this paper, we fill this research gap by proposing a taxonomy that extends existing classification frameworks. We identify 14 different attributes that are supported by the existing literature and by which each type of asset can be properly classified. These attributes include the claim structure, technology, underlying, consensus/validation mechanism, legal status, governance, information complexity, legal structure, information interface, total supply, issuance, redemption, transferability, and fungibility.
With the help of a morphological box, various possible characteristics that an asset can have are identified and assigned to these attributes. In this way, our taxonomy bridges the gap between physical, digital, and cryptographic assets, where sometimes the same asset can appear in all three forms, thus creating a clear terminology. Thanks to the methodical approach, the individual attributes can be expanded or broken down at any level of detail without changing the overall framework. The classification of selected assets, such as cash and bitcoin, has also shown that the proposed taxonomy is applicable in practice. In a next step, the robustness and practical relevance of the taxonomy could be further tested, for example by interviewing experts in the field. \section{Classification Examples}\label{chap:Classification} This section seeks to test the above-mentioned taxonomy with selected examples. First, the taxonomy is used to compare cash to bitcoin, as both are intended as means of payment\footnote{Bitcoin is often considered to be a store of value, but the original intention is to provide an alternative means of payment.}. This comparison is followed by the classification of Ether (a utility token), Crowdlitoken (an asset token), CryptoKitties, and a traditional share. \subsection{Comparison between Cash and Bitcoin} As both cash and bitcoin serve the purpose of a means of payment, the two assets share certain similarities (see Appendix~\ref{appendix:b}). Neither cash, in the case of a fiat money system, nor bitcoin has a direct underlying asset. The value of the two assets is rather based on the public's trust in the issuer of the currency or in the underlying technological protocol, respectively. There is also no oracle interface, i.e., no specific source that interacts with cash or bitcoin (e.g., by directly providing information).
Since both assets are designed as cash equivalents, their units are transferable from one party to another and individual units are interchangeable. Besides these commonalities, there are some significant differences. While cash represents a certain value which depends on the denomination, bitcoin is of a contractual type, as it is transferred via smart contracts written in Bitcoin Script, which is not Turing-complete. Bitcoin is furthermore not subject to any type of legal claim and has no legal structure. In contrast, cash is regulated as legal tender under national law. Since cash is of physical form, consensus on its state is given deterministically by the owner of the asset. Bitcoin, on the contrary, is a digital representation of value based on the distributed ledger technology. It is the native token of the Bitcoin blockchain, whose consensus is based on the proof-of-work mechanism; the finality of the system is thus not guaranteed but only probabilistic. This implies a decentralised governance of the asset, in contrast to the centralised governance of cash by central banks. Both assets also differ in terms of their total supply as well as in the way the number of outstanding units is managed. While the maximum supply of bitcoin is fixed at 21 million units, there is no such restriction for cash. The issuance of additional units of bitcoin is conditional on the mining of new blocks, and reducing the number of outstanding units is not possible\footnote{It is possible to send units of bitcoin, or other cryptographic assets, to an address without a known private key, so that these units are no longer accessible. However, this does not reduce the number of total units in the system.}.
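The fixed cap of (slightly under) 21 million units follows directly from Bitcoin's issuance schedule: an initial block subsidy of 50 BTC that halves every 210,000 blocks. The arithmetic can be checked in a few lines; the constants are the commonly documented protocol parameters, and the function name is illustrative.

```python
def bitcoin_supply_cap(initial_subsidy_sat=50 * 100_000_000,
                       halving_interval=210_000):
    """Sum the geometric issuance schedule in satoshis (integer halvings)."""
    total, subsidy = 0, initial_subsidy_sat
    while subsidy > 0:
        total += subsidy * halving_interval
        subsidy //= 2  # the block subsidy halves every 210,000 blocks
    return total / 100_000_000  # convert back to BTC

print(bitcoin_supply_cap())  # just under 21 million BTC
```

Because the subsidy is tracked in integer satoshis and rounds down at each halving, the series terminates and the cap stays marginally below 21 million.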
The issuance and redemption of cash, on the contrary, is handled flexibly by central banks. \subsection{Ether} Ether (see Appendix~\ref{appendix:c}), which is classed as a utility token, is the native token of the Turing-complete Ethereum platform, which is governed by the Ethereum Foundation located in the Crypto Valley. The token itself is unregulated. Although multiple decentralised systems exist which can act as a quantitative oracle interface for the platform, there are no legal claims and no underlyings associated with the token. Consensus on the Ethereum platform is, at the time of writing, achieved based on the proof-of-work mechanism, and is therefore of a probabilistic nature. As a consequence, the governance of the token is decentralised. As with bitcoin, the issuance of Ether tokens is conditional on the creation of new blocks, i.e., miners are awarded newly mined units, and the destruction of existing units is not possible. However, the total supply of Ether is currently not limited. All Ether tokens are transferable between parties and are fungible. \subsection{Crowdlitoken} Crowdlitokens (see Appendix~\ref{appendix:d}) are classed as asset tokens and are tokenised real estate bonds, regulated under existing law. They are issued on the Ethereum blockchain under the ERC-20 standard and represent a contract including fixed claims (e.g., voting and interest payment). The token value is derived from the fundamental value of the issuing company, and only indirectly from its real estate portfolio. Due to the underlying distributed ledger technology, consensus on the state of the tokens is not final but only probabilistic. Crowdlitokens are structured as notes/bonds. They are governed in a centralised manner through a qualitative oracle interface, since token holders are allowed to vote on changes proposed by the management.
They can be issued and burnt (e.g., through token buybacks) flexibly by the corresponding company, implying a flexible token supply. The Crowdlitoken is both transferable and fungible, whereby only persons who have successfully completed the KYC/AML audits can subscribe to the bonds and exercise all rights relating to them. \subsection{CryptoKitties} CryptoKitties (see Appendix~\ref{appendix:e}), as the last example from the crypto space, are collectible digital representations of cats created on the Ethereum blockchain. The corresponding smart contracts can generate over four billion variations of phenotypes and genotypes (CryptoKitties, 2019). CryptoKitties neither represent claims against a counterparty, nor a specific underlying. They are non-fungible (every cat is unique) but transferable ERC-721 tokens, without any regulatory or legal governance. Although the front-end, a traditional web app, is managed by the development team, the token's governance, e.g., ownership, is decentralised. Since consensus of the underlying Ethereum protocol is reached via a proof-of-work mechanism, the finality of the state of a CryptoKitties token is probabilistic. Also, there is no oracle interface related to CryptoKitties tokens. Additional units are created by breeding two CryptoKitties, resulting in a new unique kitty represented by a newly issued unique token, while destroying a unit is not possible. The corresponding smart contract allows for a total limit of around four billion cats that can be bred, implying a fixed total supply. \subsection{Traditional Share} Traditional shares (see Appendix~\ref{appendix:f}), as an example from traditional finance, are either physical or digital in nature and represent a contract including fixed claims (e.g., voting and/or profit participation) against a counterparty, with the fundamental value of the issuing company also representing the underlying of the asset.
Shares, as a legal form, are governed in a centralised manner and are subject to existing law (e.g., national corporate law), with the general assembly of shareholders being the supreme organ of a stock corporation, i.e., acting as a qualitative oracle interface. Consensus on the state of a share is deterministically given by the share registry. The creation of new shares as well as the reduction in share capital, for example through share buybacks, is left to the general assembly of the corporation. As a consequence, the total supply of traditional shares is flexible. Shares are typically transferable, with exceptions such as restricted shares, and fungible, i.e., substitutable with other shares of the same company. \section{Introduction} Since the inception of the Bitcoin network in the year 2009, the space of cryptographic assets has developed rapidly. The continuing technological innovation in the underlying distributed ledger technology could consequently lead to an increasing transformation of traditional financial markets into crypto-based markets. Although different asset classification frameworks exist for both worlds, a holistic approach merging traditional finance and the crypto economy is still lacking. This poses a challenge to the various stakeholders, such as investors or regulators, in retaining an overview of existing assets of different types and, in particular, of their design and individual characteristics. To fill this research gap, we propose a taxonomy for the systematic classification of all types of assets, be they of a physical, digital or tokenised nature. \section{Literature Review}\label{chap:Literature Review} The characteristics and properties of the most common types of financial instruments, such as stocks, bonds, and derivatives, have been the subject of research for some time, not only in academia but also in industry.
Therefore, a wide range of publications exists dealing with the functioning of these different instruments in a structured way. \newline One framework defining the structure and format for the classification of financial instruments (CFI) was first proposed by the International Organization for Standardization (ISO) in the year 1997. The last revised version of the framework is called ISO 10962:2019 and was published by ISO in 2019. It seeks to provide a standard for identifying the type of financial instrument and its main high-level features in the form of specific codes consisting of six alphabetical characters, and should thus help to standardise country- and institution-specific terminology in relation to financial instruments \cite{iso2019}. The first character of the CFI code indicates the main category of financial instruments. These include equities, collective investment vehicles, debt instruments, different types of derivatives, and others.\footnote{For a detailed description of each category, see \cite{iso2019}.} The second character of the CFI code indicates multiple subclasses in a given main category, called groups. Equities, for example, are divided into the groups common/ordinary shares, preferred/preference shares, and common/ordinary convertible shares, among other groups. The last four characters of the CFI code define the specific attributes of a financial instrument and depend on the group to which the asset is allocated. For financial instruments in the group common/ordinary shares from the ``equities'' main category, relevant attributes include voting rights, ownership, payment status, and form. These attributes come with predefined possible values that determine the final code of a financial instrument \cite{iso2019}. For other groups, such as bonds from the ``debt instruments'' category, alternative attributes, e.g., the type of interest or guarantee, are of relevance.
\newline A second framework for classifying financial instruments is proposed by Brammertz and Mendelowitz \cite{brammertz2018digital}. Their so-called ACTUS taxonomy is based on the specific nature of financial contracts, in particular on their cash flow profiles, and seeks to create a global standard for the consistent representation of financial instruments. It distinguishes between financial contracts, which in turn are split into the subcategories of basic contracts and combined/derivative contracts on the one hand, and credit enhancement on the other. Basic contracts consist of fixed income and index-based products, whereas combined/derivative contracts comprise symmetric financial products, options, and securitisation products. The second main category of the ACTUS taxonomy, i.e., credit enhancement, includes guarantee contracts, collateral contracts, margining contracts, and repurchase agreements. The standard is implemented on the SolitX platform with a technical API layer and DLT adapter for transaction systems and accounting, and in the AnalytX architecture for risk management analysis, simulations, asset and liability management, and business planning \cite{Swisscom2019}. \newline The standards proposed by \cite{iso2019} and \cite{brammertz2018digital} show that sophisticated classification frameworks for traditional financial assets exist and are used in practice. For cryptographic assets, on the contrary, the characteristics of many tokens in various respects, for example in terms of regulation, utility or valuation, have been, and still are, largely ambiguous and hard to measure. Several initiatives from governments, academia, and industry have sought to reduce these uncertainties by systematically structuring the hundreds of existing tokens based on predefined criteria.
\newline The Swiss Financial Market Supervisory Authority (FINMA), for example, issued guidelines for enquiries regarding the regulatory framework for initial coin offerings in early 2018, in which it distinguishes between three types of tokens, i.e., payment tokens, utility tokens, and asset tokens, based on the underlying economic purpose \cite{FINMA2020}. Whether a particular token is a financial instrument and thus would be subject to certain laws and regulations depends on its economic function and the rights associated with it. Other jurisdictions, such as the European Union, Israel, Malta, and the United Kingdom, follow a similar classification approach, although their terminologies differ to some extent \cite{blandin2019global}. Additionally, some jurisdictions follow the approach that the three main types of tokens are not necessarily mutually exclusive. Rather, there are also hybrid forms that share characteristics of two or three main types. Accordingly, particular cryptographic assets could, for example, have certain characteristics of both payment and utility tokens. \newline In April 2019, the U.S. Securities and Exchange Commission (SEC), through its strategic hub for financial innovation, FinHub, published guidelines to determine whether a digital asset, which may be a cryptographic asset, is an investment contract, i.e., an agreement whereby one party invests money in a common enterprise with the expectation of receiving a return on investment. This assessment is done by applying the so-called Howey test. If an investment agreement exists, the digital asset is classified as a security, and therefore U.S. federal securities laws apply and must be considered by issuers and other parties involved in, for example, the marketing, offering, sale, resale or distribution of the respective asset \cite{sec2019}.
Other jurisdictions, e.g., Ireland, follow a similar approach of classifying cryptographic assets based on their qualification as a security \cite{lawlibrary2019}. However, the Howey test is to be understood less as a classification framework and more as a decision aid as to whether a cryptographic asset represents a security or not. \newline An academically based classification framework for cryptographic assets, which goes beyond the legal perspective and also takes technological and economic aspects, among others, into account, was developed by Oliveira et al. \cite{oliveira2018token}. By applying a design science research approach, including 16 interviews with representatives of projects with blockchain-based token systems, the paper derives a token classification framework for cryptographic assets that can be used as a tool for better informed decision making when using tokens in blockchain applications. Their final classification framework consists of the 13 attributes class, function, role, representation, supply, incentive system, transactions, ownership, burnability, expirability, fungibility, layer, and chain, each of which includes a set of defined characteristics. \newline A similar framework was developed by Ballandies et al. \cite{Ballandies2018}. The authors established a classification framework for distributed ledger systems consisting of a total of 19 descriptive and quantitative attributes with four dimensions (distributed ledger, token, action, and type). The attributes comprise the distributed ledger type, origin, address traceability, Turing completeness, and storage in the distributed ledger dimension; underlying, unconditional creation, conditional creation, transferability, burn, and supply in the token dimension; action fee, read permission, and actor permission in the action dimension; and fee, validate permission, write permission, proof, and type in the consensus dimension.
The framework was derived from feedback from the blockchain community. \newline Three further classification frameworks for cryptographic assets that were strongly driven by the industry are those proposed by the consulting firm MME, the International Token Standardization Association (ITSA), and the Ethereum Enterprise Alliance (EEA). \newline The framework by MME was published in May 2018 and focuses on the legal properties and risk assessment of cryptographic assets. The paper’s resulting classification is based on a token’s function or main use, alongside other criteria such as the existence of a counterparty, as well as its type and/or the underlying asset or value. The final archetypes of cryptographic assets are native utility tokens, counterparty tokens, and ownership tokens, which are each subject to additional subcategories of token types \cite{mueller2018conceptual}. \newline The International Token Classification (ITC) framework by the ITSA comprises an economic, technological, legal, and regulatory vertical, each containing a set of subdimensions with different attributes. The economic and technological verticals include three subdimensions each, which refer to a token’s economic purpose, its target industry, and the way of distribution, and to the technological setup, consensus mechanism, and technological functionality, respectively. The legal vertical includes the two subdimensions legal claim and issuer type, whereas the regulatory vertical focuses on assessing a token’s regulatory status in the US, China, Germany, and Switzerland. Across all verticals, a total of twelve subdimensions are defined, though the ITSA plans to define further subdimensions in the future. Concerning the evaluation of these individual subdimensions, as of September 2019, the ITC framework already provided detailed information on four of the twelve subdimensions, namely the economic purpose, industry, technological setup, and legal claim.
The classification into these four subdimensions was compiled in a database covering more than 800 cryptographic tokens. Besides the classification framework and the corresponding database, the ITSA also introduced a nine-digit unambiguous identifier for each token, the so-called International Token Identification Number, or ITIN for short \cite{ITSA}. \newline The third industry-driven framework for classifying cryptographic tokens was published by the EEA in November 2019. Their proposed Token Taxonomy Initiative (TTI) distinguishes between five characteristics a token can possess. The first characteristic is the token type and refers to whether a token is fungible or non-fungible. The second characteristic, the token unit, distinguishes between the attributes of being fractional, whole or singleton and indicates whether a token is subdivisible or not. The value type, as the third characteristic, can assume the attribute of either an intrinsic value, i.e., the token itself is of value (e.g., bitcoin), or a reference value, i.e., the token’s value is referenced elsewhere (e.g., tokenised real estate). Characteristic four, the representation type, comprises the attributes of being common or unique. Common tokens, on the one hand, share a single set of properties, are not distinct from one another and are recorded in a central place. Unique tokens, on the other hand, have unique properties and their own identity, and can be traced individually. The fifth and last characteristic is the template type, which classifies tokens as either single or hybrid and refers to any parent/child relationships or dependencies between tokens. Unlike single tokens, hybrid tokens combine parent and child tokens in order to model different use cases. In addition, the TTI provides measures in order to promote interoperability standards between different blockchain implementations \cite{EEAEEA2019}.
\section{The (Crypto) Asset Taxonomy}\label{chap:The (Crypto) Asset Taxonomy} Building on the literature review in Chapter \ref{chap:Literature Review}, this chapter proposes a holistic framework for the classification of assets. Unlike existing classification frameworks, our asset taxonomy aims to classify all existing types of assets, i.e., assets from both traditional finance and the crypto economy, based on their formal characteristics. Furthermore, the taxonomy introduces a terminology that is suitable for both traditional and crypto assets. A morphological box is chosen as the methodological approach in order to take the multi-dimensionality of the matter into account. The taxonomy is illustrated in Appendix~\ref{appendix:a}. In total, we identify 14 different attributes based on which all types of assets can be classified. They include claim structure, technology, underlying, consensus/validation mechanism, legal status, governance, information complexity, legal structure, information interface, total supply, issuance, redemption, transferability, and fungibility, with each attribute comprising a set of at least two characteristics. Note that certain attributes in the frameworks discussed in Chapter \ref{chap:Literature Review} subsume some of the attributes presented here. Hence, our 14 attributes factorize these superordinate attributes to make them universally applicable. Table \ref{tab:table1} breaks down the 14 attributes in terms of their inclusion in the publications discussed in Chapter \ref{chap:Literature Review}. The first column shows the attribute labels of the taxonomy we propose. Columns two to nine refer to the publications discussed, where an ``x'' indicates that the corresponding attribute is either explicitly or implicitly considered in the classification framework named in the header row.
Note that the terminology regarding a particular attribute differs across these publications, for example, because they focus on different types of assets. The terminology we propose generalises these terms to ensure compatibility across all types of assets, thus creating a common linguistic understanding. Also note that due to the extension of the taxonomy to traditional assets, some DLT-specific attributes/characteristics in the publications discussed are summarised or generalised, while new attributes/characteristics were added in order to enable the mapping of traditional asset types. Overall, Table \ref{tab:table1} shows that each of the existing frameworks covers certain attributes determined by the specific focus or objective of the publication. The framework of FINMA \cite{FINMA2020}, for example, focuses on regulatory aspects, and thus predominantly includes corresponding attributes, i.e., claim structure, legal status, and legal structure. Other frameworks, for example the one published by the EEA \cite{EEAEEA2019}, focus more on technological aspects or the design of token features. Overall, none of the frameworks discussed in Chapter \ref{chap:Literature Review} covers the full range of formal attributes identified in our taxonomy. However, our taxonomy is generally confirmed by the existing literature, as each attribute is considered in at least one of the existing classification frameworks. The degree of agreement with the classification framework we propose varies, however. While the publication of ISO \cite{iso2019} covers four attributes of our taxonomy, the publications of Oliveira et al. \cite{oliveira2018token} and Ballandies et al. \cite{Ballandies2018} cover ten. There are also differences in coverage from an attribute perspective. While the underlying of an asset is of relevance in all frameworks analysed, the attributes information interface and fungibility are only covered by two. 
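To make the morphological box concrete, the attributes and their admissible characteristics can be written down as a simple lookup structure. The sketch below is illustrative only: it abbreviates the taxonomy to a subset of the 14 attributes, and the bitcoin profile merely restates the classification from the comparison with cash.

```python
# Illustrative subset of the morphological box: each attribute maps to
# its admissible characteristics (abbreviated; see Appendix A for the
# full taxonomy).
TAXONOMY = {
    "claim_structure": {"no claim", "flexible claim", "fixed claim"},
    "technology": {"physical", "digital", "distributed ledger technology"},
    "underlying": {"no underlying", "company", "bankable asset",
                   "cryptographic asset", "tangible asset", "contract"},
    "consensus_mechanism": {"instant finality", "probabilistic finality"},
    "legal_status": {"regulated", "unregulated"},
    "governance": {"centralised", "decentralised"},
    "information_complexity": {"value", "contract", "Turing completeness"},
    "total_supply": {"fixed", "conditional", "flexible"},
    "issuance": {"once", "conditional", "flexible"},
    "transferability": {"transferable", "non-transferable"},
    "fungibility": {"fungible", "non-fungible"},
}

def validate(profile):
    """Check that a classification picks one admissible characteristic
    per attribute of the (abbreviated) taxonomy."""
    return all(profile.get(a) in chars for a, chars in TAXONOMY.items())

# Bitcoin, as classified in the comparison with cash.
bitcoin = {
    "claim_structure": "no claim",
    "technology": "distributed ledger technology",
    "underlying": "no underlying",
    "consensus_mechanism": "probabilistic finality",
    "legal_status": "unregulated",
    "governance": "decentralised",
    "information_complexity": "contract",
    "total_supply": "fixed",
    "issuance": "conditional",
    "transferability": "transferable",
    "fungibility": "fungible",
}
```

A profile that picks a characteristic outside the morphological box fails `validate`, which is precisely the kind of consistency check the taxonomy enables.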
The taxonomy we propose therefore goes further than the existing classification frameworks, firstly because it is independent of the type of assets to be classified and secondly because it contains additional attributes and characteristics. Since some of these attributes and characteristics are not intuitively clear, they are explained in more detail in the following:\\ \begin{table*} \caption{Coverage of the 14 attributes in existing classification frameworks} \label{tab:table1} \centering \begin{tabularx}{\textwidth}{@{}l*{8}{C}@{}} \toprule Attribute & ISO~\cite{iso2019} & B.\&M.~\cite{brammertz2018digital} & FINMA~\cite{FINMA2020} & O. et al.~\cite{oliveira2018token} & B. et al.~\cite{Ballandies2018} & MME~\cite{mueller2018conceptual} & ITSA~\cite{ITSA} & EEA~\cite{EEAEEA2019} \\ \midrule Claim structure & x & x & x & x & & x & x & \\ Technology & & & x & x & x & x & x & x \\ Underlying & x & x & x & x & x & x & x & x \\ Consensus/Validation mechanism & & & & & x & x & x & \\ Legal status & & & x & x & & x & x & \\ Governance & & & & x & x & x & & \\ Information complexity & x & x & & & x & & & \\ Legal structure & x & x & x & & & x & & \\ Information interface & & & & & x & & & x \\ Total supply & & x & & x & x & & & x \\ Issuance & & x & & x & x & & & x \\ Redemption & & x & & x & x & & & x \\ Transferability & & & x & x & x & x & & x \\ Fungibility & & & & x & & & & x \\ \bottomrule \end{tabularx} \end{table*} \textbf{Claim structure:} Does the asset represent a claim, i.e., a demand for something due or believed to be due \cite{MerriamWebster}? \begin{itemize}[label={--}] \itemsep0em \item No claim(s): The asset does not represent any kind of claim. \item Flexible claim(s): The asset represents certain claims, the possession or exercise of which can depend on certain conditions (e.g., catastrophe bonds).
\item Fixed claim(s): The asset represents claims which can neither be restricted nor restrained under any condition (e.g., fixed income). \end{itemize} \bigskip \textbf{Technology:} Which technology is the asset based on? \begin{itemize}[label={--}] \itemsep0em \item Physical: The asset exists in a physical form (e.g., gold bullion). \item Digital: The asset exists in a digital form, but is not based on the distributed ledger technology (e.g., electronic share). \item Distributed ledger technology: The asset is based on the distributed ledger technology, structured either as a native token, i.e., a token that is native to a specific blockchain, or as a protocol token, i.e., a token issued on an existing blockchain protocol \cite{oliveira2018token} such as, for example, ERC-20 or ERC-721 tokens for the Ethereum blockchain. \end{itemize} \bigskip \textbf{Underlying:} Which underlying or collateral is the asset’s value based on? \begin{itemize}[label={--}] \itemsep0em \item No underlying: The asset’s value is not a derivative of an underlying asset (e.g., bitcoin). \item Company: The asset’s value represents a stake in a company (e.g., equity). \item Bankable asset: The asset’s value represents a bankable asset, i.e., an asset that can be deposited in a bank or custody account (e.g., fiat currencies). \item Cryptographic asset: The asset’s value represents a cryptographic asset, i.e., an asset based on the distributed ledger technology (e.g., derivative of a cryptographic asset). \item Tangible asset: The asset’s value represents a tangible asset, i.e., an asset in a physical form \cite{Kenton2019} (e.g., real estate). \item Contract: The asset’s value represents a contract (e.g., license agreement). \end{itemize} \bigskip \textbf{Consensus/Validation mechanism:} How is the agreement on the finality (e.g., property rights or ownership transfer) of the asset reached? \begin{itemize}[label={--}] \item Instant finality: Consensus is final.
Mechanisms that typically, but not necessarily, belong to the deterministic type are, for example, notary services or qualified written form. \item Probabilistic finality: Consensus is not final, but reached with a certain level of confidence. Mechanisms that typically, but not necessarily, belong to the probabilistic type are, for example, proof-of-work or proof-of-stake. \end{itemize} \bigskip \textbf{Legal status:} What is the regulatory framework governing the asset? \begin{itemize}[label={--}] \itemsep0em \item Regulated: There are regulatory requirements for the issuance, redemption and governance of the asset. \item Unregulated: There is no specific regulatory framework for the issuance, redemption and governance of the asset. \end{itemize} \bigskip \textbf{Governance:} In which way is the asset governed? \begin{itemize}[label={--}] \itemsep0em \item Centralised: The asset is governed by an authoritative party or consortium. \item Decentralised: The asset is governed without centralised control (e.g., certain types of cryptographic assets such as bitcoin). \end{itemize} \bigskip \textbf{Information complexity:\footnote{Note that the characteristics of this attribute build on each other, i.e., each characteristic contains additional information compared to the previous one.}} What type of information complexity is associated with the asset? \begin{itemize}[label={--}] \itemsep0em \item Value: The asset represents a specific value (e.g., currencies). \item Contract: The asset encompasses conditional information in addition to its value (e.g., coupon bonds or DLT-based smart contracts\footnote{Note that such (smart) contracts, as in the case of bitcoin, are not necessarily based on a Turing-complete system.}). \item Turing completeness: The asset is based on a Turing-complete («universally programmable») computational model (e.g., Ethereum). \end{itemize} \bigskip \textbf{Legal structure:} What is the legal form of the asset? 
\begin{itemize}[label={--}] \itemsep0em \item No legal structure: There is no legal structure governing the asset. \item Foundation: The asset is governed by a foundation/trust structure. \item Note/bond: The asset is structured as a note or bond. \item Share: The asset is structured as a share. \item Other\footnote{The characteristic ``Other'' subsumes the broad range of alternative legal structures for reasons of simplicity and practicability.}: The asset has an alternative legal structure (e.g., central bank money). \end{itemize} \bigskip \textbf{Information interface:} How does the asset receive and/or send relevant information? \begin{itemize}[label={--}] \itemsep0em \item No interface: The asset has no kind of information interface. \item Qualitative: The asset manages relevant information indirectly through an authorised instance (e.g., general assembly). \item Quantitative: The asset manages relevant information from authorised sources automatically (e.g., IoT sources or oracle interfaces in the case of DLT-based smart contracts). \end{itemize} \bigskip \textbf{Total supply:} To which limit can the asset be generated? \begin{itemize}[label={--}] \itemsep0em \item Fixed: The total supply of the asset is fixed. \item Conditional: The total supply of the asset depends on predefined conditions. \item Flexible: The total supply of the asset is managed flexibly by authorised parties. \end{itemize} \bigskip \textbf{Issuance:} How is the asset generated? \begin{itemize}[label={--}] \itemsep0em \item Once: After an initial issuance, no additional units of the asset are issued. \item Conditional: Additional units of the asset are issued once predefined conditions are met (e.g., newly issued cryptographic assets through mining). \item Flexible: Additional units of the asset can be issued flexibly by authorised parties (e.g., increase in share capital). \end{itemize} \bigskip \textbf{Redemption:} How is the number of outstanding assets reduced?
\begin{itemize}[label={--}] \itemsep0em \item No redemption: The number of outstanding assets cannot be reduced. \item Fixed: The reduction of the number of outstanding assets follows a predefined protocol. \item Conditional: The reduction of the number of outstanding assets is initiated once predefined conditions are met. \item Flexible: The reduction of the number of outstanding assets can be carried out flexibly by authorised parties (e.g., share buyback). \end{itemize} \bigskip \textbf{Transferability:} Can the asset’s ownership be transferred to another party? \begin{itemize}[label={--}] \itemsep0em \item Transferable: The asset’s ownership can be transferred to another party. \item Non-transferable: The asset’s ownership cannot be transferred to another party, for example, by sale or giveaway (e.g., some types of registered securities). \end{itemize} \bigskip \textbf{Fungibility:} Can the asset be interchanged with another asset of the same type? \begin{itemize}[label={--}] \itemsep0em \item Fungible: The asset is substitutable with another asset of the same type. \item Non-fungible: The asset is not substitutable with another asset of the same type (e.g., artwork). \end{itemize}
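For machine-readable use, the taxonomy above can be encoded as a simple classification record. A minimal sketch in Python covering three of the 14 attributes plus the two binary ones (the class and field names are our own illustration, not part of the taxonomy; the remaining attributes would be encoded the same way):

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative enumerations for three of the 14 attributes.
class ClaimStructure(Enum):
    NO_CLAIM = "no claim(s)"
    FLEXIBLE = "flexible claim(s)"
    FIXED = "fixed claim(s)"

class Technology(Enum):
    PHYSICAL = "physical"
    DIGITAL = "digital"
    DLT = "distributed ledger technology"

class Governance(Enum):
    CENTRALISED = "centralised"
    DECENTRALISED = "decentralised"

@dataclass(frozen=True)
class AssetClassification:
    """One record per asset; one field per taxonomy attribute."""
    name: str
    claim_structure: ClaimStructure
    technology: Technology
    governance: Governance
    transferable: bool
    fungible: bool

# Example: bitcoin as characterised in the text (no claim, DLT-based,
# decentralised governance, transferable, fungible).
bitcoin = AssetClassification(
    name="bitcoin",
    claim_structure=ClaimStructure.NO_CLAIM,
    technology=Technology.DLT,
    governance=Governance.DECENTRALISED,
    transferable=True,
    fungible=True,
)
```

An instrument is then classified by instantiating one such record per asset, which makes side-by-side comparison across the frameworks of Table~\ref{tab:table1} straightforward.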
\section{Introduction} \label{intro} Understanding the acceleration of atomic nuclei to macroscopic energies $\gtrsim50\,{\rm J}$ is one of the great challenges of modern astrophysics. These Ultra High Energy Cosmic Ray (UHECR) accelerators operate undercover because the Universe is filled with electromagnetic and hydromagnetic waves that scatter and attenuate the cosmic radiation, making direct source identification nearly impossible. On the other hand, these interactions lie at the heart of cosmic-ray science because they are intimately involved in most acceleration models. They also lead to the production of secondary messengers -- $\gamma$-rays and neutrinos -- signatures of the leptonic and hadronic processes at play in cosmic accelerators. The modeling of these interaction processes connects particle physics to multimessenger astronomy, which can reveal the UHECR sources indirectly. $\gamma$-rays and X-rays can be produced by both hadronic and leptonic processes, and a multimessenger effort is under way to detect high energy neutrinos as messengers of hadronic accelerators. In this brief review, we summarize direct and general observational constraints on UHECR sources (Sec.~2). In Sec.~3, we discuss three general UHECR acceleration sites: magnetospheres, jets and non-relativistic shocks. Magnetospheres are mostly associated with compact objects -- spinning neutron stars and black holes -- and can plausibly accelerate cosmic rays to the observed energies. The challenge is to explain how the cosmic rays can escape without loss. Jet models for UHECR acceleration are mostly associated with the black hole jets of Gamma-Ray Bursts (GRBs) and Active Galactic Nuclei (AGN). Here the challenge is to identify sources within the UHECR horizon, as such sources are rare. Shock models are most challenged by the need to accelerate cosmic rays to the highest energies measured.
A hierarchical approach in which the entire observed spectrum originates successively at SuperNova Remnant (SNR) shocks, Galactic Wind Termination Shocks (GWTS), and at Cluster Accretion Shocks (CAS) is outlined. The opportunity for deciding between these and other possible acceleration sites in the near future is discussed in Sec.~4. \section{Summary of Observational Constraints on the origin of UHECR} Using observations made with the Telescope Array (TA)~\citep{2008NuPhS.175..221K} and the Pierre Auger Observatory (PAO)~\citep{PierreAuger:2004naf} over the past decades, it has been established that nuclei dominate the spectrum already at energies $\sim5$~EeV. Both experiments confirm the existence of an ``ankle'' in the cosmic-ray spectrum at $\sim5$~EeV. Both experiments agree that the observed spectrum starts to be strongly suppressed at energies $\gtrsim50\,{\rm EeV}$. This could be due to photopion production and photodisintegration by microwave background photons --- the GZK effect \citep{Greisen1966jv,Zatsepin1966jv} --- or may represent the maximum energy that can be reached in any cosmic source. Despite many claims of association of UHECR with specific sources or source classes, the only significant departure from complete isotropy of UHECR is a $6.6\,{\sigma}$ dipole reported by PAO at energies above 8~EeV~\citep{2017Sci...357.1266P}. It has been shown that the evolution of the dipole amplitude with energy is expected if the sources are extragalactic, as a result of the evolution of the GZK horizon with energy~\citep{Globus:2017fym}.
This feature could reflect an anisotropy in the distribution of many sources following the large scale structure \citep{2021ApJ...913L..13D}, local sources such as nearby radio galaxies \citep{Eichmann_2022}, or the presence of a single dominant source, such as a Galactic transient~\citep{Eichler:2016mut} -- which may require strong Galactic magnetic turbulence so as not to upset the anisotropy limits, but cannot be totally ruled out. However, the source location could well be significantly shifted across the sky due to cosmic ray deflection by the poorly understood, strong Galactic and extragalactic magnetic fields. Given the uncertainties in the magnetic field models, one cannot yet reach strong conclusions regarding the nature of the sources from the dipole anisotropy \citep{Allard:2021ioh}. It should be noted that while the dipole appears above 8~EeV, the sky in the 4-8 EeV energy range is compatible with isotropy \citep{2017Sci...357.1266P}. This is certainly due to the combined effect of a large GZK horizon and the presence of accelerators operating up to lower maximum rigidity. The ``light ankle'' reported by the KASCADE-Grande experiment at $\sim0.1$~EeV \citep{Schoo:2016yeo} may either be a distinct proton component from more distant sources, or the accumulation of secondaries from the primary UHECR accelerators \citep{PhysRevD.92.021302}. Another significant and generally accepted feature of the observations is the change in the cosmic ray composition with energy (see Figs~2.10-2.12~in \citep{Coleman_2023}). Such a statement can only be made on statistical grounds because we cannot determine the composition of individual primary cosmic rays. The nuclei become progressively heavier above the ankle at $\sim5\,{\rm EeV}$. The cosmic rays are mostly heavy nuclei (CNO or heavier) at energies $\gtrsim40\,{\rm EeV}$ \citep{Coleman_2023}.
There are too few showers with larger primary energy to be definitive, but it is generally supposed that the composition remains heavy up to the highest energies. This change in mindset by the UHECR community has a simple interpretation. Most source and propagation models depend upon the particle's rigidity, $R$, its momentum per unit charge, not its total energy. If we use rigidity instead of energy, the observed cosmic ray spectrum can be accounted for with a maximum rigidity at the accelerator $R_{\rm max}\sim8$~EV. To account for the whole extragalactic spectrum with only a single extragalactic component, one needs a very hard spectrum at the sources ($\sim R^{-1}$) to reproduce the evolution of the composition above the ankle, and a soft proton component ($\sim R^{-2.5}$ for sources evolving as the SFR \citep{globus2017b}) to account for the light ankle seen by KASCADE-Grande. Such a single-component model gives a coherent picture of the Galactic-to-extragalactic cosmic ray transition \citep{globus2015} (in this model the protons are the product of photodissociation processes at the source). Other models based on more than a single extragalactic component remain of course possible, and the sub-ankle protons could be the result of another type of accelerator operating in this energy range. The contemporary luminosity density for UHECR acceleration is ${\cal L}_{\rm UHECR}\sim6\times10^{44}$~erg~Mpc$^{-3}$~yr$^{-1}$ above 5~EeV \citep{PhysRevD.102.062005}. For the purposes of astronomical comparison, this is very roughly $\sim3\times10^{-5}$ times the local galaxy luminosity density. More quantitative comparisons require specific source and propagation models.
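The quoted ratio can be checked with a one-line estimate; a minimal sketch, assuming a local galaxy luminosity density of $\sim2\times10^{8}\,L_\odot$~Mpc$^{-3}$ (our own fiducial value, not from the text):

```python
L_SUN_ERG_S = 3.828e33   # solar luminosity, erg/s
SEC_PER_YR = 3.156e7     # seconds per year

# UHECR luminosity density above 5 EeV, as quoted in the text
L_uhecr = 6e44           # erg Mpc^-3 yr^-1

# Assumed local galaxy luminosity density: ~2e8 L_sun per Mpc^3
j_gal = 2e8 * L_SUN_ERG_S * SEC_PER_YR   # erg Mpc^-3 yr^-1

ratio = L_uhecr / j_gal
print(f"ratio ~ {ratio:.1e}")            # a few times 10^-5
```

With this fiducial number the ratio comes out at a few $\times10^{-5}$, consistent with the rough figure quoted above.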
Regarding the cosmogenic gamma-ray diffuse flux (the by-product of the GZK effect), the mixed-composition models appear to be less constrained by the Fermi-LAT data than the electron-positron dip (pure proton) scenario \citep{2011PhLB..695...13B, Gavish_2016, PhysRevD.94.063002}, for which SFR-like and stronger cosmological evolutions are ruled out. For a mixed composition, however, only very strong evolutions (e.g., FR-II radio galaxies) are excluded by the current observations \citep{2017ApJ}. The cosmogenic neutrino flux is well below the IceCube sensitivity for a mixed composition \citep{2017ApJ}. The IceCube collaboration reported a correlation between a PeV neutrino and a blazar \citep{IceCube:2018cha}, implying that blazars are able to accelerate protons to $\sim 0.1$~EeV. In summary, it is reasonable to assume that any source model needs to be able to accelerate cosmic rays up to a minimum rigidity of $\sim$8~EV (equivalently, iron at $\sim$200~EeV) to account for the observed spectrum and composition; therefore, in order to be considered as a possible source candidate, several conditions must be fulfilled by a cosmic accelerator:\\ ${\small \bullet}$ Confinement: the acceleration time should be smaller than the escape time; this is related to both the strength of the magnetic field and the geometry, size and age of the accelerator -- this is an expression of the famous Hillas criterion~\citep{1984ARA&A..22..425H};\\ ${\small \bullet}$ Power: the central engine must be able to provide the necessary energy (either electromagnetic or kinetic, depending on how the energy transfer is mediated);\\ ${\small \bullet}$ Radiation losses at the accelerator: within the accelerating field the acceleration time should be smaller than the radiation loss time;\\ ${\small \bullet}$ Interaction losses (photopion and photodisintegration) at the accelerator: the acceleration time should be smaller than the interaction loss time;\\ ${\small \bullet}$ Emissivity: the density of sources must be sufficient
to account for the observed UHECR flux (${\cal L}_{\rm UHECR}\sim6\times10^{44}$~erg~Mpc$^{-3}$~yr$^{-1}$ above 5~EeV) \citep{PhysRevD.102.062005};\\ ${\small \bullet}$ Secondary radiation: the accompanying photon and neutrino fluxes should not be greater than the observed fluxes. This constraint must be satisfied by the secondary gamma-ray and neutrino flux at the sources and also by the cosmogenic, diffuse flux. \\ ${\small \bullet}$ Anisotropy: the source model needs to account for the observed anisotropies. This is the weakest constraint due to the uncertainty regarding the magnetic fields~\citep{Unger:2017kfh, refId0}. \section{UHECR Source Models}\label{sec-source} \subsection{Magnetospheric Models} \subsubsection{General Considerations} In order to accelerate a proton to an energy $\sim8\,{\rm EeV}$, it must follow a trajectory along which $\int d{\bf r}\cdot{\bf E}$ exceeds $\sim8$~EV. The simplest way to imagine this happening is in the context of a large scale magnetic field. Faraday's law guarantees that if the magnetic field varies with time, then there will be an EMF and an opportunity for particle acceleration. However, even a stationary flow can produce large accelerating electric fields when there are appropriate boundary conditions. One such configuration is the unipolar inductor \citep{Weber}. A spinning, conducting body is envisaged to be endowed with an axisymmetric magnetic field distribution. In the simplest case, the body rotates with angular velocity~$\Omega$. If sufficient plasma is continuously produced to sustain an electrical current, without having dynamical relevance, the electromagnetic field in the ``magnetosphere'' will be force-free. Imposing a boundary condition at the neutron star surface leads to an electric field ${\bf E}=-({\bf\Omega}\times{\bf r})\times{\bf B}$, which implies that ${\bf E}\cdot{\bf B}=0$.
The magnetic flux surfaces will be equipotential and isorotational, and the potential difference between two adjacent flux surfaces containing magnetic flux $d\Phi$ is $dV=\Omega d\Phi/2\pi$. There will also be an electrical current $I(\Phi)$ flowing within a flux surface labeled by $\Phi$, which is associated with a toroidal component of the magnetic field and an outward Poynting flow of electromagnetic energy, $IdV$. This Poynting flux extracts rotational energy efficiently from the neutron star with very little internal dissipation. It can be radiated at great distance, for example in a Pulsar Wind Nebula such as the Crab Nebula. The relationship between $I$ and $V$ depends upon the detailed boundary conditions but, if the electromagnetic field remains force-free, then $V\sim I Z_0/2\pi$, where $Z_0\equiv377\,{\rm Ohm}$ is the impedance of free space. To order of magnitude, a potential difference of, say, $10\,{\rm EV}$ is then associated with an electromagnetic power $\sim10^{36}\,{\rm W}$. Whether or not this potential difference is used for efficient UHECR particle acceleration depends upon circumstance but, if it is, then the associated power is inescapable. \subsubsection{Neutron Stars} Now consider the application of these general ideas to neutron stars. Spinning magnetized neutron stars are, manifestly, not axisymmetric but quite similar considerations apply. If the magnetic moment is that of a regular pulsar, then the spin period must be close to the maximal allowed value $\sim1.5\,{\rm ms}$ and the source lifetime is only years \citep{Fang:2012rx}. Magnetars, with surface magnetic fields $\sim100\,{\rm GT}$, are more promising \citep{Arons:2002yj}. A spin period $\sim30\,{\rm ms}$ suffices, but the source lifetime would be less than a day. It is hard to see how UHECR could escape the environs of a neutron star so soon after its birth, given the high density of matter and radiation which should cause catastrophic losses.
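The order-of-magnitude power estimate given above for the unipolar inductor can be reproduced numerically from the force-free relation $V\sim IZ_0/2\pi$; a minimal sketch (the numbers are those quoted in the text):

```python
import math

Z0 = 377.0   # impedance of free space, Ohm
V = 1.0e19   # potential difference of 10 EV, in volts

# Force-free relation V ~ I Z0 / (2 pi)  =>  I ~ 2 pi V / Z0
I = 2.0 * math.pi * V / Z0
P = I * V    # electromagnetic power ~ I V, in watts

print(f"I ~ {I:.1e} A, P ~ {P:.1e} W")   # P ~ 1.7e36 W, i.e. ~10^36 W
```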
However, there is another possibility, which is that the power derives from the magnetic, not the rotational, energy. This can be as much as $10^{41}\,{\rm J}$ and it could be released intermittently, following magnetic flares, where $\Omega$ would be replaced by the reciprocal of the time it would take light to cross a fraction of the magnetic surface, a few $\mu{\rm s}$. Furthermore, the magnetic energy stored below the surface might be hundreds of times larger than that within the magnetosphere and accessible for much longer. This is important because the magnetar birth rate should be less than $\sim10^{-6}\,{\rm Mpc}^{-3}\,{\rm yr}^{-1}$, requiring an energy per magnetar of $>10^{43}\,{\rm J}$ to account for the UHECR luminosity density. Despite this energetic discouragement, it is instructive to consider the general principles of particle acceleration exhibited by an axisymmetric pulsar magnetosphere. In this case, the magnetic field is stationary and the EMF formally vanishes. The magnetic surfaces are electrostatically equipotential. However, a current must flow, which may require the continuous formation of electron-positron pairs in the magnetosphere. It has long been supposed that this can happen at a ``gap'', a possibly transient potential drop along the magnetic field where ${\bf E}\cdot{\bf B}\ne0$ and single charged particles can be accelerated to sufficient energy to produce $\gamma$-rays which can form pairs and break down the vacuum. There has been much modeling of gaps and it has generally been found that the associated potential drops across a gap are orders of magnitude smaller than what would be needed to create UHECR directly at gaps. They could, of course, be associated with the emission of much less energetic $\gamma$-rays and cosmic rays. If there is minimal potential drop along the flux surfaces, then the power must be associated with electrical current crossing between the flux surfaces. There are many different ways in which this has been modeled.
For example, there could be enough plasma present that there is a Lorentz force density, ${\bf j}\times{\bf B}$, that gradually converts the electromagnetic energy into kinetic energy flux with minimal dissipation. This outflow might then be an efficient particle accelerator in a jet or wind (see below). Alternatively, the current may close dissipatively, with the dissipation appearing as particle acceleration. At one extreme, all of the power may heat the thermal plasma and cause it to radiate. An intermediate possibility is that much of this power creates a suprathermal distribution of intermediate energy cosmic rays. A third possibility is that it is the highest energy particles that carry most of the electrical current across magnetic surfaces. The manner in which this may happen is interesting. A modest energy particle will gyrate around and move along the magnetic field in the local frame in which the electric field vanishes. However, for larger particle energies the gyro radius, $r_L$, increases and the collisionless particle motions also include drift velocities of their guiding centers with speeds $\sim cr_L/L$, where $L$ is the scale length. It is possible that, despite the rarity of the highest energy particles, their greater mobility allows them to carry most of the electrical current and, consequently, to be accelerated to energies as large as those for which $r_L$ approaches $L$, specifically, energies $\sim ZecBL$. This basic mechanism has been invoked to account for intermediate energy particle acceleration at a pulsar wind termination shock. For this to be possible, in principle, the electrical current must flow along, and not across, magnetic surfaces all the way from the pulsar magnetosphere to the surrounding nebula. \subsubsection{Black Holes} Many of these basic physics ideas are translatable to black hole magnetospheres, with some important differences.
We suppose that magnetic flux is trapped by the inertia and pressure of orbiting gas and that some of this flux passes in and out of the event horizon. The black hole is not a perfect conductor. In fact, a formal surface conductivity can be ascribed to the horizon, associating a resistance of $\sim Z_0/2\pi$ with the black hole. This has two consequences: the magnetic field lines rotate with an angular velocity roughly half that of the black hole, and as much power is dissipated behind the horizon as is extracted outside the black hole. For the case of massive black holes, the power associated with UHECR acceleration is then at least comparable with that radiated by a moderate AGN such as an FRI radio source or a Seyfert galaxy. There is the same need to supply electrical charge continuously, and gaps have been invoked as sources of $\gamma$-rays (e.g. \citep{Levinson:2010fc}). As with pulsars, the potential differences that need to be applied to keep the current flowing are generally tiny compared with those needed to account for UHECR. To drive home this point, if the current-carrying pair plasma had the minimum density and energy $\sim1\,{\rm MeV}$ required, then its energy density would be only a fraction of that available, given by the ratio of this energy to the total potential difference --- $\sim1\,{\rm MeV}$ to $\sim10\,{\rm EV}$, or $10^{-13}$. (This is also the ratio of the minimum to the maximum gyro radii.) In fact, gaps will not even be required if either the magnetic field lines are able to undergo interchange instability, keeping the magnetosphere supplied with electron-ion plasma, or externally produced $\gamma$-rays can produce pairs directly. For all of these reasons, gaps are not seen as UHECR sources. As with pulsars, it is also possible to imagine particle acceleration occurring when current crosses the toroidal magnetic field lines associated with the electromagnetic outflow produced by a magnetized, spinning black hole.
This takes place in the ``Espresso'' model of UHECR acceleration \citep{Caprioli_2015}, where lower energy cosmic rays are injected into the outflow and emerge with rigidity comparable with the total potential difference available --- an expression of the famous Hillas criterion \citep{1984ARA&A..22..425H} (see section 3.2.1 below). Assuming $\Gamma\sim30$ (as in powerful blazars), a one-shot boost of a factor of $\sim \Gamma^2$ in energy can transform the highest-energy Galactic CRs at 100 PeV into the highest-energy UHECRs at $\sim$100 EeV. Whether this mechanism can account for the whole UHECR data set is still an open question. There is a second, quite general constraint on locating UHECR acceleration close to an AGN black hole. A $\sim100\,\mu{\rm m}$ far-infrared photon will be able to produce pions in collision with a $\sim10\,{\rm EeV}$ cosmic ray. The UHECR acceleration site must be sufficiently far away from the AGN that the photon density does not cause catastrophic energy loss or photodisintegration. Similar remarks apply to the pair production threshold, which involves $\sim30\,{\rm GHz}$ radio photons. As discussed below, even more stringent constraints must be satisfied to avoid photodisintegration of the heaviest nuclei. The end result is that the only way that processes directly connected to an AGN magnetosphere could account for UHECR is if they involve low luminosity AGN and operate at some remove from the actual black hole magnetosphere. Similar arguments can be used to rule out UHECR production around, or directly associated with, the magnetospheres of stellar mass black holes, either close to formation in GRBs or later on in binary X-ray sources. We now turn to the viability of UHECR acceleration further away from massive or stellar black holes, within relativistic jets. \subsection{Relativistic Plasma Jet Models} Relativistic jets are ubiquitous in the universe and their acceleration, collimation and deceleration processes provide a variety of acceleration sites for UHECRs.
The central engine of relativistic jets is still under debate. Two driving mechanisms are commonly invoked: heating mediated by accretion (``nurture''), which leads to pressure-driven jets, and the Blandford-Znajek mechanism, where the jet is powered by a rapidly spinning black hole (``nature''), which leads to magnetically-driven jets \citep{Blandford:2022pwc}. The magnetization factor has important implications for modeling the acceleration of UHECRs and the production of secondary $\gamma$-rays and neutrinos. Because the jets are structured both laterally and temporally, the ejected plasma layers have different velocities, and hence shear acceleration or diffusive shock acceleration can take place at internal shocks in the jets or lobes, at the oblique shocks in the jet's boundary layer, or far beyond the Bondi radius at external shocks when the jet decelerates in the ambient medium. These mechanisms allow the transfer of the kinetic energy of the relativistic outflow to cosmic rays. This is mediated by magnetic turbulence, and the micro-physics at play in generating the magnetohydrodynamic waves is important and yet poorly constrained in relativistic jets. Efficient dissipation of magnetic energy can happen through turbulence or reconnection and leads to the emission of high energy photons that can interact with the cosmic rays via photointeractions (pion production for protons or the Giant Dipole Resonance, GDR, for nuclei). How this operates in relativistic outflows is still unsettled and depends on the configuration of the magnetic fields. The two configurations commonly invoked are: a large scale, ordered field of a single polarity, leading to a highly magnetized relativistic jet; or a magnetic field of alternating polarity, leading to a ``striped'' jet made of regions with toroidal field of alternating polarity. Instabilities such as the kink can also drive magnetic dissipation.
Therefore, for relativistic jets, the following questions remain: What are the best sites for UHECR acceleration in relativistic jets? Are the heavy nuclei able to survive the high energy photon fields present around relativistic sources? Let us review below some of the most promising jetted candidates for UHECR sources. \subsubsection{Jet Power requirement for UHECR} Let us recall the jet power requirement for UHECR production. In the following we consider a relativistic plasma jet with half opening angle $\theta$, where $r$ is the jet height above the launching region. We use masses in units of solar mass, $\bar{m}={M}/{M_\odot}$, luminosities in units of $L_{\rm Edd}=1.26\times10^{38}\bar{m}$ erg/s, and radii in units of the gravitational radius, $\bar{r}={r}/{r_g}$ with $r_g=3\times10^{5}\bar{m}$ cm. Quantities in the jet comoving frame are denoted with a prime symbol. The jet expansion time in the comoving frame is $t'_{\rm exp} = r/(\Gamma\beta c)$, where $\Gamma=(1-\beta^2)^{-1/2}$ is the bulk Lorentz factor of the jet. The condition for jet stability implies that the comoving signal crossing timescale is smaller than $t'_{\rm exp}$, which gives the condition $\Gamma\theta\lesssim\beta_s/\beta$, where $\beta_s c$ is the Alfv\'en velocity. In relativistic magnetized jets $\beta\approx\beta_s\approx1$. The acceleration region cannot extend further than the transverse distance $r\theta$. The escape time sideways is thus $t'_{\rm esc} = (r\theta)/(\Gamma c)$. The jet luminosity is $L_{j}=\pi(\theta r)^2\beta\Gamma^2cu'_j$, where $u'_j$ is the comoving energy density. The magnetic field of a relativistic jet with magnetic luminosity $L_B\equiv\xi_B L_{j}$, bulk Lorentz factor $\Gamma$ and half opening angle $\theta$ is \begin{eqnarray} B'&=&\frac{2\left(\xi_B L_j/c\right)^{1/2}}{\theta\Gamma r}\nonumber\\ &\approx& 4.3\times10^8 (\xi_B\bar{L}_j/\bar{m})^{1/2}\bar{r}^{-1}(\theta\Gamma)^{-1}{\rm G}\,.
\label{Bfield} \end{eqnarray} The magnetic field $B'$ is related to the total energy density $u'_{j}$ by $B'=\sqrt{4\pi\xi_Bu'_{j}}$. The acceleration time is $t'_{acc}= t'_L/\xi_{acc}= E'/(ZeB'c\xi_{acc})$ with $\xi_{acc}\lesssim1$ (in numerical experiments $\xi_{acc}\lesssim0.1$, making the usually assumed Bohm diffusion overoptimistic). The condition $t'_{acc}=t'_{esc}$ (following \citep{1984ARA&A..22..425H}) gives: \begin{eqnarray} E'_{\rm max}&=&\frac{\xi_{acc}r\theta Z e B'}{\Gamma}\approx300\, \xi_{acc}Z[B'_{\rm G}r_{\rm cm}](\Gamma\theta)\Gamma^{-2}\, {\rm eV}\nonumber\\&\approx& 3\times10^{16} \xi_{acc} Z\left(\xi_B\bar{L}_j\bar{m}\right)^{1/2}\Gamma^{-2} {\rm eV}\,, \end{eqnarray} which gives the condition on the observed jet luminosity: \begin{equation} L_{j}>8.4\times10^{44}\xi_B^{-1} \Gamma^{2}\xi_{acc}^{-2}\left(\frac{E_{\rm max}}{Z\,10^{20}{\rm eV}}\right)^2 {\rm erg \,s^{-1}} \ , \label{emax_conf} \end{equation} where $E_{\rm max}/Z$ is the maximum rigidity reachable in the observer's frame. { It is important to note that here only the mean intensity of the magnetic field $B'$ is considered. However, in the usual model of diffusive shock acceleration (DSA), the largest turbulence scale of the magnetic field plays an important role, as the relevant escape time is the escape time {\it upstream} of the shock front. As the shock microphysics is poorly understood (especially in the mildly-relativistic regime), the usual prescription for particle confinement is that a cosmic ray remains accelerated at the shock if the Larmor radius of the particle is smaller than the largest turbulence scale of the magnetic field in the shocked region, i.e. $r_L\lesssim \lambda_{\rm max}$. It is important to keep in mind that $\lambda_{\rm max}$ can be a small fraction of the size of the system.
Therefore, in the case of DSA, $t'_{\rm esc} = \xi_{esc}(r\theta)/(\Gamma c)$ with $\xi_{esc}\lesssim1$.\\ Also, the above estimate of the maximum rigidity is based on a geometrical argument. However, there are several energy losses that will limit the acceleration process. The maximum comoving energy $E'_{\rm max}$ of the accelerated particles is estimated by equating their acceleration time $t'_{acc}$ with the relevant loss time $t'_{loss}$, and the related ``opacity'' is defined as $\tau_{loss}\equiv{t'_{esc}}/{t'_{loss}}$. } The synchrotron opacity is given by \begin{equation} \tau_{syn}=\frac{(r\theta)Z^4\sigma_TB^2E}{6\pi m_e^2c^4\Gamma}\left(\frac{m_e}{m_A}\right)^4\,. \end{equation} Concerning photodissociation, the lowest energy and highest cross-section process is the giant dipole resonance. The energy loss rate due to GDR interactions with photons of energy $\epsilon'$ is $(t'_{\rm GDR})^{-1}\simeq\sigma_{GDR}n'(\epsilon')c{\Delta\epsilon_{\rm GDR}}/ {\epsilon_{\rm GDR}} \,\,{\rm s^{-1}}$ with $\sigma_{GDR}=1.45\times10^{-27}A$ cm$^2$, $\Delta\epsilon_{\rm GDR}=8$ MeV and $\epsilon_{\rm GDR}\approx 42.65A^{-0.21}$ MeV. The photon energy $\epsilon'$ is given by the threshold condition for GDR interaction: $E'_{\rm N}(A)\epsilon'\approx2.5\times10^{17} (A/56)$ eV$^2$. The GDR opacity is $\tau_{GDR}\equiv{t'_{esc}}/{t'_{GDR}}$. With an opacity close to 1, we are in the optimum working condition for the energy loss process considered. Only when the shock becomes transparent is the maximum rigidity limited by escape from the magnetized region. Particle escape acts as a high-pass filter: only particles close to the maximum rigidity $\simeq E'_{\rm max} /Z$ can escape from the magnetized region upstream of the shock wave. We expect that the spectrum of the escaping particles is thus very hard, almost a delta function around $\simeq E'_{\rm max} /Z$.
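The luminosity condition above is straightforward to evaluate; a minimal sketch, with illustrative parameter choices $\xi_B=0.5$, $\xi_{acc}=0.1$ and $\Gamma=10$ (our own assumptions, not from the text) for the target rigidity $E_{\rm max}/Z=8$~EV quoted in Sec.~2:

```python
def jet_luminosity_min(rigidity_V, Gamma, xi_B=0.5, xi_acc=0.1):
    """Minimum jet luminosity (erg/s) required to confine particles up
    to rigidity E_max/Z, from the condition
    L_j > 8.4e44 * xi_B^-1 * Gamma^2 * xi_acc^-2 * (E_max/(Z 1e20 eV))^2.
    """
    return 8.4e44 / xi_B * Gamma**2 / xi_acc**2 * (rigidity_V / 1e20) ** 2

# Target rigidity of 8 EV (equivalently iron at ~200 EeV)
L_min = jet_luminosity_min(8e18, Gamma=10)
print(f"L_j > {L_min:.1e} erg/s")   # ~1.1e47 erg/s
```

With these (optimistic) choices the required jet power is already $\sim10^{47}$~erg~s$^{-1}$, illustrating how demanding the confinement condition is for all but the most powerful jets.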
\subsubsection{Gamma-Ray Bursts Jets} Gamma-ray bursts (GRBs) are associated with relativistic jets from newly born stellar-mass black holes. There are three different types of GRBs, associated with different progenitors. Long GRBs (lGRBs) are associated with the core-collapse of massive stars (more precisely, with SN Ib/Ic). They are the most powerful type of GRBs, with a jet luminosity able to reach $\sim 10^{52}$~erg/s. They are also the rarest, with a rate of $\sim 1$~Gpc$^{-3}$yr$^{-1}$ and a $\gamma$-ray luminosity density $\sim 6\, 10^{42}$~erg Mpc$^{-3}$yr$^{-1}$. Short GRBs (sGRBs) are the aftermath of binary neutron star and black hole-neutron star mergers and have jet luminosities of typically $\lesssim 10^{49}$~erg~s$^{-1}$. Finally, there is another class of GRBs, the low-luminosity GRBs (LLGRBs), which have lower luminosities (typically $L_{\rm jet}\lesssim 10^{50}$~erg~s$^{-1}$) and a rate $\sim100$ times that of lGRBs \citep{2007ApJ...662.1111L}. In the case of GRB jets, two types of shocks have been widely considered for diffusive shock acceleration: external shocks and internal shocks \citep{1995PhRvL..75..386W,1995ApJ...453..883V}. External shocks are caused by the jet deceleration in the ambient medium (these are responsible for the GRB afterglow) and are superluminal. Many authors have pointed out the inefficiency of Fermi acceleration at ultra-relativistic shocks, unless a strong small-scale turbulence is present in the downstream medium \citep{2006ApJ...641..984N,2006ApJ...645L.129L}. Acceleration at ultra-relativistic external shocks can produce cosmic rays with energies above a few $Z\times 10^{15}$ eV (PeVatrons; see e.g. \citep{1999MNRAS.305L...6G,2021ApJ...915L...4G}), but not the highest energy cosmic rays. Internal shocks are expected to form inside the jet once the fast layers of the plasma jet catch up with the slower parts.
In that case, the shock in the comoving frame is mildly relativistic (with a Lorentz factor of $\sim$1.1 to $\sim$2), so efficient diffusive shock acceleration can take place. It has been shown that the spectrum of UHECR emitted by GRB internal shocks can reach maximum rigidities of 8~EV (iron at $\sim$100 EeV) for the highest luminosities ($L_{\rm jet}\gtrsim 10^{52}$~erg/s), making diffusive shock acceleration at internal shocks a promising candidate for UHECR production \citep{10.1093/mnras/stv893}. However, the caveat for the long GRB model is that they fall short of accounting for the required UHECR luminosity density (typically by a factor $\sim100$) unless they have a poor $\gamma$-ray efficiency \citep{2010ApJ...722..543E,10.1093/mnras/stv893}; alternatively, it has been proposed that a single Galactic event can be responsible for most of the UHECR flux if the local extragalactic magnetic field is large and can efficiently trap particles of rigidity $\lesssim$8 EV \citep{Eichler:2016mut}. \subsubsection{Active Galactic Nucleus Jets} The jets associated with massive black holes in active galactic nuclei, where the outflow is magnetically independent of a central black hole/accretion disk source, have also been proposed as UHECR accelerators. (By magnetically independent, we mean that the large-scale electrical current that flows along the inner jet has mostly left the jet, to be replaced by small-scale current supporting small-scale magnetic field.) Here two main mechanisms have been proposed, relativistic shock fronts and magnetic reconnection in a relativistic boundary layer. As just pointed out, relativistic shocks are qualitatively different from their nonrelativistic counterparts and they may be much less efficient in accelerating the highest energy particles.
Partly for this reason, it has been suggested that more efficient UHECR acceleration can occur in the nonrelativistic backflow that forms after a relativistic jet passes through a relativistic termination shock \citep{10.1093/mnras/sty2936}. As the number of suitable sources within the UHECR horizon is small, tentative associations with the highest energy particles may become possible in the future \citep{Bell:2021pkk}. There is good observational evidence that relativistic jets decelerate, dissipate, accelerate and radiate through their surfaces, especially close to their massive black hole sources. Presumably gas is entrained into the electromagnetic jet from a much more slowly moving external medium. These surfaces can be characterized as boundary layers from a gas dynamical perspective and as a cylindrical, relativistic current sheet from an electromagnetic perspective. It is presumably the second description that is most promising as an accelerator. Either way, this is a site where magnetic reconnection will take place. Non-relativistic reconnection is a process which has been extensively studied, and there is much space physics data to compare with simulations. Overall it is not very efficient in creating very high energy particles. However, relativistic reconnection has recently been studied quite extensively and it involves novel features, especially the launching of high-speed plasmoids \citep{Sironi:2020mzd}. There does not seem to be any natural way to accelerate particles to the very highest energy but this is certainly worth further consideration.
\subsubsection{Tidal Disruption Events Jets} There has been much recent observational attention paid to Tidal Disruption Events (TDE), in which individual stars orbiting massive ($M\lesssim10^8\,{\rm M}_\odot$) black holes in otherwise dormant galaxies pass by so close that they are ripped apart by tidal forces, with the debris falling back onto the black hole on timescales of months to years and radiating especially in the optical and X-ray bands. A minority of TDEs are found to be jetted and there is evidence that the jets have Lorentz factors $\gtrsim10$, like blazars \citep{10.1093/mnras/stad344}. They have also been proposed as sources of UHECR \citep{Farrar:2014yla}. The estimated rate of jetted TDEs beamed towards us is $2\times10^{-11}\,{\rm Mpc}^{-3}\,{\rm yr}^{-1}$ \citep{Andreoni:2022afu}. A conventional estimate of the total rate of jetted TDEs is perhaps a hundred times larger. On this basis, we can estimate the energy required per jetted TDE to account for ${\cal L}_{\rm UHECR}$ (Sec.~2) as $\sim0.03\,{\rm M}_\odot c^2$. This is not impossible but it seems implausible given the very much lower radiative efficiencies typically inferred. \bigskip \subsection{Hierarchical Acceleration Model} \subsubsection{Diffusive Shock Acceleration} An alternative class of UHECR acceleration models invokes Diffusive Shock Acceleration (DSA, see \citep{BE87} for a review) at giant intergalactic shock fronts, most notably those surrounding rich clusters of galaxies but also at shock fronts surrounding the filaments which connect these clusters. There are many possibilities. Here, we will restrict attention to the more limited but more prescriptive and, consequently, more refutable idea that the entire cosmic ray spectrum observed at Earth is the result of successive DSA from distinct source classes. Specifically, we consider a hierarchical model in which the observed $\sim1\,{\rm GeV}-\sim3\,{\rm PeV}$ cosmic rays originate in nearby supernova remnants.
The pressure from these cosmic rays drives a magnetocentrifugal Galactic wind which passes through a Galactic Wind Termination Shock (GWTS) where re-acceleration takes place. Some of the highest energy particles from the GWTS escape upstream and contribute to the shin part of the spectrum; the remainder are transmitted into the circumgalactic and intergalactic media. GWTS acceleration is even more powerful around galaxies associated with AGN and starbursts. These intermediate energy cosmic rays permeate the intergalactic medium and are input to DSA at the large intergalactic shocks. This ``holistic'' interpretation features propagation as well as acceleration. Of course, there are many alternative acceleration sites that could dominate different parts of the spectrum in a hierarchical scheme, as discussed above, or, alternatively, the intergalactic acceleration might operate mainly on mildly relativistic particles injected directly at the shock front. DSA is a specific mechanism for converting a significant fraction of the kinetic energy flux, measured in the frame of the shock, of the gas upstream of a strong shock front into high energy particles that are transmitted downstream. It relies upon scattering by hydromagnetic disturbances/waves. Individual particles of rigidity (momentum per unit charge) $R$, with mean free paths $\ell(R)$, will diffuse against the flow with an upstream scale height $L\sim\ell c/u$, where $u$ is the speed of the shock relative to the upstream gas. In the simplest, test-particle version of this mechanism at a planar shock front, relativistic particles will increase their rigidity by a fractional amount $\Delta R/R\sim4(1-1/r)u/3c$, where $r$ is the shock compression ratio, every time they make a double crossing of the shock front. On kinematic grounds, the probability that an individual cosmic ray does not return upstream is $4u/rc$.
The probability that a relativistic particle incident upon the shock front with initial rigidity $R$ be transmitted downstream with rigidity greater than $xR$ is \begin{equation} P(>x)=x^{-\frac{3}{r-1}},\quad x\ge1. \end{equation} In this limit the shock behaves as a linear system, convolving the input upstream momentum space particle distribution function with a power-law Green's function to give the transmitted downstream distribution function. The slope of the power law does not depend upon the details of the scattering and no particles escape upstream. The maximum rigidity, $R_{\rm max}$, that can be accelerated by a given shock is usually determined by the condition that $L$ must be less than the size of the shock, typically its radius of curvature. Note that this condition is, at least, $c/u$ times more stringent than the Hillas criterion. Alternatively, it takes $\sim(c/u)^2$ scatterings, not one orbit, to double a particle's energy. DSA is inherently slower than the more violent mechanisms discussed above. The question is ``Can it still account for the highest energy particles we actually observe?''. Realistically, the test particle approximation fails in many ways. The acceleration is influenced by the cosmic-ray pressure gradient decelerating the gas ahead of the shock and changing the kinematics. In addition the cosmic rays can account for a significant fraction of the total energy of the flow and, on account of their different effective specific heat ratio, change the determination of $r$ through the conservation laws. The accelerated cosmic rays also create and sustain the wave turbulence that scatters them \citep{10.1093/mnras/172.3.557}. The most direct, and, arguably, the most important, form of scattering involves waves with wavelengths resonant with the particle gyro radii $r_{\rm L}(R)$.
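The test-particle cycle described above (fractional rigidity gain per double crossing, escape probability $4u/rc$) can be checked with a short Monte Carlo; the parameter values below are illustrative:

```python
import random

def dsa_amplifications(u_over_c=0.01, r=4.0, n=50_000, seed=1):
    """Test-particle DSA: each double crossing multiplies the rigidity by
    1 + (4/3)(1 - 1/r)(u/c); the particle escapes downstream with
    probability 4u/(rc).  Returns the final amplifications x = R_out/R_in."""
    rng = random.Random(seed)
    gain = 1.0 + (4.0 / 3.0) * (1.0 - 1.0 / r) * u_over_c
    p_esc = 4.0 * u_over_c / r
    out = []
    for _ in range(n):
        x = 1.0
        while rng.random() > p_esc:  # returned upstream: another cycle
            x *= gain
        out.append(x)
    return out

# For r = 4 the integral spectrum is P(>x) ~ x^(-3/(r-1)) = x^-1,
# so about half of the transmitted particles should exceed x = 2.
xs = dsa_amplifications()
frac_above_2 = sum(x > 2.0 for x in xs) / len(xs)
```

The recovered slope, $-3/(r-1)$, is independent of $u/c$, which only sets how many cycles a typical particle undergoes.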
If the rms magnetic field amplitude of these waves is $\delta B(r_{\rm L})=\delta B(R)$, then $\ell\sim(B/\delta B)^2r_{\rm L}$, assuming random phases and Gaussian statistics (both questionable). If we assume that the waves are nonlinear over some range of $R$, then $\ell\sim r_{\rm L}$ and we call this Bohm diffusion. The efficiency of cosmic ray acceleration, $\epsilon_{\rm CR}$, which can be measured by the ratio of the downstream cosmic ray pressure to the upstream momentum flux, is then determined by the processes that determine the rms magnetic field strength. There are several possible sources of this wave turbulence. If cosmic rays stream with mean speed in excess of the Alfv\'en speed in a uniform magnetic field, then they will excite the growth of resonant Alfv\'en waves propagating along the direction of their mean velocity. These waves will then scatter the cosmic rays in pitch angle and reduce their mean motion so that it is not much more than the Alfv\'en speed. The linear growth rate is roughly $(n_{\rm CR}/n_i)c/r_{\rm L}$, where $n_{\rm CR}$ measures the number of resonant particles and $n_i$ is the background ion density. An alternative possibility is that waves are created hydromagnetically with wavelength much larger than $r_{\rm L}$ which initiates a turbulence spectrum where the waves with shorter wavelengths will have linear amplitudes. A third possibility is that the waves are generated with wavelengths shorter than $r_{\rm L}$ by the motion of electrons carrying the return current that must balance the current associated with the accelerated cosmic rays. Extensive ``PIC'' and related simulations have vindicated the generation of wave turbulence and elucidated how these and other processes interplay in DSA. \subsubsection{Supernova Remnants} Although we are mostly concerned with questions of UHECR acceleration, it is instructive to contrast intergalactic shocks with precursor acceleration of lower energy particles at smaller shock fronts. 
It is a reasonable conjecture that analogous physical processes operate at all high Mach number shocks, scaled by $L$ and the momentum flux. While there has long been strong evidence associating supernova remnants with the majority of Galactic cosmic rays up to PeV energy, the argument for the remainder of the spectrum being due to DSA is no better than circumstantial. What is clear is that if DSA accounts for most of the total spectrum, the acceleration must occur with near maximal efficiency with respect to $R_{\rm max}$ and $\epsilon_{\rm CR}$ in all three sites. For this reason, it is interesting to look at this problem inductively --- to ask what would be required from the complex nonlinear plasma processes in the vicinity of the shock to account for the observations --- rather than deductively, to attempt to calculate these processes from first principles. Despite this commonality, there are important distinctions to be made. The first involves the kinematics. Most supernova remnants exhibit a quasi-spherical shock, convex on the upstream side, expanding into a near uniform interstellar medium, though stellar winds and molecular clouds can certainly introduce observable differences in many examples. This geometry facilitates the escape of the highest energy particles upstream as opposed to their transmission downstream, as assumed under the test particle approximation. Over half of the very highest energy particles can leave the remnant in this fashion. The original model of DSA essentially ignored these particles, supposing, instead, that the cosmic rays that we observe were transmitted downstream and were then released, following adiabatic decompression, at the end of the remnant's lifetime. By contrast, it could be that most of the observed cosmic rays escape upstream with $R\sim R_{\rm max}$, which decreases as the remnant ages.
To be quantitative, the cosmic ray luminosity, per unit area, of the Galactic disk is generally estimated to be at least ten percent of the local supernova remnant luminosity per unit disk area ($E_{\rm SNR}\sim10^{51}\,{\rm erg}$ released every $\sim30\,{\rm yr}$ throughout the Galaxy). Now, in order to accelerate protons to $\sim3\,{\rm PeV}$ energy at a SNR expanding with speed $u_{\rm SNR}$ and radius $R_{\rm SNR}$, the gyro radius must be $\lesssim u_{\rm SNR}R_{\rm SNR}/c$, or $B\gtrsim100(u_{\rm SNR}/10,000\,{\rm km\,s}^{-1})^{-1}(R_{\rm SNR}/1\,{\rm pc})^{-1}\,\mu{\rm G}$, perhaps two orders of magnitude larger than the rms Galactic field. Furthermore, the magnetic field strength is strongly bounded above by requiring that its contribution to the momentum flux, $B^2/4\pi$, be less than that of the gas, which is $\sim0.3E_{\rm SNR}R_{\rm SNR}^{-3}$, adopting the Sedov solution, or $B\lesssim10(R_{\rm SNR}/1{\rm pc})^{-3/2}\, {\rm mG}$. This interval allows efficient cosmic ray acceleration by young SNR up to the knee in the spectrum. \subsubsection{Galactic Wind Termination Shocks} The next level in this proposed hierarchy involves galactic winds. A large motivation for proposing a wind in our Galaxy comes from cosmic-ray observations. The measured ratio of light to medium nuclides at $\sim\,{\rm GeV}$ energy allows us to infer that these cosmic rays escape the disk after traversing a grammage $\sim6\,{\rm g\,cm}^{-2}$. This implies a residence time $\sim30-100\,{\rm Myr}$, much shorter than a disk rotation period. Cosmic rays leave the disk continuously and it is reasonable to suppose that they carry gas with them. The wind provides a chimney or exhaust for the cosmic rays (as well as the hot gas which is unable to cool radiatively) created by the SNR. The existence of an outflowing wind, occupying most of the volume of the Galactic halo, is not inconsistent with the presence of denser gas clouds falling inward, as observed.
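The two magnetic-field bounds quoted above for young SNR (confinement from below, momentum flux from above) can be combined into a quick feasibility check. This is a sketch; the function name and default parameters are illustrative, while the scalings follow the estimates in the text:

```python
def snr_field_window_gauss(u_snr_kms=10_000.0, R_snr_pc=1.0):
    """Confinement bound B >~ 100 (u/1e4 km/s)^-1 (R/pc)^-1 uG and
    momentum-flux (Sedov) bound B <~ 10 (R/pc)^-3/2 mG, both in gauss."""
    B_min = 100e-6 / ((u_snr_kms / 1e4) * R_snr_pc)  # confinement lower bound
    B_max = 10e-3 * R_snr_pc ** -1.5                 # momentum-flux upper bound
    return B_min, B_max

# A young, fast remnant: the window spans roughly two decades of field strength.
B_min, B_max = snr_field_window_gauss()
```

The window narrows as the remnant expands and decelerates, consistent with PeV acceleration being confined to young SNR.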
Various models of this outflow have been entertained. One recent model posits the presence of a vertical magnetic field at the disk and that $\sim\,{\rm GeV}$ cosmic rays stream along it just faster than the Alfv\'en speed, so that Alfv\'en waves are excited and couple the cosmic rays to the gas, allowing it to achieve the escape speed from the solar radius. The cosmic rays alone drive the wind \citep{Mukhopadhyay:2023kel}. A more powerful wind is possible if the vertical magnetic field corotates with the disk. In this case the cosmic rays just elevate the gas to a modest altitude $\sim$ a few kpc, at which point the field develops a large radial component and the gas is flung out magnetocentrifugally; it is the orbiting interstellar medium that provides most of the power for the wind. Under this scenario, the interstellar medium should be seen as an open system losing a significant fraction of its mass, angular momentum and energy over a Galactic lifetime. This possibility is not generally entertained in models of the interstellar medium. The outflow passes through three magnetohydrodynamic (Alfv\'enic and magnetosonic) critical points before crossing a termination shock, typically at a radius $R_{\rm GWTS}\sim200\,{\rm kpc}$ with speed $u_{\rm GWTS}\sim1000\,{\rm km\,s}^{-1}$, and then joins the circumgalactic medium. The flow expands with density $\rho$ decreasing by roughly three orders of magnitude. The cosmic ray rigidities decrease $\propto\rho^{1/3}$, roughly one order of magnitude. The poloidal component of the magnetic field decreases from $\sim2\,\mu{\rm G}$ at the disk to $B_p\sim3\,{\rm nG}$ at the shock, while the toroidal field is roughly ten times larger, $B_\phi\sim30\,{\rm nG}$. The termination shock can reaccelerate $\sim{\rm GeV-PeV}$ cosmic rays to fill out the $\sim\,{\rm PeV-EeV}$ shin in the spectrum. The shock surface is likely stationary and concave on the upstream side.
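The adiabatic scaling quoted above, rigidity $\propto\rho^{1/3}$, is easily quantified: three decades of density drop in the wind cost one decade in rigidity. A trivial sketch (function name illustrative):

```python
def adiabatic_rigidity_factor(rho_final_over_initial):
    """Rigidity scales as rho^(1/3) in an adiabatically expanding flow."""
    return rho_final_over_initial ** (1.0 / 3.0)

# A density drop of ~3 orders of magnitude reduces rigidities ~10-fold:
factor = adiabatic_rigidity_factor(1e-3)
```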
This changes DSA in significant and observable ways. For modest cosmic-ray rigidity, say below the reduced peak of the cosmic-ray spectrum $R\lesssim3\,{\rm PV}$, the shock will be effectively planar and a reaccelerated spectrum will be transmitted downstream. However, when $\ell\gtrsim u_{\rm GWTS}R_{\rm GWTS}/c$, the high energy cosmic rays are able to cross the Galactic halo and continue their acceleration at a quite different part of the shock. This acceleration can continue up to a rigidity where $\ell\sim R_{\rm GWTS}$, perhaps $\sim100\,{\rm PV}$. When, as is usually the case, cosmic ray energy rather than rigidity spectra are displayed, the composition of the shin particles becomes progressively heavier, as is observed. Again, to be quantitative, the wind may account for as much as $\sim1\,{\rm M}_\odot{\rm yr}^{-1}$ mass loss from the galaxy and a corresponding power $\sim5\times10^{41}\,{\rm erg\,s}^{-1}$, more than the cosmic ray power. A turbulent magnetic field strength $B_{\rm RMS}(100\,{\rm PV})\sim0.3B_\phi(R_{\rm GWTS})$ suffices to sustain acceleration within a closed shock surface throughout the shin observed at Earth. \subsubsection{Intergalactic Shock Fronts} We now turn to the final and most relevant level in this simple hierarchy, when the $\sim\,{\rm PeV-EeV}$ cosmic rays accelerated by galaxies, especially those that are more active than our Galaxy, are incident upon intergalactic shock fronts (e.g. \citep{1995ApJ...454...60N,1997MNRAS.286..257K}, Simeon et al. in preparation). The most powerful shocks, prominent in cosmological simulations, surround rich clusters of galaxies and have Mach numbers that can exceed $\sim100$. The assembly of clusters, over cosmological time, is complex but we will idealize the shock as a stationary spherical surface surrounding most of the cluster. These shocks have not been observed directly as yet. Deep searches at low radio frequencies seem well-motivated and would be very prescriptive, if successful.
Simulations also show quasi-cylindrical filaments that may connect clusters and which may be surrounded by quasi-cylindrical infall shocks. The Milky Way may lie within such a filament. The kinematics of a stationary cluster accretion shock (CAS) is different from that at SNR and GWTS. The convergence of gas ahead of the shock results in adiabatic heating of lower energy particles. This acceleration, which supplements DSA, will be less efficient for higher energy particles, which diffuse outward, relative to the gas flow. We continue to view the problem inductively and ask what scattering would have to be present in order for nearby cluster shocks to account for the observed UHECR spectrum. If we take an accretion shock surrounding the Virgo cluster, then observations combined with cosmological simulations suggest a shock radius $R_{\rm CAS}\sim2\,{\rm Mpc}$, an infall speed and density ahead of the shock $u_-\sim1000\,{\rm km\,s}^{-1}$ and $\rho_-\sim10^{-29}\,{\rm g\,cm}^{-3}$. The shock is strong and the rate at which gas kinetic energy crosses it is $\sim10^{45}\,{\rm erg\,s}^{-1}$. If we adopt a rich cluster density of $\sim10^{-5}\,{\rm Mpc}^{-3}$, then the cluster gas kinetic energy luminosity density is $\sim30{\cal L}_{\rm UHECR}$. There seems to be sufficient power to account for the observed UHECR flux. The larger challenge is to accelerate heavy nuclei, optimally, to $R_{\rm max}\sim10\,{\rm EV}$. In this model, we detect the UHECR that escape upstream with a spectrum peaking at a rigidity $\sim1-8\,{\rm EV}$ which will vary from cluster to cluster. The cosmic rays which are transmitted downstream will never be directly observable, though they might be seen at much lower energy through their $\gamma$-ray emission. In order to accelerate cosmic rays to $R_{\rm max}\sim10\,{\rm EV}$, we require that $R_{\rm CAS}>(c/3u_-)\ell(R_{\rm max})$.
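The quoted $\sim10^{45}\,{\rm erg\,s}^{-1}$ follows from the kinetic energy flux through a spherical shock. A minimal sketch using the Virgo-like numbers above (the function name is illustrative):

```python
import math

MPC_CM = 3.086e24  # 1 Mpc in cm

def accretion_shock_power_erg_s(R_mpc=2.0, u_kms=1000.0, rho_gcm3=1e-29):
    """Kinetic energy flux through a spherical shock, L = (1/2) rho u^3 4 pi R^2."""
    u = u_kms * 1e5       # infall speed, cm/s
    R = R_mpc * MPC_CM    # shock radius, cm
    return 0.5 * rho_gcm3 * u**3 * 4.0 * math.pi * R**2

L_kin = accretion_shock_power_erg_s()  # a few times 1e45 erg/s for Virgo-like values
```

Multiplying by the rich-cluster density $\sim10^{-5}\,{\rm Mpc}^{-3}$ then gives the kinetic luminosity density compared with ${\cal L}_{\rm UHECR}$ in the text.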
If we adopt Bohm diffusion, then the rms resonant field ahead of the shock on these scales is $B_{\rm rms}\sim1\,\mu{\rm G}$. This contrasts with a general field in the IGM that is generally expected to be $\lesssim1\,{\rm nG}$. The isotropic magnetic pressure associated with this field is roughly an order of magnitude smaller than the nominal momentum flux carried by the background gas, $\sim10^{-13}\,{\rm dyne\,cm}^{-2}$. Provided that the turbulence does not dissipate and pre-heat the gas, thereby reducing the shock Mach number and the efficacy of DSA, a maximum rigidity $R_{\rm max}\sim10\,{\rm EV}$ seems attainable. However, if, for example, protons with significantly larger rigidity than this are identified, it is hard to see how accretion shocks around relatively well-observed and simulated, local clusters of galaxies could account for their acceleration. We now turn to the input. Of course, we do not know what is the intergalactic spectrum produced by all the varied types of galaxy that surround the clusters. There are many possible contributors that we may not be observing at Earth, for example starbursts and the winds they drive, pulsar wind nebulae, jets associated with spinning black holes and a network of weaker intergalactic shock fronts. (It is possible to associate the observed light ankle with filament shock acceleration, in which case this comprises the spectrum transmitted downstream.) If we just confine our attention to the SNR spectrum produced by our local Galactic disk, then we expect a source spectrum $S(R)\propto R^{-2.2}$, as produced by all interstellar shocks over their lifetimes. This should extend up to a rigidity of order a few PV. At this point the rigidity spectrum should steepen as these additional sources fill out the shin part of the spectrum. (An energy spectrum will exhibit progressively heavier nuclei, as observed.)
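The $\sim1\,\mu{\rm G}$ estimate above follows from the condition $R_{\rm CAS}>(c/3u_-)\ell(R_{\rm max})$ with Bohm scaling $\ell\sim r_{\rm L}$. A sketch, assuming $r_{\rm L}\approx1.08\,{\rm kpc}$ per EV of rigidity in a $1\,\mu{\rm G}$ field (the helper name is illustrative):

```python
def min_resonant_field_microG(R_max_EV=10.0, R_cas_mpc=2.0, u_minus_kms=1000.0):
    """Smallest rms resonant field satisfying R_CAS > (c/3u_-) ell(R_max),
    with Bohm scaling ell ~ r_L ~ 1.08 kpc (R/EV)(uG/B)."""
    u_over_c = u_minus_kms / 3e5
    ell_max_kpc = 3.0 * u_over_c * R_cas_mpc * 1e3  # largest allowed mfp, in kpc
    return 1.08 * R_max_EV / ell_max_kpc

B_rms_min = min_resonant_field_microG()  # ~0.5 uG, of order the ~1 uG quoted
```

With the default Virgo-like numbers this gives a few tenths of a microgauss, consistent to within a factor of order unity with the value in the text.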
Incident cosmic rays with rigidity well below the UHECR range will be convected adiabatically into the shock and see it as an essentially planar shock accelerator. Provided that the compression ratio of the subshock is neither too large nor too small, specifically $2.5\lesssim r\lesssim3.5$, it will be the cosmic rays near the knee in the spectrum, at a few ${\rm PV}$, that dominate the UHECR spectrum that is reflected upstream back into the IGM. This will be reflected in the UHECR composition. In this model, the GeV particles that dominate the GCR spectrum and freshly injected particles are minor contributors to the output UHECR spectrum. How reasonable is it for a CAS to behave in this extreme fashion? And, by extension, how reasonable is it for SNR and GWTS to accelerate cosmic rays to energies $\sim 3$ and $\sim300\,{\rm PeV}$ respectively? Here, a rather different approach from that followed in most of the impressive simulations that have been completed may be helpful. Instead of starting from a quiescent initial state and monitoring the progressive acceleration of cosmic rays, and their associated local turbulence, to ever greater rigidity, limited by dynamic range and runtime, it might be interesting to start with the near-maximally turbulent state described and see if it can be self-sustaining. The dynamics is dominated by the highest energy particles and the background gas. PeV cosmic rays can be injected at the shock front. A key feature of this approach is that, as the rigidity increases, the particles diffuse further ahead of the shock front until they either escape upstream or are transmitted downstream, with probabilities $\sim$ a half, ending their acceleration. It will also be necessary to include the curvature of the shock, which facilitates this escape. These particles, with $R\sim R_{\rm max}$, will provide the first encounter that the inflowing intergalactic gas has with the shock.
The gas itself is, presumably, still quite weakly magnetised, with a pressure much smaller than the (anisotropic) pressure of the UHECR. Indeed, the pressure of the cosmic rays is likely to dominate the thermal pressure of the IGM. There is the possibility of hydromagnetic instability, similar to the firehose and mirror instabilities but also involving the mean velocity of the UHECR fluid through the gas. This turbulence should grow as the gas falls further into the cluster, with wavelengths longer than the local UHECR gyro radius ($\sim1-10B_{\mu{\rm G}}^{-1}\,{\rm kpc}$) but shorter than the distance from the shock. The turbulence, which should be quasi-stationary in space, will have decreasing outer scale as the shock is approached, and should evolve to shorter wavelengths, progressively scattering lower rigidity particles through strongly nonlinear, resonant interaction. This ``bootstrap'' mechanism (adopting the physics as opposed to the statistics usage \citep{Blandford:2007zza}) provides a different approach to exploring the complex physics of DSA. In the context of cluster shocks, it needs to be augmented by the inclusion of photopion and photodisintegration losses, which are important because, unlike in the SNR and GWTS cases, the acceleration timescale is not short compared with the propagation timescale. Indeed, intergalactic and Galactic diffusion needs to be included in the problem alongside the background velocity field, which transitions from inflow to cosmological outflow about 7~Mpc away from the cluster for Virgo \citep{Shaya:2017wnh}. The complex thermal and multiphase nature of the IGM, which is likely to be influenced by the hypothesized magnetic turbulence, is also a large part of the problem. Finally, the downstream boundary condition on particle acceleration, which is normally idealized as strong scattering in a uniform flow, could turn out to be important.
\section{Discussion and Future Goals} Cosmic rays have been known for over a century and gave birth to particle physics, but we still don't know where they come from. What else in the Galaxy besides supernovae makes cosmic rays remains an open question, and the origin of UHECR (cosmic rays with energy above 0.1 EeV) is still unknown, with both extragalactic and Galactic origin scenarios constrained but not ruled out. So far, only electromagnetic, gravitational and neutrino signals trace high energy sources. The possible candidate sources of UHECRs -- relativistic jets associated with binary neutron star mergers, core-collapse supernovae or active galactic nuclei; magnetars; accretion shocks around clusters of galaxies and filaments; wind bubbles associated with ultra fast outflows; tidal disruption events -- still remain candidates and no direct evidence implicating them has been found so far. The question of which of these sources can be the accelerators of the ultra-high energy cosmic rays we detect up to $\sim 200$ EeV remains unanswered. Although much debated, the common picture is that a transition from a Galactic to an extragalactic origin for cosmic radiation occurs somewhere between the knee and the ankle, and that the cosmic-ray composition becomes heavier as the energy increases. The interpretation of the anisotropies that emerge at the highest energies needs a better understanding of the UHECR composition. The status is that the lack of observational understanding of the UHECR composition leaves freedom for speculation about their origin. This is also an interesting problem in the understanding of the physics of extensive air shower development; the observed muon number, which is a key observable to infer the mass composition of UHECR, shows an excess compared to air shower simulations with state-of-the-art QCD models. This is known as the ``Muon Puzzle'' (see \citep{Albrecht:2021cxw} for a recent account).
Future data from the LHC, in particular oxygen beam collisions, would help to better address this issue. There is currently an effort from the UHECR community to better probe the cosmic ray composition at the highest energies. AugerPrime, the ongoing upgrade of the PAO, has been designed to enhance the sensitivity of composition analyses by adding scintillator surface detectors and radio antennae to the existing Cherenkov detectors \citep{Stasielak:2021hjm}. TA$\times$4, the ongoing upgrade of TA (an additional 500 scintillator surface detectors designed to provide a fourfold increase in effective area), is being deployed and is partly in operation \citep{bib:tax4_nim}. A few extreme energy events have already been detected by Auger and TA above 150~EeV. With better statistics at the highest energies, it may be possible to unveil the sources \citep{Globus:2022qcr}. The basic reason is that the background of distant sources, which dominates at lower energies, is less important at the highest energies as a consequence of the GZK cutoff. For example, at 300~EeV, only protons have a horizon $\gtrsim30$~Mpc, including the Virgo cluster and M87 at $\sim$16 Mpc. Any detection of intermediate or heavy mass nuclei at these energies would have to originate from sources at $\lesssim 3$~Mpc, again severely limiting the options for sources, probably pointing to transients in the local group. The detection of doublets or multiplets of ``extreme energy'' cosmic ray events (i.e. UHECR with energy above 150 EeV) with a composition-sensitive detector could rule out many currently viable source models \citep{Globus:2022qcr}. Using recent Galactic magnetic field models \citep{Jansson:2012pc,TF17}, we calculated ``treasure'' sky maps to identify the most promising directions for detecting extreme energy cosmic-ray doublets, events that are close in arrival time and direction, and predicted the incidence of doublets as a function of the nature of the source host galaxy.
Based on the asymmetry in the distribution of time delays, we showed that the observation of doublets might distinguish between source models. This is again why larger exposures and a better approach to mass composition are needed. Identifying the sources of UHECRs, ushering in the dawn of cosmic-ray astronomy and exploring the otherwise inaccessible fundamental physics involved constitute three strong reasons for taking cosmic-ray studies to the next level, so as to measure individual cosmic-ray masses at the highest energies. Larger and more capable cosmic-ray detector arrays, such as the Global Cosmic Ray Observatory \citep{horandel2022gcos}, will be needed to accomplish this. \bigskip \section*{Acknowledgments} We thank our colleagues who helped so much in shaping our understanding of UHECR physics and phenomenology: Denis Allard, Chen Ding, Glennys Farrar, Anatoli Fedynitch, Payel Mukhopadhyay, Etienne Parizot, Enrico Peretti, Paul Simeon, Alan Watson. N.G.'s research is supported by the Simons Foundation, the Chancellor Fellowship at UCSC and the Vera Rubin Presidential Chair.\\ The breadth of topics briefly reviewed here is associated with an extensive bibliography which cannot be reflected in a proceedings contribution. Apologies are proffered to colleagues whose important research is not cited.
\section{Introduction} \label{sec:intro} Detailed measurements of atomic spectra were key to the discovery of quantum mechanics and the development of relativistic quantum electrodynamics (QED). Today, precision atomic spectroscopy underpins the SI system of units, provides the values of some fundamental constants, and enables precise tests of Standard Model calculations. Looking for deviations between precise spectroscopic measurements and their Standard Model predictions thus provides a powerful way to set constraints on new physics \cite{Safronova2018}. One powerful approach looks for small effects that break symmetries such as parity (P violation) or time reversal (T violation). Alternatively, one can compare experimental and theoretical transition frequencies. If additional force mediators (bosons) were present that coupled strongly enough to the nucleus and the electrons, they would modify the frequencies of spectral lines. Thus, by comparing experimentally measured spectra with theory, the existence of new, so-called fifth forces can be tested down to very small interaction strengths. In recent years, extensions of the Standard Model, e.g., modified gravity \cite{Brax:2010gp, Brax:2014zba}, axions \cite{Frugiuele:2016rii, Berengut:2017zuo} and new gauge bosons \cite{Jaeckel:2010xx, Jentschura:2018zjv}, have been constrained in this way. In particular, if the force mediator $X$ is light, i.e., below 1 MeV in mass, and couples to partons and electrons, the limits obtained from atomic spectroscopy are many orders of magnitude stronger than those from any other laboratory-based experiment, including high-energy collider experiments \cite{Karshenboim:2010cg,Karshenboim:2010ck,Jaeckel:2010xx}. While modifications of the Standard Model through light bosons are predicted by various models, they arguably receive strong constraints from astrophysical sources \cite{Grifols:1986fc,Raffelt:2012sp, Viaux:2013lha}, e.g., the energy loss from the Sun, globular clusters or supernovae. 
However, the need for independent laboratory-based experiments has been pointed out frequently --- see, e.g., \cite{Masso:2005ym,Jaeckel:2006xm,Jaeckel:2006id,Brax:2007ak}. As an example, a prominent class of light-scalar models, potentially related to modified gravity and dark energy, are chameleons \cite{Khoury:2003aq,Brax:2007ak,Burrage:2014oza}. Chameleons have a mass that depends on the energy density of their environment and can thus avoid being produced in stars, thereby evading astrophysical bounds. One of the main uncertainties in the prediction of spectral lines arises from the difficulty of solving the Schr{\"o}dinger or Dirac equations for many interacting electrons. Even state-of-the-art calculations for species commonly used in atomic clocks only attain a fractional uncertainty of $\sim 10^{-5}$ \cite{Safronova2008}, some 14 orders of magnitude short of the current experimental precision. To circumvent this limitation, it has been proposed to look for new physics using the difference in spectral line positions between isotopes (isotope shifts) \cite{Delaunay:2016brc,Berengut:2017zuo}, rather than by direct comparison with theory. Although promising \cite{Ohayon2019}, the method is limited by the requirement that at least three stable isotopes with two suitable transitions exist for each element. An alternative approach is to use light atomic species such as H or He, for which full Standard Model predictions of line positions including QED corrections (Lamb shift) and weak interactions (Z boson exchange) are possible. Even here, however, the complex structure of the nucleus, in particular the details of its charge distribution, limits the achievable accuracy. Spectroscopic data that do not strongly depend on the details of how the nucleus is modelled can thus help to improve the sensitivity to the presence of new forces. 
In this paper, we explore how the precision spectroscopy of states with a high principal quantum number $n$ (Rydberg states) might be used to set constraints on physics beyond the Standard Model. In principle, such states offer several advantages that could be exploited in a search for new physics. Firstly, the overlap of Rydberg wave functions with the nucleus scales as $n^{-3}$, vastly reducing their sensitivity to nuclear effects. The natural linewidth also scales as $n^{-3}$ (radiative lifetimes grow as $n^{3}$), meaning that narrow transitions from low-lying atomic states are available that span the UV to NIR wavelength range amenable to precision laser spectroscopy. The $n^{-2}$ scaling of the energy levels means that for each atomic species a large number of such transitions are available within a narrow spectral range. Lastly, there is a natural length scale associated with the atomic wave function that scales as $n^2$. As we will show, being able to vary this length scale enables tests which are sensitive to the corresponding length scale associated with any new forces \cite{Karshenboim:2010cg}. Here we take hydrogen as a model system in which to explore the use of Rydberg states in searches for new physics. Measurements with a fractional uncertainty of 10 ppt or better are already available for $n$ up to 12. We calculate the (non-relativistic) spectrum of the combination of a hydrogenic Coulomb potential and a Yukawa potential arising from new physics to high accuracy. By combining the resulting energies with previously derived relativistic, QED and hyperfine corrections, we obtain predicted atomic transition frequencies that can be compared directly to experimental data to set a constraint on the strength of a new physics interaction. We consider in detail how uncertainties due to the Rydberg constant and the proton charge radius can be reduced or eliminated altogether, and show how a global statistical analysis can be used to derive robust atomic physics constraints. 
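The scalings invoked above can be made concrete with the textbook non-relativistic hydrogen formulas. The short sketch below (rounded constants, purely illustrative and not part of the analysis of this paper) evaluates the Bohr energies, the $\langle r \rangle = 3a_0n^2/2$ length scale of the s states, and the wavelengths of transitions out of the $n=2$ level.

```python
# Illustrative check of the Rydberg-state scalings quoted above, using the
# standard non-relativistic hydrogen formulas. Energies in eV, lengths in
# Bohr radii; constants are rounded values.
RYD_EV = 13.605693    # Rydberg energy in eV
HC_EV_NM = 1239.842   # hc in eV*nm

def energy(n):
    """Bohr energy of level n (eV): E_n = -Ry/n^2."""
    return -RYD_EV / n**2

def mean_radius(n):
    """<r> for an l=0 state, in units of the Bohr radius: 3n^2/2."""
    return 1.5 * n**2

def transition_wavelength_nm(n_low, n_high):
    """Wavelength of the n_low -> n_high transition (nm)."""
    return HC_EV_NM / (energy(n_high) - energy(n_low))

if __name__ == "__main__":
    for n in (5, 8, 12):
        print(n, round(transition_wavelength_nm(2, n)), round(mean_radius(n)))
```

Transitions from $n=2$ to $n=5$--$12$ all fall within a $\sim 60$~nm window in the near-UV/visible, illustrating how many lines crowd into a narrow spectral range as $n$ grows.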
Lastly, we develop proposals for future improved tests using Rydberg spectroscopy in atomic hydrogen and other species. The structure of the paper is as follows. In Section \ref{sec:param} we introduce the simplified model used to parameterize the effect of new physics. Calculations of the effect of new bosons on atomic energy levels are presented in Section \ref{sec:NPshifts}. We assess the current experimental reach for new physics (NP) in Section \ref{section:current}. In Section~\ref{sec:improve} we discuss the impact of potential experimental and theoretical improvements on the uncertainty budget, and the extent to which they can result in tighter constraints on new physics. We offer a summary and conclusions in Section~\ref{sec:conc}. \section{Parametrisation of new physics} \label{sec:param} With the discovery of the Higgs boson \cite{Aad:2012tfa,Chatrchyan:2012xdj}, a seemingly elementary scalar sector was established in nature for the first time. Such a particle would mediate a new short-ranged force, the so-called Higgs boson force \cite{Haber:1978jt}. While the Higgs boson force is very difficult to measure in atomic spectroscopy \cite{Delaunay:2016brc}, many extensions of the Standard Model predict elementary scalar or vector particles with a very light mass. Examples include axions \cite{Frugiuele:2016rii,Berengut:2017zuo,Gupta:2019ueh}, modified-gravity models \cite{Brax:2010gp,Brax:2010jk}, millicharged particles \cite{Abel:2008ai, Goodsell:2009xc,Jaeckel:2010ni}, Higgs-portal models \cite{Schabinger:2005ei,Patt:2006fw} and light $Z'$ bosons \cite{Holdom:1985ag,Foot:1991kb}. To remain as model-independent as possible in parameterizing deformations from the Standard Model (SM), it has become standard practice to express new physics contributions in terms of so-called simplified models \cite{Alves:2011wf}. The idea is to add new degrees of freedom to the Standard Model Lagrangian without asking how such states arise from a UV-complete theory. 
Thus, one can describe the dynamics and phenomenological implications of new degrees of freedom without making further assumptions about the UV theory from which they descend. For example, if we assume a fifth force to be mediated by a novel spin-0 particle $X_0$ that couples to leptons and quarks with couplings $g_{l_i}$ and $g_{q_i}$ respectively, we can augment the Standard Model Lagrangian $\mathcal{L_\mathrm{SM}}$ to \begin{equation} \mathcal{L} = \mathcal{L}_{\rm SM} + \sum_{i} \left[g_{l_{i}} \bar{l}_i l_i + g_{q_{i}} \bar{q}_i q_i \right ] X_0. \label{eq:intnew} \end{equation} Here $i$ denotes the three flavor generations, and $l_i$ and $q_i$ refer to the mass basis of the SM fermions. We note that the interactions of Eq.~(\ref{eq:intnew}) could be straightforwardly extended to (axial-)vector or pseudoscalar particles and to flavor off-diagonal interactions, e.g. $g_{q_{ij}} \bar{q}_i q_j X_0$ with $i \neq j$. Further, we should emphasize that the operators of Eq.~(\ref{eq:intnew}) are gauge invariant only after electroweak symmetry breaking, which implies that the coefficients $g_{f_{i}}$ must implicitly contain a factor $v/\Lambda_{\rm NP}$ ($v$ is the vacuum expectation value of the Higgs field and $\Lambda_{\rm NP}$ is a new physics scale). However, this is only important for the interpretation of the observed limit we derive on $g_{f_{i}}$. Studying Rydberg states in hydrogen atoms, we will set a limit on the combined coupling $g_e g_N$, where $g_e$ and $g_N$ correspond to the interaction of $X_0$ with the electron and the nucleon, respectively. With the Lagrangian of Eq.~(\ref{eq:intnew}), the interaction mediated by the NP boson $X_0$ between these two particles contributes an additional Yukawa potential $V(r)$ to the Hamiltonian. 
Denoting by $r$ the distance between the electron and the nucleon and by $m_{X_0}$ the mass of the particle, \begin{equation} V(r) = (-1)^{s+1}\frac{g_e g_N}{4 \pi}~\frac{1}{r}~e^{-m_{X_0} r}, \label{eq:yukpot} \end{equation} where $s$, an integer, is the spin of the force mediator (e.g., $s=0$ for a scalar particle). Higher integer-spin mediators would also give rise to a Yukawa potential of this form. There is however a subtle difference in the sign of this potential between even and odd integer-spin force carriers. Lorentz invariance and the unitarity of the transition matrix element lead to an attractive (repulsive) force if $g_e g_N > 0$ ($g_e g_N < 0$) in the case of an even-spin mediator, and to an attractive (repulsive) force if $g_e g_N < 0$ ($g_e g_N > 0$) in the case of an odd-spin mediator. For example, as the charges for the Higgs boson (spin-0) and the graviton (spin-2) are the particles' masses, the Higgs force and gravity are both attractive. As we want to remain agnostic about the force carrier and the way it interacts with the nucleons and electrons, in the following we will allow both positive and negative values for $g_e g_N$. Finally, we note that an excellent recent review of this type of simplified model is provided in \cite{Safronova2018}. \section{New Physics level shifts} \label{sec:NPshifts} The presence of the interaction potential $V(r)$ would affect the atomic transition frequencies. Its effect can be evaluated perturbatively. To first order in $V(r)$, and neglecting spin-orbit coupling and other relativistic corrections, the energy of a hydrogenic state of principal quantum number $n$, orbital angular momentum quantum number $l$ and radial wave function $R_{nl}(r)$ is shifted by a quantity $\delta E_{nl}^{\rm NP}$, with \begin{equation} \delta E_{nl}^{\rm NP} = \int_0^\infty |R_{nl}(r)|^2 V(r)\, r^2\, dr. 
\label{eq:NPshift} \end{equation} Since the interaction is spherically symmetric, the perturbation is diagonal in $l$ and in the magnetic quantum number $m$, and $\delta E_{nl}^{\rm NP}$ does not depend on $m$. Taking into account $V(r)$ to all orders, which we have done as a test of our numerical methods, confirms that second- and higher-order terms of the perturbation series are completely negligible for the couplings of interest, i.e. $|g_e g_N| < 10^{-11}$. The shift $\delta E_{nl}^{\rm NP}$ takes on a particularly simple form in the limit $m_{X_0}\rightarrow 0$: Since \begin{equation} |\delta E_{nl}^{\rm NP}| < \frac{|g_eg_N|}{4\pi} \int_0^\infty |R_{nl}(r)|^2\, \frac{1}{r}\, r^2\, dr, \end{equation} the virial theorem guarantees that \begin{equation} |\delta E_{nl}^{\rm NP}| < \frac{|g_eg_N|}{4\pi}\,\frac{(-2E_n)}{\alpha Z}, \end{equation} where $E_n$ is the non-relativistic energy of the $(n,l)$ states, $\alpha$ is the fine structure constant and $Z$ is the number of protons in the nucleus. Moreover, \begin{equation} \lim_{m_{X_0}\rightarrow 0} |\delta E_{nl}^{\rm NP}/E_n | = \frac{|g_eg_N|}{2\pi\alpha Z}. \label{eq:smallmasses} \end{equation} (See Appendix~\ref{app:conversion} for the origin of the factor of $1/\alpha$ and more generally for the conversion between natural units and atomic units.) Simple analytical forms can also be derived, e.g., for states with maximum orbital angular momentum ($l=n-1$). However, in most cases $\delta E_{nl}^{\rm NP}$ is best evaluated numerically. Various approaches to this problem have been considered over the years, as has the calculation of energy levels for a superposition of a Coulomb potential and a Yukawa potential (the Hellmann potential) \cite{Adamowski85,Dutt86,Bag87,Hall01,Ikhdair07,Roy08,Nasser11,Ikhdair13,Onate16}. 
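As a concrete check of Eq.~(\ref{eq:NPshift}), the first-order shift of the hydrogen ground state can be evaluated directly. The sketch below works in atomic units and folds the Yukawa prefactor $g_eg_N/4\pi$ (together with the natural-to-atomic unit conversion factor $1/\alpha$ discussed in Appendix~\ref{app:conversion}) into a single constant $g$; this is an illustrative sketch under that convention, not the machinery used in this paper. For the 1s state $R_{10}(r)=2e^{-r}$, and the integral is known in closed form: $\langle 1{\rm s}|\,e^{-\mu r}/r\,|1{\rm s}\rangle = 4/(2+\mu)^2$, where $\mu$ is the boson mass in units of $1/a_0$.

```python
# Numerical check of the first-order NP shift for the hydrogen 1s state,
# in atomic units (hbar = m_e = a_0 = 1). The prefactor is folded into a
# single constant g (an assumption of this sketch); mu is the boson mass
# in units of 1/a_0. The integrand |R_10|^2 (e^{-mu r}/r) r^2 reduces to
# 4 r exp(-(2+mu) r), whose integral is 4/(2+mu)^2.
import math

def shift_1s_numeric(g, mu, rmax=60.0, npts=60000):
    """Composite-Simpson evaluation of -g * int_0^inf 4 r e^{-(2+mu)r} dr."""
    h = rmax / npts  # npts must be even for Simpson's rule
    f = lambda r: 4.0 * r * math.exp(-(2.0 + mu) * r)
    s = f(0.0) + f(rmax)
    s += 4.0 * sum(f(h * i) for i in range(1, npts, 2))
    s += 2.0 * sum(f(h * i) for i in range(2, npts, 2))
    return -g * s * h / 3.0   # minus sign: attractive case

def shift_1s_analytic(g, mu):
    """Closed form of the same matrix element."""
    return -g * 4.0 / (2.0 + mu) ** 2
```

In the $\mu\rightarrow 0$ limit the closed form gives $|\delta E_{10}^{\rm NP}/E_1| = 2g$, consistent with Eq.~(\ref{eq:smallmasses}) once $g$ is identified with $g_eg_N/(4\pi\alpha)$.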
The most accurate results reported to date are those of Ref.~\cite{Roy08}, in which the energies of the ground state and first few excited states were obtained to approximately 13 significant figures using a generalized pseudo-spectral method. Our approach to this problem is different and does not seem to have been used in this context so far: we expand the radial wave functions on a finite Laguerre basis of Sturmian functions $S_{\nu l}^{\kappa}(r)$ \cite{Rotenberg62}, find the generalized eigenvectors of the matrix representing the unperturbed Hamiltonian in that basis, and use these to calculate the first-order energy shift $\delta E_{nl}^{\rm NP}$. Here \begin{align} S_{\nu l}^{\kappa}(r) = \sqrt{\frac{\kappa (\nu -1)!} {(\nu +l)(\nu +2l)!}} (2\kappa r)^{l+1} &e^{-\kappa r} L_{\nu -1}^{2l+1}(2\kappa r), \nonumber \\ & \nu=1,2,\ldots, \end{align} with $\kappa$ a positive parameter which can be chosen at will. These basis functions have already been used in this context, but in a different way \cite{Nasser11}. Sturmian bases have proved to be convenient in precision calculations of properties of hydrogenic systems \cite{Broad85,CPC}. We obtain the eigenenergies and wave functions of the unperturbed Hamiltonian by solving the generalized eigenvalue problem \begin{equation} {\sf H}_0{\sf c} = E\, {\sf Sc}, \label{eq:eigenvalueprob} \end{equation} where ${\sf H}_0$ is the matrix representing the unperturbed non-relativistic Hamiltonian of hydrogen in this basis and ${\sf S}$ is the overlap matrix of the basis functions (Sturmian functions are not mutually orthogonal). The corresponding matrix elements, and the elements of the matrix ${\sf V}$ representing the Yukawa potential, can be obtained in closed form using standard integrals and recursion formulas~\cite{Gradshteyn}. Given the eigenvectors ${\sf c}$, the energy shifts are then calculated as $\delta E_{\sf c}^{\rm NP} = {\sf c^T V c}$. 
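As an independent, low-accuracy cross-check of this procedure, one can trade the Sturmian basis for a simple finite-difference discretization of the $l=0$ radial equation and evaluate the same matrix element. The sketch below does this in atomic units; the grid parameters and the use of \texttt{scipy} are our own illustrative choices, not the variational calculation described above.

```python
# Finite-difference analogue (a sketch, not the paper's Sturmian method):
# discretize the l=0 radial equation -(1/2) u'' - u/r = E u with
# u(0) = u(rmax) = 0 on a uniform grid, diagonalize the tridiagonal
# Hamiltonian, and evaluate <u| e^{-mu r}/r |u> for the lowest states
# (multiply by -g for the attractive first-order shift). Atomic units.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def hydrogen_levels_and_shifts(mu, n_states=3, rmax=200.0, npts=20000):
    h = rmax / (npts + 1)
    r = h * np.arange(1, npts + 1)
    diag = 1.0 / h**2 - 1.0 / r             # kinetic diagonal + Coulomb
    off = -0.5 / h**2 * np.ones(npts - 1)   # kinetic off-diagonal
    E, U = eigh_tridiagonal(diag, off, select='i',
                            select_range=(0, n_states - 1))
    shifts = []
    for k in range(n_states):
        u = U[:, k] / np.sqrt(h)            # normalize: sum |u|^2 h = 1
        shifts.append(float(np.sum(u**2 * np.exp(-mu * r) / r) * h))
    return E, shifts
```

On this modest grid the lowest eigenvalues reproduce $E_n = -1/2n^2$ and, for $\mu=1$, the 1s matrix element reproduces the closed-form value $4/9$ at the $10^{-3}$ level; the Sturmian calculation converges far faster, which is why it is the method of choice here.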
Since the functions $\{S_{\nu l}^{\kappa}(r), \nu=1,2,\ldots\}$ form a complete set, the eigenvalues $E$ and energy shifts $\delta E_{\sf c}^{\rm NP}$ obtained with a basis of $N$ of these functions ($\nu =1,\ldots,N$) converge variationally to the exact eigenenergies and exact energy shifts as $N\rightarrow \infty$. We repeat the calculations for several different values of $\kappa$ and different basis sizes so as to monitor the convergence of our results and the impact of numerical inaccuracies. With an appropriate choice of $\kappa$, and taking $N$ up to 200, the calculated energy shifts converged to at least 8 significant figures {}\footnote{See Supplemental Material at [URL, to be added by the Publisher] for tables of values of $\delta E_{nl}^{\rm NP}$ for $n$ up to 80, $l$ up to 25, and four different values of $m_{X_0}$ ranging from 1 to 1000~eV.}. Using the same method, but solving the generalized eigenvalue problem for the full Hamiltonian rather than the unperturbed Hamiltonian, we could also reproduce the results of Ref.~\cite{Roy08} to the 14 significant figures given in that article. The results of these calculations are summarized in Figs.~\ref{fig:fig1}, \ref{fig:varyingmass_581226} and \ref{fig:fixedrelshift_581226}. These results, like all the other numerical results discussed in this paper, refer to the specific case of atomic hydrogen; we therefore assume $Z=1$ from now on. Fig.~\ref{fig:fig1} shows the general trends. The fractional shift is largest for light bosons, for which the range of the Yukawa potential is comparable to or larger than the range of the atomic wave function. In agreement with Eq.~(\ref{eq:smallmasses}), $|\delta E_{nl}^{\rm NP}/E_n| \lesssim |g_eg_N|/2\pi\alpha$ for low masses. As $m_{X_0}$ increases, the shift decreases, but in a way that depends on the shape of the atomic wave function through both $n$ and $l$. The effect of the Yukawa potential is largest at the origin. 
As $n$ and $l$ increase, the probability density of the atomic wave function in the region close to the nucleus is reduced, leading to a smaller NP shift. \begin{figure}[h] \centering \includegraphics[width=8.5cm]{curves4.pdf} \caption{The NP shift, $\delta E_{nl}^{\rm NP}$, divided by the non-relativistic energy of the state, $E_n$, for the states of atomic hydrogen with $0 \leq l \leq 4$ and $n = 5$ ($E_n/h = -1.32\times10^{11}$~kHz, where $h$ is Planck's constant), 8 ($E_n/h = -5.14\times10^{10}$~kHz), 12 ($E_n/h = -2.28\times10^{10}$~kHz) or 26 ($E_n/h = -4.87\times10^9$~kHz). A value of $g_e g_N$ of $1 \times 10^{-12}$ is assumed. From top to bottom, $m_{X_0} = 1$~eV (orange circles), 10~eV (green circles), 100~eV (brown circles), or 1~keV (black circles). } \label{fig:fig1} \end{figure} \renewcommand{\arraystretch}{1.2} \begin{table} \caption{The range of the Yukawa potential ($\Lambda$), expressed as a multiple of the Bohr radius, and the principal quantum number $n_\Lambda$ for which this range is equal to that of the corresponding $l=0$ state to the closest approximation possible, for three values of $m_{X_0}$, the mass of the NP particle.} \label{table:C} \begin{center} \begin{ruledtabular} \begin{tabular}{p{2cm}p{3cm}p{0.7cm}} $m_{X_0}$ & $\Lambda$ & $n_\Lambda$\\ \tableline\\[-4mm] 1 eV & $3.73 \times 10^{3}\, a_0$ & 50 \\ 100 eV & $3.73 \times 10^{1}\, a_0$ & 5 \\ 10 keV & $3.73 \times 10^{-1}\, a_0$ & 1 \end{tabular} \end{ruledtabular} \end{center} \end{table} To gain further insight, in Fig.~\ref{fig:varyingmass_581226}, we investigate the relationship between the two characteristic length scales of the problem, i.e., the range of the Yukawa potential, $\Lambda = 1/m_{X_0}$, and the range of the atomic wave function. The latter can be characterized by the expectation value $\langle nl | r | nl \rangle$, which for $l=0$ states is $3a_0n^2/2$ where $a_0$ is the Bohr radius. 
We see that these two ranges are comparable for principal quantum numbers $n \sim n_\Lambda$, where $n_\Lambda$ is the integer closest to $(2\,\Lambda/3\,a_0)^{1/2}$. Representative values of $n_\Lambda$ are given in Table~\ref{table:C}. The NP shift is accurately predicted by Eq.~(\ref{eq:smallmasses}) for $n\ll n_\Lambda$ and is much smaller than that limit for $n\gg n_\Lambda$. The fractional shift is plotted in Figs.~\ref{fig:varyingmass_581226}(a) and (b), respectively against the ratio of these two characteristic lengths and against the boson mass, for a range of values of $n$ and $l$. These curves show that for masses below $\sim 50$~eV, the shift decreases with $n$ but is essentially independent of $l$. Above this breakpoint, the shift decreases much more rapidly for d-states ($l=2$) than for s-states ($l=0$). This trend is even more marked for higher values of $l$ (not shown in the figure). In fact, for states with $l = n-1$ (which is the maximum value of the orbital angular momentum for the principal quantum number $n$), $|\delta E_{nl}^{\rm NP}/E_n|$ decreases as fast as $n^{-2n}$ when $n$ increases beyond $n_\Lambda$. In Figure~\ref{fig:fixedrelshift_581226}, we fix the value of the fractional NP shift at $|\delta E_{nl}^{\rm NP}/E_n| = 10^{-12}$, and show how the resulting constraint on the mass $m_{X_0}$ and the effective coupling $g_e g_N$ depend on the quantum numbers $n$ and $l$. Thus combining measurements for different values of $n$ and $l$ could provide additional information on the properties of the fifth-force carrier, i.e., its mass and its couplings to the electron and nuclei. 
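The entries of Table~\ref{table:C} follow from nothing more than unit conversion: $\Lambda = \hbar c/(m_{X_0}c^2)$ expressed in Bohr radii, and $n_\Lambda \simeq (2\Lambda/3a_0)^{1/2}$ rounded to the nearest integer ($\geq 1$). A minimal check with rounded constants:

```python
# Reproducing Table I: the Yukawa range Lambda = hbar*c/(m c^2) in Bohr
# radii, and the matching principal quantum number n_Lambda, the integer
# closest to sqrt(2 Lambda / 3 a_0) (at least 1). Rounded constants.
HBARC_EV_NM = 197.32698   # hbar*c in eV*nm
A0_NM = 0.052917721       # Bohr radius in nm

def range_in_bohr(mass_ev):
    """Yukawa range Lambda in units of the Bohr radius."""
    return HBARC_EV_NM / mass_ev / A0_NM

def n_lambda(mass_ev):
    """Principal quantum number whose l=0 <r> best matches Lambda."""
    return max(1, round((2.0 * range_in_bohr(mass_ev) / 3.0) ** 0.5))

for m_ev in (1.0, 100.0, 1e4):   # 1 eV, 100 eV, 10 keV as in Table I
    print(m_ev, range_in_bohr(m_ev), n_lambda(m_ev))
```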
\begin{figure}[h] \centering \includegraphics[width=8.5cm]{varyingmass_581226.pdf} \caption{The NP shift, $\delta E_{nl}^{\rm NP}$, divided by the non-relativistic energy of the state, $E_n$, for the states of atomic hydrogen with $n=5$ (solid curves), $n=8$ (dashed-dotted curves), $n=12$ (dashed curves) or $n=26$ (dotted curves) and $l=0$ (black curves) or $l=2$ (red curves), vs.\ (a) the range of the NP potential divided by the characteristic length scale of the atomic wave function, $\langle nl | r | nl\rangle$, or (b) the mass of the NP particle. A value of $g_e g_N$ of $1 \times 10^{-12}$ is assumed. } \label{fig:varyingmass_581226} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.5cm]{fixedrelshift_581226.pdf} \caption{The coupling constant $g_e\,g_N$ at which the relative shift $|\delta E_{nl}^{\rm NP}/E_n|$ is $1\times 10^{-12}$, vs.\ the mass of the NP particle. The line styles and colors are the same as in Fig.~\ref{fig:varyingmass_581226}. Green triangles: the results for the ground state. } \label{fig:fixedrelshift_581226} \end{figure} \section{NP bounds based on current spectroscopic data} \label{section:current} In a nutshell, the existence of a new physics interaction could be brought to light by demonstrating a significant difference between the measured frequency for a transition from a state $a$ to a state $b$, $\Delta_{ba}^{\rm exp}$, and the corresponding prediction of the Standard Model, $\Delta_{ba}^{\rm SM}$ (or, better, by demonstrating such a difference for a set of transitions). Bounds on the strength of the new physics interaction can be set by finding the most positive and most negative values of $g_eg_N$ for which $\Delta_{ba}^{\rm exp}$ is consistent with the theoretical value $\Delta_{ba}^{\rm SM} + \Delta_{ba}^{\rm NP}$, with \begin{equation} \Delta_{ba}^{\rm NP} = (\delta E_{n_bl_b}^{\rm NP} - \delta E_{n_al_a}^{\rm NP})/h, \end{equation} where $h$ is Planck's constant. 
However, $\Delta_{ba}^{\rm SM}$ depends on the Rydberg constant $R_{\infty}$, whose value is primarily obtained by matching spectroscopic data to theory \cite{Codata2014}. When setting bounds on $g_eg_N$, it is therefore necessary to evaluate $\Delta_{ba}^{\rm SM}$ with a value of $R_\infty$ itself obtained with allowance made for possible new physics shifts of the relevant atomic transitions. Frequency intervals have been both measured and calculated to a very high level of precision for transitions in hydrogen, deuterium and muonic hydrogen. However, a new physics interaction might couple an electron differently to a deuteron than to a proton, and couple a proton differently to a muon than to an electron. It is therefore prudent, when establishing such bounds, to use data pertaining to only one of these three systems rather than mixed sets of data. We consider bounds based exclusively on hydrogen results in this paper. $\Delta_{ba}^{\rm SM}$ is the sum of a gross structure contribution $\Delta_{ba}^{\rm g}$ (as given by the elementary treatment based on the Schr\"odinger equation) and of various corrections arising from the Dirac equation, from QED effects and from the hyperfine coupling \cite{Codata2014, Horbatsch2016, Yerokhin2019}. In terms of the Rydberg frequency, ${\cal R} = c\,R_{\infty}$, \begin{equation} \Delta_{ba}^{\rm g} = {\cal R}\,\left(\frac{1}{n_a^2}-\frac{1}{n_b^2}\right)\, \frac{m_{\rm r}}{m_e}, \end{equation} where $m_{\rm r}$ is the reduced mass of the atom and $m_e$ is the mass of the electron. It is convenient to factorize $\Delta_{ba}^{\rm g}$ into the product ${\cal R}\,\tilde{\Delta}_{ba}^{\rm g}$, with \begin{equation} \tilde{\Delta}_{ba}^{\rm g} = \left(\frac{1}{n_a^2}-\frac{1}{n_b^2}\right)\, \frac{m_{\rm r}}{m_e}. \end{equation} The difference $\Delta_{ba}^{\rm SM} - \Delta_{ba}^{\rm g}$ depends on $R_p$, the charge radius of the proton, through a term roughly proportional to $R_p^2$ \cite{Horbatsch2016,Yerokhin2019}. 
We denote this term by $R_p^2\,\tilde{\Delta}_{ba}^{\rm ns}$, aggregate all the other corrections into a shift $\Delta_{ba}^{\rm oc}$, and write \begin{equation} \Delta_{ba}^{\rm SM} = {\cal R}\,\tilde{\Delta}_{ba}^{\rm g} + R_p^2\,\tilde{\Delta}_{ba}^{\rm ns} + \Delta_{ba}^{\rm oc}. \label{eq:Deltath} \end{equation} The term $\Delta_{ba}^{\rm oc}$ includes fine structure and recoil corrections as well as QED and hyperfine shifts. Detailed work by a number of authors has yielded expressions for these corrections in terms of ${\cal R}$, of $R_p$ and of a small number of fundamental constants determined from measurements in physical systems other than hydrogen. The values of ${\cal R}$ and $R_p$ recommended by the Committee on Data of the International Council for Science (CODATA) were co-determined by a global fit of the theory to a large set of data, including deuterium data \cite{Codata2014}. Taking new physics shifts into account in a determination of ${\cal R}$ based entirely on hydrogen data thus involves a simultaneous redetermination of $R_p$. Eq.~(\ref{eq:Deltath}) is a convenient starting point for such calculations {}\footnote{ The terms $R_p^2\,\tilde{\Delta}_{ba}^{\rm ns}$ and $\Delta_{ba}^{\rm oc}$ are also proportional to ${\cal R}$; however, their dependence on ${\cal R}$ is normally not important for the determination of this constant, as these terms are much smaller than ${\cal R}\,\tilde{\Delta}_{ba}^{\rm g}$ unless $n_a = n_b$. $R_p^2\,\tilde{\Delta}_{ba}^{\rm ns}$ is effectively zero for $l\not= 0$. For $l = 0$, this term depends on the proton radius both through the overall factor $R_p^2$ and through a dependence of $\tilde{\Delta}_{ba}^{\rm ns}$ on $R_p$; however, the latter dependence is weak and does not complicate the calculation.}. 
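To give a sense of the relative sizes of the terms in this decomposition, the sketch below evaluates the gross-structure term for the 1s~--~2s interval, using the CODATA 2014 Rydberg frequency of Table~\ref{table:R} and a rounded proton-to-electron mass ratio (our own illustrative numbers, not part of any fit). The result falls roughly 23~GHz short of the measured interval of about $2\,466\,061\,413\,187$~kHz; that remainder is precisely the fine-structure, QED and nuclear-size content of the other two terms.

```python
# Size of the gross-structure term R*(1/na^2 - 1/nb^2)*(m_r/m_e) for the
# hydrogen 1s-2s interval, with the CODATA 2014 Rydberg frequency (kHz)
# and a rounded proton-to-electron mass ratio. Illustrative numbers only.
R_KHZ = 3_289_841_960_355.0     # c R_infty in kHz (CODATA 2014)
MP_OVER_ME = 1836.15267343      # proton-to-electron mass ratio

def gross_interval_khz(n_a, n_b):
    reduced = 1.0 / (1.0 + 1.0 / MP_OVER_ME)   # m_r/m_e for hydrogen
    return R_KHZ * (1.0 / n_a**2 - 1.0 / n_b**2) * reduced

MEASURED_1S2S_KHZ = 2_466_061_413_187.0
residual_ghz = (MEASURED_1S2S_KHZ - gross_interval_khz(1, 2)) * 1e-6
print(residual_ghz)   # the Dirac/QED/nuclear-size remainder, in GHz
```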
Bearing this in mind, we derive bounds on the value of $g_eg_N$ in the following way: given experimental transition frequencies for several different intervals, e.g., $\Delta_{b_1a_1}^{\rm exp}$, $\Delta_{b_2a_2}^{\rm exp}$, $\Delta_{b_3a_3}^{\rm exp}$, etc., we calculate a value of ${\cal R}$ and a value of $R_p$ by matching these results with the corresponding theoretical frequency intervals, \begin{align} \Delta_{b_ia_i}^{\rm th} = {\cal R}\,\tilde{\Delta}_{b_ia_i}^{\rm g} + R_p^2\,\tilde{\Delta}_{b_ia_i}^{\rm ns} + & \Delta_{b_ia_i}^{\rm oc} + \Delta_{b_ia_i}^{\rm NP}, \nonumber\\ & \; \qquad i = 1,2,3,\ldots \label{eq:ansatz} \end{align} The values of these two parameters are determined by correlated $\chi^2$-fitting. We then obtain bounds on the coupling constant by finding the most positive and most negative values of $g_eg_N$ for which the model remains consistent with the data at the 95\% confidence level. The sensitivity to new physics arises from the dependence of the NP shift on the quantum numbers $n$ and $l$ illustrated in Figs.~\ref{fig:fig1}--\ref{fig:fixedrelshift_581226}. Put simply, states with high values of $n$ and $l$ are only weakly sensitive to new physics, whereas the opposite is the case for low-lying states. Before describing the results of this analysis, we briefly discuss the existing experimental results relevant for this calculation and the related theoretical uncertainties. Further details about the calculation can be found in Appendix \ref{app:fittingcurr}. \subsection{Existing spectroscopic data for hydrogen} Clearly, detecting a NP interaction from spectroscopic data sets a challenging level of precision and accuracy on the measurements. 
Apart from the hyperfine splittings of the 1s and 2s states, which are not directly relevant here, the only hydrogen frequency intervals currently known to an accuracy better than 1~kHz are the 1s~--~2s interval, which has been measured with an experimental error of 10~Hz (i.e., a relative error of 0.004 ppt) \cite{Parthey2011,Matveev2013}, and intervals between circular states with $n$ ranging from 27 to 30, for which unpublished measurements with an experimental error of a few Hz (about 10~ppt) have been made \cite{DeVries2002}. Circular states are states with $|m| = l = n-1$. The recommended value for the Rydberg constant is based on the 1s~--~2s measurement as well as on a number of measurements with a larger error \cite{Codata2014,Codata2017}. The latter include measurements of the 2s~--~8s, 2s~--~8d and 2s~--~12d intervals made in the late 1990s with an experimental error ranging from 6 to 9~kHz (i.e., of the order of 10~ppt) \cite{deBeauvoir1997,Schwob1999, deBeauvoir2000}. Until recently, no other transitions between hydrogen states differing in $n$ had been measured with an error of less than 10~kHz. However, the centroid of the 2s~--~4p interval has now been determined with an error of 2.3~kHz \cite{Beyer2017}, and that of the 1s~--~3s interval with an error of 2.6~kHz (1~ppt) \cite{Fleurbaey:2018fih}. \subsection{Theoretical uncertainty} The overall uncertainty on the SM predictions of hydrogen energy levels is mainly contributed by uncertainties on the values of the Rydberg constant, of the proton radius and of various QED corrections. Uncertainties on the values of other fundamental constants also contribute, although not in a significant way at the level of precision these energy levels can currently be calculated. 
The uncertainty on the values of the Rydberg constant and the proton radius does not affect our calculation of the bounds on $g_eg_N$ (recall that within our approach, these values are determined together with the bounds themselves in a self-consistent way). Recent compilations of the relevant QED corrections and their uncertainties can be found in \cite{Horbatsch2016} and \cite{Yerokhin2019}. These corrections roughly scale as $n^{-3}$ and strongly depend on $l$. Ref.~\cite{Horbatsch2016} gives the combined theoretical uncertainty on the energy of a state of principal quantum number $n$ as $(2.3/n^3)$~kHz for $l=0$, excluding the error contributed by the uncertainty on $R_p$, and as less than 0.1~kHz for $l > 0$. Except for the 1s~--~2s interval, the experimental uncertainty rather than the theoretical uncertainty is thus the main limitation for setting bounds on $g_eg_N$ based on the current spectroscopic data. \renewcommand{\arraystretch}{1.2} \begin{table}[t] \caption{Values of the Rydberg frequency obtained by previous authors or derived in this work, assuming no NP interaction. 
The numbers between parentheses are the uncertainties on the last digit quoted.} \label{table:R} \begin{center} \begin{ruledtabular} \begin{tabular}{ll} Reference & ${\cal R} $\\ \tableline\\[-4mm] \multirow{1}{*}{CODATA 2014 \cite{Codata2014}} & $\mbox{3 289 841 960 355(19)}~\mbox{kHz}$ \\[0mm] \multirow{1}{*}{Beyer {\it et al.} \cite{Beyer2017}} & $\mbox{3 289 841 960 226(29)}~\mbox{kHz}$ \\[0mm] \multirow{1}{*}{Fleurbaey {\it et al.} \cite{Fleurbaey:2018fih}} & $\mbox{3 289 841 960 362(41)}~\mbox{kHz}$ \\[0mm] \multirow{1}{*}{De Vries \cite{DeVries2002}} & $\mbox{3 289 841 960 306(69)}~\mbox{kHz}$ \\[0mm] \multirow{1}{*}{Dataset A} & $\mbox{3 289 841 960 306(18)}~\mbox{kHz}$\\[0mm] \multirow{1}{*}{Dataset B} & $\mbox{3 289 841 960 356(23)}~\mbox{kHz}$\\[0mm] \multirow{1}{*}{Dataset C} & $\mbox{3 289 841 960 306(18)}~\mbox{kHz}$ \end{tabular} \end{ruledtabular} \end{center} \end{table} \subsection{Bounds based on existing data} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{bounds_reev_paper.pdf} \caption{Upper bounds on the possible value of $|g_e g_N|$ derived from existing spectroscopic data, (a) for an attractive interaction, (b) for a repulsive interaction. Shaded areas: region excluded at the 95\% confidence level (data set A). Solid and dashed curves: bounds based on the same set of transitions as for the shaded areas, minus the 2s~--~4p transition (data set B, solid curves) or minus the transitions between high lying circular states (data set C, dashed curves). Dotted curves: bounds arising from a comparison of experimental results for these high lying circular states to theoretical predictions based on the data set C. } \label{fig:bounds_paper} \end{figure} Bounds on the NP interaction strength derived as explained above are presented in Figs.~\ref{fig:bounds_paper}(a) and \ref{fig:bounds_paper}(b), respectively for attractive and repulsive interactions. 
These results are based on three different sets of data, which we refer to as sets A, B and C. Set A groups all the existing high-precision spectroscopic measurements in hydrogen, namely all the 18 experimental hydrogen transition frequencies included in the CODATA~2014 least-squares fit~\cite{Codata2014}, the recent results of Ref.~\cite{Beyer2017} for the 2s~--~4p interval and of Ref.~\cite{Fleurbaey:2018fih} for the 1s~--~3s interval, and the results of Ref.~\cite{DeVries2002} for the transitions between high-lying circular states. The other two sets are the same as Set A but without the 2s~--~4p results (Set B) or without the circular state results (Set C). The corresponding values of ${\cal R}$ obtained when assuming no NP shift are given in Table~\ref{table:R}, together with the recommended value of this constant \cite{Codata2014}, values based on the recent measurements of either the 2s~--~4p or the 1s~--~3s intervals \cite{Beyer2017,Fleurbaey:2018fih}, and a value based entirely on measurements of transitions between the circular states \cite{DeVries2002}. As is well known, the results of Ref.~\cite{Beyer2017} are discrepant with both the CODATA results and those of Ref.~\cite{Fleurbaey:2018fih} with regard to the values of ${\cal R}$ and $R_p$, but yield a value of $R_p$ in good agreement with measurements in muonic hydrogen \cite{Antognini2013}. The value of ${\cal R}$ obtained from Dataset B is in close agreement with the CODATA 2014 value and has an uncertainty of a similar magnitude, although the CODATA fit also included spectroscopic measurements in deuterium and scattering data. Including the results of Ref.~\cite{Beyer2017} in the fit reduces ${\cal R}$ significantly (the change is large because of the particularly small experimental error on these measurements). Our main results for the current bounds on $g_eg_N$ are based on Dataset A and are represented by the shaded areas in Fig.~\ref{fig:bounds_paper}.
They constrain $|g_eg_N|$ to better than $10^{-11}$ over the range $10^1$~--~$10^3$~eV. As seen from the figure, the shape of the excluded area somewhat differs between attractive and repulsive interactions, particularly in the region around 100 eV. This difference indicates that the range of allowed values of $g_eg_N$ is not centred on zero --- though we emphasise that a value of zero remains compatible with the experimental data. The regions below the shaded areas indicate the range of values of $g_eg_N$ compatible with the data, given the experimental and theoretical errors{}\footnote{These results are consistent with the hydrogen bound proposed in Ref.~\cite{Karshenboim:2010cg}, which was obtained in a different way.}. Next we consider the effect of removing individual measurements from the calculation. Removing the recent 2s~--~4p measurement \cite{Beyer2017} has a considerable effect, not only weakening the overall bound, as expected, but also changing the shape of the excluded region. These differences reflect the aforementioned inconsistencies between the values of the Rydberg constant and the proton radius derived from the results of Ref.~\cite{Beyer2017} and those obtained in the CODATA 2014 fit. The effect of removing this measurement illustrates the perils of selectively setting bounds using individual measurements or combinations of measurements. Whilst individual measurements may be precise, their accuracy can only be gauged against other measurements, particularly independent measurements of the same transitions. Instead of removing the 2s~--~4p measurements, we now remove the unpublished circular state measurements of Ref.~\cite{DeVries2002} and use Dataset C. The result is a substantial weakening of the NP bound for lower masses, illustrating the importance of using measurements of states with a large spatial extension when probing for an NP interaction with a low value of $m_{X_0}$ \cite{Karshenboim:2010cg}.
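This sensitivity to the spatial extension of the states can be illustrated with a small numerical sketch (an illustration only, not the fitting machinery used in this work): to first order in perturbation theory the NP shift of a level is proportional to $\langle e^{-m_{X_0}r}/r\rangle$, which for circular states ($l=n-1$) can be evaluated directly from the hydrogenic radial density.

```python
import numpy as np
from scipy.integrate import quad

def yukawa_expect(n, m):
    """<exp(-m*r)/r> in atomic units for the hydrogen circular state (l = n-1),
    whose radial wave function is proportional to r**(n-1) * exp(-r/n).
    The first-order Yukawa (NP) shift of the level is proportional to this."""
    l = n - 1
    dens = lambda r: r ** (2 * l + 2) * np.exp(-2.0 * r / n)  # |R|^2 r^2, unnormalised
    # the density peaks near r = n^2; mark that point to guide the quadrature
    norm, _ = quad(dens, 0.0, 60.0 * n, points=[n * n])
    val, _ = quad(lambda r: dens(r) * np.exp(-m * r) / r, 0.0, 60.0 * n, points=[n * n])
    return val / norm

# infinite range (m -> 0): a circular state still feels <1/r> = 1/n^2 of the 1s shift
ratio_long = yukawa_expect(28, 0.0) / yukawa_expect(1, 0.0)   # ~ 1/784
# range of one Bohr radius: the circular-state shift is exponentially suppressed
ratio_short = yukawa_expect(28, 1.0) / yukawa_expect(1, 1.0)  # essentially zero
```

For long-range interactions the circular states are thus shifted at a small but non-negligible level, which is precisely where they add leverage to the fit; for short ranges their shift vanishes and they instead serve as a nearly NP-free anchor for ${\cal R}$.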
Although small, the NP shift of the circular states is not negligible when the range of the interaction is long enough. As $m_{X_0}\rightarrow 0$, the difference between the NP shifts of these states and those of the low-lying states therefore decreases, which weakens the bounds on $|g_eg_N|$ {}\footnote{Since completion of this work we have become aware of a new measurement of the 2s$_{1/2}$~--~2p$_{1/2}$ Lamb shift [N.~Bezginov, T.~Valdez, M.~Horbatsch, A.~Marsman, A.~C.~Vutha and E.~A.~Hessels, {\it A Measurement of the Atomic Hydrogen Lamb Shift and the Proton Charge Radius}, Science {\bf 365}, 1007 (2019)]. Adding this result to those already included in Dataset A and carrying out the global fitting procedure on this extended set yields bounds incompatible with a zero value of $g_eg_N$ at the 95\% confidence level. It is our opinion that this surprising result merely reflects the inconsistencies in the data noted in the text, which are exacerbated by the inclusion of this new measurement. }. In summary, we have derived global NP bounds based on all available measurements for hydrogen, with no input from other atomic species. The sensitivity of the bound to individual measurements and to the Rydberg constant illustrates that bounds set using measurements on individual transitions should be treated with a degree of caution. The strong additional constraint provided by high-lying states at low masses motivates precision measurements for states with both higher $n$ and $l$. For the latter we note the proposal of the Michigan group \cite{Ramos2017}. Before closing this section, we note that bounds on $g_eg_N$ can also be found by comparing the values of ${\cal R}$ derived from different sets of transitions. For example, let ${\cal R}_C(m_{X_0},g_eg_N)$ and ${\cal R}_{D}(m_{X_0},g_eg_N)$ be the NP-dependent values of ${\cal R}$ obtained by fitting the theoretical model respectively to Dataset C and to the circular state results of Ref.~\cite{DeVries2002}.
These two sets of data are completely independent of each other, and in contrast to ${\cal R}_C(m_{X_0},g_eg_N)$, the calculation of ${\cal R}_{D}(m_{X_0},g_eg_N)$ is insensitive to uncertainties on the proton radius and to poorly known QED corrections. The corresponding errors on these Rydberg frequencies, $\sigma_C$ and $\sigma_{D}$, are also functions of $m_{X_0}$ and $g_eg_N$. As these errors are not correlated with each other, bounds on the NP coupling constant can be obtained by finding the most positive and most negative values of $g_eg_N$ such that \begin{equation} |{\cal R}_C(m_{X_0},g_eg_N) - {\cal R}_{D}(m_{X_0},g_eg_N)| = f\,\sqrt{\sigma_C^2 +\sigma_{D}^2} \label{eq:bounds2} \end{equation} for a given choice of $f$ (this constant sets the confidence limit of the bounds --- we take $f=2$). The results are also shown in Fig.~\ref{fig:bounds_paper} (the dotted curves). Cancellations of NP shifts are at the origin of the large weakening of these bounds between 1 and 10~keV. Below 300~eV, these bounds are similar to those obtained from the global fit of the same set of data (the shaded areas). Compared to a global fit, however, this approach to setting bounds is potentially more sensitive to systematic errors in some of the measurements. We thus prefer to take the shaded areas as the best representation of the constraint on $g_eg_N$ that can be set on the basis of the current body of spectroscopic work in hydrogen. \section{Scope for tighter bounds} \label{sec:improve} Three factors limit the strength of the current bound shown in Fig.~\ref{fig:bounds_paper}. The first is the experimental uncertainty of the measured energy levels. So far, only the 1s~--~2s interval has been measured with a relative uncertainty below the 0.01~ppt level. For higher-lying states, such as those involved in the measurements at $n=12$, the uncertainty is $\sim 1$~kHz or more, approximately one hundred times larger. The second factor is the range of quantum numbers $n$ and $l$ for which precise data exist.
The importance of additional measurements is highlighted in Fig.~\ref{fig:bounds_paper}. Lastly, the limitations of the SM calculation of the energies also play an important role. Here also there is much to be gained by working with higher-lying Rydberg states. In this section we consider the prospects for improvements in each of these three areas. \subsection{Improved measurements} We first consider the effect of reducing the current experimental uncertainty approximately 100-fold, such that all transition frequencies in the dataset are known to the 10~Hz level currently available for the 1s~--~2s interval. As an aspirational goal we also consider what could be achieved with measurements at the 1~Hz level. A detailed discussion of future experiments is outside the scope of this article. Here we briefly discuss the dominant sources of uncertainty with the 10~Hz goal in mind. The focus is on laser spectroscopy of low-$l$ states; improved measurements of circular Rydberg states are considered in \cite{Ramos2017}. The current measurement uncertainty includes contributions from both the background electromagnetic environment and atomic motion. Fundamental limits are provided by the radiative linewidth and black-body radiation (BBR). We calculated the radiative width and black-body shift and broadening of the relevant states (Appendix \ref{BBR}). At $n=9$, the radiative linewidth (which varies as $n^{-3}$) is approximately 100~kHz for the s state and roughly ten times larger for the d state. The simple lineshape when radiative broadening dominates should enable line centres to be determined with high accuracy, with recent measurements in hydrogen determining line centres to one part in 10,000 of the linewidth \cite{Beyer2017}. As described in Appendix \ref{BBR}, we find that black-body related uncertainties can be neglected even at 300~K provided that the temperature can be stabilised to 0.01~K.
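As a quick consistency check on these numbers (a sketch: the 100~kHz figure for the 9s state is taken from the text, the rest is just the $n^{-3}$ scaling, and the 1-in-$10^4$ line-splitting factor is the level achieved in Ref.~\cite{Beyer2017}):

```python
def radiative_width_khz(n, width_9s_khz=100.0):
    """Radiative linewidth of the ns state, scaled from ~100 kHz at n = 9
    using the n**-3 dependence of the width."""
    return width_9s_khz * (9.0 / n) ** 3

for n in (9, 15, 23, 40):
    width_khz = radiative_width_khz(n)
    centre_resolution_hz = width_khz * 1e3 / 1e4  # split the line to 1 part in 10^4
    print(f"n={n:2d}  width={width_khz:7.2f} kHz  resolution={centre_resolution_hz:5.2f} Hz")
```

At $n=9$ this naive estimate already corresponds to a 10~Hz line-centre resolution, and the resolution improves rapidly with $n$ as the lines narrow.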
Concerning stray fields, we note that the magnetic moment of low-$l$ states does not vary with $n$. Therefore, methods developed for precision measurements with low-$n$ states can be applied. For s-states the very small differential Zeeman shift is easily controlled at the sub-Hz level \cite{Huber1999,deBeauvoir2000}, while for d-states differential measurements such as those routinely carried out in optical atomic clocks \cite{Derevianko2011} can be used to largely eliminate magnetic field errors. A much greater challenge is presented by the DC Stark shift, which scales as $n^2$ and $n^7$ for the linear and quadratic components respectively. A detailed analysis of the effect of the DC Stark shift on the hydrogen Rydberg spectrum is provided in \cite{deBeauvoir2000}. In their experiments a stray field of $\sim$3~mV~cm$^{-1}$ was reported, leading to a final contribution to the uncertainty at the kHz level. However, other experiments have shown that stray fields can be reduced to the 30~$\upmu$V~cm$^{-1}$ level by performing electrometry with high-$n$ states ($n>100$) \cite{Frey1993,Osterwalder1999}. Drift rates as low as 2~$\upmu$V~cm$^{-1} \mathrm{h}^{-1}$ have also been measured \cite{Hogan2018}. Such measurements could be performed independently using co-electrometry with a different species \cite{Osterwalder1999,Hogan2018}. For a field of 30~$\upmu$V~cm$^{-1}$, the quadratic Stark effect is dominant for s-states, and measurements with 10~Hz uncertainty should be possible up to $n=23$, with $n\approx 40$ accessible if the stray field is determined to 1~$\upmu$V~cm$^{-1}$. For d-states, the linear Stark effect dominates, but differential measurements between different $|m|$ states should enable the first-order shift to be cancelled. The resulting uncertainty thus becomes dominated by the residual quadratic shift.
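The quoted reach in $n$ can be reproduced with a crude scaling argument (our own assumption, not a calculation from the text): after correcting for a field $F$ measured with uncertainty $\delta F$, the residual quadratic Stark uncertainty scales as $n^7 F\,\delta F$, and the scaling can be anchored at the quoted $n=23$ point where $F = \delta F = 30~\upmu$V~cm$^{-1}$.

```python
def max_n_quadratic_stark(field_uv_cm, dfield_uv_cm, n_ref=23.0,
                          f_ref=30.0, df_ref=30.0):
    """Largest n at which the quadratic-Stark uncertainty stays at the 10 Hz
    level, assuming it scales as n**7 * F * dF (fields in microvolt/cm).
    Anchored to the n = 23 figure quoted in the text (an assumption)."""
    return n_ref * (f_ref * df_ref / (field_uv_cm * dfield_uv_cm)) ** (1.0 / 7.0)

n_max = max_n_quadratic_stark(30.0, 1.0)  # stray field determined to 1 uV/cm
```

This gives $n_{\rm max}\approx 37$, consistent with the $n\approx 40$ quoted above; the sketch ignores the detailed state dependence of the polarizability.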
Considering motional effects, we note that all measurements of hydrogen energy levels to date have been performed in atomic beams, where second-order Doppler effects limit the achievable linewidth to approximately 1~MHz. A complex velocity-dependent lineshape analysis is thus required to extract the true line centre to the current 1~kHz accuracy \cite{deBeauvoir2000}. In other atomic species, using ultracold atoms has enabled a dramatic reduction in the uncertainty of optical frequency measurements. Sub-10~Hz uncertainty has been achieved with untrapped atoms \cite{Wilpers2007}, while measurements based on atoms confined in magic-wavelength traps are entirely limited by the uncertainty in the microwave-based definition of the SI second \cite{campbell2008,Kim2017}. For Rydberg states, experiments with ultracold atoms are dominated by the large level shifts due to the long-range van der Waals interaction \cite{Beguin2013}, which scales as $n^{11}$. Control over the number of atoms and over the interparticle distance and geometry is therefore essential. Confining atoms to a volume of $\sim 1\ \upmu \mathrm{m}^3$ would also largely eliminate errors due to field gradients. Therefore, a suitable platform could consist of individual hydrogen atoms confined in a single optical tweezer or a tweezer array. Single-atom arrays have now been achieved with a growing range of atomic \cite{Bergamini2004,Cooper2018,Norcia2018,Saskin2019} and even molecular \cite{Liu2018,Anderegg2019} species. Substantial hurdles exist for realising a similar system in hydrogen, not least the difficulty of laser cooling \cite{Setija1993}, which has so far proven essential for loading the optical tweezers. However, alternative approaches such as loading from a hydrogen Bose-Einstein condensate \cite{Fried1998}, careful dissociation of laser-cooled hydride molecules \cite{Lane2015} or in-trap Sisyphus cooling \cite{Saijun2011} may provide possible routes.
Here we assume that such a system may be realised, and that the contribution of the Doppler and recoil effects can be reduced below the natural linewidth of the transition by using well established two-photon spectroscopy techniques \cite{deBeauvoir2000}, possibly in combination with resolved sideband cooling \cite{Kaufman2012}. Trap-induced AC Stark shifts are eliminated by extinguishing the trap light during the spectroscopy, as is common in Rydberg experiments with tweezer arrays. Overall, we consider that a target of extending the range of states measured with an absolute uncertainty of 10~Hz or better to the full Rydberg series of s- and d-states up to a principal quantum number of $n\approx 40$ is feasible. We note that this is still some way off the spectroscopic state of the art achieved with cold trapped atoms. For circular states, 10~Hz uncertainty has already been achieved \cite{DeVries2002}; achieving a precision of 0.1~Hz in future measurements seems feasible. \subsection{Improved theory} \label{section:improvedtheory} Improved measurements at the 10~Hz level would also provide a challenge to the current theory of SM corrections to hydrogen energy levels. Uncertainties in ${\cal R}$ and $R_p$ could be removed by using the global fitting procedure described in Section \ref{section:current}. Concerning the remaining corrections due to QED and other effects, we note that the current uncertainty on the Lamb shift of the 2p$_{1/2}$ state is 21~Hz, including the uncertainty on the shift of the centroid of that level due to the hyperfine coupling \cite{Yerokhin2019}. As the theoretical error on QED and hyperfine corrections scales roughly like $1/n^3$ and has been found to be smaller for states with larger orbital angular momentum, the theoretical error for the states with $l > 0$ is already expected to be below 10~Hz for $n \geq 3$ and below 1~Hz for $n \geq 6$. The situation for s-states is less clear.
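For $l>0$ the expectation quoted above can be checked with a two-line estimate (a sketch assuming an exactly $1/n^3$ scaling of the 21~Hz uncertainty of the 2p$_{1/2}$ level):

```python
def theory_error_hz(n, err_2p_hz=21.0):
    """Rough theoretical error of an l > 0 level, scaled from the 21 Hz
    quoted for 2p_{1/2} assuming a 1/n**3 behaviour."""
    return err_2p_hz * (2.0 / n) ** 3
```

The estimate drops below 10~Hz at $n=3$ (6.2~Hz) and below 1~Hz at $n=6$ (0.8~Hz), matching the thresholds stated above.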
Current work assumes that the error on these corrections scales as $n^{-3}$, at least down to the 100~Hz level \cite{Horbatsch2016,Yerokhin2019}. Given that the current theoretical uncertainty on the energy of the 2s state is about 2~kHz, achieving an accuracy of 10~Hz would require the evaluation of QED corrections that are currently rather poorly known. Alternatively, the data may be fitted to a theoretical model which does not rely on accurate values of the Lamb shift but instead treats the theoretical error on this quantity as a fitting parameter, assuming an $n^{-3}$ scaling. We used such a model to obtain the illustrative results presented in Section~\ref{NumIll} (the method is outlined in Appendix~\ref{app:fittingproj}). However, further theoretical work would be necessary to confirm that the $n^{-3}$ scaling still holds down to errors as small as 10~Hz or less. \subsection{Numerical illustration} \label{NumIll} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{bounds_paper2_2boxes.pdf} \caption{Upper bounds on the possible value of $|g_e g_N|$ for an attractive interaction, as derived from hypothetical spectroscopic data. For comparison, the region excluded by the analysis of the current data is represented by a shaded area. (a) Solid curve and long-dashed curve: Bounds based on a set of transitions between s-states (solid curve) and between d-states (long-dashed curve), assuming a 10~Hz experimental error and a theoretical error scaling as stated in the text. Short-dashed curve and dotted curve: the same as respectively the solid curve and the long-dashed curve but assuming a 1~Hz experimental error. (b) Solid curve: Bound obtained by comparing the values of the Rydberg constant derived from the same sets of transitions between s-states and between d-states as in panel (a). Dashed curve and dotted curve: the same as the solid curve but with a further comparison with values of the Rydberg constant derived from transitions between circular states.
} \label{fig:bounds_paper2} \end{figure} Fig.~\ref{fig:bounds_paper2} illustrates the improvement in the NP bounds which could be expected from reducing the experimental error on transition frequencies to the 10~Hz level or to an aspirational 1~Hz level. Each of the bounds shown in Fig.~\ref{fig:bounds_paper2}(a) was obtained by comparing the predictions of the Standard Model to a set of hypothetical data, the latter having been generated from a model including an NP shift. The details of the calculation are given in Appendix~\ref{app:fittingproj}. The two blue curves plotted in Fig.~\ref{fig:bounds_paper2}(a) represent the bounds derived in this way from a hypothetical set of eight transitions between s-states, namely the 1s~--~2s, 2s~--~5s, 2s~--~8s, 2s~--~9s, 2s~--~11s, 2s~--~15s, 2s~--~21s and 2s~--~30s transitions. As seen from the figure, these results would improve the current spectroscopic bounds by two orders of magnitude over a wide range of values of $m_{X_0}$, assuming an experimental error of 10~Hz. Reducing the error to 1~Hz would yield a three orders of magnitude improvement. Using only transitions between states with $l>0$ would remove the uncertainty on how the theoretical error scales with $n$. In practice, an experimental value for such a transition could be obtained, e.g., by measuring the 2s to $(n,l)$ and 2s to $(n',l')$ intervals and subtracting one from the other to find the $(n,l)$ to $(n',l')$ interval. The two orange curves plotted in Fig.~\ref{fig:bounds_paper2}(a) represent the bounds derived from a set of transitions between d-states only (namely the 8d~--~9d, 8d~--~11d, 8d~--~15d, 8d~--~21d and 8d~--~30d transitions). While proceeding in this way has the advantage of avoiding the scaling issue, it has the disadvantage of taking into account only states with a relatively small NP shift.
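The error budget of such a derived interval follows from simple quadrature addition of the two independent measurement errors; a sketch (the frequencies below are placeholders, not real measurement values):

```python
import math

def derived_interval(f_2s_a_hz, sigma_a_hz, f_2s_b_hz, sigma_b_hz):
    """Frequency and error of the (n,l) to (n',l') interval obtained by
    subtracting two independently measured 2s-based intervals;
    independent errors add in quadrature."""
    return f_2s_b_hz - f_2s_a_hz, math.hypot(sigma_a_hz, sigma_b_hz)

# two hypothetical 2s-based measurements with 10 Hz errors each
f_dd, sigma_dd = derived_interval(7.7e14, 10.0, 8.1e14, 10.0)
```

A d--d interval derived this way carries a $\sqrt{2}\times 10 \approx 14$~Hz error, so the penalty for working only with $l>0$ combinations is modest.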
Correspondingly, and as is illustrated by the numerical results of Fig.~\ref{fig:bounds_paper2}(a), the bounds derived from such a set of data would be less stringent than those derived from data that include transitions from or between deeply bound states. As mentioned above, bounds on $g_eg_N$ can also be obtained by comparing the values of Rydberg constants derived from different sets of data. Assuming a 10~Hz experimental error and performing this comparison between the same sets of transitions as in Fig.~\ref{fig:bounds_paper2}(a) gives the bound represented by a solid curve in Fig.~\ref{fig:bounds_paper2}(b). This bound is slightly tighter but generally differs little from that obtained directly from the fit of the transitions between s-states. The dashed curve and dotted curve show that this bound could be lowered still further by also comparing these two values of the Rydberg constants with the value derived from transitions between circular states --- i.e., transitions of the form $(n,l=n-1) \leftrightarrow (n'=n+1,l'=n'-1)$. We consider two different sets of such transitions in Fig.~\ref{fig:bounds_paper2}(b). We took $n=10$, 15, 20, 25 or 30 and assumed an experimental error of 0.5~kHz on these transitions to calculate the bound represented by a dashed curve, whereas for the bound represented by a dotted curve we took $n=40$, 41, 42, 43 or 44 and assumed an experimental error of 0.1~Hz. Because the electronic density is concentrated further away from the nucleus when $n > 40$ than when $n \leq 30$, adding the first or the second of these two sets of transitions lowers the bound in different ranges of values of $m_{X_0}$. \section{Summary and Conclusions} \label{sec:conc} In summary, we have considered how the entire set of currently available spectroscopic data may be used to set global constraints on NP models that can be parameterized as a Yukawa-type interaction. 
Such interactions would naturally lead to so-called fifth forces, which are being searched for intensively \cite{Adelberger:2003zx,Berengut:2017zuo}. Light force mediators have been intensively tested in laboratory experiments, e.g. through the Casimir effect \cite{Bordag:2001qi}. As such searches generally rely on all atoms in a macroscopic object contributing coherently to the resulting force on a test object, they do not directly probe the existence of a force at the microscopic level. This leaves large classes of new physics models untested. For example, forces mediated via kinetic mixing between a photon and a new $Z'$ \cite{Holdom:1985ag,Foot:1991kb} can easily avoid such bounds, as the atom as a whole is not charged under the fifth force. The experiments discussed above, however, would remain sensitive to such an interaction. In addition, while other laboratory-based experiments lose sensitivity for mediator masses above $100~\mathrm{eV}$, atomic spectroscopy of hydrogen retains good sensitivity up to masses of $10~\mathrm{keV}$. Thus, to our knowledge, the projected limits presented here would provide the strongest laboratory-based constraints obtained so far for that mass range. The bounds we obtained in this work appear to be weaker than those set by astrophysical observations. However, astrophysical bounds rely on the thermal production of light force mediators in stars \cite{Grifols:1986fc,Grifols:1988fv}. Particles like chameleons avoid such production and thereby evade constraints from measurements of the energy transport in stars. Here atomic spectroscopy can help to close gaps in the landscape of Standard Model extensions and provide an independent test of the assumptions underlying the models for the evolution of stars.
We further argue that this type of laboratory-based bound is unique, since it is independent of any many-body physics effects, such as astrophysical models or the complex subtleties of isotope shifts in many-electron atoms. Global constraints of this type also reduce the sensitivity to systematic errors in individual measurements, such as those which are currently giving rise to the so-called proton radius puzzle. We therefore argue that there is a strong case for improved measurements in hydrogen based on extensions of current methods for precision optical frequency measurements in laser-cooled and trapped atoms. An important element would be extending the reach of measurements to higher principal quantum numbers, which has substantial benefits due to the dependence of the new physics shift on the shape of the wave function discussed in Section~\ref{sec:NPshifts}. The ideal platform would be trapped single atoms or arrays of atoms with a well-controlled spacing, such as an optical tweezer array, also opening the tantalising prospect of an engineered many-body quantum system with a complete SM description. An extension of this work would consider other simple atoms which have a complete SM description, such as D and He, or even positronium, muonic hydrogen or muonium. More sophisticated statistical analysis methods might enable measurements in all of these systems to be combined into a highly robust extended bound, or to create sensitive differential searches. Concrete limits could be obtained for various classes of new physics models, e.g., chameleons or kinetic mixing. \acknowledgements We gratefully acknowledge the contribution of Masters students Andrew Spiers, Natalie McAndrew-D'Souza and Suniyah Minhas to the early stages of this work. We also acknowledge helpful discussions with Martin Bauer, David Carty, Stephen Hogan and Joerg Jaeckel, and thank Savely Karshenboim and Krzysztof Pachucki for information about QED corrections.
This work made use of the Hamilton HPC Service of Durham University. MPAJ acknowledges funding from the EPSRC responsive mode grant EP/R035482/1. The project also received funding from the European Union's Horizon 2020 research and innovation programme under the EMPIR grant agreement 17FUN03-USOQS.
\section{Introduction}\label{Intro} Brightest cluster galaxies (BCGs) are the most massive and most luminous galaxies in the universe today. BCGs are so massive ($M_{star}>10^{11} M_{\odot}$) that their formation and evolution are closely tied to the large scale structure of the universe (Conroy et al. 2007). Semi-analytical models show how BCGs are formed through complex hierarchical merging of many small galaxies and originate within the densest dark matter halos of primordial density fluctuations. BCGs reside only in overdense regions of the Universe, such as galaxy clusters and groups, where merging occurs at a high rate over cosmic time (de Lucia \& Blaizot 2007). It is precisely the accretion of numerous stellar systems that gives the BCGs their apparently homogeneous properties. For instance, their total luminosities are relatively constant and can be used as standard candles (Lauer \& Postman 1992). For several decades the luminosity profiles of elliptical galaxies were modeled with the empirical $R^{1/4}$ de Vaucouleurs law (de Vaucouleurs 1948). However, Lugger (1984) and Schombert (1986) showed that most elliptical galaxies have a flux excess at large radii with respect to the $R^{1/4}$ model. Schombert (1987) modelled the BCG light profiles with a power law rather than the de Vaucouleurs law, underscoring the different nature of BCGs and standard elliptical galaxies. A model used by virtually every author in recent years to fit the light profiles of a wide range of stellar systems is the generalization of the de Vaucouleurs law introduced by S\'ersic (1963, 1968). The S\'ersic model in the form $R^{1/n}$ can describe both bulge and disk components using only three free parameters ($\mu_{e}$, $r_{e}$, and $n_s$) instead of four ($\mu_{e}$, $r_{e}$, $\mu_0$, and $r_0$) (see Section 3 and the comprehensive review of Graham and Driver 2005).
More recently, several papers have suggested that a simple S\'ersic $R^{1/n}$ law does not properly model the luminosity profile of some elliptical (usually cD) galaxies. Gonzalez et al. (2003, 2005) found that the best fit to the light profiles of 30 BCGs was a double $R^{1/4}$ de Vaucouleurs profile. Seigar et al. (2007) studied the light profiles of five cD galaxies and showed that a S\'ersic plus an exponential function is necessary to accurately reproduce the inner and outer components present in their surface brightness profiles. Donzelli et al. (2007) estimated that roughly half of 82 elliptical galaxies belonging to the 3CR radio catalog also require a S\'ersic + exponential model to properly fit their light profiles. Using numerical simulations, Hopkins et al. (2009) propose that dissipational mergers are at the origin of the double-component light profiles in the cores of elliptical galaxies. According to their models, violent relaxation of the stars whose parent galaxies participate in the merger is responsible for the creation of the outer component, while a central starburst gives rise to the inner component. We use a homogeneous and uniquely large sample of ground-based imaging of Abell clusters to carefully examine the luminosity profiles of 430 BCGs and determine the best-fitting function and structural parameters. This is key to properly constraining the dynamical models and merging history of these galaxies. The paper is organized as follows. In Section 2 we present the observations and reductions, while in Section 3 we describe the data processing. The light-profile fitting procedure and structural parameters are discussed in Section 4, the results in Section 5, and the conclusions in Section 6. \section{Observations}\label{observations} The BCG images used in this work were provided by M. Postman (STScI), who kindly gave us access to the raw data.
They were obtained under photometric conditions using the KPNO 2.1 m and 4 m telescopes and the CTIO 1.5 m telescope between 1989 November and 1995 April, over a total of 13 observing runs (see Table 1). Five of these runs are described in more detail in Postman \& Lauer (1995). Briefly, all the images were acquired in the Kron-Cousins $R_c$ band and typically have exposure times of 200-600 s. During these runs seeing conditions were very good to fair, namely FWHM = $1''$--$2''$, and nights were photometric.\\ In order to flat-field the images, a series of dome flats was obtained each night. This allowed for flat fielding with a typical accuracy better than 0.5\% of the sky level. Photometric calibration was obtained by observing 10-15 Landolt (1983) standard stars per night. This also enabled us to calculate extinction coefficients and to check the zero point on each night of the observing runs. The overall photometric accuracy was better than 0.02 mag, much smaller than the typical errors of the BCG photometric parameters, which are more sensitive to background subtraction and to the fitting models. One of the key points of this homogeneous sample is that approximately 50 BCGs were observed in common between the different runs. Many of these BCGs were actually observed in five or more runs. This not only allowed us to verify and compare the reductions for all observing runs, but also to improve the accuracy of the luminosity profiles, as discussed in the next section. The rms scatter for the integrated magnitudes of the galaxies is 0.03 mag, while that for the luminosity profiles at $\mu_{R} = 22.5$ $mag$ $arcsec^{-2}$ is 0.11 $mag$ $arcsec^{-2}$. \section{Data Processing and Sky Level}\label{DP} All images were processed following the standard recipes: bias and flat-field corrections using IRAF routines. After this process we carefully inspect the images in order to determine the best method to subtract the sky background.
Gauging the sky is a crucial step, since sky subtraction has a significant influence on the faint end of the luminosity profiles and therefore on the structural parameters we derive. In most cases a two-dimensional first-degree polynomial was sufficient to give an accurate fit to the sky, and we used the residuals distribution to estimate the uncertainty of the sky level $\sigma_{sky}$. The importance of sky cleaning is discussed in detail in Coenda et al. (2005). Similar tests were made to measure the effect of seeing on the luminosity profiles. The effects of seeing dictate that the minimum radius for a suitable fit to the luminosity profile is $r=1.5\times$FWHM; this is particularly true for large galaxies, i.e., those with apparent radius greater than $\sim12\times$FWHM (Coenda et al. 2005). As we show in the next section, BCGs are relatively bright and extended, and in most cases the apparent radius of the galaxy was greater than $20\arcsec$. \section{Luminosity Profiles and Profile Fitting}\label{PF} We use the {\sc ellipse} routine (Jedrzejewski 1987) within the Space Telescope Science Analysis System (STSDAS) to extract the luminosity profiles of the BCGs. We apply this routine to the processed, sky-subtracted images. In many cases galaxy overlapping is an issue due to the crowded fields around BCGs. To solve this, we apply the technique described in Coenda et al. (2005), which consists of masking the overlapping regions before profile extraction. We then obtain the luminosity profile and structural parameters (center coordinates, ellipticity, and position angle of the isophotes) and construct a model BCG that is subtracted from the original image. The residual image is then used to extract the luminosity profile of the overlapping galaxies. The process is repeated several times until the profile of the target galaxy converges.
Isophote fitting was performed down to a count level of 2$\sigma_{sky}$; i.e., the fitting procedure was stopped when the isophote level was around twice the background dispersion (pixel-to-pixel variance), which in our case corresponds to surface magnitudes between $\mu_{R} \sim$ 23.5 - 24.5 $mag$ $arcsec^{-2}$, depending on the observational run during which the cluster was observed. For each cluster, we usually obtain the luminosity profiles for the three brightest galaxies in the field. This preliminary selection is made by eye. In those cases where the selection is not obvious, we also obtain additional luminosity profiles, i.e., for the five brightest galaxies. The final BCG selection is made through the galaxy metric luminosity, that is, the luminosity enclosed within a radius of 14.5 kpc; we also used galaxy redshifts to ensure cluster membership. Redshift data were obtained from the NASA/IPAC Extragalactic Database (NED). As discussed in the introduction, there is a wide variety of functions to perform luminosity profile fitting. We adopted the S\'ersic profile $R^\beta$, where the concentration parameter $\beta = 1/n$ is the inverse of the S\'ersic index (S\'ersic 1968): \begin{equation} \label{Ser} I(r)=I_{e} \exp\Big\{-b_n\Big[\Big(\frac{r}{r_{e}}\Big)^{\beta} - 1\Big]\Big\}. \end{equation} In this equation $I_{e}$ is the intensity at $r = r_{e}$, the radius that encloses half of the total luminosity (also known as the effective radius or half-light radius). In this context $b_n$ can be approximated by $b_n \approx 2n-0.33$ (Caon et al. 1993). We use the {\sc nfit1d} routine within STSDAS to find the coefficients that best fit the light profiles of each galaxy. This task uses a $\chi^2$ minimization scheme to fit non-linear functions to the light profile tables we obtained with {\sc ellipse} (Schombert \& Bothun 1987).
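The single S\'ersic fit described above can be illustrated with a short Python sketch using scipy's least-squares machinery (for the actual analysis we used {\sc nfit1d}; the synthetic profile, noise level, and starting values below are invented purely for the example):

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic_mu(r, mu_e, r_e, n):
    """Sersic profile in surface-magnitude form (Eq. 1 with mu = -2.5 log10 I)."""
    b_n = 2.0 * n - 0.33  # Caon et al. (1993) approximation for b_n
    return mu_e + 2.5 * b_n / np.log(10.0) * ((r / r_e) ** (1.0 / n) - 1.0)

# synthetic "observed" profile: a de Vaucouleurs-like galaxy with small noise
rng = np.random.default_rng(0)
r = np.linspace(2.0, 100.0, 60)  # arcsec; inner seeing-affected points excluded
mu_obs = sersic_mu(r, 21.0, 20.0, 4.0) + rng.normal(0.0, 0.03, r.size)

# chi^2 (least-squares) fit, analogous in spirit to nfit1d
popt, pcov = curve_fit(sersic_mu, r, mu_obs, p0=[20.0, 10.0, 2.0])
mu_e_fit, r_e_fit, n_fit = popt
```

Fitting in magnitude rather than intensity space keeps the dynamic range of the residuals manageable, which is also why profile fitting is customarily done on $\mu(r)$.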
The fitting procedure is carried out only in the portion of the galaxy surface brightness profile where the signal-to-noise ratio is greater than 3 ($S/N > 3$). We did this in order to avoid the regions at the faint end of the luminosity profiles, where errors are large. Moreover, we have also excluded the inner 3"-4" of the luminosity profiles to avoid the blurring effects of seeing (see the test described in Coenda et al. 2005). Errors on the structural parameters were calculated following the method described by Coenda et al. (2005). Briefly, this technique consists of creating test images with model galaxies that have known luminosity profile parameters. We then artificially add and subtract a constant value corresponding to $\sigma_{sky}$ to those images. Finally, we extract and refit the new luminosity profiles as explained above. These newly obtained parameters give us the respective upper and lower limits. To a first approximation, a single S\'ersic model provides a good fit to the light profiles of our sample of BCGs, as shown by Graham et al. (1996). However, for almost half of the sample we noticed that a single S\'ersic function fails to properly reproduce the BCG surface brightness around $\sim22.0$ $mag$ $arcsec^{-2}$. Note that this case is very similar to that presented in La Barbera et al. (2008, their Figure 18). It is evident that the S\'ersic function does not properly fit the luminosity profiles of these elliptical galaxies in the inner 4$\arcsec$, where the residuals reach $\sim0.3$ $mag$ $arcsec^{-2}$. It is not necessary to have very deep luminosity profiles, as in the case of Seigar et al. (2007), to realize that for certain galaxies, even in regions of bright surface luminosity, a single S\'ersic model cannot account for the concave shape of the luminosity profile.
For these galaxies, the light profiles were best fitted by adding to the S\'ersic model of Equation (1) an outer exponential function (Freeman 1970): \begin{equation} \label{exp} I(r)= I_{0} \exp\Big(-\frac{r}{r_{0}}\Big). \end{equation} In this equation $I_{0}$ is the central intensity and $r_{0}$ is the scale length. The inclusion of this equation in the fitting function does not necessarily mean that the galaxy has a disk component in the usual sense of a rotation-supported stellar system. We have chosen the exponential function since it is the simplest function that accounts for the "extra-light" observed in the above mentioned galaxies. It is worth mentioning that we also tried a second S\'ersic function, since it has three degrees of freedom instead of two. However, in terms of the $\chi^2$ coefficient it is not better than the S\'ersic plus exponential fit. Figure 1 shows a good example, the case of BCG A0690, where a single S\'ersic function clearly cannot account for the concavity of the luminosity profile. Even though the fit becomes adequate at the faint end of the profile, the model cannot reproduce the surface magnitude in the interval between 7 and 15 kpc, which corresponds to surface magnitudes in the range of 21-22 $mag$ $arcsec^{-2}$. Error bars in this region are approximately the size of the squares. Figure 2, on the other hand, shows a much better fit due to the inclusion of the exponential component (dashed line) in the fitting model. The 21-22 $mag$ $arcsec^{-2}$ region is now in excellent agreement with the model, and the fitting functions also properly describe the faint end of the luminosity profile. In fact, the reduced $\chi^2$ coefficient we obtain with a S\'ersic + exponential model is about one order of magnitude smaller than that obtained with the single S\'ersic fit.
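The two-component S\'ersic + exponential decomposition can likewise be sketched in a few lines of Python; the sum of Equations (1) and (2) is fitted in magnitude space, and the toy profile, noise, and starting values are again invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic_I(r, I_e, r_e, n):
    # Sersic intensity profile (Eq. 1), with b_n ~ 2n - 0.33
    b_n = 2.0 * n - 0.33
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def exp_I(r, I_0, r_0):
    # exponential profile (Eq. 2)
    return I_0 * np.exp(-r / r_0)

def composite_mu(r, I_e, r_e, n, I_0, r_0):
    # surface brightness of the sum of both components (arbitrary zero point)
    return -2.5 * np.log10(sersic_I(r, I_e, r_e, n) + exp_I(r, I_0, r_0))

rng = np.random.default_rng(0)
r = np.linspace(2.0, 120.0, 100)  # kpc
mu_obs = composite_mu(r, 100.0, 5.0, 3.0, 30.0, 20.0) + rng.normal(0.0, 0.03, r.size)

# five-parameter least-squares fit; reasonable starting values matter here
popt, _ = curve_fit(composite_mu, r, mu_obs, p0=[80.0, 4.0, 2.5, 20.0, 15.0])
I_e_fit, r_e_fit, n_fit, I_0_fit, r_0_fit = popt
```

With five free parameters the inner/outer decomposition can trade off between components, which is why good initial guesses (and, in our real fits, visual inspection of the residuals) are important.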
As a general rule, for most of those cases that were initially fitted with a single S\'ersic function and in which $n > 8$ and $r_{e} > 300$ kpc, it is necessary to include the exponential component to obtain a proper fit. Intensity parameters are then converted into surface brightness, expressed in $mag$ $arcsec^{-2}$, through $\mu=-2.5\log(I)$ plus the photometric zero point, while units of $r_e$ and $r_0$ are converted to kpc. Errors for $r_e$ and $r_0$ are smaller than 15\%, while for $\mu_e$ and $\mu_0$ errors are below 0.20 $mag$ $arcsec^{-2}$. Total luminosities of the S\'ersic and exponential components are finally computed using the derived photometric parameters and integrating separately both components of Equations (1) and (2) as follows: \begin{equation} \label{luminosidad} L=\int_{0}^{\infty}I(r)\,2\pi r\,dr, \end{equation} which yields \begin{equation} L_{Sersic}=2\pi I_{e}r_{e}^{2}\,\frac{\exp(b_n)}{\beta\, b_n^{2/\beta}}\, \Gamma(2/\beta) \end{equation} for the S\'ersic component and \begin{equation} L_{exp}=2\pi I_{0}r_{0}^{2} \end{equation} for the exponential component, where $\Gamma(2/\beta)$ is the gamma function and $b_n$ is the constant of Equation (1). Total apparent magnitudes are then converted into absolute magnitudes. Throughout this paper we assume a Hubble constant $H_0$ = 70 $km$ $s^{-1}$ $Mpc^{-1}$ together with $\Omega_M$ = 0.27 and $\Omega_{\Lambda}$ = 0.73.\\ \section{Results and Discussion}\label{RD} Table 2 lists the photometric parameters obtained through the fitting procedure described in Section 4 for all BCGs of the sample. Columns 1-6 list the Abell BCG name, the S\'ersic parameters $\mu_{e}$, $r_{e}$, $n$, and the exponential parameters $\mu_0$ and $r_0$. Columns 7-10 give the absolute magnitude of the S\'ersic component, the absolute magnitude of the exponential component, the total absolute magnitude of the BCG, and the S\'ersic to exponential component light ratio.
Column 11 contains the logarithmic slope of the metric luminosity, or $\alpha$ parameter, which is defined as $\alpha = d\log(L_m)/d\log(r_m)$, where $L_m$ is the total BCG luminosity within a circular aperture of radius $r_m$ centered on the BCG nucleus. Following Postman \& Lauer (1995) we have calculated this parameter at $r$ = 14.5 kpc. Columns 12-15 list the inner ellipticity (measured at 10$\arcsec$), the outer ellipticity (measured at $\sim23-24$ $mag$ $arcsec^{-2}$), and the inner and outer position angles of the isophotes where the ellipticities are measured. Position angles are measured from north to east; typical errors are $\sim5^o$ for position angles and 0.06 for ellipticities. Finally, column 16 lists the metric absolute magnitude, also calculated at $r$ = 14.5 kpc. The data above show that 225 out of 430 BCGs, or 52\% of the sample, have a single S\'ersic luminosity profile, while the remaining 205 BCGs (48\%) need a double S\'ersic + exponential component model to properly fit their luminosity profiles. We note that we find 27 galaxies ($\sim6\%$ of the whole sample) that have $n < 1.5$ for the inner S\'ersic component, which is nearer to an exponential than to a de Vaucouleurs profile ($n \sim 4$). Moreover, all but three have double component luminosity profiles. This is particularly interesting since it has been suggested that BCGs usually have high S\'ersic indices (Graham et al. 1996). However, Seigar et al. (2007) observe an inner exponential behavior in 3 out of 5 galaxies of their sample, suggesting that this may be more common in cD galaxies than previously thought. A visual inspection of these galaxies reveals that they have high ellipticities. We calculated an average outer ellipticity $e = 0.32\pm0.09$, which is slightly higher than the average found for double profile BCGs (see the next section).
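The $\alpha$ parameter defined above can be evaluated numerically for any model profile from its growth curve; a sketch for an assumed pure-S\'ersic toy profile (all parameter values invented for the example) is:

```python
import numpy as np
from scipy.integrate import quad

def sersic_I(r, I_e=1.0, r_e=10.0, n=4.0):
    """Toy Sersic intensity profile (Eq. 1); r and r_e in kpc."""
    b_n = 2.0 * n - 0.33
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def L_m(r_m):
    """Metric luminosity: light enclosed within projected radius r_m."""
    val, _ = quad(lambda r: 2.0 * np.pi * r * sersic_I(r), 0.0, r_m)
    return val

def alpha(r_m, h=0.01):
    """alpha = dlog(L_m)/dlog(r_m), via a centred finite difference in log space."""
    lo, hi = r_m * (1.0 - h), r_m * (1.0 + h)
    return (np.log(L_m(hi)) - np.log(L_m(lo))) / (np.log(hi) - np.log(lo))

a_14p5 = alpha(14.5)  # slope of the growth curve at the 14.5 kpc metric radius
```

For any decreasing profile $\alpha$ lies between 0 and 2, and values around 0.5-0.7, as in Table 2, measure how fast the aperture luminosity is still growing at 14.5 kpc.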
In general terms we have also noticed that the inner components of the double profile BCGs have effective radii $r_e \sim 1-10$ kpc and S\'ersic indices $n \sim1-6$, with averages $r_e = 5 \pm4$ kpc and $n = 3.7 \pm1.5$, respectively. These values are quite similar to those reported by Gonzalez et al. (2003) in their preliminary paper for a sample of 31 BCGs. However, these authors use a different approach to fit the luminosity profiles: two S\'ersic functions instead of our S\'ersic + exponential model. \subsection{Single Profile BCGs versus Double Profile BCGs} A question that arises from our analysis is whether single profile BCGs actually differ in morphology from double profile BCGs. The models we apply to the light profiles make no assumptions about galaxy morphology. Are BCGs with single and double profiles of a different nature? If so, is this difference environmental or intrinsic? In order to answer these questions we carried out a series of tests in which we explored BCG properties together with the global cluster properties. One of them is the Kormendy relation (KR; Kormendy 1977), which is presented in Figure 3. This is an empirical scaling relation between surface brightness $\mu_e$ and effective radius $r_e$, and it represents a projection of the Fundamental Plane (Djorgovski \& Davis 1987). For single profile BCGs both $\mu_e$ and $r_e$ are directly obtained from the fitted profile. However, in the case of double profile BCGs, we calculate these parameters from the double profile, i.e., using the sum of the S\'ersic and exponential profiles. A linear regression applied to the whole sample gives\\ \begin{center} $\mu_e$ = 18.72($\pm0.06$) + 3.13($\pm0.05$) log($r_e$/kpc).\\ \end{center} The slope of the KR obtained for all BCGs is $a_{BCG} = 3.13 \pm 0.05$, which is similar to the value obtained by Oegerle \& Hoessel (1991) for a sample of 43 BCGs (i.e., $a_{BCG} = 3.12 \pm 0.14$). However, Bildfell et al.
(2008) obtain $a_{BCG} = 3.44 \pm 0.13$ for a sample of BCGs selected from 48 X-ray luminous clusters, which is considerably steeper than our value and than that of "normal" ellipticals, $a_{ellip} = 3.02 \pm 0.14$ (Oegerle \& Hoessel 1991). Nevertheless, when we apply the same analysis to single profile and double profile BCGs separately, we obtain for single profile BCGs \begin{center} $\mu_e$ = 18.65($\pm0.07$) + 3.29($\pm0.06$) log($r_e$/kpc)\\ \end{center} and for double profile BCGs \begin{center} $\mu_e$ = 19.03($\pm0.10$) + 2.79($\pm0.08$) log($r_e$/kpc).\\ \end{center} The different slopes suggest that the formation timescale of single profile BCGs could differ from that of their double profile counterparts (von der Linden et al. 2007). It is interesting to note that these values are calculated by integrating the galaxy luminosity profiles out to infinity. If we consider a finite radius instead, total luminosities change, and therefore both $r_e$ and $\mu_e$ also change. We have therefore also calculated the KR for both subsamples considering different galaxy radii ($r = 100, 200, 300$ kpc). We observe that the slope of the KR flattens for smaller radii and tends to a similar value for both subsamples ($a\sim2.6$ at $r = 100$ kpc). This effect could easily explain the differences found in the size-luminosity relation between von der Linden et al. (2007) on the one hand and Lauer et al. (2007) and Bernardi et al. (2007) on the other. von der Linden et al. (2007) define $r_e$ by integrating luminosity profiles up to the $\mu = 23$ $mag$ $arcsec^{-2}$ isophote, while Lauer et al. (2007) and Bernardi et al. (2007) integrate the luminosity profiles out to infinity. We also find that single profile BCGs have a median total absolute magnitude $M_{T, single} = -23.8 \pm 0.7$, while double profile BCGs have $M_{T, double} = -24.0 \pm 0.5$.
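The linear regressions above are ordinary least-squares fits of $\mu_e$ against $\log(r_e)$; a minimal sketch with synthetic points scattered about the quoted whole-sample relation (the scatter and sample size are invented for the example) is:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical (log r_e, mu_e) points around the whole-sample Kormendy relation
log_re = rng.uniform(0.5, 2.5, 200)                       # log10 of r_e in kpc
mu_e = 18.72 + 3.13 * log_re + rng.normal(0.0, 0.3, 200)  # mag arcsec^-2

# first-degree polynomial fit returns (slope, intercept)
slope, intercept = np.polyfit(log_re, mu_e, 1)
```

The fit recovers the input slope and zero point to within the statistical scatter, which is the sense in which the quoted $\pm$ uncertainties on the KR coefficients should be read.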
The Kolmogorov-Smirnov (K-S) test applied to these data indicates that the $M_T$ distributions for single profile and double profile BCGs are statistically different at the 99.4\% confidence level. Figure 4 shows the total absolute magnitude distributions for single and double profile BCGs. To verify our findings and to rule out a possible dependence on our fitting models, we calculated the total luminosity within different diaphragms with radii ranging from 5 to 70 kpc. We find that the integrated luminosities for both subsamples are indistinguishable up to $r = 15$ kpc. This can be seen in Figure 5, where we plot the absolute integrated magnitude versus the radius of the circular diaphragm expressed in kpc. The vertical line indicates $r = 14.5$ kpc, where the metric luminosity and the $\alpha$ parameter are calculated. Average integrated luminosities beyond 20 kpc are $\sim 0.2$ mag brighter for double profile BCGs. We have applied the K-S test to both subsamples, and the results indicate that they do not statistically differ for $r =$ 5, 10, and 14.5 kpc, while for larger radii the integrated luminosity distributions are statistically different at the 99\% confidence level. We highlight that $r = 20$ kpc is close to the value at which the S\'ersic component equals the exponential component (see the case of A0690 in Figure 2). From the data tabulated in Table 2, it is straightforward to compute the radius where $I_{Sersic}$ = $I_{exp}$ for each of the double profile BCGs. Averaging these values we find $<r> = 13 \pm 5$ kpc at $<\mu> = 22.5 \pm 0.7$ $mag$ $arcsec^{-2}$. In other words, this result corroborates that the extra-light observed in double profile BCGs originates in the intermediate regions of the galaxies and not in the inner regions. The same conclusion can also be derived from Figure 6, where we plot the sum of the luminosity profiles corresponding to the single and double profile BCGs. Prior to the sum, individual profiles are normalized to their effective radii.
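The two-sample K-S comparisons used throughout this section can be reproduced with scipy; the samples below are drawn from normal distributions that merely mimic the quoted medians, scatters, and subsample sizes, purely as an illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# mock magnitude samples with the medians/scatters and sizes quoted in the text
M_single = rng.normal(-23.8, 0.7, 225)   # single profile BCGs
M_double = rng.normal(-24.0, 0.5, 205)   # double profile BCGs

stat, p_value = ks_2samp(M_single, M_double)
# a small p_value would indicate statistically different distributions
```

The confidence level quoted in the text corresponds to $1 - p$; for the real data the test rejects the hypothesis that both magnitude distributions are drawn from the same parent population.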
Note that the extra-light of the double profile BCGs becomes apparent in the region 1 $ < r/r_e < $ 5, which roughly corresponds to $\sim15-75$ kpc. \subsection{Ellipticities and Isophote Twisting} We have also explored other photometric parameters which are not directly related to the surface brightness profile fitting functions. Figure 7 shows the ellipticity distributions for single and double profile BCGs, and Figure 8 shows the absolute value of the inner minus outer ellipticity distributions for the same subsamples. Outer ellipticities are measured at $\mu \sim$ 23-24 $mag$ $arcsec^{-2}$, which is approximately half a magnitude brighter than our limiting surface magnitudes. Similarly, the inner ellipticity is measured at $r \sim 4-5\arcsec$, which corresponds to 3-4 times the average seeing. Figure 7 shows that double profile BCGs have higher ellipticities than single profile BCGs. We obtain an average ellipticity $<e_{double}> = 0.30 \pm 0.10$ for double profile BCGs and $<e_{single}> = 0.26 \pm 0.11$ for single profile BCGs. A K-S test applied to these data indicates that the ellipticity distributions for single and double profile BCGs are statistically different at a 98.3\% confidence level. Similar results are obtained for the inner minus outer ellipticity of these subsamples. We obtain an average $< \Delta e_{double} > = 0.15 \pm 0.10$ for the double profile BCGs, while for single profile BCGs we have $< \Delta e_{single} > = 0.10 \pm 0.09$. Again, the K-S test indicates different distributions at the 99.9\% confidence level. The logarithmic slope of the metric luminosity (calculated at $r = 14.5$ kpc), $\alpha$, is also higher in double profile BCGs ($< \alpha_{double} > = 0.65\pm0.12$) than in single profile BCGs ($< \alpha_{single} > = 0.59\pm0.14$). Figure 9 presents the distributions of $\alpha$ for single and double profile BCGs.
A K-S test applied to these data establishes that the distributions are statistically different at a 99.9\% confidence level. The presence of isophote twisting was also explored. We calculated the outer minus inner position angle of the isophotes for those galaxies with ellipticities greater than 0.15, since position angle errors associated with rounder isophotes are large. We found similar values for both single and double profile BCGs ($< \Delta pa > = 8^o \pm9^o$). \subsection{S\'ersic + Exponential or Exponential + S\'ersic?} By combining a large set of hydrodynamical merger simulations spanning a broad range of masses, Hopkins et al. (2009) show an alternative way to separate luminosity profiles into an inner starburst component and an outer pre-starburst component for "cusp" ellipticals, which are formed via gas-rich mergers. These authors show that dissipational mergers give rise to two-component luminosity profiles which can be accounted for by an exponential function (inner component) plus a S\'ersic model (outer component). The exponential function accounts for the extra-light that was formed in a compact central starburst and makes the inner light profile of the galaxy deviate from a single S\'ersic in the galaxy core. The outer component was formed by violent relaxation of the stars already present in the precursor galaxies. We fitted an exponential (inner) + S\'ersic (outer) model, in the order proposed by Hopkins et al. (2009), to our double component BCGs. As an example, we show the results of this fitting for the BCG A0690 in Figure 10. The agreement between model and measurements is excellent, and the Hopkins model properly accounts for the luminosity profile. We also compared the rms and $\chi^2$ values with those obtained with our original S\'ersic + exponential model and find that both models are equally good. In other words, our approach and the Hopkins model are, from a mathematical point of view, equivalent.
The results of fitting exponential + S\'ersic models to our double profile galaxies are summarized in Table 3, which lists the same parameters as columns 1-10 of Table 2. In this case the S\'ersic parameters $\mu_{e}$, $r_{e}$, and $n$ correspond to the outer component, while the exponential parameters $\mu_0$ and $r_0$ correspond to the inner component. The $\chi^2$ values indicate that this fit is as good as the inner S\'ersic + outer exponential form. However, for some galaxies we invariably obtained unrealistic ($\geq 300$ kpc) effective radii using the Hopkins model; this is not the case for the S\'ersic + exponential approach. Given that our results suggest that the extra-light comes from the intermediate regions (see also the next section), we believe that the S\'ersic + exponential profile fitting is the appropriate choice for the BCGs analyzed in the present work. \subsection{Extra-light and D-cD Envelopes} Around $\sim20\%$ of giant elliptical galaxies have extensive, low-luminosity envelopes. These galaxies are known as D type, and those with the largest envelopes are denominated cD galaxies (Mackie 1992). The envelopes, which are seen as deviations from the de Vaucouleurs profile, are quite faint, occurring below 24 $mag$ $arcsec^{-2}$ in the $V$ band, and they extend well beyond 100 kpc in projected size. Thus, only a few giant ellipticals have confirmed envelopes. We explore whether the extra-light found at intermediate radii is related to a possible cD envelope. From the works of Kemp et al. (2007), Seigar et al. (2007), Mackie (1992), and Schombert (1986, 1987, 1988), we have found 24 BCGs, cataloged as cD galaxies with confirmed envelopes, in common with our sample. These are: A85, A150, A151, A193, A262, A358, A539, A779, A1177, A1238, A1767, A1795, A1809, A1904, A1913, A2028, A2147, A2162, A2199, A2366, A2572, A2589, A2634, and A2670. Table 2 shows that 19 (79\%) of these 24 galaxies are double profile BCGs, while 5 (21\%) are single profile BCGs.
Note that three galaxies (A151, A1767, and A2589) of those five single profile BCGs have $r_e > 100$ kpc and $n > 6$. These parameters are very close (see Section 4) to the limit ($r_e > 300$ kpc, $n > 8$) beyond which it is necessary to include an additional exponential profile to obtain a reasonable fit. On the other hand, one must also note that a visual classification as cD does not necessarily imply the presence of an envelope. Schombert (1986) and Seigar et al.\ (2007) cataloged A496, A505, A1691, A2029, A2052, A2107, A2197, and A2666 as cDs without envelopes. For all those galaxies, except A2197, our surface profile modeling is consistent with just one component. These results strongly suggest that the extra-light found in double component BCGs at intermediate radii is related to the faint envelope. Moreover, they indicate that this component is not confined to the outskirts of the parent galaxy: galaxy halos, generally considered an outer component of galaxy structure, appear to originate in the inner regions of BCGs. \subsection{BCGs Morphologies and IR Emission} Although a visual inspection of our BCGs suggests that they all are early-type galaxies, we probe the possibility that the differences we observe in the profiles are due to actual differences in morphology. The Galaxy Zoo project (http://zoo1.galaxyzoo.org/) provides morphological types for a large sample of galaxies observed by the Sloan Digital Sky Survey (SDSS; Lintott et al. 2011). In this catalog 66 of the BCGs studied here have a morphological classification: 31 (47\%) are single profile BCGs, while 35 (53\%) are double profile BCGs. The Lintott et al. (2011) catalog gives the probability that a particular galaxy is an early-type galaxy or a late-type (spiral) galaxy. We find that both single and double profile BCGs have the same probability of being classified as elliptical galaxies in the Galaxy Zoo catalog.
However, of the 31 single profile BCGs with morphological classification only 5 (16\%) belong to the 'unknown' category, while for the 35 double profile BCGs in the catalog this number rises to 9 (26\%). It is also interesting to note that a visual inspection of these nine double profile BCGs with unknown morphology reveals that they are mostly interacting galaxies with two or three near companions, while the five single profile BCGs with unknown classification appear to us as normal ellipticals. We have scrutinized the infrared emission of a subsample of our BCGs using data from Quillen et al. (2008). They report on an imaging survey with the \textit{Spitzer Space Telescope} of 62 BCGs with optical line emission. We have 12 BCGs in common with Quillen et al. (2008), 6 having single component luminosity profiles and 6 having double component luminosity profiles. Analysis of the 24-8 $\mu$m flux ratios shows that only one (17\%) of the single profile BCGs (A2052) has an infrared excess, while for double profile BCGs this number increases to 4 (67\%). Infrared excess is a star formation signature. In fact, O'Dea et al. (2010), in their study of seven BCGs using \textit{Hubble Space Telescope} ($HST$) ultraviolet and \textit{Spitzer} infrared data, found that all these galaxies have extended UV continuum and Ly$\alpha$ emission as well as an infrared excess. Based on their findings, O'Dea et al. (2010) confirm that the BCGs they study are actively forming stars. Moreover, they suggest that the IR excess is indeed associated with star formation, and they also confirm that the FUV continuum emission extends over a region $\sim$ 7-28 kpc in size. Although these results cover only a few BCGs in our sample, they suggest that the "extra-light" observed in the double profile BCGs indicates active star formation in the intermediate regions of these galaxies.
\subsection{Global Cluster Properties} In this section we compare BCG properties to those of the host cluster, such as the cluster X-ray luminosity, the projected distance of the BCG from the center of the X-ray emission, and the BCG position angle with respect to that of the cluster. We have used data taken from Ledlow et al. (2003) to determine the offset in kpc between the X-ray peak and the optical position of the BCG. X-ray cluster luminosities were taken from Sadat et al. (2004), while the position angles of the clusters were obtained from Plionis (1994), Rhee \& Katgert (1987), and Binggeli (1982). In this case we only selected those clusters with ellipticities $>$ 0.15, since for smaller ellipticities position angles have large errors.\\ Single and double profile BCGs have similar orientations relative to the whole cluster and similar X-ray luminosity distributions: we did not observe any difference between the single profile and double profile BCG samples with respect to the cluster position angle or the X-ray cluster luminosity. However, Bildfell et al. (2008) report that brighter BCGs are located closer to the X-ray peak emission. In our case, considering that on average double profile BCGs are brighter than single profile BCGs, we should observe larger offsets for single profile BCGs. Nevertheless, we find no significant differences between single profile and double profile BCG offsets with respect to the X-ray center of the cluster. Moreover, we do not find any correlation between the total absolute magnitude and the X-ray offset. Figure 11 shows that there is no obvious trend between the total absolute magnitude and the X-ray offset for either single profile or double profile BCGs. \section{Conclusions}\label{Conclusions} We have established the existence of two subpopulations of BCGs based on their luminosity profiles.
We analyze a uniquely large sample of 430 BCGs and find that 48\% of these galaxies have a light profile that deviates from the standard single S\'ersic model. The luminosity profiles of these galaxies are in fact better described by a double component model consisting of an inner S\'ersic profile and an outer exponential component. The necessity of an outer exponential component conveys the presence of extra-light at intermediate radii, corresponding to surface magnitudes $\sim 22$ $mag$ $arcsec^{-2}$. We have found strong evidence from a subsample of 24 BCGs that the extra-light is closely related to the presence of a faint envelope. Similarly, from another subsample of 12 BCGs we also found evidence linking extra-light to star formation. This work highlights the need to cover a large spatial scale when deriving the structural parameters of large galaxies. Accurate parameters can only be obtained when the entire galaxy is considered, and this often requires the creation of composite light profiles using data from different telescopes, as clearly illustrated by Kormendy et al. (2009). $HST$ detectors provide a superb spatial resolution that has been fundamental for the study of deviations from the S\'ersic law in the inner regions of galaxies, i.e., cusps and evacuated cores (Ferrarese et al. 2006). However, $HST$ detectors do not provide the field of view necessary to reliably derive the structural parameters of galaxies with a light profile that deviates from a single S\'ersic law at large radii. \section{Acknowledgments} We are thankful to M. Postman (STScI) for giving us access to the data used in this study. We also wish to thank the anonymous referee for useful comments which helped to clarify and strengthen this paper.
This research has made use of the NASA Astrophysics Data System Bibliographic Services (ADS) and the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.\\ This work has been partially supported with grants from Consejo Nacional de Investigaciones Cient\'\i ficas y T\'ecnicas de la Rep\'ublica Argentina (CONICET), Secretar\'\i a de Ciencia y Tecnolog\'\i a de la Universidad de C\'ordoba and Ministerio de Ciencia y Tecnolog\'\i a de C\'ordoba, Argentina.\\ \section*{References} Bernardi, M., Hyde, J.B., Sheth, R.K., Miller, C.J., \& Nichol, R.C. 2007, AJ, 133, 1741\\ Bildfell, C., Hoekstra, H., Babul, A., \& Mahdavi, A. 2008, MNRAS, 389, 1637\\ Binggeli, B. 1982, A\&A, 107, 338\\ Caon, N., Capaccioli, M., \& D'Onofrio, M. 1993, MNRAS, 265, 1013\\ Coenda, V., Donzelli, C.J., Muriel, H., Quintana, H., \& Infante, L. 2005, AJ, 129, 1237\\ Conroy, C., Wechsler, R. H., \& Kravtsov, A. V. 2007, ApJ, 668, 826\\ de Lucia, G., \& Blaizot, J. 2007, MNRAS, 375, 2\\ de Vaucouleurs, G. 1948, Ann. Astrophys., 11, 247\\ Djorgovski, S., \& Davis, M. 1987, ApJ, 313, 59\\ Donzelli, C.J., Chiaberge, M., Macchetto, F. D., Madrid, J.P., Capetti, A., \& Marchesini, D. 2007, ApJ, 667, 780\\ Ferrarese, L., et al. 2006, ApJS, 164, 334\\ Freeman, K. C. 1970, ApJ, 160, 811\\ Gonzalez, A. H., Zabludoff, A. I., \& Zaritsky, D. 2003, Ap\&SS, 285, 67\\ Gonzalez, A. H., Zabludoff, A. I., \& Zaritsky, D. 2005, ApJ, 618, 195\\ Graham, A.W., \& Driver, S.P. 2005, PASA, 22, 118\\ Graham, A.W., Lauer, T.R., Colless, M., \& Postman, M. 1996, ApJ, 465, 534\\ Hopkins, P.F., Cox, T. J., Dutta, S. N., Hernquist, L., Kormendy, J., \& Lauer, T. R. 2009, ApJS, 181, 135\\ Jedrzejewski, R. 1987, MNRAS, 226, 747\\ Kent, S.M. 1985, ApJS, 59, 115\\ Kemp, S.N., Guzm\'an Jim\'enez, V., Ram\'irez Beraud, P., Hern\'andez Ibarra, F.J., \& P\'erez Grana, J.A. 2007, in IAU Symp.
235, Galaxy Evolution Across the Hubble Time, ed. F. Combes \& J. Palous (Cambridge: Cambridge Univ. Press), 213\\
Kormendy, J. 1977, ApJ, 218, 333\\
Kormendy, J., Fisher, D. B., Cornell, M. E., \& Bender, R. 2009, ApJS, 182, 216\\
La Barbera, F., Busarello, G., Merluzzi, P., De La Rosa, I., Coppola, G., \& Haynes, C.P. 2008, PASP, 120, 681\\
Landolt, A.U. 1983, AJ, 88, 853\\
Lauer, T. R., \& Postman, M. 1992, ApJ, 400, L50\\
Lauer, T.R., et al. 2007, ApJ, 662, 808\\
Ledlow, M.J., Voges, W., Owen, F.N., \& Burns, J.O. 2003, AJ, 126, 2740\\
Lintott, C., et al. 2011, MNRAS, 410, 166\\
Lugger, P.M. 1984, ApJ, 286, 106\\
Mackie, G. 1992, ApJ, 400, 65\\
Oegerle, W.R., \& Hoessel, J.G. 1991, ApJ, 375, 150\\
O'Dea, K., et al. 2010, ApJ, 719, 1619\\
Plionis, M. 1994, ApJS, 95, 401\\
Postman, M., \& Lauer, T. R. 1995, ApJ, 440, 28\\
Quillen, A.C., et al. 2008, ApJS, 176, 39\\
Rhee, G., \& Katgert, P. 1987, A\&A, 183, 217\\
Sadat, R., Blanchard, A., Kneib, J.P., Mathez, G., Madore, B., \& Mazzarella, J.M. 2004, A\&A, 424, 1097\\
Seigar, M.S., Graham, A.W., \& Jerjen, H. 2007, MNRAS, 378, 1575\\
Schombert, J.M. 1986, ApJS, 60, 603\\
Schombert, J.M. 1987, ApJS, 64, 643\\
Schombert, J.M. 1988, ApJ, 328, 475\\
Schombert, J.M., \& Bothun, G.D. 1987, AJ, 93, 60\\
S\'ersic, J.L. 1963, Boletin de la Asociaci\'on Argentina de Astronom\'ia, 6, 41\\
S\'ersic, J.L. 1968, Atlas de Galaxias Australes (C\'ordoba: Obs. Astron\'om.)\\
von der Linden, A., Best, P.N., Kauffmann, G., \& White, S. 2007, MNRAS, 379, 867\\

\begin{figure}
\center
\plotone{fig1a.ps}
\caption{ BCG A0690 luminosity profile with the single S\'ersic fitting. Upper scale is in arcsec. }
\label{fig1a}
\end{figure}

\begin{figure}
\center
\plotone{fig1b.ps}
\caption{ Inner S\'ersic (short dashed line) + outer exponential (long dashed line) fitting model for the A0690 luminosity profile. }
\label{fig1b}
\end{figure}

\begin{figure}
\center
\plotone{fig2a.ps}
\caption{ Kormendy relation for the sample galaxies.
Black dots represent single profile BCGs, while red dots represent double profile BCGs. Best fits for both subsamples are also shown. }
\label{fig2a}
\end{figure}

\begin{figure}
\center
\plotone{fig3a.ps}
\caption{ Total magnitude distributions for single (black line) and double profile (red line) BCGs. }
\label{fig3a}
\end{figure}

\begin{figure}
\center
\plotone{fig3b.ps}
\caption{ Integrated absolute magnitudes vs. diaphragm radius in kpc. The vertical line indicates where the $\alpha$ parameter and metric magnitudes are calculated. }
\label{fig3b}
\end{figure}

\begin{figure}
\center
\plotone{fig7a.ps}
\caption{ Luminosity profiles obtained using the stacking technique for all single and double profile BCGs. Prior to the stacking, the individual profiles were normalized at the effective radius. }
\label{fig7a}
\end{figure}

\begin{figure}
\center
\plotone{fig4a.ps}
\caption{ Outer ellipticity distributions for single (black line) and double profile (red line) BCGs. }
\label{fig4a}
\end{figure}

\begin{figure}
\center
\plotone{fig4b.ps}
\caption{ Outer minus inner ellipticity distributions for single (black line) and double profile (red line) BCGs. }
\label{fig4b}
\end{figure}

\begin{figure}
\center
\plotone{fig5.ps}
\caption{ $\alpha$ parameter distributions for single (black line) and double profile (red line) BCGs. }
\label{fig5}
\end{figure}

\begin{figure}
\center
\plotone{fig1c.ps}
\caption{ Exp + S\'ersic model fit for the A0690 luminosity profile. The outer S\'ersic (short dashed line) and inner exponential (long dashed line) components can be observed. }
\label{fig1c}
\end{figure}

\begin{figure}
\center
\plotone{fig7.ps}
\caption{ BCG total absolute magnitude vs. X-ray offset for the whole BCG sample.
}
\label{fig7}
\end{figure}

\clearpage

\begin{deluxetable}{rcccc}
\tabletypesize{\scriptsize}
\tablecaption{BCG Imaging Runs \label{tbl-1}}
\tablewidth{0pt}
\tablehead{ \colhead{Run} & \colhead{Date} & \colhead{Observatory} & \colhead{Pixel Scale (")} & \colhead{FOV (')} }
\startdata
1 & 1989 Oct & CTIO & 0.273 & 3.6 \\
2 & 1989 Nov & KPNO 4m & 0.299 & 4.0 \\
3 & 1990 Nov & CTIO & 0.273 & 3.6 \\
4 & 1991 Mar & KPNO 2.1m & 0.304 & 5.2 \\
5 & 1991 Apr & CTIO & 0.434 & 7.4 \\
6 & 1993 Sep & KPNO 2.1m & 0.304 & 5.2 \\
7 & 1993 Nov & CTIO & 0.434 & 7.4 \\
8 & 1994 May & KPNO 2.1m & 0.304 & 5.2 \\
9 & 1994 May & CTIO & 0.434 & 7.4 \\
10 & 1994 Oct & KPNO 2.1m & 0.304 & 5.2 \\
11 & 1994 Dec & CTIO & 0.434 & 7.4 \\
12 & 1995 Apr & CTIO & 0.434 & 7.4 \\
13 & 1995 Apr & KPNO 2.1m & 0.304 & 5.2 \\
\enddata
\end{deluxetable}

\clearpage

\begin{deluxetable}{lccccccccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{BCG Photometric Parameters \label{tbl-2}}
\tablewidth{0pt}
\tablehead{ \colhead{Name} & \colhead{$\mu_e$} & \colhead{$r_e$} & \colhead{$n$} & \colhead{$\mu_0$} & \colhead{$r_0$} & \colhead{$M_{Sersic}$} & \colhead{$M_{exp}$} & \colhead{$M_{T}$} & \colhead{$S/e$} & \colhead{$\alpha$} & \colhead{$e_{in}$} & \colhead{$e_{out}$} & \colhead{$pa_{in}$} & \colhead{$pa_{out}$} & \colhead{$M_{metric}$}\\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} & \colhead{(11)} & \colhead{(12)} & \colhead{(13)} & \colhead{(14)} & \colhead{(15)} & \colhead{(16)} }
\startdata
A0014 &26.65 &387.58 &5.08 & --- & --- &-26.65 & 0.00 &-26.65 & --- &1.10 &0.35 &0.56 & 56.5 & 52.0 &-22.87\\
A0027 &24.41 & 32.51 &7.58 & --- & --- &-23.67 & 0.00 &-23.67 & --- &0.59 &0.24 &0.28 & -54.0 & -61.2 &-22.18\\
A0071 &20.73 & 5.18 &5.49 & --- & --- &-23.28 & 0.00 &-23.28 & --- &0.49 &0.06 &0.05 & 51.1 & 31.1 &-22.45\\
A0074 &23.24 & 23.04 &6.45 & --- & --- &-24.05 & 0.00 &-24.05 & --- &0.54 &0.28
&0.30 & 42.2 & 39.6 &-22.93\\ A0075 &21.86 & 10.58 &4.74 & --- & --- &-23.55 & 0.00 &-23.55 & --- &0.55 &0.09 &0.23 & 85.0 & 60.4 &-22.74\\ A0076 &24.08 & 34.80 &7.58 & --- & --- &-24.09 & 0.00 &-24.09 & --- &0.57 &0.05 &0.17 & -66.4 &-118.7 &-22.80\\ A0077 &26.39 &190.00 &6.94 & --- & --- &-25.65 & 0.00 &-25.65 & --- &0.91 &0.13 &0.51 & -62.5 & -64.0 &-23.04\\ A0080 &21.59 & 5.92 &2.24 & --- & --- &-22.29 & 0.00 &-22.29 & --- &0.46 &0.11 &0.06 & -64.2 & -71.0 &-21.98\\ A0085 &21.43 & 12.43 &0.86 &21.63 &26.00 &-23.47 &-24.24 &-24.68 & 0.49 &1.30 &0.11 &0.35 & -27.3 & -32.1 &-23.18\\ A0086 &22.55 & 14.89 &6.17 & --- & --- &-23.76 & 0.00 &-23.76 & --- &0.56 &0.26 &0.17 & 70.6 & 74.8 &-22.90\\ A0102 &23.73 & 32.22 &7.41 & --- & --- &-24.36 & 0.00 &-24.36 & --- &0.61 &0.13 &0.11 & -64.2 & -41.3 &-22.97\\ A0114 &22.73 & 20.87 &5.85 & --- & --- &-24.26 & 0.00 &-24.26 & --- &0.59 &0.16 &0.28 & 72.9 & 73.6 &-23.01\\ A0116 &21.44 & 9.44 &4.52 & --- & --- &-23.74 & 0.00 &-23.74 & --- &0.58 &0.17 &0.17 & 5.5 & 15.5 &-22.93\\ A0117 &24.06 & 47.47 &6.06 & --- & --- &-24.73 & 0.00 &-24.73 & --- &0.76 &0.12 &0.28 & -7.7 & -19.9 &-23.20\\ A0119 &26.71 &247.61 &7.87 & --- & --- &-25.77 & 0.00 &-25.77 & --- &0.83 &0.12 &0.33 & 34.8 & 35.9 &-23.01\\ A0126 &20.74 & 3.10 &3.61 & --- & --- &-21.78 & 0.00 &-21.78 & --- &0.26 &0.04 &0.09 & -38.6 & 38.9 &-21.55\\ A0133 &21.10 & 7.64 &1.11 &21.46 &23.80 &-22.87 &-24.23 &-24.50 & 0.29 &0.93 &0.07 &0.43 & 33.7 & 22.4 &-22.95\\ A0134 &21.19 & 5.86 &4.72 & --- & --- &-22.98 & 0.00 &-22.98 & --- &0.47 &0.14 &0.05 & 10.5 & 1.6 &-22.40\\ A0147 &21.75 & 10.17 &6.06 & --- & --- &-23.64 & 0.00 &-23.64 & --- &0.49 &0.16 &0.09 & 33.6 & 31.6 &-22.83\\ A0150 &20.93 & 6.34 &1.91 &22.01 &26.20 &-22.91 &-23.89 &-24.26 & 0.40 &0.75 &0.11 &0.43 & -2.9 & -4.8 &-22.81\\ A0151 &25.86 &146.25 &8.06 & --- & --- &-25.51 & 0.00 &-25.51 & --- &0.78 &0.17 &0.26 & 73.5 & 76.9 &-23.11\\ A0152 &18.92 & 1.38 &5.65 &20.93 &11.80 &-22.24 &-23.32 &-23.66 & 0.37 &0.84 &0.04 
&0.28 & 39.0 & 17.2 &-22.64\\ A0154 &21.82 & 11.13 &5.21 &22.33 &28.00 &-23.76 &-23.72 &-24.50 & 1.04 &0.66 &0.22 &0.39 & -38.9 & -44.1 &-23.05\\ A0158 &19.06 & 1.66 &4.15 &20.97 &10.20 &-22.28 &-22.90 &-23.39 & 0.56 &0.71 &0.13 &0.12 & -59.4 & -48.6 &-22.58\\ A0160 &20.38 & 3.04 &1.04 &21.05 &12.50 &-21.49 &-23.18 &-23.38 & 0.21 &0.78 &0.03 &0.24 & -73.1 & -92.1 &-22.44\\ A0161 &20.53 & 3.65 &4.74 &22.33 &23.50 &-22.64 &-23.40 &-23.84 & 0.50 &0.67 &0.05 &0.31 & 4.9 & -42.9 &-22.07\\ A0168 &24.51 & 48.01 &7.41 & --- & --- &-24.38 & 0.00 &-24.38 & --- &0.65 &0.06 &0.25 & -25.6 & -26.5 &-22.83\\ A0171 &22.75 & 17.23 &6.49 & --- & --- &-24.04 & 0.00 &-24.04 & --- &0.63 &0.14 &0.12 & 86.5 & 30.7 &-23.00\\ A0174 &20.44 & 3.71 &0.98 &21.56 &13.02 &-21.97 &-22.89 &-23.27 & 0.43 &0.66 &0.02 &0.16 & 5.9 & -64.9 &-22.53\\ A0179 &21.14 & 5.27 &3.73 &21.92 &16.30 &-22.62 &-22.93 &-23.54 & 0.75 &0.63 &0.22 &0.38 & -27.3 & -25.6 &-22.35\\ A0189 &21.13 & 7.32 &3.06 & --- & --- &-23.15 & 0.00 &-23.15 & --- &0.40 &0.42 &0.50 & -2.8 & -6.7 &-22.24\\ A0193 &18.60 & 1.78 &4.13 &21.52 &22.00 &-22.84 &-23.97 &-24.30 & 0.36 &0.59 &0.04 &0.23 & -69.9 & -73.3 &-22.88\\ A0194 &18.59 & 1.30 &1.53 &19.95 & 7.70 &-21.51 &-23.12 &-23.34 & 0.23 &0.68 &0.25 &0.36 & 56.6 & 60.5 &-22.58\\ A0195 &21.62 & 8.16 &6.58 & --- & --- &-23.42 & 0.00 &-23.42 & --- &0.50 &0.09 &0.05 & -84.7 & 11.2 &-22.69\\ A0208 &21.66 & 7.95 &5.32 &22.36 &26.80 &-23.37 &-23.76 &-24.34 & 0.70 &0.67 &0.06 &0.33 & 10.7 & 25.5 &-22.95\\ A0225 &25.24 & 84.64 &7.09 & --- & --- &-24.95 & 0.00 &-24.95 & --- &0.78 &0.15 &0.27 & 81.2 & 81.6 &-22.92\\ A0240 &24.61 & 55.91 &5.38 & --- & --- &-24.50 & 0.00 &-24.50 & --- &0.86 &0.09 &0.21 & 4.6 & 9.3 &-22.74\\ A0245 &21.95 & 8.70 &3.28 & --- & --- &-23.02 & 0.00 &-23.02 & --- &0.57 &0.21 &0.27 & -65.2 & -81.4 &-22.30\\ A0246 &23.73 & 24.12 &6.33 & --- & --- &-23.70 & 0.00 &-23.70 & --- &0.62 &0.16 &0.39 & 17.8 & 11.0 &-22.42\\ A0257 &19.86 & 2.99 &4.26 &21.13 &11.90 &-22.91 &-23.21 
&-23.83 & 0.75 &0.64 &0.26 &0.42 & 9.9 & 24.1 &-22.76\\ A0260 &20.95 & 6.17 &4.50 &21.85 &20.90 &-23.17 &-23.46 &-24.08 & 0.76 &0.62 &0.12 &0.33 & 46.3 & 73.2 &-22.89\\ A0261 &20.19 & 4.08 &2.60 &21.69 &15.30 &-22.79 &-23.00 &-23.65 & 0.83 &0.57 &0.10 &0.18 & -2.7 & -9.8 &-22.81\\ A0262 &20.33 & 3.18 &1.75 &21.16 &16.60 &-21.54 &-23.33 &-23.52 & 0.19 &0.76 &0.07 &0.41 & -73.6 & 42.0 &-22.21\\ A0267 &25.90 &105.21 &6.29 & --- & --- &-24.66 & 0.00 &-24.66 & --- &0.90 &0.19 &0.33 & 78.4 & 72.0 &-22.58\\ A0268 &20.23 & 2.85 &3.70 &22.80 &14.30 &-22.31 &-21.89 &-22.88 & 1.47 &0.47 &0.14 &0.27 & 2.7 & -11.5 &-22.06\\ A0279 &22.81 & 18.86 &5.29 &22.56 &30.60 &-24.01 &-23.76 &-24.65 & 1.25 &0.81 &0.11 &0.24 & 40.2 & 47.6 &-23.10\\ A0292 &23.37 & 45.49 &3.11 & --- & --- &-25.01 & 0.00 &-25.01 & --- &0.88 &0.31 &0.58 & -84.9 & -85.6 &-22.94\\ A0295 &21.47 & 4.64 &2.99 &21.44 &17.30 &-21.84 &-23.48 &-23.70 & 0.22 &0.66 &0.11 &0.44 & 76.4 & 67.0 &-22.76\\ A0311 &22.79 & 15.71 &5.43 &22.36 &38.00 &-23.59 &-24.38 &-24.81 & 0.48 &0.80 &0.17 &0.53 & 15.7 & 29.0 &-22.83\\ A0326 &21.06 & 5.91 &3.14 & --- & --- &-22.87 & 0.00 &-22.87 & --- &0.38 &0.42 &0.11 & 21.5 & 87.7 &-22.22\\ A0347 &20.95 & 3.99 &3.66 & --- & --- &-21.82 & 0.00 &-21.82 & --- &0.32 &0.30 &0.21 & -76.7 & -66.5 &-21.28\\ A0357 &21.61 & 7.00 &2.51 & --- & --- &-22.68 & 0.00 &-22.68 & --- &0.35 &0.41 &0.47 & -42.1 & -39.9 &-21.85\\ A0358 &23.57 & 32.81 &5.13 & --- & --- &-24.34 & 0.00 &-24.34 & --- &0.76 &0.17 &0.28 & 3.9 & 11.0 &-22.87\\ A0376 &25.76 & 94.82 &8.00 & --- & --- &-24.66 & 0.00 &-24.66 & --- &0.73 &0.05 &0.27 & -82.9 & 81.7 &-22.71\\ A0396 &20.47 & 2.35 &1.66 & --- & --- &-20.87 & 0.00 &-20.87 & --- &0.22 &0.47 &0.35 & 7.7 & 7.7 &-20.42\\ A0397 &20.31 & 3.82 &3.44 &21.78 &19.20 &-22.66 &-23.38 &-23.83 & 0.51 &0.61 &0.14 &0.40 & 39.2 & 60.5 &-22.87\\ A0399 &21.82 & 10.41 &1.22 &21.83 &33.30 &-22.92 &-24.65 &-24.85 & 0.20 &1.13 &0.23 &0.43 & 46.5 & 39.8 &-22.80\\ A0401 &28.27 &968.92 &7.46 & --- & --- 
&-27.25 & 0.00 &-27.25 & --- &1.02 &0.26 &0.58 & 35.8 & 24.7 &-23.01\\ A0404 &20.33 & 4.04 &1.09 &21.26 &12.40 &-22.27 &-23.03 &-23.47 & 0.50 &0.62 &0.27 &0.37 & 22.4 & 24.7 &-22.57\\ A0415 &22.20 & 9.26 &3.88 &21.95 &26.00 &-22.91 &-24.02 &-24.36 & 0.36 &0.86 &0.12 &0.32 & -20.1 & -12.5 &-22.72\\ A0419 &21.68 & 7.33 &4.55 & --- & --- &-22.85 & 0.00 &-22.85 & --- &0.40 &0.28 &0.32 & -61.2 & -25.0 &-22.06\\ A0423 &25.05 & 61.90 &6.80 & --- & --- &-24.48 & 0.00 &-24.48 & --- &0.70 &0.19 &0.47 & -75.5 & -70.3 &-22.56\\ A0428 &18.82 & 1.65 &3.51 &21.71 & 6.80 &-22.43 &-21.29 &-22.75 & 2.86 &0.41 &0.03 &0.09 & 8.0 & 48.6 &-22.18\\ A0450 &22.01 & 8.26 &4.35 & --- & --- &-22.92 & 0.00 &-22.92 & --- &0.49 &0.21 &0.16 & 12.5 & -2.2 &-22.25\\ A0496 &25.74 &101.00 &5.62 & --- & --- &-24.57 & 0.00 &-24.57 & --- &0.78 &0.16 &0.37 & -8.6 & -3.9 &-22.75\\ A0497 &20.95 & 2.41 &3.58 &21.51 & 9.70 &-21.31 &-22.44 &-22.77 & 0.35 &0.81 &0.20 &0.28 & 26.5 & 31.0 &-21.95\\ A0500 &20.90 & 5.92 &2.90 &22.44 &30.00 &-23.03 &-23.79 &-24.23 & 0.50 &0.63 &0.21 &0.37 & 9.0 & 33.0 &-22.83\\ A0505 &24.64 & 78.09 &6.80 & --- & --- &-25.29 & 0.00 &-25.29 & --- &0.84 &0.14 &0.17 & -51.9 & 11.0 &-23.39\\ A0514 &23.13 & 21.69 &5.56 & --- & --- &-23.98 & 0.00 &-23.98 & --- &0.61 &0.19 &0.31 & -53.3 & -49.9 &-22.90\\ A0533 &22.59 & 16.32 &4.98 & --- & --- &-23.75 & 0.00 &-23.75 & --- &0.55 &0.24 &0.17 & -61.2 & -44.8 &-22.69\\ A0536 &25.23 & 84.14 &6.54 & --- & --- &-25.15 & 0.00 &-25.15 & --- &0.84 &0.29 &0.41 & -34.1 & -29.2 &-23.05\\ A0539 &21.03 & 5.56 &4.76 &22.31 &22.30 &-22.91 &-23.15 &-23.79 & 0.80 &0.50 &0.04 &0.36 & -11.3 & -2.0 &-22.59\\ A0543 &19.33 & 2.39 &2.48 & --- & --- &-22.63 & 0.00 &-22.63 & --- &0.27 &0.07 &0.02 & -64.1 & -47.6 &-22.06\\ A0548 &21.41 & 7.07 &4.08 &20.39 & 6.60 &-22.98 &-22.45 &-23.50 & 1.63 &0.64 &0.34 &0.14 & 64.4 & 82.7 &-22.73\\ A0550 &22.20 & 10.71 &4.26 &23.00 &51.50 &-23.33 &-24.52 &-24.83 & 0.34 &0.76 &0.15 &0.38 & -43.4 & -56.8 &-22.79\\ A0553 &21.69 & 9.57 
&3.44 &23.30 &57.80 &-23.48 &-24.46 &-24.83 & 0.40 &0.67 &0.16 &0.57 & -68.4 & -67.1 &-22.87\\ A0564 &20.09 & 3.86 &1.28 &23.03 &29.20 &-22.64 &-23.28 &-23.76 & 0.55 &0.39 &0.23 &0.35 & 29.5 & 42.1 &-22.54\\ A0568 &21.59 & 11.11 &2.87 & --- & --- &-23.84 & 0.00 &-23.84 & --- &0.69 &0.18 &0.11 & 31.9 & -22.7 &-23.07\\ A0569 &22.11 & 13.01 &4.72 & --- & --- &-23.22 & 0.00 &-23.22 & --- &0.50 &0.13 &0.15 & -4.4 & 34.3 &-22.54\\ A0576 &20.35 & 4.19 &4.74 & --- & --- &-22.94 & 0.00 &-22.94 & --- &0.32 &0.17 &0.08 & -60.2 & 46.9 &-22.37\\ A0582 &20.38 & 3.79 &4.88 &22.95 &26.60 &-22.92 &-23.09 &-23.76 & 0.86 &0.51 &0.01 &0.04 & -52.3 & 16.2 &-22.63\\ A0592 &20.55 & 4.44 &4.29 &22.00 &13.30 &-22.17 &-21.67 &-22.70 & 1.58 &0.52 &0.07 &0.21 & 84.6 & -86.6 &-22.58\\ A0595 &21.76 & 7.99 &5.62 & --- & --- &-23.27 & 0.00 &-23.27 & --- &0.49 &0.06 &0.16 & 76.2 & 78.8 &-22.51\\ A0600 &23.54 & 24.95 &4.52 & --- & --- &-23.89 & 0.00 &-23.89 & --- &0.73 &0.10 &0.35 & 26.8 & -1.1 &-22.60\\ A0602 &21.18 & 5.32 &3.30 & --- & --- &-22.56 & 0.00 &-22.56 & --- &0.47 &0.17 &0.23 & 58.0 & 41.3 &-22.39\\ A0607 &21.59 & 8.46 &2.68 & --- & --- &-23.26 & 0.00 &-23.26 & --- &0.69 &0.02 &0.07 & -60.2 & 31.1 &-22.81\\ A0612 &23.12 & 30.92 &5.13 & --- & --- &-24.96 & 0.00 &-24.96 & --- &0.96 &0.06 &0.01 & -81.9 & 4.2 &-23.54\\ A0634 &21.72 & 10.91 &3.07 & --- & --- &-23.22 & 0.00 &-23.22 & --- &0.50 &0.17 &0.14 & -77.2 & 67.5 &-22.46\\ A0644 &24.11 & 29.04 &5.59 &22.34 &31.40 &-23.64 &-24.01 &-24.59 & 0.71 &0.85 &0.23 &0.49 & 7.4 & 10.4 &-22.55\\ A0671 &20.14 & 4.99 &2.21 &21.33 &23.30 &-23.23 &-24.29 &-24.64 & 0.38 &0.73 &0.21 &0.27 & 23.7 & 30.7 &-23.20\\ A0690 &21.65 & 9.54 &5.24 &22.54 &28.30 &-23.72 &-23.65 &-24.44 & 1.06 &0.66 &0.10 &0.20 & 41.7 & 53.0 &-23.06\\ A0695 &22.59 & 17.78 &3.75 & --- & --- &-23.90 & 0.00 &-23.90 & --- &0.70 &0.23 &0.29 & -53.4 & -54.3 &-22.79\\ A0744 &24.10 & 40.05 &6.33 & --- & --- &-24.43 & 0.00 &-24.43 & --- &0.76 &0.03 &0.06 & 71.6 & -49.6 &-22.86\\ A0757 
&20.80 & 4.44 &2.17 &23.23 &23.50 &-22.31 &-22.42 &-23.12 & 0.90 &0.49 &0.10 &0.12 & 26.2 & 69.8 &-22.23\\ A0779 &20.70 & 7.99 &3.72 &21.60 &27.20 &-23.57 &-23.97 &-24.54 & 0.69 &0.58 &0.16 &0.41 & -23.7 & -21.0 &-23.10\\ A0780 &21.94 & 10.39 &3.70 &22.30 &26.25 &-23.30 &-23.60 &-24.21 & 0.76 &0.86 &0.12 &0.32 & -36.5 & -34.6 &-22.85\\ A0819 &24.42 & 57.38 &6.85 & --- & --- &-24.94 & 0.00 &-24.94 & --- &0.71 &0.25 &0.45 & -22.9 & -27.4 &-22.98\\ A0834 &17.97 & 0.68 &2.87 &19.55 & 4.20 &-21.21 &-22.38 &-22.70 & 0.34 &0.47 &0.36 &0.27 & -10.2 & -5.7 &-22.17\\ A0838 &24.18 & 30.16 &7.04 & --- & --- &-23.70 & 0.00 &-23.70 & --- &0.62 &0.11 &0.27 & 6.5 & 86.0 &-22.39\\ A0841 &21.15 & 7.34 &4.35 &22.88 &33.50 &-23.48 &-23.61 &-24.30 & 0.89 &0.63 &0.09 &0.30 & -29.8 & -28.1 &-22.95\\ A0865 &21.03 & 3.79 &1.17 & --- & --- &-21.53 & 0.00 &-21.53 & --- &0.23 &0.45 &0.51 & -31.6 & -29.9 &-21.08\\ A0912 &22.18 & 9.14 &5.75 & --- & --- &-23.03 & 0.00 &-23.03 & --- &0.43 &0.11 &0.13 & 68.6 & -9.1 &-22.29\\ A0930 &19.48 & 1.93 &4.03 &22.16 &14.80 &-22.14 &-22.50 &-23.09 & 0.72 &0.58 &0.08 &0.18 & -27.0 & -8.2 &-22.07\\ A0957 &22.10 & 14.38 &3.89 &22.90 &25.20 &-23.89 &-23.92 &-24.66 & 0.97 &0.86 &0.20 &0.22 & -81.9 & 57.3 &-23.00\\ A0970 &21.81 & 6.50 &5.24 &23.01 &23.90 &-22.61 &-22.70 &-23.41 & 0.92 &0.61 &0.10 &0.16 & -54.6 & -61.3 &-22.19\\ A0978 &20.74 & 5.67 &5.21 &21.91 &23.70 &-23.36 &-23.76 &-24.33 & 0.69 &0.64 &0.08 &0.24 & 7.7 & -18.1 &-22.99\\ A0979 &21.87 & 9.13 &4.59 &22.63 &20.10 &-23.19 &-22.68 &-23.71 & 1.60 &0.65 &0.07 &0.11 & -50.0 & -39.3 &-22.71\\ A0993 &22.54 & 12.29 &6.41 & --- & --- &-23.35 & 0.00 &-23.35 & --- &0.58 &0.06 &0.06 & 52.8 & 1.8 &-23.05\\ A0994 &22.61 & 18.69 &4.90 & --- & --- &-23.98 & 0.00 &-23.98 & --- &0.54 &0.35 &0.41 & -21.3 & -18.6 &-22.70\\ A0999 &20.89 & 6.10 &5.08 &23.28 &33.50 &-23.15 &-22.94 &-23.80 & 1.22 &0.44 &0.19 &0.24 & -9.1 & -29.4 &-22.55\\ A1003 &19.43 & 1.90 &2.58 &20.89 &11.80 &-21.96 &-23.30 &-23.58 & 0.29 &0.70 &0.06 
&0.29 & -31.5 & -53.8 &-22.65\\ A1016 &19.77 & 2.90 &3.89 &22.23 &13.50 &-22.52 &-22.02 &-23.05 & 1.59 &0.43 &0.10 &0.14 & 9.6 & 0.1 &-22.35\\ A1020 &20.27 & 3.93 &5.41 &22.73 &18.60 &-23.10 &-22.46 &-23.58 & 1.79 &0.53 &0.05 &0.16 & -46.6 & -41.8 &-22.76\\ A1021 &20.33 & 4.44 &1.58 &21.67 &15.70 &-22.82 &-23.31 &-23.84 & 0.64 &0.64 &0.17 &0.25 & -88.3 & -89.8 &-22.92\\ A1032 &21.19 & 7.93 &3.86 & --- & --- &-23.52 & 0.00 &-23.52 & --- &0.50 &0.36 &0.32 & 31.5 & 25.1 &-22.59\\ A1035 &23.46 & 28.91 &5.38 & --- & --- &-24.33 & 0.00 &-24.33 & --- &0.73 &0.06 &0.15 & -9.4 & -14.6 &-22.99\\ A1066 &20.73 & 6.52 &2.26 &23.01 &47.10 &-23.31 &-24.22 &-24.61 & 0.43 &0.72 &0.18 &0.30 & -40.4 & -60.3 &-22.84\\ A1069 &21.28 & 6.61 &5.03 &21.20 &16.30 &-23.18 &-23.71 &-24.23 & 0.62 &0.78 &0.28 &0.12 & -86.8 & -85.6 &-22.98\\ A1100 &20.49 & 4.99 &2.33 &21.97 &20.90 &-22.88 &-23.39 &-23.92 & 0.62 &0.55 &0.19 &0.38 & -10.0 & -17.5 &-22.74\\ A1139 &21.39 & 6.74 &4.83 &22.45 &22.60 &-22.92 &-22.99 &-23.71 & 0.93 &0.57 &0.06 &0.32 & -51.2 & -82.0 &-22.55\\ A1142 &22.65 & 15.08 &6.67 &23.46 &49.00 &-23.51 &-23.60 &-24.31 & 0.92 &0.54 &0.15 &0.24 & 36.6 & 51.1 &-22.55\\ A1145 &21.49 & 6.31 &3.56 & --- & --- &-22.70 & 0.00 &-22.70 & --- &0.44 &0.17 &0.16 & -44.3 & -50.1 &-22.12\\ A1149 &19.97 & 3.11 &3.95 &21.58 &16.30 &-22.74 &-23.35 &-23.84 & 0.57 &0.65 &0.11 &0.33 & 68.8 & 69.7 &-22.76\\ A1169 &22.08 & 9.21 &6.41 & --- & --- &-23.21 & 0.00 &-23.21 & --- &0.48 &0.22 &0.28 & -32.0 & -75.3 &-22.25\\ A1171 &20.03 & 3.54 &2.62 &22.42 &22.00 &-22.78 &-23.18 &-23.75 & 0.69 &0.63 &0.30 &0.49 & 44.6 & 43.7 &-22.26\\ A1177 &23.13 & 20.88 &6.41 &22.29 &29.00 &-23.70 &-23.62 &-24.41 & 1.08 &0.71 &0.13 &0.42 & 47.5 & 39.8 &-22.77\\ A1185 &21.45 & 10.10 &1.87 &23.94 &95.70 &-23.21 &-24.60 &-24.87 & 0.28 &0.66 &0.12 &0.07 & 39.5 & -15.0 &-22.87\\ A1187 &20.80 & 5.69 &2.93 &21.80 &16.70 &-23.14 &-23.25 &-23.95 & 0.91 &0.64 &0.18 &0.21 & -81.6 & -65.2 &-22.95\\ A1190 &23.57 & 21.29 &4.74 &21.85 &20.50 
&-23.48 &-23.63 &-24.31 & 0.87 &0.95 &0.28 &0.32 & -7.6 & -12.4 &-22.72\\ A1203 &21.58 & 9.28 &2.54 &23.43 &39.10 &-23.33 &-23.44 &-24.14 & 0.90 &0.75 &0.16 &0.17 & -47.1 & -25.8 &-22.81\\ A1213 &20.93 & 7.84 &2.42 & --- & --- &-23.41 & 0.00 &-23.41 & --- &0.50 &0.27 &0.09 & 66.5 & 88.5 &-22.75\\ A1216 &19.77 & 2.69 &1.85 &21.22 & 9.80 &-22.18 &-22.54 &-23.12 & 0.72 &0.52 &0.22 &0.30 & -50.2 & -46.5 &-22.38\\ A1228 &19.39 & 2.34 &4.18 &21.81 &12.10 &-22.52 &-22.25 &-23.14 & 1.28 &0.43 &0.06 &0.19 & 13.8 & -42.3 &-22.42\\ A1238 &21.00 & 4.64 &4.67 &22.15 &22.50 &-22.67 &-23.48 &-23.90 & 0.47 &0.68 &0.08 &0.43 & -79.9 & -77.7 &-22.56\\ A1257 &17.37 & 0.82 &2.54 &20.26 & 4.70 &-21.97 &-21.72 &-22.61 & 1.26 &0.24 &0.10 &0.13 & -46.5 & -43.0 &-22.10\\ A1267 &21.28 & 7.21 &4.42 & --- & --- &-23.05 & 0.00 &-23.05 & --- &0.33 &0.23 &0.30 & 44.9 & 53.4 &-22.43\\ A1270 &20.34 & 6.21 &1.59 & --- & --- &-23.44 & 0.00 &-23.44 & --- &0.48 &0.43 &0.20 & -46.1 & -45.3 &-22.76\\ A1279 &24.32 & 38.01 &6.80 & --- & --- &-24.07 & 0.00 &-24.07 & --- &0.63 &0.17 &0.31 & 59.4 & 55.8 &-22.56\\ A1308 &21.99 & 9.41 &4.13 &23.29 &34.60 &-23.09 &-23.20 &-23.90 & 0.90 &0.55 &0.10 &0.12 & -26.0 & -38.3 &-22.61\\ A1314 &19.76 & 3.67 &2.54 &21.67 &23.70 &-22.84 &-23.82 &-24.19 & 0.41 &0.57 &0.19 &0.36 & 86.8 & -86.2 &-22.81\\ A1317 &21.93 & 10.26 &4.98 &22.52 &51.90 &-23.50 &-24.92 &-25.18 & 0.27 &0.73 &0.18 &0.57 & 38.9 & 55.4 &-22.89\\ A1318 &23.96 & 35.95 &5.38 & --- & --- &-24.18 & 0.00 &-24.18 & --- &0.54 &0.22 &0.29 & -8.8 & -7.6 &-22.75\\ A1334 &21.66 & 8.47 &5.05 & --- & --- &-23.32 & 0.00 &-23.32 & --- &0.51 &0.21 &0.22 & 88.8 & 77.4 &-22.53\\ A1344 &24.04 & 52.80 &5.08 & --- & --- &-24.98 & 0.00 &-24.98 & --- &0.72 &0.39 &0.32 & 80.3 & 78.8 &-23.05\\ A1365 &22.79 & 15.86 &6.62 & --- & --- &-23.80 & 0.00 &-23.80 & --- &0.50 &0.21 &0.22 & -56.0 & -62.1 &-22.67\\ A1367 &23.00 & 25.76 &5.56 & --- & --- &-23.97 & 0.00 &-23.97 & --- &0.56 &0.13 &0.24 & 4.1 & -10.8 &-22.74\\ A1371 &21.74 & 
7.06 &5.88 &22.66 &19.90 &-22.99 &-22.72 &-23.62 & 1.28 &0.65 &0.08 &0.19 & 59.6 & 44.0 &-22.50\\ A1377 &21.90 & 10.47 &4.98 & --- & --- &-23.50 & 0.00 &-23.50 & --- &0.54 &0.23 &0.10 & 59.1 & 69.5 &-22.66\\ A1400 &23.35 & 22.75 &3.34 & --- & --- &-23.66 & 0.00 &-23.66 & --- &0.81 &0.16 &0.23 & -61.3 & -65.9 &-22.45\\ A1424 &19.58 & 3.02 &4.27 &20.60 & 9.90 &-23.17 &-23.30 &-23.99 & 0.89 &0.63 &0.25 &0.15 & 72.4 & 70.3 &-23.12\\ A1436 &20.96 & 5.57 &5.49 & --- & --- &-23.21 & 0.00 &-23.21 & --- &0.44 &0.10 &0.10 & -79.8 & -65.4 &-22.56\\ A1452 &23.87 & 34.05 &6.85 & --- & --- &-24.34 & 0.00 &-24.34 & --- &0.66 &0.08 &0.12 & 48.0 & 52.9 &-22.80\\ A1474 &19.71 & 1.74 &1.10 &19.63 & 7.40 &-21.20 &-23.67 &-23.78 & 0.10 &0.67 &0.51 &0.58 & -12.1 & -10.0 &-22.57\\ A1507 &20.54 & 4.45 &3.45 &21.48 &15.60 &-22.88 &-23.34 &-23.89 & 0.65 &0.70 &0.20 &0.29 & 48.0 & 44.1 &-22.82\\ A1520 &22.35 & 16.82 &5.85 &22.29 &34.30 &-24.23 &-24.25 &-24.99 & 0.98 &0.75 &0.16 &0.33 & -17.7 & -32.1 &-23.26\\ A1526 &22.00 & 9.05 &5.92 & --- & --- &-23.29 & 0.00 &-23.29 & --- &0.49 &0.12 &0.29 & -88.4 & -54.0 &-22.39\\ A1534 &20.75 & 6.44 &3.46 &22.32 &20.90 &-23.50 &-23.17 &-24.10 & 1.35 &0.65 &0.20 &0.18 & -48.9 & -52.9 &-23.01\\ A1569 &19.78 & 3.06 &3.17 &20.37 &11.00 &-22.85 &-23.77 &-24.16 & 0.43 &0.78 &0.24 &0.14 & -74.2 & -71.1 &-23.19\\ A1589 &24.90 & 76.44 &8.20 & --- & --- &-25.19 & 0.00 &-25.19 & --- &0.73 &0.25 &0.41 & 73.4 & 65.8 &-23.21\\ A1610 &20.83 & 6.07 &5.15 &22.38 &19.50 &-23.46 &-22.92 &-23.98 & 1.65 &0.55 &0.09 &0.25 & -85.0 & -68.0 &-23.01\\ A1630 &20.07 & 4.01 &1.44 &21.70 &21.10 &-22.66 &-23.76 &-24.10 & 0.36 &0.61 &0.23 &0.37 & 83.1 & 86.4 &-22.85\\ A1631 &15.76 & 0.36 &2.89 &22.56 & 4.50 &-22.00 &-19.48 &-22.10 &10.17 &0.67 &0.12 &0.12 & 19.0 & 18.6 &-21.75\\ A1644 &23.78 & 67.02 &3.08 & --- & --- &-25.38 & 0.00 &-25.38 & --- &1.02 &0.24 &0.48 & 47.4 & 42.8 &-23.00\\ A1648 &20.47 & 4.95 &0.96 &21.81 &24.20 &-22.58 &-24.01 &-24.27 & 0.27 &0.52 &0.32 &0.41 & 59.4 & 
55.1 &-22.78\\ A1691 &22.55 & 22.89 &2.99 & --- & --- &-24.39 & 0.00 &-24.39 & --- &0.92 &0.18 &0.07 & -34.3 & -14.1 &-23.16\\ A1709 &23.08 & 17.18 &6.80 & --- & --- &-23.57 & 0.00 &-23.57 & --- &0.55 &0.08 &0.16 & -53.0 & -20.1 &-22.63\\ A1736 &23.34 & 39.24 &6.21 & --- & --- &-25.00 & 0.00 &-25.00 & --- &0.58 &0.33 &0.43 & -44.4 & -45.1 &-23.26\\ A1741 &21.56 & 7.88 &6.58 & --- & --- &-23.49 & 0.00 &-23.49 & --- &0.48 &0.16 &0.11 & -34.9 & -39.2 &-22.69\\ A1749 &20.41 & 5.81 &3.27 &22.04 &22.40 &-23.53 &-23.54 &-24.29 & 0.99 &0.59 &0.25 &0.22 & -85.8 & -76.0 &-23.10\\ A1767 &25.23 &101.01 &6.02 & --- & --- &-25.30 & 0.00 &-25.30 & --- &0.81 &0.09 &0.39 & -47.5 & -3.6 &-23.16\\ A1773 &21.22 & 7.12 &5.62 &21.18 &27.70 &-23.52 &-24.93 &-25.19 & 0.27 &0.65 &0.27 &0.07 & 30.0 & 12.5 &-22.82\\ A1775 &20.71 & 5.77 &1.06 &21.30 &17.80 &-22.70 &-23.83 &-24.16 & 0.35 &0.85 &0.12 &0.16 & -19.2 & -28.6 &-23.00\\ A1780 &22.04 & 13.08 &3.40 & --- & --- &-23.75 & 0.00 &-23.75 & --- &0.64 &0.29 &0.30 & 83.4 & 75.8 &-22.70\\ A1795 &20.94 & 6.02 &1.88 &21.70 &27.50 &-22.83 &-24.37 &-24.60 & 0.24 &0.81 &0.10 &0.34 & 9.5 & 15.4 &-22.94\\ A1800 &22.88 & 45.78 &2.38 & --- & --- &-25.46 & 0.00 &-25.46 & --- &1.09 &0.39 &0.57 & -18.6 & -22.2 &-23.23\\ A1809 &21.78 & 12.38 &4.93 &22.81 &45.50 &-24.07 &-24.37 &-24.98 & 0.76 &0.69 &0.18 &0.33 & 64.7 & 54.7 &-23.26\\ A1825 &22.29 & 10.97 &4.90 &23.24 &27.90 &-23.26 &-22.84 &-23.82 & 1.47 &0.63 &0.12 &0.23 & -15.0 & -8.4 &-22.60\\ A1827 &22.01 & 10.76 &6.06 & --- & --- &-23.64 & 0.00 &-23.64 & --- &0.57 &0.15 &0.07 & 36.0 & 54.2 &-22.77\\ A1828 &21.04 & 7.22 &2.53 & --- & --- &-23.28 & 0.00 &-23.28 & --- &0.44 &0.36 &0.37 & -33.6 & -28.7 &-22.53\\ A1831 &22.75 & 18.26 &4.98 &21.80 &28.20 &-23.99 &-24.37 &-24.95 & 0.70 &0.86 &0.10 &0.41 & -20.0 & -29.8 &-23.23\\ A1836 &23.22 & 26.32 &5.24 & --- & --- &-24.14 & 0.00 &-24.14 & --- &0.60 &0.09 &0.29 & 48.6 & 37.8 &-22.91\\ A1873 &20.62 & 5.55 &2.30 &22.40 &22.50 &-23.07 &-23.23 &-23.90 & 0.87 
&0.60 &0.13 &0.26 & -80.8 & -87.1 &-22.88\\ A1890 &23.78 & 49.81 &5.18 & --- & --- &-25.08 & 0.00 &-25.08 & --- &0.76 &0.21 &0.34 & 40.6 & 42.9 &-23.26\\ A1898 &23.37 & 18.80 &6.13 & --- & --- &-23.54 & 0.00 &-23.54 & --- &0.60 &0.11 &0.17 & 1.9 & 21.2 &-22.44\\ A1899 &21.50 & 9.54 &2.83 & --- & --- &-23.41 & 0.00 &-23.41 & --- &0.58 &0.36 &0.33 & -43.1 & 82.3 &-22.59\\ A1904 &21.91 & 12.00 &4.76 &21.85 &22.30 &-23.88 &-23.80 &-24.59 & 1.08 &0.76 &0.17 &0.35 & 20.9 & 44.4 &-23.18\\ A1913 &21.99 & 10.73 &6.02 & --- & --- &-23.55 & 0.00 &-23.55 & --- &0.56 &0.04 &0.00 & 24.3 & -72.1 &-22.76\\ A1964 &19.74 & 2.78 &5.29 &22.10 &23.00 &-22.89 &-23.58 &-24.04 & 0.53 &0.57 &0.16 &0.10 & 14.6 & -24.2 &-22.52\\ A1982 &20.94 & 5.80 &3.09 &21.44 &16.10 &-22.96 &-23.42 &-23.96 & 0.65 &0.67 &0.30 &0.40 & 48.3 & 46.9 &-22.72\\ A1983 &21.89 & 11.28 &6.29 & --- & --- &-23.76 & 0.00 &-23.76 & --- &0.34 &0.26 &0.29 & 28.4 & 26.9 &-22.59\\ A1991 &26.26 &201.64 &6.41 & --- & --- &-25.74 & 0.00 &-25.74 & --- &0.90 &0.18 &0.36 & 15.6 & 9.7 &-22.86\\ A2020 &19.37 & 1.43 &4.08 &19.76 & 4.70 &-21.59 &-22.38 &-22.81 & 0.49 &0.46 &0.48 &0.44 & -35.5 & -35.4 &-21.96\\ A2022 &19.94 & 3.62 &3.55 &21.98 &22.70 &-22.99 &-23.61 &-24.10 & 0.57 &0.65 &0.12 &0.12 & -34.2 & -56.8 &-22.88\\ A2025 &19.89 & 4.84 &2.07 &21.90 &22.30 &-23.61 &-23.86 &-24.49 & 0.79 &0.56 &0.21 &0.06 & -51.1 & -45.9 &-23.44\\ A2028 &21.57 & 10.27 &3.32 &22.78 &35.40 &-23.68 &-23.86 &-24.52 & 0.84 &0.71 &0.20 &0.32 & -81.1 & -81.4 &-23.05\\ A2029 &26.30 &439.41 &5.78 & --- & --- &-27.40 & 0.00 &-27.40 & --- &1.05 &0.26 &0.52 & 20.5 & 26.6 &-23.58\\ A2040 &20.57 & 3.67 &5.21 &21.94 &26.20 &-22.54 &-23.90 &-24.17 & 0.28 &0.72 &0.12 &0.44 & -55.2 & -55.8 &-22.43\\ A2052 &24.76 & 73.57 &3.89 & --- & --- &-24.58 & 0.00 &-24.58 & --- &0.88 &0.16 &0.35 & 34.6 & 35.5 &-22.56\\ A2055 &21.19 & 5.15 &3.80 &21.86 &21.00 &-22.51 &-23.52 &-23.88 & 0.39 &0.42 &0.25 &0.35 & -26.1 & 27.1 &-21.89\\ A2063 &24.15 & 44.25 &4.27 & --- & --- 
&-24.21 & 0.00 &-24.21 & --- &0.80 &0.04 &0.38 & 39.0 & 29.1 &-22.71\\ A2065 &23.87 & 27.62 &7.30 & --- & --- &-23.90 & 0.00 &-23.90 & --- &0.67 &0.18 &0.09 & 2.9 & 11.1 &-22.51\\ A2107 &23.19 & 35.86 &3.68 & --- & --- &-24.64 & 0.00 &-24.64 & --- &0.79 &0.13 &0.26 & -71.3 & -63.5 &-23.09\\ A2143 &21.06 & 5.41 &6.21 &21.87 &15.10 &-23.20 &-22.99 &-23.85 & 1.21 &0.72 &0.06 &0.08 & 21.1 & 16.3 &-22.84\\ A2147 &22.28 & 14.87 &3.23 &23.15 &78.10 &-23.58 &-25.04 &-25.29 & 0.26 &0.81 &0.23 &0.64 & 12.8 & 11.9 &-22.68\\ A2152 &21.86 & 9.98 &6.45 & --- & --- &-23.53 & 0.00 &-23.53 & --- &0.48 &0.07 &0.08 & 16.4 & -70.5 &-22.68\\ A2162 &20.85 & 7.19 &3.92 &22.46 &28.20 &-23.40 &-23.38 &-24.14 & 1.02 &0.57 &0.24 &0.36 & 5.1 & 1.0 &-22.81\\ A2184 &21.65 & 7.05 &3.19 & --- & --- &-22.66 & 0.00 &-22.66 & --- &0.49 &0.15 &0.26 & -15.0 & -17.7 &-22.17\\ A2197 &19.25 & 3.71 &1.54 &20.32 &13.50 &-23.05 &-23.88 &-24.29 & 0.47 &0.60 &0.29 &0.37 & -39.7 & -35.3 &-23.24\\ A2198 &21.42 & 5.64 &5.00 & --- & --- &-22.66 & 0.00 &-22.66 & --- &0.42 &0.12 &0.20 & -8.2 & -3.6 &-21.98\\ A2199 &21.18 & 7.95 &1.40 &21.82 &26.90 &-22.76 &-23.91 &-24.23 & 0.35 &0.81 &0.15 &0.36 & 27.1 & 33.3 &-22.81\\ A2247 &21.13 & 6.67 &2.19 & --- & --- &-22.73 & 0.00 &-22.73 & --- &0.41 &0.10 &0.23 & 27.2 & 3.4 &-22.34\\ A2248 &22.53 & 13.34 &6.41 & --- & --- &-23.56 & 0.00 &-23.56 & --- &0.56 &0.07 &0.19 & 14.9 & 13.0 &-22.66\\ A2250 &21.28 & 7.12 &4.02 & --- & --- &-23.22 & 0.00 &-23.22 & --- &0.44 &0.16 &0.10 & -36.6 & -74.4 &-22.59\\ A2256 &20.84 & 8.91 &1.87 & --- & --- &-23.73 & 0.00 &-23.73 & --- &0.52 &0.35 &0.30 & -80.6 & -76.1 &-22.97\\ A2271 &23.96 & 52.68 &3.86 & --- & --- &-24.86 & 0.00 &-24.86 & --- &0.90 &0.21 &0.37 & -57.8 & -53.3 &-22.94\\ A2293 &23.19 & 19.05 &5.99 & --- & --- &-23.68 & 0.00 &-23.68 & --- &0.73 &0.26 &0.33 & 76.8 & 72.4 &-22.61\\ A2296 &19.63 & 1.42 &4.35 &21.02 & 8.60 &-21.31 &-22.39 &-22.73 & 0.37 &0.60 &0.19 &0.40 & 68.1 & 72.4 &-21.72\\ A2308 &24.51 & 44.68 &6.37 & --- & 
--- &-24.28 & 0.00 &-24.28 & --- &0.74 &0.07 &0.26 & 38.5 & 17.9 &-22.73\\ A2309 &20.02 & 3.49 &4.18 &22.17 &17.80 &-22.88 &-22.86 &-23.62 & 1.02 &0.57 &0.21 &0.22 & -61.2 & -68.0 &-22.67\\ A2319 &25.16 & 71.27 &5.00 &21.19 &17.60 &-24.40 &-23.83 &-24.91 & 1.69 &1.12 &0.15 &0.18 & 4.4 & 20.1 &-22.99\\ A2331 &22.07 & 6.98 &5.18 &21.67 &17.50 &-22.57 &-23.44 &-23.84 & 0.45 &0.86 &0.06 &0.27 & -86.3 & 89.4 &-22.59\\ A2361 &22.83 & 15.13 &7.14 & --- & --- &-23.58 & 0.00 &-23.58 & --- &0.52 &0.07 &0.16 & 50.6 & 49.7 &-22.67\\ A2362 &22.52 & 15.75 &6.13 & --- & --- &-23.91 & 0.00 &-23.91 & --- &0.55 &0.15 &0.09 & 68.6 & 85.0 &-23.01\\ A2366 &19.64 & 3.60 &1.64 &21.32 &16.50 &-22.87 &-23.56 &-24.02 & 0.53 &0.59 &0.14 &0.15 & -30.6 & -21.7 &-23.08\\ A2372 &20.16 & 4.35 &1.55 &21.97 &17.40 &-22.75 &-23.05 &-23.66 & 0.76 &0.58 &0.13 &0.29 & 7.2 & -4.4 &-22.81\\ A2381 &17.19 & 0.43 &4.10 &20.31 & 3.20 &-20.86 &-20.70 &-21.54 & 1.15 &0.22 &0.22 &0.35 & 6.1 & 4.8 &-21.03\\ A2382 &20.41 & 3.97 &1.55 &21.31 &16.30 &-22.33 &-23.59 &-23.89 & 0.31 &0.82 &0.17 &0.10 & 8.6 & -67.8 &-22.76\\ A2383 &21.35 & 5.54 &3.64 & --- & --- &-22.62 & 0.00 &-22.62 & --- &0.36 &0.33 &0.36 & -47.7 & -49.8 &-21.98\\ A2388 &25.06 & 57.29 &6.71 & --- & --- &-24.22 & 0.00 &-24.22 & --- &0.68 &0.13 &0.44 & -80.5 & -75.2 &-22.46\\ A2399 &21.75 & 12.37 &3.91 & --- & --- &-23.91 & 0.00 &-23.91 & --- &0.62 &0.32 &0.21 & -77.3 & -70.1 &-22.92\\ A2401 &23.47 & 26.80 &6.99 & --- & --- &-24.16 & 0.00 &-24.16 & --- &0.67 &0.07 &0.17 & -12.2 & -34.9 &-23.03\\ A2412 &20.33 & 4.55 &1.79 &23.43 &90.80 &-22.81 &-25.23 &-25.34 & 0.11 &0.57 &0.15 &0.11 & 31.0 & 22.6 &-22.65\\ A2415 &23.75 & 31.01 &6.94 & --- & --- &-24.20 & 0.00 &-24.20 & --- &0.60 &0.21 &0.35 & 27.1 & 25.5 &-22.76\\ A2457 &21.10 & 8.46 &2.60 &22.33 &31.60 &-23.52 &-23.98 &-24.52 & 0.65 &0.73 &0.26 &0.41 & 86.2 & 83.0 &-23.06\\ A2459 &21.25 & 8.03 &2.36 & --- & --- &-23.25 & 0.00 &-23.25 & --- &0.49 &0.28 &0.26 & 13.2 & 3.0 &-22.68\\ A2462 &20.29 & 4.35 
&5.13 &22.37 &26.20 &-23.40 &-23.70 &-24.32 & 0.76 &0.61 &0.03 &0.26 & -89.8 & -52.5 &-23.05\\ A2480 &19.86 & 3.45 &3.82 &22.23 &22.80 &-23.16 &-23.51 &-24.10 & 0.72 &0.51 &0.03 &0.48 & -18.0 & -40.0 &-23.00\\ A2492 &20.93 & 5.61 &2.81 &22.25 &18.50 &-22.98 &-23.04 &-23.76 & 0.95 &0.63 &0.13 &0.29 & -70.0 & 89.6 &-22.80\\ A2495 &21.70 & 6.56 &1.30 &21.52 &26.60 &-22.20 &-24.59 &-24.71 & 0.11 &1.01 &0.18 &0.43 & 50.4 & 50.7 &-22.70\\ A2511 &21.60 & 7.66 &2.83 & --- & --- &-22.92 & 0.00 &-22.92 & --- &0.49 &0.22 &0.22 & 81.3 & 75.1 &-22.31\\ A2524 &21.19 & 7.10 &4.65 &22.03 &17.90 &-23.44 &-23.14 &-24.05 & 1.33 &0.63 &0.19 &0.28 & 14.5 & 1.9 &-22.93\\ A2525 &20.52 & 3.78 &4.31 &23.00 &22.60 &-22.69 &-22.66 &-23.43 & 1.03 &0.56 &0.07 &0.16 & -61.4 & -48.7 &-22.35\\ A2559 &23.97 & 36.62 &5.18 &21.65 &22.80 &-24.28 &-24.04 &-24.91 & 1.25 &0.86 &0.29 &0.53 & 29.7 & 31.8 &-23.01\\ A2572 &19.18 & 2.54 &1.37 &21.31 &17.30 &-22.43 &-23.62 &-23.93 & 0.34 &0.50 &0.23 &0.18 & -85.8 & -19.2 &-22.81\\ A2589 &28.54 &725.91 &9.52 & --- & --- &-26.43 & 0.00 &-26.43 & --- &0.78 &0.12 &0.51 & -11.1 & 5.0 &-22.96\\ A2593 &21.93 & 6.60 &4.83 &20.54 &12.00 &-22.40 &-23.65 &-23.95 & 0.33 &0.90 &0.18 &0.38 & 75.2 & 73.8 &-22.44\\ A2596 &19.91 & 2.30 &1.02 &20.50 & 6.10 &-21.62 &-22.44 &-22.86 & 0.47 &0.55 &0.05 &0.16 & -76.0 & -70.8 &-22.47\\ A2618 &20.79 & 6.60 &1.69 &21.94 &25.50 &-23.11 &-23.95 &-24.36 & 0.46 &0.80 &0.06 &0.26 & 77.2 & -86.4 &-23.15\\ A2622 &24.53 & 51.76 &6.13 & --- & --- &-24.49 & 0.00 &-24.49 & --- &0.78 &0.20 &0.42 & -43.3 & -55.4 &-22.78\\ A2625 &21.60 & 10.49 &3.10 & --- & --- &-23.67 & 0.00 &-23.67 & --- &0.54 &0.37 &0.36 & 84.9 & 85.9 &-22.78\\ A2626 &24.10 & 37.52 &4.48 &21.48 &17.00 &-24.14 &-23.58 &-24.65 & 1.67 &1.05 &0.24 &0.37 & 32.3 & 32.0 &-22.92\\ A2630 &22.56 & 15.00 &5.56 & --- & --- &-23.83 & 0.00 &-23.83 & --- &0.59 &0.05 &0.31 & -49.4 & -50.5 &-22.87\\ A2634 &20.33 & 4.66 &3.16 &21.27 &20.60 &-22.98 &-24.00 &-24.35 & 0.39 &0.67 &0.18 &0.45 & 31.0 & 
34.3 &-22.98\\ A2637 &23.01 & 22.64 &6.58 & --- & --- &-24.38 & 0.00 &-24.38 & --- &0.58 &0.21 &0.37 & 62.9 & 86.1 &-23.17\\ A2644 &23.31 & 22.24 &5.46 & --- & --- &-23.94 & 0.00 &-23.94 & --- &0.57 &0.19 &0.30 & 16.9 & 28.3 &-22.73\\ A2656 &20.46 & 3.20 &4.29 &21.22 &13.20 &-22.48 &-23.37 &-23.76 & 0.44 &0.81 &0.09 &0.22 & -27.1 & -8.4 &-22.81\\ A2657 &21.24 & 2.74 &3.94 &19.76 & 3.50 &-21.04 &-21.67 &-22.15 & 0.56 &0.36 &0.11 &0.07 & 1.8 & -9.3 &-21.95\\ A2660 &22.08 & 10.53 &5.62 &22.42 &20.60 &-23.40 &-22.94 &-23.95 & 1.53 &0.63 &0.25 &0.45 & -35.3 & -27.8 &-22.65\\ A2661 &26.51 &104.60 &6.13 & --- & --- &-24.53 & 0.00 &-24.53 & --- &0.92 &0.28 &0.30 & -76.0 & -36.4 &-22.27\\ A2665 &23.94 & 50.12 &4.15 & --- & --- &-24.77 & 0.00 &-24.77 & --- &0.93 &0.13 &0.32 & -64.7 & -80.7 &-23.00\\ A2666 &22.97 & 25.07 &6.37 & --- & --- &-24.34 & 0.00 &-24.34 & --- &0.61 &0.26 &0.32 & 60.6 & 60.7 &-23.05\\ A2670 &21.13 & 7.49 &5.43 &21.75 &22.30 &-23.69 &-23.88 &-24.54 & 0.84 &0.70 &0.09 &0.08 & 81.4 & -39.7 &-23.28\\ A2675 &24.31 & 40.53 &6.45 & --- & --- &-24.24 & 0.00 &-24.24 & --- &0.71 &0.09 &0.44 & 43.8 & 45.0 &-22.78\\ A2678 &21.85 & 8.54 &6.41 & --- & --- &-23.32 & 0.00 &-23.32 & --- &0.48 &0.18 &0.23 & 17.7 & 19.0 &-22.55\\ A2716 &20.71 & 5.35 &6.45 & --- & --- &-23.43 & 0.00 &-23.43 & --- &0.43 &0.21 &0.17 & -78.3 & -81.0 &-22.66\\ A2717 &20.15 & 2.80 &2.84 &21.25 &18.60 &-22.08 &-23.88 &-24.07 & 0.19 &0.87 &0.05 &0.18 & -17.3 & 8.3 &-22.64\\ A2731 &22.31 & 16.35 &6.62 & --- & --- &-24.12 & 0.00 &-24.12 & --- &0.55 &0.36 &0.23 & 88.0 & 77.0 &-22.76\\ A2734 &21.70 & 9.87 &1.27 &22.08 &32.30 &-22.91 &-24.29 &-24.56 & 0.28 &1.03 &0.19 &0.50 & 21.5 & 21.6 &-22.76\\ A2764 &22.17 & 14.97 &5.43 & --- & --- &-24.13 & 0.00 &-24.13 & --- &0.60 &0.16 &0.18 & -37.4 & 1.5 &-23.06\\ A2765 &19.33 & 2.05 &4.59 &21.75 & 9.30 &-22.58 &-21.98 &-23.08 & 1.74 &0.54 &0.24 &0.29 & 51.7 & 51.8 &-22.09\\ A2799 &22.79 & 16.32 &6.10 & --- & --- &-23.73 & 0.00 &-23.73 & --- &0.63 &0.22 &0.25 
& 42.8 & 32.0 &-22.54\\ A2800 &22.38 & 14.39 &4.42 & --- & --- &-23.70 & 0.00 &-23.70 & --- &0.62 &0.25 &0.42 & 63.4 & 75.7 &-22.71\\ A2806 &19.85 & 3.12 &3.32 &21.11 & 7.20 &-22.59 &-21.86 &-23.04 & 1.97 &0.46 &0.16 &0.20 & -62.5 & -58.7 &-22.60\\ A2814 &22.20 & 12.53 &5.10 & --- & --- &-23.73 & 0.00 &-23.73 & --- &0.59 &0.21 &0.26 & -34.6 & -31.0 &-22.78\\ A2819 &20.27 & 5.19 &2.62 & --- & --- &-23.36 & 0.00 &-23.36 & --- &0.63 &0.24 &0.18 & 42.4 & 35.2 &-22.62\\ A2824 &21.68 & 7.93 &5.26 & --- & --- &-23.29 & 0.00 &-23.29 & --- &0.45 &0.14 &0.03 & 65.4 & 30.1 &-22.51\\ A2836 &21.80 & 6.84 &2.75 & --- & --- &-22.46 & 0.00 &-22.46 & --- &0.49 &0.07 &0.10 & -53.4 & -37.0 &-22.12\\ A2841 &21.11 & 7.02 &4.93 &22.16 &20.23 &-23.46 &-23.21 &-24.10 & 1.26 &0.63 &0.26 &0.36 & -14.2 & -10.8 &-22.87\\ A2854 &21.39 & 4.80 &5.59 &22.11 &24.50 &-22.41 &-23.66 &-23.96 & 0.32 &0.65 &0.18 &0.50 & -2.6 & -15.7 &-22.28\\ A2859 &20.74 & 4.48 &5.29 &22.48 &15.90 &-22.91 &-22.37 &-23.43 & 1.64 &0.53 &0.15 &0.23 & -89.4 & -77.9 &-22.92\\ A2864 &18.86 & 1.55 &5.08 &21.75 &10.20 &-22.48 &-22.16 &-23.08 & 1.34 &0.55 &0.17 &0.26 & 28.6 & 34.4 &-22.17\\ A2870 &22.96 & 22.83 &4.63 & --- & --- &-23.97 & 0.00 &-23.97 & --- &0.70 &0.27 &0.24 & 10.5 & 11.6 &-22.72\\ A2877 &19.91 & 5.63 &3.68 &21.04 &19.90 &-23.63 &-23.89 &-24.52 & 0.78 &0.68 &0.14 &0.28 & -69.5 & -92.9 &-23.44\\ A2881 &19.41 & 2.00 &2.67 &21.66 & 8.70 &-22.12 &-21.88 &-22.76 & 1.26 &0.46 &0.16 &0.15 & 46.4 & 65.0 &-22.22\\ A2889 &19.88 & 1.44 &1.01 &22.05 & 5.70 &-20.29 &-20.41 &-21.10 & 0.90 &0.34 &0.42 &0.38 & -44.4 & -41.4 &-20.49\\ A2896 &18.95 & 2.21 &3.68 &21.01 & 8.50 &-22.71 &-22.23 &-23.25 & 1.56 &0.40 &0.27 &0.33 & -62.6 & -65.8 &-22.44\\ A2911 &19.64 & 2.34 &2.82 &22.01 &11.90 &-21.72 &-21.67 &-22.45 & 1.04 &0.47 &0.17 &0.23 & -18.4 & 14.5 &-21.78\\ A2923 &22.24 & 13.42 &4.15 & --- & --- &-23.69 & 0.00 &-23.69 & --- &0.66 &0.03 &0.29 & 66.1 & 75.8 &-22.89\\ A2954 &20.76 & 3.80 &1.68 & --- & --- &-22.04 & 0.00 &-22.04 
& --- &0.32 &0.18 &0.26 & 27.3 & 33.6 &-21.73\\ A2955 &19.71 & 2.58 &5.21 &21.53 &19.00 &-22.92 &-23.89 &-24.27 & 0.41 &0.74 &0.11 &0.49 & 78.6 & 67.6 &-22.84\\ A2992 &21.38 & 7.57 &4.69 & --- & --- &-23.31 & 0.00 &-23.31 & --- &0.43 &0.29 &0.30 & -59.6 & -53.9 &-22.51\\ A3004 &22.51 & 13.82 &5.24 & --- & --- &-23.58 & 0.00 &-23.58 & --- &0.63 &0.12 &0.13 & -54.9 & 37.2 &-22.59\\ A3009 &23.29 & 32.22 &3.58 & --- & --- &-24.43 & 0.00 &-24.43 & --- &0.88 &0.16 &0.44 & -71.3 & -74.5 &-23.02\\ A3027 &23.36 & 20.33 &5.75 &23.01 &32.60 &-23.67 &-23.46 &-24.32 & 1.22 &0.69 &0.26 &0.38 & 84.0 & 85.7 &-22.62\\ A3047 &21.99 & 8.22 &4.55 &21.93 &23.03 &-23.01 &-23.84 &-24.26 & 0.46 &0.81 &0.22 &0.45 & -59.4 & -48.3 &-22.80\\ A3074 &23.73 & 45.61 &4.33 & --- & --- &-24.88 & 0.00 &-24.88 & --- &0.84 &0.32 &0.43 & 35.2 & 33.8 &-23.04\\ A3077 &22.53 & 11.53 &4.55 & --- & --- &-23.17 & 0.00 &-23.17 & --- &0.50 &0.46 &0.49 & 74.3 & 67.3 &-22.08\\ A3078 &23.86 & 38.66 &4.83 & --- & --- &-24.45 & 0.00 &-24.45 & --- &0.81 &0.16 &0.32 & 30.6 & 16.0 &-22.90\\ A3089 &20.44 & 3.42 &4.65 &21.50 &10.40 &-22.55 &-22.43 &-23.24 & 1.12 &0.61 &0.17 &0.29 & 76.4 & 79.6 &-22.41\\ A3093 &22.00 & 10.57 &3.44 &21.64 &18.40 &-23.34 &-23.60 &-24.23 & 0.79 &0.81 &0.28 &0.46 & 20.3 & 18.8 &-22.89\\ A3094 &22.38 & 15.17 &5.24 & --- & --- &-23.92 & 0.00 &-23.92 & --- &0.69 &0.16 &0.17 & 34.3 & 52.3 &-22.84\\ A3095 &19.63 & 2.88 &3.11 &21.50 &11.20 &-22.77 &-22.59 &-23.43 & 1.18 &0.48 &0.15 &0.31 & -5.3 & -16.0 &-22.75\\ A3098 &26.40 &186.16 &6.90 & --- & --- &-25.55 & 0.00 &-25.55 & --- &0.84 &0.24 &0.48 & -40.0 & -39.4 &-22.80\\ A3100 &21.92 & 10.00 &2.11 & --- & --- &-22.97 & 0.00 &-22.97 & --- &0.73 &0.32 &0.14 & 67.0 & 70.4 &-22.33\\ A3104 &18.96 & 2.03 &3.22 &21.38 &21.04 &-22.73 &-24.11 &-24.38 & 0.28 &0.66 &0.25 &0.45 & 63.0 & 68.7 &-22.77\\ A3107 &20.72 & 4.07 &3.28 &22.92 &25.49 &-22.47 &-22.96 &-23.50 & 0.63 &0.54 &0.19 &0.18 & -30.2 & -44.7 &-22.21\\ A3108 &22.79 & 12.73 &4.67 & --- & --- 
&-23.04 & 0.00 &-23.04 & --- &0.58 &0.25 &0.36 & 70.5 & 65.7 &-22.08\\ A3109 &24.03 & 47.17 &6.41 & --- & --- &-24.81 & 0.00 &-24.81 & --- &0.71 &0.33 &0.16 & 16.0 & 22.2 &-22.89\\ A3110 &20.30 & 6.54 &1.89 &20.87 &12.80 &-23.67 &-23.55 &-24.36 & 1.11 &0.61 &0.50 &0.39 & -2.1 & -2.1 &-23.21\\ A3111 &21.72 & 7.84 &5.18 &21.70 &16.70 &-23.18 &-23.31 &-24.00 & 0.89 &0.75 &0.12 &0.14 & 33.2 & 62.9 &-22.87\\ A3112 &24.95 & 71.55 &6.54 &22.01 &33.50 &-24.87 &-24.50 &-25.45 & 1.40 &0.88 &0.33 &0.59 & 9.2 & 6.7 &-23.06\\ A3120 &21.67 & 8.16 &4.18 &22.01 &24.50 &-23.03 &-23.67 &-24.15 & 0.56 &0.79 &0.27 &0.48 & -26.5 & -37.1 &-22.57\\ A3122 &20.51 & 5.83 &5.78 &22.82 &27.10 &-23.76 &-23.20 &-24.27 & 1.68 &0.60 &0.27 &0.39 & 38.0 & 61.4 &-22.96\\ A3123 &18.69 & 1.70 &3.60 &20.69 & 8.10 &-22.63 &-22.68 &-23.41 & 0.95 &0.54 &0.33 &0.34 & -81.9 & -64.7 &-22.65\\ A3125 &19.83 & 2.86 &3.04 &21.83 & 9.70 &-22.52 &-21.92 &-23.01 & 1.73 &0.49 &0.07 &0.13 & -2.9 & -34.0 &-22.46\\ A3128 &23.58 & 32.96 &6.10 & --- & --- &-24.45 & 0.00 &-24.45 & --- &0.67 &0.19 &0.20 & 11.3 & 25.5 &-22.99\\ A3133 &24.37 & 25.14 &6.80 & --- & --- &-23.18 & 0.00 &-23.18 & --- &0.59 &0.25 &0.36 & -26.8 & -29.0 &-21.82\\ A3135 &22.71 & 14.77 &5.88 & --- & --- &-23.56 & 0.00 &-23.56 & --- &0.56 &0.13 &0.27 & 33.2 & 51.3 &-22.57\\ A3142 &20.80 & 5.23 &3.32 &21.92 &17.64 &-22.94 &-23.16 &-23.81 & 0.81 &0.64 &0.21 &0.30 & -12.1 & -21.4 &-22.71\\ A3144 &22.52 & 11.20 &7.35 & --- & --- &-23.20 & 0.00 &-23.20 & --- &0.48 &0.14 &0.12 & -69.1 & -26.7 &-22.38\\ A3151 &18.38 & 1.22 &2.70 &21.27 &11.10 &-22.09 &-22.81 &-23.27 & 0.52 &0.70 &0.09 &0.28 & -67.3 & -17.0 &-22.33\\ A3152 &23.10 & 20.44 &6.29 & --- & --- &-24.05 & 0.00 &-24.05 & --- &0.56 &0.32 &0.34 & 0.9 & 2.6 &-22.59\\ A3153 &23.84 & 25.55 &5.78 & --- & --- &-23.85 & 0.00 &-23.85 & --- &0.71 &0.16 &0.39 & 87.1 & 92.6 &-22.56\\ A3158 &23.45 & 30.76 &6.02 & --- & --- &-24.43 & 0.00 &-24.43 & --- &0.73 &0.11 &0.14 & 63.3 & 59.7 &-23.06\\ A3164 &23.18 & 20.63 
&7.04 & --- & --- &-23.91 & 0.00 &-23.91 & --- &0.59 &0.10 &0.20 & -59.8 & 30.3 &-22.94\\ A3188 &19.77 & 2.00 &6.02 & --- & --- &-22.19 & 0.00 &-22.19 & --- &0.32 &0.20 &0.28 & -50.7 & -66.8 &-21.40\\ A3193 &23.10 & 19.45 &6.49 & --- & --- &-23.72 & 0.00 &-23.72 & --- &0.54 &0.10 &0.24 & -82.5 & 87.3 &-22.57\\ A3195 &23.28 & 22.96 &5.99 & --- & --- &-24.02 & 0.00 &-24.02 & --- &0.67 &0.32 &0.39 & 64.0 & 60.6 &-22.72\\ A3223 &20.01 & 4.49 &1.84 &21.61 &16.00 &-23.06 &-23.23 &-23.90 & 0.85 &0.56 &0.26 &0.26 & 0.6 & -6.8 &-22.98\\ A3225 &19.61 & 2.40 &0.93 &20.28 & 7.50 &-21.74 &-22.88 &-23.21 & 0.35 &0.64 &0.13 &0.16 & -11.3 & -8.0 &-22.68\\ A3229 &20.06 & 2.40 &2.07 & --- & --- &-21.54 & 0.00 &-21.54 & --- &0.24 &0.46 &0.48 & -78.7 & -85.9 &-21.07\\ A3231 &22.28 & 7.98 &5.56 & --- & --- &-22.69 & 0.00 &-22.69 & --- &0.49 &0.07 &0.15 & 84.4 & 84.5 &-21.90\\ A3234 &21.60 & 7.72 &4.93 & --- & --- &-23.41 & 0.00 &-23.41 & --- &0.49 &0.20 &0.13 & 27.9 & 33.3 &-22.55\\ A3266 &20.23 & 4.04 &5.59 &21.02 &26.10 &-23.20 &-24.89 &-25.09 & 0.21 &0.83 &0.12 &0.31 & 77.4 & 63.4 &-23.19\\ A3301 &21.83 & 13.08 &3.11 &23.26 &80.22 &-23.81 &-25.06 &-25.36 & 0.32 &0.81 &0.07 &0.37 & -45.8 & -42.7 &-23.15\\ A3332 &20.94 & 5.94 &4.12 &23.16 &36.36 &-23.24 &-23.55 &-24.16 & 0.75 &0.52 &0.12 &0.44 & 48.5 & 47.7 &-22.83\\ A3336 &22.95 & 20.24 &4.12 &23.28 &42.50 &-23.90 &-23.77 &-24.59 & 1.12 &0.82 &0.22 &0.34 & 19.5 & 17.8 &-22.82\\ A3341 &21.09 & 4.95 &6.06 &21.71 &16.80 &-22.72 &-23.14 &-23.70 & 0.68 &0.72 &0.21 &0.33 & 11.8 & 21.7 &-22.44\\ A3351 &21.63 & 7.82 &6.25 & --- & --- &-23.43 & 0.00 &-23.43 & --- &0.52 &0.10 &0.06 & -34.1 & 71.5 &-22.58\\ A3354 &20.15 & 3.80 &2.38 &22.75 &22.40 &-22.69 &-22.82 &-23.51 & 0.89 &0.47 &0.09 &0.15 & 3.9 & 1.2 &-22.63\\ A3367 &21.91 & 10.08 &3.46 & --- & --- &-23.19 & 0.00 &-23.19 & --- &0.39 &0.27 &0.35 & 52.1 & 54.9 &-22.24\\ A3374 &14.82 & 0.24 &2.79 &19.08 & 4.00 &-22.02 &-22.67 &-23.14 & 0.55 &0.46 &0.44 &0.40 & -70.5 & -70.6 &-22.17\\ A3376 
&20.91 & 7.24 &2.87 &22.28 &30.50 &-23.37 &-23.91 &-24.43 & 0.61 &0.68 &0.28 &0.44 & 64.4 & 67.9 &-22.87\\ A3380 &21.71 & 7.91 &5.38 &23.40 &56.80 &-23.12 &-24.16 &-24.51 & 0.38 &0.58 &0.07 &0.41 & 81.2 & 69.2 &-22.56\\ A3381 &24.08 & 29.06 &6.45 & --- & --- &-23.60 & 0.00 &-23.60 & --- &0.64 &0.11 &0.18 & 36.0 & 29.9 &-22.28\\ A3389 &21.84 & 13.47 &3.07 & --- & --- &-23.77 & 0.00 &-23.77 & --- &0.69 &0.18 &0.16 & 60.3 & 19.8 &-22.88\\ A3390 &21.20 & 5.15 &5.65 &21.91 &15.90 &-22.66 &-22.83 &-23.50 & 0.86 &0.65 &0.04 &0.12 & -79.5 & -75.0 &-22.55\\ A3391 &22.44 & 22.42 &3.31 & --- & --- &-24.41 & 0.00 &-24.41 & --- &0.74 &0.24 &0.38 & 76.1 & 45.6 &-23.15\\ A3395 &26.19 &202.47 &5.78 & --- & --- &-25.71 & 0.00 &-25.71 & --- &0.88 &0.38 &0.57 & -56.2 & -63.4 &-22.56\\ A3404 &24.18 & 56.46 &3.28 & --- & --- &-25.09 & 0.00 &-25.09 & --- &1.14 &0.29 &0.39 & 66.6 & 70.6 &-22.94\\ A3407 &25.64 & 99.66 &7.87 & --- & --- &-24.90 & 0.00 &-24.90 & --- &0.74 &0.08 &0.02 & -88.9 & 34.5 &-22.84\\ A3408 &22.46 & 16.24 &5.78 &23.25 &46.90 &-23.93 &-23.85 &-24.64 & 1.07 &0.64 &0.08 &0.33 & -43.7 & -39.8 &-23.02\\ A3420 &20.70 & 4.42 &4.85 &23.22 &19.60 &-22.86 &-22.08 &-23.29 & 2.06 &0.44 &0.20 &0.22 & 38.7 & 38.8 &-22.42\\ A3429 &24.66 & 53.10 &7.04 & --- & --- &-24.43 & 0.00 &-24.43 & --- &0.70 &0.21 &0.47 & 39.7 & 38.8 &-22.67\\ A3490 &21.28 & 6.22 &4.81 &22.69 &26.70 &-23.17 &-23.43 &-24.06 & 0.79 &0.67 &0.13 &0.26 & 37.5 & 57.1 &-22.73\\ A3497 &24.02 & 32.71 &5.46 & --- & --- &-23.97 & 0.00 &-23.97 & --- &0.75 &0.18 &0.29 & 50.6 & 43.8 &-22.47\\ A3528 &22.24 & 14.03 &5.15 &22.33 &36.20 &-23.91 &-24.36 &-24.91 & 0.66 &0.68 &0.17 &0.47 & 0.2 & -5.8 &-23.16\\ A3530 &22.94 & 17.98 &5.71 &21.47 &25.30 &-23.80 &-24.43 &-24.91 & 0.56 &0.80 &0.23 &0.45 & -54.7 & -34.6 &-23.05\\ A3531 &22.03 & 4.76 &6.10 &21.98 &17.50 &-21.76 &-23.02 &-23.31 & 0.31 &0.85 &0.37 &0.51 & 15.9 & 15.3 &-21.71\\ A3532 &22.33 & 11.85 &5.65 &22.40 &42.40 &-23.51 &-24.64 &-24.97 & 0.36 &0.76 &0.15 &0.39 & 64.5 
& 79.2 &-22.89\\ A3537 &22.17 & 17.15 &5.52 & --- & --- &-23.84 & 0.00 &-23.84 & --- &0.57 &0.31 &0.27 & -12.5 & -26.0 &-22.70\\ A3549 &21.91 & 8.42 &5.26 & --- & --- &-23.05 & 0.00 &-23.05 & --- &0.45 &0.29 &0.26 & 3.1 & 3.0 &-22.14\\ A3549 &23.03 & 11.69 &5.81 & --- & --- &-22.78 & 0.00 &-22.78 & --- &0.58 &0.10 &0.09 & -68.2 & 69.6 &-21.95\\ A3552 &22.27 & 20.56 &6.02 & --- & --- &-24.70 & 0.00 &-24.70 & --- &0.60 &0.08 &0.06 & 22.3 & 87.4 &-22.58\\ A3553 &22.33 & 9.53 &4.00 & --- & --- &-22.81 & 0.00 &-22.81 & --- &0.56 &0.04 &0.09 & -88.9 & 47.3 &-22.22\\ A3554 &23.55 & 20.03 &5.78 &21.46 &20.20 &-23.39 &-23.91 &-24.43 & 0.62 &0.82 &0.24 &0.34 & -35.6 & -49.2 &-22.68\\ A3556 &22.15 & 18.39 &4.05 & --- & --- &-24.42 & 0.00 &-24.42 & --- &0.68 &0.32 &0.14 & -35.9 & -40.1 &-23.18\\ A3559 &24.73 &112.07 &4.63 & --- & --- &-25.76 & 0.00 &-25.76 & --- &0.99 &0.13 &0.35 & -22.8 & -21.0 &-23.26\\ A3559 &18.15 & 1.61 &4.88 &20.60 &14.10 &-23.22 &-23.98 &-24.42 & 0.50 &0.61 &0.19 &0.42 & 28.4 & 47.3 &-23.18\\ A3560 &20.62 & 5.86 &3.57 & --- & --- &-22.54 & 0.00 &-22.54 & --- &0.46 &0.21 &0.07 & 72.8 & -45.3 &-22.11\\ A3562 &22.28 & 15.39 &2.92 &22.96 &72.90 &-23.74 &-25.21 &-25.46 & 0.26 &0.80 &0.20 &0.57 & 89.7 & 84.0 &-22.88\\ A3564 &21.93 & 7.10 &5.78 & --- & --- &-22.78 & 0.00 &-22.78 & --- &0.43 &0.13 &0.05 & -50.0 & -41.9 &-22.20\\ A3564 &21.95 & 10.15 &4.13 & --- & --- &-23.35 & 0.00 &-23.35 & --- &0.54 &0.14 &0.11 & -0.8 & -12.5 &-22.62\\ A3565 &23.47 & 36.59 &7.75 & --- & --- &-24.10 & 0.00 &-24.10 & --- &0.64 &0.09 &0.05 & 60.3 & 8.7 &-22.76\\ A3570 &20.62 & 4.88 &4.22 & --- & --- &-22.99 & 0.00 &-22.99 & --- &0.39 &0.15 &0.12 & 5.2 & 73.2 &-22.50\\ A3571 &25.51 &232.87 &4.57 & --- & --- &-26.54 & 0.00 &-26.54 & --- &1.16 &0.24 &0.64 & 1.7 & 3.8 &-23.14\\ A3572 &21.12 & 6.30 &5.68 & --- & --- &-23.22 & 0.00 &-23.22 & --- &0.40 &0.07 &0.12 & 30.0 & -64.2 &-22.61\\ A3574 &22.15 & 11.42 &6.17 &20.69 &17.80 &-22.94 &-23.74 &-24.17 & 0.48 &0.76 &0.14 &0.45 & 68.6 & 
67.2 &-22.64\\ A3575 &20.94 & 3.70 &5.99 & --- & --- &-22.24 & 0.00 &-22.24 & --- &0.32 &0.07 &0.13 & -66.8 & -63.9 &-21.78\\ A3581 &25.38 & 74.80 &8.20 & --- & --- &-24.22 & 0.00 &-24.22 & --- &0.65 &0.13 &0.34 & -65.4 & -70.8 &-22.41\\ A3605 &21.58 & 10.17 &1.46 &23.35 &56.40 &-23.20 &-24.27 &-24.62 & 0.37 &0.90 &0.10 &0.22 & 59.0 & 51.9 &-22.80\\ A3626 &22.10 & 10.36 &1.05 & --- & --- &-22.55 & 0.00 &-22.55 & --- &0.78 &0.31 &0.41 & 19.5 & 32.7 &-21.97\\ A3631 &21.57 & 6.64 &0.81 & --- & --- &-22.01 & 0.00 &-22.01 & --- &0.49 &0.31 &0.25 & -50.9 & -52.7 &-21.66\\ A3651 &22.29 & 19.27 &2.46 & --- & --- &-24.10 & 0.00 &-24.10 & --- &0.91 &0.13 &0.19 & -34.1 & -34.8 &-23.08\\ A3656 &24.07 & 41.74 &8.00 & --- & --- &-24.45 & 0.00 &-24.45 & --- &0.61 &0.17 &0.18 & -55.5 & -67.1 &-22.84\\ A3667 &20.88 & 7.19 &1.47 &22.01 &32.00 &-23.09 &-24.32 &-24.62 & 0.32 &0.90 &0.17 &0.36 & 67.9 & -46.1 &-23.03\\ A3677 &22.02 & 8.90 &4.78 & --- & --- &-22.96 & 0.00 &-22.96 & --- &0.47 &0.23 &0.31 & -45.7 & -42.1 &-22.11\\ A3687 &25.22 & 71.69 &6.99 & --- & --- &-24.63 & 0.00 &-24.63 & --- &0.72 &0.22 &0.38 & -46.5 & -44.2 &-22.57\\ A3698 &21.89 & 10.61 &6.37 & --- & --- &-23.16 & 0.00 &-23.16 & --- &0.42 &0.31 &0.44 & 8.4 & 5.4 &-22.13\\ A3703 &23.23 & 19.73 &6.13 &23.09 &41.10 &-23.73 &-23.84 &-24.54 & 0.90 &0.70 &0.22 &0.47 & -87.8 & 79.9 &-22.59\\ A3716 &22.34 & 11.69 &5.88 &21.57 &20.00 &-23.39 &-23.74 &-24.33 & 0.73 &0.78 &0.05 &0.32 & 63.5 & 56.7 &-22.91\\ A3731 &20.21 & 4.32 &3.04 &22.32 &23.17 &-22.99 &-23.28 &-23.90 & 0.77 &0.56 &0.11 &0.30 & -13.0 & -0.3 &-22.80\\ A3733 &20.21 & 2.79 &1.22 &21.23 &11.40 &-21.47 &-22.71 &-23.01 & 0.32 &0.64 &0.13 &0.20 & 7.3 & -2.2 &-22.16\\ A3736 &24.14 & 65.96 &5.85 & --- & --- &-25.33 & 0.00 &-25.33 & --- &0.79 &0.31 &0.48 & 24.4 & 23.9 &-23.24\\ A3742 &21.57 & 8.81 &7.04 & --- & --- &-23.01 & 0.00 &-23.01 & --- &0.45 &0.28 &0.14 & -46.5 & -39.1 &-22.15\\ A3744 &22.33 & 12.68 &6.37 & --- & --- &-23.48 & 0.00 &-23.48 & --- &0.55 &0.02 
&0.11 & 39.8 & 12.8 &-22.65\\ A3747 &22.21 & 13.72 &4.35 & --- & --- &-23.61 & 0.00 &-23.61 & --- &0.54 &0.35 &0.41 & 38.7 & 25.9 &-22.52\\ A3756 &22.03 & 10.94 &5.18 & --- & --- &-23.49 & 0.00 &-23.49 & --- &0.50 &0.22 &0.31 & -37.2 & -43.4 &-22.60\\ A3764 &22.36 & 12.13 &6.25 & --- & --- &-23.60 & 0.00 &-23.60 & --- &0.56 &0.12 &0.23 & -68.2 & -31.7 &-22.70\\ A3771 &19.25 & 2.00 &2.44 &21.84 &14.90 &-22.28 &-22.92 &-23.40 & 0.56 &0.58 &0.16 &0.26 & 21.6 & 10.3 &-22.44\\ A3775 &18.69 & 1.56 &2.56 &20.44 & 8.30 &-22.44 &-23.16 &-23.61 & 0.51 &0.61 &0.20 &0.04 & -4.3 & 84.7 &-22.63\\ A3781 &22.30 & 7.13 &6.10 & --- & --- &-22.40 & 0.00 &-22.40 & --- &0.56 &0.05 &0.24 & 72.8 & 39.8 &-21.74\\ A3782 &22.70 & 16.45 &5.56 &23.07 &36.40 &-23.75 &-23.54 &-24.41 & 1.22 &0.70 &0.08 &0.37 & -20.2 & -52.9 &-22.89\\ A3785 &22.46 & 16.31 &5.62 & --- & --- &-24.07 & 0.00 &-24.07 & --- &0.61 &0.15 &0.14 & 5.2 & -13.3 &-23.05\\ A3796 &21.98 & 8.89 &5.26 &22.47 &23.20 &-23.19 &-23.24 &-23.97 & 0.95 &0.65 &0.22 &0.32 & -27.8 & -43.8 &-22.60\\ A3806 &21.05 & 8.54 &1.44 &22.26 &31.00 &-23.36 &-24.08 &-24.53 & 0.52 &0.90 &0.14 &0.21 & 67.4 & 62.2 &-23.12\\ A3809 &26.22 &144.88 &7.30 & --- & --- &-25.16 & 0.00 &-25.16 & --- &0.81 &0.13 &0.44 & 87.0 & 88.9 &-22.70\\ A3822 &24.77 & 66.64 &6.41 & --- & --- &-24.87 & 0.00 &-24.87 & --- &0.68 &0.30 &0.37 & -22.8 & -19.0 &-22.87\\ A3825 &17.26 & 0.61 &4.05 &20.68 & 4.40 &-21.79 &-21.26 &-22.31 & 1.62 &0.28 &0.34 &0.26 & -46.8 & -67.6 &-21.53\\ A3826 &22.59 & 26.06 &3.19 & --- & --- &-24.65 & 0.00 &-24.65 & --- &0.81 &0.35 &0.38 & -22.6 & -23.0 &-23.17\\ A3836 &22.31 & 15.43 &4.98 &22.58 &64.40 &-24.16 &-25.49 &-25.77 & 0.29 &0.80 &0.15 &0.43 & -45.5 & -23.7 &-23.30\\ A3837 &23.70 & 21.38 &6.67 & --- & --- &-23.56 & 0.00 &-23.56 & --- &0.58 &0.14 &0.16 & 7.2 & 0.7 &-22.45\\ A3844 &20.25 & 2.77 &4.37 &24.30 &56.02 &-22.28 &-23.32 &-23.67 & 0.38 &0.41 &0.13 &0.21 & 57.3 & 107.8 &-21.75\\ A3851 &20.01 & 3.52 &1.16 &20.95 &11.90 &-22.29 &-23.23 
&-23.61 & 0.42 &0.56 &0.28 &0.27 & -88.6 & 78.1 &-22.68\\ A3864 &26.28 &226.18 &6.67 & --- & --- &-26.15 & 0.00 &-26.15 & --- &0.88 &0.25 &0.58 & -45.9 & -44.6 &-23.18\\ A3869 &21.94 & 10.51 &6.41 & --- & --- &-23.50 & 0.00 &-23.50 & --- &0.45 &0.19 &0.27 & -51.1 & -52.5 &-22.61\\ A3879 &19.19 & 2.12 &3.89 &21.61 &11.50 &-22.67 &-22.55 &-23.36 & 1.13 &0.56 &0.23 &0.25 & -52.4 & -37.5 &-22.42\\ A3880 &23.46 & 31.92 &3.83 & --- & --- &-24.24 & 0.00 &-24.24 & --- &0.89 &0.05 &0.38 & 20.9 & -37.4 &-22.85\\ A3886 &22.04 & 11.71 &5.05 & --- & --- &-23.69 & 0.00 &-23.69 & --- &0.54 &0.20 &0.15 & -2.7 & 5.1 &-22.74\\ A3895 &23.38 & 25.51 &3.91 & --- & --- &-23.85 & 0.00 &-23.85 & --- &0.75 &0.23 &0.36 & 2.5 & -6.2 &-22.61\\ A3912 &21.36 & 7.55 &0.99 &21.97 &16.90 &-22.57 &-23.02 &-23.57 & 0.66 &0.91 &0.17 &0.26 & 34.8 & 15.8 &-22.62\\ A3915 &22.41 & 12.82 &6.45 &23.36 &29.33 &-23.74 &-22.95 &-24.17 & 2.08 &0.71 &0.17 &0.29 & 86.9 & 87.3 &-22.85\\ A3959 &17.17 & 0.62 &3.18 &20.67 & 6.00 &-21.92 &-22.07 &-22.75 & 0.87 &0.46 &0.42 &0.45 & 22.6 & 20.3 &-21.72\\ A3969 &21.92 & 11.74 &4.78 & --- & --- &-24.00 & 0.00 &-24.00 & --- &0.67 &0.13 &0.08 & -24.5 &-106.5 &-23.02\\ A3985 &23.46 & 38.82 &3.07 & --- & --- &-24.74 & 0.00 &-24.74 & --- &0.92 &0.44 &0.61 & 30.5 & 30.4 &-22.55\\ A4008 &22.76 & 21.08 &4.29 & --- & --- &-24.10 & 0.00 &-24.10 & --- &0.71 &0.22 &0.34 & 57.3 & 51.8 &-22.91\\ A4038 &23.76 & 31.17 &7.41 & --- & --- &-23.96 & 0.00 &-23.96 & --- &0.57 &0.15 &0.36 & -39.3 & -47.1 &-22.60\\ A4049 &21.75 & 9.04 &5.32 &23.39 &43.40 &-23.26 &-23.48 &-24.12 & 0.81 &0.57 &0.08 &0.12 & 66.6 & 47.8 &-22.71\\ A4053 &21.81 & 9.74 &7.35 & --- & --- &-23.63 & 0.00 &-23.63 & --- &0.51 &0.10 &0.12 & -85.0 & -71.0 &-22.71\\ A4059 &24.17 & 77.05 &4.41 & --- & --- &-25.49 & 0.00 &-25.49 & --- &0.98 &0.22 &0.41 & -22.1 & -23.2 &-23.21\\ \enddata \tablecomments{Col. (1), Abell Cluster; col (2), effective surface magnitude; col. (3), effective radius (kpc); col. (4), $n$ parameter; col. 
(5), central surface magnitude; col. (6), scale length (kpc); col. (7), S\'ersic absolute magnitude; col. (8), exponential absolute magnitude; col. (9), total absolute magnitude; col. (10), S\'ersic/exponential ratio; col. (11), $\alpha$ parameter; col. (12), inner ellipticity; col. (13), outer ellipticity; col. (14), inner position angle; col. (15), outer position angle; col. (16), Metric absolute magnitude.} \end{deluxetable} \clearpage \begin{deluxetable}{lccccccccc} \tabletypesize{\scriptsize} \tablecaption{BCGs Photometrical Parameters (Hopkins model) \label{tbl-2}} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{$\mu_e$} & \colhead{$r_e$} & \colhead{$n$} & \colhead{$\mu_0$} & \colhead{$r_0$} & \colhead{$M_{Sersic}$} & \colhead{$M_{exp}$} & \colhead{$M_{T}$} & \colhead{$e/S$} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} } \startdata A0085 & 23.72 & 47.0 & 0.6 & 21.42 & 14.3 & -23.91 & -23.84 & -24.63 & 0.935 \\ A0133 & 23.23 & 39.0 & 1.2 & 21.04 & 6.9 & -24.29 & -22.66 & -24.51 & 0.222 \\ A0150 & 24.34 & 49.2 & 3.5 & 20.84 & 3.3 & -24.22 & -21.25 & -24.29 & 0.065 \\ A0152 & 22.77 & 20.5 & 1.4 & 19.63 & 1.7 & -23.54 & -21.07 & -23.65 & 0.102 \\ A0193 & 25.18 & 106.4 & 3.5 & 18.99 & 1.9 & -25.04 & -21.86 & -25.10 & 0.053 \\ A0194 & 22.05 & 14.2 & 1.7 & 18.39 & 1.1 & -23.31 & -21.09 & -23.44 & 0.130 \\ A0208 & 25.42 & 92.5 & 4.6 & 20.24 & 2.5 & -24.87 & -21.43 & -24.91 & 0.042 \\ A0257 & 22.64 & 18.5 & 1.9 & 19.72 & 2.1 & -23.68 & -21.58 & -23.82 & 0.144 \\ A0260 & 24.83 & 69.3 & 5.7 & 19.43 & 1.6 & -24.66 & -21.04 & -24.70 & 0.036 \\ A0262 & 23.41 & 34.9 & 1.8 & 20.07 & 2.4 & -23.69 & -20.92 & -23.77 & 0.078 \\ A0268 & 23.80 & 17.9 & 4.0 & 20.18 & 1.9 & -22.77 & -20.87 & -22.95 & 0.173 \\ A0279 & 24.02 & 52.5 & 3.1 & 20.46 & 3.3 & -24.73 & -21.70 & -24.79 & 0.061 \\ A0295 & 23.15 & 25.6 & 3.5 & 19.36 & 1.4 & -23.95 & -20.76 & -24.01 &
0.053 \\ A0311 & 25.52 & 146.2 & 3.6 & 20.73 & 3.6 & -25.48 & -21.57 & -25.51 & 0.027 \\ A0386 & 22.21 & 8.8 & 2.3 & 19.40 & 1.8 & -22.54 & -21.47 & -22.88 & 0.375 \\ A0397 & 23.88 & 37.9 & 2.9 & 19.39 & 2.0 & -23.98 & -21.50 & -24.09 & 0.101 \\ A0399 & 23.88 & 62.9 & 1.8 & 21.90 & 7.6 & -24.97 & -22.07 & -25.05 & 0.069 \\ A0404 & 23.08 & 21.0 & 1.5 & 20.35 & 3.7 & -23.26 & -22.01 & -23.56 & 0.316 \\ A0415 & 23.51 & 40.2 & 1.4 & 20.99 & 4.2 & -24.88 & -22.32 & -24.98 & 0.095 \\ A0498 & 25.12 & 73.3 & 3.8 & 20.83 & 2.6 & -24.48 & -20.85 & -24.52 & 0.035 \\ A0500 & 25.48 & 97.6 & 4.8 & 20.32 & 3.1 & -24.91 & -21.82 & -24.97 & 0.058 \\ A0539 & 24.12 & 39.2 & 3.8 & 19.37 & 1.7 & -23.94 & -21.24 & -24.03 & 0.083 \\ A0548 & 21.35 & 10.1 & 1.6 & 19.62 & 1.6 & -23.32 & -20.84 & -23.42 & 0.102 \\ A0553 & 24.88 & 65.0 & 4.5 & 20.60 & 3.2 & -24.58 & -21.57 & -24.65 & 0.062 \\ A0564 & 28.01 & 334.8 & 4.3 & 20.12 & 3.7 & -25.03 & -22.43 & -25.13 & 0.091 \\ A0564 & 27.66 & 267.4 & 4.0 & 20.12 & 3.7 & -24.86 & -22.43 & -24.97 & 0.107 \\ A0582 & 24.16 & 29.6 & 3.6 & 19.78 & 2.1 & -23.45 & -21.42 & -23.61 & 0.154 \\ A0671 & 23.68 & 51.2 & 2.9 & 19.72 & 3.1 & -24.87 & -22.19 & -24.96 & 0.085 \\ A0690 & 24.86 & 69.3 & 4.6 & 20.24 & 2.9 & -24.74 & -21.71 & -24.80 & 0.062 \\ A0757 & 23.71 & 19.1 & 3.4 & 20.67 & 2.9 & -22.80 & -21.10 & -23.00 & 0.210 \\ A0779 & 23.86 & 59.1 & 4.9 & 19.21 & 2.0 & -24.89 & -21.35 & -24.93 & 0.038 \\ A0780 & 23.55 & 37.2 & 3.1 & 19.28 & 1.6 & -24.36 & -21.22 & -24.42 & 0.055 \\ A0834 & 21.45 & 7.6 & 0.8 & 19.92 & 2.0 & -22.39 & -21.08 & -22.68 & 0.299 \\ A0841 & 24.10 & 44.4 & 3.3 & 19.95 & 2.8 & -24.29 & -21.86 & -24.40 & 0.106 \\ A0957 & 23.22 & 60.2 & 2.6 & 20.21 & 4.6 & -24.69 & -21.62 & -24.75 & 0.059 \\ A0970 & 26.34 & 97.5 & 6.1 & 20.98 & 2.5 & -24.04 & -20.52 & -24.08 & 0.039 \\ A0978 & 24.11 & 51.3 & 3.1 & 19.74 & 2.6 & -24.50 & -21.79 & -24.58 & 0.083 \\ A0979 & 24.11 & 36.7 & 5.0 & 20.74 & 2.2 & -24.01 & -20.50 & -24.05 & 0.039 \\ A0999 & 23.77 
& 28.4 & 6.1 & 19.73 & 1.9 & -23.71 & -20.90 & -23.79 & 0.075 \\ A1003 & 22.78 & 20.3 & 1.5 & 19.45 & 1.7 & -23.53 & -21.26 & -23.66 & 0.123 \\ A1016 & 23.26 & 17.5 & 5.9 & 19.57 & 1.5 & -23.16 & -20.65 & -23.26 & 0.099 \\ A1020 & 23.68 & 24.5 & 3.9 & 20.14 & 2.5 & -23.49 & -21.40 & -23.64 & 0.145 \\ A1066 & 25.96 & 159.9 & 2.4 & 21.16 & 7.2 & -25.05 & -22.70 & -25.17 & 0.115 \\ A1069 & 22.80 & 26.3 & 2.0 & 20.06 & 2.4 & -24.18 & -21.37 & -24.25 & 0.075 \\ A1100 & 25.24 & 74.8 & 6.3 & 20.61 & 3.0 & -24.52 & -21.22 & -24.57 & 0.048 \\ A1139 & 24.75 & 52.6 & 5.3 & 19.76 & 1.7 & -24.07 & -20.79 & -24.12 & 0.049 \\ A1142 & 24.61 & 52.3 & 5.1 & 19.82 & 2.0 & -24.12 & -20.96 & -24.18 & 0.055 \\ A1149 & 24.61 & 49.6 & 5.0 & 18.58 & 1.3 & -24.25 & -21.61 & -24.34 & 0.088 \\ A1155 & 24.10 & 31.3 & 3.5 & 20.41 & 2.6 & -23.61 & -21.30 & -23.73 & 0.120 \\ A1171 & 25.40 & 70.1 & 3.6 & 21.09 & 4.3 & -24.05 & -21.66 & -24.16 & 0.111 \\ A1185 & 23.56 & 29.8 & 4.2 & 21.01 & 2.6 & -23.86 & -20.43 & -23.91 & 0.042 \\ A1187 & 22.87 & 22.1 & 1.9 & 20.08 & 3.0 & -23.79 & -21.93 & -23.97 & 0.180 \\ A1203 & 22.88 & 20.5 & 3.7 & 21.09 & 1.8 & -23.94 & -19.82 & -23.97 & 0.022 \\ A1216 & 22.89 & 15.7 & 3.5 & 19.78 & 1.8 & -23.21 & -21.00 & -23.34 & 0.130 \\ A1228 & 23.88 & 25.1 & 5.7 & 19.34 & 1.5 & -23.34 & -20.89 & -23.45 & 0.105 \\ A1238 & 25.50 & 92.3 & 4.0 & 20.11 & 2.3 & -24.59 & -21.24 & -24.63 & 0.046 \\ A1257 & 21.74 & 6.9 & 1.4 & 18.32 & 1.1 & -21.96 & -21.20 & -22.40 & 0.495 \\ A1308 & 25.48 & 70.4 & 6.5 & 21.05 & 2.6 & -24.21 & -20.55 & -24.25 & 0.034 \\ A1314 & 24.79 & 77.1 & 4.0 & 19.18 & 2.2 & -24.66 & -21.86 & -24.73 & 0.076 \\ A1317 & 25.12 & 137.6 & 2.4 & 20.74 & 4.8 & -25.57 & -22.23 & -25.62 & 0.046 \\ A1424 & 21.93 & 14.6 & 1.7 & 19.19 & 1.9 & -23.77 & -21.87 & -23.95 & 0.172 \\ A1474 & 21.45 & 12.3 & 1.0 & 19.65 & 1.6 & -23.68 & -21.09 & -23.78 & 0.092 \\ A1507 & 22.83 & 22.4 & 1.5 & 19.88 & 2.7 & -23.68 & -21.86 & -23.87 & 0.187 \\ A1520 & 24.64 & 85.8 & 4.2 & 19.94 & 
2.8 & -25.31 & -21.86 & -25.36 & 0.042 \\ A1534 & 24.13 & 41.4 & 5.5 & 20.55 & 3.1 & -24.41 & -21.50 & -24.48 & 0.069 \\ A1569 & 21.95 & 17.1 & 1.5 & 19.36 & 2.0 & -24.04 & -21.76 & -24.17 & 0.122 \\ A1610 & 24.34 & 42.5 & 7.3 & 19.86 & 1.8 & -24.36 & -20.91 & -24.41 & 0.042 \\ A1648 & 23.72 & 43.6 & 1.0 & 20.55 & 5.2 & -24.06 & -22.62 & -24.32 & 0.266 \\ A1749 & 23.59 & 35.7 & 3.6 & 19.78 & 2.9 & -24.34 & -22.07 & -24.46 & 0.124 \\ A1773 & 23.91 & 47.2 & 2.3 & 20.27 & 3.4 & -24.47 & -22.00 & -24.58 & 0.102 \\ A1775 & 23.22 & 31.5 & 1.5 & 20.76 & 5.3 & -24.04 & -22.42 & -24.26 & 0.224 \\ A1795 & 27.15 & 354.7 & 6.2 & 20.98 & 3.5 & -26.09 & -21.33 & -26.11 & 0.012 \\ A1809 & 24.65 & 76.6 & 3.8 & 20.12 & 3.5 & -25.03 & -22.16 & -25.11 & 0.071 \\ A1831 & 23.96 & 63.7 & 3.0 & 20.23 & 2.7 & -25.23 & -21.54 & -25.27 & 0.033 \\ A1904 & 23.73 & 45.6 & 3.6 & 19.97 & 2.3 & -24.82 & -21.43 & -24.86 & 0.044 \\ A1964 & 24.18 & 44.1 & 2.1 & 20.01 & 2.8 & -23.97 & -21.75 & -24.10 & 0.130 \\ A1982 & 22.43 & 19.8 & 1.0 & 19.98 & 3.2 & -23.56 & -22.08 & -23.81 & 0.255 \\ A2022 & 24.80 & 65.6 & 3.7 & 19.72 & 2.5 & -24.45 & -21.81 & -24.54 & 0.088 \\ A2028 & 25.46 & 100.1 & 6.2 & 21.10 & 3.8 & -25.06 & -21.37 & -25.09 & 0.033 \\ A2040 & 27.66 & 561.3 & 5.0 & 19.87 & 2.2 & -26.34 & -21.29 & -26.35 & 0.010 \\ A2147 & 28.07 & 691.6 & 5.8 & 21.17 & 5.5 & -26.45 & -21.97 & -26.47 & 0.016 \\ A2151 & 22.94 & 25.4 & 2.1 & 19.55 & 1.9 & -23.77 & -21.20 & -23.87 & 0.093 \\ A2162 & 23.02 & 26.1 & 3.8 & 19.33 & 1.9 & -24.02 & -21.31 & -24.11 & 0.083 \\ A2197 & 22.23 & 23.8 & 2.6 & 19.21 & 2.6 & -24.37 & -22.11 & -24.50 & 0.125 \\ A2199 & 24.12 & 58.2 & 2.9 & 21.38 & 5.6 & -24.50 & -21.64 & -24.57 & 0.072 \\ A2309 & 23.92 & 31.3 & 4.0 & 19.46 & 1.9 & -23.72 & -21.39 & -23.84 & 0.117 \\ A2319 & 22.72 & 31.0 & 1.7 & 20.71 & 1.9 & -24.58 & -20.22 & -24.60 & 0.018 \\ A2331 & 24.38 & 51.1 & 3.2 & 20.32 & 1.8 & -24.35 & -20.52 & -24.38 & 0.029 \\ A2366 & 23.62 & 35.0 & 3.0 & 19.58 & 2.8 & -24.14 & 
-22.15 & -24.30 & 0.160 \\ A2372 & 25.34 & 64.3 & 7.2 & 20.68 & 3.7 & -24.22 & -21.67 & -24.32 & 0.095 \\ A2457 & 24.65 & 71.8 & 4.8 & 20.76 & 3.8 & -24.93 & -21.66 & -24.98 & 0.049 \\ A2462 & 25.41 & 86.2 & 4.6 & 19.83 & 2.6 & -24.71 & -21.90 & -24.79 & 0.075 \\ A2480 & 26.55 & 144.4 & 7.5 & 19.60 & 2.2 & -24.92 & -21.73 & -24.98 & 0.053 \\ A2492 & 23.84 & 30.6 & 4.1 & 20.70 & 2.8 & -23.95 & -21.19 & -24.03 & 0.079 \\ A2524 & 23.20 & 25.6 & 2.3 & 20.34 & 2.9 & -23.95 & -21.68 & -24.07 & 0.124 \\ A2559 & 23.14 & 41.4 & 2.2 & 20.31 & 2.5 & -24.92 & -21.29 & -24.96 & 0.035 \\ A2572 & 28.80 & 602.5 & 9.1 & 19.36 & 2.2 & -25.67 & -21.82 & -25.70 & 0.029 \\ A2572 & 26.02 & 131.5 & 5.2 & 19.25 & 2.3 & -24.84 & -21.95 & -24.91 & 0.070 \\ A2593 & 22.92 & 19.5 & 1.4 & 20.52 & 1.7 & -23.22 & -20.17 & -23.28 & 0.060 \\ A2626 & 22.92 & 29.0 & 2.3 & 19.18 & 0.8 & -24.40 & -19.93 & -24.41 & 0.016 \\ A2634 & 26.93 & 283.2 & 7.8 & 19.56 & 1.7 & -25.77 & -21.01 & -25.78 & 0.012 \\ A2656 & 23.03 & 22.8 & 2.0 & 19.69 & 1.7 & -23.77 & -21.19 & -23.87 & 0.093 \\ A2657 & 21.19 & 5.5 & 1.4 & 17.45 & 0.5 & -22.11 & -20.30 & -22.29 & 0.190 \\ A2660 & 25.01 & 64.9 & 6.5 & 20.11 & 1.7 & -24.49 & -20.51 & -24.52 & 0.026 \\ A2670 & 23.54 & 40.8 & 2.6 & 19.84 & 2.8 & -24.57 & -21.99 & -24.67 & 0.092 \\ A2717 & 23.39 & 36.5 & 1.8 & 19.73 & 2.0 & -24.16 & -21.29 & -24.24 & 0.071 \\ A2734 & 25.48 & 127.8 & 3.9 & 22.45 & 8.1 & -25.27 & -21.62 & -25.30 & 0.035 \\ A2806 & 21.68 & 9.0 & 1.4 & 19.09 & 1.8 & -22.64 & -21.51 & -22.96 & 0.355 \\ A2841 & 23.70 & 35.1 & 4.1 & 19.77 & 2.2 & -24.28 & -21.48 & -24.36 & 0.076 \\ A2859 & 23.75 & 25.0 & 3.6 & 20.30 & 2.4 & -23.42 & -21.13 & -23.54 & 0.122 \\ A2864 & 23.50 & 18.1 & 3.2 & 19.73 & 1.6 & -22.93 & -20.92 & -23.08 & 0.158 \\ A2877 & 22.94 & 37.7 & 4.1 & 18.62 & 1.8 & -24.78 & -21.80 & -24.85 & 0.064 \\ A2881 & 23.03 & 12.8 & 3.7 & 19.62 & 1.6 & -22.70 & -20.92 & -22.89 & 0.195 \\ A2896 & 21.99 & 11.1 & 3.5 & 18.84 & 1.4 & -23.15 & -21.16 & -23.31 & 
\section{Introduction} \label{intro} In social and economic life, as well as in many research fields such as data mining, image processing and bioinformatics, we often need to separate a set of objects into different groups according to their similarity to each other, so that we can subsequently represent or process different groups according to their different characteristics. As an unsupervised learning approach catering to this general need, clustering analysis has been extensively used in research and real life \citep{jain2010data}. In the Big Data era, complicated systems are often measured from multiple angles. As a result, the same set of objects is often described by both their individual characteristics and their pairwise relationships. Often, the two types of data come from different sources. For example, companies such as Amazon and Netflix often need to divide customers into groups with different consumption patterns, so that they can correctly recommend commodities to a certain customer. In this scenario, the personal information of a customer, such as age and historical shopping records, is the vectorial data that we can use for clustering. The interrelationship between customers, such as how often two customers shop together and how often they like the same Facebook posts, is the network data that can be used for clustering. As another example, in bioinformatics research, we often need to cluster genes into different groups, which ideally correspond to different gene regulatory modules or biochemical functions. In this scenario, the expression of genes under different conditions, such as microarray data from different tissues or different environmental stimuli, is the vectorial data for gene clustering. The network data for gene clustering includes gene regulatory networks, protein-protein interaction data and whether a pair of genes belongs to the same Gene Ontology group.
Therefore, we have to integrate both vectorial data and network data to better elucidate the group structure among the objects. Traditional clustering methods are designed for either vectorial data alone or network data alone \citep{buhmann1995data}. In vectorial data, the most commonly used type, each object is represented by a vector of the same dimension. The similarity between two objects is reflected by a certain distance measure between the two corresponding vectors. The problem of clustering vectorial data has been studied for more than 60 years. The most widely used methods include K-means clustering \citep{macqueen1967some,tavazoie1999systematic}, the Gaussian mixture model \citep{mclachlan1988mixture,fraley2002model} and hierarchical clustering \citep{sibson1973slink,defays1977efficient,eisen1998cluster}. Most vectorial data clustering methods adopt a central clustering approach by searching for a set of prototype vectors. \citet{jain2010data} provided a good review on this subject. The other data type is network data, where the similarity between two objects is directly given without describing the characteristics of individual objects. The problem of clustering network data arose only in the past two decades. The methods for network data clustering include entropy-based methods \citep{buhmann1994maximum,park2006validation}, spectrum-based methods \citep{swope2004describing,bowman2009using}, cut-based methods \citep{muff2009identification}, path-based methods \citep{jain2012identifying}, modified self-organizing maps \citep{seo2004self}, the mean field model \citep{hofmann1997pairwise}, probabilistic relational models \citep{taskar2001probabilistic, nowicki2001estimation, mariadassou2010uncovering}, and Newman's modularity function \citep{newman2006modularity}. \citet{fortunato2010community} provided a good review on this subject. As shown above, vectorial data clustering and network data clustering have both been intensively studied.
In contrast, to our knowledge, no existing methods can integrate the clustering information in the two individual data types in parallel within a coherent framework. There are several papers in the direction of data integration that can take both vectorial and network data as input, but these methods either transform the network data to vectorial data, as in the latent position space approach \citep{hoff2002latent, handcock2007model, gormley2011mixture}, or transform the vectorial data to network data, as in \citet{zhou2010clustering} and \citet{gunnemann2010subspace, gunnemann2011db}. The explicit or implicit data transformation needs an artificial design of a latent metric space for converting the network data, or an artificial design of a distance measure for converting the vectorial data. Thus they cannot avoid an arbitrary weighting of the clustering information from the two data types. In reality, we seldom know how to weight one data type against another. For example, the vectorial data and the network data may come from independent studies which may have used different techniques to check the similarity of objects at different levels. Thus, essentially we have no good way to weight one against the other. The only common and comparable thing behind the two data types is how likely each pair of objects is to be in the same cluster. In this paper, we develop an integrative probabilistic clustering method called ``Shared Clustering'' for clustering vectorial data and network data simultaneously. We assume that the vectorial data is independent of the network data conditional on the cluster labels. Our probabilistic model treats the two types of data equally instead of treating one as the covariate of the other, and models their contribution to clustering directly instead of converting one type to another. We perform the statistical inference in the Bayesian framework.
The Markov chain Monte Carlo (MCMC) algorithm, or more specifically the Gibbs sampler, is employed to sample the parameters and cluster labels. The paper is organized as follows. We first describe the model of Shared Clustering; the inference method is then described in detail, followed by applications to both synthesized data and real data. A summary and discussion are provided at the end. \section{Problem statement and model specification} \label{probl} We consider the clustering of $N$ objects according to their vectorial data $\mathbf{x}_i$ and pairwise data $\mathbf{y}_{ij}$, where $i,j=1,\ldots,N$ are the indexes of the objects. Let $\boldsymbol X$ be the $N$-by-$q$ matrix formed by $\mathbf{x}_i$, whose dimension is $q$, and $\boldsymbol Y$ be the $N$-by-$N$ square matrix formed by $\mathbf{y}_{ij}$. Note that $\mathbf{y}_{ij}$ can be viewed as the weight of the link from the $i$-th object to the $j$-th object in the network. In the Shared Clustering model, we assume that the vectorial data $\boldsymbol X$ and the network data $\boldsymbol Y$ share a common clustering structure $\boldsymbol{C}=(c_{1},\ldots,c_{N})$, where the cluster label of the $i$-th object is $c_{i}=1,\ldots, K$, and $K$ is the total number of clusters. Given $\boldsymbol C$, all $\mathbf{x}_i$ and $\mathbf{y}_{ij}$ are assumed to independently follow their corresponding component distributions. Thus, the joint likelihood function is $L(\boldsymbol X,\boldsymbol Y|\boldsymbol \Phi, \boldsymbol \Psi, \boldsymbol C) = \prod\limits_{i=1}^N f(\mathbf{x}_{i}|\phi_{c_i}) \cdot \prod\limits_{i=1}^N \prod\limits_{j=1}^N g(\mathbf{y}_{ij}|\psi_{c_i,c_j})$, where $\boldsymbol\Phi=(\phi_{1},\ldots,\phi_{K})$ and $\boldsymbol\Psi=(\psi_{k,l})_{k,l=1}^{K}$ represent all component-specific parameters, and $f(\cdot)$ and $g(\cdot)$ represent the component distributions.
We further assume that each of the $N$ cluster labels follows a multinomial distribution with probability vector $\boldsymbol P=(p_{1},\ldots,p_{K})\in\mathbbm{R}^K$, namely $c_i=k$ with probability $p_k$, $k=1,\ldots,K$. Intuitively, $\boldsymbol P$ contains the prior probabilities that each object is assigned to the corresponding clusters. In summary, the generative version of the model can be stated as: \begin{equation}\begin{split}& c_i\sim\mathrm{Multinomial}(\boldsymbol P),\\& \mathbf{x}_i|c_{i}\sim\mathrm{f}(\mathbf{x}_{i}|\phi_{c_i}),\\& \mathbf{y}_{ij}|c_{i},c_j\sim\mathrm{g}(\mathbf{y}_{ij}|\psi_{c_i,c_j}). \end{split}\label{generative}\end{equation} The dependency structure of all random variables is shown in Fig. \ref{depstruc}. \begin{figure}[h] \noindent \begin{centering} \includegraphics[scale=0.3]{Dependency} \par\end{centering} \caption{Dependency structure of the variables in Shared Clustering} \label{depstruc} \end{figure} The general joint clustering model in Equation \eqref{generative} conveys the main idea of integrating the model for vectorial data and the model for network data probabilistically by conditioning on shared cluster labels, no matter what models are used for the individual data types. The component distributions of $\boldsymbol X$ and $\boldsymbol Y$ can be any combination of distributions depending on the specific types of the given data. For example, $f(\cdot)$ can be either a continuous distribution or a discrete distribution depending on the given vectorial data. A proper distribution $g(\cdot)$, say the Poisson distribution, may be introduced to model the network variables if an integer-weighted graph is given. For a concrete study of the joint clustering model, we assume that the vectorial data follows a Gaussian Mixture Model (GMM) \citep{mclachlan1988mixture,fraley2002model} and the network data follows a Stochastic Block Model (SBM) \citep{nowicki2001estimation}.
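To make the generative model in Equation \eqref{generative} concrete, it can be simulated in a few lines under the Normal-Bernoulli specification described below. The following Python sketch is purely illustrative (the function and variable names are ours, not code from any released implementation); it draws labels, vectorial data, and an undirected network without self-loops:

```python
import numpy as np

def simulate_shared_clustering(N, P, mus, Sigmas, Psi, rng=None):
    """Sample (C, X, Y) from the generative model:
    c_i ~ Multinomial(P), x_i | c_i ~ N(mu_{c_i}, Sigma_{c_i}),
    y_ij | c_i, c_j ~ Bernoulli(Psi[c_i, c_j]) for an undirected,
    self-loop-free network (Y symmetric, zero diagonal)."""
    rng = np.random.default_rng(rng)
    K = len(P)
    C = rng.choice(K, size=N, p=P)                      # cluster labels
    X = np.array([rng.multivariate_normal(mus[c], Sigmas[c]) for c in C])
    Y = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(i):                              # each pair once
            Y[i, j] = Y[j, i] = rng.binomial(1, Psi[C[i], C[j]])
    return C, X, Y
```

Such a simulator is also what generates the synthetic datasets used in the experiments later in the paper, up to the specific parameter settings.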
More specifically, we assume that the vectorial data $\mathbf{x}_{i}$ follows a multivariate Normal distribution with $\phi_{c_i}=(\mu_{c_i},\Sigma_{c_i})$, and $\mathbf{y}_{ij}$ is a binary variable following a Bernoulli distribution with linking probability $\psi_{c_i,c_j}$. Here the network of interest is an undirected graph without self-loops, thus $\boldsymbol Y$ is an $N$-by-$N$ symmetric matrix with all diagonal entries being zero. In the remaining part of this paper, we will mainly focus on these two distributional assumptions for $\boldsymbol X$ and $\boldsymbol Y$, and we call this combination the Normal-Bernoulli model. Under the Normal-Bernoulli model, the conditional distribution of each vector $\mathbf{x}_i\in\mathbbm{R}^q,i=1,\ldots,N$ given $c_{i}$ is \begin{equation}\mathbf{x}_i|c_{i}\sim\mathrm{N}(\mu_{c_{i}},\Sigma_{c_{i}}),\label{x_i|c_i}\end{equation} where $\mu_{c_{i}}\in\mathbbm{R}^q$ is the mean vector and $\Sigma_{c_{i}}$ is the $q$-by-$q$ covariance matrix. From Equation \eqref{x_i|c_i}, $\mathbf{x}_i$ belongs to the $k$-th cluster if and only if $c_{i}=k$, where $k=1,\ldots,K$. In the SBM, a network is partitioned into several blocks according to the total number of clusters $K$. The variables $y_{ij}$ within each individual block are governed by the same set of parameters. In our case the parameter $\boldsymbol \Psi$ for the network data is therefore a $K$-by-$K$ probability matrix, with each element describing the corresponding Bernoulli distribution within a certain block. Thus the distribution of the edge variable $y_{ij},i,j=1,\ldots,N,i\neq j$ is \begin{equation}y_{ij}|c_i,c_j\sim \mathrm{Bernoulli}(\psi_{c_i,c_j}),\label{y_{ij}|c_i,c_j}\end{equation} namely $g(y_{ij}|\boldsymbol \Psi,c_{i},c_{j})=\psi_{c_i,c_j}^{y_{ij}}\cdot(1-\psi_{c_i,c_j})^{1-y_{ij}}$. For undirected networks, the network data $\boldsymbol Y$, i.e., the adjacency matrix of the graph, is symmetric.
Hence we have $y_{ij}=y_{ji}$ and $\psi_{c_i,c_j}=\psi_{c_j,c_i}$, and only the lower (or upper) triangles of $\boldsymbol Y$ and $\boldsymbol\Psi$ need to be considered. Given a dataset $\boldsymbol D=(\boldsymbol X,\boldsymbol Y)$ of $N$ objects defined as above and the total number of clusters $K$, our task is to infer the true cluster membership $\boldsymbol C$. In other words, we ask how the $N$ objects should be divided into $K$ clusters according to the integrated information of their vectorial data and network data. \section{Method description} \label{metho} \subsection{Prior distributions} \label{prior} For Bayesian inference, we need to specify prior distributions for the unknown parameters. In the case that little prior knowledge about the parameters is available, we choose flat priors, as in most Bayesian data analyses. Meanwhile, we would like to use fully conjugate priors to ease the posterior sampling. Although different priors could be assigned to the $K$ different cluster components, we use the same prior settings in the absence of prior knowledge about the $K$ different component distributions. As stated in Section \ref{probl}, the cluster labels in $\boldsymbol C$ follow a multinomial distribution. One common choice is to fix $p_i=1/K$, but this indicates a strong prior belief that each cluster is of equal size. Thus we instead treat all $p_i$ as unknown and assume the vector $\boldsymbol P=(p_{1},\ldots,p_{K})$ follows a Dirichlet distribution with prior parameter vector $\boldsymbol a\in\mathbbm{R}^K$, i.e., $\boldsymbol P\sim\mathrm{Dirichlet}(\boldsymbol a)$. As for the multivariate Normal distributions, a conventional fully conjugate prior setting discussed in \citet{Rossi2006GMM}, which is a special case of multivariate regression, is to assume that the mean vector $\mu_k$ follows a multivariate Normal distribution given the covariance matrix $\Sigma_k$, and $\Sigma_k$ follows an Inverse-Wishart distribution.
Namely, $\Sigma_k\sim\mathrm{IW_q}(T,v_0)$ and $\mu_k\sim\mathrm{N}(\mu_{0},\alpha^{-1}\Sigma_{k})$, where $T$ is the $q\times q$ location matrix of the Inverse-Wishart prior on $\Sigma_k$, $v_0$ is the corresponding degrees of freedom, $\mu_0$ is the mean of the multivariate Normal prior on $\mu_k$, and $\alpha$ is a precision parameter. In the experiments demonstrated in later sections, we use the default priors described in \citet{Rossi2006GMM} and its R implementation \citep{bayesmR} for the vectorial data. For the network data $\boldsymbol Y$, the conjugate prior for each individual Bernoulli parameter $\psi_{c_i,c_j}$ is the Beta distribution with shape parameters $\beta_1$ and $\beta_2$, i.e., $\psi_{c_i,c_j}\sim\mathrm{Beta}(\beta_1,\beta_2)$. Again, we use the same pair $(\beta_1,\beta_2)$ for every $\psi_{c_i,c_j}$, given the lack of nonexchangeable prior knowledge. In our simulation studies, $\beta_1$ and $\beta_2$ are both set to a quantity slightly larger than 1. The sensitivity analysis in Section \ref{sensi} shows that different prior settings result in little disparity.
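With these conjugate priors, the conditional posterior updates for $\boldsymbol P$ and $\boldsymbol \Psi$ have closed forms through Dirichlet-multinomial and Beta-Bernoulli conjugacy. The following Python sketch illustrates the two draws (the naming is ours and purely illustrative; cluster labels are assumed encoded as integers $0,\ldots,K-1$):

```python
import numpy as np

def sample_P(C, a, K, rng):
    """Dirichlet-multinomial conjugacy: P | C ~ Dirichlet(a + counts)."""
    counts = np.bincount(C, minlength=K)
    return rng.dirichlet(a + counts)

def sample_Psi(Y, C, K, beta1, beta2, rng):
    """Beta-Bernoulli conjugacy per block (k, l):
    psi_kl | Y, C ~ Beta(beta1 + #edges, beta2 + #non-edges),
    counting each undirected pair exactly once."""
    Psi = np.zeros((K, K))
    iu, ju = np.triu_indices(len(C), k=1)          # upper triangle pairs
    for k in range(K):
        for l in range(k + 1):
            mask = ((C[iu] == k) & (C[ju] == l)) | ((C[iu] == l) & (C[ju] == k))
            edges = Y[iu[mask], ju[mask]].sum()
            pairs = mask.sum()
            Psi[k, l] = Psi[l, k] = rng.beta(beta1 + edges,
                                             beta2 + pairs - edges)
    return Psi
```

The symmetric assignment `Psi[k, l] = Psi[l, k]` reflects the restriction to the lower (or upper) triangle of $\boldsymbol\Psi$ for undirected networks.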
\subsection{Posterior distributions} \label{poste} The full joint posterior distribution of all parameters is proportional to the product of the joint likelihood and the joint prior distributions; thus we have \begin{equation}\begin{split}p(&\boldsymbol P,\boldsymbol C,\boldsymbol \Phi,\boldsymbol \Psi|\boldsymbol X,\boldsymbol Y)\\&\propto p(\boldsymbol X,\boldsymbol Y|\boldsymbol P,\boldsymbol C,\boldsymbol \Phi,\boldsymbol \Psi)p(\boldsymbol P,\boldsymbol C,\boldsymbol \Phi,\boldsymbol \Psi)\\&=p(\boldsymbol X|\boldsymbol C,\boldsymbol \Phi)p(\boldsymbol Y|\boldsymbol C,\boldsymbol \Psi)p(\boldsymbol \Phi)p(\boldsymbol \Psi)p(\boldsymbol C|\boldsymbol P)p(\boldsymbol P).\end{split}\label{full_post}\end{equation} \subsection{Gibbs sampling algorithm} \label{gibbs} We use a Gibbs sampler to conduct the Bayesian inference, which samples the parameters from their conditional posterior distributions iteratively \citep{robert2004monte}. The conditional posteriors of the model parameters used in the Gibbs sampling are provided in the appendix. Our algorithm is similar to Gibbs sampling for the GMM or the SBM alone, with the essential difference that the distributions of the cluster labels are now associated with the two types of data jointly. Pseudo code of the algorithm is presented in Table \ref{pseudo code}.
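The key step of the sampler is the conditional draw of each label $c_i$ given everything else, which combines the vectorial likelihood, the network likelihood and the prior term. The following Python sketch illustrates this update in the log domain for numerical stability (the naming is ours, purely illustrative, and not code from our implementation):

```python
import numpy as np

def gauss_logpdf(x, mu, Sigma):
    """Log density of a multivariate Normal, in pure NumPy."""
    d = x - mu
    sign, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet
                   + d @ np.linalg.solve(Sigma, d))

def sample_label(i, X, Y, C, P, mus, Sigmas, Psi, rng):
    """One Gibbs update of c_i: evaluate the unnormalized conditional
    posterior p(x_i | phi_k) p(y_i | Psi, c_i=k, C_{-i}) p(c_i=k | P)
    for each k in the log domain, normalize, and draw a new label."""
    K = len(P)
    others = np.array([j for j in range(len(C)) if j != i])
    logp = np.empty(K)
    for k in range(K):
        p_edge = Psi[k, C[others]]                 # edge probs to others
        logp[k] = (np.log(P[k])
                   + gauss_logpdf(X[i], mus[k], Sigmas[k])
                   + np.sum(Y[i, others] * np.log(p_edge)
                            + (1 - Y[i, others]) * np.log1p(-p_edge)))
    probs = np.exp(logp - logp.max())              # avoid underflow
    return rng.choice(K, p=probs / probs.sum())
```

Sweeping this update over $i=1,\ldots,N$ corresponds to steps 12--17 of the pseudo code.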
\begin{center} \begin{table*} \begin{singlespace} \noindent \centering{}% \begin{tabular}{ll} & \textbf{Gibbs Sampler for Shared Clustering Model} \tabularnewline 1: & Set all hyper-priors $\mu_0$, $\alpha$, $T$, $v_0$, $\boldsymbol a$, $\beta_1$, $\beta_2$ and initialize $\boldsymbol P$, $\boldsymbol C$\tabularnewline 2: & \textbf{for} each iteration \textbf{do} \tabularnewline 3: & \quad \textbf{for} $k=1,\ldots,K$ \textbf{do} \tabularnewline 4: & \quad\quad Under the current cluster label $\boldsymbol C$, extract the $k$-th component of $\boldsymbol X$ \tabularnewline 5: & \quad\quad Sample $\Sigma_k$ using Equation \eqref{app_sigma_post} \tabularnewline 6: & \quad\quad Sample $\mu_k$ using Equation \eqref{app_mu_post} \tabularnewline 7: & \quad\quad \textbf{for} $j=1,\ldots,K$ \textbf{do} \tabularnewline 8: & \quad\quad\quad Under the current cluster label $\boldsymbol C$, extract the block of clusters $k$ and $j$ from $\boldsymbol Y$ \tabularnewline 9: & \quad\quad\quad Sample $\psi_{k,j}$ using Equation \eqref{app_psi_post} \tabularnewline 10: & \quad\quad \textbf{end for} \tabularnewline 11: & \quad\textbf{end for} \tabularnewline 12: & \quad\textbf{for} $i=1,\ldots,N$ \textbf{do} \tabularnewline 13: & \quad\quad \textbf{for} $k=1,\ldots,K$ \textbf{do} \tabularnewline 14: & \quad\quad\quad Set $c_i=k$ and calculate $p(\mathbf x_i,\mathbf y_i|\boldsymbol \Phi,\boldsymbol\Psi,c_i=k,C_{-i})p(c_i=k|\boldsymbol P)$\tabularnewline 15: & \quad\quad \textbf{end for} \tabularnewline 16: & \quad\quad Sample $c_i$ from $p(\mathbf x_i,\mathbf y_i|\boldsymbol \Phi,\boldsymbol\Psi,c_i,C_{-i})p(c_i|\boldsymbol P)$ after normalizing\tabularnewline 17: & \quad \textbf{end for}\tabularnewline 18: & \quad Sample $\boldsymbol P$ using Equation \eqref{app_p_post} \tabularnewline 19: & \quad Calculate the unnormalized joint posterior probability as in Equation \eqref{full_post} \tabularnewline 20: & \textbf{end for}\tabularnewline \end{tabular} \caption{Pseudo code of the algorithm} \label{pseudo code} \end{singlespace} \end{table*} \par\end{center} After running the chain until convergence, the iterations remaining after burn-in are used for posterior inference. More specifically, when a point estimate of the clustering label $\boldsymbol C$ is needed, we use the maximum a posteriori (MAP) estimate, i.e., the iteration with the maximal joint posterior probability \citep{sorenson1980parameter}. Using MAP bypasses the label-switching problem. To quantify the clustering uncertainty, we use the whole converged sample by summarizing it in a heatmap of the posterior pairwise co-clustering probability matrix. This heatmap also provides us with a way of selecting the number of clusters $K$ (see Section \ref{selec}). \section{Synthetic data experiments} \label{simul} \subsection{Experimental design} \label{exper} We test the performance of our method under diverse scenarios. The difficulty of a clustering problem is determined by many factors, including the number of clusters $K$, the number of objects $N$, the tightness of the clusters and the relative locations of the clusters. We design different difficulty levels for $\boldsymbol X$ and $\boldsymbol Y$ separately, and test on their combinations. For the vectorial data $\boldsymbol X$, we tried three different shapes (denoted as shape=1,2,3) and two overlapping conditions (with or without overlap), which are shown in Fig. \ref{type and ovl}. The corresponding parameters are listed in Table \ref{para for type and ovl}. For easy visualization, these examples are limited to two dimensions. Higher dimensional cases are tested in Section \ref{simu3}.
\begin{center} \begin{figure*} \begin{centering} \includegraphics[scale=0.3]{\string"bunch4_11\string".pdf} \includegraphics[scale=0.3]{\string"bunch5_17\string".pdf} \includegraphics[scale=0.3]{\string"bunch6_15\string".pdf} \par\end{centering} \begin{centering} \includegraphics[scale=0.3]{\string"bunch7_15\string".pdf} \includegraphics[scale=0.3]{\string"bunch8_11\string".pdf} \includegraphics[scale=0.3]{\string"bunch9_11\string".pdf} \par\end{centering} \centering{} \caption{Vectorial data examples, each with 3 clusters represented by different point symbols of different colors} \label{type and ovl} \end{figure*} \begin{table*} \begin{centering} \begin{tabular}{|l|l|l|} \hline \textbf{(shape, overlap)} & \textbf{Mean} & \textbf{Variance-Covariance (}$\Sigma_1$, $\Sigma_2$, $\Sigma_3$\textbf{)}\tabularnewline \hline \multirow{3}{*}{(1, with)} & $\mu_1=(1.1,1.1)^T$ & \multirow{3}{*}{$\begin{bmatrix}0.1&-0.03\\-0.03&0.1\end{bmatrix},\begin{bmatrix}0.15&-0.09\\-0.09&0.15\end{bmatrix},\begin{bmatrix}0.15&-0.09\\-0.09&0.15\end{bmatrix}$}\tabularnewline & $\mu_2=(2.1,2.3)^T$ & \tabularnewline & $\mu_3=(3.3,1.1)^T$ & \tabularnewline \hline \multirow{3}{*}{(2, with)} & $\mu_1=(1.2,1.2)^T$ & \multirow{3}{*}{$\begin{bmatrix}0.2&-0.1\\-0.1&0.2\end{bmatrix},\begin{bmatrix}0.1&0.05\\0.05&0.1\end{bmatrix},\begin{bmatrix}0.1&0.05\\0.05&0.1\end{bmatrix}$}\tabularnewline & $\mu_2=(1.4,2.4)^T$ & \tabularnewline & $\mu_3=(2.4,1)^T$ & \tabularnewline \hline \multirow{3}{*}{(3, with)} & $\mu_1=(1,0.6)^T$ & \multirow{3}{*}{$\begin{bmatrix}0.2&0.05\\0.05&0.2\end{bmatrix},\begin{bmatrix}0.2&0.05\\0.05&0.2\end{bmatrix},\begin{bmatrix}0.25&-0.12\\-0.12&0.25\end{bmatrix}$}\tabularnewline & $\mu_2=(2.5,2.5)^T$ & \tabularnewline & $\mu_3=(2.25,1)^T$ & \tabularnewline \hline \multirow{3}{*}{(1, without)} & $\mu_1=(1.1,1.1)^T$ & 
\multirow{3}{*}{$\frac{1}{3}\begin{bmatrix}0.1&-0.02\\-0.02&0.1\end{bmatrix},\frac{1}{3}\begin{bmatrix}0.15&-0.03\\-0.03&0.15\end{bmatrix},\frac{1}{3}\begin{bmatrix}0.15&-0.03\\-0.03&0.15\end{bmatrix}$}\tabularnewline & $\mu_2=(2.1,2.5)^T$ & \tabularnewline & $\mu_3=(3.5,1.1)^T$ & \tabularnewline \hline \multirow{3}{*}{(2, without)} & $\mu_1=(1,1.5)^T$ & \multirow{3}{*}{$\frac{1}{3}\begin{bmatrix}0.2&-0.03\\-0.03&0.2\end{bmatrix},\frac{1}{3}\begin{bmatrix}0.1&0.02\\0.02&0.1\end{bmatrix},\frac{1}{3}\begin{bmatrix}0.1&0.02\\0.02&0.1\end{bmatrix}$}\tabularnewline & $\mu_2=(2,3)^T$ & \tabularnewline & $\mu_3=(3,1)^T$ & \tabularnewline \hline \multirow{3}{*}{(3, without)} & $\mu_1=(1,1)^T$ & \multirow{3}{*}{$\frac{1}{3}\begin{bmatrix}0.1&0.02\\0.02&0.1\end{bmatrix},\frac{1}{3}\begin{bmatrix}0.1&0.02\\0.02&0.1\end{bmatrix},\frac{1}{3}\begin{bmatrix}0.2&-0.03\\-0.03&0.2\end{bmatrix}$}\tabularnewline & $\mu_2=(4,4)^T$ & \tabularnewline & $\mu_3=(3,2)^T$ & \tabularnewline \hline \end{tabular} \par\end{centering} \centering{} \caption{Parameters for generating the data in Fig. \ref{type and ovl}} \label{para for type and ovl} \end{table*} \par\end{center} For the network data $\boldsymbol Y$, the difficulty of clustering is controlled by the relative magnitude of the linking probabilities in $\boldsymbol \Psi$. As one can expect, clustering a network is easier if there are more within-cluster edges and fewer between-cluster edges. In terms of the probability matrix $\boldsymbol \Psi$, the ``noise'' level depends on whether the diagonal elements are significantly larger than the off-diagonal elements. We test network examples with both high noise and low noise. Fig. \ref{high low noise} shows two examples. The corresponding probability matrices are provided in Table \ref{para low and high}.
\begin{center} \begin{figure*} \begin{centering} \includegraphics[scale=0.4]{\string"bunch7_6\string".pdf} \includegraphics[scale=0.4]{\string"bunch2_2\string".pdf} \par\end{centering} \centering{} \caption{High noise (left) and low noise (right) networks} \label{high low noise} \end{figure*} \begin{table}[H] \begin{doublespace} \noindent \begin{centering} \begin{tabular}{|c|c|c|} \hline & \textbf{High noise} & \textbf{Low noise}\tabularnewline $\boldsymbol \Psi$ & $\begin{pmatrix}0.6&0.25&0.35\\0.25&0.65&0.35\\0.35&0.35&0.65\end{pmatrix}$ & $\begin{pmatrix}0.8&0.15&0.2\\0.15&0.9&0.25\\0.2&0.25&0.9\end{pmatrix}$\tabularnewline \hline \end{tabular} \par\end{centering} \noindent \centering{} \caption{Probability matrix of the networks in Fig. \ref{high low noise}} \label{para low and high}\end{doublespace} \end{table} \par\end{center} \subsection{Accuracy measure} \label{accur} To evaluate our method and compare with other methods using simulated data where the true cluster memberships are known, we adopt the widely used Adjusted Rand Index (ARI) \citep{hubert1985comparing} to measure the consistency between the inferred clustering and the ground truth. For each pair of true cluster labels $\boldsymbol C_{true}$ and inferred labels $\boldsymbol C_{inf}$, a contingency table is established first, and the ARI is then calculated according to the formula in \citet{hubert1985comparing}. An ARI of 1 means the clustering result is completely correct compared to the truth; the ARI is less than 1, or even negative, when objects are wrongly clustered. An advantage of using the ARI is that we do not need to explicitly match the labels of the two clusterings being compared. What we are interested in is whether a certain set of objects belongs to the same cluster, rather than the label number itself. Since currently no similar methods perform simultaneous clustering of both vectorial and network data, we compare our method with results from the individual data types and their intuitive combination.
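For completeness, the ARI can be computed directly from the contingency table following \citet{hubert1985comparing}; the following NumPy sketch (our own naming) is equivalent in spirit to off-the-shelf implementations such as scikit-learn's \texttt{adjusted\_rand\_score}:

```python
import numpy as np

def adjusted_rand_index(labels_true, labels_pred):
    """ARI from the contingency table (Hubert and Arabie, 1985).
    Invariant to label permutations; 1 means perfect agreement and
    values near 0 indicate chance-level agreement.  Assumes the
    non-degenerate case (more than one cluster on each side)."""
    t = np.asarray(labels_true)
    p = np.asarray(labels_pred)
    n = len(t)
    # build the contingency table between the two labelings
    classes, t_idx = np.unique(t, return_inverse=True)
    clusters, p_idx = np.unique(p, return_inverse=True)
    table = np.zeros((len(classes), len(clusters)))
    np.add.at(table, (t_idx, p_idx), 1)
    comb2 = lambda x: x * (x - 1) / 2.0            # "n choose 2"
    sum_ij = comb2(table).sum()
    sum_a = comb2(table.sum(axis=1)).sum()
    sum_b = comb2(table.sum(axis=0)).sum()
    expected = sum_a * sum_b / comb2(n)
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)
```

The permutation invariance is what allows the comparison without explicitly matching the labels of the two clusterings.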
Methods in \citet{Rossi2006GMM} and \citet{schmidt2013nonparametric} are used to cluster the vectorial data (denoted as ``Vec'') and the network data (denoted as ``Net''), respectively. These clustering results are stored in $\boldsymbol C_{vec}$ and $\boldsymbol C_{net}$. An intuitive way to combine them is the idea of multiple voting as used by ensemble methods \citep{Dietterich:2000:LectureNotes,Fred:Jain:2002:ICPR,Strehl:Ghosh:2003:JMLR}, which combine different clustering results in a post-processing fashion. In our scenario, we construct a contingency table between $\boldsymbol C_{vec}$ and $\boldsymbol C_{net}$ to find the best mapping, then calculate an average ARI as compared to the truth (denoted as ``Combine''). As an extra reference, we also take the better clustering between ``Net'' and ``Vec'' (denoted as ``Oracle''), as if we knew which data type to trust. Thus, ``Oracle'' represents the upper bound of the performance of post-processing ensemble methods in our scenario, but it is not really achievable since we do not know which one to trust before knowing the true clustering. \subsection{Simulation results in cases with different data conditions} \label{simu1} We simulated data from the nine cases listed in Table \ref{K=00003D3 N=00003D30}, which represent different combinations of the vectorial and network conditions. In this first experiment, we set $K=3$ and $N=30$, with each cluster containing 10 objects. For each of the nine cases, 10 independent data sets ($\boldsymbol X$ and $\boldsymbol Y$) are generated. When running our Shared Clustering method, we observe that 1000 iterations seem sufficient for the MCMC algorithm to converge for the cases in this subsection, while more iterations are needed in later experiments such as the high dimensional cases. Thus we run our MCMC algorithm for 2000 iterations, and MAP is used to calculate an ARI as the accuracy of the MCMC chain.
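The MAP selection and the co-clustering summary described in Section \ref{gibbs} are simple to compute once the sampler stores, for each iteration, the label vector and the unnormalized joint log posterior. A minimal sketch (illustrative naming, under that storage assumption):

```python
import numpy as np

def map_estimate(label_draws, log_posteriors):
    """MAP point estimate: the stored draw with the largest
    (unnormalized) joint log posterior, which sidesteps the
    label-switching problem."""
    draws = np.asarray(label_draws)
    return draws[int(np.argmax(log_posteriors))]

def coclustering_matrix(label_draws):
    """Entry (i, j) is the fraction of converged draws in which
    objects i and j share a cluster label; it is invariant to label
    switching and can be drawn as a heatmap to assess uncertainty."""
    draws = np.asarray(label_draws)               # shape (n_draws, N)
    same = draws[:, :, None] == draws[:, None, :]
    return same.mean(axis=0)
```

Because the co-clustering matrix only asks whether two objects share a label, it is comparable across chains regardless of how the cluster numbers are permuted.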
For each dataset, we independently run ten MCMC chains, and the median ARI of the ten chains is used as the accuracy of the algorithm on this dataset. The mean and standard deviation of the ten ARIs from the ten datasets are used to represent the performance of the algorithm on a specific case. The same datasets are used to assess the performance of ``Vec'', ``Net'', ``Combine'' and ``Oracle''. The simulation results are summarized in Table \ref{K=00003D3 N=00003D30}. \begin{center} \begin{table*} \begin{onehalfspace} \noindent \begin{centering} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{Case} & \textbf{noise} & \textbf{overlap} & \textbf{shape} & \textbf{Shared} & \textbf{Combine} & \textbf{Oracle} & \textbf{Net} & \textbf{Vec}\tabularnewline \hline 1 & low & with & 1 & \textbf{1(0)} & 0.742(0.124) & 1(0) & 1(0) & 0.580(0.166)\tabularnewline \hline 2 & low & with & 2 & \textbf{1(0)} & 0.531(0.053) & 1(0) & 1(0) & 0.306(0.075)\tabularnewline \hline 3 & low & with & 3 & \textbf{1(0)} & 0.607(0.055) & 1(0) & 1(0) & 0.418(0.085)\tabularnewline \hline 4 & high & without & 1 & \textbf{0.956(0.080)} & 0.468(0.191) & 0.907(0.079) & 0.270(0.253) & 0.907(0.079)\tabularnewline \hline 5 & high & without & 2 & \textbf{0.951(0.093)} & 0.501(0.194) & 0.955(0.055) & 0.294(0.242) & 0.955(0.055)\tabularnewline \hline 6 & high & without & 3 & \textbf{0.971(0.065)} & 0.517(0.191) & 1(0) & 0.287(0.247) & 1(0)\tabularnewline \hline 7 & high & with & 1 & \textbf{0.884(0.121)} & 0.346(0.211) & 0.588(0.189) & 0.286(0.251) & 0.575(0.190)\tabularnewline \hline 8 & high & with & 2 & \textbf{0.672(0.213)} & 0.221(0.147) & 0.381(0.184) & 0.297(0.255) & 0.305(0.083)\tabularnewline \hline 9 & high & with & 3 & \textbf{0.720(0.197)} & 0.272(0.133) & 0.480(0.115) & 0.291(0.217) & 0.418(0.085)\tabularnewline \hline \end{tabular} \par\end{centering} \noindent \centering{}
\caption{Clustering performance on the nine cases with $K=3$ and $N=30$. The mean (sd) ARI of each case is calculated from 10 independent trials} \label{K=00003D3 N=00003D30} \end{onehalfspace} \end{table*} \par\end{center} From Table \ref{K=00003D3 N=00003D30}, one can conclude that when one data type ($\boldsymbol X$ or $\boldsymbol Y$) was ``clean'' (easy to cluster) while the other was ``dirty'' (as in Cases 1-3 and Cases 4-6), the single-data-type method corresponding to the clean data gave outstanding performance, even though the other single-data-type method was nearly of no use for clustering. The ``Combine'' method was naturally dragged down by the ``dirty'' side. However, Shared Clustering has the ability to take advantage of the clean data while largely avoiding the negative effects of the dirty data. Its accuracy is similar to that of ``Oracle'' and much better than that of ``Combine''. When both types of data were ``dirty'' (as in Cases 7-9), neither of the single-data-type methods could work. But again, Shared Clustering performed significantly better than the single-data-type methods, ``Combine'' and ``Oracle''. \subsection{Simulation results with a large number of objects} \label{simu2} Observing the relatively low ARIs when both $\boldsymbol X$ and $\boldsymbol Y$ were ``dirty'', we were interested in whether a larger number of objects ($N$) could increase the clustering accuracy. Thus we further tested three cases with $N=90$, with each cluster containing 30 objects. The simulation settings for the vectorial data remained unchanged from Cases 7-9; however, the original ``high noise'' setting for the network data was no longer as ``noisy'' under the increased number of objects, since clustering is surely easier with more connections.
Therefore, for cases with 30 objects in each cluster, instead of using the ``high'' noise setting in Table \ref{para low and high}, we define $\boldsymbol \Psi=\begin{pmatrix}0.55&0.3&0.4\\0.3&0.6&0.4\\0.4&0.4&0.6\end{pmatrix}$ as a ``very high'' noise level, to roughly match the difficulty of the network data with that of the corresponding vectorial data. The results are shown in Table \ref{K=00003D3 N=00003D90}. As expected, the performance based on the network data alone improved dramatically even though we increased the noise level, while the performance based on the vectorial data alone hardly changed. Shared Clustering also showed improved performance, with higher ARIs and lower standard deviations. \begin{center} \begin{table*} \begin{onehalfspace} \noindent \begin{centering} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{Case} & \textbf{noise} & \textbf{overlap} & \textbf{shape} & \textbf{Shared} & \textbf{Combine} & \textbf{Oracle} & \textbf{Net} & \textbf{Vec}\tabularnewline \hline 10 & very high & with & 1 & \textbf{0.911(0.047) } & 0.410(0.107) & 0.626(0.146) & 0.626(0.146) & 0.369(0.071) \tabularnewline \hline 11 & very high & with & 2 & \textbf{0.884(0.058) } & 0.391(0.062) & 0.630(0.145) & 0.630(0.145) & 0.304(0.034) \tabularnewline \hline 12 & very high & with & 3 & \textbf{0.833(0.066) } & 0.465(0.058) & 0.648(0.109) & 0.624(0.156) & 0.415(0.047) \tabularnewline \hline \end{tabular} \par\end{centering} \noindent \centering{} \caption{Clustering performance on the three cases with $K=3$ and $N=90$. The mean (sd) ARI of each case is calculated from 10 independent trials} \label{K=00003D3 N=00003D90} \end{onehalfspace} \end{table*} \par\end{center} \subsection{Simulation results with a large number of clusters} \label{simu3} We were also interested in the performance of the method when $K$ increases. Hence we extended our experiments to test some cases with a larger cluster number, $K=10$.
More specifically, four of the ten mean vectors fall into the rectangular region with corners (1, 1), (1, 4), (4, 1) and (4, 4); three of the ten fall into the rectangular region with corners (4, 7), (4, 10), (7, 7) and (7, 10); the other three are in the region with corners (6, 3), (6, 8), (10, 3) and (10, 8). The covariance matrices of the ten clusters are randomly assigned among three different types: non-correlated $\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix}$, positively-correlated $\begin{pmatrix}0.5&0.4\\0.4&0.5\end{pmatrix}$, and negatively-correlated $\begin{pmatrix}0.5&-0.4\\-0.4&0.5\end{pmatrix}$. The motivation of this design is to avoid cases where many clusters are crowded together or separated too far apart, which would be either impossible or too easy to cluster. Fig. \ref{Xplots K=00003D10} shows two example plots of vectorial data with $N=100$ and $N=300$ respectively. \begin{onehalfspace} \noindent \begin{center} \begin{figure*} \begin{onehalfspace} \noindent \begin{centering} \includegraphics[scale=0.4]{\string"Scatters10_20\string".pdf} \includegraphics[scale=0.4]{\string"Scatters10_34\string".pdf} \par\end{centering} \noindent \centering{} \caption{Examples of $\boldsymbol X$ with $N=100$ and $N=300$, each with 10 clusters represented by different point symbols of different colors} \label{Xplots K=00003D10} \end{onehalfspace} \end{figure*} \par\end{center} \end{onehalfspace} For network data, we used newly defined noise levels called ``moderate'' and ``messy''. The ``moderate'' level is designed to be relatively easier for clustering than the ``messy'' level. The two 10-by-10 probability matrices are presented in Table S1 in the supplementary document. We tested four cases in this study of the effect of $K$: two with ten objects in each cluster ($N=100$) and two with thirty objects in each cluster ($N=300$).
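The sampling scheme above can be sketched as follows. This is a minimal illustration, not the authors' code: the region corners and the three covariance types are taken from the text, while the uniform sampling of means within each region and the random type assignment are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# (x-range, y-range, number of cluster means) for the three regions in the text.
regions = [
    ((1, 4), (1, 4), 4),
    ((4, 7), (7, 10), 3),
    ((6, 10), (3, 8), 3),
]
cov_types = [
    np.array([[0.5, 0.0], [0.0, 0.5]]),    # non-correlated
    np.array([[0.5, 0.4], [0.4, 0.5]]),    # positively correlated
    np.array([[0.5, -0.4], [-0.4, 0.5]]),  # negatively correlated
]

means, covs = [], []
for (xr, yr, m) in regions:
    for _ in range(m):
        means.append([rng.uniform(*xr), rng.uniform(*yr)])
        covs.append(cov_types[rng.integers(3)])

# Draw n objects per cluster (n = 10 gives N = 100; n = 30 gives N = 300).
n = 10
X = np.vstack([rng.multivariate_normal(mu, S, size=n)
               for mu, S in zip(means, covs)])
labels = np.repeat(np.arange(10), n)
print(X.shape)  # (100, 2)
```

With `n = 30` the same loop produces the $N=300$ configuration.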
The tests are similar to those with three clusters: for each case we apply the same methods and calculate the ARI means and standard deviations. The experiment results are shown in Table \ref{large clusters K=00003D10}. \begin{onehalfspace} \noindent \begin{center} \begin{table*} \begin{onehalfspace} \noindent \begin{centering} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \textbf{Case} & \textbf{N} & \textbf{noise} & \textbf{Shared} & \textbf{Combine} & \textbf{Oracle} & \textbf{Net} & \textbf{Vec}\tabularnewline \hline 13 & 100 & moderate & \textbf{0.805(0.058)} & 0.449(0.021) & 0.635(0.043) & 0.635(0.043) & 0.457(0.043)\tabularnewline \hline 14 & 100 & messy & \textbf{0.496(0.073)} & 0.126(0.019) & 0.450(0.047) & 0.036(0.017) & 0.450(0.047)\tabularnewline \hline 15 & 300 & moderate & \textbf{0.869(0.034)} & 0.532(0.050) & 0.798(0.037) & 0.798(0.037) & 0.481(0.059)\tabularnewline \hline 16 & 300 & messy & \textbf{0.913(0.053)} & 0.327(0.055) & 0.496(0.057) & 0.340(0.135) & 0.479(0.059)\tabularnewline \hline \end{tabular} \par\end{centering} \noindent \centering{} \caption{Clustering performance on the four cases with $K=10$. The mean (sd) ARI of each case is calculated from 10 independent trials} \label{large clusters K=00003D10}\end{onehalfspace} \end{table*} \par\end{center} \end{onehalfspace} The previous experiments with $K=3$ showed that network clustering on a small number of objects performs badly when the ``noise'' is relatively high. When the number of clusters grows to ten, the situation becomes even worse. This can be clearly seen in Case 14, where clustering on the network alone failed completely and even Shared Clustering could not improve the accuracy, since no extra information was provided by the network data. However, when we increased the number of objects (Case 16), Shared Clustering again showed a big advantage.
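The ARI scores reported throughout these tables compare an estimated partition against the true cluster labels. A minimal sketch of the computation, using scikit-learn's implementation (the label vectors here are illustrative, not taken from the simulations):

```python
from sklearn.metrics import adjusted_rand_score

true_labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]
est_perfect = [2, 2, 2, 0, 0, 0, 1, 1, 1]  # same partition, labels permuted
est_noisy   = [0, 0, 1, 1, 1, 1, 2, 2, 0]  # two objects misassigned

# ARI is invariant to label permutations and corrected for chance,
# so a perfect partition scores exactly 1 regardless of label names.
print(adjusted_rand_score(true_labels, est_perfect))  # 1.0
print(adjusted_rand_score(true_labels, est_noisy))
```

The chance correction means a random partition scores near 0, which is why values such as 0.036 in Case 14 indicate essentially no recovered structure.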
\subsection{Simulation results with higher-dimensional vectorial data} \label{simu4} In the real world, vectorial data are likely to have more than two dimensions. Here we present two experiments for higher-dimensional $\boldsymbol X$, with dimension $q=5$ and $20$ respectively. Besides $\mu_k$ and $\Sigma_k$, all the other parameter settings, including the number of clusters, the number of objects, and the noise level of the network data, are identical to those in Case 10-12, since we aim purely to examine the effect of higher dimensions. Numerical details of the mean vectors ($\mu_k$) of the vectorial data for $q=5$ and $q=20$ are shown in Tables S2 and S3, respectively, in the supplementary document. The mean values of the first 5 dimensions in the $q=20$ case are set to be the same as the corresponding means in the $q=5$ case. For the covariance matrices, all diagonal elements are set to 1 and off-diagonal elements are sampled uniformly between -0.05 and 0.05. The experimental results are listed in Table \ref{high dim}. Again, for each case we used 10 independent datasets to test our method. The larger dimensions required more MCMC iterations to obtain converged samples. For $q=5$, each chain was run for 3000 iterations with the first 2000 as burn-in; for $q=20$, 4000 iterations with the first 3000 as burn-in. From Table \ref{high dim}, we observed that as the dimension grew from $q=5$ to $q=20$, with more supporting clustering information from the extra dimensions, clustering of the vectorial data improved significantly, which made Shared Clustering even better. In both cases, Shared Clustering achieved significantly better ARI scores.
\noindent \begin{center} \begin{table*} \begin{onehalfspace} \noindent \begin{centering} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Case} & \textbf{q} & \textbf{Shared} & \textbf{Combine} & \textbf{Oracle} & \textbf{Net} & \textbf{Vec}\tabularnewline \hline 17 & 5 & \textbf{0.983(0.018)} & 0.723(0.044) & 0.810(0.052) & 0.698(0.098) & 0.773(0.083) \tabularnewline \hline 18 & 20 & \textbf{1(0)} & 0.627(0.246) & 0.890(0.180) & 0.705(0.100) & 0.868(0.213) \tabularnewline \hline \end{tabular} \par\end{centering} \end{onehalfspace} \caption{High dimensional experiments with $K=3$ and $N=90$ } \label{high dim} \end{table*} \par\end{center} \subsection{Selection of the number of clusters} \label{selec} One advantage of our Bayesian approach is that the clustering uncertainty can be quantified from the converged MCMC sample. For each pair of objects $i$ and $j$, by counting the number of times they share a cluster label in the sample, we can estimate their pairwise co-clustering probability, which indicates how likely it is that objects $i$ and $j$ come from the same cluster. Repeating this counting process for all pairs of objects, we obtain an $N$-by-$N$ pairwise co-clustering probability matrix. This matrix is then used to draw a heatmap, from which the cluster structure can be easily visualized. The above heatmap method provides an intuitive way to select the number of clusters. Take Case 7 in Section \ref{simu1} as an example. We ran Shared Clustering with $K=2, 3, 4$ separately until convergence. The corresponding heatmaps are drawn in Fig. \ref{heatmaps} with the help of the R package ``pheatmap'' \citep{pheatmap}. From the heatmaps, one can conclude that $K=3$ is the best choice in this case, because $K=3$ gives a clearer cluster structure. When $K$ is set too big, as in the $K=4$ case, some cluster in the $K-1$ heatmap is forced to break, but which cluster breaks is uncertain, resulting in blurry bars in the $K$ heatmap.
When $K$ is set too small, as in the $K=2$ case, some clusters in the $K+1$ heatmap are forced to merge, but which clusters merge is uncertain, resulting in a non-homogeneous block in the $K$ heatmap. \begin{figure*} \begin{centering} \includegraphics[scale=0.3]{\string"K2heatmap\string".pdf} \includegraphics[scale=0.3]{\string"K3heatmap\string".pdf} \includegraphics[scale=0.3]{\string"K4heatmap\string".pdf} \par\end{centering} \centering{} \caption{Heatmaps from different numbers of clusters (from left to right: $K=2, 3, 4$), where $K=3$ is the true number} \label{heatmaps} \end{figure*} \subsection{Prior sensitivity of the network parameters} \label{sensi} We conducted a sensitivity analysis for the prior setting of the network parameter $\psi$, which follows a Beta distribution with shape parameters $\beta_1$ and $\beta_2$. Instead of the prior setting mentioned in Section \ref{prior}, here we tested the uniform prior with $\beta_1=\beta_2=1$. Cases 10-12 in Section \ref{simu2} were re-run and the results are shown in Table \ref{sensitivity analysis}. The performance indicates that Shared Clustering behaves stably under different Beta priors. \begin{center} \begin{table} \begin{onehalfspace} \noindent \begin{centering} \begin{tabular}{|c|c|c|} \hline \textbf{Case} & Original & \textbf{$\beta_1=\beta_2=1$}\tabularnewline \hline 10 & 0.911(0.047) & 0.967(0.032) \tabularnewline \hline 11 & 0.884(0.058) & 0.877(0.058) \tabularnewline \hline 12 & 0.833(0.066) & 0.846(0.069) \tabularnewline \hline \end{tabular} \par\end{centering} \noindent \centering{} \caption{Clustering performance on case 10-12 with $\beta_1=\beta_2=1$. The mean (sd) ARI of each case is calculated from 10 independent trials} \label{sensitivity analysis} \end{onehalfspace} \end{table} \par\end{center} \section{Real data experiment} \label{reald} We tested our algorithm on a real gene dataset from \citet{gunnemann2010subspace}.
The original processed data in \citet{gunnemann2010subspace} contain 3548 genes with gene interactions as edges; each gene has 115 gene expression values, so the dimension of the vectorial data is 115. \citet{gunnemann2010subspace} used GAMer to detect multiple subnetworks from the whole large complex network. We aim to check whether the subnetworks from GAMer are clearly supported by both the vectorial data and the network data. The selected genes are listed in Table \ref{Real clusters}, and the gene IDs are from \citet{gunnemann2010subspace}. \noindent \begin{center} \begin{table*} \begin{onehalfspace} \noindent \begin{centering} \begin{tabular}{|c|ccccccccccc|} \hline Subnetwork 1 & 52 & 202 & 233 & 399 & 458 & 320 & 1078 & 1110 & 731 & 1345 & 1392\tabularnewline & 2096 & 1458 & 2432 & 2132 & 1384 & 3423 & 1702 & & & & \tabularnewline Subnetwork 2 & 352 & 337 & 391 & 398 & 410 & 460 & 485 & 411 & 1127 & 1213 & 1653\tabularnewline Subnetwork 3 & 285 & 614 & 672 & 702 & 885 & 1117 & 2617 & 3382 & 3438 & & \tabularnewline \hline \end{tabular} \par\end{centering} \noindent \centering{} \caption{Selected genes for clustering (gene ID identical to the data given by \citet{gunnemann2010subspace}) } \label{Real clusters} \end{onehalfspace} \end{table*} \par\end{center} Due to the missing values contained in the original vectorial data, all dimensions containing missing values for the 40 selected genes are discarded, resulting in a 19-dimensional data set. However, 19 is still larger than the number of genes in any of the three subnetworks, which leaves the GMM ill-conditioned \citep{fraley2002model}. We thus employed PCA to reduce the dimension of the vectorial data while maintaining most of the variation in the data. The scree plot \citep{mardia1979multivariate} of the PCA is shown in Fig. \ref{Screeplot}. According to the plot, we chose the first four principal components as the final vectorial data.
\noindent \begin{center} \begin{figure}[H] \noindent \begin{centering} \includegraphics[scale=0.5]{Screeplot} \par\end{centering} \noindent \centering{} \caption{Scree plot of the principal components calculated from the 19 non-missing dimensions of the selected 40 genes} \label{Screeplot} \end{figure} \par\end{center} The interactions between genes are originally directed. We converted the directed graph to an undirected one by simply treating any existing link as an edge, namely $y_{ij}=1$ when there is an edge either from $i$ to $j$ or from $j$ to $i$. The processed undirected network is displayed in Fig. \ref{Real network}. \noindent \begin{center} \begin{figure}[H] \noindent \begin{centering} \includegraphics[scale=0.5]{\string"Real_network\string".pdf} \par\end{centering} \noindent \centering{} \caption{Network of the selected genes for clustering (gene ID identical to the data given by \citet{gunnemann2010subspace})} \label{Real network} \end{figure} \par\end{center} After running our algorithm with $K=2,3,4$, we used the heatmap approach introduced in Section \ref{selec} to determine the most reasonable $K$. The best number of clusters turned out to be three, which is consistent with \citet{gunnemann2010subspace}. Under $K=3$, the clustering result from Shared Clustering fully confirmed (ARI=1) the subnetwork memberships of the 40 genes listed in Table \ref{Real clusters}. In many real data problems, the dimension of the vectorial data is larger than the number of objects, which makes it difficult to fit the GMM part of our model. In this example, we used PCA to reduce the dimension while trying to maintain most of the variation in the data. Other methods are also possible. For example, one can introduce a shrinkage estimator or assume a certain sparsity structure when estimating the covariance matrix. One can also perform variable selection during clustering \citep{Raftery2006selection}.
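The dimension-reduction step described above can be sketched with scikit-learn. This is an illustrative sketch with placeholder data, not the authors' pipeline; the actual input is the 40-gene by 19-dimension expression matrix.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
expr = rng.normal(size=(40, 19))  # placeholder for the 40 x 19 expression matrix

pca = PCA(n_components=4)         # keep the first four principal components,
scores = pca.fit_transform(expr)  # as suggested by the scree plot

print(scores.shape)               # (40, 4)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```

The `scores` matrix then plays the role of $\boldsymbol X$ in the Normal-Bernoulli model, with $q=4 < 40$ so the GMM covariance estimates are well-conditioned.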
\section{Summary and discussion} \label{summa} In this paper, we introduced a new probabilistic integrative clustering method that clusters vectorial and relational data simultaneously. We introduced the Shared Clustering model within a general framework and also provided a specific Normal-Bernoulli model. A Gibbs sampling algorithm is provided to perform the Bayesian inference. We ran extensive simulation experiments to test the performance of Shared Clustering by controlling various factors such as cluster size, number of clusters, noise level of the network data, and shape and dimension of the vectorial data. At the same time, a model selection approach using the MCMC sample is discussed. Finally, a gene subnetwork data set was employed to demonstrate the applicability of the method in the real world. The new joint probabilistic model makes more efficient use of the available information and thus shows better clustering performance. Although we mainly considered undirected graphs for the network data in this paper, the SBM can handle directed graphs by simply relaxing the symmetry requirement for the adjacency matrix $\boldsymbol Y$ and the probability matrix $\boldsymbol \Psi$. In this case, the edge variables $\mathbf{y}_{ij}$ and $\mathbf{y}_{ji}$ are modeled as independent, and both the upper and lower triangles of $\boldsymbol Y$ are used. The number of parameters in $\boldsymbol \Psi$ increases from $K(K+1)/2$ to $K^2$. Moreover, the edge variables can be extended beyond binary ones. For instance, $\mathbf{y}_{ij}$ can be Poisson variables when modeling count-weighted graphs as in \citet{mariadassou2010uncovering}, or Normal variables if the network data are continuous. Similar logic can be applied to the vectorial data part. The vectorial data can be continuous, discrete or even of mixed type. For continuous and discrete types, the distribution assumption can be chosen accordingly.
As for mixed type, for instance, the vector can be $\mathbf{x}_{i}=(\mathbf{x}_{i1},\mathbf{x}_{i2})^T$ where $\mathbf{x}_{i1}$ is a vector with continuous data and $\mathbf{x}_{i2}$ is a vector with discrete data. Then the distribution of the vectorial data $f(\cdot)$ mentioned in Section \ref{probl} is the joint distribution of the two random vectors. If $\mathbf{x}_{i1}$ and $\mathbf{x}_{i2}$ are independent, $\mathbf{x}_{i1}\sim f_1(\cdot)$ and $\mathbf{x}_{i2}\sim f_2(\cdot)$, we will have $f(\cdot)=f_1(\cdot)f_2(\cdot)$. In summary, as stated in Section \ref{probl}, depending on the observed types of data available, Shared Clustering may handle different combinations of $f(\cdot)$ and $g(\cdot)$. Since this paper mainly studied one specific model, the Normal-Bernoulli model, the performance of Shared Clustering under the other distributions for network and vectorial data mentioned above remains to be examined. Besides the method of selecting the number of clusters $K$ that we discussed in Section \ref{selec}, future studies are also needed to conduct model selection in a more principled way. For example, $K$ may be treated as a random variable and sampled by Reversible-Jump MCMC \citep{green1995reversible}. \citet{mcdaid2013improved} and \cite{friel2013bayesian} have proposed faster techniques that tackle this issue while avoiding the computationally expensive Reversible-Jump MCMC. Also, in some situations, users may need to solve the label switching problem \citep{jasra2005markov} in the MCMC sample. The technique developed in \citet{li2014pivotal} can be used to tackle this. In our current model, we independently model vectorial data and network data given the cluster labels, so there is no trade-off between $\boldsymbol X$ and $\boldsymbol Y$. However, under certain circumstances, if one has subjective knowledge of how the two parts should be weighted, a tuning parameter can be introduced to control the contribution of the two types of data.
Specifically, letting $\eta$ be the tuning parameter, the joint posterior in Equation \eqref{full_post} can be re-written as a weighted one: \begin{equation}\begin{split}p(&\boldsymbol P,\boldsymbol C,\boldsymbol \Phi,\boldsymbol \Psi|\boldsymbol X,\boldsymbol Y, \eta)\\&\propto p(\boldsymbol X, \boldsymbol \Phi|\boldsymbol C)^{\eta}p(\boldsymbol Y, \boldsymbol \Psi|\boldsymbol C)^{1-\eta}p(\boldsymbol C, \boldsymbol P).\end{split}\label{weighted_likelihood}\end{equation} Note that $\eta$ is a pre-specified tuning parameter, not to be treated as a random variable in the Bayesian inference. Further study of this kind of extension of the Shared Clustering model is also needed. In our model, we assume that both the vectorial data and the network data share the same clustering labels, which is the basis for performing joint clustering. In reality, we may not know whether this assumption holds. Strict testing of this assumption is still an open problem. In practice, we can compare the two clusterings produced by the individual data types using the ARI. If the ARI is too small, we should doubt the assumption and avoid jointly modeling the two data sets. \begin{acknowledgements} This research is partially supported by two grants from the Research Grants Council of the Hong Kong SAR (Project No. CUHK 400913 and 14203915). The R codes and supplementary documents in this paper are available at \\ \url{https://github.com/yunchuankong/SharedClustering}. \end{acknowledgements}
\section{Introduction} Among the gamut of approaches whose goal is a quantum theory of gravity, some clearly stand out, namely, those entailing a modification of the dispersion relation \cite{[1]}. These ideas appear in several models, for instance, quantum--gravity approaches based upon non--commutative geometry \cite{[2], [3]}, or loop--quantum gravity models \cite{[4], [5]}, etc. In them, Lorentz symmetry becomes only an approximation for quantum space \cite{[6], [7], [8]}, and they entail modifications of some fundamental physical concepts, such as the uncertainty principle \cite{[9]}. The quest for quantum--gravity effects is not restricted to the case of dispersion relations. Indeed, the Dirac equation can also be employed \cite{[10]} in this context, and hence we may look for the consequences of this sort of model in the equation of motion of spin--1/2 particles. For instance, the spreading of a wave packet is modified, and in principle it is possible to detect effects induced by a quantum--gravity theory by monitoring this parameter; alternatively, as already shown \cite{[10]}, Larmor precession presents a novel dependence, so the angular velocity of the expectation values of the spin components allows us to test our modified Dirac equation. In the present work the consequences of a deformed dispersion relation upon the roots of the so--called degree of coherence function \cite{[11]} are analyzed. This will be done for the case of a beam comprising two different frequencies. At this point it is worth noting that a work \cite{[12]} already exists containing a qualitative analysis of the modifications emerging in the interference pattern of a Michelson device. In the aforementioned work the modifications in the phase shifts have been studied; nevertheless, the changes in the roots of the degree of coherence function have not yet been addressed.
The conditions to be satisfied in order to detect this kind of quantum--gravity effect will also be deduced. Finally, some words will be said concerning possible future work in the realm of deformed dispersion relations and their detection resorting to higher--order coherence effects (such as the so--called Hanbury--Brown--Twiss effect \cite{[11]}), or to non--monochromatic light sources, a case that, strangely, has not yet been seriously considered, and one that could shed some light upon the present issue. \bigskip \bigskip \section{Degree of coherence function and deformed dispersion relations} \bigskip \bigskip As already mentioned above, several quantum--gravity models predict a modified dispersion relation \cite{[1],[2], [3], [4], [5]}, which can be characterized, phenomenologically, through corrections hinging upon Planck's length, $l_p$, \begin{equation} E^2 = p^2\Bigl[1 - \alpha\Bigl(El_p\Bigr)^n\Bigr]. \label{Disprel1} \end{equation} Here $\alpha$ is a coefficient, usually of order 1, whose precise value depends upon the considered quantum--gravity model, while $n$, the lowest power in Planck's length leading to a non--vanishing contribution, is also model dependent. Casting (\ref{Disprel1}) in ordinary units we have \begin{equation} E^2 = p^2c^2\Bigl[1 - \alpha\Bigl(E\sqrt{G/(c^5\hbar)}\Bigr)^n\Bigr]. \label{Disprel2} \end{equation} The expression \begin{equation} p =\hbar k, \label{Mom1} \end{equation} leads us to \begin{equation} k =\frac{E/(c\hbar)}{\Bigl[1 - \alpha\Bigl(E\sqrt{G/(c^5\hbar)}\Bigr)^n\Bigr]^{1/2}}. \label{k1} \end{equation} Since we expect very tiny corrections, the following expansion is justified \begin{equation} k =\frac{E}{c\hbar}\Bigl[1 + \frac{\alpha}{2}\Bigl(E\sqrt{G/(c^5\hbar)}\Bigr)^n + \frac{3}{8}\alpha^2\Bigl(E\sqrt{G/(c^5\hbar)}\Bigr)^{2n}+...\Bigr].
\label{k2} \end{equation} Let us now consider two beams with energies $E_1$ and $E_2$, respectively, such that $E_2 = E_1 + \Delta E$; in addition, it will be assumed that they interfere in a Michelson device \cite{[11]}. As is already known, each frequency produces an interference pattern, and at this point it will be supposed that the corresponding beat frequency is too high to be detected \cite{[11]}, i.e., the output intensity is obtained by adding the intensities associated with each frequency contained in the input. Under these conditions the measured intensity reads \begin{equation} I = I_1\Bigl[1 + \cos(\omega_1\tau_1)\Bigr] + I_2\Bigl[1 + \cos(\omega_2\tau_2)\Bigr]. \label{Intensity1} \end{equation} In this last expression $I_1$ and $I_2$ denote the intensities of the two beams, $\omega_1$, $\omega_2$ the corresponding frequencies, and \begin{equation} \tau_1 =2d/c_1, ~~\tau_2 =2d/c_2. \label{Time} \end{equation} Here $d$ is the difference in length between the two interferometer arms, and $c_1$, $c_2$ the corresponding velocities; since the velocity has a non--trivial energy dependence \cite{[1]}, $c_1 \not=c_2$. From now on we will assume that $I_1 = I_2$, such that $I_0=I_1+ I_2$; therefore the detected intensity can be cast in the following form \begin{equation} I = I_0\Bigl[1 + \gamma(d)\Bigr]. \label{Intensity2} \end{equation} In this last equation the so--called degree of coherence function $\gamma(d)$ has been introduced \cite{[11]}, which for our situation reads ($k_1$ and $k_2$ are the corresponding wave numbers) \begin{equation} \gamma(d)= \cos\Bigl([k_1 + k_2]d/2\Bigr)\cos\Bigl([k_1 - k_2]d/2\Bigr). \label{Degree1} \end{equation} A fleeting glimpse at (\ref{k1}) clearly shows that (\ref{Degree1}) depends upon $\alpha$ and $n$, and in consequence the roots of the degree of coherence function will be modified by the presence of a deformed dispersion relation.
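To get a feel for the size of the correction in (\ref{k2}), one can evaluate the dimensionless factor $E\sqrt{G/(c^5\hbar)}$, which is just $E$ in units of the Planck energy. A rough numerical sketch (the photon energy of 2~eV and the choices $\alpha=1$, $n=1$ are illustrative assumptions, not values from the text):

```python
from math import sqrt

# Physical constants in SI units (rounded)
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
hbar = 1.055e-34   # reduced Planck constant
eV = 1.602e-19     # joules per electron-volt

E = 2.0 * eV                         # an assumed visible-light photon energy
ratio = E * sqrt(G / (c**5 * hbar))  # E in units of the Planck energy

alpha, n = 1.0, 1                    # assumed model parameters
correction = 0.5 * alpha * ratio**n  # leading relative shift of k in (k2)
print(correction)  # roughly 1e-28: a minuscule fractional change in k
```

The smallness of this number is the quantitative origin of the stringent conditions derived below.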
The expression providing the roots of the degree of coherence function is \begin{equation} \Bigl(k_1 - k_2\Bigr)d/2 = \pi/2. \label{Root1} \end{equation} Resorting to our previous expressions we may rewrite (\ref{Root1}) as \begin{eqnarray} d = c\hbar\pi\Bigl\{\Delta E + \frac{\alpha}{2}E_1\Bigl(E_1\sqrt{G/(c^5\hbar)}\Bigr)^n\nonumber\\ \times\Bigl[(n+1)\frac{\Delta E}{E_1} +\frac{n(n+1)}{2}\Bigl(\frac{\Delta E}{E_1}\Bigr)^2 +... \Bigr]\Bigr\}^{-1}. \label{Root12} \end{eqnarray} Let us now define $\beta = \Delta E/E_1$, a real number smaller than 1. In the present proposal we will consider two possible values of $n$, namely: \bigskip \bigskip \subsection{Case n= 1} \bigskip \bigskip In this situation the roots of the degree of coherence function become, approximately, \begin{eqnarray} d = \frac{c\hbar\pi}{E_1}\Bigl\{\beta -\frac{\alpha}{2}\Bigl(E_1\sqrt{G/(c^5\hbar)}\Bigr) \Bigl[2 + \beta\Bigr]\Bigr\}. \label{Root3} \end{eqnarray} For the sake of clarity let us assume that $\alpha\sim 1$, a restriction that is not devoid of physical content \cite{[1]}. The possibility of detecting this deformed dispersion relation hinges upon the fulfillment of the condition \begin{eqnarray} \vert D - d\vert > \Delta d. \label{Exp1} \end{eqnarray} In this last equation $D$ denotes the usual value of the difference in the interferometer arms at which the degree of coherence function vanishes (that is, when $\alpha =0$), whereas $\Delta d$ is the corresponding experimental resolution. This can be cast in the following form \begin{eqnarray} \frac{\Delta E}{E_1}> \frac{2\Delta d}{\pi l_p} - 1. \label{Exp2} \end{eqnarray} Recall that from square one it was assumed that our device cannot detect the beat frequencies, i.e., if $T$ denotes the time resolution of the measuring device, then \begin{eqnarray} \vert\omega_2 - \omega_1\vert T/2>>1. \label{Exp3} \end{eqnarray} This last condition may be rewritten as \begin{eqnarray} T\Delta E>\hbar.
\label{Exp4} \end{eqnarray} In other words, (\ref{Exp2}) and (\ref{Exp4}) are the two conditions to be fulfilled if the case $n=1$ and $\alpha \sim 1$ is to be detected. \bigskip \bigskip \subsection{Case n= 2} \bigskip \bigskip Under these conditions ($n = 2$ and $\alpha \sim 1$) the roots of the degree of coherence function read, approximately, \begin{eqnarray} d = \frac{c\hbar\pi}{E_1}\Bigl\{\beta -\frac{\alpha}{2}\Bigl(E_1\sqrt{G/(c^5\hbar)}\Bigr)^2\Bigl[3 + 3\beta+ \beta^2\Bigr]\Bigr\}. \label{Root4} \end{eqnarray} The expression tantamount to (\ref{Exp2}) is \begin{eqnarray} \Bigl\{3 + 3\Bigl(\frac{\Delta E}{E_1}\Bigr) + \Bigl(\frac{\Delta E}{E_1}\Bigr)^2\Bigr\}E_1> 2\frac{c\hbar\Delta d}{\pi l^2_p}. \label{Exp5} \end{eqnarray} The impossibility of detecting beat frequencies translates, once again, into \begin{eqnarray} T\Delta E>\hbar. \label{Exp6} \end{eqnarray} \bigskip \bigskip \section{Conclusions} \bigskip \bigskip In the present work the possibility of detecting two different deformed dispersion relations, resorting to the analysis of the roots of the degree of coherence function, has been examined. The impossibility of detecting beat frequencies yields only one condition, see expressions (\ref{Exp4}) and (\ref{Exp6}). The experimental difficulty appears in connection with (\ref{Exp2}) and (\ref{Exp5}), which are the restrictions to be satisfied in order to detect this kind of effect. Indeed, since we have, from square one, imposed the condition $\frac{\Delta E}{E_1} <1$, (\ref{Exp2}) entails a very stringent restriction, namely an experimental resolution very close to Planck's length, i.e., $\Delta d\sim l_p$. The case $n=2$ becomes even worse, as usual \cite{[1]}. A rough estimate of the required energy, for the case in which $\Delta d\sim 10^{-4}$cm, yields energies higher than the so--called GZK limit for cosmic rays \cite{[13]}.
A fleeting glimpse at (\ref{Exp5}) clearly shows that in this case the problem stems from the presence of the factor $l^2_p$. The conclusions drawn from the previous analysis are not very optimistic, and in consequence we may wonder whether interferometry could be useful in the present context. In order to address this issue let us recall that in the extant literature the experimental proposals considered reduce to the case of first--order coherence experiments \cite{[12]} (the Michelson interferometer falls within this category \cite{[11]}), or they consider only a finite number of monochromatic sources \cite{[12]}. At this point it is noteworthy that optical experiments offer a much richer realm of possibilities. Indeed, higher--order coherence effects, for instance the so--called Hanbury--Brown--Twiss effect \cite{[11]}, could also be considered and explored within the present context, or the effects of a deformed dispersion relation upon a light source with a continuous frequency distribution could be studied. The results of the analysis of the aforementioned proposals will be published elsewhere. \begin{acknowledgments} We dedicate the present work to Michael Ryan on the occasion of his $60^{th}$ birthday. This research was supported by CONACYT Grant 42191--F. A. C. would like to thank A.A. Cuevas--Sosa for useful discussions and literature hints. \end{acknowledgments}
\section{\label{intro} Introduction} The conversion of magnetic field energy into internal energy of plasmas in weakly collisional or collisionless plasmas is not fully understood and has important implications for space plasma and astrophysical plasma phenomena~\citep{parashar2015}. A magnetic field in a non-potential form can serve as a source of energy for plasma heating and/or particle acceleration in numerous space, solar and astrophysical processes such as coronal heating~\citep{ballegooijen86,parker88,peera2015}, solar flares~\citep{li2017,Chen2020,Fu2020}, solar wind acceleration~\citep{pontieu07}, and flares in relativistic jets \citep{Zhang2015,Guo2015,Comisso2018,Zhang2018,Zhang2020}. These events involve multiple phenomena operating on a wide range of scales. At large scales, the plasma behaves like a fluid, and nonlinear interactions such as turbulence can cause an energy cascade toward small scales where kinetic physics becomes important~\citep{leamon98,gary2004,sahraoui2009,chandran10,wan2012,perri2012,tenBarge2013}. While the overall energy conversion rate can be controlled by turbulence at large scales~\citep{wu13,peera2015}, as-yet-unclear mechanisms at kinetic scales determine the proportion of energy transferred to ions or electrons. Ions gain much more energy than electrons in high-energy turbulent astrophysical systems~\citep{zhdankin2019}. Changing the level of fluctuation at kinetic scales changes the dominant heating mode: for a higher amplitude of fluctuation, ions gain energy more effectively, while a lower amplitude supports more electron heating~\citep{gary16,hughes17,shay2018}. A number of recent works have searched for the responsible mechanisms and the locations of the energy conversion by using statistics of the energy transfer rate per unit volume from electromagnetic energy to plasma energy, ${\pp J}\cdot{\pp E}$, where ${\pp J}$ is the current density and ${\pp E}$ is the electric field.
Some previous studies explored how different components of ${\pp J}\cdot{\pp E}$ contribute to the total conversion rate. For example, one can consider the perpendicular component of ${\pp J}$ as the sum of multiple currents due to the anisotropic and non-uniform pressure tensor, such as the grad-B current, the curvature current, and the polarization current~\citep{Dahlin2014,li2017,li2018,Li2019}. In the case of large-scale magnetic reconnection, ${\pp J}\cdot{\pp E}$ from the curvature current dominates. In the case of decaying turbulence with no dominant magnetic reconnection site, the currents are also intermittent and form sheet-like structures~\citep{camporeale2018}. For this case, the main contribution to ${\pp J}\cdot{\pp E}$ comes from the interactions between particles and the parallel component of the electric field~\citep{wan2012,makwana2017}. Another line of work has considered the conversion into internal energy due to the interaction between the pressure tensor and gradient of the components of bulk flows \citep{Yang2017,Yang2017PRE,Du2018,Du2020}. Examination of the spatial distribution, species contributions, and directional components of ${\pp J}\cdot{\pp E}$ is useful for understanding how a magnetic field loses its energy to particles. Furthermore, we point out the utility of decomposing ${\pp E}$ into irrotational (${\pp E}_{\rm ir}$) and solenoidal (divergence-free, ${\pp E}_{\rm so}$) components. The irrotational component arises from the charge separation between ions and electrons. Consider the rates of change of magnetic field and electric field energy according to Maxwell's equations: \begin{eqnarray} \frac{\partial}{\partial t}\left(\frac{B^2}{2\mu_0}\right) &=& -\frac{1}{\mu_0}{\bf B}\cdot\nabla\times{\bf E} \label{eq:B2} \\ \frac{\partial}{\partial t}\left(\frac{\epsilon_0E^2}{2}\right) &=& \frac{1}{\mu_0}{\bf E}\cdot\nabla\times{\bf B}-{\bf J}\cdot{\bf E}. 
\label{eq:E2} \end{eqnarray} The first term on the right hand side of Equation~\ref{eq:E2} has the same spatial average as the negative of the right hand side of Equation~\ref{eq:B2}, which represents the rate of conversion of magnetic energy to electric energy. From Equation~\ref{eq:B2}, the irrotational component does not contribute to the time evolution of the magnetic field. In that sense, it cannot exchange energy with the magnetic field. Only the solenoidal component ${\pp E}_{\rm so}$ can directly drain energy from the magnetic field. Figure~\ref{fig:conversion} shows an illustration of how overall energy is converted between the magnetic field, electric field and particle kinetic energy (of both bulk flow and thermal energy). The energy density conversion rate between the magnetic field and the electric field is ${\pp B}\cdot\nabla\times{\pp E}_{\rm so}/\mu_0$, while ${\pp J}\cdot{\pp E}$ is the energy density conversion rate between the electric field and particles. The characteristics of ${\pp E}_{\rm so}$ control the energy conversion between the magnetic field and the electric field. As will be discussed shortly, our simulation results show that the energy density in the electric field remains low with a very slow rate of change, compared to the magnetic energy or kinetic energy. Furthermore, we find that the spatial average $\langle{\bf J\cdot E_{ir}}\rangle$ is very low. Therefore, the net energy flow is from the magnetic field through the solenoidal electric field energy and then, at about the same rate, through to the particle kinetic energy via ${\bf J\cdot E_{so}}$. In other words, ${\bf E_{so}}$ controls the flow of magnetic energy to particle kinetic energy, with a spatial distribution marked by ${\bf J\cdot E_{so}}$. The analysis of ${\pp J}\cdot{\pp E}$ from previous studies generally included a major contribution from ${\pp E}_{\rm ir}$, which may obscure the net energy-conversion mechanism and spatial distribution. 
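On a periodic domain, the decomposition of the in-plane electric field into ${\pp E}_{\rm ir}$ and ${\pp E}_{\rm so}$ can be carried out in Fourier space by projecting each mode onto its wavevector. The following NumPy sketch illustrates this standard projection (the function and variable names are ours, not part of the simulation code); note that in a 2.5D geometry $E_y$ has no $y$ variation and is therefore automatically solenoidal.

```python
import numpy as np

def helmholtz_decompose(Ex, Ez, Lx, Lz):
    """Split a periodic in-plane field into irrotational and solenoidal
    parts: for each Fourier mode, E_ir = k (k . E)/k^2 and E_so = E - E_ir."""
    nx, nz = Ex.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    kz = 2 * np.pi * np.fft.fftfreq(nz, d=Lz / nz)
    KX, KZ = np.meshgrid(kx, kz, indexing="ij")
    k2 = KX**2 + KZ**2
    k2[0, 0] = 1.0                          # the k = 0 mode has no gradient part
    Fx, Fz = np.fft.fft2(Ex), np.fft.fft2(Ez)
    proj = (KX * Fx + KZ * Fz) / k2         # (k . E)/k^2 for each mode
    Gx, Gz = KX * proj, KZ * proj           # irrotational (curl-free) part
    Gx[0, 0] = Gz[0, 0] = 0.0
    Ex_ir, Ez_ir = np.fft.ifft2(Gx).real, np.fft.ifft2(Gz).real
    return (Ex_ir, Ez_ir), (Ex - Ex_ir, Ez - Ez_ir)
```

By construction the first pair has zero curl and the second zero divergence, up to floating-point roundoff.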
\begin{figure}[ht] \begin{center} \includegraphics[width=0.7\textwidth]{scheme.png} \caption{\small Schematic of the net energy transfer between the magnetic field, the electric field, and particles via $\langle {\pp B}\cdot\nabla\times{\pp E}_{\rm so}/\mu_0\rangle$ and $\langle {\pp J}\cdot{\pp E}\rangle$. According to our simulation results, for decaying 2.5D turbulence the two net energy transfer rates are nearly equal, and the net transfer $\langle{\bf J\cdot E}\rangle$ is dominated by the solenoidal component $\langle{\bf J\cdot E_{so}}\rangle$. } \label{fig:conversion} \end{center} \end{figure} In this work, we investigate how the magnetic energy is converted into the kinetic energy of particles. We run particle-in-cell simulations of 2.5-dimensional (2.5D) decaying turbulence with an initially uniform guide magnetic field. Studying decaying turbulence allows us to follow the evolution of turbulence without the strong influence of the turbulent energy input (forcing) that would have to be added {\it ad hoc} to maintain a steady state. Simulations in 2.5D make efficient use of computational resources, particularly for the case of a strong guide field, where the turbulent dynamics involve stronger spatial variations along the two directions perpendicular to that field \citep{montgomery82}, i.e., they have a quasi-2D nature. We analyze the behavior of ${\pp J}\cdot{\pp E}$ for ions and electrons separately. We {use Helmholtz decomposition~\citep{kida1992,yang2021} to} separate the electric field into the irrotational component and the solenoidal component. We focus on how the solenoidal component drains energy out of the magnetic field and determine which particle species gain energy from this component. We consider multiple magnitudes of the guide field and observe how the energy conversion is modified. We find that the simulations have two phases. In the early phase, magnetic islands are formed.
In the later phase, these magnetic islands merge. As plasma dynamics in the later phase is less sensitive to the initial conditions and more closely related to astrophysical applications, we focus on the energy transfer in this phase. In the later phase, the interaction between particles and the parallel component of the solenoidal electric field is the main mechanism to drain energy out of the magnetic field in the case of a strong guide field. The interaction is strongly localized, and we find that three good indicators of the spatial locations are 1) the ratio of the out-of-plane electric field to the in-plane magnetic field, 2) the out-of-plane component of the non-ideal electric field, and 3) the magnitude of the estimate of current helicity. \section{\label{model} Simulation Setup} Simulations presented here were performed using VPIC~\citep{bowers2008}, which is an explicit particle-in-cell (PIC) code that has been used to study magnetic reconnection \citep{daughton2011,Guo2020a,Guo2020b}, turbulence \citep{wan2015}, and particle energization \citep{Guo2014,Guo2015,Guo2019,Li2019b}. The simulations are 2.5D with $y$ as the ignorable coordinate. The initial magnetic field is the sum of a uniform guide magnetic field and an additional in-plane magnetic field in the form of several Fourier modes with a certain range of wavenumbers: \begin{equation} {\pp B}(x, z) = B_0\hat{y}+ {\pp \delta b} (x, z), \end{equation} where \begin{equation} {\pp \delta b}(x,z)= \sum_{n=1}^{8}\sum_{m=1}^{8} {\pp b}(n,m)e^{i(2\pi n x/L+2\pi m z/L+\phi_{\rm mn})}, \end{equation} $B_0\hat{y}$ is the uniform guide field and $L$ is the dimension of the cubical box. The wavenumber components along the $x$ and $z$ directions vary from $2\pi/L$ to $16\pi/L$. The phases $\phi_{\rm mn}$ are randomized. The Fourier coefficient ${\pp b}(n,m)$ is initially in the $x$-$z$ plane and is perpendicular to the corresponding wavenumber to maintain $\nabla\cdot {\pp B}=0$. 
The magnitude of ${\pp b}(n,m)$ is uniform and is set to give $\langle {\pp \delta b}^2\rangle=b_0^2$, where $b_0$ is the unit of the magnetic field in the simulations. This form of the magnetic field is associated with plasma current along the $+y$ and $-y$ directions and contains a total energy available for energy conversion equal to $(\langle B^2 \rangle-B^2_0)L^3/(2\mu_0)=b^2_0 L^3/(2\mu_0)$. There is no initial electric field and the plasma pressure is uniform. Therefore, in its early state, the forces in the plasma are strongly unbalanced. The plasma consists of ion-electron pairs with the ion-to-electron mass ratio $m_i/m_e$ of $25$ or $100$. The initial particle distributions are Maxwellian with a uniform density $n_0$ and a uniform temperature for ions and electrons. The initial electron thermal speed is $0.2c$. The initial bulk flow velocities of ions and electrons give a net-zero mass flow but provide the current density ${\pp J}=\nabla\times{\pp B}/\mu_0$. The unit of time is the nominal inverse ion cyclotron frequency for the in-plane magnetic field, $\Omega^{-1}_{\rm i}=m_{\rm i}/(q_{\rm i} b_0)$, and the unit of length is the initial ion inertial length $d_i = c/\omega_{\rm pi}$, where $\omega_{\rm pi}$ is the initial ion plasma frequency. The initial density $n_0$ is set so that the initial electron plasma frequency is $\omega_{\rm pe}=\Omega_{\rm e}=q_{\rm e}b_0/m_{\rm e}$. For $m_i/m_e=25$, we ran simulations with $B_0/b_0=0.1$, $0.5$, $1$ and $2$. For $m_i/m_e=100$, we ran a single simulation with $B_0/b_0=2$. Therefore, the initial values of plasma beta range from $0.016$ to $0.08$. The domain size is $L\times L \times L$ = $102.4 d_{\rm i}\times 102.4 d_{\rm i} \times 102.4 d_{\rm i}$. The resolution of the simulations is $N_x \times N_y \times N_z = 2048\times 1 \times 2048$ for $m_i/m_e =25 $ and $4096\times 1 \times 4096$ for $m_i/m_e =100$. The number of macroparticles for each species per cell is 400. Periodic boundary conditions are used.
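For concreteness, the initialization of the divergence-free in-plane fluctuation described above can be sketched as follows. This is an illustrative reconstruction, not the actual VPIC input deck; the choice of polarization vector, the phase handling, and the final rescaling to $\langle{\pp \delta b}^2\rangle=b_0^2$ are assumptions on our part.

```python
import numpy as np

def initial_fluctuation(N=256, L=102.4, b0=1.0, nmax=8, seed=0):
    """In-plane fluctuation db(x, z): a sum of Fourier modes with random
    phases, each polarized perpendicular to its wavevector in the x-z
    plane (so that div B = 0), rescaled to give <db^2> = b0^2."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, N, endpoint=False)
    X, Z = np.meshgrid(x, x, indexing="ij")
    dbx = np.zeros((N, N))
    dbz = np.zeros((N, N))
    for n in range(1, nmax + 1):
        for m in range(1, nmax + 1):
            kx, kz = 2 * np.pi * n / L, 2 * np.pi * m / L
            k = np.hypot(kx, kz)
            phase = kx * X + kz * Z + rng.uniform(0.0, 2.0 * np.pi)
            dbx += (-kz / k) * np.cos(phase)   # unit polarization (-kz, kx)/k
            dbz += (kx / k) * np.cos(phase)    # is perpendicular to (kx, kz)
    norm = b0 / np.sqrt(np.mean(dbx**2 + dbz**2))
    return norm * dbx, norm * dbz
```

Each mode satisfies ${\pp k}\cdot{\pp b}=0$ exactly, so the resulting field is divergence-free to machine precision.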
Initially, the magnetic fluctuation in all simulations contains a total energy of $5.4\times 10^5\, b^2_0 d^3_i/\mu_0$. Simulations were run until the magnetic free energy dropped to $\sim 12\%$ of the total energy from the initial magnetic fluctuation. All our reported results are for $m_i/m_e$ = 25 except where noted. \section{\label{result}Results} \begin{figure*}[ht] \begin{center} \includegraphics[width=1.0\textwidth]{jy.eps} \caption{\small The $y$-component of the current density at $\Omega_i t=0$, $4$, $150$, and $300$ from the simulations with $m_i/m_e = 25$ and $B_0/b_0=0.1$ (top) and $B_0/b_0=2$ (bottom).} \label{fig:jy} \end{center} \end{figure*} \subsection{Plasma Dynamics} Figure~\ref{fig:jy} shows the $y$-component of the current density, $j_y$, at $\Omega_i t=0$, $4$, $150$ and $300$ from the simulations with $B_0/b_0=0.1$ and $2$. The initial plasma configuration is unstable since the forces are extremely imbalanced. At this early time, the plasma flow is highly compressible as current structures contract to form magnetic islands. Figure~\ref{fig:div}(a) shows the compressible fraction of the ion flow energy $\langle U^2_{\rm i,ir} \rangle/\langle U^2_{\rm i}\rangle$, which is the ratio of the kinetic energy of the compressible (irrotational) ion flow $U_{\rm i,ir}$ to the total kinetic energy of the ion flow $U_{\rm i}$. This ratio quickly rises at the beginning and reaches a maximum value greater than 0.8 after a few ion cyclotron times in all simulations. Later, the ratio declines and remains fairly stationary after $\Omega_{\rm i}t=5$ (dashed line); we take this as the boundary time $\tau$ between the two phases. The current contraction also builds free energy in $B_y$ as shown in Figure~\ref{fig:div}(b). From the simulations, the magnitude of $B_y-B_0$ is high at the centers of the magnetic islands. After a few ion cyclotron times, the current within each flux rope contracts, reducing energy in the magnetic field.
Multiple major magnetic islands become obvious after the contraction of plasma currents inside each island as shown in Figure~\ref{fig:jy} at $\Omega_{\rm i}t=4$, $150$, and $300$. {The conditions at $t\lesssim\tau$, the island-forming phase, involve imbalanced forces that reflect the initial condition and are unlikely to appear in nature.} \begin{figure}[ht] \begin{center} \includegraphics[width=0.9\textwidth]{div_2figs.eps} \caption{\small (a) Compressible fraction of the ion flow energy $\langle U^2_{\rm i,ir}\rangle/\langle U^2_{\rm i}\rangle$ as a function of time. (b) Spatial average of the square of the magnetic fluctuation in the $y$ direction, i.e., the free energy of the guide magnetic field. Dashed line indicates time $\tau$ between the island-forming and decaying phases.} \label{fig:div} \end{center} \end{figure} At $t\gtrsim\tau$ {, the current-merging or decaying phase}, the forces are only weakly imbalanced, and the root-mean-square of current density reaches its maximum value at $\Omega_{\rm i}t=25$. {In this phase, multiple magnetic islands merge as the currents sharing the same direction attract one another. Before the merging of two islands is complete, a current sheet emerges between them and becomes a site for magnetic reconnection accompanied by a number of small islands.} This phase is more applicable to natural situations in space plasmas. For stronger $B_0$, there are more of these tiny islands; see the plots of $j_y$ at $\Omega_i t=150$ and $300$ in Figure~\ref{fig:jy}. The plasmoids emerging from the reconnection have short lifetimes, as they are destroyed by smaller-scale magnetic reconnection after colliding with the surrounding plasma.
{As the current merging continues, there are fewer sites for magnetic reconnection and associated plasma activity.} \subsection{Energy Conversion Profiles} \subsubsection{Overview} Figure~\ref{fig:energies} shows the magnetic fluctuation energy, electric field energy, total ion energy, total electron energy, and {total transferable energy} as time series from the simulations with $B_0/b_0=0.1$ and $2$. The electric energy is tiny and nearly constant, so it does not serve as a significant source or sink of energy; referring to Figure~\ref{fig:conversion}, ${\pp J}\cdot{\pp E}$ can therefore accurately represent both the electric and magnetic energy conversion rates per unit volume. When $B_0$ changes, the overall amount of energy gained by ions and electrons also changes. Electrons eventually gain more energy for higher $B_0$~\citep{gary16,hughes17}. In our 2.5D simulations, all quantities depend on $x$ and $z$ while $y$ is an ignorable coordinate. By Faraday's law, only the $y$ component of the electric field can modify the in-plane magnetic field, while the in-plane electric field can modify only $B_y$. Since the initial magnetic fluctuation is in the $x$-$z$ plane, the overall conversion of energy from the magnetic field is well represented by the integral of $J_yE_y$ over space and time. This integration includes magnetic energy lost during both the island-forming and the decaying phases. At $t\lesssim \tau$, energy in $B_x$ and $B_z$ drops via $E_y$ but the energy in $B_y$ increases. During the island-forming phase, $\langle J_yE_y\rangle$ is higher than $\langle {\pp J}\cdot{\pp E}\rangle$ because part of the energy from $\langle J_yE_y\rangle$ is used to build up the free energy $\langle(B_y-B_0)^2\rangle$ at the centers of the magnetic islands via the in-plane solenoidal electric field ${\pp E}_{{\rm so},xz}$. Therefore, $\langle J_yE_y\rangle$ must be less than $\langle {\pp J}\cdot{\pp E}\rangle$ during the decaying phase.
\begin{figure}[ht] \begin{center} \includegraphics[width=0.9\textwidth]{energies.eps} \caption{\small Time series of the total magnetic fluctuation energy, total electric energy, total ion energy, total electron energy, and { total transferable energy} for (a) $B_0/b_0=0.1$ and (b) $B_0/b_0=2$.} \label{fig:energies} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.9\textwidth]{e_energy_ratio_2figs.eps} \caption{\small Fraction of electric energy in the (a) $y$-component and (b) irrotational component as a function of time. Dashed line indicates time $\tau$ between island-forming and decaying phases.} \label{fig:e_energy} \end{center} \end{figure} Since the island-forming phase and decaying phase have distinct mechanisms of energy conversion, these can be analyzed separately. As the decaying phase is more applicable to astrophysical situations, we focus more on the energy conversion during this later phase of the simulation. We examine the mechanism that takes energy out of $B_y$ while $E_y$ continues to take energy out of the in-plane components of the magnetic field. Figure~\ref{fig:e_energy}(a) shows the ratio of the energy in $E_y$ to the total electric field energy at times up to $30 \Omega^{-1}_i$. As $B_0$ is increased, $E_y$ provides a lower energy fraction at $t\gtrsim \tau$. For $B_0/b_0=2$, the energy in $E_y$ is around $5\%$ of the total electric field energy at $t\gtrsim \tau$. Meanwhile, the energy in ${\pp E}_{\rm ir}$ can be as high as $80\%$ of the total electric field energy for $B_0/b_0=2$ at $t\gtrsim \tau$, as shown in Figure~\ref{fig:e_energy}(b). Since ${\pp E}_{\rm ir}$ provides a significant fraction of the electric energy, which even dominates at large $B_0$, a large portion of ${\pp J}\cdot{\pp E}$ represents energy exchange between ions and electrons without directly involving the conversion of magnetic energy.
The analysis of how ${\pp J}\cdot{\pp E}$ takes energy out of the magnetic field must therefore take the large contribution from ${\pp E}_{\rm ir}$ into account. \subsubsection{Island-forming Phase} First consider $\langle {\pp J}\cdot{\pp E}\rangle$ at $t<\tau$. We separately consider the energy transfer to ions and electrons as well as the contributions from ${\pp E}_{\rm ir}$, ${\pp E}_{{\rm so},xz}$ and $E_y$ to $\langle {\pp J}\cdot{\pp E}\rangle$ (Figure~\ref{fig:rates_early}). (Note that ${\bf E}_y$ is purely solenoidal.) At $t\lesssim\tau$, simulations with $B_0/b_0=0.1$ and $2$ both show that electrons are mainly responsible for taking energy out of the in-plane magnetic field via ${\bf E}_y$. Electrons then transfer most of this energy to ions via ${\pp E}_{\rm ir}$ (recall that ${\pp E}_{\rm ir}$ is associated with charge separation), and ions gain more energy than electrons in this phase. The profiles of the energy transfer rates to electrons and ions via ${\pp E}_{\rm ir}$ are very similar but with opposite signs. This implies that the particle interactions with ${\pp E}_{\rm ir}$ {mostly do not store or extract electric} energy in ${\pp E}_{\rm ir}$ but only cause energy exchange between particles of opposite charge. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.9\textwidth]{ej_rates_early.eps} \caption{\small Panels (a) and (b) show the rates of energy transfer to ions and electrons, respectively, for various electric field components with $B_0/b_0=0.1$ for $t<\tau$. Panels (c) and (d) show the same rates to ions and electrons, respectively, with $B_0/b_0=2$. } \label{fig:rates_early} \end{center} \end{figure*} For simulations with $m_i/m_e=25$ and $B_0/b_0=0.1$, $0.5$, $1$ and $2$, the total energy $W_{25,1}$ gained by particles during the $1^{st}$ (island-forming) phase from the beginning to $t=\tau$ was $2.5\times 10^5$, $1.9\times 10^5$, $1.3\times 10^5$ and $1.0\times 10^5$, respectively, in units of $b^2_0 d^3_{\rm i}/\mu_0$.
Less energy is transferred to particles during this phase when $B_0$ is higher. Table~\ref{tab:erate_25} shows the fractions of the total electromagnetic energy transferred to particles that go to ions and electrons via ${\pp E}_{\rm ir}$, ${\pp E}_{{\rm so},xz}$, $E_y$ and the total electric field ${\pp E}$ from the beginning to $t=\tau$, for each simulation. \begin{table} \caption{\label{tab:erate_25} The fractions of the total electromagnetic energy transferred to particle energy that go to ions and electrons during the island-forming phase $(t<\tau)$ via interactions with ${\pp E}_{\rm so,xz}$, $E_y$, ${\pp E}_{\rm ir}$, and ${\pp E}$.} \begin{ruledtabular} \begin{tabular}{cccccc} $B_0/b_0$ & Species &${\pp J}_{\rm s}\cdot {\pp E}_{{\rm so}, xz}$ & ${\pp J}_{\rm s}\cdot {\pp E}_y$ & ${\pp J}_{\rm s} \cdot {\pp E}_{\rm ir}$ & ${\pp J}_{\rm s} \cdot {\pp E}$ \\ \hline \multirow{2}{*}{0.1} & Ion & -0.01 & 0.28 &0.52 & 0.79 \\ & Electron & -0.05 & 0.78 & -0.52 & 0.21\\ \hline \multirow{2}{*}{0.5} & Ion & -0.08 &0.35 &0.55 & 0.82 \\ & Electron & -0.17 &0.92 &-0.57 & 0.18 \\ \hline \multirow{2}{*}{1} & Ion & -0.18 &0.30 & 0.76 & 0.88 \\ & Electron & -0.26 & 1.17 &-0.79 & 0.12 \\ \hline \multirow{2}{*}{2} & Ion & -0.24 &0.19 & 0.88 & 0.83 \\ & Electron & -0.13 & 1.28 &-0.98 & 0.17 \\ \end{tabular} \end{ruledtabular} \end{table} In all simulations, electrons take energy from the in-plane components of the magnetic field and transfer a large amount of energy to ions via ${\pp E}_{\rm ir}$. Both ions and electrons lose some energy to form strong $|B_y-B_0|$ at the centers of magnetic islands. As $B_0/b_0$ is increased up to 1, more energy is transferred to $B_y$ and the energy transfer via ${\pp E}_{\rm ir}$ is also stronger. At $t=\tau$, ions have gained more energy than electrons even though electrons are the main species interacting with $E_y$ and reducing the magnitude of the in-plane magnetic field.
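The entries of Table~\ref{tab:erate_25} are, in effect, time integrals of the volume-averaged transfer rates normalized by the total energy transferred in the phase. A minimal sketch of this bookkeeping (the array names, the trapezoidal rule, and the dictionary layout are our choices, not taken from the analysis pipeline):

```python
import numpy as np

def transfer_fractions(t, rates, t_end):
    """Given time series of volume-integrated rates <J_s . E_c> keyed by
    (species, component), integrate each up to t_end with the trapezoidal
    rule and normalize by the total energy transferred."""
    mask = t <= t_end
    ts = t[mask]
    energies = {
        key: float(np.sum(0.5 * (r[mask][1:] + r[mask][:-1]) * np.diff(ts)))
        for key, r in rates.items()
    }
    total = sum(energies.values())
    return {key: e / total for key, e in energies.items()}
```

For instance, two constant rates of 3 and 1 (in arbitrary units) integrate to fractions of 0.75 and 0.25.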
\subsubsection{Decaying Phase} At $t > \tau$, the current contraction is less active. Magnetic energy drops due to the merging of magnetic islands. We continue the analysis of energy conversion by considering the contributions from ions and electrons, again separating the effects of ${\pp E}_{\rm ir}$ from ${\pp E}_{\rm so}$. In this phase, we find that separating the solenoidal field into ${\bf E}_y$ and ${\pp E}_{{\rm so},xz}$ is not the most effective way to understand the energy conversion, because energy conversion often takes place where both $B_y$ and the in-plane magnetic field lose energy together. Instead, separating the solenoidal electric field into the components parallel (${\pp E}_{\rm so,\parallel}$) and perpendicular (${\pp E}_{\rm so,\perp}$) to the local magnetic field is more suitable for a strong guide field.\footnote{We have considered the relevance of components parallel and perpendicular to {\bf B} vs.\ out-of-plane ($y$) and in-plane ($x$-$z$) components for the decaying phase. We find that resolving components parallel and perpendicular to {\bf B} provides a clearer physical distinction for particle acceleration (direct parallel acceleration vs.\ perpendicular drift motion) and also a clearer distinction among components of the pressure tensor.} Figures~\ref{fig:rates_later}(a) and \ref{fig:rates_later}(c) show the energy conversion rates to electrons and ions via ${\pp E}_{\rm so}$ for $B_0/b_0=0.1$ and $2$, respectively, at $t>\tau$. For $B_0/b_0=0.1$, $\langle {\pp J}_{i,\perp}\cdot{\pp E}_{\rm so}\rangle$ is positive and dominates the other rates. For $B_0/b_0=2$, $\langle {\pp J}_{e,\perp}\cdot{\pp E}_{\rm so}\rangle$ and $\langle {\pp J}_{i,\perp}\cdot{\pp E}_{\rm so}\rangle$ fluctuate with large magnitudes, share similar shapes and appear to have opposite signs.
On the other hand, the rates $\langle {\pp J}_{e,\parallel}\cdot{\pp E}_{\rm so}\rangle$ and $\langle {\pp J}_{i,\parallel}\cdot{\pp E}_{\rm so}\rangle$ remain mostly positive with smaller magnitudes. The rates from ${\pp E}_{\rm ir}$ are shown in Figure~\ref{fig:rates_later}(b) for $B_0/b_0=0.1$ and Figure~\ref{fig:rates_later}(d) for $B_0/b_0=2$. These rates fluctuate with stronger magnitude when $B_0$ is higher. The rates for ions and electrons are nearly equal and opposite almost all the time, representing energy transfer among particle species with little net conversion of electromagnetic energy to particle motion. \begin{figure*}[ht] \begin{center} \includegraphics[width=1.0\textwidth]{ej_rates_later.eps} \caption{\small The rates of energy conversion to ions and electrons by interacting with (a) components of ${\pp E}_{\rm so}$ and (b) ${\pp E}_{\rm ir}$ from the simulations with $B_0/b_0=0.1$, while panels (c) and (d) show the energy conversion rates to ions and electrons from interaction with components of ${\pp E}_{\rm so}$ and ${\pp E}_{\rm ir}$, respectively, with $B_0/b_0=2$. } \label{fig:rates_later} \end{center} \end{figure*} Let $W_{25,2}$ be the total energy gained by particles in the $2^{nd}$ (decaying) phase after $t=\tau$ for each simulation with $m_i/m_e=25$. The values are $2.3\times 10^5$, $2.8\times 10^5$, $3.4\times 10^5$ and $3.7\times 10^5$ in units of $b^2_0 d^3_{\rm i}/\mu_0$ for $B_0/b_0=0.1$, $0.5$, $1$ and $2$, respectively. Table~\ref{tab:erate_25_2} shows the fractions of the total energy gain that go to ions and electrons via ${\pp E}_{\rm so,\parallel}$, ${\pp E}_{\rm so,\perp}$, ${\pp E}_{\rm ir}$ and the total electric field ${\pp E}$. The values in the table are fractions of $W_{25,2}$. For $B_0/b_0=0.1$, the most effective motion to transfer energy via ${\pp E}_{\rm so}$ is the perpendicular motion of ions.
This motion takes $0.51W_{25,2}$ of the converted energy, while the parallel and the perpendicular motions of electrons take $\sim 0.20W_{25,2}$ each. The interaction between ions and electrons via ${\pp E}_{\rm ir}$ is weak. The total amounts of energy gained by ions and electrons for $t>\tau$ are $0.65W_{25,2}$ and $0.35W_{25,2}$, respectively. For small $B_0$ at $t>\tau$, the energy transferred via ${\bf E}_y$ still dominates, and $92\%$ of electromagnetic energy conversion is via ${\bf E}_y$. Therefore, separating the rates from ${\pp E}_{\rm so,\parallel}$ and ${\pp E}_{\rm so,\perp}$ is not particularly useful. As $B_0$ increases, the parallel motion of ions becomes the most effective motion to take energy from ${\pp E}_{\rm so}$. The particle interaction through ${\pp E}_{\rm ir}$ is also stronger, as ${\pp E}_{\rm ir}$ is stronger for higher $B_0$. For $B_0/b_0=2$, the parallel motion of ions takes $0.91W_{25,2}$ from ${\bf E}_{\rm so}$ while ${\bf E}_{\rm ir}$ takes $0.67W_{25,2}$ from ions and gives $0.68W_{25,2}$ to electrons. For $t>\tau$, the energy gained by electrons is $0.73W_{25,2}$ while ions gain only $0.27W_{25,2}$. As $B_0$ increases, the energy conversion by the parallel electric field becomes stronger. For $B_0/b_0=2$, ${\pp E}_{\rm so,\parallel}$ is responsible for almost $100\%$ of $W_{25,2}$ while $E_y$ takes $88\%$ of $W_{25,2}$ out of the magnetic field. The dominance of ${\pp E}_{\rm so,\parallel}$ implies that the energy in $B_y$ and in the in-plane magnetic field is likely drained at the same locations via ${\pp E}_{\rm so,\parallel}$. For comparison, we calculate the energy conversion via ${\pp E}_{\rm so,\parallel}$ and ${\pp E}_{\rm so,\perp}$ in Table \ref{tab:erate_25_before} for $t<\tau$. For $B_0/b_0=0.1$, the magnetic field lies mostly in the $x$-$z$ plane initially and the perpendicular direction is nearly along the $y$ direction.
Therefore, in the first phase, the sum of ${\pp J}_{\perp}\cdot {\pp E}_{\rm so,\perp}$ from both species, with $E_y$ as the main component in ${\pp E}_{\rm so,\perp}$, is responsible for $95\%$ of the energy conversion. For the case of $B_0/b_0=2$, the initial magnetic field tends to be more aligned with the $y$ direction. As $E_y$ is the main component of ${\pp E}_{\rm so,\parallel}$, the sum of ${\pp J}_{\parallel}\cdot {\pp E}_{\rm so,\parallel}$ from both species is responsible for $89\%$ of the energy conversion. Thus these results are consistent with our conclusion from the previous section that $E_y$ plays the dominant role in energy conversion for $t<\tau$ (see Table~\ref{tab:erate_25}). The energy transfer processes in the phases $t<\tau$ and $t>\tau$ are very different. Electrons are the main species to draw energy from the magnetic field in the early phase but lose most of the energy to ions via the charge separation and ${\bf E}_{\rm ir}$. In the later phase, ions are more responsible for draining energy from the magnetic field. For $B_0/b_0=0.5$, $1$ and $2$, these ions lose energy to electrons via the charge separation and ${\bf E}_{\rm ir}$. The interaction between ions and electrons via ${\bf E}_{\rm ir}$ is stronger as $B_0$ increases. { Our analysis is insensitive to choices of the boundary time from $\Omega_{\rm i}\tau=5$ to $10$. By using $\Omega_{\rm i}\tau=10$, the results change numerically but remain qualitatively similar. 
The results for $m_i/m_e=100$ are also qualitatively similar to those for $m_i/m_e=25$.} \begin{table} \caption{\label{tab:erate_25_2} The fractions of the total electromagnetic energy transferred to particle energy that go to ions and electrons via interactions with ${\pp E}_{\rm so,\parallel}$, ${\pp E}_{\rm so,\perp}$, ${\pp E}_{\rm ir}$, and ${\pp E}$ during the decaying phase ($t>\tau$).} \begin{ruledtabular} \begin{tabular}{cccccc} $B_0/b_0$ & Species &${\pp J}_{\rm s,\parallel}\cdot {\pp E}_{\rm so,\parallel}$ & ${\pp J}_{\rm s,\perp}\cdot {\pp E}_{\rm so,\perp}$ & ${\pp J}_{\rm s} \cdot {\pp E}_{\rm ir}$ & ${\pp J}_{\rm s} \cdot {\pp E}$ \\ \hline \multirow{2}{*}{0.1} & Ions & 0.04 & 0.51 & 0.10 & 0.65 \\ & Electrons & 0.20 & 0.23 &-0.08 & 0.35\\ \hline \multirow{2}{*}{0.5} & Ions & 0.63 &0.02 & -0.07 & 0.58 \\ & Electrons & 0.16 & 0.18 &0.08 & 0.42 \\ \hline \multirow{2}{*}{1} & Ions & 1.21 & -0.01 & -0.68 & 0.52 \\ & Electrons & -0.27 & 0.05 & 0.70 & 0.48 \\ \hline \multirow{2}{*}{2} & Ions & 0.91 & 0.03 & -0.67 & 0.27 \\ & Electrons & 0.10 &-0.05 &0.68 & 0.73 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{\label{tab:erate_25_before} The fractions of the total electromagnetic energy transferred to particle energy that go to ions and electrons via interactions with ${\pp E}_{\rm so,\parallel}$ and ${\pp E}_{\rm so,\perp}$ for the island-forming phase ($t<\tau$).} \begin{ruledtabular} \begin{tabular}{cccc} $B_0/b_0$ & Species &${\pp J}_{\rm s,\parallel}\cdot {\pp E}_{\rm so,\parallel}$ & ${\pp J}_{\rm s,\perp}\cdot {\pp E}_{\rm so,\perp}$ \\ \hline \multirow{2}{*}{0.1} & Ions & 0.06 & 0.21 \\ & Electrons & $\approx 0$ & 0.73 \\ \hline \multirow{2}{*}{0.5} & Ions & 0.01 &0.26 \\ & Electrons & 0.23 &0.52 \\ \hline \multirow{2}{*}{1} & Ions & -0.02 &0.14 \\ & Electrons & 0.48 & 0.43 \\ \hline \multirow{2}{*}{2} & Ions & 0.03&-0.08 \\ & Electrons & 0.86 &0.29 \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Indicators of Energy
Conversion Locations} From the previous section, the overall energy transfer to particles in the decaying phase is approximately equal to the integral of $J_\parallel E_{\rm so, \parallel}$ over space and over $t>\tau$. In all the simulations, $\nabla\times{\pp B} \approx \mu_0{\pp J}$ is valid at most locations, so we can write \begin{equation} J_\parallel E_{\rm so, \parallel} \approx \frac{1}{\mu_0} {\pp E}_{\rm so, \parallel}\cdot\nabla\times{\pp B} = \frac{1}{\mu_0}{\pp B}\cdot\nabla\times{\pp E}_{\rm so, \parallel}, \label{eq:rates_b2} \end{equation} noting that ${\pp E}_{\rm so,\parallel}$ is parallel to ${\pp B}$, so there is no Poynting vector associated with ${\pp E}_{\rm so,\parallel}$. By Faraday's law, the last term is the rate of decrease of the local magnetic energy density due to ${\pp E}_{\rm so, \parallel}$. Therefore, the energy transfer via the parallel motion is local: wherever $\nabla\times{\pp B}\approx \mu_0{\pp J}$, the magnetic energy is converted almost immediately into the kinetic energy of particles. Figure~\ref{fig:ej_locations}(a) shows the color plot of $J_\parallel E_{\rm so, \parallel}$ over the entire domain at $t=22\Omega^{-1}_i$ from the simulation with $B_0/b_0=2$. Contours of the vector potential $A$ (where the in-plane field is ${\pp B}_{xz}=\nabla\times A\hat{y}$) are shown as black $(A<0)$ and grey $(A>0)$ curves. In 2D simulations, these curves are also the projections of the magnetic field lines onto the $x$-$z$ plane. The magnetic islands are surrounded by circular closed curves, while the current sheets are located at boundaries between magnetic islands that share the same rotation direction. The rate $J_\parallel E_{\rm so, \parallel}$ fluctuates strongly and is highly intermittent in space. The energy is transferred strongly both ways, either from fields to particles ($J_\parallel E_{\rm so,\parallel}>0$, red color) or from particles to fields ($J_\parallel E_{\rm so,\parallel}<0$, blue color).
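The second equality in Equation~\ref{eq:rates_b2} follows from the standard identity for the divergence of a cross product, together with ${\pp E}_{\rm so,\parallel}\times{\pp B}=0$:

```latex
\nabla\cdot\left({\pp E}_{\rm so,\parallel}\times{\pp B}\right)
  = {\pp B}\cdot\nabla\times{\pp E}_{\rm so,\parallel}
  - {\pp E}_{\rm so,\parallel}\cdot\nabla\times{\pp B}.
```

Because ${\pp E}_{\rm so,\parallel}$ is parallel to ${\pp B}$, the left-hand side vanishes identically, so the equality holds pointwise rather than only after volume averaging.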
From the previous sections we have learned that the spatial average of $J_\parallel E_{\rm so,\parallel}$ is positive, indicating a net conversion of energy from fields to particle kinetic energy. We can see two types of structures with strong (positive or negative) values of $J_\parallel E_{\rm so,\parallel}$: 1) Near the centers of magnetic islands, there are bipolar structures, with positive and negative $J_\parallel E_{\rm so,\parallel}$. We have confirmed that the positive and negative values nearly cancel, and these regions make a tiny contribution to the net energy conversion rate $\langle J_\parallel E_{\rm so,\parallel}\rangle$. 2) Near the edges of magnetic islands, there are elongated regions with strong $J_\parallel E_{\rm so,\parallel}$, including regions of magnetic reconnection. The positive values of $J_\parallel E_{\rm so,\parallel}$ dominate, and these regions account for the overall positive energy conversion. Therefore, it would be useful to find quantities that indicate these regions near the edges of magnetic islands, especially if they provide some insight into why the energy conversion mostly occurs in these specific locations. In this section, we examine various quantities as possible indicators of energy conversion locations, and specifically of regions with strong and positive $J_\parallel E_{\rm so, \parallel}$ on average. We test these indicators using the following steps. For any continuous indicator $X$, we identify the median $X_m$ of the $X$ values at all grid points over the spatial domain as a time series (for $t>\tau$). We integrate the total energy $\epsilon_{X,h}$ transferred to particle kinetic energy by $J_\parallel E_{\rm so,\parallel}$ in the region with $X>X_m$ over the time domain of interest and compare $\epsilon_{X,h}$ with the total energy $\epsilon_X$ from $J_\parallel E_{\rm so,\parallel}$ over the full spatial and time domains. We then calculate a measure of predictive power, $S_X=2|\epsilon_{X,h}/\epsilon_X-0.5|$.
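The steps above can be sketched numerically. The following is a minimal illustration with our own array names (not code from the simulations), where `indicator` and `jpar_epar` hold $X$ and $J_\parallel E_{\rm so,\parallel}$ sampled at every grid point and time step for $t>\tau$:

```python
import numpy as np

def predictive_power(indicator, jpar_epar):
    """S_X = 2 |eps_{X,h}/eps_X - 0.5| for an indicator field X.

    indicator, jpar_epar : arrays of the same shape holding X and
    J_par * E_so,par at every grid point (and time step) for t > tau.
    """
    x = np.ravel(indicator)
    q = np.ravel(jpar_epar)
    x_m = np.median(x)              # median X_m over the whole domain
    eps_total = q.sum()             # total energy conversion eps_X
    eps_high = q[x > x_m].sum()     # conversion where X exceeds its median
    return 2.0 * abs(eps_high / eps_total - 0.5)
```

Because only the median and the partition $X>X_m$ enter, applying any monotonic function to `indicator` leaves the result unchanged.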
Because a monotonic transformation of $X$ does not change its percentiles, with this construction $S_X$ is invariant under any monotonic transformation of the indicator $X$, and $0\leq S_X\leq 1$. If $S_X$ is close to $1$, then $X$ is a good indicator of energy conversion locations. We have tested many indicators using the data from the simulations with $B_0/b_0=2$. Table~\ref{tab:indicators} shows a list of indicators and their predictive power regarding energy transferred via $J_\parallel E_{\rm so,\parallel}$ for all times $t>\tau$. We include $|E_{\rm so,\parallel}|$ and $|J_\parallel|$, which multiply to make $|J_\parallel||E_{\rm so,\parallel}|$, only for comparison. We found three good indicators in addition to $|E_{\rm so,\parallel}|$ and $|J_\parallel|$ themselves. \begin{table} \caption{\label{tab:indicators} Indicators $X$ and a measure of their predictive power, $S_X$, based on the fraction of the energy conversion $J_\parallel E_{\rm so,\parallel}$ that occurs in the region with $X$ greater than its median $X_m$.
$|J_\parallel|$ \& $|E_{\rm so,\parallel}|$ themselves, indicated in italics, are included for comparison purposes.} \begin{ruledtabular} \begin{tabular}{lcc} Indicator & Formula & $S_X$\\ \hline {\it Magnitude of parallel current} & $|{\pp J}_\parallel|$ & 0.9 \\ Magnitude of ratio of out-of-plane electric field to in-plane magnetic field & $|E_y|/|B_{xz}|$ & 0.82 \\ {\it Magnitude of parallel component of solenoidal electric field} &$|{\pp E}_{\rm so,\parallel}|$ & 0.8 \\ Magnitude of $y$-component of non-ideal electric field & $\delta E_y=|{\pp E}_{y}+({\pp U}\times{\pp B})_{y,\rm so}|$ & 0.8 \\ Estimate of current helicity & $H=|{\pp B}\cdot\nabla\times{\pp B}/\mu_0|$ & 0.76 \\ Magnitude of in-plane component of magnetic fluctuation & $|B^2_x+B^2_z|$ & 0.46 \\ Magnitude of vorticity & $|\nabla\times {\pp U}|$ & 0.46 \\ Magnitude of solenoidal component of non-ideal electric field & $|{\pp E}_{\rm so}+({\pp U}\times{\pp B})_{\rm so}|$ & 0.36 \\ Magnitude of non-ideal electric field & $|{\pp E}+{\pp U}\times{\pp B}|$ & 0.34 \\ Divergence of bulk velocity & $\nabla\cdot {\pp U}$ & 0.26 \\ Magnitude of cross helicity & $|{\pp U}\cdot({\pp B}-B_0\hat{y})|$ & 0.1 \\ Magnitude of $y$-component of magnetic fluctuation & $|B_y-B_0|$ & 0.06 \\ Magnitude of curvature of magnetic field & $|\hat{B}\cdot\nabla \hat{B}|$ & 0.06 \\ Magnitude of magnetic helicity & $|{\pp A}\cdot{\pp B}|$ & 0.06 \\ Magnitude of magnetic field & $|B|$ & 0 \\ \end{tabular} \end{ruledtabular} \end{table} The best such indicator is $|E_y|/|B_{xz}|$, which is based on an indicator suggested by~\cite{lapenta2021}. That work actually examined the formal transformation velocity ${\bf v}_L=c^2{\bf B}\times{\bf E}/E^2$ such that a Lorentz transformation would nullify the two components of the magnetic field in the plane perpendicular to the electric field, as an indicator of reconnection regions in 3D space. 
The magnitude of ${\bf v}_L$ is almost always superluminal, with $v_L<c$ only in the vicinity of reconnection sites. For our configuration, with a guide magnetic field component $B_y$, reconnection sites should have ${\bf B}_{xz}\approx0$, and we can simply use $v_L=c^2|B_{xz}|/|E_y|$. As a further simplification, we divide $v_L$ by $c^2$ and invert to obtain $|E_y|/|B_{xz}|$, which has units of velocity and is usually close to zero, but is large near null points of ${\bf B}_{xz}$ that have non-zero $E_y$, i.e., near reconnection regions. We found that $S_{|E_y|/|B_{xz}|}$ is 0.82, i.e., $91\%$ of the energy transferred via $J_\parallel E_{\rm so, \parallel}$ occurs in regions with $|E_y|/|B_{xz}|$ higher than its median value, which are typically near reconnection sites. This indicator has higher predictive power for $J_\parallel E_{\rm so, \parallel}$ than $|E_{\rm so, \parallel}|$ itself. The second best indicator is the magnitude of the $y$-component of the non-ideal electric field, $\delta E_y =|\hat{y}\cdot({\pp E}+{\pp U}\times {\pp B})|$, where ${\pp U}$ is the plasma bulk velocity. At locations with a high magnitude of ${\pp E}+{\pp U}\times {\pp B}$, the frozen-in approximation is no longer valid. However, we find that the magnitude of ${\pp E}+{\pp U}\times {\pp B}$ is not as good an indicator as $\delta E_y$. We found that $S_{\delta E_y}$ is 0.8, implying that $90\%$ of the energy transferred via $J_\parallel E_{\rm so, \parallel}$ occurs in regions with $\delta E_y$ above its median value. This indicator has a predictive power for $J_\parallel E_{\rm so,\parallel}$ equal to that of $|E_{\rm so,\parallel}|$ itself. This implies that strong energy conversion often takes place where the frozen-in approximation fails, which also typically occurs near reconnection sites. The third best indicator is an estimate $H=|{\pp B}\cdot\nabla\times{\pp B}|/\mu_0$ of the current helicity $|{\bf J}\cdot{\bf B}|$.
We can write \begin{equation} J_\parallel E_{\rm so, \parallel}=\frac{E_{\rm so,\parallel}\,{\bf J}\cdot{\bf B} }{B}, \label{eq:rates_helicity} \end{equation} so the current helicity is directly linked with $J_\parallel E_{\rm so, \parallel}$. We use $H=|{\pp B}\cdot\nabla\times{\pp B}|/\mu_0$ as an estimate of the current helicity that only involves magnetic field measurements, as these may be more readily available from spacecraft data. We found that $S_H$ is 0.76, implying that $88\%$ of the energy transferred via $J_\parallel E_{\rm so, \parallel}$ occurs in the region with $H$ above its median value. \begin{figure}[ht] \begin{center} \includegraphics[width=0.9\textwidth]{ej044.eps} \caption{\small (a) 2D color plot of $J_\parallel E_{\rm so,\parallel}$ at $t=22\Omega^{-1}_i$. (b)-(d) Spatial regions (black) where the indicator (b) $|E_y|/|B_{xz}|$, (c) $\delta E_y$, or (d) $H$ is greater than its 90th percentile. } \label{fig:ej_locations} \end{center} \end{figure} To demonstrate how sensitive $J_\parallel E_{\rm so, \parallel}$ is to the indicators, we numerically calculate the average $\langle J_\parallel E_{\rm so, \parallel}\rangle$ for each percentile bin of $X$. The percentiles are calculated over the whole domain for $t>\tau$. Figure~\ref{fig:indc_his} shows $\langle J_\parallel E_{\rm so, \parallel}\rangle$ normalized by its average value over the whole domain at $t>\tau$ as a function of the percentile of $|E_y|/|B_{xz}|$, $\delta E_y$ and $H$. The value of $\langle J_\parallel E_{\rm so, \parallel}\rangle$ becomes larger when the percentile of $|E_y|/|B_{xz}|$, $\delta E_y$ or $H$ is higher. The regions with $|E_y|/|B_{xz}|$ above its 89th percentile, $\delta E_y$ above its 90th percentile, or $H$ above its 93rd percentile are responsible for $50\%$ of the energy conversion.
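Two of these indicators, $|E_y|/|B_{xz}|$ and $H$, involve only field quantities and can be evaluated directly on gridded data. The following is a minimal sketch (our own function and variable names, not code from the simulations), assuming a uniform periodic grid indexed as `[i_x, i_z]` with $\partial/\partial y = 0$ as in 2.5D:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI)

def indicator_fields(Ey, Bx, By, Bz, dx, dz, eps=1e-30):
    """Return |E_y|/|B_xz| and H = |B . (curl B)| / mu0 on a 2D grid."""
    # |E_y|/|B_xz|: large near in-plane nulls of B with finite E_y
    ratio = np.abs(Ey) / (np.hypot(Bx, Bz) + eps)

    # central differences with periodic boundaries; d/dy = 0 in 2.5D
    d_dx = lambda F: (np.roll(F, -1, axis=0) - np.roll(F, 1, axis=0)) / (2 * dx)
    d_dz = lambda F: (np.roll(F, -1, axis=1) - np.roll(F, 1, axis=1)) / (2 * dz)
    curl_x = -d_dz(By)
    curl_y = d_dz(Bx) - d_dx(Bz)
    curl_z = d_dx(By)

    # current-helicity estimate built from magnetic field data alone
    H = np.abs(Bx * curl_x + By * curl_y + Bz * curl_z) / MU0
    return ratio, H
```

Where $\nabla\times{\pp B}\approx\mu_0{\pp J}$ holds, `H` approximates $|{\bf J}\cdot{\bf B}|$ without requiring particle moments.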
Figures~\ref{fig:ej_locations}(b)-(d) show the regions selected by values of $|E_y|/|B_{xz}|$ (b), $\delta E_y$ (c) and $H$ (d) greater than their 90th percentiles, shown in black. The conversion sites selected by these indicators have either strongly positive or negative $J_\parallel E_{\rm so, \parallel}$. The regions selected by the three indicators are quite different. The regions selected by $|E_y|/|B_{xz}|$ are relatively contiguous. They are concentrated near the reconnection sites and away from the centers of magnetic islands. This is useful because, as noted above, the magnetic islands provide very little net energy conversion to the particles. $H$ selects both regions near the reconnection sites and centers of magnetic islands, though the latter make very little contribution to the net energy conversion. The regions selected by $\delta E_y$ are scattered quite broadly but statistically concentrated around the reconnection sites. \begin{figure}[ht] \begin{center} \includegraphics[width=0.54\textwidth]{his_indic.eps} \caption{\small The average of $J_\parallel E_{\rm so,\parallel}$ normalized by its average value as a function of the percentile of (a) $|E_y|/|B_{xz}|$, (b) $\delta E_y$, and (c) $H$ with $B_0/b_0=2$ for $t>\tau$. } \label{fig:indc_his} \end{center} \end{figure} \section{\label{discuss}Discussion and Summary} We have run 2.5D simulations of decaying turbulence. There are two phases of the simulations that give two distinct mechanisms of magnetic energy conversion. These phases are significantly modified by the magnitude of the guide field. As $B_0$ increases, the plasma becomes less compressible as the magnetic field partially confines the particles. Another effect of higher $B_0$ is the larger amplitude of the irrotational electric field. This field component is not directly responsible for the magnetic energy conversion, and we therefore focus on the solenoidal electric field when searching for indicators of energy conversion locations.
For $t<\tau$, the island-forming phase, plasma current structures contract to form magnetic islands. In this phase, electron motions take energy out of the magnetic field but also lose a large amount of energy to ions via the irrotational electric field. While the in-plane magnetic field becomes weaker, the out-of-plane component becomes stronger at the centers of magnetic islands by taking energy from the in-plane particle motions. The current contraction obviously depends on the magnitude of the guide magnetic field. As $B_0$ gets stronger, the fractional energy conversion in this phase is weaker, and the ratio of energy taken out of the in-plane magnetic field by electrons to that taken by ions becomes higher. The irrotational electric field becomes stronger and is responsible for more energy transfer from electrons to ions. Both ions and electrons lose more energy to build a strong $B_y$ at the centers of magnetic islands. For $t>\tau$, the decaying phase, the magnetic islands merge. During island merging, magnetic reconnection inevitably occurs. As $B_0$ becomes higher, particle parallel motion becomes dominant in taking energy out of the in-plane magnetic field~\citep{wan2012,makwana2017}, and while ions gain more energy from the magnetic field, they lose a large amount of energy to electrons via the irrotational electric field. Electrons eventually gain more energy in this phase. Overall, electrons become more effective at extracting the electromagnetic energy when $B_0$ is higher~\citep{shay2018}. This result is similar to the beta effect on particle heating~\citep{parashar2018}. The energy conversion is mainly mediated by the parallel component of the solenoidal electric field via $J_\parallel E_{\rm so,\parallel}$, which is highly localized. Three particularly useful indicators of the regions of strong energy conversion are found.
They are 1) the ratio of the out-of-plane electric field to the in-plane magnetic field \citep[related to the suggestion by][]{lapenta2021}, 2) the out-of-plane component of the non-ideal electric field, and 3) the magnitude of an estimate of the current helicity. All of them select regions near the magnetic reconnection sites as locations of net energy conversion. Among them, $|E_y|/|B_{xz}|$ selects regions that are the most contiguous and tied to multiple reconnection sites. We propose that any mechanisms involving these indicators should receive particular attention in order to gain more insight into how magnetic energy is converted to bulk and thermal motions in turbulence with a strong guide field. P.P. and D.R. would like to thank Thailand Science Research and Innovation and the Ministry of Higher Education, Science, Research and Innovation (Thailand) for support through grants RTA 6280002 and RGNS 63-045. This work was partly supported by the International Atomic Energy Agency (IAEA) under Contract No. 22785. \nocite{*}
\section{Introduction} Ever since a moving detector was introduced by Unruh \cite{Unr76} to depict the analog Hawking effect \cite{Haw74,HawHar76}, it has become a useful means to extract information about a quantum field, as in much work in relativistic quantum information \cite{RQI}. For cosmology, a detector is often used to illustrate the Gibbons-Hawking effect \cite{GibHaw} an observer may find in de Sitter space. Here we treat a more general class of problems, allowing for an arbitrary functional dependence of the cosmic scale factor $a(t)$ between two constant end states and not restricting our attention to thermal radiation. We ask how an Unruh-DeWitt (UD) detector \cite{Unr76,DeW79} with harmonic oscillator internal degrees of freedom $Q$, measuring an evolving quantum matter field $\Phi$ driven by the expansion of the universe, would respond. The dynamical response of the detector contains information about the quantum field which is squeezed over the history of the universe. It is well known that such nested dependence on $\Phi (\bm{x}, t)$ and then on $a(t)$ gives rise to nonMarkovian dynamics of $Q$. The question we pose here is, to what extent can a detector introduced at a later time in the history of the universe extract the stored information about the matter field and, from it, the dynamics of the cosmos at an early time, as well as information about the parametric process of the field? The challenge lies in the memory effects accumulated in the evolutionary history. The problem we set forth to investigate consists of three components: a) cosmological expansion results in the squeezing of the quantum field and manifests as particle creation, b) the nonequilibrium dynamics of a UD detector in a squeezed quantum field, c) the dynamical response of the detector to the quantum matter field which has a nonMarkovian history. The first component is well known from work since the 70s. The second component we shall describe below.
The third component is where the major challenge rests. The background themes are well researched, with many monographs dedicated to them: \textit{squeezed states} in quantum optics \cite{Walls,LouKni,ManWol}, \textit{open quantum systems} \cite{qos}, \textit{nonequilibrium quantum field dynamics} \cite{CalHu08,NEqFT}, and \textit{cosmological particle creation} in quantum field theory in curved spacetime \cite{BirDav,ParToms,HuVer20}. We shall give a brief description of these three major components, extending our earlier work and referring to the current literature. We then focus on the nonequilibrium dynamics of a UD detector interacting with a quantum field under time-dependent squeezing, here due to the expansion of the universe, paying special attention to the nonMarkovian behavior. \subsection{Early universe quantum processes} As examples of cosmological problems involving squeezed quantum fields, we mention three processes in the early universe. Then we introduce a UD detector, describe its nonequilibrium dynamics and discuss what it sees in various situations. The theoretical framework useful for our purpose is known as `squeezed open quantum systems' \cite{KMH97}.\\ \noindent 1) {\it Cosmological particle creation as squeezing of the quantum field} The amplitude of a wave mode can be parametrically amplified by a time-dependent drive. The same thing happens to vacuum fluctuations in a quantum field, resulting in the creation of particle pairs \cite{Schwinger}. The expansion of the universe acts like a drive, parametrically amplifying the quantum noise and leading to cosmological particle creation \cite{Par69,Zel70}. The vacuum is `squeezed' in the evolutionary history and particles are produced \cite{GriSid} -- this is spontaneous creation. If particles were present in an initial state, they too get amplified with the same amplification factor -- this is stimulated production.
A summary description of cosmological particle creation in terms of squeezing can be found in \cite{HKM94}. By applying the well established knowledge base about squeezing in quantum optics one can observe or design experiments simulating quantum processes in the early universe (e.g., \cite{CalHu04}) and in black holes (e.g., \cite{Garay}). This is the spirit of analog gravity \cite{analogG} invoking the similarity of the key physical processes and the commonality of the underlying issues. \\ \noindent 2) \textit{Initial state of inflation or phase of the universe} Making use of the properties of squeezing on quantum states Agullo and Parker \cite{AguPar} considered an initial mixed state with nonzero numbers of scalar particles present and calculated the induced stimulated production of density perturbations by the inflationary expansion of the universe from early times. They found that the effect of these initial perturbations is not diluted by inflation and can significantly enhance non-Gaussianities in the squeezed limit. Observations of these non-Gaussianities, they claim, can provide valuable information about the initial state of the inflationary universe. In a similar spirit the authors of \cite{GMM14} claim that ``the existence (or not) of a quantum bounce leaves a trace in the background quantum noise that is not damped and would be non-negligible even nowadays." \\ \noindent 3) \textit{Entropy generation} Entropy of quantum fields is a fundamental yet somewhat tricky issue. Early conceptual inquiries \cite{HuPav,HuKan} made explicit the underlying conditions which allow for the entropy of free quantum fields to be related to particle creation numbers (for boson fields). Namely, by adopting a Fock space representation and making statements only in terms of number operators, one implicitly ignores all quantum phase information which determines the coherence. (See \cite{KME} for the interplay of both quantities). 
From this it is easy to understand why the entropy associated with particle production is proportional to the degree of squeezing by the expansion of the universe \cite{GasGio93,Prokopec93,KMH97}. The relation between entanglement and entropy in particle creation is explicated in \cite{LCH10}. Entropy associated with gravitational perturbations in the inflationary universe is explored in \cite{BMP92,GGV93,KPS00,AMM05,CamPar,KPS12} and by many authors (see \cite{Bran20} for an extensive literature and the latest developments in terms of entanglement entropy). It was suggested in \cite{GGV93} that the entropy produced by the squeezing of the universe is independent of its initial state. One can then ask, does this mean that by measuring this entropy today one can find the accumulated degree of squeezing, and from it deduce the duration of the last inflationary expansion, regardless of the initial conditions? We have suggested some samples of measurable physical quantities of a quantum field in a dynamical setting. Quantum decoherence, correlation and entanglement of gravitational perturbations in cosmology are related important subjects (see, e.g., \cite{LCH10,MarMenCosEnt,Bran20} and references therein). We shall discuss these from the perspective of a detector response in a squeezed quantum field in later communications \cite{HHCosDec,HHCosEnt}. Introducing a detector to measure the quantum field driven by cosmological expansion can extend our investigative capabilities, as the detector response function registers both the matter field and the cosmic activities. For that, we need some knowledge of the nonequilibrium dynamics of the quantum field and the detector. \subsection{Nonequilibrium quantum dynamics} There are three players in the problem we are tackling: the expanding universe acting as a drive, the quantum field which responds to it in processes such as those listed above, and the UD detector, which we assume to be a harmonic oscillator.
We want to find out from the nonMarkovian response of the detector and the nonequilibrium dynamics of the quantum field how much we can unlock the past history of the universe. \paragraph{`Witness' versus `Detective'} Here we make a distinction between two kinds of observers represented by an UD detector which is a simple harmonic oscillator with time-dependent natural frequency: one which has co-existed and evolved with the matter which we shall call the `witness' $W$. The other is introduced in a much later stage of cosmic evolution, which we call the `detective' $D$. It is what we design in all the cosmological experiments today to find out what could have happened earlier. For example, the detectors in COBE, WMAP and PLANCK are aimed at observing the radiation emitted from the surface of last scattering. So what is a witness? Witnesses are physical objects present from the inception which followed the evolution and are present today (or at the moment of detection). Examples are the heavy elements on earth which contain the histories of their journeys to here since their formation in the interiors of the neutron stars. In cosmology, they could be the light elements from nucleosynthesis, or baryons from baryogenesis or particles produced from vacuum fluctuations described earlier. They are quantities of relevance to what we can observe today which evolved alongside the expansion of the universe from the earliest moments. It could be the Planck time if one is interested in quantum gravity effects, or the GUT time if one believes that is the most important inflationary transition which can affect the main features of today's universe. Density contrasts and primordial gravitational waves, related to the scalar and tensor sectors of gravitational perturbations, are two important examples. Let us see this more explicitly. \paragraph{Gravitational Scalar and Tensor Perturbations} For density contrasts and gravitational perturbations: from the Einstein equations (e.g. 
Eq. (28.47) of \cite{ThoBla}) for the gravitational potentials $\Phi$, $\Psi$ (in Eq. (28.44) \textit{ibid}), the equations governing their normal modes, with a suitable choice of gauges, have the form of harmonic oscillator equations of motion with time-dependent natural frequency (e.g., Eq. (28.50) \textit{ibid}, with time measured by $\ln a (t)$, the scale factor, further simplified to Eq. (28.51) \textit{ibid}, with $\Phi = \Psi$ for a radiation dominated background). For gravitational waves it has long been known (e.g., \cite{ForPar77}) that the two polarizations $h_+$, $h_{\times}$ each obey an equation like that of a massless minimally-coupled scalar field. The equation of motion of each normal mode in the Heisenberg picture is given by, e.g., Eq. (4.12) in \cite{FRV12} in the form of an equation for a harmonic oscillator with time-dependent frequency\footnote{These equations all have a first-order time derivative term but by a change of variables can be cast into a form with only the second-order time derivative term and a zero-order term with time-dependent frequency.}, with solution given by (4.14) in \cite{FRV12}. See also~\cite{Wu11,HF17}.\footnote{As an explanatory note on the difference between `nonMarkovian' or memory effects and `secular' effects, it is enough to adopt the description of \cite{KapBur}: ``Secular growth is the phenomenon where the coefficients $c_n(t)$ of a perturbative evaluation of some observable in powers of some small coupling $|g|\ll 1$, are time-dependent and grow without bound at late times." Our treatment here is nonperturbative and we do not need to assume weak coupling. NonMarkovian refers to any process with some memory, as signified by a nonlocal kernel in the equations of motion. Integrating an integro-differential equation forward one time step requires knowledge not only of the immediate past (Markovian limit), but also of data in the remote past, to varying degrees of nonMarkovianity.}
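As a sketch of the change of variables mentioned in the footnote (a standard manipulation, written here in our own symbols rather than those of the cited references):

```latex
% A mode equation with a first-order time-derivative term,
%   \ddot{x} + \gamma(t)\,\dot{x} + \omega^{2}(t)\,x = 0 ,
% is brought to parametric-oscillator form by rescaling the amplitude:
x(t) = y(t)\,\exp\!\Big[-\tfrac{1}{2}\int^{t}\!\gamma(t')\,dt'\Big] ,
\qquad
\ddot{y} + \Omega^{2}(t)\,y = 0 ,
\qquad
\Omega^{2}(t) = \omega^{2}(t) - \tfrac{1}{4}\gamma^{2}(t) - \tfrac{1}{2}\dot{\gamma}(t) .
```

The first-derivative term is thus absorbed into an effective time-dependent frequency $\Omega(t)$, leaving a harmonic oscillator of the parametric type discussed in the text.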
In the first part of this paper we shall treat the nonequilibrium dynamics of a harmonic oscillator with time-dependent frequency, i.e., an UD detector regarded as the Witness, representing the physical quantities of interest as exemplified above. The nonMarkovian quantum Langevin equation is obtained by integrating over the quantum field which is squeezed by the expansion of the universe. Notice that the quantum field keeps a record of how it was squeezed, and the nonequilibrium dynamics of the $W$ detector contains the coarse-grained (`integrated over') information of the field. Thus the question we posed -- is it possible to retrieve some information about the early universe -- is answered by solving this integro-differential equation. We present the equation, with some discussion of its nonMarkovian features, but leave the complete solutions to some later, more able hands. \subsection{Detector response to nonMarkovian dynamics} In \cite{Unr76} the detector \textit{response function} refers to the field's Wightman function while the detector's \textit{selectivity} refers to its internal energy change (such as the excitation from the ground state to an excited state). The theoretical framework employed there is time-dependent perturbation theory (TDPT). Along this line there is a huge literature on this subject. See, e.g., \cite{Higuchi,GarProRes,GarProUni,LouSat} and references therein.
\paragraph{Perturbative methods inadequate for nonMarkovian processes} Most prior work on UD detectors is based on TDPT, which is valid only for very weak coupling between the detector and the field and only for a short time after the detector-field coupling is switched on. Since the perturbative results quickly become inaccurate, any backreaction calculation of the field on the detector based on perturbation theory becomes unreliable just as quickly, rendering it incapable of treating nonMarkovian effects\footnote{There were suggestions to go nonperturbative, e.g., \cite{BMMM13,BLF13}. The research program of one of us with collaborators on detector-field interactions since the 90s has emphasized exact treatments, from the derivation of an exact nonMarkovian master equation for quantum Brownian motion \cite{HPZ,HM94}, two-field interactions in the de Sitter universe \cite{HPZdec,Banff} for the study of quantum decoherence, the uncertainty relation at finite temperatures \cite{HZ93}, squeezing and entropy generation \cite{HKM94,KMH97}, and field-induced mutual nonMarkovianity in multiple detectors \cite{RHA,CPR}, to backreaction \cite{LinHu06,LinHu07} and quantum entanglement \cite{LCH08,LCH10,LinHu09,LinHu10,LCH15,LCH16,HHEnt}.}. Therefore, to see the dynamics with memory it is highly desirable to have exact solutions (at arbitrary coupling strength) to the fully nonequilibrium dynamics (as opposed to linear response, restricted to very weak coupling). Since a harmonic oscillator detector linearly coupled to a quantum field is a Gaussian system, we can in principle derive exact quantum stochastic equations for the open (reduced) system at strong coupling and low temperatures, and find exact solutions to them \cite{Fleming}.
\paragraph{Response function} Note that the detector response in our context is not the Wightman function of the field but what is measured in the detector; more importantly, this is a dynamical response which includes the field's influence on the detector. We do this with the influence functional \cite{if} and closed-time-path \cite{ctp} formalisms and quantum Langevin equation \cite{QLE} techniques, well adapted to the detector-field system. We briefly summarize below the key findings from these earlier works relevant to our present quests \cite{JH1,QRad,RHA,CPR}. A nonMarkovian equation known as the Hu-Paz-Zhang (HPZ) master equation for the Brownian motion of a quantum harmonic oscillator in a general environment was derived in \cite{HPZ}. The HPZ master equation is an exact nonMarkovian equation which preserves the positivity of the density operator and is valid for a) all temperatures, b) arbitrary spectral density of the bath, and c) arbitrary coupling strength between the system and the bath. Allowing for time dependence in the natural frequency of the system oscillator and in the $N$ oscillators making up its thermal bath, Hu and Matacz \cite{HM94} derived the master equations for the reduced density matrix of a parametric quantum oscillator system in a squeezed thermal bath and applied them to a range of problems: they showed that a detector sees \textit{thermal radiance} under uniform acceleration (Unruh), in a 2D black hole (Hartle-Hawking) and in a (static) de Sitter (Gibbons-Hawking) spacetime. Later, Raval, Koks, Hu and Matacz \cite{RHK97,KHMR} showed that thermal radiation is emitted from a moving mirror and a collapsing shell, and that near-thermal radiance is present under asymptotic conditions admitting exponential red-shifting. Here we continue to use the detector-field model to probe further into cosmological problems.
\paragraph{Power spectrum} Besides the detector response a commonly invoked quantity of physical interest is the two-point correlation function. The power spectrum is defined to be a suitable normalization factor times the spatial Fourier transform of the contracted correlation function at equal times. Dynamical responses as two-point functions are commonly discussed in condensed matter physics \cite{Lovesey,Chaikin}. We focus only on cosmological problems here. In the two examples we mentioned earlier, the power spectrum of gravitational potential fluctuations is given by Eq. (28.67) and the correlation matrix by Eq. (28.69) of \cite{ThoBla}. The graviton two-point function is given by Eq. (4.34) of \cite{FRV12}, using the definition of the power spectrum defined in \cite{BFM92}. Those are, of course, our witnesses $W$ which co-existed and evolved with the quantum field throughout the entire cosmological history. \paragraph{What can a later time detector read about earlier history?} In the second part of this paper we consider a detector introduced at the present time for cosmological observations, what we called the `Detective' $D$. We shall consider a harmonic oscillator of constant frequency, but the field modes have time-dependent frequencies for all times, getting excited by the expanding universe. Thus the squeeze parameter varies in time. To simplify the investigation, we study the frequency variation in a statically-bounded evolution with a valid `in-state, out-state' description. At late times a stationarity condition exists which makes the problem more tractable. The validity of a fluctuation-dissipation relation in the detector after relaxation is addressed in a companion paper \cite{FDRSq}. Under this condition we inquire to what extent the dynamical response of the detector introduced at a late time can read into the complex structure of the quantum field and reveal some earlier history of the universe. 
There are other important cosmological problems a detector-field system can help to investigate. Quantum decoherence via a single detector-quantum field model and quantum entanglement between two detectors in a common quantum field are two important topics which unfortunately we have no space to delve into in this paper. Suffice it to say that the conceptual frameworks of open quantum systems and nonequilibrium quantum field theory have finally entered the mainstream of current cosmological research. See, e.g., \cite{Fukuma,Boyan,Burgess,Nelson} and references therein. This paper is organized as follows: In Section~II, we briefly summarize field quantization in dynamical spacetimes and show that, under a suitable transformation, the amplitude functions of the field modes behave like parametric oscillators. With this, we can use as our starting point a simple scalar field with time-dependent effective mass in flat space. In Sec.~III, we derive a quantum equation of motion for the internal degree of freedom of an Unruh-DeWitt detector of fixed frequency coupled to a parametrically driven scalar field in Minkowski spacetime. Solving this equation for a detector, assumed to have co-existed with the field in its evolution, will tell us the past history of the field squeezed by the universe's expansion. In Sec.~IV, we examine the features of this parametrically driven quantum field, and show how nonadiabaticity in the time-evolution of the field modes results in squeezing and particle creation. These are the physical observables in the evolutionary history of the field which we aim for a detector to pick up. In Sec.~V, we consider the case of a detector introduced at the present time and investigate its internal dynamics to assess the potential of retrieving information about the past history of the field evolution. In Sec.~VI we summarize our findings for the two detector scenarios, followed by some discussions.
\section{Quantum fields in dynamical spacetimes} To make the presentation self-contained we give in this section a short summary of quantum field theory in curved space suitable for the considerations of cosmological processes mentioned earlier. There are many reviews and monographs devoted to this subject, e.g., \cite{FullingBook,WaldBook,BirDav,ParToms,HuVer20}. Readers familiar with this subject can skip this section altogether. We follow the treatment of \cite{CalHu08} here. \subsection{Free fields in flat space} Some basic structure of quantum field theory (QFT) in flat space is needed for the construction of QFT in curved spacetime. Consider a free scalar field of mass $m$ in Minkowski spacetime. The Heisenberg equation of motion for this field becomes the \textit{Klein-Gordon equation } ${\nabla }^{2}\hat{\Phi} (x)-m^{2}\hat{\Phi} (x)=0$. Assuming that the field lives in a finite large volume $V$ and expanding the scalar field operator in (spatial) Fourier modes, we have \begin{equation} \hat{\Phi} (t,\bm {x})=\frac{1}{\sqrt{V}}\sum_{\bm {k}}\hat{\varphi} _{\bm {k}}(t){u}_{\bm {k}}(\bm {x})\,, \label{modexp} \end{equation} where $\bm {k}=2\pi \bm {n}/L$, and $\bm {n}=(n_{1},n_{2},n_{3})$ in general consists of a triplet of integers. In Minkowski space the spatial mode functions are simply $u_{\bm {k}}=e^{i\bm {k}\cdot \bm {x}}$. In the infinite volume continuum limit this becomes \begin{equation} \hat{\Phi} (\bm {x},t)=\int\!\frac{d^{3}\bm{k}}{( 2\pi)^{\frac{3}{2}}}\;e^{i\bm {k}\cdot\bm {x}}\hat{\varphi} _{\bm {k}}( t)\,. \label{d19a1} \end{equation} The operator-valued function $\hat{\varphi} _{\bm {k}}(t)$ for each mode $\bm {k}$ obeys a harmonic oscillator equation \begin{equation} \frac{d^{2}\hat{\varphi} _{\bm {k}}}{dt^{2}}+\omega _{\bm {k}}^{2}\hat{\varphi} _{\bm {k}}=0, \label{paramphi} \end{equation} where $\omega _{\bm {k}}^{2}= |\bm {k}|^{2}+ m^{2}$ in Minkowski space.
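Although $\omega_{\bm k}$ is constant in Minkowski space, the same mode equation with a time-dependent $\omega_{\bm k}(t)$ is the prototype used throughout this paper. The following short numerical sketch (ours; pure Python, with a hypothetical frequency profile chosen only for illustration) verifies that the combination $f\dot{f}^{*}-f^{*}\dot{f}$ is conserved by the evolution even when $\omega_{\bm k}$ varies in time, which is what allows the normalization introduced below to be imposed once and for all:

```python
import math

def omega2(t):
    """Hypothetical smooth frequency-squared profile (illustration only)."""
    return 1.0 + 0.5 * math.tanh(t)

def deriv(t, y):
    # y = (f, fdot); mode equation  f'' + omega^2(t) f = 0
    f, fdot = y
    return (fdot, -omega2(t) * f)

def rk4_step(t, y, h):
    add = lambda y, k, s: tuple(a + s * b for a, b in zip(y, k))
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, add(y, k1, h / 2))
    k3 = deriv(t + h / 2, add(y, k2, h / 2))
    k4 = deriv(t + h, add(y, k3, h))
    return tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2))

# positive-frequency initial data, normalized so that (f, f*) = i
t, h = -8.0, 1e-3
w0 = math.sqrt(omega2(t))
y = (complex(1 / math.sqrt(2 * w0), 0.0), complex(0.0, -math.sqrt(w0 / 2)))
while t < 8.0:
    y = rk4_step(t, y, h)
    t += h
f, fdot = y
inner = f * fdot.conjugate() - f.conjugate() * fdot   # the inner product (f, f*)
```

The conserved value $i$ is recovered at the final time to the accuracy of the integrator, independently of the frequency profile.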
Given two complex independent solutions $f_{\bm {k}}^{\vphantom{*}}$, $f_{\bm {k}}^{*}$ of Eq.~\eqn{paramphi}, we may write \begin{equation} \hat{\varphi} _{\bm {k}}( t) =\hat{a}_{+\bm {k}}^{\vphantom{\dagger}}f^{\vphantom{\dagger}}_{\bm {k}}( t) +\hat{a}_{-\bm {k}}^{\dagger }f_{\bm {k}}^{*\vphantom{\dagger}}( t) \,. \label{d19c} \end{equation} Let us introduce the inner product $(f,g)=f\dot{g}-g\dot{f}$ of the two functions $f$ and $g$, which is conserved by Eq.~\eqn{paramphi}, and impose the normalization \begin{equation} ( f_{\bm {k}}^{\vphantom{*}},\,f_{\bm {k}}^{*}) =i\, . \label{wron} \end{equation} The equal-time commutation relations are equivalent to \begin{align} \bigl[ \hat{a}_{\bm {k}}^{\vphantom{\dagger}},\hat{a}_{\bm {k}'}^{\vphantom{\dagger}}\bigr] =\bigl[ \hat{a}_{\bm {k}}^{\dagger},\hat{a}_{\bm {k}'}^{\dagger }\bigr] &=0\,, &\bigl[\hat{a}_{\bm {k}}^{\vphantom{\dagger}},\hat{a}_{\bm {k}'}^{\dagger }\bigr]& =\delta^{(3)} ( \bm {k}-\bm {k}')\,. \label{d19e} \end{align} The operators $\hat{a}_{\bm {k}}^{\vphantom{\dagger}}$, $\hat{a}_{\bm {k}}^{\dagger }$ may be interpreted as particle annihilation and creation operators. We say that each choice of the basis functions $f_{\bm {k}}$ constitutes a \textit{particle model}\index{particle model}, where $f_{\bm {k}}$ is the \textit{positive frequency}\index{frequency!positive} component and $f_{\bm {k}}^{*}$ is the \textit{negative frequency}\index{frequency!negative} component of the $\bm {k}$-th mode; the state which is annihilated by all $a_{\bm {k}}$ is the vacuum of the particle model. The vacua of different particle models are in general inequivalent. This situation becomes more challenging for quantum fields in a dynamical background field or spacetime. \subsection{QFT in curved spacetime} Consider a scalar field $\Phi(x)$ of mass $m$ coupled, with an arbitrary coupling constant $\xi$, to the scalar curvature $R$ of a background spacetime with metric $g_{\mu \nu}$.
Its dynamics is described by the action \begin{equation} S_{\Phi}=-\frac{1}{2}\int\!d^{4}x\sqrt{-g}\;\biggl\{g^{\mu \nu }(x)\nabla _{\mu }\Phi(x) \nabla _{\nu }\Phi(x) +\bigl[m^{2}+\xi R(x)\bigr]\Phi^{2}(x)\biggr\}\,, \end{equation} where the 0th component of $x^\mu$ denotes the time $\{t\}$ and the 1, 2, 3 the spatial $\{\bm{x}\}$ components, with four-vector indices $\mu$, $\nu$ running from 0 to 3. (We may drop the $\mu$ index on $x^\mu$ for brevity.) The field-spacetime coupling constant $\xi=0$ corresponds to minimal coupling and $\xi=1/6$ to conformal coupling. Here, $\nabla_{\mu}$ denotes the covariant derivative with respect to the background spacetime with metric tensor $g_{\mu \nu }$ and $g=\det g_{\mu \nu }$ is its determinant. The scalar field satisfies the wave equation \begin{equation} \Bigl[\square-m^{2}-\xi R(x)\Bigr]\Phi (x)=0\,, \label{E2} \end{equation} where the Laplace-Beltrami operator $\square$ defined on the background spacetime is given by \begin{equation} \square= g^{\mu \nu }\nabla _{\mu }\nabla _{\nu }=\frac{1}{\sqrt{-g(x)}}\frac{\partial }{\partial x^{\mu }}\left[ g^{\mu \nu }(x)\sqrt{-g(x)}\,\frac{\partial }{\partial x^{\nu }}\right]\,. \end{equation} In flat space, Poincar\'{e} invariance ensures the existence of a unique global Killing vector $\partial _{t}$ orthogonal to all constant-time spacelike hypersurfaces, an unambiguous separation of the positive- and negative-frequency modes, and a unique and well-defined vacuum. In curved spacetime, general covariance precludes any such privileged choice of time and slicing. There is no natural mode decomposition and no unique vacuum \cite{Fulling73,FullingBook}. We assume the background spacetime under consideration has at least enough symmetry to allow for a normal mode decomposition of the invariant operator at any constant-time slice.
In the canonical quantization approach, the quantum field $\Phi$ and its conjugate momentum $\Pi$ obey the equal-time commutation relation \begin{equation} \bigl[ \Phi (\bm {x},t) ,\Pi(\bm {x}',t)\bigr]=i\, \delta^{(3)}(\bm {x},\bm {x}')\,. \end{equation} Here the scalar delta function $\delta^{(3)}(\bm {x},\bm {x}')$ in curved spacetime satisfies \begin{equation} \int\!d^{3}\bm{x}'\sqrt{-h(\bm{x}')}\;\delta^{(3)} (\bm {x},\bm {x}')\,F(\bm {x}')=F(\bm {x})\,, \end{equation} where $h$ is the determinant of the 3-metric on the spacelike surface and $F$ is any well-behaved test function. In the spatially flat Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) spacetime with line element \begin{equation} ds^{2}=-dt^{2}+a^{2}(t)\,d\bm{x}^{2}\,, \end{equation} the spatial mode functions are $u_{\bm {k}}=e^{i\bm {k}\cdot\bm {x}}$ and the wave equation for the amplitude function $f_{\bm {k}}(t)$ of mode $\bm {k}$ in cosmic time $t$ becomes \begin{equation} \ddot{f}_{k}(t)+3H\;\dot{f}_{k}(t)+\bigl[\omega^{2}(t)+\xi R(t)\bigr]f_{k}(t)=0\,,\label{costweq} \end{equation} where an overdot denotes the derivative with respect to cosmic time, and because of spatial isotropy, $f_{\bm {k}}$ in fact is a function of $k=\lvert\bm{k}\rvert$. In addition we introduce \begin{align} \omega^{2}(t)&=\frac{k^{2}}{a^{2}(t)}+m^{2}\,,&R&=6\bigl[ \dot{H}(t)+{2}H^{2}(t)\bigr]\, , \label{qk} \end{align} with $H(t)=\dot{a}/a$ being the Hubble expansion rate of the background space. The inequivalence of Fock representations in curved space, originating from the absence of a global time-like Killing vector, leads to an unavoidable mixing of the positive- and negative-frequency components of the field, and thus to particle creation. In cosmological spacetimes the vacua defined at different times of the evolution are not equivalent, so cosmological particle creation is by nature a dynamically induced effect.
We stress that the particles in this context are not produced by interactions; they are excitations of the free field by the changing background spacetimes. The physical mechanism is also different from thermal particle creation in black holes \cite{Haw74}, accelerated detectors \cite{Unr76} or moving mirrors \cite{FulDav76,DavFul77}, which involve the presence of a horizon; it is, however, the same mechanism that underlies the dynamical Casimir effect \cite{DCE}. Unlike in QFT in a constant background field or in static spacetimes, a consistent separation into positive and negative energy solutions of the wave equation is not always possible. The definition of a vacuum\index{vacuum} state becomes a fundamental challenge in the construction of QFT in time-dependent backgrounds \cite{Fulling73}. There are a few situations where vacuum states in QFT in dynamical spacetimes are well-defined: 1) the so-called statically bounded or asymptotically stationary spacetimes, where it is assumed that at $t=\pm \infty $ the background spacetime becomes stationary and the background fields become constant; 2) conformally-invariant fields in conformally static spacetimes. In both cases the Fock spaces are well defined and one can calculate the amplitude for particle creation in an $S$-matrix sense. 3) If the background spacetime does not change too rapidly (quantified by a nonadiabaticity parameter described below) there is a conceptually clear and technically simple method of defining the so-called ($n$th order) \textit{adiabatic vacuum or number state}. We shall consider Cases 1) and 2) here for simplicity and bypass Case 3), whose treatment can be found in papers dating back to the 1970s. For the topics of Bogoliubov transformations and particle creation, adiabatic number states, and adiabatic regularization of the stress tensor, we refer the reader to the original papers.
\subsection{Conformally-related spacetimes} The spatially-flat FLRW spacetime $g_{\mu\nu}$ can be conformally related to the Minkowski spacetime $\eta _{\mu \nu }$ by \begin{align} g_{\mu \nu }(x)&=a^{2}(\eta )\eta _{\mu \nu }\,,&ds^{2}&=a^{2}(\eta )(-d\eta ^{2}+d\bm {x}^{2})\,, \end{align} where the conformal time $\eta$ is defined by \begin{equation} \eta=\int^{t}\!\frac{ds}{a(s)}\,. \end{equation} Since the conformally-related spacetime is static, a global Killing vector $\partial _{\eta }$ exists which enables us to define a globally well-defined vacuum state. The vacuum defined by the mode decomposition with respect to $\partial _{\eta }$ is known as the conformal vacuum. For conformally-invariant fields, that is, a massless scalar field with $\xi =1/6$ in Eq.~\eqref{E2}, in conformally-static spacetimes, there is no particle creation~\cite{Par72}. Any deviation from these conditions may result in particle production. Consider a real massive scalar field coupled to a spatially-flat FLRW metric with constant coupling $\xi $. It is convenient to introduce a conformal amplitude function $\chi _{\bf k}(\eta )\equiv a(\eta)f_{\bf k}(\eta )$ by rescaling the amplitude function $f_{\bf k}(t)$ in \eqref{costweq} for the normal mode ${\bf k}$. It then satisfies the equation of motion of a parametric oscillator, \begin{equation} \chi''_{k}(\eta)+\omega^{2}_c(\eta)\,\chi_{k}(\eta)=0\,, \label{contweq} \end{equation} where a prime denotes $d/d\eta $, $k \equiv | {\bf k}|$ and \begin{equation} \omega^{2}_c(\eta )=\omega^{2}(t)a^{2}(\eta)=k^{2}+a^{2}(\eta)\Bigl[m^{2}+(\xi -\frac{1}{6})\,R(\eta)\Bigr]\,,\label{Omega} \end{equation} is the conformal time-dependent frequency, with the subscript $c$ indicating that it is a conformally-related quantity.
One sees that, for massless ($m=0$) conformally coupled ($\xi=\frac{1}{6}$) fields in a spatially flat FLRW universe, the conformal wave equation admits solutions \begin{equation} \chi_{k}(\eta )=A\,e^{+i\omega_c\eta }+B\,e^{-i\omega_c\eta}\,, \end{equation} which are of the same form as traveling waves in flat space. Since for $m=0$, $\xi=1/6$, we have a constant frequency $\omega_{c}=k$, the positive- and negative-frequency components remain separated and there is no particle production. {Take, as another example, the minimally coupled ($\xi=0$), massless scalar field in the spatially-flat de Sitter universe (the Poincar\'e patch) with natural frequency \begin{equation} \omega_c^{2}(\eta)=k^{2}-\frac{2}{\eta^{2}}\,. \end{equation} The solutions to the wave equations for the amplitudes of the normal modes are given by \begin{equation} \chi_{k}(\eta )=A\,e^{+ik\eta}\Bigl(1+\frac{i}{k\eta}\Bigr)+B\,e^{-ik\eta}\Bigl(1-\frac{i}{k\eta}\Bigr)\,. \end{equation} We see a frozen power spectrum of the field on superhorizon scales $\lvert k\eta\rvert\ll1$. This example illustrates an important fact that the quantum field can have distinct behavior over various scales in the frequency modulation, exhibited through different stages of the field's evolution. In this work we look into the possibility of identifying these features imprinted in the nonMarkovian dynamics of one or more detectors that measure/interact with the quantum field.} {The frequency modulation \eqref{Omega} captures the essence of the quantum fields in many interesting cosmological spacetimes which can be conformally-related to that in flat space.
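As a cross-check of the de Sitter example above (our numerical sketch; the constants are set to $A=1$, $B=0$ purely for illustration), one can verify by finite differences that the quoted mode function indeed solves $\chi_k''+(k^{2}-2/\eta^{2})\chi_k=0$, and that the $1/(k\eta)$ piece, which freezes the power spectrum, dominates on superhorizon scales:

```python
import cmath

def chi(eta, k):
    # the A-branch of the de Sitter mode function quoted above (A = 1, B = 0)
    return cmath.exp(1j * k * eta) * (1 + 1j / (k * eta))

def residual(eta, k, h=1e-4):
    # central-difference check of  chi'' + (k^2 - 2/eta^2) chi = 0
    d2 = (chi(eta + h, k) - 2 * chi(eta, k) + chi(eta - h, k)) / h**2
    return d2 + (k**2 - 2 / eta**2) * chi(eta, k)

k = 2.0
residuals = [abs(residual(eta, k)) for eta in (-10.0, -1.0, -0.5)]
# deep inside the superhorizon regime |k eta| << 1 the 1/(k eta) piece dominates
superhorizon = abs(chi(-1e-3, k))
```

The residuals vanish to finite-difference accuracy at all sampled times, while the mode amplitude grows as $1/|k\eta|$ toward $\eta\to0^{-}$.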
We shall henceforth focus on parametric quantum fields in flat space, establishing a generic model for the investigation of nonMarkovianity in the dynamics of the detector and the changing field, then explore features that may allow us to fathom the past history of the Universe.} \section{Quantum oscillator in a parametric quantum field} Using the model of~\cite{HM94} we study the nonequilibrium dynamics of an Unruh-DeWitt detector coupled to a quantum field $\Phi(\bm{x}, t)$ in a cosmological spacetime. {The expansion of the universe acts like a drive which changes the natural frequencies of the normal modes of the quantum field as in a parametric oscillator. The same holds for a quantum field driven by a moving conducting plate, as in the dynamical Casimir effect \cite{DCE}. We shall call a driven quantum field with time-varying normal-mode natural frequency a `parametric field' for short. (Note that an undriven quantum field has {a trivial} time dependence $e^{\pm i \omega t}$.) After the conformal transformation outlined in the previous section, its amplitude function obeys an equation of the same generic form as that obeyed by the amplitudes of a parametric field in flat space. Hereafter, we shall treat a real quantum scalar field $\Phi(\bm{x}, t)$ parametrically driven and evolving in Minkowski space. The action for the dynamics of the internal degree of freedom $Q_{n}$ of the $n^{\text{th}}$ detector, modeled as a quantum parametric oscillator, is given by \begin{equation} S_{\textsc{udw}}=\int\!d^{3}\bm{x}\!\int\!dt\;\sum_{n}\frac{M}{2}\Bigl[\dot{Q}_{n}^{2}(t)-\Omega_{b}^{2}(t)\,Q_{n}^{2}(t)\Bigr]\,\delta^{(3)}(\bm{x}-\bm{z}_{n})\,, \end{equation} where $M$ is the detector mass and $\Omega_{b}(t)$ its bare oscillating frequency; the detector's external degree of freedom follows a prescribed trajectory $\bm{z}_{n}(t)$.
The quantum scalar field $\Phi(\bm{x},t)$ has a time-dependent mass $\mathfrak{m}(t)$, which may accommodate the effects of external time-dependent sources, such as the scale factor $a(t)$, that account for the parametric process of the field. Its action takes the form \begin{equation}\label{E:dgksjfbrt} S_{\textsc{b}}=\int\!d^{3}\bm{x}\!\int\!dt\;\frac{1}{2}\biggl\{\Bigl[\partial_{t}\Phi(\bm{x},t)\Bigr]^{2}-\Bigl[\nabla_{\bm{x}}\Phi(\bm{x},t)\Bigr]^{2}-\mathfrak{m}^{2}(t)\,\Phi^{2}(\bm{x},t)\biggr\}\,. \end{equation} The interaction between the detector and the field takes a simple bilinear form \begin{equation} S_{\textsc{int}}=\int\!d^{3}\bm{x}\!\int\!dt\;\sum_{n}e(t)Q_{n}(t)\delta^{(3)}(\bm{x}-\bm{z}_{n})\,\Phi(\bm{x},t)\,, \end{equation} in which the detector-field coupling $e(t)$ is allowed to be time-dependent to include additional effects of the external driving sources. These actions lead to the following coupled Heisenberg equations for the internal degrees of freedom $\hat{Q}_{n}(t)$ of the detectors and the field $\hat{\Phi}(\bm{x},t)$, \begin{align} M\,\ddot{\hat{Q}}_{n}(t)+M\Omega_{b}^{2}(t)\,\hat{Q}_{n}(t)&=e(t)\,\hat{\Phi}(\bm{z}_{n},t)\,,\label{E:ihtdnkj1}\\ \frac{\partial^{2}}{\partial t^{2}}\hat{\Phi}(\bm{x},t)-\nabla^{2}_{\bm{x}}\hat{\Phi}(\bm{x},t)+\mathfrak{m}^{2}(t)\,\hat{\Phi}(\bm{x},t)&=\sum_{n}e(t)\hat{Q}_{n}(t)\delta^{(3)}(\bm{x}-\bm{z}_{n})\,.\label{E:ihtdnkj2} \end{align} Formally solving \eqref{E:ihtdnkj2} gives \begin{align} \hat{\Phi}(\bm{x},t)&=\hat{\Phi}_{h}(\bm{x},t)+\int\!d^{3}\bm{x}'ds\;G_{R,0}^{(\Phi)}(\bm{x},t;\bm{x}',s)\sum_{n}e(s)\hat{Q}_{n}(s)\delta^{(3)}(\bm{x}'-\bm{z}_{n})\notag\\ &=\hat{\Phi}_{h}(\bm{x},t)+\int_{0}^{t}\!ds\;\sum_{n}e(s)\,G_{R,0}^{(\Phi)}(\bm{x},t;\bm{z}_{n},s)\hat{Q}_{n}(s)\,,\label{E:irnddfgd} \end{align} where $\hat{\Phi}_{h}(\bm{x},t)$ is the homogeneous solution to the wave equation \eqref{E:ihtdnkj2} and describes the free field, while $\hat{\Phi}(\bm{x},t)$ on the lefthand side of \eqref{E:irnddfgd}, which we call the
interacting field, comprises the back-action of the internal degree of freedom of the detector in the form of a radiation field \cite{QRad}, as can be seen from the second term on the righthand side. This difference is subtle but worth noticing. The two-point function $G_{R,0}^{(\Phi)}(\bm{x},t;\bm{x}',t')$ is the retarded Green's function of the field, satisfying the inhomogeneous wave equation \begin{equation*} \frac{\partial^{2}}{\partial t^{2}}G_{R,0}^{(\Phi)}(\bm{x},t;\bm{x}',t')-\nabla^{2}_{\bm{x}}G_{R,0}^{(\Phi)}(\bm{x},t;\bm{x}',t')+\mathfrak{m}^{2}(t)\,G_{R,0}^{(\Phi)}(\bm{x},t;\bm{x}',t')=\delta^{(3)}(\bm{x}-\bm{x}')\delta(t-t')\,. \end{equation*} Plugging \eqref{E:irnddfgd} back into \eqref{E:ihtdnkj1} thus yields \begin{equation}\label{E:gbksddf} M\,\ddot{\hat{Q}}_{n}(t)+M\Omega_{b}^{2}(t)\,\hat{Q}_{n}(t)=e(t)\hat{\Phi}_{h}(\bm{z}_{n},t)+\sum_{j}\int_{0}^{t}\!ds\;e(t)G_{R,0}^{(\Phi)}(\bm{z}_{n},t;\bm{z}_{j},s)e(s)\hat{Q}_{j}(s)\,. \end{equation} This governs the nonequilibrium dynamics of the internal degree of freedom of the detector, coupled to a parametric field. The expressions on the righthand side result from the detector-field interaction. The first term describes the fluctuating force due to the quantum fluctuations of the free field. The second term is a nonlocal expression, which contains 1) a local frictional force, 2) a {\it self-nonMarkovian effect} of the detector due to the finite effective mass of the field, and 3) {\it mutual nonMarkovian influences} between detectors, mediated by the field. We will dwell on these points after we have derived the field dynamics. \section{Dynamics of parametrically-driven quantum field} Suppose the parametric process of the free classical field starts at $t=\mathfrak{t}_{i}>0$ and ends at $\mathfrak{t}_{f}$, during which the effective mass $\mathfrak{m}(t)$ increases monotonically and smoothly from one fixed value $\mathfrak{m}_{i}$ to another $\mathfrak{m}_{f}>\mathfrak{m}_{i}$.
The duration of the parametric process is then $\mathfrak{t}=\mathfrak{t}_{f}-\mathfrak{t}_{i}$. Thus before $\mathfrak{t}_{i}$ and after $\mathfrak{t}_{f}$, the field $\Phi(\bm{x},t)$ behaves like a free, real massive scalar field. Note that in this section we suppress the subscript $h$ for the free field. We will put the subscript back when needed. Let us expand $\Phi(\bm{x},t)$ by \begin{equation} \Phi(\bm{x},t)=\int\!\!\frac{d^{3}\bm{k}}{(2\pi)^{\frac{3}{2}}}\;e^{+i\bm{k}\cdot\bm{x}}\,\varphi_{\bm{k}}(t)\,, \end{equation} with the mode function $\varphi_{\bm{k}}^{*}(t)=\varphi_{-\bm{k}}^{\vphantom{*}}(t)$. The action of the free field \eqref{E:dgksjfbrt} then takes the form of the action for a collection of parametric oscillators \begin{equation} S=\int\!dt\;\frac{1}{2}\int\!d^{3}\bm{k}\,\Bigl\{\dot{\varphi}_{\bm{k}}^{\vphantom{*}}(t)\dot{\varphi}_{\bm{k}}^{*}(t)-\omega^{2}(t)\,\varphi_{\bm{k}}^{\vphantom{*}}(t)\varphi_{\bm{k}}^{*}(t)\Bigr\}\,, \end{equation} with natural frequency $\omega(t)$ \begin{equation}\label{E:kgfbdf} \omega^{2}(t)=\bm{k}^{2}+\mathfrak{m}^{2}(t)\,, \end{equation} such that the frequency monotonically and smoothly changes from $\omega_{i}$ at the beginning of the parametric process to $\omega_{f}$ at the end \begin{align} \omega_{i}^{2}&=\bm{k}^{2}+\mathfrak{m}_{i}^{2}\,,&\omega_{f}^{2}&=\bm{k}^{2}+\mathfrak{m}_{f}^{2}\,. \end{align} The mode function $\varphi_{\bm{k}}(t)$ satisfies the equation of motion of a classical parametric oscillator \begin{equation}\label{E:rirtyrd} \ddot{\varphi}_{\bm{k}}(t)+\omega^{2}(t)\,\varphi_{\bm{k}}(t)=0\,. 
\end{equation} The solutions of the corresponding Heisenberg equations can be given formally in terms of the initial conditions at $t=0<\mathfrak{t}_{i}$ \begin{align} \hat{\varphi}_{\bm{k}}(t)&=d^{(1)}_{\bm{k}}(t)\,\hat{\varphi}_{\bm{k}}(0)+d^{(2)}_{\bm{k}}(t)\,\hat{\pi}_{-\bm{k}}(0)\,,\label{E:dgbksbgsd1}\\ \hat{\pi}_{\bm{k}}(t)&=\dot{d}^{(1)}_{-\bm{k}}(t)\,\hat{\varphi}_{-\bm{k}}(0)+\dot{d}^{(2)}_{-\bm{k}}(t)\,\hat{\pi}_{\bm{k}}(0)\,, \end{align} where $\hat{\pi}_{\bm{k}}=\dot{\hat{\varphi}}_{-\bm{k}}(t)$ is the mode operator of the canonical momentum $\hat{\Pi}$ conjugate to the field $\hat{\Phi}$. We introduce a special set of homogeneous solutions $d^{(1)}_{\bm{k}}(t)$, $d^{(2)}_{\bm{k}}(t)$ to \eqref{E:rirtyrd}, satisfying \begin{align} d^{(1)}_{\bm{k}}(0)&=1\,,&\dot{d}^{(1)}_{\bm{k}}(0)&=0\,,&d^{(2)}_{\bm{k}}(0)&=0\,,&\dot{d}^{(2)}_{\bm{k}}(0)&=1\,, \end{align} for each mode $\bm{k}$. They are particularly convenient in the context of nonequilibrium dynamics. The canonical commutation relation $[\hat{\varphi}_{\bm{k}}(t),\hat{\pi}_{\bm{k}'}(t)]=i\,\delta_{\bm{k}\bm{k}'}$ gives \begin{equation}\label{E:fgksbrt} d^{(1)}_{\bm{k}}(t)\dot{d}^{(2)}_{\bm{k}}(t)-d^{(2)}_{\bm{k}}(t)\dot{d}^{(1)}_{\bm{k}}(t)=1\,, \end{equation} which is nothing but the Wronskian condition corresponding to \eqref{E:rirtyrd}. We also note that $d^{(i)}_{\bm{k}}(t)$ only depends on the magnitude of $\bm{k}$ due to \eqref{E:kgfbdf}.
Now suppose that at $t=0$ (which amounts to choosing $f_{\bm {k}}(0)=1/\sqrt{2\omega_{i}}$ in \eqref{d19c}) we rewrite the field-mode operator and its momentum in terms of the creation and annihilation operators $\hat{a}^{\vphantom{\dagger}}_{\bm{k}}$, $\hat{a}^{\dagger}_{\bm{k}}$, \begin{align} \hat{\varphi}_{\bm{k}}&=\frac{1}{\sqrt{2\omega_{i}}}\,\bigl(\hat{a}^{\dagger}_{\bm{k}}+\hat{a}^{\vphantom{\dagger}}_{\bm{k}}\bigr)\,,&\hat{\pi}_{\bm{k}}&=i\sqrt{\frac{\omega_{i}}{2}}\,\bigl(\hat{a}^{\dagger}_{-\bm{k}}-\hat{a}^{\vphantom{\dagger}}_{-\bm{k}}\bigr)\,, \end{align} such that by \eqref{E:dgbksbgsd1}, we can express $\hat{\varphi}_{\bm{k}}(t)$ in terms of $\hat{a}^{\vphantom{\dagger}}_{\bm{k}}(0)$ and $\hat{a}^{\dagger}_{\bm{k}}(0)$ \begin{align}\label{E:rnbryjy} \hat{\varphi}_{\bm{k}}(t)=\frac{1}{\sqrt{2\omega_{i}}}\Bigl\{\Bigl[d^{(1)}_{\bm{k}}(t)-i\,\omega_{i}\,d^{(2)}_{\bm{k}}(t)\Bigr]\,\hat{a}^{\vphantom{\dagger}}_{\bm{k}}(0)+\Bigl[d^{(1)}_{\bm{k}}(t)+i\,\omega_{i}\,d^{(2)}_{\bm{k}}(t)\Bigr]\,\hat{a}^{\dagger}_{\bm{k}}(0)\Bigr\}\,, \end{align} with the shorthand notation $\omega(t)=\omega_{i}$ when $t\leq\mathfrak{t}_{i}$. Eq.~\eqref{E:rnbryjy} gives the formal, exact dynamics at any time $t>0$ of the free parametric field according to the action \eqref{E:dgksjfbrt} with given initial conditions at $t=0$.
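To make \eqref{E:rnbryjy} concrete, the following numerical sketch (ours; the $\tanh$ profile and all parameter values are illustrative) integrates $d^{(1)}_{\bm k}$ and $d^{(2)}_{\bm k}$ through a rapid frequency change, checks the Wronskian condition \eqref{E:fgksbrt}, and reads off the number of created particles $\lvert\beta_{\bm k}\rvert^{2}$ from the positive-frequency combination $\bigl[d^{(1)}_{\bm k}-i\,\omega_{i}\,d^{(2)}_{\bm k}\bigr]/\sqrt{2\omega_{i}}$ appearing in \eqref{E:rnbryjy}. For a near-sudden change it should recover the standard sudden-approximation result $\lvert\beta_{\bm k}\rvert^{2}=(\omega_{f}-\omega_{i})^{2}/(4\omega_{i}\omega_{f})$:

```python
import math, cmath

wi, wf = 1.0, 2.0        # in/out frequencies of one mode (k = 0 for simplicity)
t0, tau = 1.0, 0.005     # ramp centre and width; tau << 1/omega: "sudden" regime

def w2(t):
    # smooth monotonic interpolation between wi^2 and wf^2
    return wi**2 + (wf**2 - wi**2) * 0.5 * (1.0 + math.tanh((t - t0) / tau))

def deriv(t, y):
    d1, v1, d2, v2 = y
    return (v1, -w2(t) * d1, v2, -w2(t) * d2)

def rk4_step(t, y, h):
    add = lambda y, k, s: tuple(a + s * b for a, b in zip(y, k))
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, add(y, k1, h / 2))
    k3 = deriv(t + h / 2, add(y, k2, h / 2))
    k4 = deriv(t + h, add(y, k3, h))
    return tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(4))

# d1(0) = 1, d1'(0) = 0 ;  d2(0) = 0, d2'(0) = 1
y, t, h = (1.0, 0.0, 0.0, 1.0), 0.0, 2e-4
while t < 2.0 - 1e-9:
    y = rk4_step(t, y, h)
    t += h
d1, v1, d2, v2 = y

wronskian = d1 * v2 - d2 * v1                 # should remain exactly 1
f  = (d1 - 1j * wi * d2) / math.sqrt(2 * wi)  # coefficient of a_k(0) in the expansion
fd = (v1 - 1j * wi * v2) / math.sqrt(2 * wi)
# project onto the out-basis: f + f'/(i wf) = 2 beta e^{+i wf t}/sqrt(2 wf)
beta = 0.5 * math.sqrt(2 * wf) * cmath.exp(-1j * wf * t) * (f + fd / (1j * wf))
n_created = abs(beta)**2                      # sudden limit: (wf-wi)^2/(4 wi wf)
```

The Wronskian stays at unity to integrator accuracy, and $\lvert\beta\rvert^{2}$ approaches $1/8$ for the chosen frequencies, the hallmark of particle creation by nonadiabatic frequency change.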
Then, the field operator $\hat{\Phi}(\bm{x},t)$ has a plane-wave expansion of the form \begin{align} \hat{\Phi}(\bm{x},t)&=\int\!\!\frac{d^{3}\bm{k}}{(2\pi)^{\frac{3}{2}}}\frac{1}{\sqrt{2\omega_{i}}}\Bigl\{\Bigl[d^{(1)}_{\bm{k}}(t)-i\,\omega_{i}\,d^{(2)}_{\bm{k}}(t)\Bigr]\,\hat{a}^{\vphantom{\dagger}}_{\bm{k}}(0)\,e^{+i\bm{k}\cdot\bm{x}}\Bigr.\notag\\ &\qquad\qquad\qquad\qquad\qquad+\Bigl.\Bigl[d^{(1)}_{\bm{k}}(t)+i\,\omega_{i}\,d^{(2)}_{\bm{k}}(t)\Bigr]\,\hat{a}^{\dagger}_{\bm{k}}(0)\,e^{-i\bm{k}\cdot\bm{x}}\Bigr\}\,,\label{E:ngkfjg1} \end{align} and the corresponding momentum operator is \begin{align} \hat{\Pi}(\bm{x},t)&=\int\!\!\frac{d^{3}\bm{k}}{(2\pi)^{\frac{3}{2}}}\frac{1}{\sqrt{2\omega_{i}}}\Bigl\{\Bigl[\dot{d}^{(1)}_{\bm{k}}(t)-i\,\omega_{i}\,\dot{d}^{(2)}_{\bm{k}}(t)\Bigr]\,\hat{a}^{\vphantom{\dagger}}_{\bm{k}}(0)\,e^{+i\bm{k}\cdot\bm{x}}\Bigr.\notag\\ &\qquad\qquad\qquad\qquad\qquad+\Bigl.\Bigl[\dot{d}^{(1)}_{\bm{k}}(t)+i\,\omega_{i}\,\dot{d}^{(2)}_{\bm{k}}(t)\Bigr]\,\hat{a}^{\dagger}_{\bm{k}}(0)\,e^{-i\bm{k}\cdot\bm{x}}\Bigr\}\,.\label{E:ngkfjg2} \end{align} It can be easily seen that for the standard Klein-Gordon field in the unbounded Minkowski space, we have $\omega(t)=\omega_{i}$, and \begin{align} d^{(1)}_{\bm{k}}(t)&=\cos\omega_{i} t\,,&d^{(2)}_{\bm{k}}(t)&=\frac{1}{\omega_{i}}\,\sin\omega_{i} t\,, \end{align} and the expansions \eqref{E:ngkfjg1} and \eqref{E:ngkfjg2} revert to the conventional plane-wave forms we are familiar with. With the field expansion \eqref{E:ngkfjg1}, we are ready to construct its Green's functions.
The retarded Green's function $G_{R,0}^{(\Phi)}(\bm{x},t;\bm{x}',t')$ of the free field $\hat{\Phi}(\bm{x},t)$ is given by \begin{align}\label{E:bfkfkrte} G_{R,0}^{(\Phi)}(\bm{x},t;\bm{x}',t')&=i\,\theta(t-t')\,\bigl[\hat{\Phi}(\bm{x},t),\hat{\Phi}(\bm{x}',t')\bigr]\notag\\ &=-\theta(t-t')\int\!\!\frac{d^{3}\bm{k}}{(2\pi)^{3}}\;e^{+i\bm{k}\cdot(\bm{x}-\bm{x}')}\Bigl\{d^{(1)}_{\bm{k}}(t)d^{(2)}_{\bm{k}}(t')-d^{(2)}_{\bm{k}}(t)d^{(1)}_{\bm{k}}(t')\Bigr\}\,, \end{align} where $\theta(t)$ is the unit-step function. This is state-independent in the current setting, but in contrast to the cases~\cite{CPR,PRE18,HHEnt,LinHu07,LinHu06} we have treated, it is in general nonstationary due to the parametric process involved. {The terms in the curly brackets will not reduce to $d^{(2)}_{\bm{k}}(t-t')$.} However, for the scenario we are considering, it behaves like the standard retarded Green's function of the massive field, but with different masses $\mathfrak{m}$ in the regimes $t$, $t'\leq\mathfrak{t}_{i}$ and $t$, $t'\geq\mathfrak{t}_{f}$. In these two regimes, this retarded Green's function becomes invariant under time translation. \begin{figure} \centering \scalebox{0.4}{\includegraphics{GrFnmm}} \caption{The retarded Green's function of a massive Klein-Gordon field.
Note that it has a non-zero contribution inside the forward lightcone.}\label{Fi:GrFnmm} \end{figure} The retarded Green's function and the Hadamard function of the free massive field of constant mass $m$ are given by~\cite{bjorken&drell} \begin{align} G_{R,0}(x,x')&=\frac{\theta(T)}{2\pi}\,\Bigl[\delta(\sigma^{2})-\theta(\sigma^{2})\,\frac{m}{2\sqrt{\sigma^{2}}}\,J_{1}(m\sqrt{\sigma^{2}})\Bigr]\,,\label{E:kdfgbksfb1}\\ G_{H,0}(x,x')&=\frac{1}{4\pi}\Bigl[\theta(+\sigma^{2})\,\frac{m}{2\sqrt{+\sigma^{2}}}\,Y_{1}(m\sqrt{+\sigma^{2}})+\theta(-\sigma^{2})\,\frac{m}{\pi\sqrt{-\sigma^{2}}}\,K_{1}(m\sqrt{-\sigma^{2}})\Bigr]\,,\notag \end{align} where $T=t-t'$ and the spacetime interval $\sigma^{2}=T^{2}-\bm{R}^{2}$ with $\bm{R}=\bm{x}-\bm{x}'$. In the case of a single detector at a fixed spatial location, we have $\bm{R}=0$, and hence $\sigma^{2}>0$ always. We see from \eqref{E:kdfgbksfb1} that, other than the lightcone structure enforced by the first term on the righthand side, it has an additional term that does not vanish for $\sigma^{2}>0$, that is, in a timelike interval. It characterizes a memory time of order $m^{-1}$. This implies that the detector can affect its own subsequent motion by the radiation field it emits at the present time. In principle, the field has a longer memory for a lighter mass, but the small mass also reduces the ``strength'' of this {\it self-nonMarkovian} effect. Hence there is a trade-off. Next we turn to the Hadamard function $G_{H,0}^{(\Phi)}(\bm{x},t;\bm{x}',t')=\dfrac{1}{2}\,\langle\bigl\{\hat{\Phi}(\bm{x},t),\hat{\Phi}(\bm{x}',t')\bigr\}\rangle$, that is, the noise kernel of the field. It reflects the correlation of the field and provides information about the magnitude of the noise force on the detector.
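Before evaluating the Hadamard function, we illustrate the self-nonMarkovian memory just discussed with a toy numerical sketch (ours, not taken from the references): the timelike tail of \eqref{E:kdfgbksfb1} at $\bm R=0$, namely $-m\,J_{1}(mT)/(4\pi T)$, is fed into a discretized version of the memory integral in \eqref{E:gbksddf} for a single static detector with constant coupling. The lightcone-singular piece, which renormalizes parameters and yields local damping, is dropped here, and all parameter values are illustrative:

```python
import math

def j1(x, nmax=25):
    # ascending series of the Bessel function J_1 (adequate for moderate x)
    return sum((-1)**n / (math.factorial(n) * math.factorial(n + 1))
               * (x / 2)**(2 * n + 1) for n in range(nmax))

def evolve(e, Omega=1.0, m=0.5, h=0.01, steps=1500):
    """Q'' + Omega^2 Q = e^2 * sum_h K(t - s) Q(s): toy Volterra integration."""
    # precompute the timelike memory kernel K(T) = -m J_1(m T)/(4 pi T) on the grid
    K = [0.0] + [-m * j1(m * d * h) / (4 * math.pi * d * h)
                 for d in range(1, steps + 1)]
    Q, P = [1.0], [0.0]
    for i in range(steps):
        # trapezoid-like accumulation of the memory integral over the history
        mem = e**2 * h * sum(K[i - j] * Q[j] for j in range(i + 1))
        Pn = P[-1] + h * (-Omega**2 * Q[-1] + mem)   # symplectic-Euler step
        Qn = Q[-1] + h * Pn
        Q.append(Qn)
        P.append(Pn)
    return Q

free    = evolve(0.0)   # decoupled detector: Q(t) = cos(Omega t)
coupled = evolve(0.3)   # the memory integral shifts amplitude and phase
```

The kernel lookup makes the detector's present acceleration depend on its whole past trajectory, with weight extending over times $T\sim m^{-1}$, which is precisely the self-nonMarkovian effect described above.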
From the mode expansion \eqref{E:ngkfjg1}, we have \begin{align} &\quad G_{H,0}^{(\Phi)}(\bm{x},t;\bm{x}',t')\notag\\ &=\int\!\!\frac{d^{3}\bm{k}}{(2\pi)^{3}}\frac{1}{2\omega_{i}}\biggl\{\,e^{+i\bm{k}(\bm{x}-\bm{x}')}\Bigl(\langle\hat{a}_{\bm{k}}^{\dagger}(0)\hat{a}_{\bm{k}}^{\vphantom{\dagger}}(0)\rangle+\frac{1}{2}\Bigr)\Bigl[2d^{(1)}_{\bm{k}}(t)d^{(1)}_{\bm{k}}(t')+2\omega_{i}^{2}d^{(2)}_{\bm{k}}(t)d^{(2)}_{\bm{k}}(t')\Bigr]\biggr.\notag\\ &\qquad\qquad\qquad\;+e^{+i\bm{k}(\bm{x}+\bm{x}')}\,\langle\hat{a}_{\bm{k}}^{\hphantom{\dagger}2}(0)\rangle\Bigl[d^{(1)}_{\bm{k}}(t)d^{(1)}_{\bm{k}}(t')-\omega_{i}^{2}\,d^{(2)}_{\bm{k}}(t)d^{(2)}_{\bm{k}}(t')\Bigr.\notag\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\Bigl.i\,\omega_{i}\,d^{(1)}_{\bm{k}}(t)d^{(2)}_{\bm{k}}(t')-i\,\omega_{i}\,d^{(2)}_{\bm{k}}(t)d^{(1)}_{\bm{k}}(t')\Bigr]\notag\\ &\qquad\qquad\qquad\;+\biggl.e^{-i\bm{k}(\bm{x}+\bm{x}')}\,\langle\hat{a}_{\bm{k}}^{\dagger2}(0)\rangle\Bigl[d^{(1)}_{\bm{k}}(t)d^{(1)}_{\bm{k}}(t')-\omega_{i}^{2}\,d^{(2)}_{\bm{k}}(t)d^{(2)}_{\bm{k}}(t')\Bigr.\label{E:brtyufb}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\Bigl.i\,\omega_{i}\,d^{(1)}_{\bm{k}}(t)d^{(2)}_{\bm{k}}(t')+i\,\omega_{i}\,d^{(2)}_{\bm{k}}(t)d^{(1)}_{\bm{k}}(t')\Bigr]\biggr\}\,.\notag \end{align} Here $\langle\cdots\rangle$ is the expectation value taken with respect to the initial state of the field at $t=0$. Thus if the initial state is a stationary state, then the last two lines vanish. Note that the operators inside the expectation values are evaluated at the initial time. Since the Hadamard function is state-dependent, we need to specify the configuration of interest in more detail before proceeding. Suppose we are interested in the {regime} $t_{i}>\mathfrak{t}_{f}$, after the parametric process of the field ceases. It is known {and shown below} that the state of the oscillator will have been squeezed and rotated by the end of the parametric process.
This implies that, for the field, which is a collection of parametric oscillators, the squeezing and rotation will be mode-dependent, so at the end of the parametric process, the state of the field is squeezed by a two-mode squeeze operator $\hat{S}_{2}(\zeta_{\bm{k}})$, {where} the squeeze parameter $\zeta_{\bm{k}}$ takes a polar decomposition, $\zeta_{\bm{k}}=\eta_{\bm{k}}\,e^{+i\theta_{\bm{k}}}$ with $\eta_{\bm{k}}\geq0$ and $0\leq\theta_{\bm{k}}<2\pi$. For the moment let us neglect the effect of rotation, because it can be absorbed into the squeeze angle $\theta_{\bm{k}}$ if necessary. We assume that at the initial time $t=0$, before the parametric process, the field is in a thermal state $\hat{\rho}_{\beta}^{(\Phi)}$. Then after the process the field will be in a squeezed thermal state \begin{equation} \hat{\rho}_{\textsc{st}}^{(\Phi)}=\prod_{\bm{k}}\hat{S}_{2}^{\vphantom{\dagger}}(\zeta_{\bm{k}})\,\hat{\rho}_{\beta}^{(\Phi)}(\omega_{i})\,\hat{S}^{\dagger}_{2}(\zeta_{\bm{k}})\,,\label{E:ngritris} \end{equation} where the two-mode squeeze operator $\hat{S}_{2}(\zeta_{\bm{k}})$ is given by \begin{equation} \hat{S}_{2}(\zeta_{\bm{k}})=\exp\Bigl[\zeta_{\bm{k}}^{*\vphantom{\dagger}}\hat{a}_{\bm{k}}^{\vphantom{\dagger}}(0)\hat{a}_{\shortminus\bm{k}}^{\vphantom{\dagger}}(0)-\zeta_{\bm{k}}^{\vphantom{\dagger}}\hat{a}_{\bm{k}}^{\dagger}(0)\hat{a}_{\shortminus\bm{k}}^{\dagger}(0)\Bigr]\,. \end{equation} The product in \eqref{E:ngritris} is understood as a shorthand notation indicating that the squeeze operators $\hat{S}_{2}(\zeta_{\bm{k}})$ are applied onto $\hat{\rho}_{\beta}^{(\Phi)}$ mode by mode. The squeeze operators for different $\bm{k}$ act on different mode pairs and commute with one another.
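A useful consistency check (our addition; sign and phase conventions for the squeeze operator vary in the literature) is the mode occupation in the squeezed thermal state \eqref{E:ngritris}. With the two-mode Bogoliubov relation \begin{align*} \hat{S}_{2}^{\dagger}(\zeta_{\bm{k}})\,\hat{a}^{\vphantom{\dagger}}_{\bm{k}}\,\hat{S}_{2}^{\vphantom{\dagger}}(\zeta_{\bm{k}})&=\cosh\eta_{\bm{k}}\,\hat{a}^{\vphantom{\dagger}}_{\bm{k}}-e^{+i\theta_{\bm{k}}}\sinh\eta_{\bm{k}}\,\hat{a}^{\dagger}_{-\bm{k}}\,, \end{align*} one finds, since the cross terms have vanishing thermal expectation values, \begin{align*} \langle\hat{a}^{\dagger}_{\bm{k}}\hat{a}^{\vphantom{\dagger}}_{\bm{k}}\rangle_{\textsc{st}}&=\cosh^{2}\eta_{\bm{k}}\,\bar{n}_{\bm{k}}+\sinh^{2}\eta_{\bm{k}}\,\bigl(\bar{n}_{\bm{k}}+1\bigr)=\bar{n}_{\bm{k}}+\bigl(2\bar{n}_{\bm{k}}+1\bigr)\sinh^{2}\eta_{\bm{k}}\,, \end{align*} where $\bar{n}_{\bm{k}}$ is the initial thermal occupation number of mode $\bm{k}$. At zero temperature this reduces to $\sinh^{2}\eta_{\bm{k}}$, the number of particles created in each of the paired modes $\pm\bm{k}$.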
The thermal state $\hat{\rho}_{\beta}^{(\Phi)}(\omega_{i})$ is of the form \begin{align}\label{E:fgkjdsfg} \hat{\rho}_{\beta}^{(\Phi)}(\omega_{i})&=\frac{1}{Z_{\beta}}\,\prod_{\bm{k}}\sum_{n_{\bm{k}}=0}^{\infty}\exp\Bigl[-\beta\bigl(n_{\bm{k}}+\frac{1}{2}\bigr)\,\omega_{i}\Bigr]\,\lvert n_{\bm{k}}\rangle\langle n_{\bm{k}}\rvert\,,&\omega_{i}&=\sqrt{\smash[b]{\lvert\bm{k}\rvert_{\vphantom{i}}^{2}+\mathfrak{m}_{i}^{2}}}\,, \end{align} where $Z_{\beta}$ is the corresponding partition function, $Z_{\beta}=\operatorname{Tr}_{\Phi}\hat{\rho}_{\beta}^{(\Phi)}(\omega_{i})$. We can also place the effects of squeezing onto the field operator, and treat the field in \eqref{E:brtyufb} as the consequence of the squeeze operator acting on the \textit{in}-modes, that is, $\hat{S}_{2}^{\dagger}(\zeta)\,\hat{\Phi}_{\textsc{in}}(\bm{x},t)\,\hat{S}_{2}(\zeta)$ with \begin{equation}\label{E:gkrbfkdf} \hat{\Phi}_{\textsc{in}}(\bm{x},t)=\int\!\frac{d^{3}\bm{k}}{(2\pi)^{\frac{3}{2}}}\;\frac{1}{\sqrt{2\omega_{i}}}\Bigl[\hat{a}_{\bm{k}}^{\vphantom{\dagger}}(0)\,e^{+i\bm{k}\cdot\bm{x}-i\omega_{i}t}+\hat{a}_{\bm{k}}^{\dagger}(0)\,e^{-i\bm{k}\cdot\bm{x}+i\omega_{i}t}\Bigr]\,,
\end{equation} so that at any intermediate time, we have \begin{align*} \hat{\Phi}(\bm{x},t)&=\int\!\!\frac{d^{3}\bm{k}}{(2\pi)^{\frac{3}{2}}}\;\frac{1}{\sqrt{2\omega_{i}}}\Bigl\{\Bigl[d^{(1)}_{\bm{k}}(t)-i\,\omega_{i}\,d^{(2)}_{\bm{k}}(t)\Bigr]\,\hat{a}^{\vphantom{\dagger}}_{\bm{k}}(0)\,e^{+i\bm{k}\cdot\bm{x}}\Bigr.\notag\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\Bigl.\Bigl[d^{(1)}_{\bm{k}}(t)+i\,\omega_{i}\,d^{(2)}_{\bm{k}}(t)\Bigr]\,\hat{a}^{\dagger}_{\bm{k}}(0)\,e^{-i\bm{k}\cdot\bm{x}}\Bigr\}\notag\\ &=\int\!\!\frac{d^{3}\bm{k}}{(2\pi)^{\frac{3}{2}}}\;\frac{1}{\sqrt{2\omega_{i}}}\Bigl\{e^{-i\omega_{i}t}\,\hat{S}_{2}^{\dagger}(\zeta_{\bm{k}})\,\hat{a}^{\vphantom{\dagger}}_{\bm{k}}(0)\,\hat{S}_{2}(\zeta_{\bm{k}})\,e^{+i\bm{k}\cdot\bm{x}}\Bigr.\notag\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\Bigl.e^{+i\omega_{i}t}\,\hat{S}_{2}^{\dagger}(\zeta_{\bm{k}})\,\hat{a}^{\dagger}_{\bm{k}}(0)\,\hat{S}_{2}(\zeta_{\bm{k}})\,e^{-i\bm{k}\cdot\bm{x}}\Bigr\}\,. \end{align*} As such, we have \begin{align*} G_{H,0}^{(\Phi)}(\bm{x},t;\bm{x}',t')=\frac{1}{2}\operatorname{Tr}_{\Phi}\biggl[\hat{\rho}^{(\Phi)}_{\beta}\bigl\{\hat{\Phi}(\bm{x},t),\hat{\Phi}(\bm{x}',t')\bigr\}\biggr]=\frac{1}{2}\operatorname{Tr}_{\Phi}\biggl[\hat{\rho}^{(\Phi)}_{\textsc{st}}\bigl\{\hat{\Phi}_{\textsc{in}}(\bm{x},t),\hat{\Phi}_{\textsc{in}}(\bm{x}',t')\bigr\}\biggr]\,.
\end{align*} If we let \begin{equation}\ \hat{S}_{2}^{\dagger}(\zeta_{\bm{k}})\,\hat{a}_{\bm{k}}(0)\,\hat{S}_{2}^{\vphantom{\dagger}}(\zeta_{\bm{k}})=\alpha_{\bm{k}}^{\vphantom{*}}\,\hat{a}^{\vphantom{\dagger}}_{\bm{k}}(0)+\beta_{\shortminus\bm{k}}^{*}\,\hat{a}^{\dagger}_{\shortminus\bm{k}}(0)\,,\label{E:irgubdf} \end{equation} with $\alpha_{\bm{k}}^{\vphantom{*}}=\cosh\eta_{\bm{k}}^{\vphantom{*}}$, $\beta_{\bm{k}}^{\vphantom{*}}=-e^{-i\theta_{\bm{k}}^{\vphantom{*}}}\,\sinh\eta_{\bm{k}}^{\vphantom{*}}$, then we find \begin{equation}\label{E:fnljsb1} d^{(1)}_{\bm{k}}(t)-i\,\omega_{i}\,d^{(2)}_{\bm{k}}(t)=e^{-i\omega_{i}t}\,\alpha_{\bm{k}}^{\vphantom{*}}(t)+e^{+i\omega_{i}t}\,\beta_{\bm{k}}^{\vphantom{*}}(t)\,. \end{equation} Similarly, for the conjugate momentum $\hat{\Pi}(\bm{x},t)$, we also find \begin{equation}\label{E:fnljsb2} \dot{d}^{(1)}_{\bm{k}}(t)-i\,\omega_{i}\,\dot{d}^{(2)}_{\bm{k}}(t)=-i\,\omega_{i}\,e^{-i\omega_{i}t}\,\alpha_{\bm{k}}(t)+i\,\omega_{i}\,e^{+i\omega_{i}t}\,\beta_{\bm{k}}(t)\,. 
\end{equation} Eqs.~\eqref{E:fnljsb1} and \eqref{E:fnljsb2} lead to \begin{align} \alpha_{\bm{k}}(t)&=\frac{1}{2\omega_{i}}\,e^{+i\omega_{i}t}\,\Bigl[\omega_{i}\,d^{(1)}_{\bm{k}}(t)+i\,\dot{d}^{(1)}_{\bm{k}}(t)-i\,\omega_{i}^{2}d^{(2)}_{\bm{k}}(t)+\omega_{i}\,\dot{d}^{(2)}_{\bm{k}}(t)\Bigr]\,,\label{E:gbkrjbkdg1}\\ \beta_{\bm{k}}(t)&=\frac{1}{2\omega_{i}}\,e^{-i\omega_{i}t}\,\Bigl[\omega_{i}\,d^{(1)}_{\bm{k}}(t)-i\,\dot{d}^{(1)}_{\bm{k}}(t)-i\,\omega_{i}^{2}d^{(2)}_{\bm{k}}(t)-\omega_{i}\,\dot{d}^{(2)}_{\bm{k}}(t)\Bigr]\,.\label{E:gbkrjbkdg2} \end{align} We can verify that \begin{equation} \lvert\alpha_{\bm{k}}\rvert^{2}-\lvert\beta_{\bm{k}}\rvert^{2}=d^{(1)}_{\bm{k}}(t)\dot{d}^{(2)}_{\bm{k}}(t)-\dot{d}^{(1)}_{\bm{k}}(t)d^{(2)}_{\bm{k}}(t)=1\,. \end{equation} The righthand side of \eqref{E:irgubdf} gives the Bogoliubov transformation of the creation and annihilation operators at initial time, and the time-dependent Bogoliubov coefficients $\alpha_{\bm{k}}(t)$ and $\beta_{\bm{k}}(t)$ contain the information of the parametric processes of the field, which is encapsulated in the squeeze parameter $\zeta_{\bm{k}}(t)$. In particular, the factor $\lvert\beta_{\bm{k}}(t)\rvert^{2}$ gives the number density of particle production of the field quanta during the parametric amplification process. To see this better, let us define $\hat{b}_{\bm{k}}=\hat{S}_{2}^{\dagger}(\zeta_{\bm{k}})\,\hat{a}_{\bm{k}}(0)\,\hat{S}_{2}^{\vphantom{\dagger}}(\zeta_{\bm{k}})$. If the field is initially in the vacuum state $\lvert0_{a}\rangle$ defined by $\hat{a}_{\bm{k}}(0)$, namely, $\hat{a}_{\bm{k}}(0)\lvert0_{a}\rangle=0$, then we can readily show that \begin{equation}\label{E:nvkdjfsd} \sum_{\bm{k}}\langle0_{a}\vert\hat{b}_{\bm{k}}^{\dagger}\hat{b}_{\bm{k}}^{\vphantom{\dagger}}\vert0_{a}\rangle=\sum_{\bm{k}}\lvert\beta_{\bm{k}}\rvert^{2}\,.
\end{equation} That is, the resulting two-mode squeezed vacuum has a nonvanishing number of particles, given by \eqref{E:nvkdjfsd}, compared to the initial vacuum. If the initial number state $\lvert n_{a}\rangle$ of the field is not a vacuum, but has nonzero particle numbers $N_{\bm{k}}^{(a)\vphantom{\dagger}}=\langle n_{a}\vert\hat{a}_{\bm{k}}^{\dagger}(0)\hat{a}_{\bm{k}}^{\vphantom{\dagger}}(0)\vert n_{a}\rangle\neq0$, then we find \begin{equation} \sum_{\bm{k}}\langle n_{a}\vert\hat{b}_{\bm{k}}^{\dagger}\hat{b}_{\bm{k}}^{\vphantom{\dagger}}\vert n_{a}\rangle=\sum_{\bm{k}}\Bigl\{N_{\bm{k}}^{(a)\vphantom{\dagger}}+2\lvert\beta_{\bm{k}}\rvert^{2}\Bigl(N_{\bm{k}}^{(a)\vphantom{\dagger}}+\frac{1}{2}\Bigr)\Bigr\}\,. \end{equation} We clearly see that the second term on the righthand side corresponds to the stimulated production of particles, in reference to \eqref{E:nvkdjfsd}. In particular, the created particles obey the probability distribution \begin{equation} P(n_{\bm{k}})=\frac{1}{\lvert\alpha_{\bm{k}}(\mathfrak{t}_{f})\rvert^{2}}\frac{\lvert\beta_{\bm{k}}(\mathfrak{t}_{f})\rvert^{2n_{\bm{k}}}}{\lvert\alpha_{\bm{k}}(\mathfrak{t}_{f})\rvert^{2n_{\bm{k}}}} \end{equation} after the parametric process, where $n_{\bm{k}}$ is the number of created particles in mode $\bm{k}$. Moreover, since $\beta_{\bm{k}}(t)$ can be expressed in terms of the fundamental solutions of the equation of motion of the field, it contains information about the field's evolution under the parametric process.
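The chain from fundamental solutions to Bogoliubov coefficients can be illustrated numerically. The sketch below is our own construction (the $\tanh$ mass profile, the RK4 integrator, all parameter values, and the choice of imposing the initial conditions at an early time when the field is still effectively free are assumptions for illustration): it integrates the mode equation $\ddot{d}+\omega^{2}(t)\,d=0$ through a smooth quench $\mathfrak{m}_{i}\to\mathfrak{m}_{f}$, assembles $\alpha_{\bm{k}}(t)$ and $\beta_{\bm{k}}(t)$ from \eqref{E:gbkrjbkdg1} and \eqref{E:gbkrjbkdg2}, and verifies $\lvert\alpha_{\bm{k}}\rvert^{2}-\lvert\beta_{\bm{k}}\rvert^{2}=1$ together with the Wronskian condition:

```python
import numpy as np

# Mode equation d'' = -w(t)^2 d through a smooth mass quench m_i -> m_f.
# Profile, parameters and integrator are illustrative assumptions.
k, m_i, m_f, tau = 1.0, 0.2, 2.0, 0.3
w_i = np.hypot(k, m_i)                         # initial mode frequency

def w2(t):
    m = m_i + 0.5 * (m_f - m_i) * (1.0 + np.tanh(t / tau))
    return m * m + k * k

def deriv(t, y, u):                            # returns (d', d'')
    return u, -w2(t) * y

def rk4(y0, u0, t_grid):
    """Fixed-step RK4; returns arrays of d and d-dot on t_grid."""
    ys, us = [y0], [u0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h, y, u = t1 - t0, ys[-1], us[-1]
        k1 = deriv(t0, y, u)
        k2 = deriv(t0 + h/2, y + h/2*k1[0], u + h/2*k1[1])
        k3 = deriv(t0 + h/2, y + h/2*k2[0], u + h/2*k2[1])
        k4 = deriv(t0 + h, y + h*k3[0], u + h*k3[1])
        ys.append(y + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]))
        us.append(u + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return np.array(ys), np.array(us)

# fundamental solutions, initial conditions imposed well before the quench
t = np.linspace(-6.0, 6.0, 6001)
d1, d1dot = rk4(1.0, 0.0, t)
d2, d2dot = rk4(0.0, 1.0, t)

# Bogoliubov coefficients, Eqs. (gbkrjbkdg1)-(gbkrjbkdg2)
alpha = np.exp(+1j*w_i*t)/(2*w_i) * (w_i*d1 + 1j*d1dot - 1j*w_i**2*d2 + w_i*d2dot)
beta  = np.exp(-1j*w_i*t)/(2*w_i) * (w_i*d1 - 1j*d1dot - 1j*w_i**2*d2 - w_i*d2dot)

# normalization |alpha|^2 - |beta|^2 = 1, which equals the Wronskian
assert np.allclose(np.abs(alpha)**2 - np.abs(beta)**2, 1.0, atol=1e-6)
assert np.allclose(d1*d2dot - d1dot*d2, 1.0, atol=1e-6)
```

The identity $\lvert\alpha_{\bm{k}}\rvert^{2}-\lvert\beta_{\bm{k}}\rvert^{2}=d^{(1)}_{\bm{k}}\dot{d}^{(2)}_{\bm{k}}-\dot{d}^{(1)}_{\bm{k}}d^{(2)}_{\bm{k}}$ holds algebraically, so the assertions here test only the accuracy of the integration.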
The fundamental solutions $d^{(i)}_{\bm{k}}(t)$ and their derivatives can be used to quantify the degree of nonadiabaticity $\mathfrak{N}_{\bm{k}}$ of each field mode \begin{equation}\label{E:bgksfkfsd} \mathfrak{N}_{\bm{k}}(\mathfrak{t}_{f})=\frac{1}{2\omega_{i}\omega_{f}}\,\dot{d}^{(1)2}_{\bm{k}}(\mathfrak{t}_{f})+\frac{\omega_{f}}{2\omega_{i}}\,d^{(1)2}_{\bm{k}}(\mathfrak{t}_{f})+\frac{\omega_{i}}{2\omega_{f}}\,\dot{d}_{\bm{k}}^{(2)2}(\mathfrak{t}_{f})+\frac{\omega_{i}\omega_{f}}{2}\,d^{(2)2}_{\bm{k}}(\mathfrak{t}_{f})\,, \end{equation} where $\mathfrak{t}_{f}$ denotes the moment the parametric process ends. It is essentially the ratio of the energy of each field mode at the end of an arbitrary transition to the counterpart energy for the adiabatic process if the field mode is initially in its ground state. We now turn to the dynamics of the detectors in the parametric field. \section{Dynamics of detector in an evolving quantum field} With these ingredients at hand, we are ready to discuss the nonequilibrium dynamics of an oscillator-detector coupled to a quantum field which evolves in time. {We use the simplest case of parametric frequency modulation to illustrate how the past history of evolution may register in the detector's observables, and then give a brief discussion of more generic cases.} In the Detective scenario, suppose we have only one detector located at fixed $\bm{z}$ whose internal degree of freedom $\hat{Q}(t)$ has a constant oscillator frequency. We further assume that the coupling strength $e$ is constant in time.
Then the equation of motion \eqref{E:gbksddf} simplifies to \begin{equation}\label{E:irinffgdd} \ddot{\hat{Q}}(t)+\Omega^{2}_{b}(t)\,\hat{Q}(t)-\frac{e^{2}}{M}\int_{0}^{t}\!ds\;G_{R,0}^{(\Phi)}(t-s;\bm{z})\,\hat{Q}(s)=\frac{e}{M}\,\hat{\Phi}_{h}(\bm{z},t)\,, \end{equation} with \begin{equation} \frac{1}{2}\,\langle\bigl\{\hat{\Phi}_{h}(\bm{z},t),\hat{\Phi}_{h}(\bm{z},t')\bigr\}\rangle_{\textsc{st}}=G_{H,0}^{(\Phi)}(t,t';\bm{z})\,. \end{equation} We introduce a shorthand notation, $G_{H,0}^{(\Phi)}(t,t';\bm{z})=G_{H,0}^{(\Phi)}(\bm{z},t;\bm{z},t')$, when the two-point function is evaluated at the same spatial point $\bm{z}$. Here we have shifted the time origin such that $t=0$ is the moment when the coupling between the field and the internal degree of freedom of the detector (the harmonic oscillator) is switched on. According to the earlier discussion, the field behaves like a squeezed field after the time $t>\mathfrak{t}_{f}$ (now $\mathfrak{t}_{f}<0$), with a constant but mode-dependent squeeze parameter $\zeta_{\bm{k}}$. Moreover, for mathematical simplicity, we assume that the initial state of the total system takes a product form, and the initial state of the harmonic oscillator has the properties \begin{align} \langle\hat{Q}(0)\rangle&=0\,,&\langle\hat{P}(0)\rangle&=0\,,&\frac{1}{2}\,\langle\bigl\{\hat{Q}(0),\hat{P}(0)\bigr\}\rangle&=0\,, \end{align} where $\hat{P}$ is the canonical momentum conjugate to the displacement operator $\hat{Q}$, and $\langle\cdots\rangle$ is the expectation value taken with respect to the initial state of the detector's internal degree of freedom.
At the same spatial point, the retarded Green's function of the field in \eqref{E:irinffgdd} reduces to \begin{equation} G_{R,0}^{(\Phi)}(\sigma;\bm{z})=-\frac{\theta(\sigma)}{4\pi}\Bigl\{2\delta'(\sigma)+\frac{\mathfrak{m}_{f}}{\sqrt{\sigma^{2}}}\,J_{1}(\mathfrak{m}_{f}\sqrt{\sigma^{2}})\Bigr\}\,,\label{E:turrnfjgdd} \end{equation} where now $\sigma=t-t'$, and $\theta(\sigma)\operatorname{sgn}(\sigma)=\theta(\sigma)$. The first term on the righthand side restricts the effect mediated by the field to the future lightcone of any source coupled with the field; since the source is massive, it cannot travel at the speed of light, so the first term represents a local effect in the equation of motion \eqref{E:irinffgdd}. In fact, it will also contribute to a frequency renormalization. The second term has an additional effect not seen for a massless field. Since it is always nonzero inside the future lightcone of the source, it will induce a time-delayed effect on the source itself at later times. It implies that the effect is history-dependent and thus non-Markovian. These will be most clearly seen when we write \eqref{E:irinffgdd} as \begin{equation}\label{E:bdfhdd} \ddot{\hat{Q}}(t)+\Omega^{2}_{\textsc{r}}(t)\,\hat{Q}(t)+2\gamma\,\dot{\hat{Q}}(t)+2\gamma\int_{0}^{t}\!ds\;\frac{\mathfrak{m}_{f}}{t-s}\,J_{1}(\mathfrak{m}_{f}(t-s))\,\hat{Q}(s)=\frac{e}{M}\,\hat{\Phi}_{h}(\bm{z},t)\,, \end{equation} for $t>0$, where we recall that $\gamma=e^{2}/(8\pi M)$, $M$ the mass of the oscillator's internal degree of freedom and $\mathfrak{m}_{f}$ the effective mass of the field due to the parametric process. We observe that when the mass $\mathfrak{m}_{f}$ of the field quantum goes to zero, the fourth term on the lefthand side vanishes.
On the other hand, in general it does not vanish for $t>s$, that is, in a timelike interval, so the massive field can induce a non-Markovian effect on a detector, meaning, the imprints of a detector on the field at earlier moments will affect the same detector at the present moment. Thus the evolution of the detector depends on its own past history. It is clearly seen here that most of the effects will come within the past interval of the order $\mathfrak{m}_{f}^{-1}$, with the effective strength proportional to $\gamma\mathfrak{m}^{2}_{f}$. We observe that the nonMarkovian effect of a more massive field in this configuration has a shorter range, but its effective strength increases faster with the effective mass $\mathfrak{m}_{f}$. Similar to the field described in the previous section, we can construct a special set of homogeneous solutions to \eqref{E:bdfhdd}, $d_{Q}^{(1)}(t)$ and $d_{Q}^{(2)}(t)$, which satisfy the initial conditions \begin{align} d_{Q}^{(1)}(0)&=1\,,&\dot{d}_{Q}^{(1)}(0)&=0\,,&d_{Q}^{(2)}(0)&=0\,,&\dot{d}_{Q}^{(2)}(0)&=1\,. \end{align} When the physical frequency $\Omega_{\textsc{r}}$ is not a function of time, they take the rather simple forms \begin{align} d_{Q}^{(1)}(t)&=e^{-\scriptstyle{\upsilon} t}\Bigl[\cos\varpi t+\frac{\scriptstyle{\upsilon}}{\varpi}\,\sin\varpi t\Bigr]\,,&d_{Q}^{(2)}(t)&=\frac{e^{-\scriptstyle{\upsilon} t}}{\varpi}\,\sin\varpi t\,, \end{align} with \begin{align} \mathfrak{z}&=-\sqrt{\smash[b]{\gamma}^{2}-\Omega^{2}_{\gamma}\mp2\smash[b]{\gamma}\sqrt{\smash[b]{\mathfrak{m}_{f}^{2}}-\Omega^{2}_{\gamma}}}\,,&\upsilon&=-\operatorname{Re}\mathfrak{z}\,,&\varpi&=-\operatorname{Im}\mathfrak{z}\,, \end{align} where $\Omega^{2}_{\gamma}=\Omega_{\textsc{r}}^{2}-\gamma^{2}$, with $\Omega_{\gamma}$ the resonance frequency of this driven system \eqref{E:bdfhdd}.
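A short numerical sanity check of these expressions (our own sketch, with sample values we assume): in the massless limit $\mathfrak{m}_{f}=0$ the memory kernel drops out, so $d_{Q}^{(1,2)}(t)$ with $\upsilon=\gamma$ and $\varpi=\Omega_{\gamma}=\sqrt{\smash[b]{\Omega_{\textsc{r}}^{2}-\gamma^{2}}}$ must solve the plain damped-oscillator equation with the stated initial conditions, and $\mathfrak{z}$ should reduce to $-\gamma\pm i\Omega_{\gamma}$:

```python
import cmath
import numpy as np

# Massless-limit check (assumed sample values): for m_f = 0 the memory term
# of (E:bdfhdd) vanishes, so d_Q^(1,2) must solve
#   Q'' + 2 gamma Q' + Omega_R^2 Q = 0
# with d1(0)=1, d1'(0)=0, d2(0)=0, d2'(0)=1.
gamma, Omega_R = 0.1, 1.0
varpi = np.sqrt(Omega_R**2 - gamma**2)         # Omega_gamma at m_f = 0

d1 = lambda t: np.exp(-gamma*t) * (np.cos(varpi*t) + (gamma/varpi)*np.sin(varpi*t))
d2 = lambda t: np.exp(-gamma*t) * np.sin(varpi*t) / varpi

h, t = 1e-4, np.linspace(0.0, 20.0, 200)
for d in (d1, d2):
    dprime  = (d(t + h) - d(t - h)) / (2*h)            # central differences
    dsecond = (d(t + h) - 2*d(t) + d(t - h)) / h**2
    residual = dsecond + 2*gamma*dprime + Omega_R**2 * d(t)
    assert np.max(np.abs(residual)) < 1e-5

# initial conditions of the fundamental solutions
assert abs(d1(0.0) - 1.0) < 1e-12 and abs(d2(0.0)) < 1e-12
assert abs((d1(h) - d1(-h)) / (2*h)) < 1e-6
assert abs((d2(h) - d2(-h)) / (2*h) - 1.0) < 1e-6

# the characteristic root z reduces to -gamma + i Omega_gamma at m_f = 0
# (upper-sign branch of the expression in the text)
m_f = 0.0
z = -cmath.sqrt(gamma**2 - varpi**2 - 2*gamma*cmath.sqrt(m_f**2 - varpi**2))
assert abs(z - complex(-gamma, varpi)) < 1e-9
```

The residual test uses finite differences, so the tolerances only reflect the discretization error, not any property of the model.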
If we make the Taylor expansion of $\mathfrak{z}$ for small $\mathfrak{m}_{f}$, we obtain \begin{equation} \mathfrak{z}\simeq-\gamma\pm i\Omega_{\gamma}+\Bigl[\frac{\gamma}{2(\Omega^{2}_{\gamma}+\gamma^{2})}\mp i\,\frac{\gamma^{2}}{2\Omega_{\gamma}(\Omega^{2}_{\gamma}+\gamma^{2})}\Bigr]\,\mathfrak{m}_{f}^{2}\,. \end{equation} When $\mathfrak{m}_{f}/\Omega_{\gamma}$ is small, we see that $d_{Q}^{(1,2)}(t)$ are not much different from their counterparts for a detector coupled to a massless field. However, they are distinct in character from the field's fundamental solutions $d_{\bm{k}}^{(1,2)}(t)$ in two aspects. First, their amplitudes decay with increasing time due to dissipation, which is the reactive force of the radiation field \cite{QRad} emitted by the internal degree of freedom of the detector from its nonstationary motion. This will bring in another interesting feature we will cover later. Second, since the internal degree of freedom of the detector has a history-dependent dynamics, its motion at any moment will depend on all of its earlier states cumulatively, as well as on the field configurations over the history. This has some profound implications if the detector is coupled to the field before the onset of the parametric process of the field. The parametric process of the field will leave imprints on the nonequilibrium dynamics of $\hat{Q}(t)$. In the full duration before the detector's internal degrees of freedom reach complete equilibration, the time dependencies of various physical observables of the detector could allow us to decrypt {certain} information about the time dependence of the parametric processes of the field, via the nonlocal term and the noise force in the detector's dynamics. Even if the detector is coupled to the field after the end of the field's parametric process, we can still extract partial information about the field.
We have argued that at the end of the process, the field acquires mode-dependent but constant squeezing with respect to its initial configuration. The overall effect of squeezing can be read off from the relaxation dynamics of the detector, in particular, from the covariance matrix elements of the detector's internal degree of freedom such as displacement or momentum dispersion. The finer information about the mode dependence of the squeezed field can even be extracted if the detector responds only to a specific narrow band of the spectrum of the field. Richer information can be extracted when we have multiple, well separated detectors. In addition to the fact that we can receive the information each detector reports about the field configuration at its location, the detectors can also exert mutual influence on one another \cite{RHA,CPR}. As seen from \eqref{E:gbksddf}, we will have additional nonlocal influences from the other detectors. The motion of the internal degree of freedom of one detector will induce radiation of the field, which will propagate to the other detectors according to the nonlocal terms in the equation of motion \eqref{E:gbksddf}. All told, the dynamics of the detectors can be extremely complicated, with multiple scales and structures, {but the challenge lies exactly in the retrieval of this embedded information} to reconstruct the field's parametric process. As an illustrative example of the {\it mutual nonMarkovian} effect, we consider a simpler configuration, where two uncoupled UD detectors are coupled to the same real {\it massless} thermal scalar field of initial temperature $\beta^{-1}$~\cite{LinHu09,HHEnt,PRE18}. In this case, the self-nonMarkovian effect in the massive system studied here is absent, so the effects of the mutual nonMarkovian influence stand out.
In~\cite{HHEnt}, mutual nonMarkovianity introduces a fractional change of the order $\sin\Omega_{\gamma}\ell/(\Omega_{\gamma}\ell)$ to the damping constant, and of the order \begin{equation} \frac{\gamma}{\Omega_{\gamma}}\frac{\cos\Omega_{\gamma}\ell}{\Omega_{\gamma}\ell}\,, \end{equation} to the oscillating frequency of the normal modes of the internal degrees of freedom of the two detectors, when $\Omega_{\gamma}\ell\gg1$ and $\gamma/\Omega_{\gamma}<1$. Here $\ell$ is the distance between these two detectors. Although these corrections are usually small, their decaying sinusoidal character in distance could be a distinctive signature for identifying this mutual non-Markovian effect. Consider, for example, how the mutual influence affects the detector's observables. We see that the total mechanical energy becomes \begin{equation} E(\infty)=\frac{1}{2}\operatorname{Im}\int_{-\infty}^{\infty}\!\frac{d\kappa}{2\pi}\;\coth\frac{\beta\kappa}{2}\biggl\{\frac{\Omega_{\gamma}^{2}+\kappa^{2}}{\Omega_{\gamma}^{2}-\kappa^{2}-i2\gamma\kappa-\dfrac{2\gamma}{\ell}\,e^{i\kappa\ell}}+\frac{\Omega_{\gamma}^{2}+\kappa^{2}}{\Omega_{\gamma}^{2}-\kappa^{2}-i2\gamma\kappa+\dfrac{2\gamma}{\ell}\,e^{i\kappa\ell}}\biggr\} \end{equation} once their motion is fully relaxed. {The mutual influences, the $\ell$ dependence, appear in the denominators in the integrand.} \section{Summary and discussions} We can now summarize our findings and respond quantitatively to the questions raised at the beginning (see Abstract): a) Is it possible to access the nonMarkovian information contained in a Witness $W$ modeled by a UD detector \textit{evolving with} a quantum field in an expanding universe? b) To what extent can a `Detective' $D$ detector \textit{introduced today} decipher the past history of the universe through the memories imprinted in the quantum field?
We describe the scenarios below, and, to cover a broader scope, include cases with two or more detectors in our discussions, the analysis of which we hope to provide in future communications. \subsection{Witness scenario} In the Witness scenario, as the detectors evolve in progression with the parametric variation of the field, they record moment by moment all the accumulated effects the field has exerted on them over their past histories. By tracking the time dependence of the detectors' physical observables one can gather a fair amount of knowledge about the parametric process of the field. One can assume different classes of time dependence of the scale factors: power law, as in the radiation-dominated universe, or exponential, as in the inflationary universe, and determine the functional form of the parametric process (the squeezing) of the quantum field. We can formally express the two-point functions of the field in terms of the fundamental solutions $d_{\bm{k}}^{(1,2)}$ as in \eqref{E:bfkfkrte} and \eqref{E:brtyufb}. They in turn allow us to use \eqref{E:irinffgdd} to find the fundamental solutions $d_{Q}^{(1,2)}$ for the internal degrees of freedom, from which we can construct the detector responses or other physical observables of the detectors. These responses can be worked out as templates for different classes of cosmic expansions. \subsection{Detective scenario} For the Detective scenario, the available information is more limited because the effects of the parametric process of the field are condensed into the complex squeezing parameter $\zeta_{\bm{k}}$, which is mode-dependent, but independent of the spacetime location (this can be taken as the leading order in a quasi-local expansion) after the process ends.
Similar to the aforementioned scheme, the squeeze parameter can be related to $d_{\bm{k}}^{(1,2)}$ at the end of the parametric process, $t=\mathfrak{t}_{f}$, by \eqref{E:gbkrjbkdg1}, \eqref{E:gbkrjbkdg2} or \eqref{E:bgksfkfsd}, so it may allow us to fix the values $d_{\bm{k}}^{(1,2)}(\mathfrak{t}_{f})$. However, these values are not enough to determine the details of the parametric process except in some simple cases; fortunately, power-law and exponential time dependence of the scale factor are simple enough cases used in practice. As precautions we mention several caveats in the information retrieval capabilities of quantum detectors in these nonMarkovian processes: i) {We have only} finite time retrieval {or finite resolution} of information in nonMarkovian processes, ii) The mapping from the functional form of the parametric process {of the field} to the responses of the detectors may not be retrodictively one-to-one {due to (a) coarse-graining of the field dynamics and (b) the relaxation process in the detector}, so the reconstruction of the parametric process of the quantum field may be limited to general patterns, rather than sharp one-to-one correspondences. iii) Since the detector is a quantum mechanical system, the retrieval of the records in the detector without collapsing its state requires some skillful manipulations, but there are ways in quantum optics and quantum information designed for these purposes. With two or more detectors one can obtain additional information of the quantum field such as the quantum correlation and entanglement properties. \subsection{Severe limitations of perturbative treatment} {We commented in the Introduction on the severe limitations of perturbative treatments of memories. One is tempted to resort to perturbation methods if a small parameter like the coupling constant between the detector (system) and the field (bath) is present in the theory.
However, an immediate concern is that such an approach will usually lose its precision as the evolution time grows because the coupling constant is related to the relaxation time scale. The late-time result of the reduced system obtained by perturbative analysis is usually unreliable. In particular, taking \eqref{E:bdfhdd} as an example, a low-order perturbative treatment easily misses the renormalization in the theory and the dissipative backreaction in the equation of motion. Both tend to appear in higher-order considerations than the fluctuation backreaction does. Thus the results obtained from low-order perturbative treatments often show a characteristic growth in time. Physically, this is very similar to free diffusion. For the system described by \eqref{E:bdfhdd}, it is not difficult to see how this growth arises. Let us ignore the $\mathcal{O}(e^{2})$ term for the moment; then \eqref{E:bdfhdd} describes an undamped, noise-driven Brownian system. Our experience with Brownian motion tells us that the displacement or the velocity dispersions usually grow with time\footnote{The validity of this statement actually depends on the bath dynamics and the form of the coupling between the system and the bath. It is true for the Ohmic bath of a scalar field we are considering. But for a supra-Ohmic bath, the growth may be more tamed~{\cite{HL11}}.}. After some time, these dispersions will be sufficiently large such that contributions from the $\mathcal{O}(e^{2})$ term in \eqref{E:bdfhdd} can no longer be neglected. A detailed analysis will show that it tends to counteract the effect of the fluctuation force from the bath, and may reduce the rate of further dispersive growth. For the linear system we are considering here this likely leads to equilibration. However, for a more complicated reduced system like a nonlinear oscillator, other things can happen.
For example, the damping force may not be able to dissipate fast enough the energy stored in the nonlinear potential, which is presumably bounded from below but has been excited by the fluctuation force, for the system dynamics to dwindle down to the lowest part of the potential.} \subsection{Related problems} To broaden the scope of our investigation further, what we have studied here for cosmology can be applied to {\it dynamical Casimir effects} (DCE)~\cite{DCE} which share the same underlying theoretical structure. For DCE, a moving mirror squeezing the quantum field produces particle pairs. However, there are differences in the key questions asked. In DCE the drive and its functionality are explicitly introduced in the design of the experiments, thus no one would be asking the kind of `archeology' question so important in cosmology: what means do we have to find out about the universe's past? In DCE a more natural question one would ask is, how would a detector in a quantum field respond after it is squeezed by a moving mirror? The nonMarkovian quantum stochastic equations governing the detector's internal degrees of freedom derived here can be of use for the analysis of the experimental data. The other aspect which we have not discussed but had been addressed in cosmology involves the drive dynamics. If its trajectory is not a given fixed function of time but a dynamical variable, then it is even closer to the cosmological condition, where the scale factor describing the universe's expansion obeys the Einstein equations which need to be solved self-consistently with the field equations. This is known as the cosmological backreaction problem \cite{HuVer20}. Interesting questions asked in cosmology have exact analogies in DCE, and answers provided in one can inspire the other. To end this discussion we return to our detector. In our derivations we have {implemented} a stationarity condition for the UD detector after full relaxation.
That implies the existence of a \textit{fluctuation-dissipation relation} in a quantum Brownian oscillator interacting with a squeezed quantum field. This relation, proven in a companion paper~\cite{FDRSq}, ensures that this condition can be achieved in the detector. Such a relation for a quantum Brownian oscillator (or a harmonic atom) in a squeezed thermal field has many applications, as discussed in~\cite{FDRSq}. {The foundational and more complex issue of nonMarkovianity in cosmology, especially in the light of quantum wave functions or trajectories, has yet to be more fully addressed.} \\\\ \noindent\textbf{Acknowledgment} J.-T. Hsiang is supported by the Ministry of Science and Technology of Taiwan under Grant No.~MOST 110-2811-M-008-522.
\section{Introduction}\label{sec:intro} The theory of linear flow networks provides a powerful framework for studying systems ranging from water supply networks~\cite{Hwan96,diaz2016} and biological networks, such as leaf venation networks~\cite{Corson2010,Kati10,Hu2013}, to resistor networks~\cite{bollobas1998,kirchhoff_ueber_1847,van_mieghem_2017}, or AC power grids~\cite{Hert06,Wood14}. Failures of transportation links in these networks can have catastrophic consequences up to a complete collapse of the network. As a result, link failures in linear flow networks and their prevention are a field of active study~\cite{strake2018,kaiser_collective_2020,gavrilchenko_resilience_2019,cetinay_topological_2018,Guo17,guo_localization_2020_2,guo_localization_2020}. The study of linear flow networks is intimately related to graph theory since most phenomena can be analysed on purely topological grounds~\cite{bollobas1998}. This connection dates back to work by Kirchhoff~\cite{kirchhoff_ueber_1847} who analysed resistor networks, and introduced several major tools that are now the basis of the theory of complex networks, such as the matrix tree theorem~\cite{kirchhoff_ueber_1847,maxwell_2010,bollobas1998}. These tools can now serve as a basis for the analysis of failure spreading in AC power grids, which can be modelled as linear flow networks based on the DC approximation~\cite{Wood14}. A substantial part of security analysis in power grids is dedicated to the study of transmission line outages since they can lead to cascading outages in a series of failures~\cite{Pour06,witthaut_nonlocal_2015,yang_small_2017}. The topological approach to failure spreading has been exploited to demonstrate that the strength of flow rerouting after link failures decays with distance to the failing link~\cite{strake2018,kaiser_collective_2020,gavrilchenko_resilience_2019,cetinay_topological_2018}.
In particular, the so-called rerouting distance based on cycles in the network has been found to predict flow rerouting very well~\cite{strake2018}. However, the analysis of flow rerouting still lacks a theoretical foundation. Here, we demonstrate that these observations made for flow rerouting may be understood based on a formalism originally developed to study current flows in resistor networks that uses spanning trees (STs) of the underlying graph. Moreover, the formalism explains recent results regarding the shielding against failure spreading in complex networks. This publication is structured as follows: in the first section, we give an overview of the theory of linear flow networks and present an important lemma that relates the current flows in these networks to STs. In the next section, we demonstrate the analogy between such networks and AC power grids in the DC approximation and relate the ST formulation to line outages studied in power system security analysis. Finally, we show how this formulation may be used to understand why certain connectivity features inhibit failure spreading, extending recent results~\cite{Kaiser_2020}. \begin{figure*}[t!] \centering \includegraphics[width=0.8\textwidth]{Compare_shielding_effect.pdf} \caption{Different methods for mitigating failure spreading in linear flow networks. (a) The failure of a single link (red) with unit flow results in flow changes $\Delta F$ (color code) throughout the Scandinavian power grid. (b) Failure spreading to Finland may be reduced by strengthening a link that horizontally separates Sweden and Finland. (c) Adding nodes, thus increasing the length of the rerouting path, reduces failure spreading to Finland as well. (d) Adding two links to construct a network isolator results in a complete vanishing of flow changes in the other part of the grid. Grid topology was extracted from the open energy system model PyPSA-Eur~\cite{horsch_2018}.
} \label{fig:shielding_subgraphs} \end{figure*} \section{Fundamentals of resistor networks} \label{sec:resistor_networks} Resistor networks are a prime example of linear flow networks and have inspired research throughout centuries~\cite{bollobas1998,kirchhoff_ueber_1847,belevitch_summary_1962}. A resistor network can be described using a graph as follows: let $G=(V, E)$ be a connected graph with vertex set $V=\{v_1,...,v_N\}$ and $M$ edges in the edge set $E$. Then we assign a weight $w_k$ to each edge $e_k=(a,b)$ in the graph given by the inverse resistance $w_k=R_{k}^{-1}$ between its terminal vertices $a$ and $b$. If there is a potential difference $v_k=V_a-V_b$ between the terminal vertices of edge $e_k=(a,b)$, according to Ohm's law there is a current flow $i_k$ between the two vertices given by \begin{align} i_{k}=\frac{v_k}{R_k}=\frac{V_a-V_b}{R_k}. \label{eq:ohms_law} \end{align} In order to give a direction to the current flow, we assign an arbitrary orientation to each edge in the graph that is encoded by the graph's node-edge incidence matrix $\mathbf{B}\in\mathbb{R}^{N\times M}$ defined as~\cite{bollobas1998} \begin{align} B_{n,\ell} = \left\{ \begin{array}{r l} 1 & \; \mbox{if line $\ell$ starts at node $n$}, \\ - 1 & \; \mbox{if line $\ell$ ends at node $n$}, \\ 0 & \; \mbox{otherwise}. \end{array} \right. \label{eq:incidence_matrix} \end{align} The current flows and voltages are then subject to \textit{Kirchhoff's circuit laws}~\cite{kirchhoff_ueber_1847}. The first of the laws, typically referred to as Kirchhoff's current law, at an arbitrary node $j\in V(G)$ reads as \begin{align*} \sum_{e_k\in\Lambda(j)} i_k=I_j. \end{align*} Here, $I_j\in\mathbb{R}$ is the current injected into node $j$ and $\Lambda(j)\subset E(G)$ is the set of all edges that connect to node $j$, respecting their orientation.
The current law may be regarded as a continuity equation and thus states that the inflows and outflows at each node in the network have to balance with the current injections at the respective node. It may be written more compactly making use of the node-edge incidence matrix \begin{align} \mathbf{B}\mathbf{i}=\mathbf{I}, \label{eq:current_law} \end{align} where $\mathbf{i}=(i_1,...,i_M)^\top\in\mathbb{R}^M$ is a vector of current flows and $\mathbf{I}=(I_1,...,I_N)^\top\in\mathbb{R}^N$ a vector of current injections. On the other hand, we can also introduce a more compact notation for Ohm's law~(\ref{eq:ohms_law}) by defining a vector of nodal voltage levels $\mathbf{V}=(V_1,...,V_N)^\top\in\mathbb{R}^N$ and a diagonal matrix of edge resistances $\mathbf{R}=\operatorname{diag}(R_1,...,R_M)\in\mathbb{R}^{M\times M}$ such that Ohm's law reads as \begin{align} \mathbf{R}\mathbf{i}=\mathbf{B}^\top \mathbf{V}. \label{eq:voltage_law} \end{align} Combining Ohm's law with Kirchhoff's current law, we arrive at the following relationship between nodal voltages $\mathbf{V}$ and nodal current injections $\mathbf{I}$ \begin{align} \mathbf{I}=\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^\top \mathbf{V}. \label{eq:Poisson} \end{align} This Poisson-like equation has been analysed in different contexts~\cite{strake2018,Norman97,bollobas1998}. Note that Kirchhoff's voltage law is automatically satisfied by virtue of equation~(\ref{eq:voltage_law}), because the resulting vector of potential differences $\mathbf{v}=\mathbf{B}^\top\mathbf{V}$ vanishes along any closed cycle due to the duality between the graph's cycle space and its cut space~\cite{bollobas1998,Dies10}. In addition, the potential at one node may be chosen freely without affecting the result. \begin{figure*}[t!] \centering \includegraphics[width=1.0\textwidth]{Spanning_trees_in_er_graphs.pdf} \caption{Flow changes decay exponentially with cyclic paths in different networks.
(a,d) Number of STs $\tau(G/p)$ in an Erd\H{o}s-R\'{e}nyi (ER) random graph $G(200,300)$ with $300$ edges and $200$ vertices (a) and in the power flow test case 'IEEE 118'~\cite{MATPOWER} (d) that contain a randomly chosen cyclic path $p$ (y-axis) plotted against the length of the path $\operatorname{len}(p)$ (x-axis). The number of STs decays exponentially with the length of the path, thus appearing linear on a logarithmic y-scale. (b,e) The LODF decays exponentially with the rerouting distance, evaluated here for a single trigger link for both grids. (c,f) The exponential scaling is preserved when averaging over all possible trigger links. Shading indicates $0.25$ and $0.75$ quantiles, the line represents the median. } \label{fig:distance_decay} \end{figure*} The matrix connecting the two quantities is referred to as weighted graph Laplacian or Kirchhoff matrix $\mathbf{L}=\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^\top\in\mathbb{R}^{N\times N}$ and characterises the underlying graph completely. It has the following entries~\cite{bollobas1998} \begin{align} L_{mn} = \left\{ \begin{array}{lll} \displaystyle\sum \nolimits_{\ell \in \Lambda(m)} w_{\ell} & \mbox{if } m = n; \\ [2mm] - w_{\ell} & \mbox{if } m \mbox{ is connected to } n \mbox{ by } \ell; \\ [2mm] 0 & \mbox{otherwise}. \end{array} \right. \label{eq:Laplacian} \end{align} For a connected graph, this matrix has exactly one zero eigenvalue $\lambda_1=0$ with corresponding unit eigenvector $\mathbf{v}_1=\mathbf{1}/\sqrt{N}$ such that $\mathbf{L}\mathbf{1}=0$. For this reason, the matrix is non-invertible. This is typically overcome by making use of the Laplacian's Moore--Penrose pseudoinverse $\mathbf{L}^\dagger$, which has properties similar to the actual inverse~\cite{Moore1920}. With this formalism at hand, we can now, in principle, determine the current on any edge given a particular injection pattern $\mathbf{I}$ and edge resistances $\mathbf{R}$.
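To make this concrete, the following sketch (hypothetical example graph, unit resistances) solves equation~(\ref{eq:Poisson}) exactly; instead of forming $\mathbf{L}^\dagger$ explicitly, it uses the freedom to fix one potential and grounds the last node, which renders the reduced system invertible:

```python
from fractions import Fraction as Fr

# Sketch: solve the Poisson-like equation I = L V on a hypothetical example
# graph with unit resistances. Instead of the pseudoinverse L†, we exploit the
# free choice of one potential and ground node 3 (V_3 = 0), which makes the
# reduced system invertible.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N = 4

def solve(A, b):                  # Gauss-Jordan elimination with Fractions
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[r][n] / A[r][r] for r in range(n)]

L = [[Fr(0)] * N for _ in range(N)]
for a, b in edges:                # weighted Laplacian with unit conductances
    L[a][a] += 1; L[b][b] += 1; L[a][b] -= 1; L[b][a] -= 1

I = [Fr(1), Fr(0), Fr(0), Fr(-1)]             # inject at node 0, withdraw at node 3
V = solve([row[:N-1] for row in L[:N-1]], I[:N-1]) + [Fr(0)]
i = [V[a] - V[b] for a, b in edges]           # Ohm's law with R_k = 1
# edge currents: [1/8, 1/8, 3/8, -5/8, 1/4]; they satisfy B i = I exactly
```

Exact rational arithmetic makes it easy to verify Kirchhoff's laws to the last digit, which will be convenient when checking spanning-tree identities below.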
As a start, consider the situation where each edge has a unit resistance $\mathbf{R}=\operatorname{diag}(1,\ldots,1)$ and a unit current is injected into a particular vertex $s$ and withdrawn at another one $t$ such that $\mathbf{I}=\mathbf{e}_s-\mathbf{e}_t$, where $\mathbf{e}_i=(0,...,\underbrace{1}_{i},...,0)^\top\in\{0,1\}^N$ are the unit vectors with entry one at position $i$ and zero otherwise. In this situation, the current across any edge $\ell=(a,b)$ in the graph is given by the following lemma which dates back to Kirchhoff~\cite{maxwell_2010,kirchhoff_ueber_1847} and has been popularised by Shapiro~\cite{bollobas1998,Shap87}. \begin{lemma} Put a one-ampere current between the vertices $s$ and $t$ of a connected, unweighted graph $G$ such that $\mathbf{I}=\mathbf{e}_s-\mathbf{e}_t$. Then the current on any other edge $(a,b)$ is given by \begin{equation} i_{ab} = \frac{ \mathcal{N}(s,a\rightarrow b,t)-\mathcal{N}(s,b\rightarrow a,t)}{\mathcal{N}},\nonumber \end{equation} where $\mathcal{N}(s,a\rightarrow b,t)$ is the number of STs that contain a path from $s$ to $t$ of the form $s,\ldots,a,b,\ldots,t$ and $\mathcal{N}$ is the total number of STs of the graph. \label{lem:electricallemma} \end{lemma} Whereas this lemma only holds for graphs where all links have unit resistances, real-world resistor networks or other types of linear flow networks are typically weighted with non-homogeneous resistances. However, the extension to weighted networks is straightforward as summarised in the following corollary. \begin{corr} Put a one-ampere current between the vertices $s$ and $t$ of a connected, weighted graph $G$ such that $\mathbf{I}=\mathbf{e}_s-\mathbf{e}_t$.
Then the current on any other edge $(a,b)$ is given by \begin{equation} i_{ab} = \frac{ \mathcal{N}^*(s,a\rightarrow b,t)-\mathcal{N}^*(s,b\rightarrow a,t)}{\mathcal{N}^*},\label{eq:ST_LODF} \end{equation} where $\mathcal{N}^*=\sum_{T\in \mathcal{T}}\prod_{e\in T}w_e$ is the sum over the products of the weights $w_e$ of all edges $e\in T$ that are part of the respective spanning tree $T$ and $\mathcal{T}$ is the set of all STs in the graph. The restricted sums $\mathcal{N}^*(s,a\rightarrow b,t)$ and $\mathcal{N}^*(s,b\rightarrow a,t)$ are defined analogously, running only over those STs that contain a path from $s$ to $t$ of the respective form. We thus assign a weight to each ST given by the product of the weights of the edges on the ST and replace the unweighted STs in Lemma~\ref{lem:electricallemma} by weighted STs. \end{corr} We will demonstrate in the following sections how this lemma and its corollary can be used to understand how failure spreading may be mitigated in linear flow networks such as AC power grids in the DC approximation. \section{Analogy between resistor networks and power flow in electrical grids} \label{sec:resistor_powergrids} Importantly, the theoretical framework developed in the last section may not only be applied to resistor networks. In this section, we demonstrate how these results may be used to gain insight into the mitigation of failure spreading in power grids. \subsection{Modelling power grids as linear flow networks} Most electric power transmission grids are made up of AC transmission lines and are as such governed by the non-linear AC power flow equations~\cite{Wood14}. However, the real power flow over transmission lines can be simplified to a linear flow model in what is referred to as the DC approximation of the AC power flow. This approximation is based on the following assumptions: \begin{itemize} \item Nodal voltages vary little. \item Transmission lines are purely inductive, i.e. their resistance is negligible compared to their reactance $r_\ell\ll x_\ell,~\forall \ell \in E(G)$. \item Differences between nodal voltage angles $\vartheta_n,~n\in V(G)$ of neighbouring nodes $n,m$ are small, $\vartheta_n-\vartheta_m\ll1$.
\end{itemize} Typically, these assumptions are met if the power grid is not heavily loaded~\cite{Purc05}. As a result, the real power flow $F_\ell$ along a transmission line $e_\ell=(n,m)\in E(G)$ in the DC approximation depends linearly on the nodal voltage phase angles $\vartheta_n$ of neighbouring nodes \begin{equation} F_\ell = b_\ell (\vartheta_n-\vartheta_m). \end{equation} Here $b_\ell\approx x_\ell^{-1}$ is the line susceptance of line $\ell$. Thus, the vector of real power flows along the transmission lines in the power grid $\mathbf{F}=(F_1, ... ,F_M)^\top\in\mathbb{R}^M$ takes the role of the current flow vector of resistor networks. On the other hand, the nodal voltage phase angles $\mathbf{\vartheta}=(\vartheta_1,...,\vartheta_N)^\top\in\mathbb{R}^N$ take the role of the nodal voltages $\mathbf{V}$, and the weight of an edge $e_k$ is given by its line susceptance $b_k$, in correspondence with the inverse resistance $r^{-1}_k$ of resistor networks. Thus, Ohm's law~(\ref{eq:voltage_law}) translates to power grids as \begin{align*} \mathbf{F}=\mathbf{B}_d\mathbf{B}^\top\mathbf{\vartheta}. \end{align*} Here, $\mathbf{B}_d = \operatorname{diag}(b_1,...,b_M)\in\mathbb{R}^{M\times M}$ is the diagonal matrix of line susceptances. Again, Kirchhoff's current law~(\ref{eq:current_law}) holds and we may express it using vector quantities as follows~\cite{Wood14,strake2018} \begin{align*} \mathbf{B}\mathbf{F} = \mathbf{P}. \end{align*} Here, $\mathbf{P}=(P_1,...,P_N)^\top\in\mathbb{R}^N$ is the vector of nodal power injections, which thus takes the role of the nodal current injections $\mathbf{I}$. We summarise these equivalences in Table~\ref{tab:resistor_dc}. \subsection{Sensitivity factors in power grid security analysis} \begin{table}[tb!]
\centering \caption{\footnotesize \label{tab:resistor_dc}Analogy between resistor networks and AC power grids in the DC approximation.} \begin{tabular}{lc|lc} \multicolumn{2}{c|}{DC approximation} &\multicolumn{2}{c}{Resistor network} \\ \hline\hline Power injections &$\mathbf{P}$ & Nodal current &$\mathbf{I}$ \\ Real power flow & $\mathbf{F}$& Current flow&$\mathbf{i}$\\ Nodal phase angles & $\mathbf{\vartheta}$ & Nodal voltages &$\mathbf{V}$\\ Line susceptances & $b_e$ & Inverse edge resistance & $r_e^{-1}$\\ \end{tabular} \end{table} In power grid security analysis, linear sensitivity factors are used to study and prevent line overloads that would keep the power grid from operating properly~\cite{Wood14}. One of these factors is the \textit{Power Transfer Distribution Factor} (PTDF). The $\text{PTDF}_{r,s,k}$ quantifies the change in flow $\Delta F_k$ on line $e_k\in E(G)$ if a power $\Delta P$ is injected at node $r$ and withdrawn at node $s$. It is calculated as \begin{align} \text{PTDF}_{r,s,k} = \frac{\Delta F_{k}}{\Delta P}. \label{eq:PTDF} \end{align} In addition to this factor, one typically considers the \textit{Line Outage Distribution Factor} (LODF) which measures the change in power flow on a line $e_m$ when another line $e_k$ fails~\cite{Wood14} \begin{align} \text{LODF}_{m,k}=\frac{\Delta F_m}{F_k^{(0)}}. \label{eq:lodf} \end{align} Here, $F_k^{(0)}$ is the flow on line $e_k$ before the outage. Mathematically, these two quantities are related as follows if $e_k=(r,s)$ is the failing link~\cite{Wood14} \begin{align} \text{LODF}_{m,k} = \frac{\text{PTDF}_{r,s,m}}{1-\text{PTDF}_{r,s,k}}. \label{eq:LODF_PTDF} \end{align} \begin{figure*}[t!] \centering \includegraphics[width=1.0\textwidth]{sg_spanningtrees.pdf} \caption{Spanning trees (STs) may be used to explain the shielding effect of certain connectivity structures between different parts of a network.
(a,b) A square grid is divided into two parts by either weakening the links connecting two parts (a, blue, $w_e=0.1$) or strengthening the links perpendicularly separating the two parts (b, blue, $w_e=10$). (c,d) For both divisions, the failure of a single link with unit flow (red) significantly reduces failure spreading to the other part of the network. (e-h) Different STs (black) that contain specific paths of the form $(v_0 = r, v_1, \ldots, v_i = m, v_{i+1} = n, v_{i+2}, \ldots, v_k=s)$ used to calculate the flow changes on link $(m,n)$ for a failure of link $(r,s)$ by virtue of Eq.~(\ref{eq:ST_LODF}). (e,f) For the weakly connected network shown in panels (a,c), a monitoring link in the same part (e) may lead to STs that contain only one weak link (blue shading). Thus, the contribution of this ST to the sum over all STs is much stronger than for a monitoring link in the other part, where STs have to contain at least two weak links (f, blue shading). (g,h) For the strongly connected network shown in panels (b,d), the STs contributing most are the ones containing all edges with strong weights (g, blue shading). (h) If links $(m,n)$ and $(r,s)$ are in different parts, no ST may contain all edges with strong weights (blue shading), thus reducing failure spreading in this case.} \label{fig:sg_spanningtrees} \end{figure*} \subsection{Spanning tree description of link failures} On the basis of the analogy between electrical grids and resistor networks developed in the last sections, we will now show how the ST formula presented in Lemma~\ref{lem:electricallemma} may be used for power systems security analysis. In the language of power grids, the lemma yields the $\text{PTDF}_{s,t,m}$ for an edge $e_m=(a,b)$ if a unit power $\Delta P$ is injected at node $s$ and withdrawn at node $t$. For this reason, the PTDF may be calculated as follows \begin{align} \text{PTDF}_{s,t,m}=\frac{\mathcal{N}^*(s,a\rightarrow b,t)-\mathcal{N}^*(s,b\rightarrow a,t)}{\mathcal{N}^*}.
\label{eq:PTDF_trees} \end{align} Based on Eq.~\eqref{eq:LODF_PTDF}, which yields the LODF expressed in terms of the PTDF, we can make use of this expression to derive an equivalent expression for the LODF. If $e_k=(r,s)$ is the failing link and $e_m=(a,b)$ the link where the flow changes are monitored, the expression based on Eq.~\eqref{eq:PTDF_trees} reads as \begin{align} \text{LODF}_{m,k}&=\frac{\mathcal{N}^*(r,a\rightarrow b,s)-\mathcal{N}^*(r,b\rightarrow a,s)}{\mathcal{N}^*-(\mathcal{N}^*(r,r\rightarrow s,s)-\mathcal{N}^*(r,s\rightarrow r,s))}\nonumber\\ &=\frac{\mathcal{N}^*(r,a\rightarrow b,s)-\mathcal{N}^*(r,b\rightarrow a,s)}{\mathcal{N}^*-\mathcal{N}^*(r,r\rightarrow s,s)}\nonumber\\ &=\frac{\mathcal{N}^*(r,a\rightarrow b,s)-\mathcal{N}^*(r,b\rightarrow a,s)}{\mathcal{N}^*_{\backslash \{k\}}}. \label{eq:LODF_trees} \end{align} Here, $\mathcal{N}^*_{\backslash \{k\}}$ denotes the weight of all STs in the graph evaluated \emph{after} removing the edge $e_k$; the term $\mathcal{N}^*(r,s\rightarrow r,s)$ vanishes in the second step because no path starting at $r$ can traverse the edge in the direction $s\rightarrow r$ without visiting $r$ twice. We have thus found an expression for the LODFs that is based purely on certain STs in the graph. This equation is the basis of our analysis of subgraphs inhibiting failure spreading, which we will perform in the following sections. Note that a similar expression for the LODFs based on spanning 2-forests has recently been derived by Guo et al.~\cite{Guo17}. \section{Mitigating failure spreading} \label{sec:mitigating_spreading} We have seen in the last section that the spreading of failures is studied using LODFs in power systems security analysis. To prevent large flow changes on other links after the failure of a link $e_k$, which may potentially trigger dangerous cascades of failures, it is desirable for overall power system security to keep the LODFs small. A natural question to ask is thus: Can we design or alter the network topology in such a way that LODFs stay small?
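The spanning-tree expression for the LODF can be checked numerically. The sketch below (a hypothetical five-edge weighted graph; exact rational arithmetic) enumerates all weighted STs, evaluates the oriented path counts, and compares the result with a direct before/after flow computation of the LODF:

```python
from fractions import Fraction as Fr
from itertools import combinations

# Exact check of the ST expression for the LODF on a hypothetical weighted
# graph: the spanning-tree formula must match a direct before/after flow
# computation of LODF_{m,k} = dF_m / F_k^(0).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
w = [Fr(1), Fr(2), Fr(1), Fr(3), Fr(1)]
N, k, m = 4, 0, 2                 # failing link e_k = (0,1), monitored e_m = (2,3)

def solve(A, b):                  # Gauss-Jordan elimination with Fractions
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[r][n] / A[r][r] for r in range(n)]

def voltages(active, inj):        # ground node N-1 and solve I = L V
    L = [[Fr(0)] * N for _ in range(N)]
    for j in active:
        (a, b), wj = edges[j], w[j]
        L[a][a] += wj; L[b][b] += wj; L[a][b] -= wj; L[b][a] -= wj
    return solve([row[:N-1] for row in L[:N-1]], inj[:N-1]) + [Fr(0)]

# direct LODF: inject across the failing link, remove it, compare flows
r, s = edges[k]
inj = [Fr(0)] * N; inj[r], inj[s] = Fr(1), Fr(-1)
V_before = voltages(range(len(edges)), inj)
V_after = voltages([j for j in range(len(edges)) if j != k], inj)
flow = lambda V, j: w[j] * (V[edges[j][0]] - V[edges[j][1]])
lodf_direct = (flow(V_after, m) - flow(V_before, m)) / flow(V_before, k)

# ST-based LODF: enumerate all spanning trees and their oriented r -> s paths
def is_tree(sub):                 # N-1 edges and acyclic => spanning tree
    parent = list(range(N))
    def find(x):
        while parent[x] != x: x = parent[x]
        return x
    for j in sub:
        ra, rb = find(edges[j][0]), find(edges[j][1])
        if ra == rb: return False
        parent[ra] = rb
    return True

def tree_path(sub, u, v):         # unique u -> v path inside a spanning tree
    adj = {x: [] for x in range(N)}
    for j in sub:
        a, b = edges[j]; adj[a].append(b); adj[b].append(a)
    stack, prev = [u], {u: None}
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in prev:
                prev[y] = x; stack.append(y)
    path = [v]
    while path[-1] != u: path.append(prev[path[-1]])
    return path[::-1]

num, den = Fr(0), Fr(0)
a, b = edges[m]
for sub in combinations(range(len(edges)), N - 1):
    if not is_tree(sub): continue
    weight = Fr(1)
    for j in sub: weight *= w[j]
    if k not in sub: den += weight            # STs of the graph without e_k
    p_ = tree_path(sub, r, s)
    steps = list(zip(p_, p_[1:]))
    if (a, b) in steps: num += weight
    if (b, a) in steps: num -= weight

assert num / den == lodf_direct == Fr(-3, 7)  # both routes agree exactly
```

In this example only the trees avoiding $e_k$ can route the $r\to s$ path through $e_m$, and both computations agree exactly, $\text{LODF}_{m,k}=-3/7$.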
Based on Equation~\eqref{eq:LODF_trees} expressing the LODF in terms of STs, this question may be addressed in a purely topological manner. In particular, we deduce three strategies to reduce the effect of failure spreading: \begin{enumerate} \item Fixing long paths between trigger link $e_k$ and monitoring link $e_\ell$ leaves only a few degrees of freedom, which reduces the relative contribution of the numerator in Eq.~\eqref{eq:LODF_trees}. \item Fixing specific paths between trigger link $e_k$ and monitoring link $e_\ell$ can force links of large weights to be excluded from the numerator, thus reducing its relative contribution to Eq.~\eqref{eq:LODF_trees}. \item Introducing symmetric elements between parts of the network may lead to a complete balancing between the two contributions in the numerator of Eq.~\eqref{eq:LODF_trees}. \end{enumerate} We will address each of the strategies in the following subsections. \subsection{The role of the rerouting distance} \label{sec:rrdist} With Eq.~\eqref{eq:LODF_trees} expressing LODFs using STs at hand, it is intuitively clear that certain paths in the network should play an important role in predicting the overall effect of line outages. In particular, we can see immediately that for a given failing link $e_k$, the numerator in Eq.~\eqref{eq:LODF_trees} depends on the paths going through the link $e_\ell$ monitoring the flow changes, whereas the denominator does not. Therefore, we expect the flow changes to be smaller on another link $e_m$ for which the shortest path through $e_m$ and $e_k$ is longer than the corresponding path through $e_\ell$ and $e_k$. This is because fixing a longer path in the sum $\mathcal{N}^*(r,a\rightarrow b,s)$ effectively reduces the number of contributing STs.
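The spanning-tree counts entering these arguments need not be obtained by brute-force enumeration: Kirchhoff's matrix-tree theorem reduces the count to a determinant of a reduced Laplacian. A minimal exact-arithmetic sketch for unweighted graphs:

```python
from fractions import Fraction as Fr

# Sketch of Kirchhoff's matrix-tree theorem: the number of spanning trees
# tau(G) equals the determinant of the Laplacian with one row/column removed.
def count_spanning_trees(n, edges):
    L = [[Fr(0)] * n for _ in range(n)]
    for a, b in edges:
        L[a][a] += 1; L[b][b] += 1
        L[a][b] -= 1; L[b][a] -= 1
    A = [row[1:] for row in L[1:]]        # remove row/column of node 0
    det, m = Fr(1), len(A)                # exact determinant by elimination
    for c in range(m):
        p = next((r for r in range(c, m) if A[r][c] != 0), None)
        if p is None:
            return 0                      # singular: graph is disconnected
        if p != c:
            A[c], A[p] = A[p], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return int(det)

# Cayley's formula: K_4 has 4^{4-2} = 16 spanning trees; the 5-cycle has 5.
K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
C5 = [(i, (i + 1) % 5) for i in range(5)]
print(count_spanning_trees(4, K4), count_spanning_trees(5, C5))  # → 16 5
```

The path-restricted counts $\tau(G/p)$ used in the text are then obtained by contracting the path into a single vertex before taking the determinant.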
This intuitive idea is demonstrated to hold also quantitatively in Figure~\ref{fig:distance_decay}a,d: We illustrate that the number of STs $\tau(G/p)$ scales approximately exponentially with the length of the cyclic path contained in the STs for an unweighted Erd\H{o}s-R\'{e}nyi (ER) random graph $G(200,300)$ with $300$ edges and $200$ vertices~\cite{Erdos1960} (a) and the power flow test case 'IEEE 118'~\cite{MATPOWER,josz_ac_2016} (d). To study this scaling, we contract a cyclic path $p$ between two arbitrarily chosen edges and quantify the number of STs using Kirchhoff's matrix tree theorem~\cite{kirchhoff_ueber_1847}. The theorem states that the number of STs in a graph may be calculated using the determinant of the graph's Laplacian matrix \begin{align*} \tau(G)=\operatorname{det}(L_u). \end{align*} Here, $L_u$ is the matrix obtained from the Laplacian matrix $L$ of $G$ by removing the row and column corresponding to an arbitrarily chosen vertex $u\in V(G)$. The number of STs $\tau(G/p)$ containing a path $p$ may be calculated by contracting the path in the graph and the Laplacian matrix and then taking the determinant of the resulting Laplacian. Taking the difference in the numerator of Eq.~(\ref{eq:LODF_trees}) between the path and a reversed path will in general not affect the exponential scaling, since the difference of two exponentials with different exponents or different prefactors will again scale exponentially. \begin{figure*}[t!] \centering \includegraphics[width=1.0\textwidth]{Network_isolators.pdf} \caption{Network isolators that lead to a complete vanishing of LODFs are created using certain symmetric paths in the network. (a) STs that contain a path starting at node $r$ and terminating at node $s$ and containing the edge $(m,n)$ (blue) or $(n,m)$ (red) have to cross the subgraph consisting of dotted, coloured edges in the centre.
Since each path can contain each vertex and edge only once, each ST passing through the subgraph in one way (blue) has a counterpart passing through the subgraph in the other way (red). (b) Failure of a link (red) results in vanishing LODFs (colour code) in the part connected by a network isolator as predicted using the ST formulation of link failures.} \label{fig:Network_isolators} \end{figure*} We may thus expect an exponential decay of LODFs with the length of fixed, cyclic paths. This result complements recent progress made in the understanding of the role played by distance for failure spreading in linear flow networks. In Ref.~\cite{strake2018}, it was shown that flow changes after a link failure are not captured well by the ordinary graph distance between the failing link and the link monitoring flow changes. Instead, a different distance measure referred to as rerouting distance captures this effect much better. It is defined as follows: \begin{defn} A \emph{rerouting path} from vertex $r$ to vertex $s$ via the edge $(m,n)$ is a path \begin{equation} (v_0 = r, v_1, \ldots, v_i = m, v_{i+1} = n, v_{i+2}, \ldots, v_k=s)\nonumber \end{equation} or \begin{equation} (v_0 = r, v_1, \ldots, v_i = n, v_{i+1} = m, v_{i+2}, \ldots, v_k=s)\nonumber \end{equation} where no vertex is visited twice. The \emph{rerouting distance} between two edges $(r,s)$ and $(m,n)$ denoted by $${\rm edist}_{\text{re}}[(r,s),(m,n)]$$ is the length of the shortest rerouting path from $r$ to $s$ via $(m,n)$ plus the length of edge $(r,s)$. Equivalently, it is the length of the shortest cycle crossing both edges $(r,s)$ and $(m,n)$. If no such path exists, the rerouting distance is defined to be $\infty$. \label{def-reroute-dist} \end{defn} The rerouting distance defined this way is a proper distance metric. With these arguments at hand, it is intuitively clear why the rerouting distance performs very well in predicting the effects of line outages.
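For small graphs, the rerouting distance of Definition~\ref{def-reroute-dist} can be evaluated by brute force over simple paths (a sketch for unit edge lengths; the hypothetical example is a 5-cycle, where the only cycle through two given edges is the cycle itself):

```python
# Brute-force sketch of the rerouting distance: enumerate simple paths from r
# to s that traverse edge (m, n) in either direction. Unit edge lengths; only
# feasible for small graphs, since the number of simple paths grows quickly.
def rerouting_distance(adj, r, s, m, n):
    best = [None]
    def dfs(v, visited, length, used_mn):
        if v == s:
            if used_mn and (best[0] is None or length < best[0]):
                best[0] = length
            return
        for u in adj[v]:
            if u not in visited:          # no vertex may be visited twice
                dfs(u, visited | {u}, length + 1,
                    used_mn or (v, u) in ((m, n), (n, m)))
    dfs(r, {r}, 0, False)
    # shortest rerouting path plus the length of edge (r, s) itself
    return float('inf') if best[0] is None else best[0] + 1

# 5-cycle: the only cycle through edges (0,1) and (2,3) is the cycle itself.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(rerouting_distance(adj, 0, 1, 2, 3))  # → 5
```

On a tree no such cycle exists and the function returns $\infty$, matching the convention in the definition.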
Indeed, we observe an exponential scaling of the LODFs with the rerouting distance for a given trigger link both in the ER random graph (Figure~\ref{fig:distance_decay}b) and in the test case 'IEEE 118' (Figure~\ref{fig:distance_decay}e). \subsection{The role of strong and weak network connectivity} \label{sec:connectivity} Our second strategy to reduce failure spreading after link failures is based on fixing specific paths in the network in such a way that they cannot contain certain links with large weights. This way, the numerator in Equation~\eqref{eq:LODF_trees} does not contain the contribution of the links with large weights whereas the denominator does, thereby reducing the overall impact of the link failure. Note that in contrast to the last section, the fixed paths do not necessarily have to be long to prevent failure spreading. We will demonstrate this strategy for two cases: First, we use this reasoning to demonstrate that weakening the links between two parts of the network -- thus effectively dividing it into communities -- may reduce failure spreading between them. This is expected, as weakly connected networks generally suppress failure spreading from one part to the other, but weak connectivity also limits the power that can flow between the parts. This drawback does not apply to the second strategy: we illustrate why strengthening the links that horizontally separate two parts of the network also reduces the impact of link failures. The two strategies are illustrated for a simple $3\times 6$ square grid in Figure~\ref{fig:sg_spanningtrees}. The failure of a link $e_k=(r,s)$ (dotted, orange) leads to different contributions to the numerator in Equation~\eqref{eq:LODF_trees} if the monitoring link $e_\ell=(m,n)$ (green) is contained in the same part (e) as compared to a different, weakly connected part (f) in an otherwise symmetrical situation. Note that the distance between monitoring link and trigger link is also the same in both panels e and f.
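The shielding effect of weak inter-community links can also be checked quantitatively. The following sketch (a hypothetical network of two unit-weight triangles coupled by two links of weight $w_c$; exact rational arithmetic) computes the LODF transmitted into the second community and confirms that it shrinks with the coupling strength:

```python
from fractions import Fraction as Fr

# Sketch of the shielding effect of weak inter-community links: a hypothetical
# network of two unit-weight triangles {0,1,2} and {3,4,5}, coupled via links
# (2,3) and (1,4) of weight w_c. We compute the LODF on link (3,4) in the
# second community for a failure of link (0,1) in the first one.
N = 6
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3), (1, 4)]
k, m = 0, 3                                   # failing (0,1), monitored (3,4)

def solve(A, b):                              # Gauss-Jordan with Fractions
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[r][n] / A[r][r] for r in range(n)]

def lodf(w_c):
    w = [Fr(1)] * 6 + [w_c, w_c]
    def voltages(active, inj):                # ground node N-1
        L = [[Fr(0)] * N for _ in range(N)]
        for j in active:
            (a, b), wj = edges[j], w[j]
            L[a][a] += wj; L[b][b] += wj; L[a][b] -= wj; L[b][a] -= wj
        return solve([row[:N-1] for row in L[:N-1]], inj[:N-1]) + [Fr(0)]
    r, s = edges[k]
    inj = [Fr(0)] * N; inj[r], inj[s] = Fr(1), Fr(-1)
    V0 = voltages(range(len(edges)), inj)
    V1 = voltages([j for j in range(len(edges)) if j != k], inj)
    F = lambda V, j: w[j] * (V[edges[j][0]] - V[edges[j][1]])
    return (F(V1, m) - F(V0, m)) / F(V0, k)

# weakening the couplings suppresses failure spreading into the second part
assert abs(lodf(Fr(1, 100))) < abs(lodf(Fr(1)))
```

In the ST picture this is immediate: every tree contributing to the numerator must traverse both couplings, so its weight scales with $w_c^2$, while the denominator also contains trees using only one coupling.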
For a monitoring link in the same part, the numerator also receives contributions from STs containing only \textit{one} weak link (thin line, blue shading). For a monitoring link located in the other part, each ST connecting trigger and monitoring link has to contain at least \textit{two} weak links (shaded blue). Since the contribution to the numerator is proportional to the product of all weights along the ST and the situation is otherwise symmetric, we expect a weaker LODF and thus a shielding effect if the two links are contained in different, weakly connected parts. In panels (b) and (d), we demonstrate that strong, horizontal connections have a similar effect on failure spreading: If the monitoring link $e_\ell=(m,n)$ is contained in the same part of the network as the trigger link $e_k=(r,s)$ (g), now separated through strong connections, spanning trees connecting the two links may contain \textit{two} -- or generally: all -- strong links. For a monitoring link in the other part of the network, the spanning trees connecting them can contain at most \textit{one} -- or generally: all minus one -- strong links. Again, the term in the numerator scales with the link weights contained in the spanning trees. Therefore, we expect the effect of link failures to be stronger for links located in the same part as compared to links contained in the other part, which is confirmed when simulating the failure of a single link in panel (d). \subsection{The role of symmetry} \label{sec:symmetry} As a third strategy for reducing failure spreading, we suggest building networks in such a way that the terms in the numerator of Equation~\eqref{eq:LODF_trees} balance. In this case, failure spreading reduces to zero for the respective links.
In order to balance the terms in the numerator of Equation~\eqref{eq:LODF_trees}, we need the spanning trees passing through the monitoring link $e_\ell=(m,n)$ in both directions to have exactly the same weight \begin{align*} \mathcal{N}^*(r,m\rightarrow n,s)&=\mathcal{N}^*(r,n\rightarrow m,s)\\ \Rightarrow\quad \sum_{T\in \mathcal{T}(r,m\rightarrow n,s)}\prod_{e\in T}w_e &=\sum_{T\in \mathcal{T}(r,n\rightarrow m,s)}\prod_{e\in T}w_e . \end{align*} Here, $\mathcal{T}(r,m\rightarrow n,s)$ is the set of all spanning trees containing a path of the form $(r,...,m,n,...,s)$. This equality is for example fulfilled if for each tree $T\in \mathcal{T}(r,m\rightarrow n,s)$ there is a counterpart $T^*\in \mathcal{T}(r,n\rightarrow m,s)$ of the same weight. This may be accomplished by introducing certain symmetric elements, referred to as \textit{network isolators}~\cite{Kaiser_2020}, into the graph as demonstrated in Figure~\ref{fig:Network_isolators}: For each ST connecting trigger link $e_k=(r,s)$ and monitoring link $e_\ell = (m,n)$ and containing a path of the form $(r,...,m,n,...,s)$ (grey and blue lines) there is an ST containing a path of the form $(r,...,n,m,...,s)$ (grey and red lines). If we compare the product of weights for a single tree $T_0\in \mathcal{T}(r,m\rightarrow n,s)$ and its counterpart $T_0^*\in \mathcal{T}(r,n\rightarrow m,s)$, such that both contain exactly the same edges except for the edges connecting the two parts, i.e., the links marked as blue and red arrows in Figure~\ref{fig:Network_isolators}, we can see that these products are equal except for the links $r_1$ and $r_2$ (red links) being contained only in $T_0$, and $b_1$ and $b_2$ (blue links) being contained only in $T_0^*$. We can thus conclude that the above equality is fulfilled, i.e., the product of weights is equal for both trees $T_0$ and $T_0^*$, if \begin{align*} b_1\cdot b_2 = r_1\cdot r_2.
\end{align*} In this case, a failure of link $e_k=(r,s)$ does not result in any flow changes on link $e_\ell=(m,n)$ at all. This reasoning has been generalised recently, where the concept was termed \textit{network isolators}~\cite{Kaiser_2020}. We also note that similar arguments were put forward by Guo et al.~\cite{Guo17}. On general grounds, network isolators are defined as follows~\cite{Kaiser_2020} \begin{lemma} \label{theo:weighted} Consider a linear flow network consisting of two parts with vertex sets $ V_1$ and $ V_2$ and assume that a single link in the induced subgraph $G( V_1)$ fails, i.e.~a link $(r,s)$ with $r,s \in V_1$. If the adjacency matrix of the mutual connections has unit rank ${\rm rank}(\mathbf{A}_{12}) = 1$, then the flows on all links in the induced subgraph $G( V_2)$ are not affected by the failure, that is \begin{equation} \begin{aligned} \Delta F_{m,n} \equiv 0 \quad \forall m,n \in V_2. \end{aligned}\nonumber \end{equation} The subgraph corresponding to the mutual interactions is referred to as \textbf{network isolator}. \end{lemma} Note that network isolators of arbitrary size may be understood using the same reasoning as presented above for a network isolator consisting of only four links. \begin{figure}[t!] \centering \includegraphics[width=0.5\textwidth]{Sign_reversal.pdf} \caption{Sign reversal of LODFs by symmetric subgraphs. (a,b) Modifying the subgraph connecting two graphs from the two parallel lines to the two crossing lines leads to a sign reversal of the LODFs in the connecting subgraphs (shades of grey).
This is in line with the compensatory effect of the symmetric subgraphs used to create a network isolator in Figure~\ref{fig:Network_isolators}.} \label{fig:sign_reversal} \end{figure} \subsubsection{Sign reversal of flow changes} Based on the symmetric elements -- the network isolators -- introduced in the last section, we can demonstrate yet another application of the ST formulation of link failures: We can modify the grid in such a way that the LODFs and thus the flow changes change their sign. This is again based on the symmetry of LODFs in terms of the paths $(r,...,m, n,...,s)$ and $(r,...,n, m,...,s)$. If we apply a symmetric modification such that paths of the first form are replaced by paths of the latter one, we can reverse the sign of the resulting flow changes. In particular, if we interchange the two terms appearing in the numerator of Eq.~(\ref{eq:LODF_trees}) for a subset of edges, we can change the sign of the LODF for these edges \begin{align*} \mathcal{N}^*(r,m\rightarrow n,s)&\rightarrow \mathcal{N}^*(r,n\rightarrow m,s)\\ \mathcal{N}^*(r,n\rightarrow m,s)&\rightarrow \mathcal{N}^*(r,m\rightarrow n,s)\\ \Rightarrow \text{LODF}_{\ell,k}&\rightarrow -\text{LODF}_{\ell,k}. \end{align*} This can be achieved using a modification similar to the one shown in Fig.~\ref{fig:Network_isolators}a: If the initial network contains the subgraph indicated by dotted, blue arrows in the centre, we can reverse the sign of the $\text{LODF}_{\ell,k}$ by changing this subgraph to the one indicated by red, dotted arrows. This is demonstrated in Figure~\ref{fig:sign_reversal}: Changing the subgraph in the centre connecting the two graphs from the ``x''-shaped subgraph (a) to the ``=''-shaped subgraph (b) leads to a sign reversal of the LODFs in the second graph (shades of grey) while the magnitude of LODFs is the same in both panels.
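Lemma~\ref{theo:weighted} can be verified exactly on a toy network (hypothetical weights chosen such that $\mathbf{A}_{12}$ has unit rank, in analogy to the condition $b_1\cdot b_2 = r_1\cdot r_2$ above):

```python
from fractions import Fraction as Fr

# Exact check of the network isolator lemma on a hypothetical example: part one
# is the triangle {0,1,2}, part two the single link (3,4). The connecting
# weights (1,3):1, (1,4):2, (2,3):2, (2,4):4 give rank(A_12) = 1, since
# 1*4 == 2*2. A failure of link (0,1) in part one must then leave the flow on
# link (3,4) in part two unchanged.
N = 5
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (1, 3), (1, 4), (2, 3), (2, 4)]
w = [Fr(1), Fr(1), Fr(1), Fr(1), Fr(1), Fr(2), Fr(2), Fr(4)]
k, m = 0, 3                                   # failing (0,1), monitored (3,4)

def solve(A, b):                              # Gauss-Jordan with Fractions
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[r][n] / A[r][r] for r in range(n)]

def voltages(active, inj):                    # ground node N-1
    L = [[Fr(0)] * N for _ in range(N)]
    for j in active:
        (a, b), wj = edges[j], w[j]
        L[a][a] += wj; L[b][b] += wj; L[a][b] -= wj; L[b][a] -= wj
    return solve([row[:N-1] for row in L[:N-1]], inj[:N-1]) + [Fr(0)]

inj = [Fr(1), Fr(-1), Fr(0), Fr(0), Fr(0)]    # inject across the failing link
V0 = voltages(range(len(edges)), inj)
V1 = voltages([j for j in range(len(edges)) if j != k], inj)
F = lambda V, j: w[j] * (V[edges[j][0]] - V[edges[j][1]])
assert F(V0, k) != 0                          # the failing link carried flow
assert F(V1, m) - F(V0, m) == 0               # exactly zero change in part two
```

With exact arithmetic the flow change in part two vanishes identically, not merely up to rounding: in this configuration the potentials of nodes 3 and 4 always coincide, whatever happens in part one.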
This modification thus allows one to simultaneously change the sign of all LODFs in a subgraph, which may prevent overloads that are caused by flows going in a particular direction. \section{Conclusion} We demonstrated how a spanning tree formulation of link failures may be used to understand which topological patterns aid the mitigation of failure spreading in power grids and other types of linear flow networks. In particular, we derived and explained three strategies for reducing the effect of link failures in linear flow networks based on spanning trees. Our results offer a new understanding of previous strategies used to inhibit failure spreading in power grids and may thus help increase power grid security. All strategies analysed here for reducing failure spreading are based on extending -- or at least not reducing -- the network's ability to transport flows. This is in contrast to typical containment strategies in power grid security, which are based on islanding the power grid, i.e. reducing the connectivity for the sake of security. We illustrated how to exploit the intimate connection to graph theory to find and analyse subgraphs that allow for improving both power grid resilience and efficiency at the same time. Our results thus offer a graph-theoretical understanding of network structures that have been found to inhibit or enhance failure spreading. We illustrated the fruitful approach of analysing failure spreading in power grids by using spanning trees for several subgraphs, but are confident that other subgraphs for enhancing or inhibiting failure spreading may be unveiled using this formalism. \section*{Acknowledgments} We gratefully acknowledge support from the German Federal Ministry of Education and Research (grant no. 03EK3055B) and the Helmholtz Association (via the joint initiative ``Energy System 2050 -- A Contribution of the Research Field Energy'' and the grant no.~VH-NG-1025).
\section{Introduction} In this era of exponential information growth, it is now possible to collect abundant data from different sources to perform diverse tasks, such as social computing, environmental analysis, and disease prediction. These data are usually heterogeneous and possess distinct physical properties such that they can be categorized into different groups, each of which is then regarded as a particular view in multi-view learning. For example, in video surveillance \citep{wang2013intelligent}, placing multiple cameras at different positions around one area might enable better surveillance of that area in terms of accuracy and reliability. In another example, to accurately recommend products to target customers \citep{jin2005maximum}, it is necessary to comprehensively describe the product by its image, brand, supplier, sales history, and user feedback. Effective descriptors have already been developed for object motion recognition: (i) histograms of oriented gradients (HOG) \citep{dalal2005histograms}, which focus on static appearance information; (ii) histograms of optical flow (HOF) \citep{laptev2008learning}, which capture absolute motion information; and (iii) motion boundary histograms (MBH) \citep{dalal2006human}, which encode related motion between pixels. Rather than requiring that all the examples should be comprehensively described based on each individual view, it might be better to exploit the connections and differences between multiple views to better represent examples. A number of multi-view learning algorithms \citep{blum1998combining,lanckriet2004learning,jia2010factorized,chen2012large,chaudhuri2009multi} have thus emerged to effectively fuse multiple features from different views. These have been widely applied to various computer vision and intelligent system problems. 
Although an optimal description of the data might be obtained by integration of multiple views, in practice it is difficult to guarantee that all the candidate views can be accessed simultaneously. For example, establishing a camera network for video surveillance is a huge project that takes time to realize; as the network is gradually deployed, the number of views available for tracking and detection increases over time. Newly developed recommendation systems might well have their images and text descriptions in place, but they require a period of time to accumulate sales and, therefore, user feedback, which are key factors influencing the decisions of prospective customers. Images can be depicted by diverse visual features with distinct acquisition costs. For example, a few milliseconds might be sufficient to extract the color histogram or SIFT descriptors from a normal-sized image, but time-consuming clustering and mapping processes are further required to generate bag-of-word (BoW) features from SIFT descriptors. Recent deep learning methods need a longer time (usually hours or even days) to obtain a reasonable model for image feature extraction. Conventional multi-view learning algorithms \citep{cai2013multi,kumar2011learning} have been developed in ideal settings in which all the views are accessed simultaneously. The real world, however, presents a more challenging multi-view learning scenario formed from multiple streaming views. Newly arrived views might contain fresh and timely information that is beneficial for further improving the multi-view learning performance. To make existing multi-view learning methods applicable to this streaming view setting, a naive approach might be to treat each newly arrived view as a new stage and then re-run the multi-view learning algorithms with all views. However, this approach is likely to suffer from intensive computational costs or serious performance degradation.
In contrast, here we propose an effective streaming view learning algorithm, which assumes that the view function subspaces of a multi-view model well trained over sufficiently many past views are stable, and fine-tunes their combination weights for an efficient model update. We provide theoretical analyses to support the feasibility of the proposed algorithm in terms of convergence and estimation error. Experimental results in real-world clustering and classification applications demonstrate the practical significance of investigating streaming views in the multi-view learning problem and the effectiveness of the proposed algorithm. \section{Problem Formulation} In the standard multi-view learning setting, we are provided with $n$ examples of $m$ views $\{(x_{i}^{1}, \cdots, x_{i}^{m})\}_{i=1}^{n}$, where $x_{i}^{v}\in\mathbb{R}^{D_{v}}$ is the feature vector on the $v$-th view of the $i$-th example. The feature matrices on different views are thus denoted as $\{X^{1}, \cdots, X^{m}\}$, where $X^{v}\in \mathbb{R}^{D_{v}\times n}$. Subspace-based multi-view learning approaches aim to discover a subspace shared by multiple views, such that the information from multiple views can be integrated in that subspace: \begin{equation} (f^1, \cdots, f^m): (x_{i}^1, \cdots, x_{i}^{m})\rightarrow z_{i}, \end{equation} where $f^{v}$ is the view function on the $v$-th view, and $z_{i}\in \mathbb{R}^{d}$ is the latent representation in the subspace $\mathcal{Z}$. Based on the unified representation $z$ of a multi-view example, the subsequent tasks, including classification, regression, and clustering, can easily be accomplished. Most existing multi-view learning algorithms explicitly assume that all views $\{X^{1}, \cdots, X^{m}\}$ are static and can be simultaneously accessed for multi-view model learning.
If new views $\{X^{m+1}, \cdots, X^{m+k}\}$ are provided, the question arises of how to upgrade the well-trained multi-view model $(f^{1}, \cdots, f^{m})$ over the past $m$ views using the latest information. It is unreasonable to simply neglect the newly arrived views and ignore the possibility of updating the model. On the other hand, naively maintaining a training pool composed of all views, enriching the pool with each newly arrived view, and then re-launching the multi-view learning algorithm would consume substantial storage and computation resources. It is, therefore, necessary to investigate this challenging multi-view learning problem in the general streaming view setting, where multiple views arrive in a streaming format. \subsection{Streaming View Learning} In this section, we first present a naive approach to handling new views. We then develop a more sophisticated streaming view learning algorithm that reduces the burden of learning new views. \subsubsection{A Naive Approach} Assume that the multiple views $\{x^{1}, \cdots, x^{m}\}$ of an example are generated from a latent data point $z$ in the subspace, \begin{equation}\label{eq:view_func} x^{v} = f^{v}(z) = W_{v}z, \end{equation} where the view function $f^{v}$, parameterized by $W_{v}\in\mathbb{R}^{D_{v}\times d}$, is assumed to be linear for simplicity. In practice, different feature dimensions are usually correlated; for example, distinct image tags in BoW features might be related to each other. It is thus reasonable to encourage a low rank of $W_{v}$. Moreover, low-rank $\{W_{v}\}_{v=1}^{m}$ imply that the latent subspace contains comprehensive information to generate the multiple view spaces while the inverse procedure is infeasible, which is consistent with our assumption that multiple views are generated from a latent subspace.
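As a concrete illustration of the generative assumption in Eq. (\ref{eq:view_func}), the following NumPy sketch draws latent points in a shared subspace and produces low-rank linear views; the dimensions, ranks, and noise level are arbitrary illustrative choices, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10        # number of examples and latent dimension (arbitrary)
dims = [30, 20, 25]   # feature dimensions D_v of m = 3 views (arbitrary)

# Latent points z_i living in the shared subspace.
Z = rng.standard_normal((d, n))

# Each view is produced by a low-rank linear map W_v plus a little noise.
views = []
for D_v in dims:
    k_v = 5  # rank of W_v, smaller than min(D_v, d)
    W_v = rng.standard_normal((D_v, k_v)) @ rng.standard_normal((k_v, d))
    views.append(W_v @ Z + 0.01 * rng.standard_normal((D_v, n)))
```

Each matrix in `views` plays the role of an $X^{v}\in\mathbb{R}^{D_{v}\times n}$; the latent subspace generates every view, while no single view determines the latent points.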
Within the framework of empirical risk minimization, the view functions $W=\{W_{1}, \cdots, W_{m}\}$ can be solved with the following problem: {\small \begin{equation}\label{eq:batch} \min_{W, z} \; \frac{1}{nm}\sum_{i=1}^{n}\sum_{v=1}^{m}\|x_{i}^{v}-W_{v}z_{i}\|_{2}^{2} + C_{1}\sum_{v=1}^{m}\|W_{v}\|_{*} + C_{2}\sum_{i=1}^{n}\|z_{i}\|_{2}^{2}, \end{equation} }\noindent where a least-squared loss is employed to measure the reconstruction error of multi-view examples, and a trace norm is applied to regularize the view functions. Suppose that, by solving Problem (\ref{eq:batch}), we already have a well-trained multi-view model $\{W_{1}, \cdots, W_{m}\}$ over $m$ views $\{X^{1}, \cdots, X^{m}\}$. Considering a newly arriving view $X^{m+1}$, we are then faced with the challenging problems of how to discover the view function $W_{m+1}$ for the new view and how to upgrade the view functions $\{W_{1}, \cdots, W_{m}\}$ on the past $m$ views. It is a straightforward extension to simultaneously handle more than one new view. Within the framework of Eq. (\ref{eq:batch}), the latent representations $\{z_{i}\}_{i=1}^{n}$ previously learned over the $m$ past views are regarded as fixed. Then, the new view function $W_{m+1}$ can be efficiently solved by \begin{equation} \min_{W_{m+1}} \; \frac{1}{n}\sum_{i=1}^{n}\|x_{i}^{m+1}-W_{m+1}z_{i}\|_{2}^{2} + C_{1}\|W_{m+1}\|_{*}. \end{equation} Problem (\ref{eq:batch}) can then be naturally adapted to $m+1$ views, and alternately optimizing the view functions $\{W_{v}\}_{v=1}^{m+1}$ and the subspace representations $\{z_{i}\}_{i=1}^{n}$ for several iterations will output the optimal multi-view model. This naive approach to handling streaming views can be treated as a stochastic optimization strategy for solving Problem (\ref{eq:batch}).
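To make the new-view step concrete, here is a minimal proximal-gradient sketch for the trace-norm regularized problem above, with the latent representations held fixed. This is an illustrative implementation under assumptions (constant step size taken from the Lipschitz constant of the smooth term; the helper names `svt` and `solve_new_view` are our own), not the authors' code:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def solve_new_view(X_new, Z, C1=0.1, eta=None, iters=100):
    """Proximal gradient for min_W (1/n)||X_new - W Z||_F^2 + C1 ||W||_*,
    with the latent representations Z (d x n) held fixed."""
    D, n = X_new.shape
    d = Z.shape[0]
    if eta is None:
        # Lipschitz constant of the gradient of the smooth term.
        eta = 2.0 * np.linalg.norm(Z @ Z.T, 2) / n
    W = np.zeros((D, d))
    for _ in range(iters):
        grad = 2.0 / n * (W @ Z - X_new) @ Z.T
        W = svt(W - grad / eta, C1 / eta)
    return W
```

Each iteration is exactly the proximal step of Eq. (\ref{eq:proximal}) specialized to the new view: a gradient step on the least-squares term followed by soft-thresholding of the singular values.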
The view function for the new view can be efficiently discovered with the help of latent representations learned on past views; however, it is computationally expensive to upgrade the view functions on past views by re-launching Problem (\ref{eq:batch}), especially when the number of views and the view function dimensions are large. \subsubsection{Streaming View Learning} We begin the development of the streaming view learning algorithm by carefully investigating the view function. Note that any matrix $W_{v}\in \mathbb{R}^{D_{v}\times d}$ can be represented as the sum of rank-one matrices: \begin{equation}\label{eq:svd_view_func} W_{v} = \sum_{ij}\sigma_{ij}^{v}a_{i}^{v}(b_{j}^{v})^{T}, \end{equation} where $span(a_{1}^{v}, a_{2}^{v}, \cdots)=\mathbb{R}^{D_{v}}$ and $span(b_{1}^{v}, b_{2}^{v}, \cdots)=\mathbb{R}^{d}$, and $\{\sigma^{v}_{ij}\}$ are the coefficients to combine different subspaces. Based on the new formulation of the view function in Eq. (\ref{eq:svd_view_func}), Problem (\ref{eq:batch}) can be reformulated as {\small \begin{equation}\label{eq:new_batch} \begin{split} \min \; & \frac{1}{nm}\sum_{i=1}^{n}\sum_{v=1}^{m}\|x_{i}^{v}-A_{v}S_{v}B_{v}^{T}z_{i}\|_{2}^{2} + C_{1}\sum_{v=1}^{m}\|S_{v}\|_{*} + C_{2}\sum_{i=1}^{n}\|z_{i}\|_{2}^{2}\\ \text{w.r.t.} & \; \forall v\; A_{v}\in \mathbb{R}^{D_{v}\times k_{v}}, B_{v}\in \mathbb{R}^{d\times k_{v}}, S_{v} \in\mathbb{R}^{k_{v}\times k_{v}}; \forall i\; z_{i}\in\mathbb{R}^{d}\\ \text{s.t.} &\; \forall v \; A_{v}^{T}A_{v} = I, \quad B_{v}^{T}B_{v}=I, \end{split} \end{equation} }\noindent where $A_{v}$ and $B_{v}$ correspond to the column and row spaces of $W_{v}$ respectively, $S_{v}$ contains the weights to combine different rank-one subspaces, and $k_{v}$ indicates the number of active function subspaces on the $v$-th view. Suppose that we already have well-trained view functions $\{(A_{v}, B_{v}, S_{v})\}_{v=1}^{m}$ over $m$ views.
For the new $(m+1)$-th view, we can efficiently discover its view function $(A_{m+1}, B_{m+1}, S_{m+1})$ given the fixed latent representations $\{z_{i}\}_{i=1}^{n}$, {\small \begin{equation} \begin{split} \min &\; \frac{1}{n}\sum_{i=1}^{n}\|x_{i}^{m+1}-A_{m+1}S_{m+1}B_{m+1}^{T}z_{i}\|_{2}^{2} + C_{1}\|S_{m+1}\|_{*} \\ \text{w.r.t.} & \quad A_{m+1} \in \mathbb{R}^{D_{m+1}\times k_{m+1}}, \; B_{m+1}\in \mathbb{R}^{d\times k_{m+1}}, \\ & \quad \; S_{m+1} \in\mathbb{R}^{k_{m+1}\times k_{m+1}}\\ \text{s.t.} &\quad A_{m+1}^{T}A_{m+1} = I, \quad B_{m+1}^{T}B_{m+1}=I. \end{split} \end{equation} }\noindent The remaining task is then to upgrade the view functions on the past $m$ views using the latest information. As mentioned above, completely re-training the model on past views is computationally expensive since a large number of variables need to be learned. Instead, we propose to fine-tune the previously well-trained multi-view model using the following objective function: {\small \begin{equation}\label{eq:new_sz} \begin{split} \min\; & \frac{1}{n(m+1)}\sum_{i=1}^{n}\sum_{v=1}^{m+1}\|x_{i}^{v}-A_{v}S_{v}B_{v}^{T}z_{i}\|_{2}^{2} \\ &\quad \quad \quad \quad \quad + C_{1}\sum_{v=1}^{m}\|S_{v}\|_{*} + C_{2}\sum_{i=1}^{n}\|z_{i}\|_{2}^{2}\\ \text{w.r.t.} & \; \forall v \; S_{v} \in\mathbb{R}^{k_{v}\times k_{v}}; \quad \forall i\; z_{i}\in\mathbb{R}^{d},\\ \end{split} \end{equation} }\noindent where we have fixed the row and column spaces of view functions on multiple views and attempted to update view functions by adjusting their coefficients for subspace combination. Since the view functions are now mainly determined by a set of smaller matrices $\{S_{v}\}_{v=1}^{m+1}$, where $S_{v}\in\mathbb{R}^{k_{v}\times k_{v}}$ with $k_{v}\ll \min(D_{v},d)$, solving Problem (\ref{eq:new_sz}) is often much cheaper than solving Problem (\ref{eq:new_batch}) or (\ref{eq:batch}) with $m+1$ views.
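The cheap fine-tuning step can be sketched as follows: with $A_{v}$, $B_{v}$, and the latent representations fixed, the weight matrix $S_{v}$ is updated by proximal gradient, and the exact SVD inside each step is inexpensive because $S_{v}$ is only $k_{v}\times k_{v}$. This is an illustrative sketch; the function name `update_weights` and the `scale` argument (standing in for the factor $1/(n(m+1))$) are our own assumptions:

```python
import numpy as np

def update_weights(X_v, A_v, B_v, Z, S_v, C1=0.1, iters=50, scale=1.0):
    """Fine-tune the k_v x k_v combination weights S_v by proximal gradient,
    keeping the subspaces A_v, B_v and the latent representations Z fixed.
    Smooth term: scale * ||X_v - A_v S_v B_v^T Z||_F^2, plus C1 ||S_v||_*."""
    P = B_v.T @ Z                                    # k_v x n, precomputed once
    eta = 2.0 * scale * np.linalg.norm(P @ P.T, 2)   # Lipschitz constant (A_v orthonormal)
    for _ in range(iters):
        grad = 2.0 * scale * A_v.T @ (A_v @ S_v @ P - X_v) @ P.T
        M = S_v - grad / eta
        # Exact SVD is cheap here: M is only k_v x k_v.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        S_v = (U * np.maximum(s - C1 / eta, 0.0)) @ Vt
    return S_v
```

The contrast with the full problem is that every iteration factorizes a $k_{v}\times k_{v}$ matrix instead of a $D_{v}\times d$ one.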
After solving or updating the view functions on $(m+1)$ views, the multi-view model can then process another new view. Meanwhile, the current multi-view model can be used to predict the latent representation of a new multi-view example, followed by the subsequent tasks. \section{Optimization} The proposed streaming view learning algorithm involves optimization over latent representations $z=\{z_{i}\}_{i=1}^{n}$ and function subspaces on multiple views $\{(A_{v}, B_{v})\}_{v=1}^{m}$ and their corresponding combination weights $\{S_{v}\}_{v=1}^{m}$. In this section, we employ an alternating minimization strategy to optimize these variables. The whole optimization procedure is summarized in Algorithm 1. \subsection{Optimization Over Latent Representations} Fixing view functions $\{W_{v}=A_{v}S_{v}B_{v}^{T}\}_{v=1}^{m}$ on multiple views, the optimization problem w.r.t. the latent representation of the $i$-th example is \begin{equation}\label{eq:pro_zi} \min_{z_{i}} \; \frac{1}{nm}\sum_{v=1}^{m}\|x_{i}^{v}-W_{v}z_{i}\|_{2}^{2} + C_{2}\|z_{i}\|_{2}^{2}, \end{equation} which can be solved in closed form. \subsection{Optimization Over View Function Subspaces} By fixing the latent representations, the view functions on multiple views can be independently optimized via \begin{equation} \min_{W_{v}} \; g(W_{v})+ C_{1}\|W_{v}\|_{*}, \end{equation} where \begin{equation} g(W_{v}) = \frac{1}{nm} \sum_{i=1}^{n}\|x_{i}^{v}-W_{v}z_{i}\|_{2}^{2}. \end{equation} The proximal gradient descent method \citep{ji2009accelerated} has been widely used to solve this problem by reformulating it as {\small \begin{equation}\label{eq:proximal} \min_{W_{v}} \; \frac{\eta_{t}}{2}\|W_{v}-\big( W_{v}^{t-1} - \frac{1}{\eta_{t}}\nabla g(W_{v}^{t-1})\big) \|_{F}^{2} + C_{1}\|W_{v}\|_{*}, \end{equation} }\noindent where $\eta_{t}$ is the step size in the $t$-th iteration.
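Returning to Problem (\ref{eq:pro_zi}): it is a ridge-regression problem, and setting its gradient to zero yields the normal equation $\big(\sum_{v} W_{v}^{T}W_{v} + nmC_{2}I\big)z_{i} = \sum_{v} W_{v}^{T}x_{i}^{v}$, which the following sketch solves directly (the function name is our own illustrative choice):

```python
import numpy as np

def update_latent(x_views, W_list, C2=0.1, n=1):
    """Closed-form minimiser of (1/(nm)) sum_v ||x^v - W_v z||^2 + C2 ||z||^2.
    Setting the gradient to zero gives a ridge-regression normal equation."""
    m = len(W_list)
    d = W_list[0].shape[1]
    A = sum(W.T @ W for W in W_list) + n * m * C2 * np.eye(d)
    b = sum(W.T @ x for W, x in zip(W_list, x_views))
    return np.linalg.solve(A, b)
```

Because the system is only $d\times d$, updating all $n$ latent representations is cheap relative to the view-function updates.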
It turns out that Problem (\ref{eq:proximal}) can be solved by singular value thresholding (SVT) \citep{cai2010singular}, \begin{equation}\label{eq:soft} W_{v}^{t} = soft\big(W_{v}^{t-1} - \frac{1}{\eta_{t}}\nabla g(W_{v}^{t-1}), \frac{C_{1}}{\eta_{t}}\big), \end{equation} where $soft(X, C) = A(\Sigma-CI)_{+}B^{T}$ with singular value decomposition $X=A\Sigma B^{T}$ for $X$. By applying Eq. (\ref{eq:soft}), we can obtain the view function subspaces. However, Eq. (\ref{eq:soft}) requires an accurate SVD of $W_{v}^{t-1} - \frac{1}{\eta_{t}}\nabla g(W_{v}^{t-1})$, which is computationally expensive given the large dimension of $W_{v}$. Recall that this step is only used to identify the view function subspaces, and the view function is more accurately discovered by optimizing the combination weight. Therefore, it is unnecessary to compute the SVD of $W_{v}^{t-1} - \frac{1}{\eta_{t}}\nabla g(W_{v}^{t-1})$ very accurately. We apply the power method \citep{halko2011finding} with several iterations to approximately calculate $W_{v}^{t-1} - \frac{1}{\eta_{t}}\nabla g(W_{v}^{t-1})\approx\widehat{A}_{v}\widehat{\Sigma}_{v}\widehat{B}_{v}^{T}$. We initialize the optimization method by $(W_{v})_{0}=X_{v}Z^{T}(ZZ^{T})^{-1}$, and assume $(W_{v})_{0} = {A}_{v}\Sigma_{v}{B}_{v}^{T}$ is the reduced SVD of $(W_{v})_{0}$, where ${A}_{v} \in \mathbb{R}^{D_{v}\times k_{v}}$, ${B}_{v} \in \mathbb{R}^{d\times k_{v}}$ and $\Sigma_{v}\in \mathbb{R}^{k_{v}\times k_{v}}$ is diagonal. At each iteration, we calculate $W_{v}^{t-1} - \frac{1}{\eta_{t}}\nabla g(W_{v}^{t-1})\approx\widehat{A}_{v}\widehat{\Sigma}_{v}\widehat{B}_{v}^{T}$ using the cheaper power method and then select the singular vectors $\{\widetilde{A}_{v},\widetilde{B}_{v}\}$ whose singular values are greater than $C_{1}/\eta_{t}$. The column and row function subspaces can thus be discovered as the orthonormal bases of $span(A_{v}, \widetilde{A}_{v})$ and $span(B_{v}, \widetilde{B}_{v})$, respectively.
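The approximate factorization step can be sketched with a randomized power iteration in the spirit of \citep{halko2011finding}; this is an illustrative implementation (function name and iteration count are our own choices), not the authors' code:

```python
import numpy as np

def power_svd(X, k, n_iter=4, seed=0):
    """Randomized power iteration for an approximate rank-k SVD of X.
    Cheap compared with a full SVD when k << min(X.shape)."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((X.shape[1], k))
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(X @ Q)     # orthonormal basis approximating range(X)
        Q, _ = np.linalg.qr(X.T @ Q)   # ... and range(X^T)
    Q, _ = np.linalg.qr(X @ Q)
    B = Q.T @ X                        # small k x (number of columns) matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub, s, Vt
```

The expensive full SVD is replaced by a few matrix multiplications and thin QR factorizations, which suffices here because the subspaces are later refined through the combination weights.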
\subsection{Optimization Over Combination Weights} Fixing the latent representations and the discovered view function subspaces, the optimization problem w.r.t. the combination weight $S_{v}$ on the $v$-th view is \begin{equation} \min_{S_{v}} \; h(S_{v})+ C_{1}\|S_{v}\|_{*}, \end{equation} where \begin{equation} h(S_{v}) = \frac{1}{nm} \sum_{i=1}^{n}\|x_{i}^{v}-A_{v}S_{v}B_{v}^{T}z_{i}\|_{2}^{2}. \end{equation} Similarly, the proximal gradient technique can be applied to obtain an equivalent objective function, {\small \begin{equation}\label{eq:pro_sv} \min_{S_{v}} \; \frac{\eta_{t}}{2}\|S_{v}-\big( S_{v}^{t-1} - \frac{1}{\eta_{t}}\nabla h(S_{v}^{t-1})\big) \|_{F}^{2} + C_{1}\|S_{v}\|_{*}. \end{equation} }\noindent Since $S_{v}$ is a small $k_{v}\times k_{v}$ matrix, using the SVT method with an exact SVD operation to solve $S_{v}$ is feasible. \begin{algorithm}[tb] \caption{Streaming View Learning} \label{alg:lrml} \begin{algorithmic} \STATE {\bfseries Input:} $\{X^{v}\}_{v=1}^{m+1}$, $\{z_{i}\}_{i=1}^{n}$, $\{A_{v}, S_{v}, B_{v}\}_{v=1}^{m}$ \STATE {\bfseries PART 1- Solve new view function:} \STATE {\bfseries Initialize} $X_{v}Z^{T}(ZZ^{T})^{-1}={A}_{v}\Sigma_{v}{B}_{v}^{T}$, $v=m+1$ \FOR {$t=1, \cdots, $} \STATE $(\widehat{A}_{v},\widehat{\Sigma}_{v},\widehat{B}_{v}^{T}) \leftarrow Power\big(W_{v}^{t-1} - \frac{1}{\eta_{t}}\nabla g(W_{v}^{t-1})\big)$ \STATE $(\widetilde{A}_{v},\widetilde{B}_{v}) \leftarrow SVT(\widehat{A}_{v}\widehat{\Sigma}_{v}\widehat{B}_{v}^{T}, C_{1}/\eta_{t})$ \STATE $A_{v}\leftarrow QR([A_{v}, \widetilde{A}_{v}])$ and $B_{v}\leftarrow QR([B_{v}, \widetilde{B}_{v}])$ \STATE Solve $S_{v}$ via Problem (\ref{eq:pro_sv}) \STATE $(A_{v}^{'}, S_{v}^{'}, B_{v}^{'}) \leftarrow SVD(S_{v})$ \STATE $A_{v}\leftarrow A_{v}A_{v}^{'}$, $S_{v}\leftarrow S_{v}S_{v}^{'}$, $B_{v}\leftarrow B_{v}B_{v}^{'}$ \STATE $W_{v}^{t}\leftarrow A_{v}S_{v}B_{v}^{T}$ \ENDFOR \STATE {\bfseries PART 2- Upgrade view functions:} \FOR {$t=1, \cdots, $} \STATE $\forall \; i$
Solve $z_{i}$ via Problem (\ref{eq:pro_zi}) \STATE $\forall \; v$ Solve $S_{v}$ via Problem (\ref{eq:pro_sv}) \ENDFOR \STATE {\bfseries Return $\{A_{v}, S_{v}, B_{v}\}_{v=1}^{m+1}$} \end{algorithmic} \end{algorithm} \section{Theoretical Analysis} \label{sec:theory} Here we conduct a theoretical analysis to reveal important properties of the proposed streaming view learning algorithm. We use the following theorem to show that the latent representations $Z=[z_{1}, \cdots, z_{n}]$ become increasingly stable as streaming view learning progresses. \begin{theorem}\label{the:z} Given the latent representations $Z^{m-1}$ learned over $m-1$ views, and $Z^{m}$ learned over past $m-1$ views and the new $m$-th view (i.e. $m$ views in total), we have $\|Z^{m}-Z^{m-1}\|_{F}=\mathcal{O}(1/m)$. \end{theorem} \begin{proof} Given \begin{equation} J_{m}(Z) = \frac{1}{m}\sum_{v=1}^{m}\ell(X^{v}, W^{v}, Z)+C_{2}\|Z\|_{F}^{2}, \end{equation} we have {\small \begin{equation}\nonumber \begin{split} J_{m}(Z) - &J_{m-1}(Z) = \frac{1}{m}\ell(X^{m}, W^{m}, Z) + \frac{1}{m}\sum_{v=1}^{m-1}\ell(X^{v}, W^{v}, Z)\\ - & \frac{1}{m-1}\sum_{v=1}^{m-1}\ell(X^{v}, W^{v}, Z) \\ = & \frac{1}{m}\ell(X^{m}, W^{m}, Z) - \frac{1}{m(m-1)}\sum_{v=1}^{m-1}\ell(X^{v}, W^{v}, Z). \end{split} \end{equation} }\noindent Since $\ell(\cdot)$ used in the algorithm is Lipschitz in its last argument, $J_{m}(Z) - J_{m-1}(Z)$ has a Lipschitz constant $\mathcal{O}(1/m)$. Assuming the Lipschitz constant of $J_{m}(Z) - J_{m-1}(Z)$ is $\theta_{m}$, we have {\small \begin{equation}\nonumber \begin{split} J_{m-1}(Z^{m}) - &J_{m-1}(Z^{m-1}) = J_{m-1}(Z^{m}) - J_{m}(Z^{m}) + J_{m}(Z^{m}) \\ -& J_{m}(Z^{m-1}) + J_{m}(Z^{m-1}) - J_{m-1}(Z^{m-1})\\ \leq & J_{m-1}(Z^{m}) - J_{m}(Z^{m}) + J_{m}(Z^{m-1}) - J_{m-1}(Z^{m-1})\\ \leq & \theta_{m}\|Z^{m}-Z^{m-1}\|_{F}.
\end{split} \end{equation} }\noindent where the first inequality uses $J_{m}(Z^{m})\leq J_{m}(Z^{m-1})$, since $Z^{m}$ minimizes $J_{m}$. Since $Z^{m-1}$ is the minimum of $J_{m-1}(Z)$, we have \begin{equation} J_{m-1}(Z^{m})-J_{m-1}(Z^{m-1}) \geq 2C_{2}\|Z^{m}-Z^{m-1}\|_{F}^{2}. \end{equation} Combining the above results, we have \begin{equation} \|Z^{m}-Z^{m-1}\|_{F} \leq \frac{\theta_{m}}{2C_{2}}=\mathcal{O}(1/m), \end{equation} which completes the proof. \end{proof} Theorem \ref{the:z} reveals that the streaming views are helpful for deriving a stable multi-view model. We next analyze the optimality of the discovered view function for the newly arrived view using the following theorem. \begin{theorem}\label{the:subspace} The optimization steps of Part 1 in Algorithm 1 can guarantee that the solved function subspaces of the new view converge to a stationary point. \end{theorem} The proof of Theorem \ref{the:subspace} is provided in the supplementary material due to the page limit. This remarkable result shows that the proposed algorithm can efficiently discover the optimal view function subspaces. We next analyze the influence of perturbation of the latent representations on the view function estimation by the following theorem, whose detailed proof is provided in the supplementary material. \begin{theorem}\label{the:diff_w} Suppose $\|X^{v}\|\leq \Upsilon$ for each view. Given the latent representation $Z$ with $\|Z\|_{F}\leq \Omega$, the optimal view function on the $v$-th view is denoted as $W_{v}$. For $\widetilde{Z}$ with $\|\widetilde{Z}-Z\|_{F}\leq \epsilon$, the optimal view function on the $v$-th view is defined as $W_{v}^{'}$. Suppose that the smallest eigenvalue of $\widetilde{Z}\widetilde{Z}^{T}$ is lower bounded by $\lambda>0$, and that the ranks of both $W_{v}$ and $W_{v}^{'}$ are lower than $k$.
The following error bound holds: \begin{equation} \|W_{v}^{'}-W_{v}\|_{F} \leq \frac{1}{\lambda} \big(\Upsilon^{2}\frac{2\epsilon\Omega+\epsilon^{2}}{C_{1}} + \epsilon\Upsilon+ 2\sqrt{k+1}\big). \end{equation} \end{theorem} This theoretical analysis allows us to summarize as follows. For the newly arrived view, the proposed algorithm is guaranteed to discover its optimal view function based on the convergence analysis in Theorem \ref{the:subspace}. According to Theorem \ref{the:z}, the learned latent representation will become increasingly stable with more streaming views. Hence, given a small perturbation on the latent representation matrix $Z$, the difference between the target view functions is also small. Most importantly, according to perturbation theory \citep{li1998relative}, it is thus reasonable to assume that the view function subspaces are approximately consistent given the bounded perturbation on the matrix; therefore, it is feasible to simply fine tune the combination weights of these subspaces for better reconstruction. On the other hand, if the number of past views is small, we can use the standard multi-view learning algorithm to re-train the model over past views and the new views together with acceptable resource cost. \section{Experiments} We next evaluated the proposed streaming view learning (SVL) algorithm for clustering and classification on real-world datasets. The SVL algorithm was compared to canonical correlation analysis (CCA) \citep{hardoon2004canonical}, the convex multi-view subspace learning algorithm (MCSL) \citep{white2012convex}, the factorized latent spaces with structured sparsity algorithm (FLSSS) \citep{jia2010factorized}, and the shared Gaussian process latent variable model (sGPLVM) \citep{shon2005learning}. Since these comparison algorithms were not designed for the streaming view setting, we adapted the algorithms for a fair comparison such that they employed the idea of multi-view learning to handle new views.
Specifically, for each multi-view comparison algorithm, the outputs of the well-trained multi-view model over past views were treated as temporary views, which were then combined with the newly arrived view to train a new multi-view model. Note that we did not adopt the trick of completely re-training the multi-view learning algorithms using past views and new views simultaneously, since it is infeasible for practical applications considering the intensive computational cost. The real-world datasets used in the experiments were the \textit{Handwritten Numerals} and \textit{PASCAL VOC'07} datasets. The Handwritten Numerals dataset is composed of $2,000$ data points in the ten digit classes 0 to 9, where each class contains 200 data points. Six types of features are employed to describe the data: Fourier coefficients of the character shapes (FOU), profile correlations (FAC), Karhunen-Lo\`{e}ve coefficients (KAR), pixel averages in $2\times 3$ windows (PIX), Zernike moments (ZER), and morphological features (MOR). The \textit{PASCAL VOC'07} dataset contains around $10,000$ images, each of which is annotated with labels from 20 categories. Sixteen types of features have been used to describe each image, including GIST, image tags, 6 color histograms (RGB, LAB, and HSV over single-scale or multi-scale images), and 8 bag-of-features descriptors (SIFT and hue descriptors extracted densely or at Harris-Laplacian interest points on single-scale or multi-scale images). \subsection{Multi-view Clustering and Classification} For each algorithm, half of the total views were used for initialization to well train a base multi-view model, and then multi-view learning was conducted with the streaming views. We fixed the dimension of the latent subspace as $100$ for the different algorithms. Based on the multi-view example subspaces learned through the proposed SVL algorithm and its comparison algorithms, the k-means and SVM methods were launched for subsequent clustering and classification, respectively.
Clustering performance was assessed by normalized mutual information (NMI) and accuracy (ACC), while classification performance was measured using mean average precision (mAP). \begin{table*}[!tbh] \caption{NMI on the \textit{Handwritten Numerals} dataset. The number following each feature denotes the number of new views processed. `Single' implies directly launching k-means on the current view.} \label{tab:clustering} \begin{center} \begin{tabular}{ccccccc} \hline Algorithm & FAC & FOU & KAR (0) & MOR (1) & PIX (2) & ZER (3)\\ \hline \hline Single & $0.679\pm 0.032$ & $0.547\pm 0.028$ & $0.666\pm 0.030$ & $0.643\pm 0.034$ & $0.703\pm 0.040$ & $0.512\pm 0.025$ \\ CCA & - & - & $0.755\pm 0.039$ & $0.777\pm 0.038$ & $0.772\pm 0.061$ & $0.799\pm 0.038$ \\ FLSSS & - & - & $0.833\pm 0.047$ & $0.837\pm 0.029$ & $0.830\pm 0.028$ & $0.840\pm 0.027$ \\ sGPLVM & - & - & $0.785\pm 0.044$ & $0.794\pm 0.021$ & $0.799\pm 0.056$ & $0.827\pm 0.055$ \\ MCSL & - & - & $0.798\pm 0.028$ & $0.805\pm 0.047$ & $0.814\pm 0.069$ & $0.815\pm 0.026$ \\ SVL & - & - & $0.826\pm 0.049$ & $0.840\pm 0.044$ & $0.866\pm 0.037$ & $0.871\pm 0.042$ \\ \hline \end{tabular}\vskip -0.2in \end{center}\vskip -0.1in \end{table*} \begin{table*}[!tbh] \vskip -0.2in \caption{ACC on the \textit{Handwritten Numerals} dataset. The number following each feature denotes the number of new views processed.
`Single' implies directly launching k-means on the current view.} \label{tab:clustering_acc} \begin{center} \begin{tabular}{ccccccc} \hline Algorithm & FAC & FOU & KAR (0) & MOR (1) & PIX (2) & ZER (3) \\ \hline \hline Single & $0.707\pm 0.065$ & $0.556\pm 0.062$ & $0.689\pm 0.051$ & $0.614\pm 0.058$ & $0.694\pm 0.067$ & $0.534\pm 0.052$ \\ CCA & - & - & $0.709\pm 0.051$ & $0.710\pm 0.015$ & $0.706\pm 0.037$ & $0.809\pm 0.065$ \\ FLSSS & - & - & $0.819\pm 0.038$ & $0.831\pm 0.035$ & $0.851\pm 0.027$ & $0.849\pm 0.045$ \\ sGPLVM & - & - & $0.777\pm 0.052$ & $0.796\pm 0.035$ & $0.788\pm 0.053$ & $0.805\pm 0.058$ \\ MCSL & - & - & $0.790\pm 0.045$ & $0.804\pm 0.055$ & $0.804\pm 0.036$ & $0.836\pm 0.050$ \\ SVL & - & - & $0.813\pm 0.052$ & $0.816\pm 0.032$ & $0.908\pm 0.050$ & $0.927\pm 0.050$ \\ \hline \end{tabular} \end{center}\vskip -0.1in \end{table*} \begin{table}[th] \vskip -0.2in \caption{mAP on the \textit{PASCAL VOC'07} dataset. `8' means that multi-view learning begins with 8 views, while the following number denotes the number of new views processed.} \label{tab:classification} \begin{center} \begin{tabular}{cccccc} \hline Algorithm & $8(0)$ & $8(2)$ & $8(4)$ & $8(6)$ & $8(8)$ \\ \hline \hline CCA & $0.314$ & $0.335$ & $0.347$ & $0.357$ & $0.368$ \\ FLSSS & $0.507$ & $0.518$ & $0.521$ & $0.532$ & $0.539$ \\ sGPLVM & $0.441$ & $0.458$ & $0.473$ & $0.487$ & $0.487$ \\ MCSL & $0.448$ & $0.463$ & $0.469$ & $0.475$ & $0.480$ \\ SVL & $0.554$ & $0.558$ & $0.562$ & $0.564$ & $0.571$ \\ \hline \end{tabular} \end{center}\vskip -0.15in \end{table} The performance of the different algorithms with respect to the progress of streaming view learning on the clustering task is shown in Tables \ref{tab:clustering} and \ref{tab:clustering_acc}. Classification results under similar settings are presented in Table \ref{tab:classification}.
In each table, the multi-view learning algorithms learn the views from the left to the right column in a streaming manner, such that the results presented on the right side have already been helped by the views on the left. The different multi-view learning algorithms consistently improve clustering and classification performance when more new view information becomes available. Although the base multi-view learning model of the proposed SVL algorithm only achieves comparable or slightly inferior performance to that of the comparison algorithms, SVL significantly improves its performance by optimally learning new view functions and upgrading past view functions, such that the advantage of SVL becomes more obvious as the number of processed new views increases. Specifically, in the fifth column of Table \ref{tab:clustering}, the NMI of SVL improves by about $20\%$ over that of the single-view algorithm and by $5\%$ over that of the multi-view FLSSS algorithm. On the \textit{PASCAL VOC'07} dataset, training a multi-view model over 8 views is already computationally expensive, let alone completely re-training with new views. \subsection{Algorithm Analysis} We next varied the dimensionality $d$ of the latent representations. The clustering performance of SVL on the \textit{Handwritten Numerals} dataset is presented in Table \ref{tab:d}. The performance of the lower-dimensional latent representations is limited, whereas with increased $d$ the latent representations have more power to describe multi-view examples, and the SVL algorithm achieves stable performance.
\begin{table}[th] \vskip -0.2in\setlength{\tabcolsep}{3pt} \caption{NMI of SVL with different dimensionalities of latent representations on the \textit{Handwritten Numerals} dataset.} \label{tab:d} \begin{center} \begin{tabular}{ccccccc} \hline $d$ & FAC & FOU & KAR (0) & MOR (1) & PIX (2) & ZER (3) \\ \hline \hline 10 & - & - & $0.655$ & $0.655$ & $0.688$ & $0.705$\\ 20 & - & - & $0.680$ & $0.762$ & $0.771$ & $0.772$\\ 50 & - & - & $0.755$ & $0.778$ & $0.809$ & $0.838$\\ 100 & - & - & $0.826$ & $0.840$ & $0.866$ & $0.871$\\ 150 & - & - & $0.829$ & $0.834$ & $0.865$ & $0.888$\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[!thb]\vskip -0.2in \begin{center} \includegraphics[width=0.8\columnwidth]{./view_num3.pdf} \end{center}\vskip -0.2in \caption{Classification performance of SVL using different numbers of views for initialization on the \textit{PASCAL VOC'07} dataset.} \label{fig:m}\vskip -0.2in \end{figure} \begin{figure}[!thb]\vskip -0.2in \begin{center} \includegraphics[width=0.8\columnwidth]{./view_order2.pdf} \end{center}\vskip -0.2in \caption{Classification Performance of SVL with distinct view orders on the PASCAL VOC'07 dataset. }\vskip -0.1in \label{fig:order} \end{figure} To examine the influence of the number of views used to initialize the base multi-view SVL model, we started SVL with different numbers of views on the \textit{PASCAL VOC'07} dataset. The variability in performance is presented in Figure \ref{fig:m}. If the base multi-view model was initialized with insufficient views, the resulting performance was limited (see the first group in Figure \ref{fig:m}). This is due to the large estimation error of the functions over the views used to initialize the model. Conversely, if the multi-view model over past views was already well trained, we easily applied SVL to extend the model to handle new views without intensive computational cost, whilst also guaranteeing stable performance improvements. 
These phenomena are consistent with our theoretical analyses. Finally, we examined the influence of the order of streaming views on the learning performance of SVL, and the classification results are shown in Figure \ref{fig:order}. For each group in Figure \ref{fig:order}, the view order is randomly determined. It can be seen that although the classification performance varies across runs, the resulting performances are roughly equivalent under distinct streaming view orders. \section{Conclusions} \vspace{-0.05in} Here we investigate the streaming view problem in multi-view learning, in which views arrive in a streaming manner. Instead of discarding the multi-view model trained well over past views, we regard the subspaces of the well-trained view functions as stable and fine-tune the weights for combining the function subspaces while processing new views. In this way, the resulting SVL algorithm can efficiently learn view functions for the new view and update view functions for past views. The convergence of the proposed algorithm is theoretically studied, and the influence of streaming views on the multi-view model is addressed. Comprehensive experiments conducted on real-world datasets demonstrate the significance of studying the streaming view problem and the effectiveness of the proposed algorithm.
\section{Introduction} By an interpolation inequality in quasi-Banach spaces $X_1 , X_2$, and $X$ with $X_1 \cap X_2 \subset X$, we mean any inequality of the form $$ \|f\|_X \lesssim \|f\|_{X_1}^{1-\theta} \|f\|_{X_2}^\theta \quad (f \in X_1 \cap X_2 ), $$ where $0< \theta < 1$ is a constant. Here for two nonnegative quantities $a$ and $b$, we write $a\lesssim b$ if $a \le C b$ for some positive constant $C$. If $a \lesssim b $ and $b \lesssim a $, we write $a \sim b$. The celebrated Gagliardo-Nirenberg inequality \cite{Ga,Niren} is an interpolation inequality in Sobolev spaces of integral order on the $n$-dimensional Euclidean space ${\mathbb R}^n$. Interpolation inequalities in more general function spaces have been obtained for fractional Sobolev spaces \cite{BreMir}, for Triebel-Lizorkin and Besov spaces \cite{BreMir,Ozawa,Wa1}, for Fourier-Herz spaces \cite{Chi}, for Sobolev-Lorentz spaces \cite{dao,HaYuZh,McC1,Wa2}, and most recently for Triebel-Lizorkin-Lorentz and Besov-Lorentz spaces \cite{WWY}. For $s \in \mathbb R$, $1 \le p< \infty$, and $1 \le q, r \le \infty$, the Triebel-Lizorkin-Lorentz space $F_{p,q}^{s,r}$ on $\mathbb R^n$ is the generalization of the Triebel-Lizorkin space $F_p^{s,r}$ obtained by replacing the underlying Lebesgue space $L^p$ in the definition of $F_p^{s,r}$ by the Lorentz space $L^{p,q}$. Sobolev-Lorentz spaces $H_{p,q}^s$ and Besov-Lorentz spaces $B_{p,q}^{s,r}$ are defined in the same way. The corresponding homogeneous spaces can also be considered and are denoted by $\dot{F}_{p,q}^{s,r}$, $\dot{H}_{p,q}^s$, and $\dot{B}_{p,q}^{s,r}$, respectively. All these function spaces of Sobolev-Lorentz type will be studied in more detail in the preliminary Section 2. The main purpose of this paper is to establish general interpolation inequalities in Triebel-Lizorkin-Lorentz spaces and Besov-Lorentz spaces for both inhomogeneous and homogeneous cases.
First of all, for Triebel-Lizorkin-Lorentz spaces, we consider the following interpolation inequalities of the most general form: \begin{equation}\label{itp-eq-inhomf} \|f\|_{\ihsf{s}{p}{q}{r}}\lesssim \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\theta , \end{equation} \begin{equation}\label{itp-eq-homf} \|f\|_{\hsf{s}{p}{q}{r}}\lesssim \|f\|_{\hsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\hsf{s_2}{p_2}{q_2}{r_2}}^\theta , \end{equation} where $s , s_1 , s_2 \in \mathbb{R}$, $1 \le p,p_1,p_2 < \infty$, $ 1 \le q,q_1,q_2, r,r_1,r_2 \le \infty$, and $0<\theta<1$ are numbers satisfying some conditions. To state our results in a concise way, let $s_* , p_* , q_*$, and $r_*$ be defined by $$ s_* = (1-\theta )s_1 + \theta s_2 ,\quad \frac{1}{p_*}= \frac{1-\theta}{p_1}+\frac{\theta}{p_2} , $$ $$ \frac{1}{q_*}= \frac{1-\theta}{q_1}+\frac{\theta}{q_2} , \quad\mbox{and}\quad \frac{1}{r_*}= \frac{1-\theta}{r_1}+\frac{\theta}{r_2}. $$ We first show that the condition \begin{equation}\label{nec-condition-common} s_* -s \ge \frac{n}{p_*}-\frac{n}{p} \ge 0 \end{equation} is necessary and sufficient for the interpolation inequality (\ref{itp-eq-inhomf}) to hold for $q \ge q_* $ and $r \ge r_*$. See Theorems \ref{thm-itp-inhomf} and \ref {thm-necessity-00} in Section 3 for precise statements. We are more interested in finding sufficient and necessary conditions for (\ref{itp-eq-inhomf}) to hold for $q < q_* $ or $r < r_* $. 
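Since the conditions above are purely arithmetic in the parameters, a concrete instance is easy to check. The following short script is an illustrative aside, not part of the formal development; all numerical values of $n$, $\theta$, $s_i$, $p_i$ are hypothetical choices made only to exhibit one admissible configuration of the condition $s_* - s \ge n/p_* - n/p \ge 0$.

```python
# Illustrative sanity check of the condition s_* - s >= n/p_* - n/p >= 0.
# All concrete values (n, theta, s_i, p_i) are hypothetical choices.
n = 3.0
theta = 0.5
s1, s2, s = 0.0, 2.0, 0.5
p1, p2, p = 2.0, 4.0, 4.0

s_star = (1 - theta) * s1 + theta * s2            # convex combination of smoothness indices
p_star = 1.0 / ((1 - theta) / p1 + theta / p2)    # harmonic-mean integrability index = 8/3

lhs = s_star - s                                  # excess smoothness, here 0.5
mid = n / p_star - n / p                          # dimensional gap, here 9/8 - 3/4 = 0.375
ok = lhs >= mid >= 0.0                            # the necessary condition holds
```

With these values, $s_* = 1$, $p_* = 8/3$, and $s_* - s = 0.5 \ge 0.375 = n/p_* - n/p \ge 0$, so the parameters are admissible.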
For the case when $r =1 < r_* =\infty$, we prove a rather complete result in Theorems \ref{thm-itp-inhomf} and \ref{thm-necessity-0}: if either one of the two conditions \begin{enumerate} \item[(a)] $s_* =s$, $p_*=p$, $s_1\neq s_2$ \item[(b)] $s_* > s$, $s_*-s \ge n/p_*-n/p \ge 0$ \end{enumerate} is satisfied, then (\ref{itp-eq-inhomf}) holds when $q \ge q_*$ and $r =1 < r_* =\infty$; conversely, if (\ref{itp-eq-inhomf}) holds when $q = q_*$, $r =1 < r_* =\infty$, and $1<p, p_1 , p_2 < \infty$, then either (a) or (b) must be satisfied. However, for the case when $q=1< q_* = \infty$, we have only a partial result. In particular, in Theorem \ref{thm-necessity}, we show that if (\ref{itp-eq-inhomf}) holds when $q = 1< q_* =\infty$, $r =1 < r_* =\infty$, and $1<p, p_1 , p_2 < \infty$, then one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $p_*=p$, $s_1\neq s_2$, $p_1\neq p_2$, $s_2-s_1\neq n/p_2 - n/p_1$. \item $s_* > s$, $p_*=p$, $p_1 \neq p_2$. \item $s_*-s=n/p_* - n/p>0$, $s_2-s_1\neq n/p_2 - n/p_1$. \item $s_*-s>n/p_*-n/p>0$. \end{enumerate} Furthermore, it is shown in Theorem \ref{thm-itp-inhomf} that the conditions (ii), (iii), and (iv) are sufficient for the inequality (\ref{itp-eq-inhomf}) to hold for $q = 1< q_* =\infty$ and $r =1 < r_* =\infty$. However, we have been unable to prove the sufficiency of (i), which seems to remain open. We next establish several results on sufficient and necessary conditions for the interpolation inequality (\ref{itp-eq-homf}) in homogeneous Triebel-Lizorkin-Lorentz spaces.
Among other things, we show in Theorem \ref{thm-itp-homf} that if the condition (iii) is satisfied and $1< p, p_1 , p_2 < \infty$, then (\ref{itp-eq-homf}) holds for $q = 1< q_* =\infty$ and $r =1 < r_* =\infty$, that is, there holds the following interpolation inequality: \begin{equation}\label{inter-ineq-homo-TLL} \|f\|_{\dot F^{s,1}_{p,1}}\lesssim \|f\|_{\dot F^{s_1,\infty}_{p_1,\infty}}^{1-\theta}\|f\|_{\dot F^{s_2,\infty}_{p_2,\infty}}^\theta . \end{equation} For inhomogeneous and homogeneous Besov-Lorentz spaces, we consider the following interpolation inequalities: \begin{equation}\label{itp-eq-inhomb} \|f\|_{\ihsb{s}{p}{q}{r}}\lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\theta , \end{equation} \begin{equation}\label{itp-eq-homb} \|f\|_{\hsb{s}{p}{q}{r}}\lesssim \|f\|_{\hsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\hsb{s_2}{p_2}{q_2}{r_2}}^\theta , \end{equation} where $s , s_1 , s_2 \in \mathbb{R}$, $1 \le p,p_1,p_2 \le \infty$, $ 1 \le q,q_1,q_2, r,r_1,r_2 \le \infty$, and $0<\theta<1$. In Theorems \ref{thm-itp-inhomb} and \ref{thm-necessity-00-b}, we show that the condition (\ref{nec-condition-common}) is necessary and sufficient for the interpolation inequality (\ref{itp-eq-inhomb}) to hold for $q \ge q_* $ and $r \ge r_*$. It will also be shown in Theorem \ref{thm-itp-inhomb} that if one of the conditions (ii), (iii), and (iv) is satisfied, then the interpolation inequality \[ \|f\|_{\ihsb{s}{p}{1}{1}}\lesssim \|f\|_{\ihsb{s_1}{p_1}{\infty}{\infty}}^{1-\theta}\|f\|_{\ihsb{s_2}{p_2}{\infty}{\infty}}^\theta \] holds even for the limiting case when one of $ p , p_1 $, and $p_2$ equals $1$ or $ \infty$.
The corresponding result for homogeneous spaces is guaranteed by Theorem \ref{thm-itp-homb}, from which we conclude that if the condition (iii) is satisfied, then \begin{equation}\label{inter-ineq-homo-BL} \|f\|_{\dot B^{s,1}_{p,1}}\lesssim \|f\|_{\dot B^{s_1,\infty}_{p_1,\infty}}^{1-\theta}\|f\|_{\dot B^{s_2,\infty}_{p_2,\infty}}^\theta . \end{equation} This result was already proved by Wang et al. \cite{WWY} for the special case when $1< p_1 , p_2 \le \infty$. The second purpose of this paper is to extend Gagliardo-Nirenberg inequalities to the setting of Lorentz spaces by making use of the interpolation inequalities (\ref{itp-eq-homf}), (\ref{inter-ineq-homo-TLL}), and (\ref{inter-ineq-homo-BL}) together with some embedding results. By a standard argument, it can be shown (see Theorem \ref{SFE2-inhom}) that $$ \|f\|_{\hsf{s}{p}{q}{2}} \sim \|f\|_{\hs{s}{p}{q}} = \|\Lambda^s f\|_{L^{p,q}} $$ for $s\in\mathbb{R}$, $1<p<\infty$, and $1\le q \le \infty$, where $\Lambda^s = (-\Delta)^{s/2}$ is the fractional Laplacian of order $s$. Therefore, from (\ref{inter-ineq-homo-TLL}), we can deduce that the generalized Gagliardo-Nirenberg inequality \begin{equation}\label{inter-ineq-homo-SL} \|\Lambda^s f\|_{L^{p,q}}\lesssim \|\Lambda^{s_1} f\|_{L^{p_1,q_1}}^{1-\theta}\|\Lambda^{s_2} f\|_{L^{p_2,\infty}}^\theta \end{equation} holds for $1 \le q , q_1\le \infty$, if $s, s_1 , s_2 \in \mathbb R$, $1< p, p_1 , p_2 < \infty$, and $0<\theta < 1$ satisfy the condition (iii). In particular, taking $s=s_1 =0$ in (\ref{inter-ineq-homo-SL}), we have \begin{equation}\label{inter-ineq-homo-SL-00} \|f\|_{L^{p,q}}\lesssim \|f\|_{L^{p_1,q_1}}^{1-\theta}\|\Lambda^{s_2} f\|_{L^{p_2,\infty}}^\theta , \end{equation} provided that $s_2 > 0$, $1 < p , p_1 , p_2 < \infty$, and $0<\theta<1$ satisfy \begin{equation}\label{cond-GGNI} \frac{1}{p}=\frac{1-\theta}{p_1}+\theta\left(\frac{1}{p_2}-\frac{s_2}{n}\right) \quad\text{and}\quad\frac{1}{p_1} \neq \frac{1}{p_2}-\frac{s_2}{n}.
\end{equation} Furthermore, it will be shown in Theorem \ref{cor-BL1} that (\ref{inter-ineq-homo-SL-00}) holds even for the limiting case when $p=q=\infty$ or $1 \le p_1 =q_1 \le \infty$, if $s_2 > 0$, $1 < p \le \infty$, $1 \le p_1 \le \infty$, $1< p_2 < \infty$, and $0<\theta<1$ satisfy (\ref{cond-GGNI}). Several useful inequalities can be derived from (\ref{inter-ineq-homo-SL-00}); for instance, if $1 \le p_1 <p < \infty$, $1<p_2 < \infty$, and $s_2 =n/p_2 $, then \[ \|f\|_{L^{p,1}} \lesssim \|f\|_{L^{p_1, q_1}}^{p_1 /p}\|\Lambda^{n/p_2} f\|_{L^{p_2 ,\infty}}^{1-p_1/p} , \] where $q_1 =\infty$ if $p_1 >1$ and $q_1 =1$ if $p_1 =1$. This inequality holds even when $p_1 =1$ and $q_1=\infty$, by Theorem \ref{cor2}. More inequalities are provided in the examples following Theorems \ref{cor-BL1} and \ref{cor2}, which generalize recent results in \cite{dao,McC1,WWY} to some limiting cases. The rest of the paper is organized as follows. In Section 2, we provide some preliminary results for Triebel-Lizorkin-Lorentz spaces and Besov-Lorentz spaces. In Section 3, we state and prove our main results on sufficiency and necessity for interpolation inequalities in Triebel-Lizorkin-Lorentz spaces. Analogous results for Besov-Lorentz spaces are established in Section 4. Section 5 is devoted to deriving generalized Gagliardo-Nirenberg inequalities in Lorentz spaces from the interpolation inequalities in Sections 3 and 4. Finally, proofs of some results in the previous sections are provided in Appendix. \section{Preliminaries} For two quasi-normed spaces $X$ and $Y$, we say that $X$ is continuously embedded into $Y$ and write $X\hookrightarrow Y$ if $X\subset Y$ and $\|f\|_{Y} \lesssim \|f\|_{X}$ for all $f \in X$. If $X$ is not continuously embedded into $Y$, we write $X\not\hookrightarrow Y$. Furthermore, $X=Y$ always means that $X\hookrightarrow Y$ and $Y\hookrightarrow X$. 
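Returning briefly to the scaling condition \eqref{cond-GGNI}: with the classical two-dimensional parameters $\theta = 1/2$, $p_1 = p_2 = 2$, $s_2 = 1$ (those of Ladyzhenskaya's inequality $\|f\|_{L^4} \lesssim \|f\|_{L^2}^{1/2} \|\nabla f\|_{L^2}^{1/2}$ on $\mathbb{R}^2$), the first equation forces $p = 4$, and the side condition holds since $1/p_1 = 1/2 \neq 0 = 1/p_2 - s_2/n$. The following two-line computation is an illustrative aside only:

```python
# Scaling check for the condition (cond-GGNI); classical 2-D parameters
# theta = 1/2, p1 = p2 = 2, s2 = 1 (an illustrative instance, not the general case).
n, theta, s2 = 2.0, 0.5, 1.0
p1, p2 = 2.0, 2.0

inv_p = (1 - theta) / p1 + theta * (1.0 / p2 - s2 / n)   # = 1/4
p = 1.0 / inv_p                                          # the forced exponent p = 4
side = abs(1.0 / p1 - (1.0 / p2 - s2 / n))               # must be nonzero: here 1/2
```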
In this preliminary section, we define Lorentz spaces, Triebel-Lizorkin-Lorentz spaces and Besov-Lorentz spaces. Basic interpolation and embedding results are then introduced, with proofs of some results being postponed later in Appendix. \subsection{Lorentz spaces} For $1\le p < \infty$, $1\le q \le \infty$ or $p=q=\infty$, let $L^{p,q}$ denote the Lorentz space of measurable functions $f$ on ${\mathbb R}^n$, which is a quasi-Banach space equipped with the quasi-norm \begin{equation*}\label{lorentz-quasi-norm} \|f \|_{L^{p,q}}= \begin{cases} \displaystyle \left(p \int _0 ^\infty [\alpha \mu_f(\alpha)^{1/p}]^q\frac{d\alpha}{\alpha} \right )^{1/q}&\text{if}\,\, p,q<\infty\\ \quad \displaystyle \sup_{\alpha>0} \alpha \mu_f(\alpha)^{1/p} &\text{if}\,\, p<\infty,\,q=\infty \\ \inf \{\alpha>0: \mu_f(\alpha) =0\} &\text{if}\,\, p=q=\infty , \end{cases} \end{equation*} where $\mu_f$ denotes the distribution function of $f$: $\mu_f(\alpha)= |\{x\in\mathbb{R}^n:|f(x)|>\alpha\}|$. For notational convenience, we define $L^{\infty,q}=\{0\}$ for $1 \le q<\infty$. Note then that $L^{p,q_1}\hookrightarrow L^{p,q_2}$ if and only if $1\le q_1\le q_2\le\infty$. Moreover, if $1 \le p=q \le \infty$, then $L^{p,q}=L^{p}$. It is also well-known that if $1<p<\infty$, then the quasi-norm $\|\cdot\|_{L^{p,q}}$ is equivalent to a norm (see, e.g., \cite[Lemma 4.4.5]{Bennett}). The following H\"older inequality in Lorentz spaces is due to O'Neil \cite[Theorem 3.4]{oneil}. \begin{lem}\label{Holder-Lorentz} Let $1\le p,p_1,p_2,q,q_1,q_2 \le \infty$ satisfy $$ \frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}\quad\text{and}\quad \frac{1}{q}\le \frac{1}{q_1}+\frac{1}{q_2}. $$ Then for all $f\in L^{p_1,q_1}$ and $g\in L^{p_2,q_2}$, $$ \|fg\|_{L^{p,q}} \lesssim \|f\|_{L^{p_1,q_1}} \|g\|_{L^{p_2,q_2}}. $$ \end{lem} Let $\Gamma$ be an index set (typically, $\Gamma$ is the set $\mathbb{Z}$ of all integers or the set $\mathbb{N}_0$ of all nonnegative integers). 
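Before turning to sequence spaces, here is a quick numerical illustration of the quasi-norm just defined (an aside, not part of the paper): for a simple function $f = c\,\chi_E$ with $|E| = m$, the distribution function is $\mu_f(\alpha) = m$ for $\alpha < c$ and $0$ otherwise, so for $p, q < \infty$ the defining integral evaluates in closed form to $\|f\|_{L^{p,q}} = (p/q)^{1/q}\, c\, m^{1/p}$; for $p = q$ this reduces to the usual $L^p$ norm $c\, m^{1/p}$. The sketch below confirms the formula by midpoint-rule quadrature.

```python
import numpy as np

# Lorentz quasi-norm of f = c * chi_E with |E| = m, computed two ways:
# closed form (p/q)^{1/q} * c * m^{1/p} versus direct quadrature of
# ( p * int_0^c (alpha * mu_f(alpha)^{1/p})^q dalpha/alpha )^{1/q},
# using that mu_f(alpha) = m on (0, c) and 0 beyond.
def lorentz_norm_indicator(c, m, p, q, num=100000):
    alpha = (np.arange(num) + 0.5) * (c / num)          # midpoints of (0, c)
    integrand = p * (alpha * m ** (1.0 / p)) ** q / alpha
    return (np.sum(integrand) * (c / num)) ** (1.0 / q)

c, m, p, q = 1.0, 2.0, 3.0, 2.0
closed = (p / q) ** (1.0 / q) * c * m ** (1.0 / p)
numeric = lorentz_norm_indicator(c, m, p, q)            # agrees with closed
```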
For $1 \le r\le \infty$, we denote by $l^r=l^r(\Gamma)$ the Banach space of all complex-valued sequences $ \{a_j\} =\{a_j\}_{j\in\Gamma}$ such that \begin{equation*} \|\{a_j\}\|_{l^r}= \begin{cases} \displaystyle \left(\sum_{j\in\Gamma} |a_j|^r\right)^{1/r} &\quad\text{if}\,\,r<\infty \\ \,\, \displaystyle \sup_{j\in\Gamma} |a_j| &\quad\text{if}\,\,r=\infty \end{cases} \end{equation*} is finite. Note that $\| \{a_j\} \|_{l^{r_2}} \le \| \{a_j\} \|_{l^{r_1}}$ for $1\le r_1 \le r_2 \le \infty$. Furthermore, if $ 1\le r,r_1,r_2 \le \infty$ satisfy $1/r \le 1/r_1 + 1/r_2$, then $\| \{ a_j b_j \} \| _{l^r} \le \| \{a_j \}\| _{l^{r_1}}\| \{ b_j \} \| _{l^{r_2}}.$ For $1 \le p,q,r\le \infty$, let $L^{p,q}(l^r)=L^{p,q}(l^r(\Gamma))$ be the set of all sequences $\{f_j\}_{j\in\Gamma}$ of measurable functions on $\mathbb{R}^n$ such that \begin{equation*} \|\{f_j\}_{j\in\Gamma}\|_{L^{p,q}( l^r)} = \left\| \| \{ f_j \}_{j\in\Gamma} \|_{l^r} \right \|_{L^{p,q}} < \infty . \end{equation*} Similarly, $l^r(L^{p,q})=l^r(\Gamma;L^{p,q})$ is the set of all sequences $\{f_j\}_{j\in\Gamma}$ of measurable functions on $\mathbb{R}^n$ such that \begin{equation*} \|\{f_j\}_{j\in\Gamma}\|_{l^r(L^{p,q})} = \left\| \{\| f_j \|_{ L^{p,q}}\}_{j\in\Gamma} \right \|_{l^r} < \infty . \end{equation*} Then $L^{p,q}(l^r)$ and $l^r(L^{p,q})$ are quasi-Banach spaces equipped with the quasi-norms above. Moreover, each of the embeddings \begin{equation}\label{eq-ebd} L^{p,q_1}(l^{r_1})\hookrightarrow L^{p,q_2}(l^{r_2}) \quad\text{and}\quad l^{r_1}(L^{p,q_1}) \hookrightarrow l^{r_2}(L^{p,q_2}) \end{equation} holds if and only if $1\le q_1\le q_2\le \infty$ and $1\le r_1\le r_2\le \infty$. Note also that $$ \| \{|f_j| ^\alpha \} \|_{L^{p/\alpha,q/\alpha}(l^{r/\alpha})} = \| \{ f_j \}\|^\alpha _{L^{p ,q }(l^{r })} $$ whenever $0 < \alpha \le \min(p,q,r)$. We write $L^p(l^r)=L^{p,p}(l^r)$ and $l^r(L^{p,p})=l^r(L^p)$. 
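The two elementary sequence inequalities just stated, the monotonicity $\|\{a_j\}\|_{l^{r_2}} \le \|\{a_j\}\|_{l^{r_1}}$ for $r_1 \le r_2$ and H\"older's inequality in $l^r$, can be spot-checked numerically. The snippet below is an illustrative aside with arbitrary random data, taking $r_1 = 3$, $r_2 = 6$, and $1/r = 1/r_1 + 1/r_2$ (so $r = 2$).

```python
import numpy as np

# Spot check of l^r monotonicity and Hölder's inequality for finite sequences.
rng = np.random.default_rng(0)
a = rng.standard_normal(50)
b = rng.standard_normal(50)

def lr(v, t):
    # l^t quasi-norm of a finite sequence, t < infinity
    return np.sum(np.abs(v) ** t) ** (1.0 / t)

r1, r2 = 3.0, 6.0
r = 1.0 / (1.0 / r1 + 1.0 / r2)             # equality case 1/r = 1/r1 + 1/r2, r = 2

mono_ok = lr(a, r2) <= lr(a, r1)            # ||a||_{l^{r2}} <= ||a||_{l^{r1}}
holder_lhs = lr(a * b, r)                   # ||{a_j b_j}||_{l^r}
holder_rhs = lr(a, r1) * lr(b, r2)          # ||a||_{l^{r1}} ||b||_{l^{r2}}
```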
The following interpolation result is well-known (see, for instance, \cite[Theorem 5.3.1]{Bergh}). \begin{lem}\label{interpolation-Lp} Let $1 \le p, p_1 , p_2 \le \infty$ and $0<\theta<1$ satisfy $$ p_1 \neq p_2 \quad \text{and}\quad \frac{1}{p}=\frac{1-\theta}{p_1}+\frac{\theta}{p_2}. $$ Then for $1\le q,q_1,q_2,r\le\infty$ with $q_i=\infty$ when $p_i=\infty$ for $i=1,2$, $$ (L^{p_1,q_1}(l^r),L^{p_2,q_2}(l^r))_{\theta,q}=L^{p,q}(l^r). $$ \end{lem} \subsection{Triebel-Lizorkin-Lorentz spaces} We denote the Schwartz class on ${\mathbb R}^n$ by $\mathscr{S}$ and its topological dual space by $\mathscr{S}'$. Let $\psi=\{\psi_j\}_{j\in\mathbb{N}_0}$ be a sequence in $C_c^{\infty}(\mathbb{R}^n)$ such that \begin{equation}\label{cond1-inhom} \mathrm{supp\,} \psi_0 \subset \{|\xi|\le2\},\quad \mathrm{supp\,} \psi_j\subset \{2^{j-1}\le |\xi|\le 2^{j+1}\}\;\;\text{for} \;j\in\mathbb{N}, \end{equation} \begin{equation}\label{cond2-inhom} \sup_{j\in\mathbb{N}_0} \sup_{\xi\in\mathbb{R}^n} 2^{j|\alpha|}|D^\alpha \psi_j(\xi)|<\infty\quad\text{for every multi-index} \,\, \alpha , \end{equation} and \begin{equation}\label{cond3-inhom} \sum_{j=0}^\infty \psi_j(\xi)=1 \quad\mbox{for every}\; \xi\in\mathbb{R}^n. \end{equation} Associated with $\psi=\{\psi_j\}_{j\in\mathbb{N}_0}$, let $\{\Delta_j^\psi\}_{j\in\mathbb{N}_0}$ be the sequence of Littlewood-Paley operators defined via the Fourier transform by $$\Delta^\psi_jf=\left(\psi_j \hat f\,\right)^\vee =\psi_j^\vee \ast f\quad \text{for $f\in\mathscr{S}'$.}$$ For $s\in\mathbb{R}$, $1\le p <\infty$, and $1\le q,r \le \infty$, we denote by $\ihsf{s}{p}{q}{r}$ the space of all $f \in \mathscr{S}'$ such that \begin{equation*} \|f \|_{\ihsf{s}{p}{q}{r}} = \inhomnorm{s}{p}{q}{r}{\psi}{f} < \infty . \end{equation*} Then $\ihsf{s}{p}{q}{r}$ is a quasi-Banach space and will be called a Triebel-Lizorkin-Lorentz space. 
If $1 \le p=q< \infty$, then $\ihsf{s}{p}{q}{r}$ coincides with the more familiar Triebel-Lizorkin space $F_{p}^{s,r}$, which has been extensively studied by Triebel in \cite{Tri1,Tri4,Tri2,Tri3}. To define homogeneous spaces, let $\mathscr{S}_0$ be a closed subspace of $\mathscr{S}$ defined by $$ \mathscr{S}_0 = \{ \psi \in \mathscr{S} \, :\, D^\alpha \hat \psi(0) =0\,\,\,\text{for all multi-indices}\,\, \alpha \}. $$ The dual of $\mathscr{S}_0$ is denoted by $\mathscr{S}_0'$. The following is taken from \cite[Proposition 2.4]{Tri3}. \begin{lem}\label{uniqueness} For any $f\in \mathscr{S}_0'$, there exists $F\in\mathscr{S}'$ such that $\lg F,\psi \rangle = \lg f , \psi \rangle$ for all $\psi\in\mathscr{S}_0$. Moreover, if there is another $G\in\mathscr{S}'$ such that $\lg G, \psi \rangle =\lg f, \psi \rangle$ for all $\psi\in\mathscr{S}_0$, then $F-G$ is a polynomial. \end{lem} Let $\varphi=\{\varphi_j\}_{j\in\mathbb{Z}}$ be a sequence of functions in $C_c^\infty(\mathbb{R}^n)$ such that \begin{equation}\label{cond1-hom} \mathrm{supp\,} \varphi_j\subset \{2^{j-1}\le |\xi|\le 2^{j+1}\}\quad\text{for $j\in\mathbb{Z}$}, \end{equation} \begin{equation}\label{cond2-hom} \sup_{j\in\mathbb{Z}} \sup_{\xi\in\mathbb{R}^n} 2^{j|\alpha|}|D^\alpha \varphi_j(\xi)|<\infty\quad\text{for every multi-index $\alpha$}, \end{equation} and \begin{equation}\label{cond3-hom} \sum_{j\in\mathbb{Z}} \varphi_j(\xi)=1 \quad \mbox{for every}\,\, \xi\in\mathbb{R}^n\backslash\{0\}. \end{equation} As in the inhomogeneous case, let $\{\Delta_j^\varphi\}_{j\in\mathbb{Z}}$ be the sequence of Littlewood-Paley operators associated with $\varphi =\{\varphi_j\}_{j\in\mathbb{Z}}$: $$ \Delta_j^\varphi f = \left(\varphi_j \hat f \right)^\vee =\varphi_j^\vee \ast f \quad \text{for $f\in\mathscr{S}'$.} $$ Note that $\lpo{\varphi}{P}=0$ for every polynomial $P$. 
Therefore, in view of Lemma \ref{uniqueness}, for each $f\in\mathscr{S}_0'$, we can define $\Delta_j^\varphi f $ by $$\Delta_j^\varphi f=\Delta_j^\varphi F,$$ where $F\in\mathscr{S}'$ is any extension of $f$ to $\mathscr{S}$. For $s\in \mathbb{R}$, $1 \le p<\infty$, and $1\le q,r \le \infty$, the homogeneous Triebel-Lizorkin-Lorentz space $\hsf{s}{p}{q}{r}$ is defined as the space of all $f \in \mathscr{S}'_0$ such that \begin{equation*} \|f \|_{\hsf{s}{p}{q}{r}} = \homnorm{s}{p}{q}{r}{\varphi}{f} < \infty . \end{equation*} Then $\hsf{s}{p}{q}{r}$ is a quasi-Banach space. If $1 \le p=q<\infty$, $\hsf{s}{p}{q}{r}$ coincides with the homogeneous Triebel-Lizorkin space $\dot F^{s,r}_{p}$. The following interpolation result was proved in \cite[Theorem 2.4.2/1]{Tri2} for inhomogeneous Triebel-Lizorkin-Lorentz spaces $\ihsf{s}{p}{q}{r}$ with $1<r < \infty$. \begin{thm}\label{lem-itp-inhomf} Let $1<p,p_1,p_2<\infty$ and $0<\theta<1$ satisfy $$ p_1 \neq p_2 \quad \text{and}\quad \frac{1}{p}=\frac{1-\theta}{p_1}+\frac{\theta}{p_2}. $$ Then for $s\in\mathbb{R}$ and $1 \le q, r \le \infty$, $$ ( F ^{s,r}_{p_1}, F^{s,r}_{p_2})_{\theta,q} \hookrightarrow \ihsf{s}{p}{q}{r} \quad\mbox{and}\quad (\dot F ^{s,r}_{p_1}, \dot F^{s,r}_{p_2})_{\theta,q} \hookrightarrow \hsf{s}{p}{q}{r}. $$ In addition, if $1<r \le \infty$, then $$ ( F ^{s,r}_{p_1}, F^{s,r}_{p_2})_{\theta,q} = \ihsf{s}{p}{q}{r} \quad\mbox{and}\quad (\dot F ^{s,r}_{p_1}, \dot F^{s,r}_{p_2})_{\theta,q} = \hsf{s}{p}{q}{r}. $$ \end{thm} A proof of Theorem \ref{lem-itp-inhomf} will be provided later in Appendix. \subsection{Besov-Lorentz spaces} Let $s\in\mathbb{R}$ and $1\le p,q,r\le \infty$. 
If $\psi=\{\psi_j\}_{j\in\mathbb{N}_0}$ is a sequence in $C_c^{\infty}(\mathbb{R}^n)$ that satisfies \eqref{cond1-inhom}, \eqref{cond2-inhom}, and \eqref{cond3-inhom}, we define the Besov-Lorentz space $\ihsb{s}{p}{q}{r}$ as the space of all $f \in \mathscr{S}'$ such that \begin{equation*} \|f \|_{\ihsb{s}{p}{q}{r}} = \inhomnormb{s}{p}{q}{r}{\psi}{f} < \infty. \end{equation*} The homogeneous Besov-Lorentz space $\hsb{s}{p}{q}{r}$ is defined as the space of all $f\in \mathscr{S}_0'$ such that \begin{equation*} \|f \|_{\hsb{s}{p}{q}{r}} = \homnormb{s}{p}{q}{r}{\varphi}{f} < \infty , \end{equation*} where $\varphi=\{\varphi_j\}_{j\in\mathbb{Z}}$ is a sequence of functions in $C_c^\infty(\mathbb{R}^n)$ satisfying \eqref{cond1-hom}, \eqref{cond2-hom}, and \eqref{cond3-hom}. For $1 \le p=q \le \infty$, we write $B^{s,r}_p = \ihsb{s}{p}{p}{r}$ and $\dot B^{s,r}_p = \hsb{s}{p}{p}{r}$. The following embedding result is easily deduced from the definitions. \begin{lem}\label{thm-ebd-FB and BF} Let $s \in \mathbb{R}$ and $1 \le q \le \infty$. \begin{enumerate}[label = \textup{(\roman*)}] \item If $1\le p < \infty$, then $F^{s,\infty}_{p ,q} \hookrightarrow B^{s, \infty}_{p,q}$ and $\dot{F}^{s,\infty}_{p ,q} \hookrightarrow\dot B^{s, \infty}_{p,q}$. \item If $1 < p < \infty$ or if $p=q=1$, then $B^{s,1}_{p ,q} \hookrightarrow F^{s, 1}_{p,q}$ and $\dot{B}^{s,1}_{p ,q} \hookrightarrow\dot F^{s, 1}_{p,q}$. \end{enumerate} \end{lem} The interpolation space between two Besov spaces with the same $p$ is again a Besov space, as shown by the following result (see \cite[Section 6.4]{Bergh}, for example). \begin{thm}\label{lem-itp-homb} Let $s, s_1,s_2\in\mathbb{R}$ and $0<\theta<1$ satisfy $$s_1 \neq s_2 \quad\text{and}\quad s=(1-\theta)s_1+\theta s_2. $$ Then for $1\le p,r,r_1,r_2\le\infty$, $$ ( B ^{s_1,r_1}_{p}, B^{s_2,r_2}_{p})_{\theta,r} = B^{s,r}_p \quad \text{and}\quad ( \dot B ^{s_1,r_1}_{p}, \dot B^{s_2,r_2}_{p})_{\theta,r} =\dot B^{s,r}_p.
$$ \end{thm} By Lemma \ref{thm-ebd-FB and BF} and Theorem \ref{lem-itp-homb}, we immediately obtain \begin{thm}\label{lem-itp-homf-ihomf} Let $s, s_1,s_2\in\mathbb{R}$ and $0<\theta<1$ satisfy $$s_1 \neq s_2 \quad\text{and}\quad s=(1-\theta)s_1+\theta s_2. $$ Then for $1 \le p< \infty$ and $1\le r,r_1,r_2\le\infty$, $$ ( F ^{s_1,r_1}_{p}, F^{s_2,r_2}_{p})_{\theta,r} = B^{s,r}_p \quad \text{and}\quad ( \dot F ^{s_1,r_1}_{p}, \dot F^{s_2,r_2}_{p})_{\theta,r} =\dot B^{s,r}_p. $$ \end{thm} \subsection{Some embedding results} The following is a list of more or less standard embedding results for the function spaces $\ihsf{s}{p}{q}{r}$, $\hsf{s}{p}{q}{r}$, $\ihsb{s}{p}{q}{r}$, and $\hsb{s}{p}{q}{r}$ for $s\ge0$, proofs of which will be given in Appendix. \begin{thm}\label{prop-relation} Let $s>0$ and $1\le q, r \le \infty$. \begin{enumerate}[label = \textup{(\roman*)}] \item If $1<p<\infty$, then $\ihsf{0}{p}{q}{2} = \hsf{0}{p}{q}{2}=L^{p,q} $. \item If $1<p<\infty$ or if $ p=q =1$, then $$ F^{0,1}_{p,q} \hookrightarrow L^{p,q} , \quad \dot{F}^{0,1}_{p,q} \hookrightarrow L^{p,q} , \quad\mbox{and}\quad \ihsf{s}{p}{q}{r}=L^{p,q}\cap \hsf{s}{p}{q}{r}. $$ \item If $1<p<\infty$ or if $1\le p=q \le \infty$, then $$ B^{0,1}_{p,q} \hookrightarrow L^{p,q} \hookrightarrow B^{0,\infty}_{p,q} , \quad \dot{B}^{0,1}_{p,q} \hookrightarrow L^{p,q} \hookrightarrow \dot B^{0,\infty}_{p,q} , \quad \mbox{and}\quad\ihsb{s}{p}{q}{r}=L^{p,q}\cap \hsb{s}{p}{q}{r}. $$ \end{enumerate} \end{thm} The embeddings between the Triebel-Lizorkin-Lorentz spaces and Besov-Lorentz spaces are completely characterized by Seeger and Trebels in \cite[Theorems 1.5 and 1.6]{seeger} as follows. \begin{thm}\label{ebd-inhom-TLL} Let $s_1,s_2\in\mathbb{R}$, $1 \le p_1,p_2<\infty$, and $1\le q_1,q_2,r_1,r_2\le \infty$. 
Then the embedding $$ \ihsf{s_1}{p_1}{q_1}{r_1} \hookrightarrow \ihsf{s_2}{p_2}{q_2}{r_2} $$ holds if and only if one of the following four conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_1=s_2$, $p_1=p_2$, $q_1\le q_2$, $r_1\le r_2$. \item $s_1 >s_2$, $p_1=p_2$, $q_1\le q_2$. \item $s_1 -s_2 = n/p_1- n/p_2 >0 $, $q_1\le q_2$. \item $s_1 -s_2 > n/p_1- n/p_2 >0 $. \end{enumerate} \end{thm} \begin{thm}\label{ebd-inhom-B} Let $s_1,s_2\in\mathbb{R}$, $1 \le p_1,p_2<\infty$, and $1\le q_1,q_2,r_1,r_2\le \infty$. Then the embedding $$ \ihsb{s_1}{p_1}{q_1}{r_1} \hookrightarrow \ihsb{s_2}{p_2}{q_2}{r_2} $$ holds if and only if one of the following four conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_1=s_2$, $p_1=p_2$, $q_1\le q_2$, $r_1\le r_2$. \item $s_1 >s_2$, $p_1=p_2$, $q_1\le q_2$. \item $s_1 -s_2 = n/p_1- n/p_2 >0 $, $r_1\le r_2$. \item $s_1 -s_2 > n/p_1- n/p_2 >0 $. \end{enumerate} \end{thm} \begin{rmk}\label{rmk-ebd-inhom-B} In fact, it can be shown that $\ihsb{s_1}{p_1}{q_1}{r_1} \hookrightarrow \ihsb{s_2}{p_2}{q_2}{r_2}$ holds even when $p_1=q_1=\infty$ or $p_2=q_2= \infty$ if one of the conditions (i)-(iv) is satisfied. Indeed, it is easy to show that if (i) or (ii) holds when $p_1 =p_2 =q_1 =q_2 =\infty$, then $B^{s_1,r_1}_{\infty}\hookrightarrow B^{s_2,r_2}_{\infty}$. Sufficiency of (iii) is proved by exactly the same argument as in the proof of Theorem \ref{ebd-hom-B} in Appendix. Finally, if (iv) holds when $p_1 < p_2 =q_2=\infty$, then $B^{s_1,r_1}_{p_1,q_1} \hookrightarrow B^{s_2 + n/p_1, r_2}_{p_1,q_1} \hookrightarrow B^{s_2,r_2}_{\infty}$ by sufficiency of (ii) and (iii). \end{rmk} It was remarked in \cite[p.1020]{seeger} that homogeneous counterparts of Theorems \ref{ebd-inhom-TLL} and \ref{ebd-inhom-B} can be proved by the same arguments as in the inhomogeneous case, but without explicit statements. 
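For orientation (an illustrative aside): condition (iii) in the two characterizations above encodes the Sobolev-type scaling $s_1 - s_2 = n/p_1 - n/p_2$. For instance, with $n = 3$, $s_1 = 1$, $s_2 = 0$, and $p_1 = 2$, solving the scaling relation for $p_2$ recovers the familiar Sobolev exponent $p_2 = 6$:

```python
# Condition (iii): s1 - s2 = n/p1 - n/p2 > 0, solved for p2
# (illustrative parameter values only).
n, s1, s2, p1 = 3.0, 1.0, 0.0, 2.0
inv_p2 = 1.0 / p1 - (s1 - s2) / n     # = 1/2 - 1/3 = 1/6
p2 = 1.0 / inv_p2                     # the Sobolev exponent, here 6
```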
For the sake of completeness, we state and prove some embedding theorems for homogeneous spaces. The following is due to Jawerth \cite[Theorem 2.1]{Jaw1}. \begin{thm}\label{thm-ebd-FB-2} Let $s_1,s_2\in\mathbb{R}$ and $1 \le p_1,p_2 < \infty$ satisfy $$ s_1 - s_2 =\frac{n}{p_1}-\frac{n}{p_2}>0. $$ Then $$ \dot{F}^{s_1,\infty}_{p_1} \hookrightarrow\dot F^{s_2,1}_{p_2} , \quad \dot{F}^{s_1,\infty}_{p_1} \hookrightarrow \dot B^{s_2,p_1}_{p_2} , \quad\mbox{and}\quad \dot{B}^{s_1,r }_{p_1} \hookrightarrow\dot B^{s_2,r }_{p_2} \quad (1 \le r \le \infty). $$ \end{thm} The proofs of the following theorems will be provided later in Appendix. \begin{thm}\label{ebd-hom-TLL} Let $s_1,s_2\in\mathbb{R}$, $1< p_1,p_2<\infty$, and $1\le q_1,q_2,r_1,r_2\le \infty$. Then the embedding $$ \hsf{s_1}{p_1}{q_1}{r_1} \hookrightarrow \hsf{s_2}{p_2}{q_2}{r_2} $$ holds if and only if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_1=s_2$, $p_1=p_2$, $q_1\le q_2$, $r_1\le r_2$. \item $s_1 -s_2 = n/p_1- n/p_2 >0 $, $q_1\le q_2$. \end{enumerate} \end{thm} \begin{thm}\label{ebd-hom-B} Let $s_1,s_2\in\mathbb{R}$, $1 \le p_1,p_2 \le \infty$, and $1\le q_1,q_2,r_1,r_2\le \infty$. Assume that $q_i=\infty$ when $p_i=\infty$ for $i=1,2$. Then the embedding $$ \hsb{s_1}{p_1}{q_1}{r_1} \hookrightarrow\hsb{s_2}{p_2}{q_2}{r_2} $$ holds if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_1=s_2$, $p_1=p_2$, $q_1\le q_2$, $r_1\le r_2$. \item $s_1 -s_2 = n/p_1- n/p_2 >0 $, $r_1\le r_2$. \end{enumerate} \end{thm} \begin{rmk} It is not difficult to show that the embedding $\hsb{s_1}{p_1}{q_1}{r_1} \hookrightarrow\hsb{s_2}{p_2}{q_2}{r_2}$ holds only if $s_1 -s_2 = n/p_1- n/p_2 \ge 0 $ and $r_1\le r_2$ (see the proof of Theorem \ref{ebd-hom-TLL}). 
\end{rmk} Using Theorems \ref{ebd-hom-TLL} and \ref{ebd-hom-B}, we can also prove embedding results between Triebel-Lizorkin-Lorentz spaces and Besov-Lorentz spaces. \begin{thm}\label{thm-ebd-FB} Let $s_1,s_2\in\mathbb{R}$ and $1 < p_1,p_2 < \infty$ satisfy $$ s_1 - s_2 =\frac{n}{p_1}-\frac{n}{p_2}>0. $$ Then for $1 \le q , r \le \infty$, $$ \dot{F}^{s_1,\infty}_{p_1 ,q} \hookrightarrow\dot B^{s_2,q}_{p_2,1} \quad\mbox{and}\quad \dot{B}^{s_1,r}_{p_1,\infty} \hookrightarrow\dot F^{s_2,1}_{p_2 ,r}. $$ \end{thm} \begin{proof} Let $\overline{p}_1$ and $\overline{p}_2$ be any numbers such that $$ 1<\overline{p}_1 <p_1 < \overline{p}_2 < p_2 \quad\mbox{and}\quad \frac{2}{p_1} =\frac{1}{\overline{p}_1} + \frac{1}{\overline{p}_2}. $$ Then by Theorem \ref{lem-itp-inhomf}, Lemma \ref{thm-ebd-FB and BF}, Theorems \ref{ebd-hom-B}, and \ref{lem-itp-homb}, we have \begin{align*} \hsf{s_1}{p_1}{q}{\infty} =\left (\dot F^{s_1,\infty}_{\overline{p}_1}, \dot F^{s_1,\infty}_{\overline{p}_2}\right)_{1/2,q} &\hookrightarrow\left (\dot B^{s_1,\infty}_{\overline{p}_1}, \dot B^{s_1,\infty}_{\overline{p}_2}\right)_{1/2,q} \\ & \hookrightarrow \left (\dot B^{s_1 -n/\overline{p}_1+n/\overline{p}_2 ,\infty}_{\overline{p}_2}, \dot B^{s_1,\infty}_{\overline{p}_2} \right)_{1/2,q}\\ & = \dot B^{s_1-n(1/\overline{p}_1-1/\overline{p}_2)/2, q}_{\overline{p}_2}\hookrightarrow \dot B^{s_2,q}_{p_2,1} . 
\end{align*} Similarly, if $p_3$ is chosen so that $$ \overline{p}_2 < p_2 < {p}_3 < \infty\quad\mbox{and}\quad \frac{2}{p_2} =\frac{1}{\overline{p}_2} + \frac{1}{ {p}_3}, $$ then by Theorems \ref{lem-itp-homf-ihomf} and \ref{ebd-hom-TLL}, \begin{align*} \dot{B}^{s_1,r}_{p_1,\infty} \hookrightarrow \dot B^{s_2-n/p_2+n/\overline{p}_2,r}_{\overline{p}_2} &= \left (\dot F^{s_2 ,1}_{\overline{p}_2}, \dot F^{s_2 - 2n/p_2 + 2n/\overline{p}_2,1}_{\overline{p}_2} \right)_{1/2,r} \\ & \hookrightarrow \left (\dot F^{s_2,1}_{\overline{p}_2}, \dot F^{s_2,1}_{ {p}_3}\right)_{1/2,r} \hookrightarrow \hsf{s_2}{p_2}{r}{1} . \tag*{\qedhere} \end{align*} \end{proof} \section{Interpolation inequalities in Triebel-Lizorkin-Lorentz spaces} Throughout this and the next section, let $s , s_1 , s_2 \in \mathbb{R}$, $1 \le p,p_1,p_2 , q,q_1,q_2, r,r_1,r_2 \le \infty$, and $0<\theta<1$ be fixed numbers. Assume in addition that $q=\infty$ if $p=\infty$ and that $q_i=\infty$ if $p_i=\infty$ for $i=1,2$. \begin{rmk}\label{rmk-conditions} The condition $s_*-s\ge n/p_* - n/p \ge 0$ is necessary for each of the interpolation inequalities \eqref{itp-eq-inhomf} and (\ref{itp-eq-inhomb}) to hold. To show this, we adapt the proof of \cite[Theorem 2.3.9]{Tri1}. First we choose $\psi_0\in C_c^\infty(\mathbb{R}^n)$ such that $\mathrm{supp\,} \psi_0 \subset \{|\xi|\le 3/2\}$ and $\psi_0=1$ on $\{|\xi|\le 1\}$. For $j \ge 1$, define $\psi_j(\xi)= \psi_0(2^{-j}\xi) - \psi_0(2^{-j+1}\xi)$. Then $\{\psi_j\}_{j\in\mathbb{N}_0}$ satisfies \eqref{cond1-inhom}, \eqref{cond2-inhom}, and \eqref{cond3-inhom}. Note that $\mathrm{supp\,} \psi_j \subset \{ 2^{j-1} \le |\xi|\le 3 \cdot 2^{j-1}\}$ and $\psi_j=1$ on $\{3\cdot 2^{j-2} \le |\xi| \le 2^j\}$ when $j\ge1$. Let $f\in\mathscr{S}$ be chosen so that $\mathrm{supp\,} \hat{f}\subset \{a\le |\xi|\le b\}$ for some $3/4<a<b<1$.
Then for each $k\in\mathbb{Z}$, \begin{equation*} \psi_j\widehat{f(2^k \cdot)}= \begin{cases} \widehat{f(2^k \cdot)} &\text{if $k\le0, j=0$ or $k=j\ge1$,}\\ 0 &\text{otherwise.} \end{cases} \end{equation*} Thus for $s\in\mathbb{R}$, $1\le p\le\infty$, and $1\le q,r\le\infty$, \begin{equation*} \|f(2^k \cdot)\|_{B^{s,r}_{p,q}} = \begin{cases} 2^{-kn/p}\|f\|_{L^{p,q}} &\text{if $k\le0$,}\\ 2^{k(s-n/p)}\|f\|_{L^{p,q}} &\text{if $k\ge1$.} \end{cases} \end{equation*} In addition, if $1\le p<\infty$, then $\|f(2^k \cdot)\|_{F^{s,r}_{p,q}} =\|f(2^k \cdot)\|_{B^{s,r}_{p,q}}$. Hence if \eqref{itp-eq-inhomf} or (\ref{itp-eq-inhomb}) holds, then we must have $$ 2^{-kn/p} \lesssim 2^{-kn/p_*} \quad \text{for $k\le 0$} \quad\mbox{and}\quad 2^{k(s-n/p)} \lesssim 2^{k(s_* - n/p_*)} \quad \text{for $k\ge1$}, $$ which imply that $s_*-s\ge n/p_* - n/p \ge 0$. Similarly, it can be shown that if \eqref{itp-eq-homf} or (\ref{itp-eq-homb}) holds, then $s_*-s = n/p_* - n/p \ge 0$ (see the proof of Theorem \ref{ebd-hom-TLL} in Appendix). \end{rmk} \begin{thm}\label{thm-itp-inhomf} Assume that \begin{equation*} s_* -s \ge \frac{n}{p_*}-\frac{n}{p} \ge 0 . \end{equation*} Assume in addition that $ 1 \le p , p_1 , p_2 < \infty $. Then the interpolation inequality \[ \|f\|_{\ihsf{s}{p}{q}{r}}\lesssim \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\theta \] holds for all $f \in\ihsf{s_1}{p_1}{q_1}{r_1} \cap \ihsf{s_2}{p_2}{q_2}{r_2}$, if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $q_* \le q$, $r_* \le r$. \item $ s_1=s_2$, $p_* = p$, $p_1 \neq p_2$, $\max (r_1 , r_2 ) \le r$. \item $s_1 \neq s_2$, $p_* = p$, $q_* \le q$. \item $s_* > s $, $q_* \le q$. \item $s_* > s$, $p_*=p$, $p_1\neq p_2$. \item $s_* -s > n/p_* -n/p >0$. \item $s_* -s = n/p_* -n/p >0$, $s_2 -s_1 \neq n/p_2 -n/p_1$. 
\end{enumerate} \end{thm} To prove our main results, we need the following interpolation inequality in sequence spaces, taken from {\cite[Theorem 1.18.2]{Tri2}}. \begin{lem}\label{itp-sum} Assume that $s_1 \neq s_2$. Then for any complex sequence $\{a_j\}_{j\in\Gamma}$, $$ \sum_{j\in\Gamma} 2^{js_*}|a_j| \lesssim \left(\sup_{j\in\Gamma} 2^{js_1}|a_j|\right)^{1-\theta} \left( \sup_{j\in\Gamma} 2^{js_2}|a_j|\right)^{\theta}. $$ \end{lem} \begin{proof}[Proof of Theorem \ref{thm-itp-inhomf}] Assume that $f\in \ihsf{s_1}{p_1}{q_1}{r_1} \cap \ihsf{s_2}{p_2}{q_2}{r_2}$. \vspace{0.2cm} \noindent \emph{Case I.} Suppose that $p_* =p$, $q_* \le q$, and $r_* \le r$. Then noting that $$ 2^{js_*}|\lpo{\psi}{f}| = 2^{(1-\theta)js_1}|\lpo{\psi}{f}|^{1-\theta} 2^{\theta js_2}|\lpo{\psi}{f}|^\theta, $$ we have $$ \left \| \left\{ 2^{js_*}\Delta_j^\psi f \right\}_{j\in\mathbb{N}_0}\right \|_{l^r} \le \left \| \left\{2^{(1-\theta)js_1}|\Delta_j^\psi f |^{1-\theta}\right\}_{j\in\mathbb{N}_0} \right \|_{l^{\frac{r_1}{1-\theta}}} \left \| \left\{2^{\theta js_2}| \Delta_j^\psi f |^\theta \right\}_{j\in\mathbb{N}_0} \right \|_{l^{\frac{r_2}{\theta}}}. $$ Hence by Theorem \ref{ebd-inhom-TLL} and Lemma \ref{Holder-Lorentz}, \begin{align*} \|f\|_{\ihsf{s}{p}{q}{r}} & \lesssim \|f\|_{\ihsf{s_*}{p}{q}{r}} = \inhomnorm{s_*}{p}{q}{r}{\psi}{f}\\ &\lesssim \left\| \left\{ 2^{(1-\theta) js_1}|\Delta_j^\psi f|^{1-\theta}\right\}_{j\in\mathbb{N}_0} \right\|_{L^{\frac{p_1}{1-\theta},\frac{q_1}{1-\theta}}(l^{\frac{r_1}{1-\theta}})} \left\| \left\{ 2^{\theta js_2}|\Delta_j^\psi f|^{\theta}\right\} _{j\in\mathbb{N}_0} \right\|_{L^{\frac{p_2}{\theta},\frac{q_2}{\theta}}(l^{\frac{r_2}{\theta}})}\\ &= \inhomnorm{s_1}{p_1}{q_1}{r_1}{\psi}{f}^{1-\theta} \inhomnorm{s_2}{p_2}{q_2}{r_2}{\psi}{f}^\theta \\ & = \|f\|^{1-\theta}_{\ihsf{s_1}{p_1}{q_1}{r_1}} \|f\|^\theta_{\ihsf{s_2}{p_2}{q_2}{r_2}}. 
\end{align*} \noindent \emph{Case II.} Suppose that $ s_1=s_2$, $p_* = p$, $p_1 \neq p_2$, and $\max (r_1 , r_2 ) \le r$. Then by Theorem \ref{ebd-inhom-TLL} and Lemma \ref{interpolation-Lp}, \begin{align*} \|f\|_{\ihsf{s}{p}{q}{r}} & \lesssim \inhomnorm{s_*}{p}{q}{r}{\psi}{f} \\ & \lesssim \inhomnorm{s_*}{p_1}{q_1}{r}{\psi}{f}^{1-\theta} \inhomnorm{s_*}{p_2}{q_2}{r}{\psi}{f}^{\theta}\\ & \le \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}} ^{1-\theta} \|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\theta. \end{align*} \noindent \emph{Case III.} Suppose that $s_1 \neq s_2$, $p_* =p$, and $q_* \le q$. Then by Theorem \ref{ebd-inhom-TLL}, Lemmas \ref{itp-sum} and \ref{Holder-Lorentz}, \begin{align*} \|f\|_{\ihsf{s}{p}{q}{r}} &\lesssim \inhomnorm{s_*}{p}{q}{1}{\psi}{f}\\ &\lesssim \left\| \left(\sup_{j\in\mathbb{N}_0} 2^{js_1}|\lpo{\psi}{f}|\right)^{1-\theta} \left( \sup_{j\in\mathbb{N}_0} 2^{js_2}|\lpo{\psi}{f}|\right)^{\theta}\right\|_{L^{p,q}}\\ &\lesssim \left\| \left(\sup_{j\in\mathbb{N}_0} 2^{js_1}|\lpo{\psi}{f}|\right)^{1-\theta} \right\|_{L^{\frac{p_1}{1-\theta},\frac{q_1}{1-\theta}}}\left\|\left( \sup_{j\in\mathbb{N}_0} 2^{js_2}|\lpo{\psi}{f}|\right)^{\theta} \right\|_{L^{\frac{p_2}{\theta},\frac{q_2}{\theta}}}\\ & \le \|f\|^{1-\theta}_{\ihsf{s_1}{p_1}{q_1}{r_1}} \|f\|^\theta_{\ihsf{s_2}{p_2}{q_2}{r_2}}. \end{align*} \noindent \emph{Case IV.} Suppose that $s_* > s $ and $q_* \le q$. Then by Case I, $$ \|f\|_{\ihsf{s_*}{p_*}{q}{\infty}} \lesssim \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta} \|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\theta. $$ Since $s_* >s$ and $s_* -s \ge n/p_* -n/p$, it follows from Theorem \ref{ebd-inhom-TLL} that \begin{align*} \|f\|_{\ihsf{s}{p}{q}{r}}\lesssim \|f\|_{\ihsf{s_*}{p_*}{q}{\infty}} \lesssim \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\theta. \end{align*} Sufficiency of the condition (i) is proved by Cases I and IV. \noindent \emph{Case V.} Suppose that $s_* > s$, $p_* =p$, and $p_1 \neq p_2$. We may assume that $p_1<p<p_2$. 
Then since $$ 0<\theta= \frac{1/p_1-1/p}{1/p_1-1/p_2}<1, $$ there exists a number $\overline{p}_1$, close to $p$, such that $$ p_1<\overline{p}_1<p<\overline{p}_2:=\frac{p\overline{p}_1}{2\overline{p}_1-p}<p_2, $$ $$ 0<\mu := \frac{1/p_1-1/\overline{p}_1}{1/p_1-1/p_2}<\theta, \quad\theta<\lambda:=2\theta - \mu <1, $$ $$ (1-\mu)s_1+\mu s_2 -s>0, \quad\text{and}\quad (1-\lambda )s_1 + \lambda s_2 -s >0. $$ Note that $$ \frac{2}{p}=\frac{1}{\overline{p}_1}+\frac{1}{\overline{p}_2}, $$ $$ (1-\mu)s_1 + \mu s_2 - s > n \left( \frac{1-\mu}{p_1}+\frac{\mu}{p_2}-\frac{1}{\overline{p}_1}\right)=0, $$ and $$ (1-\lambda)s_1 + \lambda s_2 - s > n \left( \frac{1-\lambda}{p_1}+\frac{\lambda}{p_2}-\frac{1}{\overline{p}_2}\right)=0. $$ Hence by Cases II and IV, \begin{align*} \|f\|_{\ihsf{s}{p}{q}{r}} &\lesssim \|f\|_{\ihsf{s}{\overline{p}_1}{\infty}{1}}^{1/2}\|f\|_{\ihsf{s}{\overline{p}_2}{\infty}{1}}^{1/2}\\ & \lesssim\left(\|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\mu}\|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\mu \right)^{1/2} \left(\|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\lambda}\|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\lambda\right)^{1/2} \\ &= \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\theta. \end{align*} \noindent \emph{Case VI.} Suppose that $s_* -s > n/p_* -n/p >0$. Then by Theorem \ref{ebd-inhom-TLL} and Case IV, $$\|f\|_{\ihsf{s}{p}{q}{r}} \lesssim \|f\|_{\ihsf{s_{*}}{p_{*}}{\infty}{\infty}} \lesssim \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta} \|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^{\theta}. $$ \noindent \emph{Case VII.} Suppose that $s_* -s = n/p_* -n/p >0$ and $s_2 -s_1 \neq n/p_2 -n/p_1$. 
Then since $$ 0<\theta = \frac{s-n/p -s_1 + n/p_1}{s_2 -n/p_2 -s_1 + n/p_1}<1 \quad \mbox{and}\quad \frac{1}{p}< \frac{1-\theta}{p_1}+\frac{\theta}{p_2} \le 1 , $$ there exist numbers $\overline{p} >1$ and $\tilde{p} >1$, very close to $p$, such that $$ 0<\lambda := \frac{s- n/\overline{p}-s_1 + n/p_1}{s_2 -n/p_2 -s_1 + n/p_1}<\theta, $$ $$ \theta < \mu := \frac{s- n /\tilde{p}-s_1 + n/p_1}{s_2 -n/p_2 -s_1 + n/p_1}<1, $$ $$ \frac{1}{\overline{p}} < \frac{1-\lambda}{p_1} + \frac{\lambda}{p_2}, \quad\text{and}\quad \frac{1}{\tilde{p}} < \frac{1-\mu}{p_1} + \frac{\mu}{p_2}. $$ Note that \[ (1-\lambda)s_1+ \lambda s_2 -s = n \left( \frac{1-\lambda}{p_1} + \frac{\lambda}{p_2} - \frac{1}{\overline{p}} \right)>0 \] and \[ (1-\mu)s_1+ \mu s_2 -s = n \left( \frac{1-\mu}{p_1} + \frac{\mu}{p_2} - \frac{1}{\tilde{p}} \right)>0 . \] Hence by Case IV, we have \begin{equation*} \|f\|_{\ihsf{s}{\overline{p}}{\infty}{r}} \lesssim \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\lambda} \|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\lambda \quad\mbox{and}\quad \|f\|_{\ihsf{s}{\tilde{p}}{\infty}{r}} \lesssim \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\mu} \|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\mu. \end{equation*} Define $$ \nu=\frac{\theta -\lambda}{\mu-\lambda}. $$ Then $$ 0<\nu<1,\quad \frac{1}{p}=\frac{1-\nu}{\overline{p}}+\frac{\nu}{\tilde{p}}, \quad\text{and}\quad \overline{p}\neq \tilde{p} . $$ Therefore by Case II, we get \begin{align*} \|f\|_{\ihsf{s}{p}{q}{r}} & \lesssim \|f\|_{\ihsf{s}{\overline{p}}{\infty}{r}}^{1-\nu} \|f\|_{\ihsf{s}{\tilde{p}}{\infty}{r}}^{\nu} \\ & \lesssim \left( \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\lambda} \|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\lambda \right)^{1-\nu} \left( \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\mu} \|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\mu \right)^\nu \\ & = \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta} \|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^{\theta}. \end{align*} This completes the proof of the theorem.
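For the reader's convenience, we record the exponent arithmetic behind the last equality: by the definition of $\nu$,
\[
(1-\nu)\lambda+\nu\mu=\lambda+\nu(\mu-\lambda)=\theta
\quad\text{and}\quad
(1-\nu)(1-\lambda)+\nu(1-\mu)=1-\theta .
\]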
\end{proof} The following is an immediate consequence of Remark \ref{rmk-conditions} and Theorem \ref{thm-itp-inhomf}. \begin{thm}\label{thm-necessity-00} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 \le p, p_1,p_2 < \infty$, $1 \le q_1 , q_2 , r_1 , r_2 \le \infty$, and $0<\theta<1$. Then the interpolation inequality \[ \|f\|_{\ihsf{s}{p}{q_*}{r_*}}\lesssim \|f\|_{\ihsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsf{s_2}{p_2}{q_2}{r_2}}^\theta \] holds for all $f\in F^{s_1,r_1}_{p_1,q_1} \cap F^{s_2,r_2}_{p_2,q_2}$, if and only if $$ s_* -s \ge \frac{n}{p_*} -\frac{n}{p}\ge 0. $$ \end{thm} To further study necessary conditions for stronger interpolation inequalities, we recall from the theory of real interpolation that if $(X_1,X_2)$ is a compatible couple of quasi-Banach spaces and $X$ is a Banach space with $X_1\cap X_2 \subset X$, then the interpolation inequality $ \|f\|_X \lesssim \|f\|_{X_1}^{1-\theta} \|f\|_{X_2}^\theta $ holds for all $f\in X_1\cap X_2$ if and only if $(X_1,X_2)_{\theta,1}\hookrightarrow X$ (see \cite[Theorem 3.11.4]{Bergh}). \begin{thm}\label{thm-necessity-0} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p,p_1,p_2 < \infty$, $1 \le q_1 , q_2 \le \infty$, and $0<\theta<1$. Then the interpolation inequality \begin{equation}\label{thm-necessity-inequality-0} \|f\|_{F^{s,1}_{p,q_*}}\lesssim \|f\|_{F^{s_1,\infty}_{p_1,q_1}}^{1-\theta}\|f\|_{F^{s_2,\infty}_{p_2,q_2}}^\theta \end{equation} holds for all $f\in F^{s_1,\infty}_{p_1,q_1} \cap F^{s_2,\infty}_{p_2,q_2}$, if and only if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $p_*=p$, $s_1\neq s_2$. \item $s_* > s$, $s_*-s \ge n/p_*-n/p \ge 0$. \end{enumerate} \end{thm} \begin{proof} By Theorem \ref{thm-itp-inhomf}, it remains to prove the necessity part of the theorem. Suppose that $\eqref{thm-necessity-inequality-0}$ holds for all $f\in F^{s_1,\infty}_{p_1,q_1} \cap F^{s_2,\infty}_{p_2,q_2}$. 
Then by Remark \ref{rmk-conditions}, we have $$ s_* -s \ge \frac{n}{p_*} -\frac{n}{p}\ge 0. $$ Hence there are two possibilities: (a) $s_* =s$, $p_*=p$ and (b) $s_* > s$, $s_*-s \ge n/p_*-n/p \ge 0$. Suppose that $s_1=s_2$. If $p_1\neq p_2$, then by Theorem \ref{lem-itp-inhomf} and the reiteration theorem, $$ F^{s_*,\infty}_{p_*,1} \hookrightarrow (F^{s_1,\infty}_{p_1,q_1},F^{s_2,\infty}_{p_2,q_2})_{\theta,1} , $$ which holds trivially when $p_1 =p_2$. Moreover, since $F^{s,1}_{p,q_*}$ is a Banach space, we have $$ F^{s_*,\infty}_{p_*,1} \hookrightarrow (F^{s_1,\infty}_{p_1,q_1}, F^{s_2,\infty}_{p_2,q_2})_{\theta,1}\hookrightarrow F^{s,1}_{p,q_*}. $$ Hence it follows from Theorem \ref{ebd-inhom-TLL} that $s_* > s$. By contraposition, we have shown that if $s_* =s$, then $s_1 \neq s_2$. This completes the proof. \end{proof} \begin{thm}\label{thm-necessity-01} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p < \infty$, $1 < p_1,p_2 < \infty$, $1\le r_1,r_2\le\infty$ and $0<\theta<1$. If the interpolation inequality \begin{equation}\label{thm-necessity-inequality-01} \|f\|_{F^{s,r_*}_{p,1}}\lesssim \|f\|_{F^{s_1,r_1}_{p_1,\infty}}^{1-\theta}\|f\|_{F^{s_2,r_2}_{p_2,\infty}}^\theta \end{equation} holds for all $f\in F^{s_1,r_1}_{p_1,\infty} \cap F^{s_2,r_2}_{p_2,\infty}$, then one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $p_*=p$, $p_1\neq p_2$, $s_2-s_1\neq n/p_2- n/p_1$. \item $s_* > s$, $p_*=p$, $p_1 \neq p_2$. \item $s_*-s=n/p_* - n/p>0$, $s_2-s_1\neq n/p_2 - n/p_1$. \item $s_*-s>n/p_*-n/p>0$. \end{enumerate} \end{thm} \begin{proof} Suppose that $\eqref{thm-necessity-inequality-01}$ holds for all $f\in F^{s_1,r_1}_{p_1,\infty} \cap F^{s_2,r_2}_{p_2,\infty}$, that is, $(F^{s_1,r_1}_{p_1,\infty}, F^{s_2,r_2}_{p_2,\infty})_{\theta,1}\hookrightarrow F^{s,r_*}_{p,1}$. 
Then by Remark \ref{rmk-conditions}, there are four possibilities: (a) $s_* = s$, $p_*=p$, (b) $s_* > s$, $p_*=p$, (c) $s_*-s = n/p_*-n/p>0$, and (d) $s_*-s>n/p_*-n/p>0$. \noindent \emph{Case I.} Suppose that $p_1=p_2$. If $s_1 =s_2$, then by Lemma \ref{thm-ebd-FB and BF}, $$ B^{s_*,1}_{p_*} \hookrightarrow F^{s_*,1}_{p_* } \hookrightarrow (F^{s_1,r_1}_{p_1,\infty}, F^{s_2,r_2}_{p_2,\infty})_{\theta,1}\hookrightarrow F^{s,r_*}_{p,1}. $$ If $s_1 \neq s_2$, then by Theorem \ref{lem-itp-homf-ihomf}, we also have $$ B^{s_*,1}_{p_*}= (F^{s_1,r_1}_{p_1}, F^{s_2,r_2}_{p_2})_{\theta,1} \hookrightarrow (F^{s_1,r_1}_{p_1,\infty}, F^{s_2,r_2}_{p_2,\infty})_{\theta,1}\hookrightarrow F^{s,r_*}_{p,1}. $$ But it was shown in \cite[Theorem 1.1]{seeger} that $B^{s_*,1}_{p_*}\hookrightarrow \ihsf{s}{p}{1}{r_*}$ if and only if $p_*<p$. Hence it follows that $p_* < p$. By contraposition, we deduce that if $p_* =p$, then $p_1 \neq p_2$. \noindent \emph{Case II.} Suppose that $s_*-s=n/p_*-n/p$ and $s_2-s_1=n/p_2-n/p_1$. By symmetry, we may assume that $s_1\le s_2$. Then since $s-n/p=s_1-n/p_1=s_2- n/p_2$, it follows from Theorem \ref{ebd-inhom-TLL} that $\ihsf{s_2}{p_2}{\infty}{1}\hookrightarrow (F^{s_1,r_1}_{p_1,\infty},F^{s_2,r_2}_{p_2,\infty})_{\theta,1}$ but $\ihsf{s_2}{p_2}{\infty}{1} \not\hookrightarrow \ihsf{s}{p}{1}{r_*}$. This implies that if $s_*-s=n/p_*-n/p$, then $s_1-s_2 \neq n/p_1-n/p_2$. Combining the conclusions from Cases I and II, we complete the proof of the theorem. \end{proof} \begin{thm}\label{thm-necessity} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p,p_1,p_2 < \infty$, and $0<\theta<1$. 
If the interpolation inequality \begin{equation*} \|f\|_{F^{s,1}_{p,1}}\lesssim \|f\|_{F^{s_1,\infty}_{p_1,\infty}}^{1-\theta}\|f\|_{F^{s_2,\infty}_{p_2,\infty}}^\theta \end{equation*} holds for all $f\in F^{s_1,\infty}_{p_1,\infty} \cap F^{s_2,\infty}_{p_2,\infty}$, then one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $p_*=p$, $s_1\neq s_2$, $p_1\neq p_2$, $s_2-s_1\neq n/p_2 - n/p_1$. \item $s_* > s$, $p_*=p$, $p_1 \neq p_2$. \item $s_*-s=n/p_* - n/p>0$, $s_2-s_1\neq n/p_2 - n/p_1$. \item $s_*-s>n/p_*-n/p>0$. \end{enumerate} \end{thm} \begin{proof} The theorem immediately follows from Theorems \ref{thm-necessity-0} and \ref{thm-necessity-01}. \end{proof} The following is the homogeneous counterpart of Theorem \ref{thm-itp-inhomf}. \begin{thm}\label{thm-itp-homf} Assume that \begin{equation*} s_* -s = \frac{n}{p_*}-\frac{n}{p} \ge 0 . \end{equation*} Assume in addition that $ 1 < p , p_1 , p_2 < \infty $. Then the interpolation inequality \[ \|f\|_{\hsf{s}{p}{q}{r}}\lesssim \|f\|_{\hsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\hsf{s_2}{p_2}{q_2}{r_2}}^\theta \] holds for all $f \in\hsf{s_1}{p_1}{q_1}{r_1} \cap \hsf{s_2}{p_2}{q_2}{r_2}$, if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $q_* \le q$, $r_* \le r$. \item $s = s_1=s_2$, $p_1 \neq p_2$, $\max (r_1 , r_2 ) \le r$. \item $s_* =s$, $s_1 \neq s_2$, $q_* \le q$. \item $s_* > s$, $q_* \le q$. \item $s_* > s$, $s_2 -s_1 \neq n/p_2 -n/p_1$. \end{enumerate} \end{thm} \begin{proof} The proof of the theorem is exactly the same as that of Theorem \ref{thm-itp-inhomf} except for using Theorem \ref{ebd-hom-TLL} instead of Theorem \ref{ebd-inhom-TLL}. 
\end{proof} The proofs of Theorems \ref{thm-necessity-00}, \ref{thm-necessity-0}, \ref{thm-necessity-01}, and \ref{thm-necessity} can be easily adapted to deduce their homogeneous counterparts from Remark \ref{rmk-conditions}, Theorems \ref{lem-itp-inhomf}, and \ref{ebd-hom-TLL}. \begin{thm}\label{thm-necessity-00-hom} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p,p_1,p_2 < \infty$, $1 \le q_1 , q_2 , r_1 , r_2 \le \infty$, and $0<\theta<1$. Then the interpolation inequality \[ \|f\|_{\hsf{s}{p}{q_*}{r_*}}\lesssim \|f\|_{\hsf{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\hsf{s_2}{p_2}{q_2}{r_2}}^\theta \] holds for all $f\in \dot F^{s_1,r_1}_{p_1,q_1} \cap \dot F^{s_2,r_2}_{p_2,q_2}$, if and only if $$ s_* -s = \frac{n}{p_*} -\frac{n}{p}\ge 0. $$ \end{thm} \begin{thm}\label{thm-necessity-0-hom} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p,p_1,p_2 < \infty$, $1 \le q_1 , q_2 \le \infty$, and $0<\theta<1$. Then the interpolation inequality \[ \|f\|_{\dot F^{s,1}_{p,q_*}}\lesssim \|f\|_{\dot F^{s_1,\infty}_{p_1,q_1}}^{1-\theta}\|f\|_{\dot F^{s_2,\infty}_{p_2,q_2}}^\theta \] holds for all $f\in \dot F^{s_1,\infty}_{p_1,q_1} \cap \dot F^{s_2,\infty}_{p_2,q_2}$, if and only if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $p_*=p$, $s_1\neq s_2$. \item $s_*-s = n/p_*-n/p > 0$. \end{enumerate} \end{thm} \begin{thm}\label{thm-necessity-hom-01} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p,p_1,p_2 < \infty$, $1\le r_1,r_2\le\infty$, and $0<\theta<1$. If the interpolation inequality $$ \|f\|_{\dot F^{s,r_*}_{p,1}}\lesssim \|f\|_{\dot F^{s_1,r_1}_{p_1,\infty}}^{1-\theta}\|f\|_{\dot F^{s_2,r_2}_{p_2,\infty}}^\theta $$ holds for all $f\in \dot F^{s_1,r_1}_{p_1,\infty} \cap \dot F^{s_2,r_2}_{p_2,\infty}$, then one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $p_*=p$, $p_1\neq p_2$, $s_2-s_1\neq n/p_2 - n/p_1$. \item $s_*-s=n/p_* - n/p>0$, $s_2-s_1\neq n/p_2 - n/p_1$. 
\end{enumerate} \end{thm} \begin{thm}\label{thm-necessity-hom} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p,p_1,p_2 < \infty$, and $0<\theta<1$. If the interpolation inequality $$ \|f\|_{\dot F^{s,1}_{p,1}}\lesssim \|f\|_{\dot F^{s_1,\infty}_{p_1,\infty}}^{1-\theta}\|f\|_{\dot F^{s_2,\infty}_{p_2,\infty}}^\theta $$ holds for all $f\in \dot F^{s_1,\infty}_{p_1,\infty} \cap \dot F^{s_2,\infty}_{p_2,\infty}$, then one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $p_*=p$, $s_1\neq s_2$, $p_1\neq p_2$, $s_2-s_1\neq n/p_2 - n/p_1$. \item $s_*-s=n/p_* - n/p>0$, $s_2-s_1\neq n/p_2 - n/p_1$. \end{enumerate} \end{thm} \section{Interpolation inequalities in Besov-Lorentz spaces} Using the arguments in the proofs of Theorems \ref{thm-itp-inhomf} and \ref{thm-itp-homf}, we prove interpolation inequalities in Besov-Lorentz spaces. \begin{thm}\label{thm-itp-inhomb} Assume that \[ s_* -s \ge \frac{n}{p_*}-\frac{n}{p} \ge 0 . \] Then the interpolation inequality \begin{equation*} \|f\|_{\ihsb{s}{p}{q}{r}}\lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\theta \end{equation*} holds for all $f \in\ihsb{s_1}{p_1}{q_1}{r_1} \cap \ihsb{s_2}{p_2}{q_2}{r_2}$, if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $ q_* \le q$, $r_* \le r$. \item $p_* =p$, $p_1 \neq p_2$, $r_* \le r$. \item $p_* < p$, $r_* \le r$. \item $s_1\neq s_2$, $p=p_1=p_2$, $\max(q_1,q_2)\le q$. \item $s_* > s$, $p_*=p$, $q_*\le q$. \item $s_* > s$, $p_* =p$, $p_1 \neq p_2$. \item $s_* -s > n/p_* -n/p >0$. \item $s_* -s = n/p_* -n/p >0$, $s_2 - s_1 \neq n/p_2 -n/p_1$. \end{enumerate} \end{thm} \begin{proof} Assume that $f\in \ihsb{s_1}{p_1}{q_1}{r_1} \cap \ihsb{s_2}{p_2}{q_2}{r_2}$. \vspace{0.2cm} \noindent \emph{Case I.} Suppose that $p_* =p$, $q_* \le q$, and $r_* \le r$. 
Then by the same argument as Case I in the proof of Theorem \ref{thm-itp-inhomf}, \begin{align*} \|f\|_{\ihsb{s}{p}{q}{r}} &\lesssim \inhomnormb{s_*}{p}{q}{r}{\psi}{f}\\ &\lesssim \left\| \left\{ 2^{(1-\theta) js_1}|\Delta_j^\psi f|^{1-\theta}\right\}_{j\in\mathbb{N}_0} \right\|_{l^{\frac{r_1}{1-\theta}} ( L^{\frac{p_1}{1-\theta},\frac{q_1}{1-\theta}})} \left\| \left\{ 2^{\theta js_2}|\Delta_j^\psi f|^{\theta}\right\} _{j\in\mathbb{N}_0} \right\|_{l^{\frac{r_2}{\theta}} ( L^{\frac{p_2}{\theta},\frac{q_2}{\theta}}) }\\ & = \|f\|^{1-\theta}_{\ihsb{s_1}{p_1}{q_1}{r_1}} \|f\|^\theta_{\ihsb{s_2}{p_2}{q_2}{r_2}}. \end{align*} \noindent \emph{Case II.} Suppose that $p_* =p$, $p_1 \neq p_2$, and $r_* \le r$. Then by Lemma \ref{interpolation-Lp} and H\"older's inequality, \begin{align*} \|f\|_{\ihsb{s}{p}{q}{r}} & \lesssim \left\| \left\{2^{js_*}\|\Delta_j^\psi f\|_{L^{p,q}} \right\}_{j\in\mathbb{N}_0} \right\|_{l^{r} }\\ &\lesssim \left\| \left\{ 2^{j(1-\theta)s_1}\|\Delta_j^\psi f\|_{L^{p_1,q_1}}^{1-\theta} \right\}_{j\in\mathbb{N}_0} \right\|_{l^{\frac{r_1}{1-\theta}} } \left\| \left\{ 2^{j\theta s_2}\|\Delta_j^\psi f\|_{L^{p_2,q_2}}^{\theta} \right\} _{j\in\mathbb{N}_0} \right\|_{l^{\frac{r_2}{\theta}} }\\ &= \|f\|^{1-\theta}_{\ihsb{s_1}{p_1}{q_1}{r_1}} \|f\|^\theta_{\ihsb{s_2}{p_2}{q_2}{r_2}}. \end{align*} \noindent \emph{Case III.} Suppose that $p_* < p$ and $r_* \le r$. Then by Case I, we have $$\|f\|_{\ihsb{s_*}{p_*}{\infty}{r}} \lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\theta.$$ Since $s_*- s \ge n /p_* -n /p > 0$, it follows from Theorem \ref{ebd-inhom-B} and Remark \ref{rmk-ebd-inhom-B} that $$ \|f\|_{\ihsb{s}{p}{q}{r}}\lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\theta. $$ Sufficiency of the condition (i) is proved by Cases I and III. 
\noindent \emph{Case IV.} If $s_1\neq s_2$, $p=p_1=p_2$, and $\max(q_1,q_2)\le q$, then by Lemma \ref{itp-sum}, \begin{align*} \|f\|_{\ihsb{s}{p}{q}{r}} & \lesssim \left\| \left\{2^{js_*}\|\Delta_j^\psi f\|_{L^{p,q}} \right\}_{j\in\mathbb{N}_0} \right\|_{l^{1} } \\ & \lesssim \left\| \left\{2^{js_1}\|\Delta_j^\psi f\|_{L^{p,q}} \right\}_{j\in\mathbb{N}_0} \right\|_{l^{\infty} }^{1-\theta} \left\| \left\{2^{js_2}\|\Delta_j^\psi f\|_{L^{p,q}} \right\}_{j\in\mathbb{N}_0} \right\|_{l^{\infty} }^{\theta} \\ &\lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}} ^{1-\theta} \|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\theta. \end{align*} \noindent \emph{Case V.} If $s_* > s$, $p_*=p$, and $q_*\le q$, then by Theorem \ref{ebd-inhom-B} and Case I, $$ \|f\|_{\ihsb{s}{p}{q}{r}}\lesssim\|f\|_{\ihsb{s_*}{p}{q}{\infty}} \lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\theta . $$ \noindent \emph{Case VI.} If $s_* > s$, $p_* =p$, and $p_1 \neq p_2$, then by Theorem \ref{ebd-inhom-B} and Case II, $$ \|f\|_{\ihsb{s}{p}{q}{r}}\lesssim \|f\|_{\ihsb{s_*}{p}{q}{\infty}} \lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\theta. $$ \noindent \emph{Case VII.} If $s_* -s > n/p_* -n/p >0$, then by Theorem \ref{ebd-inhom-B} and Case III, $$\|f\|_{\ihsb{s}{p}{q}{r}}\lesssim \|f\|_{\ihsb{s_{*}}{p_{*}}{\infty}{\infty}} \lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^{\theta}. $$ \noindent \emph{Case VIII.} Suppose that $s_* -s = n/p_* -n/p >0$ and $s_2 - s_1 \neq n/p_2 -n/p_1$. Choose a nonzero number $\delta$ such that $$ s+|\delta| < s_*, $$ $$ 0<\lambda := \frac{s-\delta - n/p-s_1 + n/p_1}{s_2 -n/p_2 -s_1 + n/p_1}<\theta, $$ $$ \theta < \mu := \frac{s+\delta- n /p-s_1 + n/p_1}{s_2 -n/p_2 -s_1 + n/p_1}<1, $$ $$ \frac{1}{p} < \frac{1-\lambda}{p_1} + \frac{\lambda}{p_2}, \quad\text{and}\quad \frac{1}{p} < \frac{1-\mu}{p_1} + \frac{\mu}{p_2}. 
$$ Note that \[ (1-\lambda)s_1+ \lambda s_2 -(s-\delta) = n \left( \frac{1-\lambda}{p_1} + \frac{\lambda}{p_2} - \frac{1}{p} \right)>0 \] and \[ (1-\mu)s_1+ \mu s_2 -(s+\delta ) = n \left( \frac{1-\mu}{p_1} + \frac{\mu}{p_2} - \frac{1}{p} \right)>0 . \] Hence by Case III, we have \begin{equation*} \|f\|_{\ihsb{s-\delta}{{p}}{1}{\infty}} \lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\lambda} \|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\lambda \quad\mbox{and}\quad \|f\|_{\ihsb{s+\delta}{{p}}{1}{\infty}} \lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\mu} \|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\mu. \end{equation*} Therefore, by Case IV, we get \begin{align*} \|f\|_{\ihsb{s}{p}{q}{r}} & \lesssim \|f\|_{\ihsb{s-\delta}{p}{1}{\infty}}^{1/2} \|f\|_{\ihsb{s+\delta}{p}{1}{\infty}}^{1/2} \\ & \lesssim \left( \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\lambda} \|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\lambda \right)^{1/2} \left( \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\mu} \|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\mu \right)^{1/2} \\ & = \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta} \|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^{\theta}, \end{align*} which completes the proof. \end{proof} \begin{thm}\label{thm-itp-homb} Assume that \[ s_* -s = \frac{n}{p_*}-\frac{n}{p} \ge 0 . \] Then the interpolation inequality \begin{equation*} \|f\|_{\hsb{s}{p}{q}{r}}\lesssim \|f\|_{\hsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\hsb{s_2}{p_2}{q_2}{r_2}}^\theta \end{equation*} holds for all $f \in\hsb{s_1}{p_1}{q_1}{r_1} \cap \hsb{s_2}{p_2}{q_2}{r_2}$, if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $ q_* \le q$, $r_* \le r$. \item $s_* =s $, $p_1 \neq p_2$, $r_* \le r$. \item $s_* > s $, $r_* \le r$. \item $s_* =s$, $s_1\neq s_2$, $p_1=p_2$, $\max(q_1,q_2)\le q$. \item $s_* > s $, $s_2 - s_1 \neq n/p_2 -n/p_1$. \end{enumerate} \end{thm} \begin{proof} The theorem follows by using Theorem \ref{ebd-hom-B} instead of Theorem \ref{ebd-inhom-B} in the proof of Theorem \ref{thm-itp-inhomb}. 
\end{proof} From Remark \ref{rmk-conditions}, Theorems \ref{thm-itp-inhomb}, and \ref{thm-itp-homb}, we immediately obtain: \begin{thm}\label{thm-necessity-00-b} The interpolation inequality \[ \|f\|_{\ihsb{s}{p}{q_*}{r_*}}\lesssim \|f\|_{\ihsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\ihsb{s_2}{p_2}{q_2}{r_2}}^\theta \] holds for all $f\in B^{s_1,r_1}_{p_1,q_1} \cap B^{s_2,r_2}_{p_2,q_2}$, if and only if $$ s_* -s \ge \frac{n}{p_*} -\frac{n}{p}\ge 0. $$ \end{thm} \begin{thm}\label{thm-necessity-00-b-hom} The interpolation inequality \[ \|f\|_{\hsb{s}{p}{q_*}{r_*}}\lesssim \|f\|_{\hsb{s_1}{p_1}{q_1}{r_1}}^{1-\theta}\|f\|_{\hsb{s_2}{p_2}{q_2}{r_2}}^\theta \] holds for all $f\in \dot B^{s_1,r_1}_{p_1,q_1} \cap \dot B^{s_2,r_2}_{p_2,q_2}$, if and only if $$ s_* -s = \frac{n}{p_*} -\frac{n}{p}\ge 0. $$ \end{thm} Using the necessity part of Theorem \ref{ebd-inhom-B}, we can obtain several results on necessary conditions for interpolation inequalities in inhomogeneous Besov-Lorentz spaces. \begin{thm}\label{thm-necessity-01-b} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p <\infty$, $1<p_1,p_2 \le \infty$, $1\le r_1,r_2\le\infty$ and $0<\theta<1$. Then the interpolation inequality \begin{equation}\label{thm-necessity-inequality-01-b} \|f\|_{B^{s,r_*}_{p,1}}\lesssim \|f\|_{B^{s_1,r_1}_{p_1,\infty}}^{1-\theta}\|f\|_{B^{s_2,r_2}_{p_2,\infty}}^\theta \end{equation} holds for all $f\in B^{s_1,r_1}_{p_1,\infty} \cap B^{s_2,r_2}_{p_2,\infty}$, if and only if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* \ge s$, $p_*=p$, $p_1\neq p_2$. \item $s_*-s \ge n/p_* - n/p>0$. \end{enumerate} \end{thm} \begin{proof} By Theorem \ref{thm-itp-inhomb}, it suffices to prove the necessity part of the theorem. Suppose that $\eqref{thm-necessity-inequality-01-b}$ holds for all $f\in B^{s_1,r_1}_{p_1,\infty} \cap B^{s_2,r_2}_{p_2,\infty}$. 
Then since $$ s_* -s \ge \frac{n}{p_*} -\frac{n}{p}\ge 0, $$ there are two possibilities: (a) $s_* \ge s$, $p_*=p$ and (b) $s_*-s \ge n/p_* - n/p>0$. Suppose that $p_1=p_2$. If $s_1\neq s_2$, then by Theorem \ref{lem-itp-homb}, $$ B^{s_*,1}_{p_*} \hookrightarrow (B^{s_1,r_1}_{p_1}, B^{s_2,r_2}_{p_2})_{\theta,1}\hookrightarrow (B^{s_1,r_1}_{p_1,\infty}, B^{s_2,r_2}_{p_2,\infty})_{\theta,1}, $$ which holds trivially when $s_1 =s_2$. Moreover, since $\ihsb{s}{p}{1}{r_*}$ is a Banach space, $$ (B^{s_1,r_1}_{p_1,\infty}, B^{s_2,r_2}_{p_2,\infty})_{\theta,1}\hookrightarrow \ihsb{s}{p}{1}{r_*}. $$ By Theorem \ref{ebd-inhom-B}, we have $p_*<p$. Consequently, $p_*=p$ implies $p_1\neq p_2$. This completes the proof. \end{proof} \begin{thm}\label{thm-necessity-0-b} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p,p_1,p_2 < \infty$, $1 \le q_1 , q_2 \le \infty$, and $0<\theta<1$. If the interpolation inequality \begin{equation}\label{thm-necessity-inequality-0-b} \|f\|_{B^{s,1}_{p,q_*}}\lesssim \|f\|_{B^{s_1,\infty}_{p_1,q_1}}^{1-\theta}\|f\|_{B^{s_2,\infty}_{p_2,q_2}}^\theta \end{equation} holds for all $f\in B^{s_1,\infty}_{p_1,q_1} \cap B^{s_2,\infty}_{p_2,q_2}$, then one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $p_*=p$, $s_1\neq s_2$, $s_2-s_1\neq n/p_2-n/p_1$. \item $s_* > s$, $p_*=p$. \item $s_*-s=n/p_*-n/p>0$, $s_2-s_1\neq n/p_2-n/p_1$. \item $s_*-s>n/p_*-n/p>0$. \end{enumerate} \end{thm} \begin{proof} Suppose that $\eqref{thm-necessity-inequality-0-b}$ holds for all $f\in B^{s_1,\infty}_{p_1,q_1} \cap B^{s_2,\infty}_{p_2,q_2}$. Then there are four possibilities: (a) $s_* =s$, $p_*=p$, (b) $s_* > s$, $p_*=p$, (c) $s_*-s = n/p_*-n/p > 0$, and (d) $s_*-s > n/p_*-n/p > 0$. Suppose that $s_1=s_2$.
Then by Lemma \ref{thm-ebd-FB and BF}, Theorem \ref{lem-itp-inhomf}, and the reiteration theorem, $$ F^{s_*,\infty}_{p_*,1} \hookrightarrow (F^{s_1,\infty}_{p_1,q_1},F^{s_2,\infty}_{p_2,q_2})_{\theta,1} \hookrightarrow (B^{s_1,\infty}_{p_1,q_1},B^{s_2,\infty}_{p_2,q_2})_{\theta,1}\hookrightarrow B^{s,1}_{p,q_*}. $$ Hence it follows from \cite[Theorem 1.2]{seeger} that $s_* > s $. By contraposition, we have shown that if $s_* =s$, then $s_1 \neq s_2$. Suppose next that $s_*-s=n/p_*-n/p$ and $s_2-s_1=n/p_2-n/p_1$. We may assume that $s_1 \le s_2$. Then by Theorem \ref{ebd-inhom-B}, $$ B^{s_2,\infty}_{p_2, 1} \hookrightarrow(B^{s_1,\infty}_{p_1,q_1}, B^{s_2,\infty}_{p_2,q_2})_{\theta,1}\quad\text{but}\quad B^{s_2,\infty}_{p_2, 1} \not\hookrightarrow B^{s,1}_{p,q_*}. $$ Therefore, it follows that if $s_*-s=n/p_*-n/p$, then $s_2-s_1\neq n/p_2-n/p_1$. This completes the proof. \end{proof} \begin{thm}\label{thm-necessity-b} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 < p,p_1,p_2 < \infty$, and $0<\theta<1$. If the interpolation inequality \begin{equation*} \|f\|_{B^{s,1}_{p,1}}\lesssim \|f\|_{B^{s_1,\infty}_{p_1,\infty}}^{1-\theta}\|f\|_{B^{s_2,\infty}_{p_2,\infty}}^\theta \end{equation*} holds for all $f\in B^{s_1,\infty}_{p_1,\infty} \cap B^{s_2,\infty}_{p_2,\infty}$, then one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $p_*=p$, $s_1\neq s_2$, $p_1\neq p_2$, $s_2-s_1\neq n/p_2 - n/p_1$. \item $s_* >s$, $p_*=p$, $p_1 \neq p_2$. \item $s_*-s=n/p_* - n/p>0$, $s_2-s_1\neq n/p_2 - n/p_1$. \item $s_*-s>n/p_*-n/p>0$. \end{enumerate} \end{thm} \begin{proof} The theorem immediately follows from Theorems \ref{thm-necessity-01-b} and \ref{thm-necessity-0-b}. 
\end{proof} Compared with homogeneous Triebel-Lizorkin-Lorentz spaces, one advantage of Besov-Lorentz spaces is that the interpolation inequalities in Theorems \ref{thm-itp-inhomb} and \ref{thm-itp-homb} do hold for the limiting case when one of $p$, $p_1$, and $p_2$ is equal to $1$ or $\infty$. Some conclusions of Theorem \ref{thm-itp-homf} can be extended to the limiting case when $p=1$, $p_1 =1$, or $ p_2=1$, by using the elementary embedding results in Lemma \ref{thm-ebd-FB and BF}. \begin{thm}\label{cor0} Let $s , s_1 , s_2 \in \mathbb{R}$, $1 \le p,p_1,p_2 < \infty$, $1 \le q , q_1 , q_2 \le \infty$, and $0<\theta<1$. Then the interpolation inequality \[ \|f\|_{\hsf{s}{p}{q}{1}}\lesssim \|f\|_{\hsf{s_1}{p_1}{q_1}{\infty}}^{1-\theta}\|f\|_{\hsf{s_2}{p_2}{q_2}{\infty}}^\theta \] holds for all $f \in\hsf{s_1}{p_1}{q_1}{\infty} \cap \hsf{s_2}{p_2}{q_2}{\infty}$, if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_* =s$, $s_1\neq s_2$, $p=p_1=p_2 =1$, $q=q_1 =q_2 =1$. \item $ s_* -s =n/p_* -n/p >0$, $s_2 -s_1 \neq n/p_2 -n/p_1$. \end{enumerate} \end{thm} \begin{proof} The theorem is an immediate consequence of Theorem \ref{thm-itp-homb} and Lemma \ref{thm-ebd-FB and BF}. \end{proof} \section{Application to Gagliardo-Nirenberg inequalities} Applying the interpolation inequalities of the previous sections, we shall extend Gagliardo-Nirenberg inequalities to Lorentz spaces, which are of some interest in the theory of partial differential equations (see, e.g., \cite{McC2}). \subsection{Sobolev-Lorentz spaces} For $s\in\mathbb{R}$, we define $J^s=(I-\Delta)^{s/2}:\mathscr{S}'\rightarrow\mathscr{S}'$ via the Fourier transform by $$ \widehat{J^s f} = \left( 1+|\xi|^2 \right)^{s/2} \hat f \quad\mbox{for}\,\, f \in \mathscr{S}'. $$ Then for $s\in\mathbb{R}$, $1 < p < \infty$, and $1 \le q \le \infty$, the Sobolev-Lorentz space $\ihs{s}{p}{q}$ is defined by $$ \ihs{s}{p}{q}=\{ f \in \mathscr{S}' : J^s f \in L^{p,q}\}.
$$ The space $\ihs{s}{p}{q}$ equipped with the quasi-norm $\|f \|_{\ihs{s}{p}{q}} = \| J^s f \|_{L^{p,q}}$ is a quasi-Banach space. Obviously, $J^\sigma$ is an isometric isomorphism from $\ihs{s}{p}{q}$ onto $\ihs{s-\sigma}{p}{q}$ for $\sigma\in\mathbb{R}$. The homogeneous Sobolev-Lorentz space $\hs{s}{p}{q}$ is defined by $$ \hs{s}{p}{q}=\{ f \in \mathscr{S}'_0 : \Lambda^s f \in L^{p,q}\} , $$ where $\Lambda^s=(-\Delta)^{s/2}:\mathscr{S}'_0\rightarrow\mathscr{S}'_0$ is defined by $$ \langle\Lambda^sf,\psi\rangle= \left\langle f , \left( |\xi|^{s} \psi^\vee \right)\,\widehat{\ }\, \right\rangle \quad\text{for every $f\in\mathscr{S}_0' , \, \psi\in \mathscr{S}_0$}. $$ Note that $\Lambda^\sigma$ is an isometric isomorphism from $\hs{s}{p}{q}$ onto $\hs{s-\sigma}{p}{q}$ for any $\sigma\in\mathbb{R}$. For $1<p=q<\infty$, we write $H_{p}^s = \ihs{s}{p}{p}$ and $\dot{H}_p^s = \hs{s}{p}{p}$. \begin{thm}\label{SFE2-inhom} For $s\in\mathbb{R}$, $1<p<\infty$, and $1\le q \le \infty$, we have \[ \ihsf{s}{p}{q}{2}= \ihs{s}{p}{q}\quad \text{and} \quad \hsf{s}{p}{q}{2}= \hs{s}{p}{q}. \] \end{thm} \begin{proof} It was shown in \cite[Theorem 2.3.8 and Section 5.2.3]{Tri1} that if $s \in\mathbb{R}$, $1 \le p<\infty$, and $1\le r\le \infty$, then $J^s$ maps $F^{s,r}_p$ isomorphically onto $F^{0,r}_p$ and $\Lambda^s$ maps $\dot F^{s,r}_p$ isomorphically onto $\dot F^{0,r}_p$. It follows from the interpolation results in Theorem \ref{lem-itp-inhomf} that $J^s$ maps $\ihsf{s}{p}{q}{r}$ isomorphically onto $\ihsf{0}{p}{q}{r}$ and $\Lambda^s$ maps $\hsf{s}{p}{q}{r}$ isomorphically onto $\hsf{0}{p}{q}{r}$ for $1<r\le\infty$. Hence by Part (i) of Theorem \ref{prop-relation}, we complete the proof. \end{proof} By virtue of Theorem \ref{SFE2-inhom}, the following embedding results for Sobolev-Lorentz spaces are immediate consequences of Theorems \ref{ebd-inhom-TLL} and \ref{ebd-hom-TLL}.
\begin{thm}\label{embedding-inhom} Let $s_1,s_2\in\mathbb{R}$, $1< p_1,p_2<\infty$, and $1 \le q_1,q_2\le \infty$. Then the embedding $$ \ihs{s_1}{p_1}{q_1} \hookrightarrow \ihs{s_2}{p_2}{q_2} $$ holds if and only if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $s_1\ge s_2$, $p_1=p_2$, $q_1\le q_2$. \item $s_1 -s_2 = n/p_1- n/p_2 >0 $, $q_1\le q_2$. \item $s_1 -s_2 > n/p_1- n/p_2 >0 $. \end{enumerate} \end{thm} \begin{thm}\label{ebd-hom} Let $s_1,s_2\in\mathbb{R}$, $1< p_1,p_2<\infty$, and $1\le q_1,q_2\le \infty$. Then the embedding $$ \hs{s_1}{p_1}{q_1} \hookrightarrow \hs{s_2}{p_2}{q_2} $$ holds if and only if $$ s_1-s_2=\frac{n}{p_1}-\frac{n}{p_2}\ge0\quad\text{and}\quad q_1\le q_2. $$ \end{thm} For $k\in\mathbb{N}$, $1 \le p < \infty$, and $1\le q\le \infty$, we consider \begin{align*} W^{k,p,q} = \{f\in \mathscr{S}' : D^\alpha f \in L^{p,q} \,\,\text{for}\,\, |\alpha|\le k\}, \end{align*} which is a quasi-Banach space equipped with the quasi-norm $$ \|f\|_{W^{k,p,q}} = \sum_{|\alpha|\le k} \|D^\alpha f\|_{L^{p,q}}. $$ The corresponding homogeneous space $\dot W^{k,p,q}$ is defined by $$\dot W^{k,p,q} = \{ f \in \mathscr{S}_0' : D^\alpha f \in L^{p,q} \,\,\text{for}\,\, |\alpha|= k\}.$$ More precisely, $\dot W^{k,p,q}$ consists of all $f\in\mathscr{S}_0'$ having an extension $F\in\mathscr{S}'$ such that $D^\alpha F\in L^{p,q}$ for all multi-indices $\alpha$ with $|\alpha|=k$. In this case, we define $D^\alpha f= D^\alpha F$ for $|\alpha|=k$ because such an extension $F$ is unique up to polynomials of degree at most $k-1$. Then $\dot W^{k,p,q}$ is a quasi-Banach space equipped with the quasi-norm $$ \|f\|_{\dot W^{k,p,q}} = \sum_{|\alpha|= k} \|D^\alpha f\|_{L^{p,q}}. $$ We define $W^{k,p}=W^{k,p,p}$ and $\dot W^{k,p} = \dot W^{k,p,p}$. 
Then it is well-known that $$ W^{k,p} = F^{k,2}_p =H^{k}_{p} \quad\mbox{and}\quad \dot{W}^{k,p}= \dot{F}^{k,2}_p = \dot{H}^{k}_{p} $$ for $1<p<\infty$ (see, e.g., \cite[Subsections 2.5.6 and 5.2.3]{Tri1}). \begin{thm}\label{Interpolation result-W} Let $1<p, p_1 , p_2 <\infty$ and $0<\theta<1$ satisfy $$ p_1 \neq p_2 \quad \mbox{and}\quad \frac{1}{p}= \frac{1-\theta}{p_1}+ \frac{\theta}{p_2}. $$ Then for $k \in {\mathbb N}$ and $1 \le q \le \infty$, $$ (W^{k,p_1},W^{k,p_2})_{\theta,q}=W^{k,p,q} \quad \text{and} \quad (\dot W^{k,p_1},\dot W^{k,p_2})_{\theta,q}=\dot W^{k,p,q}. $$ \end{thm} \begin{proof} Since the first interpolation result is proved in \cite[Theorem 6]{Adams}, it remains to prove the second one. Let $K(t,f;X_1,X_2)$ denote the $K$-functional for a given compatible couple $(X_1,X_2)$ of two quasi-Banach spaces: $$ K(t,f;X_1,X_2) = \inf\{ \|f_1\|_{X_1} + t\|f_2\|_{X_2} : f=f_1+f_2,\, f_1\in X_1,\, f_2\in X_2 \}. $$ Then it is easy to show that if $f \in (\dot W^{k,p_1},\dot W^{k,p_2})_{\theta,q}$, then $$ \sum_{|\alpha| =k} K(t, D^\alpha f; L^{p_1}, L^{p_2}) \lesssim K(t,f;\dot W^{k,p_1}, \dot W^{k,p_2}) . $$ To prove the reverse inequality, suppose that $f\in \dot W^{k,p,q}$. Note that \[ f = \frac{1}{(2\pi i)^k} \sum_{|\alpha|=k}\binom{k}{\alpha}\Lambda^{-k}\left[ \left( \frac{\xi^\alpha}{|\xi|^{k}}\widehat{D^\alpha f}\right) ^\vee\right] \quad \text{in $\mathscr{S}_0'$}. \] For each multi-index $\alpha$ with $|\alpha|=k$, we choose $g_1^\alpha \in L^{p_1}$ and $g_2^\alpha \in L^{p_2}$ such that $D^\alpha f= g_1^\alpha + g_2^\alpha$. Recall now that if $|\alpha|=k$, then $\xi^\alpha/|\xi|^k$ is an $L^r$-Fourier multiplier for $1<r<\infty$. 
Therefore, \begin{align*} K(t,f; \dot W^{k,p_1},\dot W^{k,p_2}) &\lesssim \sum_{|\alpha|=k}\left( \left\| \Lambda^{-k}\left( \frac{\xi^\alpha}{|\xi|^{k}}\widehat{g_1^\alpha}\right)^\vee\right\|_{\dot W^{k,p_1}} + t \left\| \Lambda^{-k}\left( \frac{\xi^\alpha}{|\xi|^{k}}\widehat{g_2^\alpha}\right)^\vee\right\|_{\dot W^{k,p_2}}\right) \\ & \sim \sum_{|\alpha|=k} \left(\left\| \Lambda^{-k} \left( \frac{\xi^\alpha}{|\xi|^{k}}\widehat{g_1^\alpha}\right)^\vee\right\|_{\dot H^{k}_{p_1}} + t \left\| \Lambda^{-k} \left( \frac{\xi^\alpha}{|\xi|^{k}}\widehat{g_2^\alpha}\right)^\vee\right\|_{\dot H^{k}_{p_2}}\right) \\ & = \sum_{|\alpha|=k} \left( \left\| \left( \frac{\xi^\alpha}{|\xi|^{k}}\widehat{g_1^\alpha}\right)^\vee\right\|_{L^{p_1}} + t \left\| \left( \frac{\xi^\alpha}{|\xi|^{k}}\widehat{g_2^\alpha}\right)^\vee\right\|_{L^{p_2}}\right) \\ & \lesssim \sum_{|\alpha|=k} \left( \|g_1^\alpha \|_{L^{p_1}} + t \|g_2^\alpha\|_{L^{p_2}}\right). \end{align*} By the arbitrariness of $g_1^\alpha$ and $g_2^\alpha$ with $D^\alpha f= g_1^\alpha + g_2^\alpha$, we conclude that $$ K(t,f; \dot W^{k,p_1},\dot W^{k,p_2}) \lesssim \sum_{|\alpha| =k} K(t, D^\alpha f; L^{p_1}, L^{p_2}) . $$ This completes the proof. \end{proof} By Theorems \ref{lem-itp-inhomf}, \ref{SFE2-inhom}, and \ref{Interpolation result-W}, we immediately obtain \begin{thm}\label{H equal W} For any $k\in\mathbb{N}$, $1<p<\infty$, and $1\le q\le\infty$, $$ \ihs{k}{p}{q}= W^{k,p,q} \quad \text{and} \quad \hs{k}{p}{q} =\dot W^{k,p,q}. $$ \end{thm} The embedding results below will be used to prove Gagliardo-Nirenberg inequalities involving the critical Lorentz space $L^{1,\infty}$. \begin{thm}\label{thm-ebd-B} \begin{enumerate}[label = \textup{(\roman*)}] \item If $1\le p \le \infty$, then $ \dot B^{n/p,1}_{p,\infty} \hookrightarrow L^\infty.$ \item If $1\le p < \infty$, then $ L^1 \hookrightarrow \dot B^{-n(1-1/p),\infty}_{p,1}$. 
\item If $1 \le p<\infty$, then $\dot B^{n/p,\infty}_{p,\infty} \hookrightarrow BMO$; more precisely, for each $f\in \dot B^{n/p,\infty}_{p,\infty}$, there exists $g\in BMO$, unique up to additive constants, such that \begin{equation}\label{BMO-equation} \lg f, \varphi \rangle = \int g\varphi \quad\text{for all $\varphi\in \mathscr{S}_0$} \quad\mbox{and}\quad \|g\|_{BMO}\lesssim \|f\|_{\dot B^{n/p,\infty}_{p,\infty}}. \end{equation} \end{enumerate} \end{thm} \begin{proof} (i) It follows from Theorems \ref{ebd-hom-B} and \ref{prop-relation} that $ \dot B^{n/p,1}_{p,\infty} \hookrightarrow \dot B^{0,1}_{\infty} \hookrightarrow L^\infty.$ (ii) By Theorems \ref{prop-relation} and \ref{ebd-hom-B}, $ L^1 \hookrightarrow \dot B^{0,\infty}_{1}\hookrightarrow \dot B^{-n(1-1/p),\infty}_{p,1}$. (iii) By Theorem \ref{ebd-hom-B}, it suffices to show that $\dot B^{n/p,\infty}_{p} \hookrightarrow BMO$ for $1<p<\infty$. Suppose that $1<p<\infty$ and $f\in \dot B^{n/p,\infty}_p$. Recall from the duality theorem in \cite[Chapter 3]{Peetre2} and Theorem \ref{thm-ebd-FB-2} that $$ \left (\dot B^{-n/p,1}_{p/(p-1)} \right)^* = \dot B^{n/p,\infty}_p \quad\text{and} \quad \mathscr{S}_0 \subset \dot F^{0,2}_1 \hookrightarrow \dot B^{-n/p,1}_{p/(p-1)}. $$ Hence for all $\varphi\in\mathscr{S}_0$, we have $$ | \lg f, \varphi \rangle | \lesssim \|f\|_{\dot B^{n/p,\infty}_p} \|\varphi\|_{\dot B^{-n/p,1}_{p/(p-1)}} \lesssim \|f\|_{\dot B^{n/p,\infty}_p} \|\varphi\|_{\dot F^{0,2}_{1}}. $$ It is well-known that $\mathscr{S}_0$ is dense in $\dot F^{0,2}_1$ and $\dot F^{0,2}_1$ is the Hardy space $\mathcal{H}^1$ (see, e.g., \cite[Theorem 2.2.9]{Gra1}). Thus there exists $F\in \left(\mathcal{H}^1\right)^*$ such that $$ \lg f, \varphi \rangle = \lg F, \varphi \rangle \quad\text{for all $\varphi\in \mathscr{S}_0$} \quad \text{and}\quad \|F\|_{(\mathcal{H}^1)^*} \lesssim \|f\|_{\dot B^{n/p,\infty}_p}. 
$$ Moreover, since $BMO$ is the dual space of $\mathcal{H}^1$, there exists $g\in BMO$ such that $$ \int_{\{|x|\le1\}} g =0, \quad \|g\|_{BMO} \lesssim \|F\|_{(\mathcal{H}^1)^*}, $$ and \begin{equation}\label{BMO-eq} \lg F, \varphi \rangle = \int g\varphi \end{equation} for all $\varphi\in \mathcal{H}^1_0$, where $\mathcal{H}^1_0$ is the space of all finite linear combinations of $L^2$-atoms for $\mathcal{H}^1$. Now we show that \eqref{BMO-eq} holds for all $\varphi\in\mathscr{S}_0$. Note that the integral in \eqref{BMO-eq} is well-defined for $\varphi\in\mathscr{S}_0$. Indeed, since $ \int_{\{|x|\le1\}} g =0, $ it follows from \cite[Proposition 3.1.5]{Gra1} that \begin{equation}\label{BMO-eq2} \int \frac{|g(x)|}{(1+|x|)^{n+1}}dx \lesssim \|g\|_{BMO}. \end{equation} Suppose that $\varphi\in\mathscr{S}_0.$ Then since $\varphi \in \dot F^{0,2}_1 = \mathcal{H}^1$, there exists a sequence $\{\varphi_N\}$ in $\mathcal{H}^1_0$ such that $\varphi_N \rightarrow \varphi$ in $\mathcal{H}^1$. For $M>0$, define \begin{equation*} g_M = \begin{cases} \quad M &\quad\text{if}\,\, g > M \\ \quad \,g &\quad\text{if}\,\, |g| \le M \\ -M &\quad\text{if}\,\, g<-M. \end{cases} \end{equation*} Then since $g_M$ is bounded, \begin{align*} \left| \int g_M (\varphi_N - \varphi) \right| &\lesssim \|g_M\|_{BMO} \|\varphi_N - \varphi\|_{\mathcal{H}^1} \lesssim \|g\|_{BMO} \|\varphi_N - \varphi\|_{\mathcal{H}^1}. \end{align*} Note that $|g_M (\varphi_N-\varphi)| \le |g(\varphi_N-\varphi)| \in L^1$. Thus letting $M\rightarrow \infty$, we obtain \begin{align*} \left| \int g (\varphi_N - \varphi) \right| \lesssim \|g\|_{BMO} \|\varphi_N - \varphi\|_{\mathcal{H}^1}. \end{align*} Therefore, $$ \lg F, \varphi \rangle = \lim_{N\rightarrow \infty} \lg F, \varphi_N \rangle = \lim_{N\rightarrow \infty} \int g \varphi_N = \int g \varphi, $$ which proves \eqref{BMO-equation}. Finally, the uniqueness of $g$ follows from \eqref{BMO-eq2} (see \cite[p. 243]{Tri1}). 
\end{proof} \begin{thm}\label{thm-ebd-Linf} If $1<p<\infty$, then $$ \dot{H}_{p,1}^{n/p} \hookrightarrow\hsf{n/p}{p}{1}{\infty} \hookrightarrow L^\infty, \quad \dot{H}_{p,\infty}^{n/p} \hookrightarrow\hsf{n/p}{p}{\infty}{\infty} \hookrightarrow BMO, $$ and $$ L^1 \hookrightarrow \hsf{-n(1-1/p)}{p}{\infty}{1} \hookrightarrow \dot{H}^{-n(1-1/p)}_{p,\infty}. $$ \end{thm} \begin{proof} By Theorem \ref{SFE2-inhom}, Lemma \ref{thm-ebd-FB and BF}, and Theorem \ref{thm-ebd-B}, we have \[ \dot{H}_{p,\infty}^{n/p} \hookrightarrow\hsf{n/p}{p}{\infty}{\infty} \hookrightarrow \hsb{n/p}{p}{\infty}{\infty} \hookrightarrow BMO . \] Choose any numbers $p_1$ and $p_2$ such that $1<p_1<p<p_2 < \infty$. Then by Theorems \ref{thm-ebd-B}, \ref{thm-ebd-FB}, and \ref{SFE2-inhom}, \[ \dot{H}_{p,1}^{n/p} \hookrightarrow\hsf{n/p}{p}{1}{\infty} \hookrightarrow \dot B^{n/p_2 ,1}_{p_2} \hookrightarrow L^\infty \] and \[ L^1 \hookrightarrow \dot{B}^{-n(1-1/p_1),\infty}_{p_1,1}\hookrightarrow \hsf{-n(1-1/p)}{p}{\infty}{1} \hookrightarrow \dot{H}^{-n(1-1/p)}_{p,\infty}. \qedhere \] \end{proof} The corresponding embedding results also hold for inhomogeneous spaces; their proofs are exactly the same as that of Theorem \ref{thm-ebd-Linf} and are therefore omitted. \begin{thm}\label{thm-ebd-Linf-inhom} If $1<p<\infty$, then $$ H_{p,1}^{n/p} \hookrightarrow \ihsf{n/p}{p}{1}{\infty} \hookrightarrow L^\infty, \quad H_{p,\infty}^{n/p} \hookrightarrow\ihsf{n/p}{p}{\infty}{\infty} \hookrightarrow BMO, $$ and $$ L^1 \hookrightarrow \ihsf{-n(1-1/p)}{p}{\infty}{1}\hookrightarrow H^{-n(1-1/p)}_{p,\infty}. $$ \end{thm} \subsection{Gagliardo-Nirenberg inequalities in Lorentz spaces} From Theorems \ref{thm-itp-inhomf}, \ref{thm-itp-homf}, and \ref{SFE2-inhom}, we immediately obtain the following interpolation inequalities in Sobolev-Lorentz spaces, which are indeed Gagliardo-Nirenberg inequalities for fractional derivatives in Lorentz spaces. 
\begin{thm}\label{thm-itp-inhomh} Let $s, s_1,s_2\in\mathbb{R}$, $1< p, p_1,p_2<\infty$, and $1\le q, q_1,q_2\le \infty$. Assume that \[ s_* -s \ge \frac{n}{p_*}-\frac{n}{p} \ge 0 . \] Then the interpolation inequality \begin{equation*}\label{thm-itp-sobolev-inhom} \|f\|_{\ihs{s}{p}{q}}\lesssim \|f\|_{\ihs{s_1}{p_1}{q_1}}^{1-\theta}\|f\|_{\ihs{s_2}{p_2}{q_2}}^\theta \end{equation*} holds for all $f \in\ihs{s_1}{p_1}{q_1} \cap \ihs{s_2}{p_2}{q_2}$, if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $q_* \le q$. \item $s_1=s_2$, $p_* =p$, $p_1 \neq p_2$. \item $s_* >s$, $p_*=p$, $p_1\neq p_2$. \item $s_* -s > n/p_* -n/p >0$. \item $s_* -s = n/p_* -n/p >0$, $s_2 -s_1 \neq n/p_2 -n/p_1$. \end{enumerate} \end{thm} \begin{thm}\label{thm-itp-homh} Let $s, s_1,s_2\in\mathbb{R}$, $1< p, p_1,p_2<\infty$, and $1\le q, q_1,q_2\le \infty$. Assume that \[ s_* -s = \frac{n}{p_*}-\frac{n}{p} \ge 0 . \] Then the interpolation inequality \begin{equation*} \|f\|_{\hs{s}{p}{q}}\lesssim \|f\|_{\hs{s_1}{p_1}{q_1}}^{1-\theta}\|f\|_{\hs{s_2}{p_2}{q_2}}^\theta \end{equation*} holds for all $f \in\hs{s_1}{p_1}{q_1} \cap \hs{s_2}{p_2}{q_2}$, if one of the following conditions is satisfied: \begin{enumerate}[label = \textup{(\roman*)}] \item $q_* \le q$. \item $s =s_1=s_2 $, $p_1 \neq p_2$. \item $ s_* >s$, $s_2 -s_1 \neq n/p_2 -n/p_1$. \end{enumerate} \end{thm} Next, we show that Gagliardo-Nirenberg inequalities in Lorentz spaces may hold even for the limiting case when some exponent is equal to $1$ or $\infty$. \begin{thm}\label{cor-BL1} Let $s > 0$, $1 < p \le \infty$, $1 \le p_1 , p_2 \le \infty$, and $0<\theta<1$ satisfy $$ \frac{1}{p}=\frac{1-\theta}{p_1}+\theta\left(\frac{1}{p_2}-\frac{s}{n}\right) \quad\text{and}\quad\frac{1}{p_1} \neq \frac{1}{p_2}-\frac{s}{n}. $$ Assume also that $ q =1$ if $p<\infty$ and $q=\infty$ if $p=\infty$, and that $ q_1 = \infty$ if $ p_1 >1$ and $q_1 =1$ if $p_1 =1$. 
Then for each $f \in L^{p_1 , q_1} \cap \hsb{s}{p_2}{\infty}{\infty}$, we have \[ \|f -c \|_{L^{p,q}}\lesssim \|f\|_{L^{p_1,q_1}}^{1-\theta}\|f\|_{\hsb{s}{p_2}{\infty}{\infty}}^\theta , \] where $c = c_{p_1} (f)$ is some constant with $c_{p_1} (f) =0 $ if $p_1 < \infty$. \end{thm} \begin{proof} Suppose that $f \in L^{p_1 , q_1} \cap \hsb{s}{p_2}{\infty}{\infty}$. Then by Theorems \ref{thm-itp-homb} and \ref{prop-relation}, there is a constant $c $ such that \[ \|f -c \|_{L^{p,q}} \lesssim \|f\|_{\dot{B}_{p,q}^{0,1}} \lesssim \|f\|_{L^{p_1,q_1}}^{1-\theta}\|f\|_{\hsb{s}{p_2}{\infty}{\infty}}^\theta . \] It remains to show that if $p_1 < \infty$, then we can take $c=0$. It is clear that if $p <\infty$ and $p_1 < \infty$, then $c $ must be zero. Suppose that $p=\infty$ and $p_1 <\infty$. Choosing any $\psi \in C_c^\infty(\mathbb{R}^n)$ such that $\psi=1$ on $\{|\xi|\le1\}$ and $\psi=0$ on $\{|\xi|\ge3/2\}$, we define $\varphi_j (\cdot) = \psi (2^{-j}\cdot)-\psi (2^{-j+1}\cdot)$ on $\mathbb{R}^n$ for each $j\in\mathbb{Z}$. Then since $\varphi=\{\varphi_j\}_{j\in\mathbb{Z}}$ is a sequence in $C_c^\infty(\mathbb{R}^n)$ satisfying \eqref{cond1-hom}, \eqref{cond2-hom}, and \eqref{cond3-hom}, we have \[ \sum_{j \in \mathbb{Z}} \|\Delta_j^\varphi f \|_{L^{\infty}} \lesssim \|f\|_{\dot{B}_{\infty}^{0,1}} < \infty , \] which implies that the series $\sum_{j \in \mathbb{Z}} \Delta_j^\varphi f$ converges in $L^{\infty}$ to a function $F$ satisfying $\|F\|_{L^{\infty}} \le \|f\|_{\dot{B}^{0,1}_{\infty}}$. On the other hand, since $f \in L^{p_1 , q_1} \cap L^{\infty}$, it can be easily deduced that the series $\sum_{j \in \mathbb{Z}} \Delta_j^\varphi f$ indeed converges to $f$ in the sense of distributions. Hence it follows that $F = f$ identically on $\mathbb{R}^n$. This completes the proof. 
\end{proof} Recall from Lemma \ref{thm-ebd-FB and BF}, Theorems \ref{prop-relation}, and \ref{SFE2-inhom} that $\hsf{s}{p}{\infty}{\infty} \hookrightarrow \hsb{s}{p}{\infty}{\infty}$ for $1 \le p< \infty$ and $\dot{H}^{s}_{p ,\infty} \hookrightarrow \hsf{s}{p}{\infty}{\infty} $ if $1 < p< \infty$. Therefore, from Theorem \ref{cor-BL1}, we can derive various interpolation inequalities which generalize Gagliardo-Nirenberg inequalities. Some of them are listed below. \begin{eg} The conditions of Theorem \ref{cor-BL1} are all satisfied when $1 \le p_1=p_2 < p <\infty$, $0< sp_1 <n$, and $1/p_1 -1/p =\theta s/n$. Hence it follows from Theorem \ref{cor-BL1} that if $0< sp_1 < n$, $1 \le p_1 < p< np_1/(n-sp_1 )$, and $\theta = n(1/p_1 -1/p)/s$, then \[ \|f\|_{L^{p,1}} \lesssim \|f\|_{L^{p_1,q_1}}^{1-\theta}\|f\|_{\hsb{s}{p_1}{\infty}{\infty}}^\theta \lesssim \|f\|_{L^{p_1,q_1}}^{1-\theta}\|f\|_{\hsf{s}{p_1}{\infty}{\infty}}^{\theta} , \] where $q_1 =\infty$ if $p_1 >1$ and $q_1 =1$ if $p_1 =1$. Moreover, if $p_1 >1$, then \[ \|f\|_{L^{p,1}} \lesssim \|f\|_{L^{p_1,\infty}}^{1-\theta}\|\Lambda^s f\|_{L^{p_1 ,\infty}}^{\theta} \quad\mbox{for all}\,\, f \in H^{s}_{p_1,\infty} . \] \end{eg} \begin{eg} The conditions of Theorem \ref{cor-BL1} are all satisfied when $1< p < \infty$, $p_1 =\infty$, $1 \le p_2 <\infty$, $0<sp_2<n$, and $1/p =\theta (1/p_2 -s/n )$. Hence, if $1 \le p_2 < \infty$ and $0< 1/p < 1/p_2 -s/n$, then \[ \|f -c \|_{L^{p,1}} \lesssim \|f\|_{L^{\infty}}^{1-\theta}\|f\|_{\hsb{s}{p_2}{\infty}{\infty}}^{\theta} \lesssim \|f\|_{L^{\infty}}^{1-\theta}\|f\|_{\hsf{s}{p_2}{\infty}{\infty}}^{\theta} \] for some constant $c $, where $0< \theta <1$ is defined by $1/p =\theta (1/p_2 -s/n )$. \end{eg} \begin{eg} Let $1 \le p_1 <p < \infty$, $1<p_2 < \infty$, and $s =n/p_2 $. 
Then taking $\theta=1-p_1/p$ in Theorem \ref{cor-BL1}, we have \begin{align*} \|f\|_{L^{p,1}}&\lesssim \|f\|_{L^{p_1, q_1}}^{p_1 /p}\|f\|_{\hsb{n/p_2}{p_2}{\infty}{\infty}}^{1-p_1/p} \\ &\lesssim \|f\|_{L^{p_1,q_1}}^{p_1/p}\|f\|_{\hsf{n/p_2}{p_2}{\infty}{\infty}}^{1-p_1/p} \\ &\lesssim \|f\|_{L^{p_1, q_1}}^{p_1 /p}\|\Lambda^{n/p_2} f\|_{L^{p_2 ,\infty}}^{1-p_1/p} , \end{align*} where $q_1 =\infty$ if $p_1 >1$ and $q_1 =1$ if $p_1 =1$. In particular, if $k \in \mathbb{N}$, $1 \le k < n$, and $n/k < p<\infty$, then $$ \|f\|_{L^{p,1}}\lesssim \|f\|_{L^{n/k,\infty}}^{n/kp} \| f\|_{\dot{W}^{k, n/k,\infty}}^{1-n/kp} \quad\mbox{for all}\,\, f \in W^{k,n/k,\infty} , $$ which refines the famous Ladyzhenskaya inequality in \cite{lad}. \end{eg} \begin{eg} Let $s >0$ and $1< p <\infty$ satisfy $sp >n$. Then by Theorem \ref{cor-BL1}, \begin{align*} \|f\|_{L^\infty}&\lesssim \|f\|_{L^{p,\infty}}^{1-n/sp}\|f\|_{\hsb{s}{p}{\infty}{\infty}}^{n/sp} \quad\quad\mbox{for all}\,\, f \in B^{s,\infty}_{p,\infty} \\ &\lesssim \|f\|_{L^{p,\infty}}^{1-n/sp}\|f\|_{\hsf{s}{p}{\infty}{\infty}}^{n/sp} \quad\quad\mbox{for all}\,\, f \in F^{s,\infty}_{p,\infty} \\ &\lesssim \|f\|_{L^{p,\infty}}^{1-n/sp} \| \Lambda^s f\|_{L^{p,\infty}}^{n/sp} \quad\mbox{for all}\,\, f \in H^{s}_{p,\infty} . \end{align*} By Theorem \ref{cor-BL1}, we also have $$ \|f\|_{L^\infty}\lesssim \|f\|_{L^{1}}^{1-\theta} \| \Lambda^s f\|_{L^{p,\infty}}^{\theta} \quad\mbox{for all}\,\, f \in L^1 \cap \dot{H}^{s}_{p,\infty} , $$ where $0< \theta <1$ is defined by $\theta (s/n -1/p) = 1-\theta $. \end{eg} \begin{eg} Let $s>0$ and $1< p<\infty$. Then by Theorem \ref{cor-BL1}, we have $$ \|f\|_{L^{p,1}}\lesssim \|f\|_{L^{1}}^{1-\theta} \| \Lambda^s f\|_{L^{p,\infty}}^{\theta} \quad\mbox{for all}\,\, f \in L^1 \cap \dot{H}^{s}_{p,\infty} , $$ where $0< \theta <1$ is defined by $\theta s/n = (1-\theta )(1-1/p)$. 
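It is sometimes convenient to have $\theta$ in closed form; solving the defining relation (an elementary computation) gives
\[
\frac{\theta s}{n} = (1-\theta )\left(1-\frac{1}{p}\right)
\quad\Longleftrightarrow\quad
\theta = \frac{n(p-1)}{sp+n(p-1)} \quad\text{and}\quad 1-\theta = \frac{sp}{sp+n(p-1)} .
\]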
In particular, taking $s=1$ and $p=2$, we have \[ \|f\|_{L^{2,1}} \lesssim \|f\|_{L^{1}}^{2/(n+2)} \|\nabla f\|_{L^{2,\infty}}^{n/(n+2)}\quad\mbox{for all}\,\, f \in L^{1} \cap \dot{W}^{1,2, \infty}, \] which refines Nash's inequality in \cite{nash}. \end{eg} The $L^1$-norms in the above examples can be replaced by the weaker $L^{1,\infty}$-quasinorm, under some additional assumption for the case when $sp_2 >n$. \begin{thm}\label{cor2} Let $s > 0$, $1 < p \le \infty$, $1 \le p_1 \le \infty$, $1< p_2 < \infty$, and $0<\theta<1$ satisfy $$ \frac{1}{p}=\frac{1-\theta}{p_1}+\theta\left(\frac{1}{p_2}-\frac{s}{n}\right) \quad\text{and}\quad\frac{1}{p_1} \neq \frac{1}{p_2}-\frac{s}{n}. $$ Assume also that $ q =1$ if $p<\infty$ and $q=\infty$ if $p=\infty$. \begin{enumerate}[label=\textup{(\roman*)}] \item If $ p_1 >1$ or $sp_2 \le n$, then $$ \|f -c \|_{L^{p,q}}\lesssim \|f\|_{L^{p_1,\infty}}^{1-\theta} \| \Lambda^s f\|_{L^{p_2 ,\infty}}^{\theta} \quad\mbox{for all}\,\, f\in L^{p_1, \infty} \cap \dot{H}^{s}_{p_2 ,\infty} , $$ where $c = c_{p_1} (f)$ is some constant with $c_{p_1} (f) =0 $ if $p_1 < \infty$. \item If $p_1 =1$ and $sp_2 >n$, then $$ \|f \|_{L^{p,q}}\lesssim \|f\|_{L^{1,\infty}}^{1-\theta} \| \Lambda^s f\|_{L^{p_2 ,\infty}}^{\theta} \quad\mbox{for all}\,\, f\in L^{1, \infty} \cap H^{s}_{p_2 ,\infty} . $$ \end{enumerate} \end{thm} \begin{proof} \noindent \emph{Case I.} If $p_1 >1$, then the result is a special case of Theorem \ref{cor-BL1}. \noindent \emph{Case II.} Suppose that $p_1 =1$, $sp_2 \le n$, and $f\in L^{1, \infty} \cap \dot{H}^{s}_{p_2 ,\infty}$. If $sp_2 < n$, then by Lemma \ref{interpolation-Lp} and Theorem \ref{ebd-hom}, $$ \|f\|_{L^{p,1}} \lesssim \|f\|_{L^{1,\infty}}^{1-\theta}\|f\|_{L^{np_2 /(n-sp_2),\infty}}^\theta \lesssim \|f\|_{L^{1,\infty}}^{1-\theta}\|\Lambda^s f \|_{L^{p_2 ,\infty}}^\theta. $$ Suppose next that $sp_2 =n$. 
Then it follows from Theorem \ref{thm-ebd-Linf} that $\hs{s}{p_2}{\infty} \hookrightarrow BMO.$ Moreover, it was shown by Hanks \cite{hanks} that $(L^{1,\infty}, BMO)_{1-1/p,1}=L^{p,1}$. Consequently, $$ \|f\|_{L^{p,1}}\lesssim \|f\|_{L^{1,\infty}}^{1-\theta}\|f\|_{BMO}^{\theta} \lesssim \|f\|_{L^{1,\infty}}^{1-\theta} \|\Lambda^s f\|_{L^{p_2,\infty}}^\theta. $$ \noindent \emph{Case III.} Suppose that $p_1 =1$, $sp_2 >n$, and $f\in L^{1, \infty} \cap H^{s}_{p_2 ,\infty}$. Then it follows from Theorems \ref{embedding-inhom} and \ref{thm-ebd-Linf-inhom} that $f\in L^{1 ,\infty}\cap L^\infty \subset L^{p,q} $. Since \[ \frac{1}{p_2}-\frac{s}{n} < \frac{1}{p}< 1 \quad\mbox{and}\quad 0< \theta = \frac{1 -1/p }{1-1/p_2 +s/n } <1, \] there exists a number $\overline{p} >1 $, close to $1$, such that \[ \frac{1}{p_2}-\frac{s}{n} < \frac{1}{p} < \frac{1}{\overline{p}} \quad\mbox{and}\quad 0< \lambda := \frac{1/\overline{p} -1/p }{1/\overline{p}-1/p_2 +s/n } < 1 . \] Define \[ \mu = \frac{1-1/\overline{p}}{1-1/p} . \] Then since $1/ \overline{p} = (1-\mu)/1 +\mu/ p$ and $ 0< \mu < 1$, it follows from Lemma \ref{interpolation-Lp} that \[ \|f\|_{L^{\overline{p},\infty}} \lesssim \|f\|_{L^{1,\infty}}^{1-\mu} \|f\|_{L^{p,\infty}}^\mu . \] Note also that $$ \frac{1}{p}=\frac{1-\lambda}{\overline{p}}+\lambda\left(\frac{1}{p_2}-\frac{s}{n}\right) \quad\text{and}\quad\frac{1}{\overline{p}} \neq \frac{1}{p_2}-\frac{s}{n}. $$ Hence by Case I, \begin{align*} \|f\|_{L^{p,q}} &\lesssim \|f\|_{L^{\overline{p},\infty}}^{1-\lambda}\|\Lambda^s f \|_{L^{p_2,\infty}}^\lambda \\ & \lesssim \left( \|f\|_{L^{1,\infty}}^{1-\mu} \|f\|_{L^{p,\infty}}^\mu \right)^{1-\lambda}\|\Lambda^s f \|_{L^{p_2,\infty}}^\lambda \\ & \le C \left( \|f\|_{L^{1,\infty}}^{(1-\mu)(1-\lambda)} \|\Lambda^s f \|_{L^{p_2,\infty}}^{\lambda} \right)^{\frac{ 1}{1-\mu (1-\lambda)}} + \frac{1}{2} \|f\|_{L^{p,q}} \end{align*} for some constant $C>0$. 
Therefore, noting that \[ \frac{1-\mu(1-\lambda)}{\lambda}= \frac{1-\mu}{\lambda}+ \mu = \frac{1}{\theta} , \] we have \[ \|f\|_{L^{p,q}} \lesssim \|f\|_{L^{1,\infty}}^{\frac{(1-\mu)(1-\lambda)}{1-\mu (1-\lambda)}} \|\Lambda^s f \|_{L^{p_2,\infty}}^{\frac{ \lambda}{1-\mu (1-\lambda)}}= \|f\|_{L^{1,\infty}}^{1-\theta}\|\Lambda^s f \|_{L^{p_2,\infty}}^\theta . \qedhere \] \end{proof} From Theorem \ref{cor2}, we derive the following generalized Gagliardo-Nirenberg inequalities involving the $L^{1,\infty}$-quasinorm. \begin{eg} \begin{enumerate}[label=\textup{(\roman*)}] \item If $1<p , p_2 < \infty$, then $$ \|f\|_{L^{p,1}} \lesssim \|f\|_{L^{1, \infty}}^{1 /p}\|\Lambda^{n/p_2} f\|_{L^{p_2 ,\infty}}^{1- 1/p} \quad\mbox{for all}\,\, f \in L^{1,\infty} \cap \dot H^{n/p_2}_{p_2 ,\infty}. $$ \item If $s>0$ and $1< p<\infty$ satisfy $sp < n$, then $$ \|f\|_{L^{p,1}}\lesssim \|f\|_{L^{1,\infty}}^{1-\theta} \| \Lambda^s f\|_{L^{p,\infty}}^{\theta} \quad\mbox{for all}\,\, f \in L^{1,\infty} \cap \dot{H}^{s}_{p,\infty} , $$ where $0< \theta <1$ is defined by $\theta s/n = (1-\theta )(1-1/p)$. \item If $s>0$ and $1< p<\infty$ satisfy $sp>n$, then $$ \|f\|_{L^\infty}\lesssim \|f\|_{L^{1,\infty}}^{1-\theta} \| \Lambda^s f\|_{L^{p,\infty}}^{\theta} \quad\mbox{for all}\,\, f \in L^{1,\infty} \cap H^{s}_{p,\infty}, $$ where $0< \theta <1$ is defined by $\theta (s/n -1/p) = 1-\theta $. \end{enumerate} \end{eg} \section*{Appendix} \subsection*{A.1. Proof of Theorem \ref{lem-itp-inhomf}} Adapting the arguments in \cite{Bui}, we prove Theorem \ref{lem-itp-inhomf} only for the homogeneous case. The inhomogeneous case can be proved by the same argument. The following maximal inequality is due to Fefferman and Stein \cite{FS}. \begin{thm}\label{FS} Let $1<p<\infty$ and $1<r\le\infty$. 
Then for every sequence $\{f_j\}_{j\in\Gamma}$ in $L^{p}(l^r)$, $$\| \{Mf_j\}_{j\in\Gamma}\|_{L^{p}(l^r)} \lesssim \|\{f_j\}_{j\in\Gamma}\|_{L^{p}(l^r)},$$ where $Mg$ denotes the Hardy-Littlewood maximal function of a function $g$ on $\mathbb{R}^n$. \end{thm} The quasi-norm $\|\cdot\|_{\hsf{s}{p}{q}{r}}$ of $\hsf{s}{p}{q}{r}$ was defined in terms of a sequence $\{\varphi_j\}_{j\in\mathbb{Z}}$ of functions in $C_c^\infty(\mathbb{R}^n)$ satisfying \eqref{cond1-hom}, \eqref{cond2-hom}, and \eqref{cond3-hom}. But if $\psi \in C_c ^\infty(\mathbb{R}^n)$ satisfies $\mathrm{supp\,} \psi \subset \{1/2 \le |\xi|\le 2\}$ and $\psi >0$ on $\{3/5 \le |\xi|\le 5/3\}$, then the quasi-norm $\|\cdot\|_{\hsf{s}{p}{q}{r}}$ is equivalent to the quasi-norm corresponding to $\{\psi_j\}_{j\in\mathbb{Z}}$, where $\psi_j(\xi)=\psi(2^{-j}\xi)$. The following is taken from \cite[Lemma 6.9]{Fra1} (see also \cite[Exercise 1.1.5]{Gra1}). \begin{lem}\label{lem-function} Suppose that $\psi \in C_c ^\infty(\mathbb{R}^n)$ satisfies $\mathrm{supp\,} \psi \subset \{1/2 \le |\xi| \le 2 \}$ and $\psi >0$ on $\{3/5 \le |\xi| \le 5/3\}$. Then there exists $\varphi\in C_c^\infty(\mathbb{R}^n)$ such that $\mathrm{supp\,} \varphi \subset \{1/2 \le |\xi| \le 2\}$, $\varphi>0$ on $\{ 3/5 \le |\xi| \le 5/3 \}$, and $$ \sum_{j\in\mathbb{Z}} \varphi( 2^{-j} \xi)\psi( 2^{-j}\xi)=1\quad\text{on }\,\mathbb{R}^n\backslash\{0\}. $$ \end{lem} \begin{proof}[Proof of Theorem \ref{lem-itp-inhomf} for the homogeneous case] Let $\psi$ and $\varphi$ be two functions in $C_c^\infty(\mathbb{R}^n)$ satisfying all the properties of Lemma \ref{lem-function}. Define $\psi_j(\xi)= \psi(2^{-j}\xi)$ and $\varphi_j(\xi)= \varphi(2^{-j}\xi)$ for $j\in\mathbb{Z}$. It is quite easy to show that $$ K(t, \{2^{js}\Delta_j^\varphi f\}_{j\in\mathbb{Z}}; L^{p_1}(l^r), L^{p_2}(l^r)) \lesssim K(t, f ; \dot F ^{s,r}_{p_1}, \dot F^{s,r}_{p_2}) . 
$$ Hence to complete the proof, it suffices to show that $$ K(t, f ; \dot F ^{s,r}_{p_1}, \dot F^{s,r}_{p_2}) \lesssim K (t, \{2^{js}\Delta_j^\varphi f\}_{j\in\mathbb{Z}}; L^{p_1}(l^r), L^{p_2}(l^r))\quad\text{for $1<r\le\infty$}. $$ Suppose that $1<r \le \infty$ and $f\in \hsf{s}{p}{q}{r}$. Then by Lemma \ref{interpolation-Lp}, $$ \{ 2^{js} \Delta_j^\varphi f\}_{j\in\mathbb{Z}} \in L^{p,q}(l^r) = (L^{p_1}(l^r), L^{p_2}(l^r))_{\theta,q}. $$ Let $\{g_j\}_{j\in\mathbb{Z}}\in L^{p_1}(l^r)$ and $\{h_j\}_{j\in\mathbb{Z}} \in L^{p_2}(l^r)$ be chosen so that $2^{js}\Delta_j^\varphi f = g_j + h_j$ for each $j\in\mathbb{Z}$. Define $$ f_1 = \sum_{j\in\mathbb{Z}} 2^{-js} \psi_j^\vee \ast g_j \quad\text{and}\quad f_2= \sum_{j\in\mathbb{Z}} 2^{-js}\psi_j^\vee \ast h_j. $$ Then since $\sum_{j\in\mathbb{Z}} \varphi_j\psi_j =1$ on $\mathbb{R}^n\backslash\{0\}$, $$ f = \sum_{j\in\mathbb{Z}} ( \psi_j \varphi_j \hat{f}\,)^\vee = \sum_{j\in\mathbb{Z}} \psi_j^\vee \ast \Delta_j^\varphi f = f_1 + f_2. $$ Note that $$ \Delta_j^\varphi f_1 = \sum_{k\in\mathbb{Z}} 2^{-ks}( \varphi_j \psi_k \hat{g_k})^\vee = \sum_{l=-1}^{1} 2^{-(j-l)s} (\varphi_j \psi_{j-l})^\vee \ast g_{j-l} . $$ Moreover, if $j\in\mathbb{Z}$ and $-1 \le l \le 1$, then $$ (\varphi_j \psi_{j-l})^\vee (x) = 2^{jn}[\varphi(\cdot)\psi(2^l \cdot)]^\vee (2^j x) $$ so that \begin{align*} \left| (\varphi_j \psi_{j-l})^\vee \ast g_{j-l}(x) \right| &\le Mg_{j-l}(x)\int_{\mathbb{R}^n} \left|K_l(y)\right|dy \lesssim Mg_{j-l}(x), \end{align*} where $K_l$ is a radially decreasing integrable majorant of $[\varphi(\cdot)\psi(2^l \cdot)]^\vee$ (see \cite[Corollary 2.1.12]{Gra2} for more details). Hence applying Theorem \ref{FS}, we get \begin{align*} \left\| \left\{2^{js}\Delta_j^\varphi f_1 \right\}_{j\in\mathbb{Z}}\right\|_{L^{p_1}(l^r)} &\lesssim \sum_{l=-1}^{1} \left\| \left\{M{g}_{j-l}\right\}_{j\in\mathbb{Z}}\right\|_{L^{p_1}(l^r)} \lesssim \left\| \left\{g_j\right\}_{j\in\mathbb{Z}}\right\|_{L^{p_1}(l^r)}. 
\end{align*} Similarly, $$ \left \| \{ 2^{js}\Delta_j^\varphi f_2 \}_{j\in\mathbb{Z}} \right\|_{L^{p_2}(l^r)} \lesssim \left \| \{ h_j\} _{j\in\mathbb{Z}}\right \|_{L^{p_2}(l^r)}. $$ Combining all the estimates, we get \begin{align*} K(t,f;\dot F ^{s,r}_{p_1}, \dot F^{s,r}_{p_2}) &\le \| f_1 \|_{\dot F^{s,r}_{p_1}} + t \|f_2 \|_{\dot F^{s,r}_{p_2}} \\ &\lesssim \| \{ g_j\} _{j\in\mathbb{Z}} \|_{L^{p_1}(l^r)} + t \| \{h_j\} _{j\in\mathbb{Z}} \|_{L^{p_2}(l^r)}. \end{align*} By the arbitrariness of $\{ g_j\} _{j\in\mathbb{Z}}$ and $ \{h_j\} _{j\in\mathbb{Z}}$, we conclude that $$ K(t, f ; \dot F ^{s,r}_{p_1}, \dot F^{s,r}_{p_2}) \lesssim K(t, \{2^{js}\Delta_j^\varphi f\}_{j\in\mathbb{Z}}; L^{p_1}(l^r), L^{p_2}(l^r)). $$ This completes the proof of Theorem \ref{lem-itp-inhomf}. \end{proof} \subsection*{A.2. Proof of Theorem \ref{prop-relation}} \begin{proof}[Proof of Theorem \ref{prop-relation}] (i) It was proved in \cite[Theorem 2.5.6]{Tri1} and \cite[Theorem 6.1.2]{Gra2}, for example, that $ F_{p}^{0,2} = \dot{F}_{p}^{0,2}=L^{p}$ for $1<p<\infty$. Hence it follows from Lemma \ref{interpolation-Lp} and Theorem \ref{lem-itp-inhomf} that $\ihsf{0}{p}{q}{2} = \hsf{0}{p}{q}{2}=L^{p,q}$ for $1<p<\infty$ and $1 \le q \le \infty$. (ii) We choose $\psi \in C_c^\infty(\mathbb{R}^n)$ such that $\psi=1$ on $\{|\xi|\le1\}$ and $\psi=0$ on $\{|\xi|\ge3/2\}$. For each $j\in\mathbb{Z}$, define $\varphi_j (\cdot) = \psi (2^{-j}\cdot)-\psi (2^{-j+1}\cdot)$ on $\mathbb{R}^n.$ Then $\varphi=\{\varphi_j\}_{j\in\mathbb{Z}}$ is a sequence in $C_c^\infty(\mathbb{R}^n)$ satisfying \eqref{cond1-hom}, \eqref{cond2-hom}, and \eqref{cond3-hom}. Next, we define $\eta_0 = \psi$ and $\eta_j = \varphi_j$ for $j \ge 1$. Then $\eta=\{\eta_j\}_{j\ge0}$ satisfies \eqref{cond1-inhom}, \eqref{cond2-inhom}, and \eqref{cond3-inhom}. Note that $\varphi_j = \varphi_j \eta_0$ for $j <0 $. 
Moreover, since $\varphi_j^\vee(x)= 2^{jn}\varphi_0^\vee (2^jx)$, it follows that \[ \| \varphi_j^\vee\|_{L^{1}} \lesssim \int_{\mathbb{R}^n} |\psi ^\vee(x)| dx \lesssim 1 \quad\text{for }j\in \mathbb{Z}. \] On the other hand, since $ \Delta_j^\varphi f = \varphi_j^\vee \ast f$, it follows from Young's convolution inequality that each $\Delta_j^\varphi$ is a bounded linear operator on $L^t$ for every $1 \le t \le \infty$. Hence by real interpolation, we deduce that if $1<p<\infty$ or $1\le p=q \le \infty$, then $$ \| \Delta_j^\varphi f\|_{L^{p,q}} \lesssim \|\varphi_j^\vee \|_{L^1} \|f\|_{L^{p,q}} \lesssim \|f\|_{L^{p,q}} \quad\mbox{for all}\,\, j \in \mathbb{Z}. $$ For $j < 0$, we also have \begin{equation*} \|\Delta_j^\varphi f \|_{L^{p,q}} = \| (\varphi_j \eta_0 \hat{f}\,)^\vee \|_{L^{p,q}} \lesssim \|(\eta_0 \hat{f}\,)^\vee\|_{L^{p,q}} = \| \Delta_0^\eta f \|_{L^{p,q}} \lesssim \|f\|_{L^{p,q}}. \end{equation*} Suppose now that $1<p<\infty$ or $p=q =1$. We first show that $\dot{F}^{0,1}_{p,q} \hookrightarrow L^{p,q}$. Suppose that $f \in \dot{F}^{0,1}_{p,q}$. Then since $\sum_{j \in \mathbb{Z}}|\Delta_j^\varphi f | \in L^{p,q}$, the series $\sum_{j \in \mathbb{Z}} \Delta_j^\varphi f$ converges in $L^{p,q}$ to a function, denoted by $F$. It is trivial that $\|F\|_{L^{p,q}} \le \|f\|_{\dot{F}^{0,1}_{p,q}}$. Moreover, since $L^{p,q} \subset L^1 + L^\infty \subset \mathscr{S}'$ and $\sum_{j \in \mathbb{Z}} \Delta_j^\varphi f = f$ in $\mathscr{S}_0'$, it follows that $F$ is the unique extension of $f$ in $L^{p,q}$. This proves that $\dot{F}^{0,1}_{p,q} \hookrightarrow L^{p,q}$. The embedding $F^{0,1}_{p,q} \hookrightarrow L^{p,q}$ can be proved similarly. To show that $\ihsf{s}{p}{q}{r}=L^{p,q}\cap \hsf{s}{p}{q}{r}$, suppose that $f \in \ihsf{s}{p}{q}{r}$. 
Then since $s>0$ and $L^{p,q} (l^r ) \hookrightarrow L^{p,q} (l^\infty)$, we have $$ \|f\|_{L^{p,q}} \lesssim\left\|\sum_{j=0}^\infty |\Delta_j^\eta f | \right\|_{L^{p,q}} \lesssim \left\| \sup_{j\in\mathbb{N}_0} 2^{js}| \Delta_j^\eta f | \right\|_{L^{p,q}} \lesssim \|f\|_{ F^{s,r}_{p,q}} . $$ Moreover, since $L^{p,q} (l^1) \hookrightarrow L^{p,q} (l^r)$, $L^{p,q}$ is normable, and $s>0$, it follows that \begin{align*} \|f\|_{\hsf{s}{p}{q}{r}} &\sim \left \| \{ 2^{js} \lpo{\varphi}{f} \}_{j\ge0} \right\|_{L^{p,q} (l^r)} + \left \| \{ 2^{js} \lpo{\varphi}{f}\}_{j<0} \right\|_{L^{p,q} (l^r)}\\ & \lesssim \left \| \{ 2^{js} \lpo{\varphi}{f} \}_{j\ge0} \right\|_{L^{p,q} (l^r)} + \sum_{j<0} 2^{js} \| \lpo{\varphi}{f} \|_{L^{p,q}} \\ & \lesssim \left \| \{ 2^{js} \Delta_j^\eta f \}_{j \ge 0} \right\|_{L^{p,q} (l^r)} + \| f \|_{L^{p,q}} \lesssim \|f\|_{\ihsf{s}{p}{q}{r}}. \end{align*} Conversely, if $f \in L^{p,q}\cap \hsf{s}{p}{q}{r}$, then \[ \|f\|_{\ihsf{s}{p}{q}{r}} \sim \left \| \{ 2^{js} \Delta_j^\eta f \}_{j > 0} \right\|_{L^{p,q} (l^r)} + \| \Delta_0^\eta f \|_{L^{p,q}} \lesssim \|f\|_{\hsf{s}{p}{q}{r}} + \| f \|_{L^{p,q}} . \] (iii) Suppose that $1<p<\infty$ or $1 \le p=q \le \infty$. Then for all $f\in L^{p,q}$, we have \begin{align*} \|f\|_{B^{0,\infty}_{p,q}}+ \|f\|_{\hsb{0}{p}{q}{\infty}} = \sup_{j\in\mathbb{N}_0} \|\Delta_j^\eta f\|_{L^{p,q}} + \sup_{j\in\mathbb{Z}} \|\Delta_j^\psi f\|_{L^{p,q}} \lesssim \|f\|_{L^{p,q}} , \end{align*} which implies that $ L^{p,q} \hookrightarrow B^{0,\infty}_{p,q} $ and $L^{p,q} \hookrightarrow \dot B^{0,\infty}_{p,q}$. Suppose that $f \in \dot{B}^{0,1}_{p,q}$. Then since $\sum_{j \in \mathbb{Z}} \|\Delta_j^\varphi f \|_{L^{p,q}}< \infty$ and $L^{p,q}$ is normable, the series $\sum_{j \in \mathbb{Z}} \Delta_j^\varphi f$ converges in $L^{p,q}$ to a function $F$. Moreover, $F$ is an extension of $f$ in $L^{p,q}$ satisfying $\|F\|_{L^{p,q}} \le \|f\|_{\dot{B}^{0,1}_{p,q}}$. 
It is also easy to show that $\|g\|_{L^{p,q}} \lesssim \|g\|_{ B^{0,1}_{p,q}}$ for all $g\in B^{0,1}_{p,q}$. Finally, the proof of (ii) can be easily adapted to prove that $\ihsb{s}{p}{q}{r}=L^{p,q}\cap \hsb{s}{p}{q}{r}$. \end{proof} \subsection*{A.3. Proofs of Theorems \ref{ebd-hom-TLL} and \ref{ebd-hom-B}} \begin{proof}[Proof of Theorem \ref{ebd-hom-TLL}] We first prove sufficiency of the conditions (i) and (ii) for the embedding $\hsf{s_1}{p_1}{q_1}{r_1} \hookrightarrow \hsf{s_2}{p_2}{q_2}{r_2}$. Sufficiency of (i) immediately follows from the embedding result \eqref{eq-ebd}. On the other hand, it follows from Theorem \ref{thm-ebd-FB-2} that if $s_1-s_2=n/p_1-n/p_2>0$, then $\dot F^{s_1,\infty}_{p_1}\hookrightarrow \dot F^{s_2,1}_{p_2}$. Hence by real interpolation (Theorem \ref{lem-itp-inhomf}), we deduce that if (ii) holds, then $\hsf{s_1}{p_1}{q_1}{r_1} \hookrightarrow \hsf{s_1}{p_1}{q_2}{\infty} \hookrightarrow \hsf{s_2}{p_2}{q_2}{1} \hookrightarrow \hsf{s_2}{p_2}{q_2}{r_2}. $ To prove the necessity, we assume that $\hsf{s_1}{p_1}{q_1}{r_1} \hookrightarrow \hsf{s_2}{p_2}{q_2}{r_2}$. First of all, by a dilation argument, we easily obtain $$ s_1- \frac{n}{p_1} = s_2 - \frac{n}{p_2}. $$ Next, let $0<\varepsilon<1/10$ be fixed and choose $\varphi \in C_c^\infty (\mathbb{R}^n)$ such that $\mathrm{supp\,} \varphi \subset \{1/2+\varepsilon \le |\xi| \le 2-\varepsilon\}$, $\varphi>0$ on $\{3/5\le |\xi|\le 5/3\}$, and $\varphi=1$ on $\{1-\varepsilon \le |\xi|\le 1+\varepsilon\}$. Define $\varphi_j(\xi)=\varphi(2^{-j}\xi)$ for each $j\in\mathbb{Z}$. Then for $s\in\mathbb{R}$, $1\le p <\infty$, and $1\le q\le\infty$, $$ \|f\|_{\dot F^{s,r}_{p,q}} \sim \left\| \left\{ 2^{js}\Delta_j^\varphi f \right\}_{j\in\mathbb{Z}}\right\|_{L^{p,q}(l^r)}.
$$ Choose any $\psi\in C_c^\infty(\mathbb{R}^n)$ with $\mathrm{supp\,} \psi \subset \{|\xi|\le\varepsilon\}.$ Then for all $\xi\in\mathbb{R}^n$, \begin{equation*} \varphi_j(\xi)\psi(\xi-2^{k}e_1)= \begin{cases} \psi(\xi-2^j e_1)&\text{if}\,\, j=k\ge 1,\\ \,\,0\quad&\text{if} \,\,k\ge1, j\neq k.\\ \end{cases} \end{equation*} Therefore, if $f\in \mathscr{S}$ is given by $$ \hat{f}(\xi)=\sum_{k=1}^N a_k \psi(\xi -2^k e_1) $$ for some complex numbers $a_1,\dots, a_N$, then \begin{equation*} \|f\|_{\dot F^{s,r}_{p,q}} \sim \| \{ 2^{js}a_j\}_{j=1}^N \|_{l^r}\|\psi ^\vee\|_{L^{p,q}}. \end{equation*} Since $\hsf{s_1}{p_1}{q_1}{r_1} \hookrightarrow \hsf{s_2}{p_2}{q_2}{r_2}$, we have $$ \| \{ 2^{js_2}a_j\}_{j=1}^N \|_{l^{r_2}}\|\psi ^\vee\|_{L^{p_2,q_2}} \lesssim \| \{ 2^{js_1}a_j\}_{j=1}^N \|_{l^{r_1}}\|\psi ^\vee\|_{L^{p_1,q_1}} $$ for any $a_1,\dots, a_N$. Hence it follows that (a) $s_1\ge s_2$ and (b) if $s_1 = s_2$, then $r_1 \le r_2$. Now, fixing any number $\alpha$ with $0< \alpha < n \left(1 -1/ p_1 \right)$, we define $$ \overline{p}=\frac{np_1}{n+\alpha p_1}. $$ Then since $1< \overline{p}< p_1$ and $ \alpha= n/ \overline{p} - n/p_1 >0$, it follows from the sufficiency part of the theorem that \begin{equation*}\label{proof-lem-ebd} \hsf{s_1+\alpha}{\overline{p}}{q_1}{\infty} \hookrightarrow \hsf{s_1}{p_1}{q_1}{r_1}\hookrightarrow \hsf{s_2}{p_2}{q_2}{r_1} \hookrightarrow \hsf{s_2}{p_2}{q_2}{\infty}. \end{equation*} Recall from the proof of Theorem \ref{SFE2-inhom} that $\Lambda^\sigma$ maps $\hsf{s}{p}{q}{r}$ isomorphically onto $\hsf{s-\sigma}{p}{q}{r}$ for $s , \sigma \in \mathbb R$, $1<r\le\infty$, $1 < p< \infty$, and $1 \le q \le \infty$. Hence by Theorem \ref{prop-relation}, $$ \ihsf{s_1+\alpha+\beta}{\overline{p}}{q_1}{\infty} \hookrightarrow \hsf{s_1+\alpha+\beta}{\overline{p}}{q_1}{\infty} \hookrightarrow \hsf{s_2+\beta}{p_2}{q_2}{\infty} $$ for every $\beta \in {\mathbb R}$ with $s_1 +\alpha +\beta >0 $. Choose any $\beta$ with $\beta >-s_2$.
Then since $$ s_1+ \alpha+\beta > s_1 +\alpha -s_2 = \frac{n}{\overline{p}}-\frac{n}{p_2} >0 \quad\mbox{and}\quad s_2 +\beta >0, $$ it follows from Theorems \ref{ebd-inhom-TLL} and \ref{prop-relation} that \begin{align*} \ihsf{s_1+\alpha+\beta}{\overline{p}}{q_1}{\infty} \hookrightarrow \hsf{s_2+\beta}{p_2}{q_2}{\infty} \cap \ihsf{0}{p_2}{q_2}{2} = \ihsf{s_2+\beta}{p_2}{q_2}{\infty}. \end{align*} Therefore, by the necessity part of Theorem \ref{ebd-inhom-TLL}, we deduce that $q_1 \le q_2$. \end{proof} To prove Theorem \ref{ebd-hom-B}, we need the following form of the Bernstein inequality. \begin{lem}\label{lem-hom-B} Suppose that $K \subset{\mathbb R}^n$ is compact, $1 \le p < \infty$, and $ 1 \le q \le \infty$. Then $$ \|f\|_{L^\infty} \lesssim d^{n/p} \|f\|_{L^{p,q}} $$ for all $f\in L^{p,q}\cap \mathscr{S}'$ with $\mathrm{supp\,} \hat{f} \subset K$, where $d$ is the diameter of $K$. \end{lem} \begin{proof} By a simple scaling argument, we may assume that $d=1$. Suppose that $f \in L^{p,q}\cap \mathscr{S}'$ and $\mathrm{supp\,} \hat{f} \subset K$. Then choosing $\psi \in \mathscr{S}$ such that $\hat{\psi}=1$ on $\{\xi \in \mathbb{R}^n: |\xi - \eta| \le 1 \text{ for some }\eta \in K\}$, we have $f = f \ast \psi$ on $\mathbb{R}^n$. Choose a function $\varphi\in\mathscr{S}$ such that $\varphi(0)=1$ and $\mathrm{supp\,}\hat{\varphi}\subset\{|\xi|\le1\}$. For $0<\delta<1$, define $$f_\delta(x)=\varphi(\delta x)f(x).$$ Then $f_\delta = f_\delta \ast \psi$ as well, since $\mathrm{supp\,} \hat{f_\delta} \subset \{\xi \in \mathbb{R}^n: |\xi - \eta| \le \delta \text{ for some }\eta \in K\}$. Hence by Lemma \ref{Holder-Lorentz}, \begin{align*} |f_\delta(x)| & = \left| \int f_\delta(y) \psi (x-y)dy\right| \lesssim \| f_\delta\|_{L^{\infty}}^{1/2} \| \psi\|_{L^{2p/(2p-1),2q/(2q-1)}} \| f_\delta\|_{L^{p,q}}^{1/2}, \end{align*} from which it follows that \begin{equation*} \| f_\delta\|_{L^{\infty}} \lesssim \|f_\delta\|_{L^{p,q}} \lesssim \|f\|_{L^{p,q}}. \end{equation*} Letting $\delta \downarrow 0$, we complete the proof.
\end{proof} \begin{proof}[Proof of Theorem \ref{ebd-hom-B}] It immediately follows from \eqref{eq-ebd} that $\hsb{s_1}{p_1}{q_1}{r_1} \hookrightarrow \hsb{s_2}{p_2}{q_2}{r_2} $ when (i) holds. \noindent Suppose that (ii) holds and $f \in \hsb{s_1}{p_1}{q_1}{r_1}$. Then since the diameter of $\mathrm{supp\,} \widehat{\Delta_j^\varphi f}$ is at most $2^{j+1}$, it follows from Lemma \ref{lem-hom-B} that $$ \|\Delta_j^\varphi f \|_{L^{\infty}} \lesssim 2^{jn/p_1}\|\Delta_j^\varphi f\|_{L^{p_1,q_1}} \quad\mbox{for}\,\, j\in\mathbb{Z}. $$ Hence by Lemma \ref{interpolation-Lp}, we have \begin{align*} \|\Delta_j^\varphi f \|_{L^{p_2,q_2}} & \lesssim \|\Delta_j^\varphi f \|_{L^{p_1,q_1}}^{p_1/p_2} \|\Delta_j^\varphi f \|_{L^{\infty}}^{1-p_1/p_2} \\ &\lesssim 2^{jn(1/p_1-1/p_2)} \|\Delta_j^\varphi f \|_{L^{p_1,q_1}}= 2^{j(s_1-s_2)} \|\Delta_j^\varphi f \|_{L^{p_1,q_1}}, \end{align*} from which it follows that $\|f\|_{\hsb{s_2}{p_2}{q_2}{r_2}} \lesssim \|f\|_{\hsb{s_1}{p_1}{q_1}{r_1}}$. \end{proof}
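For the reader's convenience, we sketch the dilation argument invoked at the beginning of the proof of Theorem \ref{ebd-hom-TLL}; the scaling identity below is standard, and we state it with constants suppressed.

```latex
% For \lambda = 2^m with m \in \mathbb{Z}, one checks that
% \Delta_j^\varphi\bigl(f(\lambda\,\cdot)\bigr) = (\Delta_{j-m}^\varphi f)(\lambda\,\cdot)
% and \|g(\lambda\,\cdot)\|_{L^{p,q}} = \lambda^{-n/p}\|g\|_{L^{p,q}}, whence
\[
  \| f(\lambda\,\cdot) \|_{\dot F^{s,r}_{p,q}}
  \sim \lambda^{\,s - n/p}\, \| f \|_{\dot F^{s,r}_{p,q}} .
\]
% Applying the assumed embedding to the dilates f(\lambda\,\cdot) therefore gives
\[
  \lambda^{\,s_2 - n/p_2}\, \| f \|_{\dot F^{s_2,r_2}_{p_2,q_2}}
  \lesssim \lambda^{\,s_1 - n/p_1}\, \| f \|_{\dot F^{s_1,r_1}_{p_1,q_1}}
  \quad\text{for all } \lambda = 2^m ,
\]
% and letting m \to \pm\infty forces s_1 - n/p_1 = s_2 - n/p_2.
```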
\section{Introduction} \label{Sec:Introduction} The classical results by Halmos \cite{HalmosPaper} and Rokhlin \cite{Rokhlin} state that a ``typical'' measure-preserving transformation on a probability space $(X,\mu)$ is weakly but not strongly mixing. More precisely, the set of weakly mixing transformations is a dense $G_{\delta}$ (hence residual) set for the weak topology and the set of strongly mixing transformations is of first category. The combination of these two results proved the existence of a weakly but not strongly mixing transformation, without providing a concrete example (for such an example, see \cite{Chacon}). Since then, much research has been done in finding ``typical'' properties of measure-preserving dynamical systems. See, e.g., \cite{Nadkarni}, \cite{KatokStepin}, \cite{Ageev1}, \cite{King}, \cite{Ageev2}, \cite{AlpernPrasad}, \cite{Ageev3}, \cite{RueLazaro}, \cite{Ageev4}, \cite{Solecki}, \cite{Guiheneuf}. More than thirty years later, in completely unrelated efforts, Furstenberg \cite{FurstenbergOriginal} presented his celebrated ergodic theoretic proof of Szemer\'edi's Theorem on the existence of arbitrarily long arithmetic progressions in large subsets of $\mathbb{N}.$ While the original proof by Furstenberg used diagonal measures, the alternative proof by him, Katznelson, and Ornstein \cite{Furstenberg} building up a tower of so-called compact and weakly mixing extensions had a much greater impact on the further development of ergodic theory. This method of finding the characteristic factor has been extended to various ergodic theorems and is an active area of research, see, e.g., \cite{Chu}, \cite{ChuFrantzinakisHost}, \cite{AssaniPresser}, \cite{EisnerZorin}, \cite{BergelsonTaoZiegler}, \cite{FrantzinakisZorin}, \cite{EisnerKrause}, \cite{TaoZiegler}, \cite{AssaniDuncanMoore}, \cite{Robertson}.
For example, the correct characteristic factor for norm convergence of multiple ergodic averages was identified by Host and Kra \cite{HostKra} and has the structure of an inverse limit of nilsystems, see also Ziegler \cite{Ziegler}. The purpose of this paper is to prove analogues of the Halmos and Rokhlin category theorems for extensions (see Theorems \ref{Thm:CategoryTheorem} and \ref{Thm:FirstCategoryTheorem}), extending a result of Robertson on compact group extensions \cite{EARobertson}. Inspired by Rokhlin's skew product representation theorem (see, for example, \cite[p.69]{Glasner}), we consider extensions defined on product spaces with the natural projection as the factor map. We show that for a fixed product space, where both measures are non-atomic, a ``typical'' extension is weakly but not strongly mixing. Here, by an extension we mean an invertible extension of some (non-fixed), invertible, measure-preserving transformation on the factor. Note that the set of extensions is a closed, nowhere dense subset of the set of all invertible transformations on the product space (see Proposition \ref{Prop:GXIsClosed}), so the classical Halmos and Rokhlin results cannot be applied. The proof for weakly mixing extensions is a non-trivial adaptation of the original construction by Halmos. In particular, a ``typical'' extension does not have an intermediate nilfactor. For examples of systems lacking non-trivial nilfactors, see \cite{HostKraMaass}. The paper is organized as follows. After discussing some preliminaries in Section \ref{Sec:Preliminaries}, we consider the case of discrete extensions in Section \ref{Sec:Discrete}, in particular showing that there are no weakly mixing extensions on these spaces (Proposition \ref{Prop:NoWeaklyMixingFinite}), but that permutations are dense (Theorem \ref{Thm:DensityOfPermutationsFinite}).
We then prove the Weak Approximation Theorem for Extensions on the unit square (Theorem \ref{Thm:WATE}) in Section \ref{Sec:WAT} using the density result for discrete extensions mentioned above. In Section \ref{Sec:UniformApproximation} we generalize a few results, including Halmos' Uniform Approximation Theorem, which are necessary to prove our Conjugacy Lemma for Extensions (Lemma \ref{Lem:ConjugacyLemmaExtensions}) in Section \ref{Sec:ConjugacyLemma}. Section \ref{Sec:CategoryTheorem} is devoted to the proof that weakly mixing extensions on the unit square are residual and Section \ref{Sec:General} addresses the case of general vertical measure. In Section \ref{Sec:StrongMixing} we define strongly mixing extensions, and show that such extensions are of first category (Theorem \ref{Thm:FirstCategoryTheorem}). Finally, in Section \ref{Sec:Questions} we formulate some open questions. After this paper was made public, the first part of Question \ref{Que:FixedFactor} was answered by Eli Glasner and Benjamin Weiss, see \cite{GlasnerWeiss}. \textbf{Acknowledgments.} The question of ``typical'' behavior of extensions was asked by Terence Tao for a fixed factor, motivated by \cite{HostKraMaass}, cf. Question \ref{Que:FixedFactor} and note in particular the following discussion on the difficulties with fixed factors. The author is very grateful to him for the inspiration. The author also thanks Tanja Eisner for introducing him to the problem and for many helpful discussions. The author is further thankful to Ben Stanley for being available to exchange ideas, to the referee of this paper for careful reading and valuable comments which improved the paper, and to Bryna Kra, Michael Lin, Philipp Kunde, and Yonatan Gutman for helpful remarks. Lastly, the support of the Max Planck Institute is gratefully acknowledged.
\section{Preliminaries} \label{Sec:Preliminaries} As explained in the introduction, in this paper we will be working with extensions on product spaces through the natural projection. To be more precise, we let $(X, m)$ be a non-atomic standard probability space, $(Y, \eta)$ be a probability space, $(Z,\mu) = (X \times Y, m \times \eta),$ and $T,T'$ be measure-preserving transformations on $(Z,\mu), (X,m)$ respectively, such that $(Z,\mu,T)$ is an extension of $(X,m,T')$ through the natural projection map $\pi:Z \to X$ onto the first coordinate. We will assume throughout that $T,T'$ are invertible, and will identify two transformations if they differ only on a set of measure zero. We will say ``$T$ is an extension of $T'$'' or ``$T$ extends $T'$'' if $T$ and $T'$ satisfy all conditions stated above. Throughout this paper, we will assume without loss of generality that $X$ is the unit interval and $m$ is the Lebesgue measure. We can assume this because all non-atomic standard probability spaces are isomorphic (see \cite[p. 61]{HalmosPaper}). Let $\mathcal{G}(Z)$ denote the set of all invertible, measure-preserving transformations on $(Z,\mu)$ and let $\mathcal{G}_X = \{T \in \mathcal{G}(Z): \exists \ T' \in \mathcal{G}(X) \text{ s.t. } T \text{ extends } T' \}$. Note that if we say $T \in \mathcal{G}_X,$ we assume that the transformation on the factor will be denoted by $T'$. Further note that we will also write $\mathcal{G}_X$ to denote the corresponding set of Koopman operators. The weak topology on $\mathcal{G}(Z)$ is the topology defined by the subbasic neighborhoods $$N_{\epsilon}(T;E) = \{S \in \mathcal{G}(Z): \mu(TE \triangle SE) < \epsilon \},$$ where $\epsilon >0$ and $E$ is some measurable subset of $Z$. Note that if $Z$ is, say, the unit square with the Lebesgue measure, then it is sufficient for a subbasis to consider only dyadic sets (i.e., finite unions of dyadic squares). See \cite{HalmosLectures} for discussions of this topology.
It is helpful to note that the weak topology happens to coincide with the weak (and strong) operator topology for the corresponding Koopman operators. Further, in this paper we will be interested in the weak topology on $\mathcal{G}_X,$ by which we mean the subspace topology inherited from the weak topology. We will need the following two metrics on $\mathcal{G}(Z)$ defined by \begin{align*} d(S,T) &:= \sup_E \mu(SE \triangle TE) \\ d'(S,T) &:= \mu \left(\{z \in Z : Sz \neq Tz \}\right) \end{align*} where the $\sup$ in the first definition is taken over all measurable sets $E$. These metrics were used by Halmos in his proof of the category theorem, see \cite{HalmosLectures}. We note that both metrics induce the same topology on $\mathcal{G}(Z)$, but that topology is not the weak topology. Moreover, they satisfy $d(S,T) \le d'(S,T)$ for all $S,T \in \mathcal{G}(Z).$ The last important note is that $d'$ is invariant under multiplication by transformations. That is to say, for all $R,S,T \in \mathcal{G}(Z),$ $$d'(RS,RT) = d'(S,T) = d'(SR,TR).$$ Let $L^2(Z|X)$ denote the Hilbert module over $L^{\infty}(X).$ More precisely, for $f \in L^2(Z),$ $$ f \in L^2(Z|X) \text{ if and only if } \mathbb{E}(\abs{f}^2|X)^{1/2} \in L^{\infty}(X),$$ where $\mathbb{E}(f | X)$ is the conditional expectation of $f$ with respect to $X$. More specifically, it is the conditional expectation with respect to $\mathcal{A}:= \{\pi^{-1}(A) : A \in \mathcal{L} \},$ where $\mathcal{L}$ is the Lebesgue sigma algebra on $X$. Let $$\norm{f}_{L^2(Z|X)} := \mathbb{E}(\abs{f}^2|X)^{1/2}$$ and $$\langle f,g \rangle_{L^2(Z|X)} := \mathbb{E}(f \overline{g} |X). $$ For more on $L^2(Z|X),$ see \cite{Tao}. One important property of $L^2(Z|X)$ that we wish to emphasize for later use is the Cauchy-Schwarz Inequality.
\begin{prop} \label{Thm:C-S} Let $f,g \in L^2(Z|X).$ Then $$\abs{\langle f,g \rangle_{L^2(Z|X)}} \le \norm{f}_{L^2(Z|X)} \norm{g}_{L^2(Z|X)} \text{ a.e.}$$ \end{prop} Next we give a definition for weakly mixing extensions, cf. \cite{Tao}. \begin{defin} \label{Def:WeakMixingExtension} An extension $T$ of $T'$ is said to be \textit{weakly mixing} if for all $f,g \in L^2(Z|X),$ $$ \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \norm{\mathbb{E}(T^nf \overline{g} |X) - (T')^n \mathbb{E}(f|X) \mathbb{E}(g|X)}_{L^2(X)} = 0.$$ \end{defin} For other possible (equivalent) definitions of weakly mixing extensions, see \cite[p.192]{Glasner}. We denote by $\mathcal{W}_X \subset \mathcal{G}_X$ the set of weakly mixing extensions on $Z$. We finally prove that the Baire Category Theorem is applicable to $\mathcal{G}_X,$ and further that $\mathcal{G}_X$ is topologically a small subset of $\mathcal{G}(Z).$ \begin{prop} \label{Prop:GXIsClosed} Suppose $Y$ has more than one point. Then $\mathcal{G}_X$ is closed and nowhere dense in $\mathcal{G}(Z).$ \end{prop} \begin{proof} We first prove that $\mathcal{G}_X$ is closed. Let $T \in \mathcal{G}(Z) \backslash \mathcal{G}_X.$ We wish to find a neighborhood of $T$ that is disjoint from $\mathcal{G}_X.$ To this end, let $E \subset Z$ be a cylinder set (to be precise, $E$ is of the form $D \times Y$ for some measurable $D \subset X$), such that $TE$ is not a cylinder set, even up to measure zero. Define $M := \mu(E).$ Then for all cylinder sets $C$ with $\mu(C)=M, \mu(TE \triangle C) > 0.$ We claim that indeed $\inf_C \mu(TE \triangle C) > 0,$ where the $\inf$ is taken over all cylinder sets with measure exactly $M$. Suppose to the contrary that $\inf_C \mu(TE \triangle C)=0$. Let $C_n$ be a sequence of such cylinder sets such that not only $\mu(TE \triangle C_n) \to 0,$ but further such that $$\sum_{n=1}^{\infty} \mu(TE \backslash C_n) < \infty.
$$ Define $$\hat{C} := \bigcup_{n=1}^{\infty} \bigcap_{m \ge n} C_m.$$ We claim that $\mu(TE \triangle \hat{C}) = 0.$ As $\hat{C}$ is clearly a cylinder set, we will arrive at a contradiction. First consider $$\hat{C} \backslash TE = \left(\bigcup_{n=1}^{\infty} \bigcap_{m \ge n} C_m \right) \backslash TE = \bigcup_{n=1}^{\infty} \bigcap_{m \ge n} (C_m \backslash TE).$$ Now because $\mu(C_m \triangle TE) \to 0, \mu\left(\bigcap_{m \ge n} (C_m \backslash TE) \right) = 0$ for all $n.$ But then $$\mu(\hat{C} \backslash TE) \le \sum_{n=1}^{\infty} \mu\left(\bigcap_{m \ge n} (C_m \backslash TE) \right) = 0.$$ On the other hand, $$TE \backslash \hat{C} = TE \backslash \left(\bigcup_{n=1}^{\infty} \bigcap_{m \ge n} C_m \right) = \bigcap_{n=1}^{\infty} \bigcup_{m \ge n} (TE \backslash C_m). $$ But by assumption, $\sum_{n=1}^{\infty} \mu(TE \backslash C_n) < \infty,$ so by the Borel-Cantelli lemma, $\mu(TE \backslash \hat{C})=0.$ Now, let $\epsilon := \inf_C \mu(TE \triangle C).$ We claim that for any $S \in \mathcal{G}_X, S \notin N_{\epsilon}(T;E).$ Indeed, $SE$ is (up to a null set) a cylinder set, and $\mu(SE)=M,$ so $\mu(TE \triangle SE) \ge \epsilon$ by definition of $\epsilon$. Now because $\mathcal{G}_X$ is closed, in order to prove that it is nowhere dense, it is sufficient to show that $\mathcal{G}(Z) \backslash \mathcal{G}_X$ is dense. Fix $T \in \mathcal{G}_X$, let $\epsilon > 0$ and let $$N_{\epsilon}(T) = \{S \in \mathcal{G}(Z): \mu(TE_i \triangle SE_i) < \epsilon, i = 1, \ldots, n \},$$ where $E_i$ are measurable sets. Now let $A \subset Z$ be a measurable set such that $0 < \mu(A) < \epsilon,$ and $A$ is not a cylinder set.
Further let $B \subset Z$ be a cylinder set such that $\mu(A \cap B) = \mu(A \cap B^c),$ and define $A_1 := A \cap B, A_2 := A \cap B^c.$ We now take $S \in \mathcal{G}(Z)$ with the following properties: $Sz := Tz$ for $z \in Z \backslash A$, $SA_1 = TA_2,$ and $SA_2=TA_1.$ Note that because $T$ is an extension and $A$ is not a cylinder set, $S \notin \mathcal{G}_X.$ Further note that $\{z \in Z : Sz \neq Tz \}=A.$ Therefore, $$\sup_E \mu(TE \triangle SE) = d(T,S) \le d'(T,S) = \mu(A) < \epsilon.$$ So $S \in N_{\epsilon}(T).$ \end{proof} By Proposition \ref{Prop:GXIsClosed}, $\mathcal{G}_X$ is a closed subset of a Baire space, so $\mathcal{G}_X$ is itself a Baire space. Further, because $\mathcal{G}_X$ is nowhere dense, the classical Halmos and Rokhlin results can provide no information about $\mathcal{G}_X.$ \section{Discrete Extensions} \label{Sec:Discrete} As stated in Section 2, throughout this paper we will let $(X,m)$ be the unit interval with Lebesgue measure. For this section, let $Z=X \times \{1,\ldots, L\},$ with $L \ge 2$, $w$ be a probability measure on $\{1, \ldots, L \}$, and $w_i := w(i)$ (without loss of generality, $w_i \neq 0$ for all $i$). Let $\mu$ be the product measure of $m$ and $w$ on $Z$. In this section we will be exploring some results regarding these discrete extension measure spaces. We begin by showing that such systems can never be weakly mixing extensions. \begin{prop} \label{Prop:NoWeaklyMixingFinite} Let $(Z,\mu), (X,m)$ be as above.
Then $\mathcal{W}_X = \emptyset.$ \end{prop} \begin{proof} Fix $T \in \mathcal{G}_X.$ It suffices to show that there exists an $f \in L^2(Z|X)$ with relative mean zero (that is, $\mathbb{E}(f|X)=0$ $m-$almost everywhere) such that $$\lim_{N \to \infty}\frac{1}{N} \sum_{n=0}^{N-1}{ \left( \int_X \abs{\mathbb{E}(T^nf \overline{f}|X)}^2 dm \right)^{1/2} } \neq 0.$$ In particular, we will construct $f$ such that $\mathbb{E}(T^nf \overline{f}|X)(x)$ can take only a finite number of possible values, none of which are 0. Thus $\abs{\mathbb{E}(T^nf \overline{f}|X)}^2(x)$ is always positive, $\frac{1}{N} \sum_{n=0}^{N-1}{\abs{\mathbb{E}(T^nf \overline{f}|X)}^2(x)}$ is bounded away from 0, and $\frac{1}{N} \sum_{n=0}^{N-1}{ \left( \int_X \abs{\mathbb{E}(T^nf \overline{f}|X)}^2 dm \right)^{1/2} }$ cannot converge to the zero function on $X$. Consider $f(x,y)$, where $f(x,i) = 1$ for all $x \in X, i = 2, \ldots, L$, and $$f(x,1)=\frac{-\sum_{i=2}^{L}{w_i}}{w_1}$$ for all $x \in X$. It is easy to see that $f$ has relative mean zero when $L \ge 2$, which is why we made this assumption at the beginning of the section. Let $\sigma_{n,x}(i)$ be such that $T^n(x,i)=((T')^nx,\sigma_{n,x}(i))$ for all $(x,i) \in Z$. Now, $$\mathbb{E}(T^nf \overline{f}|X)(x) = \sum_{j=1}^{L}{w_j f(x,j) \fatn{j}} = \sum_{j=1}^{L}{w_j f(x,j) f(x, \sigma_{n,x}(j))},$$ with the last equality because $f$ is constant on any given level. Thus we see that because $T$ is invertible, $\sigma_{n,x}$ is a permutation on an $L$ element set, and the value of $\mathbb{E}(T^nf \overline{f}|X)(x)$ is completely determined by the specific permutation $\sigma_{n,x}$. As there are $L!$ permutations of the $L$ levels, there are finitely many possible values of $\mathbb{E}(T^nf \overline{f}|X)(x)$. To see $\mathbb{E}(T^nf \overline{f}|X)(x) \neq 0$, consider two cases. In the first case, we have $\sigma_{n,x}(1)=1$.
In this case it is easy to see that every summand of $\sum_{j=1}^{L}{w_j f(x,j) f(x, \sigma_{n,x}(j))}$ is positive, and thus the sum is positive (in particular, nonzero). So now suppose $\sigma_{n,x}(i)=1, i \neq 1$. In this case we have $$\sum_{j=1}^{L}{w_j f(x,j) f(x, \sigma_{n,x}(j))}= f(x,1)(w_1+w_i) + \left(\sum_{j=2}^{L}{w_j}\right)-w_i.$$ Consider $$f(x,1)(w_1+w_i) = \frac{-\sum_{j=2}^{L}w_j}{w_1} (w_1 + w_i) = \left(-\sum_{j=2}^{L}w_j\right) \left(1 + \frac{w_i}{w_1}\right).$$ Note that $\left(-\sum_{j=2}^{L}w_j\right) \left(1 + \frac{w_i}{w_1}\right) \le -\sum_{j=2}^{L}w_j$. Thus, $$f(x,1)(w_1+w_i) + \left(\sum_{j=2}^{L}{w_j}\right)-w_i \le \left(-\sum_{j=2}^{L}w_j\right) + \left(\sum_{j=2}^{L}{w_j}\right)-w_i = - w_i < 0.$$ So $\mathbb{E}(T^nf \overline{f}|X)(x)$ is always nonzero, and $\abs{\mathbb{E}(T^nf \overline{f}|X)}^2(x)$ is always positive, as desired. \end{proof} We make two notes here. First, the proof of Proposition \ref{Prop:NoWeaklyMixingFinite} never used any assumptions on the factor, $(X,m,T'),$ and thus it will hold when the factor is any probability space, with any measure preserving transformation on that space. Second, the proof is still valid in the case that $Z$ has countably many levels instead of finitely many. The key observation is that for almost all $z$, if $z, Tz$ are on levels $k_1, k_2$ respectively, then $w(k_1)=w(k_2).$ As for any fixed $\alpha \in (0,1)$, there can be only finitely many levels $k$ with $w(k)=\alpha, T$ decomposes into invariant subsystems, to each of which we can apply Proposition \ref{Prop:NoWeaklyMixingFinite}. Though $\mathcal{W}_X$ is empty on these discrete extension spaces, they are still worth exploring. But before we can proceed, we will henceforth suppose that the probability measure $w$ is the normalized counting measure. That is, $w_i= \frac{1}{L}$ for all $i$. 
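To illustrate the computation in the proof of Proposition \ref{Prop:NoWeaklyMixingFinite}, consider the simplest case $L=2$ with the normalized counting measure (a worked example of our own, not taken from the proof above):

```latex
% With w_1 = w_2 = 1/2, the function f of the proof is
%   f(x,1) = -w_2/w_1 = -1  and  f(x,2) = 1 ,
% and \sigma_{n,x} is one of the two permutations of \{1,2\}. If \sigma_{n,x} = \mathrm{id}, then
\[
  \mathbb{E}(T^n f\,\overline{f}\,|X)(x)
  = \tfrac12\,(-1)(-1) + \tfrac12\,(1)(1) = 1 ,
\]
% while if \sigma_{n,x} is the transposition, then
\[
  \mathbb{E}(T^n f\,\overline{f}\,|X)(x)
  = \tfrac12\,(-1)(1) + \tfrac12\,(1)(-1) = -1 .
\]
% In either case |\mathbb{E}(T^n f\,\overline{f}\,|X)|^2 \equiv 1, so the averages in
% Definition \ref{Def:WeakMixingExtension} cannot tend to 0.
```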
With this assumption, we extend the notion of dyadic sets and permutations on $X$ to dyadic sets and permutations on $Z.$ \begin{defin} \label{Def:DyadicFinite} If $D$ is a dyadic interval of rank $k$ in $X$, then a dyadic square of rank $k$ in $Z$ is a set of the form $D \times \{i\}$. A dyadic set in $Z$ is a union of dyadic squares. A dyadic permutation of rank $k$ on $Z$ is a permutation of the dyadic squares of rank $k$. A \textit{column-preserving (dyadic) permutation} (of rank $k$) on $Z$ is a dyadic permutation on $Z$ which is an extension of a dyadic permutation on $X$. \end{defin} We wish to generalize the fact that dyadic permutations are dense in $\mathcal{G}(X)$ to density of column-preserving permutations in $\mathcal{G}_X$. To this end, we make a couple notes. First we introduce the following notation: we write $A \subset i$ if there exists $A' \subset X$ such that $A = A' \times \{i\}$. Second, we will require the use of the following lemma by Halmos (for proof, see \cite[p.67]{HalmosLectures}). \begin{lem} \label{Lem:DyadicSetsPartition} Let $\{E_i : i = 1, \ldots n \}$ partition the unit interval, and $r_i$ be dyadic rationals such that $\sum_{i=1}^n r_i = 1$ and $\abs{m(E_i)-r_i} < \delta$ for some $\delta > 0$ and for all $i$. Then there exists $\{F_i : i = 1, \ldots n \}$, dyadic sets that partition the unit interval such that $m(F_i)=r_i$ and $m(E_i \triangle F_i) < 2 \delta$ for all $i$. \end{lem} We now move to the main result of this section. \begin{thm}[Density of column-preserving permutations] \label{Thm:DensityOfPermutationsFinite} Column-preserving permutations are dense in $\mathcal{G}_X.$ More precisely, let $T \in \mathcal{G}_X.$ Given $N_{\epsilon}(T)$, a dyadic neighborhood of $T$, there exists $Q \in N_{\epsilon}(T)$, a column-preserving permutation. 
\end{thm} \begin{proof} Without loss of generality, assume $$N_{\epsilon}(T)=\{S \in \mathcal{G}_X: \mu(TD_l \triangle SD_l)<\epsilon, l = 1, \ldots, L(2^n) \},$$ where the $D_l$ enumerate all dyadic squares of some fixed rank $n$ (note that $D_{l_1}, D_{l_2}$ are disjoint up to boundary points). Let $k \in \{1, \ldots, L\}$, and let $P_k := \{D_i \cap TD_j|D_i \subset k, j = 1,\ldots, L(2^n)\}$. Note that $P_k$ partitions level $k$. If $\pi$ is the natural projection onto $X$, then let $P'_k := \pi P_k = \{\pi E| E \in P_k\}$. $P'_k$ is a partition of $X$. Let $P' = \{\hat{A}_{\lambda}| \lambda \in \Lambda \}$ be a common refinement of $P'_k$ for $k = 1, \ldots, L$, and let $P = \{\hat{A}_{\lambda,k} | \lambda \in \Lambda, k = 1, \ldots, L\}$ be a partition of $Z$ obtained by lifting every element of $P'$ to every level ($\hat{A}_{\lambda,k} \subset k$). Applying a weaker version of Lemma \ref{Lem:DyadicSetsPartition} (one where we do not care about the value of $\abs{m(E_i)-r_i}$ in the formulation of the lemma) to the partition $P'$, we obtain a partition $\{A_{\lambda}\}$ of $X$ into dyadic sets so that $m(\hat{A}_{\lambda} \triangle A_{\lambda})< \frac{L\epsilon}{2 \abs{\Lambda}}$. Applying Lemma \ref{Lem:DyadicSetsPartition} again, we get a partition of $X$ into dyadic sets $B_{\lambda}$ so that $m((T')^{-1}\hat{A}_{\lambda} \triangle B_{\lambda})< \frac{L \epsilon}{2 \abs{\Lambda}}$. Note the full strength of Lemma \ref{Lem:DyadicSetsPartition} guarantees we can select this partition so that $m(A_{\lambda})=m(B_{\lambda})$ (as $m(\hat{A}_{\lambda})=m((T')^{-1}\hat{A}_{\lambda})$). We can now lift $A_{\lambda},B_{\lambda}$ to sets $A_{\lambda,k},B_{\lambda,k}$ so that $A_{\lambda,k},B_{\lambda,k} \subset k$. Note that \begin{equation} \label{Eq:DiscreteApproximations} \mu(\hat{A}_{\lambda,k} \triangle A_{\lambda,k})< \frac{\epsilon}{2 \abs{\Lambda}}, \text{ and } \mu(T^{-1}\hat{A}_{\lambda,k_2} \triangle B_{\lambda,k_1})< \frac{\epsilon}{2 \abs{\Lambda}}.
\end{equation} where $k_1, k_2$ are such that if $i,j$ are such that $\hat{A}_{\lambda,k_2} \subset D_i \cap TD_j,$ then $D_i \subset k_2, D_j \subset k_1$. We will now define $Q$ of some rank $r \in \mathbb{N}$ where $r$ is at least as large as the ranks of $D_i$ for every $i$, and $A_{\lambda},B_{\lambda}$ for every $\lambda$. We first define $Q'$ a dyadic permutation on $X$ as any dyadic permutation which maps $B_{\lambda}$ to $A_{\lambda}$ for every $\lambda$. Next we define $Q$, a column preserving permutation of rank $r$. First let $k_1,k_2$ be as before: if $i,j$ are such that $\hat{A}_{\lambda,k_2} \subset D_i \cap TD_j$, then $D_i \subset k_2, D_j \subset k_1$. Then $Q$ will be the extension of $Q'$ such that $B_{\lambda,k_1} \mapsto A_{\lambda,k_2}$. Note that for all $\lambda, k,$ we have that $\text{level } Q^{-1} A_{\lambda,k} = \text{level } T^{-1} \hat{A}_{\lambda,k},$ where $\text{level }A := k$ if and only if $A \subset k.$ We now show that $\mu(TD_j \triangle QD_j) < \epsilon$ for all $j$. Fix $j \in 1, \ldots, L(2^n),$ and define $k$ so that $D_j \subset k$. Let $\Lambda_j := \{\lambda \in \Lambda| (T')^{-1}\hat{A}_{\lambda} \subset \pi D_j \}$. For $\lambda \in \Lambda_j,$ let $i_{\lambda,j}$ be such that $T^{-1}\hat{A}_{\lambda, i_{\lambda,j}} \subset D_j$. Then $D_j = \bigcup_{\lambda \in \Lambda_j}{T^{-1}\hat{A}_{\lambda, i_{\lambda,j}}}.$ Further, by the definitions of $Q$ and $\Lambda_j,$ as well as the previous note, we have that $Q^{-1}A_{\lambda, i_{\lambda,j}} = B_{\lambda,k}.$ Note all unions and sums will be taken over $\lambda \in \Lambda_j.$ We have \begin{equation} \label{Eq:DiscreteFinal1} \mu\left(D_j \triangle \bigcup B_{\lambda,k}\right) = \mu \left(\bigcup T^{-1} \hat{A}_{\lambda,i_{\lambda,j}} \triangle \bigcup B_{\lambda,k}\right) \le \sum \mu(T^{-1}\hat{A}_{\lambda,i_{\lambda,j}} \triangle B_{\lambda,k}). 
\end{equation} But by (\ref{Eq:DiscreteApproximations}), $\mu(T^{-1}\hat{A}_{\lambda,i_{\lambda,j}} \triangle B_{\lambda,k}) < \frac{\epsilon}{2 \abs{\Lambda}}$ so $(\ref{Eq:DiscreteFinal1}) < \sum \frac{\epsilon}{2 \abs{\Lambda}} = \frac{\epsilon}{2}.$ Therefore $$\mu\left(QD_j \triangle \bigcup A_{\lambda,i_{\lambda,j}}\right) = \mu\left(D_j \triangle \bigcup B_{\lambda,k}\right) \le \frac{\epsilon}{2}.$$ On the other hand, \begin{equation} \label{Eq:DiscreteFinal2} \mu\left(\bigcup A_{\lambda,i_{\lambda,j}} \triangle TD_j\right) = \mu\left(\bigcup A_{\lambda,i_{\lambda,j}} \triangle \bigcup \hat{A}_{\lambda,i_{\lambda,j}}\right) \le \sum \mu(A_{\lambda,i_{\lambda,j}} \triangle \hat{A}_{\lambda,i_{\lambda,j}}). \end{equation} Again by (\ref{Eq:DiscreteApproximations}), $\mu(A_{\lambda,i_{\lambda,j}} \triangle \hat{A}_{\lambda,i_{\lambda,j}}) < \frac{\epsilon}{2 \abs{\Lambda}}$ so $ (\ref{Eq:DiscreteFinal2}) < \sum \frac{\epsilon}{2 \abs{\Lambda}} = \frac{\epsilon}{2}$. Finally, $$\mu(TD_j \triangle QD_j) \le \mu\left(TD_j \triangle \bigcup A_{\lambda,i_{\lambda,j}} \right) + \mu\left(\bigcup A_{\lambda,i_{\lambda,j}} \triangle QD_j\right) \le \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.$$ And because this holds for all $j,$ we have that $Q \in N_{\epsilon}(T).$ \end{proof} \section{Weak Approximation Theorem for Extensions on the Unit Square} \label{Sec:WAT} Now we let $(Z,m_2)$ be $X \times X$ with the Lebesgue measure. If we need further clarity, we will write the Lebesgue measure on $X$ as $m_1$, but in general we will denote both Lebesgue measures by $m$. We begin by drawing some connections to Section \ref{Sec:Discrete}. First, however, we need some more notation. For $L \in \mathbb{N},$ define $Z_L:= \bigcup_{j=0}^{L-1}{\left(X \times \left \{\frac{j}{L}\right \}\right)} \subset Z$ and let $\mu_L$, a measure on $Z_L$, be the product of the Lebesgue measure with the normalized counting measure on $L$ points.
Further, let $\pi_L: Z \to Z_L$ be the natural projection onto $Z_L.$ That is, if $z = \left(x, \frac{j}{L} + \gamma \right)$ for $\gamma \in \left[0, \frac{1}{L} \right),$ then $\pi_L (z) = \left(x, \frac{j}{L} \right).$ \begin{defin} \label{Def:DiscreteEquivalent} Let $T \in \mathcal{G}(Z)$. We say that $T$ is \textit{discrete equivalent} if there exists $L$ and $T_L \in \mathcal{G}(Z_L)$ such that $(Z,m,T)$ is an extension of $(Z_L,\mu_L,T_L)$ through the factor map $\pi_L.$ Further, we say that $T$ is \textit{simply discrete equivalent} if $T$ is an identity extension. That is, if we write $Z$ as $Z_L \times \left[0, \frac{1}{L} \right),$ then $T=T_L \times I.$ If we wish to emphasize the number of levels, $L$, we will say $T$ is \textit{$L$-(simply) discrete equivalent}. \end{defin} Definition \ref{Def:DiscreteEquivalent} is fairly easy to visualize. We take the square and divide it into $L$ equal measure horizontal pieces. Then $T$ is discrete equivalent if $T$ moves fibers on each small piece to other such fibers, and is simply discrete equivalent if it does not move any points within the fiber. Note that in general, a discrete equivalent $T$ need not be in $\mathcal{G}_X.$ However, if $T \in \mathcal{G}_X,$ then $T_L$ is also an extension of $T'.$ Our goal for this section is to provide a version of Halmos' Weak Approximation Theorem (see \cite[p.65]{HalmosLectures}) when restricted to $\mathcal{G}_X$. Mostly this will mean proving a result equivalent to Theorem \ref{Thm:DensityOfPermutationsFinite}. However we first need to lay some ground work. Definitions of dyadic squares, sets, and permutations are all standard in this case, so we do not redefine them. Column-preserving permutations are defined just as they are in Definition \ref{Def:DyadicFinite}. Before moving on, we make a few remarks.
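First, a minimal example of Definition \ref{Def:DiscreteEquivalent} may help fix ideas. Take $L=2$ and define $T \in \mathcal{G}(Z)$ by
$$T(x,y) := \left(x,\ y + \tfrac{1}{2} \bmod 1 \right).$$
Writing $Z = Z_2 \times \left[0,\tfrac{1}{2}\right),$ we have $T = T_2 \times I,$ where $T_2 \in \mathcal{G}(Z_2)$ interchanges the two levels of $Z_2$; thus $T$ is $2$-simply discrete equivalent. Moreover $\pi T = \pi,$ so $T$ is an extension of the identity on $X$ and hence $T \in \mathcal{G}_X.$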
\begin{remark} Lemma \ref{Lem:DyadicSetsPartition} holds on $(Z,m),$ because $(Z,m),$ like $(X,m),$ is a non-atomic standard probability space, and Lemma \ref{Lem:DyadicSetsPartition} holds for all such spaces (replacing ``dyadic sets'' in the statement of Lemma \ref{Lem:DyadicSetsPartition} with a class $\mathcal{B}$ which is isomorphic to the class of dyadic sets). Alternatively, one can simply prove Lemma \ref{Lem:DyadicSetsPartition} again in the context of the square. No part of the proof relies on the fact that we were working on the unit interval, so nothing changes in the proof. \end{remark} \begin{remark} We will adopt the following notational convenience. If $T \in \mathcal{G}(Z)$ and $S' \in \mathcal{G}(X),$ then we will write $S'T$ in place of $(S' \times I) T.$ \end{remark} \begin{remark} \label{Remark:PermutationsDiscreteEquivalent} If $Q \in \mathcal{G}(Z)$ is a dyadic permutation of rank $K$, then $Q$ is $L$-simply discrete equivalent with $L = 2^K.$ Further if $S' \in \mathcal{G}(X),$ then $S'Q$ is also $L$-simply discrete equivalent. If $Q$ is further an extension of $Q' \in \mathcal{G}(X),$ then $S'Q$ is an extension of $S'Q'.$ \end{remark} The key to our goal is the following strengthening of Lemma \ref{Lem:DyadicSetsPartition}. \begin{lem} \label{Lem:ColumnPartitions} Let $\{E_1,\ldots, E_N \}$ be a finite partition of $Z$, $\epsilon > 0,$ and suppose $\{\tilde{F}_1,\ldots,\tilde{F}_N \}$ is another partition of $Z,$ where $\tilde{F}_i$ are all dyadic sets, and $m(E_i \triangle \tilde{F}_i) < \epsilon.$ Let $K := \text{\emph{max rank} } \tilde{F}_i$ and let $E_{ij} := E_i \cap \pi^{-1}C_j, j \in \{1, \ldots, 2^K \},$ where $C_j$ is a dyadic interval of rank $K$.
Let $r_{ij}$ be dyadic rationals (possibly zero) such that $\sum_{i=1}^N r_{ij} = \frac{1}{2^K}$ for all $j$ and $\abs{m(E_{ij})-r_{ij}} < \frac{\epsilon}{2^{K}}$ for all $i,j.$ Then there exists a partition $\{F_1, \ldots, F_N\}$ of $Z$ such that $F_{i}$ is a dyadic set for all $i$, $m(F_{ij}) = r_{ij}$ for all $i,j$ (with $F_{ij}$ defined analogously to $E_{ij}$), and $m(E_{i} \triangle {F_{i}}) < 3 \epsilon$ for all $i.$ \end{lem} \begin{proof} Similar to the definition of $E_{ij},$ define $\tilde{F}_{ij} := \tilde{F}_i \cap \pi^{-1}C_j.$ Note that by choice of $K,$ $\tilde{F}_{ij}$ is the product of $C_j$ and a dyadic set for all $i,j$. Now, for all $\tilde{F}_{ij}$ with $m(\tilde{F}_{ij}) > r_{ij},$ let $A_{ij} \subset \tilde{F}_{ij}$ be of the form $A_{ij} = C_j \times B_{ij}$ with $B_{ij}$ a dyadic set, and $m(A_{ij}) = m(\tilde{F}_{ij}) - r_{ij}.$ Define $F_{ij} := \tilde{F}_{ij} \backslash A_{ij}.$ Let $A$ be the union of all $A_{ij}$ chosen up to this point. Now for $\tilde{F}_{ij}$ with $m(\tilde{F}_{ij}) < r_{ij},$ let $A_{ij} \subset A$ be of the same form as above, this time with $m(A_{ij}) = r_{ij} - m(\tilde{F}_{ij}).$ In this case, define $F_{ij} := \tilde{F}_{ij} \cup A_{ij}.$ Now let $F_i := \bigcup_{j=1}^{2^K} F_{ij}.$ Note that some $F_{ij}$ may be empty. In particular, $F_{ij} = \emptyset$ if and only if $r_{ij}=0$. Note that by definition, $m(F_{ij}) = r_{ij},$ and note further that $\tilde{F}_{ij} \triangle F_{ij} = A_{ij}.$ We claim $$\sum_{j=1}^{2^K} m(A_{ij}) < 2 \epsilon$$ for all $i$.
Let $i$ be fixed, and consider $$\sum_j m(A_{ij}) = \sum_j \abs{m(\tilde{F}_{ij}) - r_{ij}} \le \sum_j \abs{m(E_{ij}) - r_{ij}} + \sum_j \abs{m(E_{ij}) - m(\tilde{F}_{ij})}.$$ We have $\abs{m(E_{ij})-r_{ij}} < \frac{\epsilon}{2^K}$ so $\sum_j \abs{m(E_{ij}) - r_{ij}} < \epsilon.$ On the other hand, $$\sum_j \abs{m(E_{ij}) - m(\tilde{F}_{ij})} \le \sum_j m(E_{ij} \triangle \tilde{F}_{ij}) = m\left( \bigcup_j (E_{ij} \triangle \tilde{F}_{ij})\right).$$ Now, because $m(E_{ij_1} \cap \tilde{F}_{ij_2}) = 0$ if $j_1 \neq j_2,$ we have that $$m\left( \bigcup_j (E_{ij} \triangle \tilde{F}_{ij})\right) = m \left(\bigcup_j E_{ij} \triangle \bigcup_j \tilde{F}_{ij} \right) = m(E_i \triangle \tilde{F}_i) < \epsilon.$$ Therefore, $\sum_j m(A_{ij}) < 2 \epsilon.$ We will now show $m(E_i \triangle F_i) < 3 \epsilon.$ Firstly, we have $m(E_i \triangle F_i) \le m(E_i \triangle \tilde{F}_i) + m(\tilde{F}_i \triangle F_i).$ But $m(E_i \triangle \tilde{F}_i) < \epsilon.$ Further, $$m(\tilde{F}_i \triangle F_i) = m\left( \bigcup_j \tilde{F}_{ij} \triangle \bigcup_j F_{ij} \right) \le m\left( \bigcup_j (\tilde{F}_{ij} \triangle F_{ij}) \right) = \sum_j m(\tilde{F}_{ij} \triangle F_{ij}).$$ But as previously noted, $\tilde{F}_{ij} \triangle F_{ij} = A_{ij},$ and we already showed $\sum_j m(A_{ij}) < 2 \epsilon$. Thus, $m(E_i \triangle F_i) < 3 \epsilon$ as desired. \end{proof} With Lemma \ref{Lem:ColumnPartitions}, we can now prove the equivalent version of Theorem \ref{Thm:DensityOfPermutationsFinite} for the unit square, which will be the core result for proving our version of the Weak Approximation Theorem. \begin{thm}[Density of column-preserving permutations] \label{Thm:DensityOfPermutations} Column-preserving permutations are dense in $\mathcal{G}_X.$ More precisely, let $T \in \mathcal{G}_X.$ Given $N_{\epsilon}(T)$, a dyadic neighborhood of $T$, there exists $Q \in N_{\epsilon}(T)$, a column-preserving permutation. 
\end{thm} \begin{proof} We may assume without loss of generality that $$N_{\epsilon}(T) = \{S: m(TD_i \triangle SD_i)<\epsilon, \ i = 1, \ldots, 2^{2N} \},$$ where $D_i$ are dyadic squares of some fixed rank $N$. We start with the case where $T'=I_X.$ Let $D_{ij} := D_i \cap TD_j.$ Note that $\{D_{ij}\}$ partitions $Z$. By Lemma \ref{Lem:DyadicSetsPartition}, there exists a partition of $Z$ into dyadic sets, $\{\tilde{E}_{ij}\},$ such that $m(D_{ij} \triangle \tilde{E}_{ij}) < \frac{\epsilon}{6M},$ where $M := 2^{2N}.$ Further by Lemma \ref{Lem:DyadicSetsPartition}, we can find a dyadic partition of $Z$ into sets $\{\tilde{F}_{ij}\}$ where $m(T^{-1}D_{ij} \triangle \tilde{F}_{ij}) < \frac{\epsilon}{6M}.$ Note that because $m(D_{ij}) = m(T^{-1}D_{ij}),$ we can assume that $m(\tilde{E}_{ij}) = m(\tilde{F}_{ij}).$ Let $K = \text{max rank} \{\tilde{E}_{ij}, \tilde{F}_{ij}\}.$ We can now apply Lemma \ref{Lem:ColumnPartitions} to both $\tilde{E}_{ij}$ and $\tilde{F}_{ij}$ to get dyadic partitions $\{E_{ij}\}$ and $\{F_{ij}\}$ such that \begin{equation} \label{Eq:SquareApproximations} m(D_{ij} \triangle E_{ij}) < \frac{\epsilon}{2M} \text{ and } m(T^{-1}D_{ij} \triangle F_{ij}) < \frac{\epsilon}{2M}.
\end{equation} Recall that if $C_k$ is a dyadic interval of rank $K$, then in the notation of Lemma \ref{Lem:ColumnPartitions}, $E_{ijk} := E_{ij} \cap \pi^{-1}C_k$ and $F_{ijk} := F_{ij} \cap \pi^{-1}C_k.$ Note that not only do we have $m(D_{ij}) = m(T^{-1}D_{ij}),$ but because $T$ is an extension of the identity, $m(T^{-1}D_{ij} \cap \pi^{-1}C_k) = m(T^{-1}(D_{ij} \cap \pi^{-1}C_k)) = m(D_{ij} \cap \pi^{-1}C_k).$ Thus we are able to choose the same dyadic rationals in both applications of Lemma \ref{Lem:ColumnPartitions}, and subsequently have that $m(E_{ijk}) = m(F_{ijk})$ for $i,j = 1, \ldots, 2^{2N}, k = 1, \ldots, 2^K.$ We now define $Q$ as the permutation which maps $F_{ijk}$ to $E_{ijk}.$ Note that in particular, $Q$ will map $F_{ij}$ to $E_{ij}.$ Further note that $Q$ will be an extension of the identity. Let $j$ be fixed. We will now show $m(QD_j \triangle TD_j) < \epsilon$. Recall $D_{ij}= D_i \cap TD_j,$ so $T^{-1}D_{ij} = T^{-1}D_i \cap D_j$ and $D_j = \bigcup_i T^{-1}D_{ij}.$ We have \begin{equation} \label{Eq:SquareFinal1} m\left(D_j \triangle \bigcup_i F_{ij} \right) = m \left( \bigcup_i T^{-1} D_{ij} \triangle \bigcup_i F_{ij} \right) \le \sum_i m(T^{-1} D_{ij} \triangle F_{ij}). \end{equation} But per (\ref{Eq:SquareApproximations}), $m(T^{-1}D_{ij} \triangle F_{ij}) < \frac{\epsilon}{2M},$ so $(\ref{Eq:SquareFinal1}) < \sum_i \frac{\epsilon}{2M} \le \frac{\epsilon}{2}$. Therefore $$ m\left(QD_j \triangle \bigcup_i E_{ij} \right) = m \left(D_j \triangle \bigcup_i F_{ij} \right) < \frac{\epsilon}{2}. $$ On the other hand, \begin{equation} \label{Eq:SquareFinal2} m\left( TD_j \triangle \bigcup_i E_{ij} \right) = m \left( \bigcup_i D_{ij} \triangle \bigcup_i E_{ij} \right) \le \sum_i m(D_{ij} \triangle E_{ij}).
\end{equation} Again, per (\ref{Eq:SquareApproximations}), $m(D_{ij} \triangle E_{ij}) < \frac{\epsilon}{2M},$ so $(\ref{Eq:SquareFinal2}) < \sum_i \frac{\epsilon}{2M} = \frac{\epsilon}{2}.$ Therefore, $$ m(TD_j \triangle QD_j) \le m\left( TD_j \triangle \bigcup_i E_{ij} \right) + m\left(\bigcup_i E_{ij} \triangle QD_j \right) < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. $$ As this holds for all $j,$ we have that $Q \in N_{\epsilon}(T).$ Now suppose $T$ is an extension of some invertible $T'.$ Define $\tilde{T}:= (T')^{-1}T.$ Then $\tilde{T}$ is an extension of the identity, so there exists a column-preserving permutation $\tilde{Q} \in N_{\epsilon/2}(\tilde{T}).$ But then $T'\tilde{Q} \in N_{\epsilon/2}(T)$ as $m(T'\tilde{Q}D_i \triangle TD_i) = m(\tilde{Q}D_i \triangle \tilde{T}D_i) < \frac{\epsilon}{2}.$ By Remark \ref{Remark:PermutationsDiscreteEquivalent}, $T'\tilde{Q}$ is $L$-simply discrete equivalent, with $L = 2^{\text{rank } \tilde{Q}}$. If we let $G_i:= \pi_L D_i$ and let $\tilde{N}_{\epsilon/2}(\pi_L(T'\tilde{Q})) := \{S_L: \mu_L(\pi_L(T'\tilde{Q})G_i \triangle S_LG_i) < \frac{\epsilon}{2} \forall i \},$ then by Theorem \ref{Thm:DensityOfPermutationsFinite} there exists a column-preserving dyadic permutation $\hat{Q} \in \tilde{N}_{\epsilon/2}(\pi_L(T'\tilde{Q})).$ Now we define $Q$ to be the simply discrete equivalent extension of $\hat{Q}.$ Note that because $L$ was dyadic, $Q$ is a (column-preserving) dyadic permutation. Further, $Q \in N_{\epsilon/2}(T'\tilde{Q})$ as $m(QD_i \triangle T'\tilde{Q}D_i) = \mu_L(\hat{Q}G_i \triangle \pi_L(T'\tilde{Q})G_i) < \frac{\epsilon}{2}.$ So $$m(QD_i \triangle TD_i) \le m(QD_i \triangle T'\tilde{Q}D_i) + m(T'\tilde{Q}D_i \triangle TD_i) < \epsilon$$ for all $i$, and thus $Q \in N_{\epsilon}(T).$ \end{proof} We close this section with the promised version of Halmos' Weak Approximation Theorem for extensions. 
\begin{thm}[Weak Approximation Theorem for Extensions] \label{Thm:WATE} Let $T\in \mathcal{G}_X$, and let $N_{\epsilon}(T)$ be a dyadic neighborhood of $T.$ Then for any $k_0 \in \mathbb{N},$ there exists $k \ge k_0$ and $Q \in \mathcal{G}_X$ such that the following hold: \begin{itemize} \item $Q,Q'$ are dyadic permutations of rank $k$ on $Z,X$ respectively, \item $Q'$ is cyclic, \item $Q$ is periodic with period $2^k$ everywhere, \item $Q \in N_{\epsilon}(T).$ \end{itemize} \end{thm} \begin{proof} Because Theorem \ref{Thm:DensityOfPermutations} tells us that $N_{\epsilon/2}(T)$ will contain a column-preserving permutation $P \in \mathcal{G}_X,$ we need only prove the case where $T$ is a permutation itself (we will therefore proceed using $P, P'$ in place of $T,T'$). Fix $k_0 \in \mathbb{N}.$ Because $P$ is a permutation and $D_i$ is a dyadic set, $PD_i$ is also a dyadic set. Let $M$ be the maximum rank of $PD_i$ (so that $P$ is a permutation of rank $M$), $K$ be the number of disjoint cycles in $P',$ and $k$ be chosen to be greater than both $M$ and $k_0,$ and such that $\frac{K}{2^{k-1}} < \epsilon$. We will now construct $Q$ of rank $k.$ Note that following the proof there will be an example of this construction. To start, let $E_1$ be any dyadic square of rank $M$. If $\pi E_1$ is not a fixed point of $P',$ we have $Q$ map the ``first'' rank $k$ dyadic square (which we will henceforth refer to as a $k$-square) of $E_1$ to the ``first'' $k$-square in $PE_1.$ By ``first'' $k$-square, we mean the top left $k$-square. Now, if $(P')^2 \pi E_1 \neq \pi E_1,$ we continue to map to the first $k$-square in $(P')^2 \pi E_1.$ Eventually, however, we reach a point where $(P')^l \pi E_1 = \pi E_1.$ From where we are in $P^{l-1} E_1,$ we continue to map to the ``second'' $k$-square in $P^l E_1$ (by ``second'' we mean the one to the right of the first). Note that $P^l E_1 \neq E_1$ in general.
We now repeat the entire process, replacing ``first'' with ``second,'' eventually ``third'' and so on, as well as replacing $E_1$ with $P^l E_1$. Eventually we will arrive at a $k$-square whose projection is at the far right of $(P')^{l-1} \pi E_1.$ At this point, we choose an $M$-square $E_2$ such that $\pi E_2$ is not in the $P'$ cycle of $\pi E_1$ (assuming such an $E_2$ exists). Then from our current position, we map to the first $k$-square of $E_2,$ and repeat the process. We continue on like this until we have exhausted every $P'$ cycle (including fixed points), at which point we return to the first $k$-square of $E_1.$ Note that we have visited every $k$-column exactly once. We are not quite done yet, though. We now choose a $k$-square on the same column as the first $k$-square in $E_1,$ and we repeat the entire process, now shifting to the rows within the $M$-squares that correspond to our new choice of starting point. That is, in the original process, we were in the top row of every $M$-square, because our original $k$-square was in the top row. If our new $k$-square is in the 3rd row within its $M$-square, say, all our choices will be in the 3rd row of the respective $M$-squares. Repeating this process, we eventually define $Q$ for all $k$-squares. We now find a bound for $m(PD_i \triangle QD_i).$ Note that by our construction the only points that can be in $PD_i \triangle QD_i$ come from $k$-squares in $D_i$ whose projections are in the last $k$-interval in each $P'$ cycle. Let $E_j$ be such a $k$-square. Then $m(PE_j \triangle QE_j) \le \frac{2}{2^{2k}} = \frac{1}{2^{2k-1}}.$ There are $2^k$ such $E_j$ per $k$-column, and there are $K$ such $k$-columns. Thus, $m(PD_i \triangle QD_i) \le \sum_j m(PE_j \triangle QE_j) \le \frac{K 2^k}{2^{2k-1}} = \frac{K}{2^{k-1}} < \epsilon$. \end{proof} The construction in the proof of Theorem \ref{Thm:WATE} can be difficult to follow closely, so we provide an example of the construction.
We first provide a $P$ which, in this case, will be of rank $2$. See Figure \ref{Fig:4x4} for reference on how we label the $2$-squares. Note that we will define $P$ using cycle decomposition notation. That is, if we write $R=(1 \ 2 \ 3),$ then we mean that the image under $R$ of the square labeled $1$ is the square labeled $2$. Similarly the image of ``$2$'' is ``$3$'' and the image of ``$3$'' is ``$1$''. Any squares not written explicitly in the decomposition are fixed points. Now, we let $P := (1 \ 11 \ 5 \ 3)(13 \ 15)(9 \ 7)(2 \ 6 \ 14)(4 \ 16 \ 12 \ 8).$ Note that $P$ extends $P' := (1 \ 3)$ on $X$. \begin{figure}[h] \caption{}\label{Fig:4x4} \includegraphics[scale=0.4]{4x4.pdf} \centering \end{figure} \begin{figure}[h] \caption{}\label{Fig:8x8} \includegraphics[scale=0.4]{8x8.pdf} \centering \end{figure} Suppose we were to construct $Q$ to be a rank $3$ permutation. Rather than write the entire cycle decomposition of $Q$ (as it would involve writing all $64$ $3$-squares), we label Figure \ref{Fig:8x8} to define $Q$. Here we have labeled the $3$-squares such that for a square labeled $(n,k),$ we have that $Q(n,k)=(n,k+1),$ with $k+1$ taken mod $8$ (for consistency, here we have $8 \mod 8 := 8$ instead of $0$ as it typically would be). Further, if $n_1 \neq n_2,$ then $(n_1,k_1),(n_2,k_2)$ are in independent cycles. It is easy to see with this notation that $Q$ is an extension of a cyclic permutation $Q'$ on $X.$ We also note that the $Q$ we constructed is not the only possible $Q$ we could have constructed, as we had many free choices in the construction. To close this section, we note that a very simple modification of the proof of Theorem \ref{Thm:WATE} would yield a column preserving permutation $Q$ such that not only $Q'$ is cyclic, but $Q$ is cyclic as well. In our example seen in Figure \ref{Fig:8x8}, this modification would be accomplished by changing the definition of $Q$ slightly so that $Q(n,8)=(n+1,1),$ with $n+1$ taken mod $8$.
This formulation is more akin to the classical theorem. However, we choose the formulation given in Theorem \ref{Thm:WATE} as it is this formulation we need for further results. \section{Uniform Approximation} \label{Sec:UniformApproximation} Our goal in this section is to prove results that are generalizations of those needed for Halmos' classical Conjugacy Lemma (the key lemma for proving that weakly mixing transformations on $X$ are dense in $\mathcal{G}(X)$), and whose proofs quickly follow from the classical results and their proofs. \begin{lem} \label{Lem:PeriodicExtensionPartition} Let $T \in \mathcal{G}_X$ where $T'$ is periodic of period $n$ (almost) everywhere. Then there exists a set $E$ such that $E= \pi^{-1}E'$ for some $E' \subset X,$ and $\{E,TE, \ldots, T^{n-1}E\}$ partition $Z.$ \end{lem} \begin{proof} Because $T'$ has period $n$ everywhere, there exists $E'$ such that $\{E',T'E',\ldots, (T')^{n-1}E'\}$ partitions $X$. Setting $E:=\pi^{-1}E'$ we have $$\{E,TE, \ldots, T^{n-1}E\}$$ are pairwise disjoint because $T$ extends $T'$. Further, because $m(E)=m(E')=\frac{1}{n},$ we have $m\left(\bigcup_{i=0}^{n-1} T^iE\right)= \sum_{i=0}^{n-1} m(T^iE) = 1,$ or $\bigcup_{i=0}^{n-1} T^iE = Z$. \end{proof} Next we move to a version of Rokhlin's lemma (see, for example, \cite[p.71]{HalmosLectures}). \begin{lem} \label{Lem:AntiperiodicExtension} Let $T \in \mathcal{G}_X$ where $T'$ is antiperiodic. Then for every $n \in \mathbb{N}$ and $\epsilon >0$ there exists $E$ with $E=\pi^{-1}E'$ for some $E'$ such that $\{E,TE, \ldots, T^{n-1}E\}$ are pairwise disjoint and $m\left(\bigcup_{i=0}^{n-1} T^iE\right) > 1-\epsilon.$ \end{lem} \begin{proof} Let $n \in \mathbb{N}$ and $\epsilon >0$. Because $T'$ is antiperiodic, there exists $E' \subset X$ such that $\{E',T'E',\ldots, (T')^{n-1}E'\}$ are pairwise disjoint and $m\left(\bigcup_{i=0}^{n-1} (T')^iE'\right) > 1-\epsilon$. Let $E:= \pi^{-1}E'$. Because $T$ extends $T',$ $\{E,TE, \ldots, T^{n-1}E\}$ are pairwise disjoint.
Further $m\left(\bigcup_{i=0}^{n-1} T^iE\right)= \sum_{i=0}^{n-1} m(T^iE)=\sum_{i=0}^{n-1} m((T')^iE') = m\left(\bigcup_{i=0}^{n-1} (T')^iE'\right) > 1-\epsilon.$ \end{proof} We conclude this section with a version of Halmos' Uniform Approximation Theorem (see \cite[p.75]{HalmosLectures}). \begin{thm}[Uniform Approximation Theorem for Extensions] \label{Thm:UATE} Let $T \in \mathcal{G}_X$ where $T'$ is antiperiodic. Then for every $n \in \mathbb{N}$ and $\epsilon >0$ there exists $R \in \mathcal{G}_X,$ such that both $R$ and $R'$ are periodic with period $n$ almost everywhere, and $d'(R,T) \le \frac{1}{n} + \epsilon.$ \end{thm} \begin{proof} By Lemma \ref{Lem:AntiperiodicExtension}, there exists a cylinder set $E$ such that $\{E,TE, \ldots, T^{n-1}E\}$ are pairwise disjoint, and $m\left(\bigcup_{i=0}^{n-1} T^iE\right) > 1-\epsilon$. If $z \in \bigcup_{i=0}^{n-2} T^iE,$ define $Rz := Tz,$ and if $z \in T^{n-1}E,$ define $Rz := T^{-(n-1)}z,$ thus making $R$ have period $n$ for all points on which we have thus far defined it. Further, because $T$ extends $T',$ $R$ is also an extension. And for any definition of $R$ on the remainder of $Z$, we have $d'(R,T) \le m(T^{n-1}E) + \epsilon \le \frac{1}{n} + \epsilon.$ All that remains is to define $R$ on the remainder of $Z$ so that $R$ is an extension, and $R,R'$ have period $n.$ Since the remainder is a cylinder set, this can be done by defining $R'$ on the projection of the remainder, as in the classical case, and then letting $R= R' \times I$ on this set of measure at most $\epsilon$. \end{proof} \section{Conjugacy Lemma} \label{Sec:ConjugacyLemma} We now prove a generalization of Halmos' Conjugacy Lemma (see \cite[p.77]{HalmosLectures}), using the same techniques as Halmos' original proof.
\begin{lem}[Conjugacy Lemma for Extensions] \label{Lem:ConjugacyLemmaExtensions} Let $T \in \mathcal{G}_X, T_0 \in \mathcal{G}_X$ such that $T'_0$ is antiperiodic, and let $N_{\epsilon}(T)= \{V \in \mathcal{G}_X: m(VD_i \triangle TD_i) < \epsilon, i=1,\ldots, N\}$ be a dyadic neighborhood of $T$. Then there exists $S \in \mathcal{G}_X$ such that $S^{-1}T_0S \in N_{\epsilon}(T).$ \end{lem} \begin{proof} Let $k_0 \in \mathbb{N}$ be greater than the ranks of all $D_i$ and such that $\frac{1}{2^{k_0-2}} < \epsilon.$ Further, let $Q \in N_{\epsilon/2}(T)$ be a dyadic permutation of rank $k \ge k_0$ with all properties guaranteed by the Weak Approximation Theorem for Extensions, Theorem \ref{Thm:WATE} ($Q'$ is cyclic, $Q$ is $2^k$ periodic). Applying the Uniform Approximation Theorem for Extensions, Theorem \ref{Thm:UATE}, with $2^k$ in place of $n$ and $\frac{1}{2^k}$ in place of $\epsilon$, there exists $R \in \mathcal{G}_X$ such that $R,R'$ have period $2^k$ almost everywhere, and $d'(R,T_0) \le \frac{1}{2^k} + \frac{1}{2^k} < \frac{\epsilon}{2}$. We will show $Q$ and $R$ are conjugate by some $S \in \mathcal{G}_X.$ Let $q = 2^k$ and $E_0,\ldots, E_{q-1}$ be cylinder sets of dyadic intervals of rank $k$ in $X,$ arranged so that $QE_i = E_{i+1},$ with $i+1$ taken mod $q$. Note that $m(E_i)=\frac{1}{q}.$ By Lemma \ref{Lem:PeriodicExtensionPartition}, there exists $F_0$, a cylinder set, such that $m(F_0)=\frac{1}{q}$ and $F_0,RF_0,\ldots,R^{q-1}F_0$ partition $Z$. Let $F_i:=R^iF_0$.
Let $S$ be any measure preserving transformation which maps $E_0$ to $F_0$ as an extension of some $S'.$ Then for $z \in E_i,$ let $Sz:=R^iSQ^{-i}z.$ This can be seen in the following diagram: \[ \xymatrix{ E_0\ar[r]^{Q}\ar[d]_{S}&E_1\ar[r]^{Q}\ar[d]_{S}&E_2\ar[r]^{Q}\ar[d]_{S}&\ldots\ar[r]^{Q}&E_{q-2}\ar[r]^{Q}\ar[d]_{S}&E_{q-1}\ar[d]_{S} \\ F_0\ar[r]_{R}&F_1\ar[r]_{R}&F_2\ar[r]_{R}&\ldots\ar[r]_{R}&F_{q-2}\ar[r]_{R}&F_{q-1} } \] Commutativity of the diagram shows that $Q = S^{-1}RS.$ Further, because $Q,R,S\restriction_{E_0}$ are extensions of $Q',R',S'\restriction_{\pi E_0}$ respectively, $S$ is an extension of $S'$. Now, because $d'$ is invariant under group operations, we have $$d'(Q,S^{-1}T_0S) \le d'(S^{-1}RS,S^{-1}T_0S) = d'(R,T_0) < \frac{\epsilon}{2}.$$ Thus, for any $D_i$ we have: \begin{align*} \label{Eqn:ConjugacyFinal} m(TD_i \triangle S^{-1}T_0SD_i) &\le m(TD_i \triangle QD_i) + m(QD_i \triangle S^{-1}T_0SD_i) \\ &\le \frac{\epsilon}{2} + d(Q,S^{-1}T_0S). \end{align*} But $d \le d',$ so $m(TD_i \triangle S^{-1}T_0SD_i) \le \frac{\epsilon}{2} + d'(Q,S^{-1}T_0S) < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.$ \end{proof} This lemma is so important because, as we will see in our main result, Theorem \ref{Thm:CategoryTheorem}, the conjugacy class of $\mathcal{W}_X$ is $\mathcal{W}_X$ itself. Thus, in proving Lemma \ref{Lem:ConjugacyLemmaExtensions}, we have indeed proven half of Theorem \ref{Thm:CategoryTheorem}. \section{Category Theorem} \label{Sec:CategoryTheorem} We are fast approaching our main goal: that $\mathcal{W}_X$ is a dense, $G_{\delta}$ subset of $\mathcal{G}_X$. Before we can prove it, we need to prove a few technical results. First we note a quick consequence of the Cauchy-Schwarz Inequality, Proposition \ref{Thm:C-S}; though simple, it is important enough to record separately.
\begin{prop} \label{Prop:C-SNorms} Let $f,g \in L^2(Z|X).$ Then $$ \norm{\mathbb{E}(f g |X)}_{L^2(X)} \le \norm{\norm{f}_{L^2(Z|X)}}_{L^{\infty}(X)} \norm{g}_{L^2(Z)}.$$ \end{prop} \begin{proof} By the Cauchy-Schwarz Inequality for $L^2(Z|X)$, we have $$\abs{\mathbb{E}(f g |X)} \le \mathbb{E}(\abs{f}^2|X)^{1/2} \mathbb{E}(\abs{g}^2|X)^{1/2}$$ pointwise. Now, by definition of $L^2(Z|X), \mathbb{E}(\abs{f}^2|X)^{1/2} \in L^{\infty}(X).$ Letting $M := \norm{\norm{f}_{L^2(Z|X)}}_{L^{\infty}(X)},$ we have \begin{equation} \label{Eq:C-S} \abs{\mathbb{E}(f g |X)} \le M \mathbb{E}(\abs{g}^2|X)^{1/2}. \end{equation} Now note that by Fubini, \begin{align*} \norm{\mathbb{E}(\abs{g}^2|X)^{1/2}}_{L^2(X)} &= \ \left(\int_X \left(\left( \int_Y \abs{g}^2(x,y) d \mu_Y \right)^{1/2}\right)^2 d \mu_X \right)^{1/2} \\ &= \ \left( \int_Z \abs{g}^2 dm_2 \right)^{1/2} = \norm{g}_{L^2(Z)}, \end{align*} where $Y=X, \mu_X=\mu_Y = m_1$ (the notation here was changed to clarify what integrals were intended). And so taking $L^2(X)$ norm on both sides of (\ref{Eq:C-S}), we arrive at the desired inequality. \end{proof} Our next goal is to prove that $T \in \mathcal{W}_X$ is equivalent to the existence of a subsequence $n_k$ such that for all $f,g \in L^2(Z|X),$ \begin{equation} \label{Eqn:RelWMSubsequence} \lim_{k \to \infty} \norm{\mathbb{E}(T^{n_k}f \overline{g} |X) - (T')^{n_k} \mathbb{E}(f|X)\mathbb{E}(\overline{g}|X)}_{L^2(X)} = 0. \end{equation} To this end, we first show that if (\ref{Eqn:RelWMSubsequence}) holds for an $L^2(Z)$-dense subset of $L^2(Z|X),$ it holds for all of $L^2(Z|X).$ \begin{lem} \label{Lem:RWML2Dense} Let $T \in \mathcal{G}_X,$ and let $D \subset L^2(Z|X),$ with $\norm{\norm{\overline{f}}_{L^2(Z|X)}}_{L^{\infty}(X)} \le 1$ for all $f \in D,$ such that $D$ is dense in the unit ball of $L^2(Z|X),$ but with respect to the $L^2(Z)$ norm topology. 
Further suppose that there exists a subsequence $n_k$ such that for all $f_i,f_j \in D,$ $$\lim_{k \to \infty} \norm{\mathbb{E}(T^{n_k}f_i \overline{f_j}|X) - (T')^{n_k}\mathbb{E}(f_i|X)\mathbb{E}(\overline{f_j}|X)}_{L^2(X)} = 0.$$ Then for all $h,g \in L^2(Z|X)$ $$\lim_{k \to \infty} \norm{\mathbb{E}(T^{n_k}h \overline{g}|X) - (T')^{n_k}\mathbb{E}(h|X)\mathbb{E}(\overline{g}|X)}_{L^2(X)} = 0.$$ \end{lem} \begin{proof} Fix $h,g \in L^2(Z|X)$ and let $\{h_j\}, \{g_j\} \subset D$ such that $$h_j \xrightarrow{L^2(Z)} h, g_j \xrightarrow{L^2(Z)} g.$$ We first claim that $$\lim_{j \to \infty} \norm{\mathbb{E}(T^nh_j \overline{g_j}|X) - \mathbb{E}(T^nh \overline{g}|X)}_{L^2(X)} = 0,$$ uniformly with respect to $n$. Indeed, we have \begin{align*} &\quad\ \ \norm{\mathbb{E}(T^nh_j \overline{g_j}|X) - \mathbb{E}(T^nh \overline{g}|X)}_{L^2(X)} \\ &\le \ \norm{\mathbb{E}(T^nh_j \overline{g_j}- T^n h_j \overline{g}|X)}_{L^2(X)} + \norm{\mathbb{E}(T^n h_j \overline{g} - T^nh \overline{g}|X)}_{L^2(X)} \\ &= \ \norm{\mathbb{E}(T^nh_j (\overline{g_j}- \overline{g})|X)}_{L^2(X)} + \norm{\mathbb{E}(\overline{g}(T^n h_j - T^nh)|X)}_{L^2(X)}. \end{align*} Now by Proposition \ref{Prop:C-SNorms} \begin{align*} \norm{\mathbb{E}(T^nh_j (\overline{g_j}- \overline{g})|X)}_{L^2(X)} &\le \norm{\norm{T^nh_j}_{L^2(Z|X)}}_{L^{\infty}(X)} \norm{\overline{g_j}- \overline{g}}_{L^2(Z)}, \\ \norm{\mathbb{E}(\overline{g}(T^n h_j - T^nh)|X)}_{L^2(X)} &\le \norm{\norm{\overline{g}}_{L^2(Z|X)}}_{L^{\infty}(X)} \norm{T^n(h-h_j)}_{L^2(Z)}. 
\end{align*} In turn, as $\norm{\norm{T^nh_j}_{L^2(Z|X)}}_{L^{\infty}(X)} = \norm{\norm{h_j}_{L^2(Z|X)}}_{L^{\infty}(X)}$ and $T$ is an isometry, we get \begin{align*} \norm{\norm{T^nh_j}_{L^2(Z|X)}}_{L^{\infty}(X)} \norm{\overline{g_j}- \overline{g}}_{L^2(Z)} &\le \norm{\norm{h_j}_{L^2(Z|X)}}_{L^{\infty}(X)} \norm{\overline{g_j}- \overline{g}}_{L^2(Z)}, \\ \norm{\norm{\overline{g}}_{L^2(Z|X)}}_{L^{\infty}(X)} \norm{T^n(h-h_j)}_{L^2(Z)} &\le \norm{\norm{\overline{g}}_{L^2(Z|X)}}_{L^{\infty}(X)} \norm{h-h_j}_{L^2(Z)}. \end{align*} Because $h_j \to h, g_j \to g$ in $L^2(Z),$ we have the desired result. Note that a similar argument using Proposition \ref{Prop:C-SNorms} will show $\mathbb{E}(\overline{g_j}|X) \to \mathbb{E}(\overline{g}|X), (T')^n\mathbb{E}(h_j|X) \to (T')^n\mathbb{E}(h|X)$ in $L^2(X).$ Now, \begin{align*} &\quad\ \norm{\mathbb{E}(T^nh \overline{g}|X) - (T')^n\mathbb{E}(h|X)\mathbb{E}(\overline{g}|X)}_{L^2(X)} \\ &\le \ \norm{\mathbb{E}(T^nh \overline{g}|X) - \mathbb{E}(T^nh_j \overline{g_j}|X)}_{L^2(X)} \\ &+ \ \norm{\mathbb{E}(T^nh_j \overline{g_j}|X) - (T')^n\mathbb{E}(h_j|X)\mathbb{E}(\overline{g_j}|X)}_{L^2(X)} \\ &+ \ \norm{(T')^n\mathbb{E}(h_j|X)\mathbb{E}(\overline{g_j}|X) - (T')^n\mathbb{E}(h|X)\mathbb{E}(\overline{g}|X)}_{L^2(X)}. \end{align*} By hypothesis, there is a subsequence $n_k$ (independent of $j$) such that the middle term converges to 0. Further, the first and third terms converge to 0 as $j \to \infty$ uniformly in $n$, so $$\lim_{k \to \infty} \norm{\mathbb{E}(T^{n_k}h \overline{g}|X) - (T')^{n_k}\mathbb{E}(h|X)\mathbb{E}(\overline{g}|X)}_{L^2(X)} = 0$$ as desired. \end{proof} Next, recall that a function $f \in L^2(Z|X)$ is called a \textit{generalized eigenfunction} for a given $T \in \mathcal{G}_X$ (see \cite[p.179]{Glasner}) if the $L^{\infty}(X)$-module spanned by $\{T^nf: n \in \mathbb{N}\}$ has finite rank.
In other words, there exists $g_1, \ldots, g_l \in L^2(Z|X)$ such that for all $n$, there exists $c^n_j \in L^{\infty}(X), 1 \le j \le l$ such that $$T^nf(x,y) = \sum_j c^n_j(x) g_j(x,y).$$ \begin{lem} \label{Lem:RelWMSubsequence} Let $T \in \mathcal{G}_X$. Then $T \in \mathcal{W}_X$ if and only if there exists a subsequence $n_k$ such that for all $f,g \in L^2(Z|X)$ \begin{equation} \label{Eqn:RelWMSubsequence2} \lim_{k \to \infty} \norm{\mathbb{E}(T^{n_k}f \overline{g} |X) - (T')^{n_k} \mathbb{E}(f|X)\mathbb{E}(\overline{g}|X)}_{L^2(X)} = 0. \end{equation} \end{lem} \begin{proof} Let $T \in \mathcal{W}_X.$ Lemma \ref{Lem:RWML2Dense} tells us that we need only show that there exists a subsequence such that (\ref{Eqn:RelWMSubsequence2}) holds for an $L^2(Z)$-dense subset of $L^2(Z|X)$. Let $(f_i)_{i=1}^{\infty}$ be an enumeration of a countable such dense subset, and let $l \mapsto (f_i,f_j), 1 \le l < \infty$ order the set $\{ (f_i,f_j) \}$ arbitrarily. Now, by the definition of a weakly mixing extension and the Koopman-von Neumann Lemma (see, e.g., \cite[p.54]{EinsiedlerWard}), for each $l$ there exists a subsequence $n'^l_{m}$ of upper asymptotic density 1 such that $$\lim_{m \to \infty} \norm{\mathbb{E}(T^{n'^l_{m}}f_i \overline{f_j} |X) - (T')^{n'^l_{m}} \mathbb{E}(f_i|X)\mathbb{E}(\overline{f_j}|X)}_{L^2(X)} = 0.$$ Stated differently, for the pair $f_i,f_j$ corresponding to $l$, we have the desired convergence along the density 1 subsequence $n'^l_{m}$. We now define a new density 1 subsequence for each $l$ inductively. Let $n^1_{m}:= n'^1_m,$ and for $l > 1,$ define $n^l_m$ to be a common density 1 subsequence of $n^{l-1}_m$ and $n'^l_m.$ Now define $n_k$ to be a diagonal sequence obtained from $n^l_m,$ i.e., $n_k := n^k_k.$ It is easy to see that for all $f_i,f_j,$ (\ref{Eqn:RelWMSubsequence2}) holds. To prove the converse, suppose $T \in \mathcal{G}_X \backslash \mathcal{W}_X$.
Then there exists $f \in L^2(Z|X) \backslash L^{\infty}(X)$ that is a generalized eigenfunction for $T$ (see \cite[p.192]{Glasner}). Without loss of generality, $\norm{f}_{L^1(Z)}=1.$ Let $g_1, \ldots, g_l \in L^2(Z|X)$ be a basis for the module spanned by $T^nf.$ We want $g_i$ to be ``relatively orthonormal''. That is, we want $\mathbb{E}(g_i \overline{g}_j|X)=0 \ a.e.$ when $i \neq j$ and $\mathbb{E}(\abs{g_j}^2|X)=1 \ a.e.$ This can be accomplished with a relative Gram-Schmidt process. We start by defining $h'_1:= g_1.$ For any $x$ such that $\mathbb{E}(\abs{h'_1}^2|X)(x)=0,$ we have that $h'_{1,x}(y) \equiv 0,$ where $h'_{1,x}(y) := h'_1(x,y).$ Thus by setting the corresponding $c^n_1(x)=0$ for all $n$ we can define $h_{1,x}(y)$ arbitrarily, so long as it is not identically 0. For all other $x$, define $h_{1,x}(y):= h'_{1,x}(y)$. Now, having defined $h_j, 1 \le j \le i-1,$ we define $h'_i$ by $$h'_i := g_i - \sum_{j=1}^{i-1} \frac{\mathbb{E}(g_i \overline{h}_j|X)}{\mathbb{E}(\abs{h_j}^2|X)}h_j.$$ Similar to the above, if there are any $x$ such that $\mathbb{E}(\abs{h'_i}^2|X)(x)=0,$ we define $h_{i,x}(y) \not\equiv 0$ (again changing $c^n_i(x)$ to 0 for all $n$) and for all other $x, h_{i,x}(y) := h'_{i,x}(y)$. Finally we normalize and redefine $g_j$ so that $$g_j(x,y):= \frac{h_j(x,y)}{\mathbb{E}(\abs{h_j}^2|X)(x)^{1/2}}.$$ Now define a function $j : \mathbb{N} \to \{1, \ldots, l\}$ such that $\norm{c^n_{j(n)}}_{L^2(Z)} \ge \norm{c^n_{i}}_{L^2(Z)}$ for all $1 \le i \le l.$ Note that for each $n, \norm{c^n_{j(n)}}_{L^2(Z)} \ge 1/l,$ since otherwise $$\norm{T^nf}_{L^1(Z)} \le \sum_i \norm{c^n_i g_i}_1 \le \sum_i \norm{c^n_i}_2 \norm{g_i}_2 < 1.$$ Fix $n$ and suppose for now that $\mathbb{E}(f|X) \equiv 0$ almost everywhere.
Note that this is guaranteed to be possible because if $f= f_0 + \mathbb{E}(f|X)$ is a generalized eigenfunction with basis $\{g_1, \ldots, g_l\},$ then $f_0$ is a generalized eigenfunction with spanning set $\{g_1, \ldots, g_l, \mathbb{E}(f|X)\},$ and $\mathbb{E}(f_0|X) \equiv 0$ by design. Now, by relative orthonormality we have \begin{align*} &\quad \ \norm{\mathbb{E}(T^nf \overline{g}_{j(n)} |X) - (T')^n \mathbb{E}(f|X)\mathbb{E}(\overline{g}_{j(n)}|X)}_{L^2(X)} \\ &= \norm{\mathbb{E}(T^nf \overline{g}_{j(n)} |X)}_{L^2(X)} \\ &= \norm{\mathbb{E}\left(\left(\sum_{i=1}^l c^n_i g_i\right) \overline{g}_{j(n)} |X \right)}_{L^2(X)} \\ &= \norm{\mathbb{E}(c^n_{j(n)} g_{j(n)} \overline{g}_{j(n)} |X)}_{L^2(X)} \\ &= \norm{c^n_{j(n)} \mathbb{E}(\abs{g_{j(n)}}^2|X)}_{L^2(X)} \\ &= \norm{c^n_{j(n)}}_{L^2(Z)} \ge \frac{1}{l}. \end{align*} Define $B_i := j^{-1}(i)$. By the above work, if $n \in B_i,$ $$\norm{\mathbb{E}(T^nf \overline{g}_i |X)}_{L^2(X)} \ge \frac{1}{l}.$$ Now given any subsequence $(n_k)$ there will be at least one $i \in \{1, \ldots, l\}$ such that $(n_k)$ intersects $B_i$ infinitely often. Thus, $\norm{\mathbb{E}(T^{n_k}f \overline{g_i} |X)}$ does not converge to 0 as $k \to \infty$. If $\mathbb{E}(f|X) \neq 0,$ we write $f = f_0 + h$ where $\mathbb{E}(f_0|X) = 0$ and $h := \mathbb{E}(f|X)$. Then \begin{align*} &\mathbb{E}(T^nf \overline{g} |X) - (T')^n \mathbb{E}(f|X)\mathbb{E}(\overline{g}|X) \\ = \ &\mathbb{E}(T^n(f_0 + h) \overline{g} |X) - (T')^n \mathbb{E}(f_0 + h|X)\mathbb{E}(\overline{g}|X).
\end{align*} By linearity of the conditional expectation, this is the same as $$\mathbb{E}(T^nf_0 \overline{g} |X) - (T')^n \mathbb{E}(f_0|X) \mathbb{E}(\overline{g}|X) + \mathbb{E}(T^n h \overline{g}|X) - (T')^n \mathbb{E}(h|X) \mathbb{E}(\overline{g}|X).$$ The second term is 0 as $\mathbb{E}(f_0|X) \equiv 0.$ Further, because $h \in L^{\infty}(X), \mathbb{E}(T^n h \overline{g}|X) = (T')^n h \mathbb{E}(\overline{g}|X) = (T')^n \mathbb{E}(h|X) \mathbb{E}(\overline{g}|X),$ which cancels with the fourth term. We are left with $\mathbb{E}(T^nf_0 \overline{g} |X)$ and have reduced this to the previous case. \end{proof} Finally, we arrive at our goal. \begin{thm}[Weakly Mixing Extensions are Residual] \label{Thm:CategoryTheorem} $\mathcal{W}_X$ is a dense, $G_{\delta}$ subset of $\mathcal{G}_X$. \end{thm} \begin{proof} We begin by proving that if $T \in \mathcal{W}_X$ and $S \in \mathcal{G}_X,$ then $S^{-1}TS \in \mathcal{W}_X.$ Noting that there are weakly mixing extensions of antiperiodic factors, we then use Lemma \ref{Lem:ConjugacyLemmaExtensions} to conclude that $\mathcal{W}_X$ is dense. We need to prove that there exists a subsequence along which \begin{align*} & \norm{\mathbb{E}((S^{-1}TS)^nf \overline{g} |X) - ((S')^{-1}T'S')^n \mathbb{E}(f|X)\mathbb{E}(\overline{g}|X)}_{L^2(X)} \\ = \ &\norm{\mathbb{E}(S^{-1}T^nSf \overline{g} |X) - (S')^{-1}(T')^nS' \mathbb{E}(f|X)\mathbb{E}(\overline{g}|X)}_{L^2(X)} \end{align*} converges to 0.
Indeed, the above equals \begin{align*} \ &\norm{\mathbb{E}(S^{-1}T^nSf (S^{-1} S)\overline{g} |X) - (S')^{-1}(T')^nS' \mathbb{E}(f|X)((S')^{-1} S')\mathbb{E}(\overline{g}|X)}_{L^2(X)} \\ = \ &\norm{(S')^{-1}\mathbb{E}(T^n(Sf) (S\overline{g}) |X) - (S')^{-1}(T')^n\mathbb{E}(Sf|X) \mathbb{E}(S\overline{g}|X)} \\ \le \ &\norm{(S')^{-1}} \norm{\mathbb{E}(T^n(Sf) (S\overline{g}) |X) - (T')^n\mathbb{E}(Sf|X) \mathbb{E}(S\overline{g}|X)} \\ = \ &\norm{\mathbb{E}(T^n(Sf) (S\overline{g}) |X) - (T')^n\mathbb{E}(Sf|X) \mathbb{E}(S\overline{g}|X)}. \end{align*} But $T$ is a weakly mixing extension of $T'$ so there exists a subsequence for which the above converges to 0. To prove that $\mathcal{W}_X$ is $G_{\delta},$ let $\{f_i\} \subset L^2(Z|X)$ be dense with respect to $L^2(Z)$ and for $i,j,k,n \in \mathbb{N},$ consider the sets $$A_{i,j,k,n} := \left\{S \in \mathcal{G}_X : \norm{\mathbb{E}(S^nf_i \overline{f_j} |X) - (S')^n \mathbb{E}(f_i|X)\mathbb{E}(\overline{f_j}|X)}_{L^2(X)} < \frac{1}{k} \right\}.$$ Due to Lemmas \ref{Lem:RWML2Dense}, \ref{Lem:RelWMSubsequence}, we see that $\bigcap_{i,j,k} \bigcup_{n \ge k} A_{i,j,k,n} = \mathcal{W}_X.$ Thus it is sufficient to prove that each $A_{i,j,k,n}$ is open. For this, we show that for fixed $n \in \mathbb{N},f,g \in L^2(Z|X)$ and $\epsilon > 0,$ the set $$ \{S \in \mathcal{G}_X : \norm{\mathbb{E}(S^nf \overline{g} |X) - (S')^n\mathbb{E}(f|X)\mathbb{E}(\overline{g}|X)} < \epsilon \}$$ is open in the weak topology. To this end, we show that the complement $$V(n,f,g,\epsilon) := \{S \in \mathcal{G}_X : \norm{\mathbb{E}(S^nf \overline{g} |X) - (S')^n\mathbb{E}(f|X)\mathbb{E}(\overline{g}|X)} \ge \epsilon \}$$ is closed. Let $(S_m) \subset V(n,f,g,\epsilon)$ be a sequence of Koopman operators with $(S_m)$ converging weakly to a Koopman operator $S$. Note that this implies that $S_m \to S$ strongly (as Koopman operators are all isometries). 
First note that in general, if we have functions $g,h,h_1, h_2, \ldots \in L^2(Z|X),$ and $h_m \to h$ in $L^2(Z),$ then $\mathbb{E}(h_mg|X) \to \mathbb{E}(hg|X)$ in $L^2(X).$ Indeed, by Proposition \ref{Prop:C-SNorms}, \begin{align*} \norm{\mathbb{E}(gh_m|X) - \mathbb{E}(gh|X)}_{L^2(X)} &= \norm{\mathbb{E}(g(h-h_m)|X)}_{L^2(X)} \\ &\le \norm{\norm{g}_{L^2(Z|X)}}_{L^{\infty}(X)} \norm{h_m-h}_{L^2(Z)}. \end{align*} Second, note that $S_m \to S$ strongly implies $(S_m)' \to S'$ strongly. To see this, let $h \in L^2(X)$ and let $\hat{h} \in L^2(Z)$ be defined so that $\hat{h}(x,y):=h(x)$ (that is, $\hat{h}$ is constant on fibers). Then by Fubini, $\norm{(S_m)'h-S'h}_{L^2(X)} = \norm{S_m \hat{h} - S \hat{h}}_{L^2(Z)}.$ Lastly note that if $S_m \to S$ strongly, then $S_m^n \to S^n$ strongly. With these facts, we see that $\mathbb{E}(S_m^nf \overline{g} |X) - (S_m')^n\mathbb{E}(f|X)\mathbb{E}(\overline{g}|X)$ converges to $\mathbb{E}(S^nf \overline{g} |X) - (S')^n\mathbb{E}(f|X)\mathbb{E}(\overline{g}|X)$ strongly. Thus, $S \in V(n,f,g,\epsilon),$ and so $V(n,f,g,\epsilon)$ is closed. \end{proof} \begin{remark} Our assumption that $(X,m)$ is the unit interval with Lebesgue measure was only important for the proof of density, where we need a non-atomic probability space in order to have an antiperiodic factor. Proving $\mathcal{W}_X$ is $G_{\delta}$ never required anything of $X$, and so the proof holds with $(X,m)$ replaced by any probability space. \end{remark} \section{General Case} \label{Sec:General} There is still a case left to consider. Namely, the case where the ``vertical'' measure is neither purely non-atomic nor purely discrete. Let $(X,m,T')$ be as before with $T'$ invertible, and let $(Y, \eta)$ be a probability space where $Y = A \dot{\cup} B$, where $B$ is an at most countable set, each point of which is an atom of $\eta$, and $\eta \restriction_{A}$ is non-atomic (in particular, $0 < \eta(A) < 1$).
Let $(Z,\mu,T)=(X \times Y, m \times \eta,T),$ where $T \in \mathcal{G}_X$ is an extension of $T'.$ Let $C:=X \times A$ and $D:=X \times B.$ We first show that $T$ cannot mix points on the discrete and non-discrete parts of $Y$. \begin{prop} \label{Prop:GeneralNoPointMixing} Let $C,D \subset Z$ be as defined above. Then up to a set of measure zero, $TC \subset C$ and $TD \subset D.$ \end{prop} \begin{proof} First, suppose there exists $D' \subset D$ such that $\mu(D') > 0$ and $TD' \subset C$. We can assume without loss of generality that there exists $k$, a level of $D,$ such that $D' \subset k$ (see Section \ref{Sec:Discrete} for an explanation of this notation). We claim that $\mu(TD')=0,$ so that $T$ is not measure preserving. Note that because $T$ is an extension of invertible $T'$ and $D'$ is contained in a single level of $D$, then for each fixed $x \in X, TD' \cap \pi^{-1}(x)$ contains at most one point. Therefore, by Fubini $$\mu(TD') = \int_Z \chi_{TD'} d\mu = \int_X \left( \int_Y \chi_{TD'}(x,y) d\eta \right) dm.$$ But by our previous note and the fact that $(A, \eta \restriction_A)$ is non-atomic, $\int_Y \chi_{TD'}(x,y) d\eta =0$ for all $x$, and thus $\mu(TD')=0.$ Now suppose there exists $C' \subset C$ such that $\mu(C')>0$ and $TC' \subset D.$ Note that there exists $x_0 \in X$ such that $\pi^{-1}(x_0) \cap C'$ is uncountable (else $\mu(C')=0$ by an argument similar to the one above). But $T(\pi^{-1}(x_0) \cap C') \subset T'x_0 \times B$ is a countable set. This contradicts the invertibility of $T$. \end{proof} A quick consequence of Proposition \ref{Prop:GeneralNoPointMixing} is that there are no weakly mixing extensions on $Z$. \begin{cor} \label{Cor:NoWeakMixingOnMixedVertical} Let $(X,m,T'), (Y,\eta), (Z,\mu,T), C,$ and $D$ be as defined above, with $A$ and $B$, the continuous and discrete parts of $Y$, respectively, nonempty. Then $T \notin \mathcal{W}_X$.
\end{cor} \begin{proof} Note that we can assume without loss of generality that $TC \subset C$ and $TD \subset D$ (that is, not up to a set of measure 0, but rather everywhere). Define $f(z)$ as \begin{center} $ f(z) := \left\{ \begin{array}{cc} \frac{1}{\eta(A)} & \text{if } z \in C \\ \frac{-1}{\eta(B)} & \text{if } z \in D \end{array} \right. $. \end{center} Clearly $f \in L^2(Z|X)$ and a simple calculation shows that $f$ has relative mean zero. Further, by Proposition \ref{Prop:GeneralNoPointMixing}, $f$ is $T$-invariant, and so $$\mathbb{E}(T^nf \overline{f}|X)=\mathbb{E}(f \overline{f}|X) = \mathbb{E}(\abs{f}^2|X) > 0.$$ Therefore, $T$ is not a weakly mixing extension of $T'$. \end{proof} \begin{remark} As with the $G_{\delta}$ part of Theorem \ref{Thm:CategoryTheorem}, Proposition \ref{Prop:GeneralNoPointMixing} and Corollary \ref{Cor:NoWeakMixingOnMixedVertical} hold when $(X,m)$ is replaced with any standard probability space. \end{remark} \section{Strongly Mixing Extensions} \label{Sec:StrongMixing} In this section we first extend the notion of strongly mixing transformations to extensions, just as the notions of ergodic and weakly mixing transformations were extended to extensions. Afterwards we will show that the set of strongly mixing extensions forms a set of first category in $\mathcal{G}_X$. \begin{defin} \label{Def:StronglyMixing} Let $(X,\nu), (Z,\mu)$ be probability spaces. We say that $T \in \mathcal{G}_X$ is a \textit{(strongly) mixing extension} of $T'$ or $T$ is \textit{(strongly) mixing relative to} $T'$ if for all $f,g \in L^2(Z|X),$ $$\lim_{n \to \infty} \norm{\mathbb{E}(T^nf \overline{g} |X) - (T')^n \mathbb{E}(f|X)\mathbb{E}(\overline{g}|X)}_{L^2(X)} = 0.$$ Let $\mathcal{S}_X \subset \mathcal{G}_X$ denote the set of strongly mixing extensions. \end{defin} Definition \ref{Def:StronglyMixing} yields some of the properties one would hope to have from the extension of the notion of strongly mixing transformations.
For example, $\mathcal{S}_X$ is in general not empty. Indeed, any direct product transformation where the second component is strongly mixing will be a strongly mixing extension. We also have that if $X$ is a single point, then the definition coincides with that of classical strongly mixing transformations. Further, it is clear that $\mathcal{S}_X \subset \mathcal{W}_X$. We once again return to the case where $(X,m)$ is the unit interval with the Lebesgue measure, and $(Z,\mu)$ is the unit square with the Lebesgue measure. In analogy with Rokhlin's result and its proof, we now show that $\mathcal{S}_X$ is a first category subset of $\mathcal{G}_X$. \begin{thm}[Strongly Mixing Extensions are of First Category] \label{Thm:FirstCategoryTheorem} $\mathcal{S}_X \subset \mathcal{G}_X$ is of first category. \end{thm} \begin{proof} For $k \in \mathbb{N},$ let $P_k := \{T \in \mathcal{G}_X | T^k = I_Z \}.$ Note in particular that if $T \in P_k$ then $(T')^k = I_X.$ For $n \in \mathbb{N},$ let $\hat{P}_n := \bigcup_{k > n} P_k.$ Note that the Weak Approximation Theorem for Extensions (Theorem \ref{Thm:WATE}) implies that $\hat{P}_n$ is dense in $\mathcal{G}_X$. Let $A:= [0,1] \times [0, 1/2]$ (the bottom half of $Z$). Note that $\mathbb{E}(\chi_A |X) = 1/2$ for all $x \in X$. We now define new sets, $$M_k := \left\{T \in \mathcal{G}_X | \norm{\mathbb{E}(T^k \chi_A \chi_A|X) - (T')^k \mathbb{E}(\chi_A|X) \mathbb{E}(\chi_A|X)}_{L^2(X)} \le \frac{1}{5} \right\}.$$ Using the same arguments as used in the proof of Theorem \ref{Thm:CategoryTheorem} for the sets $V(n,f,g,\epsilon),$ we see that $M_k$ is closed for all $k$. Now let $$M := \bigcup_{n=1}^{\infty} \bigcap_{k > n} M_k.$$ It is easy to see that $\mathcal{S}_X \subset M$. Thus it is sufficient to show that $M$ is of first category.
It is in turn sufficient to show that $\bigcap_{k > n} M_k$ is nowhere dense for all $n$; since $\bigcap_{k > n} M_k$ is closed for all $n$, it suffices to show that $\mathcal{G}_X \backslash \bigcap_{k > n} M_k$ is dense. Lastly, as $\mathcal{G}_X \backslash \bigcap_{k > n} M_k = \bigcup_{k > n} (\mathcal{G}_X \backslash M_k),$ it will suffice to show that $P_k \subset (\mathcal{G}_X \backslash M_k)$ for all $k$, as then $\hat{P}_n = \bigcup_{k > n} P_k \subset \bigcup_{k > n} (\mathcal{G}_X \backslash M_k)$ and $\hat{P}_n$ is dense. Now, if $T \in P_k,$ then $T^k = I_Z, (T')^k = I_X,$ so \begin{align*} &\quad\ \norm{\mathbb{E}(T^k \chi_A \chi_A|X) - (T')^k \mathbb{E}(\chi_A|X) \mathbb{E}(\chi_A|X)}_{L^2(X)} \\ &= \norm{\mathbb{E}(\chi_A|X) - \mathbb{E}(\chi_A|X)^2} = \norm{\frac{1}{2}-\frac{1}{4}} = \frac{1}{4} > \frac{1}{5}. \end{align*} Thus, $T \notin M_k$. \end{proof} \begin{cor} \label{Cor:StrongAndWeakMixingExtensionsNotEqual} $\mathcal{S}_X$ is a proper subset of $\mathcal{W}_X.$ \end{cor} \section{Further Questions} \label{Sec:Questions} To conclude this paper, we formulate some open questions. \begin{question} \label{Que:AtomicFactor} Let $(X,\nu)$ be any probability space, $(Y,\eta)$ be a non-atomic probability space and $(Z,\mu) = (X \times Y, \nu \times \eta).$ Is $\mathcal{W}_X$ a dense, $G_{\delta}$ subset of $\mathcal{G}_X$? \end{question} It would be sufficient for Question \ref{Que:AtomicFactor} to consider the case where $(X,\nu)$ is purely atomic. We cannot use the Conjugacy Lemma in this case because there are no antiperiodic transformations on a discrete set. \begin{question} \label{Que:FixedFactor} Let $(X,\nu)$ be a (potentially non-atomic) probability space, $(Y,\eta)$ be a non-atomic probability space and $(Z,\mu) = (X \times Y, \nu \times \eta).$ Let $R \in \mathcal{G}(X)$ be fixed.
Define \vspace{-5pt} $$\mathcal{G}_R := \{T \in \mathcal{G}_X : T \text{ extends } R \}$$ and \vspace{-5pt} $$\mathcal{W}_R := \{T \in \mathcal{G}_R : T \text{ is a weakly mixing extension of } R \}.$$ Is $\mathcal{W}_R$ a dense, $G_{\delta}$ subset of $\mathcal{G}_R$? Similarly let \vspace{-5pt} $$\mathcal{S}_R := \{T \in \mathcal{G}_R : T \text{ is a strongly mixing extension of } R \}.$$ Is $\mathcal{S}_R$ a first category subset of $\mathcal{G}_R?$ \end{question} The main difficulty as of now in answering Question \ref{Que:FixedFactor} for weakly mixing extensions is proving the density. The freedom of having a non-fixed factor allowed a generalization of Halmos' Conjugacy Lemma. One cannot conjugate in $\mathcal{G}_R$ unless $R$ is the identity on $X$, as $\mathcal{G}_R$ is not closed under inverses for any other invertible $R$. That is not even to mention the lack of dyadic permutations or other tools which we had at our disposal throughout this paper. Even strongly mixing extensions at first present some trouble, as the proof relied on periodic $T$, and if $R$ is not periodic, then neither is $T$. We note that in the case of compact group extensions of a fixed weakly mixing $R$, Robinson gave a positive answer to the question of genericity of weakly mixing extensions \cite{EARobertson}. Further, we see from the following proposition that Question \ref{Que:FixedFactor} cannot be derived from Theorems \ref{Thm:CategoryTheorem} and \ref{Thm:FirstCategoryTheorem}. Recall that $(X,m)$ is the unit interval with the Lebesgue measure. \begin{prop} For all $T' \in \mathcal{G}(X), \mathcal{G}_{T'}$ is a closed, nowhere dense subset of $\mathcal{G}_X.$ \end{prop} \begin{proof} Fix $T' \in \mathcal{G}(X).$ We first show that $\mathcal{G}_{T'}$ is closed.
Let $S \in \mathcal{G}_X \backslash \mathcal{G}_{T'}.$ As $S' \neq T',$ there exists $E \subset X$ such that $m(S'E \triangle T'E) > 0.$ Let $\epsilon := m(S'E \triangle T'E) /2.$ Consider $$N_{\epsilon}(S) := \{R \in \mathcal{G}_X : \mu(R \pi^{-1}E \triangle S \pi^{-1}E) < \epsilon \}.$$ For any $T \in \mathcal{G}_{T'}, \mu(T \pi^{-1}E \triangle S \pi^{-1}E) = m(T'E \triangle S'E) = 2 \epsilon > \epsilon,$ so $T \notin N_{\epsilon}(S),$ and thus $\mathcal{G}_{T'}$ is closed. Now, to show $\mathcal{G}_{T'}$ is nowhere dense, we show $\mathcal{G}_X \backslash \mathcal{G}_{T'}$ is dense. Fix $T \in \mathcal{G}_{T'}, \epsilon > 0$ and let $$N_{\epsilon}(T) := \{S \in \mathcal{G}_X : \mu(TD_i \triangle SD_i) < \epsilon, i = 1, \ldots, 2^{2N} \}$$ where the $D_i$ are all dyadic squares of some rank $N$. Fix $i$ and let $E \subset \pi D_i$ with $m(E) < \epsilon$. Let $E_1, E_2$ be disjoint sets whose union is $E$, with $m(E_1)=m(E_2).$ Select $R \in \mathcal{G}(X)$ with the following properties: for $x \in X \backslash E, Rx = x, RE_1=E_2, RE_2 = E_1.$ Define $\tilde{T} := T (R \times I)$. Note that for $j$ such that $\pi D_j \neq \pi D_i, (R \times I)D_j = D_j,$ so for such $j, \mu(\tilde{T}D_j \triangle TD_j) = 0.$ On the other hand, if $\pi D_j = \pi D_i,$ then $\mu(\tilde{T}D_j \triangle TD_j) < m(E) = \epsilon.$ Thus $\tilde{T} \in N_{\epsilon}(T),$ but $\tilde{T} \notin \mathcal{G}_{T'}.$ \end{proof} \begin{question} \label{Que:Rigidity} Can one find an extension analogue of Katok's result (see, for example, \cite{Nadkarni}) that rigid transformations form a residual set? \end{question} Note that it is not completely clear how one should define a rigid extension; the author currently has no notion of how it should be defined. \nocite{Glasner} \nocite{HalmosPaper} \nocite{Peterson} \nocite{Zhao} \bibliographystyle{plain}
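The relative Gram--Schmidt process used in the proof of Lemma \ref{Lem:RelWMSubsequence} lends itself to a numerical illustration. The sketch below is not part of the paper: it assumes a hypothetical discretization in which $X$ and $Y$ are finite grids, elements of $L^2(Z|X)$ are arrays indexed by $(x,y)$, and $\mathbb{E}(\cdot|X)$ is the average over each fiber.

```python
import numpy as np

def cond_exp(f):
    # E(f|X): average over the fiber (the last axis), broadcast back over y
    return f.mean(axis=-1, keepdims=True)

def relative_gram_schmidt(g, eps=1e-12):
    # g has shape (l, n_x, n_y); on every non-degenerate fiber the output
    # satisfies E(g_i conj(g_j)|X) = 0 for i != j and E(|g_j|^2|X) = 1
    out = []
    for i in range(g.shape[0]):
        h = g[i].astype(complex).copy()
        for hq in out:
            # subtract the fiberwise projection; hq is already normalized,
            # so the denominator E(|hq|^2|X) equals 1 on each fiber
            h -= cond_exp(h * np.conj(hq)) * hq
        norm2 = cond_exp(np.abs(h) ** 2).real
        # on fibers where h vanishes identically, substitute an arbitrary
        # nonzero choice, as in the proof (the coefficients are set to 0 there)
        dead = norm2[:, 0] < eps
        h[dead, :] = 1.0
        norm2 = cond_exp(np.abs(h) ** 2).real
        out.append(h / np.sqrt(norm2))
    return np.array(out)

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 5, 40))   # 3 functions, 5 fibers, 40 points per fiber
q = relative_gram_schmidt(g)
for i in range(3):
    for j in range(3):
        e = cond_exp(q[i] * np.conj(q[j])).real[:, 0]
        assert np.allclose(e, 1.0 if i == j else 0.0, atol=1e-8)
```

On generic random input no fiber degenerates, so the ``dead fiber'' branch is exercised only when some $h'_i$ vanishes identically on a fiber, exactly as in the construction of the $h_j$.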
\section{Introduction} The gravitational instability of collisionless matter in a cosmological framework is usually studied within the Newtonian approximation, which basically consists in neglecting terms higher than the first in metric perturbations around a matter--dominated Friedmann--Robertson--Walker (FRW) background, while keeping non--linear density and velocity perturbations. This approximation is usually thought to produce accurate results over a wide spectrum of cosmological scales, namely on scales much larger than the Schwarzschild radius of collapsing bodies and much smaller than the Hubble horizon scale, where the peculiar gravitational potential $\varphi_g$, divided by the square of the speed of light $c^2$ to obtain a dimensionless quantity, remains much less than unity, while the peculiar matter flow never becomes relativistic. To be more specific, the Newtonian approximation consists in perturbing only the time--time component of the FRW metric tensor by an amount $2\varphi_g/c^2$, where $\varphi_g$ is related to the matter density fluctuation $\delta$ via the cosmological Poisson equation, $\nabla_x^2 \varphi_g ({\vec x},\tau) = 4 \pi G a^2(\tau) \varrho_b(\tau) \delta({\vec x}, \tau)$, where $\varrho_b$ is the background matter density, $a(\tau)$ the appropriate FRW scale--factor and $\tau$ the conformal time. The fluid dynamics is then usually studied in Eulerian coordinates by accounting for mass conservation and using the cosmological version of the Euler equation for a self--gravitating pressureless fluid to close the system.
To motivate the use of this ``hybrid approximation'', which deals with perturbations of the matter and the geometry at a different perturbative order, one can either formally expand the correct equations of General Relativity (GR) in inverse powers of the speed of light or simply notice that the peculiar gravitational potential is strongly suppressed with respect to the matter perturbation by the square of the ratio of the perturbation scale $\lambda$ to the Hubble radius $r_H= c H^{-1}$ ($H$ being the Hubble constant): $\varphi_g/c^2 \sim \delta ~(\lambda / r_H)^2$. Such a simplified approach, however, already fails in producing an accurate description of the trajectories of relativistic particles, such as photons. Neglecting the relativistic perturbation of the space--space components of the metric, which in the so--called longitudinal gauge is just $-2\varphi_g/c^2$, would imply a mistake by a factor of two in well--known effects such as the Sachs--Wolfe, Rees--Sciama and gravitational lensing effects. The level of accuracy not only depends on the peculiar velocity of the matter producing the spacetime curvature, but also on the nature of the particles carrying the signal to the observer. Put this way, it may appear that the only relativistic correction required to the usual Eulerian Newtonian picture is that of writing the metric tensor in the ``weak field'' form (e.g. Peebles 1993) \begin{equation} ds^2 = a^2(\tau) \biggl[ - \biggl(1 + {2\varphi_g \over c^2} \biggr) ~c^2 d\tau^2 + \biggl(1 - {2\varphi_g \over c^2} \biggr) ~d l^2 \biggr] \;. \end{equation} As we are going to show, this is not the whole story. It is well--known in fact that the gravitational instability of aspherical perturbations (which is the generic case) leads to the formation of very anisotropic structures whenever pressure gradients can be neglected (e.g. Shandarin et al. 1995 and references therein).
Matter first flows in almost two--dimensional structures called pancakes, which then merge and fragment to eventually form one--dimensional filaments and point--like clumps. During the process of pancake formation the matter density, the shear and the tidal field formally become infinite along evanescent two--dimensional configurations corresponding to caustics; after this event a number of highly non--linear phenomena, such as vorticity generation by multi--streaming, merging, tidal disruption and fragmentation, occur. Most of the pathologies of the caustic formation process, such as the local divergence of the density, shear and tide, and the formation of multi--stream regions, are just artifacts of extrapolating the pressureless fluid approximation beyond the point at which pressure gradients and viscosity become important. In spite of these limitations, however, it is generally believed that the general anisotropy of the collapse configurations, either pancakes or filaments, is a generic feature of cosmological structures originated through gravitational instability, which would survive even in the presence of a collisional component. This simple observation shows the inadequacy of the standard Newtonian paradigm. According to it the lowest scale at which the approximation can be reasonably applied is set by the amplitude of the gravitational potential and is given by the Schwarzschild radius of the collapsing body, which is negligibly small for any relevant cosmological mass scale. What is completely missing in this criterion is the role of the shear, which causes the presence of non--scalar contributions to the metric perturbations. A non--vanishing shear component is in fact an unavoidable feature of realistic cosmological perturbations and affects the dynamics in (at least) three ways, all related to non--local effects, i.e. to the interaction of a given fluid element with the environment.
First, at the lowest perturbative order the shear is related to the tidal field generated by the surrounding material by a simple proportionality law. Second, it is related to a {\em dynamical} tidal induction: the modification of the environment forces the fluid element to modify its shape and density. In Newtonian gravity, this is an {\em action--at--a--distance} effect, which starts to manifest itself in second--order perturbation theory as an inverse--Laplacian contribution to the velocity potential (e.g. Catelan et al. 1995). Third, and most important here, a non--vanishing shear field leads to the generation of a traceless and divergenceless metric perturbation which can be understood as gravitational radiation emitted by non--linear perturbations. This contribution to the metric perturbations is statistically small on cosmologically interesting scales, but it becomes relevant whenever anisotropic (with the only exception of exactly one--dimensional) collapse takes place. In the Lagrangian picture such an effect already arises at the post--Newtonian (PN) level. Note that the two latter effects are only detected if one allows for non--scalar perturbations in physical quantities. Contrary to a widespread belief, in fact, the choice of scalar perturbations in the initial conditions is not enough to prevent tensor modes from arising beyond the linear regime in a GR treatment. Truly tensor perturbations are dynamically generated by the gravitational instability of initially scalar perturbations, independently of the initial presence of gravitational waves. This point is very clearly displayed in the GR Lagrangian second--order perturbative approach. The pioneering work in this field is by Tomita (1967), who calculated the gravitational waves $\pi^\alpha_{~\beta}$ emitted by non--linearly evolving scalar perturbations in an Einstein--de Sitter background, in the synchronous gauge.
Matarrese, Pantano \& Saez (1994a,b) obtained an equivalent result but with a different formalism in comoving and synchronous coordinates. Recently a number of different approaches to relativistic effects in the non--linear dynamics of cosmological perturbations have been proposed. Matarrese, Pantano \& Saez (1993) proposed an algorithm based on neglecting the magnetic part of the Weyl tensor in the dynamics, obtaining strictly local fluid--flow evolution equations, i.e. the so--called ``silent universe''. This formalism, however, cannot be applied to cosmological structure formation {\em inside} the horizon, where the non--local tidal induction cannot be neglected, i.e. the magnetic Weyl tensor $H^\alpha_{~\beta}$ is non--zero, with the exception of highly specific initial configurations (Matarrese et al. 1994a; Bertschinger \& Jain 1994; Bruni, Matarrese \& Pantano 1995a; the dynamical role of $H^\alpha_{~\beta}$ was also discussed by Bertschinger \& Hamilton 1994 and Kofman \& Pogosyan 1995). Rather, it is probably related to the non--linear dynamics of an irrotational fluid {\em outside} the (local) horizon (Matarrese et al. 1994a,b). One possible application (Bruni, Matarrese \& Pantano 1995b) is in fact connected to the {\em Cosmic No--hair Theorem}. Matarrese \& Terranova (1995) followed the more ``conservative'' approach of expanding the Einstein and continuity equations in inverse powers of the speed of light, which then defines a Newtonian limit and, at the next order, post--Newtonian corrections. This approach differs from previous ones because of the gauge choice: synchronous and comoving coordinates are used, because of which the approach can be called a Lagrangian one. Various related approaches have been proposed in the literature. A PN approximation has been followed by Futamase (1991) to describe the dynamics of a clumpy universe. Tomita (1991) used non--comoving coordinates in a PN approach to cosmological perturbations.
Shibata \& Asada (1995) recently developed a PN approach to cosmological perturbations, also using non--comoving coordinates. Kasai (1995) analyzed the non--linear dynamics of dust in the synchronous and comoving gauge. \section{Method} We consider a pressureless fluid with vanishing vorticity. Using synchronous and comoving coordinates, the line--element reads \begin{equation} ds^2 = a^2(\tau)\big[ - c^2 d\tau^2 + \gamma_{\alpha\beta}({\vec q}, \tau) dq^\alpha d q^\beta \big] \;, \end{equation} where we have factored out the scale--factor of the isotropic FRW solutions. By subtracting the isotropic Hubble--flow, we introduce a {\em peculiar velocity--gradient tensor} $\vartheta^\alpha_{~\beta} = {1 \over 2} \gamma^{\alpha\gamma} {\gamma_{\gamma\beta}}'$, where primes denote differentiation with respect to $\tau$. Thanks to the introduction of this tensor we can write the Einstein equations in a cosmologically convenient form. The energy constraint reads \begin{equation} \vartheta^2 - \vartheta^\mu_{~\nu} \vartheta^\nu_{~\mu} + 4 {a' \over a} \vartheta + c^2 \bigl( {\cal R} - 6 \kappa \bigr) = 16 \pi G a^2 \varrho_b \delta \;, \end{equation} where ${\cal R}^\alpha_{~\beta}(\gamma)$ is the conformal Ricci curvature of the three--space with metric $\gamma_{\alpha\beta}$; for the background FRW solution $\gamma^{FRW}_{\alpha\beta} = (1 + {\kappa\over 4} q^2)^{-2} \delta_{\alpha\beta}$, one has ${\cal R}^\alpha_{~\beta}(\gamma^{FRW}) = 2 \kappa \delta^\alpha_{~\beta}$. We also introduced the density contrast $\delta \equiv (\varrho - \varrho_b) /\varrho_b$. The momentum constraint reads \begin{equation} \vartheta^\alpha_{~\beta||\alpha} = \vartheta_{,\beta} \;. \end{equation} The double vertical bars denote covariant derivatives in the three--space with metric $\gamma_{\alpha\beta}$.
Finally, after replacing the density from the energy constraint and subtracting the background contribution, the extrinsic curvature evolution equation becomes \begin{equation} {\vartheta^\alpha_{~\beta}}' + 2 {a' \over a} \vartheta^\alpha_{~\beta} + \vartheta \vartheta^\alpha_{~\beta} + {1 \over 4} \biggl( \vartheta^\mu_{~\nu} \vartheta^\nu_{~\mu} - \vartheta^2 \biggr) \delta^\alpha_{~\beta} + {c^2 \over 4} \biggl[ 4 {\cal R}^\alpha_{~\beta} - \bigl( {\cal R} + 2 \kappa \bigr) \delta^\alpha_{~\beta} \biggr] = 0 \;. \end{equation} The Raychaudhuri equation for the evolution of the {\em peculiar volume--expansion scalar} $\vartheta$ reads \begin{equation} \vartheta' + {a' \over a} \vartheta + \vartheta^\mu_{~\nu} \vartheta^\nu_{~\mu} + 4 \pi G a^2 \varrho_b \delta =0 \;. \end{equation} The main advantage of this formalism is that there is only one dimensionless (tensor) variable in the equations, namely the spatial metric tensor $\gamma_{\alpha\beta}$. The only remaining variable is the density contrast, which can be written in the form \begin{equation} \delta({\vec q}, \tau) = (1 + \delta_0({\vec q})) \bigl[\gamma({\vec q}, \tau)/ \gamma_0 ({\vec q}) \bigr]^{-1/2} - 1 \;, \end{equation} where $\gamma \equiv {\rm det} ~\gamma_{\alpha\beta}$. \section{Results and conclusions} The method is then based on a $1/c^2$ expansion of the equations above, which first of all leads to a new, purely Lagrangian, derivation of the Newtonian approximation (Matarrese \& Terranova 1995). One of the most important results in this respect is that we obtained a simple expression for the Lagrangian metric; exploiting the vanishing of the spatial curvature in the Newtonian limit we were able to write it in terms of the displacement vector ${\vec S}({\vec q}, \tau) = {\vec x}({\vec q},\tau) - {\vec q}$, from the Lagrangian coordinate ${\vec q}$ to the Eulerian one ${\vec x}$ of each fluid element (e.g.
Buchert 1995 and references therein), namely \begin{equation} d s^2 = a^2(\tau) \biggl[ - c^2 d \tau^2 + \delta_{AB} \biggl(\delta^A_{~\alpha} + {\partial S^A({\vec q}, \tau) \over \partial q^\alpha} \biggr) \biggl(\delta^B_{~\beta} + {\partial S^B({\vec q}, \tau) \over \partial q^\beta} \biggr) dq^\alpha dq^\beta \biggr] \;. \end{equation} A straightforward application of this formula is related to the Zel'dovich approximation. The spatial metric is that of Euclidean space in time--dependent curvilinear coordinates, consistently with the intuitive notion of a Lagrangian picture in the Newtonian limit. Read this way, the complicated equations of Newtonian gravity in the Lagrangian picture become much easier: one just has to deal with the spatial metric tensor and its derivatives. The displacement vector is then completely fixed by solving the Raychaudhuri equation together with the momentum constraint in the $c \to \infty$ limit. Next, we can consider the post--Newtonian corrections to the metric and write equations for them. In particular, we can derive a simple and general equation for the gravitational--wave modes $\pi_{\alpha\beta}$ emitted by non--linear structures described through Newtonian gravity. The result can be expressed both in Lagrangian and Eulerian coordinates. In the latter case one has \begin{equation} \nabla^2_x \pi_{AB} = \Psi^{(E)}_{v,AB} + \delta_{AB} \nabla_x^2 \Psi_v^{(E)} + 2 \biggl( \bar \vartheta \bar \vartheta_{AB} - \bar \vartheta_{AC} \bar \vartheta^C_{~~B} \biggr) \;, \end{equation} with capital Latin labels $A,B, \dots = 1,2,3$ indicating Eulerian coordinates and $\nabla_x^2 \Psi_v^{(E)} = - \frac{1}{2} ( \bar \vartheta^2 - \bar \vartheta^A_{~B} \bar \vartheta^B_{~A} )$, which generally allows a simple derivation of $\pi_{AB}$, given the (gradients of the) velocity potential, $\bar \vartheta_{AB} = \partial^2 \Phi_v/\partial x^A \partial x^B$, by a convolution in Fourier space.
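Since the two Eulerian equations determine $\Psi_v^{(E)}$ and $\pi_{AB}$ from the velocity potential by inverting Laplacians, they can be solved spectrally on a periodic grid. The following sketch is a hypothetical numerical illustration, not part of the original treatment: the grid size, units and the toy potential are arbitrary assumptions.

```python
import numpy as np

def inv_laplacian(f_hat, k2):
    # spectral inverse of nabla^2, with the zero mode fixed to zero
    k2_safe = np.where(k2 > 0, k2, 1.0)
    return np.where(k2 > 0, -f_hat / k2_safe, 0.0)

N = 16
k1 = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
kvec = np.array([kx, ky, kz])
k2 = kx**2 + ky**2 + kz**2

rng = np.random.default_rng(1)
phi_hat = np.fft.fftn(1e-3 * rng.normal(size=(N, N, N)))  # toy velocity potential
phi_hat[0, 0, 0] = 0.0

# theta_AB = second derivatives of Phi_v, computed spectrally
theta = np.real(np.fft.ifftn(-kvec[:, None] * kvec[None, :] * phi_hat,
                             axes=(2, 3, 4)))
tr = np.trace(theta)                               # theta^A_A
th2 = np.einsum("ab...,ba...->...", theta, theta)  # theta^A_B theta^B_A

# nabla^2 Psi_v = -(1/2)(theta^2 - theta^A_B theta^B_A)
src_psi = -0.5 * (tr**2 - th2)
psi_hat = inv_laplacian(np.fft.fftn(src_psi), k2)
psi_ab = np.real(np.fft.ifftn(-kvec[:, None] * kvec[None, :] * psi_hat,
                              axes=(2, 3, 4)))

# nabla^2 pi_AB = Psi_{v,AB} + delta_AB nabla^2 Psi_v
#                 + 2 (theta theta_AB - theta_AC theta^C_B)
delta = np.eye(3)[:, :, None, None, None]
src_pi = (psi_ab + delta * src_psi
          + 2 * (tr * theta - np.einsum("ac...,cb...->ab...", theta, theta)))
pi = np.real(np.fft.ifftn(inv_laplacian(np.fft.fftn(src_pi, axes=(2, 3, 4)), k2),
                          axes=(2, 3, 4)))

# consistency check: the Laplacian of pi recovers the (mean-free) source
lap_pi = np.real(np.fft.ifftn(-k2 * np.fft.fftn(pi, axes=(2, 3, 4)),
                              axes=(2, 3, 4)))
src0 = src_pi - src_pi.mean(axis=(2, 3, 4), keepdims=True)
assert np.allclose(lap_pi, src0, rtol=1e-8, atol=1e-10)
```

The multiplication by $-k_A k_B$ and the division by $-k^2$ are exactly the convolution in Fourier space mentioned above; only the zero mode, which the Poisson problem leaves undetermined, is set by hand.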
These formulae allow one to calculate the amplitude of the gravitational--wave modes in terms of the velocity potential, which in turn can be deduced from observational data on radial peculiar velocities of galaxies. In the standard case, where the cosmological perturbations form a homogeneous and isotropic random field, we can obtain a heuristic perturbative estimate of their amplitude in terms of the {\em rms} density contrast and of the ratio of the typical perturbation scale $\lambda$ to the Hubble radius $r_H=c H^{-1}$. One simply has $\pi_{rms} / c^2 \sim \delta_{rms}^2 (\lambda / r_H )^2$. This effect gives rise to a stochastic background of gravitational waves which gets a non--negligible amplitude in the so--called {\em extremely--low--frequency} band (e.g. Thorne 1995), around $10^{-14}$ -- $10^{-15}$ Hz. We can roughly estimate that the present--day closure density of this gravitational--wave background is \begin{equation} \Omega_{gw}(\lambda) \sim \delta_{rms}^4 \biggl( {\lambda \over r_H} \biggr)^2 \;. \end{equation} In standard scenarios for the formation of structure in the universe, the typical density contrast on scales $1$ -- $10$ Mpc implies that $\Omega_{gw}$ is about $10^{-5}$ -- $10^{-6}$. We might speculate that such a background would give rise to secondary CMB anisotropies on intermediate angular scales: a sort of {\em tensor Rees--Sciama effect}. This issue will be considered in more detail elsewhere. The previous PN formula also applies to isolated structures, where the density contrast can be much higher than the {\em rms} value, and shear anisotropies play a fundamental role. A calculation of $\pi_{\alpha\beta}$ in the case of a homogeneous ellipsoid showed that the PN tensor modes become dominant, compared to the Newtonian contributions to the metric tensor, during the late stages of collapse, and possibly even in a shell--crossing singularity.
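As a numerical sanity check of the quoted range, the two estimates above can be evaluated directly. The sketch below is ours; the input numbers (Hubble radius $\sim 3000\,h^{-1}$ Mpc, $\delta_{rms}\sim 1$ on $\sim 10$ Mpc scales) are illustrative standard values, not figures taken from the text.

```python
# Order-of-magnitude evaluation of
#   pi_rms / c^2 ~ delta_rms^2 (lambda / r_H)^2
#   Omega_gw     ~ delta_rms^4 (lambda / r_H)^2
# Input values below are illustrative assumptions.

r_hubble = 3000.0   # Hubble radius c/H_0 in Mpc/h (assumed)
lam = 10.0          # typical perturbation scale in Mpc/h
delta_rms = 1.0     # rms density contrast on that scale (assumed)

ratio_sq = (lam / r_hubble) ** 2
pi_rms_over_c2 = delta_rms ** 2 * ratio_sq
omega_gw = delta_rms ** 4 * ratio_sq

print(pi_rms_over_c2)  # ~ 1e-5
print(omega_gw)        # ~ 1e-5, inside the quoted 1e-5 -- 1e-6 range
```

With these inputs both estimates land near $10^{-5}$, consistent with the range stated for $1$--$10$ Mpc scales.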
It is important to stress that this effect generally contradicts the standard paradigm that the smallest scale for the applicability of the Newtonian approximation is set by the Schwarzschild radius of the object. Such a critical scale is indeed only relevant for nearly spherical collapse, whereas this effect becomes important if the collapsing structure strongly deviates from sphericity.
\section{Introduction} In this paper we study the stopped process of a stochastic process in Riesz spaces. The notion of a stopped process is fundamental to the study of stochastic processes, since it is often used to extend results that are valid for bounded processes to hold also for unbounded processes. In~\cite{G2} we defined stopped processes for a class of submartingales and we expressed the need for a definition applicable to more general processes. In this paper we introduce a much more general definition of stopped processes using the Daniell integral. The definition applies to every Daniell integrable process with reference to a certain spectral measure; this class of processes includes the important class of right-continuous processes. \medskip Considering the classical case, let $(\Omega,\mf{F},P)$ be a probability space, and let $(X_t=X(t,\omega))_{t\in J,\omega\in\Omega}$ be a stochastic process in $L^1(\Omega,\mf{F},P)$ adapted to the filtration $(\mf{F}_t)$ of sub-$\sigma$-algebras of $\mf{F}.$ If the real-valued, non-negative random variable $\S(\omega)$ is a stopping time for the filtration, then the stopped process is the process (see~\cite[Proposition 2.18]{KS}) $$ (X_{t\n \S})_{t\in J}=(X(t\n \S(\omega),\omega))_{t\in J,\, \omega\in\Omega}.
$$ The paths of this process are equal to $X_t(\omega)$ up to time $\S(\omega),$ and from then on they remain constant with value $X_\S(\omega)=X(\S(\omega),\omega).$ \medskip The difficulty encountered in the abstract case is to define what we shall call the {\em stopping element} $X_\S$ needed in the definition of the stopped process $X_{t\wedge\S}.$ We note that $X_\S$ can be interpreted as an element of a vector-valued functional calculus on $\mf{E}$ induced by the vector function $t\mapsto X_t,$ in the same way as, in the case of a real-valued function $t\mapsto f(t),$ the element $f(X),$ $X\in\mf{E},$ is an element of the functional calculus on $\mf{E}$ induced by $f.$ The latter element can be obtained as a limit of simple elements of the form $\sum_{i=1}^n f(t_i)(E_{i+1}-E_i),$ with $E_i$ elements of the spectral system of $X$ with reference to a weak order unit $E$ (by taking $f(t)=t,$ the reader will recognize this as Freudenthal's spectral theorem). The element $f(X)$ can then be interpreted as an integral $$ \int_\R f(t)\,d\mu_E(t), $$ with $\mu_E$ the spectral measure that is the extension to the Borel algebra in $\R$ of the (vector-) measure defined on left-open right-closed intervals $(a,b]$ by $\mu_E[(a,b]]=E_b-E_a$ (see~\cite[Sections IV.10, XI.5-7]{Vu}). Our approach will be to define a similar functional calculus for vector-valued functions, and we do this by employing the vector-valued Daniell integral as defined in~\cite{G12}. \medskip Having a more general definition of $X_\S$ implies that a new proof of Doob's optional sampling theorem is needed. The reason is that a special case of Doob's theorem (for martingales) is implicitly contained in the definition of $X_\S$ as given in \cite{G2}. Such a proof also shows that the definition given in \cite{G2} is a valid one for the case considered there.
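The classical construction just recalled is easy to simulate. The sketch below is our illustration, not taken from the paper: it stops a symmetric random walk at the first exit time from a band and checks the two defining properties of the stopped paths.

```python
import numpy as np

rng = np.random.default_rng(0)

def stopped_path(steps, barrier):
    """Simulate one symmetric random walk and stop it at the first
    time S at which |X_t| >= barrier (S = horizon if never hit)."""
    increments = rng.choice([-1, 1], size=steps)
    path = np.concatenate([[0], np.cumsum(increments)])
    hits = np.nonzero(np.abs(path) >= barrier)[0]
    s = int(hits[0]) if hits.size else steps
    stopped = np.where(np.arange(steps + 1) <= s, path, path[s])
    return path, stopped, s

path, stopped, s = stopped_path(steps=200, barrier=10)
# Up to time S the two processes agree ...
assert np.all(stopped[: s + 1] == path[: s + 1])
# ... and from S on the stopped path stays frozen at X_S.
assert np.all(stopped[s:] == path[s])
```

Note that $S$ is a stopping time here because whether $S\le t$ is decided by the path up to time $t$; it is this feature that the abstract theory encodes through spectral systems of projections.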
\medskip A novelty in this paper is that we do not use order convergence in the definition of a continuous stochastic process, but unbounded order (uo-) convergence. This gives us a better model of the classical case, where convergence of a stochastic process is defined to mean that for almost every $\omega$ the paths $X_t(\omega)$ of the process are continuous functions of $t.$ It is known that uo-convergence in function spaces is the correct notion to describe almost everywhere convergence. The fact from integration theory that a sequence which converges pointwise almost everywhere and is uniformly integrable also converges in $L^1$ is generalized as well. This generalization is needed in the proof of Doob's optional sampling theorem. \medskip We finally remark that we use, following \cite{G12, Pr}, the Daniell integral for vector-valued functions in our work. It turns out that Daniell's integral fits perfectly into the Riesz space setting in which we describe stochastic processes. In fact, Daniell's original 1918 paper~\cite{PJD} was the first paper using what we today call Riesz space theory. \section{Preliminaries} Let $\mf{E}$ be a Dedekind complete, perfect Riesz space with weak order unit $E.$ We assume $\mf{E}$ to be separated by its order continuous dual $\mf{E}^\sim_{00}.$ For the theory of Riesz spaces (vector lattices) we refer the reader to the following standard texts~\cite{AB2,LZ,MN, Sch, Z1, Z2}. For results on topological vector lattices the standard references are~\cite{AB1,F}. We denote the {\it universal completion} of $\mathfrak E,$ which is an $f$-algebra that contains $\mathfrak E$ as an order dense ideal, by $\mathfrak{E}^u$ (the fact that it is an ideal follows from~\cite[Lemma 7.23.15 and Definition 7.23.19]{AB1}). Its multiplication is an extension of the multiplication defined on the principal ideal $\mf{E}_E,$ and $E$ is the algebraic unit and a weak order unit for $\mathfrak E^u$ (see \cite{Z1}).
The set of order bounded band preserving operators, called orthomorphisms, is denoted by $\operatorname{Orth}(\mf{E}).$ We refer to \cite{Donner, G9} for the definition and properties of the \emph{sup-completion} $\mathfrak{E}^s$ of a Dedekind complete Riesz space $\mathfrak{E}.$ It is the unique Dedekind complete ordered cone that contains $\mathfrak E$ as a sub-cone of its group of invertible elements, and its most important property is that it has a largest element. Combined with Dedekind completeness, this implies that every subset of $\mf{E}^s$ has a supremum in $\mf{E}^s.$ Also, for every $C\in\mf{E}^s,$ we have $C=\sup\{X\in\mf{E}: X\le C\}$ and $\mf{E}$ is a solid subset of $\mf{E}^s.$ \medskip A {\em conditional expectation} $\mb{F}$ defined on $\mf{E}$ is a strictly positive order continuous linear projection with range a Dedekind complete Riesz subspace $\mf{F}$ of $\mf{E}.$ It has the property that it maps weak order units onto weak order units. It may be assumed, as we will do, that $\mb{F}E=E$ for the weak order unit $E.$ The space $\mf{E}$ is called {$\mb{F}$-universally complete} (respectively, {$\mb{F}$-universally complete in $\mf{E}^u$}) if, whenever $X_\alpha\uparrow$ in $\mf{E}$ and $\mb{F}(X_\alpha)$ is bounded in $\mf{E}$ (respectively in $\mf{E}^u$), then $X_\alpha\uparrow X$ for some $X\in\mf{E}.$ If $\mf{E}$ is $\F$-universally complete in $\mf{E}^u,$ then it is $\F$-universally complete. \medskip \textit{We shall henceforth tacitly assume that $\mf{E}$ is $\mb{F}$-universally complete in $\mf{E}^u.$} \medskip For an order closed subspace $\mf{F}$ of $\mf{E},$ we shall denote the set of all order projections in $\mf{E}$ by $\mf{P}$ and its subset of all order projections mapping $\mf{F}$ into itself by $\mf{P}_{\mf{F}}.$ This set can be identified with the set of all order projections of the vector lattice $\mf{F}$ (see~\cite{G1}). \medskip B.A.
Watson~\cite{W} proved that if $\mf{G}$ is an order closed Riesz subspace of $\mf{E}$ with $\mf{F}\subset\mf{G},$ then there exists a unique conditional expectation $\mb{F}_\mf{G}$ on $\mf{E}$ with range $\mf{G}$ and $\mb{F}\mb{F}_\mf{G}=\mb{F}_\mf{G}\mb{F}=\mb{F}$ (see~\cite{G2,W}). We shall also use the fact (see~\cite[Theorem 3.3 and Proposition 3.4]{G2}) that $Z=\mb{F}_\mf{G}(X)$ if and only if $$ \F(\P Z)=\F(\P X) \mbox{ for every projection $\P\in\mf{P}_{\mf{G}}$}. $$ The conditional expectation $\mb{F}$ may be extended to the sup-completion in the following way: For every $X\in\mf{E}^s,$ define $\mathbb F X$ by $\sup_{\alpha}\mathbb F X_\alpha\in\mathfrak{E}^s$ for any upward directed net $X_\alpha\uparrow X$, $X_\alpha\in\mathfrak{E}.$ It is well defined (see~\cite{G8}). We define $\dom^+ \F:=\{0\le X\in\mf{E}^s:\ \F(X)\in\mf{E}^u\}.$ Then $\dom^+\F\subset\mf{E}^u$ (see~\cite[Proposition 2.1]{G6}) and we define $\dom\F=\dom^+\F-\dom^+\F.$ Since we are assuming that $\mf{E}$ is $\F$-universally complete in $\mf{E}^u,$ we have $\dom\,\F=\mf{E}.$ If $XY\in\dom\,\F$ (with the multiplication taken in the $f$-algebra $\mf{E}^u$), where $Y\in \mathfrak E$ and $X\in \mathfrak F=\mc{R}(\F),$ we have that $\mathbb F(XY)= X \mathbb F(Y)$. This fundamental fact is referred to as the \emph{averaging property} of $\mb F$ (see~\cite{G1}). Let $\Phi$ be the set of all $\phi\in\mf{E}^\sim_{00}$ satisfying $|\phi|(E)=1$ and extend $|\phi|$ to $\mf{E}^s$ by continuity. Define $\mathscr{P}$ to be the set of all Riesz seminorms defined on $\mf{E}$ by $p_{\phi}(X):=|\phi|(\F(|X|)),$ where $\phi\in\Phi.$ We define the space $\mathscr{L}^1:=(\mf{E},\s(\mathscr{P}))$ and we have that $\mc{L}^1:=\{X\in\mf{E}^u: p_\phi(X)<\infty\mbox{ for all } \phi\in\Phi\},$ equipped with the locally solid topology $\s(\mathscr{L}^1,\mathscr{P})$ (for the proof see~\cite{G9}).
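For orientation, it may help to record (as our own gloss, not part of the paper's development) what these seminorms become in the classical case $\mf{E}=L^1(\Omega,\mf{F},P)$ with $E=\mathbf{1}$ and $\F$ the expectation operator:

```latex
% Classical specialization (illustrative): F projects onto the constants,
% and every phi in Phi satisfies |phi|(1) = 1, so
\[
  \F X \;=\; \Bigl(\int_\Omega X\,dP\Bigr)\mathbf{1},
  \qquad
  p_\phi(X) \;=\; |\phi|\bigl(\F(|X|)\bigr)
           \;=\; \int_\Omega |X|\,dP
  \quad\text{for every } \phi\in\Phi,
\]
% the last equality because F(|X|) is a multiple of 1 and |phi|(1) = 1.
```

In this special case all the seminorms in $\mathscr{P}$ therefore coincide with the $L^1(P)$-norm, and $\mc{L}^1$ is the usual space of integrable random variables.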
We next define the space $\mc{L}^2$ to consist of all $X\in\mc{L}^1$ satisfying $|X|^2\in\mc{L}^1,$ where the product is taken in the $f$-algebra $\mf{E}^u.$ Thus, $\mc{L}^2:=\{X\in\mf{E}^u: |\phi|(\F(|X|^2))<\infty\mbox{ for all } \phi\in\Phi\}.$ For $X\in\mc{L}^2$ we define the Riesz seminorm $q_{\phi}(X):=\bigl(|\phi|(\F(|X|^2))\bigr)^{1/2},$ we denote the set of all these seminorms by $\mathscr{Q},$ and we equip $\mc{L}^2$ with the weak topology $\s(\mathscr{Q}).$ The spaces $\mathscr{L}^1$ and $\mathscr{L}^2$ are topologically complete (see~\cite{G7} and \cite{G9} and note that this may not be true without the assumption that $\mf{E}$ is $\F$-universally complete in $\mf{E}^u$). \medskip A {\em filtration} on $\mf{E}$ is a set $(\F_t)_{t\in J}$ of conditional expectations satisfying $\F_s=\F_s\F_t$ for all $s<t.$ We denote the range of $\F_t$ by $\mf{F}_t.$ A {\em stochastic process} in $\mf{E}$ is a function $t\mapsto X_t\in\mf{E},$ for $t\in J,$ with $J\subset\R^+$ an interval. The stochastic process $(X_t)_{t\in J}$ is {\em adapted to the filtration} if $X_t\in\mf{F}_t$ for all $t\in J.$ We shall write $\mf{P}_t$ to denote the set of all order projections that map $\mf{F}_t$ into itself and we recall that $\F_t\P=\P\F_t$ holds for all $\P\in\mf{P}_t.$ The projections in $\mf{P}_t$ are the {\em events} up to time $t$ and $\mf{P}_t$ is a complete Boolean algebra. Let $(\F_t)_{t\in J}$ be a filtration on $\mf{E}$. We recall that $$ \mf{F}_{t+}:=\bigcap_{s>t}\mf{F}_s, $$ and the filtration is called right continuous if $\mf{F}_{t+}=\mf{F}_t$ for all $t\in J.$ Since we are assuming that $\mf{E}$ is $\F$-universally complete in $\mf{E}^u$, there exists a unique conditional expectation $\F_{t+}$ from $\mf{E}$ onto $\mf{F}_{t+}$ satisfying $\F\F_{t+}=\F_{t+}\F=\F$ (see~\cite[Proposition 3.8]{G2}).
The set of order projections in the space $\mf{F}_{t+}$ will be denoted by $\mf{P}_{t+}.$ If $(X_t)$ is a stochastic process adapted to $({\mathbb F}_t, \mathfrak F_t)$, we call $(X_t, \mathfrak F_t,\F_t)$ a \emph{supermartingale} (respectively \emph{submartingale}) if ${\mathbb F}_t(X_s)\leq X_t$ (respectively ${\mathbb F}_t(X_s)\geq X_t$) for all $t\leq s$. If the process is both a sub- and a supermartingale, it is called a \emph{martingale}. A stochastic process $(X_t)$ is said to be uo-convergent to $X\in\mf{E}$ as $t$ tends to $s$ if $$ \operatorname{o}-\lim_{t\to s}|X_t-X|\wedge Z=0 $$ for every positive $Z\in\mf{E}.$ We call a stochastic process uo-continuous at a point $s$ if $$ \operatorname{uo}-\lim_{t\to s}X_t=X_s. $$ In function spaces uo-convergence corresponds to pointwise almost everywhere convergence. Hence, the use of uo-convergence to define continuity in the abstract case yields a direct generalization of the notion of path-wise continuity. The definitions of right-uo-continuity and left-uo-continuity at a point $s$ use uo-convergence from the right or from the left respectively. \medskip The band generated by $(tE-X)^+$ in $\mf{E}$ is denoted by $\mf{B}_{(tE>X)}$ and the projection of $\mf{E}$ onto this band by $\P_{(tE>X)}.$ The component of $E$ in $\mf{B}_{(tE>X)}$ is denoted by $E_t^\ell,$ i.e., $E_t^\ell=\P_{(tE>X)}E.$ The system $(E_t^\ell)_{t\in J}$ is an increasing left-continuous system, called the {\em left-continuous spectral system of $X.$} Also, if $\overline{E}^r_t$ is the component of $E$ in the band generated by $(X-tE)^+$ and $E^r_t:=E-\overline{E}^r_t,$ the system $(E^r_t)$ is an increasing right-continuous system of components of $E,$ called the {\em right-continuous spectral system} of $X$ (see~\cite{LZ,G2}). The next definition was given in~\cite{G2}.
\begin{definition} \rm A {\em stopping time} for the filtration $(\F_t,\mf{F}_t)_{t\in J}$ is an orthomorphism $\mb{S}\in\orth(\mf{E})$ such that its right-continuous spectral system $(\mb{S}^r_t)$ of projections satisfies $\mb{S}^r_t\in\mf{P}_t.$ It is called an {\em optional time} if the condition holds for its left-continuous system $(\S_t^\ell)_{t\in J}.$ \end{definition} We recall the fact that a stopping time is also an optional time and that the two concepts coincide for right-continuous filtrations. We shall use the following notation: $C^\ell_t:=\S_t^\ell E$ and $C^r_t:=\S^r_tE.$ The processes $C^\ell_\S:=(C^\ell_t)_{t\in J}$ and $C^r_\S:=(C^r_t)_{t\in J}$ are processes of components of $E.$ We have the following reformulation of the definition: \begin{proposition} The orthomorphism $\S$ is a stopping time for the filtration $(\F_t,\mf{F}_t)$ if and only if the stochastic process $C^r_\S$ is adapted to the filtration. Similarly, $\S$ is an optional time if and only if the stochastic process $C^\ell_\S$ is adapted to the filtration. \end{proposition} \medskip We recall (see~\cite{G2}) that the set of events (order projections) determined prior to the stopping time $\S$ is defined to be the family of projections $$ \mf{P}_\S:=\{\P\in\mf{P}\,:\, \P\S^r_t\F_t=\F_t\P\S^r_t\mbox{ for all $t$}\}.
$$ $\mf{P}_\S$ is a complete Boolean sub-algebra of $\mf{P}$ and $\P\in\mf{P}_\S$ if and only if $\P\S^r_t\in\mf{P}_t$ for every $t\in J.$ The set $$ \mf{C}_\S:=\{\P E\,:\, \P\in\mf{P}_\S\}, $$ is a Boolean algebra of components of $E$ and we denote by $\mf{F}_\S$ the order closed Riesz subspace of $\mf{E}$ generated by $\mf{C}_\S.$ By~\cite[Proposition 5.4]{G2}, there exists a unique conditional expectation $\F_\S$ that maps $\mf{E}$ onto $\mf{F}_\S$ with the property that $\F=\F\F_\S=\F_\S\F.$ Similarly, if $\S$ is an optional time for the filtration $(\F_t,\mf{F}_t),$ we find in~\cite{G2} that the Boolean algebra $\mf{P}_{\S+}$ of events determined immediately after $\S$ is the Boolean algebra of projections given by \[ \mf{P}_{\S+}:=\{\P\in\mf{P}\, :\, \P\S_t^r\F_{t+} = \F_{t+}\P\S_t^r\mbox{ for all }t\}. \] This is again a complete Boolean algebra of projections and $$ \mf{C}_{\S+}:=\{\P E\,:\, \P\in\mf{P}_{\S+}\} $$ is a complete Boolean algebra of components of $E.$ We define the space $\mf{F}_{\S+}$ to be the Dedekind complete Riesz space generated by $\mf{C}_{\S+}.$ Since the space contains $\mf{F}_0,$ there exists a unique conditional expectation, denoted by $\F_{\S+},$ that maps $\mf{E}$ onto $\mf{F}_{\S+}$ with the property that $\F=\F\F_{\S+}=\F_{\S+}\F.$ Our aim is now to define the stopping element $X_\S$ for a stopping time $\S,$ and having done that, the process $(X_{t\wedge\S})$ will be called the {\em stopped process.} \section{Definition of the stopping element $X_\S$} Let $J=[a,b]$ and consider the optional time $\S$ for the filtration $(\F_t,\mf{F}_t)_{t\in J}$ defined on $\mf{E}$ with spectral interval contained in $J.$ Its left-continuous spectral system of band projections $(\S^\ell_t)_{t\in J}$ is then adapted to the filtration, meaning that $\S^\ell_t$ is a band projection in the Dedekind complete space $\mf{F}_t.$ Therefore, the component $C^\ell_t$ of $E$ is an element of $\mf{F}_t.$ We define a vector measure $\mu_\S$ on the intervals $[s,t)$
by defining $$ \mu_\S[t_{i-1},t_i):=C^\ell_{t_i}-C^\ell_{t_{i-1}}. $$ We refer the reader to~\cite[Section XI.5]{Vu} for a proof that this defines a $\sigma$-additive measure on the algebra of all finite unions of disjoint sub-intervals of the form $[s,t)$ in $J$ (and can be extended to a $\sigma$-additive measure on the $\sigma$-algebra of Borel subsets of $J$). \medskip Next, let $\pi=\{a=t_0<t_1<\cdots<t_n=b\}$ be a partition of $J.$ We define $\L$ to be the Riesz space of all right-continuous simple processes of the form $$ X^\pi_t:=\sum_{t_i\in\pi}X_{t_i}\chi_{[t_{i-1}, t_i)},\ \ X_{t_i}\in\mf{F}_{t_i}, $$ where $\pi$ varies over all partitions of $J$ and $\chi_S$ is the indicator function of the set $S.$ We note that the process $(X^\pi_t)_{t\in J}$ is not in general adapted to the filtration $(\F_t,\mf{F}_t),$ because $X^\pi_{t_{i-1}}=X_{t_i}$ and $X_{t_i}$ need not be an element of $\mf{F}_{t_{i-1}}.$ We have that $X^\pi_{t}\in\mf{F}_{t_i}$ for all $t_{i-1}\le t<t_i.$ \medskip The next proposition holds. \begin{proposition} With pointwise ordering, $\L$ is a Riesz subspace of the Dedekind complete Riesz space $\mf{E}^J$ of all $\mf{E}$-valued functions defined on the interval $J.$ \end{proposition} \medskip Let $$ I_\S(X_t^\pi):=\sum_{t_i\in\pi}X_{t_i}(C^\ell_{t_i}-C^\ell_{t_{i-1}}) =\sum_{t_i\in\pi}X_{t_i}\mu_\S[t_{i-1},t_i), $$ where the product of $X\in\mf{E}$ and the component $C=\P E,$ $\P\in\mf{P},$ is defined to be $\P X.$ We can therefore also write $$ I_\S(X_t^\pi):=\sum_{t_i\in\pi}(\S^\ell_{t_i}-\S^\ell_{t_{i-1}})X_{t_i}. $$ \begin{proposition} The operator $I_\S:\L\to\mf{E}$ has the following properties. \begin{enumerate} \item[\rm(1)] $I_\S$ is positive and linear; \item[\rm(2)] $I_\S$ is $\sigma$-order continuous, i.e., if $X_{n,t}\in \L$ and $X_{n,t}\downarrow_n 0$ for each $t\in J,$ then $I_\S(X_{n,t})\downarrow_n 0;$ \item[\rm(3)] $I_\S$ is a lattice homomorphism.
\end{enumerate} The operator $I_\S$ is therefore a positive vector-valued Daniell integral defined on $\L.$ \end{proposition} {\em Proof.} Property (1) needs no proof. To prove (2), let $X_{n,t}\in\L$ satisfy $X_{n,t}\downarrow 0$ for every $t\in [a,b].$ Let $$ X_{1,t}=\sum_{i=1}^N\xi_{i}\chi_{[t_{i-1},t_i)}(t),\qquad \xi_i\in\mf{F}_{t_i}, $$ and let $\epsilon>0$ be arbitrary. For each $i,$ $1\le i\le N,$ let $$ \mf{B}_{1,i}:=\mf{B}_{(\xi_{i}>\epsilon E)} =\mf{B}_{(X_{1,t}\chi_{[t_{i-1},t_{i})}(t)>\epsilon E)}, $$ i.e., the band generated by $(\xi_{i}-\epsilon E)^+$ in $\mf{E}.$ We also define $$ \mf{B}_{n,i}:=\mf{B}_{(X_{n,t}\chi_{[t_{i-1},t_{i})}(t)>\epsilon E)} \mbox{ for }n\ge 1, $$ and we denote the band projection onto $\mf{B}_{n,i}$ by $\P_{n,i}$ for each $i$ and $n.$ Then, since $X_{n,t}\downarrow$ for all $t,$ we have that $$ \mf{B}_{n+1,i}\subset \mf{B}_{n,i}\subset \mf{B}_{1,i} \mbox{ for }1\le i\le N. $$ For $n\ge 1,$ let $B_{n,i}\subset J$ be defined by $$ B_{n,i}:=\{t\in[t_{i-1},t_i)\,:\, \P_{n,i}X_{n,t}>0\}. $$ The definition of the simple function $X_{n,t}$ implies that each $B_{n,i}$ is a finite union of left-closed right-open intervals, and note also that by the definition of $\P_{n,i},$ we have for every $t\in B_{n,i},$ $\P_{n,i}X_{n,t}>\epsilon E.$ Since, for every fixed $t\in J$ we have that $X_{n,t}\downarrow 0,$ it follows that $B_{n,i}\downarrow \emptyset.$ The vector measure $\mu_\S$ is a $\sigma$-additive measure and it follows, since $\mu_\S(J)=C^\ell_b-C^\ell_a\in\mf{E},$ that $\mu_\S(B_{n,i})\downarrow 0.$ Then, \begin{align*} I_\S(X_{n,t}\chi_{[t_{i-1},t_{i})}(t)) &=I_\S[(\P_{n,i}X_{n,t}+\P^d_{n,i}X_{n,t})(\chi_{B_{n,i}}(t)+ \chi_{B^c_{n,i}}(t))]\\ &\le (\P_{n,i}\xi_{i}+\epsilon E)\mu_\S(B_{n,i})+\epsilon(\P_i-\P_{i-1})E\\ &\le (\xi_{i}+\epsilon E)\mu_\S(B_{n,i})+\epsilon (\P_i-\P_{i-1})E.
\end{align*} It therefore follows, for each $\epsilon>0$ and $i,$ $1\le i\le N,$ that $$ \operatorname{o}-\lim_{n\to\infty}I_\S(X_{n,t}\chi_{[t_{i-1},t_i)}(t))\le \epsilon(\P_i-\P_{i-1})E. $$ Summing over $i,$ we get $$ \inf_nI_\S(X_{n,t})\le \epsilon(\P_b-\P_{a})E. $$ This holds for every $\epsilon>0$ and so $I_\S(X_{n,t})\downarrow 0.$ We now prove (3). Let $X=X_t^\pi$ and $Y=Y_t^\pi$ be two elements of $\L$ written with the same partition $\pi$ of $J.$ Then, with $\Delta\S^\ell_{t_i}:=(\S^\ell_{t_i}-\S^\ell_{t_{i-1}})$ \begin{multline*} I_\S(X\vee Y)=\sum_{i=1}^n \Delta\S^\ell_{t_i}(X_{t_i}\vee Y_{t_i}) =\sum_{i=1}^n \Delta\S^\ell_{t_i}X_{t_i}\vee \Delta\S^\ell_{t_i}Y_{t_i}\\ =\bigvee_{i=1}^n\Delta\S^\ell_{t_i}X_{t_i}\vee \Delta\S^\ell_{t_i}Y_{t_i} =\bigvee_{i=1}^n\Delta\S^\ell_{t_i}X_{t_i}\vee \bigvee_{i=1}^n\Delta\S^\ell_{t_i}Y_{t_i} =I_\S(X)\vee I_\S(Y). \end{multline*} Thus $I_\S$ is a Riesz homomorphism.\qed \medskip Applying the Daniell extension procedure to the primitive positive integral $I_\S,$ we obtain an integral defined on the Riesz space $\mc{L}_\S$ of all Daniell $\S$-summable vector-valued functions that has the special property that it is a Riesz homomorphism. \begin{theorem} An adapted left-continuous process $(X_t)$ that is bounded by an $\S$-summable vector-valued function $X$ is $\S$-summable. In particular, if $|X_t|\le ME,$ $M\ge 0,$ then $X=(X_t)$ is $\S$-summable. \end{theorem} {\em Proof.} Let $\pi_n=\{a=t^{(n)}_0<t^{(n)}_1<\ldots<t^{(n)}_{2^n}=b\}$ be a dyadic partition of $[a,b].$ Define the element $X_n$ by \begin{equation} X_n(t):=\sum_{i=1}^{2^n} X_{t^{(n)}_{i-1}}\chi_{[t^{(n)}_{i-1},t^{(n)}_i)}(t).
\end{equation} Then $X_n(t)$ belongs to $\L$ and we claim that $X_n(t)$ converges to $X_t$ at every point $t\in[a,b].$ Fix an element $t_0\in[a,b].$ Then, for each $n$ we have that $t_0\in [t^{(n)}_{i-1},t^{(n)}_i)\in\pi_n$ for a unique $i,$ $1\le i\le 2^n,$ and $X_n(t_0)=X_{t^{(n)}_{i-1}}.$ If, at some stage, $t_0$ is the left endpoint of an interval $[t^{(n')}_{i-1},t^{(n')}_i),$ then, for all finer partitions, it will remain the left endpoint of some interval in that partition and so $X_n(t_0)=X_{t_0}$ for all $n\ge n'.$ We may therefore assume that $t_0>t^{(n)}_{i-1}$ for all $n$ (here, by abusing the notation, $t^{(n)}_{i-1}$ will always denote the left endpoint of the unique interval of $\pi_n$ to which $t_0$ belongs; $i$ will therefore also depend on $n$). Since $t^{(n)}_i-t^{(n)}_{i-1}<(b-a)2^{-n},$ we have that $t^{(n)}_{i-1}\uparrow t_0$ as $n$ tends to infinity, and by the left continuity of $(X_t)$ we have that $X_n(t_0)=X_{t^{(n)}_{i-1}}$ converges to $X_{t_0}$ in order as $n\to\infty.$ Since $(X_t)$ is bounded by an $\S$-summable function $X,$ the Lebesgue dominated convergence theorem for the Daniell integral implies that $(X_t)$ is summable and that, with convergence in order, \begin{equation*} \lim_{n\to\infty}I_\S (X_n(t))=I_\S (X_t).\tag*{$\Box$} \end{equation*} \begin{definition}\label{X_S} For $X\in\mc{L}_\S$ we define $$ X_\S:=I_\S(X_t). $$ \end{definition} For the proof of the next result we need the following fact about unbounded order convergence. \begin{proposition}\label{proposition 3.4} Let $\mf{E}$ be a Dedekind complete Riesz space with weak order unit $E.$ Then the following are equivalent. \begin{itemize} \item[\rm(1)] The sequence $(X_n)\subset\mf{E}$ is uo-convergent to $X\in\mf{E}.$ \item[\rm(2)] $(|X_n-X|\wedge kE)$ is order convergent to $0$ in $\mf{E}$ for every $k\in\N.$ \item[\rm(3)] The sequence $(X_n)$ is order convergent to $X$ in $\mf{E}^u.$ \end{itemize} \end{proposition} {\em Proof.} The implication (1)$\implies$(2) is clear.
(2)$\implies$(3): For every $k\in\N,$ there exists a sequence $(V_n^{(k)})$ in $\mf{E}$ such that $V_n^{(k)}\downarrow_n 0$ and $$ |X-X_n|\wedge kE\le V^{(k)}_n. $$ Let $$ \mf{B}^{(k)}:=\bigcap_{n\in\N}\mf{B}_{(kE>|X-X_n|)} $$ and let $\P_k$ be the projection onto $\mf{B}^{(k)}.$ We note that $\mf{B}^{(k)}$ is an increasing sequence of bands and so also is the sequence of projections $\P_k.$ We also have that $$ \P_k(|X-X_n|)=\P_k(|X-X_n|\wedge kE)\le\P_k V_n^{(k)}\mbox{ for all }n. $$ We now put $\Q_1:=\P_1,\ \Q_2:=(\P_2-\P_1),\ \ldots,\ \Q_n:=(\P_n-\P_{n-1}),\ldots.$ Then $(\Q_n)$ is a sequence of disjoint projections. We define $$ V_n:=\sup_{k\in\N}\Q_kV_n^{(k)}\mbox{ for all }n. $$ This supremum exists in $\mf{E}^u$ for each $n\in \N.$ Now, since $E$ is a weak order unit, we have for every $n\in\N$ that $$ |X-X_n|\wedge kE\uparrow |X-X_n|, $$ and so $$ |X-X_n|=\sup_{k\in\N}\Q_k(|X-X_n|)\le\sup_{k\in\N} \Q_kV^{(k)}_n=V_n\downarrow 0\mbox{ in }\mf{E}^u. $$ Hence, $(X_n)$ is order convergent to $X$ in $\mf{E}^u.$ (3)$\implies$(1): By assumption there exists a sequence $(Z_n)$ in $\mf{E}^u$ such that $|X-X_n|\le Z_n\downarrow 0.$ Consider an arbitrary positive $U\in\mf{E}.$ Then $$ |X-X_n|\wedge U\le Z_n\wedge U\in\mf{E}, $$ since $\mf{E},$ being Dedekind complete, is an ideal in $\mf{E}^u$ (see our remark in section 2). Moreover, since $Z_n\downarrow 0$ in $\mf{E}^u,$ it follows that $Z_n\wedge U\downarrow 0$ in $\mf{E}$ by the order denseness of $\mf{E}$ in $\mf{E}^u.$ Therefore (1) holds.\qed \begin{proposition} Let $(X^n_t)$ be a sequence in $\mc{L}_\S$ that is uo-convergent to $X_t\in\mc{L}_\S$ at each point $t\in J.$ Then $(X^n_\S)$ is uo-convergent to $X_\S.$ \end{proposition} {\em Proof.} The constant vector-valued functions $t\mapsto kE$ are in $\L$ and therefore Daniell integrable.
This shows that the sequence $|X^n_t-X_t|\n kE,$ which is order convergent to $0$ at each point $t,$ is pointwise bounded by the integrable function $t\mapsto kE.$ Therefore Lebesgue's dominated convergence theorem for the Daniell integral implies that $I_\S(|X^n_t-X_t|\wedge kE)$ is order convergent to $0.$ But, since $I_\S$ is a Riesz homomorphism, $$ I_\S(|X^n_t-X_t|\wedge kE)=|I_\S (X^n)-I_\S(X)|\wedge I_\S(kE)= |I_\S (X^n)-I_\S(X)|\wedge kE, $$ which shows that $X^n_\S=I_\S(X^n_t)\overset{uo}{\to}I_\S(X_t)=X_\S.$\qed \section{Uniform Integrability} In this section we generalize the notion of uniform integrability. There are several ways in which one can do this, due to the different modes of convergence we have. It seems that convergence in $\mc{L}^1$ is the right notion to use in our case. The role of the integral is played by a conditional expectation $\F$ that is defined on the Dedekind complete Riesz space $\mf{E}.$ Our assumptions on $\mf{E}$ are as we stated them in section~2. We recall that the Riesz seminorm $p_\phi$ is defined as $$ p_\phi(X):=|\phi|(\F(|X|)),\ \ \phi\in\Phi $$ and the topology of $\mc{L}^1$ is the locally solid topology $\sigma(\mc{L}^1,\mc{P}).$ \begin{definition}\label{UniformInt}\rm The sequence $(X_n)$ in $\mf{E}$ is called \textit{$\mc{L}^1$-uniformly integrable} whenever we have that, for every $p_\phi\in\mc{P},$ \begin{equation}\label{eqUniformInt} p_\phi(\P_{(|X_n|\ge \lambda E)}|X_n|)\to 0 \mbox{ as $0\le\lambda\uparrow\infty$ uniformly in $n.$} \end{equation} This means that for every $\epsilon>0$ and for every $p_\phi\in\mc{P}$ there exists some $\lambda_0$ (depending on $\epsilon$ and $p_\phi$) such that for all $\lambda\ge\lambda_0,$ we have that $$ p_\phi(\P_{(|X_n|\ge \lambda E)}|X_n|) <\epsilon \mbox{ for all $n\in\N.$} $$ A stronger notion, which can be called \textit{order-uniform integrability}, is to have $$ \sup_{n\in\N}\F(\P_{(|X_n|\ge \lambda E)}|X_n|)\downarrow 0 \mbox{ as $\lambda\uparrow\infty$}.
$$ \end{definition} If $(X_n)$ is order-uniformly integrable, we have for every $n$ that $$ p_\phi(\P_{(|X_n|\ge \lambda E)}|X_n|)\le p_\phi(\sup_{n\in\N}\F(\P_{(|X_n|\ge \lambda E)}|X_n|))\downarrow 0. $$ It follows that $p_\phi(\P_{(|X_n|\ge \lambda E)}|X_n|)\to 0$ uniformly in $n$ as $\lambda\uparrow\infty.$ Thus, order-uniform integrability of $(X_n)$ implies $\mc{L}^1$-uniform integrability of $(X_n).$ \medskip We note that for each fixed $n$ we have that $\P_{(|X_n|>\lambda E)}\downarrow 0$ as $\lambda\uparrow\infty.$ Therefore, for each fixed $n,$ $\P_{(|X_n|>\lambda E)}|X_n|\downarrow 0$ as $\lambda\uparrow\infty$ and since $\F$ is order continuous, also $\F(\P_{(|X_n|>\lambda E)}|X_n|)\downarrow 0$ as $\lambda\uparrow\infty$ for each fixed $n.$ Therefore, if $(X_n)$ has only a finite number of non-zero elements, it is clear (see~\cite[Theorem 16.1]{LZ}) that $(X_n)$ is order-uniformly integrable. \medskip If $(X_n)$ is a bounded sequence in $\mc{L}^2,$ i.e., if for every $q\in \mc{Q}$ there exists a constant $M_\phi$ such that $q_\phi(X_n)\le M_\phi$ for all $n\in \N,$ then, by the Cauchy--Schwarz inequality, $$ p_\phi(\P_{(|X_n|>\lambda E)}|X_n|)\le q_\phi(\P_{(|X_n|>\lambda E)}E)q_\phi(X_n)\le q_\phi(\P_{(|X_n|>\lambda E)}E)M_\phi. $$ Therefore, if $q_\phi(\P_{(|X_n|>\lambda E)}E)\to 0$ uniformly in $n$ as $\lambda\uparrow\infty,$ then $(X_n)$ is $\mc{L}^1$-uniformly integrable. The next proposition can be compared to~\cite[Theorem I.2.1]{DU}. \begin{proposition}\label{4P1} Let $0\le X\in\mf{E}$ and let $(\P_{t})_{0\le t<\infty}$ be projections in $\mf{P}.$ Then, given any $p_\phi\in\mc{P}$ and $\epsilon>0,$ there exists a $\delta>0$ such that $p_\phi(\P_tE)<\delta$ implies that $p_\phi(\P_{t} X)<\epsilon.$ Thus, if $p_\phi(\P_{t} E)$ converges to $0,$ then $p_\phi(\P_{t} X)$ converges to $0.$ \end{proposition} {\em Proof.} Assume that the proposition is false.
Then there exists an element $\phi\in\Phi$ and some $\epsilon>0$ such that, for every $k,$ there exists a projection $\P_{t_k}$ satisfying $p_\phi(\P_{t_k}E)<2^{-k}$ and $p_\phi(\P_{t_k}X)>\epsilon.$ Define $$ \Q_k:=\P_{t_k}\vee \P_{t_{k+1}}\vee\cdots. $$ Then $\Q_k\downarrow$ and \begin{equation}\label{eq4.2.1} p_\phi(\Q_kX)\ge p_\phi(\P_{t_k}X)>\epsilon. \end{equation} But, $$ p_\phi(\Q_kE)\le \sum_{j=k}^\infty p_\phi(\P_{t_j} E)\le 2^{1-k} \downarrow 0, \mbox{ as $k\to\infty$}. $$ Since $\F$ is strictly positive, $p_\phi=|\phi|\F$ is strictly positive on the carrier band $C_{\phi}$ of $\phi.$ Therefore, $\Q_kE \downarrow 0$ on $C_\phi.$ But then $\Q_kX\downarrow 0$ on $C_\phi$ and by the order continuity of $p_\phi,$ it follows that $p_\phi(\Q_kX)\downarrow 0.$ This contradicts (\ref{eq4.2.1}). \qed \bigskip The next theorem is also a generalization of a well-known fact about uniform integrability. \begin{theorem}\label{TH43} The sequence $(X_n)$ in $\mf{E}$ is uniformly integrable if and only if it satisfies the following conditions: \begin{enumerate} \item[{\rm(1)}] $(X_n)$ is a bounded set in $\mc{L}^1;$ \item[{\rm(2)}] For every $p_\phi\in\mc{P},$\ \ $p_\phi(\P|X_n|)\to 0$ uniformly in $n$ as $p_\phi(\P E)\to 0,$ i.e., given $\epsilon>0$ and $p_\phi\in\mc{P},$ there exists a $\delta>0$ such that, if $p_\phi(\P E)\le\delta,$ then $p_\phi(\P|X_n|)<\epsilon$ for all $n\in\N.$ \end{enumerate} \end{theorem} {\em Proof.} Suppose that $(X_n)$ is a bounded set in $\mc{L}^1$ and that it is uniformly continuous, i.e., that $(X_n)$ satisfies condition (2).
By Chebyshev's inequality, we have $$ \F(\P_{(|X_n|\ge tE)}E)\le \frac{1}{t}\F(|X_n|), $$ which implies that $$ p_\phi(\P_{(|X_n|\ge tE)}E)\le\frac{1}{t}p_\phi(X_n)\le \frac{M_\phi}{t}, $$ where $M_\phi\ge 0$ is a bound for the sequence $(p_\phi(X_n)),$ which exists by the boundedness of $(X_n)$ in $\mc{L}^1.$ It follows that $p_\phi(\P_{(|X_n|\ge tE)}E)\to 0$ uniformly in $n$ as $t\to\infty.$ It follows by (2) that $$ p_\phi(\P|X_n|)\to 0 \mbox{ uniformly in $n$} $$ and so $(X_n)$ is uniformly integrable. \medskip Conversely, if $(X_n)$ is uniformly integrable, we have for every $p_\phi\in\mc{P}$ that \begin{align}\label{equation4.3} p_\phi(\P|X_n|)&=p_\phi(\P\P_{(|X_n|\ge tE)}|X_n|) +p_\phi(\P\P_{(|X_n|< tE)}|X_n|) \nonumber\\ &\le p_\phi(\P_{(|X_n|\ge tE)}|X_n|) + tp_\phi(\P E). \end{align} By the uniform integrability, we can choose, for given $\epsilon>0,$ a number $t_0$ such that the first term is less than $\epsilon/2$ for all $n.$ We then have, for $p_\phi(\P E)<\epsilon/(2t_0),$ that $p_\phi(\P|X_n|)<\epsilon$ for all $n,$ thus proving that condition (2) holds. \medskip\noindent Taking the projection $\P$ in (\ref{equation4.3}) equal to the identity $I,$ it follows that for large $t$ (depending on $p_\phi$) we have $$ p_\phi(|X_n|)\le \epsilon+tp_\phi(E)=:M_\phi<\infty. $$ Since this holds for arbitrary $p_\phi\in\mc{P},$ the set $(X_n)$ is bounded in $\mc{L}^1.$\qed \bigskip \begin{corollary}\label{L:Xn+X} If $(X_n)$ and $(Y_n)$ are uniformly integrable sequences, then $(X_n+Y_n)$ is also uniformly integrable. In particular, if $X\in\mf{E}$ then $(X_n+X)$ is uniformly integrable. \end{corollary} {\em Proof.} It is clear that if $(X_n)$ and $(Y_n)$ are bounded sequences in $\mc{L}^1$ then $(X_n+Y_n)$ is also a bounded sequence in $\mc{L}^1.$ Also, since they are uniformly integrable, they are uniformly continuous, i.e., condition (2) in Theorem~\ref{TH43} above holds for both of them.
But then, for every $p_\phi\in\mc{P},$ we have that if $p_\phi(\P E)\to 0,$ then $$ p_\phi(\P|X_n+Y_n|)\le p_\phi(\P|X_n|)+p_\phi(\P |Y_n|)\to 0 $$ uniformly in $n.$ By Theorem~\ref{TH43}, this implies that $(X_n+Y_n)$ is uniformly integrable. \qed \bigskip Below we denote unbounded order convergence of a sequence $(X_n)$ to an element $X$ by $X_n\overset{uo}{\to}X.$ \begin{lemma}\label{L:zero} If $X_n\overset{uo}{\to}0$ and $(X_n)$ is uniformly integrable, then $X_n\to 0$ in $\mc{L}^1.$ \end{lemma} {\em Proof.} Suppose that $X_n\overset{uo}{\to}0$ and that $(X_n)$ is uniformly integrable. Let $\epsilon>0$ and $p_\phi\in\mc{P}$ be given. Then it follows from the uniform integrability that \begin{align*} p_\phi(X_n)&=p_\phi(\P_{(|X_n|\geq\lambda E)}X_n) +p_\phi(\P_{(|X_n|<\lambda E)}X_n)\\ &\le p_\phi(\P_{(|X_n|\geq\lambda E)}X_n) +p_\phi(|X_n|\wedge \lambda E)\\ &<\epsilon/2 + p_\phi(|X_n|\wedge \lambda_0 E), \end{align*} for some $\lambda_0>0$ and for all $n\in\N.$ Since $X_n\overset{uo}{\to}0$ by assumption, and since $p_\phi$ is order continuous, there exists some $N\in\N$ such that for all $n\ge N$ the last term above is less than $\epsilon/2.$ Thus, $p_\phi(X_n)\to 0$ and this holds for every $p_\phi\in\mc{P}.$ \qed \begin{theorem}\label{T:FXn->FX} If $X_n\overset{uo}{\to}X$ and $(X_n)$ is uniformly integrable, then $X_n\to X$ in $\mc{L}^1.$ \end{theorem} {\em Proof.} Suppose that $X_n\overset{uo}{\to}X$ and that $(X_n)$ is uniformly integrable. For each $n\in\mb{N}$ define $C_n:=X_n-X$. Then $C_n\overset{uo}{\to}0$, and by Corollary~\ref{L:Xn+X}, we know that $(C_n)$ is uniformly integrable.
Thus, by Lemma~\ref{L:zero}, it is true that $C_n\to 0$ in $\mc{L}^1.$ But this is equivalent to $X_n\to X$ in $\mc{L}^1.$ \qed \bigskip \noindent \textbf{Conclusion} If the sequences $(X_{\S_n})$ and $(X_{\T_n})$ are uniformly integrable and uo-converge in $\mf{E}$ to $X_\S$ and $X_\T$, respectively, then it is easy to see that for any band projection $\P$ we have that $\P(X_{\S_n})\overset{uo}{\to}\P(X_\S)$ and $\P(X_{\T_n})\overset{uo}{\to}\P(X_\T)$. It is also easy to see that $(\P(X_{\S_n}))$ and $(\P(X_{\T_n}))$ are also uniformly integrable. Therefore, by Theorem~\ref{T:FXn->FX} we have $\P X_{\S_n}\to \P X_\S$ and $\P X_{\T_n}\to\P X_\T$ in $\mc{L}^1.$ This fact will be used in the proof of Doob's optional sampling theorem below. \begin{definition}\rm (see~\cite[Problem 3.11]{KS}) Let $(\mf{F}_n)$ be a decreasing sequence of Dedekind complete Riesz subspaces of $\mf{E},$ i.e., $$ \mf{F}_{n+1}\subseteq \mf{F}_{n}\subseteq \mf{E}, $$ with $\mf{F}_n$ the range of a conditional expectation $\F_n:\mf{E}\to\mf{F}_n$ satisfying $\F_n\F_m=\F_m\F_n=\F_m$ if $m>n.$ The process $(X_n)$ with $X_n\in\mf{F}_n$ and $\F_{n+1}(X_n)\ge X_{n+1}$ is called a {\em backward submartingale}. \end{definition} We note that $\mf{F}_\infty:=\bigcap_n\mf{F}_n$ is a Dedekind complete Riesz space that is contained in each of the spaces $\mf{F}_n$ and so there exists a conditional expectation $\F_\infty:\mf{E}\to\mf{F}_\infty$ with the property that for each $n$ we have $\F_\infty\F_n=\F_n\F_\infty=\F_\infty.$ Furthermore, applying $\F_\infty$ to both sides of the inequality in the definition, we find that for all $n,$ $\F_\infty(X_n)\ge\F_\infty(X_{n+1}),$ i.e., the sequence $(\F_\infty(X_n))$ is a decreasing sequence. It is also easy to show by induction that for all $n$ one has $\F_n(X_1)\ge X_n.$ \begin{example}\rm Let $(X_t,\F_t)_{t\in J}$ be a submartingale.
With $J=[a,b],$ we have for any sequence of real numbers $t_n\downarrow a,$ that $(X_{t_n},\F_{t_n})_{n\in \N}$ is a backward submartingale. In this case, $\F_\infty=\F_a=\F.$ \end{example} Since we work in the setting where we do not have an integral, but a fixed conditional expectation $\F,$ we shall assume for all backward submartingales considered that $\F_\infty=\F.$ \medskip \begin{proposition}\label{4P2} Let $(X_n)$ be a backward submartingale with $\F_\infty=\F.$ If the sequence $(\F(X_n))$ is bounded below, i.e., if $$ Y=\inf_{n\in\N}\F(X_n)\mbox{ exists in }\mf{E}, $$ then the sequence $(X_n)$ is uniformly integrable. \end{proposition} {\em Proof.} By Jensen's inequality, $(X_n^+, \mf{F}_n)$ is also a backward submartingale. Hence, for $\lambda>0,$ we find by the Chebyshev inequality that for each $n,$ $$ \lambda\F(\P_{(|X_n|>\lambda E)}E)\le\F(|X_n|) =-\F(X_n)+2\F(X^+_n)\le -Y+2\F(X_1^+). $$ It follows that \begin{equation}\label{4E1} \lim_{\lambda\to\infty}p_\phi(\P_{(|X_n|>\lambda E)}E)=0 \mbox{ uniformly in $n,$} \end{equation} and therefore also \begin{equation}\label{4E2} \lim_{\lambda\to\infty}p_\phi(\P_{(X_n^+>\lambda E)}E)=0 \mbox{ uniformly in $n.$} \end{equation} Using the backward submartingale property of $(X_n^+),$ we have \begin{multline}\label{4E3} \F(\P_{(X_n^+>\lambda E)}X_n^+) \le\F(\P_{(X_n^+>\lambda E)}\F_n X_1^+)\\ =\F\F_n(\P_{(X_n^+>\lambda E)}X_1^+) =\F(\P_{(X_n^+>\lambda E)}X_1^+). \end{multline} Hence, we have for any $p_\phi\in\mc{P},$ that \begin{equation}\label{E47} p_\phi(\P_{(X_n^+>\lambda E)}X_n^+)\le p_\phi(\P_{(X_n^+>\lambda E)}X_1^+).
\end{equation} We now apply Proposition \ref{4P1} to find for every $\epsilon>0,$ a $\delta>0$ such that, if $p_\phi(\P_{(X_n^+>\lambda E)}E)<\delta,$ then $p_\phi(\P_{(X_n^+>\lambda E)}X_1^+)<\epsilon.$ From (\ref{4E2}), there exists some $\lambda_0$ such that, for $\lambda>\lambda_0,$ $p_\phi(\P_{(X_n^+>\lambda E)}E)<\delta$ for all $n\in\N.$ It then follows from (\ref{E47}) that for all $\lambda>\lambda_0,$ we have $p_\phi(\P_{(X_n^+>\lambda E)}X_n^+)<\epsilon$ for all $n\in \N.$ This shows that the backward submartingale $(X_n^+)$ is uniformly integrable. \medskip We next show that the sequence $(X_n^-)$ is also uniformly integrable. Note that $\P_{(X_n^->\lambda E)}=\P_{(X_n<-\lambda E)}$ and that for $m<n,$ we have $X_n\le\F_nX_m.$ Now, \begin{multline}\label{4E4} 0\ge\F(\P_{(X_n<-\lambda E)}X_n)=\F(X_n)-\F(\P_{(X_n\ge-\lambda E)}X_n)\\ \ge\F(X_n)-\F(\P_{(X_n\ge-\lambda E)}\F_nX_m)\\ \ge\F(X_n)-\F(\P_{(X_n\ge-\lambda E)}X_m)\\ =\F(X_n)-\F(X_m)+\F(\P_{(X_n<-\lambda E)}X_m). \end{multline} Since the sequence $\F(X_n)\downarrow_n Y,$ $(X_n)$ is convergent in $\mc{L}^1$ and therefore a Cauchy sequence. For a given $\epsilon>0,$ we can choose $m=m(\epsilon)$ such that for all $n>m,$ we have $$ p_\phi(X_m-X_n)<\epsilon/2.
$$ Also, by Proposition~\ref{4P1}, there exists a $\delta>0$ such that $p_\phi(\P_{(|X_n|>\lambda E)}E)<\delta$ implies that $p_\phi(\P_{(|X_n|>\lambda E)}X_m)<\epsilon/2$ and, using (\ref{4E1}), we can find a $\lambda_0$ such that for all $\lambda>\lambda_0,$ we have for all $n\in \N$ that $p_\phi(\P_{(|X_n|>\lambda E)}E)<\delta$ and therefore for all $n\in\N$ that $p_\phi(\P_{(|X_n|>\lambda E)}X_m)<\epsilon/2.$ But $\P_{(X_n^->\lambda E)}\le \P_{(|X_n|>\lambda E)}$ and therefore, for all $\lambda>\lambda_0,$ $$ p_\phi(\P_{(X_n^->\lambda E)}X_m)<\epsilon/2 \mbox{ for all $n\in\N.$} $$ We now use the inequality in (\ref{4E4}): For all $n>m(\epsilon)$ we have \begin{align}\label{4E5} \F(\P_{(X_n^->\lambda E)}X_n^-) &=|\F(\P_{(X_n^->\lambda E)}X_n)|\nonumber \\ &=-\F(\P_{(X_n<-\lambda E)}X_n) \nonumber\\ &\le(\F(X_m)-\F(X_n))-\F(\P_{(X_n<-\lambda E)}X_m) \end{align} and so, for all $n>m$ we get $$ p_\phi(\P_{(X_n^->\lambda E)}X_n^-)\le p_\phi(X_m-X_n)+ p_\phi(\P_{(X_n^->\lambda E)}X_m)<\epsilon/2+\epsilon/2=\epsilon $$ for all $\lambda>\lambda_0.$ For $n=1,2,\ldots,m$ we have that $p_\phi(\P_{(X_n^->\lambda E)}X_n^-)\downarrow 0$ as $\lambda\to\infty$ so we can find $\lambda_n$ such that for $\lambda>\lambda_n,$ we have $p_\phi(\P_{(X_n^->\lambda E)}X_n^-)<\epsilon.$ If $\lambda>\max\{\lambda_0,\lambda_1,\ldots,\lambda_m\}$ we have that $$ p_\phi(\P_{(X_n^->\lambda E)}X_n^-)<\epsilon \mbox{ for all $n\in\N.$} $$ Thus, $(X_n^-)$ is uniformly integrable. Our final result, that $(X_n)=(X_n^+-X_n^-)$ is uniformly integrable, follows from Corollary \ref{L:Xn+X}.\qed \section{The optional sampling theorem} As remarked in the introduction, we have to prove the optional sampling theorem using Definition~\ref{X_S}.
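\medskip In the classical case, where $\mf{E}=L^1(\Omega,\Sigma,\mu)$ for a probability space $(\Omega,\Sigma,\mu),$ $E=\mathbf{1},$ the band projections are the multiplications by indicator functions and $\F_t$ is the conditional expectation with respect to a sub-$\sigma$-algebra $\Sigma_t$ of $\Sigma,$ the conclusion $\F_{\S+}X_\T\ge X_\S$ of the theorem below reads
$$
\mb{E}[X_T\,|\,\Sigma_{S+}]\ge X_S\ \mbox{a.s.,}
$$
i.e., the theorem is an abstract version of Doob's optional sampling theorem for submartingales (see, e.g.,~\cite{KS}).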
\begin{theorem} Let $(X_t)_{t\in J}$ be a right-uo-continuous submartingale and let $\S\le\T$ be two optional times of the filtration $(\F_t,\mf{F}_t).$ Then, if either \begin{enumerate} \item $\T$ is bounded or \item $(X_t)$ has a last element, \end{enumerate} we have $$ \F_{\S+}X_\T\ge X_\S. $$ If $\S$ and $\T$ are stopping times, one has $$ \F_{\S}X_\T\ge X_\S. $$ \end{theorem} {\em Proof.} Let $\pi_n=\{a=t_0<t_1<\ldots<t_{2^n}=b\}$ be a dyadic partition of $J=[a,b]$ and define the sequence $(\S_n)$ by putting \begin{equation}\label{5E1} \S_n=\sum_{i=1}^{2^n}t_i(\S^\ell_{t_i}-\S^\ell_{t_{i-1}}) =\sum_{i=1}^{2^n}t_i\Delta\S^\ell_i=\sum_{i=1}^{2^n}t_i\S^\ell_{t_i}(\S_{t_{i-1}}^{\ell})^d \end{equation} and similarly, \begin{equation}\label{5E2} \T_n=\sum_{j=1}^{2^n}t_j(\T^\ell_{t_j}-\T^\ell_{t_{j-1}}) =\sum_{j=1}^{2^n}t_j\Delta\T^\ell_j=\sum_{j=1}^{2^n}t_j\T^\ell_{t_j}(\T_{t_{j-1}}^{\ell})^d. \end{equation} We now write them both as sums with respect to the partition $\{\Delta\S^\ell_i\Delta\T^\ell_j\}_{i,j=1}^{2^n},$ i.e., we get \begin{equation}\label{5E3} \S_n=\sum_{i=1}^{2^n}\sum_{j=1}^{2^n}s_{ij}\Delta\T^\ell_j\Delta\S^\ell_i,\ s_{ij}=t_i\mbox{ and } \T_n=\sum_{i=1}^{2^n}\sum_{j=1}^{2^n}t_{ij}\Delta\T^\ell_j\Delta\S^\ell_i,\ t_{ij}=t_j. \end{equation} Now, $\S\le\T$ implies that for each fixed $n,$ $\S_n\le\T_n,$ and so $t_i\Delta\T^\ell_j\Delta\S^\ell_i\le t_j\Delta\T^\ell_j\Delta\S^\ell_i,$ i.e., $s_{ij}\le t_{ij},$ for all $i,j$ such that $\Delta\T^\ell_j\Delta\S^\ell_i\ne 0.$ Each $\S_n$ and each $\T_n$ is a stopping time for the filtration and by Freudenthal's theorem, $\S_n\downarrow \S$ and $\T_n\downarrow\T.$ With these definitions for $\S_n$ and $\T_n,$ we have that \begin{equation}\label{5E4} X_{\S_n}=\sum_{i=1}^{2^n}\sum_{j=1}^{2^n}\Delta\T^\ell_j\Delta\S^\ell_i X_{t_{i}}\mbox{ and } X_{\T_n}=\sum_{i=1}^{2^n}\sum_{j=i}^{2^n}\Delta\T^\ell_j\Delta\S^\ell_i X_{t_{j}}.
\end{equation} Next, we put \begin{equation}\label{5E5} \F_{\S_n}:=\sum_{i=1}^{2^n}\sum_{j=1}^{2^n}\F_{t_{i}}\Delta\T^\ell_j\Delta\S^\ell_i =\sum_{i=1}^{2^n}\F_{t_{i}}\Delta\S^\ell_i. \end{equation} It is readily checked that $\F_{\S_n}$ is a strictly positive, order continuous projection satisfying $\F_{\S_n}E=E,$ i.e., $\F_{\S_n}$ is a conditional expectation. Its range is the direct sum $$ \bigoplus_{i=1}^{2^n}\Delta\S^\ell_i\mf{F}_{t_i} $$ of the bands $\Delta\S_i^\ell \mf{F}_{t_i}$ (since $\F_{t_i}$ and $\Delta\S_i^\ell $ commute), and the projections in $\mf{E}$ that belong to this space are exactly those projections in $\mf{E}$ that belong to $\mf{P}_t$ for all $t$ such that $\S_n \le tE.$ Therefore, the space $\mf{F}_{\S_n},$ which is by definition the space generated by these projections, is equal to the space $\bigoplus_{i=1}^{2^n}\Delta\S^\ell_i\mf{F}_{t_i}.$ Moreover, $$ \F\F_{\S_n}=\sum_{i=1}^{2^n}\F\F_{t_{i}}\Delta\S^\ell_i=\F\sum_{i=1}^{2^n}\Delta\S^\ell_i=\F, $$ and similarly $\F_{\S_n}\F=\F.$ Therefore, $\F_{\S_n}$ is the unique conditional expectation with range $\mf{F}_{\S_n}$ satisfying these two conditions. For each fixed $i,$ consider the sum \begin{align}\label{equation5.6a} \sum_{j=i}^{2^n}\F_{t_{i}}\Delta\T^\ell_j\Delta\S^\ell_i X_{t_j} &=\F_{t_i}\Delta\T^\ell_{2^n}\Delta\S^\ell_i X_{t_{2^n}}+\ldots +\F_{t_i}\Delta\T^\ell_{i}\Delta\S^\ell_i X_{t_{i}} \end{align} and note that, in the first term, $$ \Delta\T^\ell_{2^n}\Delta\S^\ell_i=\Delta \S^\ell_i(\T^\ell_{t_{2^n}}(\T^{\ell}_{t_{2^n-1}})^d)=\Delta \S^\ell_i(\T^{\ell}_{t_{2^n-1}})^d\in\mf{P}_{t_{2^n-1}}. $$ Therefore, \begin{align*} \F_{t_i}\Delta\T^\ell_{2^n}\Delta\S^\ell_i X_{t_{2^n}}&=\F_{t_i}\F_{t_{2^n-1}} \Delta\T^\ell_{2^n}\Delta\S^\ell_i X_{t_{2^n}}\\ &=\F_{t_i}\Delta\T^\ell_{2^n}\Delta\S^\ell_i \F_{t_{2^n-1}} X_{t_{2^n}}\\ &\ge\F_{t_i}\Delta\T^\ell_{2^n}\Delta\S^\ell_i X_{t_{2^n-1}}.
\end{align*} Substituting this inequality in Equation (\ref{equation5.6a}), and repeating the process, we finally arrive at \begin{equation} \sum_{j=i}^{2^n}\F_{t_{i}}\Delta\T^\ell_j\Delta\S^\ell_i X_{t_j} \ge\F_{t_{i}}\Delta\S^\ell_i(\sum_{j=i}^{2^n}\Delta\T^\ell_j) X_{t_i} =\F_{t_{i}}\Delta\S^\ell_i X_{t_i} =\Delta\S^\ell_iX_{t_i}. \end{equation} Thus, \begin{equation}\label{5E6} \F_{\S_n}(X_{\T_n})=\sum_{i=1}^{2^n}\sum_{j=i}^{2^n}\F_{t_{i}}\Delta\T^\ell_j\Delta\S^\ell_i X_{t_j} \ge \sum_{i=1}^{2^n}\Delta\S^\ell_i X_{t_i}=X_{\S_n}. \end{equation} (This is Doob's optional sampling theorem for this special case.) For every $\P\in\mf{P}_{\S_n},$ we therefore have that \begin{equation}\label{5E7} \F(\P X_{\T_n})=\F\F_{\S_n}\P X_{\T_n}= \F\P(\F_{\S_n}X_{\T_n})\ge\F(\P X_{\S_n}). \end{equation} By~\cite[Proposition 5.15]{G2}, we have that $\displaystyle \mf{P}_{\S+}=\bigcap_{n=1}^\infty \mf{P}_{\S_n}.$ Therefore, by (\ref{5E7}), \begin{equation}\label{5E7b} \F(\P X_{\T_n})\ge\F(\P X_{\S_n})\mbox{ holds for all $\P\in\mf{P}_{\S+}.$} \end{equation} If $\S$ is a stopping time, it follows from~\cite[Proposition 5.9]{G2} that since $\S\le\S_n,$ we have $\mf{P}_{\S}\subset\mf{P}_{\S_n}$ and so this inequality holds in that case also for all $\P\in\mf{P}_\S.$ \medskip Applying the arguments above to $\S_{n+1}\le\S_{n},$ we get as in (\ref{5E6}) that $$ \F_{\S_{n+1}}(X_{\S_{n}})\ge X_{\S_{n+1}}\mbox{ for all $n$}, $$ which implies that $(X_{\S_n},\F_{\S_n})$ is a backward submartingale and so $(\F(X_{\S_n}))$ is a decreasing sequence and, using (\ref{5E4}), $\F(X_{\S_n})\ge\F(X_a)$ for all $n.$ Applying Proposition~\ref{4P2}, we have that the sequence $(X_{\S_n})$ is uniformly integrable. The same is true for the sequence $(X_{\T_n}).$ \medskip We now note that $$ X_{\S_n}=I_\S(X^n),\mbox{ and }X_{\T_n}=I_{\T}(X^n), $$ with $$ X^n_t=\sum_{i=1}^{2^n}X_{t_i}\chi_{[t_{i-1},t_i)}(t).
$$ Our assumption that $X=(X_t)$ is uo-right-continuous implies that in each point $t$ we have that $X^n_t$ is uo-convergent to $X_t.$ Now let $k\in\N$ and define the constant process $kE:=(kE_t)$ with $kE_t=kE$ for all $t.$ This process is Daniell integrable since $I_\S(kE)=kE\in\mf{E}.$ Consider the process $X^n\wedge kE=(X^n_t\wedge kE_t)=(X^n_t\wedge kE).$ Then $X^n\wedge kE$ converges in order pointwise to $X\wedge kE.$ By Lebesgue's dominated convergence theorem, we get that $I_\S(X^n\wedge kE)$ is order convergent to $I_\S(X\wedge kE).$ But $I_\S$ is a Riesz homomorphism, so we get that $$ I_\S(X^n\wedge kE)=I_\S(X^n)\wedge I_\S(kE)=I_\S(X^n)\wedge kE. $$ Therefore, by Proposition~\ref{proposition 3.4}, $X_{\S_n}=I_\S(X^n)$ is uo-convergent to $I_\S(X)=X_{\S}$ and the same holds for $I_\T(X^n)$ and $I_\T(X).$ By Theorem~\ref{T:FXn->FX}, applied as in the Conclusion above, we have that $p_\phi(\P X_{\S_n})$ converges to $p_\phi(\P X_\S)$ and also $p_\phi(\P X_{\T_n})$ converges to $p_\phi(\P X_\T),$ for every $\P\in\mf{P}_{\S+}$ and for every $p_\phi\in\mc{P}.$ Recalling that $p_\phi=|\phi|\F$ for $\phi\in\Phi$, this implies, using~(\ref{5E7b}), that $$ |\phi|\F(\P X_\T)\ge |\phi|\F(\P X_\S),\mbox{ for every $\P\in\mf{P}_{\S+}.$} $$ But since $\mf{E}^\sim_{00}$ separates the points of $\mf{E},$ we get $$ \F(\P X_\T)\ge \F(\P X_\S),\mbox{ for every $\P\in\mf{P}_{\S+}$}, $$ and thus $$ \F\F_{\S+}(\P X_\T)=\F\P(\F_{\S+}X_\T) \ge \F\P (X_\S). $$ Since this holds for every $\P\in\mf{P}_{\S+},$ we have that $$ \F_{\S+}X_\T \ge X_\S. $$ This proves the theorem if $\S, \T$ are optional times. In the case that they are stopping times, the theorem holds since the inequalities hold for all $\P\in\mf{P}_\S.$ \qed \input{BibliografieStopped.tex} \end{document}
Let $\F$ be a conditional expectation on $\mf{E}.$ We call two events $\P$ and $\Q$ in $\mf{P}$ \textit{$\F$-conditionally independent} whenever $$ \F(\P\Q)\F=(\F\P)(\F\Q)\F=(\F\Q)(\F\P)\F, $$ which is equivalent to $\F(\P\Q)|_\mf{F}=(\F\P)(\F\Q)|_\mf{F}=(\F\Q)(\F\P)|_\mf{F}.$ By~\cite[Lemma 4.2]{G4} we have $(\F\P)(\F\Q)\F=(\F\Q)(\F\P)\F$ and so it is sufficient to define conditional independence by stating only one of the conditions. A class $\mf{C}$ of projections is $\F$-independent if, for every choice of a finite number of elements $\P_j,\ j=1,\ldots,n,$ of $\mf{C},$ we have $$ \F(\prod_{j=1}^n\P_j)|_\mf{F}=\prod_{j=1}^n\F\P_j|_\mf{F}. $$ Classes $\mf{C}_\alpha$ are called $\F$-independent if, for any choice of projections $\P_\alpha\in\mf{C}_\alpha,$ we have that this chosen class is $\F$-independent. For every $t\in\R$ let $\mf{B}_t=\mf{B}(tE>X)$ be the band generated by $(tE-X)^+,$ with band projection $\P_t=\P(tE>X).$ Let $\mf{P}(X)$ be the order complete Boolean subalgebra of $\mf{P}$ generated by all $\P_t.$ We note that $(tE-X)^+$ is an element of the Riesz space $[X,\mf{F}]$ generated by the element $X$ and $\mf{F}=\F(\mf{E}),$ and so the projections $\P_t$ are projections in this space, i.e., $\mf{P}(X)\subset\mf{P}_{[X,\mf{F}]}.$ We say that two elements $X,Y\in\mf{E}$ are $\F$-conditionally independent if the classes $\mf{P}(X)$ and $\mf{P}(Y)$ are $\F$-independent. We say that the element $X$ is independent of the algebra $\mf{G}$ of projections whenever the algebras $\mf{G}$ and $\mf{P}(X)$ are $\F$-conditionally independent. This means that for any $\Q\in\mf{G}$ and $\P\in\mf{P}(X),$ we have that $\P$ and $\Q$ are $\F$-conditionally independent. We can also define classes of elements in $\mf{E}$ to be $\F$-conditionally independent, and so it is also meaningful to speak of $\F$-conditionally independent Riesz subspaces of $\mf{E}.$
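To connect the definition with the measure-theoretic notion, consider the classical case where $\mf{E}=L^1(\Omega,\Sigma,\mu),$ $E=\mathbf{1}$ and $\F$ is the conditional expectation with respect to a sub-$\sigma$-algebra $\Sigma_0$ of $\Sigma,$ so that the band projections are the multiplications $\P f=\chi_A f$ and $\Q f=\chi_B f$ with $A,B\in\Sigma.$ Applying the defining identity to $E$ gives
$$
\F(\P\Q)E=\mb{E}[\chi_{A\cap B}\,|\,\Sigma_0]
\quad\mbox{and}\quad
(\F\P)(\F\Q)E=\F\bigl(\chi_A\,\mb{E}[\chi_B\,|\,\Sigma_0]\bigr)
=\mb{E}[\chi_A\,|\,\Sigma_0]\,\mb{E}[\chi_B\,|\,\Sigma_0],
$$
so that $\F$-conditional independence of $\P$ and $\Q$ reduces to the classical conditional independence of the events $A$ and $B$ given $\Sigma_0.$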
Denoting the order closed Riesz subspace generated by two subsets $\mf{G}$ and $\mf{H}$ in $\mf{E}$ by $[\mf{G},\mf{H}],$ we recall (see~\cite[Corollary 4.8]{G4}) that elements $X$ and $Y$ in the Riesz space $\mf{E}$ are $\F$-conditionally independent if and only if the Riesz subspaces $[X,\mf{F}]$ and $[Y,\mf{F}]$ of $\mf{E}$ are $\F$-conditionally independent. Since these spaces contain $\mf{F},$ we have by a result of Bruce Watson~\cite{W} that, if $\mf{E}$ is $\F$-universally complete, there exists a unique conditional expectation $\F_X:\mf{E}\to [X,\mf{F}]$ such that $\F\F_X=\F_X\F=\F.$ \medskip We recall the following fact from measure theory (the Doob--Dynkin lemma): Let $\Omega$ be a set and let $(E,\mf{S})$ be a measurable space. Let $X:\Omega\to E$ be a function and suppose that $Y:\Omega\to\R$ is a $\sigma(X)$-measurable function. Then there exists a measurable function $f:E\to \R$ such that $Y=f(X).$ We need an analogue of this fact in the abstract case in order to state the Markov property. Firstly, we recall the right-continuous spectral system of an element $X\in\mf{E}:$ Let $E$ be a weak order unit for $\mf{E}.$ For $t\in\R$ we define $\overline{E}_t^r$ to be the component of $E$ in the band generated by $(X-tE)^+=(tE-X)^-$ and set $E_t^r:=E-\overline{E}_t^r.$ Then the system $(E_t^r)_{t\in\R}$ is an increasing right-continuous system of components of $E.$ We proved in~\cite[Lemma 3.7]{G3} that $\mu_X(a,b]:=E_b^r-E^r_a$ defines a measure on the algebra of all left-open right-closed intervals and that it can be extended to a countably additive vector measure on the Borel $\sigma$-algebra $\mc{B}(\R)$ (see also~\cite[Chapter XI section 5]{Vu}). Its values are in the set $\{\P E:\ \P\in\mf{P}(X)\}.$ The measure $\mu_X$ is a Boolean measure and as such also satisfies the condition that $\mu_X(A\cap B)=\mu_X(A)\wedge \mu_X(B)$ (\cite[Theorem XI.5.(c)]{Vu}).
Since we will always be working with the right-continuous spectral system, but will have to distinguish between spectral systems of different elements, we will henceforth denote the right-continuous spectral system of the element $X$ by $(E^X_t)_{t\in\R}.$ \medskip Assuming that $\mf{E}$ has a weak order unit, the functional calculus can be extended to bounded Borel measurable functions (reference to be added). For a bounded measurable function the integral $$ f(X):=\int_\R f(t)\,d\mu_X(t)\in \mf{E} $$ exists as a limit of measurable step-functions in the same way as, for the function $f(t)=t,$ the integral exists in the proof of Freudenthal's theorem (see~\cite[Theorem 40.3]{LZ}). For an arbitrary positive measurable function the integral also exists, but its value is not necessarily in $\mf{E},$ but in its supremum completion $\mf{E}_s.$ The Borel-measurable function $f$ is then said to be integrable with respect to the Boolean measure $\mu_X$ whenever both $\int_\R f^+\,d\mu_X$ and $\int_\R f^-\,d\mu_X$ are elements of $\mf{E}$ and in that case the integral is defined to be their difference. \medskip The question of which elements $Y$ can be written as $Y=f(X)$ was answered in part by V.I. Sobolov (see~\cite[Theorem XI.7.a]{Vu}).
If $M$ denotes the values of $\mu_X,$ i.e., if $M=\{\mu_X(A): A\in\mc{B}(\R)\},$ then the bounded element $Y\in\mf{E}$ can be written as $f(X)$ for some Borel-measurable function if and only if $E_t^Y\in M$ for all $t\in\R.$ In particular, if $A\in\mc{B}(\R),$ then $\mu_X(A)=I_A(X).$ \medskip \begin{lemma}\label{lemma 1.1} Let $\mf{E}$ be a Dedekind complete Riesz space with weak order unit $E.$ Let $\sigma(X)$ be the $\sigma$-algebra of components of $E$ generated by the components $E^X_t,$ $t\in\R.$ Then, for each $E_\alpha\in\sigma(X),$ there exists a Borel measurable set $A_\alpha\in\mc{B}(\R)$ such that $\mu_X(A_\alpha)=E_\alpha.$ \end{lemma} {\em Proof.} We note that for $E_{s,t}:=E^X_t-E^X_s$ we have that the interval $I_{s,t}:=(s,t]\in\mc{B}(\R)$ satisfies $\mu_X(I_{s,t})=E_{s,t}.$ Consider the set $\mf{D}$ of all $E_\alpha\in\mf{X}$ that are contained in the range of $\mu_X.$ Then the set of all $E_{s,t}$ is a $\pi$-system that is contained in $\mf{D}.$ We claim that $\mf{D}$ is a Dynkin system: \begin{enumerate} \item[(a)] The element $E\in\mf{D},$ for $\mu_X(-\infty,\infty)=E^X_{\infty}-E^X_{-\infty}=E-0=E.$ \item[(b)] Let $E_\alpha, E_\beta\in\mf{D}$ with $E_\alpha\le E_\beta.$ Let $\mu_X(A_\alpha)=E_\alpha$ and $\mu_X(A_\beta)=E_\beta$ with $A_\alpha, A_\beta\in\mc{B}(\R).$ We first note that the disjoint complement $E_\alpha^d$ of $E_\alpha$ also belongs to $\mf{D}$ because, from $E=\mu_X(\R)=\mu_X(A_\alpha\cup A_\alpha^c)=\mu_X(A_\alpha)+\mu_X(A_\alpha^c)=E_\alpha+\mu_X(A_\alpha^c),$ it follows that $\mu_X(A_\alpha^c)=E-E_\alpha=E_\alpha^d.$ Now, $E_\beta-E_\alpha=E_\beta\wedge E_\alpha^d=\mu_X(A_\beta)\wedge\mu_X(A_\alpha^c)=\mu_X(A_\beta\cap A_\alpha^c)$ by our earlier remark that $\mu_X$ is a Boolean measure.
So $E_\beta-E_\alpha\in\mf{D}.$ \item[(c)] Let $E_n\in\mf{X}$ with $E_n\uparrow E_0.$ Let $\mu_X(A_n)=E_n.$ We claim that $\mu_X(\bigcup_nA_n)=E_0.$ Define the disjoint sequence of sets $B_n$ in the usual way by $B_1=A_1,$ $B_{n+1}=A_{n+1}-\bigcup_{j=1}^n B_j.$ Then $\bigcup_nB_n= \bigcup_nA_n=A_0,$ and, since $E_n\uparrow,$ $$ \mu_X(B_1)=\mu_X(A_1)=E_1, \mu_X(B_2)=E_2-E_1,\ldots,\mu_X(B_n)=E_n-E_{n-1}. $$ It follows that $$ \mu_X(A_0)=\sum_{n=1}^\infty\mu_X(B_n)=E_1+\sum_{n=2}^\infty (E_{n}-E_{n-1})=E_0. $$ We therefore have that $E_0\in\mf{D}.$ \end{enumerate} It follows that $\mf{D}$ contains the $\sigma$-algebra generated by the images of all left-open right-closed intervals, i.e., $\sigma(X)\subset \mf{D}.$ Thus, for every $E_\alpha\in\sigma(X),$ there exists an element $A_\alpha\in\mc{B}(\R)$ such that $\mu_X(A_\alpha)=E_\alpha.$\qed \bigskip In general the algebra of components of $E$ generated by the components $E^X_t,$ $t\in\R,$ may not be a $\sigma$-algebra. This may depend on $X$ but also on the Riesz subspace $\mf{F}$ of $\mf{E}$ to which $X$ belongs. One can define an element $X\in\mf{E}$ to be a \textit{measurable element} in $\mf{E}$ if the Boolean $\sigma$-algebra $\sigma(X),$ generated by $\{E^X_t : t\in\R\},$ is equal to the order complete Boolean algebra generated by the set.
This is equivalent to saying that the Boolean algebra $\mf{P}(X)$ is the $\sigma$-algebra generated by the projections $\P(tE>X),$ $t\in\R.$ \medskip\noindent \textit{Conjecture:\,} {\sl If $\mf{E}$ is a super-Dedekind complete Riesz space, then every $X\in\mf{E}$ is measurable.} \begin{proposition} Let $\mf{E}$ be a Dedekind complete Riesz space with weak order unit $E$ and let $X$ be a measurable element of $\mf{E}.$ Let $\mf{G}_X$ be the Riesz subspace of $\mf{E}$ generated by the algebra $\mf{P}(X).$ Then, for every $Y\in\mf{G}_X$ there exists a real valued Borel-measurable function $f$ defined on $\R$ such that $Y=f(X).$ \end{proposition} {\em Proof.} Note that our assumption is that $\mf{P}(X)=\sigma(X).$ Let $Y\in\mf{G}_X$ be bounded, say $|Y|\le ME.$ Then there exists a sequence of simple elements of the form $$ s_n=\sum_{i\in\pi}t_iE_i, \ E_i\in\sigma(X), $$ that converges $E$-uniformly to $Y$ (by Freudenthal's theorem). By Lemma~\ref{lemma 1.1}, there exists, for each $i,$ a Borel measurable set $A_i$ such that $E_i=\mu_X(A_i).$ Hence, $$ s_n=\sum_{i\in\pi}t_i\mu_X(A_i)=\int_\R(\sum_{i\in\pi}t_iI_{A_i})\,d\mu_X=\sigma_n(X), $$ with $\sigma_n(t)$ the real step function $\sum_{i\in\pi}t_iI_{A_i}(t).$ Since $s_n\to Y$ and $\sigma_n\to f,$ it easily follows that $Y=f(X).$ The extension to arbitrary $Y\in\mf{G}_X$ is straightforward.\qed \bigskip Remark: It seems as though a definition of a Markov process used by S.E. Shreve~\cite[page 76]{Sh} can be used in our case only if we make the extra assumption that the elements $X_t$ in the stochastic process $(X_t,\mf{F}_t,\F)_{t\in J}$ are measurable elements of the Riesz space $\mf{E}.$ \section{Markov Processes} In the theory of Markov processes it is convenient to use some of the classical notation.
The order closed Riesz subspace generated in $\mf{E}$ by $\mf{F}$ and the elements $X_1,X_2,\ldots,X_n\in \mf{E},$ that is, the space $[X_1,\ldots,X_n,\mf{F}],$ will be denoted by $\mf{F}_{X_1,X_2,\ldots,X_n}.$ The unique conditional expectation that maps $\mf{E}$ onto this subspace will be denoted by $\F(\cdot\,|\,X_1,X_2,\ldots,X_n).$ This is the conditional expectation that satisfies $\F\F(\cdot\,|\,X_1,X_2,\ldots,X_n)=\F(\cdot\,|\,X_1,X_2,\ldots,X_n)\F=\F.$ Similarly, the order closed Riesz subspace generated by $\mf{F}$ and $\{X_s : s\ge t\}$ will be denoted by $\mf{F}_{X_{s,s\ge t}}$ and the conditional expectation onto this space by $\F(\cdot\,|\,X_{s,s\ge t}).$ The notation $\mf{F}_{X_{s,s\le t}}$ needs no explanation. It defines a filtration on $\mf{E}$ to which $(X_t)$ is adapted and is denoted $\mf{F}^X_t$ by Karatzas and Shreve~\cite{KS}. We shall also adopt this simpler notation. We formulate abstractly different definitions used in the literature. In each case the citation refers to the classical definition. \medskip 1. \cite[Definition 1.1]{BG}. Theorem 4.5.4 in Ash and Gardner~\cite{AG}. \begin{definition}[{\rm Blumenthal, R.M.; Getoor, R.K.}] Let $\mf{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mf{F}_t,\F_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mf{F}_t,\F_t).$ The process is called a \textit{Markov process} if $\mf{F}_t$ and $\mf{F}_{X_{s,s\ge t}}$ are $\F_t$-conditionally independent, i.e., \begin{equation} \F_t(\P\Q)\F_t=\F_t(\P)\F_t(\Q)\F_t,\mbox{ for all }\P\in\mf{P}_t,\ \Q\in\mf{P}(\mf{F}_{X_{s,s\ge t}}). \end{equation} \end{definition} \medskip 2. \cite[Definition 4.5.1]{AG}. Theorem 1.3(iii) in Blumenthal and Getoor.
\begin{definition}[{\rm Ash and Gardner}] Let $\mf{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mf{F}_t,\F_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mf{F}_t,\F_t).$ The process is called a \textit{Markov process} if for every Borel set $B\in\mc{B}(\R)$ and $s,t\in J$ with $s<t$ \begin{equation} \F_s(I_B(X_t))=\F(I_B(X_t)\,|\,X_s). \end{equation} Equivalently: for any Borel-measurable function $g$ for which $g(X_t)\in\mf{E},$ \begin{equation} \F_s(g(X_t))=\F(g(X_t)\,|\,X_s). \end{equation} \end{definition} Note that by the definition of the measure $\mu_{X_t},$ equation (2.3) can be written as \begin{equation} \F_s(\mu_{X_t}(B))=\F(\mu_{X_t}(B)\,|\,X_s) \end{equation} and the next equation as \begin{equation} \F_s\left(\int_\R g\,d\mu_{X_t}\right)=\F\left(\int_\R g\,d\mu_{X_t}\,|\,X_s\right). \end{equation} \medskip 3. \cite[Definition 2.3.6]{Sh}. See Ash and Gardner: Comments 4.5.2(c). \begin{definition}[{\rm Shreve, S.E.}] Let $\mf{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mf{F}_t,\F_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mf{F}_t,\F_t).$ The process is called a \textit{Markov process} if for all $s\le t\in J$ and for every nonnegative Borel-measurable function $f,$ there exists a Borel-measurable function $g$ such that \begin{equation} \F_s(f(X_t))=g(X_s). \end{equation} \end{definition} 4. \cite[Definition 10.5.4]{Kuo}. See Ash and Gardner: Comments 4.5.3(b). \begin{definition}[{\rm Kuo}] Let $\mf{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mf{F}_t,\F_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mf{F}_t,\F_t).$ The process is called a \textit{Markov process} if for $a\le t_1<t_2<\cdots<t_n<t\le b,$ we have the equality \begin{equation} \F(\P(X_t\le xE)E\,|\,X_{t_1},X_{t_2},\ldots,X_{t_n})=\F(\P(X_t\le xE)E\,|\,X_{t_n}).
\end{equation} \end{definition} Note that our notation for $\P(X_t\le xE)E$ is $E^{X_t}_x.$ So the equation above becomes \begin{equation} \F(E^{X_t}_x\,|\,X_{t_1},X_{t_2},\ldots,X_{t_n})=\F(E^{X_t}_x\,|\,X_{t_n}). \end{equation} 5. \cite[Definition 4.1]{VW} \begin{definition}[{\rm Vardy, J. and Watson, B.A.}] Let $\mf{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mf{F}_t,\F_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mf{F}_t,\F_t).$ The process is called a \textit{Markov process} if for any set of points $a\le t_1<t_2<\cdots<t_n<t\le b$ and for any component $E^{X_t}_\alpha$ of $E,$ we have the equality \begin{equation} \F(E^{X_t}_\alpha\,|\,X_{t_1},X_{t_2},\ldots,X_{t_n})=\F(E^{X_t}_\alpha\,|\,X_{t_n}). \end{equation} This is equivalent to \begin{equation} \F(\cdot\,|\,X_{t_1},X_{t_2},\ldots,X_{t_n})\F_t=\F(\cdot\,|\,X_{t_n})\F_t. \end{equation} \end{definition} \bigskip It seems to us that the strongest definition is Definition 2.3, so let us take Definition 2.3 as our definition of a Markov process. \medskip \begin{lemma} Let $(X_t,\mf{F}_t)_{t\in J}$ be a Markov process and suppose $f$ and $g$ are as in Definition 2.3, with $f$ nonnegative. Then for $s\le t\in J$ we have \begin{equation} g(X_s)=\F(f(X_t)\,|\,X_s). \end{equation} \end{lemma} {\em Proof.}\ We have, since $\mf{F}_{X_s}\subset\mf{F}_s,$ by the properties of a conditional expectation, that $\F(\F_s(f(X_t))\,|\, X_s)=\F(f(X_t)\,|\, X_s).$ But, since $(X_t)$ is a Markov process, $\F_s(f(X_t))=g(X_s)$ and so, substituting this in the first equation we get $$ \F(f(X_t)\,|\, X_s)=\F(g(X_s)\,|\,X_s)=g(X_s). $$ \phantom{Koos}\qed \bigskip \begin{corollary} If $(X_t,\mf{F}_t)$ is a Markov process, then it satisfies the condition of Definition 2.2 (i.e., the definition found in Ash and Gardner~\cite{AG}).
\end{corollary} {\em Proof.\ } If it is a Markov process, then, for $s\le t$ and any Borel measurable $f$ we have that for some Borel measurable $g$ $$ \F_s(f(X_t))=g(X_s)=\F(f(X_t)\,|\,X_s), $$ which is the condition of the Ash-Gardner definition.\qed \bigskip The equivalent formulation of the Ash-Gardner definition is that for all Borel measurable sets $B$ one gets for all $s,t\in J,$ $s<t,$ that $$ \F_s(I_B(X_t))=\F(I_B(X_t)\,|\,X_s). $$ \medskip We say that a stochastic process $(X_t)$ has the Markov property with respect to a finite set $I=\{t_1<t_2<\cdots<t_n\}$ if, for all Borel sets $B\in\mc{B}(\R)$ we have $$ \F(I_B(X_{t_n})\,|\,X_{t_1},X_{t_2},\ldots,X_{t_{n-1}})=\F(I_B(X_{t_n})\,|\,X_{t_{n-1}}). $$ In the proof of the next result, we use the monotone class theorem applied to order complete algebras. The difference with the usual application is that we now apply it to upwards directed systems whereas in the classical case with $\sigma$-algebras, it is applied to upwards directed sequences. We use the result for order complete algebras of projections. \begin{enumerate} \item[(a)] A class of projections is called a {\em $\pi$-class} if it is closed under multiplication. \item[(b)] A class of projections is called a {\em $d$-class} if \begin{enumerate} \item[(i)] the identity operator $\I$ is in the class; \item[(ii)] if $\P\le \Q$ with both $\P$ and $\Q$ in the class, then $ \Q-\P=\Q\P^d $ is in the class; \item[(iii)] if $\P_\alpha $ is in the class and $\P_\alpha\uparrow\P,$ then $\P$ is in the class. \end{enumerate} \item[(c)] The monotone class theorem: If a $d$-class contains a $\pi$-class, then it contains the complete algebra generated by the $\pi$-class.
\end{enumerate} The only point where the proof of $(c)$ differs from the proof in the countable case (see for instance~\cite[Theorem 1.3.9]{A&D}) is in the proof that for an arbitrary set of projections $\P_\alpha$ their supremum is in the algebra, assuming that they are elements of a $d$-system. Here we use the standard method of first forming the set of all finite suprema of the $\P_\alpha,$ which is an upward directed set of projections having the same supremum as the original set. \begin{proposition} Let $(X_t,\mf{F}^X_t)_{t\in J}$ be a stochastic process. If $(X_t)_{t\in I}$ has the Markov property for all finite subsets $I\subset J,$ then $(X_t,\mf{F}^X_t)_{t\in J}$ satisfies the Ash-Gardner definition of a Markov process. \end{proposition}{\em Proof.} We have to prove that for every Borel set $B,$ and for $s<t$ $$ \F(I_B(X_t)\,|\,X_r, r\le s)=\F(I_B(X_t)\,|\,X_s). $$ In order to do that, we note that the Boolean algebra of projections generated by the Boolean algebra of projections $\mf{P}_{X_r,r\le s}$ is equal to the Boolean algebra of projections generated by all finite families of Boolean algebras $\mf{P}_{X_{t_1},X_{t_2},\ldots,X_{t_n}}$ with $t_1<t_2<\ldots<t_n\le s.$ Using a defining property of a conditional expectation operator (see~\cite[Theorem 3.3]{G2}), we will prove that for all projections $\P$ belonging to $\mf{P}_{X_{r,r\le s}},$ we have \begin{equation}\label{equation 2.12} \F(\P I_B(X_t))=\F(\P\,\F(I_B(X_t)\,|\,X_s)). \end{equation} This then implies that $\F(I_B(X_t)\,|\,X_s)=\F(I_B(X_t)\,|\,X_r, r\le s).$ We do it using the monotone class theorem: Let $\mf{P}_s$ be the set of all projections $\P$ satisfying Equation~(\ref{equation 2.12}).
For any $\P\in\mf{P}_{X_{t_1},\ldots,X_{t_n}}$ with $t_1<\cdots<t_n\le s,$ we have, by the defining property of a conditional expectation, that \begin{align*} \F(\P I_B(X_t))&=\F(\P\,\F(I_B(X_t)\,|\,X_{t_1},\ldots,X_{t_n},X_s)) \\ &=\F(\P\,\F(I_B(X_t)\,|\,X_s)), \end{align*} with the last equality by our assumption that $(X_t)$ has the Markov property for finite sets. Thus, $\mf{P}_{X_{t_1},\ldots,X_{t_n}}\subset \mf{P}_s.$ This holds for any choice of indices, and so the Boolean algebra generated by the algebras $\mf{P}_{X_{t_1},\ldots,X_{t_n}}$ is a Boolean algebra of projections (and therefore a $\pi$-class) that is contained in $\mf{P}_s.$ It is clear that $\I\in\mf{P}_s,$ because $\I\in \mf{P}_{X_{t_1},\ldots,X_{t_n}}.$ Also, if $\P_\alpha\uparrow \P$ and every $\P_\alpha$ satisfies Equation~(\ref{equation 2.12}), then by the order continuity of the relevant conditional expectation operators, we have that $\P$ also satisfies Equation~(\ref{equation 2.12}). By the monotone class theorem this shows that $\mf{P}_s$ contains the order complete algebra generated by all the $\mf{P}_{X_{t_1},\ldots,X_{t_n}}$ for any choice of indices. As this algebra is equal to the algebra $\mf{P}_{X_{r,r\le s}},$ Equation~(\ref{equation 2.12}) holds for all $\P$ in this algebra and we are done.\qed \bigskip \begin{proposition} If $(X_t,\mf{F}_t)$ is a Markov process and if $E_\alpha$ is a component of $E$ such that $E_\alpha\in\mf{F}_{X_{r,r\ge t}},$ then \begin{equation} \F_t(E_\alpha)=\F(E_\alpha\,|\,X_t). \end{equation} \end{proposition} \bigskip Using this result one can prove: If $(X_t,\mf{F}_t)$ is a stochastic process adapted to the filtration $(\mf{F}_t)$ then it is a Markov process if for each $t$ the Riesz spaces $\mf{F}_t$ and $\mf{F}_{X_{s,s\ge t}}$ are conditionally independent, given $X_t.$ This is the Blumenthal-Getoor definition. \bigskip We next prove that if $X_t$ is measurable, then any one of the definitions 2.1, 2.2, 2.4 and 2.5 implies Definition 2.3 (Shreve), which we took as our definition.
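\medskip For orientation, in the classical setting of a probability space $(\Omega,\Sigma,P)$ with $\mf{E}=L^1(P),$ $E=\mathbf{1}$ and $\F_s$ the conditional expectation with respect to a sub-$\sigma$-algebra $\Sigma_s\subset\Sigma,$ Definition 2.3 reads as follows: for every nonnegative Borel function $f$ there exists a Borel function $g$ with $$ \mathbb{E}[f(X_t)\,|\,\Sigma_s]=g(X_s)\quad P\mbox{-a.s.,} $$ which is precisely Shreve's classical formulation; the other definitions above specialize to their classical counterparts in the same way.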
\bigskip The next lemma was proved by Vardy and Watson~\cite{VW} \begin{lemma} If $\ds X_n=\sum_{k=1}^n Y_k$ for $n=1,2,\ldots,$ and if the $(Y_k)$ are conditionally $\F$-independent, then $(X_n)$ is a Markov process with respect to the filtration $\mf{F}^Y_n=\mf{F}_{Y_1,Y_2,\ldots,Y_n}$ and hence with respect to the filtration $\mf{F}^X_n=\mf{F}_{X_1,X_2,\ldots,X_n}.$ \end{lemma} Hence, we also derive the well-known result \begin{theorem} A process $(X_t)$ with independent increments is a Markov process. \end{theorem} \begin{corollary} A Brownian motion $(B_t)$ is a Markov process. \end{corollary} \bigskip \section{Representation by Brownian integrals} \textsc{Bibliography} \input{BibliografieMarkov.tex} \end{document} \section{Preliminaries} We assume $\mf{E}$ to be a Dedekind complete Riesz space with weak order unit $E$ separated by its order continuous dual $\mf{E}^\sim_{00}.$ We also assume that $\mf{E}$ is {\em perfect,} i.e., $\mf{E}=(\mf{E}^\sim_{00})^\sim_{00}.$ For the theory of Riesz spaces (vector lattices) we refer the reader to the following standard texts~\cite{AB2,LZ,MN, Sch, Z1, Z2}. For results on topological vector lattices the standard references are~\cite{AB1,F}. We denote the {\it universal completion} of $\mathfrak E,$ which is an $f$-algebra that contains $\mathfrak E$ as an order dense ideal, by $\mathfrak E^u.$ Its multiplication is an extension of the multiplication defined on the principal ideal $\mf{E}_E,$ and $E$ is the algebraic unit and a weak order unit for $\mathfrak E^u$ (see \cite{Z1}).
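\medskip For orientation: if $\mf{E}=L^1(\Omega,\Sigma,P)$ with $E$ the constant function $\mathbf{1},$ then $\mf{E}^u$ may be identified with $L^0(\Omega,\Sigma,P),$ the space of (equivalence classes of) all real-valued measurable functions on $\Omega,$ with pointwise multiplication; $\mathbf{1}$ is then both the algebraic unit and a weak order unit of $\mf{E}^u,$ and $L^1$ sits inside $L^0$ as an order dense ideal.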
The set of order bounded band preserving operators, called orthomorphisms, is denoted by $\operatorname{Orth}(\mf{E}).$ We refer to \cite{Donner, G9} for the definition and properties of the \emph{sup-completion} $\mathfrak E^s$ of a Dedekind complete Riesz space $\mathfrak E.$ It is a unique Dedekind complete ordered cone that contains $\mathfrak E$ as a sub-cone of its group of invertible elements and its most important property is that it has a largest element. Being Dedekind complete this implies that every subset of $\mf{E}^s$ has a supremum in $\mf{E}^s.$ Also, for every $C\in\mf{E}^s,$ we have $C=\sup\{X\in\mf{E}: X\le C\}$ and $\mf{E}$ is a solid subset of $\mf{E}^s.$ \medskip As mentioned in the introduction, a {\em conditional expectation} $\mb{F}$ defined on $\mf{E}$ is a strictly positive order continuous linear projection with range a Dedekind complete Riesz subspace $\mf{F}$ of $\mf{E}$ with the property that $\mb{F}$ maps weak order units onto weak order units. It may be assumed, as we will do, that $\mb{F}E=E$ for the weak order unit $E.$ The space $\mf{E}$ is called {$\mb{F}$-universally complete} (respectively, {$\mb{F}$-universally complete in $\mf{E}^u$}) if, whenever $X_\alpha\uparrow$ in $\mf{E}$ and $\mb{F}(X_\alpha)$ is bounded in $\mf{E}$ (respectively in $\mf{E}^u$), then $X_\alpha\uparrow X$ for some $X\in\mf{E}.$ If $\mf{E}$ is $\F$-universally complete in $\mf{E}^u,$ then it is $\F$-universally complete. \textit{We shall assume henceforth that $\mf{E}$ is $\mb{F}$-universally complete in $\mf{E}^u.$}\\ It follows that if $\mf{G}$ is an order closed Riesz subspace of $\mf{E}$ with $\mf{F}\subset\mf{G},$ then there exists a unique conditional expectation $\mb{F}_\mf{G}$ on $\mf{E}$ with range $\mf{G}$ and $\mb{F}\mb{F}_\mf{G}=\mb{F}_\mf{G}\mb{F}=\mb{F}$ (see~\cite{G2,W}). 
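\medskip The motivating example of such a conditional expectation is $\F=\mathbb{E}[\,\cdot\,|\,\Sigma_0]$ on $\mf{E}=L^1(\Omega,\Sigma,P)$ for a sub-$\sigma$-algebra $\Sigma_0\subset\Sigma:$ it is a strictly positive order continuous linear projection onto the Dedekind complete Riesz subspace of $\Sigma_0$-measurable integrable functions, and it maps the weak order unit $\mathbf{1}$ to itself.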
The conditional expectation $\mb{F}$ may be extended to the sup-completion in the following way: For every $X\in\mf{E}^s,$ define $\mathbb F X$ by $\sup_{\alpha}\mathbb F X_\alpha\in\mathfrak{E}^s$ for any upward directed net $X_\alpha\uparrow X$, $X_\alpha\in\mathfrak{E}.$ It is well defined (see~\cite{G8}). We define $\dom^+ \F:=\{0\le X\in\mf{E}^s:\ \F(X)\in\mf{E}^u\}.$ Then $\dom^+\F\subset\mf{E}^u$ (see~\cite[Proposition 2.1]{G6}) and we define $\dom\F=\dom^+\F-\dom^+\F.$ If $\mf{E}$ is $\F$-universally complete in $\mf{E}^u,$ then $\dom\,\F=\mf{E}.$ If $XY\in \rm{dom}\,{\mathbb F}$ (with the multiplication taken in the $f$-algebra $\mf{E}^u$), where $Y\in \mathfrak E$ and $X\in \mathfrak F=\mc{R}(\F),$ we have that $\mathbb F(XY)= X \mathbb F(Y)$. This fundamental fact is referred to as the \emph{averaging property} of $\mb F$ (see~\cite{G1}). Let $\Phi$ be the set of all $\phi\in\mf{E}^\sim_{00}$ satisfying $|\phi|(E)=1$ and extend $|\phi|$ to $\mf{E}^s$ by continuity. Define $\mathscr{P}$ to be the set of all Riesz seminorms defined on $\mf{E}^s$ by $p_{\phi}(X):=|\phi|(\F(|X|))$ where $\phi\in\Phi.$ Similarly, we define for $\phi\in\Phi$ the set $\mathscr{Q}$ of Riesz seminorms $q$ by putting $q_{\phi}(X):=(|\phi|(\F(|X|^2)))^{1/2},$ where the product is formed in the $f$-algebra $\mf{E}^u.$ We define the space $\mathscr{L}^1:=(\mf{E},\s(\mathscr{P}))$ and have that $\mc{L}^1:=\{X\in\mf{E}^s: p_\phi(X)<\infty\mbox{ for all } \phi\in\Phi\},$ equipped with the locally solid topology $\s(\mathscr{P})$ (for the proof see~\cite{G9}). The space $\mc{L}^2$ then consists of all $X\in\mf{E}^s$ satisfying $q_{\phi}(X)<\infty\mbox{ for all } \phi\in\Phi,$ equipped with the weak topology $\s(\mathscr{Q}).$ We have $\mathscr{L}^2=\{X\in\mf{E}^s\,:\, |X|^2\in\mf{E}\}.$ One also has the Cauchy inequality $p_{\phi}(XY)\le q_{\phi}(X)q_{\phi}(Y)\mbox{ for all } X,Y\in \mc{L}^2.
$ and hence, for all $X,Y\in\mc{L}^2,$ $XY\in\mc{L}^1.$ The spaces $\mathscr{L}^1$ and $\mathscr{L}^2$ are topologically complete (see~\cite{G7} and \cite{G9} and note that this may not be true without the assumption that $\mf{E}$ is $\F$-universally complete in $\mf{E}^u$). A {\em filtration} on $\mf{E}$ is a set $(\F_t)_{t\in J}$ of conditional expectations satisfying $\F_s=\F_s\F_t$ for all $s<t.$ We denote the range of $\F_t$ by $\mf{F}_t.$ A {\em stochastic process} in $\mf{E}$ is a function $t\mapsto X_t\in\mf{E},$ for $t\in J,$ with $J\subset\R^+$ an interval. The stochastic process $(X_t)_{t\in J}$ is {\em adapted to the filtration} if $X_t\in\mf{F}_t$ for all $t\in J.$ If $(X_t)$ is a stochastic process adapted to $({\mathbb F}_t, \mathfrak F_t)$, we call $(X_t, \mathfrak F_t,\F_t)$ a \emph{supermartingale} (respectively \emph{submartingale}) if ${\mathbb F}_t(X_s)\leq X_t$ (respectively ${\mathbb F}_t(X_s)\geq X_t$) for all $t\leq s$. If the process is both a sub- and a supermartingale, it is called a \emph{martingale}. The submartingale $(X_t,\mf{F}_t,\F_t)_{t\in J}$ is said to have a {\em Doob-Meyer decomposition} if $X_t=M_t+A_t$ with $(M_t)$ a martingale and $(A_t)$ a right-continuous increasing process. In~\cite{G1,G9} conditions can be found for a submartingale to have a unique Doob-Meyer decomposition. 
If $(X_t)$ is a martingale, the increasing process in the decomposition of the submartingale $(X_t^2)$ is denoted by $(\<X\>_t)$ and this process is called the {\em compensator } of the martingale $(X_t).$ The stochastic process $(X_t)_{t\in J=[a,b]}$ is called \textit{locally H\"older-continuous with exponent $\gamma$ (also $\gamma$-H\"older-continuous)} if there exists a number $\delta>0$ and a strictly positive orthomorphism $\mb{S}$ such that for all $s,t\in[a,b]$ satisfying $0<|t-s|\mb{I}\le\S$ on a band $\mf{C}$ one has $|X_t-X_s|\le\delta|t-s|^\gamma E \mbox{ on the band }\mf{C}.$ The maximal band for which this can hold for given $s$ and $t$ is the band $\mf{B}(|t-s|E\le\S E)=\{(|t-s|E-\S E)^+\}^d$ (see~\cite{G4,G5}). We note that if $\delta_n=s_n-t_n,$ then $(\delta_n E-\S E)^+\downarrow 0$ if $\delta_n\downarrow 0.$ Therefore $\mf{B}_n:=\mf{B}(\delta_n E\le\S E)\uparrow \mf{E}.$ If $(X_t)$ is a locally $\gamma$-H\"older continuous submartingale with Doob-Meyer decomposition $X_t=M_t+A_t,$ then $(A_t)$ is also locally $\gamma$-H\"older continuous (see~\cite{G10}). For $\pi=\{a=t_0<t_1<\cdots<t_n=t\}$ a partition of the interval $[a,t]$ with mesh $|\pi|,$ we put $ V_t^{(p)}(\pi):=\sum_{i=1}^n|X_{t_{i}}-X_{t_{i-1}}|^p.$ The element $ V_t^{(p)}(X):=\sup_{\pi}V_t^{(p)}(\pi)\in\mf{E}^s $ is called the {\em $(p)$-variation of $X$ on $[a,t].$} $X$ has {\em finite $(p)$-variation} on $[a,t]$ if $V_t^{(p)}(X)\in\mf{E}^u.$ We say that $X$ has {\em finite ${(p)}$-variation} if it has finite $(p)$-variation on $[a,t]$ for every $t\in[a,b]$ and if $V_b^{(p)}(X)\in\mf{E}^u$ we say that $X$ is of {\em bounded $(p)$-variation.} The function $t\mapsto V_t^{(p)}(X)$ is called the {\em total $(p)$-variation process of $X.$} If $p=2$ we call the variation the {\em quadratic variation}. 
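\medskip As a simple illustration of these notions, take the deterministic process $X_t:=tE,$ $t\in[a,b].$ For every partition $\pi$ of $[a,t]$ we have $$ V_t^{(1)}(\pi)=\sum_{i=1}^n(t_i-t_{i-1})E=(t-a)E, $$ so that $V_t^{(1)}(X)=(t-a)E,$ whereas $\sum_{i=1}^n(t_i-t_{i-1})^2\le(t-a)^2,$ with equality for the trivial partition $\{a,t\},$ so that $V_t^{(2)}(X)=(t-a)^2E.$ In particular, $X$ is of bounded $(p)$-variation for $p=1,2.$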
The component of $E$ in the band $\mf{B}(tE>X)$ is denoted by $E_t^\ell$ and $(E_t^\ell)_{t\in J}$ is an increasing left-continuous system, called the {\em left-continuous spectral system of $X.$} Also, if $\overline{E}^r_t$ is the component of $E$ in the band generated by $(X-tE)^+$ and $E^r_t:=E-\overline{E}^r_t,$ the system $(E^r_t)$ is an increasing right-continuous system of components of $E,$ called the {\em right-continuous spectral system} of $X$ (see~\cite{LZ,G2}). For the filtration $(\F_t,\mf{F}_t)_{t\in J},$ we denote by $\mf{P}_t$ the set of all order projections in the space $\mf{F}_t.$ A {\em stopping time} for this filtration is an orthomorphism $\mb{S}\in\orth(\mf{E})$ such that its right continuous spectral system $(\mb{S}^r_t)$ of projections satisfies $\mb{S}^r_t\in\mf{P}_t.$ If this holds for the left-continuous system, it is called an {\em optional time}. We refer the reader to~\cite{G2,G10} for a definition of the element $X_{\mb{T}}$ where $X$ is a submartingale adapted to the filtration $(\F_t,\mf{F}_t)_{t\in[a,b]}$ and $\T$ is a stopping time for the filtration. We note in this respect that the element $X_\T$ is defined first for an increasing process and then for a martingale and put together, for a submartingale. 
The process $(X_{t\wedge\T})$ is called the {\em stopped process.} If there exists a non-decreasing sequence $(\T_n)$ of stopping times such that $\T_n\uparrow b\I$ and such that for each $n$ the process $(X_{t\wedge \T_n})_{t\in[a,b]}$ is a martingale, then $(X_t)$ is called a {\em local martingale.} The stochastic process $(B_t,\mf{F}_t,\F_t)$ is called an $\F$-conditional {\em Brownian motion} if, for all $0\le s<t,$ the following conditions hold \begin{enumerate} \item[(1)] $B_0=0;$ \item[(2)] the increment $B_t-B_s$ is $\F$-conditionally independent of $\mf{F}_s;$ \item[(3)] $\F[(B_t-B_s)^2]=(t-s)E;$ \item[(4)] $\F[(B_t-B_s)^4]=3(t-s)^2E.$ \end{enumerate} An $\F$-conditional Brownian motion is a martingale that is locally H\"older continuous with exponent $\gamma$ for all $\gamma\in(0,\tfrac 14)$ and satisfies $\<B\>_t=tE$ (see~\cite[Definition 5.2]{G4}). The integrals we use are the stochastic integral with reference to a martingale and the Dobrakov integral, for a summary of which we refer the reader to~\cite{G9}. The latter integral is defined for an $\mc{L}^2$-valued function $X(t)$ with reference to an operator valued measure $\mu_A$ defined on the Borel $\sigma$-algebra $\mc{B}(J)$ with values in $L(\mc{L}^2,\mc{L}^1).$ The vector measure $\mu_A$ on $\mc{B}(J)$ is the extension of a measure defined on intervals by an integrable increasing right-continuous process $(A_t)_{t\in J},$ with $\mu_A(a,b]:=A_b-A_a,$ and we assume $A_t\in\mc{L}^2$ for all $t\in J.$ The operator is then the multiplication operator (with the product formed in the $f$-algebra $\mf{E}^u$) and will have values in $\mc{L}^1.$ We assume, as in~\cite{G9}, that all bounded intervals in $\R^+$ have finite semivariation.
The set of all Dobrakov integrable functions will be denoted by $L^1([a,b],\mu_A)$ and by $L^2([a,b],\mu_A)$ we denote the space of all $\mu_A$-integrable functions $X$ from $[a,b]$ in $\mf{E}^s$ satisfying $|\phi|\F\int_a^b |X|^2\,d\mu_A<\infty$ for all $\phi\in\mf{E}^\sim_{00}.$ If $(M_t,\mf{F}_t,\F_t)$ is a martingale with compensator $\<M\>,$ the closure of the set $\L$ of all simple predictable processes in $L^2([a,b],\mu_{\<M\>})$ is denoted by \\ $L^2_{\pred}([a,b],\<M\>).$ The It\^o integral $I^M(X)=\int_a^bX_t\,d\mu_{\<M\>}$ is defined for all $X\in L^2_{\pred}([a,b],\<M\>).$ Moreover, $I_t^M(X)=\int_a^tX_u\,d\mu_{\<M\>}$ is a martingale. The domain of $I^M$ can be extended further to the space $\mc{L}_{\pred}(L^2[a,b],\<M\>),$ but the resulting indefinite integral is no longer a martingale, but a {\em local martingale} (for the details, see~\cite{G9}). In the special case that $M$ is a Brownian motion, the space $\mc{L}_{\pred}(L^2[a,b],\<M\>)$ is denoted by $\mc{L}_{ad}(L^2[a,b]).$ \medskip \section{The cross-variation process} Let $\mf{E}$ be a Dedekind complete perfect vector lattice and assume that $\F$ is a conditional expectation defined on $\mf{E}$ and that $\mf{E}$ is $\F$-universally complete in $\mf{E}^u.$ We consider the set $\M_2$ of right order continuous martingales $(X_t)_{t\in[a,b]}$ satisfying the condition that $X_t\in\mc{L}^2$ for every $t\in[a,b].$ $\M_2^c$ will denote the set of order continuous martingales in $\M_2.$ For $X\in\M_2,$ we have that $X^2$ is a nonnegative submartingale and hence of class DL (see~\cite[Definition 7.3]{G1}). Therefore, by~\cite[Theorems 5.11 and 5.12]{G9}, $X^2$ has a unique Doob-Meyer decomposition $$ X^2=M+A, $$ where $M$ is a right continuous martingale and $A$ is a natural increasing process with $A_0=0.$ As was mentioned above, the compensator $(\lr{X}_t)$ of $X$ is defined to be the process $(A_t).$ For the properties of $\lr{X}$ inherited from those of $X$ we refer to~\cite[Theorem 3.3]{G10}.
\begin{definition}\label{cross-variation}\rm For martingales $X,Y\in\M_2$ we define their \textit{cross-variation process $\lr{X,Y}$} by $$ \lr{X,Y}_t:=\tfrac 14[\lr{X+Y}_t-\lr{X-Y}_t],\ \ 0\le t<\infty. $$ We say $X$ and $Y$ are {\em orthogonal} if $\lr{X,Y}_t=0$ for all $0\le t<\infty.$ \end{definition} \begin{proposition} $XY-\lr{X,Y}$ is a martingale. \end{proposition} {\em Proof.} A linear combination of martingales is again a martingale. Thus, if $X$ and $Y$ are in $\M_2,$ then both the processes $(X+Y)^2-\langle X+Y\rangle$ and $(X-Y)^2-\langle X-Y\rangle$ are martingales and so is their difference $4XY-\langle X+Y\rangle+\lr{X-Y}.$ It follows that $$ XY-\lr{X,Y}=XY-\tfrac 14[\lr{X+Y}-\lr{X-Y}] $$ is a martingale.\qed \begin{proposition}\label{uniqueness of cv} If $X,Y\in\M_2$ then $\lr{X,Y}$ is the only process of the form $A=A^{(1)}-A^{(2)},$ with $A^{(j)}$ adapted natural increasing processes, such that $XY-A$ is a martingale. In particular, $\lr{X,X}=\lr{X}.$ \end{proposition} {\em Proof.} By definition, $\lr{X,Y}$ is the difference of two processes as required and $XY-\lr{X,Y}$ is a martingale. This proves the existence of $A.$ To prove the uniqueness assume that $A=A^{(1)}-A^{(2)}$ and $B=B^{(1)}-B^{(2)}$ are two processes such that $XY-A=M$ and $XY-B=N$ are martingales and the processes $A^{(j)}$ and $B^{(j)}$ are adapted natural increasing processes. Hence, $M_t+A_t=N_t+B_t.$ Put $(C_t):=(A_t-B_t)=(N_t-M_t)$ for $t\in[a,b]$ and note that $(C_t)$ is a martingale. An inspection of the proof of the uniqueness of the Doob-Meyer decomposition in~\cite[Theorem 7.5]{G1} shows that $A_t=B_t$ for all $t.$ Finally, since $\lr{X}$ is an adapted natural increasing process satisfying the condition that $X^2-\lr{X}$ is a martingale, it follows that $\lr{X,X}=\lr{X}.$\qed \bigskip We call a process $A$ with the property that it is the difference of two adapted natural increasing processes $A^{(j)},$ as in the Proposition above, a \textit{regular process}.
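\medskip As a consistency check on Definition~\ref{cross-variation}, note that $(2X)^2-4\lr{X}=4(X^2-\lr{X})$ is a martingale, so that, by the uniqueness of the Doob-Meyer decomposition, $\lr{2X}=4\lr{X};$ clearly also $\lr{0}=0.$ Hence $$ \lr{X,X}=\tfrac 14[\lr{X+X}-\lr{X-X}]=\tfrac 14[4\lr{X}-0]=\lr{X}, $$ in accordance with Proposition~\ref{uniqueness of cv}.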
\bigskip \begin{proposition} For $X,Y\in\M_2$ the following identities hold: \begin{enumerate} \item[{(i)}] If $a\le s<t\le b,$ then \begin{align*} \F_s[(X_t-X_s)(Y_t-Y_s)] &= \F_s[(X_tY_t-X_sY_s)] \\ &= \F_s[\lr{X,Y}_t-\lr{X,Y}_s]. \end{align*} Hence, if $X$ and $Y$ are orthogonal, then $XY$ is a martingale and the increments of $X$ and $Y$ over $[s,t]$ are conditionally uncorrelated. \item[(ii)] If $a\le s<t\le u<v\le b,$ then $$ \F[(X_v-X_u)(Y_t-Y_s)]=0. $$ Thus, the expectation of products of increments of two martingales over non-overlapping intervals are zero. \item[(iii)] If $a\le u<v\le b$, then \begin{multline*} \F[(X_v-X_u)(Y_v-Y_u)-(\<X,Y\>_v-\<X,Y\>_u)]= \\ \F[(X_vY_v-\<X,Y\>_v)-(X_uY_u-\<X,Y\>_u)]=0. \end{multline*} It follows that the expectation of the product of terms of the form $(X_vY_v-\<X,Y\>_v)-(X_uY_u-\<X,Y\>_u)$ and of the form $(X_v-X_u)(Y_v-Y_u)-(\<X,Y\>_v-\<X,Y\>_u)$ taken over non-overlapping intervals are zero. \end{enumerate} \end{proposition} {\em Proof.} (i) For $a\le s<t\le b,$ \begin{align*} \F_s[(X_t-X_s)(Y_t-Y_s)] &=\F_s[X_tY_t-X_sY_t-X_tY_s+X_sY_s] \\ &=\F_s[X_tY_t]-X_sY_s-X_sY_s+X_sY_s \\ &=\F_s[X_tY_t-X_sY_s]. \end{align*} Put $M_t=X_tY_t-\lr{X,Y}_t.$ Then, since $M_t$ is a martingale, $$ \F_s[X_tY_t-X_sY_s]-\F_s[\lr{X,Y}_t-\lr{X,Y}_s]=\F_s[M_t-M_s]=0. $$ This proves (i). (ii) For $a\le s<t\le u<v\le b,$ \begin{multline*} \F[(X_v-X_u)(Y_t-Y_s)]=\F\{\F_u[(X_v-X_u)(Y_t-Y_s)]\}\\ =\F[(Y_t-Y_s)\F_u(X_v-X_u)]=0. \end{multline*} (iii) If $a\le u<v\le b$, the identity follows immediately from (i). The remaining statement follows then as in the proof of (ii). \qed \medskip \begin{proposition}\label{properties cross-variation} Let $X, Y, Z$ be elements of $\M_2.$ The following properties hold.
\begin{enumerate} \item[{(i)}] $\lr{\alpha X+\beta Y,Z}=\alpha\lr{X,Z}+\beta\lr{Y,Z}$ for all real numbers $\alpha, \beta.$ \item[{(ii)}] $\lr{X,Y}=\lr{Y,X}.$ \item[{(iii)}] $|\lr{X,Y}|^2\le\lr{X}\lr{Y}.$ \item[{(iv)}] Let $V_t^1(X)$ be the variation of $X$ over the interval $[a,t].$ Then, for $s<t,$ $$ V_t^1(\lr{X,Y})-V_s^1(\lr{X,Y})\le\tfrac 12[\lr{X}_t-\lr{X}_s+\lr{Y}_t-\lr{Y}_s]. $$ \end{enumerate} Thus, the cross-variation process defines a bilinear transformation on $\M_2\times\M_2.$ \end{proposition} {\em Proof.} We show firstly that (ii) holds. From $(-X)^2=X^2$ it follows that $\lr{-X}=\lr{X}.$ Consequently, $$ \lr{X,Y}-\lr{Y,X}=\tfrac 14[\lr{Y-X}-\lr{X-Y}]=\tfrac 14[\lr{Y-X}-\lr{Y-X}]=0, $$ i.e., $\lr{X,Y}=\lr{Y,X}.$ \medskip\noindent (i) $\lr{\alpha X+\beta Y,Z}$ is the unique regular process $A$ such that $(\alpha X+\beta Y)Z-A$ is a martingale. But, $\alpha\lr{X,Z}+\beta\lr{Y,Z}$ is a regular process $B$ such that $(\alpha X+\beta Y)Z-B=\alpha XZ+\beta YZ-B $ is a martingale. Hence, $A=B$ and (i) holds. \medskip\noindent (iii) Using (ii), we have \begin{align*} 0\le\lr{\alpha X+Y}&=\lr{\alpha X+Y,\alpha X+Y} \\ &=\alpha^2\lr{X,X}+2\alpha\lr{X,Y}+\lr{Y,Y}\\ &=\alpha^2\lr{X}+2\alpha\lr{X,Y}+\lr{Y}. \end{align*} Hence, for every $t\in[a,b],$ we have for all $\alpha\in\R$ that \begin{equation}\label{quadratic inequality} 0\le \alpha^2\lr{X}_t+2\alpha\lr{X,Y}_t+\lr{Y}_t. \end{equation} Let $\Omega$ be the Stone space of the Boolean algebra $\ms{B}_{\mf{E}}$ of all bands in $\mf{E}.$ Then $C^\infty(\Omega)$ is Riesz isomorphic to $\mf{E}^u,$ which is, as we remarked in the introduction, an $f$-algebra (in fact, it inherits its $f$-algebra structure from $C^\infty(\Omega)$). Thus $\mf{E}^u$ and $C^\infty(\Omega)$ are also isomorphic as $f$-algebras (see~\cite[Section 50]{LZ} and \cite[Chapter 7]{AB1}).
Identifying an element $X\in\mf{E}^u$ with its image, we consider $X$ to be a real function on $\Omega.$ Thus the inequality (\ref{quadratic inequality}) becomes an inequality involving real numbers, i.e., for every fixed $t\in[a,b]$ and $\omega\in\Omega,$ we have $$ 0 \le \alpha^2\lr{X}_t(\omega)+2\alpha\lr{X,Y}_t(\omega)+\lr{Y}_t(\omega) \mbox{ for all }\alpha\in\R. $$ This implies that for every fixed $t\in[a,b]$ and $\omega\in\Omega,$ we have \begin{equation} \lr{X,Y}_t(\omega)^2\le \lr{X}_t(\omega)\lr{Y}_t(\omega), \end{equation} i.e., $\lr{X,Y}_t^2\le \lr{X}_t\lr{Y}_t$ holds in $\mf{E}^u$ for all $t\in[a,b]$ and this proves (iii). \medskip\noindent (iv) For $a\le s<t\le b,$ we have \begin{align*} |\lr{X,Y}_t-\lr{X,Y}_s|&=\tfrac 14|(\lr{X+Y}_t-\lr{X-Y}_t)-(\lr{X+Y}_s-\lr{X-Y}_s)|\\ &=\tfrac 14|(\lr{X+Y}_t-\lr{X+Y}_s)+(\lr{X-Y}_s-\lr{X-Y}_t)|\\ &\le\tfrac 14 [|\lr{X+Y}_t-\lr{X+Y}_s|+|\lr{X-Y}_s-\lr{X-Y}_t|]\\ &=\tfrac 14 [(\lr{X+Y}_t-\lr{X+Y}_s)+(\lr{X-Y}_t-\lr{X-Y}_s)]\\ &=\tfrac 14 [(\lr{X+Y}_t+\lr{X-Y}_t)-(\lr{X+Y}_s+\lr{X-Y}_s)]. \end{align*} But, using (i) and (ii) and the fact that $\lr{X}=\lr{X,X},$ we get \begin{multline*} \lr{X+Y}=\lr{X+Y,X+Y}=\lr{X,X+Y}+\lr{Y,X+Y}\\ =\lr{X,X}+2\lr{X,Y}+\lr{Y,Y} =\lr{X}+2\lr{X,Y}+\lr{Y} \end{multline*} and also $\lr{X-Y}=\lr{X}-2\lr{X,Y}+\lr{Y}.$ Therefore, for $a\le s<t\le b,$ we have \begin{align} |\lr{X,Y}_t-\lr{X,Y}_s|&\le\tfrac 12 [(\lr{X}_t+\lr{Y}_t)-(\lr{X}_s+\lr{Y}_s)] \nonumber\\ &=\tfrac 12 [\lr{X}_t-\lr{X}_s+\lr{Y}_t-\lr{Y}_s]. \end{align} Let $V_t^1\lr{X,Y}$ be the total variation of $\lr{X,Y}$ over the interval $[a,t];$ then $V_t^1\lr{X,Y}-V_s^1\lr{X,Y}$ is the variation of $\lr{X,Y}$ over the interval $[s,t].$ For any partition $\pi=\{s=t_0<t_1<\cdots<t_n=t\}$ of $[s,t],$ we have \begin{align*} \sum_{k=1}^n|\lr{X,Y}_{t_k}-\lr{X,Y}_{t_{k-1}}| &\le\tfrac 12\left[\sum_{k=1}^n(\lr{X}_{t_k}-\lr{X}_{t_{k-1}})+\sum_{k=1}^n(\lr{Y}_{t_k}-\lr{Y}_{t_{k-1}})\right]\\ &=\tfrac 12 [\lr{X}_t-\lr{X}_s+\lr{Y}_t-\lr{Y}_s].
\end{align*} It follows that $V^1_t\lr{X,Y}-V^1_s\lr{X,Y}\le \tfrac 12 [\lr{X}_t-\lr{X}_s+\lr{Y}_t-\lr{Y}_s]$ and this completes the proof. \phantom{KOOOOOOOS}\qed \bigskip \begin{remark}\label{remark 3.6}\rm Considering the proof of (iii) above, we see that if $X\in\mf{E}^u$ and if $P(\alpha,X)$ is a proposition that holds for all $\alpha\in\R$ in the $f$-algebra $\mf{E}^u,$ then the proposition $P(\alpha Y, X)$ is also true for any given $Y\in\mf{E}^u.$ This follows by representing the elements as elements of $C^\infty(\Omega)$ for some compact topological space $\Omega;$ then the proposition $P(\alpha, X)$ holds for every $\alpha$ and every $\omega\in\Omega.$ Therefore, since $\alpha Y(\omega)$ is again a real number, one has that $P(\alpha Y(\omega), X(\omega))$ must also be true for every $\omega.$ Thus, the proposition holds for $\alpha$ replaced by $\alpha Y.$ \end{remark} \bigskip For the quadratic variation of a $\gamma$-H\"older continuous martingale $X,$ we have the formula \begin{equation}\label{eq3.4} \lim_{|\pi|\to 0}V_t^{(2)}(\pi,X)=\lim_{|\pi|\to 0} \sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})^2=\<X\>_t, \end{equation} with convergence in $\mc{L}^1$-conditional probability (see~\cite[Theorem 6.1]{G10} for the definition). We prove a similar characterization for the cross-variation process. Although the proof is similar to that of (\ref{eq3.4}), there are some technical points that warrant the inclusion of a proof. \begin{theorem}Let $X,Y$ be $\gamma$-H\"older continuous martingales. Then, \begin{equation} \lim_{|\pi|\to 0} \sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})=\<X,Y\>_t, \end{equation} where $\pi=\{t_0,t_1,\ldots,t_m\}$ is a partition of the interval $[a,t]$ with mesh $|\pi|$ and the convergence is in $\mc{L}^1$-conditional probability.
\end{theorem} {\em Proof.\ } Let $\ds m_t(Z;\pi):=\sup_{1\le k\le m}|Z_{t_k}-Z_{t_{k-1}}|$ for the stochastic process $(Z_t)$ and let \begin{equation*} CV_t(\pi,X,Y):=\sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}}). \end{equation*} Consider first the bounded case, i.e., suppose that for some $K>0,$ $$ \sup\limits_{s\in[a,b]}\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE. $$ Then, \begin{align*} &\F[CV_t(\pi,X,Y)-\<X,Y\>_t]^2 \\ &=\F\left[\sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})- (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})\right]^2 \\ &=\sum_{k=1}^m\F\left[(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})- (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})\right]^2 \\ &\le 2\sum_{k=1}^m\F\left[(X_{t_k}-X_{t_{k-1}})^2(Y_{t_k}-Y_{t_{k-1}})^2+ (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})^2\right] \\ &\le 2\sum_{k=1}^m\F\left[|X_{t_k}-X_{t_{k-1}}|(|X_{t_k}|+|X_{t_{k-1}}|)(Y_{t_k}-Y_{t_{k-1}})^2\right] \\ &\phantom{jacobus}+2\F m_t(\<X,Y\>;\pi)\sum_{k=1}^m|\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}}|\\ &\le 4K\F m_t(X;\pi)\sum_{k=1}^m(Y_{t_k}-Y_{t_{k-1}})^2 \\ &\phantom{jacobus}+2\F m_t(\<X,Y\>;\pi)\sum_{k=1}^m[(\<X\>_{t_k}-\<X\>_{t_{k-1}})+(\<Y\>_{t_k}-\<Y\>_{t_{k-1}})]\mbox{ \rm by (3.3)}\\ &\le 4K\F m_t(X;\pi)V_t^{(2)}(\pi, Y) +2\F m_t(\<X,Y\>;\pi)(\<X\>_{t_m}+\<Y\>_{t_m})\\ &\le 4K\F m_t(X;\pi)V_t^{(2)}(\pi,Y) + 4K\F m_t(\<X,Y\>;\pi). \end{align*} From \cite[Lemma 6.2]{G10}, we have that $\F[V_t^{(2)}(\pi,Y)]^2\le 6K^4E$ and so applying the Cauchy inequality to the first term, we get (as in the proof of the formula for one variable in \cite{G11}) that it is bounded by $$ 4K\sqrt{6K^4}\sqrt{\F m_t(X;\pi)^2}. $$ Now, if $|\pi|\to 0,$ the $\gamma$-H\"older continuity of $X$ and of $\lr{X,Y}$ implies that in both terms the factor $m_t$ tends to zero. Therefore, the right-hand side tends to zero and we have proved the desired result for bounded martingales. The extension to the general case now proceeds via localization, exactly as in the proof of the corresponding result for the quadratic variation (see~\cite[Theorem 6.1]{G10}).\qed
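In the classical, measure-theoretic special case the statement of the theorem is easy to test numerically. The following Python sketch is an illustration only and lies outside the vector-lattice framework of this paper; the correlation parameter $\rho$, the horizon $T$ and the mesh size are assumptions made solely for the example. It approximates $CV_T(\pi,X,Y)$ for two correlated Brownian motions, for which $\<X,Y\>_T=\rho T$, and also checks the polarization identity $CV_T(\pi,X,Y)=\tfrac 14[V_T^{(2)}(\pi,X+Y)-V_T^{(2)}(\pi,X-Y)]$ underlying the definition of the cross-variation.

```python
import numpy as np

# Classical (scalar, probabilistic) illustration: for Brownian motions X, Y
# built from correlated increments, the cross-variation sums converge to
# <X, Y>_T = rho * T as the mesh of the partition tends to zero.
rng = np.random.default_rng(0)
T, n, rho = 1.0, 200_000, 0.7     # illustrative assumptions
dt = T / n

dW1 = rng.normal(0.0, np.sqrt(dt), n)   # increments of a Brownian motion
dW2 = rng.normal(0.0, np.sqrt(dt), n)   # independent second driver
dX = dW1
dY = rho * dW1 + np.sqrt(1.0 - rho**2) * dW2

# CV_T(pi, X, Y): sum of products of increments over the partition
cv = float(np.sum(dX * dY))

# Polarization: CV = (V2(X+Y) - V2(X-Y)) / 4, with V2 the quadratic sums
pol = 0.25 * (float(np.sum((dX + dY) ** 2)) - float(np.sum((dX - dY) ** 2)))

print(cv)                       # close to rho * T = 0.7 for this fine mesh
assert abs(cv - rho * T) < 0.02
assert abs(cv - pol) < 1e-8     # polarization identity is exact per partition
```

Note that the polarization identity holds exactly for every partition (it is an algebraic identity in the increments), while the convergence of $CV_T$ to $\rho T$ is only a limit statement as $|\pi|\to 0$.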
\begin{corollary} If $\sup\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE,\ s\in[a,t],$ then $$ \F(CV_t(\pi,X,Y))^2\le 2K\F m_t(X;\pi)V_t^{(2)}(\pi,Y). $$ \end{corollary} \section{The cross-variation formula} Let $M=(M_t)_{t\in[a,b]}$ and $N=(N_t)_{t\in [a,b]}$ belong to the set $\mc{M}_2^c$ of continuous square integrable martingales and let $X\in L^2_{\pred}([a,b],\<M\>_t)$ and $Y\in L^2_{\pred}([a,b],\<N\>_t)$ (see~\cite[Definition 5.3]{G9}). Let $I^M_t(X):=\int_a^t X_s\,dM_s$ and $I^N_t(Y):=\int_a^t Y_s\,dN_s.$ Then we have, by~\cite[Theorem 6.5]{G9}, $$ \<I^M(X)\>_t=\int_a^tX_u^2\,d\<M\>_u \mbox{ and } \<I^N(Y)\>_t=\int_a^tY_u^2\,d\<N\>_u. $$ Our aim is to prove the cross-variation formula \begin{equation} \<I^M(X),I^N(Y)\>_t=\int_a^tX_uY_u\,d\<M,N\>_u,\ \ t\in[a,b]. \end{equation} We first consider the case where $X$ and $Y$ are simple predictable adapted processes. \begin{lemma} Let $M$ and $N$ be martingales in $\mc{M}^c_2$ and let $X$ and $Y$ be simple adapted predictable processes. Then, for all $a\le s<t\le b,$ we have \begin{equation} \F_s[(I_t^M(X)-I_s^M(X))(I_t^N(Y)-I_s^N(Y))]=\F_s\left[\int_s^t X_uY_u\,d\<M,N\>_u\right]. \end{equation} Consequently, \begin{equation} \<I^M(X),I^N(Y)\>_t=\int_a^tX_uY_u\,d\<M,N\>_u,\ \ t\in[a,b].
\end{equation} \end{lemma} {\em Proof.}\ Let $\pi=\{a=t_0<t_1<\cdots<t_n=b\}$ be a partition of the interval $[a,b],$ let $$ X_u=\sum_{i=1}^n X_{i-1}I_{(t_{i-1},t_i]}(u)\mbox{ and }Y_u=\sum_{i=1}^n Y_{i-1}I_{(t_{i-1},t_i]}(u), $$ with $X_i, Y_i\in\mf{F}_{t_i}.$ Let $a\le s<t\le b$ and suppose that $t_{k-1}\le s<t_k$ and $t_{\ell}\le t\le t_{\ell+1}.$ Then, using Remark~\ref{remark 3.6}(ii) and Proposition~\ref{proposition 3.4}, \begin{align*} &\F_s[(I_t^M(X)-I_s^M(X))(I_t^N(Y)-I_s^N(Y))]\\ &=\F_s[(X_{k-1}(M_{t_k}-M_s)+\sum_{i=k}^{\ell-1}X_i(M_{t_{i+1}}-M_{t_i})+X_\ell(M_t-M_{t_{\ell}}))\cdot\\ &\qquad \qquad \qquad\qquad \cdot(Y_{k-1}(N_{t_k}-N_s)+\sum_{i=k}^{\ell-1}Y_i(N_{t_{i+1}}-N_{t_i}) +Y_\ell(N_t-N_{t_{\ell}}))]\\ &=\F_s[X_{k-1}Y_{k-1}(M_{t_k}-M_s)(N_{t_k}-N_s)\\ &\qquad \qquad \qquad\qquad +\sum_{i=k}^{\ell-1}X_i Y_i(M_{t_{i+1}}-M_{t_i})(N_{t_{i+1}}-N_{t_i})+\\ & \ \ \ \qquad \qquad \qquad\qquad \qquad\qquad +X_\ell Y_\ell(M_t-M_{t_{\ell}})(N_t-N_{t_{\ell}})] \end{align*} \begin{align*} &=\F_s[X_{k-1}Y_{k-1}(\<M,N\>_{t_k}-\<M,N\>_{s})\\ &\qquad \qquad \qquad\qquad +\sum_{i=k}^{\ell-1}X_i Y_i(\<M,N\>_{t_{i+1}} -\<M,N\>_{t_i})+\\ & \ \ \ \qquad \qquad \qquad\qquad \qquad\qquad +X_\ell Y_\ell(\<M,N\>_t-\<M,N\>_{t_{\ell}})]\\ &=\F_s\left[\int_s^t X_uY_u\,d\<M,N\>_u\right]. \end{align*} But, by Proposition~\ref{proposition 3.4}, $$ \F_s[(I_t^M(X)-I_s^M(X))(I_t^N(Y)-I_s^N(Y))]=\F_s[(I_t^M(X)I_t^N(Y)-I_s^M(X)I_s^N(Y))], $$ from which we conclude that $I_t^M(X)I_t^N(Y)-\int_a^tX_uY_u\,d\<M,N\>_u$ is a martingale. It follows from Proposition~\ref{uniqueness of cv} that \begin{equation*} \<I^M(X),I^N(Y)\>_t=\int_a^tX_uY_u\,d\<M,N\>_u,\ \ t\in[a,b].\tag*{\qed} \end{equation*} \medskip \begin{corollary} Let $M$ and $N$ be martingales in $\mc{M}^c_2$ and let $X$ be a simple predictable process. Then, \begin{equation} \<I^M(X),N\>_t=\int_a^tX_u\,d\<M,N\>_u,\ \ t\in[a,b].
\end{equation} \end{corollary} {\em Proof.\ } Take $Y_t=E,$ $t\in[a,b],$ in the lemma.\qed \bigskip In order to extend the result to the general case, we need the following inequality due to Kunita and Watanabe~\cite{KW} (see also~\cite[Proposition 3.2.14]{KS}). \begin{theorem}{ \rm (Kunita-Watanabe inequality)}\label{Kunita Watanabe} If $M, N\in\mc{M}_2^c$ and if $X\in L^2_{\pred}([a,b],\<M\>_t)$ and $Y\in L^2_{\pred}([a,b],\<N\>_t),$ then \begin{equation} \left(\int_a^t|X_sY_s|\,dV^1_s\right)^2 \le\left(\int_a^tX_u^2\,d\<M\>_u\right)\left(\int_a^tY_u^2\,d\<N\>_u\right), \ t\in[a,b], \end{equation} where $V^1_s$ denotes the total variation of $\<M,N\>$ over the interval $[a,s].$ \end{theorem} {\em Proof.}\ By definition there exist nets of simple predictable processes $(X_{\gamma})$ converging to $X$ and $(Y_{\delta})$ converging to $Y$ in $L^2([a,b],\<M\>)$ and $L^2([a,b],\<N\>)$ respectively. For any fixed pair $(\gamma,\delta),$ write the processes $X_{s,\gamma}$ and $Y_{s,\delta},$ $a\le s\le t,$ with respect to the same partition $\pi=\{a=s_0<s_1<\cdots<s_n=t\}$ as follows $$ X_{s,\gamma}=\sum_{i=1}^nX_iI_{(s_{i-1},s_i]}(s),\ Y_{s,\delta}=\sum_{i=1}^nY_iI_{(s_{i-1},s_i]}(s). $$ From Proposition~\ref{properties cross-variation} (iv), we have, replacing $M$ by $\alpha M$ and $N$ by $\beta N$ for arbitrary real numbers $\alpha$ and $\beta,$ \begin{align*} \alpha\beta (V^1_{s_i}-V^1_{s_{i-1}})&\le|\alpha||\beta|(V^1_{s_i}-V^1_{s_{i-1}})\\ &\le\frac{1}{2}[\alpha^2(\<M\>_{s_i}-\<M\>_{s_{i-1}})+\beta^2(\<N\>_{s_i}-\<N\>_{s_{i-1}})]. \end{align*} But then, using Remark~\ref{remark 3.6}, we get by replacing $\alpha$ by $\alpha |X_i|$ and $\beta$ by $|Y_i|,$ $$ 2\alpha|X_iY_i|(V^1_{s_i}-V^1_{s_{i-1}})\le\alpha^2|X_i|^2(\<M\>_{s_i}-\<M\>_{s_{i-1}}) +|Y_i|^2(\<N\>_{s_i}-\<N\>_{s_{i-1}}).
$$ Summing both sides of this inequality over the partition yields, for all $\alpha\in\R,$ \begin{align*} 2\alpha\int_a^t|X_{s,\gamma}Y_{s,\delta}|\,dV^1_s \le\alpha^2\int_a^t|X_{s,\gamma}|^2\,d\<M\>_s+\int_a^t|Y_{s,\delta}|^2\,d\<N\>_s. \end{align*} This implies that $$ \left[\int_a^t|X_{s,\gamma}Y_{s,\delta}|\,dV^1_s\right]^2\le \int_a^t|X_{s,\gamma}|^2\,d\<M\>_s\cdot\int_a^t|Y_{s,\delta}|^2\,d\<N\>_s. $$ Hence, since $X_{s,\gamma}\to X_s$ and $Y_{s,\delta}\to Y_s,$ we get the required result.\qed \bigskip \begin{lemma} If $M,N\in\mc{M}_2^c,$ if $X\in L^2_{\pred}([a,b],\<M\>)$ and if $(X_\alpha)$ is a net in $L^2_{\pred}([a,b],\<M\>)$ such that for some $t\in [a,b]$ $$ \lim_{\alpha}\int_a^t|X_{\alpha,u}-X_u|^2\,d\<M\>_u=0 \mbox{ in order}, $$ then $$ \lim_{\alpha}\<I(X_\alpha),N\>_s=\<I(X),N\>_s, \ a\le s\le t \mbox{ in order}. $$ \end{lemma} {\em Proof.}\ It follows from Proposition~\ref{properties cross-variation} (iii), that for $a\le s\le t,$ we have \begin{align*} |\<I(X_\alpha)-I(X),N\>_s|^2&\le \<I(X_\alpha)-I(X)\>_s\<N\>_s \\ &\le \int_a^t|X_{\alpha,u}-X_u|^2\,d\<M\>_u\cdot\<N\>_t. \end{align*} If $0\le a_\alpha$ in an $f$-algebra and $a_\alpha^2\to 0$ in order, then $a_\alpha\to 0$ in order. It follows that $\lim_{\alpha}\<I(X_\alpha),N\>_s=\<I(X),N\>_s \mbox{ in order, for }a\le s\le t.$ This completes the proof.\qed \bigskip As was mentioned, if $\mf{E}$ is $\F$-universally complete in $\mf{E}^u,$ then, for each $\phi\in\Phi,$ we have that the seminorms $p_\phi$ and $q_\phi$ are complete norms when restricted to the carrier band of $\phi$ (see~\cite[Proposition 4.4]{G9}). Therefore, if $(X_\alpha)$ is a net in $\mc{L}^1$ that converges to an element $X\in \mc{L}^1$ and if $\P_\phi$ is the projection onto the carrier band of $\phi,$ then $(X_\alpha)$ has a subsequence $(X_{\alpha_n})$ such that $p_\phi(X_{\alpha_n}-X)$ converges to $0.$ But, if a sequence converges in norm, it has a subsequence that converges in order to the limit element.
Thus, for every fixed $\phi\in\Phi$ there exists a subsequence of the net $(X_\alpha)$ (depending on $\phi$) that converges in order to $X.$ We will use that in the proof of the next lemma. \begin{lemma} If $M,N\in\mc{M}^c_2$ and $X\in L^2_{\pred}([a,b],\<M\>),$ then \begin{equation} \<I^M(X),N\>_t=\int_a^t X_u\,d\<M,N\>_u,\ \ a\le t\le b. \end{equation} \end{lemma} {\em Proof.}\ Fix an element $\phi\in\Phi.$ Let $(X_\alpha)$ be a net of simple predictable processes that converges in $L^2_{\pred}([a,b],\<M\>)$ to $X.$ Thus, $$ \overline{q}_\phi(X_{\alpha}-X)^2=|\phi|\F\left(\int_a^b|X_{\alpha,u}-X_u|^2\,d\<M\>_u\right)\to 0. $$ With $Y_\alpha:= \int_a^b|X_{\alpha,u}-X_u|^2\,d\<M\>_u,$ this means that $p_\phi(Y_\alpha)\to 0.$ By our remark before the lemma, this means that there exists a subsequence $(Y_{\alpha_n})$ that converges to $0$ in $\mc{L}^1_\phi$ and consequently a subsequence (that we will again denote by $(Y_{\alpha_n})$) that converges in order to zero on the carrier band of $\phi.$ Thus, by the definition of $Y_\alpha,$ we have that there exists a sequence of simple predictable processes $(X_{\alpha_n})$ such that $$ \P_\phi\left(\int_a^b |X_{\alpha_n,u}-X_u|^2\,d\<M\>_u\right)\to 0\ \mbox{in order} $$ and consequently, also for any $t\in[a,b],$ we have $\P_\phi(\int_a^t |X_{\alpha_n,u}-X_u|^2\,d\<M\>_u)\to 0\ \mbox{in order}.$ By Lemma~\ref{lemma 4.4}, we have that $$ \lim_{n\to\infty}\P_\phi\<I(X_{\alpha_n}),N\>_s=\P_\phi\<I(X),N\>_s \mbox{ in order, for }a\le s\le t. $$ But, by Corollary~\ref{4.2}, we have that for each $n$ \begin{equation} \<I^M(X_{\alpha_n}),N\>_t=\int_a^tX_{\alpha_n,u}\,d\<M,N\>_u,\ \ t\in[a,b].
\end{equation} Considering the right hand side, we find by the Kunita-Watanabe inequality that for $t\in[a,b],$ $$ \left(\int_a^t|X_{\alpha_n,u}-X_u|\,dV^1_u\right)^2 \le\left(\int_a^t|X_{\alpha_n,u}-X_u|^2\,d\<M\>_u\right)\<N\>_t $$ and by what was proved above, we get that $$ \P_\phi\left|\int_a^t|X_{\alpha_n,u}-X_u|\,d\<M,N\>_u\right|\le\P_\phi\int_a^t|X_{\alpha_n,u}-X_u|\,dV^1_u\to 0\mbox{ in order}. $$ It follows from letting $n$ tend to infinity in Equation~(\ref{equation 4.7}), that the left hand side converges in order to $\<I^M(X),N\>_t$ on the carrier band of $\phi$ by Lemma~\ref{lemma 4.4}, and the right hand side converges in order to $\int_a^tX_u\,d\<M,N\>_u$ on the carrier band of $\phi$ by the above estimate. Thus, $$ \P_\phi\<I^M(X),N\>_t=\P_\phi\int_a^tX_u\,d\<M,N\>_u $$ and this holds for every fixed $\phi.$ This completes the proof.\qed \medskip \begin{theorem} If $M,N\in\mc{M}_2^c,$ $X\in L^2_{\pred}([a,b],\<M\>),$ $Y\in L^2_{\pred}([a,b],\<N\>),$ then \begin{equation} \<I^M(X),I^N(Y)\>_t=\int_a^t X_uY_u\,d\<M,N\>_u,\ t\in[a,b], \end{equation} and equivalently, \begin{equation} \mb{F}_s[(I^M_t(X)-I^M_s(X))(I^N_t(Y)-I^N_s(Y))]=\mb{F}_s\left[\int_s^t X_uY_u\,d\<M,N\>_u\right]. \end{equation} \end{theorem} {\em Proof.}\ The preceding lemma, with the roles of $M$ and $N$ interchanged, states that $$ d\<M,I^N(Y)\>_u=Y_u\,d\<M,N\>_u,\ \ a\le u\le b.
$$ Replacing $N$ in Lemma~\ref{lemma 4.5} by $I^N(Y),$ we have \begin{align*} \<I^M(X),I^N(Y)\>_t&=\int_a^t X_u\,d\<M,I^N(Y)\>_u \\ &=\int_a^t X_uY_u\,d\<M,N\>_u, \end{align*} where we formally replaced $d\<M,I^N(Y)\>_u$ in the integral by $Y_u\,d\<M,N\>_u.$ To see that this can be done, note that if $$ \sum_{i=1}^n X_i\mu_{\<M,I^N(Y)\>}(S_i)=\sum_{i=1}^n X_i\int_{S_i}Y_u\,d\<M,N\>_u $$ is an approximating sum for the integral $\int_a^t X_u\,d\<M,I^N(Y)\>_u,$ and if $$ \sum_{j=1}^{n_i} Y_j\mu_{\<M,N\>}(S_{ij}) $$ is an approximating sum for the integral $\int_{S_i}Y_u\,d\<M,N\>_u,$ then $$ \sum_{i=1}^n\sum_{j=1}^{n_i}X_iY_j\mu_{\<M,N\>}(S_{ij}) $$ is an approximating sum for the integral $\int_a^t X_u\,d\<M,I^N(Y)\>_u$ as well as for the integral $\int_a^t X_uY_u\,d\<M,N\>_u.$ Therefore the two integrals are equal.\qed \medskip \begin{theorem}\label{calculus 1} Let $M$ be a continuous martingale and let $X\in L^2_{\pred}([a,b],\<M\>).$ The It\^o integral $I^M(X)$ is the unique continuous martingale $\Psi$ that satisfies \begin{equation} \<\Psi,N\>_t=\int_a^t X_u\,d\<M,N\>_u,\ t\in[a,b], \end{equation} for every continuous martingale $N.$ \end{theorem} {\em Proof.}\ We know that $I^M(X)$ satisfies this condition by Lemma~\ref{lemma 4.5}. Suppose that $\Psi$ also satisfies the condition for all continuous martingales $N.$ Then $\<\Psi-I^M(X),N\>=0$ for all continuous martingales $N.$ Replacing $N$ by $\Psi-I^M(X),$ we get $$ \<\Psi-I^M(X)\>=\<\Psi-I^M(X),\Psi-I^M(X)\>=0. $$ But, this means that the quadratic variation of the martingale $\Psi-I^M(X)$ is zero (see~\cite{G10}). Consequently $\Psi-I^M(X)=0$ and we are done.\qed \bigskip It is important to note that the calculus used (and proved) in Theorem~\ref{theorem 4.6} also holds for the It\^o integral. Thus, in the ``stochastic differential'' notation, if $dN=X\,dM,$ then $Y\,dN=XY\,dM.$ This is the content of the next theorem.
\begin{theorem} Let $M$ be a continuous martingale and let $X\in L^2([a,b],\<M\>).$ Let $N:=I^M(X)$ and suppose furthermore that $Y\in L^2([a,b],\<N\>).$ Then $XY\in L^2([a,b],\<M\>)$ and $I^N(Y)=I^M(XY).$ \end{theorem} {\em Proof.}\ By assumption, we have for the martingale $N,$ that $N_t=\int_a^t X_u\,dM_u=I^M(X)_t.$ By the properties of the It\^o integral, this implies that for the quadratic variation of $N,$ we have $$ \<N\>_t=\int_a^t X_u^2\,d\<M\>_u. $$ Thus, $d\<N\>=X^2\,d\<M\>.$ Applying the calculus in Theorem~\ref{theorem 4.6}, we get $$ \phi\F\int_a^b X_u^2Y_u^2\,d\<M\>_u=\phi\F\int_a^bY_u^2\,d\<N\>_u<\infty,\mbox{ for all $\phi\in\Phi.$ } $$ This shows that $XY\in L^2([a,b],\<M\>).$ For any continuous martingale $\tilde{N},$ we have by Lemma~\ref{lemma 4.5} that \begin{align*} \<I^M(XY),\tilde{N}\>_t&=\int_a^t X_uY_u\,d\<M,\tilde{N}\>_u\\ &=\int_a^t Y_u\,d\<N,\tilde{N}\>_u\\ &=\<I^N(Y),\tilde{N}\>_t. \end{align*} Therefore, $I^M(XY)=I^N(Y)$ follows from the uniqueness part of Theorem~\ref{calculus 1}. \qed \section{Exponential processes} Exponential processes play an important role in Girsanov's theorem. Given a stochastic process $(X_t)_{t\in[0,a]}$ and a Brownian motion $(B_t)_{t\in[0,a]},$ both adapted to the filtration $(\mf{F}_t,\F_t),$ the exponential process $Z_t(X)$ is a transformation of $X$ that is a local martingale and, under certain conditions, a martingale. We recall from~\cite[Section 3]{G8} and~\cite[Section 7]{G9}, that $\mc{L}_{ad}(L_2[0,a])=\mc{L}_{pred}(L^2[0,a],\<B\>).$ \begin{definition}({\rm\cite[Definition 8.7.1, p 137]{Kuo} and~\cite[Problem 3.2.28]{KS}})\label{definition 5.1a} The exponential process given by $X\in \mc{L}_{ad}(L_2[0,a])$ is defined to be the stochastic process \begin{equation} Z_t(X)=\exp\left[\int_0^t X_s\,dB_s-\frac{1}{2}\int_0^t X_s^2\,ds\right],\ \ 0 \le t \le a.
\end{equation} \end{definition} In order to prove the main property of exponential processes (Theorem~\ref{theorem 5.2} below), we need the following fact that is interesting in its own right. \begin{lemma} Let $(\mf{F}_t,\F_t)_{t\in[0,a]}$ be a filtration and let $(B_t,\mf{F}_t)$ be a Brownian motion. If $(X_t,\mf{F}_t)_{t\in[0,a]}$ is a bounded stochastic process, then the martingale $$ N_t:=\int_0^t X_s\,dB_s,\ t\in[0,a], $$ satisfies \begin{equation} \F(N_v-N_u)^4\le C(v-u)^2\ \mbox{ for all $0\le u<v\le a.$} \end{equation} Therefore, the process $(N_t)$ is $\gamma$-H\"older continuous for $\gamma\in (0,\tfrac 14).$ \end{lemma} {\em Proof.} We first prove that the inequality holds for the simple adapted step process $$ X_t = \sum^r_{i=1} X_{i-1}1_{(t_{i-1},t_i]}(t), $$ with $X_{i-1}\in\mf{F}_{t_{i-1}}$ and $0=t_0<t_1<\cdots<t_r=t.$ In this case, for $0\le u<v\le t,$ we have \begin{align*} &\F(N_v-N_u)^4\\ &=\F\left(\int_u^v X_s\,dB_s\right)^4 \\ &=\F\left(X_{k-1}(B_{t_k}-B_{u})+ \sum^{\ell-1}_{i=k+1}X_{i-1}(B_{t_i} - B_{t_{i-1}})+ X_{\ell-1}(B_v - B_{t_{\ell-1}})\right)^4. \end{align*} Writing, for notational convenience, $u=t_{k-1}$ and $v=t_\ell,$ we have \begin{multline} \F(N_v-N_u)^4 =\F\left(\sum^{\ell}_{i=k}X_{i-1}(B_{t_i} - B_{t_{i-1}})\right)^4 \\ =\F\big(\sum_{i,j,m,n=k}^\ell X_{i-1}X_{j-1}X_{m-1}X_{n-1} (B_{t_i} - B_{t_{i-1}})(B_{t_j} - B_{t_{j-1}})\cdot \\ \cdot(B_{t_m} - B_{t_{m-1}})(B_{t_n} - B_{t_{n-1}})\big).
\end{multline} Since $(B_t)$ is a Brownian motion, we have for $u<v$, that $\F(B_v-B_u)=\F(B_v-B_u)^3=0,\ \F(B_v-B_u)^2=(v-u)E,$ and $\F(B_v-B_u)^4=3(v-u)^2E.$ By considering the intervals in the following equations to be non-overlapping and in increasing order, we have that \begin{multline*} \F(X_{i-1}X_{j-1}X_{m-1}X_{n-1} (B_{t_i} - B_{t_{i-1}})(B_{t_j}-B_{t_{j-1}})\cdot\\ \cdot(B_{t_m}-B_{t_{m-1}})(B_{t_n}-B_{t_{n-1}}))\\ =\F\F_{t_{n-1}}(X_{i-1} (B_{t_i}-B_{t_{i-1}})X_{j-1}(B_{t_j}-B_{t_{j-1}})\cdot \\ \phantom{MMMM}\cdot X_{m-1}(B_{t_m}-B_{t_{m-1}})X_{n-1}(B_{t_n}-B_{t_{n-1}}))\\ =\F(X_{i-1} (B_{t_i}-B_{t_{i-1}})X_{j-1}(B_{t_j}-B_{t_{j-1}})\cdot \\\cdot X_{m-1}(B_{t_m}-B_{t_{m-1}})X_{n-1}\F_{t_{n-1}}(B_{t_n}-B_{t_{n-1}}))=0. \end{multline*} Similarly, using $\F=\F\F_{t_{n-1}},$ we have \begin{align*} &\F(X_{i-1}^2X_{m-1}X_{n-1} (B_{t_i} - B_{t_{i-1}})^2(B_{t_m}-B_{t_{m-1}})(B_{t_n}-B_{t_{n-1}}))\\ &=\F(X_{i-1}X_{m-1}^2X_{n-1} (B_{t_i} - B_{t_{i-1}})(B_{t_m}-B_{t_{m-1}})^2(B_{t_n}-B_{t_{n-1}}))\\ &=\F(X_{i-1}^3X_{n-1} (B_{t_i} - B_{t_{i-1}})^3(B_{t_n}-B_{t_{n-1}}))\\ &=\F(X_{i-1}X_{n-1}^3 (B_{t_i} - B_{t_{i-1}})(B_{t_n}-B_{t_{n-1}})^3)\\ &=0 \end{align*} and, with $|X_t|\le C$ for all $t,$ \begin{align*} &\F(X_{i-1}X_{m-1}X_{n-1}^2 (B_{t_i}-B_{t_{i-1}})(B_{t_m}-B_{t_{m-1}})(B_{t_n}-B_{t_{n-1}})^2)\\ &=\F\F_{t_{n-1}}( X_{i-1}(B_{t_i}-B_{t_{i-1}})X_{m-1}(B_{t_m}-B_{t_{m-1}})X_{n-1}^2(B_{t_n}-B_{t_{n-1}})^2)\\ &=\F(X_{i-1}(B_{t_i}-B_{t_{i-1}})X_{m-1}(B_{t_m}-B_{t_{m-1}})X_{n-1}^2(t_n-t_{n-1}))\\ &\le C^2(t_n-t_{n-1})\F(X_{i-1}(B_{t_i}-B_{t_{i-1}})X_{m-1}(B_{t_m}-B_{t_{m-1}}))\\ &=0. 
\end{align*} It follows that in (\ref{equation 5.2a}) the only non-zero terms are those in which each Brownian increment occurs to an even power, i.e., the fourth powers and the products of two squared increments, each unordered pair $\{i,j\},$ $i\ne j,$ arising from three pairings. Hence, \begin{align*} &\F(N_v-N_u)^4 \\ &=\F\big(\sum_{i=k}^\ell X_{i-1}^4(B_{t_i} - B_{t_{i-1}})^4 +3\sum_{\substack{i,j=k\\ i\ne j}}^\ell X_{i-1}^2X_{j-1}^2(B_{t_i} - B_{t_{i-1}})^2(B_{t_j} - B_{t_{j-1}})^2\big)\\ &\le C^4\big(\sum_{i=k}^\ell \F(B_{t_i} - B_{t_{i-1}})^4 +3\sum_{\substack{i,j=k\\ i\ne j}}^\ell \F\big[(B_{t_i} - B_{t_{i-1}})^2(B_{t_j} - B_{t_{j-1}})^2\big]\big)\\ &=C^4\big(\sum_{i=k}^\ell 3(t_i-t_{i-1})^2+3\sum_{\substack{i,j=k\\ i\ne j}}^\ell(t_i - t_{i-1})(t_j-t_{j-1})\big)E\\ &=3C^4\big(\sum_{i=k}^\ell(t_i-t_{i-1})\big)^2E=3C^4(v-u)^2E. \end{align*} We have thus shown that \begin{equation} \F\left(\int_u^v X_s\,dB_s\right)^4\le 3C^4(v-u)^2E \end{equation} whenever $(X_s)$ is a bounded simple adapted stochastic process. Now, if $(X_s)$ is a bounded adapted stochastic process (and therefore a bounded element of $L^2_{\text{ad}}([0,b],\mc{L}^2)$) with $|X_s|\le C,$ there exists a net $(X^\alpha_s)$ of bounded simple processes with $|X_\alpha|\le C$ for all $\alpha,$ that converges to $(X_s)$ and such that the It\^o integrals $I(X^\alpha)$ converge in $\mc{L}^2$ to $I(X)$ (see~\cite[Lemma 4.7]{G7}). But then, for every $0<t\le b,$ we have $\F(I_t(X^\alpha))$ converges to $\F(I_t(X)).$ It follows from equation (\ref{equation 5.3a}) applied to $X^\alpha$ that equation (\ref{equation 5.3a}) also applies to $X.$ The final conclusion follows from the Kolmogorov-\u{C}entsov theorem (see~\cite{G4,G5}).\qed \bigskip \begin{theorem} Let $X\in \mc{L}_{ad}(L_2[0,a])$ and let $(B_t)$ be a Brownian motion with respect to the filtration $(\mf{F}_t).$ For $t\in[0,a],$ let $$ Y_t:=\int_0^t X_s\,dB_s-\tfrac 12\int_0^tX_s^2\,ds. $$ If $$ Z_t(X)=\exp(Y_t),\ 0\le t\le a, $$ then \begin{equation} Z_t(X)=E+\int_0^tZ_sX_s\,dB_s.
\end{equation} \end{theorem} {\em Proof.} Write $(Y_t)_{t\in[0,a]}$ as $$ Y_t=N_t+A_t $$ with $N_t=\int_0^tX_s\,dB_s$ and $A_t=-\tfrac 12\int_0^tX_s^2\,ds.$ Then, since $X\in \mc{L}_{ad}(L_2[0,a]),$ the process $N$ is a local martingale by~\cite[Theorem 4.2]{G8}. \medskip We first assume that the processes $(X_t)$ and $(N_t)$ are bounded and then complete the proof by the process of localization~(see~\cite[Section 5]{G10} and \cite[Section 3.2.2]{G11}). Thus we assume that both $|X_t|\le CE$ and $|N_t|\le CE$ for some positive number $C.$ From the first inequality, we have that $(X_s)\in L^2_{ad}([0,t],\mc{L}^2),$ which implies that $N_t=I(X)_t$ is a continuous martingale which is, by Lemma~\ref{Lemma 5.2aa}, $\gamma$-H\"older continuous. Also, this inequality implies that $|A_t|\le\tfrac 12 C^2tE\le \tfrac 12 aC^2E$ and, together with the second inequality, it follows that $(Y_t)$ is uniformly bounded in $\mf{E}_E.$ Hence, also $(Z_t)$ is uniformly bounded and so $Z_t\in L^2_{\pred}([0,t],\<N\>)$ (see~\cite[Proposition 3.6]{G11}). \medskip From the second inequality, we have that $N_t\in L^2_{\pred}([0,t],\<N\>)$ (see~\cite[Proposition 3.6]{G11}). Applying Theorem~\ref{theorem 4.8}, we have that $ZX\in L^2([0,t],\mc{L}^2)$ and $\int_0^t Z_s\,dN_s=\int_0^tZ_sX_s\,dB_s.$ Also, by its definition, $dA_s=-\tfrac 12 X_s^2\,ds.$ \medskip Applying It\^o's formula~(see \cite{G11}) for the function $\theta(x)=e^x,$ we get that \begin{align} Z_t&=Z_0+\int_0^t\theta'(Y_s)dN_s+\int_0^t \theta'(Y_s)\,dA_s+\tfrac 12\int_0^t \theta''(Y_s)\,d\<N\>_s \nonumber\\ &=E+\int_0^t Z_s X_s\,dB_s-\tfrac 12\int_0^t Z_sX_s^2\,ds+\tfrac 12\int_0^t Z_sX_s^2\,ds\nonumber\\ &=E+\int_0^t Z_s X_s\,dB_s, \end{align} and we note in passing that in this special case, we have, by \cite[Theorem 4.13]{G7}, that $(Z_t)$ is a martingale. This proves the theorem for the bounded case. \medskip We proceed to the general case by the process of localization (see~\cite[Section 5]{G10}).
Let $$ \mf{B}_t^n:=\bigcup_{s<t}\left[\mf{B}(|N_s|> nE)\cup\mf{B}(|X_s| > nE)\cup\mf{B}(|Z_s| > nE)\right], $$ and let $\P^{(n)}_t$ be the projection onto $\mf{B}_t^n.$ For each fixed $n,$ the system of band projections $(\P_t^{(n)},\I)_{t\in[0,a]}$ is a right continuous increasing system of projections. Thus, as in the paper cited above, it defines a stopping time $\S_n$ of the filtration $(\mf{F}_t)$ and $\S_n\uparrow a\I$ (see~\cite[Proposition 5.1]{G10}). Note that $\P_t^{(n)}$ is the projection onto the band $\mf{B}(tE>\S_nE).$ We denote $\I-\P_t^{(n)}$ by $\Q_t^{(n)},$ the projection onto the band $\mf{B}(tE\le \S_nE).$ For every fixed $n,$ we have \begin{equation} \Q_t^{(n)}(|Z_t|)=\Q_t^{(n)}(|Z_{t\wedge\S_n}|)\le nE \mbox{ and }\Q_t^{(n)}(|X_t|)=\Q_t^{(n)}(|X_{t\wedge\S_n}|)\le nE. \end{equation} Consequently, $\Q_t^{(n)}(|Z_tX_t|)\le n^2E$ which implies that $$ \Q_t^{(n)}(Z_tX_t)\in L^2_{\pred}([0,t],\mc{L}^2). $$ But then, since $\S_n\uparrow a\I,$ we have $\Q_t^{(n)}\uparrow \I$ and $\P_t^{(n)}\downarrow 0$ and so, in $\mf{E}^u,$ it follows that $$ Z_tX_t-\Q_t^{(n)}(Z_tX_t)=\P_t^{(n)}Z_tX_t\to 0. $$ Thus $Z_tX_t$ is the order limit in $\mf{E}^u$ of a sequence of elements belonging to $L^2([0,t],\mc{L}^2)$ and therefore belongs to $\mc{L}_{ad}(L^2[0,t]).$ This shows that the integral $\int_0^t Z_sX_s\,dB_s$ exists and is a local martingale. We next claim that $N_{t\wedge \S_n}$ is a martingale. First note that $A_{t\wedge\S_n}$ is bounded by a constant: we have \begin{align*} |A_{t\wedge \S_n}|=\tfrac 12\int_0^{t\wedge\S_n}X_s^2\,ds =\tfrac 12\int_0^t\Q^{(n)}_sX_s^2\,ds \le\tfrac 12 n^2\int_0^t\,ds\le\tfrac 12 n^2aE. \end{align*} Also, $(\Q^{(n)}_tX_t)$ is bounded and so belongs to $L^2([0,t],\mc{L}^2),$ which in turn shows that, for every $n,$ $$ M^{(n)}_t:=N_{t\wedge\S_n}=\int_0^{t\wedge\S_n}X_s\,dB_s=\int_0^t\Q^{(n)}_sX_s\,dB_s $$ is a martingale.
\medskip {\em We claim that $(M^{(n)}_t)$ is a bounded martingale for each fixed $n.$} \medskip Since $n$ is fixed, we shall write $\S$ for $\S_n$ and, similarly, $\P_t$ for $\P^{(n)}_t$ and $\Q_t$ for $\Q^{(n)}_t.$ Let $\pi=\{0=t_0<t_1<\ldots <t_m=t\}$ be a partition of the interval $[0,t],$ and let $$ \S_\pi=\sum_{i=1}^m t_{i-1}(\P_{t_i}-\P_{t_{i-1}})=\sum_{i=1}^m t_{i-1}\Delta\P_{t_i} $$ be a lower approximating Freudenthal sum for $\S.$ Then $\S_\pi$ is a stopping time for the filtration $(\mf{F}_s,\F_s)_{0\le s\le t}$ and $$ \int_0^{\S_\pi}X_s\,dB_s=\sum_{i=1}^m\Delta\P_{t_i}\int_0^{t_{i-1}}X_s\,dB_s=\sum_{i=1}^m\Delta\P_{t_i}N_{t_{i-1}}. $$ Hence, $$ \left|\int_0^{t\wedge\S_\pi}X_s\,dB_s\right|=\sum_{i=1}^m\Delta\P_{t_i}|N_{t_{i-1}}|\le nE. $$ On the other hand, $$ \int_0^{t\wedge\S_\pi}X_s\,dB_s=\int_0^tY^\pi_s\,dB_s, $$ where $$ (Y^\pi_s)=\sum_{i=1}^m\Q_{t_{i-1}}X_sI_{(t_{i-1},t_i]}(s). $$ Now, if $|\pi|\to 0,$ we have $(Y^\pi_s)\to (\Q_sX_{s-})=(\Q_sX_{s}),$ since we assume $(X_s)$ to be order continuous in $s.$ Since $|\Q_sX_s|\le nE,$ it follows from Lebesgue's theorem that $$ \int_0^t\phi\F(|\Q_sX_s-Y^\pi_s|^2)\,ds\to 0 \mbox{ as } |\pi|\to 0. $$ Using the defining property of the It\^o integral, we then have $$ \lim_{|\pi|\to 0}\phi\F(|\int_0^t\Q_sX_s\,dB_s-\int_0^tY^\pi_s\,dB_s|^2)=0. $$ Thus, $(\int_0^tY^\pi_s\,dB_s)$ converges in $\mc{L}^2$ to $\int_0^t \Q_sX_s\,dB_s.$ But, $$ \left|\int_0^tY^\pi_s\,dB_s\right|=\left|\int_0^{t\wedge\S_\pi} X_s\,dB_s\right|\le nE, $$ from which we conclude (by the Birkhoff inequality) that also $$ \left|\int_0^{t\wedge\S}X_s\,dB_s\right|=\left|\int_0^{t}\Q_sX_s\,dB_s\right|\le nE. $$ This establishes our claim. We can now apply the first part of the proof. For each fixed $n,$ let $\tilde{X}^{(n)}_t:=\Q^{(n)}_tX_t.$ Then $(\tilde{X}^{(n)}_t)$ is a bounded adapted process and by our preceding step, $(\tilde{N}^{(n)}_t):=(\int_0^t\tilde{X}^{(n)}_s\,dB_s)$ is a bounded martingale.
Therefore, if $$ \tilde{Y}^{(n)}_t:=\int_0^t\tilde{X}^{(n)}_s\,dB_s-\tfrac 12\int_0^t[\tilde{X}^{(n)}_s]^2\,ds $$ and if $$ \tilde{Z}^{(n)}_t(\tilde{X}^{(n)}_t)=\exp(\tilde{Y}^{(n)}_t), $$ then $$ \tilde{Z}^{(n)}_t(\tilde{X}^{(n)}_t)=E+\int_0^t\tilde{Z}^{(n)}_s\tilde{X}^{(n)}_s\,dB_s. $$ However, if $\P^{(n)}$ denotes the projection onto the band $\mf{B}(tE\le \S_nE),$ then $\P^{(n)}(\tilde{X}^{(n)}_t)=\P^{(n)}X_t,$ $\P^{(n)}(\tilde{Z}^{(n)}_t)=\P^{(n)}Z_t$ and therefore, on the band $\P^{(n)}\mf{E},$ we have $$ \P^{(n)}[Z_t(X_t)]=\P^{(n)}\left(E+\int_0^tZ_sX_s\,dB_s\right). $$ Since we have $\S_n\uparrow a\I,$ it follows that, in order in $\mf{E}^u,$ the left-hand side converges to $Z_t(X_t)$ and the right-hand side to $E+\int_0^tZ_sX_s\,dB_s.$ \qed \bigskip We note that, if $Z_t(X)$ is a martingale, then $\F(Z_t(X))=E$ for all $t\in[0,T],$ because, for all $t\in[0,T],$ we have $$ \F(Z_t(X))=Z_0(X)=E. $$ That the converse is true, is the content of the next theorem. \begin{theorem} If $X\in\mc{L}_{ad}(L^2[0,T])$ satisfies the condition that $$ \F(Z_t(X))=E \mbox{ for all } t\in[0,T], $$ then the exponential process $(Z_t(X))_{t\in[0,T]}$ is a martingale. \end{theorem} {\em Proof.} From equation (\ref{equation 5.2}) it follows that $Z_t(X)$ is a local martingale. Hence there exists an increasing sequence of stopping times $\T_n\uparrow T$ such that the stopped processes $$ Z_t^n(X):=Z_{t\wedge \T_n}(X) $$ are martingales. Therefore, $$ \F_s(Z_t^n(X))=Z_s^n(X),\ \ 0\le s\le t,\ n\ge 1. $$ We now let $n$ tend to infinity and get, by Fatou's lemma, that \begin{equation} \F_s(Z_t(X))=\F_s(\liminf Z_t^n(X))\le \liminf \F_s(Z_t^n(X)) =\liminf Z^n_s(X) = Z_s(X). \end{equation} It follows that $(Z_t(X))_{0\le t\le T}$ is a supermartingale. Suppose that it is not a martingale.
Then, for some $s,t\in[0,T],$ $s<t,$ we have that $(Z_s(X)-\F_s(Z_t(X)))>0$ and since $\F$ is strictly positive, we get $$ \F(Z_s(X))>\F(\F_s(Z_t(X)))=\F(Z_t(X)), $$ contradicting our assumption that $\F(Z_s(X))=\F(Z_t(X))=E$ for all $s,\ t\in[0,T].$ \qed \bigskip \section{Integration by parts formula for martingales}\label{sec 7} Let $(X_t)$ and $(Y_t)$ be two semi-martingales, $X_t=X_0+M_t+B_t$ and $Y_t=Y_0+N_t+C_t$, with $0\le t\le a$, where $M_t,N_t$ are martingales and $B_t,C_t$ are regular adapted processes satisfying $B_0=0$ and $C_0=0.$ Then the integration by parts formula holds, i.e., \begin{equation} X_tY_t-X_0Y_0=\int_0^tX_s\,dY_s+\int_0^t Y_s\,dX_s+\<M,N\>_t. \end{equation} To prove this, we have to apply It\^o's rule for two-dimensional processes. We therefore include the proof of It\^o's rule for a multi-dimensional process (see~\cite[Theorem 3.3.6, page 153]{KS}). We shall prove the following. \begin{theorem}\label{theorem 8.7} Let $M_t=(M_t^{(1)},M_t^{(2)},\ldots,M_t^{(n)},\mf{F}_t)_{t\in[0,a]}$ be a vector of continuous martingales, $A_t:=(A_t^{(1)},A_t^{(2)},\ldots,A_t^{(n)},\mf{F}_t)_{t\in[0,a]}$ a vector of adapted processes of bounded variation with $A_0=0$ and set $X_t=X_0+M_t+A_t,\ 0\le t\le a,$ where $X_0=(X_0^{(i)})$ and $X_0^{(i)}\in\mf{F}_0.$ Let $f(t,x):[0,a]\times\R^n\to\R$ be of class $C^{1,2}.$ Then, for $t\in[0,a],$ \begin{multline} f(t,X_t)=f(0,X_0)+\int_0^t\frac{\partial}{\partial t}f(s,X_s)\,ds +\sum_{i=1}^n\int_0^t\frac{\partial}{\partial x_i}f(s,X_s)\,dA_s^{(i)} \\ +\sum_{i=1}^n\int_0^t\frac{\partial}{\partial x_i}f(s,X_s)\,dM_s^{(i)} \\ +\frac 12\sum_{i=1}^n\sum_{j=1}^n\int_0^t\frac{\partial^2}{\partial x_i\partial x_j}f(s,X_s) \,d\<M^{(i)},M^{(j)}\>_s. \end{multline} \end{theorem} {\em Proof.} We refer the reader to the proof of the It\^o formula in the one-dimensional case as given in~\cite[Section 3.2]{G11}, and, without going into too much detail, show how the proof of the multi-dimensional case follows.
For notational convenience, set $z_t=(t,x_t)$ (thus $Z_t=(t,X_t)$) and consider the bounded case. For a partition $\pi=\{0=t_0<t_1<\cdots<t_m=t\},$ we have (with the products interpreted as inner products) \begin{multline} f(t,X_t)-f(0,X_0)=f(Z_t)-f(Z_0) \\ =\sum_{k=1}^m\nabla f(Z_{t_{k-1}})(Z_{t_k}-Z_{t_{k-1}})+\frac{1}{2}\sum_{k=1}^m[Y_k(Z_{t_k}-Z_{t_{k-1}})](Z_{t_k}-Z_{t_{k-1}}). \end{multline} The derivative is the total derivative, i.e., $$ \nabla f=(\frac{\partial f}{\partial t},\frac{\partial f}{\partial x_1},\ldots,\frac{\partial f}{\partial x_n}) =(f_t,f_{x_1},\ldots,f_{x_n})$$ and $Y_k$ is an $(n+1)\times (n+1)$-matrix $Y_k=(Y^{k}_{ij})$ where $Y^k_{ij}$ is the image under representation of the continuous function $\ds\frac{\partial^2f}{\partial z_i\partial z_j}(\eta^k_{ij})=f_{z_iz_j}(\eta^k_{ij})$ as constructed in the one-dimensional case in~\cite[Lemma 3.9]{G11}. The next step is to substitute $Z_t=(t,X_0+M_t+A_t)$ in (\ref{equation 8.15}) and then to approximate the resulting sums as $|\pi|$ tends to zero. The terms involving the first order derivatives yield the first four terms in the theorem exactly by the same arguments as in \cite{G11}. In the last term, all mixed terms that contain a factor of the form $(t_k-t_{k-1})$ or $(A^{(i)}_{t_k}-A^{(i)}_{t_{k-1}})$ tend to zero and consequently we have to show that, as $|\pi|$ tends to zero, the sum \begin{equation} \frac 12\sum_{k=1}^m[Y_k(M_{t_k}-M_{t_{k-1}})](M_{t_k}-M_{t_{k-1}}) \end{equation} tends to the last term. The first step there is to show that we can replace $Y_k$ by the matrix $\ds\left(\frac{\partial^2 f}{\partial x_i\partial x_j}(t_{k-1},X_{t_{k-1}})\right)=\big(f_{x_ix_j}(t_{k-1},X_{t_{k-1}})\big).$ To do this, we consider the difference \begin{multline} \left|\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n Y^k_{ij}(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}}) \right.\right. \\ \left.\left.
-\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right]\right|\\ =\left|\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n (Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}}))(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right]\right|\\ \le \sum_{i=1}^n\sum_{j=1}^n \sup_{1\le k\le m}|Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}})|\sum_{k=1}^m |M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}}||M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}}|. \end{multline} Now for each fixed $i,j$ and $\phi\in\Phi,$ we have that (see \cite[Lemma 3.9(c)]{G11}) $$ \phi\F(m^{ij})=\phi\F(\sup_{1\le k\le m}|Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}})|)\to 0 \mbox{ as } |\pi|\to 0. $$ Also, since $\F\sum_{k=1}^m(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})^2\le\F(M^{(i)}_{t_m})^2\le K^2E$ (see~\cite[Lemma 6.2]{G11}), \begin{align*} \F(\sum_{k=1}^m |M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}}||M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}}|) &\le\F\left(V^{(2)}_t(\pi,M^{(i)})^{1/2} V^{(2)}_t(\pi,M^{(j)})^{1/2}\right) \\ &\le \left(\F V^{(2)}_t(\pi,M^{(i)})\F V^{(2)}_t(\pi,M^{(j)})\right)^{1/2} \\ &\le \left(K^4E\right)^{1/2}\\ &=K^2E. \end{align*} It now follows from the Cauchy inequality and Corollary \ref{corollary 8.5} that $Y^k$ can be replaced by the matrix $\ds\left(f_{x_ix_j}(t_{k-1},X_{t_{k-1}})\right).$ The final step is to consider \begin{multline*} \sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.-\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right] \\ =\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.\phantom{\sum_{i=1}^n}-(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right].
\end{multline*} Now, using the arguments from the proof of Theorem~\ref{theorem 8.5}, we get \begin{multline*} \F\left|\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\right.\\ \left.\left.\phantom{\sum_{i=1}^n}-(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right]\right|^2 \\ =\F\left[\sum_{k=1}^m\sum_{i=1}^n\sum_{j=1}^n [f_{x_ix_j}(t_{k-1},X_{t_{k-1}})]^2[(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.\phantom{\sum_{i=1}^n}-\big(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}}\big)]^2\right]\\ \le 2 \|f_{x_ix_j}\|^2_\infty\F\left[\sum_{k=1}^m\sum_{i=1}^n\sum_{j=1}^n (M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})^2(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})^2\right.\\ \left.\phantom{\sum_{i=1}^n}+\big(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}}\big)^2\right] \\ \le 4K\|f_{x_ix_j}\|^2_\infty\left[\sum_{i=1}^n\sum_{j=1}^n \left(\F m_t(M^{(i)},\pi)V^{(2)}(\pi,M^{(j)}) \right.\right. \\ \left.\phantom{\sum_{i=1}^n}\left.+\F m_t(\<M^{(i)},M^{(j)}\>,\pi)\right)\right]. \end{multline*} The remainder of the proof, showing that the right-hand side tends to zero, proceeds as in the proof of formula (\ref{equation 8.12}).\qed \bigskip To prove the integration by parts formula, we apply the above Theorem, taking $f(t,x,y):=xy.$ Then, \begin{multline*} X_tY_t-X_0Y_0 =f(X_t,Y_t)-f(X_0,Y_0)\\ =\int_0^tY_s\,dB_s+\int_0^tX_s\,dC_s+\int_0^tY_s\,dM_s+\int_0^tX_s\,dN_s \\ +\tfrac 12\int_0^t\,d\<M,N\>_s+\tfrac 12\int_0^t\,d\<N,M\>_s\\ =\int_0^tX_s\,dY_s+\int_0^tY_s\,dX_s+\<M,N\>_t. \end{multline*} \section{Girsanov's Theorem} Let $(B_t)_{t\in[0,a]}$ be a Brownian motion adapted to the filtration $(\mf{F}_t,\F_t)_{t\in[0,a]}.$ Let $X=(X_t)_{t\in[0,a]}$ belong to the space $\mc{L}_{ad}^2(L^2[0,a])$ and define the exponential process \begin{equation}\label{equation 8.1} Z_t=Z_t(X):=\exp\left[\int_0^t X_s\,dB_s-\tfrac 12 \int_0^t X_s^2\,ds\right].
\end{equation} As we have shown in Theorem 4.9, we have that \begin{equation}\label{equation 8.2} Z_t=E+\int_0^t Z_sX_s\,dB_s, \end{equation} which shows that $(Z_t)$ is a local martingale, and is a martingale if $\F(Z_t)=E$ for all $t\in[0,a]$ (see Theorem 6.5). We now define a new filtration on the space $\mf{F}=\mf{F}_a$ as follows: \begin{equation} \widetilde{\F}_s(Y):=Z_s^{-1}\F_s(Z_aY) \mbox{ for }0\le s\le a,\ Y\in\mf{F}. \end{equation} We shall denote the range of $\wt{\F}_s$ by $\wt{\mf{F}}_s.$ \begin{proposition}\label{proposition 8.1} If $(Z_s)$ is a martingale, then \begin{enumerate} \item For each $s,$ $\widetilde{\F}_s$ is a conditional expectation. \item The system $(\widetilde{\mf{F}}_t,\widetilde{\F}_t)_{t\in[0,a]}$ is a filtration on $\mf{F}.$ \item $\mf{F}_t=\widetilde{\mf{F}}_t.$ \item If $0\le s\le t\le a$ and $Y\in\mf{F}_t,$ then $\wt{\F}_s(Y)=Z_s^{-1}\F_s(Z_tY).$ \end{enumerate} \end{proposition} {\em Proof.} 1. Clearly $\wt{\F}_s$ is positive for every $s.$ Moreover, since by assumption $(Z_s)$ is a martingale, $$ \wt{\F}_s(E)=Z_s^{-1}(\F_s(Z_aE))=Z_s^{-1}\F_s(Z_a)=Z_s^{-1}Z_s=E. $$ If $Y>0,$ then $\wt{\F}_s(Y)=Z_s^{-1}\F_s(Z_aY)>0$ because $\F_s$ is strictly positive and multiplication by $Z_s^{-1}$ is a strictly positive operator. We show that it is a projection: \begin{align*} \wt{\F}_s^2(Y)&=Z_s^{-1}\F_s(Z_a\wt{\F}_s(Y))=Z_s^{-1}\F_s(Z_aZ_s^{-1}\F_s(Z_aY))\\ &=Z_s^{-1}\F_s(Z_aY)\F_s(Z_aZ_s^{-1})=Z_s^{-1}\F_s(Z_aY)Z_s^{-1}\F_s(Z_a)\\ &=Z_s^{-1}\F_s(Z_aY)Z_s^{-1}Z_s=Z_s^{-1}\F_s(Z_aY)=\wt{\F}_s(Y). \end{align*} 2. Let $0\le s<t\le a,$ then \begin{align*} \wt{\F}_s\wt{\F}_t(Y)&=Z_s^{-1}\F_s(Z_a\wt{\F}_t(Y))=Z_s^{-1}\F_s(Z_aZ_t^{-1}\F_t(Z_aY))\\ &=Z_s^{-1}\F_s\F_t(Z_aZ_t^{-1}\F_t(Z_aY))=Z_s^{-1}\F_s\big(\F_t(Z_aY)\F_t(Z_aZ_t^{-1})\big)\\ &=Z_s^{-1}\F_s\big(\F_t(Z_aY)Z_t^{-1}Z_t\big)=Z_s^{-1}\F_s(Z_aY)=\wt{\F}_s(Y). \end{align*} 3. If $Y\in{\mf{F}}_t$ then $$ \wt{\F}_t(Y)=Z_t^{-1}\F_t(Z_a Y)=Z_t^{-1}Y\F_t(Z_a)=Z_t^{-1}YZ_t=Y.
$$ Hence, $Y\in\wt{\mf{F}}_t,$ and so $\mf{F}_t\subset\wt{\mf{F}}_t.$ Conversely, if $Y\in\wt{\mf{F}}_t,$ then, since $\wt{\F}_t$ is a projection, we have $$ Y=\wt{\F}_t(Y)=Z_t^{-1}\F_t(Z_aY) $$ and so $Z_tY\in\mf{F}_t\subset\mf{F}_t^u.$ It follows that $Y=Z_t^{-1}(Z_tY)\in\mf{F}_t^u\cap\mf{E}.$ Thus, $Y\in\mf{F}_t$ and so we are done. \medskip 4.\ For $Y\in\mf{F}_t,$ we have \begin{multline*} \wt{\F}_s(Y)=Z_s^{-1}\F_s(Z_aY)=Z_s^{-1}\F_s\F_t(Z_aY)=Z_s^{-1}\F_s(Y\F_t(Z_a))\\ =Z_s^{-1}\F_s(YZ_t)=Z_s^{-1}\F_s(Z_tY). \end{multline*} \qed \begin{example}[{\rm See~\cite[Page 191, Section 3.3.5]{KS}}] Let $(\Omega,\mf{F},P)$ be a probability space, let $(\mf{F}_t)_{t\in[0,a]}$ be a filtration of sub-$\sigma$-algebras of $\mf{F}$ and let $(X_t)_{t\in[0,a]}$ be a measurable adapted process satisfying $$ P\left[\int_0^a X_t^2\,dt<\infty\right]=1. $$ Suppose that $\int_\Omega Z_t(X(\omega))\,dP(\omega)=EZ_t(X)=1$ for $0\le t\le a.$ It is known that this condition implies that $(Z_t)_{t\in[0,a]}$ is a martingale with respect to the filtration $(\mf{F}_t).$ Define probability measures $\wt{P}_t(A)=E[1_AZ_t],\ A\in\mf{F}_t,\ t\in[0,a].$ It then follows that the conditional expectations with respect to these measures, denoted by $\wt{E}(Y\,|\,\mf{F}_s),$ satisfy Bayes' rule (\cite[Lemma 3.5.3]{KS}) $$ \wt{E}(Y\,|\,\mf{F}_s)=\frac{1}{Z_s}E(Z_tY\,|\,\mf{F}_s),\ \mbox{$P$- and $\wt{P}_a$-almost everywhere} $$ for all $0\le s\le t\le a.$ This result easily follows from the definition of the measures and of conditional expectation. This result is the motivation for defining the conditional expectations $\wt{\F}_t,$ $t\in[0,a],$ in our measure-free approach. \end{example}
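The classical martingale condition $EZ_t(X)=1$ in the example above can be checked numerically. The following Python sketch (an illustration only, not part of the formal development; the integrand $X_t=\sin t,$ the sample sizes and the tolerance are ad hoc choices) simulates discretized paths of the exponential process and verifies that the sample mean of $Z_T$ is close to $1.$

```python
import numpy as np

# Monte Carlo check that the exponential process has expectation one.
# Hypothetical choice of integrand: the deterministic function X_t = sin(t),
# for which the classical condition E[Z_T] = 1 is known to hold.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 50_000, 100, 1.0
h = T / n_steps
t = np.arange(n_steps) * h                      # left endpoints t_0, ..., t_{n-1}
x = np.sin(t)                                   # integrand at the left endpoints

dB = rng.normal(0.0, np.sqrt(h), size=(n_paths, n_steps))  # Brownian increments
ito_sum = dB @ x                                # left-point sum for the Ito integral
Z_T = np.exp(ito_sum - 0.5 * np.sum(x**2) * h)  # Z_T = exp(int X dB - 1/2 int X^2 ds)

mean_Z = Z_T.mean()
print(f"sample mean of Z_T: {mean_Z:.4f} (should be close to 1)")
```

Nothing in this sketch is used in the proofs; it merely makes the martingale normalization tangible.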
\medskip The proposition above shows that for a given process $X,$ filtration $(\mf{F}_t,\F_t)$ and Brownian motion $(B_t,\mf{F}_t,\F_t),$ the filtration can be transformed into a new filtration $(\mf{F}_t,\wt{\F}_t).$ Girsanov's theorem shows how to transform the given Brownian motion $(B_t,\mf{F}_t,\F_t)$ into a Brownian motion $(\wt{B}_t,\mf{F}_t,\wt{\F}_t)$ (see~\cite{Gi} and \cite{CM}). \begin{theorem}{\rm(Girsanov (1960), Cameron and Martin (1944))}\label{theorem 7.3} Let the exponential process $Z=Z(X),$ as defined in (\ref{equation 8.1}), be a martingale. Then, the process $(\wt{B}_t,\mf{F}_t,\wt{\F}_t)_{t\in[0,a]},$ defined by \begin{equation} \wt{B}_t:= B_t -\int_0^t X_s\,ds,\ \ 0\le t\le a, \end{equation} is a Brownian motion. \end{theorem} We shall use L\'evy's characterization of Brownian motion to prove the theorem. The result in~\cite{G11} shows that we have to prove that $\wt{B}$ is $\gamma$-H\"older continuous and has quadratic variation $\<\wt{B}\>_t=tE.$ In order to do this, we first prove the following result from which the desired result easily follows. \begin{proposition} Let $Z=Z(X)$ be a martingale. If $M$ is a $\gamma$-H\"older continuous martingale, then the same is true for the process $\wt{M}$ defined by \begin{equation}\label{equation 8.5} \wt{M}_t:=M_t-\int_0^tX_s\,d\<M,B\>_s,\ \ 0\le t\le a. \end{equation} Furthermore, if $N$ is another $\gamma$-H\"older continuous martingale and if $$ \wt{N}_t=N_t-\int_0^tX_s\,d\<N,B\>_s,\ \ 0\le t\le a, $$ then $$ \<\wt{M},\wt{N}\>_t=\<{M},{N}\>_t,\ \ 0\le t\le a. $$ \end{proposition} {\em Proof.} We assume that $M$ and $N$ are bounded martingales whose quadratic variations are bounded by $CE,$ and also that $Z_t(X)$ and $\int_0^tX_s^2\,ds$ are bounded by $CE.$ It follows from the Kunita--Watanabe inequality~(Theorem \ref{Kunita Watanabe}) that $$ \left|\int_0^tX_s\,d\<M,B\>_s\right|^2\le \int_0^tX_s^2\,ds\int_0^t\,d\<M\>_s\le\<M\>_t\int_0^t X_s^2\,ds\le C^2E. $$ Therefore, $\wt{M}$ is also bounded.
We now apply the integration by parts formula, derived in Section~\ref{sec 7} and Equation~(\ref{equation 8.2}), to get \begin{align*} Z_t\wt{M}_t&=Z_t\wt{M}_t-Z_0\wt{M}_0 \\ &=\int_0^t Z_u\,d\wt{M}_u+\int_0^t\wt{M}_u\,dZ_u+\<Z,M\>_t \\ &=\left(\int_0^t Z_u\,dM_u-\int_0^t Z_uX_u\,d\<M,B\>_u\right)+\int_0^t\wt{M}_uZ_uX_u\,dB_u+\<Z,M\>_t. \end{align*} By the cross-variation formula (Theorem~\ref{theorem 4.6}), we have that \begin{align*} \int_0^t Z_uX_u\,d\<M,B\>_u&=\<I^B(Z_uX_u),I^M(E)\>_t \\ &=\<Z-E,M\>_t=\<Z,M\>_t-\<E,M\>_t=\<Z,M\>_t, \end{align*} since $E$ is a constant martingale and therefore orthogonal to $M,$ i.e., $\<E,M\>_t=0.$ Thus, it follows that \begin{equation} Z_t\wt{M}_t=\int_0^tZ_u\,dM_u+\int_0^t\wt{M}_uZ_uX_u\,dB_u. \end{equation} The right-hand side is a martingale relative to the filtration $(\mf{F}_t,\F_t)$ and so, for $s\le t\le a,$ we have from Proposition~\ref{proposition 8.1}(4), that \begin{equation} \wt{\F}_s(\wt{M}_t)=Z_s^{-1}\F_s(Z_t\wt{M}_t)=Z_s^{-1}Z_s\wt{M}_s=\wt{M}_s, \end{equation} and we have shown that $(\wt{M}_t,\mf{F}_t,\wt{\F}_t)$ is a martingale. Again applying the integration by parts formula, we get \begin{align*} &\wt{M}_t\wt{N}_t-\<M,N\>_t \\ &=\int_0^t\wt{M}_u\,d\wt{N}_u+\int_0^t\wt{N}_u\,d\wt{M}_u\\ &=\int_0^t\wt{M}_u\,d(N_u-\int_0^u X_s\,d\<N,B\>_s)+\int_0^t\wt{N}_u\,d(M_u-\int_0^uX_s\,d\<M,B\>_s)\\ &=\int_0^t\wt{M}_u\,dN_u+\int_0^t\wt{N}_u\,dM_u-\left[\int_0^t\wt{M}_u X_u\,d\<N,B\>_u +\int_0^t\wt{N}_uX_u\,d\<M,B\>_u\right]. \end{align*} This shows that $\wt{M}_t\wt{N}_t-\<M,N\>_t$ is a semi-martingale. Applying the integration by parts formula, we get \begin{multline}\label{equation 8.8} Z_t[\wt{M}_t\wt{N}_t-\<M,N\>_t] =\int_0^tZ_u\,d[\wt{M}_u\wt{N}_u-\<M,N\>_u]+ \\ +\int_0^t[\wt{M}_u\wt{N}_u-\<M,N\>_u]\,dZ_u+\<I^N(\wt{M})+I^M(\wt{N}),Z\>_t. \end{multline} The second integral is, using Equation~(\ref{equation 8.2}), equal to \begin{equation}\label{equation 8.9} \int_0^t[\wt{M}_u\wt{N}_u-\<M,N\>_u]Z_uX_u\,dB_u.
\end{equation} By the preceding integral representation of $\wt{M}_u\wt{N}_u-\<M,N\>_u,$ we use the cross-variation formula to derive that \begin{align}\label{equation 8.10} &\int_0^tZ_u\,d[\wt{M}_u\wt{N}_u-\<M,N\>_u] \nonumber \\ &=\int_0^t Z_u\wt{M}_u\,dN_u+\int_0^t Z_u\wt{N}_u\,dM_u \nonumber \\ &\phantom{\int_0^t Z_u\wt{M}_u\,dN_u}-\int_0^t\wt{M}_uZ_uX_u\,d\<N,B\>_u -\int_0^t\wt{N}_uZ_uX_u\,d\<M,B\>_u \nonumber \\ &=\int_0^t Z_u\wt{M}_u\,dN_u+\int_0^t Z_u\wt{N}_u\,dM_u \nonumber \\ &\phantom{\int_0^t Z_u\wt{M}_u\,dN_u}-\<\int_0^t\wt{M}\,dN,Z\>_t-\<\int_0^t\wt{N}\,dM,Z\>_t. \end{align} Substituting Equations (\ref{equation 8.9}) and (\ref{equation 8.10}) in Equation (\ref{equation 8.8}), we get \begin{align}\label{equation 8.11} Z_t[\wt{M}_t\wt{N}_t&-\<M,N\>_t] \nonumber \\ &=\int_0^t Z_u\wt{M}_u\,dN_u+\int_0^t Z_u\wt{N}_u\,dM_u +\int_0^t[\wt{M}_u\wt{N}_u-\<M,N\>_u]Z_uX_u\,dB_u. \end{align} Equation~(\ref{equation 8.11}) shows that $(Z_t[\wt{M}_t\wt{N}_t-\<M,N\>_t],\mf{F}_t,\F_t)_{t\in[0,a]}$ is a martingale. It follows from the definition of the transformed filtration that $$ \wt{\F}_s(\wt{M}_t\wt{N}_t-\<M,N\>_t)=Z_s^{-1}\F_s(Z_t[\wt{M}_t\wt{N}_t-\<M,N\>_t]) =Z_s^{-1}Z_s[\wt{M}_s\wt{N}_s-\<M,N\>_s]. $$ Therefore, $([\wt{M}_t\wt{N}_t-\<M,N\>_t],\mf{F}_t,\wt{\F}_t)_{t\in[0,a]}$ is a martingale. But, $\<\wt{M},\wt{N}\>$ is the unique process $A$ such that $\wt{M}\wt{N}-A$ is a martingale, and this shows that $$ \<\wt{M},\wt{N}\>_t=\<M,N\>_t,\ \ 0\le t\le a. $$ \qed {\em Proof of Girsanov's theorem:}\ Put $M=N=B$ in the preceding proposition. Then, by Equation~(\ref{equation 8.5}), we get that $$ \wt{B}_t=B_t-\int_0^tX_s\,d\<B,B\>_s=B_t-\int_0^tX_s\,d\<B\>_s=B_t-\int_0^tX_s\,ds, \ \ 0\le t\le a, $$ is a continuous martingale satisfying $\<\wt{B}\>_t=\<\wt{B},\wt{B}\>_t=\<B,B\>_t=\<B\>_t=tE.$ It follows from L\'evy's theorem (see~\cite[Theorem 4.6]{G11}) that $(\wt{B}_t,\mf{F}_t,\wt{\F}_t)_{t\in[0,a]}$ is a Brownian motion.\qed \bigskip In many applications one needs the multidimensional Girsanov theorem.
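Before turning to the multidimensional statement, the one-dimensional transform can be illustrated numerically. In the Python sketch below (a hypothetical illustration, not part of the formal development; the constant integrand $X_t\equiv\theta$ and the tolerances are ad hoc choices), weighting sample paths by $Z_a$ makes $\wt{B}_a=B_a-\theta a$ behave like a Brownian motion at time $a$: the weighted mean is close to $0$ and the weighted second moment is close to $a.$

```python
import numpy as np

# Monte Carlo sketch of the one-dimensional Girsanov transform for the
# hypothetical constant integrand X_t = theta.  Weighting paths by Z_a should
# make B_tilde = B_a - theta*a centred with second moment a.
rng = np.random.default_rng(1)
theta, a, n_paths = 0.5, 1.0, 200_000

B_a = rng.normal(0.0, np.sqrt(a), size=n_paths)   # B_a ~ N(0, a)
Z_a = np.exp(theta * B_a - 0.5 * theta**2 * a)    # exponential martingale at t = a
B_tilde = B_a - theta * a                         # transformed endpoint

mean_tilde = np.mean(Z_a * B_tilde)               # weighted mean, expect ~ 0
second_tilde = np.mean(Z_a * B_tilde**2)          # weighted 2nd moment, expect ~ a
print(mean_tilde, second_tilde)
```

The sketch only probes the time-$a$ marginal; the theorem of course asserts much more.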
We state it here but will not prove it, since the proof does not require any new ideas. Let $B=(B_t,\mf{F}_t)_{t\in[0,a]}=((B_t^{(1)},\ldots,B_t^{(m)}),\mf{F}_t)_{t\in[0,a]}$ be an $m$-dimensional Brownian motion and let $X=((X_t^{(1)},\ldots,X_t^{(m)}),\mf{F}_t)_{t\in[0,a]}$ be a vector of adapted processes satisfying $X^{(i)}_t\in\mc{L}_{ad}^2(L^2[0,a])$ for $1\le i\le m.$ Then, for each $i,$ the stochastic integral $I^{B^{(i)}}(X^{(i)})$ is defined and is a local martingale. Set \begin{equation} Z_t(X):=\exp\left[\sum_{i=1}^m\int_0^tX_s^{(i)}\,dB_s^{(i)}-\frac 12\int_0^t\|X_s\|^2\,ds\right]. \end{equation} Then we have, as in the one-dimensional case, \begin{equation} Z_t(X)=E+\sum_{i=1}^m\int_0^tZ_s(X)X_s^{(i)}\,dB_s^{(i)}. \end{equation} \begin{theorem} Assume that $Z(X)=(Z_t(X))$ is a martingale. Define the process $$ \wt{B}=((\wt{B}_t^{(1)},\ldots,\wt{B}_t^{(m)}),\mf{F}_t)_{t\in[0,a]} $$ by \begin{equation} \wt{B}_t^{(i)}:= {B}_t^{(i)}-\int_0^t{X}_s^{(i)}\,ds,\ \ 1\le i\le m. \end{equation} Then, the process $(\wt{B}_t,\mf{F}_t)_{t\in[0,a]}$ is an $m$-dimensional Brownian motion adapted to the filtration $(\mf{F}_t,\wt{\F}_t).$ \end{theorem} \bigskip \textbf{Acknowledgement} The research of the second named author was supported by the National Research Foundation (Grant No. 87502). \input{Bibliografie.tex} \begin{lemma} Let $X,Y$ be martingales satisfying $\sup\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE$ for all $s\in[a,t].$ Let $\pi=\{a=t_0<t_1<\cdots<t_m=t\}$ be a partition of $[a,t].$ Then \begin{equation} \F[CV_t(\pi)]^2\le 6K^4E.
\end{equation} \end{lemma} {\em Proof.} We recall that it follows from the definition of the cross variation $\<X,Y\>$ that it can be written as the difference of two adapted natural positive increasing processes $A$ and $B$ and that $A=\tfrac 14\<X+Y\>$ and $B=\tfrac 14\<X-Y\>.$ Therefore, since $\<X\pm Y\>\le2(\<X\>+\<Y\>)\le 4KE,$ we have that $\sup\{A,B\}\le KE.$ For $0\le k\le m-1$ we have \begin{align*} &\F_{t_k}\left[\sum_{j=k+1}^m(X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\right]\\ &=\F_{t_k}\left[\sum_{j=k+1}^m X_{t_j}Y_{t_j}-\F_{t_{j-1}}(X_{t_j}Y_{t_{j-1}}+X_{t_{j-1}}Y_{t_j}-X_{t_{j-1}}Y_{t_{j-1}})\right]\\ &=\F_{t_k}\left[\sum_{j=k+1}^m X_{t_j}Y_{t_j}-X_{t_{j-1}}Y_{t_{j-1}}\right]\\ &=\F_{t_k}(X_{t_m}Y_{t_m}-X_{t_k}Y_{t_k})\\ &=\F_{t_k}(\<X,Y\>_{t_m}-\<X,Y\>_{t_k})\\ &=\F_{t_k}(\<X,Y\>_{t_m}-(A_{t_k}-B_{t_k})) \\ &\le\F_{t_k}(\<X,Y\>_{t_m}+B_{t_k})\\ &\le\F_{t_k}(\<X,Y\>_{t_m}+B_{t_m})\\ &\le\tfrac 32 KE. \end{align*} Hence, \begin{align*} &\F\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\sum_{j=k+1}^m(X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\right]\\ &=\F\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\sum_{j=k+1}^m\F_{t_k}\big((X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\big)\right]\\ &=\F\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\F_{t_k}(\<X,Y\>_{t_m} -\<X,Y\>_{t_k})\right]\\ &\le\F\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\F_{t_k}(\<X,Y\>_{t_m}+B_{t_m})\right]\\ &=\F(\<X,Y\>_{t_m}+B_{t_m})\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\right] \\ &=\F(\<X,Y\>_{t_m}+B_{t_{m}})(\<X,Y\>_{t_{m-1}}-\<X,Y\>_{t_0})\\ &\le 3K^2E.
\end{align*} Similarly, \begin{multline*} \F\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\sum_{j=k+1}^m(X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\right]\\ \ge \F(\<X,Y\>_{t_m}-A_{t_{m}})(\<X,Y\>_{t_{m-1}}-\<X,Y\>_{t_0})\ge -3K^2E. \end{multline*} So, we arrive at $$ \left|\F\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\sum_{j=k+1}^m(X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\right]\right|\le 3K^2E. $$ The lemma now follows by combining this estimate for the off-diagonal terms with the corresponding estimate for the diagonal terms.\qed \section{The Wiener integral} Let $F_{B_t}(x):=\F\P(B_t\le xE)E$ be the distribution function of $B_t.$ We proved in~\cite{G11} that for all $s<t,$ the distribution function of $B_t-B_s$ is given by $$ F_{B_t-B_s}(x)=\frac{1}{\sqrt{2\pi(t-s)}}\left[\int_{-\infty}^ xe^{-\frac{y^2}{2(t-s)}}dy\right]E, $$ which means that $B_t-B_s$ is normally distributed with mean $0$ and variance $t-s.$ Taking $s=0,$ we get $$ F_{B_t}(x)=\frac{1}{\sqrt{2\pi t}}\left[\int_{-\infty}^ xe^{-\frac{y^2}{2t}}dy\right]E, $$ and so $$ dF_{B_t}(x)=\frac{1}{\sqrt{2\pi t}}e^{-\frac{x^2}{2t}}dxE . $$ It follows from~\cite{G11} that $$ \F(B_t)=\left[\frac{1}{\sqrt{2\pi t}}\int_{-\infty}^\infty xe^{-\frac{x^2}{2t}}dx \right]E=0. $$ We now consider a function $f\in L^2[a,b],$ i.e., a function that is square integrable with respect to Lebesgue measure on $[a,b].$ The stochastic process $X_t=f(t)E,\ t\in[a,b],$ then satisfies $X_t\in\mc{L}^2$ for almost every $t\in[a,b]$ because, for any $\phi\in\Phi,$ we have $\phi\F(|X_t|^2)=\phi\F(|f(t)|^2E)=|f(t)|^2\phi\F(E)=|f(t)|^2<\infty$ for almost all $t\in [a,b].$ Then, $$ \overline{q}(X_t)^2=\int_a^b \phi\F(|X_t|^2)\,dt=\int_a^b|f(t)|^2\,dt<\infty $$ holds for every $q\in\mathscr{Q},$ which shows that $X_t\in L^2([a,b],\mc{L}^2)$ (see~\cite{G7}) and, since the process $X_t$ is adapted to any filtration on $\mf{E},$ it is adapted to the filtration $(\mf{F}_t,\F_t)$ with reference to which $(B_t)$ is a Brownian motion. In our notation used in~\cite{G7}, it belongs to $ L^2_{ad}([a,b],\mc{L}^2)$ and so the It\^o integral of the process exists.
In this case, the integral is called the {\em Wiener integral} of $f.$ Of course it is much easier to define the Wiener integral directly and not via the It\^o integral, but then we would simply repeat the arguments used in~\cite{G7} to define the It\^o integral. For a step function $f=\sum_{i=1}^n a_iI_{[t_{i-1},t_i)},$ where $t_0=a$ and $t_n=b,$ the Wiener integral equals $$ I(f)=\sum_{i=1}^n a_i(B_{t_i}-B_{t_{i-1}})=\sum_{i=1}^n f(t_{i-1})(B_{t_i}-B_{t_{i-1}}). $$ A special property of the Wiener integral, not necessarily shared by the It\^o integral, is the following: \begin{theorem} For each $f\in L^2[a,b],$ the Wiener integral $\int_a^b f(t)\,dB_t$ is a Gaussian random variable with mean $0$ and variance $\|f\|^2=\int_a^b |f(t)|^2\,dt.$ \end{theorem} {\em Proof.} The increments $B_{t_i}-B_{t_{i-1}}$ of the Brownian motion over disjoint intervals are independent and normally distributed with mean $0.$ A calculation with the characteristic function then shows that the sum $I(f)$ of these independent elements is again normally distributed with mean $0.$ Since the variance of $B_t-B_s$ equals $t-s,$ that of $a(B_t-B_s)$ equals $a^2(t-s).$ Therefore, if $\sigma^2$ is the variance of $I(f)$ above, we get $$ \sigma^2=\sum_{i=1}^n a_i^2(t_i-t_{i-1})=\sum_{i=1}^n f(t_{i-1})^2(t_i-t_{i-1})=\int_a^b|f(t)|^2\,dt. $$ Thus the theorem is true for step functions.
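The step-function case just established lends itself to a quick numerical check. The Python sketch below (an illustration only; the particular step function, sample size and tolerances are arbitrary choices) samples $I(f)=\sum_i a_i(B_{t_i}-B_{t_{i-1}})$ and compares the sample mean and variance with $0$ and $\int_a^b|f(t)|^2\,dt.$

```python
import numpy as np

# Monte Carlo sketch for the step-function case: I(f) should be Gaussian with
# mean 0 and variance sum_i a_i^2 (t_i - t_{i-1}) = int |f|^2 dt.
# The step function below is an arbitrary illustrative choice on [0, 1].
rng = np.random.default_rng(2)
t_knots = np.array([0.0, 1/3, 2/3, 1.0])
a_vals = np.array([1.0, -2.0, 0.5])            # f = sum_i a_i 1_{[t_{i-1}, t_i)}
dt = np.diff(t_knots)
target_var = np.sum(a_vals**2 * dt)            # int_0^1 |f|^2 dt = 1.75

n_samples = 200_000
dB = rng.normal(0.0, np.sqrt(dt), size=(n_samples, 3))  # independent increments
I_f = dB @ a_vals                                        # samples of I(f)

sample_mean, sample_var = I_f.mean(), I_f.var()
print(sample_mean, sample_var, target_var)
```

The sketch checks only the first two moments; Gaussianity itself is what the characteristic-function argument above delivers.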
Consider the mapping $f(t)\mapsto f(t)E$ from $L^2[a,b]\to L^2_{ad}([a,b],\mc{L}^2).$ As was remarked above, this mapping is an isometric embedding of $L^2[a,b]$ into $L^2_{ad}([a,b],\mc{L}^2).$ Since for an arbitrary $f\in L^2[a,b]$ there exists a sequence of step functions $f_n$ in $L^2[a,b]$ that converges in norm to $f,$ the images of the elements in the sequence are simple adapted processes that converge in $L^2([a,b],\mc{L}^2)$ to $(X_t).$ Hence, the Wiener integrals $I(f_n)$ (which are the same as the It\^o integrals $I(X_n)$) converge to the Wiener integral $I(f)=\int_a^b f(t)\,dB_t$ and the convergence is in the space $\mc{L}^2.$ If we denote the variance of $I(f_n)$ by $\sigma_n^2,$ then, since $\sigma_n^2=\|f_n\|^2,$ we have that $\sigma_n$ is a convergent sequence, converging to $\sigma:=\|f\|.$ We thus have a sequence $I(f_n)$ of normally distributed stochastic variables with mean $0$ and variance $\sigma_n^2$ that converges in $\mc{L}^2$ to the stochastic variable $I(f).$ We have to prove that $I(f)$ is also normally distributed with mean zero and with variance $\sigma^2.$ The distribution function of $I(f_n)$ satisfies $$ F_n(x):=F_{I(f_n)}(x)=\frac{1}{\sqrt{2\pi}\|f_n\|}\left[\int_{-\infty}^x \exp\left(-{\tfrac{y^2}{2\|f_n\|^2}}\right)\,dy \right]E. $$ For every fixed $x\in\R,$ $$ \lim_{n\to\infty}F_n(x)=\frac{1}{\sqrt{2\pi}\|f\|}\left[\int_{-\infty}^x \exp\left(-{\tfrac{y^2}{2\|f\|^2}}\right)\,dy \right]E, $$ since $\|f_n\|\to\|f\|$ as $n\to\infty.$ \bigskip On the other hand, as we shall prove in the sequel, since $I(f_n)\to I(f)$ in $\mc{L}^2,$ we have that $F_n(x)\to F(x):=F_{I(f)}(x)$ as $n\to\infty.$ \bigskip Hence, $$ F(x)=\frac{1}{\sqrt{2\pi}\|f\|}\left[\int_{-\infty}^x \exp\left({-\tfrac{y^2}{2\|f\|^2}}\right)\,dy \right]E $$ and so we conclude that $I(f)$ is normally distributed with mean $0$ and variance $\|f\|^2.$ This concludes the proof of the theorem.\qed \bigskip We recall that the sequence $(X_n)$
converges in conditional probability to $X$ if for each $\epsilon>0,$ we have $$ \lim_{n\to\infty}|\phi|\F(\P(|X-X_n|\ge \epsilon E))=0 $$ for every $\phi\in\Phi.$ It follows from Chebyshev's inequality that if $X_n\to X$ in $\mc{L}^2,$ then $$ |\phi|\F(\P(|X-X_n|\ge\epsilon E))\le\frac{1}{\epsilon^2}|\phi|\F(|X-X_n|^2), $$ and so $X_n\to X$ in conditional probability. The next proposition is part of Theorem 7.1.7 in~\cite{A&D}. In the proof we will write for an element $X\in\mf{E}$ and a scalar $\lambda$ that $X\le \lambda$ or $X\ge\lambda$ meaning that $X\le\lambda E$ or $X\ge\lambda E.$ \begin{proposition} Let $X_n, X$ be elements of $\mc{L}^2$ with $\F$-conditional distribution functions $F_n(x), F(x).$ If $X_n$ converges in conditional probability to $X,$ then $F_n(x)$ converges to $F(x)$ in all continuity points of $F.$ \end{proposition} {\em Proof.} We have \begin{align*} F_n(x)&=\F(\P(X_n\le x)E) \\ &=\F(\P(X_n\le x, X>x+\epsilon)E)+\F(\P(X_n\le x, X\le x+\epsilon)E)\\ &\le\F(\P(|X_n-X|\ge\epsilon )E)+\F(\P(X\le x+\epsilon)E)\\ &=\F(\P(|X_n-X|\ge\epsilon)E)+F(x+\epsilon), \end{align*} and \begin{align*} F(x-\epsilon)&=\F(\P(X\le x-\epsilon)E)\\ &=\F(\P(X\le x-\epsilon, X_n>x)E)+\F(\P(X\le x-\epsilon,X_n\le x)E)\\ &\le \F(\P(|X_n-X|\ge \epsilon )E)+\F(\P(X_n\le x)E)\\ &=\F(\P(|X_n-X|\ge\epsilon )E)+F_n(x).
\end{align*} Hence, for every $\phi\in\Phi,$ $$ |\phi|F(x-\epsilon)\le \liminf_{n\to\infty} |\phi|F_n(x)\le\limsup_{n\to\infty}|\phi|F_n(x)\le |\phi|F(x+\epsilon) $$ for every $\epsilon>0.$ This shows that in every continuity point of $F,$ we have that $$ \lim_{n\to\infty}|\phi|F_n(x)=|\phi|F(x). $$ Therefore, $\lim_{n\to\infty}F_n(x)=F(x)$ in every continuity point of $F.$ \qed \bigskip \begin{corollary} If $X_n$ converges to $X$ in $\mc{L}^2,$ then its $\F$-conditional distribution function converges in every point of continuity to the $\F$-conditional distribution function of $X.$ \end{corollary} \section*{Appendix A: Cross-variation formula} We prove that the cross-variation formula for martingales holds. Although the proof is analogous to the proof of the quadratic variation formula \begin{equation} \lim_{|\pi|\to 0} \sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})^2=\<X\>, \end{equation} given in~\cite{G10}, there are some differences. \begin{theorem}\label{theorem 8.5} Let $X,Y$ be $\gamma$-H\"older continuous martingales. Then, \begin{equation}\label{equation 8.12} \lim_{|\pi|\to 0} \sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})=\<X,Y\>, \end{equation} where $\pi=\{t_0,t_1,\ldots,t_m\}$ is a partition of the interval $[a,b]$ with mesh $|\pi|$ and the convergence is in $\mc{L}^1$-conditional probability (see~\cite[Section 6]{G10}). \end{theorem} {\em Proof.\ } Set \begin{equation*} CV_t(\pi):=\sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}}). \end{equation*} We first note that if $X,Y$ are in $\mc{M}_2,$ and if $0\le s<t\le u<v\le a,$ then \begin{multline*} \F[(X_v-X_u)(Y_t-Y_s)]=\F\{\F_u[(X_v-X_u)(Y_t-Y_s)]\}\\ =\F[(Y_t-Y_s)\F_u(X_v-X_u)]=0. \end{multline*} Thus, expectations of products of increments over non-overlapping intervals are zero. Also, $$ \F[(X_v-X_u)(Y_v-Y_u)]=\F[X_vY_v-\F_u(X_uY_v+X_vY_u)+X_u Y_u]=\F[X_v Y_v- X_u Y_u]. $$ Then, since $XY-\<X,Y\>$ is a martingale, \begin{multline*} \F[(X_v-X_u)(Y_v-Y_u)-(\<X,Y\>_v-\<X,Y\>_u)]= \\ \F[(X_vY_v-\<X,Y\>_v)-(X_uY_u-\<X,Y\>_u)]=0.
\end{multline*} It follows that the expectation of the product of terms of the form $(X_vY_v-\<X,Y\>_v)-(X_uY_u-\<X,Y\>_u)$ and of the form $(X_v-X_u)(Y_v-Y_u)-(\<X,Y\>_v-\<X,Y\>_u),$ taken over non-overlapping intervals, will be zero. We first consider the case where $$ \sup\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE,\ s\in[a,t], $$ and, for a stochastic process $Z=(Z_t)_{t\in[a,t]},$ we use the notation $\ds m_t(Z;\pi):=\sup_{1\le k\le m}|Z_{t_k}-Z_{t_{k-1}}|.$ Then, \begin{align*} &\F[CV_t(\pi)-\<X,Y\>_t]^2 \\ &=\F\left[\sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})- (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})\right]^2 \\ &=\sum_{k=1}^m\F\left[(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})- (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})\right]^2 \\ &\le 2\sum_{k=1}^m\F\left[(X_{t_k}-X_{t_{k-1}})^2(Y_{t_k}-Y_{t_{k-1}})^2+ (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})^2\right] \\ &\le 2\sum_{k=1}^m\F\left[m_t(X;\pi)|X_{t_k}-X_{t_{k-1}}|(Y_{t_k}-Y_{t_{k-1}})^2\right] \\ &\phantom{jacobus}+2\F m_t(\<X,Y\>;\pi)\sum_{k=1}^m|\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}}|\\ &\le 4K\F m_t(X;\pi)\sum_{k=1}^m(Y_{t_k}-Y_{t_{k-1}})^2 \\ &\phantom{jacobus}+2\F m_t(\<X,Y\>;\pi)\sum_{k=1}^m\left[(\<X\>_{t_k}-\<X\>_{t_{k-1}})+(\<Y\>_{t_k}-\<Y\>_{t_{k-1}})\right]\mbox{ \rm by (3.3)}\\ &\le 4K\F m_t(X;\pi)V_t^{(2)}(\pi, Y) +2\F m_t(\<X,Y\>;\pi)(\<X\>_{t_m}+\<Y\>_{t_m})\\ &\le 4K\F m_t(X;\pi)V_t^{(2)}(\pi,Y) + 4K\F m_t(\<X,Y\>;\pi). \end{align*} From \cite[Lemma 6.2]{G10}, we have that $\F[V_t^{(2)}(\pi,Y)]^2\le 6K^4E$ and so, applying the Cauchy inequality to the first term, we get (as in the proof of the formula for one variable in \cite{G11}) that it is bounded by $$ 4K\sqrt{6K^4}\sqrt{\F m_t(X;\pi)^2}. $$ Now, if $|\pi|\to 0,$ the $\gamma$-H\"older continuity of $X$ and of $\<X,Y\>$ implies that in both terms the factor $m_t$ tends to zero. Therefore, the right-hand side tends to zero and we have proved the desired result for bounded martingales.
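The convergence just proved for bounded martingales can be illustrated numerically. In the Python sketch below (an illustration only; the correlated pair of Brownian motions and the tolerance are ad hoc choices) we take $X=B^{(1)}$ and $Y=\rho B^{(1)}+\sqrt{1-\rho^2}\,B^{(2)},$ for which $\<X,Y\>_t=\rho tE,$ and compare $CV_t(\pi)$ over a fine partition with $\rho t.$

```python
import numpy as np

# Numerical sketch of the cross-variation formula for two correlated Brownian
# motions X = B1 and Y = rho*B1 + sqrt(1 - rho^2)*B2, whose cross variation is
# <X, Y>_t = rho*t.  The sum of products of increments over a fine partition
# should be close to rho*t on a single path.
rng = np.random.default_rng(3)
rho, t, n_steps = 0.6, 1.0, 100_000
h = t / n_steps

dB1 = rng.normal(0.0, np.sqrt(h), size=n_steps)
dB2 = rng.normal(0.0, np.sqrt(h), size=n_steps)
dX = dB1
dY = rho * dB1 + np.sqrt(1 - rho**2) * dB2

cv_sum = np.sum(dX * dY)   # CV_t(pi) = sum over the partition of dX * dY
print(cv_sum, rho * t)
```

The single-path fluctuation of $CV_t(\pi)$ around $\rho t$ is of order $|\pi|^{1/2},$ which is why a fine partition is used.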
The extension to the general case now proceeds via localization, exactly as in the proof of the corresponding result for the quadratic variation (see~\cite[Section 6]{G10}). \begin{corollary}\label{corollary 8.5} If $\sup\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE,\ s\in[a,t],$ then $$ \F(CV_t(\pi))^2\le 2K\F m_t(X;\pi)V_t^{(2)}(\pi,Y). $$ \end{corollary} \section*{Appendix B: A multi-dimensional version of It\^o's rule} In order to prove the integration by parts formula, we used the following multi-dimensional version of It\^o's rule (see~\cite[Theorem 3.3.6, page 153]{KS}). \begin{theorem}\label{theorem B.1} Let $M_t=(M_t^{(1)},M_t^{(2)},\ldots,M_t^{(n)},\mf{F}_t)_{t\in[0,a]}$ be a vector of continuous local martingales, $B_t:=(B_t^{(1)},B_t^{(2)},\ldots,B_t^{(n)},\mf{F}_t)_{t\in[0,a]}$ a vector of adapted processes of bounded variation with $B_0=0$ and set $X_t=X_0+M_t+B_t,\ 0\le t\le a,$ where $X_0$ is an $\mf{F}_0$-measurable element in $\R^n.$ Let $f(t,x):[0,a]\times\R^n\to\R$ be of class $C^{1,2}.$ Then, for $t\in[0,a],$ \begin{multline} f(t,X_t)=f(0,X_0)+\int_0^t\frac{\partial}{\partial t}f(s,X_s)\,ds +\sum_{i=1}^n\int_0^t\frac{\partial}{\partial x_i}f(s,X_s)\,dB_s^{(i)} \\ +\sum_{i=1}^n\int_0^t\frac{\partial}{\partial x_i}f(s,X_s)\,dM_s^{(i)} \\ +\frac 12\sum_{i=1}^n\sum_{j=1}^n\int_0^t\frac{\partial^2}{\partial x_i\partial x_j}f(s,X_s) \,d\<M^{(i)},M^{(j)}\>_s \end{multline} \end{theorem} {\em Proof.} We refer the reader to the proof of the It\^o formula in the one-dimensional case as given in~\cite[Section 3.2]{G11}, and, without going into too much detail, show how the proof of the multi-dimensional case follows. For notational convenience, set $z_t=(t,x_t),$ (thus $Z_t=(t,X_t)$) and consider the bounded case.
For a partition $\pi=\{0=t_0<t_1<\cdots<t_m=t\},$ we have (with the products interpreted as inner products) \begin{multline}\label{equation 8.15} f(t,X_t)-f(0,X_0)=f(Z_t)-f(Z_0)= \\ =\sum_{k=1}^m\nabla f(Z_{t_{k-1}})(Z_{t_k}-Z_{t_{k-1}})+\frac{1}{2}\sum_{k=1}^m[Y_k(Z_{t_k}-Z_{t_{k-1}})](Z_{t_k}-Z_{t_{k-1}}). \end{multline} The derivative is the total derivative, i.e., $$ \nabla f=(\frac{\partial f}{\partial t},\frac{\partial f}{\partial x_1},\ldots,\frac{\partial f}{\partial x_n}) =(f_t,f_{x_1},\ldots,f_{x_n})$$ and $Y_k$ is an $(n+1)\times (n+1)$-matrix $Y_k=(Y^{k}_{ij})$ where $Y^k_{ij}$ is the image under representation of the continuous function $\ds\frac{\partial^2f}{\partial z_i\partial z_j}(\eta^k_{ij})=f_{z_iz_j}(\eta^k_{ij})$ as constructed in the one-dimensional case in~\cite[Lemma 3.9]{G11}. The next step is to substitute $Z_t=(t,X_0+M_t+B_t)$ in (\ref{equation 8.15}) and then to approximate the resulting sums as $|\pi|$ tends to zero. The terms involving the first order derivatives yield the first four terms in the theorem exactly by the same arguments as in \cite{G11}. In the last term, we have that all mixed terms that contain a factor of the form $(t_i-t_j),$ or $(B_{t_i}-B_{t_j})$ tend to zero and consequently we have to show that, as $|\pi|$ tends to zero, the sum \begin{equation} \frac 12\sum_{k=1}^m[Y_k(M_{t_k}-M_{t_{k-1}})](M_{t_k}-M_{t_{k-1}}) \end{equation} tends to the last term. The first step is to show that we can replace $Y_k$ by the matrix $\ds\left(\frac{\partial^2 f}{\partial x_i\partial x_j}(t_{k-1},X_{t_{k-1}})\right)=\big(f_{x_ix_j}(t_{k-1},X_{t_{k-1}})\big).$ To do this, we consider the difference \begin{multline} \sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n Y^k_{ij}(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}}) \right. \\ \left.
-\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right]\\ =\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n (Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}}))(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right]\\ \le CV_t(\pi)\sup_{\substack{1\le k\le m \\ 1\le i,j\le n}}|Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}})|. \end{multline} As in \cite[Lemma 3.9(c)]{G11}, it follows that $$ \phi\F\left(\sup_{\substack{1\le k\le m \\ 1\le i,j\le n}}|Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}})|\right)\to 0 \mbox{ as $|\pi|\to 0$}. $$ It now follows from the Cauchy inequality and Corollary \ref{corollary 8.5} that $Y^k$ can be replaced by the matrix $\ds\left(f_{x_ix_j}(t_{k-1},X_{t_{k-1}})\right).$ The final step is to consider \begin{multline*} \sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.-\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right] \\ =\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.\phantom{\sum_{i=1}^n}-(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right] \end{multline*} Now, using the arguments from the proof of Theorem~\ref{theorem 8.5}, we get \begin{multline*} \F\left|\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\right.\\ \left.\left.\phantom{\sum_{i=1}^n}-(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right]\right|^2 \\ =\F\left[\sum_{k=1}^m\sum_{i=1}^n\sum_{j=1}^n [f_{x_ix_j}(t_{k-1},X_{t_{k-1}})]^2[(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\
\left.\phantom{\sum_{i=1}^n}-\big(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}}\big)]^2\right]\\ \le 2 \|f_{x_ix_j}\|^2_\infty\F\left[\sum_{k=1}^m\sum_{i=1}^n\sum_{j=1}^n (M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})^2(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})^2\right.\\ \left.\phantom{\sum_{i=1}^n}+\big(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}}\big)^2\right] \\ \le 4K\|f_{x_ix_j}\|^2_\infty\left[\sum_{i=1}^n\sum_{j=1}^n \left(\F m_t(M^{(i)},\pi)V^{(2)}(\pi,M^{(j)}) \right.\right. \\ \left.\phantom{\sum_{i=1}^n}\left.+\F m_t(\<M^{(i)},M^{(j)}\>,\pi)\right)\right]. \end{multline*} The remainder of the proof, showing that the right-hand side tends to zero, proceeds as in the proof of formula (\ref{equation 8.12}). \section{Exponential processes} We first follow the approach of Kuo. Denote by $\mc{L}_{\pred}(L^1[a,b],\<M\>)$ the set of all elements $(X_t)$ satisfying \begin{enumerate} \item[\rm{1.}] $(X_t)$ is predictable and adapted in $\mf{E}^u$ (see~\cite[Section 7]{G9}); \item[\rm{2.}] $X$ is integrable, i.e., $\int_a^b|X_t|\,d\<M\>\in\mf{E}^u,$ \end{enumerate} and recall that $(X_t)\in\mc{L}_{\pred}(L^2[a,b],\<M\>)$ if the first condition above holds and if $|X|^2$ is integrable.
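Before proceeding, note that in the classical scalar probabilistic setting the discrete identity underlying It\^o's formula can be checked numerically. The following sketch (our own illustration, with an arbitrarily chosen seed and partition size) verifies that for $f(x)=x^2$ and a Brownian path $B$, the telescoping identity $B_T^2=2\sum_k B_{t_{k-1}}(B_{t_k}-B_{t_{k-1}})+\sum_k(B_{t_k}-B_{t_{k-1}})^2$ holds exactly along any partition, while the second sum (the discrete quadratic variation) concentrates around $T$ as the mesh tends to zero:

```python
import numpy as np

# Classical illustration (our own sketch): for f(x) = x^2 and a Brownian path B,
#   B_T^2 = 2 * sum_k B_{t_{k-1}} (B_{t_k} - B_{t_{k-1}}) + sum_k (B_{t_k} - B_{t_{k-1}})^2
# holds exactly (pure algebra), and the discrete quadratic variation
# sum_k (dB_k)^2 concentrates around T as the mesh tends to zero.
rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dB = rng.normal(0.0, np.sqrt(T / n), size=n)   # Brownian increments on the partition
B = np.concatenate(([0.0], np.cumsum(dB)))     # Brownian path at the partition points

ito_sum = 2.0 * np.sum(B[:-1] * dB)            # discrete stochastic integral term
quad_var = np.sum(dB**2)                       # discrete quadratic variation
```

The first assertion below is exact up to floating-point roundoff; the second reflects the concentration of the quadratic variation around $T$.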
\begin{definition}\label{Ito process}({\rm Kuo, Definition 7.4.2}) An It\^o process $X=(X_t)_{t\in[a,b]},$ $X_a\in\mf{F}_a,$ is a stochastic process defined by \begin{equation} X_t:=X_a+\int_a^t Y_s\,dB_s +\int_a^t Z_s\,ds,\ \ a\le t\le b, \end{equation} where $Y=(Y_s)\in \mc{L}_{\pred}(L^2[a,b],\<B\>)$ and $Z=(Z_t)\in\mc{L}_{\pred}(L^1[a,b],\<B\>).$ \end{definition} \noindent We now state the general form of It\^o's formula. \begin{theorem} Let $(X_t)$ be an It\^o process defined by~(\ref{Ito process}) and suppose that $\theta(t,x)$ is a continuous function with continuous partial derivatives $\theta_t,$ $\theta_x$ and $\theta_{xx}.$ Then \begin{multline} \theta(t,X_t)=\theta(a,X_a)+\int_a^t\theta_x(s,X_s)Y_s\,dB_s \\ +\int_a^t\left[\theta_t(s,X_s)+\theta_x(s,X_s)Z_s+\frac{1}{2}\theta_{xx}(s,X_s)Y_s^2\right]\,ds. \end{multline} \end{theorem} \begin{corollary}(\textrm{Kuo, Example 7.4.4}) Let $Y=(Y_t)\in\mc{L}(L^2,\<B\>)$ and consider the It\^o process \begin{equation}\label{exp process} X_t=\int_0^t Y_s\,dB_s-\frac{1}{2}\int_0^tY_s^2\,ds,\quad 0\le t\le 1, \end{equation} and the function $\theta(t,x)=e^x$ for all $t.$ Then, applying the result above, we get \begin{multline} \exp\left(\int_0^t Y_s\,dB_s-\frac{1}{2}\int_0^tY_s^2\,ds\right)= E+\int_0^tY_s\exp(X_s)dB_s \\ +\int_0^t\left[\exp(X_s)\left(-\frac{1}{2}Y_s^2\right)+\frac{1}{2}\exp(X_s)Y_s^2\right]\,ds \\ =E+\int_0^t Y_s\exp\left(\int_0^s Y_u\,dB_u-\frac{1}{2}\int_0^sY_u^2\,du\right)dB_s. \end{multline} \end{corollary} The process $(X_t)$ defined in~(\ref{exp process}) plays a central role in Girsanov's theorem, which answers the question of which processes $(Y_t)$ are such that $(\exp(X_t))$ is a martingale. It is clear that $(\exp(X_t))$ is a local martingale. \begin{definition}(\textrm{Kuo, Definition 8.7.1, p 137}) The exponential process given by $Y\in \mc{L}(L^2[a,b],t)$ is defined to be the stochastic process \begin{equation} \mathscr{E}_{Y,t}=\exp\left[\int_0^t Y_s\,dB_s-\frac{1}{2}\int_0^t Y_s^2\,ds\right],\ \ 0 \le t \le T.
\end{equation} \end{definition} It follows from the corollary above that \begin{equation}\label{exp process2} \mathscr{E}_{Y,t}=E+\int_0^t Y_s\mathscr{E}_{Y,s}\, dB_s,\ \ 0\le t\le T. \end{equation} \section{Abandoned, Entered and Stopped processes} Let $\S$ be a stopping time for the filtration $(\F_t,\mf{F}_t)_{t\in J}$ and let $(X_t)_{t\in J}$ be an adapted stochastic process. Then the projections onto the band $\mf{B}(tE\le\S E)$ are the projections $\P^{\S}_t=(\S_t^\ell)^d\in\mf{P}_t$ and therefore the process $(A^{\S,X}_t)=(\P^{\S}_tX_t)_{t\in J}$ is an adapted stochastic process. We call $(A^{\S,X}_t)$ the process that {\em abandons $(X_t)$ at time $\S,$} or simply the {\em abandoned process}. On the band $\mf{B}(tE\le\S E)$ it has the value $X_t$ and on the band $\mf{B}(tE>\S E)$ it is zero. Similarly, the projections onto the band $\mf{B}(tE\ge\S E),$ i.e., the projections $\S^r_t\in\mf{P}_t,$ will be denoted by $\Q^{\S}_t.$ The adapted process $(B^{\S,X}_t)=(\Q^{\S}_tX_t)_{t\in J}$ will be called the {\em process that enters $(X_t)$ at time $\S,$} or the {\em entered process}. Here we have that on the band $\mf{B}(tE<\S E)$ it has the value $0$ and on the band $\mf{B}(tE\ge\S E)$ it has the value $X_t.$ If $X_t\ge 0$ for all $t\in J,$ we define the adapted process $(X_{\S,t})$ by \begin{equation} X_{\S,t}:=A^{\S,X}_t\n B^{\S,X}_t\in\mf{E}. \end{equation} We call the point $t_0$ a discontinuity of $\S$ if the right-continuous spectral system $(\S^\ell_t)$ is constant on an interval to the right of $t_0$; otherwise, we call $t_0$ a point of continuity. The set of points of discontinuity of $\S$ will be denoted by $D.$ We define $X_\S$ as follows: \begin{equation} X_\S:=\begin{cases} &\sup_{t\in J\setminus D}X_{\S,t}=\sup_{t\in J}A^{\S,X}_t\n B^{\S,X}_t \\ &\inf_{t>t_0}A^{\S,X}_t\n B^{\S,X}_t \mbox{ if $t_0\in D.$}\end{cases} \end{equation} This implies that $X_\S\in\mf{E}^u,$ which is an $f$-algebra. In order to define $X_\S$ for general elements, we use the decomposition of $X_t$ into its positive and negative parts.
We therefore use $X_t^+$ and $X_t^-$ to define, as above, two elements $X_\S^+$ and $X_\S^-.$ The element $X_\S$ is then defined as \begin{equation} X_\S:=X_\S^+ -X_\S^-\in\mf{E}^u. \end{equation} The {\em stopped process} is then defined in the usual way as $(X_{t\n\S})_{t\in J}.$ \begin{lemma} The system of elements $\{X_{\S,t}\,|\,t\in J\}$ is a disjoint system. \end{lemma} {\em Proof.}\ Let $s<t.$ It is then clear that at least one of the factors $A_t^{\S,X},$ $B_t^{\S,X},$ $A_s^{\S,X}$ or $B_s^{\S,X}$ is zero. As they are all positive, it follows that $|X_{\S,t}|\wedge|X_{\S,s}|=0.$ \qed \begin{example} \rm Consider the case mentioned in the introduction where $(X_t=X(t,\omega))_{t\in J,\omega\in\Omega}$ is a stochastic process in $L^1(\Omega,\mf{F},P)$ adapted to the filtration $(\mf{F}_t)$ of $\sigma$-algebras contained in $\mf{F}.$ If $\S(\omega)$ is a stopping time for the filtration, then the process $(X_{\S,t}(\omega))$ is defined by the elements $$ X_{\S,t}(\omega)=\begin{cases} 0 & \mbox{if $t<\S(\omega)$ or if $t>\S(\omega)$} \\ X_t(\omega) & \mbox{if $t=\S(\omega).$}\end{cases} $$ \end{example} We consider the case in which $\S$ is a simple function. In that case $\S$ can be written as $$ \S=\sum_{i=1}^n t_i\P_{t_i}, \ \P_{t_i}\in\mf{P}_{t_i}, a\le t_1<t_2<\cdots<t_n\le b, $$ with the projections $\P_{t_i}\P_{t_j}=0$ if $i\ne j.$ In this case we have that the decreasing function $(\P^\S_t)_{t\in J}$ has the finite range consisting of the projections $$\{\I=\sum_{j=1}^n\P_{t_j},\sum_{j=2}^{n}\P_{t_j},\sum_{j=3}^{n}\P_{t_j},\ldots,\P_{t_n} \}=\{\Q_1,\Q_2,\ldots,\Q_n\}. $$ It can therefore be written as $$ (\P^\S_t)_{t\in J}=(\sum_{i=1}^n \Q_iI_{(t_{i-1},t_i]}(t))_{t\in J},\ \ (J=[a,b]). $$ Hence, $$ A_t^{\S,X}=\sum_{i=1}^n\Q_i X_t I_{(t_{i-1},t_i]}(t).
$$ On the other hand, we have that $\Q_t^\S$ is the increasing system $\{\P_{t_1},\P_{t_1}+\P_{t_2},\ldots,\P_{t_1}+\cdots+\P_{t_n}\}.$ So, $$ B^{\S,X}_t=\sum_{i=1}^n\P_{t_i}X_tI_{[t_{i-1},t_i]}(t). $$ It follows that $X_\S=\sum_{i=1}^n \P_{t_i}X_{t_i}.$ We remark that we defined, for simple functions $\S,$ in \cite{G1,G2} the element $X_\S$ by this formula and then extended the definition to the case of submartingales. Next question: Let $\mf{F}_\S$ be the Dedekind complete Riesz subspace of $\mf{E}$ generated by the abandoned process $(A^{\S,X}_t)$; is it the same as the one we defined earlier? It contains $\mf{F}_0=\mf{F},$ so there exists a conditional expectation $\F_\S$ that maps $\mf{E}$ onto $\mf{F}_\S.$ Can we characterize it in terms of the filtration $(\F_t,\mf{F}_t)$? What is $(\A_t^{\S,X}):=(\P_t^\S\F_t)$? What is $(\B_t^{\S,X}):=(\Q_t^\S\F_t)$? What is their infimum $\F_{\S,t}=\A_t^{\S,X}\wedge \B_t^{\S,X}$? Is the supremum of this system equal to $\F_\S$? Note that $\A_t^{\S,X}$ is not a conditional expectation (it maps $E$ to $\P_t^{\S}E,$ which is a weak order unit for the projection band that is the range of $\P_t^{\S}$). If $(X_t)$ is a submartingale, then for each $s<t$ we have $$ \A_s^{\S,X}A_t^{\T,X}=\P_s^{\S}\F_s\P_t^{\T}X_t =\F_s\P_s^{\S}\P_t^{\T}X_t=\F_s\P_s^{\S}X_t=\P_s^{\S}\F_sX_t\ge \P_s^{\S} X_s=A_s^{\S,X}. $$ We have in mind a simple proof of Doob's optional sampling theorem: $\F_\S X_\T\ge X_\S.$ \end{document}
\section{Introduction} Let $X$ be a Hilbert space. The critical points of a continuously Fr\'{e}chet-differentiable functional $E:X\to\mathbb{R}$ are defined as solutions to the associated Euler--Lagrange equation \[ E'(u)=0, \quad u\in X, \] where $E'$ is the Fr\'{e}chet-derivative of $E$. The first candidates for critical points are local minima and maxima, on which traditional calculus of variations and optimization methods focus. Critical points that are not local extrema are unstable and called saddle points. When the second-order Fr\'{e}chet-derivative $E''$ exists at some critical point $u_*$, the instability of $u_*$ can be described by its Morse index (MI) \cite{Chang1993}. In fact, the MI of such a critical point $u_*$, denoted by $\mathrm{MI}(u_*)$, is defined as the maximal dimension of subspaces of $X$ on which the linear operator $E''(u_*)$ is negative-definite. In addition, $u_*$ is said to be nondegenerate if $E''(u_*)$ is invertible. For a nondegenerate critical point, if its $\mathrm{MI}=0$, it is a strict local minimizer and thus a stable critical point, while if its $\mathrm{MI}>0$, it is a saddle point and an unstable critical point. Generally speaking, the higher the MI is, the more unstable the critical point is. Saddle points, as unstable equilibria or transient excited states, are widely found in numerous nonlinear problems in physics, chemistry, biology and materials science \cite{Chang1993,CZN2000IJBC,H1973AA,LWZ2013JSC,Rabinowitz1986,XYZ2012SISC,ZRSD2016npj}. They play an important role in many interesting applications, such as studying rare transitions between different stable/metastable states \cite{CLEZS2010PRL,ERV2002PRB} and predicting morphologies of the critical nucleus in solid-state phase transformations \cite{ZCD2007PRL,ZRSD2016npj}.
Due to various difficulties in theoretical analysis and direct experimental observation, more and more attention has been paid to developing effective and reliable numerical methods for capturing saddle points. Compared with the computation of stable critical points, it is much more challenging to design a stable, efficient and globally convergent numerical method for finding saddle points due to their instability and multiplicity. In recent years, motivated by some early algorithms for searching for saddle points in computational physics/chemistry/biology, the dimer method \cite{HJ1999JCP,ZD2012SINUM}, gentlest ascent dynamics \cite{EZ2011NL}, climbing string method \cite{ERV2002PRB,RV2013JCP}, etc., have been proposed and successfully implemented to find saddle points. It is noted that the methods mentioned above mainly consider saddle points with $\mathrm{MI}=1$. With the development of science and technology, the stable numerical computation of multiple unstable critical points with high MI has attracted more and more attention in both theory and applications. Studies of relevant numerical methods have been carried out in the literature. Inspired by the minimax theorems in the critical point theory (see, e.g., \cite{Rabinowitz1986}) and the work of Choi and McKenna \cite{CM1993NA}, Ding, Costa and Chen \cite{DCC1999NA} and Chen, Zhou and Ni \cite{CZN2000IJBC}, a local minimax method (LMM) was developed by Li and Zhou in \cite{LZ2001SISC,LZ2002SISC} with its global convergence established in \cite{LZ2002SISC,Z2017CAMC}. Then, in \cite{XYZ2012SISC}, Xie, Yuan and Zhou modified the LMM with a significant relaxation for the domain of the local peak selection, which is a vital concept for the LMM (see below), and provided the global convergence analysis for this modified LMM by overcoming the lack of homeomorphism of the local peak selection.
More modifications and developments of the LMM for multiple solutions of various problems, such as elliptic partial differential equations (PDEs) with nonlinear boundary conditions, quasi-linear elliptic PDEs in Banach spaces, upper semi-differentiable locally Lipschitz continuous functionals and so on, have also been studied. We refer to \cite{LWZ2013JSC,Y2013MC,YZ2005SISC,Z2017CAMC} and references therein for this topic. On the other hand, Xie, Chen and their collaborators proposed a search extension method and its modified versions for finding high-index saddle points based on generalized Fourier series expansion and the homotopy method, see \cite{CX2004CMA,CX2008SCM,LXY-SGSEM,XCX2005IMA}. In \cite{YZZ2019SISC}, Yin, Zhang and Zhang proposed a high-index optimization-based shrinking dimer method to find unconstrained high-index saddle points. Recently, a constrained gentlest ascent dynamics was introduced in \cite{LXY-CGAD} to find constrained saddle points with any specified MI and applied to study excited states of Bose--Einstein condensates. Since our work in this paper is inspired by the traditional LMMs, we briefly recall their history. The idea of the LMM is to characterize a saddle point with specified MI as a solution to a two-level nested local optimization problem consisting of an inner local maximization and an outer local minimization. Owing to this local minimax characterization, a corresponding numerical algorithm for finding multiple saddle points with $\mathrm{MI}\geq 1$ becomes possible. In practical computations, the inner local maximization is equivalent to an unconstrained optimization problem in a Euclidean space with a fixed dimension, and thus many standard optimization algorithms can be employed to solve it efficiently. The outer local minimization is more challenging because it solves an optimization problem on an infinite-dimensional submanifold (as described later) and is the main concern of the LMM family.
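The two-level structure can be made concrete on a small finite-dimensional toy problem (our own illustration; the functional and all parameter values below are assumptions for the sketch, not taken from the cited works). For $E(u)=\frac12 u^{\top}Au-\frac14\|u\|^4$ on $\mathbb{R}^2$ with support space $L=\{0\}$, the inner maximization along the ray $\{tv:t\ge0\}$ is explicit, and the outer minimization runs on the unit circle with the normalized gradient update used by the LMM:

```python
import numpy as np

# Toy two-level minimax (our own illustrative example, not from the cited papers):
# E(u) = 0.5*u.(A u) - 0.25*|u|^4 on R^2, support space L = {0}.
A = np.diag([1.0, 2.0])

def E(u):                          # the toy energy functional
    return 0.5 * u @ A @ u - 0.25 * (u @ u) ** 2

def grad_E(u):                     # gradient of E
    return A @ u - (u @ u) * u

def peak(v):                       # inner maximization: for |v| = 1 and
    lam = v @ A @ v                # lam = v.(A v) > 0, E(t*v) is maximal at
    return np.sqrt(lam) * v        # t = sqrt(lam), so p(v) = sqrt(lam)*v

# Outer minimization of E(p(v)) on the unit circle via the normalized
# gradient update v <- (v - alpha*g)/|v - alpha*g| with g = grad E(p(v)).
v = np.array([0.6, 0.8])
alpha = 0.2
for _ in range(500):
    g = grad_E(peak(v))
    v = (v - alpha * g) / np.linalg.norm(v - alpha * g)

u = peak(v)   # approximate critical point of E (here u is close to (1, 0))
```

In this toy setting the iteration settles at the eigenvector of $A$ with the smallest eigenvalue, where the gradient of $E$ vanishes.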
In \cite{LZ2001SISC}, an exact step-size rule combined with a normalized gradient descent method was introduced for the outer local minimization, and then the LMM was successfully applied to solve a class of semilinear elliptic boundary value problems (BVPs) for multiple unstable solutions. However, the exact step-size rule is not only expensive in the numerical implementation, but also inconvenient for establishing convergence properties \cite{LZ2002SISC}. To compensate for these shortcomings, in the subsequent work \cite{LZ2002SISC}, Li and Zhou introduced a new step-size rule that borrowed the idea of the Armijo or backtracking line search rule \cite{Armijo1966} preferred in optimization theory. Thanks to this step-size rule, the global convergence result for the LMM was obtained in \cite{LZ2002SISC,Z2017CAMC}. Then, a normalized Goldstein-type LMM was put forward in our work \cite{LXY2021CMS} by introducing a normalized Goldstein step-size search rule that guarantees the sufficient decrease of the energy functional and simultaneously prevents the step-size from being too small. The feasibility and global convergence analysis of this approach were also provided there. Recently, in the subsequent work \cite{LXY-NWPLMM}, both normalized Wolfe--Powell-type and strong Wolfe--Powell-type step-size search rules for the LMM were introduced and the global convergence of the corresponding algorithms for general descent directions was verified. In summary, the algorithm design and convergence analysis of monotonically decreasing LMMs with several typical normalized inexact step-size search rules have been systematically investigated. However, it should be pointed out that, so far, all LMMs have to guarantee the sufficient decrease of the objective functional, and thus normalized monotone step-size search rules are crucial. This strict requirement of monotone decrease may lead to expensive computations and reduce the convergence rate.
In this paper, we focus on proposing novel LMMs with fast convergence for finding multiple saddle points by developing suitable normalized nonmonotone step-size search strategies to replace the normalized monotone step-size search rules utilized in traditional LMMs. Let us recall that, in optimization theory, the Barzilai--Borwein (BB) method developed by Barzilai and Borwein in 1988 \cite{BB1988IMANUM} is an efficient nonmonotone method that, in contrast to monotone methods, does not require the decrease of the objective function value at each iteration. Actually, it can be regarded as a gradient method with modified step-sizes, inspired by the idea of the quasi-Newton method for avoiding matrix computations. Compared with the classical steepest descent method put forward by Cauchy in \cite{Cauchy1847}, the BB method requires less computational effort and often speeds up the convergence significantly \cite{BB1988IMANUM,Fletcher2005}. To ensure global convergence, the BB method is usually integrated with nonmonotone line search strategies to minimize general smooth functions in optimization theory \cite{R1997SIOPT,WY2013MP}. It should be pointed out that, if a nonmonotone search strategy is used, occasional growth of the function value during the iteration is permitted. Two popular nonmonotone search strategies, i.e., the Grippo--Lampariello--Lucidi (GLL) search strategy and the Zhang--Hager (ZH) search strategy, were introduced in \cite{GLL1986SINUM} and \cite{ZH2004SIOPT}, respectively. By combining the GLL nonmonotone search strategy with the BB method, Raydan introduced an efficient globalized BB method for large-scale unconstrained optimization problems in \cite{R1997SIOPT}; the resulting algorithm is competitive with some standard conjugate gradient methods. Recently, Zhang et al.
\cite{YZZ2019SISC,ZDZ2016SISC} combined the BB method with the shrinking dimer method \cite{ZD2012SINUM} to efficiently find saddle points with specified MI, though without a theoretical proof of global convergence. Besides, as far as we know, there is no other work applying the BB method to finding saddle points. Inspired by traditional LMMs, the BB method and its globalization strategies founded on nonmonotone step-size searches in optimization theory, this paper is devoted to establishing globally convergent nonmonotone LMMs for finding multiple saddle points of nonconvex functionals in Hilbert spaces. Actually, a normalized ZH-type nonmonotone step-size search rule will be introduced for the LMM, and its feasibility and global convergence will be rigorously established. We also consider a normalized GLL-type nonmonotone step-size search, with its feasibility and partial convergence results verified in this paper. Then, a globally convergent Barzilai--Borwein-type LMM (GBBLMM) will be presented by explicitly constructing the BB-type step-size as a trial step-size in each iteration to speed up the LMM. It is worthwhile to point out that, like other members of the LMM family, the GBBLMM is able to find saddle points in a stable way and to avoid repeated convergence to previously found saddle points. Finally, the GBBLMM will be applied and compared with traditional LMMs in finding multiple unstable solutions of various problems, for example, the Lane--Emden equation and the H\'{e}non equation in astrophysics, and linear elliptic equations with semilinear Neumann boundary conditions, which appear widely in many scientific fields, such as corrosion/oxidation modeling and metal-insulator or metal-oxide semiconductor systems. The rest of this paper is organized as follows. In section~\ref{sec:pre}, the preliminaries for the LMM and the BB method in optimization theory are provided.
In section~\ref{sec:nmlmm}, the normalized ZH-type and normalized GLL-type nonmonotone step-size rules are introduced for the LMM and analyzed with their feasibility and convergence results. Then, the GBBLMM is presented in section~\ref{sec:gbblmm} by constructing the explicit BB-type trial step-size of the nonmonotone step-size search rule. Further, in section~\ref{sec:numer}, extensive numerical results are reported and compared with those of traditional LMMs. Finally, some concluding remarks are provided in section~\ref{sec:con}. \section{Preliminaries} \label{sec:pre} In this section, for the convenience of later discussions, we revisit the basic ideas and algorithm framework of the LMM as well as the BB method in optimization theory. \subsection{Local minimax principle}\label{subsec:lmt} Throughout this paper, we assume that $E$ has a local minimizer at $0\in X$ and focus on finding nontrivial saddle points of $E$. We begin with some notations and basic lemmas. Let $(\cdot,\cdot)$ and $\|\cdot\|$ be the inner product and norm in $X$ respectively, $\langle\cdot,\cdot\rangle$ the duality pairing between $X$ and its dual space $X^*$, $S=\{v\in X:\|v\|=1\}$ the unit sphere in $X$, and $Y^\bot$ the orthogonal complement to $Y\subset X$. Suppose that $L\subset X$ is a finite-dimensional closed subspace that serves as the so-called support or base space \cite{LZ2001SISC,LZ2002SISC,XYZ2012SISC}. Define the half subspace $[L, v]=\{tv+w^L:t\geq0,w^L\in L\}$ for any $v\in S$. \begin{definition}[\cite{XYZ2012SISC}]\label{def:pv} Denote by $2^X$ the set of all subsets of $X$. A {\em peak mapping} of $E$ w.r.t. $L$ is a set-valued mapping $P:S\to 2^X$ s.t. \[ P(v)=\left\{u\in X:\mbox{$u$ is a local maximizer of $E$ on $[L, v]$}\right\},\quad\forall\, v\in S. \] A {\em peak selection} of $E$ w.r.t. $L$ is a mapping $p:S\to X$ s.t. $p(v)\in P(v)$, $\forall\, v\in S$. For a given $v\in S$, we say that $E$ has a {\em local peak selection} w.r.t.
$L$ at $v$, if there exist a neighborhood $N_v$ of $v$ and a mapping $p:N_v\cap S\to X$ s.t. $p(u)\in P(u)$, $\forall\, u\in N_v\cap S$. \end{definition} Let $p(v)$ be a local peak selection of $E$ w.r.t. $L$ at $v=v^\bot+v^L\in S\backslash L$ with $v^\bot\in L^\bot\backslash\{0\}$ and $v^L\in L$; the definition of the local peak selection immediately leads to the following property. \begin{lemma}[\cite{XYZ2012SISC}]\label{lem:orth} Suppose $E\in C^1(X,\mathbb{R})$ and let $p(v)$ be a local peak selection of $E$ w.r.t. $L$ at $v\in S\backslash L$, then $\langle E'(p(v)),p(v)\rangle=0$ and $\langle E'(p(v)),w\rangle=0$, $\forall\, w\in[L, v]$. \end{lemma} Lemma~\ref{lem:orth} implies that each local peak selection belongs to the Nehari manifold $\mathcal{N}_E=\{u\in X\backslash\{0\}:\langle E'(u),u\rangle=0\}$ which contains all nontrivial critical points of $E$. Denote $g=\nabla E(p(v))\in X$ as the gradient of $E$ at $p(v)$, i.e., the canonical dual or Riesz representer of $E'(p(v))\in X^*$, which is defined by \[ (g,\phi)=\langle E'(p(v)),\phi\rangle,\quad \forall\, \phi\in X. \] Obviously, Lemma~\ref{lem:orth} yields $g\in[L, v]^\bot$. In addition, for each $\alpha>0$, there is a unique orthogonal decomposition for $v(\alpha)={(v-\alpha g)}/{\|v-\alpha g\|}$ as \begin{equation}\label{eq:va-dcom} v(\alpha)=\frac{v-\alpha g}{\|v-\alpha g\|} =\frac{v-\alpha g}{\sqrt{1+\alpha^2\|g\|^2}} =v^L(\alpha)+v^\bot(\alpha), \end{equation} with \begin{equation}\label{eq:vaL} v^L(\alpha)=\frac{v^L}{\sqrt{1+\alpha^2\|g\|^2}}\in L,\quad v^\bot(\alpha)=\frac{v^\bot-\alpha g}{\sqrt{1+\alpha^2\|g\|^2}}\in L^\bot. \end{equation} The following two lemmas follow from direct calculations, and their proofs are similar to those of Lemmas~3.2-3.4 in \cite{LXY2021CMS}.
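The decomposition \eqref{eq:va-dcom}-\eqref{eq:vaL} and the normalization identity $\|v-\alpha g\|=\sqrt{1+\alpha^2\|g\|^2}$ can be checked in a concrete low-dimensional instance (our own illustration, with arbitrarily chosen vectors): in $\mathbb{R}^3$, take $L=\operatorname{span}\{e_1\}$, a unit vector $v=v^L+v^\bot$, and $g\in[L,v]^\bot$.

```python
import numpy as np

# Concrete check (our own illustration) of the decomposition of v(alpha): in R^3,
# L = span{e1}, v = v_L + v_perp is a unit vector, and g lies in [L, v]^perp,
# i.e. g is orthogonal to both e1 and v; here g is along e3.
v_L = np.array([0.6, 0.0, 0.0])        # component of v in L
v_perp = np.array([0.0, 0.8, 0.0])     # component of v in L^perp
v = v_L + v_perp                       # |v| = 1
g = np.array([0.0, 0.0, 2.0])          # gradient direction in [L, v]^perp
alpha = 0.3

norm = np.linalg.norm(v - alpha * g)   # equals sqrt(1 + alpha^2 |g|^2) since g _|_ v
v_a = (v - alpha * g) / norm           # v(alpha)

s = np.sqrt(1.0 + alpha**2 * (g @ g))
v_a_L = v_L / s                        # component of v(alpha) in L
v_a_perp = (v_perp - alpha * g) / s    # component of v(alpha) in L^perp
```

The same numbers also exhibit the monotonicity of the components and the two-sided bound on $\|v(\alpha)-v\|$ stated in the lemma below.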
\begin{lemma}[\cite{LXY2021CMS}]\label{lem:vs} For $v(\alpha)$ expressed in \eqref{eq:va-dcom} with $v=v^\bot+v^L\in S\backslash L$, $v^\bot\in L^\bot\backslash\{0\}$ and $v^L\in L$, it holds that $v(\alpha)\in S\backslash L$, $\|v^L(\alpha)\|\leq\|v^L\|<1$ and $\|v^\bot(\alpha)\|\geq\|v^\bot\|>0$, $\forall\,\alpha>0$. Further, if $g\neq0$, then \begin{equation}\label{eq:va-v} \frac{\alpha\|g\|}{\sqrt{1+\alpha^2\|g\|^2}}<\|v(\alpha)-v\|<\alpha\|g\|, \quad\forall\,\alpha>0, \end{equation} and \[ \lim_{\alpha\to0^+}\frac{\|v(\alpha)-v\|}{\alpha\|g\|}=1. \] \end{lemma} \begin{lemma}[\cite{LXY2021CMS}]\label{lem:pv-tv} Let $p$ be a local peak selection of $E$ w.r.t. $L$ at $\bar{v}\in S\backslash L$. For all $v\in S$ near $\bar{v}$, denote $p(v)=t_vv+w_v^L$ with $t_v\geq0$ and $w_v^L\in L$. If $p$ is continuous at $\bar{v}$, then the mappings $v\mapsto t_v$ and $v\mapsto w_v^L$ are continuous at $\bar{v}$. \end{lemma} In view of Lemmas~\ref{lem:orth}-\ref{lem:pv-tv} and following the lines of the proof of Lemmas~3.6-3.7 in \cite{LXY2021CMS}, one can obtain the following result, which is an improved version of Lemma~2.1 in \cite{LZ2001SISC} and Lemma~2.13 in \cite{Y2013MC} since the domain of the local peak selection is changed from $S\cap L^\bot$ to $S$. \begin{lemma}[\cite{LXY2021CMS}]\label{lem:armijo} Suppose $E\in C^1(X,\mathbb{R})$ and let $p(v)=t_vv+w_v^L$ be a local peak selection of $E$ w.r.t. $L$ at $v\in S\backslash L$, where $t_v\geq0$ and $w_v^L\in L$. If (i) $p$ is continuous at $v$; (ii) $t_v>0$; and (iii) $g=\nabla E(p(v))\neq0$ hold, then for any $\sigma\in(0,1)$, there exists $\alpha^A>0$ s.t. \[ E(p(v(\alpha)))< E(p(v))-\sigma\alpha t_v\|g\|^2,\quad \forall \,\alpha\in(0,\alpha^A). \] \end{lemma} Thanks to Lemma~\ref{lem:armijo}, the following result can be obtained by following the proof of Theorem~2.1 in \cite{LZ2001SISC}; its proof is omitted here for simplicity.
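The descent inequality in Lemma~\ref{lem:armijo} can be observed numerically on a toy functional (our own illustration; the functional, direction and parameter values are assumptions made for the sketch). For $E(u)=\frac12 u^{\top}Au-\frac14\|u\|^4$ on $\mathbb{R}^2$ with $L=\{0\}$, the peak selection is explicit, $p(v)=t_vv$ with $t_v=\sqrt{v^{\top}Av}$, and the gradient at $p(v)$ is orthogonal to $v$, as Lemma~\ref{lem:orth} predicts:

```python
import numpy as np

# Toy check (our own example) of the Armijo-type descent inequality for
# E(u) = 0.5*u.(A u) - 0.25*|u|^4 on R^2 with support space L = {0}.
A = np.diag([1.0, 2.0])

def E(u):
    return 0.5 * u @ A @ u - 0.25 * (u @ u) ** 2

def grad_E(u):
    return A @ u - (u @ u) * u

theta = 0.5                                  # a non-critical direction on S
v = np.array([np.cos(theta), np.sin(theta)])
t_v = np.sqrt(v @ A @ v)                     # peak selection: p(v) = t_v * v
p_v = t_v * v
g = grad_E(p_v)                              # g is orthogonal to v

sigma, alpha = 0.5, 1e-2                     # Armijo parameter, a small step-size
v_a = (v - alpha * g) / np.linalg.norm(v - alpha * g)
t_va = np.sqrt(v_a @ A @ v_a)
lhs = E(t_va * v_a)                          # E(p(v(alpha)))
rhs = E(p_v) - sigma * alpha * t_v * (g @ g) # Armijo bound on the decrease
```

For this small step-size the strict inequality $E(p(v(\alpha)))<E(p(v))-\sigma\alpha t_v\|g\|^2$ indeed holds.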
\begin{theorem}\label{thm:lmt0} If $E\in C^1(X,\mathbb{R})$ has a local peak selection w.r.t. $L$ at $v_*\in S\backslash L$, denoted by $p(v_*)=t_{v_*}v_*+w_{v_*}^L$, satisfying (i) $p$ is continuous at $v_*$; (ii) $t_{v_*}>0$; and (iii) $v_*$ is a local minimizer of $E(p(v))$ on $S\backslash L$, then $p(v_*)\notin L$ is a critical point of $E$. \end{theorem} Before establishing the existence result, the concept of compactness is needed. \begin{definition}[\cite{Rabinowitz1986}] A functional $E\in C^1(X,\mathbb{R})$ is said to satisfy the Palais--Smale (PS) condition if every sequence $\{w_j\}\subset X$ s.t. $\{E(w_j)\}$ is bounded and $E'(w_j)\to0$ in $X^*$ has a convergent subsequence. \end{definition} For a given $v_0=v_0^\bot+v_0^L\in S\backslash L$ with $v_0^\bot\in L^\bot\backslash\{0\}$ and $v_0^L\in L$, define \begin{equation}\label{eq:V0} \mathcal{V}_0:=\{v=v^\bot+\tau v_0^L\in S:v^\bot\in L^\bot,0\leq\tau\leq1\}\subset S\backslash L. \end{equation} As illustrated below in Lemma~\ref{lem:mono}, the sequence $\{v_k\}$ generated by the LMM algorithm with initial data $v_0$ is contained in $\mathcal{V}_0$; hence the domain of the peak selection $p$ can be restricted to the closed subset $\mathcal{V}_0$ instead of $S$. Similar to Theorem~2.2 in \cite{LZ2001SISC}, by applying Ekeland's variational principle and Lemma~\ref{lem:armijo}, we have the following existence result, with the proof given in Appendix~\ref{app:prf-lmt}; it is indeed an improvement of that in \cite{LZ2001SISC}. In fact, a continuous peak selection $p$ defined on $\mathcal{V}_0$ instead of $S\cap L^\bot$, in general, is no longer a homeomorphism. \begin{theorem}\label{thm:lmt} Let $E\in C^1(X,\mathbb{R})$ satisfy the (PS) condition. If $E$ has a peak selection w.r.t.
$L$, denoted by $p(v)=t_vv+w_v^L$ with $v\in \mathcal{V}_0$ and $w_v^L\in L$, satisfying (i) $p$ is continuous on $\mathcal{V}_0$; (ii) $t_v\geq\delta$ for some $\delta>0$ and $\forall\,v\in \mathcal{V}_0$; and (iii) $\inf_{v\in \mathcal{V}_0}E(p(v))>-\infty$, then there exists $v_*\in \mathcal{V}_0$ s.t. $p(v_*)\notin L$ is a critical point and \[ E(p(v_*))=\inf_{v\in \mathcal{V}_0}E(p(v)). \] \end{theorem} \vspace{-0.1in} Under the assumptions of Theorem~\ref{thm:lmt}, there is a saddle point, as an unstable critical point of $E$, characterized as a local solution to the constrained local minimization problem \begin{equation}\label{eq:LMM2} \min_{v\in \mathcal{V}_0}E(p(v))\quad \mbox{or}\quad\min_{w\in\mathcal{M}}E(w), \end{equation} with $\mathcal{M}=\{p(v):v\in \mathcal{V}_0\}$ serving as the solution submanifold. Thus, some descent algorithms can work for numerically finding such saddle points of the functional $E$ in a stable way. Consequently, Theorems~\ref{thm:lmt0} and \ref{thm:lmt} provide mathematical justifications for the LMM, which is a stable computational model for unstable solutions. \subsection{Local minimax algorithm}\label{subsec:lma} To numerically find multiple saddle points in a stable way, traditional LMMs solve the constrained local minimization problem \eqref{eq:LMM2} via the following iterative scheme \vspace{-0.05in} \[ v_{k+1}=v_k(\alpha_k):=\frac{v_k-\alpha_kg_k}{\|v_k-\alpha_kg_k\|},\quad w_{k+1}=p(v_{k+1}),\quad k=0,1,\ldots, \] where $v_k\in S\backslash L$, $\alpha_k>0$ is called a step-size and $g_k=\nabla E(w_k)$ denotes the gradient of $E$ at $w_k=p(v_k)$. The framework of the LMM algorithm is outlined in Algorithm~\ref{alg:lmm} and we refer to \cite{LZ2001SISC,LZ2002SISC,XYZ2012SISC} for more details. \begin{algorithm}[!ht] \caption{Framework of the algorithm for the LMM \cite{LZ2001SISC,LZ2002SISC,XYZ2012SISC}.} \label{alg:lmm} \begin{enumerate}[\bf Step 1.]
\item Let the support space $L$ be spanned by some previously found critical points of $E$, say $u_1,u_2,\ldots,u_{n-1}\in X$, where $u_{n-1}$ is assumed to have the highest energy functional value. The initial ascent direction $v_0\in S\backslash L$ at $u_{n-1}$ is given. Set $t_{-1}=1$, $w_{-1}^L=u_{n-1}$ and $k:=0$. Repeat {\bf Steps~2-4} until the stopping criterion is satisfied (e.g., $\|\nabla E(w_k)\|\leq \varepsilon_{\mathrm{tol}}$ for a given tolerance $0<\varepsilon_{\mathrm{tol}}\ll 1$), then output $u_n=w_k$. \item Using the initial guess $w=t_{k-1}v_k+w_{k-1}^L$, solve for \[ w_k=\arg\max_{w\in[L, v_k]}E(w), \] and denote $w_k=p(v_k)=t_kv_k+w_k^L$, where $t_k\geq0$ and $w_k^L\in L$. \item Solve the linear subproblem \begin{equation}\label{eq:gk} (g_k,\phi)=\langle E'(w_k),\phi\rangle,\quad \forall\,\phi\in X, \end{equation} to get the gradient $g_k=\nabla E(w_k)$. \item Choose a suitable step-size $\alpha_k>0$ to update \[ v_{k+1}=v_k(\alpha_k)=\frac{v_k-\alpha_k g_k}{\|v_k-\alpha_k g_k\|}.\] Set $k:=k+1$ and go to {\bf Step~2}. \end{enumerate} \end{algorithm} A significant behavior of the iterative sequence is presented in the following lemma and can be proved using the expression $v_{k+1}=v_k(\alpha_k)$ and \eqref{eq:va-dcom}-\eqref{eq:vaL}, and the details can be found in Lemma~2.3 in \cite{XYZ2012SISC}. \begin{lemma}[\cite{XYZ2012SISC}]\label{lem:mono} Let $\{v_k\}$ be a sequence generated by Algorithm~\ref{alg:lmm} with $v_0\in S\backslash L$. Denote $v_k=v_k^\bot+v_k^L$ with $v_k^\bot\in L^\bot$ and $v_k^L\in L$, $k=0,1,\ldots$, then $\|v_0^\bot\|\leq\|v_k^\bot\|\leq1$ and $v_k^L=\tau_kv_0^L$ hold for $0<\tau_{k+1}\leq\tau_k\leq1$, $k=0,1,\ldots$. 
\end{lemma} This implies that once an initial guess $v_0\in S\backslash L$ is used in Algorithm~\ref{alg:lmm}, we can restrict the domain of a peak selection $p$ to the closed subset $\mathcal{V}_0$ defined in \eqref{eq:V0}, which contains all possible $v_k$ that the algorithm may generate. Next, we describe the following weaker version of the homeomorphism property of $p$, which plays an essential role for the global convergence in section~\ref{sec:nmlmm}. The proof is similar to that of Theorem~2.1 in \cite{XYZ2012SISC} and skipped here for brevity. \begin{lemma}\label{lem:home} Suppose $E\in C^1(X,\mathbb{R})$ and let $p$ be a peak selection of $E$ w.r.t. $L$ and $\{v_k\}$ be a sequence generated by Algorithm~\ref{alg:lmm} with $v_0\in S\backslash L$. Denote $w_k=p(v_k)=t_kv_k+w_k^L$ with $t_k\geq0$ and $w_k^L\in L$. Assume that (i) $p$ is continuous on $\mathcal{V}_0$ and (ii) $t_k\geq\delta$ for some $\delta>0$ and $\forall\,k=0,1,\ldots$ hold. If $\{w_k\}$ contains a subsequence $\{w_{k_i}\}$ converging to some $u_*\in X$, then the corresponding subsequence $\{v_{k_i}\}$ converges to some $v_*\in \mathcal{V}_0$ satisfying $u_*=p(v_*)$. \end{lemma} It is worthwhile to point out that the step-size search rules used in traditional LMMs include the optimal/exact step-size search rule \cite{LZ2001SISC}, the normalized Armijo-type step-size search rule \cite{LZ2002SISC,YZ2005SISC,XYZ2012SISC}, the normalized Goldstein-type step-size search rule \cite{LXY2021CMS}, and the normalized Wolfe--Powell-type step-size search rules \cite{LXY-NWPLMM}. Up to now, all step-size search rules in traditional LMMs are monotone in the sense that the sequence $\{E(w_k)\}$ is monotonically decreasing. This feature is vital for the convergence analysis in traditional LMMs; see \cite{LZ2002SISC,Z2017CAMC,XYZ2012SISC,LXY2021CMS,LXY-NWPLMM}. Since the work of this paper is closely related to the normalized Armijo-type step-size search rule, we describe it in more detail.
In fact, if the normalized Armijo-type step-size search rule is employed in Algorithm~\ref{alg:lmm}, the step-size $\alpha_k$ is chosen by a backtracking strategy as \cite{LZ2002SISC,YZ2005SISC,XYZ2012SISC} \begin{equation}\label{eq:ak-armijo} \alpha_k = \max\left\{\lambda\rho^m>0:\, m\in\mathbb{N},\, E(p(v_k(\lambda\rho^m))) \leq E(p(v_k)) - \sigma\lambda\rho^mt_k\|g_k\|^2\right\}, \end{equation} for $k=0,1,\ldots$, where $g_k=\nabla E(p(v_k))$ and $\sigma,\rho\in(0,1)$, $\lambda>0$ are given parameters. To end this subsection, let us explore a key property associated with the normalized Armijo-type step-size search rule, which will be quite useful in the convergence analysis in section~\ref{sec:nmlmm}. According to Lemma~\ref{lem:armijo}, it is reasonable to define {\em the largest normalized Armijo-type step-size} at $v\in \mathcal{V}_0$ under the assumptions in Lemma~\ref{lem:armijo} as \begin{equation}\label{eq:maxAstep} \bar{\alpha}^A(v) := \sup\left\{\alpha>0: E(p(v(\alpha)))<E(p(v))-\sigma\alpha t_v\|g\|^2 \right\}. \end{equation} The following lemma states that $\bar{\alpha}^A(v)$ is uniformly bounded away from zero when $v$ is close to some point $\bar{v}\in \mathcal{V}_0$ s.t. $p(\bar{v})$ is not a critical point. The proof is similar to that of Lemma~2.5 in \cite{LZ2002SISC} and is omitted here for brevity. \begin{lemma}\label{lem:s0} Suppose $E\in C^1(X,\mathbb{R})$ and let $p(\bar{v})=t_{\bar{v}}\bar{v}+w_{\bar{v}}^L$ be a local peak selection of $E$ w.r.t. $L$ at $\bar{v}\in \mathcal{V}_0$. If (i) $p$ is continuous at $\bar{v}$; (ii) $t_{\bar{v}}>0$; and (iii) $E'(p(\bar{v}))\neq0$ hold, then there exist a neighborhood $N_{\bar{v}}$ of $\bar{v}$ and a constant $\underline{\alpha}>0$ s.t. $\bar{\alpha}^A(v)\geq\underline{\alpha}$, $\forall\,v\in N_{\bar{v}}\cap \mathcal{V}_0$. 
\end{lemma} \subsection{BB method and nonmonotone globalization strategies in the optimization theory}\label{subsec:bb} In order to put forward our approach in sections~\ref{sec:nmlmm}-\ref{sec:gbblmm}, we now review the key ideas of the BB method and its nonmonotone globalization strategies used in the optimization theory. Consider an unconstrained minimization problem as \begin{equation}\label{eq:minf} \min_{\mathbf{x}\in\mathbb{R}^d} f(\mathbf{x}), \end{equation} where $f$ is a continuously differentiable function defined on $\mathbb{R}^d$ (with the inner product $(\cdot,\cdot)_{\mathbb{R}^d}$ and norm $\|\cdot\|_{\mathbb{R}^d}$). The standard gradient method or the steepest descent method for solving \eqref{eq:minf} updates the approximate solution iteratively by \begin{equation}\label{eq:sdi} \mathbf{x}_{k+1}=\mathbf{x}_k-\gamma_k\nabla f(\mathbf{x}_k),\quad k=0,1,\ldots, \end{equation} with a step-size $\gamma_k>0$ determined either by an exact or inexact line search. Although the steepest descent method is simple and easy to implement, it may lead to a zigzag-like iterative path and its convergence is usually very slow \cite{SunYuan2006}. It is well known that the quasi-Newton method, which uses the iterative scheme $\mathbf{x}_{k+1}=\mathbf{x}_k-\mathbf{B}_k^{-1}\nabla f(\mathbf{x}_k)$ with $\mathbf{B}_k$ an appropriate approximation to the Hessian matrix, often converges faster than the steepest descent method since it inherits some merits of the Newton method \cite{SunYuan2006}. Unfortunately, the quasi-Newton method is very expensive for large-scale optimization problems since it involves matrix storage and computations at each iteration. Rewriting \eqref{eq:sdi} as $\mathbf{x}_{k+1}=\mathbf{x}_k-\mathbf{D}_k\nabla f(\mathbf{x}_k)$ with $\mathbf{D}_k=\gamma_k\mathbf{I}$ and $\mathbf{I}$ the $d\times d$ identity matrix, in which $\mathbf{D}_k$ is regarded as an approximation to the inverse Hessian matrix, the BB method chooses $\gamma_k$ s.t. 
$\mathbf{D}_k$ approximately possesses a certain quasi-Newton property \cite{BB1988IMANUM}, i.e., \begin{equation}\label{eq:sDy} \mathbf{s}_k\approx\mathbf{D}_k\mathbf{y}_k\quad\mbox{or}\quad \mathbf{s}_k\approx\gamma_k\mathbf{y}_k,\quad k=1,2,\ldots, \end{equation} where $\mathbf{s}_k=\mathbf{x}_{k}-\mathbf{x}_{k-1}$ and $\mathbf{y}_k=\nabla f(\mathbf{x}_{k})-\nabla f(\mathbf{x}_{k-1})$. Solving \eqref{eq:sDy} in the least-squares sense, i.e., finding $\gamma_k$ to minimize $\|\mathbf{s}_k-\gamma\mathbf{y}_k\|_{\mathbb{R}^d}^2$, yields a BB step-size as \begin{equation}\label{eq:RnBB1} \gamma_{k}^{BB1}=\frac{(\mathbf{s}_k,\mathbf{y}_k)_{\mathbb{R}^d}}{(\mathbf{y}_k,\mathbf{y}_k)_{\mathbb{R}^d}},\quad (\mathbf{y}_k,\mathbf{y}_k)_{\mathbb{R}^d}>0,\quad k=1,2,\ldots. \end{equation} By symmetry, one can alternatively minimize $\|\gamma^{-1}\mathbf{s}_k-\mathbf{y}_k\|_{\mathbb{R}^d}^2$ to obtain another BB step-size as \begin{equation}\label{eq:RnBB2} \gamma_{k}^{BB2}=\frac{(\mathbf{s}_k,\mathbf{s}_k)_{\mathbb{R}^d}}{(\mathbf{s}_k,\mathbf{y}_k)_{\mathbb{R}^d}},\quad (\mathbf{s}_k,\mathbf{y}_k)_{\mathbb{R}^d}>0,\quad k=1,2,\ldots. \end{equation} In some sense, the BB method can be viewed as a very simple quasi-Newton method, so it may inherit the fast convergence of the quasi-Newton method without any matrix storage or operations. Indeed, it is observed in practical computations that the BB method often greatly speeds up the convergence of the gradient method \cite{BB1988IMANUM,Fletcher2005}. However, due to its essentially nonmonotone behavior, there are potential difficulties in the convergence analysis of the BB method. In general, a globalization strategy founded on a nonmonotone line search is necessary for the BB method \cite{R1997SIOPT,SunYuan2006}. 
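For illustration, the two BB step-sizes \eqref{eq:RnBB1}-\eqref{eq:RnBB2} and the plain (non-globalized) BB1 gradient iteration can be sketched in a few lines of Python; the toy quadratic objective and starting points below are assumptions chosen only for demonstration.

```python
import numpy as np

def bb_step_sizes(s, y):
    """BB step-sizes from s_k = x_k - x_{k-1}, y_k = grad f(x_k) - grad f(x_{k-1}):
    gamma_BB1 minimizes ||s - gamma*y||^2, gamma_BB2 minimizes ||s/gamma - y||^2."""
    sy = np.dot(s, y)
    return sy / np.dot(y, y), np.dot(s, s) / sy

# plain BB1 gradient iteration on a toy quadratic f(x) = x^T A x / 2
A = np.diag([1.0, 10.0])                       # mildly ill-conditioned Hessian
grad = lambda x: A @ x
x_prev, x = np.array([1.0, 1.0]), np.array([0.9, 0.5])
g_prev = grad(x_prev)
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-12:              # converged
        break
    gamma_bb1, _ = bb_step_sizes(x - x_prev, g - g_prev)
    x_prev, g_prev = x, g
    x = x - gamma_bb1 * g                      # x_{k+1} = x_k - gamma_k^{BB1} grad f(x_k)
print(np.linalg.norm(grad(x)))
```

Even without any line search, the BB1 iteration typically drives the gradient norm to near machine precision within a few dozen steps on this example, while steepest descent with a fixed step-size converges far more slowly.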
The basic idea is to use the step-size $\gamma_k=\beta_k\gamma_k^{BB}$ ($\gamma_k^{BB}=\gamma_k^{BB1}$ or $\gamma_k^{BB}=\gamma_k^{BB2}$) searched by a nonmonotone line search strategy, where the factor $\beta_k\in(0,1]$ plays the role of the step-size of the ``quasi-Newton'' iteration: $\mathbf{x}_{k+1}=\mathbf{x}_k-\beta_k\mathbf{D}_k\nabla f(\mathbf{x}_k)$ with $\mathbf{D}_k=\gamma_k^{BB}\mathbf{I}$ for $k=1,2,\ldots$, and $\mathbf{D}_0=\gamma_0\mathbf{I}$ for a given $\gamma_0>0$. Finally, we briefly recall GLL and ZH nonmonotone line search strategies in the optimization theory. In fact, the GLL nonmonotone line search strategy \cite{GLL1986SINUM} is roughly as follows: find $\gamma_k^{GLL}=\lambda_k\rho^{m_k}$ for $k=0,1,\ldots$ with $m_k$ the smallest nonnegative integer satisfying \begin{equation}\label{eq:nmlsGLL} f(\mathbf{x}_k+\gamma_k^{GLL}\mathbf{d}_k) \leq \max_{1\leq j\leq \min\{M, k+1\}}f(\mathbf{x}_{k+1-j}) + \sigma\gamma_k^{GLL} (\nabla f(\mathbf{x}_k), \mathbf{d}_k)_{\mathbb{R}^d}, \end{equation} where $\mathbf{d}_k\in\mathbb{R}^d$ denotes a descent direction at $\mathbf{x}_k$, $\lambda_k$ is a trial step-size and $\sigma,\rho\in(0,1)$, $M>1$ are given parameters. It is easy to see that, compared with the standard Armijo line search in \cite{Armijo1966}, \eqref{eq:nmlsGLL} only requires sufficient decrease with respect to the maximum of the function values over the past several steps. On the other hand, as another popular nonmonotone search strategy in the optimization theory, the ZH nonmonotone line search strategy \cite{ZH2004SIOPT} has a form similar to \eqref{eq:nmlsGLL} except that the ``maximum'' is replaced by a convex combination of all past function values. 
More precisely, the ZH nonmonotone line search is to find $\gamma_k^{ZH}=\lambda_k\rho^{m_k}$ with $m_k$ the smallest nonnegative integer satisfying \begin{equation}\label{eq:nmlsZH} f(\mathbf{x}_k+\gamma_k^{ZH}\mathbf{d}_k) \leq C_k + \sigma\gamma_k^{ZH} (\nabla f(\mathbf{x}_k), \mathbf{d}_k)_{\mathbb{R}^d}, \quad k=0,1,\ldots, \end{equation} where $C_k$ is a weighted average of $f(\mathbf{x}_j)$, $j=0,1,\ldots,k$. From a philosophical point of view, the latter is closer to the idea of the doctrine of the mean. We remark here that, when the BB method is combined with nonmonotone globalization strategies in the optimization theory to speed up the convergence, the trial step-size is explicitly chosen as the BB step-size, i.e., $\lambda_k=\gamma_k^{BB}$ for $k=1,2,\ldots$. \section{LMM with nonmonotone step-size search rules} \label{sec:nmlmm} In this section, inspired by the combination of the BB method and nonmonotone globalization strategies reviewed in subsection~\ref{subsec:bb}, we propose the normalized ZH-type and normalized GLL-type nonmonotone LMMs. Further, we analyze some related properties and establish the convergence results for them. We use the same notations as those in section~\ref{sec:pre} unless otherwise specified. \subsection{Normalized ZH-type nonmonotone LMM and its global convergence} In order to introduce the LMM with a normalized ZH-type nonmonotone step-size search rule and establish its feasibility, the following lemma is needed. \begin{lemma}\label{lem:ZHj} Suppose $E\in C^1(X,\mathbb{R})$ and let $p(v)=t_vv+w_v^L$ with $t_v\geq0$ and $w_v^L\in L$ be a peak selection of $E$ w.r.t. $L$ at $v\in S$, and $k$ be some positive integer. Take $v_0\in S\backslash L$, $\sigma\in(0,1)$, $0\leq\eta_{\min}<\eta_{\max}\leq1$, $\eta_j\in[\eta_{\min},\eta_{\max}]$ and $\alpha_j>0$, $j=0,1,\ldots,k-1$. 
Set $Q_0=1$, $C_0=E(p(v_0))$, $t_j=t_{v_j}\geq 0$, $g_j=\nabla E(p(v_j))$ and \begin{align*} v_{j+1}&=v_j(\alpha_j)=\frac{v_j-\alpha_jg_j}{\|v_j-\alpha_jg_j\|}, \quad Q_{j+1}=\eta_jQ_j+1,\\ C_{j+1}&=(\eta_jQ_jC_j+E(p(v_{j+1})))/Q_{j+1},\quad j=0,1,\ldots,k-1. \end{align*} Assume that \begin{equation}\label{eq:ZHj} E(p(v_j(\alpha_j)))\leq C_j-\sigma\alpha_jt_j\|g_j\|^2,\quad j=0,1,\ldots,k-1. \end{equation} If (i) $p$ is continuous at $v_k$; (ii) $t_k=t_{v_k}>0$; and (iii) $g_k=\nabla E(p(v_k))\neq0$ hold, then there exists $\alpha_k^A>0$ s.t. \[ E(p(v_k(\alpha)))<C_k-\sigma\alpha t_k\|g_k\|^2,\quad \forall\,\alpha\in(0,\alpha_k^A). \] \end{lemma} \begin{proof} Denote $E_j=E(p(v_j))$, $j=0,1,\cdots,k$. From \eqref{eq:ZHj}, we have $E_{j+1}\leq C_j$ for $j=0,1,\ldots,k-1$. In particular, $E_k\leq C_{k-1}$. Hence \begin{equation}\label{eq:ZHj:eq1} C_k=(\eta_{k-1}Q_{k-1}C_{k-1}+E_k)/Q_k\geq (\eta_{k-1}Q_{k-1}E_k+E_k)/Q_k=E_k. \end{equation} Lemma~\ref{lem:armijo} states that there exists $\alpha_k^A>0$ s.t. \begin{equation}\label{eq:ZHj:eq2} E(p(v_k(\alpha)))<E_k-\sigma\alpha t_k\|g_k\|^2,\quad\forall\,\alpha\in(0,\alpha_k^A). \end{equation} The conclusion follows from the combination of \eqref{eq:ZHj:eq1} and \eqref{eq:ZHj:eq2}. \end{proof} Lemma \ref{lem:armijo} and Lemma~\ref{lem:ZHj} inspire us to define a normalized ZH-type nonmonotone step-size as follows. \begin{definition}{\bf (Normalized ZH-type nonmonotone step-size)}\label{def-ZH} For $k=0,1,\ldots$, take $\sigma,\rho$ $\in(0,1)$, $0<\lambda_{\min}\leq\lambda_k\leq\lambda_{\max}<+\infty$, $0\leq\eta_{\min}\leq\eta_j\leq\eta_{\max}\leq1,\,j=0,1,\ldots,k-1$. 
If $\alpha=\lambda_k\rho^{m_k}$ and $m_k$ is the smallest nonnegative integer satisfying \begin{equation}\label{eq:ZHcond} E(p(v_k(\alpha))) \leq C_k-\sigma\alpha t_k\|g_k\|^2, \end{equation} with $g_k=\nabla E(p(v_k))$, $Q_0=1$, $C_0=E(p(v_0))$ and $Q_j=\eta_{j-1} Q_{j-1}+1$, $C_j=(\eta_{j-1} Q_{j-1}C_{j-1}+E(p(v_{j})))/Q_{j}$ for $j=1,2,\ldots,k$, then we say that $\alpha$ is a normalized ZH-type nonmonotone step-size at $v_k$. \end{definition} Here, $\lambda_k\in[\lambda_{\min},\lambda_{\max}]$ is a trial step-size with the parameters $\lambda_{\min}$ and $\lambda_{\max}$ used to prevent the trial step-size from being too small or too large. Recalling Algorithm~\ref{alg:lmm}, the algorithm of the LMM with the normalized ZH-type nonmonotone step-size search rule \eqref{eq:ZHcond} is described in Algorithm~\ref{alg:zhlmm}. In the subsequent discussion in this subsection, we use the same notations and parameters as those in Algorithm~\ref{alg:zhlmm} unless specified. The feasibility of Algorithm~\ref{alg:zhlmm} is guaranteed by the following theorem, which directly follows from Lemmas~\ref{lem:armijo} and~\ref{lem:ZHj}. \begin{algorithm}[!ht] \caption{Normalized ZH-type Nonmonotone Local Minimax Algorithm.} \label{alg:zhlmm} Choose $\sigma,\rho\in(0,1)$, $0<\lambda_{\min}<\lambda_{\max}<+\infty$, $0\leq\eta_{\min}<\eta_{\max}\leq1$, $Q_0=1$ and $C_0=E(p(v_0))$. {\bf Steps 1-3} are the same as those in Algorithm~\ref{alg:lmm}. \begin{enumerate}[\bf Step 1.] \setcounter{enumi}{3} \item Choose a trial step-size $\lambda_k\in[\lambda_{\min},\lambda_{\max}]$ and find \begin{equation}\label{eq:ak-zh} \alpha_k=\max_{m\in\mathbb{N}}\left\{\lambda_k\rho^m: E(p(v_k(\lambda_k\rho^m)))\leq C_k-\sigma\lambda_k\rho^mt_k\|g_k\|^2 \right\}, \end{equation} where the initial guess $w=t_k v_k(\lambda_k\rho^m)+w_k^L$ is used to find the local maximizer $p(v_k(\lambda_k\rho^m))$ of $E$ on $[L, v_k(\lambda_k\rho^m)]$ for $m=0,1,\ldots$. 
\\ Set $v_{k+1}=v_k(\alpha_k)$ and choose $\eta_k\in[\eta_{\min},\eta_{\max}]$ to calculate \begin{equation}\label{eq:QCupdate} Q_{k+1}=\eta_kQ_k+1,\quad C_{k+1}=(\eta_kQ_kC_k+E(p(v_k(\alpha_k))))/Q_{k+1}. \end{equation} Update $k:=k+1$ and go to {\bf Step~2}. \end{enumerate} \end{algorithm} \begin{theorem}\label{thm:ZH} Assume that $E\in C^1(X,\mathbb{R})$ has a peak selection $p$ of $E$ w.r.t. $L$. Let $\{v_j\}_{j=0}^k\subset \mathcal{V}_0$ be a sequence generated by Algorithm~\ref{alg:zhlmm} with $g_j\neq0$, $\forall\, j=0,1,\ldots,k$, for some $k\geq0$. If there hold (i) $p$ is continuous on $\mathcal{V}_0$ and (ii) $t_j>0$, $\forall\, j=0,1,\ldots,k$, then for each $j=0,1,\ldots,k$, there exists $\alpha_j^A>0$ s.t. \[ E(p(v_j(\alpha)))<C_j-\sigma\alpha t_j\|g_j\|^2,\quad \forall\,\alpha\in(0,\alpha_j^A). \] \end{theorem} \begin{proof} When $k=0$, the normalized ZH-type nonmonotone step-size search rule \eqref{eq:ZHcond} is exactly the normalized Armijo-type step-size search rule stated in \eqref{eq:ak-armijo}, and its feasibility is obvious from Lemma~\ref{lem:armijo}. For $k\geq1$, the feasibility of the normalized ZH-type nonmonotone step-size search rule \eqref{eq:ZHcond} derives directly from Lemma~\ref{lem:ZHj} with an inductive argument on $j=0,1,\ldots,k$. \end{proof} For $j=0,1,\ldots$, denote $E_j=E(p(v_j))$, then a direct calculation leads to \begin{align} \label{eq:qkex} Q_{j+1} &= 1+\sum_{i=0}^j\left(\prod_{l=0}^i\eta_{j-l}\right) \leq j+2, \\ \label{eq:ckex} C_{j+1} &= \frac{1}{Q_{j+1}}\left(E_{j+1}+\sum_{i=0}^j\left(\prod_{l=0}^i\eta_{j-l}\right)E_{j-i}\right). \end{align} Thus, $C_k$ is a convex combination of $\{E_j\}_{j=0}^k$ with large weights on recent $E_j$. We remark that the choice of $\eta_j$ affects the degree of the nonmonotonicity of the normalized ZH-type nonmonotone step-size search rule \eqref{eq:ZHcond}. In fact, if $\eta_j=0$, $\forall j=0,1,\ldots,k-1$, then $Q_k=1$ and $C_k=E_k$. 
In this case, the normalized ZH-type nonmonotone step-size search rule \eqref{eq:ZHcond} reduces exactly to the monotone normalized Armijo-type step-size search rule given in \eqref{eq:ak-armijo}; if $\eta_j=1$, $\forall j=0,1,\ldots,k-1$, then $Q_k=k+1$ and $C_k=A_k$ with $A_k := \frac{1}{k+1}\sum_{j=0}^kE_j$ the arithmetic mean of $\{E_j\}_{j=0}^k$. In fact, we have the following property, which can be verified by an argument similar to that in the proof of \cite[Lemma 1.1]{ZH2004SIOPT}. \begin{lemma}\label{lem:ECA} The following inequalities hold for Algorithm~\ref{alg:zhlmm} under the same assumptions as in Theorem~\ref{thm:ZH}, i.e., \[ E_k\leq C_k\leq A_k\leq E_0,\quad k=0,1,\ldots. \] \end{lemma} \begin{proof} The conclusion for the case of $k=0$ holds by the initialization $C_0=E_0$. For $k=1,2,\ldots$, it follows from \eqref{eq:ak-zh} that $E_k\leq C_{k-1}$, which leads to \[ C_k=(\eta_{k-1}Q_{k-1}C_{k-1}+E_k)/Q_k\geq (\eta_{k-1}Q_{k-1}E_k+E_k)/Q_k=E_k. \] Next, we prove $C_k\leq A_k\leq E_0$ by induction. Assume that $C_{k-1}\leq A_{k-1}\leq E_0$, $k\geq1$. Defining $F_k(t)=(tC_{k-1}+E_k)/(t+1)$, $t\geq0$, we have $F_k'(t)=(C_{k-1}-E_k)/(t+1)^2\geq0$. Hence, $F_k(t)$ is monotonically nondecreasing. From \eqref{eq:qkex}, we can obtain $Q_k\leq k+1$ and \begin{equation}\label{eq:ckdkk} C_k=F_k(\eta_{k-1}Q_{k-1})=F_k(Q_k-1)\leq F_k(k),\quad k\geq 1. \end{equation} The inductive hypothesis then yields \begin{equation}\label{eq:dkkak} F_k(k)=\frac{kC_{k-1}+E_k}{k+1}\leq\frac{kA_{k-1}+E_k}{k+1}=A_k,\quad k\geq 1. \end{equation} On the other hand, the fact that $E_k\leq C_{k-1}$ for $k\geq 1$ implies that \begin{equation}\label{eq:ake0} A_k=\frac{kA_{k-1}+E_k}{k+1}\leq\frac{kA_{k-1}+C_{k-1}}{k+1}\leq\frac{kE_0+E_0}{k+1}=E_0. \end{equation} Combining \eqref{eq:ckdkk}, \eqref{eq:dkkak} and \eqref{eq:ake0} implies $C_k\leq A_k\leq E_0$. The proof is finished by induction. 
\end{proof} We remark here that the following significant connection between the normalized ZH-type nonmonotone step-size \eqref{eq:ak-zh} and the largest normalized Armijo-type step-size \eqref{eq:maxAstep} is vital to establishing the global convergence of Algorithm~\ref{alg:zhlmm}. \begin{lemma}\label{lem:akgeq} Let $\{v_k\}\subset \mathcal{V}_0$ be a sequence generated by Algorithm~\ref{alg:zhlmm} and $\alpha_k$ be the normalized ZH-type nonmonotone step-size \eqref{eq:ak-zh} at $v_k$. Then, under the same assumptions as in Theorem~\ref{thm:ZH}, we have \[ \alpha_k\geq\min\{\lambda_{\min},\rho\bar{\alpha}^A(v_k)\},\quad k=0,1,\ldots, \] where $\bar{\alpha}^A(v_k)$, defined in \eqref{eq:maxAstep}, is the largest normalized Armijo-type step-size at $v_k$. \end{lemma} \begin{proof} In fact, $\alpha_k=\lambda_k\rho^{m_k}$ for some $m_k\in\mathbb{N}$. If $m_k=0$, then $\alpha_k=\lambda_k\geq\lambda_{\min}$ and the conclusion holds. Otherwise, if $m_k>0$, the minimality of $m_k$ and Lemma~\ref{lem:ECA} lead to \[ E(p(v_k(\rho^{-1}\alpha_k))) > C_k-\sigma\rho^{-1}\alpha_kt_k\|g_k\|^2 \geq E_k-\sigma\rho^{-1}\alpha_kt_k\|g_k\|^2. \] By the definition of the largest normalized Armijo-type step-size $\bar{\alpha}^A(v_k)$ at $v_k$, one can obtain $\alpha_k\geq\rho\bar{\alpha}^A(v_k)$ and the conclusion holds. \end{proof} Now, we are ready to consider the global convergence of Algorithm~\ref{alg:zhlmm}. Note that, by employing \eqref{eq:ak-zh} and \eqref{eq:QCupdate}, we have \begin{equation}\label{eq:Ckmono} C_{k+1}=\frac{\eta_kQ_kC_k+E_{k+1}}{Q_{k+1}} \leq \frac{\eta_kQ_kC_k+C_k-\sigma \alpha_kt_k\|g_k\|^2}{Q_{k+1}} = C_k-\sigma\frac{\alpha_kt_k\|g_k\|^2}{Q_{k+1}}, \end{equation} which means that $\{C_k\}$ is monotonically decreasing, though $\{E_k\}$ may not monotonically decrease in general. Actually, the monotonicity of $\{C_k\}$ in \eqref{eq:Ckmono} will play a key role in establishing the global convergence of Algorithm~\ref{alg:zhlmm}. 
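The averaging pair $(Q_k,C_k)$ defined in \eqref{eq:QCupdate} is easy to probe numerically. The short Python sketch below uses an arbitrary, purely illustrative value sequence and checks the two limiting cases $\eta_j\equiv0$ and $\eta_j\equiv1$ discussed above, as well as the convex-combination property \eqref{eq:ckex}.

```python
def averaged_value(E_seq, eta):
    """Run Q_{k+1} = eta*Q_k + 1, C_{k+1} = (eta*Q_k*C_k + E_{k+1}) / Q_{k+1}
    with a constant weight eta and return the final C_k."""
    Q, C = 1.0, E_seq[0]
    for E_next in E_seq[1:]:
        Q_next = eta * Q + 1.0
        C = (eta * Q * C + E_next) / Q_next
        Q = Q_next
    return C

E_seq = [5.0, 3.0, 4.0, 2.0, 2.5]   # an arbitrary nonmonotone value sequence
# eta = 0: no memory, C_k coincides with the latest value E_k
assert averaged_value(E_seq, 0.0) == E_seq[-1]
# eta = 1: full memory, C_k coincides with the arithmetic mean A_k
assert abs(averaged_value(E_seq, 1.0) - sum(E_seq) / len(E_seq)) < 1e-12
# 0 < eta < 1: C_k is a convex combination weighting recent values more heavily
C_half = averaged_value(E_seq, 0.5)
assert min(E_seq) <= C_half <= max(E_seq)
print(C_half)
```

Intermediate values of $\eta$ thus interpolate between the monotone Armijo reference $E_k$ and the fully averaged reference $A_k$, which is exactly how $\eta_j$ tunes the degree of nonmonotonicity.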
\begin{theorem}\label{thm:cvg-zhlmm} Suppose $E\in C^1(X,\mathbb{R})$ and let $p$ be a peak selection of $E$ w.r.t. $L$, and $\{v_k\}\subset \mathcal{V}_0$ and $\{w_k=p(v_k)\}$ be sequences generated by Algorithm~\ref{alg:zhlmm}. Assume that (i) $p$ is continuous on $\mathcal{V}_0$; (ii) $t_k\geq\delta$ for some $\delta>0$, $k=0,1,\ldots$; and (iii) $\inf_{k\geq0}E_k>-\infty$ hold, then \begin{itemize} \item[{\rm(a)}] $\sum_{k=0}^{\infty}\alpha_k\|g_k\|^2/Q_{k+1}<\infty$; \item[{\rm(b)}] if $\{w_k\}$ converges to some point $\bar{u}\in X$, then $\bar{u}\notin L$ is a critical point. \end{itemize} \vspace{2pt} In particular, if $\eta_{\max}<1$, then \begin{itemize} \item[{\rm(c)}] $\sum_{k=0}^{\infty}\alpha_k\|g_k\|^2<\infty$; \item[{\rm(d)}] every accumulation point of $\{w_k\}$ is a critical point not belonging to $L$; \item[{\rm(e)}] $\liminf_{k\to\infty}\|g_k\|=0$. \end{itemize} \vspace{2pt} Further, if $E$ satisfies the (PS) condition, then \begin{itemize} \item[{\rm(f)}] $\{w_k\}$ contains a subsequence converging to a critical point $u_*\notin L$. In addition, if $u_*$ is isolated, $w_k\to u_*$ as $k\to\infty$. \end{itemize} \end{theorem} \begin{proof} Since $\{C_k\}$ is monotonically decreasing by \eqref{eq:Ckmono} and bounded from below by the assumption (iii) and Lemma~\ref{lem:ECA}, it converges to a finite number $C_*$. Then, \eqref{eq:Ckmono} and the assumption (ii) lead to the conclusion (a), i.e., \[ \sum_{k=0}^{\infty}\frac{\alpha_k\|g_k\|^2}{Q_{k+1}} \leq \frac{1}{\sigma\delta}\sum_{k=0}^{\infty}(C_k-C_{k+1}) = \frac{1}{\sigma\delta}(C_0-C_*) <\infty. \] Next, we verify the conclusion (b). By employing Lemma~\ref{lem:mono} and the assumption (ii), we have \[ \dist(w_k,L)=t_k\|v_k^\bot\|\geq\delta\|v_0^\bot\|>0,\quad k=0,1,\ldots.\] Thus, if $\{w_k\}$ converges to some point $\bar{u}\in X$, then $\dist(\bar{u},L)\geq\delta\|v_0^\bot\|>0$. Immediately, we can obtain $\bar{u}\notin L$. 
In addition, Lemma~\ref{lem:home} indicates that $\{v_k\}$ converges to some $\bar{v}\in \mathcal{V}_0$ satisfying $\bar{u}=p(\bar{v})$. For the sake of contradiction, suppose that $\bar{u}$ is not a critical point, then $\nabla E(\bar{u})\neq0$. Since $E\in C^1(X,\mathbb{R})$, one can obtain $g_k=\nabla E(w_k)\to\nabla E(\bar{u})\neq0$ as $k\to\infty$. Therefore, $\|g_k\|>\frac12\|\nabla E(\bar{u})\|>0$, for all $k$ large enough. Recalling the conclusion (a), it yields that $\sum_{k=0}^{\infty} \alpha_k/Q_{k+1}<\infty$. By utilizing \eqref{eq:qkex}, we can arrive at \begin{equation}\label{eq:sumakQ} \sum_{k=0}^{\infty}\frac{\alpha_k}{k+2}\leq\sum_{k=0}^{\infty}\frac{\alpha_k}{Q_{k+1}}<\infty. \end{equation} On the other hand, Lemma~\ref{lem:pv-tv} and the assumption (ii) lead to $t_{\bar{v}}=\lim_{k\to\infty}t_k$ $\geq\delta>0$. According to Lemma~\ref{lem:s0} and Lemma~\ref{lem:akgeq}, there exists $\underline{\alpha}>0$ s.t., for all $k$ large enough, \begin{equation}\label{eq:akbt0} \alpha_k \geq \min\{\lambda_{\min},\rho\bar{\alpha}^A(v_k)\} \geq \min\{\lambda_{\min},\rho\underline{\alpha}\}>0. \end{equation} The combination of \eqref{eq:sumakQ} and \eqref{eq:akbt0} yields $\sum_{k=0}^\infty 1/(k+2)<\infty$, which is a contradiction. Thus, $\bar{u}\notin L$ is a critical point and the conclusion (b) is obtained. If $\eta_{\max}<1$, then revisiting \eqref{eq:qkex}, it is easy to see that \[ Q_{k+1}\leq 1+\sum_{j=0}^k \eta_{\max}^{j+1}<\frac{1}{1-\eta_{\max}}<\infty. \] Hence, the conclusion (c) directly follows from the conclusion (a). Moreover, by the conclusion (c) and an analogous argument in the proof of the conclusion (b), the conclusion (d) is obvious. Now, to consider the conclusion (e), suppose that $\delta_1:=\liminf_{k\to\infty}\|g_k\|>0$ by the contradiction argument. Then, $\|g_k\|\geq\delta_1/2>0$, for all $k$ large enough. One can see from the conclusion (c) that $\sum_{k=0}^{\infty}\alpha_k<\infty$ and $\sum_{k=0}^{\infty}\alpha_k\|g_k\|<\infty$. 
This immediately leads to $\alpha_k\to0$ as $k\to\infty$ and \[ \sum_{k=0}^{\infty}\|v_{k+1}-v_k\|\leq \sum_{k=0}^{\infty}\alpha_k\|g_k\|<\infty, \] where the inequality $\|v_{k+1}-v_k\|=\|v_k(\alpha_k)-v_k\|<\alpha_k\|g_k\|$ is employed according to Lemma~\ref{lem:vs}. Hence, $\{v_k\}$ is a Cauchy sequence. Note that $\{v_k\}$ is contained in the closed subset $\mathcal{V}_0$, which is complete; thus there exists $\bar{v}\in \mathcal{V}_0$ s.t. $v_k\to\bar{v}$ as $k\to\infty$. By the continuity of $p$ and $E'$, we have $g_k\to\nabla E(p(\bar{v}))$ as $k\to\infty$ and \[ \|\nabla E(p(\bar{v}))\|=\lim_{k\to\infty}\|g_k\|=\delta_1>0. \] However, from the conclusion (b), $p(\bar{v})=\lim_{k\to\infty}p(v_k)$ must be a critical point. This is a contradiction. Consequently, the conclusion (e) holds. It remains to verify the conclusion (f). Due to the conclusion (e), one can find a subsequence $\{v_{k_i}\}$ s.t. $E'(w_{k_i})=E'(p(v_{k_i}))\to0$ in $X^*$ as $i\to\infty$. In view of Lemma~\ref{lem:ECA} and the assumption (iii), $\{E(w_{k_i})\}$ is bounded, i.e., \[ \inf_{k\geq0}E_k\leq E(w_{k_i})\leq E_0,\quad i=0,1,\ldots. \] Then, by the (PS) condition, $\{w_{k_i}\}$ possesses a subsequence, still denoted by $\{w_{k_i}\}$, that converges to a critical point $u_*$, and $u_*\notin L$ in view of the conclusion (d). Finally, by using the assumption that $u_*$ is isolated and following the lines of the original proof for the global sequence convergence of the normalized Armijo-type LMM in \cite{Z2017CAMC}, the details of which are skipped here for brevity, we can reach the global sequence convergence, i.e., $w_k\to u_*$ as $k\to\infty$. This completes the proof. \end{proof} \begin{remark} As discussed above, the normalized Armijo-type step-size search rule given in \eqref{eq:ak-armijo} is a special case of the normalized ZH-type nonmonotone step-size search rule \eqref{eq:ZHcond} with $\eta_j=0$ ($j=0,1,\ldots$). 
Thus, Theorem~\ref{thm:cvg-zhlmm} covers the global convergence of the LMM with the normalized Armijo-type step-size search rule stated in \eqref{eq:ak-armijo}. \end{remark} \begin{remark} The assumption (ii) in Theorem~\ref{thm:cvg-zhlmm} is crucial to guarantee that the critical point obtained by Algorithm~\ref{alg:zhlmm} stays away from the previously found critical points in $L$. When $L=\{0\}$, assumptions (i) and (ii) can be verified for energy functionals associated with several typical BVPs of PDEs considered in section~\ref{sec:numer}. Although it is not easy to prove the assumption (ii) theoretically in general cases, it can be checked numerically in practical computations. \end{remark} In the next subsection, we will introduce the LMM with another nonmonotone step-size search rule, i.e., the normalized GLL-type nonmonotone LMM, and analyze its convergence results. \subsection{Normalized GLL-type nonmonotone LMM and its convergence} Inspired by the GLL nonmonotone line search strategy in the optimization theory \cite{GLL1986SINUM}, we introduce the following normalized GLL-type nonmonotone step-size search rule for the LMM iterations. Hereafter, we use $\mathbb{N}_+=\{k\in\mathbb{N}: k\geq1\}$ to denote the set of all positive integers. \begin{definition}{\bf (Normalized GLL-type nonmonotone step-size)}\label{def-GLL} For $k=0,1,\ldots$, take $\sigma,\rho\in(0,1)$, $0<\lambda_{\min}\leq\lambda_k\leq\lambda_{\max}<+\infty$ and $M\in\mathbb{N}_+$. If $\alpha=\lambda_k\rho^{m_k}$ and $m_k$ is the smallest nonnegative integer satisfying \begin{equation}\label{eq:gll} E(p(v_k(\alpha))) \leq \max_{1\leq j\leq\min\{M, k+1\}}E(p(v_{k+1-j}))-\sigma\alpha t_k\|g_k\|^2, \end{equation} with $g_k=\nabla E(p(v_k))$, then we say that $\alpha$ is a normalized GLL-type nonmonotone step-size at $v_k$. \end{definition} Here, the integer $M\geq 1$ in \eqref{eq:gll} controls the degree of the nonmonotonicity. 
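The effect of $M$ can be seen in a small Python experiment with the finite-dimensional GLL condition \eqref{eq:nmlsGLL}; the objective, direction, and history values below are illustrative assumptions, not recommendations from \cite{GLL1986SINUM}.

```python
import numpy as np

def gll_gamma(f, x, g, d, f_hist, M, lam=1.0, rho=0.5, sigma=1e-4):
    """Backtracking for the finite-dimensional GLL condition: accept gamma once
    f(x + gamma*d) <= max of the last min(M, k+1) values + sigma*gamma*(g, d)."""
    f_ref = max(f_hist[-M:])                   # nonmonotone reference value
    gd = float(np.dot(g, d))                   # directional derivative (< 0 here)
    gamma = lam
    while f(x + gamma * d) > f_ref + sigma * gamma * gd:
        gamma *= rho
    return gamma

f = lambda x: float(np.dot(x, x))
x, d = np.array([1.0, 0.0]), np.array([-2.5, 0.0])   # overshooting descent direction
g = 2.0 * x
f_hist = [4.0, f(x)]                           # an earlier iterate had the larger value 4
gamma_m1 = gll_gamma(f, x, g, d, f_hist, M=1)  # monotone (Armijo): reference f(x) = 1
gamma_m2 = gll_gamma(f, x, g, d, f_hist, M=2)  # nonmonotone: reference max = 4
print(gamma_m1, gamma_m2)                      # 0.5 1.0
```

With $M=2$ the full trial step is accepted even though $f$ temporarily increases from $1$ to $2.25$, since the two-step maximum still decreases sufficiently; with $M=1$ the rule backtracks to the monotone Armijo step.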
In particular, if $M=1$, \eqref{eq:gll} degenerates to the normalized monotone Armijo-type step-size search rule stated in \eqref{eq:ak-armijo}. When $M>1$, it does not require a monotone decrease of the energy functional values. Thus, the normalized GLL-type nonmonotone step-size search rule \eqref{eq:gll} is another relaxation of the normalized Armijo-type step-size search rule given in \eqref{eq:ak-armijo}. As a straightforward conclusion of Lemma~\ref{lem:armijo}, the following theorem describes the feasibility of the normalized GLL-type nonmonotone step-size search rule. \begin{theorem}\label{thm:gll-feas} Suppose $E\in C^1(X,\mathbb{R})$ and let $\{v_j\}_{j=0}^k$ ($k\geq0$) be a subset of $\mathcal{V}_0$. If $E$ has a peak selection $p$ w.r.t. $L$, denoted by $p(v_j)=t_j v_j+w_j^L$ with $t_j\geq0$ and $w_j^L\in L$ ($j=0,1,\ldots,k$), satisfying (i) $p$ is continuous on $\mathcal{V}_0$; (ii) $t_k>0$; and (iii) $g_k=\nabla E(p(v_k))\neq0$, then for any $M\in\mathbb{N}_+$ and $\sigma\in(0,1)$, there exists an $\alpha_k^A>0$ s.t. \eqref{eq:gll} holds for all $\alpha\in(0,\alpha_k^A)$. \end{theorem} Recalling Algorithm~\ref{alg:lmm}, the algorithm of the LMM with the normalized GLL-type nonmonotone step-size search rule \eqref{eq:gll} is presented in Algorithm~\ref{alg:glllmm}. \begin{algorithm}[!ht] \caption{Normalized GLL-type Nonmonotone Local Minimax Algorithm.} \label{alg:glllmm} Choose $\sigma,\rho\in(0,1)$, $0<\lambda_{\min}<\lambda_{\max}<+\infty$, and $M\in\mathbb{N}_+$, $M>1$. {\bf Steps 1-3} are the same as those in Algorithm~\ref{alg:lmm}. \begin{enumerate}[\bf Step 1.] 
\setcounter{enumi}{3} \item Compute $E_{k,M}=\max\{E(w_{k+1-j}):1\leq j\leq\min\{M, k+1\}\}$, choose $\lambda_k\in[\lambda_{\min},\lambda_{\max}]$ and find \begin{equation}\label{eq:ak-gll} \alpha_k=\max_{m\in\mathbb{N}}\left\{\lambda_k\rho^m: E(p(v_k(\lambda_k\rho^m))) \leq E_{k,M} - \sigma\lambda_k\rho^mt_k\|g_k\|^2\right\}, \end{equation} where the initial guess $w=t_k v_k(\lambda_k\rho^m)+w_k^L$ is used to find the local maximizer $p(v_k(\lambda_k\rho^m))$ of $E$ on $[L, v_k(\lambda_k\rho^m)]$ for $m=0,1,\ldots$. \\ Set $v_{k+1}=v_k(\alpha_k)$. Update $k:=k+1$ and go to {\bf Step~2}. \end{enumerate} \end{algorithm} Denote $E_k=E(w_k)$, $w_k=p(v_k)=t_kv_k+w_k^L$ with $t_k\geq0$ and $w_k^L\in L$. The following lemma describes an important relationship between the step-size determined by the normalized GLL-type nonmonotone step-size search rule \eqref{eq:ak-gll} and the largest normalized Armijo-type step-size \eqref{eq:maxAstep}. It is remarked that, in the subsequent discussion in this subsection, we use the same notations and parameters as those in Algorithm~\ref{alg:glllmm} unless specified. \begin{lemma}\label{lem:gll-akgeq} Let $\{v_k\}\subset \mathcal{V}_0$ be a sequence generated by Algorithm~\ref{alg:glllmm} and $\alpha_k$ be the step-size determined by the normalized GLL-type nonmonotone step-size search rule \eqref{eq:ak-gll} at $v_k$. Then, under the same assumptions as in Theorem~\ref{thm:gll-feas}, we have \vspace{-0.05in} \[ \alpha_k\geq\min\{\lambda_{\min},\rho\bar{\alpha}^A(v_k)\},\quad k=0,1,\ldots, \] where $\bar{\alpha}^A(v_k)$, defined in \eqref{eq:maxAstep}, is the largest normalized Armijo-type step-size at $v_k$. \end{lemma} \begin{proof} Noticing \eqref{eq:ak-gll}, $\alpha_k=\lambda_k\rho^{m_k}$ for some $m_k\in\mathbb{N}$. If $m_k=0$, we have $\alpha_k=\lambda_k\geq\lambda_{\min}$ and the conclusion holds. 
If $m_k>0$, the minimality of $m_k$ implies \[ E(p(v_k(\rho^{-1}\alpha_k))) > E_{k,M}-\sigma\rho^{-1}\alpha_kt_k\|g_k\|^2\geq E_k-\sigma\rho^{-1}\alpha_kt_k\|g_k\|^2. \] Thus, by the definition of the largest normalized Armijo-type step-size $\bar{\alpha}^A(v_k)$ at $v_k$ in \eqref{eq:maxAstep}, one can get $\alpha_k\geq\rho\bar{\alpha}^A(v_k)$ immediately and the conclusion holds. \end{proof} Due to the employment of the normalized GLL-type nonmonotone step-size search rule \eqref{eq:gll} in Algorithm~\ref{alg:glllmm}, the sequence $\{E_k\}$ is no longer monotonically decreasing in general. Fortunately, the lemma below shows that it contains a monotonically decreasing subsequence. The proof is similar to that of Lemma~2.3 in \cite{BMR2003IMAJNA} and Lemma~3.2 in \cite{IP2018IMAJNA}. \begin{lemma}\label{lem:gll-mono} Set $\mu_0=0$ and take $\mu_j\in\{(j-1)M+1,\ldots,jM\}$ for $j=1,2,\ldots$ s.t. $E_{\mu_j}=\max\left\{E_{(j-1)M+1},\ldots,E_{jM}\right\}$ with $E_{\mu_j}=E(p(v_{\mu_j}))$, then \begin{equation}\label{eq:maxEd0} E_{\mu_{j+1}} \leq E_{\mu_j} - \sigma\alpha_{\mu_{j+1}-1}t_{\mu_{j+1}-1}\|g_{\mu_{j+1}-1}\|^2,\quad j=0,1,\ldots. \end{equation} \end{lemma} \begin{proof} Since $\mu_{j+1}=jM+\ell$ for some $\ell\in\{1,2,\ldots,M\}$, it suffices to verify \begin{equation}\label{eq:maxEd1} E_{jM+\ell} \leq E_{\mu_j} - \sigma\alpha_{jM+\ell-1}t_{jM+\ell-1}\|g_{jM+\ell-1}\|^2, \end{equation} for each fixed $j=0,1,\ldots$ and $\forall\,\ell=1,2,\ldots,M$. We will prove it by the inductive argument on $\ell$. The case of $\ell=1$ immediately follows from the normalized GLL-type nonmonotone step-size search rule \eqref{eq:gll} with $k=jM$. As the inductive hypothesis, assume that \eqref{eq:maxEd1} holds for each fixed $j=0,1,\ldots$ and all $1\leq\ell\leq\ell'\leq M-1$, which states that \begin{equation}\label{eq:maxEd3} \max\big\{E_{jM+1},\ldots,E_{jM+\ell'}\big\}\leq E_{\mu_j}. 
\end{equation} Now, we only need to verify that \eqref{eq:maxEd1} also holds for $\ell=\ell'+1\leq M$. Indeed, by applying the normalized GLL-type nonmonotone step-size search rule \eqref{eq:gll} with $k=jM+\ell-1=jM+\ell'$, noticing \eqref{eq:maxEd3} and the definition of $E_{\mu_j}$, $j\geq 0$, we have \begin{align*} E_{jM+\ell} &=E_{jM+\ell'+1} \\ &\leq \max_{1\leq i\leq \min\{M,jM+\ell'+1\}}E_{jM+\ell'+1-i} - \sigma\alpha_{jM+\ell-1}t_{jM+\ell-1}\|g_{jM+\ell-1}\|^2 \\ &= \max_{\max\{0,(j-1)M+\ell'+1\}\leq k\leq jM+\ell'}E_k - \sigma\alpha_{jM+\ell-1}t_{jM+\ell-1}\|g_{jM+\ell-1}\|^2 \\ & \leq \max\big\{E_{\mu_j},E_{jM+1},\ldots,E_{jM+\ell'}\big\} - \sigma\alpha_{jM+\ell-1}t_{jM+\ell-1}\|g_{jM+\ell-1}\|^2 \\ &\leq E_{\mu_j} - \sigma\alpha_{jM+\ell-1}t_{jM+\ell-1}\|g_{jM+\ell-1}\|^2. \end{align*} The proof is completed by induction on $\ell$ for $1\leq \ell\leq M$. \end{proof} Based on the above, the partial convergence result for Algorithm~\ref{alg:glllmm} is established as follows. \begin{theorem}\label{thm:cvg-glllmm} Suppose $E\in C^1(X,\mathbb{R})$ and let $p$ be a peak selection of $E$ w.r.t. $L$, and $\{v_k\}\subset \mathcal{V}_0$ and $\{w_k=p(v_k)\}\subset X$ be sequences generated by Algorithm~\ref{alg:glllmm}. Set $K=\{\mu_j-1:j=1,2,\ldots\}$ with $\mu_j$ defined in Lemma~\ref{lem:gll-mono}. Assume that (i) $p$ is continuous on $\mathcal{V}_0$; (ii) $t_k\geq\delta$ for some $\delta>0$ and $\forall\,k\in K$; and (iii) $\inf_{j\geq0}E_{\mu_j}>-\infty$ hold, then \begin{itemize} \item[{\rm(a)}] $\sum_{k\in K}\alpha_k\|g_k\|^2<\infty$; \item[{\rm(b)}] every accumulation point of $\{w_k\}_{k\in K}$ is a critical point not belonging to $L$; \item[{\rm(c)}] if $\{w_k\}_{k=0}^{\infty}$ converges to a point $u_*$, then $u_*\notin L$ is a critical point. \end{itemize} \end{theorem} \begin{proof} By utilizing Lemma~\ref{lem:gll-mono} and the assumption (iii), $\{E_{\mu_j}\}_{j=0}^{\infty}$ is monotonically decreasing and bounded from below. 
So it converges to a finite number $E_*$. According to \eqref{eq:maxEd0} and the assumption (ii), we obtain
\begin{align*} \sum_{j=0}^{\infty}\alpha_{\mu_{j+1}-1}\|g_{\mu_{j+1}-1}\|^2 \leq \frac{1}{\sigma\delta}\sum_{j=0}^{\infty}\left(E_{\mu_j}-E_{\mu_{j+1}}\right) = \frac{1}{\sigma\delta}\left(E_0-E_*\right) <\infty. \end{align*}
Recalling the definition of the set $K$, the conclusion (a) follows immediately. Based on the conclusion (a) and Lemmas~\ref{lem:home}, \ref{lem:s0}, \ref{lem:gll-akgeq}, the remaining conclusions can be verified by an argument analogous to that used in the proof of Theorem~\ref{thm:cvg-zhlmm}. We omit the details for brevity. \end{proof}
\begin{remark} This theorem guarantees that the sequence $\{w_k\}$ generated by Algorithm~\ref{alg:glllmm} must tend to a new critical point not in $L$ if it converges. Nevertheless, due to the essential nonmonotonicity, establishing the global convergence of the whole sequence $\{w_k\}$ appears difficult. A new analytical strategy is needed and is part of our ongoing work. \end{remark}
\section{Globally convergent BB-type LMM (GBBLMM)} \label{sec:gbblmm}
In this section, we present the GBBLMM by using the nonmonotone globalizations developed in section~\ref{sec:nmlmm} with a BB-type trial step-size for $\lambda_k$ at $v_k$. First, we adapt the BB method from optimization theory and construct the BB-type step-size for the LMM iteration.
\subsection{BB-type step-size for the LMM} \label{sec:bbsslmm}
From Theorem~\ref{thm:lmt0}, under some assumptions, the local solution $v_*$ to the minimization problem \begin{equation}\label{eq:lmm-minEpv} \min_{v\in\mathcal{V}_0}E(p(v)) \end{equation} satisfies $\nabla E(p(v_*))=0$ and $p(v_*)\notin L$ (i.e., $p(v_*)$ is a critical point not in $L$).
As discussed in previous sections, the LMM iteration for solving the minimization problem \eqref{eq:lmm-minEpv} is \begin{equation}\label{eq:lmm-iter} v_{k+1} =v_k(\alpha_k)= \frac{v_k-\alpha_kg_k}{\|v_k-\alpha_kg_k\|},\quad k=0,1,\ldots, \end{equation} where $g_k=\nabla E(p(v_k))$ and $v_k\in \mathcal{V}_0\subset S\backslash L$. A direct calculation shows that \begin{align*} \left\|v_k(\alpha) - \left(v_k-\alpha g_k\right)\right\| &= \left|\frac{1}{\sqrt{1+\alpha^2\|g_k\|^2}}-1\right|\left\|v_k-\alpha g_k\right\| = \frac{\alpha^2\|g_k\|^2}{1+\sqrt{1+\alpha^2\|g_k\|^2}}, \end{align*} and then $v_k(\alpha) = v_k-\alpha g_k + O\left(\alpha^2\|g_k\|^2\right)$. Hence, the linearized iterative scheme \begin{equation}\label{lmm-iter-linearized} v_{k+1}=v_k-\alpha_k g_k\quad\mbox{or}\quad v_{k+1}=v_k-D_k\nabla E(p(v_k)), \quad k=0,1,\ldots, \end{equation} with $D_k=\alpha_kI$ and $I$ the identity operator in $X$, is a second-order approximation to the nonlinear iterative scheme \eqref{eq:lmm-iter}. Similar to the BB method in optimization theory, one can construct a linear iterative scheme \eqref{lmm-iter-linearized} for the nonlinear equation $\nabla E(p(v))=0$ with $\alpha_k$ as a BB-type step-size. Intuitively, such an $\alpha_k$ can serve as the step-size of the nonlinear iterative scheme \eqref{eq:lmm-iter} and is still called a BB-type step-size because \eqref{lmm-iter-linearized} is a second-order approximation to \eqref{eq:lmm-iter}. For this purpose, the step-size $\alpha_k$ is chosen s.t. $D_k=\alpha_kI$ approximately satisfies the ``secant equation" \begin{equation}\label{eq:dys} D_k y_k=s_k, \quad k=1,2,\ldots, \end{equation} with $s_k=v_{k}-v_{k-1}$ and $y_k=g_{k}-g_{k-1}$. Then, solving the least-squares problem \begin{equation}\label{eq:min-say} \min_{\alpha}\|s_k-\alpha y_k\|^2 \quad \mbox{or}\quad \min_{\beta}\|\beta s_k-y_k\|^2\quad\mbox{(w.r.t.
$\beta=\alpha^{-1}$)},\quad k=1,2,\ldots, \end{equation} yields BB-type step-sizes respectively as \begin{equation}\label{eq:bbss} \alpha_k^{\text{BB1}}=\frac{(s_k, y_k)}{(y_k, y_k)} \quad\mbox{or}\quad \alpha_k^{\text{BB2}}=\frac{(s_k, s_k)}{(s_k, y_k)},\;\;(s_k, y_k)>0,\quad k=1,2,\ldots. \end{equation} Another slightly different construction of the BB-type step-size can be obtained by considering the constrained minimization problem \eqref{eq:LMM2} from the point of view of manifold optimization. In fact, a Riemannian BB method for optimization on finite-dimensional Riemannian manifolds has recently been developed in \cite{IP2018IMAJNA}, in which the so-called vector transport is utilized to move vectors from one tangent space to another. Following similar ideas as in \cite{IP2018IMAJNA}, we construct the projected BB-type step-size for the LMM in Hilbert space. However, since the unit sphere $S$ involved here is a simple Hilbert-Riemannian manifold with a natural Riemannian metric induced by the inner product $(\cdot,\cdot)$ of $X$, we avoid the general mathematical setting of infinite-dimensional Hilbert-Riemannian manifolds, which can be found, e.g., in \cite{Lang1995}. The tangent space to the unit spherical manifold $S$ at a point $v\in S$ is given by $T_vS:=\{w\in X:(v, w)=0\}$, which is a Hilbert subspace equipped with the inner product $(\cdot,\cdot)_v=(\cdot,\cdot)$ and the norm $\|\cdot\|_v=\|\cdot\|$. Note that the second-order Fr\'{e}chet-derivative of a smooth functional defined on $S$ at $v\in S$ is a linear mapping from $T_vS$ to $T_vS$ \cite{Lang1995}. To preserve this property, both vectors $s_k$ and $y_k$ appearing in the secant equation \eqref{eq:dys} should belong to $T_{v_k}S$.
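As a quick illustration, the two BB-type step-sizes in \eqref{eq:bbss} can be computed as follows. The sketch below uses plain Euclidean vectors as a finite-dimensional stand-in for elements of $X$, and the function name is an illustrative choice of ours, not part of any code referenced in this paper.

```python
import numpy as np

def bb_step_sizes(s, y):
    """BB-type step-sizes of eq. (bbss): alpha^BB1 = (s,y)/(y,y) and
    alpha^BB2 = (s,s)/(s,y), defined only when (s,y) > 0.

    Here s = v_k - v_{k-1} and y = g_k - g_{k-1}; the Euclidean dot
    product stands in for the inner product of the Hilbert space X.
    """
    sy = float(np.dot(s, y))
    if sy <= 0.0:
        return None, None  # BB step-sizes unavailable; the caller falls back
    alpha_bb1 = sy / float(np.dot(y, y))
    alpha_bb2 = float(np.dot(s, s)) / sy
    return alpha_bb1, alpha_bb2
```

Note that, by the Cauchy-Schwarz inequality, $\alpha_k^{\text{BB1}}\leq\alpha_k^{\text{BB2}}$ always holds whenever $(s_k,y_k)>0$.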
Replacing $s_k$ and $y_k$ in \eqref{eq:min-say} with $\hat{s}_k=P_{v_k}s_k$ and $\hat{y}_k=P_{v_k}y_k$, respectively, where $P_{v_k}$ denotes the orthogonal projection from $X$ onto $T_{v_k}S$ with $P_{v_k}u=u-(u, v_k)v_k$, $\forall\,u\in X$, we can obtain the following projected BB-type step-size as \vspace{-0.05in} \begin{equation}\label{eq:pbbss} \alpha_k^{\text{PBB1}}=\frac{(\hat{s}_k,\hat{y}_k)}{(\hat{y}_k,\hat{y}_k)} \quad\mbox{or}\quad \alpha_k^{\text{PBB2}}=\frac{(\hat{s}_k,\hat{s}_k)}{(\hat{s}_k,\hat{y}_k)},\;\; (\hat{s}_k,\hat{y}_k)>0,\quad k=1,2,\ldots. \end{equation} Clearly, $P_{v_k}(v_k)=0$. Applying the fact that $g_k\in[L,v_k]^\bot$ from Lemma~\ref{lem:orth}, it follows that $P_{v_k}(g_k)=g_k$. Hence, for $k=1,2,\ldots$, we have \begin{align*} \hat{s}_k = -P_{v_{k}}\left(v_{k-1}\right) = -\alpha_{k-1}P_{v_k}(g_{k-1}),\quad \hat{y}_k = g_k-P_{v_k}(g_{k-1}) = g_k+\hat{s}_k/\alpha_{k-1}. \end{align*} We remark here that in \eqref{eq:bbss} and \eqref{eq:pbbss}, the main computational cost is the calculation of inner products. In addition, from the expressions of $\hat{s}_k$ and $\hat{y}_k$, the key ingredient is the computation of the projection $P_{v_k}(g_{k-1})$ with $P_{v_k}(g_{k-1})=g_{k-1}-(g_{k-1},v_k)v_k$. In practice, compared to the BB-type step-size \eqref{eq:bbss}, only one additional inner product, i.e., $(g_{k-1},v_k)$, needs to be calculated for the projected BB-type step-size \eqref{eq:pbbss}. \subsection{BB-type LMM with nonmonotone globalizations} Owing to the essentially nonmonotone behavior of the BB method, the nonlinearity and nonconvexity of the functional $E$, and the multiplicity and instability of saddle points, the convergence analysis for the LMM with the (projected) BB-type step-size is potentially difficult. To obtain a convergence safeguard, one needs to develop a globalization strategy.
For this purpose, we propose the GBBLMM, which combines a nonmonotone search strategy developed in section~\ref{sec:nmlmm} with a trial step-size $\lambda_k$ determined by the BB-type step-size \eqref{eq:bbss} or the projected BB-type step-size \eqref{eq:pbbss}. Since the (projected) BB-type step-size is defined only for $k\geq1$, an appropriate initial trial step-size $\lambda_0$ is needed. For $k\geq 1$, when $(s_k, y_k)\leq0$ (respectively, $(\hat{s}_k,\hat{y}_k)\leq0$), the BB-type step-size \eqref{eq:bbss} (respectively, the projected BB-type step-size \eqref{eq:pbbss}) is unavailable. In this case, we simply set the trial step-size as $\lambda_k=\lambda_0$. To handle cases in which the (projected) BB-type step-size is unacceptably large or small, we further require that the trial step-size $\lambda_k$ satisfy the condition \[0<\lambda_{\min}\leq\lambda_k\leq\lambda_{\max},\quad k=1,2,\ldots. \] Here, $\lambda_{\min}$ is to prevent $\lambda_k$ from being too small, while $\lambda_{\max}$ is to avoid the search along the curve $\{v_k(\alpha):\alpha>0\}$ going too far and to enhance the stability of the algorithm. Hence, for $k\geq1$, the trial step-size $\lambda_k$ can be defined as one of the following, \begin{align} \lambda_k &= \begin{cases} \min\left\{\max\left\{\alpha_k^{\text{BB}}, \lambda_{\min}\right\}, \lambda_{\max}\right\}, & \mbox{if } (s_k, y_k)>0, \\ \lambda_0, &\mbox{otherwise}, \end{cases} \label{eq:lambdak-bb} \\ \lambda_k &= \begin{cases} \min\left\{\max\left\{\alpha_k^{\text{PBB}}, \lambda_{\min}\right\}, \lambda_{\max}\right\}, & \mbox{if } (\hat{s}_k,\hat{y}_k)>0, \\ \lambda_0, &\mbox{otherwise}, \end{cases} \label{eq:lambdak-pbb} \end{align} with $\alpha_k^{\text{BB}}\in\{\alpha_k^{\text{BB1}},\alpha_k^{\text{BB2}}\}$ and $\alpha_k^{\text{PBB}}\in\{\alpha_k^{\text{PBB1}},\alpha_k^{\text{PBB2}}\}$.
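To make these ingredients concrete, the sketch below combines the projection $P_{v_k}u=u-(u,v_k)v_k$, the projected step-sizes \eqref{eq:pbbss}, the safeguard \eqref{eq:lambdak-pbb}, and a backtracking search with a GLL-type nonmonotone acceptance test in the spirit of \eqref{eq:gll}. Euclidean vectors stand in for elements of $X$, the curve $\alpha\mapsto E(p(v_k(\alpha)))$ is abstracted as a callable, and all function names and default parameter values are illustrative choices of ours, not the MATLAB code used for the experiments.

```python
import numpy as np

def project(u, v):
    """Orthogonal projection P_v u = u - (u, v) v onto the tangent space
    T_v S at a unit vector v."""
    return u - np.dot(u, v) * v

def pbb_step_sizes(v_k, g_k, g_prev, alpha_prev):
    """Projected BB-type step-sizes of eq. (pbbss), built from
    s_hat = -alpha_{k-1} P_{v_k} g_{k-1} and y_hat = g_k - P_{v_k} g_{k-1};
    beyond eq. (bbss), only one extra inner product (g_{k-1}, v_k) is
    needed.  Returns (None, None) when (s_hat, y_hat) <= 0."""
    pg = project(g_prev, v_k)
    s_hat = -alpha_prev * pg
    y_hat = g_k - pg
    sy = float(np.dot(s_hat, y_hat))
    if sy <= 0.0:
        return None, None
    return sy / float(np.dot(y_hat, y_hat)), float(np.dot(s_hat, s_hat)) / sy

def trial_step(alpha_bb, lam0, lam_min=1e-6, lam_max=10.0):
    """Safeguarded trial step-size of eqs. (lambdak-bb)/(lambdak-pbb):
    clamp the BB-type step-size into [lam_min, lam_max]; fall back to
    lam0 when it is unavailable (signalled here by None)."""
    if alpha_bb is None:
        return lam0
    return min(max(alpha_bb, lam_min), lam_max)

def gll_backtrack(E_hist, E_curve, lam, t_k, g_norm2,
                  sigma=1e-4, rho=0.2, M=10, max_backtracks=60):
    """Backtracking with a GLL-type nonmonotone acceptance test in the
    spirit of rule (gll): accept the largest alpha = lam * rho**m with
    E_curve(alpha) <= max(last M energies) - sigma*alpha*t_k*||g_k||^2."""
    E_ref = max(E_hist[-M:])
    alpha = lam
    for _ in range(max_backtracks):
        if E_curve(alpha) <= E_ref - sigma * alpha * t_k * g_norm2:
            return alpha
        alpha *= rho
    raise RuntimeError("nonmonotone step-size search failed")
```

A typical step would then call `pbb_step_sizes`, pass one of the returned values through `trial_step` to get $\lambda_k$, and hand $\lambda_k$ to `gll_backtrack` to obtain the accepted $\alpha_k$.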
Revisiting Algorithms~\ref{alg:lmm}, \ref{alg:zhlmm} and \ref{alg:glllmm}, the main steps of the GBBLMM are summarized in Algorithm~\ref{alg:gbblmm}. \begin{algorithm}[!ht] \caption{Algorithm of the GBBLMM.} \label{alg:gbblmm} \begin{enumerate}[\bf Step~1.] \item Perform the same initialization as Algorithm~\ref{alg:lmm} and Algorithm~\ref{alg:zhlmm} or \ref{alg:glllmm}. Take $\lambda_0\in[\lambda_{\min},\lambda_{\max}]$. Set $k:=0$. Compute $w_0=p(v_0)$ and $g_0=\nabla E(w_0)$. Repeat {\bf Steps~2-4} until the stopping criterion is satisfied (e.g., $\|\nabla E(w_k)\|\leq \varepsilon_{\mathrm{tol}}$ for a given tolerance $0<\varepsilon_{\mathrm{tol}}\ll 1$), then output $u_n=w_k$. \item Find $\alpha_k=\lambda_k\rho^{m_k}$ with $m_k$ the smallest nonnegative integer satisfying the normalized ZH-type nonmonotone step-size search rule \eqref{eq:ZHcond} as in Algorithm~\ref{alg:zhlmm} or the normalized GLL-type nonmonotone step-size search rule \eqref{eq:gll} as in Algorithm~\ref{alg:glllmm}. \item Set $v_{k+1}=v_k(\alpha_k)$ and $w_{k+1}=p(v_k(\alpha_k))$, compute $g_{k+1}=\nabla E(w_{k+1})$, and update $k:=k+1$. \item Compute $\lambda_k$ according to \eqref{eq:lambdak-bb} or \eqref{eq:lambdak-pbb} and go to {\bf Step~2}. \end{enumerate} \end{algorithm} \section{Numerical experiments} \label{sec:numer} In this section, we apply Algorithm~\ref{alg:gbblmm} to find multiple unstable solutions of several nonlinear BVPs with variational structure. We take the parameters $\sigma=10^{-4}$, $\rho=0.2$, $M=10$, $\eta_k\equiv0.85$, $\lambda_{\min}=10^{-6}$, $\lambda_{\max}=10$, and set $\lambda_0=\lambda=0.1$ and $\lambda_k$ $(k\geq1)$ as defined in \eqref{eq:lambdak-bb} with $\alpha_k^{\text{BB}}=\alpha_k^{\text{BB1}}$ unless otherwise specified. Our numerical experiments illustrate that Algorithm~\ref{alg:gbblmm} performs similarly with either the normalized ZH-type or the normalized GLL-type nonmonotone step-size search rule.
However, limited by the length of the paper, we only show numerical results of Algorithm~\ref{alg:gbblmm} using the normalized ZH-type nonmonotone step-size search rule. We remark here that all numerical experiments in this paper are implemented in MATLAB (R2017b) on a PC with an Intel Core i5-4300M CPU (2.60GHz) and 4.00GB of RAM. In the code of Algorithm~\ref{alg:gbblmm}, the MATLAB subroutine {\ttfamily fminunc} is called to compute the local peak selection. \subsection{Semilinear Dirichlet BVPs} \label{sec:numer:slpde} Consider the homogeneous Dirichlet BVP \begin{equation}\label{eq:slpde} -\Delta u(\mathbf{x})=f(\mathbf{x},u(\mathbf{x}))\quad \mbox{in }\Omega, \qquad u(\mathbf{x})=0\quad\mbox{on }\partial\Omega, \end{equation} where $\Omega$ is a bounded domain in $\mathbb{R}^d$ with a Lipschitz boundary $\partial\Omega$ and the function $f:\bar{\Omega}\times\mathbb{R}\to \mathbb{R}$ satisfies the standard hypotheses ($f$1)-($f$4) \cite{LZ2001SISC,Rabinowitz1986} given below. We omit the variable $\mathbf{x}\in\bar{\Omega}\subset\mathbb{R}^d$ in the following unless specified. \begin{enumerate}[($f$1)] \item $f(\mathbf{x},\xi)$ is locally Lipschitz on $\bar{\Omega}\times\mathbb{R}$ and $f(\mathbf{x},\xi)=o(|\xi|)$ as $\xi\to0$; \item there are constants $c_1,c_2>0$ s.t. $|f(\mathbf{x},\xi)|\leq c_1+c_2|\xi|^s$, where $s$ satisfies $1<s<2^*-1$ with $2^*:=2d/(d-2)$ if $d>2$ and $2^*:=\infty$ if $d=1,2$; \item there are constants $\mu>2$ and $R>0$ s.t. for $|\xi|\geq R$, $0<\mu F(\mathbf{x},\xi)\leq f(\mathbf{x},\xi)\xi$, where $F(\mathbf{x},u)=\int_0^uf(\mathbf{x},\xi)d\xi$; \item $f(\mathbf{x},\xi)/|\xi|$ is increasing w.r.t. $\xi$ on $\mathbb{R}\backslash\{0\}$.
\end{enumerate} The energy functional associated to the BVP \eqref{eq:slpde} is \[ E(u)=\int_{\Omega}\left(\frac12|\nabla u|^2-F(\mathbf{x},u)\right) d\mathbf{x},\quad u\in X, \] where $X:=H_0^1(\Omega)$ is equipped with the inner product and norm as \[ (u, v)=\int_{\Omega}\nabla u\cdot\nabla vd\mathbf{x},\quad \|u\|=\sqrt{(u,u)},\quad\forall \,u, v\in X. \] According to \cite{LZ2001SISC,Rabinowitz1986}, the following facts are true: \begin{enumerate}[(1)] \item Under hypotheses $(f1)$-$(f3)$, $E\in C^1(X,\mathbb{R})$ and satisfies the (PS) condition. Further, any critical point of $E$ is a weak solution and also a classical solution of the BVP \eqref{eq:slpde}. Moreover, $E$ has a mountain pass structure with a unique local minimizer, i.e., $u=0$. Therefore, for any finite-dimensional closed subspace $L$ of $X$, the peak mapping $P(v)$ of $E$ w.r.t. $L$ at each $v\in S$ is nonempty. \item Under hypotheses $(f1)$-$(f4)$, for any finite-dimensional closed subspace $L$ of $X$, the uniqueness of a local peak selection $p(v)$ of $E$ w.r.t. $L$ implies its continuity at $v$. For the case of $L=\{0\}$, there is only one peak selection $p(v)$ of $E$ w.r.t. $L$ at any $v\in S$ and then $p$ is continuous on $S$. Moreover, in this case, there exists a constant $\delta>0$ s.t. $\mathrm{dist}(p(v),L)=\|p(v)\|\geq\delta>0$, $\forall\,v\in S$. \end{enumerate} It is clear that functions of the form $f(\mathbf{x},\xi)=|\xi|^{\gamma-1}\xi$ with $1<\gamma<2^*-1$ satisfy hypotheses $(f1)$-$(f4)$, and so do all positive linear combinations of such functions. A typical example is that $f(\mathbf{x}, u)=|\mathbf{x}|^\ell u^3$, which leads to the H\'{e}non equation as \begin{equation}\label{eq:henon} -\Delta u=|\mathbf{x}|^\ell u^3 \quad \mbox{in }\Omega, \qquad u=0\quad \mbox{on }\partial\Omega, \end{equation} where $\ell$ is a nonnegative parameter. The equation \eqref{eq:henon} was introduced by H\'{e}non \cite{H1973AA} when he studied rotating stellar structures. 
If $\ell=0$, this equation is also called the Lane-Emden equation. By \eqref{eq:gk} and a simple calculation, the gradient $g_k\in X=H_0^1(\Omega)$ of $E$ at an iterative point $w_k=p(v_k)$ can be expressed as $g_k=w_k-\phi_k$ with $\phi_k$ the weak solution to the linear BVP \[ -\Delta \phi_k = f(\mathbf{x},w_k)\quad \mbox{in }\Omega,\quad \phi_k=0\quad\mbox{on }\partial\Omega. \] Thus, the main cost for computing the gradient $g_k$ is solving the Poisson equation. For $d=2$, in our MATLAB code, {\ttfamily assempde}, a finite element subroutine provided by the MATLAB PDE Toolbox, is called with 32768 triangular elements to handle this task. In addition, the initial ascent direction $v_0$ is taken as the normalization of the solution to the following Poisson equation \begin{equation}\label{eq:poisson-v0} -\Delta \tilde{v}_0=\mathbf{1}_{\Omega_1}-\mathbf{1}_{\Omega_2}\quad\mbox{in }\Omega,\qquad \tilde{v}_0=0\quad\mbox{on }\partial\Omega, \end{equation} where $\mathbf{1}_.=\mathbf{1}_.(\mathbf{x})$ is the indicator function and $\Omega_1,\Omega_2$ are two disjoint subdomains of $\Omega$ for controlling the convexity of $v_0$. The stopping criterion for all examples below is set as $\|g_k\|<10^{-5}$ and $\max_{\mathbf{x}\in\Omega}\big|\Delta w_k+f(\mathbf{x},w_k)\big|<5\times10^{-5}$. We remark here that examples and profiles of all solutions in this subsection are only shown on a square in $\mathbb{R}^2$; however, our approach is also applicable and efficient for other domains such as a ball, a dumbbell, or more complex domains. \begin{example}\rm{\bf (Lane-Emden equation)}$\;$\label{ex:lesq} In this example, we employ Algorithm~\ref{alg:gbblmm} to compute a few nontrivial solutions to the Lane-Emden equation on a square, i.e., \eqref{eq:henon} with $\ell=0$, $d=2$ and $\mathbf{x}=(x_1,x_2)\in\Omega=(-1,1)^2$. Limited by the length of the paper, we only profile ten of the solutions obtained, labeled as $u_1,u_2,\ldots,u_{10}$ in Fig.~\ref{fig:LE10sols}.
For each solution, the information on the corresponding support space $L$, the initial ascent direction $v_0$ and its energy functional value is given in Table~\ref{tab:lesq-init}, with the notation `$[\cdots]$' denoting the space spanned by the functions inside it. It is observed that the solution $u_1$ is a nontrivial positive solution with the lowest energy and the others are sign-changing solutions with higher energy. In fact, according to Theorem~1 in \cite{L1994MM}, since $\Omega$ is convex in $\mathbb{R}^2$, $u_1$ is actually the unique positive solution to the Lane-Emden equation. Moreover, the existence of $u_1$ has been proved by the mountain pass lemma in \cite{Rabinowitz1986} and it is called the least-energy solution or the ground state solution. Then, we compare the efficiency of our GBBLMM with traditional LMMs for computing the nontrivial solutions $u_1,u_2,\ldots,u_{10}$ in Fig.~\ref{fig:LE10sols} with the same initial information stated in Table~\ref{tab:lesq-init}. The CPU time and the number of iterations are exhibited in Table~\ref{tab:lesq2-comp} and Fig.~\ref{fig:LE10sols-cvg}, in which, respectively, `Exact' denotes Algorithm~\ref{alg:lmm} with the exact step-size search rule, i.e., the step-size $\alpha_k$ is chosen such that \begin{equation*} E(p(v_k(\alpha_k)))=\min_{0<\alpha\leq\lambda_{\max}}E(p(v_k(\alpha))); \end{equation*} `Armijo' denotes Algorithm~\ref{alg:lmm} with the normalized Armijo-type step-size search rule given in \eqref{eq:ak-armijo}. `BB1' (or `BB2') denotes Algorithm~\ref{alg:gbblmm} with $\lambda_k$ ($k\geq1$) defined in \eqref{eq:lambdak-bb} with $\alpha_k^{\text{BB}}=\alpha_k^{\text{BB1}}$ (or $\alpha_k^{\text{BB}}=\alpha_k^{\text{BB2}}$). `PBB1' (or `PBB2') denotes Algorithm~\ref{alg:gbblmm} with $\lambda_k$ ($k\geq1$) defined in \eqref{eq:lambdak-pbb} with $\alpha_k^{\text{PBB}}=\alpha_k^{\text{PBB1}}$ (or $\alpha_k^{\text{PBB}}=\alpha_k^{\text{PBB2}}$).
`ABB' denotes Algorithm~\ref{alg:gbblmm} with $\lambda_k$ ($k\geq1$) defined in \eqref{eq:lambdak-bb} with $\alpha_k^{\text{BB}}=\alpha_k^{\text{BB1}}$ if $k$ is odd and $\alpha_k^{\text{BB}}=\alpha_k^{\text{BB2}}$ if $k$ is even. `APBB' denotes Algorithm~\ref{alg:gbblmm} with $\lambda_k$ ($k\geq1$) defined in \eqref{eq:lambdak-pbb} with $\alpha_k^{\text{PBB}}=\alpha_k^{\text{PBB1}}$ if $k$ is odd and $\alpha_k^{\text{PBB}}=\alpha_k^{\text{PBB2}}$ if $k$ is even. From Table~\ref{tab:lesq2-comp}, Fig.~\ref{fig:LE10sols-cvg} and additional results not shown here, it is observed that our GBBLMM is quite efficient for solving the Lane-Emden equation, requiring fewer iterations and less CPU time than the LMM using the exact step-size search rule or the normalized Armijo-type step-size search rule. Moreover, for different choices of BB-type step-sizes, the corresponding algorithms of the GBBLMM have similar efficiency. \end{example} \begin{table}[!t] \centering \small \caption{The initial information and energy functional value for each solution in Fig.~\ref{fig:LE10sols}.} \label{tab:lesq-init} \vspace{-0.1in} \begin{tabular}{|c|l|l|l|r|} \hline $u_n$ & \quad$L$ & \quad$\Omega_1$ & ~~$\Omega_2$ & $E(u_n)$~ \\ \hline $u_1$ & $\{0\}$ & $\Omega$ & $\varnothing$ & 9.4460 \\ \hline $u_2$ & $[u_1]$ & $\Omega\cap\{x_1>0\}$ & $\Omega\backslash\Omega_1$ & 53.6731 \\ \hline $u_3$ & $[u_1]$ & $\Omega\cap\{x_2>0\}$ & $\Omega\backslash\Omega_1$ & 53.6731 \\ \hline $u_4$ & $[u_1]$ & $\Omega\cap\{x_1+x_2>0\}$ & $\Omega\backslash\Omega_1$ & 48.8807 \\ \hline $u_5$ & $[u_1]$ & $\Omega\cap\{x_1-x_2>0\}$ & $\Omega\backslash\Omega_1$ & 48.8807 \\ \hline $u_6$ & $[u_1,u_2]$ & $\Omega\cap\{|x_1|>0.2\}$ & $\Omega\backslash\Omega_1$ & 178.0269 \\ \hline $u_7$ & $[u_1,u_4]$ & $\Omega\cap\{|x_1+x_2|>0.3\}$ & $\Omega\backslash\Omega_1$ & 135.6335 \\ \hline $u_8$ & $[u_1,u_2,u_3]$ & $\Omega\cap\{x_1x_2>0\}$ & $\Omega\backslash\Omega_1$ & 151.3864 \\ \hline $u_9$ & $[u_1,u_4,u_5]$ &
$\Omega\cap\{|x_1|>|x_2|\}$ & $\Omega\backslash\Omega_1$ & 195.7620 \\ \hline $u_{10}$ & $[u_1,u_2,u_3,u_8]$ & $\Omega\cap\{x_1^2+x_2^2>0.25\}$ & $\Omega\backslash\Omega_1$ & 233.9289 \\ \hline \end{tabular} \end{table} \def.18\textwidth{.19\textwidth} \def.09\textwidth{.15\textwidth} \begin{figure}[!t] \footnotesize \makebox[.18\textwidth]{$u_1$} \makebox[.18\textwidth]{$u_2$} \makebox[.18\textwidth]{$u_3$} \makebox[.18\textwidth]{$u_4$} \makebox[.18\textwidth]{$u_5$} \\ \hspace*{2ex} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u1} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u2} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u3} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u4} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u5} \\ \makebox[.18\textwidth]{$u_6$} \makebox[.18\textwidth]{$u_7$} \makebox[.18\textwidth]{$u_8$} \makebox[.18\textwidth]{$u_9$} \makebox[.18\textwidth]{$u_{10}$} \\ \hspace*{2ex} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u6} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u7} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u8} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u9} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u10} \\ \vspace{-0.3in} \caption{Profiles of ten solutions of the Lane-Emden equation on $\Omega=(-1,1)^2$.} \label{fig:LE10sols} \end{figure} \begin{table}[!t] \centering \caption{Numerical comparisons of the GBBLMM with traditional LMMs in terms of the CPU time (in seconds) for computing those solutions in Fig.~\ref{fig:LE10sols} with the shortest time underlined.} \label{tab:lesq2-comp} \vspace{-0.1in} \footnotesize \begin{tabular}{|l|rrrrrrrr|} \hline $u$ & Exact & Armijo & BB1~ & PBB1 & BB2~ & PBB2 & ABB~ & APBB \\ \hline $u_1$ & 2.1096 & 3.8458 & 1.3353 & 1.2192 & 1.3089 & \underline{1.1933} & 1.3728 & 1.2367 \\ $u_2$ & 
15.2261 & 3.2181 & \underline{1.8661} & 1.8975 & 1.8775 & 2.5628 & 2.0697 & 2.2227 \\ $u_3$ & 17.7360 & 4.7765 & 2.0224 & \underline{1.9484} & 2.0085 & 2.3639 & 2.1205 & 2.4500 \\ $u_4$ & 26.2401 & 4.4908 & 2.7760 & 2.6123 & \underline{2.4365} & 4.5057 & 2.8080 & 2.9564 \\ $u_5$ & 25.7855 & 4.1724 & 4.8276 & 2.5380 & \underline{2.3958} & 3.7228 & 3.0434 & 4.9856 \\ $u_6$ & 95.2901 & 18.2996 & 6.8527 & 7.0348 & \underline{6.2970} & 8.1687 & 6.4826 & 6.8969 \\ $u_7$ & 106.4799 & 15.1870 & 4.1001 & \underline{3.9299} & 5.0760 & 10.5012 & 5.1192 & 7.2954 \\ $u_8$ & 53.2211 & 12.0485 & 4.2981 & 4.2186 & 4.0111 & 2.9897 & 4.0687 & \underline{2.7712} \\ $u_9$ & 87.9478 & 9.6975 & 5.4521 & \underline{3.7206} & 6.2267 & 6.1504 & 6.8100 & 5.2849 \\ $u_{10}$ & 174.1314 & 19.8673 & 9.8359 & \underline{8.0634} & 8.9516 & 8.1358 & 11.0005 & 9.1701 \\ \hline \end{tabular} \end{table} \def.18\textwidth{.36\textwidth} \def.09\textwidth{.175\textheight} \begin{figure}[!t] \centering \footnotesize \makebox[0.03\textwidth][r]{(a)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u1_cvg} \qquad \makebox[0.03\textwidth][r]{(b)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u2_cvg} \\ \makebox[0.03\textwidth][r]{(c)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u3_cvg} \qquad \makebox[0.03\textwidth][r]{(d)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u4_cvg} \\ \makebox[0.03\textwidth][r]{(e)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u5_cvg} \qquad \makebox[0.03\textwidth][r]{(f)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u6_cvg} \\ \makebox[0.03\textwidth][r]{(g)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u7_cvg} \qquad \makebox[0.03\textwidth][r]{(h)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u8_cvg} \\ \makebox[0.03\textwidth][r]{(i)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u9_cvg} \qquad 
\makebox[0.03\textwidth][r]{(j)}\includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/LE_u10_cvg} \\ \caption{Numerical comparison of the GBBLMM with traditional LMMs in terms of the convergence rate for computing solutions in Fig.~\ref{fig:LE10sols}: (a) $\sim$ (j) for $u_1\sim u_{10}$, respectively. The horizontal and vertical coordinates represent the number of iterations and the norm of the gradient, respectively.} \label{fig:LE10sols-cvg} \end{figure} \begin{example}\rm{\bf (H\'{e}non equation)}$\;$\label{ex:hensq} Now, we employ Algorithm~\ref{alg:gbblmm} to compute a few nontrivial solutions to the H\'{e}non equation \eqref{eq:henon} on $\Omega=(-1,1)^2$. First, for different $\ell\geq0$, we compute the ground state solution by taking $L=\{0\}$ and $v_0$ according to \eqref{eq:poisson-v0} with $\Omega_1=\{\mathbf{x}=(x_1,x_2)\in\Omega:x_1>0,x_2>0\}$ and $\Omega_2=\varnothing$. The profiles of the corresponding ground state solutions for different $\ell$ are presented in Fig.~\ref{fig:hensq-gs}. From Fig.~\ref{fig:hensq-gs} and other numerical results in various domains not shown here, one can numerically observe that, when $\ell$ is close to zero (approximately, $\ell\leq0.5$), the ground state solution is symmetric and attains its maximum value at the center of the domain; when $\ell$ is large (approximately, $\ell\geq0.6$), the maximizer of the ground state solution gradually moves away from the center of the domain, i.e., the symmetry-breaking occurs. A similar interesting phenomenon for the H\'{e}non equation on the unit ball was first numerically observed in \cite{CZN2000IJBC} and then theoretically verified in \cite{SWS2002CCM}. To the best of our knowledge, the rigorous analysis of the exact critical value of $\ell$ that determines whether symmetry-breaking occurs for the H\'{e}non equation on domains other than the unit ball is still an open problem. It is one of the interesting issues to be considered in our future work.
Then, taking $\ell=6$, we profile twelve solutions obtained and labeled as $u_1,u_2,\ldots,$ $u_{12}$ in Fig.~\ref{fig:hensq12sols}. For each solution, the information on the corresponding support space $L$, the initial ascent direction $v_0$ and its energy functional value is listed in Table~\ref{tab:hensq-init}. It is observed that $u_1$, $u_2$, $u_3$, $u_6$ and $u_9$ are five positive solutions and the others are sign-changing solutions. Distinguished from the case of $\ell=0$ (see the Lane-Emden equation in Example~\ref{ex:lesq}), the positive solution is no longer unique and more nontrivial solutions appear. The multiplicity of positive solutions for large $\ell$ is also numerically observed and theoretically analyzed in the literature; see, e.g., \cite{CZN2000IJBC,LZ2002SISC,SWS2002CCM,YLZ2008SCSA}. In addition, our approach is also compared with traditional LMMs for the H\'{e}non equation and shows significant superiority in performance, similar to that in Example~\ref{ex:lesq}; the details are skipped here owing to space limitations.
\end{example} \def.18\textwidth{.19\textwidth} \def.09\textwidth{.15\textwidth} \begin{figure}[!t] \footnotesize \makebox[.18\textwidth]{$\ell=0$} \makebox[.18\textwidth]{$\ell=0.1$} \makebox[.18\textwidth]{$\ell=0.2$} \makebox[.18\textwidth]{$\ell=0.3$} \makebox[.18\textwidth]{$\ell=0.4$} \\ \hspace*{2ex} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0_1} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0_2} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0_3} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0_4} \\ \makebox[.18\textwidth]{$\ell=0.5$} \makebox[.18\textwidth]{$\ell=0.6$} \makebox[.18\textwidth]{$\ell=0.7$} \makebox[.18\textwidth]{$\ell=0.8$} \makebox[.18\textwidth]{$\ell=0.9$} \\ \hspace*{2ex} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0_5} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0_6} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0_7} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0_8} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r0_9} \\ \makebox[.18\textwidth]{$\ell=1$} \makebox[.18\textwidth]{$\ell=2$} \makebox[.18\textwidth]{$\ell=3$} \makebox[.18\textwidth]{$\ell=4$} \makebox[.18\textwidth]{$\ell=5$} \\ \hspace*{2ex} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r1} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r2} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r3} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r4} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_ug_r5} \\ \vspace{-0.3in} \caption{Profiles of ground state solutions of the H\'{e}non equation on $\Omega=(-1,1)^2$ with different $\ell s$.} 
\label{fig:hensq-gs} \end{figure} \begin{table}[!t] \centering \small \caption{The initial information and energy functional value for each solution in Fig.~\ref{fig:hensq12sols}.} \label{tab:hensq-init} \vspace{-0.1in} \begin{tabular}{|c|l|l|l|r|} \hline $u_n$ & \quad$L$ & \quad$\Omega_1$ & \quad$\Omega_2$ & $E(u_n)$~ \\ \hline $u_1$ & $\{0\}$ & $\Omega\cap\{x_1>0,x_2>0\}$ & $\varnothing$ & 61.9634 \\ \hline $u_2$ & $[u_1]$ & $\Omega\cap\{x_1<0,x_2>0\}$ & $\varnothing$ & 120.7887 \\ \hline $u_3$ & $[u_1]$ & $\Omega\cap\{x_1<0,x_2<0\}$ & $\varnothing$ & 122.4078 \\ \hline $u_4$ & $[u_1]$ & $\Omega\cap\{x_2>0\}$ & $\varnothing$ & 126.6988 \\ \hline $u_5$ & $[u_1]$ & $\Omega\cap\{x_1>0,x_2>0\}$ & $\Omega\cap\{x_1<0,x_2<0\}$ & 125.3561 \\ \hline $u_6$ & $[u_1,u_2]$ & $\Omega\cap\{x_1<0,x_2<0\}$ & $\varnothing$ & 177.6068 \\ \hline $u_7$ & $[u_1,u_3]$ & $\Omega\cap\{x_2>0\}$ & $\varnothing$ & 187.1379 \\ \hline $u_8$ & $[u_1,u_4]$ & $\Omega\cap\{x_1<0,x_2<0\}$ & $\varnothing$ & 189.9406 \\ \hline $u_9$ & $[u_1,u_2,u_6]$ & $\Omega\cap\{x_1>0,x_2<0\}$ & $\varnothing$ & 230.0141 \\ \hline $u_{10}$ & $[u_1,u_2,u_6]$ & $\Omega\cap\{x_2<0\}$ & $\Omega\cap\{x_2>0\}$ & 247.0220 \\ \hline $u_{11}$ & $[u_1,u_2,u_6]$ & $\Omega\cap\{x_1x_2>0\}$ & $\Omega\cap\{x_1x_2<0\}$ & 250.6746 \\ \hline $u_{12}$ & $[u_1,u_2,u_6]$ & $\Omega\cap\{x_1x_2<0\}$ & $\varnothing$ & 255.9728 \\ \hline \end{tabular} \end{table} \def.18\textwidth{.24\textwidth} \def.09\textwidth{.18\textwidth} \begin{figure}[!t] \footnotesize \makebox[.18\textwidth]{$u_1$} \makebox[.18\textwidth]{$u_2$} \makebox[.18\textwidth]{$u_3$} \makebox[.18\textwidth]{$u_4$} \\ \hspace*{2ex} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u1} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u2} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u3} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u4} \\ \makebox[.18\textwidth]{$u_5$} 
\makebox[.18\textwidth]{$u_6$} \makebox[.18\textwidth]{$u_7$} \makebox[.18\textwidth]{$u_8$} \\ \hspace*{2ex} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u5} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u6} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u7} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u8} \\ \makebox[.18\textwidth]{$u_9$} \makebox[.18\textwidth]{$u_{10}$} \makebox[.18\textwidth]{$u_{11}$} \makebox[.18\textwidth]{$u_{12}$} \\ \hspace*{2ex} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u9} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u10} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u11} \includegraphics[width=.18\textwidth,height=.09\textwidth]{fig/henon_u12} \\ \vspace{-0.3in} \caption{Profiles of twelve solutions of the H\'{e}non equation with $\ell=6$ on $\Omega=(-1,1)^2$.} \label{fig:hensq12sols} \end{figure} \subsection{Elliptic PDEs with nonlinear boundary conditions} Consider the following BVP \vspace{-0.1in} \begin{equation}\label{eq:nlbc-model} -\Delta u+au=0 \quad \mbox{in }\Omega, \qquad {\partial u}/{\partial\mathbf{n}}=q(\mathbf{x},u) \quad \mbox{on }\partial\Omega, \end{equation} where $\Omega\subset \mathbb{R}^d$ is a bounded open domain with a Lipschitz boundary $\partial\Omega$, the constant $a>0$, $\mathbf{n}=\mathbf{n}(\mathbf{x})$ denotes the unit outward normal vector to $\partial\Omega$ at $\mathbf{x}$, and the nonlinear function $q(\mathbf{x}, \xi)$ satisfies the following regularity and growth hypotheses \cite{LWZ2013JSC}: \begin{enumerate}[($q$1)] \item $q(\mathbf{x},\xi)\in C^1(\partial\Omega\times\mathbb{R},\mathbb{R})$ and $q(\mathbf{x},0)=\partial_{\xi}q(\mathbf{x},\xi)|_{\xi=0}=0$, $\forall \,\mathbf{x}\in\partial\Omega$; \item there are constants $c_1,c_2>0$ s.t. 
$|q(\mathbf{x},\xi)|\leq c_1+c_2|\xi|^s$, $\forall\, \mathbf{x}\in\partial\Omega$, where $s$ satisfies $1<s<d/(d-2)$ for $d>2$ and $1<s<\infty$ for $d=2$; \item there are constants $\mu>2$, $R>0$ s.t. $0\leq\mu Q(\mathbf{x},\xi)\leq \xi q(\mathbf{x},\xi)$, $\forall\,|\xi|>R$, $\mathbf{x}\in\partial\Omega$, where $Q(\mathbf{x},u)=\int_0^uq(\mathbf{x},\xi)d\xi$; \item $\partial_{\xi}q(\mathbf{x},\xi)>q(\mathbf{x},\xi)/\xi$, $\forall\, (\mathbf{x},\xi)\in\partial\Omega\times(\mathbb{R}\backslash\{0\})$. \end{enumerate} The BVP \eqref{eq:nlbc-model} arises in many scientific fields, such as corrosion/oxidation modeling and metal-insulator or metal-oxide semiconductor systems; see \cite{A2004NFAO,LWZ2013JSC} and references therein. However, there are few studies on the computation of multiple solutions to such problems. Define the space \vspace{-0.05in} \[ X=\left\{u\in H^1(\Omega):\int_{\Omega}(\nabla u\cdot\nabla v+a uv)d\mathbf{x}=\int_{\partial\Omega}\frac{\partial u}{\partial\mathbf{n}}v ds, \; \forall\, v\in H^1(\Omega)\right\}, \] equipped with the inner product and norm \vspace{-0.05in} \begin{equation}\label{eq:nlbc-ipX} (u, v)=\int_{\partial\Omega}\frac{\partial u}{\partial\mathbf{n}}vds,\quad \|u\|=\sqrt{(u,u)},\quad \forall\, u, v\in X. \end{equation} According to \cite{A2004NFAO,LWZ2013JSC}, $X$ is a Hilbert space and $X=H^{\frac12}(\partial\Omega)$ in the sense of equivalent norms. In addition, for the inner product in $H^1(\Omega)$ defined as $(u, v)_a=\int_{\Omega}(\nabla u\cdot\nabla v+a uv)d\mathbf{x}$, $\forall\, u, v\in H^1(\Omega)$, $X$ is the $(\cdot,\cdot)_a$-orthogonal complement of $H_0^1(\Omega)$ in $H^1(\Omega)$ and has a $(\cdot,\cdot)_a$-orthogonal basis formed by the Steklov eigenfunctions \cite{A2004NFAO,LWZ2013JSC}.
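For concreteness, the Steklov eigenfunctions referred to here solve the boundary eigenvalue problem (stated here in its standard form for the reader's convenience)

```latex
\[
-\Delta\varphi + a\varphi = 0 \quad \mbox{in }\Omega, \qquad
\frac{\partial\varphi}{\partial\mathbf{n}} = \lambda\varphi \quad \mbox{on }\partial\Omega,
\]
```

in which the eigenvalue $\lambda$ enters through the boundary condition rather than the domain equation. On the unit disk with $a=1$, for instance, separation of variables gives the eigenfunctions $I_m(r)\cos m\theta$ and $I_m(r)\sin m\theta$ with $\lambda_m=I_m'(1)/I_m(1)$, where $I_m$ is the modified Bessel function of the first kind (a worked special case supplied for illustration, not taken from the cited references).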
Clearly, $X$ contains all solutions of the BVP \eqref{eq:nlbc-model} in the weak sense, and the energy functional associated with the BVP \eqref{eq:nlbc-model} for $u\in X$ can be written as \[ E(u) =\frac12\int_{\Omega}\left(|\nabla u|^2+au^2\right)d\mathbf{x}- \int_{\partial\Omega}Q(\mathbf{x},u)ds = \int_{\partial\Omega}\left(\frac12\frac{\partial u}{\partial\mathbf{n}}u-Q(\mathbf{x},u)\right)ds. \] Under hypotheses ($q$1) and ($q$2), $E\in C^2(X,\mathbb{R})$ and satisfies the (PS) condition \cite{LWZ2013JSC,Rabinowitz1986}. If $q$ satisfies hypotheses ($q$1)-($q$3), then the BVP \eqref{eq:nlbc-model} has at least three nontrivial solutions \cite{LWZ2013JSC,W1991AIHPNLA}. If, in addition to hypotheses ($q$1)-($q$3), $q(\mathbf{x},\xi)$ is odd in $\xi$, the existence of infinitely many solutions to the BVP \eqref{eq:nlbc-model} can be established by following the proof of Theorem 9.12 in \cite{Rabinowitz1986}. Under hypotheses ($q$1)-($q$4), when $L=\{0\}$, the peak selection $p(v)$ is uniquely defined for each $v\in S$ and is $C^1$ \cite{LWZ2013JSC}. Moreover, in this case, there exists a constant $\delta>0$ s.t. $\mathrm{dist}(p(v),L)=\|p(v)\|\geq\delta>0$, $\forall\,v\in S$; see Proposition 4 in \cite{LWZ2013JSC}. By the definition of the inner product \eqref{eq:nlbc-ipX}, the gradient $g=\nabla E(u)\in X$ satisfies \[ \int_{\partial\Omega}\frac{\partial g}{\partial\mathbf{n}}vds = \langle E'(u),v\rangle = \frac{d}{d\tau}E(u+\tau v)\Big|_{\tau=0} = \int_{\partial\Omega}\left(\frac{\partial u}{\partial\mathbf{n}}-q(\mathbf{x},u)\right)vds, \;\; \forall\, v\in X. \] Recalling the definition of $X$, this implies that $g\in X$ is the weak solution to the linear elliptic BVP \begin{equation}\label{eq:nbc-g-pde} -\Delta g+ag=0 \quad \mbox{in }\Omega, \qquad \frac{\partial g}{\partial\mathbf{n}}=b \quad \mbox{on }\partial\Omega, \end{equation} with $b=b(\mathbf{x})=\frac{\partial u}{\partial\mathbf{n}}(\mathbf{x})-q(\mathbf{x}, u(\mathbf{x}))$, $\mathbf{x}\in\partial\Omega$.
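To make this gradient step concrete, the following is a minimal spectral sketch for the special case of the unit disk (an illustration only, not the solver used in the experiments below): with the Fourier expansion $g(r,\theta)=\sum_n c_n I_n(\sqrt{a}\,r)e^{in\theta}$, where $I_n$ is the modified Bessel function of the first kind, matching the Neumann data mode by mode gives $c_n=b_n/(\sqrt{a}\,I_n'(\sqrt{a}))$.

```python
import numpy as np
from scipy.special import iv, ivp

def solve_gradient_disk(b_vals, a=1.0):
    """Solve -Δg + a g = 0 in the unit disk with ∂g/∂n = b on r = 1 via
    g(r,θ) = Σ_n c_n I_n(√a r) e^{inθ}.  On r = 1 the normal derivative is
    √a Σ_n c_n I_n'(√a) e^{inθ}, so c_n = b_n/(√a I_n'(√a)).
    b_vals: samples of b on a uniform θ-grid (keep the grid modest so
    high-order Bessel terms do not underflow).  Returns a callable g(r,θ)."""
    N = len(b_vals)
    b_hat = np.fft.fft(b_vals) / N          # Fourier coefficients b_n of the data
    n = np.fft.fftfreq(N, d=1.0 / N)        # integer mode numbers 0, 1, ..., -1
    sa = np.sqrt(a)
    c = b_hat / (sa * ivp(np.abs(n), sa))   # match normal derivative mode by mode
    def g(r, theta):
        return np.real(np.sum(c * iv(np.abs(n), sa * r) * np.exp(1j * n * theta)))
    return g
```

Any other domain, or a production run, would instead use a boundary element or finite element discretization as described next.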
In practice, the linear elliptic BVP \eqref{eq:nbc-g-pde} can be solved numerically by finite difference methods, finite element methods and so on. In our experiments, an efficient boundary element method (BEM) \cite{CZ1992,LWZ2013JSC} with 1024 boundary elements is applied to solve it. We now employ our GBBLMM to solve for multiple solutions of the BVP \eqref{eq:nlbc-model} with $d=2$, $a=1$ and $q(\mathbf{x}, u)=u^3$ for two different domains stated in Example \ref{ex:nlbc_circ} and Example \ref{ex:nlbc_rect}, respectively. It is easy to see that hypotheses ($q$1)-($q$4) are satisfied for this case. In addition, for the following examples, the initial ascent direction $v_0$ is taken as the normalization of \begin{equation}\label{eq:v0bie} \tilde{v}_0(\mathbf{x})=\int_{\partial\Omega}\Phi(|\mathbf{x}-\mathbf{y}|)\rho_0(\mathbf{y})ds_{\mathbf{y}},\quad \mathbf{x}\in \Omega, \end{equation} for some given function $\rho_0$ defined on $\partial\Omega$, where $\Phi$ is the fundamental solution to the linear elliptic operator $-\Delta+aI$ and given as \begin{equation}\label{eq:fundsol} \Phi(|\mathbf{x}-\mathbf{y}|)= \frac{1}{2\pi}K_0\left(\sqrt{a}|\mathbf{x}-\mathbf{y}|\right),\quad \mathbf{x}, \mathbf{y}\in\Omega, \end{equation} with $K_0$ the modified Bessel function of the second kind of order $0$. The stopping criterion is set as $\|g_k\|<10^{-5}$ and $\max_{\mathbf{x}\in\partial\Omega}\left|\frac{\partial w_k}{\partial\mathbf{n}}-q(\mathbf{x}, w_k)\right|<5\times10^{-5}$. \begin{example}\label{ex:nlbc_circ}\rm{\bf (A circle domain case)}$\;$ Take the domain $\Omega=\{(x_1,x_2):x_1^2+x_2^2<1\}$ with its boundary $\partial\Omega=\{(x_1,x_2):x_1^2+x_2^2=1\}$ parametrized by \begin{equation}\label{ex:nlbc_circ_theta} \mathbf{x}=\mathbf{x}(\theta) = (x_1(\theta),x_2(\theta))=(\cos\theta,\sin\theta),\quad \theta\in[0,2\pi]. 
\end{equation} We show five solutions $u_1,u_2,\ldots,u_5$ obtained in Fig.~\ref{fig:nlbc_circ5sols} with their profiles inside the domain and on the boundary. For each solution, the corresponding support space $L$, initial ascent direction $v_0$ and energy functional value are listed in Table~\ref{tab:nlbc_circle}. It is noted that the boundary value of the solution $u_1$ is a constant, approximately $0.6691$. In fact, the algorithm for computing $u_1$ requires only one iteration. Comparing the efficiency of our GBBLMM with that of traditional LMMs for the BVP \eqref{eq:nlbc-model} on a circle domain, we observe that our approach performs much better, requiring fewer iterations and less CPU time. The relevant details are omitted here for reasons of space. \end{example} \begin{table}[!t] \centering \small \caption{The initial information and energy functional value for each solution in Example~\ref{ex:nlbc_circ}.} \label{tab:nlbc_circle} \begin{tabular}{|c|c|c|c|c|c|} \hline $u_n$ & $u_1$ & $u_2$ & $u_3$ & $u_4$ & $u_5$ \\ \hline $L$ & $\{0\}$ & $\{0\}$ & $[u_2]$ & $[u_2]$ & $[u_1,u_3,u_4]$ \\ \hline $\rho_0(\mathbf{x}(\theta))$ & $1$ & $1-\cos\theta$ & $\sin\theta$ & $\cos\theta$ & $\cos2\theta$ \\ \hline $E(u_n)$ & 0.3148 & 0.3105 & 1.3025 & 1.3025 & 4.1364 \\ \hline \end{tabular} \end{table} \begin{figure}[!t] \centering \footnotesize \makebox[.18\textwidth]{$u_1$} \makebox[.18\textwidth]{$u_2$} \makebox[.18\textwidth]{$u_3$} \makebox[.18\textwidth]{$u_4$} \makebox[.18\textwidth]{$u_5$} \\ \quad \includegraphics[width=.18\textwidth]{fig/nlbc_circle_u1_p} \includegraphics[width=.18\textwidth]{fig/nlbc_circle_u2_p} \includegraphics[width=.18\textwidth]{fig/nlbc_circle_u3_p} \includegraphics[width=.18\textwidth]{fig/nlbc_circle_u4_p} \includegraphics[width=.18\textwidth]{fig/nlbc_circle_u5_p} \\ \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_circle_u1_b}}
\resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_circle_u2_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_circle_u3_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_circle_u4_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_circle_u5_b}} \\ \caption{Profiles of five nontrivial solutions $u_1,\ldots,u_5$ in Example~\ref{ex:nlbc_circ} inside the domain (top row) and on the boundary (bottom row). In each subplot in the bottom panel, the horizontal axis represents the value of the boundary parameter $\theta$ ($0\leq\theta\leq 2\pi$), which corresponds to the boundary point $(x_1,x_2)=(\cos\theta,\sin\theta)$ as described in \eqref{ex:nlbc_circ_theta}.} \label{fig:nlbc_circ5sols} \end{figure} \begin{example}\label{ex:nlbc_rect}\rm{\bf (A square domain case)}$\;$ Set $\Omega=(-1,1)^2$; the boundary $\partial\Omega$ is parametrized, starting from the point $\mathbf{x}=(-1,-1)$, by a scaled arc length in the counterclockwise direction: \begin{equation}\label{eq:nlbc_rect_theta} \mathbf{x}=\mathbf{x}(\theta)=(x_1(\theta),x_2(\theta))= \begin{cases} (4\theta/\pi-1,-1), & 0\leq\theta<\pi/2, \\ (1,4\theta/\pi-3), & \pi/2\leq\theta<\pi, \\ (5-4\theta/\pi,1), & \pi\leq\theta<3\pi/2, \\ (-1,7-4\theta/\pi), & 3\pi/2\leq\theta\leq2\pi. \\ \end{cases} \end{equation} In this case, many more nontrivial solutions emerge. We show ten of the solutions obtained in Fig.~\ref{fig:nlbc_rect_u1-10} with their profiles inside the domain and on the boundary. For each solution, the corresponding support space $L$, initial ascent direction $v_0$ and energy functional value are listed in Table~\ref{tab:nlbc_rect}. It is observed that $u_1,\ldots,u_5$ are positive solutions and the others are sign-changing solutions.
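For reference, the piecewise arc-length parametrization above translates directly into a small utility (a sketch; the function name is ours):

```python
import numpy as np

def boundary_point(theta):
    """Counterclockwise scaled arc-length parametrization of the boundary of
    the square (-1,1)^2, starting from (-1,-1), for theta in [0, 2*pi]."""
    t = 4.0 * theta / np.pi
    if theta < np.pi / 2:
        return (t - 1.0, -1.0)      # bottom edge
    elif theta < np.pi:
        return (1.0, t - 3.0)       # right edge
    elif theta < 3.0 * np.pi / 2.0:
        return (5.0 - t, 1.0)       # top edge
    else:
        return (-1.0, 7.0 - t)      # left edge
```

In particular, $\theta=0$, $\pi/2$, $\pi$ and $3\pi/2$ map to the four corners.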
\end{example} \begin{table}[!t] \centering \small \caption{The initial information and energy functional value for each solution in Example~\ref{ex:nlbc_rect}.} \label{tab:nlbc_rect} \begin{tabular}{|c|c|c|c|c|c|} \hline $u_n$ & $u_1$ & $u_2$ & $u_3$ & $u_4$ & $u_5$ \\ \hline $L$ & $\{0\}$ & $\{0\}$ & $\{0\}$ & $[u_1,u_3]$ & $\{0\}$ \\ \hline $\rho_0(\mathbf{x}(\theta))$ & $1-\cos\theta$ & $1+\sin(\theta-\pi/4)$ & $1+\cos2\theta$ & $-\cos\theta$ & $1$ \\ \hline $E(u_n)$ & 0.2128 & 0.3068 & 0.3364 & 0.3550 & 0.3658 \\ \hline \hline $u_n$ & $u_6$ & $u_7$ & $u_8$ & $u_9$ & $u_{10}$ \\ \hline $L$ & $\{0\}$ & $[u_1,u_3]$ & $[u_1,u_6]$ & $[u_1,u_2,u_3]$ & $\{0\}$ \\ \hline $\rho_0(\mathbf{x}(\theta))$ & $-\cos\theta$ & $-\cos2\theta$ & $1-\sin\theta$ & $1+\sin\theta$ & $\cos2\theta$ \\ \hline $E(u_n)$ & 0.5233 & 0.7474 & 0.8429 & 1.0411 & 1.2550 \\ \hline \end{tabular} \end{table} \begin{figure}[!t] \centering \footnotesize \makebox[.18\textwidth]{$u_1$} \makebox[.18\textwidth]{$u_2$} \makebox[.18\textwidth]{$u_3$} \makebox[.18\textwidth]{$u_4$} \makebox[.18\textwidth]{$u_5$}\\ \quad \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u1_p} \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u2_p} \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u3_p} \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u4_p} \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u5_p} \\ \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u1_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u2_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u3_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u4_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u5_b}} \makebox[.18\textwidth]{$u_6$} \makebox[.18\textwidth]{$u_7$} \makebox[.18\textwidth]{$u_8$} \makebox[.18\textwidth]{$u_9$}
\makebox[.18\textwidth]{$u_{10}$}\\ \quad \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u6_p} \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u7_p} \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u8_p} \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u9_p} \includegraphics[width=.18\textwidth]{fig/nlbc_rect_u10_p} \\ \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u6_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u7_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u8_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u9_b}} \resizebox*{.18\textwidth}{.09\textwidth}{\includegraphics{fig/nlbc_rect_u10_b}} \caption{Profiles of $u_1,\ldots,u_{10}$ in Example~\ref{ex:nlbc_rect} inside the domain (first and third rows) and on the boundary (second and fourth rows). In each subplot in the second and fourth rows, the horizontal axis represents the value of the parameter $\theta$ ($0\leq\theta\leq 2\pi$) as described in \eqref{eq:nlbc_rect_theta}. In particular, $\theta=0$ ($2\pi$), $\pi/2$, $\pi$ and $3\pi/2$ correspond to the corner points $(-1,-1)$, $(1,-1)$, $(1,1)$ and $(-1,1)$, respectively.} \label{fig:nlbc_rect_u1-10} \end{figure} \section{Concluding remarks} \label{sec:con} Novel nonmonotone LMMs were proposed in this paper for finding multiple saddle points of general nonconvex functionals in Hilbert spaces by improving the traditional LMMs with nonmonotone step-size search rules. Both the normalized ZH-type and GLL-type nonmonotone step-size search rules were proposed and analyzed for the LMM. In particular, the global convergence of the normalized ZH-type LMM was rigorously verified under the same assumptions as those in \cite{Z2017CAMC} for the normalized Armijo-type LMM. Furthermore, an efficient GBBLMM was designed to speed up the convergence by combining the Barzilai--Borwein-type step-size method with nonmonotone globalizations.
Finally, the GBBLMM was applied to find multiple solutions of two typical semilinear BVPs with variational structure; the extensive numerical results obtained verify that our approach is efficient and greatly improves the convergence rate of traditional LMMs.
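The two ingredients named above can be sketched in a plain-minimization setting (an illustration of the GLL nonmonotone rule combined with a Barzilai--Borwein trial step, not the local minimax machinery of the paper; all parameter values here are illustrative):

```python
import numpy as np

def gll_bb_descent(f, grad, x0, M=10, c=1e-4, tol=1e-8, max_iter=2000):
    """Gradient descent with a Barzilai-Borwein (BB1) trial step, globalized
    by the GLL nonmonotone rule: a step size `a` is accepted once
        f(x - a*g) <= max(last M objective values) - c*a*||g||^2,
    and is halved otherwise."""
    x = np.asarray(x0, dtype=float)
    g = np.asarray(grad(x), dtype=float)
    hist = [f(x)]                     # recent objective values for the GLL test
    alpha = 1.0                       # trial step for the first iteration
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        a, fmax = alpha, max(hist[-M:])
        while f(x - a * g) > fmax - c * a * g.dot(g):
            a *= 0.5                  # backtrack until the GLL test passes
        x_new = x - a * g
        g_new = np.asarray(grad(x_new), dtype=float)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        alpha = s.dot(s) / sy if sy > 0 else 1.0   # BB1 trial step for next time
        x, g = x_new, g_new
        hist.append(f(x))
    return x
```

Allowing the objective to rise above its most recent value, as long as it stays below the worst of the last $M$ values, is what lets the aggressive BB step survive without sacrificing global convergence.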
\section{Introduction} The superconducting phase of the cuprate superconductors exhibits $d$-wave pairing symmetry.\cite{har01} As such, there exist four nodal points on the two-dimensional Fermi surface at which the quasiparticle excitations are gapless, and quasiparticles excited in the vicinity of a node behave like massless Dirac fermions.\cite{lee02,alt01,ore01} The presence of impurities enhances the density of states at low energy,\cite{gor01} resulting in a universal limit $(T\rightarrow0,~\Omega\rightarrow0)$ in which the thermal conductivity is independent of disorder.\cite{lee01,hir01,hir02,hir03,graf01,sen01,dur01} Calculations have shown that the thermal conductivity retains this universal character even upon the inclusion of vertex corrections.\cite{dur01} Experiments have confirmed the validity of this quasiparticle picture of transport by observing their universal-limit contribution to the thermal conductivity, and thereby measuring the anisotropy of the Dirac nodes, $v_f/v_\Delta$.\cite{tai01,chi01,chi02,nak01,pro01,sut01,hil01,sun01,sut02,haw01,sun02} For some time, there has been significant interest \cite{kiv01,pod01,li01,che01,seo01} in the idea of additional types of order coexisting with $d$-wave superconductivity (dSC) in the cuprates. In recent years, as the underdoped regime of the phase diagram has been explored in greater detail, evidence of coexisting order has grown substantially \cite{kiv01}. Particularly intriguing has been the evidence of checkerboard charge order revealed via scanning tunnelling microscopy (STM) experiments.\cite{hof01,hof02,how01,ver01,mce01,han01,mis01,mce02,koh01,boy01,han02,pas01,wis01,koh02} If charge order does coexist with $d$-wave superconductivity in the underdoped cuprates, the question arises of how the quasiparticle excitation spectrum is modified.
Previous work \cite{ber01} has shown that even with the addition of a charge or spin density wave to the dSC hamiltonian, the low-energy excitation spectrum remains gapless as long as a harmonic of the ordering vector does not nest the nodal points of the combined hamiltonian. However, if the coexisting order is strong enough, the nodal points can move to $k$-space locations where they are nested by the ordering vector, at which point the excitation spectrum becomes fully gapped.\cite{par01,gra01,voj01} Such a nodal transition should have dramatic consequences for low-temperature thermal transport, the details of which were studied in Ref.~\onlinecite{dur02}. That paper considered the case of a conventional $s$-wave charge density wave (CDW) of wave vector $\vect{Q}=(\pi,0)$ coexisting with $d$-wave superconductivity. It showed that the zero-temperature thermal conductivity vanishes, as expected, once charge order is of sufficient magnitude to gap the quasiparticle spectrum. In addition, the dependence of zero-temperature thermal transport on the charge order parameter was calculated and revealed to be disorder-dependent. Hence, in the presence of charge order, the universal limit is no longer universal. This result is in line with recent measurements \cite{hus01,tak01,sun05,and01,sun03,sun04,haw02} of the underdoped cuprates, as well as other calculations \cite{gus01,ander01}. We extend the work of Ref.~\onlinecite{dur02} herein. We consider the same physical system, but employ a more sophisticated model of disorder that includes the effects of impurity scattering within the self-consistent Born approximation. We find that this self-consistent model of disorder requires that off-diagonal components be retained in our matrix self-energy. These additional components lead to a renormalization of the critical value of charge order beyond which the thermal conductivity vanishes.
Furthermore, we include the contribution of vertex corrections within our diagrammatic thermal transport calculation. While vertex corrections become more important as charge order increases, especially for long-ranged impurity potentials, we find that for reasonable parameter values, they do not significantly modify the bare-bubble result. In Sec.~\ref{sec:model}, we introduce the model hamiltonian of the dSC+CDW system, describe the effect charge ordering has on the nodal excitations, and present our model for disorder. In Sec.~\ref{sec:SCBA}, a numerical procedure for computing the self-energy within the self-consistent Born approximation is outlined. The results of its application in the relevant region of parameter space are presented in Sec.~\ref{sec:SCBAresults}. In Sec.~\ref{sec:thermalconductivity}, we calculate the thermal conductivity using a diagrammatic Kubo formula approach, including vertex corrections within the ladder approximation. An analysis of the vertex-corrected results and a calculation of the clean-limit thermal conductivity are presented in Sec.~\ref{sec:analysis}. In this section, we also discuss how our self-consistent model of disorder renormalizes the nodal transition point, the value of the charge order parameter at which the nodes effectively vanish. Conclusions are presented in Sec.~\ref{sec:conc}. \section{Model} \label{sec:model} We employ the phenomenological hamiltonian of Ref.~\onlinecite{dur02} in order to calculate the low-temperature thermal conductivity of the fermionic excitations of a $d$-wave superconductor with a $\vect{Q}=(\pi,0)$ charge density wave, in the presence of a small but nonzero density of point-like impurity scatterers.
The presence of $d$-wave superconducting order contributes a term to the hamiltonian \begin{equation} H_{dSC}=\frac{1}{2}\sum_{k\alpha}\Big(\epsilon_k c_{k\alpha}^\dagger c_{k\alpha}+\Delta_k c_{k\alpha}^\dagger c_{-k\beta}^\dagger\Big)+\mathrm{h.c.} \end{equation} where $\epsilon_k$ is a typical tight-binding dispersion, and $\Delta_k$ an order parameter of $d_{x^2-y^2}$ symmetry. Due to the $d$-wave nature of the gap, nodal excitations exist in the $(\pm\pi,\pm\pi)$ directions with respect to the origin. The locations of these nodes in the absence of charge ordering are close to the points $(\pm \pi/2,\pm \pi/2)$, and are denoted with white dots in Fig.~\ref{fig:BZfig}. These low-energy excitations are massless anisotropic Dirac fermions. That is, the electron dispersion and pair function are linear functions of momentum in the vicinity of these nodal locations. We will refer to the slopes of the electron dispersion and pair function, defined by $\vect{v}_f\equiv\frac{\partial\epsilon_k}{\partial\bf{k}}$ and $\vect{v}_\Delta\equiv\frac{\partial\Delta_k}{\partial\bf{k}}$, as the Fermi velocity and gap velocity, respectively. The energy of the quasiparticles in the vicinity of the nodes is given by $E_k=\sqrt{v_f^2 k_1^2+v_\Delta^2 k_2^2}$, where $k_1$ and $k_2$ are the momentum displacements (from the nodes) in directions perpendicular and parallel to the Fermi surface. The universal-limit $(T\rightarrow 0, \Omega\rightarrow 0)$ transport properties of these quasiparticles were explored in Ref.~\onlinecite{dur01}.
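A minimal numerical check of this anisotropic Dirac spectrum (with illustrative velocities $v_f=2$, $v_\Delta=1$, which are assumptions rather than fitted values) diagonalizes the linearized $2\times2$ Nambu hamiltonian $H=v_f k_1\tau_3+v_\Delta k_2\tau_1$ for a single node:

```python
import numpy as np

def nodal_spectrum(k1, k2, v_f=2.0, v_D=1.0):
    """Eigenvalues of the linearized 2x2 nodal hamiltonian
    H = v_f*k1*tau_3 + v_D*k2*tau_1; they come out as +/- E_k with
    E_k = sqrt(v_f^2 k1^2 + v_D^2 k2^2)."""
    H = np.array([[v_f * k1,  v_D * k2],
                  [v_D * k2, -v_f * k1]])
    return np.linalg.eigvalsh(H)   # sorted ascending: [-E_k, +E_k]
```

The conical, particle-hole-symmetric spectrum $\pm E_k$ is what underlies the universal-limit transport recalled above.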
While experiments have revealed evidence of a number of varieties of spin and charge order, the system described in this paper will be restricted to the addition of a site-centered charge density wave of wave vector $\vect{Q}=(\pi,0)$, which contributes a term to the hamiltonian \begin{equation} H_{CDW}=\sum_{k\alpha}a_k c_{k\alpha}^\dagger c_{k+Q \alpha}+\mathrm{h.c.} \end{equation} The charge density wave doubles the unit cell, reducing the Brillouin zone to the shaded portion seen in Fig.~\ref{fig:BZfig}. \begin{figure} \centerline{\resizebox{3.25 in}{!}{\includegraphics{BZfig2.eps}}} \caption{Illustrated is the Brillouin zone for our model, reduced to the shaded region by unit-cell-doubling charge order. The $\psi=0$ nodal locations are illustrated by white dots. They are displaced by a distance $k_0$ from the $(\pm\frac{\pi}{2},\pm\frac{\pi}{2})$ points (stars). As the charge density wave's amplitude increases, the locations of the gapless excitations evolve along curved paths toward the $(\pm\frac{\pi}{2},\pm\frac{\pi}{2})$ points, until $\psi$ reaches $\psi_c$, when the spectrum becomes gapped because the nodes are nested by the charge density wave vector.
The gray dots depict the images of the nodes in the second reduced Brillouin zone.} \label{fig:BZfig} \end{figure} Restricting summations over momentum space to the reduced Brillouin zone, and invoking the charge density wave's time-reversal symmetry and commensurability with the reciprocal lattice, we are able to write the hamiltonian as \begin{eqnarray} H&=&\sum_{k}\Psi^\dagger_kH_k \Psi_k \hspace{10pt} H_k=H^{dSC}_k+H^{CDW}_k, \end{eqnarray} where \begin{equation} H_k= \begin{pmatrix} \epsilon_k & \Delta_k & \psi & 0 \\ \Delta_k & -\epsilon_k & 0 & -\psi\\ \psi & 0 & \epsilon_{k+Q} & \Delta_{k+Q} \\ 0 & -\psi & \Delta_{k+Q} & -\epsilon_{k+Q} \end{pmatrix}, \end{equation} is a matrix in the basis of extended-Nambu vectors, \begin{equation} \Psi_k= \begin{pmatrix} c_{k\uparrow} \\ c^\dagger_{-k\downarrow} \\ c_{k+Q\uparrow} \\ c^\dagger_{-k-Q\downarrow} \end{pmatrix} \hspace{10pt} \Psi^\dagger_k= \begin{pmatrix} c^\dagger_{k\uparrow} & c_{-k\downarrow} & c^\dagger_{k+Q\uparrow} & c_{-k-Q\downarrow} \end{pmatrix} \end{equation} and $\psi$ represents the constant value taken at the nodes by the charge density wave order parameter $A_k=a_k+a_{k+Q}^*$. The onset of the charge order modifies the energy spectrum of the clean hamiltonian so that the locations of the nodes evolve along curved paths towards the $(\pm\frac{\pi}{2},\pm\frac{\pi}{2})$ points at the edges of the reduced Brillouin zone, as was noted in Ref.~\onlinecite{par01}. ``Ghost'' nodes, their images in what is now the second reduced Brillouin zone, evolve in the same way, until the charge density wave is strong enough that the nodes and ghost nodes collide at those $(\pm\pi/2,\pm\pi/2)$ points. When that occurs, $\vect{Q}$ nests two of the nodes, gapping the spectrum so that low-temperature quasiparticle transport is no longer possible. We define the value of $\psi$ at which this occurs as $\psi_c$.
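This nodal transition can be illustrated numerically. The sketch below (an illustration with assumed units $\psi_c=1$ and $\beta=1$, not the parameter values used later in the paper) builds the $4\times4$ hamiltonian in the symmetrized nodal coordinates introduced in the next paragraph and locates the minimum excitation gap:

```python
import numpy as np
from scipy.optimize import minimize

PSI_C, BETA = 1.0, 1.0   # illustrative units: psi_c = 1 and isotropic velocities

def hamiltonian(p1, p2, psi):
    """4x4 hamiltonian H_k in the extended-Nambu basis, in symmetrized nodal
    coordinates: eps_k = psi_c + beta*p1, Delta_k = p2/beta,
    eps_{k+Q} = psi_c + beta*p2, Delta_{k+Q} = p1/beta."""
    eps, epsQ = PSI_C + BETA * p1, PSI_C + BETA * p2
    dlt, dltQ = p2 / BETA, p1 / BETA
    return np.array([[eps,   dlt,   psi,   0.0],
                     [dlt,  -eps,   0.0,  -psi],
                     [psi,   0.0,   epsQ,  dltQ],
                     [0.0,  -psi,   dltQ, -epsQ]])

def min_gap(psi, box=2.0, n=81):
    """Smallest |eigenvalue| over a (p1, p2) window: coarse grid search
    followed by a Nelder-Mead refinement from the best grid point."""
    gap = lambda p: np.min(np.abs(np.linalg.eigvalsh(hamiltonian(p[0], p[1], psi))))
    grid = np.linspace(-box, box, n)
    best = min(((p1, p2) for p1 in grid for p2 in grid), key=gap)
    return minimize(gap, best, method="Nelder-Mead").fun
```

Running this sketch, the minimum gap is numerically zero for $\psi<\psi_c$ (gapless nodes away from the zone edge) and finite for $\psi>\psi_c$, mirroring the node collision described above.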
Due to the nodal properties of the quasiparticles, all functions of momentum space $\vect{k}$ can be parametrized in terms of a node index $j$, and local coordinates $p_1$ and $p_2$ in the vicinity of each node. We choose to parametrize our functions using symmetrized coordinates centered at $(\pm \pi/2, \pm \pi/2)$, \begin{eqnarray} \label{eq:parametrization} \epsilon_k&=&\psi_c+\beta p_1 \hspace{40pt} \Delta_k=\frac{1}{\beta}p_2 \nonumber\\ \epsilon_{k+Q}&=&\psi_c+\beta p_2 \hspace{40pt} \Delta_{k+Q}=\frac{1}{\beta}p_1 \end{eqnarray} where we have rescaled $\sqrt{v_f v_\Delta} k_1= p_1$ for the coordinate normal to Fermi surface, $\sqrt{v_f v_{\Delta}} k_2 = p_2$ for the coordinate parallel to Fermi surface, and introduced the definition $\beta\equiv \sqrt{\frac{v_f}{v_{\Delta}}}$. In this coordinate system, the displacement of the original node locations from the collision points is given by $\psi_c$. A sum over momentum space is therefore performed by summing over nodes, and integrating over each node's contribution, as follows. \begin{eqnarray} \label{eq:intparametrization} \sum_k f(\vect{k})\rightarrow\frac{1}{2}\sum_{j=1}^4\int\frac{\mathrm{d}^2p}{4\pi^2 v_fv_{\Delta}}f^{(j)}(p_1,p_2)\nonumber\\ =\frac{1}{8\pi^2 v_fv_{\Delta}}\sum_{j=1}^4\int_{-p_0}^{p_0}\mathrm{d}p_1\int_{-p_0}^{p_0}\mathrm{d}p_2 \,\,f^{(j)}(p_1,p_2) \end{eqnarray} where the factor of $\frac{1}{2}$ comes from extending the integrals to all $p_1$ and $p_2$, rather than just the shaded part depicted in Fig.~\ref{fig:BZfig}, and $p_0$ is a high-energy cutoff. At sufficiently low temperatures, the thermal conductivity is dominated by the nodal excitations, since phonon modes are frozen out, and other quasiparticles are exponentially rare. Using this fact, we can calculate the low temperature thermal conductivity of the system using linear response formalism. We incorporate disorder into the model by including scattering events from randomly distributed impurities. 
Because the quasiparticles are nodal, only limited information about the scattering potential is needed, in particular, the amplitudes $V_1,V_2$ and $V_3$, for intra-node, adjacent-node, and opposite-node scattering respectively, as explained in Ref.~\onlinecite{dur01}. Within the linear response formalism, we obtain the retarded current-current correlation function by analytic continuation of the corresponding Matsubara correlator \cite{mah01,Fetter}. In Ref.~\onlinecite{dur02}, using a simplified model for disorder, where the self-energy was assumed to be a negative imaginary scalar, the thermal conductivity was calculated as a function of $\psi$, and found to vanish for $\psi>\psi_c$. We now improve upon that result by calculating the self-energy within the self-consistent Born approximation, and by including vertex corrections within the ladder approximation in our calculation of the thermal conductivity. \section{Self-Energy} \subsection{SCBA Calculation} \label{sec:SCBA} Within the self-consistent Born approximation (SCBA), the self-energy tensor is given by \begin{equation} \label{eq:sigma} \Sig(\vect{k},\omega)=n_{\mathrm{imp}}\sum_{k'}\left|V_{kk'}\right|^2 (\widetilde{ \sigma_0\otimes\tau_3} ) \G(\vect{k}',\omega) (\widetilde{ \sigma_0\otimes\tau_3} ) \end{equation} where $n_{\mathrm{imp}}$ is the impurity density and $\widetilde{V}_{kk'}=V_{kk'}(\widetilde{\sigma_0\otimes\tau_3})$ accompanies each scattering event, as seen in Fig.~\ref{fig:bornFeynman}. The tilde signifies an operator in the extended-Nambu basis, and the $\sigma$'s and $\tau$'s are Pauli matrices in charge-order-coupled and particle-hole spaces respectively. \begin{figure} \centerline{\resizebox{2.25 in}{!}{\includegraphics{bornFeynmanFig.eps}}} \caption{Feynman diagram depicting the self-energy in the self-consistent Born approximation.
The double line represents the dressed propagator, the dashed line represents the interaction with the impurity, and the cross represents the impurity density.} \label{fig:bornFeynman} \end{figure} $\G(\vect{k},\omega)$ is the full Green's function, whose relation to the bare Green's function $\G_0(\vect{k},\omega)$ and the self-energy $\Sig(\vect{k},\omega)$ is given by Dyson's equation \begin{equation} \label{eq:Dyson} \G(\vect{k},\omega) = (\G^{-1}_0(\vect{k},\omega)-\Sig(\vect{k},\omega))^{-1}, \end{equation} the bare Green's function having been determined by \begin{equation} \G_0(\vect{k},\omega)=(\omega\widetilde{\openone}-\widetilde{H}_k)^{-1}. \end{equation} Eq.~(\ref{eq:sigma}) and Eq.~(\ref{eq:Dyson}) define a set of integral equations for the self-energy $\Sig(\vect{k},\omega)$. For the calculation of the universal-limit thermal conductivity, it is sufficient to find the zero-frequency limit of the self-energy. In its present form, $\Sig$ has 32 real components. Below, we demonstrate that this number can be reduced further to six components. 
If we write the Green's function as \begin{equation} \G(\vect{k},\omega)=\frac{1}{\mathcal{G}_{den}} \left( \begin{array}{cc} \mathcal{G}_A & \mathcal{G}_B \\ \mathcal{G}_C & \mathcal{G}_D \end{array} \right), \end{equation} where \begin{equation} \mathcal{G}_\alpha=\sum_{i=0}^3 \mathcal{G}_{\alpha i}\tau_i \end{equation} then the self-energy can be written as the set of 16 complex equations (for $\alpha=\{A,B,C,D\}$, $i=\{0,1,2,3\}$) \begin{eqnarray} \label{eq:Self} \Sigma_{\alpha i}&=&n_{\mathrm{imp}} \sum_{k'}\left|V_{kk'}\right|^2\frac{\xi_i}{\mathcal{G}_{den}}\mathcal{G}_{\alpha i}\nonumber\\ &=&\xi_i c\int\mathrm{d}^2p \frac{\mathcal{G}_{\alpha i}\left(p_1,p_2\right)}{\mathcal{G}_{den}\left(p_1,p_2\right)} \end{eqnarray} where $ \xi_i=\left\{\begin{array}{c} + 1, \,i=0,3\\-1,\, i=1,2 \end{array}\right\}$, $c=\frac{n_{\mathrm{imp}}(V_1^2+2V_2^2+V_3^2)}{8\pi^2v_fv_\Delta}$, and the final line follows from the notation of Eq.~(\ref{eq:parametrization}) and Eq.~(\ref{eq:intparametrization}) after completing the sum over nodes.
From the symmetries of the hamiltonian, we are able to ascertain certain symmetries the bare Green's function will obey, specifically, \begin{eqnarray} \label{eq:greensymmetries} \mathcal{G}^{(0)}_{A0}(p_2,p_1)&=&\mathcal{G}^{(0)}_{D0}\left(p_1,p_2\right)\\ \mathcal{G}^{(0)}_{A1}(p_2,p_1)&=&\mathcal{G}^{(0)}_{D1}\left(p_1,p_2\right)\nonumber\\ \mathcal{G}^{(0)}_{A3}(p_2,p_1)&=&\mathcal{G}^{(0)}_{D3}\left(p_1,p_2\right)\nonumber\\ \mathcal{G}^{(0)}_{B0}(p_2,p_1)&=&\mathcal{G}^{(0)}_{C0}\left(p_1,p_2\right)\nonumber\\ \mathcal{G}^{(0)}_{B1}(p_2,p_1)&=&\mathcal{G}^{(0)}_{C1}\left(p_1,p_2\right)\nonumber\\ \mathcal{G}^{(0)}_{B2}(p_2,p_1)&=&\mathcal{G}^{(0)}_{C2}\left(p_1,p_2\right)\nonumber\\ \mathcal{G}^{(0)}_{B3}(p_2,p_1)&=&\mathcal{G}^{(0)}_{C3}\left(p_1,p_2\right)\nonumber\\ \mathcal{G}^{(0)}_{\mathrm{den}}(p_2,p_1)&=&\mathcal{G}^{(0)}_{\mathrm{den}}(p_1,p_2)\nonumber \end{eqnarray} In addition, the fact that the integration is also symmetric with respect to exchange of $p_1$ and $p_2$, coupled with these symmetries, leads to relations for the self-energy components \begin{eqnarray} \Sigma_{Ai}&=&\Sigma_{Di}\\ \Sigma_{Bi}&=&\Sigma_{Ci} \hspace{40pt} i=0,1,2,3\nonumber\\ \Sigma_{B2}&=&\Sigma_{C2}=0\nonumber \end{eqnarray} so that we see a reduction from 32 components of the self-energy to 6 independent components: $\{\Sigma_{\alpha i}\}\equiv\{\Sigma_{A0},\Sigma_{A1},\Sigma_{A3},\Sigma_{B0},\Sigma_{B1},\Sigma_{B3}\}$. A self-consistent self-energy must therefore satisfy 6 coupled integral equations given by Eq.~(\ref{eq:Self}). The self-consistent calculation of the self-energy proceeds by applying the following scheme: First, a guess is made as to which self-energy components will be included. The full Green's function corresponding to such a self-energy is then obtained from Dyson's equation, Eq.~(\ref{eq:Dyson}).
The quantitative values of the $\Sigma_{\alpha i}$'s are then determined as follows: an initial guess for each of the $\Sigma_{\alpha i}$'s is made, and the six integrals of Eq.~(\ref{eq:Self}) are computed numerically, which provides the next set of guesses for $\{\Sigma_{\alpha i}\}$. This process is repeated until a stable solution is reached. Finally, the resulting solution must be checked for consistency with the initial guess for the form of $\Sig$. If it is consistent, the self-consistent calculation is complete. We begin with the simplest assumption, that $\Sig^{(1)}(\omega) = -i\Gamma_0 \widetilde{(\sigma_0\otimes\tau_0)}$, where $\Gamma_0$ is the zero-frequency limit of the scattering rate. The superscript indicates that this is the first guess for $\Sig$. The Green's function components are computed, which gives the explicit form of Eq.~(\ref{eq:sigma}). Upon evaluating the numerics, it is seen that this first iteration generates a nonzero (real and negative) term for $\Sigma_{B1}$. So the diagonal self-energy assumption turns out to be inconsistent, in contrast to the situation for $\psi=0$. We then modify our guess, assuming a self-energy of the form $\Sig^{(2)}=-i\Gamma_0\widetilde{(\sigma_0\otimes\tau_0)}-B_1\widetilde{(\sigma_1\otimes\tau_1)}$. The Green's function is computed again, using Dyson's equation, and the self-energy equations are obtained explicitly. It is noted that the symmetries of Eq.~(\ref{eq:greensymmetries}) still hold. Again, the equations (\ref{eq:Self}) are solved iteratively; the result is a nonzero $\Sigma_{B3}$ component as well. Once again, the Green's functions are modified to incorporate this term, and the iterative scheme is applied.
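The structure of this fixed-point iteration is easiest to see in a scalar toy version (the $\psi=0$ case with a single component $\Gamma_0$ and an assumed effective disorder strength $c$; this is a schematic sketch, not the six-component matrix calculation of this section):

```python
import numpy as np

def scba_gamma0(c, p0=100.0, tol=1e-10, max_iter=1000):
    """Zero-frequency scattering rate from a toy scalar SCBA, where
    Sigma = -i*Gamma0 and self-consistency reads
        Gamma0 = c * ∫ d^2p  Gamma0 / (p^2 + Gamma0^2),   |p| < p0.
    Doing the angular integral analytically leaves the fixed-point map
        Gamma0 -> pi*c*Gamma0*ln(1 + p0^2/Gamma0^2),
    which is iterated to convergence, mirroring the scheme in the text."""
    gamma = 1.0                                   # initial guess
    for _ in range(max_iter):
        new = np.pi * c * gamma * np.log(1.0 + (p0 / gamma) ** 2)
        if abs(new - gamma) < tol * abs(gamma):   # converged to the fixed point
            return new
        gamma = new
    return gamma
```

For this toy map the fixed point even has the closed form $\Gamma_0=p_0/\sqrt{e^{1/(\pi c)}-1}$, which provides a check on the iteration; in the full calculation the same loop runs over the six coupled matrix components instead.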
Calculation of the self-energy based on the assumption \begin{eqnarray} \Sig^{(3)}=-i\Gamma_0\widetilde{(\sigma_0\otimes\tau_0)}-B_1\widetilde{(\sigma_1\otimes\tau_1)}-B_3\widetilde{(\sigma_1\otimes\tau_3)}\nonumber\\ \Gamma_0,B_1,B_3>0 \end{eqnarray} generates $\Gamma_0$, $B_1$, and $B_3$ that are much larger than any remaining terms, and hence provides the self-consistent values of $\Sigma_{A0},\Sigma_{B1}$ and $\Sigma_{B3}$. A plot of the 6 components of $\Sig$ is displayed in Fig.~\ref{fig:6sigmasBW} for a representative parameter set, where we see that the three terms of the ansatz are indeed dominant. For the remainder of this paper, the effect of the $\Sigma_{A1}$, $\Sigma_{A3}$ and $\Sigma_{B0}$ components will be ignored. The self-consistent Green's functions are provided in Appendix~\ref{app:greensfunctions}, while additional details of the self-energy calculation are discussed in Appendix~\ref{app:divergence}. \begin{figure} \centerline{\resizebox{3.25 in}{!}{\includegraphics{6sigmasBW.eps}}} \caption{Components of the self-energy computed using the iterative procedure described in Sec.~\ref{sec:SCBA}. The third iteration self-energy, $\Sig^{(3)}$, is shown here. The dominance of $\Gamma_0=-\mathrm{Im}(\Sigma_{A0}),B_1=-\mathrm{Re}(\Sigma_{B1})$, and $B_3=-\mathrm{Re}(\Sigma_{B3})$ over the other components establishes this third iteration as yielding the (approximately) self-consistent value of the self-energy. $\Sigma_{A1}$ and $\Sigma_{A3}$ overlap.} \label{fig:6sigmasBW} \end{figure} \subsection{SCBA Results} \label{sec:SCBAresults} Before discussing the numerical results contained in this paper, it is necessary to make a note about the units employed. The following discussion of units applies as well to the numerical analysis of the results of the thermal conductivity calculation in Sec.~\ref{sec:analysis}.
Because we are studying the evolution of the system with respect to increasing CDW order parameter $\psi$, we wish to express energies in units of $\psi_c$, the value of $\psi$ which gaps the clean system. To do this, the cutoff $p_0$ is fixed such that the Brillouin zone being integrated over in Eq.~(\ref{eq:intparametrization}) has the correct area. In this way, $p_0$ sets the scale of the product $v_f v_\Delta$; a parameter $\beta\equiv\sqrt{\frac{v_f}{v_\Delta}}$ is defined to represent the velocity anisotropy. Then $\frac{p_0}{\psi_c}=\frac{\pi}{2a}\sqrt{v_fv_\Delta}$, so that we may eliminate the frequently occurring parameter $4\pi v_fv_\Delta$ by expressing lengths in units of $\frac{4}{\sqrt{\pi}}a\approx 2.26 a$. The impurity density $n_{\mathrm{imp}}$ is thus recast in terms of the impurity fraction $z$ according to $n_{\mathrm{imp}}=\frac{16}{\pi}z$. Finally, the parameters of the scattering potential are recast in terms of their anisotropy. We define $V_2\equiv R_2 V_1$ and $V_3\equiv R_3 V_1$. With these modifications, the original set of parameters, $\{n_{\mathrm{imp}},V_1,V_2,V_3,v_f,v_\Delta,p_0,\psi,\psi_c\}$, is reduced to $\{z,V_1,R_2,R_3,\beta,p_0,\psi\}$. For the work contained herein, the cutoff is fixed at $p_0=100$. The self-energy in the self-consistent Born approximation was computed for different scattering potentials as a function of impurity fraction and CDW order parameter $\psi$. Since it was found that three of the components, $\Sigma_{A0}$, $\Sigma_{B1}$ and $\Sigma_{B3}$, dominate over the others, we will subsequently analyze only those three components, referring to their magnitudes as $\Gamma_0$, $B_1$, and $B_3$ respectively. As $z\rightarrow 0$, the Green's functions become too sharply peaked to integrate numerically. For sufficiently large $z$, depending on the strength of the scatterers, the Born approximation breaks down.
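The unit bookkeeping just described can be collected in a short helper; this is an illustrative convenience only (the function names are ours, not part of the published numerics).

```python
import math

# Unit conversions quoted in the text (illustrative helper only).
length_unit = 4.0 / math.sqrt(math.pi)   # lengths in units of (4/sqrt(pi)) a ~ 2.257 a

def n_imp(z):
    """Impurity density n_imp corresponding to impurity fraction z."""
    return 16.0 * z / math.pi

def potential(V1, R2, R3):
    """Scattering potential recast in terms of its anisotropy ratios:
    V2 = R2*V1 and V3 = R3*V1."""
    return V1, R2 * V1, R3 * V1
```

For instance, the representative parameter set used below, $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$, corresponds to $V_2=99$ and $V_3=88$.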
For a scattering strength $V_1=110$, cutoff $p_0=100$, scattering potentials that fall off slowly in $k$-space, and velocity anisotropy ratios $\beta\equiv \sqrt{v_f/v_\Delta}\in\{1,2,3,4\}$, the range of $z$ in which our numerics may be applied is roughly between one half and one percent. Some results for $\widetilde{\Sigma}(\psi)$, for several values of $z$, are shown in Figs.~\ref{fig:sigmaofPsiBeta1} and \ref{fig:sigmaofPsiBeta4}. These plots correspond to the same parameters, except that Fig.~\ref{fig:sigmaofPsiBeta1} illustrates the $v_f=v_\Delta$ case, and Fig.~\ref{fig:sigmaofPsiBeta4} illustrates $v_f=16v_\Delta$. In all cases it is seen that \begin{eqnarray} B_1(\psi,z)\simeq b_1(z)\psi\nonumber\\ B_3(\psi,z)\simeq b_3(z)\psi \end{eqnarray} where the dependence of $B_1$, $B_3$, $b_1$, and $b_3$ on the remaining parameters is implicit. For much of the parameter space sampled, $\Gamma_0$ has little $\psi$ dependence, except that it typically rises and then falls to zero at some sufficiently large $\psi<\psi_c$. This feature will be revisited in Sec.~\ref{sec:analysis}, wherein it is explained that this vanishing scattering rate coincides with vanishing thermal conductivity, and corresponds to the point at which the system becomes effectively gapped and our nodal approximations break down. The value of $\psi$ at which this occurs depends on the entire set of parameters used, and will be referred to as $\psi_c^*$. \begin{figure} \resizebox{3.25in}{!}{\includegraphics{sigmaofPsiBeta1.eps}} \caption{Effect of disorder on charge-order-dependence of self-energy components. To satisfy Dyson's equation, it is necessary to include three (extended-Nambu space) components of the self-energy. Their self-consistent values are plotted here for several different values of impurity fraction $z$. Here, the scattering potential is given in our three-parameter model as $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$, which represents a fairly short-ranged potential.
These results are for the case of isotropic nodes ($v_f=v_\Delta$). All energies are in units of $\psi_c$.} \label{fig:sigmaofPsiBeta1} \end{figure} \begin{figure} \centerline{\resizebox{3.25in}{!}{\includegraphics{sigmaofPsiBeta4.eps}}} \caption{Effect of disorder on charge-order-dependence of self-energy components. This figure illustrates the case where $v_f=16 v_\Delta$. The scattering potential is again given by $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$, representing a fairly short-ranged potential. The plots for $B_1$ and $B_3$ terminate before $\psi$ reaches $\psi_c$ because for sufficiently large $\psi$, the excitations become gapped and our nodal approximations break down.} \label{fig:sigmaofPsiBeta4} \end{figure} The observed $z$ dependence is not very surprising, in light of Eq.~(\ref{eq:Self}). The self-energy components depend on $z$ roughly according to \begin{eqnarray} \Gamma_0 &\sim &p_0 \exp{(-\frac{1}{z})}\nonumber\\ B_1 &\sim & z\nonumber\\ B_3&\sim &z \end{eqnarray} as can be seen in Fig.~\ref{fig:sigmaofz}. When $\psi=0$, $\Gamma_0$ is given by the closed-form expression obtained in Ref.~\onlinecite{dur01}, $\Gamma_0=p_0 \exp(-\frac{1}{2\pi c})$, where $c=\frac{n_{\mathrm{imp}}(V_1^2+2V_2^2+V_3^2)}{8\pi^2v_fv_\Delta}$. For finite $\psi$, this precise form does not hold, but the strong $z$ dependence of $\Gamma_0$ remains, in contrast to that of $B_1$ and $B_3$. Note that the $z$ dependence of $B_1$ and $B_3$ is roughly linear for $\psi \ll \psi_c^*$. As $\psi$ approaches $\psi_c^*$, these functions deviate slightly from linearity. Results for several values of $\psi<\psi_c^*$ are shown in the figure. \begin{figure}[h] \centerline{\resizebox{3 in}{!}{\includegraphics{sigmaofz.eps}}} \caption{Effect of charge order on disorder-dependence of self-energy components. Nonzero components of $\widetilde{\Sigma}(z)$ are shown for impurity fraction $z$ ranging from $0.5$ to $1.0\%$, for charge order parameter $\psi$=0, 0.2, 0.4, 0.6, and 0.8 (in units of $\psi_c$).
These results are for scattering parameters $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$ and $v_f=v_\Delta$. Similar results are obtained for the case of anisotropic nodes.} \label{fig:sigmaofz} \end{figure} \section{Thermal Conductivity} \label{sec:thermalconductivity} Thermal conductivity was calculated using the Kubo formula \cite{mah01,Fetter}, \begin{equation} \frac{\kappa(\Omega,T)}{T}=-\frac{\mathrm{Im}\Pi_{\mathrm{Ret}}(\Omega)}{\Omega\,\,T^2}, \end{equation} where $\Pi_{\mathrm{Ret}}(\Omega)$ is the retarded thermal current-current correlation function. To find this correlator, it is necessary to first compute the appropriate thermal current operator. For our model hamiltonian, this is done in Ref. \onlinecite{dur02} with the result \begin{equation} \label{eq:current} \vect{\widetilde{j}}^\kappa_0=\lim_{\substack{q\rightarrow0\\ \Omega\rightarrow0}}\sum_{\substack{k,\omega}} (\omega+\frac{\Omega}{2})\psi^\dagger_k \left(\widetilde{\vect{v}}_{fM}+\widetilde{\vect{v}}_{\Delta M}\right)\psi_{k+q}, \end{equation} where a generalized velocity is defined as \begin{eqnarray} \widetilde{\vect{v}}_{\alpha M}=v_\alpha^x \widetilde{M}_\alpha^x\hat{x} +v_\alpha^y\widetilde{M}_\alpha^y\hat{y}\nonumber\\ \widetilde{M}_\alpha^x\equiv\widetilde{(\sigma_3\otimes\tau_\alpha)} \hspace{15pt} \widetilde{M}_\alpha^y\equiv\widetilde{(\sigma_0\otimes\tau_\alpha)} \end{eqnarray} where $\alpha=\{f,\Delta\}$ and $\tau_\alpha=\{\tau_3,\tau_1\}$ for Fermi and gap velocities respectively. To calculate a thermal conductivity that satisfies Ward identities, vertex corrections must be included on the same footing as the self-energy corrections to the single particle Green's function. The details of this calculation are similar to those performed in Appendix B of Ref.~\onlinecite{dur01}. The impurity scattering diagrams which contribute to the ladder series of diagrams are included by expressing the correlation function in terms of a dressed vertex, as shown in Fig.~\ref{fig:bubbleFeynman}. 
\begin{figure}[ht] \subfigure[]{ \centerline{\resizebox{3.25in}{!}{\includegraphics{bubbleFeynman.eps}}} \label{fig:bubbleFeynman} } \subfigure[]{ \centerline{\resizebox{3.25in}{!}{\includegraphics{vertexFeynman.eps}}} \label{fig:vertexFeynman} } \caption{ (a) Feynman diagram representing the correlation function $\Pi_{\alpha\beta}^{\mathrm{mn}}$ in terms of a bare vertex $j_\alpha^{\mathrm{m}}$, and a dressed vertex $\Gamma_\beta^{\mathrm{n}}$. (b) Feynman diagram representing the (ladder series) dressed vertex in terms of the bare vertex and the Born scattering event. } \end{figure} The current-current correlation function is obtained from this dressed bubble. The bare current operator of Eq.~(\ref{eq:current}) is associated with one vertex of the bubble, while the dressed vertex of Fig.~\ref{fig:vertexFeynman} is associated with the other. Evaluating Fig.~\ref{fig:bubbleFeynman}, we find that the current-current correlation function takes the form \begin{eqnarray} \Pi^{mn}(i\Omega)=\sum_{\alpha,\beta=f,\Delta}\Pi^{mn}_{\alpha\beta}(i\Omega)\nonumber\\ \Pi^{mn}_{\alpha\beta}(i\Omega)=\frac{1}{k_B T}\sum_{i \omega}(i\omega+\frac{i\Omega}{2})^2 \sum_k\nonumber\\ \mathrm{Tr} \left[\widetilde{\mathcal{G}_1}v_\alpha k_\alpha^m\widetilde{M_\alpha^m}\widetilde{\mathcal{G}_2}v_\beta\widetilde{M_\beta^n}\widetilde{\Gamma_\beta^n}\right] \end{eqnarray} where $\widetilde{\mathcal{G}}_1\equiv\widetilde{\mathcal{G}}(\vect{k},i\omega)$, $\widetilde{\mathcal{G}}_2\equiv\widetilde{\mathcal{G}}(\vect{k},i\omega+i\Omega)$, and $\widetilde{\Gamma_\beta^n}=\widetilde{\Gamma_\beta^n}(\vect{k},i\omega,i\Omega)$ represents the dressed vertex depicted in Fig.~\ref{fig:vertexFeynman}. The Greek indices denote ``Fermi'' and ``gap'' terms, while the Roman indices denote the position space components of the tensor.
We use Fig.~\ref{fig:vertexFeynman} to find the form of the vertex equation, and then make the ansatz that \begin{equation} \widetilde{\vect{\Gamma}_\beta}(\vect{k},i\omega,i\Omega)=\Big(\widetilde{\openone}+\widetilde{\Lambda}(|\vect{k}|,i\omega,i\Omega)\Big)\hat{k}, \end{equation} which leads to the scalar equation \begin{eqnarray} \widetilde{\Gamma}_\beta^n(\vect{k},i\omega,i\Omega)=k_n(\widetilde{\openone}+\widetilde{\Lambda}_\beta^n). \end{eqnarray} Looking for solutions of this form, we see that the scalar vertex function is \begin{equation} \widetilde{\Lambda}_\beta^n=n_{\mathrm{imp}}\sum_{k'}\widetilde{M}_\beta^n\widetilde{V}_{kk'}\widetilde{\mathcal{G}}_2\widetilde{M}_\beta^n(\widetilde{\openone}+\widetilde{\Lambda}_\beta^n) \widetilde{\mathcal{G}}_1\widetilde{V}_{k'k}\frac{k^{'n}_\beta}{k^n_\beta}. \end{equation} Since we are working with nodal quasiparticles, we utilize the parametrization of Eq.~(\ref{eq:intparametrization}), so that the vertex function is now a function of node index $j$ and local momentum $\vect{p}$ \begin{eqnarray} \widetilde{\Lambda}_\beta^n&=&n_{\mathrm{imp}}\sum_{j'=1}^4\underline{V}_{jj'}\underline{V}_{j'j} (\frac{k_{\beta n}^{(j')}}{k_{\beta n}^{(j)}})\int\frac{\mathrm{d}^2p'}{8\pi^2 v_fv_\Delta}\nonumber\\& &\widetilde{M}_\beta^n(\widetilde{\sigma_0\otimes\tau_3}) \widetilde{\mathcal{G}}_2\widetilde{M}_\beta^n(\widetilde{\openone}+\widetilde{\Lambda}_\beta^n)\widetilde{\mathcal{G}}_1(\widetilde{\sigma_0\otimes\tau_3}). \end{eqnarray} Arbitrarily choosing $j=1$, then for $j'=\{1,2,3,4\}$ \begin{eqnarray} \frac{k_{1x}^{(j')}}{k_{1x}^{(1)}}=\{1,-1,-1,1\} \hspace{20pt} \frac{k_{1y}^{(j')}}{k_{1y}^{(1)}}=\{1,1,-1,-1\} \nonumber\\ \frac{k_{2x}^{(j')}}{k_{2x}^{(1)}}=\{1,-1,-1,1\} \hspace{20pt} \frac{k_{2y}^{(j')}}{k_{2y}^{(1)}}=\{1,1,-1,-1\}.
\end{eqnarray} Using the node-space matrix representing the three-parameter scattering potential \begin{equation} \underline{V}_{jj'}= \begin{pmatrix} V_1 & V_2 & V_3 & V_2 \\ V_2 & V_1 & V_2 & V_3 \\ V_3 & V_2 & V_1 & V_2 \\ V_2 & V_3 & V_2 & V_1 \end{pmatrix} \end{equation} we obtain for the vertex equation \begin{equation} \label{eq:vertex} \widetilde{\Lambda}^n_\beta=\gamma \int \frac{\mathrm{d}^2p'}{\pi}\widetilde{M}_\beta^n(\widetilde{\sigma_0\otimes\tau_3})\widetilde{\mathcal{G}}_2 \widetilde{M}_\beta^n(\widetilde{\openone}+\widetilde{\Lambda}_\beta^n)\widetilde{\mathcal{G}}_1(\widetilde{\sigma_0\otimes\tau_3}) \end{equation} where $\gamma\equiv n_{\mathrm{imp}}\frac{V_1^2-V_3^2}{8\pi v_fv_\Delta}$. The correlator then becomes \begin{eqnarray} \Pi_{\alpha \beta}^{mn}(i\Omega)&=& v_\alpha v_\beta \frac{1}{k_BT}\sum_{i\omega}(i\omega+\frac{i\Omega}{2})^2\sum_k (k_{\alpha m}k_{\beta n}) \nonumber\\ & &\mathrm{Tr}\left(\widetilde{\mathcal{G}}_1 \widetilde{M}_\alpha^m\widetilde{\mathcal{G}}_2\widetilde{M}_\beta ^n(\widetilde{\openone}+\widetilde{\Lambda}_\beta ^n)\right) \nonumber\\ &=&v_\alpha v_\beta \frac{1}{k_BT}\sum_{i\omega}(i\omega+\frac{i\Omega}{2})^2 \sum_{j=1}^4(k_{\alpha m}^{(j)}k_{\beta n}^{(j)}) \nonumber\\ & &\int \frac{\mathrm{d}^2p}{8\pi^2v_fv_\Delta} \mathrm{Tr}\left(\widetilde{\mathcal{G}}_1 \widetilde{M}_\alpha^m\widetilde{\mathcal{G}}_2\widetilde{M}_\beta ^n(\widetilde{\openone}+\widetilde{\Lambda}_\beta ^n)\right).
\end{eqnarray} Since \begin{equation} \sum_{j=1}^4k_{\alpha m}^{(j)}k_{\beta n}^{(j)}=2\left( (1-\delta_{\alpha \beta})\eta_m+\delta_{\alpha \beta}\right)\delta_{mn} \end{equation} we can write \begin{equation} \label{eq:correlator} \Pi_{\alpha\beta}^{mn}(i\Omega)=2\pi c_{\alpha\beta}^{mn}\frac{1}{k_BT}\sum_{i\omega}(i\omega+\frac{i\Omega}{2})^2 \mathrm{Tr}\left(\widetilde{I}_{\alpha\beta}^{mn}(\widetilde{\openone}+\widetilde{\Lambda}_\beta^n)\right) \end{equation} where \begin{eqnarray} \label{eq:cmn} c_{\alpha \beta}^{mn}&\equiv&\frac{1}{8\pi^2}\frac{v_\alpha v_\beta}{v_f v_\Delta}\Big((1-\delta_{\alpha\beta})\eta_m+ \delta_{\alpha\beta}\Big)\delta_{mn} \end{eqnarray} and \begin{eqnarray} \label{eq:integral} \widetilde{I}_{\alpha\beta}^{mn}(i\omega,i\omega+i\Omega)&\equiv&\int\frac{\mathrm{d}^2p}{\pi}\widetilde{\mathcal{G}}_1\widetilde{M}_\alpha^m\widetilde{\mathcal{G}}_2\widetilde{M}_\beta^n. \end{eqnarray} To calculate the conductivity, we will need Tr$(\widetilde{I}_{\alpha\beta}^{mn})$ and Tr$(\widetilde{I}_{\alpha\beta}^{mn}\widetilde{\Lambda}_\beta^n)$. For $\psi=0$, it is possible to compute the integral in Eq.~(\ref{eq:integral}) analytically, but for general $\psi$ we compute the integrals numerically. We note that if we write \begin{equation} \widetilde{I}= \begin{pmatrix} I_A & I_B \\ I_C & I_D \end{pmatrix}, \end{equation} apply the symmetry properties of Eq.~(\ref{eq:greensymmetries}) and reverse the order of integration of $p_1$ and $p_2$, then $I_A=I_D$, and $I_B=I_C$, so that the most general expansion of $\widetilde{I}_{\alpha\beta}^{mn}$ in Nambu space is \begin{equation} \widetilde{I}_{\alpha\beta}^{mn}=\sum_{i=0}^1\sum_{i'=0}^3(I_{\alpha\beta}^{mn})_{ii'}(\widetilde{\sigma_i\otimes\tau_{i'}}).
\end{equation} Then \begin{eqnarray} \mathrm{Tr}(\widetilde{I}_{\alpha\beta}^{mn})&=&\mathrm{Tr}\left( \sum_{i=0}^1\sum_{i'=0}^3(I_{\alpha\beta}^{mn})_{ii'}(\widetilde{\sigma_i\otimes\tau_{i'}})\right)\nonumber\\ &=& 4(I_{\alpha\beta}^{mn})_{00}, \end{eqnarray} while if we use the same expansion for \begin{eqnarray} \widetilde{\Lambda}_\beta^n=\sum_{i=0}^1\sum_{i'=0}^3(\Lambda_\beta^n)_{ii'}(\widetilde{\sigma_i \otimes\tau_{i'}}), \end{eqnarray} we find \begin{eqnarray} \mathrm{Tr}(\widetilde{I}_{\alpha\beta}^{mn}\widetilde{\Lambda}_\beta^n)=\sum_{ij=0}^1\sum_{i'j'=0}^3(I_{\alpha\beta}^{mn})_{ii'} (\Lambda_\beta^n)_{jj'}\nonumber\\ \mathrm{Tr}(\widetilde{\sigma_i\sigma_j\otimes\tau_{i'}\tau_{j'}})\nonumber\\ =4\sum_{i=0}^1\sum_{i'=0}^3(I_{\alpha\beta}^{mn})_{ii'}(\Lambda_\beta^n)_{ii'}. \end{eqnarray} Then Eq.~(\ref{eq:vertex}) becomes \begin{eqnarray} 4(\Lambda_\beta^n)_{ii'}=\mathrm{Tr}\left((\widetilde{\sigma_i\otimes\tau_{i'}})\widetilde{\Lambda}_\beta^n\right)\nonumber\\ =\gamma\int\frac{\mathrm{d}^2p}{\pi}\mathrm{Tr}((\widetilde{\sigma_i\otimes\tau_{i'}})\widetilde{M}_\beta^n(\widetilde{\sigma_0\otimes\tau_3}) \nonumber\\ \widetilde{\mathcal{G}}_2\widetilde{M}_\beta^n(\widetilde{\openone}+\widetilde{\Lambda_\beta^n})\widetilde{\mathcal{G}}_1(\widetilde{\sigma_0\otimes\tau_3}))\nonumber\\ = \gamma\mathrm{Tr}\left(\widetilde{L}_{\beta ii'}^n(\widetilde{\openone}+\widetilde{\Lambda}_\beta^n)\right) \end{eqnarray} where \begin{eqnarray} \label{eq:integralprime} \widetilde{L}_{\beta ii'}^n\equiv\int\frac{\mathrm{d}^2p}{\pi}\widetilde{\mathcal{G}}_1(\widetilde{\sigma_0\otimes\tau_3})(\widetilde{\sigma_i\otimes\tau_{i'}})\nonumber\\\widetilde{M}_\beta^n (\widetilde{\sigma_0\otimes\tau_3})\widetilde{\mathcal{G}}_2\widetilde{M}_\beta^n.
\end{eqnarray} The symmetries of $\widetilde{\mathcal{G}}$ which were used to determine which components of $\widetilde{I}_{\alpha\beta}^{mn}$ vanish can also be applied to $\widetilde{L}_{\beta ii'}^n$, with the result that $(L_{\beta ii'}^n)_A=(L_{\beta ii'}^n)_D$, $(L_{\beta ii'}^n)_B=\eta_i (L_{\beta ii'}^n)_C$, where $\eta_i=\left\{\begin{array}{c} + 1, \,i=0,1\\-1,\, i=2,3 \end{array}\right\}$. Since all that is required for the conductivity is $i=0,1$, we use the expansion \begin{equation} \widetilde{L}_{\beta ii'}^n=\sum_{j=0}^1\sum_{j'=0}^3(\widetilde{\sigma_j\otimes\tau_{j'}})(L_{\beta ii'}^n)_{jj'} \end{equation} so that \begin{eqnarray} \label{eq:vertexfinal} (\Lambda_\beta^n)_{ii'}=\frac{1}{4}\gamma\mathrm{Tr}\left(\widetilde{L}_{\beta ii'}^n(\widetilde{\openone}+\widetilde{\Lambda}_\beta^n)\right)\nonumber\\ =\frac{1}{4}\gamma\,\,\mathrm{Tr}\,(\sum_{j=0}^1\sum_{j'=0}^3(L_{\beta ii'}^n)_{jj'}(\widetilde{\sigma_j\otimes\tau_{j'}})\nonumber\\ +\sum_{jk=0}^1\sum_{j'k'=0}^3(L_{\beta ii'}^n)_{jj'}(\Lambda_\beta^n)_{kk'}(\widetilde{\sigma_j\sigma_k\otimes\tau_{j'}\tau_{k'}}))\nonumber\\ =\gamma\left((L_{\beta ii'}^n)_{00}+\sum_{j=0}^1\sum_{j'=0}^3(L_{\beta ii'}^n)_{jj'}(\Lambda_\beta^n)_{jj'}\right). \end{eqnarray} The thermal conductivity is obtained from the retarded current-current correlation function \begin{eqnarray} \frac{\kappa^{mn}(\Omega)}{T}=-\frac{\mathrm{Im}\left(\Pi_{\mathrm{ret}}^{mn}(\Omega)\right)}{\Omega\, T^2}, \end{eqnarray} where $\Pi_{\mathrm{ret}}(\Omega)=\Pi(i\Omega\rightarrow\Omega+i\delta).$ To get the retarded correlator we first perform the Matsubara summation. Consider the summand of Eq.~(\ref{eq:correlator}), which we redefine according to \begin{equation} \label{eq:jeqn} J(i\omega,i\omega+i\Omega)=\mathrm{Tr}\left(\widetilde{I}_{\alpha\beta}^{mn}(\widetilde{\openone}+\widetilde{\Lambda}_\beta^n)\right).
\end{equation} The function $J(i\omega,i\omega+i\Omega)$ is of the form $J(i\omega,i\omega+i\Omega)=f(A(i\omega)B(i\omega+i\Omega))$ where $A$ and $B$ are dressed Green's functions of a complex variable $z=i\omega_n$, so that $J$ is analytic except for branch cuts where $z$ or $z+i\Omega$ is real. The Matsubara summation is performed by integrating on a circular path of infinite radius, so that the only contributions come from just above and just below the branch cuts, \begin{eqnarray} \Pi_{\alpha\beta}^{mn}=-c_{\alpha\beta}^{mn}\frac{1}{i}\oint \mathrm{d}z\, n_f(z) (z+\frac{i\Omega}{2})^2J(z,z+i\Omega)\nonumber\\ =-c_{\alpha\beta}^{mn}\frac{1}{i}\int_{-\infty}^\infty\mathrm{d}\epsilon\, n_f(\epsilon)\Big( \nonumber\\ (\epsilon+\frac{i\Omega}{2})^2 (J(\epsilon+i\delta,\epsilon+i\Omega)-J(\epsilon-i\delta,\epsilon+i\Omega))\nonumber\\+(\epsilon-\frac{i\Omega}{2})^2( J(\epsilon-i\Omega,\epsilon+i\delta)-J(\epsilon-i\Omega,\epsilon-i\delta))\Big). \end{eqnarray} To obtain the retarded function, we analytically continue $i\Omega\rightarrow\Omega+i\delta$. Then we let $\epsilon\rightarrow\epsilon+\Omega$ in the third and fourth terms, so that \begin{eqnarray} \Pi_{\alpha\beta}^{mn}(\Omega)_{\mathrm{ret}}&=&c_{\alpha\beta}^{mn}\int_{-\infty}^\infty\mathrm{d}\epsilon\, \big(n_f(\epsilon+\Omega)-n_f(\epsilon)\big)(\epsilon+\frac{\Omega}{2})^2\nonumber\\ & &\times \mathrm{Re}\Big(J_{\alpha\beta}^{AR}(\epsilon,\epsilon+\Omega)-J_{\alpha\beta}^{RR}(\epsilon,\epsilon+\Omega)\Big) \end{eqnarray} where $J^{AR}$ and $J^{RR}$ are defined by Eqs. (\ref{eq:jeqn}) and (\ref{eq:vertexfinal}) and are composed of the universal-limit Green's functions given in Appendix \ref{app:greensfunctions}.
Taking the imaginary part, we find \begin{eqnarray} \frac{\kappa^{mn}(\Omega,T)}{T}&=&-\int_{-\infty}^\infty \mathrm{d}\epsilon\frac{n_f(\epsilon+\Omega)-n_f(\epsilon)}{\Omega} \left(\frac{\epsilon+\frac{\Omega}{2}}{T}\right)^2\nonumber\\ \sum_{\alpha\beta}c_{\alpha\beta}^{mn}&\mathrm{Re}&\left(J_{\alpha\beta}^{AR}(\epsilon,\epsilon+\Omega)-J_{\alpha\beta}^{RR}(\epsilon,\epsilon+\Omega)\right). \end{eqnarray} In taking the $\Omega\rightarrow 0$ limit, the difference in Fermi functions becomes a derivative. Evaluating the integral, $\int\mathrm{d}\epsilon(-\frac{\mathrm{d}n}{\mathrm{d}\epsilon})(\frac{\epsilon} {T})^2=\frac{\pi^2k_B^2}{3}$, we find that \begin{eqnarray} \frac{\kappa_{\alpha\beta}^{mm}(0,0)}{T}=\frac{\pi^2k_B^2}{3}c_{\alpha\beta}^{mm}\mathrm{Re}\left(J_{\alpha\beta}^{AR}(0,0)-J_{\alpha\beta}^{RR} (0,0)\right). \end{eqnarray} That $\kappa^{xy}=\kappa^{yx}=0$ is seen from Eq.~(\ref{eq:cmn}). Finally, since the $\alpha\neq\beta$ integrals are traceless, the result for the thermal conductivity is \begin{eqnarray} \label{eq:thermal} \frac{\kappa^{mm}}{T}=\frac{k_B^2}{3}\frac{v_f^2+v_\Delta^2}{v_fv_\Delta}\frac{1}{8}\left(J_{\alpha\beta}^{AR}(0,0)-J_{\alpha\beta}^{RR} (0,0)\right). \end{eqnarray} \section{Results} \label{sec:analysis} For a discussion of the units employed in the analysis, one can refer to Sec.~\ref{sec:SCBAresults}. The reduced set of parameters for the model is $\{z,V_1,R_2,R_3,\beta,p_0,\psi\}$. We explored a limited region of this parameter space, calculating the integrals and solving the matrix equation numerically. In particular, we looked at the $\psi$ dependence of $\kappa$. To vary the anisotropy of the scattering potential, we considered the $\{R_2,R_3\}$ values of $\{0.9,0.8\}$, $\{0.7,0.6\}$, and $\{0.5,0.3\}$, and kept the constant $c$ (given after Eq.~(\ref{eq:Self})) fixed by appropriately modifying $V_1$. For $\{R_2,R_3\}=\{0.9,0.8\}$, we used $V_1=110$.
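The Sommerfeld-type integral $\int\mathrm{d}\epsilon\,(-\frac{\mathrm{d}n}{\mathrm{d}\epsilon})(\frac{\epsilon}{T})^2=\frac{\pi^2k_B^2}{3}$ quoted in the $\Omega\rightarrow0$ limit above is easily confirmed numerically; in units with $k_B=T=1$ the integrand is $x^2/(4\cosh^2(x/2))$. The following check is illustrative only.

```python
import math

# Numerical check of the Fermi-function integral used in the Omega -> 0
# limit: integral of (-dn/dx) x^2 over the real line equals pi^2/3 in
# units k_B = T = 1, since -dn/dx = 1/(4 cosh^2(x/2)).

def integrand(x):
    return x * x / (4.0 * math.cosh(0.5 * x) ** 2)

def midpoint_integral(f, a, b, n):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# The integrand decays like x^2 e^{-x}, so [-60, 60] captures the tails.
value = midpoint_integral(integrand, -60.0, 60.0, 400000)
# value is approximately pi^2 / 3
```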
The rationale for keeping $c$ fixed is that the self-energy depends only on $c$, $\beta$ and $p_0$. Additionally, we explored the dependence of the thermal conductivity on impurity fraction $z$ and velocity anisotropy $\beta$. For all computations we set the cutoff $p_0=100$; this simply fixes a particular value of the product $v_f v_\Delta$ for these calculations. \subsection{Vertex Corrections} \begin{figure} \centerline{\resizebox{3.25in}{!}{\includegraphics{vertexDiscrep78.eps}}} \caption{Vertex-corrected thermal conductivity, in units of the universal conductivity $\kappa_0/T \equiv \frac{k_B^2}{3\hbar}(v_f/v_\Delta+v_\Delta/v_f)$. These data reflect a short-range scattering potential $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$, impurity fraction $z=0.01$, and isotropic Dirac quasiparticles ($v_f=v_\Delta$). The inset displays the discrepancy between the bare-bubble and vertex-corrected results, in units of the bare-bubble result. It is clear that the vertex corrections are of little quantitative importance for these particular parameters. } \label{fig:vertex78} \end{figure} \begin{figure} \centerline{\resizebox{3.25in}{!}{\includegraphics{vertexDiscrep60.eps}}} \caption{Vertex-corrected thermal conductivity, in units of the universal conductivity $\kappa_0/T \equiv \frac{k_B^2}{3\hbar}(v_f/v_\Delta+v_\Delta/v_f)$. This figure portrays the effect that a different scattering potential has on the importance of vertex corrections. Here, a longer-range potential $\{V_1,R_2,R_3\}=\{140,0.5,0.3\}$ was used, again with impurity fraction $z=0.01$ and $v_f=v_\Delta$. The inset displays the discrepancy between the bare-bubble and vertex-corrected results, in units of the bare-bubble result. From this, we determine that vertex corrections become more substantial as the forward-scattering limit is approached, but only once the charge ordering is quite strong.
} \label{fig:vertex60} \end{figure} \begin{figure} \centerline{\resizebox{3.25in}{!}{\includegraphics{vertexDiscrep48.eps}}} \caption{Vertex-corrected thermal conductivity, in units of the universal conductivity $\kappa_0/T \equiv \frac{k_B^2}{3\hbar}(v_f/v_\Delta+v_\Delta/v_f)$. Again, a short-ranged scattering potential, $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$, and isotropic nodes ($v_f=v_\Delta$) are used. This figure displays the effect of a smaller impurity fraction than that depicted in Fig.~\ref{fig:vertex78}. The inset displays the discrepancy between the bare-bubble and vertex-corrected results, in units of the bare-bubble result; since the scattering potential falls off slowly (in $k$-space) here, the vertex corrections are again quite unimportant.} \label{fig:vertex48} \end{figure} \begin{figure} \centerline{\resizebox{3.25in}{!}{\includegraphics{vertexDiscrep58.eps}}} \caption{Vertex-corrected thermal conductivity, in units of the universal conductivity $\kappa_0/T \equiv \frac{k_B^2}{3\hbar}(v_f/v_\Delta+v_\Delta/v_f)$, for a short-ranged scattering potential, $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$, and impurity fraction $z=0.01$. These calculations differ from those of Fig.~\ref{fig:vertex78} in that they apply to the case of a more anisotropic Dirac spectrum with $v_f=9v_\Delta$. The thermal conductivity has a qualitatively similar $\psi$ dependence, but vanishes for a smaller value of $\psi$ than for the isotropic case. The inset displays the discrepancy between the bare-bubble and vertex-corrected results, in units of the bare-bubble result; again, the vertex corrections do not significantly modify the bare-bubble results.} \label{fig:vertex58} \end{figure} The importance of including the vertex corrections is determined by comparing the vertex-corrected thermal conductivity with that of the bare bubble. If $\frac{\kappa^{VC}-\kappa^{BB}}{\kappa^{BB}}\ll1$ for a region of parameter space, then in that regime the bare-bubble results can be used instead.
This is useful for three reasons: the bare-bubble results are less computationally expensive, the bare-bubble expression is much simpler to analyze, and other hamiltonians could be more easily studied. The bare-bubble thermal conductivity can be obtained by setting $\widetilde{\Lambda}_\beta^n\rightarrow \widetilde{0}$ in Eq.~(\ref{eq:jeqn}), or by using a spectral representation, as in Ref.~\onlinecite{dur02}; both methods yield the same result. For impurity fraction $z$ ranging from 0.5$\%$ to $1\%$, the vertex corrections are seen to be largely negligible, which implies that an analysis of the bare-bubble results is sufficient. Figs.~\ref{fig:vertex78}-\ref{fig:vertex58} illustrate the vertex-corrected thermal conductivities, $\kappa^{VC}$, in the main graphs, while the insets display the relative discrepancy with respect to the bare-bubble thermal conductivities, $\frac{\kappa^{VC}-\kappa^{BB}}{\kappa^{BB}}$. Each is plotted as a function of the amplitude of the CDW, $\psi/\psi_c$, where $\psi_c$ indicates the maximal CDW for which the clean system remains gapless. We will postpone analysis of the character of the thermal conductivity until Sec.~V~C. To gauge the importance of the vertex corrections, we look first at Fig.~\ref{fig:vertex78}. The inset indicates that the vertex corrections do not significantly modify the bare-bubble thermal conductivity. Although their importance grows somewhat with increasing $\psi$, the correction is still slight. Next, Fig.~\ref{fig:vertex78} is used as a reference against which to consider the dependence of vertex corrections on scattering potential, impurity fraction, and velocity anisotropy. The next three figures are the results of computations with each of these parameters modified in turn.
By comparing Fig.~\ref{fig:vertex60} with Fig.~\ref{fig:vertex78} we conclude that the vertex corrections become more important when the scattering potential is peaked in $k$-space, but are unimportant for potentials that fall off slowly in $k$-space. Fig.~\ref{fig:vertex78} and Fig.~\ref{fig:vertex48} correspond roughly to the largest and smallest $z$ for which these calculations are valid. Comparison of these two figures, as well as of intermediate values of $z$ (not displayed), indicates that the relative importance of the vertex corrections is independent of $z$. Nor does increasing the velocity anisotropy affect their importance, as seen by comparing Fig.~\ref{fig:vertex78} and Fig.~\ref{fig:vertex58}. \subsection{Clean Limit Analysis} It is of great interest to consider the behavior of the thermal conductivity in the clean $(z\rightarrow 0)$ limit. Because the thermal conductivity is composed of integrals over $\vect{p}$-space of functions which become increasingly peaked in this limit, there exists a sufficiently small $z$ below which it is not possible to perform the requisite numerical integrations. However, it is still possible to obtain information about this regime. To that end, we will examine the form of the bare-bubble thermal conductivity, and consider the $z\rightarrow 0$ limit. As we shall see, this will enable us to determine the value of $\psi$ at which the nodal approximation, and hence this calculation, is no longer valid. Additionally, a closed-form result for the thermal conductivity in the $z\rightarrow 0$ limit is obtained for the isotropic ($v_f=v_\Delta$) case.
The bare-bubble thermal conductivity, obtained by setting $\widetilde{\Lambda}\rightarrow \widetilde{0}$ in Eq.~(\ref{eq:thermal}), is \begin{widetext} \begin{eqnarray} \label{eq:cleanThermal} \kappa^{mm}&=&\frac{k_B}{3}\frac{v_f^2+v_\Delta^2}{v_f v_\Delta}J^m \hspace{50pt} J^m=\int\frac{\mathrm{d}^2\vect{p}}{2\pi}\frac{N_1+N_2}{D}\hspace{50pt}\epsilon_1\equiv\epsilon_k\hspace{50pt}\Delta_1\equiv\Delta_k\nonumber\\ N_1&=&A\left((A+B+\epsilon_1^2+\Delta_1^2)^2+(A+B+\epsilon_2^2+\Delta_2^2)^2\right) \hspace{41pt}\epsilon_2\equiv\epsilon_{k+G}\hspace{38pt}\Delta_2\equiv\Delta_{k+G}\nonumber\\ N_2&=&\eta_m A\Big((\psi-B_3)^2((\epsilon_1+\epsilon_2)^2-(\Delta_1-\Delta_2)^2)+B_1^2((\Delta_1+\Delta_2)^2-(\epsilon_1-\epsilon_2)^2) -4B_1(\psi-B_3)(\epsilon_1\Delta_1+\epsilon_2\Delta_2)\Big)\nonumber\\ D&=&\Big[ (A+B+\epsilon_1^2+\Delta_1^2)(A+B+\epsilon_2^2+\Delta_2^2)-B\Big((\epsilon_1+\epsilon_2)^2+(\Delta_1-\Delta_2)^2\Big)\nonumber\\ & &+4 B_1\Big(B_1(\epsilon_1\epsilon_2-\Delta_1\Delta_2)+(\psi-B_3)(\epsilon_1\Delta_2+\epsilon_2\Delta_1)\Big)\Big]^2, \end{eqnarray} \end{widetext} where $A\equiv\Gamma_0^2$ and $B\equiv(\psi-B_3)^2+B_1^2$. Since the results of Section~\ref{sec:SCBAresults} indicated that $\Gamma_0\sim \exp{(-\frac{1}{z})}$ and $B_1,B_3\sim z$, in the $z\rightarrow 0$ limit, $A\rightarrow 0$ much faster than $B_1\rightarrow 0$ or $B_3\rightarrow 0$. Therefore in taking the $z\rightarrow 0$ limit we will first let $A\rightarrow 0$ to obtain a result still expressed in terms of $B_1$ and $B_3$.
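This separation of scales is easy to quantify: with $\Gamma_0\sim e^{-1/z}$ (so $A=\Gamma_0^2\sim e^{-2/z}$) while $B_1,B_3\sim z$, the ratio $A/z$ collapses far faster than any power of $z$. The following numerical illustration is schematic only; the constant in the exponent (which in the text is $1/2\pi c$ with $c\propto z$) is set to unity here.

```python
import math

# Compare the vanishing rates as z -> 0: A = Gamma_0^2 ~ exp(-2/z), while
# B_1, B_3 ~ z.  The exponent constant is set to 1 purely for illustration.

def A_over_B1(z):
    """Ratio of A ~ exp(-2/z) to B_1 ~ z (prefactors suppressed)."""
    return math.exp(-2.0 / z) / z

# The ratio collapses extremely rapidly, justifying taking A -> 0 first.
ratios = [A_over_B1(z) for z in (0.10, 0.05, 0.02)]
```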
The denominator can be rearranged as \begin{widetext} \begin{eqnarray} D&=&\Big(A^2+A(2B+\epsilon_1^2+\Delta_1^2+\epsilon_2^2+\Delta_2^2)+f\Big)^2 \hspace{20pt} \mathrm{where}\nonumber\\ f&=&B^2+(\epsilon_1^2+\Delta_1^2)(\epsilon_2^2+\Delta_2^2)-2B(\epsilon_1\epsilon_2-\Delta_1\Delta_2)+4B_1\Big(B_1(\epsilon_1\epsilon_2-\Delta_1\Delta_2) +(\psi-B_3)(\epsilon_1\Delta_2+\epsilon_2\Delta_1)\Big)\nonumber\\ &=&\Big((\epsilon_1 \epsilon_2-\Delta_1\Delta_2) + (2B_1^2-B) \Big)^2 +\Big( (\epsilon_1\Delta_2+\epsilon_2\Delta_1) + 2B_1(\psi-B_3)\Big)^2. \end{eqnarray} \end{widetext} We are thus considering, in the limit that $A\rightarrow0$, an integral of the form \begin{equation} \int\mathrm{d}^2\vect{p}\frac{A\,g(\vect{p})}{\big(A h(\vect{p})+f(\vect{p})\big)^2}. \end{equation} Since $f$ is a sum of two squares, any nonzero contribution to this integral must come from a region in $\vect{p}$-space in which $f(\vect{p})=0$. We will consider separately the isotropic case ($v_f=v_\Delta$) and the anisotropic case ($v_f > v_\Delta$). \subsubsection{Isotropic Case} For the special case where $v_f=v_\Delta$, it is possible to calculate the integral of Eq.~(\ref{eq:cleanThermal}) exactly, by taking the $A\rightarrow0$ limit and choosing another parametrization. The coordinates $q_1\equiv\epsilon_k-\epsilon_{k+Q}$ and $q_2\equiv\epsilon_k+\epsilon_{k+Q}-1$ have their origin located at the midpoint of the white and gray dots of Fig.~\ref{fig:BZfig}. 
Using these coordinates, in the $A\rightarrow0$ limit we find that the elements of Eq.~(\ref{eq:cleanThermal}) become \begin{widetext} \begin{eqnarray} \label{eq:cleanprogress} N_1&=&2A\Big(B^2+B(q^2+1)+\frac{1}{4}(q^2+1)^2+q^2-q_2^2\Big)\nonumber\\ N_2&=&2\eta_mA\Big((\psi-B_3)^2((\epsilon_1+\epsilon_2)^2-(\Delta_1-\Delta_2)^2)+B_1^2((\Delta_1+\Delta_2)^2-(\epsilon_1-\epsilon_2)^2)-4B_1 (\psi-B_3)(\epsilon_1\Delta_2+\epsilon_2\Delta_1)\Big)\nonumber\\ &=&2\eta_m A\Big((\psi-B_3)^2(q_2^2+2q_2+1-q_1^2)+B_1^2(q_2^2-2q_2+1-q_1^2)-2B_1(\psi-B_3)(2q_2^2-q^2-1)\Big)\nonumber\\ &=&2\eta_m A\Big[(2q_2^2-q^2+1)\Big((\psi-B_3)^2-2B_1(\psi-B_3)+B_1^2\Big)+2q_2\Big((\psi-B_3)^2-B_1^2\Big)+4B_1(\psi-B_3)\Big]\nonumber\\ D&=&\Big[ 2A\Big(1+B-2B_1(\psi-B_3)\Big)+\Big(q_2-\big((\psi-B_3)^2-B_1^2\big)\Big)^2+\frac{1}{4}\Big(q^2-(1-4B_1(\psi-B_3))\Big)^2\Big]^2. \end{eqnarray} \end{widetext} Now the part of the denominator not proportional to $A$, the $f$-term, is zero when \begin{equation} \label{eq:conditions} q_2=(\psi-B_3)^2-B_1^2 \hspace{10pt} \mathrm{and} \hspace{10pt} q^2=1-4B_1(\psi-B_3). \end{equation} \begin{figure}[h] \centerline{\resizebox{3.25in}{!}{\includegraphics{lineCircle.eps}}} \caption{Illustrated is a schematic view of the line and circle whose intersection determines whether gapless excitations remain, for the isotropic case ($v_f=v_\Delta$). The left figure indicates the situation in the absence of charge ordering, that is, for $\psi=0$, where the radius of the circle is 1 and the line lies on the horizontal axis. The right figure indicates the situation at $\psi=\psi_c^*$, when charge ordering is such that the excitation spectrum becomes gapped. In the clean case, the $\psi$ evolution corresponds to moving the line past the circle. With self-consistent disorder, the radius of the circle and height of the line are both functions of $\psi$; in each instance, this construction can be used to determine the value of $\psi$ at which the quasiparticle spectrum becomes gapped. 
This value of $\psi$ is referred to as $\psi_c^*$ in this paper.} \label{fig:lineCircle} \end{figure} In $q_1/q_2$ coordinates, these are the equations of a horizontal line and a circle, which must intersect for there to be a nonzero contribution to the integral, since each squared term is non-negative. In the simplified disorder treatment of Ref.~\onlinecite{dur02}, for which $B_1=B_3=0$ and $\Gamma_0=\mathrm{constant}$, these constraints simplify to $q_2=\psi^2$ and $q^2=1$, so that no contribution occurs when $\psi>1$ (note that, as in the numerical analysis, $\psi$, being an energy, is measured in units of $\psi_c$). With the self-consistent treatment of disorder, there will likewise be a sufficiently large value of $\psi$ beyond which the line and circle no longer intersect; we will call this value $\psi_c^*$ (see Fig.~\ref{fig:lineCircle}). We interpret $\psi_c^*$ as the point beyond which the system becomes effectively gapped. This is consistent with the exact result found by computing the eigenvalues of the completely clean Hamiltonian (as $\psi_c^*=\psi_c$ in that case). In Sec.~\ref{sec:SCBAresults} it was determined that $B_1\simeq b_1 \psi$ and $B_3\simeq b_3 \psi$, where $b_1$ and $b_3$ depend on the remaining parameters of the model. Using this approximate form for $B_1$ and $B_3$, the maximum $\psi$ for which the constraints of Eq.~(\ref{eq:conditions}) can be satisfied obeys \begin{equation} 1-4B_1(\psi-B_3)=\Big((\psi-B_3)^2-B_1^2\Big)^2, \end{equation} which indicates that \begin{equation} \psi_c^{*2}\simeq \frac{\pm\Big((1-b_3)\mp b_1\Big)^2}{\Big((1-b_3-b_1)(1-b_3+b_1)\Big)^2}. \end{equation} Since $\psi_c^{*2}>0$, we find that for $v_f=v_\Delta$, \begin{equation} \label{eq:psicrit} \psi_c^* \simeq \frac{1}{1-b_3+b_1}. \end{equation} We now proceed with the calculation of the clean-limit thermal conductivity. 
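As a quick consistency check, Eq.~(\ref{eq:psicrit}) can be verified numerically. The sketch below is illustrative Python (not code from this work); the values $b_1=0.1$ and $b_3=0.2$ are arbitrary placeholders for the SCBA fit parameters. It bisects the gap condition above, with $B_1=b_1\psi$ and $B_3=b_3\psi$, and compares the root with the closed form $1/(1-b_3+b_1)$.

```python
# Numerical check of the clean-limit gap condition (illustrative only).
# b1 and b3 are hypothetical placeholders for the SCBA slopes B1 ~ b1*psi, B3 ~ b3*psi.

def gap_residual(psi, b1, b3):
    """1 - 4 B1 (psi - B3) - ((psi - B3)^2 - B1^2)^2; vanishes at psi = psi_c*."""
    B1, B3 = b1 * psi, b3 * psi
    return 1.0 - 4.0 * B1 * (psi - B3) - ((psi - B3) ** 2 - B1 ** 2) ** 2

def psi_star_numeric(b1, b3, lo=0.0, hi=2.0, iters=80):
    """Bisect for the root: the residual is 1 at psi = 0 and decreases monotonically."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gap_residual(mid, b1, b3) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

b1, b3 = 0.1, 0.2                   # arbitrary illustrative values
psi_closed = 1.0 / (1.0 - b3 + b1)  # closed form, Eq. (psicrit)
psi_num = psi_star_numeric(b1, b3)
print(psi_closed, psi_num)          # the two agree to bisection accuracy
```

The agreement is exact because the quartic condition in $\psi^2$ factors, leaving $1/(1-b_3+b_1)^2$ as its only positive root.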
Substituting the conditions of Eq.~(\ref{eq:conditions}) into Eq.~(\ref{eq:cleanprogress}), we find that the numerators become \begin{widetext} \begin{eqnarray} N_1 &=& 4A\Big[\Big(1-2B_1(\psi-B_3)\Big)\Big(1+B-2B_1(\psi-B_3)\Big)\Big]\nonumber\\ N_2 &=& 4\eta_m A\Big(1+B-2B_1(\psi-B_3)\Big)\Big([(\psi-B_3)^2-B_1^2]^2+2B_1(\psi-B_3)\Big) \end{eqnarray} \end{widetext} both of which are independent of $\vect{q}$, so that the clean limit result hinges upon the integral \begin{eqnarray} I=\int\frac{\mathrm{d}^2q}{4\pi}\frac{A}{\Big(k_1 A+(q_2-k_2)^2+\frac{1}{4}(q^2-k_3)^2\Big)^2}, \end{eqnarray} where \begin{eqnarray} k_1&=&2\Big(1+B-2B_1(\psi-B_3)\Big)\nonumber\\ k_2&=&(\psi-B_3)^2-B_1^2\nonumber\\ k_3&=&1-4B_1(\psi-B_3). \end{eqnarray} The details of this integration are reported in Appendix \ref{app:integration}, with the result \begin{equation} I=\frac{1}{2k_1\sqrt{k_3-k_2^2}}. \end{equation} We can now write the isotropic clean-limit thermal conductivity \begin{widetext} \begin{eqnarray} J&=&\frac{1-2B_1(\psi-B_3)+\eta_m\Big([(\psi-B_3)^2-B_1^2]^2+2B_1(\psi-B_3)\Big)}{\sqrt{1-4B_1(\psi-B_3)-[(\psi-B_3)^2-B_1^2]^2}} \hspace{3pt}\Theta\Big(1-4B_1(\psi-B_3)-[(\psi-B_3)^2-B_1^2]^2\Big)\nonumber\\ J^{xx}&=&\sqrt{1-4B_1(\psi-B_3)-[(\psi-B_3)^2-B_1^2]^2}\hspace{6pt} \Theta \Big(1-4B_1(\psi-B_3)-[(\psi-B_3)^2-B_1^2]^2\Big)\nonumber\\ J^{yy}&=&\frac{1+[(\psi-B_3)^2-B_1^2]^2}{\sqrt{1-4B_1(\psi-B_3)-[(\psi-B_3)^2-B_1^2]^2}}\hspace{6pt}\Theta\Big(1-4B_1(\psi-B_3)-[(\psi-B_3)^2-B_1^2]^2\Big), \end{eqnarray} \end{widetext} where the $\Theta$ function is the Heaviside step function. 
Using the definition for $\psi_c^*$ found in Eq.~(\ref{eq:psicrit}), and defining \begin{equation} \label{eq:chicrit} \chi \equiv \frac{1}{1-b_3-b_1}, \end{equation} we are able to rewrite the dimensionless conductivity in terms of parameters readily extracted from the SCBA calculations \begin{widetext} \begin{eqnarray} \label{eq:cleanlimitthermal} J^{xx}&=&\frac{\kappa^{xx}}{\kappa_0}=\sqrt{\Big( 1-\frac{\psi^2}{\psi_c^{*2}} \Big) \Big( 1+\frac{\psi^2}{\chi^2} \Big)} \hspace{5pt} \Theta\Big(1-\frac{\psi^2}{\psi_c^{*2}}\Big)\nonumber\\ J^{yy}&=&\frac{\kappa^{yy}}{\kappa_0}=\Big(1+\frac{\psi^4}{\psi_c^{*2}\chi^2}\Big)\Big(1-\frac{\psi^2}{\psi_c^{*2}}\Big)^{-1/2}\Big(1+\frac{\psi^2}{\chi^2}\Big)^{-1/2} \hspace{6pt}\Theta\Big(1-\frac{\psi^2}{\psi_c^{*2}}\Big) \end{eqnarray} \end{widetext} in which form it is clear that the thermal conductivity vanishes for $\psi>\psi_c^*$. \subsubsection{Anisotropic Case} For the case of anisotropic nodes, $v_f > v_\Delta$, the integral of Eq.~(\ref{eq:cleanThermal}) becomes intractable. However, it is still possible to predict $\psi_c^*$. Using the same $q_1/q_2$ coordinates, the $f$-part of the denominator is again a sum of two positive definite terms. Again, the only contributions to the clean-limit thermal conductivity arise when $f=0$, which again provides two equations \begin{eqnarray} \label{eq:2eqns} x^2+(y-a)^2=R^2\nonumber\\ (y-b)^2-x^2=c^2 \end{eqnarray} where \begin{eqnarray} a&=&\frac{1}{\beta}(\beta-1)\nonumber\\ b&=&\frac{\beta^4-2\beta^3-1}{\beta^4-1}\nonumber\\ c&=&\frac{2\beta}{\beta^4-1}\sqrt{1-(\beta^4-1)\Big((\psi-B_3)^2-B_1^2\Big)}\nonumber\\ R&=&\sqrt{(1-\frac{1}{\beta}(\beta-1))^2-4B_1(\psi-B_3)}. \end{eqnarray} This defines a hyperbola and a circle, again parametrized by $\psi$. One instance of this is depicted in Fig.~\ref{fig:hyperbolaCircle}. 
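Locating $\psi_c^*$ from Eq.~(\ref{eq:2eqns}) is easy to automate. The following sketch is illustrative Python (not the code used for the figures): it substitutes $x^2=(y-b)^2-c^2$ into the circle equation, checks the resulting quadratic in $y$ for a root with $x^2\ge0$, and bisects in $\psi$ for the largest gapless value. The slopes $b_1$ and $b_3$ (with $B_1=b_1\psi$, $B_3=b_3\psi$) are inputs; setting them to zero is purely illustrative.

```python
import math

def gapless(psi, beta, b1, b3):
    """True if the circle and hyperbola of Eq. (2eqns) still intersect at this psi."""
    B1, B3 = b1 * psi, b3 * psi
    disc_c = 1.0 - (beta ** 4 - 1.0) * ((psi - B3) ** 2 - B1 ** 2)
    R2 = (1.0 - (beta - 1.0) / beta) ** 2 - 4.0 * B1 * (psi - B3)
    if disc_c < 0.0 or R2 < 0.0:
        return False                     # c or R imaginary: no real curves
    c = 2.0 * beta / (beta ** 4 - 1.0) * math.sqrt(disc_c)
    a = (beta - 1.0) / beta
    b = (beta ** 4 - 2.0 * beta ** 3 - 1.0) / (beta ** 4 - 1.0)
    # substitute x^2 = (y-b)^2 - c^2 into the circle:
    #   2 y^2 - 2 (a+b) y + (a^2 + b^2 - c^2 - R^2) = 0
    disc_y = (a + b) ** 2 - 2.0 * (a ** 2 + b ** 2 - c ** 2 - R2)
    if disc_y < 0.0:
        return False
    for sign in (1.0, -1.0):
        y = 0.5 * ((a + b) + sign * math.sqrt(disc_y))
        if (y - b) ** 2 - c ** 2 >= -1e-12:  # need a real x on the hyperbola
            return True
    return False

def psi_star(beta, b1, b3, iters=60):
    """Bisect for the largest gapless psi in (0, 1); beta > 1 is assumed."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gapless(mid, beta, b1, b3) else (lo, mid)
    return 0.5 * (lo + hi)
```

For $\beta=2$ and $b_1=b_3=0$, for example, the intersection is lost once $c$ turns imaginary, at $\psi=(\beta^4-1)^{-1/2}$.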
\begin{figure} \centerline{\resizebox{3.25in}{!}{\includegraphics{hyperbolaCircle.eps}}} \caption{For generally anisotropic Dirac quasiparticles, the construction used in Fig.~\ref{fig:lineCircle} is modified to contain a hyperbola and a circle. When these no longer intersect, the excitation spectrum becomes gapped. Illustrated is the construction for scattering parameter values $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$, impurity fraction $z=0.01$, and with $v_f=4v_\Delta$. For these parameters it was determined that the value of $\psi$ at which the spectrum becomes gapped is given by $\psi_c^*=0.32\psi_c$.} \label{fig:hyperbolaCircle} \end{figure} The value of $\psi$ at which these equations no longer have a solution is $\psi_c^*$. The computed values for $\psi_c^*$ are included for comparison in the plots of thermal conductivity in Fig.~\ref{fig:kappa1} and Fig.~\ref{fig:kappa4}. \subsection{Effect of Self-Consistent Disorder} \begin{figure} \centerline{\resizebox{3.25 in}{!}{\includegraphics{kappa1BW.eps}}} \caption{Effects of disorder on the charge-order-dependence of the bare-bubble thermal conductivity, isotropic case ($v_f=v_\Delta$). Note how an increase in the impurity fraction, $z$, broadens out the peak in the conductivity. As the disorder becomes sufficiently small, the computed conductivity (triangles and squares) attains a limiting value that closely agrees with the closed-form clean-limit results of Eq.~(\ref{eq:cleanlimitthermal}) (shown with solid lines). The thermal conductivity obtained by simply letting $\widetilde{\Sigma}\rightarrow -i\Gamma_0$ (as in Ref.~\onlinecite{dur02}) is shown with dashed lines. The effect of the self-consistent disorder is to renormalize the effective $\psi$ at which the thermal conductivity vanishes (from $\psi_c$ to $\psi_c^*$). 
Here, we have considered short-ranged scatterers $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$.} \label{fig:kappa1} \end{figure} \begin{figure} \centerline{\resizebox{3.25 in}{!}{\includegraphics{kappa4BW.eps}}} \caption{Effects of disorder on the charge-order-dependence of the bare-bubble thermal conductivity, anisotropic case ($v_f=16v_\Delta$). The effect of disorder is the same as in the isotropic case, which is to mix gapped and gapless states, smearing the peak in $\kappa_{yy}$ across the renormalized nodal transition point, $\psi_c^*$. It is interesting to note that for this anisotropic case, $\psi_c^*$ is significantly smaller than $\psi_c$. Again, we have considered short-ranged scatterers $\{V_1,R_2,R_3\}=\{110,0.9,0.8\}$.} \label{fig:kappa4} \end{figure} Satisfied that vertex corrections are of little importance, we set about analyzing the form of the thermal conductivity by studying the bare-bubble results. Thermal conductivity $\kappa$ was computed for $\beta\equiv\sqrt{v_f/v_\Delta}$ values of $1,2,3$ and $4$ (that is, for $v_f/v_\Delta$=$1,4,9$ and $16$). In Fig.~\ref{fig:kappa1} is presented a representative plot of $\kappa$ for $v_f=v_\Delta$. The clean limit prediction for $\kappa$ (Eq.~(\ref{eq:cleanlimitthermal})) is computed by fitting $b_1$ and $b_3$ from the self-energy calculations. These clean limit predictions are then plotted on the same graph with the numerical results of the thermal conductivity for the same parameters. In addition, the clean limit results of the simpler disorder model of Ref.~\onlinecite{dur02} are also shown for the $v_f=v_\Delta$ case. Increasing disorder broadens the peak in $\kappa^{yy}$ near $\psi_c^*$. For $z=0.005$, the numerical computation is already almost exactly given by the clean limit results, while for $z=0.009$, the features of the conductivity are nearly totally smeared out, as seen in Fig.~\ref{fig:kappa1}. In this figure, the value of $\psi_c^*$ given by Eq.~(\ref{eq:psicrit}) is indicated with an arrow. 
For $v_f>v_\Delta$, the thermal conductivity has the same characteristics as for $v_f=v_\Delta$, except that $\psi_c^*$ is generally smaller for larger $\beta$. The numerically computed thermal conductivities for the case of $\beta=4$ are shown in Fig.~\ref{fig:kappa4}. In this figure, the value of $\psi_c^*$ is computed by determining the largest value of $\psi$ for which Eqs.~(\ref{eq:2eqns}) have a solution, and is indicated with an arrow. It is clear from these graphs that the self-consistent disorder renormalizes the amplitude of the charge density wave at which the thermal conductivity vanishes, and that the amount of renormalization depends strongly on the velocity anisotropy ratio and varies only slightly with changing impurity fraction. \section{Conclusions} \label{sec:conc} The work described in this paper investigates the low-temperature thermal conductivity of a $d$-wave superconductor with coexisting charge order in the presence of impurity scattering. We improve upon the model studied in Ref.~\onlinecite{dur02} by incorporating the effect of vertex corrections, and by including disorder in a self-consistent manner. Inclusion of vertex corrections does not significantly modify the bare-bubble results for short range scattering potentials. The role vertex corrections play increases somewhat for longer range scattering potentials, in particular as the amplitude of charge ordering increases. Nonetheless, for reasonable parameter values, the inclusion of vertex corrections is not found to significantly modify the bare-bubble results. This opens up the possibility of doing bare-bubble calculations for models with different types of ordering. Our analysis determined that for self-consistency, it is necessary to include off-diagonal (in extended-Nambu space) terms in the self-energy. As the charge ordering increases, the off-diagonal components become more important, and are found to dominate the self-energy in the clean limit. 
We also find that the zero-temperature thermal conductivity is no longer universal, as it depends on both disorder and charge order, rather than being solely determined by the anisotropy of the nodal energy spectrum. In addition, inclusion of disorder within the self-consistent Born approximation renormalizes, generally to smaller values, the critical value of charge ordering strength $\psi$ at which the system becomes effectively gapped. This renormalization is seen in the calculated thermal conductivity curves, and depends primarily on the impurity fraction $z$ and velocity anisotropy $v_f/v_\Delta$. For larger $v_f/v_\Delta$, the renormalization can be significant, which may indicate that the calculated effects could be seen in low-temperature thermal transport even in systems with relatively weak charge order. \begin{acknowledgements} We are grateful to Subir Sachdev for very helpful discussions. This work is supported by NSF Grant No. DMR-0605919. \end{acknowledgements}
\section{Introduction} Since stars form in dense clumps within molecular clouds, very young clusters are observed to be embedded in gas. Typically $1/3$ of the initial cluster-forming gas mass is converted into stars \citep{Lada:2003il,Machida:2011uw}. Following star formation, and especially after the formation of OB stars, most of the residual gas is driven out, so that there is nearly no gas observed in populations older than a few million years. As gas ejection sets in, the self-gravity of the system weakens, resulting in a rapid expansion of the young star cluster (\citealt{1997MNRAS.284..785G}, \citealt{2006MNRAS.373..752G}, \citealt{2003MNRAS.338..665B}, \citealt{2007MNRAS.380.1589B}). In this contribution we investigate cluster formation and evolution during this process, focusing on the surface brightness of star clusters responding to residual star-forming gas expulsion. We want to answer the question of how the surface brightness is influenced by a) stellar dynamics and b) stellar evolution, and how both relate to each other. In the context of gas expulsion, numerical $N$-body computations show a re-virialisation process taking place after a few million years in the centre of the system. While the cluster outskirts continue to expand, the centre contracts \citep*{2001MNRAS.321..699K}, leading to an increase of the density in the central region. Since, at any given age, the surface mass density directly affects the surface brightness, this process is of special interest. We ask what role this effect plays in the evolution of the surface brightness and whether its impact is sufficient to let the cluster core disappear under a given detection limit owing to its expansion, then re-appear after re-virialisation. We have performed direct $N$-body computations, including stellar evolution, with initial conditions similar to \cite{2001MNRAS.321..699K}, with the emphasis on the time-evolution of the cluster surface brightness and surface mass density. 
The residual gas is modelled by a time-dependent background potential. \section{The Code} The main aspect here is the impact of gas expulsion on the early evolution of star clusters. We choose \mbox{N{\sc body} 6} \cite[]{2000chun.proc..286A} to perform these direct $N$-body simulations since this code already offers features that enable us to account for gas expulsion. In doing so, a time-varying analytical background potential is added which represents the gas and therefore its gravitational interaction with the stellar population. We make this assumption to simplify the complex hydrodynamical processes in the gas component. However, it has been concluded by \citet{2001ASPC..230..311G} that this approximation is physically realistic. In addition to the gas potential, a standard solar-neighbourhood tidal field is adopted. Also, stellar evolution is taken into account, because it is important when determining the luminosity of stars. We did not include pre-main sequence evolution as it is not part of the \mbox{N{\sc body} 6} package yet, and incorporating it into \mbox{N{\sc body} 6} is a major effort. This means that the star clusters would initially be brighter than we compute, but also that they would fade even more strongly. The brightening values of the clusters listed in Tab.~\ref{tab:brightness_changes_revirialisation} are therefore upper limits, since pre-main sequence evolution would reduce this brightening because the stars fade towards the main sequence. The brightening effect due to re-virialisation is therefore even less observable than estimated by our present models, such that clusters cannot re-appear after re-virialisation. \subsection{Analytical gas model}\label{gas_model} We assume that both the stars and gas obey a Plummer density profile \citep{1911MNRAS..71..460P} with equal half-mass radii. 
The star formation efficiency (SFE) at the onset of gas expulsion is $\text{SFE} = 1/3$, that is, the gas mass is twice the initial stellar mass $M_\text{st}(t<t_\text{D})$, \begin{equation} M_\text{g}(t<t_\text{D}) = 2\, M_\text{st}(t<t_\text{D}) \, . \end{equation} A star with a position $\textbf r$ relative to the centre of the cluster experiences an acceleration from the background potential, \begin{equation}\label{eq:acceleration} \textbf a_\text{g} = - \frac{GM_\text{g}(t)}{(r^2+r_\text{pl,g}^2)^{3/2}}\, \textbf r, \end{equation} where $M_\text{g}(t)$ is the time-varying mass and $r_\text{pl,g}$ is the Plummer radius of the cluster-forming region. $G$ is the gravitational constant. Gas expulsion is modelled by decreasing the gas mass, the Plummer radius $r_\text{pl,g}$ remaining unchanged. The gas mass $M_\text{g}(t)$ -- and accordingly the background potential -- is assumed to stay constant until a time-delay $t_\text{D} = 0.6\,\text{Myr}$. After this time delay it decreases following \begin{equation} M_\text{g}(t) = \frac{M_\text{g}(t<t_\text{D})} {1+\left(t-t_\text{D}\right)/\tau },\label{eq:gas-mass-vs-time} \end{equation} where $\tau$ is the gas expulsion time scale, which depends on the velocity of the heated ionized gas \citep[$10\,\text{km}\,\text{s}^{-1}$, e.g.][]{Hills:1980fx}, i.e. \begin{equation} \tau \sim \frac{r_\text{pl,g}}{10\,\text{pc}/\text{Myr}} \,. \end{equation} We stress that the impact of the gas expulsion time scale depends on $\tau$ expressed in units of the cluster-forming region crossing time, $t_\text{cr}$: \begin{equation}\label{eq:crossingtime} t_\text{cr} = \sqrt{\frac{8\,r_\text{vir}^3}{G M_\text{st}}} \end{equation} where the virial radius obeys \begin{equation} r_\text{vir} = \frac{16}{3 \pi} r_\text{pl} \end{equation} in a Plummer model with a Plummer radius $r_\text{pl}$. Gas removal can be expected to be very rapid, and faster than the dynamical crossing time of the cluster-forming region \citep{Whitworth:1979uc}. 
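For concreteness, the gas model of this section can be written out as a short sketch (illustrative Python, not the \mbox{N{\sc body} 6} implementation; units of M$_{\sun}$, pc and Myr are assumed, and the model-III-like numbers at the bottom, including the Plummer radius inferred from the half-mass radius, are illustrative):

```python
# Sketch of the analytical gas model, Eqs. (2)-(4) (illustrative Python).
# Assumed units: M_sun, pc, Myr; G in matching units.

G = 4.498e-3  # gravitational constant in pc^3 M_sun^-1 Myr^-2 (approximate)

def gas_mass(t, M0, t_D, tau):
    """Gas mass vs. time: constant until t_D, then M0 / (1 + (t - t_D)/tau)."""
    if t < t_D:
        return M0
    return M0 / (1.0 + (t - t_D) / tau)

def gas_acceleration(r_vec, t, M0, t_D, tau, r_pl):
    """Acceleration of a star at r_vec from the Plummer background potential."""
    r2 = sum(x * x for x in r_vec)
    pref = -G * gas_mass(t, M0, t_D, tau) / (r2 + r_pl ** 2) ** 1.5
    return [pref * x for x in r_vec]

# Model-III-like numbers: M_st = 3700 M_sun, SFE = 1/3 -> M_g = 7400 M_sun
M0, t_D = 2.0 * 3700.0, 0.6       # M_sun, Myr
r_pl = 0.345                      # pc; assumed Plummer radius (~ R_0.5 / 1.305)
tau = r_pl / 10.0                 # Myr; tau ~ r_pl / (10 pc/Myr)
print(gas_mass(t_D + tau, M0, t_D, tau))  # prints 3700.0: half the gas is left
```

By construction, exactly half of the initial gas mass remains one expulsion time scale after $t_\text{D}$, which is the defining property of $\tau$ noted below.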
In the present calculations, $\tau$ takes the following values \begin{equation} \tau \in \left\{0.001,\ 0.15,\ 0.3,\ 0.6,\ 1.2\right\}\,t_\text{cr} \,. \end{equation} Table \ref{models_tab} gives $t_\text{cr}$ in Myr. The time-delay $t_\text{D}$ is adopted from \cite{2001MNRAS.321..699K} and corresponds to the gas-confinement time until the HII region erupts. At a time $t = t_\text{D} +\tau$, half of the initial gas mass is driven out. In that respect, $\tau$ differs from the gas expulsion time scale adopted in \citet{2001ASPC..230..311G} and \citet{Parmentier:2008ew}, where it is defined as the time when gas removal is complete. \subsection*{Alternative analytical model} One alternative model representing the gas would be a Plummer model with constant mass and time-dependent Plummer radius $r_\text{pl,g}(t)$, e.g. \begin{equation} r_\text{pl,g}(t) = \left\{\begin{array}{ll} r_\text{pl,g}^{(0)}, &t<t_\text{D} \\ r_\text{pl,g}^{(0)} \left(2\left(1+\frac{t-t_\text{D}}{\tau}\right) - 1 \right)^{1/2}, &t \geq t_\text{D}\,, \end{array}\right. \end{equation} where $r_\text{pl,g}^{(0)}$ is the Plummer radius at $t<t_\text{D}$. This expression is derived from the condition to have, at the time $t$, the same gas mass left within a sphere about the centre of the system with radius $r_\text{pl,g}^{(0)} = r_\text{pl,g}(0)$ as in the first gas model. \begin{figure} \includegraphics[width=0.47\textwidth] {gas-expulsion-models} \caption{ Comparison of two different analytical gas expulsion models. Both models assume that the initial gas distribution (at $t<t_\text{D}$) follows a Plummer model. The solid line corresponds to a model where the gas mass $M_\text{g}(t)$ is decreased with time (Eq. \ref{eq:gas-mass-vs-time}). In the alternative model (dotted line), the total gas mass is assumed to be constant, and the Plummer radius $r_\text{pl,g}$ is increased with time after $t>t_\text{D}$. 
The first panel shows the assumed gas density $\rho_\text{g}(r)$, while the second panel presents the resulting potential $\phi_\text{g}(r)$, and the third panel shows the resulting radial force, $m\ddot r = -\d\phi_\text{g}(r) / \d r$. The plots compare the gas models at different times, $t_{0}=t_\text{D}$, $t_{1}=t_\text{D}+\tau$, and $t_{10}=t_\text{D}+10\tau$. } \label{fig:gas-expulsion-models} \end{figure} In Fig.\,\ref{fig:gas-expulsion-models}, the corresponding density and potential are plotted for $t \in \left\{t_\text{D}, t_\text{D} + \tau, t_\text{D} + 10\tau\right\}$ (dotted line). Compared to the model based on Eq. \ref{eq:acceleration} that we use in our calculations (solid line), the gas mass is not just reduced but the gas is driven outwards. As a consequence, in the model with variable Plummer radius the gas is removed faster from the central region (see Fig.\,\ref{fig:gas-expulsion-models}, first panel), and the potential remains deeper (second panel), so that the cluster disrupts more slowly as long as the Plummer radius of the gas is smaller than the tidal radius. The magnitude of the radial force, $-\d\phi(r)/\d r$, exerted by the gas potential peaks at \begin{equation} r_\text{max}(t) = \frac{r_\text{pl,g}(t)}{\sqrt 2} \, . \end{equation} Thus, increasing the Plummer radius, $r_\text{pl,g}(t)$, means increasing the radius $r_\text{max}(t)$ at which the radial force is strongest, resulting in a larger, less dense core. In the present work, we use the decreasing mass model, because we assume the star cluster models to be initially not mass-segregated. In this case, it is more natural to remove the gas uniformly, following the distribution of the OB stars. \section{Initial conditions} Five models with different initial properties (total mass and half-mass radius) are calculated. Model III is based on model A of \citet{2001MNRAS.321..699K}. The properties of all models are listed in Tab.\,\ref{models_tab}. 
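The location of this force maximum follows from setting the derivative of the Plummer force profile $F(r)\propto r/(r^2+r_\text{pl,g}^2)^{3/2}$ to zero; a quick numerical check (illustrative Python, with $G M_\text{g}=1$ and an arbitrary Plummer radius) confirms it:

```python
# Locate the maximum of the Plummer radial force F(r) ~ r / (r^2 + a^2)^(3/2)
# and compare with the analytic r_max = a / sqrt(2).  Illustrative, G*M_g = 1.
import math

def plummer_force(r, a):
    return r / (r * r + a * a) ** 1.5

a = 0.45                                    # Plummer radius, arbitrary units
grid = [i * 1e-4 for i in range(1, 20000)]  # radii out to ~2
r_peak = max(grid, key=lambda r: plummer_force(r, a))
print(r_peak, a / math.sqrt(2))             # agree to grid resolution
```

Analytically, $\d F/\d r \propto (a^2-2r^2)/(r^2+a^2)^{5/2}$, which vanishes at $r=a/\sqrt{2}$.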
As for the tidal field, a Milky Way potential is adopted. The cluster has a distance to the Galactic centre of 8.5\,kpc and moves with a velocity of 220\,km\,s$^{-1}$ on a circular orbit through the Galactic disc. \subsection{The initial stellar population} The stars are initially distributed in mass according to the canonical two-part power-law IMF \citep{2001MNRAS.322..231K}. The distribution function $\xi(m)$ is given by \begin{equation} \xi(m) \propto m^{-\alpha}, \end{equation} where \begin{equation} \alpha =\left\{\begin{array}{lrl} 1.3, & 0.08 \leq &m/\text{M}_{\sun} < 0.5, \\ 2.3, & 0.5 \leq &m/\text{M}_{\sun} < 100 . \end{array}\right. \end{equation} $\xi(m)\d m$ is the number of stars with mass $m \in \left[m,m+\d m\right]$. For computational feasibility, there are no primordial binaries. This is a reasonable approximation, because primordial binaries would not have an important effect on the surface brightness. \subsection{Cluster model} Like the gas, the density distribution of the stellar population at $t < t_\text{D}$ with stellar mass $M_\text{st}$ is assumed to follow a Plummer model with a half-mass radius, $R_{0.5}$, that is identical to that of the gas. The properties are listed in Tab.\,\ref{models_tab}. The models are initially not mass-segregated. \renewcommand\arraystretch{1.5} \begin{table} \caption{ Initial properties of the models. Note that model III here is model A of \citet{2001MNRAS.321..699K}. } \label{models_tab} \begin{tabular}{cccccc} \hline & $M_\text{st} \left[\text{M}_{\sun}\right]$ & $N$ & $\left<m\right> \left[\text{M}_{\sun}\right]$ & $R_{0.5} \left[\text{pc}\right]$ & $t_\text{cr} \left[\text{Myr}\right]$ \\ \hline I & 787 & 1343 & 0.59 & 0.263 & 0.15 \\ II & 787 & 1357 & 0.58 & 0.45 & 0.33 \\ III & 3700 & 6382 & 0.58 & 0.45 & 0.15 \\ IV & 18893 & 32578 & 0.58 & 0.45 & 0.067 \\ V & 18893 & 32122 & 0.59 & 0.77 & 0.15 \\ \hline \end{tabular} \medskip This table gives an overview of the models we use. 
$M_\text{st}$ is the total stellar mass before gas expulsion onset, $N$ the number of stars, $\left<m\right>$ the mean stellar mass, $R_{0.5}$ the half-mass radius, and $t_\text{cr}$ the crossing time. The gas expulsion time scale $\tau$ (see Section \ref{gas_model}) takes values of 0.001\,$t_\text{cr}$, 0.15\,$t_\text{cr}$, 0.29\,$t_\text{cr}$, 0.59\,$t_\text{cr}$, and 1.18\,$t_\text{cr}$ in each model. \end{table} \section{Analysis steps} For every time step, the Lagrange radii as well as the surface brightness are determined using the mass, position and total luminosity of the stellar ensemble. The density centre is calculated by the method of \cite{1985ApJ...298...80C}. Mass loss due to stellar evolution is already taken into account by the simulation. In order to analyse the time-evolution of the surface brightness, the average surface brightness within constant radii (1, 3, 5, 10, and 15\,pc) is plotted vs. time. In order to compute the surface brightness of a finite ensemble of stars, the stellar luminosity evolution as a function of the initial stellar mass is required. We use the spectral evolution code P{\sc egase} by \cite{fioc1997a,fioc1999a}. This code allows a user-defined multi-part power law IMF. In order to extract the evolution for a single star with mass $m_0$, we define a Dirac-IMF, $\xi(m) = \delta(m-m_0)$, and approximate it with a single-part power law, $\xi(m)\propto m^{-\alpha}$, with slope $\alpha=0$ between $(1-0.001)m_0$ and $(1+0.001)m_0$ and zero otherwise. The masses $m_0$ are the same as the masses for which stellar evolution models are implemented into P{\sc egase}. Two-dimensional tables (the stellar luminosity as a function of the initial stellar mass and the age of the star) are constructed for the passbands FUV, B, U, and V. A luminosity for an arbitrary initial stellar mass and age is obtained by two-dimensional linear interpolation of these tables. 
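Sampling stellar masses from the canonical two-part power-law IMF of the previous section can be done by inverse transform, with one analytic inversion per segment. The sketch below is illustrative Python (not the initialisation code actually used); with the continuity condition at $0.5\,\text{M}_{\sun}$, the sample mean lands close to the tabulated $\left<m\right>\approx0.58\,\text{M}_{\sun}$.

```python
import random

M_LO, M_BREAK, M_HI = 0.08, 0.5, 100.0  # mass range in M_sun
A1, A2 = 1.3, 2.3                       # canonical IMF slopes

def _seg_weight(alpha, lo, hi):
    # integral of m^-alpha over [lo, hi] (alpha != 1)
    return (hi ** (1.0 - alpha) - lo ** (1.0 - alpha)) / (1.0 - alpha)

# continuity of xi(m) at the break multiplies the upper segment by 0.5^(A2-A1)
W1 = _seg_weight(A1, M_LO, M_BREAK)
W2 = M_BREAK ** (A2 - A1) * _seg_weight(A2, M_BREAK, M_HI)
P1 = W1 / (W1 + W2)  # probability of drawing from the low-mass segment

def _invert(alpha, lo, hi, v):
    # inverse transform for a pure power law m^-alpha on [lo, hi]
    e = 1.0 - alpha
    return (lo ** e + v * (hi ** e - lo ** e)) ** (1.0 / e)

def sample_imf(rng):
    """Draw one stellar mass from the canonical two-part power-law IMF."""
    if rng.random() < P1:
        return _invert(A1, M_LO, M_BREAK, rng.random())
    return _invert(A2, M_BREAK, M_HI, rng.random())

rng = random.Random(42)
masses = [sample_imf(rng) for _ in range(200000)]
print(sum(masses) / len(masses))  # close to the tabulated <m> of about 0.58
```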
\section{Results} Since the remaining models show qualitatively the same behaviour, only model III is presented and analysed in detail. \input{plots.tex} The computational results are shown in Figs.\,\ref{results3} and \ref{results3uvb}. As a check of consistency, the evolution of the Lagrange radii is compared between model A of \cite{2001MNRAS.321..699K} and model III here. They show the same characteristics: the expansion during and immediately after gas expulsion, followed by re-virialisation. In model III, the Lagrange radii, the surface mass density, and the surface brightness in the four passbands specified above are plotted vs. time (six columns in total). The first column shows the Lagrange radii, which document the expansion history of the cluster. Columns 2--6 contain the time-evolution of the average surface mass density/brightness within constant radii (1, 3, 5, 10, and 15\,pc). Each row presents a computation of model III with a different gas expulsion time scale $\tau$. \begin{figure} \includegraphics[width=0.48\textwidth] {detailed2_4} \caption{ This plot illustrates the effects of stellar evolution and stellar dynamics on the cluster surface brightness. It shows the total FUV luminosity of all stars located within the innermost 1\,pc radius (dashed line, in erg\,$\text{s}^{-1} \text{\AA}^{-1}$) and the FUV surface brightness of a projected area with the same radius (solid line, in erg\,$\text{s}^{-1} \text{\AA}^{-1} \text{pc}^{-2}$) on the left $y$-axis. In order to relate the surface brightness evolution to the re-virialisation process, the 20\% Lagrange radius (LR, grey line) is shown on the right $y$-axis. } \label{stellar_evolution} \end{figure} Figure \ref{stellar_evolution} shows the 20\% Lagrange radius, the total cluster luminosity and the average FUV surface brightness within a projected 1\,pc radius circle around the density centre of the system. 
On the basis of this figure, we want to discuss the interaction of cluster expansion and total stellar luminosity evolution and, accordingly, the dependencies of the surface brightness on stellar dynamics and stellar evolution. \renewcommand\arraystretch{1.5} \begin{table} \caption{ Re-increase of the surface brightness due to core re-virialisation in the V-band. } \label{tab:brightness_changes_revirialisation} \begin{tabular}{ccccc} \hline Model & $\tau$\,[$t_\text{cr}$] & SB$_0$\,[mag] & SB$_\text{min}$\,[mag] & SB$_\text{max}$\,[mag] \\ \hline III & 0.6 & $-8.5$ & $-7.41$ & $-7.81$ \\ \hline IV & 0.001 & $-9.1$ & $-8.3$ & $-8.7$ \\ IV & 1.2 & $-9.1$ & $-8.5$ & $-8.8$ \\ \hline V & 0.001 & $-9.5$ & $-6.8$ & $-8.0$ \\ V & 0.3 & $-9.5$ & $-8.4$ & $-8.7$ \\ V & 0.6 & $-9.5$ & $-7.7$ & $-8.7$ \\ V & 1.2 & $-9.5$ & $-7.8$ & $-8.5$ \\ \hline \end{tabular} \medskip The table gives an overview of all calculations with a positive re-increase of the surface brightness within the inner 1\,pc due to core re-virialisation. Only 7 of 25 calculations show this behaviour. Here, $\tau$ is the gas expulsion time, SB$_{0}$ the initial surface brightness at $t<t_\text{D}$, SB$_\text{min}$ corresponds to the minimum surface brightness before re-virialisation at $t_\text{min}$, and SB$_\text{max}$ to the maximum surface brightness at $t_\text{max}$ after re-virialisation. \end{table} While the cluster starts to expand after the time $t_\text{D}$ has passed, the total luminosity of all stars increases markedly until $t\approx 2\,\text{Myr}$. It reaches its maximum near the core re-virialisation phase, depending on the model. From then on, the total stellar luminosity (dashed line) decreases distinctly as massive stars evolve. 
Although the contraction of the core can be weakly detected as a bump in the surface mass density (see Fig.\,\ref{results3}, middle column), it is masked in the surface brightness by luminosity changes due to stellar evolution, so that stellar dynamics plays only a minor role in this context. Therefore, we can conclude that the re-virialisation in the core does not result in a significant re-increase of the central cluster surface brightness. To quantify the latter finding in all models (five models, each with five different values of $\tau$, i.e. 25 calculations in total), we proceed as follows. First, we inspect the surface mass density and detect the re-increase within the inner 1\,pc which results from the core re-virialisation. If there is a noticeable increase, we record the times at which the surface mass density reaches the corresponding local minimum, $t_\text{min}$, and maximum, $t_\text{max}$, and determine the surface brightness changes (in the V-band) between $t_\text{min}$ and $t_\text{max}$. Table \ref{tab:brightness_changes_revirialisation} shows the resulting surface brightness changes of all calculations in which such a re-increase is detectable. 7 of the 25 calculations meet these criteria. In models I and II, not a single matching calculation was found. In model III, $\tau = 0.6\,t_\text{cr}$ results in a weak re-increase of the surface brightness. Model IV also has two calculations with a weak increase, and model V shows a visible effect in four calculations. However, compared to the later surface brightness decrease due to stellar evolution, the detected changes, $\text{SB}_\text{max} - \text{SB}_\text{min}$, are fairly small, and it seems unlikely that they produce the effect addressed above, namely falling below and later (re-)exceeding the detection limit.
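The detection step just described can be sketched in a few lines. The following Python fragment is an illustrative sketch with hypothetical array names, not the analysis code actually used; for simplicity it operates directly on the brightness curve rather than on the surface mass density first:

```python
import numpy as np

def find_rebrightening(t, sb, t_d):
    """Locate a re-increase of the surface brightness after gas expulsion.

    t   : times (Myr); sb : surface brightness (mag; brighter = more negative);
    t_d : delay time before gas expulsion starts.
    Returns (t_min, sb_min, t_max, sb_max), or None if the cluster only fades.
    """
    tt, ss = t[t > t_d], sb[t > t_d]
    i_min = np.argmax(ss)                  # faintest point (largest magnitude)
    if i_min == len(ss) - 1:
        return None                        # still fading at the end of the run
    i_max = i_min + np.argmin(ss[i_min:])  # brightest point afterwards
    if ss[i_max] >= ss[i_min]:
        return None
    return tt[i_min], ss[i_min], tt[i_max], ss[i_max]
```

A non-`None` result corresponds to one row of Table \ref{tab:brightness_changes_revirialisation}: the cluster dims to SB$_\text{min}$ at $t_\text{min}$ and re-brightens to SB$_\text{max}$ at $t_\text{max}$.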
Apart from our main question, we notice some general dynamical aspects of gas expulsion when comparing simulations with the same initial conditions and different gas expulsion rates. The expansion of the 90\% Lagrange radius does not depend strongly on the gas expulsion time scale, although this parameter covers a range of three orders of magnitude. In contrast, the expansion of the inner Lagrange radii is very sensitive to the gas expulsion time scale. The quicker the gas expulsion, the faster the expansion of the central part of the cluster and accordingly of the inner Lagrange radii (first column in Fig.\,\ref{results3}). This is due to the effective crossing time, which gets longer with increasing radius and is shortest in the cluster core. Consequently, in the case of slow expulsion, re-virialisation influences the surface mass density less strongly, because the cluster remains bound anyway for a long period. For short expulsion time scales, on the other hand, the core expands fast and has little chance to re-virialise. For the assumed SFE, one can observe re-virialisation clearly only for a range of gas expulsion times $\tau$, depending on the model. The surface mass density plots show statistical fluctuations in the central region due to the small number of stars (the noisy shape of the upper line of the second row of Fig.\,\ref{results3} after a few million years). The noisy shape of the surface brightness plots, on the other hand, is mainly caused by the evolution of individual stars. \section{Conclusions} In the present work, we investigate the evolution of the surface brightness of star clusters under the assumption that the residual gas of the cluster-forming region is driven out by evolving OB stars, leading to a strong expansion of the total cluster. Five different models of star clusters with different sizes and masses are numerically evolved, each model with an SFE of $1/3$ and with five different gas expulsion time scales $\tau$.
In doing so, we compare the influences of stellar dynamics and stellar evolution on the surface brightness. Based on the expansion histories, the question arose whether the impact of the re-virialisation process is sufficient to let the cluster core first disappear below a given detection limit owing to its expansion and then re-appear after re-virialisation. From our computations we conclude that the surface brightness is not changed significantly by this process, because it is clearly dominated by stellar evolution. Independently of this finding, we can draw conclusions about the expansion behaviour. We find that the expansion of the inner parts of a star cluster is more sensitive to the gas expulsion time scale $\tau$ than that of the outer parts. While the outer Lagrange radii are only weakly responsive to changing $\tau$, the inner Lagrange radii react very sensitively. For example, the core expands moderately in the case of slow expulsion, while a quick expulsion can lead to a rapid disruption of the core. \section{Acknowledgements} This work is part of FL's Diploma thesis at the Argelander-Institut f\"ur Astronomie (Bonn, Germany).\\ GP acknowledges support from the Humboldt Foundation and from the Max-Planck-Institut f\"ur Radioastronomie (Bonn, Germany) in the form of Research Fellowships.
\section{Introduction}\label{sec:introduction} \noindent Point-set registration is a fundamental problem in computer and robot vision. Given two sets of points in different coordinate systems, or equivalently in the same coordinate system with different poses, the goal is to find the transformation that best aligns one of the point-sets to the other. Point-set registration plays an important role in many vision applications. Given multiple partial scans of an object, it can be applied to merge them into a complete 3D model~\cite{blais1995registering,huber2003fully}. In object recognition, fitness scores of a query object with respect to existing model objects can be measured with registration results~\cite{johnson1999using,belongie2002shape}. In robot navigation, localization can be achieved by registering the current view into the global environment \cite{nuchter20076d,pomerleau2013comparing}. Given cross-modality data acquired from different sensors with complementary information, registration can be used to fuse the data~\cite{makela2002review,zhao2005alignment} or determine the relative poses between these sensors~\cite{yang2013single,geiger2012automatic}. Among the numerous registration methods proposed in the literature, the Iterative Closest Point (ICP) algorithm \cite{besl1992method,yang1991object,zhang1994iterative}, introduced in the early 1990s, is the most well-known algorithm for efficiently registering two 2D or 3D point-sets under Euclidean (rigid) transformation. Its concept is simple and intuitive: given an initial transformation (rotation and translation), it alternates between building closest-point correspondences under the current transformation and estimating the transformation with these correspondences, until convergence. Appealingly, point-to-point ICP is able to work directly on the raw point-sets, regardless of their intrinsic properties (such as distribution, density and noise level).
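The alternation just described is compact enough to sketch. Below is a minimal Python illustration (our own toy, not a reference implementation): brute-force closest-point matching plus the closed-form SVD transform estimate of Arun et al./Horn; practical implementations replace the matching step with a k-d tree or distance transform:

```python
import numpy as np

def best_rigid_transform(X, Y):
    """Closed-form least-squares (R, t) mapping paired rows of X onto Y
    (SVD method of Arun et al. / Horn); X, Y are N x 3 arrays."""
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - cx).T @ (Y - cy))
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cy - R @ cx

def icp(X, Y, R=np.eye(3), t=np.zeros(3), iters=50):
    """Vanilla point-to-point ICP: alternate brute-force closest-point
    matching and closed-form transform estimation."""
    for _ in range(iters):
        Xt = X @ R.T + t
        # index of the closest model point in Y for every data point
        j = ((Xt[:, None, :] - Y[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        R, t = best_rigid_transform(X, Y[j])
    return R, t
```

Started close enough to the optimum, the loop converges in a few iterations; started badly, it stalls in a local minimum — precisely the failure mode discussed next.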
Due to its conceptual simplicity, high usability and good performance in practice, ICP and its variants are very popular and have been successfully applied in numerous real-world tasks (\cite{newcombe2011kinectfusion, seitz2006comparison, makela2002review}, for example). However, ICP is also known for its susceptibility to the problem of local minima, due to the non-convexity of the problem as well as the local iterative procedure it adopts. Being an iterative method, it requires a good initialization, without which the algorithm may easily become trapped in a local minimum. If this occurs, the solution may be far from the true (optimal) solution, resulting in erroneous estimation. More critically, there is no reliable way to tell whether or not it is trapped in a local minimum. To deal with the issue of local minima, previous efforts have been devoted to widening the basin of convergence~\cite{fitzgibbon2003robust,tsin2004correlation}, performing heuristic and non-deterministic global search~\cite{sandhu2010point,silva2005precision} and utilizing other methods for coarse initial alignment~\cite{rusu2009fast,makadia2006fully}, \emph{etc}. However, global optimality cannot be guaranteed with these approaches. Furthermore, some methods, such as those based on feature matching, are not always reliable or even applicable when the point-sets are not sampled densely from smooth surfaces. This work is, to the best of our knowledge, the first to propose a globally optimal solution to the Euclidean registration problem defined by ICP in 3D. The proposed method always produces the exact and globally optimal solution, up to the desired accuracy. Our method is named the \emph{Globally Optimal ICP}, abbreviated to \emph{Go\nobreakdash-ICP}. We base the Go-ICP method on the well-established Branch-and-Bound (BnB) theory for global optimization.
Nevertheless, choosing a suitable domain parametrization for building a tree structure in BnB and, more importantly, deriving efficient error bounds based on the parametrization are both non-trivial. Our solution is inspired by the $SO(3)$ space search technique proposed in Hartley and Kahl~\cite{hartley2007global} as well as Li and Hartley~\cite{li20073d}. We extend it to $SE(3)$ space search and derive novel bounds of the 3D registration error. Another feature of the Go-ICP method is that we employ, as a subroutine, the conventional (local) ICP algorithm within the BnB search procedure. The algorithmic structure of the proposed method can be summarized as follows. \vspace{0pt} \begin{framed} \vspace{-0pt} \noindent \emph{Use BnB to search the space of $SE(3)$} \hangafter 0 \hangindent 1.5em \noindent \emph{Whenever a better solution is found, call ICP initialized at this solution to refine (reduce) the objective function value. Use ICP's result as an updated upper bound to continue the BnB.} \hangindent 0em \noindent \emph{Until convergence.} \vspace{-0pt} \end{framed} \vspace{0pt} Our error metric strictly follows that of the original ICP algorithm, that is, minimizing the $L_2$~norm of the closest-point residual vector. We also show how a trimming strategy can be utilized to handle outliers. With small effort, one can also extend the method with robust kernels or robust norms. A preliminary version of this work was presented as a conference paper~\cite{yang2013goicp}. \subsection{Previous Work} There is a large volume of work published on ICP and other registration techniques, precluding us from giving a comprehensive list. Therefore, we will focus below on some relevant Euclidean registration works addressing the local minimum issue in 2D or 3D. 
For other papers, the reader is referred to two surveys on ICP variants~\cite{rusinkiewicz2001efficient,pomerleau2013comparing}, a recent survey on 3D point cloud and mesh registration~\cite{tam2013registration}, an overview of 3D registration~\cite{castellani20123d} and the references therein. \vspace{4pt} \noindent\textbf{\emph{{Robustified Local Methods.}}} To improve the robustness of ICP to poor initializations, previous work has attempted to enlarge the basin of convergence by smoothing out the objective function. Fitzgibbon~\cite{fitzgibbon2003robust} proposed the LM-ICP method where the ICP error was optimized with the Levenberg--Marquardt algorithm~\cite{more1978levenberg}. Better convergence than ICP was observed, especially with the use of robust kernels. It was shown by Jian and Vemuri~\cite{jian2005robust} that if the point-sets are represented with Gaussian Mixture Models (GMMs), ICP is related to minimizing the Kullback-Leibler divergence of two GMMs. Although improved robustness to outliers and poor initializations could be achieved by GMM-based techniques~\cite{jian2005robust,tsin2004correlation,myronenko2010point,campbell2015adaptive}, the optimization was still based on local search. Earlier than these works, Rangarajan \emph{et al}\onedot~\cite{rangarajan1997robust} presented a SoftAssign algorithm which assigned Gaussian weights to the points and applied deterministic annealing on the Gaussian variance. Granger and Pennec~\cite{granger2002multi} proposed an algorithm named Multi-scale EM-ICP where an annealing scheme on GMM variance was also used. Biber and Stra{\ss}er~\cite{biber2003normal} developed the Normal Distributions Transform (NDT) method, where Gaussian models were defined for uniform cells in a spatial grid. Magnusson \emph{et al}\onedot~\cite{magnusson2009evaluation} experimentally showed that NDT was more robust to poor initial alignments than ICP. Some methods extend ICP by robustifying the distance between points. 
For example, Sharp \emph{et al}\onedot~\cite{sharp2002icp} proposed the additional use of invariant feature descriptor distance; Johnson and Kang~\cite{johnson1999registration} exploited color distances to boost the performance. \vspace{0.06in} \noindent\textbf{\emph{{Global Methods.}}} To address the local minima problem, global registration methods have also been investigated. A typical family adopts stochastic optimization such as Genetic Algorithms~\cite{silva2005precision,robertson2002parallel}, Particle Swarm Optimization~\cite{wachowiak2004approach}, Particle Filtering~\cite{sandhu2010point} and Simulated Annealing schemes~\cite{blais1995registering,papazov2011stochastic}. While the local minima issue is effectively alleviated, global optimality cannot be guaranteed and initializations still need to be reasonably good as otherwise the parameter space is too large for the heuristic search. Another class of global registration methods introduces shape descriptors for coarse alignment. Local descriptors, such as Spin Images~\cite{johnson1999using}, Shape Contexts~\cite{belongie2002shape}, Integral Volume~\cite{gelfand2005robust} and Point Feature Histograms~\cite{rusu2009fast} are invariant under specific transformations. They can be used to build sparse feature correspondences, based on which the best transformation can be found with random sampling~\cite{rusu2009fast}, greedy algorithms~\cite{johnson1999using}, Hough Transforms~\cite{woodford2014demisting} or BnB algorithms~\cite{gelfand2005robust,bazin2012globally}. Global shape descriptors, such as Extended Gaussian Images (EGI)~\cite{makadia2006fully}, can be used to find the best transformation maximizing descriptor correlation. These methods are often robust and can efficiently register surfaces where the descriptor can be readily computed. Random sampling schemes such as RANSAC~\cite{fischler1981random} can also be used to register raw point clouds directly.
Irani and Raghavan~\cite{irani1999combinatorial} randomly sampled 2-point bases to align 2D point-sets using similarity transformations. For 3D, Aiger \emph{et al}\onedot~\cite{aiger20084} proposed a 4PCS algorithm that sampled coplanar 4-points, since congruent coplanar 4-point sets can be efficiently extracted with affine invariance. \vspace{0.06in} \noindent\textbf{\emph{{Globally Optimal Methods.}}} Registration methods that guarantee optimality have been published in the past, albeit in a smaller number. Most of them are based on BnB algorithms. For example, geometric BnB has been used for 2D image pattern matching \cite{breuel2003implementation,mount1999efficient,pfeuffer2012discrete}. These methods share a similar structure with ours: given each transformation sub-domain, determine for each data point the uncertainty region, based on which the objective function bounds are derived and the BnB search is applied. However, although uncertainty region computation with various 2D transformations has been extensively explored, extending it to 3D is often impractical due to the heightened complexity~\cite{breuel2003implementation}. For 3D registration, Li and Hartley~\cite{li20073d} proposed using a Lipschitzized $L_2$ error function that was minimized by BnB. However, this method makes unrealistic assumptions that the two point-sets are of equal size and that the transformation is pure rotation. Olsson \emph{et al}\onedot~\cite{olsson2009branch} obtained the optimal solution to simultaneous point-to-point, point-to-line and point-to-plane registration using BnB and bilinear relaxation of rotation quaternions. This method, although related to ours, requires known correspondences. Recently, Bustos \emph{et al}\onedot~\cite{bustos2014fast} proposed searching $SO(3)$ space for optimal 3D geometric matching, assuming known translation. Efficient run-times were achieved using stereographic projection techniques.
Some optimal 3D registration methods assume a small number of putative correspondences, and treat registration as a correspondence outlier removal problem. For example, to minimize the overall pairwise distance error, Gelfand~\emph{et al}\onedot~\cite{gelfand2005robust} applied BnB to assign one best corresponding model point for each data point. A similar idea using pairwise consistency was proposed by Enqvist \emph{et al}\onedot~\cite{enqvist2009optimal}, where the inlier-set maximization was formulated as an NP-hard graph vertex cover problem and solved using BnB. Using angular error, Bazin \emph{et al}\onedot~\cite{bazin2012globally} solved a similar correspondence inlier-set maximization problem via $SO(3)$ space search assuming known translation. Enqvist and Kahl~\cite{enqvist2008robust} optimally solved camera pose in $SE(3)$ via BnB. However, the key insight is that with pre-matched correspondences, their pairwise constraint (also used in \cite{enqvist2009optimal}) enabled a single translation BnB in $\mathbb{R}^3$ to solve the $SE(3)$ problem. \vspace{0.02in} \textbf{In this paper}, we optimally solve the 3D Euclidean registration problem with both rotation and translation. The proposed Go-ICP method is able to work directly on raw sparse or dense point-sets (which may be sub-sampled only for reasons of efficiency), without the need for a good initialization or putative correspondences. The method is related to the idea of $SO(3)$ space search, as proposed in \cite{hartley2007global,li20073d} and extended in \cite{ruland2012globally,bazin2012globally,yang2014optimal}, \emph{etc}. We extend the 3-dimensional $SO(3)$ search to 6-dimensional $SE(3)$ search, which is much more challenging. \section{Problem Formulation}\label{sec:formulation} In this paper we define the $L_2$-norm registration problem in the same way as in the standard point-to-point ICP algorithm.
Let two 3D point-sets $\mathcal{X}=\{\mathbf{x}_i\},i=1,...,N$ and $\mathcal{Y}=\{\mathbf{y}_j\},j=1,...,M$, where $\mathbf{x}_i,\mathbf{y}_j\in\mathbb{R}^3$ are point coordinates, be the \emph{data} point-set and the \emph{model} point-set respectively. The goal is to estimate a rigid motion with rotation $\mathbf{R}\!\in\!SO(3)$ and translation $\mathbf{t}\!\in\!\mathbb{R}^3$, which minimizes the following $L_2$-error $E$, \begin{equation} E(\mathbf{R},\mathbf{t}) = \sum_{i=1}^N e_i(\mathbf{R},\mathbf{t})^2 = \sum_{i=1}^N \|\mathbf{R}\mathbf{x}_i+\mathbf{t}-\mathbf{y}_{j^*}\|^2 \label{eq:registrationerror} \end{equation} where $e_i(\mathbf{R},\mathbf{t})$ is the per-point residual error for $\mathbf{x}_i$. Given $\mathbf{R}$ and $\mathbf{t}$, the point $\mathbf{y}_{j^*}\in{\mathcal{Y}}$ is denoted as the optimal correspondence of $\mathbf{x}_i$, which is the closest point to the transformed $\mathbf{x}_i$ in $\mathcal{Y}$, \emph{i.e.} \begin{equation} j^*=\argmin_{j\in\{1,..,M\}}\|\mathbf{R}\mathbf{x}_i+\mathbf{t}-\mathbf{y}_{j}\|. \label{eq:closestpoint} \end{equation} Note the short-hand notation used here: $j^*$ varies as a function of $(\mathbf{R},\mathbf{t})$ and also depends on $\mathbf{x}_i$. Equations (\ref{eq:registrationerror}) and (\ref{eq:closestpoint}) actually form a well-known \emph{chicken-and-egg} problem: if the true correspondences are known \emph{a priori}, the transformation can be optimally solved in closed-form \cite{horn1987closed,arun1987least}; if the optimal transformation is given, correspondences can also be readily found. However, the joint problem cannot be trivially solved. Given an initial transformation $(\mathbf{R},\mathbf{t})$, ICP iteratively solves the problem by alternating between estimating the transformation with (\ref{eq:registrationerror}), and finding closest-point matches with (\ref{eq:closestpoint}).
Such an iterative scheme guarantees convergence to a local minimum~\cite{besl1992method}. \begin{figure}[!t] \begin{center} \subfigure{ \includegraphics[width=0.33\textwidth]{nonconvexity_points.pdf}} \subfigure{ \includegraphics[width=0.23\textwidth]{nonconvexity_e1.pdf}} \subfigure{ \includegraphics[width=0.23\textwidth]{nonconvexity_E.pdf}} \vspace{0pt} \caption{Nonconvexity of the registration problem. \textbf{Top}: two 1D point-sets $\{x_1,x_2\}$ and $\{y_1,y_2,y_3\}$. \textbf{Bottom-left}: residual error (closest-point distance) for $x_1$ as a function of translation $t$; the three dashed curves are $\|x_1\!+\!t\!-\!y_j\|$ with $j\!=\!1,2,3$ respectively. \textbf{Bottom-right}: the overall $L_2$ registration error; the two dashed curves are $e_i(t)^2$ with $i\!=\!1,2$ respectively. The residual error functions are nonconvex, thus the $L_2$ error function is also nonconvex.}\label{fig:nonconvexity} \end{center} \end{figure} \vspace{0.06in} \noindent\textbf{\emph{{(Non-)Convexity Analysis.}}} It is easy to see from (\ref{eq:registrationerror}) that the transformation function denoted by $T_x(p)$ affinely transforms a point $x$ with parameters $p$, thus the residual function $e(p)=d(T_x(p))$ is convex provided that \emph{domain $D_p$ is a convex set (Condition~1)} and $d(x)=\inf_{y\in \mathcal{Y}}\|x-y\|$ is convex. Moreover, it has been shown in \cite{boyd2004convex} and further in \cite{olsson2009branch} that $d(x)$ is convex if and only if \emph{$\mathcal{Y}$ is a convex set (Condition~2)}. For registration with pure translation, Condition~1 can be satisfied as the domain $D_p$ is $\mathbb{R}^3$. However, $\mathcal{Y}$ is often a discrete point-set sampled from complex surfaces and is thus rarely a convex set, violating Condition~2. Therefore, $e(p)$ is nonconvex. Figure~\ref{fig:nonconvexity} shows a 1D example. 
For registration with rotation, even Condition~1 cannot be fulfilled, as the rotation space induced by the quadratic orthogonality constraints $\mathbf{R}\bR^\mathrm{T}=\mathbf{I}$ is clearly not a convex set. \vspace{0.06in} \noindent\textbf{\emph{Outlier Handling.}} As is well known, $L_2$-norm least squares fitting is susceptible to outliers. A small number of outliers may lead to erroneous registration, even if the global optimum is achieved. There are many strategies to deal with outliers \cite{rusinkiewicz2001efficient,champleboux1992accurate,fitzgibbon2003robust,jian2005robust,chetverikov2005robust}. In this paper, a trimmed estimator is used to gain outlier robustness similar to \cite{chetverikov2005robust}. To streamline the presentation and mathematical derivation, we defer the discussion to Sec.~\ref{sec:outlier}. For now we assume there are no outliers and focus on minimizing (\ref{eq:registrationerror}). \section{The Branch and Bound Algorithm}\label{sec:bnb} The BnB algorithm is a powerful global optimization technique that can be used to solve nonconvex and NP-hard problems~\cite{lawler1966branch}. Although existing BnB methods work successfully for 2D registration, extending them to search $SE(3)$ and solve 3D rigid registration has been much more challenging~\cite{breuel2003implementation,li20073d}. In order to apply BnB to 3D registration, we must consider \emph{i}) how to parametrize and branch the domain of 3D motions (Sec.~\ref{sec:domain}), and \emph{ii}) how to efficiently find upper bounds and lower bounds (Sec.~\ref{sec:bounds}). \subsection{Domain Parametrization}\label{sec:domain} Recall that our goal is to minimize the error $E$ in (\ref{eq:registrationerror}) over the domain of all feasible 3D motions (the $SE(3)$ group, defined by $SE(3)=SO(3)\times\mathbb{R}^3$). Each member of $SE(3)$ can be minimally parameterized by 6 parameters (3 for rotation and 3 for translation). 
Using the \emph{angle-axis representation}, each rotation can be represented as a 3D vector $\mathbf{r}$, with axis $\mathbf{r}/\|\mathbf{r}\|$ and angle $\|\mathbf{r}\|$. We use $\mathbf{R}_\mathbf{r}$ to denote the corresponding rotation matrix for $\mathbf{r}$. The $3\times 3$ matrix $\mathbf{R}_\mathbf{r}\in SO(3)$ can be obtained by the matrix exponential map as \begin{equation}\label{eq:rodrigues} \mathbf{R}_\mathbf{r} = \exp([\,\mathbf{r}\,]_{\times}) = \mathbf{I}\!+\!\frac{[\,\mathbf{r}\,]_{\times} \!\sin{\|\mathbf{r}\|}}{\|\mathbf{r}\|} \!+\! \frac{[\,\mathbf{r}\,]^2_{\times}(1\!-\!\cos{\|\mathbf{r}\|})}{\|\mathbf{r}\|^2} \end{equation} \noindent where $[\,\cdot\,]_{\times}$ denotes the skew-symmetric matrix representation \begin{equation} [\,\mathbf{r}\,]_{\times} = \left[\!\begin{array}{ccc} 0 & -r^3 & r^2 \\ r^3 & 0 & -r^1 \\ -r^2 & r^1 & 0 \end{array}\!\right] \end{equation} \noindent where $r^i$ is the $i$th element in $\mathbf{r}$. Equation~\ref{eq:rodrigues} is also known as \emph{Rodrigues' rotation formula}~\cite{hartley2004}. The inverse map is given by the matrix logarithm as \begin{equation}\label{eq:} [\,\mathbf{r}\,]_{\times} = \log{\mathbf{R}_\mathbf{r}}=\frac{\|\mathbf{r}\|}{2\sin{\|\mathbf{r}\|}}(\mathbf{R}_\mathbf{r}-\mathbf{R}_{\mathbf{r}}^{\mathrm{T}}) \end{equation} \noindent where $\|\mathbf{r}\|\!=\!\arccos{\big((\mathrm{trace}(\mathbf{R}_\mathbf{r})\!-\!1)/2\big)}$. With the angle-axis representation, the entire 3D rotation space can be compactly represented as a solid radius-$\pi$ ball in $\mathbb{R}^3$. Rotations with angles less than (or, equal to) $\pi$ have unique (or, two) corresponding angle-axis representations on the interior (or, surface) of the ball. For ease of manipulation, we use the minimum cube $[-\pi,\pi]^3$ that encloses the $\pi$-ball as the rotation domain.
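For illustration, the two maps translate directly into code. The following Python sketch (our own, with a small-angle guard for $\|\mathbf{r}\|\to 0$, where the closed-form expressions are singular) implements the exponential and logarithm maps above:

```python
import numpy as np

def skew(r):
    """[r]_x : the skew-symmetric matrix of a 3-vector r."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def exp_rot(r):
    """Angle-axis vector -> rotation matrix (Rodrigues' rotation formula)."""
    th = np.linalg.norm(r)
    if th < 1e-12:
        return np.eye(3)                 # small-angle limit
    K = skew(r)
    return np.eye(3) + np.sin(th) / th * K + (1 - np.cos(th)) / th**2 * (K @ K)

def log_rot(R):
    """Rotation matrix -> angle-axis vector (matrix logarithm), angle < pi."""
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    K = th / (2 * np.sin(th)) * (R - R.T)
    return np.array([K[2, 1], K[0, 2], K[1, 0]])
```

Round-tripping `log_rot(exp_rot(r))` recovers `r` for any angle below $\pi$, consistent with the uniqueness property stated above.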
For the translation part, we assume that the optimal translation lies within a bounded cube $[-\xi,\xi]^3$, which may be readily set by choosing a large number for $\xi$. During BnB search, initial cubes will be subdivided into smaller sub-cubes $C_r$, $C_t$ using the \emph{octree data-structure} and the process is repeated. Figure~\ref{fig:domain} illustrates our domain parametrization. \begin{figure}[!t] \begin{center} \subfigure[Rotation domain]{ \includegraphics[width=0.16\textwidth]{domain_rot.pdf}}\ \ \ \ \ \ \ \ \subfigure[Translation domain]{ \includegraphics[width=0.16\textwidth]{domain_trans.pdf}} \vspace{-2pt} \caption{$SE(3)$ space parameterization for BnB. \textbf{Left}: the rotation space $SO(3)$ is parameterized in a solid radius-$\pi$ ball with the angle-axis representation. \textbf{Right}: the translation is assumed to be within a 3D cube $[-\xi,\xi]^3$ where $\xi$ can be readily set. The octree data-structure is used to divide (branch) the domains and the yellow box in each diagram represents a sub-cube.}\label{fig:domain} \end{center} \vspace{-6pt} \end{figure} \section{Bounding Function Derivation}\label{sec:bounds} For our 3D registration problem, we need to find the bounds of the $L_2$-norm error function used in ICP within a domain $C_r\times C_t$. Next, we will introduce the concept of an \emph{uncertainty radius} as a mathematical preparation, then derive our bounds based on it. \subsection{Uncertainty Radius}\label{sec:uncertainty} Intuitively, we want to examine the uncertainty region of a 3D point $\mathbf{x}$ perturbed by an arbitrary rotation $\mathbf{r}\in C_r$ or a translation $\mathbf{t}\in C_t$. We aim to find a ball, characterised by an uncertainty radius, that encloses such an uncertainty region. We will use the first two lemmas of \cite{hartley2009global} in the following derivation. 
For convenience, we summarize both lemmas in a single Lemma shown below.\begin{lemma} For any vector $\mathbf{x}$ and two rotations $\mathbf{R}_{\mathbf{r}}$ and $\mathbf{R}_{\mathbf{r}_0}$ with $\mathbf{r}$ and $\mathbf{r}_0$ as their angle-axis representations, we have \begin{equation}\label{eq:rotationinequality} \angle(\mathbf{R}_{\mathbf{r}}\mathbf{x}, \mathbf{R}_{\mathbf{r}_0} \mathbf{x})\leqslant\angle(\mathbf{R}_{\mathbf{r}}, \mathbf{R}_{\mathbf{r}_0})\leqslant\|\mathbf{r}-\mathbf{r}_0\|, \end{equation} where $\angle(\mathbf{R}_{\mathbf{r}},\mathbf{R}_{\mathbf{r}_0}) = \arccos{\big((\mathrm{trace}(\mathbf{R}_{\mathbf{r}}^\mathrm{T}\mathbf{R}_{\mathbf{r}_0})\!-\!1)/2\big)}$ is the angular distance between rotations. \end{lemma} The second inequality in (\ref{eq:rotationinequality}) means that the angular distance between two rotations on the $SO(3)$ manifold is less than the Euclidean vector distance of their angle-axis representations in $\mathbb{R}^3$. Based on this Lemma, uncertainty radii are given as follows. \begin{figure}[!t] \begin{center} \includegraphics[width=0.33\textwidth]{result1.pdf} \caption{Distance computation from $\mathbf{R}_\mathbf{r} \mathbf{x}$ to $\mathbf{R}_{\mathbf{r}_0}\mathbf{x}$ used in the derivation of the rotation uncertainty radius.}\label{fig:result1eq1} \end{center} \end{figure} \begin{theorem}\label{rs:u_r}(Uncertainty radius) Given a 3D point $\mathbf{x}$, a rotation cube $C_r$ of half side-length $\sigma_r$ with $\mathbf{r}_0$ as the center and examining the maximum distance from $\mathbf{R}_\mathbf{r} \mathbf{x}$ to $\mathbf{R}_{\mathbf{r}_0} \mathbf{x}$, we have $\forall \mathbf{r}\in C_r$, \begin{equation}\label{eq:rotuncertainty} \|\mathbf{R}_\mathbf{r}\mathbf{x}- \mathbf{R}_{\mathbf{r}_0} \mathbf{x}\| \!\leqslant\! 2\sin(\min({\sqrt{3}\sigma_r}/{2},{\pi}/{2}))\|\mathbf{x}\| \!\doteq\! \gamma_r. 
\end{equation} Similarly, given a translation cube $C_t$ with half side-length $\sigma_t$ centered at $\mathbf{t}_0$, we have $\forall \mathbf{t}\in C_t$, \begin{equation}\label{eq:transuncertainty} \textcolor[rgb]{0.00,0.00,0.00}{\|(\mathbf{x}+\mathbf{t})-(\mathbf{x}+\mathbf{t}_0)\|\leqslant \sqrt{3}\sigma_t \doteq \gamma_t.} \end{equation} \end{theorem} \noindent \emph{Proof:} Inequality (\ref{eq:rotuncertainty}) can be derived from {\begin{align} \ \ \ \ \ \ \ \ \ \ \ & \|\mathbf{R}_\mathbf{r}\mathbf{x}-\mathbf{R}_{\mathbf{r}_0}\mathbf{x}\| &\\ & = 2\sin({\angle(\mathbf{R}_\mathbf{r} \mathbf{x}, \mathbf{R}_{\mathbf{r}_0} \mathbf{x})}/{2})\|\mathbf{x}\| & \label{eq:result1eq1}\\ & \leqslant 2\sin(\min({\angle(\mathbf{R}_\mathbf{r}, \mathbf{R}_{\mathbf{r}_0})}/{2},{\pi}/{2})) \|\mathbf{x}\| & \label{eq:result1ieq1}\\ & \leqslant 2\sin(\min({\|\mathbf{r}-\mathbf{r}_0\|}/{2},{\pi}/{2})) \|\mathbf{x}\| & \label{eq:result1ieq2}\\ & \leqslant 2\sin(\min({\sqrt{3}\sigma_r}/{2},{\pi}/{2}))\|\mathbf{x}\| \label{eq:result1ieq3}& \end{align}}where (\ref{eq:result1eq1}) is illustrated in Fig.~\ref{fig:result1eq1}. Inequalities (\ref{eq:result1ieq1}), (\ref{eq:result1ieq2}) are based on Lemma 1, and (\ref{eq:result1ieq3}) is from the fact that $\mathbf{r}$ resides in the cube. Inequality (\ref{eq:transuncertainty}) can be trivially derived via \mbox{$\|(\mathbf{x}+\mathbf{t})-(\mathbf{x}+\mathbf{t}_0)\|=\|\mathbf{t}-\mathbf{t}_0\|\leqslant \sqrt{3}\sigma_t$}.\qed \vspace{3pt} We call $\gamma_r$ the rotation uncertainty radius, and $\gamma_t$ the translation uncertainty radius. They are depicted in Fig.~\ref{fig:uncertainty}. Note that $\gamma_r$ is point-dependent, thus we use ${\gamma_r}_i$ to denote the rotation uncertainty radius at $\mathbf{x}_i$ and the vector $\boldsymbol{\gamma}_r$ to represent all ${\gamma_r}_i$. Based on the uncertainty radii, the bounding functions are derived in the following section. 
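The uncertainty-radius bound in (\ref{eq:rotuncertainty}) is easy to sanity-check by Monte-Carlo sampling. The Python sketch below is purely illustrative (the point, cube centre and half side-length in the test are arbitrary choices); it samples rotations from a cube $C_r$ and verifies that the displaced point never leaves the ball of radius $\gamma_r$:

```python
import numpy as np

def exp_rot(r):
    """Angle-axis vector -> rotation matrix (Rodrigues' rotation formula)."""
    th = np.linalg.norm(r)
    if th < 1e-12:
        return np.eye(3)
    K = np.array([[0.0, -r[2], r[1]], [r[2], 0.0, -r[0]], [-r[1], r[0], 0.0]])
    return np.eye(3) + np.sin(th) / th * K + (1 - np.cos(th)) / th**2 * (K @ K)

def rotation_radius_holds(x, r0, sigma_r, trials=2000, seed=0):
    """Monte-Carlo check of ||R_r x - R_r0 x|| <= gamma_r for r in C_r."""
    rng = np.random.default_rng(seed)
    gamma_r = 2 * np.sin(min(np.sqrt(3) * sigma_r / 2, np.pi / 2)) * np.linalg.norm(x)
    R0x = exp_rot(r0) @ x
    return all(
        np.linalg.norm(exp_rot(r0 + rng.uniform(-sigma_r, sigma_r, 3)) @ x - R0x)
        <= gamma_r + 1e-12
        for _ in range(trials)
    )
```

The translation radius $\gamma_t$ needs no such check, since $\|\mathbf{t}-\mathbf{t}_0\|\leqslant\sqrt{3}\sigma_t$ holds by construction of the cube.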
\begin{figure}[!t] \begin{center} \subfigure[Rotation uncertainty radius]{ \includegraphics[width=0.23\textwidth]{uncertaintyradius_rot.pdf}\label{fig:uncertaintyr}} \subfigure[Translation uncertainty radius]{ \includegraphics[width=0.23\textwidth]{uncertaintyradius_trans.pdf}\label{fig:uncertaintyt}} \vspace{-2pt} \caption{Uncertainty radii at a point. \textbf{Left:} rotation uncertainty ball for $C_r$ (in red) with center $\mathbf{R}_{\mathbf{r}_0}\mathbf{x}$ (blue dot) and radius $\gamma_r$. \textbf{Right:} translation uncertainty ball for $C_t$ (in red) with center $\mathbf{x}+\mathbf{t}_0$ (blue dot) and radius $\gamma_t$. In both diagrams, the uncertainty balls enclose the range of $\mathbf{R}_\mathbf{r}\mathbf{x}$ or $\mathbf{x}+\mathbf{t}$ (in green). \label{fig:uncertainty}} \end{center} \end{figure} \subsection{Bounding the \boldmath${L_2}$\unboldmath\ Error}\label{sec:errorbound} Given a rotation cube $C_r$ centered at $\mathbf{r}_0$ and a translation cube $C_t$ centered at $\mathbf{t}_0$, we will first derive valid bounds of the residual $e_i(\mathbf{R},\mathbf{t})$ for a single point $\mathbf{x}_i$. The upper bound of $e_i$ can be easily chosen by evaluating the error at any $(\mathbf{r},\mathbf{t})\in C_r\times C_t$. Finding a suitable lower bound for the $L_2$ error is a harder task. From Sec.~\ref{sec:uncertainty} we know that, with rotation $\mathbf{r}\in C_r$ (or, translation $\mathbf{t}\in C_t$), a transformed point $\mathbf{x}_i$ will lie in the uncertainty ball centered at $\mathbf{R}_{\mathbf{r}_0}\mathbf{x}_i$ (or, $\mathbf{x}_i+\mathbf{t}_0$) with radius ${\gamma_r}_i$ (or, $\gamma_t$). For both rotation and translation, it therefore lies in the uncertainty ball centered at $\mathbf{R}_{\mathbf{r}_0}\mathbf{x}_i+\mathbf{t}_0$ with radius ${\gamma_r}_i+\gamma_t$. Now we need to consider the smallest residual error that is possible for $\mathbf{x}_i$. We have the following theorem, which is the cornerstone of the proposed method. 
\begin{theorem}(Bounds of per-point residuals) For a 3D motion domain $C_r\times C_t$ centered at $(\mathbf{r}_0,\mathbf{t}_0)$ with uncertainty radii ${\gamma_r}_i$ and $\gamma_t$, the upper bound $\overline{e_i}$ and the lower bound $\underline{e_i}$ of the optimal registration error $e_i(\mathbf{R}_\mathbf{r},\mathbf{t})$ at $\mathbf{x}_i$ can be chosen as \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\overline{e_i} \doteq e_i(\mathbf{R}_{\mathbf{r}_0},\mathbf{t}_0), \label{eq:perpointupperbound}\\ &&\!\!\!\!\!\!\!\!\!\underline{e_i} \doteq \max\big(e_i(\mathbf{R}_{\mathbf{r}_0}, \mathbf{t}_0)-({\gamma_r}_i\!+\!\gamma_t),0\big). \label{eq:perpointlowerbound} \end{eqnarray} \end{theorem} \noindent\emph{Proof:} The validity of $\overline{e_i}$ is obvious: the error $e_i$ at the specific point $(\mathbf{r}_0,\mathbf{t}_0)$ must be no less than the minimal error within the domain, \emph{i.e.} $e_i(\mathbf{R}_{\mathbf{r}_0},\mathbf{t}_0)\geqslant \min_{\forall(\mathbf{r},\mathbf{t})\in(C_r\times C_t)}e_i(\mathbf{R}_\mathbf{r},\mathbf{t})$. We now focus on proving the correctness of $\underline{e_i}$. As defined in (\ref{eq:closestpoint}), the model point $\mathbf{y}_{j^*}\in\mathcal{Y}$ is closest to $(\mathbf{R}_\mathbf{r} \mathbf{x}_i+\mathbf{t})$. Let $\mathbf{y}_{j^*_0}$ be the closest model point to $\mathbf{R}_{\mathbf{r}_0} \mathbf{x}_i+\mathbf{t}_0$. Observe that, $\forall(\mathbf{r},\mathbf{t})\in(C_r\times C_t)$, \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!e_i(\mathbf{R}_\mathbf{r}, \mathbf{t}) \nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=\!\|\mathbf{R}_\mathbf{r} \mathbf{x}_i\!+\!\mathbf{t}\!-\!\mathbf{y}_{j^*}\| \\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=\!\|( \mathbf{R}_{\mathbf{r}_0} \mathbf{x}_i\!+\!\mathbf{t}_0\!-\!\mathbf{y}_{j^*}\!)\!+(\! \mathbf{R}_\mathbf{r} \mathbf{x}_i\!-\!
\mathbf{R}_{\mathbf{r}_0} \mathbf{x}_i)\!+\!(\mathbf{t} \!-\!\mathbf{t}_0)\| \label{eq:eq_auxiliary}\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\geqslant\!\|\mathbf{R}_{\mathbf{r}_0}\mathbf{x}_i\!+\!\mathbf{t}_0\!-\!\mathbf{y}_{j^*}\!\|\!-\!(\|\mathbf{R}_\mathbf{r} \mathbf{x}_i\!-\!\mathbf{R}_{\mathbf{r}_0}\mathbf{x}_i\|\!+\!\|\mathbf{t}\!-\!\mathbf{t}_0\|) \label{eq:ineq_triangle}\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\geqslant\!\|\mathbf{R}_{\mathbf{r}_0}\mathbf{x}_i\!+\!\mathbf{t}_0\!-\!\mathbf{y}_{j^*}\!\|\!-\!({\gamma_r}_i\!+\!\gamma_t) \label{eq:ineq_uncertaintyradius} \\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\geqslant\!\|\mathbf{R}_{\mathbf{r}_0}\mathbf{x}_i\!+\!\mathbf{t}_0\!-\!\mathbf{y}_{j^*_0}\!\|\!-\!({\gamma_r}_i\!+\!\gamma_t) \label{eq:ineq_closestpoint} \\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=\!e_i(\mathbf{R}_{\mathbf{r}_0}, \mathbf{t}_0)\!-\!({\gamma_r}_i\!+\!\gamma_t), \end{eqnarray} where (\ref{eq:eq_auxiliary}) trivially involves introducing two auxiliary terms $\mathbf{R}_{\mathbf{r}_0}\mathbf{x}_i$ and $\mathbf{t}_0$, (\ref{eq:ineq_triangle}) follows from the reverse triangle inequality\footnote{$|x+y| = |x-(-y)| \geqslant |x|-|-y| = |x|-|y|$}, (\ref{eq:ineq_uncertaintyradius}) is based on the uncertainty radii in (\ref{eq:rotuncertainty}) and (\ref{eq:transuncertainty}), and (\ref{eq:ineq_closestpoint}) follows from the closest-point definition. Note that $\mathbf{y}_{j^*}$ is not fixed, but changes dynamically as a function of $(\mathbf{R}_\mathbf{r},\mathbf{t})$ as defined in (\ref{eq:closestpoint}). According to the above derivation, the residual error $e_i(\mathbf{R}_\mathbf{r}, \mathbf{t})$ after perturbing a data point $\mathbf{x}_i$ by a 3D rigid motion composed of a rotation $\mathbf{r}\in C_r$ and a translation $\mathbf{t}\!\in\! C_t$ will be at least $e_i(\mathbf{R}_{\mathbf{r}_0}, \mathbf{t}_0)\!-\!({\gamma_r}_i\!+\!\gamma_t)$.
Given that a closest point distance should be non-negative, a valid lower bound $\underline{e_i}$ for $C_r\times C_t$ is $\max\big(e_i(\mathbf{R}_{\mathbf{r}_0}, \mathbf{t}_0)-({\gamma_r}_i\!+\!\gamma_t),0\big)\!\leqslant\! \min_{\forall(\mathbf{r},\mathbf{t})\in(C_r\times C_t)}e_i(\mathbf{R}_\mathbf{r},\mathbf{t})$.\qed \vspace{3pt} The geometric explanation for $\underline{e_i}$ is as follows. Since $\mathbf{y}_{j^*_0}$ is closest to the center $\mathbf{R}_{\mathbf{r}_0}\mathbf{x}_i+\mathbf{t}_0$ of the uncertainty ball with radius ${\gamma={\gamma_r}_i+\gamma_t}$, it is also closest to the surface of the ball and $\underline{e_i}$ is the closest distance between point-set $\mathcal{Y}$ and the ball. Thus, no matter where the transformed data point $\mathbf{R}_\mathbf{r} \mathbf{x}_i+\mathbf{t}$ lies inside the ball, its closest distance to point-set $\mathcal{Y}$ will be no less than $\underline{e_i}$. See Fig.~\ref{fig:lowerbound} for a geometric illustration. \begin{figure} \begin{center} \includegraphics[width=0.26\textwidth]{lowerbound.pdf} \vspace{0pt} \caption{Deriving the lower bound. Any transformed data point $\mathbf{R}_\mathbf{r} \mathbf{x}\!+\!\mathbf{t}$ lies within the uncertainty ball (in yellow) centered at $\mathbf{R}_{\mathbf{r}_0} \mathbf{x}\!+\!\mathbf{t}_0$ with radius $\gamma=\gamma_r+\gamma_t$. Model points $\mathbf{y}_{j^*}$ and $\mathbf{y}_{j^*_0}$ are closest to $\mathbf{R}_\mathbf{r} \mathbf{x}\!+\!\mathbf{t}$ and $\mathbf{R}_{\mathbf{r}_0} \mathbf{x}\!+\!\mathbf{t}_0$ respectively. It is clear that $a\le b \le c$ where $a=\underline{e_i}$ and $c=e_i(\mathbf{R}_\mathbf{r},\mathbf{t})$. See text for more details. \label{fig:lowerbound}} \end{center} \vspace{0pt} \end{figure} Summing the squared upper and lower bounds of per-point residuals in (\ref{eq:perpointupperbound}) and (\ref{eq:perpointlowerbound}) for all $M$ points, we get the $L_2$-error bounds in the following corollary. 
\begin{corollary}\label{cor:l2errorbounds}(Bounds of $L_2$ error) For a 3D motion domain $C_r\times C_t$ centered at $(\mathbf{r}_0,\mathbf{t}_0)$ with uncertainty radii ${\gamma_r}_i$ and $\gamma_t$, the upper bound $\overline{E}$ and the lower bound $\underline{E}$ of the optimal $L_2$ registration error $E^*$ can be chosen as \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\overline{E} \doteq \sum_{i=1}^M \overline{e_i}^2 = \sum_{i=1}^M e_i(\mathbf{R}_{\mathbf{r}_0},\!\mathbf{t}_0)^2, \label{eq:upperbound} \vspace{-5pt}\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\underline{E} \doteq \sum_{i=1}^M \underline{e_i}^2= \sum_{i=1}^M \max\big(e_i(\mathbf{R}_{\mathbf{r}_0},\!\mathbf{t}_0)\!-\!({\gamma_r}_i\!+\!\gamma_t), 0\big)^2. \label{eq:lowerbound} \end{eqnarray} \end{corollary} \section{The Go-ICP Algorithm}\label{sec:algorithm} Now that the domain parametrization and bounding functions have been specified, we are ready to present the Go-ICP algorithm concretely. \subsection{Nested BnBs} Given Corollary~\ref{cor:l2errorbounds}, a direct BnB search of the 6D space (\emph{i.e.} branching each 6D cube into $2^6=64$ sub-cubes and bounding the errors for them) seems straightforward. However, we find it prohibitively inefficient and memory-consuming, due to the huge number of 6D cubes and point-set transformation operations. Instead, we propose using a nested BnB search structure. An outer BnB searches the rotation space of $SO(3)$ and solves the bounds and corresponding optimal translations by calling an inner translation BnB. \textcolor[rgb]{0.00,0.00,0.00}{In this way, we only need to maintain two queues with significantly fewer cubes. Moreover, this avoids redundant point-set rotation operations for each rotation region, and takes advantage of the fact that translation operations are computationally much cheaper.} The bounds for both BnBs can be readily derived according to Sec.~\ref{sec:errorbound}.
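Concretely, evaluating the bounds of Corollary~\ref{cor:l2errorbounds} for a given motion cube amounts to one closest-point query per data point at the cube center. The following Python sketch is illustrative only (a brute-force nearest-neighbor search stands in for the DT/kd-tree used in the actual implementation):

```python
import numpy as np

def l2_error_bounds(X, Y, R0, t0, gamma_r, gamma_t):
    """Upper/lower L2-error bounds over a motion cube centered at (R0, t0).
    X: Mx3 data points, Y: Nx3 model points,
    gamma_r: per-point rotation uncertainty radii (length M),
    gamma_t: scalar translation uncertainty radius."""
    Xt = X @ R0.T + t0                              # transform by the cube center
    # closest-point residuals e_i(R0, t0); brute force stands in for DT/kd-tree
    e = np.min(np.linalg.norm(Xt[:, None, :] - Y[None, :, :], axis=2), axis=1)
    upper = np.sum(e ** 2)                                          # sum of e_i^2
    lower = np.sum(np.maximum(e - (gamma_r + gamma_t), 0.0) ** 2)   # clamped at 0
    return upper, lower
```

By construction the lower bound never exceeds the upper bound, and both coincide as the cube (and hence the uncertainty radii) shrinks to a point.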
In the outer rotation BnB, for a rotation cube $C_r$ the bounds can be chosen as \begin{eqnarray} \!\!\!\!&&\overline{E}_r = \min_{\forall \mathbf{t} \in \mathscr{C}_t}\sum_i e_i(\mathbf{R}_{\mathbf{r}_0}, \mathbf{t})^2, \label{eq:rotationub}\\ \!\!\!\!&&\underline{E}_r = \min_{\forall \mathbf{t} \in \mathscr{C}_t}\sum_i \max\big(e_i(\mathbf{R}_{\mathbf{r}_0}, \mathbf{t})-{\gamma_r}_i,0\big)^2, \label{eq:rotationlb} \end{eqnarray} where $\mathscr{C}_t$ is the initial translation cube. To solve the lower bound $\underline{E}_r$ in (\ref{eq:rotationlb}) with the inner translation BnB, the bounds for a translation cube $C_t$ can be chosen as \begin{eqnarray} \!\!\!\!&&\overline{E}_t =\sum_i \max\big(e_i(\mathbf{R}_{\mathbf{r}_0}, \mathbf{t}_0)-{\gamma_r}_i, 0\big)^2, \label{eq:translationub}\\ \!\!\!\!&&\underline{E}_t = \sum_i \max\big(e_i(\mathbf{R}_{\mathbf{r}_0}, \mathbf{t}_0)-({\gamma_r}_i+\gamma_t),0\big)^2. \label{eq:translationlb} \end{eqnarray} By setting all the rotation uncertainty radii ${\gamma_r}_i$ in (\ref{eq:translationub}) and (\ref{eq:translationlb}) to zero, the translation BnB solves $\overline{E}_r$ in (\ref{eq:rotationub}). A detailed description is given in Algorithm~\ref{alg:bnbrt} and Algorithm~\ref{alg:bnbt}. \vspace{0.06in} \noindent\textbf{Search Strategy and Stop Criterion.} In both BnBs, we use a best-first search strategy. Specifically, each of the BnBs maintains a priority queue in which a cube with a smaller lower bound has higher priority. Once the difference between the so-far-the-best error $E^*$ and the lower bound $\underline{E}$ of the current cube is less than a threshold $\epsilon$, the BnB stops. Another possible strategy is to set $\epsilon=0$ and terminate the BnBs when the remaining cubes are sufficiently small. \begin{algorithm}[!tb] \footnotesize \caption{Go-ICP -- the Main Algorithm: BnB search for optimal registration in $SE(3)$}\label{alg:bnbrt} \KwIn{Data and model points; threshold $\epsilon$; initial cubes $\mathscr{C}_r$,\!
$\mathscr{C}_t$. } \KwOut{ Globally minimal error $E^*$ and corresponding $\mathbf{r}^*$,\! $\mathbf{t}^*$. } Put $\mathscr{C}_r$ into priority queue $Q_r$.\\ Set $E^*=+\infty$.\\ \Begin { Read out a cube with lowest lower-bound $\underline{E}_r$ from $Q_r$.\\ Quit the loop if $E^*\!-\!\underline{E}_r\!<\!\epsilon$. \\ Divide the cube into 8 sub-cubes.\\ \ForEach{\textnormal{sub-cube} $C_r$} { Compute $\overline{E}_r$ for $C_r$ and corresponding optimal $\mathbf{t}$ by calling Algorithm~\ref{alg:bnbt} with $\mathbf{r}_0$, zero uncertainty radii, and $E^*$. \\ \If{$\overline{E}_r<E^*$} { {Run ICP with the initialization $(\mathbf{r}_0, \mathbf{t})$.} \label{ln:icprefine1}\\ Update $E^*$, $\mathbf{r}^*$, and $\mathbf{t}^*$ with the results of ICP. \label{ln:icprefine2} } Compute $\underline{E}_r$ for $C_r$ by calling Algorithm~\ref{alg:bnbt} with $\mathbf{r}_0$, $\boldsymbol{\gamma}_r$ and $E^*$.\\ \If{$\underline{E}_r\geqslant E^*$} { Discard $C_r$ and continue the loop; } Put $C_r$ into $Q_r$. } } \end{algorithm} \begin{algorithm}[!tb] \footnotesize \caption{BnB search for optimal translation given rotation}\label{alg:bnbt} \KwIn{Data and model points; threshold $\epsilon$; initial cube $\mathscr{C}_t$; rotation $\mathbf{r}_0$; rotation uncertainty radii $\boldsymbol{\gamma}_r$, so-far-the-best error $E^*$. } \KwOut{Minimal error $E^*_t$ and corresponding $\mathbf{t}^*$.} Put $\mathscr{C}_t$ into priority queue $Q_t$.\\ Set $E^*_t=E^*$.\\\label{ln:initalerror} \Begin { Read out a cube with lowest lower-bound $\underline{E}_t$ from $Q_t$.\\ Quit the loop if $E^*_t\!-\!\underline{E}_t\!<\!\epsilon$. 
\\ Divide the cube into 8 sub-cubes.\\ \ForEach{\textnormal{sub-cube} $C_t$} { Compute $\overline{E}_t$ for $C_t$ by (\ref{eq:translationub}) with $\mathbf{r}_0$, $\mathbf{t}_0$ and $\boldsymbol{\gamma}_r$.\\ \If{$\overline{E}_t<E^*_t$} { Update $E^*_t=\overline{E}_t$, $\mathbf{t}^*=\mathbf{t}_0$.\\ } Compute $\underline{E}_t$ for $C_t$ by (\ref{eq:translationlb}) with $\mathbf{r}_0$, $\mathbf{t}_0$, $\boldsymbol{\gamma}_r$ and $\gamma_t$.\\ \If{$\underline{E}_t\geqslant E^*_t$} { Discard $C_t$ and continue the loop.\\ } Put $C_t$ into $Q_t$. } } \end{algorithm} \subsection{Integration with the ICP Algorithm}\label{sec:integration} Lines~\ref{ln:icprefine1}--\ref{ln:icprefine2} of Algorithm~\ref{alg:bnbrt} show that whenever the outer BnB finds a cube $C_r$ that has an upper bound lower than the current best function value, it will call conventional ICP, initialized with the center rotation of $C_r$ and the corresponding best translation. Figure~\ref{fig:ICP_BnB_coop} illustrates the collaborative relationship between ICP and BnB. Under the guidance of the global BnB, ICP converges to local minima one by one, with each local minimum having lower error than the previous one, and ultimately reaches the global minimum. Since ICP monotonically decreases the current-best error $E^*$ (cf. \cite{besl1992method}), the search path of the local ICP is confined to un-discarded, promising sub-cubes with small lower bounds, as illustrated in Fig.~\ref{fig:ICP_BnB_coop}. In this way, the global BnB search and the local ICP search are intimately integrated in the proposed method. The former helps the latter jump out of local minima and guides the latter's next search; the latter accelerates the former's convergence by refining the upper bound, hence improving the efficiency. \begin{figure} \begin{center} \includegraphics[width=0.475\textwidth]{go-icp.pdf} \caption{Collaboration of BnB and ICP.
\textbf{Left}: BnB and ICP collaboratively update the upper bounds during the search process. \textbf{Right}: with the guidance of BnB, ICP only explores un-discarded, promising cubes with small lower bounds marked up by BnB. \label{fig:ICP_BnB_coop}} \end{center} \vspace{-5pt} \end{figure} \subsection{Outlier Handling with Trimming}\label{sec:outlier} \begin{figure*}[!t] \begin{center} \includegraphics[width=1\textwidth]{remainingcubes3d_synthetic.pdf} \caption{Remaining cubes of the BnBs. The first five figures show the remaining cubes in the rotation $\pi$-ball of the rotation BnBs, for an irregular tetrahedron, a cuboid with three different side-lengths, a regular tetrahedron, a regular cube, and a regular octahedron respectively. The last figure shows a typical example of remaining cubes of a translation BnB, for the irregular tetrahedron. (Best viewed when zoomed in) \label{fig:rotationcubes}} \end{center} \end{figure*} In statistics, trimming is a strategy to obtain a more robust statistic by excluding some of the extreme values. It is used in Trimmed ICP~\cite{chetverikov2005robust} for robust point-set registration. Specifically, in each iteration, only a subset $\mathcal{S}$ of the data points that have the smallest closest distances are used for motion computation. Therefore, the registration error will be \begin{equation}\label{eq:trimmedregistrationerror} E^{Tr}=\sum_{i\in\mathcal{S}}e_i(\mathbf{R},\mathbf{t})^2. \end{equation} To robustify our method with trimming, it is necessary to derive new upper and lower bounds of (\ref{eq:trimmedregistrationerror}). We have the following result. 
\begin{corollary} (Bounds of the trimmed $L_2$ error) The upper bound $\overline{E^{Tr}}$ and lower bound $\textcolor[rgb]{0.00,0.00,0.00}{\underline{E^{Tr}}}$ of the registration error with trimming for the domain $C_r\times C_t$ can be chosen as \begin{eqnarray} &&\overline{E^{Tr}}\doteq\sum_{i\in\mathcal{P}}\overline{e_i}^2, \label{eq:trimmedperpointupperbound}\\ &&\underline{E^{Tr}}\doteq\sum_{i\in \mathcal{Q}}\underline{e_i}^2. \label{eq:trimmedperpointlowerbound} \end{eqnarray} where $\overline{e_i}$, $\underline{e_i}$ are bounds of the per-point residuals defined in (\ref{eq:perpointupperbound}), (\ref{eq:perpointlowerbound}) respectively, and $\mathcal{P}$, $\mathcal{Q}$ are the trimmed point-sets having smallest values of $\overline{e_i}$, $\underline{e_i}$ respectively, with ${|\mathcal{P}|=|\mathcal{Q}|=|\mathcal{S}|=K}$. \end{corollary} \noindent\emph{Proof:} The upper bound in (\ref{eq:trimmedperpointupperbound}) is chosen trivially. To see the validity of the lower bound in (\ref{eq:trimmedperpointlowerbound}), observe that $\forall (\mathbf{r},\mathbf{t})\in C_r\times C_t$, \begin{equation} \underline{E^{Tr}}=\sum_{i\in \mathcal{Q}}\underline{e_i}^2 \leq \sum_{i\in \mathcal{S}}\underline{e_i}^2 \leq \sum_{i\in \mathcal{S}}e_i(\mathbf{R}_\mathbf{r},\mathbf{t})^2 = E^{Tr}\!.\! \end{equation}\qed Based on this corollary, the corresponding bounds in the nested BnB can be readily derived. As proved in \cite{chetverikov2005robust}, iterations of Trimmed ICP decrease the registration error monotonically to a local minimum. Thus it can be directly integrated into the BnB procedure. \vspace{0.06in} \noindent\textbf{Fast Trimming.} A straightforward yet inefficient way to do trimming is to sort the residuals outright and use the $K$ smallest ones. In this paper, we employ \textcolor[rgb]{0.00,0.00,0.00}{the Introspective Selection algorithm}~\cite{musser1997introspective} which has $O(N)$ performance in both the worst case and average case. 
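In NumPy, such a linear-time selection is available through \texttt{np.partition}, whose default method is introselect; a sketch of the trimmed error computation (names are illustrative, not from the released implementation):

```python
import numpy as np

def trimmed_sq_error(residuals, K):
    """Sum of squared residuals over the K smallest values.
    np.partition (introselect) finds the K smallest in linear average time,
    avoiding a full O(N log N) sort of all residuals."""
    smallest = np.partition(residuals, K - 1)[:K]   # K smallest, in no particular order
    return np.sum(smallest ** 2)
```

The same selection is applied to the per-point upper and lower bounds $\overline{e_i}$ and $\underline{e_i}$ to obtain the trimmed bounds above.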
\vspace{0.06in} \noindent\textbf{Other Robust Extensions.} In the same spirit as trimming, other ICP variants such as \cite{champleboux1992accurate,masuda1994robust} can be handled. The method can also be adapted to LM-ICP~\cite{fitzgibbon2003robust}, where the new lower-bound is simply a robust kernelized version of the current one. It may also be extended to ICP variants with $L_p$-norms~\cite{bouaziz2013sparse}, such as the robustness-promoting $L_1$-norm. \section{Experiments}\label{sec:experiment} We implemented the method\footnote{Source code and demo can be found on the author's webpage.} in C++ and tested it on a standard PC with an Intel i7 3.4GHz CPU. In the experiments reported below, the point-sets were pre-normalized such that all the points were within the domain of $[-1,1]^3$. Although the goal was to minimize the $L_2$ error in (\ref{eq:registrationerror}), the root-mean-square (RMS) error is reported for better comprehension. \vspace{0.06in} \noindent\textbf{Closest-Point Distance Computation.} To speed up the closest distance computation, a kd-tree data structure can be used. We also provide an alternative solution that is used more often in the experiments -- a 3D Euclidean Distance Transform (DT)~\cite{fitzgibbon2003robust}, which is used to compute closest distances for fast bound evaluation\footnote{Local ICP is called infrequently so we simply use a kd-tree for it. The refined upper-bounds from the found ICP solutions are evaluated via the DT for consistency. }. A DT approximates the closest-point distances in the real-valued space by distances computed on a uniform grid, and pre-computes them for constant-time retrieval (details about our DT implementation can be found in the supplementary material). Although the DT can introduce approximation errors, so that the convergence gap may not be exactly $\epsilon$, in the following experiments our method works very well with a $300\!\times\!300\!\times\!300$ DT for optimal registration.
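The DT idea can be illustrated with the following Python sketch. The grid construction below is a brute-force stand-in (the actual implementation precomputes a proper Euclidean DT in linear time, as in \cite{fitzgibbon2003robust}); what matters is the constant-time lookup. All names are illustrative, and a small resolution is used here rather than the paper's $300^3$:

```python
import numpy as np

def build_dt(model_points, res=64, lo=-1.0, hi=1.0):
    """Precompute closest-point distances on a res^3 grid covering [lo, hi]^3
    (points are pre-normalized to [-1, 1]^3). Brute force over cell centers;
    a real EDT computes the same table far more efficiently."""
    cell = (hi - lo) / res
    centers = lo + cell * (np.arange(res) + 0.5)
    gx, gy, gz = np.meshgrid(centers, centers, centers, indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)   # res^3 cell centers
    d = np.min(np.linalg.norm(grid[:, None, :] - model_points[None, :, :],
                              axis=2), axis=1)
    return d.reshape(res, res, res)

def dt_lookup(dt, p, res=64, lo=-1.0, hi=1.0):
    """O(1) approximate closest-point distance for a query point p."""
    cell = (hi - lo) / res
    i = np.clip(((np.asarray(p) - lo) / cell).astype(int), 0, res - 1)
    return dt[i[0], i[1], i[2]]
```

The approximation error of a lookup is bounded by the cell diagonal, which is why higher grid resolutions tighten the convergence gap.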
Naturally, higher resolutions can be used when necessary. \begin{figure}[!t] \begin{center} \includegraphics[width=0.425\textwidth]{points_synthetic.pdf} \caption{A cluttered scene (black circles) and the registration results of Go-ICP for the five shapes. \label{fig:syntheticscene}} \end{center} \vspace{-5pt} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=0.49\textwidth]{remainingcubes2d_synthetic.pdf} \caption{Remaining rotation domains of the outer rotation BnB on 2D slices of the $\pi$-ball, for the synthetic points. Results using the DT and the kd-tree are within magenta and green polygons, respectively. The white dots denote optimal rotations. From left to right: a cuboid, a regular tetrahedron and a regular cube. The colors on the slices indicate registration errors evaluated via the inner translation BnB: red for high error and blue for low error. (Best viewed when zoomed in) \label{fig:syntheticrotationslice}} \end{center} \vspace{-5pt} \end{figure} \begin{figure*}[!t] \begin{center} \includegraphics[width=0.995\textwidth]{evolution_reg_bunny.pdf} \caption{Evolution of Go-ICP registration for the bunny dataset. The model point-set and data point-set are shown in red and green respectively. BnB and ICP collaboratively update the registration: ICP refines the solution found by BnB and BnB guides ICP into the convergence basins of multiple local minima with increasingly lower registration errors. \label{fig:regevol}} \end{center} \vspace{-5pt} \end{figure*} \subsection{Optimality}\label{sec:optimality} To verify the correctness of the derived bounds and the optimality of Go-ICP, we first use a convergence condition similar to \cite{hartley2009global} for the BnBs. Specifically, we set the threshold of a BnB to be 0, and specify a smallest cube size at which the BnB stops dividing a cube. In this way, we can examine the uncertainty in the parameter space after the BnB stops. Both the DT and kd-tree are tested in these experiments.
\vspace{0.06in} \noindent\textbf{Synthetic Points.} We first tested the method on a synthetically generated scene with simple objects. Specifically, five 3D shapes were created: an irregular tetrahedron, a cuboid with three different side-lengths, a regular tetrahedron, a regular cube, and a regular octahedron. Note that the latter four shapes have self-symmetries. All the shapes were then placed together, each with a random transformation, to generate cluttered scenes. Zero-mean Gaussian noise with standard deviation $\sigma\!=\!0.01$ was added to the scene points. We created such a scene as shown in Figure~\ref{fig:syntheticscene}, and applied Go-ICP to register the vertices of each shape to the scene points. To test the rotation BnB, we set the parameter domain to be $[-\pi,\pi]^3\times[-1,1]^3$ and the minimal volume of a rotation cube to $1.5\mathrm{E}{-5}$ ($\sim\!\!1$ degree uncertainty). The lower bound of a rotation cube was set to be the global lower bound of the invoked translation BnB. Thus the threshold of the translation BnB is not critical, and we set it to a small value ($0.0001\!\times\!N$, where $N$ is the number of data points). The initial errors $E^*_t$ of the translation BnBs were set to infinity. In all tests, Go-ICP produced correct results with both the DT and kd-tree. The remaining rotation cubes using the DT and kd-tree respectively are almost visually indistinguishable, and Figure~\ref{fig:rotationcubes} shows the results using the DT. It is interesting to see that the remaining cubes formed 1 cluster for the irregular tetrahedron, 4 clusters for the cuboid, 12 clusters for the regular tetrahedron, and 24 clusters for the regular cube and octahedron. These results conform to the geometric properties of these shapes and validate the derived bounds. Investigating shape self-similarity would be a practical application of the algorithm.
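For reference, the angle-axis rotations $\mathbf{R}_\mathbf{r}$ used throughout, and the random poses applied in these tests, can be generated as follows. This is a minimal sketch via Rodrigues' formula (names are illustrative; sampling uniformly in $[-\pi,\pi]^3$ covers the $\pi$-ball's bounding cube, matching the rotation BnB's initial domain, and is not uniform over $SO(3)$):

```python
import numpy as np

def rot_from_axis_angle(r):
    """Rotation matrix R_r from an angle-axis vector r (Rodrigues' formula):
    R = I + sin(theta) K + (1 - cos(theta)) K^2, theta = ||r||."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta                                   # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])              # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

rng = np.random.default_rng(0)
r = rng.uniform(-np.pi, np.pi, 3)   # random rotation parameters
t = rng.uniform(-1.0, 1.0, 3)       # random translation in [-1, 1]^3
R = rot_from_axis_angle(r)
```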
Moreover, Figure~\ref{fig:syntheticrotationslice} shows some typical remaining rotation domains on 2D slices of the rotation $\pi$-ball\footnote{We chose the slices passing through two randomly-selected optimal rotations plus the origin. Due to shape symmetry there may exist more than two optimal rotations on one slice.}. The non-convexity of the problem can be clearly seen from the presence of many local minima. It can also be seen that the remaining rotation domains using a DT and kd-tree are highly consistent, and the optima are well contained by them. The translation BnB can be easily verified by running it with rotations picked from the remaining rotation cubes. The threshold was set to be 0, and the minimal side-length of a translation cube was set to be $0.01$. The last figure of Fig.~\ref{fig:rotationcubes} shows a typical result. More results of remaining rotation and translation cubes can be found in the supplementary material. \begin{figure}[!t] \begin{center} \subfigure{ \includegraphics[width=0.2185\textwidth]{evolution_bound_bunny.pdf}} \subfigure{ \includegraphics[width=0.244\textwidth]{evolution_cube_bunny.pdf}} \vspace{-13pt} \caption{Evolution of the bounds (left) and cubes (right) in the rotation BnB with a DT on the bunny point-sets. See text for details. \label{fig:cubeevol}} \end{center} \vspace{-7pt} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=0.49\textwidth]{remainingcubes2d_bunny.pdf} \caption{Remaining rotation domains of the outer rotation BnB on 2D slices of the $\pi$-ball, for the bunny point-sets. The three slices pass through the optimal rotation and the X-, Y-, Z-axes respectively. See also the caption of Fig.~\ref{fig:syntheticrotationslice}.
(Best viewed when zoomed in) \label{fig:bunnyrotationslice}} \end{center} \vspace{-7pt} \end{figure} \begin{figure*}[!t] \begin{center} \includegraphics[width=1\textwidth]{time_bunnydragon.pdf} \caption{Running time of the Go-ICP method with DTs on the bunny and dragon point-sets with respect to different factors. The evaluation was conducted on 10 data point-sets with 100 random poses (\emph{i.e.}, 1\,000 pairwise registrations). \label{fig:time}} \end{center} \vspace{-8pt} \end{figure*} \vspace{0.06in} \noindent\textbf{Real Data.} Similar experiments were conducted on real data. We applied our method to register a bunny scan bun090 from the Stanford 3D dataset\footnote{\url{http://graphics.stanford.edu/data/3Dscanrep/}} to the reconstructed model. Since model and data point-sets are of similar spatial extents, we set the parameter domain to be $[-\pi,\pi]^3\!\times\![-0.5,0.5]^3$, which is large enough to contain the optimal solution. We randomly sampled 500 data points, and performed similar tests to those on the synthetic points. The translation BnB threshold was set to $0.001\!\times\!N$, and the remaining rotation cubes from the outer rotation BnB were similar to the first figure in Fig.~\ref{fig:rotationcubes} (\emph{i.e.}, one cube cluster). Figure~\ref{fig:bunnyrotationslice} shows the results on three slices of the rotation $\pi$-ball. Additionally, we recorded the bound and cube evolutions in the rotation BnB, which are presented in Fig.~\ref{fig:cubeevol}. It can be seen that BnB and ICP collaboratively update the global upper bound. Corresponding transformations for each global upper bound found by BnB and ICP are shown in Fig.~\ref{fig:regevol}. Note that in the fourth image the pose is already very close to the optimal one, which indicates that ICP may fail even if reasonably good initialization is given.
Although the convergence condition used in this section worked successfully, we found that using a small threshold $\epsilon$ of the bounds to terminate a BnB also works well in practice. It is more efficient and produces satisfactory results. In the following experiments, we used this strategy for the BnBs. \subsection{``Partial'' to ``Full'' Registration}\label{sec:p2f} \begin{figure}[!t] \begin{center} \subfigure{ \includegraphics[width=0.232\textwidth]{time_hist_bunny.pdf}} \subfigure{ \includegraphics[width=0.232\textwidth]{time_hist_dragon.pdf}} \vspace{-10pt} \caption{Running time histograms of Go-ICP with DTs for the bunny (left) and dragon (right) point-sets. \label{fig:time_pose}} \end{center} \vspace{-9pt} \end{figure} In this section, we test the performance of Go-ICP by registering partially scanned point clouds to full 3D model point clouds. The bunny and dragon models from the Stanford 3D dataset were used for evaluation. All 10 partial scans of the bunny dataset were used as data point-sets. For the dragon model, we selected 10 scans generated from different viewpoints as data point-sets. The reconstructed bunny and dragon models were used as model point-sets. For each of these 20 scans, we first performed 100 tests with random initial rotations and translations. The transformation domain to explore for Go-ICP was set to be $[-\pi,\pi]^3\!\times\![-0.5,0.5]^3$. We sampled $N=1000$ data points from each scan, and set the convergence threshold $\epsilon$ to be $0.001\!\times\!N$. As expected, \emph{Go-ICP achieved 100\% correct registration on all the 2\,000 registration tasks on the bunny and dragon models}, with both the DT and kd-tree. All rotation errors were less than 2 degrees and all translation errors were less than 0.01. With a DT, the mean/longest running times of Go-ICP, in the 1\,000 tests on 1\,000 data points and 20\,000--40\,000 model points, were 1.6s/22.3s for bunny and 1.5s/28.9s for dragon.
Figure~\ref{fig:time_pose} shows the running time histograms. The running times with a kd-tree were typically 40--50 times longer than those with the DT. The solutions from using the DT and the kd-tree respectively were highly consistent (the largest rotation difference was below 1 degree). See the supplementary material for detailed results and running time comparisons for the DT and the kd-tree. We then analyzed the running time of the proposed method under various settings using the DT, examining the influence of each factor by varying it while keeping the others fixed. The default factor settings were: number of data points $N\!=\!1000$, no added Gaussian noise (\emph{i.e.} standard deviation $\sigma\!=\!0$), and convergence threshold $\epsilon\!=\!0.001\!\times\!N$. \vspace{0.00in} \noindent\textbf{Effect of Number of Points.} In this experiment, the running time was tested for different numbers of points. Since the DT was used for closest-point distance retrieval, the number of model points does not significantly affect the speed of our method. To test the running time with respect to different numbers of data points, we randomly sampled the data point-set. As presented in Fig.~\ref{fig:time}, the running time manifested a linear trend since closest-point distance retrieval was $O(1)$ and the convergence threshold varied linearly with the number of data points. \vspace{0.05in} \noindent\textbf{Effect of Noise.} We examined how the noise level impacted the running time by adding Gaussian noise to both the data and model point-sets. The registration results on the corrupted bunny point-sets are shown in Fig.~\ref{fig:reg_noise}. We found that, as shown in Fig.~\ref{fig:time}, the running time decreased as the noise level increased (until $\sigma\!=\!0.02$).
This is because the Gaussian noise (especially that added to the model points) smoothed out the function landscape and widened the convergence basin of the global minimum, which made it easier for Go-ICP to find a good solution. \renewcommand*{\thesubfigure}{} \begin{figure}[!t] \begin{center} \captionsetup[subfigure]{labelformat=empty} \subfigure[$\sigma=0.01$]{ \includegraphics[width=0.16\textwidth]{points_bunny_noise1.pdf}}\!\!\! \subfigure[$\sigma=0.02$]{ \includegraphics[width=0.16\textwidth]{points_bunny_noise2.pdf}}\!\!\! \subfigure[$\sigma=0.03$]{ \includegraphics[width=0.16\textwidth]{points_bunny_noise3.pdf}}\!\!\! \vspace{-4pt} \caption{Registration with different levels of Gaussian noise. \label{fig:reg_noise}} \end{center} \vspace{-9pt} \end{figure} \renewcommand*{\thesubfigure}{(\alph{subfigure})} \begin{figure*}[!t] \begin{center} \includegraphics[width=0.98\textwidth]{points_partialoverlap.pdf} \caption{Registration with partial overlap. Go-ICP with the trimming strategy successfully registered the 10 point-set pairs with 100 random relative poses for each of them. The point-sets in red and blue are denoted as point-set $A$ and point-set $B$, respectively. The trimming settings and running times are presented in Table~\ref{tab:time_partial}. \label{fig:partialoverlap}} \end{center} \vspace{-2pt} \end{figure*} \vspace{0.05in} \noindent\textbf{Effect of Convergence Threshold.} We further investigated the running time with respect to the convergence threshold of the BnB loops. We set the threshold $\epsilon$ to depend linearly on $N$, since the registration error is a sum over the $N$ data points. Figure~\ref{fig:time} shows that the smaller the threshold is, the slower our method performs. In our experiments, $\epsilon\!=\!0.001\!\times\!N$ was adequate to get a 100\% success rate for the bunny and dragon point-sets. For cases when the local minima are small or close to the global minimum, the threshold can be set smaller. 
\begin{figure}[!t] \begin{center} \subfigure{ \includegraphics[width=0.136\textwidth]{points_bunny_higherr.pdf}}~~ \subfigure{ \includegraphics[width=0.24\textwidth]{evolution_bound_bunny_higherr.pdf}} \vspace{-5pt} \caption{Registration with high optimal error. \textbf{Left:} Gaussian noise was added to the data point-set to increase the RMS error. \textbf{Right:} the global minimum was found at about 25s with a DT; the remainder of the time was devoted solely to increasing the lower bound. \label{fig:time_error_eg}} \end{center} \vspace{-7pt} \end{figure} \vspace{0.05in} \noindent\textbf{Effect of Optimal Error.} We also tested the running time \emph{w.r.t.} the optimal registration error. To increase the error, Gaussian noise was added to the data point-set \emph{only}. As shown in Fig.~\ref{fig:time}, the running time remained almost constant when the RMS error was less than 0.03. This is because the gap between the global lower bound and the optimal error was less than $\epsilon$. Therefore, the running time depended primarily on when the global minimum was found, that is, the termination depended on the \emph{decrease of the upper bound}. However, it took longer to converge when the final RMS error was higher. Figure~\ref{fig:time_error_eg} shows the bounds evolution for bunny when the RMS error was increased to $\sim\!0.04$. As can be seen, the global minimum was found at about 25s, with the remainder of the time devoted to \emph{increasing the lower bound}. \subsection{Registration with Partial Overlap}\label{sec:outlierexp} \begin{table}[!t] \centering \caption{Running time (in seconds) of Go-ICP with DTs for the registration of the partially overlapping point-sets in Fig.~\ref{fig:partialoverlap}. 100 random relative poses were tested for each point-set pair and 1\,000 data points were used. $\rho$ is the trimming percentage.} 
\label{tab:time_partial} \begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{2}{|c|}{$A$$\rightarrow$$B$} & \multicolumn{2}{|c|}{$B$$\rightarrow$$A$} \\ \cline{2-5} & $\rho$ & \!\!mean/max time\!\! & $\rho$ & \!\!mean/max time\!\! \\ \hline Bunny & 10\% & 0.81 / 10.7 & 10\% & 0.49 / 7.25 \\ \hline Dragon & 20\% & 2.99 / 43.5 & 40\% & 8.72 / 72.4 \\ \hline Buddha & 10\% & 0.71 / 11.3 & 10\% & 0.60 / 14.8 \\ \hline Chef & 20\% & 0.45 / 4.47 & 30\% & 0.52 / 3.79 \\ \hline Dinosaur & 10\% & 2.03 / 23.5 & 10\% & 1.65 / 26.1 \\ \hline Owl & 40\% & 12.5 / 87.5 & 40\% & 13.4 / 75.0 \\ \hline Denture & 30\% & 6.74 / 74.7 & 30\% & 4.24 / 68.1 \\ \hline Room & 30\% & 9.82 / 73.3 & 30\% & 18.4 /107.3 \\ \hline Bowl & 20\% & 3.19 / 20.3 & 30\% & 3.52 / 25.3 \\ \hline Loom & 30\% & 8.64 / 67.2 & 20\% & 5.96 / 44.6 \\ \hline \end{tabular} \vspace{-0pt} \end{table} In this section, we tested the proposed method on partially overlapping point-sets. The data points in regions that are not overlapped by the other model point-set should be treated as outliers, as their correspondences are missing. Trimming was employed to deal with outliers as described in Sec.~\ref{sec:outlier}. We used the 10 point-set pairs shown in Fig.~\ref{fig:partialoverlap} to test Go-ICP with trimming. These point-sets were generated by different scanners and with different noise levels. The bunny, dragon and buddha models are from the Stanford 3D dataset. The chef and dinosaur models are from \cite{mian2006three}. The denture was generated with a structured light 3D scanner\footnote{http://www.david-3d.com/en/support/downloads}. The owl statue is from \cite{bouaziz2013sparse} and the room scans are from \cite{shotton2013scene}. The bowl and loom point-sets were collected by us with a Kinect. The overlapping ratios of the point-set pairs are between $50\%$ and $95\%$. \begin{figure*}[!t] \begin{center} \includegraphics[width=1.0\textwidth]{localization.pdf} \caption{Camera localization experiment. 
\textbf{Left}: 5 (out of 100) color and depth image pairs of the scene. (The color images were not used.) \textbf{Right}: Corresponding registration results. Note that the scene contains many similar structures, and the depth images only cover small portions of the scene, which make the 3D registration tasks very challenging. \label{fig:localization}} \end{center} \vspace{-5pt} \end{figure*} For each of the 10 point-set pairs, we generated 100 random relative poses, and registered the two point-sets to each other. This led to 2\,000 registration tasks. The transformation domain to explore for Go-ICP was set to be $[-\pi,\pi]^3\times[-0.5,0.5]^3$. We chose the trimming percentages $\rho$ as in Table~\ref{tab:time_partial}, sampled $N\!=\!1000$ data points for each registration, and set all the convergence thresholds to $\epsilon\!=\!0.001\times K$ where $K=(1\!-\!\rho)\!\times\!N$. Our method correctly registered the point-sets in all these tasks. All the rotation errors were less than $5$ degrees and translation errors were less than $0.05$ compared to the manually-set ground truths. The running times using DTs are presented in Table~\ref{tab:time_partial}. In general, the method takes longer than in the outlier-free case due to 1) the emergence of additional local minima induced by the outliers and 2) the time-consuming trimming operations. \vspace{0.06in} \noindent\textbf{Choosing trimming percentages.} In these experiments, each parameter $\rho$ was chosen by visually observing the two point-sets and roughly guessing their non-overlapping ratios. The results were not very sensitive to $\rho$ (\emph{e.g.}, setting $\rho$ to $5\%,10\%$ and $20\%$ all led to a successful registration of bunny). If no rough guess is available, one can gradually increase $\rho$ until a measure such as the inlier number or RMS error attains a set value, or apply the automatic overlap estimation proposed in [62]. 
We also plan to test other outlier handling strategies (cf. Sec.~\ref{sec:outlier}) in the future. \subsection{More Applications}\label{sec:exp_application} In this section, we present several additional scenarios where Go-ICP can be applied to achieve global optimality. Future work can extend the method and build complete real-world systems. In the following experiments, the transformation domain for exploration was set to be $[-\pi,\pi]^3\times[-1,1]^3$. \vspace{0.06in} \noindent\textbf{3D Object Localization.} The proposed method is useful for model-based 3D object detection, localization and pose estimation from relatively large scenes. To experimentally verify this, we tested our method on one sequence of the camera localization dataset \cite{shotton2013scene}. Figure~\ref{fig:localization} shows a sample color and depth image pair, and a 3D model of the office scene. Our goal was to estimate the camera poses by registering the point clouds of the depth images onto the 3D scene model. We evenly sampled 100 depth images from the sequence, which was captured by a smoothly moving camera. Each depth image was then downsampled to $400\sim 600$ points. We set our method to seek a solution with a registration error smaller than $0.0001\!\times\!N$, and the method registered the 100 point-sets with a mean/longest running time of 32s/178s using a DT. The rotation errors were all below 5 degrees and the translation errors all below 10cm. Figure~\ref{fig:localization} shows 5 typical registration results. We then used the RGB-D Object Dataset \cite{lai2011large}, with the goal of registering the points of a baseball cap to a point cloud of the scene, as shown in Fig.~\ref{fig:cap}. We sampled $N\!=\!100$ points from the cap model, and set the trimming percentage and threshold to be $\rho\!=\!10\%$ and $\epsilon\!=\!0.00003\!\times\!K$ respectively. Go-ICP successfully localized the cap in 42 seconds with a DT. 
\begin{figure}[!t] \begin{center} \includegraphics[width=0.49\textwidth]{cap.pdf} \caption{3D object localization experiment. \textbf{Left}: a labelled object and its depth image to generate the data point-set. \textbf{Middle}: a scene depth image to generate the model point-set. \textbf{Right}: the registration result. \label{fig:cap}} \end{center} \vspace{-9pt} \end{figure} \begin{figure}[!t] \begin{center} \subfigure{ \includegraphics[width=0.156\textwidth]{calibration_colorimg.pdf}}\!\! \subfigure{ \includegraphics[width=0.155\textwidth]{calibration_reg3d.pdf}}\!\! \subfigure{ \includegraphics[width=0.155\textwidth]{calibration_reg2dproj.pdf}} \caption{RGB-D extrinsic calibration experiment. \textbf{Left}: the color image with extracted line segments for single view 3D reconstruction. \textbf{Middle}: the initial 3D registration (in green), the result of ICP (in cyan) and the result of Go-ICP (in blue) (the lines are for visualization purposes only). \textbf{Right}: the depth image with a projection of the registered 3D points from ICP (in cyan) and Go-ICP (in blue). \label{fig:calibration}} \end{center} \vspace{-9pt} \end{figure} \vspace{0.06in} \noindent\textbf{Camera Extrinsic Calibration.} In the work of Yang \emph{et al}\onedot~\cite{yang2013single}, the sparse point-set from a color camera, obtained by single view 3D reconstruction, was registered onto the dense point-set from a depth camera to obtain the camera relative pose. Figure~\ref{fig:calibration} shows an example where 12 points are reconstructed. We found that ICP often failed to find the correct registration when the pose difference between the cameras was reasonably large. To the best of our knowledge, few methods can perform such \emph{sparse-to-dense registration} reliably without human intervention, due to the difficulty of building putative correspondences. Setting $\epsilon$ to be $0.00001\!\times\!N$, Go-ICP with a DT found the optimal solution in less than 1s. 
Note that the surfaces are not exactly perpendicular to each other. \section{Conclusion}\label{sec:conclusion} We have introduced a globally optimal solution to Euclidean registration in 3D, under the $L_2$-norm closest-point error metric originally defined in ICP. The method is based on the Branch-and-Bound (BnB) algorithm; thus, global optimality is guaranteed regardless of the initialization. The key innovation is the derivation of registration error bounds based on the $SE(3)$ geometry. The proposed Go-ICP algorithm is especially useful when an exactly optimal solution is highly desired or when a good initialization is not reliably available. For practical scenarios where real-time performance is not critical, the algorithm can be readily applied or used as an optimality benchmark. \vspace{-5pt} \ifCLASSOPTIONcompsoc \subsection*{Acknowledgments} \else \subsection*{Acknowledgment} \fi { This work was supported in part by the Natural Science Foundation of China (NSFC) under Grant No. 61375044, and by ARC grants DP120103896 and CE140100016 (ARC Centre of Excellence for Robotic Vision). J. Yang was funded by the Chinese Scholarship Council (CSC) from Sep 2013 to Aug 2015. } \ifCLASSOPTIONcaptionsoff \newpage \fi { \bibliographystyle{IEEEtran}
\section{Introduction} Termination bugs can compromise safety-critical software systems by making them unresponsive, e.g., termination bugs can be exploited in denial-of-service attacks~\cite{CVE}. Termination guarantees are therefore instrumental for software reliability. Termination provers, static analysis tools that aim to construct a termination proof for a given input program, have made tremendous progress. They enable proofs for complex loops that may require linear lexicographic (e.g.~\cite{BG13,LH14}) or non-linear termination arguments (e.g.~\cite{BMS05b}) in a completely automatic way. However, there remain major practical challenges in analysing real-world code. First of all, as observed by~\cite{FKS12}, most approaches in the literature are specialised to linear arithmetic over unbounded mathematical integers. Although unbounded arithmetic may reflect the intuitively-expected program behaviour, the program actually executes over bounded machine integers. The semantics of C allows unsigned integers to wrap around when they over/underflow. Hence, arithmetic on $k$-bit-wide unsigned integers must be performed modulo-$2^k$. According to the C standards, over/underflows of signed integers are undefined behaviour, but in practice they also wrap around on most architectures. Thus, accurate termination analysis requires a \mbox{\emph{bit-precise}} analysis of program semantics. Tools must be configurable with architectural specifications such as the width of data types and endianness. The following examples illustrate that termination behaviour on machine integers can be completely different from that on mathematical integers. For example, the following code: \begin{lstlisting}[numbers=none] void foo1(unsigned n) { for(unsigned x=0; x<=n; x++); } \end{lstlisting} does terminate with mathematical integers, but does \emph{not} terminate with machine integers if \texttt{n} equals the largest unsigned integer. 
On the other hand, the following code: \begin{lstlisting}[numbers=none] void foo2(unsigned x) { while(x>=10) x++; } \end{lstlisting} does not terminate with mathematical integers, but terminates with machine integers because unsigned machine integers wrap around. A second challenge is to make termination analysis scale to larger programs. The yearly Software Verification Competition (SV-COMP)~\cite{DBLP:conf/tacas/Beyer15} includes a division in termination analysis, which reflects a representative picture of the state-of-the-art. The SV-COMP'15 termination benchmarks contain challenging termination problems on small programs with at most 453 instructions \hbox{(average 53)}, at most 7 functions \hbox{(average 3)}, and at most 4 loops \hbox{(average 1)}. In this paper, we present a technique that we have successfully run on programs that are one order of magnitude larger, containing up to 5000 instructions. Larger instances require different algorithmic techniques to scale, e.g., modular interprocedural analysis rather than monolithic analysis. This poses several conceptual and practical challenges that do not arise in monolithic termination analysers. For example, when proving termination of a program, a possible approach is to try to prove that all {\function}s in the program terminate \emph{universally}, i.e., in any possible calling context. However, this criterion is too optimistic, as termination of individual {\function}s often depends on the calling context, i.e., {\function}s terminate \emph{conditionally} only in specific calling contexts. Hence, an interprocedural analysis strategy is to verify universal program termination in a top-down manner by proving termination of each {\function} relative to its \emph{calling contexts}, and propagating upwards which calling contexts guarantee termination of the {\function}. It is too difficult to determine these contexts precisely; analysers thus compute preconditions for termination. 
A \emph{sufficient precondition} identifies those pre-states in which the {\function} will definitely terminate, and is thus suitable for proving termination. By contrast, a \emph{necessary precondition} identifies the pre-states in which the {\function} may terminate. Its negation describes those states in which the {\function} will not terminate, which is useful for proving nontermination. In this paper, we focus on the computation of sufficient preconditions. Preconditions enable information reuse, and thus scalability, as it is frequently possible to avoid repeated analysis of parts of the code base, e.g., libraries whose {\function}s are called multiple times or that did not undergo modifications between successive analysis runs. \paragraph{Contributions:} \begin{compactenum} \item We propose an algorithm for \emph{interprocedural termination analysis}. The approach is based on a template-based static analysis using SAT solving. It combines context-sensitive, summary-based interprocedural analysis with the inference of preconditions for termination based on template abstractions. We focus on non-recursive programs, which cover a large portion of the software written in practice, especially in domains such as embedded systems. \item We provide an implementation of the approach in 2LS\xspace, a static analysis tool for C programs. Our instantiation of the algorithm uses template polyhedra and lexicographic, linear ranking function templates. The analysis is bit-precise and relies purely on SAT-solving techniques. \item We report the results of an experimental evaluation on 597 procedural SV-COMP benchmarks with a total of 1.6 million lines of code that demonstrates the scalability and applicability of the approach to programs with thousands of lines of code. \end{compactenum} \section{Preliminaries}\label{sec:prelim} In this section, we introduce basic notions of interprocedural and termination analysis. 
\paragraph{Program model and notation.} We assume that programs are given in terms of acyclic\footnote{ We consider non-recursive programs with multiple procedures.} call graphs, where individual {\function}s $f$ are given in terms of symbolic input/output transition systems. Formally, the input/output transition system of a {\function} $f$ is a triple $(\mathit{Init}_f,\mathit{Trans}_f,\mathit{Out}_f)$, where $\mathit{Trans}_f({\vec{\x}},{\vec{\x}'})$ is the transition relation; the input relation $\mathit{Init}_f({\vx^{in}}, {\vec{\x}})$ defines the initial states of the transition system and relates it to the inputs ${\vx^{in}}$; the output relation $\mathit{Out}_f({\vec{\x}},{\vx^{out}})$ connects the transition system to the outputs ${\vx^{out}}$ of the {\function}. Inputs are {\function} parameters, global variables, and memory objects that are read by $f$. Outputs are return values, and potential side effects such as global variables and memory objects written by $f$. Internal states ${\vec{\x}}$ are commonly the values of variables at the loop heads in $f$. These relations are given \emph{as first-order logic formulae} resulting from the logical encoding of the program semantics. Fig.~\ref{fig:encoding1} shows the encoding of the two {\function}s in Fig.~\ref{fig:example1} into such formulae.% \footnote{\emph{c?a:b} is the conditional operator, which returns $a$ if $c$ evaluates to $\mathit{true}$, and $b$ otherwise.} The inputs ${\vx^{in}}$ of $f$ are $(z)$ and the outputs ${\vx^{out}}$ consist of the return value denoted $(r_f)$. The transition relation of $h$ encodes the loop over the internal state variables $(x,y)^T$. We may need to introduce Boolean variables $g$ to model the control flow, as shown in $f$. Multiple and nested loops can be similarly encoded in $\mathit{Trans}$. 
Note that we view these formulae as predicates, e.g.\ $\mathit{Trans}({\vec{\x}}, {\vec{\x}'})$, with given parameters ${\vec{\x}}, {\vec{\x}'}$, and mean the substitution $\mathit{Trans}[\vec{a}/\vec{x},\vec{b}/\vec{x}']$ when we write $\mathit{Trans}(\vec{a}, \vec{b})$. Moreover, we write ${\vec{\x}}$ and $x$ with the understanding that the former is a vector, whereas the latter is a scalar. Each call to a {\function} $h$ at call site $i$ in a {\function}~$f$ is modeled by a \emph{placeholder predicate} \hbox{$h_i({\vx^{p\_in}}_i,{\vx^{p\_out}}_i)$} occurring in the formula $\mathit{Trans}_f$ for $f$. The placeholder predicate ranges over intermediate variables representing its actual input and output parameters ${\vx^{p\_in}}_i$ and ${\vx^{p\_out}}_i$, respectively. Placeholder predicates evaluate to $\mathit{true}$, which corresponds to havocking {\function} calls. In {\function} $f$ in Fig.~\ref{fig:encoding1}, the placeholder for the {\function} call to $h$ is $h_0((z),(w_1))$ with the actual input and output parameters $z$ and $w_1$, respectively. 
A full description of the program encoding is given in \rronly{Appendix~\ref{sec:appendix}}. \paragraph{Basic concepts.} Moving on to interprocedural analysis, we introduce formal notation for the basic concepts below: \begin{definition}[Invariants, Summaries, Calling Contexts] \label{def:inv} For a {\function} given by \hbox{$(\mathit{Init},\mathit{Trans}, \linebreak \mathit{Out})$} we define: \begin{itemize} \item An \emph{invariant} is a predicate $\mathit{Inv}$ such that: \[ \begin{array}{rlr} \forall {\vx^{in}}, {\vec{\x}}, {\vec{\x}'}: & \mathit{Init}({\vx^{in}},{\vec{\x}}) \Longrightarrow \mathit{Inv}({\vec{\x}}) \\ \wedge &\mathit{Inv}({\vec{\x}})\wedge\mathit{Trans}({\vec{\x}},{\vec{\x}'})\Longrightarrow \mathit{Inv}({\vec{\x}'}) \end{array} \] \item Given an invariant $\mathit{Inv}$, a \emph{summary} is a predicate $\mathit{Sum}$ such that: % \[ \begin{array}{rl} \forall {\vx^{in}},{\vec{\x}}, {\vec{\x}'},{\vx^{out}}: & \mathit{Init}({\vx^{in}},{\vec{\x}}) \wedge \mathit{Inv}({\vec{\x}'}) \wedge \mathit{Out}({\vec{\x}'},{\vx^{out}}) \\ & \Longrightarrow \mathit{Sum}({\vx^{in}},{\vx^{out}}) \end{array} \] \item Given an invariant $\mathit{Inv}$, the \emph{calling context} for a {\function} call $h$ at call site $i$ in the given {\function} is a predicate $\mathit{CallCtx}_{h_i}$ such that % \[ \begin{array}{l} \forall {\vec{\x}},{\vec{\x}'},{\vx^{p\_in}}_i,{\vx^{p\_out}}_i: \\ \quad \mathit{Inv}({\vec{\x}}) \wedge \mathit{Trans}({\vec{\x}},{\vec{\x}'}) \Longrightarrow \mathit{CallCtx}_{h_i}({\vx^{p\_in}}_i,{\vx^{p\_out}}_i) \end{array} \] \end{itemize} \end{definition} These concepts have the following roles: Invariants abstract the behaviour of loops. Summaries abstract the behaviour of called {\function}s; they are used to strengthen the placeholder predicates. Calling contexts abstract the caller's behaviour w.r.t.\ the {\function} being called. When analysing the callee, the calling contexts are used to constrain its inputs and outputs. 
In Sec.~\ref{sec:overview} we will illustrate these notions on the program in Fig.~\ref{fig:example1}. \begin{figure}[t] \begin{tabular}{l@{\hspace{3em}}l} \begin{lstlisting} unsigned f(unsigned z) { unsigned w=0; if(z>0) w=h(z); return w; } \end{lstlisting} & \begin{lstlisting} unsigned h(unsigned y) { unsigned x; for(x=0; x<10; x+=y); return x; } \end{lstlisting} \end{tabular} \caption{\label{fig:example1} Example. } \vspace{-.25cm} \end{figure} \begin{figure}[t] \footnotesize \begin{tabular}{@{\hspace*{-0.5em}}r@{\,}l} $\mathit{Init}_f((z),(w,z,g)^T) \equiv$ & $(w_0{=}0 \wedge z'{=}z \wedge g)$ \\ $\mathit{Trans}_f((w,z,g)^T,(w',z',g')^T) \equiv$ & $(g \wedge h_0((z),(w_1)) \wedge$\\ & $w {=} (z{>}0 ? w_1{:}w_0) \wedge \neg g')$ \\ $\mathit{Out}_f((w,z,g)^T,(r_f)) \equiv$ & $(r_f{=}w)$ \\[0.5ex] \hline \\[-1.5ex] $\mathit{Init}_h((y),(x,y')^T) \equiv$ & $(x{=}0 \wedge y'{=}y)$ \\ $\mathit{Trans}_h((x,y)^T,(x',y')^T) \equiv$ & $(x'{=}x{+}y \wedge x{<}10 \wedge y{=}y')$ \\ $\mathit{Out}_h((x,y)^T,(r_h)) \equiv$ & $(r_h{=}x \wedge \neg(x{<}10))$ \\ \end{tabular} \caption{\label{fig:encoding1} Encoding of Example~\ref{fig:example1}. } \vspace{-.25cm} \end{figure} Since we want to reason about termination, we need the notions of ranking functions and preconditions for termination. \begin{definition}[Ranking function] \label{def:ranking_function} A \emph{ranking function} for a {\function} $(\mathit{Init},\mathit{Trans},\mathit{Out})$ with invariant $\mathit{Inv}$ is a function $r$ from the set of program states to a well-founded domain such that $ \forall {\vec{\x}},{\vec{\x}'}: \mathit{Inv}({\vec{\x}}) \wedge \mathit{Trans}({\vec{\x}},{\vec{\x}'}) \Longrightarrow r({\vec{\x}}) > r({\vec{\x}'}). $ \end{definition} We denote by $RR({\vec{\x}},{\vec{\x}'})$ a set of constraints that guarantee that $r$ is a ranking function. The existence of a ranking function for a {\function} guarantees its \emph{universal} termination. 
The weakest termination precondition for a {\function} describes the inputs for which it terminates. If it is $\mathit{true}$, the {\function} terminates universally; if it is $\mathit{false}$, then it does not terminate for any input. Since the weakest precondition is intractable to compute or even uncomputable, we under-approximate the precondition. A \emph{sufficient precondition} for termination guarantees that the program terminates for all ${\vx^{in}}$ that satisfy it. \begin{definition}[Precondition for termination]\label{def:precond} Given a {\function} $(\mathit{Init},\mathit{Trans},\mathit{Out})$, a sufficient \emph{precondition for termination} is a predicate $\mathit{Precond}$ such that \[ \begin{array}{rl} \multicolumn{2}{l}{\exists RR,\mathit{Inv}: \forall {\vx^{in}},{\vec{\x}},{\vec{\x}'}:} \\ & \mathit{Precond}({\vx^{in}}) \wedge \mathit{Init}({\vx^{in}},{\vec{\x}}) \Longrightarrow \mathit{Inv}({\vec{\x}}) \\ \wedge & \mathit{Inv}({\vec{\x}}) \wedge \mathit{Trans}({\vec{\x}},{\vec{\x}'}) \Longrightarrow \mathit{Inv}({\vec{\x}'}) \wedge RR({\vec{\x}},{\vec{\x}'}) \end{array} \] \end{definition} Note that $\mathit{false}$ is always a trivial model for $\mathit{Precond}$, but not a very useful one. \section{Overview of the Approach}\label{sec:overview} In this section, we introduce the architecture of our interprocedural termination analysis. Our analysis combines, in a non-trivial synergistic way, the inference of invariants, summaries, calling contexts, termination arguments, and preconditions, which have a concise characterisation in second-order logic (see Definitions~\ref{def:inv}, and~\ref{def:precond}). At the lowest level our approach relies on a solver backend for second-order problems, which is described in Sec.~\ref{sec:sa}. To see how the different analysis components fit together, we now go through the pseudo-code of our termination analyser (Algorithm~\ref{alg:interproc}). 
Function $\mathit{analyze}$ is given the entry {\function} $f_{\mathit{entry}}$ of the program as argument and proceeds in two analysis phases. Phase one is an \emph{over-approximate} forward analysis, given in subroutine $\mathit{analyzeForward}$, which recursively descends into the call graph from the entry point $f_{\mathit{entry}}$. Subroutine $\mathit{analyzeForward}$ infers for each {\function} call in $f$ an over-approximating calling context $\mathit{CallCtx}^o$, using {\function} summaries and other previously-computed information. Before analyzing a callee, the analysis checks whether the callee has already been analysed and whether the stored summary can be re-used, i.e., whether it is compatible with the new calling context $\mathit{CallCtx}^o$. Finally, once summaries for all callees are available, the analysis infers loop invariants and a summary for $f$ itself, which are stored for later re-use by means of a join operator. The second phase is an \emph{under-approximate} backward analysis, subroutine $\mathit{analyzeBackward}$, which infers termination preconditions. Again, we recursively descend into the call graph. Analogous to the forward analysis, we infer for each {\function} call in $f$ an under-approximating calling context $\mathit{CallCtx}^u$ (using under-approximate summaries, as described in Sec.~\ref{sec:interproc}), and recurse only if necessary (Line~\ref{line:bw_recurse}). Finally, we compute the under-approximating precondition for termination (Line~\ref{line:bw_precondterm}). This precondition is inferred w.r.t.~the termination conditions that have been collected: the backward calling context (Line~\ref{line:bw_cond1}), the preconditions for termination of the callees (Line~\ref{line:bw_cond2}), and the termination arguments for $f$ itself (see Sec.~\ref{sec:interproc}). Note that superscripts $o$ and $u$ in predicate symbols indicate over- and under-approximation, respectively. 
\SetAlFnt{\small} \begin{algorithm}[t] \KwGlobal $\mathit{Sums}^o,\mathit{Invs}^o,\mathit{Preconds}^u$\; \Func{$\mathit{analyzeForward}(f,\mathit{CallCtx}^o_f)$}{ \ForEach {{\function} call $h$ in $f$} { $\mathit{CallCtx}^o_h = \mathit{compCallCtx}^o(f,\mathit{CallCtx}^o_f,h)$\; \If{$\mathit{needToReAnalyze}^o(h,\mathit{CallCtx}^o_h)$\label{line:needtoreanalze1}} { $\mathit{analyzeForward}(h,\mathit{CallCtx}^o_h)$\; } } $\mathit{join}^o((\mathit{Sums}^o[f],\mathit{Invs}^o[f]),\mathit{compInvSum}^o(f,\mathit{CallCtx}^o_f))$\label{line:fw_invsum} } \Func{$\mathit{analyzeBackward}(f,\mathit{CallCtx}^u_f)$}{ $\mathit{termConds} = \mathit{CallCtx}^u_f$\label{line:bw_cond1}\; \ForEach {{\function} call $h$ in $f$} { $\mathit{CallCtx}^u_h = \mathit{compCallCtx}^u(f,\mathit{CallCtx}^u_f,h)$\; \If{$\mathit{needToReAnalyze}^u(h,\mathit{CallCtx}^u_h)$\label{line:bw_recurse}} { $\mathit{analyzeBackward}(h,\mathit{CallCtx}^u_h)$\; } $\mathit{termConds} \gets \mathit{termConds} \wedge \mathit{Preconds}^u[h]$\label{line:bw_cond2}\; } $\begin{array}{@{}l@{}l}\mathit{join}^u(&\mathit{Preconds}^u[f],\\ &\mathit{compPrecondTerm}(f,\mathit{Invs}^o[f],\mathit{termConds}); \end{array}$\label{line:bw_precondterm} } \Func{$\mathit{analyze}(f_\mathit{entry})$}{ $\mathit{analyzeForward}(f_\mathit{entry},\mathit{true})$\; $\mathit{analyzeBackward}(f_\mathit{entry},\mathit{true})$\; \Return $\mathit{Preconds}^u[f_\mathit{entry}]$\; } \caption{\label{alg:interproc} $\mathit{analyze}$} \end{algorithm} \paragraph{Challenges.} Our algorithm uses over- and under-approximation in a novel, systematic way. In particular, we address the challenging problem of finding meaningful preconditions: \begin{itemize} \item The precondition Definition~\ref{def:precond} admits the trivial solution $\mathit{false}$ for $\mathit{Precond}$. How do we find a good candidate? 
To this end, we ``bootstrap'' the process with a candidate precondition: a single value of ${\vx^{in}}$, for which we compute a termination argument. The key observation is that the resulting termination argument is typically more general, i.e., it shows termination for many further entry states. The more general precondition is then computed by precondition inference w.r.t.~the termination argument. \item A second challenge is to compute under-approximations. Obviously, the predicates in the definitions in Sec.~\ref{sec:prelim} can be over-approximated by using abstract domains such as intervals. However, there are only a few methods for under-approximating analysis. In this work, we use a method similar to \cite{CGL+08} to obtain under-approximating preconditions w.r.t.~property~$p$: we infer an over-approximating precondition w.r.t.~$\neg p$ and negate the result. In our case, $p$ is the termination condition $\mathit{termConds}$. \end{itemize} \paragraph{Example.} We illustrate the algorithm on the simple example given as Fig.~\ref{fig:example1} with the encoding in Fig.~\ref{fig:encoding1}. \texttt{f} calls a {\function} \texttt{h}. {{Procedure}} \texttt{h} terminates if and only if its argument \texttt{y} is non-zero, i.e., {\function} \texttt{h} only terminates conditionally. The call of \texttt{h} is guarded by the condition \texttt{z>0}, which guarantees universal termination of {\function} \texttt{f}. Let us assume that unsigned integers are 32 bits wide, and we use an interval abstract domain for invariant, summary and precondition inference, but the abstract domain with the elements $\{\mathit{true},\mathit{false}\}$ for computing calling contexts, i.e., we can prove that calls are unreachable. We use $M:=2^{32}{-}1$. Our algorithm proceeds as follows. The first phase is $\mathit{analyzeForward}$, which starts from the entry {\function} \texttt{f}. 
By descending into the call graph, we must compute an over-approximating calling context $\mathit{CallCtx}^o_{h}$ for {\function} \texttt{h} for which no calling context has been computed before. This calling context is $\mathit{true}$. Hence, we recursively analyse \texttt{h}. Given that \texttt{h} does not contain any {\function} calls, we compute the over-approximating summary $\mathit{Sum}^o_h=(0{\leq} y{\leq} M \wedge 0 {\leq} r_h{\leq} M)$ and invariant $\mathit{Inv}^o_h=(0{\leq} x{\leq} M \wedge 0{\leq} y{\leq} M)$. Now, this information can be used in order to compute $\mathit{Sum}^o_f=(0{\leq} z{\leq} M \wedge 0 {\leq} r_f{\leq} M)$ and invariant $\mathit{Inv}^o_f=\mathit{true}$ for the entry {\function} \texttt{f}. The backwards analysis starts again from the entry {\function} \texttt{f}. It computes an under-approximating calling context $\mathit{CallCtx}^u_{h}$ for {\function} \texttt{h}, which is $\mathit{true}$, before descending into the call graph. It then computes an under-approximating precondition for termination $\mathit{Precond}^u_{h} = (1{\leq} y{\leq} M)$ or, more precisely, an under-approximating summary whose projection onto the input variables of \texttt{h} is the precondition $\mathit{Precond}^u_{h}$. By applying this summary at the call site of \texttt{h} in \texttt{f}, we can now compute the precondition for termination $\mathit{Precond}^u_{f} = (0{\leq} z{\leq} M)$ of \texttt{f}, which proves universal termination of~\texttt{f}. We illustrate the effect of the choice of the abstract domain on the analysis of the example program. Assume we replace the $\{\mathit{true},\mathit{false}\}$ domain by the interval domain. In this case, $\mathit{analyzeForward}$ computes $\mathit{CallCtx}^o_{h}=(1{\leq} z{\leq} M\wedge 0{\leq} w_1{\leq} M)$. The calling context is computed over the actual parameters $z$ and $w_1$. 
It is renamed to the formal parameters $y$ and $r_h$ (the return value) when $\mathit{CallCtx}^o_{h}$ is used for constraining the pre/postconditions in the analysis of \texttt{h}. Subsequently, $\mathit{analyzeBackward}$ computes the precondition for termination of \texttt{h} using the union of all calling contexts in the program. Since \texttt{h} terminates unconditionally in these calling contexts, we trivially obtain $\mathit{Precond}^u_{h} = (1{\leq} y{\leq} M)$, which in turn proves universal termination of~\texttt{f}. \section{Interprocedural Termination Analysis} \label{sec:interproc} We can view Alg.~\ref{alg:interproc} as solving a series of formulae in second-order predicate logic with existentially quantified predicates, for which we are seeking satisfiability witnesses.% \footnote{ To be precise, we are not only looking for witness predicates but (good approximations of) weakest or strongest predicates. Finding such biased witnesses is a feature of our synthesis algorithms.} In this section, we state the constraints we solve, including all the side constraints arising from the interprocedural analysis. Note that this is not a formalisation exercise, but these are precisely the formulae solved by our synthesis backend, which is described in Section~\ref{sec:sa}. 
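To make the notion of a satisfiability witness tangible, the following brute-force sketch checks a candidate predicate against an invariant-style constraint of the kind stated below (cf.\ Def.~\ref{prop:callctxo}) over a toy domain; a real solver synthesises such witnesses rather than checking guesses, and both the domain and the toy loop are invented for illustration.

```python
# Brute-force check that a candidate predicate is a satisfiability witness
# of a second-order constraint of the invariant kind: it must hold in all
# initial states and be closed under the transition relation.

DOM = range(64)                                # 64-element toy domain
init  = lambda x: x == 0                       # Init(x)
trans = lambda x, x2: x < 10 and x2 == x + 1   # Trans(x, x')

def is_witness(inv):
    base = all(inv(x) for x in DOM if init(x))
    step = all(inv(x2) for x in DOM for x2 in DOM
               if inv(x) and trans(x, x2))
    return base and step

print(is_witness(lambda x: 0 <= x <= 10))      # inductive: a witness
print(is_witness(lambda x: 0 <= x <= 5))       # not closed under Trans
```

Note that $\mathit{true}$ is always a witness here; the synthesis described in Sec.~\ref{sec:sa} is biased towards stronger (more informative) witnesses.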
\subsection{Universal Termination}\label{sec:univterm} \SetAlFnt{\small} \begin{algorithm}[t] \KwGlobal $\mathit{Sums}^o,\mathit{Invs}^o,\mathit{termStatus}$\; \Func{$\mathit{analyzeForward}(f,\mathit{CallCtx}^o_f)$}{ \ForEach {{\function} call $h$ in $f$} { $\mathit{CallCtx}^o_h = \underline{\mathit{compCallCtx}^o}(f,\mathit{CallCtx}^o_f,h)$\; \If{$\mathit{needToReAnalyze}^o(h,\mathit{CallCtx}^o_h)$} { $\mathit{analyzeForward}(h,\mathit{CallCtx}^o_h)$\; } } $\mathit{join}^o((\mathit{Sums}^o[f],\mathit{Invs}^o[f]),\underline{\mathit{compInvSum}^o}(f,\mathit{CallCtx}^o_f))$ } \Func{$\mathit{analyzeBackward}'(f)$}{ $\mathit{termStatus}[f] = \underline{\mathit{compTermArg}}(f)$\; \ForEach {{\function} call $h$ in $f$} { \If{$\mathit{needToReAnalyze}^u(h,\mathit{CallCtx}^o_h)$} { $\mathit{analyzeBackward}'(h)$\; $\mathit{join}(\mathit{termStatus}[f],\mathit{termStatus}[h])$\label{line:join2}\; } } } \Func{$\mathit{analyze}(f_\mathit{entry})$}{ $\mathit{analyzeForward}(f_\mathit{entry},\mathit{true})$\; $\mathit{analyzeBackward}'(f_\mathit{entry})$\; \Return $\mathit{termStatus}[f_\mathit{entry}]$\; } \caption{\label{alg:interprocuniversal} $\mathit{analyze}$ for universal termination} \end{algorithm} For didactic purposes, we start with a simplification of Algorithm~\ref{alg:interproc} that is able to show universal termination (see Algorithm~\ref{alg:interprocuniversal}). This variant reduces the backward analysis to a call to $\mathit{compTermArg}$ and to propagating back the qualitative result obtained: \emph{terminating, potentially non-terminating}, or \emph{non-terminating}.
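The status propagation performed by $\mathit{analyzeBackward}'$ and formalised later in Prop.~\ref{prop:ipterm} can be sketched as follows; the per-{\function} verdicts are supplied as made-up inputs here, whereas 2LS obtains them from $\mathit{compTermArg}$ and the $\mathit{Sum}^o{=}\mathit{false}$ check.

```python
# Sketch of the status propagation over an acyclic call graph.
# Local verdicts ('term', 'nonterm', 'unknown') stand in for the
# analyses described in the text; graph and verdicts are invented inputs.

def propagate(local_status, callee_statuses):
    if local_status == 'nonterm':            # summary is false: never terminates
        return 'nonterm'
    if local_status == 'term' and all(s == 'term' for s in callee_statuses):
        return 'term'                        # f and all reachable callees terminate
    return 'unknown'                         # potentially non-terminating

def analyze(f, calls, local, memo):
    if f not in memo:
        callee = [analyze(h, calls, local, memo) for h in calls[f]]
        memo[f] = propagate(local[f], callee)
    return memo[f]

calls = {'f': ['h'], 'h': []}
print(analyze('f', calls, {'f': 'term', 'h': 'term'}, {}))
print(analyze('f', calls, {'f': 'term', 'h': 'unknown'}, {}))
```

The memoisation mirrors the intent of the real algorithm: each {\function} is analysed once per (joined) calling context, not once per call site.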
This section states the constraints that are solved to compute the outcome of the functions underlined in Algorithm~\ref{alg:interprocuniversal} and establish its soundness: \begin{compactitem} \item $\mathit{compCallCtx}^o$ (Def.~\ref{prop:callctxo}) \item $\mathit{compInvSum}^o$ (Def.~\ref{prop:summaryo}) \item $\mathit{compTermArg}$ (Lemma~\ref{prop:term1}) \end{compactitem} \begin{definition}[$\mathit{compCallCtx}^o$]\label{prop:callctxo} A forward calling context $\mathit{CallCtx}^o_{h_i}$ for {\function} call $h_i$ in {\function} $f$ in calling context $\mathit{CallCtx}^o_f$ is a satisfiability witness of the following formula: $$ \begin{array}{r@{}r@{}l} \multicolumn{3}{l}{\exists \mathit{CallCtx}^o_{h_i}, \mathit{Inv}^o_f: \forall {\vx^{in}},{\vec{\x}},{\vec{\x}'},{\vx^{out}},{\vx^{p\_in}}_i,{\vx^{p\_out}}_i:} \\ \multicolumn{3}{l}{~~\mathit{CallCtx}^o_f({\vx^{in}},{\vx^{out}}) \wedge \mathit{Sums}^o_f \rronly{ \wedge \mathit{Assumptions}_f({\vec{\x}})} \Longrightarrow} \\ & \big( & \mathit{Init}_f({\vx^{in}},{\vec{\x}}) \Longrightarrow \mathit{Inv}^o_f({\vec{\x}}) \big) \\ &~~\wedge\big( & \mathit{Inv}^o_f({\vec{\x}}) \wedge \mathit{Trans}_f({\vec{\x}},{\vec{\x}'}) \\ &&\Longrightarrow \mathit{Inv}^o_f({\vec{\x}'}) \wedge (g_{h_i} \Rightarrow\mathit{CallCtx}^o_{h_i}({\vx^{p\_in}}_i,{\vx^{p\_out}}_i))\big) \end{array} $$ \end{definition} $\begin{array}{@{}rl} \text{with }\mathit{Sums}^o_f = \bigwedge_{\text{calls }h_j\text{ in }f} & g_{h_j} \Longrightarrow \\ &\mathit{Sums}^o[h]({\vx^{p\_in}}_j,{\vx^{p\_out}}_j) \end{array} $\\ where $g_{h_j}$ is the guard condition of {\function} call $h_j$ in $f$ capturing the branch conditions from conditionals. For example, $g_{h_0}$ of the {\function} call to \texttt{h} in \texttt{f} in Fig.~\ref{fig:example1} is $z>0$. $\mathit{Sums}^o[h]$ is the currently available summary for \texttt{h} (cf.\ global variables in Alg.~\ref{alg:interproc}).
\rronly{Assumptions correspond to \texttt{assume()} statements in the code.} \begin{lemma}\label{lemma:callctxo} $\mathit{CallCtx}^o_{h_i}$ is over-approximating. \end{lemma} \begin{proof} $\mathit{CallCtx}^o_f$ when $f$ is the entry-point {\function} is $\mathit{true}$; also, the summaries $\mathit{Sum}^o_{h_j}$ are initially assumed to be $\mathit{true}$, i.e.~over-appro\-xi\-ma\-ting. Hence, given that $\mathit{CallCtx}^o_f$ and $\mathit{Sums}^o_f$ are over-approxi\-ma\-ting, $\mathit{CallCtx}^o_{h_i}$ is over-approximating by the soundness of the synthesis (see Thm.~\ref{thm:synthsound} in Sec.~\ref{sec:sa}). \end{proof} \paragraph{Example.} Let us consider {\function} \texttt{f} in Fig.~\ref{fig:example1}. \texttt{f} is the entry {\function}, hence we have $\mathit{CallCtx}^o_{f}((z),(r_{f})) = \mathit{true}$ ($=(0 {\leq} z {\leq} M \wedge 0 {\leq} r_{f} {\leq} M)$ with $M:=2^{32}{-}1$ when using the interval abstract domain for 32-bit integers). Then, we instantiate Def.~\ref{prop:callctxo} (for {\function} \texttt{f}) to compute $\mathit{CallCtx}^o_{h_0}$. We assume that we have not yet computed a summary for \texttt{h}, thus $\mathit{Sum}^o_h$ is $\mathit{true}$. Remember that the placeholder $h_0((z),(w_1))$ evaluates to $\mathit{true}$. \rronly{Notably, there are no assumptions in the code, meaning that $\mathit{Assumptions}_{f}(z) = \mathit{true}$.} \medskip \centerline{ $ \begin{array}{@{}r@{}r@{}l} \multicolumn{3}{l}{\exists \mathit{CallCtx}^o_{h_0}, \mathit{Inv}^o_{f}: \forall z,w_1,w,w',z',g,g',r_{f} :} \\ \multicolumn{3}{l}{~~0 {\leq} z {\leq} M \wedge 0 {\leq} r_f {\leq} M \wedge (z{>}0 \Longrightarrow \mathit{true}) \rronly{ \wedge \mathit{true}} \Longrightarrow} \\ & \big( & w{=}0 \wedge z'{=}z \wedge g \Longrightarrow \mathit{Inv}^o_f((w,z,g)^T) \big) \\ &~~\wedge\big( & \mathit{Inv}^o_f((w,z,g)^T) \wedge \\ &&g \wedge h_0((z),(w_1)) \wedge w' {=} (z{>}0 ?
w_1{:}w) \wedge z'{=}z\wedge \neg g' \\ &&\Longrightarrow \mathit{Inv}^o_f((w',z',g')^T) \wedge (z{>}0 \Rightarrow\mathit{CallCtx}^o_{h_0}((z),(w_1)))\big) \end{array} $ } \smallskip A solution is $\mathit{Inv}^o_{f} = \mathit{true}$, and $\mathit{CallCtx}^o_{h_0}((z),(w_1)) =(1{\leq} z{\leq} M \wedge 0{\leq} w_1{\leq} M)$. \begin{definition}[$\mathit{compInvSum}^o$]\label{prop:summaryo} A forward summary $\mathit{Sum}^o_f$ and invariants $\mathit{Inv}^o_f$ for {\function} $f$ in calling context $\mathit{CallCtx}^o_f$ are satisfiability witnesses of the following formula: \medskip \centerline{ $ \begin{array}{r@{}r@{}l} \multicolumn{3}{l}{\exists \mathit{Sum}^o_f, \mathit{Inv}^o_f: \forall {\vx^{in}},{\vec{\x}},{\vec{\x}'},{\vec{\x}''},{\vx^{out}}:} \\ \multicolumn{3}{l}{~~\mathit{CallCtx}^o_f({\vx^{in}},{\vx^{out}}) \wedge \mathit{Sums}^o_f \rronly{ \wedge \mathit{Assumptions}_f({\vec{\x}})} \Longrightarrow} \\ & \big( & \mathit{Init}_f({\vx^{in}},{\vec{\x}}) \wedge \mathit{Inv}^o_f({\vec{\x}''}) \wedge \mathit{Out}_f({\vec{\x}''},{\vx^{out}}) \\ &&\Longrightarrow \mathit{Inv}^o_f({\vec{\x}}) \wedge \mathit{Sum}^o_f({\vx^{in}},{\vx^{out}}) \big) \\ &~~\wedge\big( & \mathit{Inv}^o_f({\vec{\x}}) \wedge \mathit{Trans}_f({\vec{\x}},{\vec{\x}'}) \Longrightarrow \mathit{Inv}^o_f({\vec{\x}'})\big) \end{array} $} \end{definition} \begin{lemma}\label{lemma:summaryo} $\mathit{Sum}^o_f$ and $\mathit{Inv}^o_f$ are over-approximating. \end{lemma} \begin{proof} By Lemma~\ref{lemma:callctxo}, $\mathit{CallCtx}^o_f$ is over-approximating. Also, the summaries $\mathit{Sums}^o_f$ are initially assumed to be $\mathit{true}$, i.e. over-approxi\-ma\-ting. Hence, given that $\mathit{CallCtx}^o_f$ and $\mathit{Sums}^o_f$ are over-approxi\-ma\-ting, $\mathit{Sum}^o_f$ and $\mathit{Inv}^o_f$ are over-approximating by the soundness of the synthesis (Thm.~\ref{thm:synthsound}). \end{proof} \paragraph{Example.} Let us consider {\function} \texttt{h} in Fig.~\ref{fig:example1}.
We have computed $\mathit{CallCtx}^o_{h_0}((y),(r_h)) = (1{\leq} y{\leq} M \wedge 0{\leq} r_h{\leq}M)$ (with actual parameters renamed to formal ones). Then, we need to obtain witnesses $\mathit{Inv}^o_{h}$ and $\mathit{Sum}^o_{h}$ to the satisfiability of the instantiation of Def.~\ref{prop:summaryo} (for {\function} \texttt{h}) as given below. \medskip \centerline{ $ \begin{array}{r@{}r@{}l} \multicolumn{3}{l}{\exists \mathit{Inv}^o_{h}, \mathit{Sum}^o_{h}: \forall y,x,x',y',x'',y'',r_h:} \\ \multicolumn{3}{l}{~~1 {\leq} y {\leq} M \wedge 0{\leq} r_h{\leq}M \wedge \mathit{true} \Longrightarrow}\\ & \big( & (x{=}0 \wedge y'{=}y) \wedge \mathit{Inv}^o_{h}((x'',y'')^T) \wedge (r_h{=}x'' \wedge \neg(x''{<}10))\\ && \Longrightarrow \mathit{Inv}^o_{h}((x,y')^T) \wedge \mathit{Sum}^o_{h}((y),(r_h))\big) \\ & \quad \wedge \big( & \mathit{Inv}^o_{h}((x,y)^T) \wedge (x'{=}x{+}y \wedge x{<}10 \wedge y'{=}y) \\ &&\Longrightarrow \mathit{Inv}^o_{h}((x',y')^T)\big) \end{array} $} A solution is $\mathit{Inv}^o_{h} =(0{\leq} x{\leq} M \wedge 1{\leq} y{\leq} M)$ and $\mathit{Sum}^o_{h} =(1{\leq} y{\leq} M \wedge 10{\leq} r_h{\leq} M)$, for instance. \begin{remark}\label{rem:ipfixpoint} Since Def.~\ref{prop:callctxo} and Def.~\ref{prop:summaryo} are interdependent, we can compute them iteratively until a fixed point is reached in order to improve the precision of calling contexts, invariants and summaries. However, for efficiency reasons, we perform only the first iteration of this (greatest) fixed point computation.
\end{remark} \begin{lemma}[$\mathit{compTermArg}$]\label{prop:term1} A {\function} $f$ with forward invariants $\mathit{Inv}^o_f$ terminates if there is a termination argument $RR_f$: $$ \begin{array}{ll@{}l} \multicolumn{3}{l}{\exists RR_f: \forall {\vec{\x}},{\vec{\x}'}:} \\ && \mathit{Inv}^o_f({\vec{\x}}) \wedge \mathit{Trans}_f({\vec{\x}},{\vec{\x}'}) \wedge \rronly{\\ &&} \mathit{Sums}^o_f \wedge \rronly{\mathit{Assumptions}_f({\vec{\x}}) \wedge} \mathit{Assertions}_f({\vec{\x}}) \\ && \Longrightarrow RR_f({\vec{\x}},{\vec{\x}'}) \end{array} $$ \end{lemma} Assertions in this formula correspond to \texttt{assert()} statements in the code. They can be assumed to hold because assertion-violating traces terminate. Over-approximating forward information may lead to the inclusion of spurious non-terminating traces. For that reason, we might not find a termination argument although the {\function} is terminating. As we essentially under-approximate the set of terminating {\function}s, we will not give false positives. Regarding the solving algorithm for this formula, we refer to Sec.~\ref{sec:sa}. \paragraph{Example.} Let us consider {\function} \texttt{h} in Fig.~\ref{fig:example1}. Assume we have the invariant $0 {\leq} x {\leq} M \wedge 1 {\leq} y {\leq} M$. Thus, we have to solve $$ \begin{array}{l} \exists RR_h: \forall x,y,x',y': 0 {\leq} x {\leq} M \wedge 1 {\leq} y {\leq} M \wedge x'{=}x{+}y \wedge x{<}10 \wedge y'{=}y \wedge\\ \qquad \qquad \mathit{true} \wedge \mathit{true} \Longrightarrow RR_h((x,y),(x',y')) \end{array} $$ When using a linear ranking function template $c_1\cdot x+c_2\cdot y$, we obtain as solution, for example, $RR_h = (-x{>}-x')$.
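As a sanity check, both the summary computed above and this ranking argument can be confirmed by brute force on a hypothetical scaled-down machine with 5-bit words ($M=31$), which makes exhaustive enumeration cheap; the ranking check deliberately uses mathematical integers, since wrap-around in $x{+}y$ is exactly the subtlety deferred to Sec.~\ref{sec:impl}.

```python
# Brute-force cross-check of Sum^o_h and RR_h on a scaled-down machine:
# we assume 5-bit words (M = 31) instead of 32-bit ones; h is the
# reconstruction of Fig. 1 used in the earlier example.

M = 2**5 - 1

def run_h(y):
    x, seen = 0, set()
    while x < 10:
        if x in seen:
            return None              # a state repeats: h(y) diverges
        seen.add(x)
        x = (x + y) & M              # wrap-around addition
    return x                         # Out: r_h = x

# Summary check: for 1 <= y <= M, h terminates with 10 <= r_h <= M.
assert all(run_h(y) is not None and 10 <= run_h(y) <= M
           for y in range(1, M + 1))
assert run_h(0) is None              # h(0) diverges, as claimed

# Ranking check for RR_h = (-x > -x'), i.e. x' > x, over mathematical
# integers: every loop iteration strictly increases x.
assert all(x + y > x for x in range(10) for y in range(1, M + 1))
print("summary and ranking argument confirmed on the 5-bit model")
```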
\smallskip If there is no trace from {\function} entry to exit, then we can prove non-termination, even when using over-appro\-xi\-mations: \begin{lemma}[line~\ref{line:fw_invsum} of $\mathit{analyze}$]\label{prop:nonterm} A {\function} $f$ in forward calling context $\mathit{CallCtx}^o_f$ and with forward invariants $\mathit{Inv}^o_f$ never terminates if its summary $\mathit{Sum}^o_f$ is $\mathit{false}$. \end{lemma} Termination information is then propagated in the (acyclic) call graph ($\mathit{join}$ in line~\ref{line:join2} in Algorithm~\ref{alg:interprocuniversal}): \begin{proposition}\label{prop:ipterm} A {\function} is declared \begin{compactenum} \item[(1)] \emph{non-terminating} if it is non-terminating by Lemma~\ref{prop:nonterm}. \item[(2)] \emph{terminating} if \begin{compactenum} \item[(a)] all its {\function} calls $h_i$ that are potentially reachable (i.e.\ with $\mathit{CallCtx}^o_{h_i}\neq \mathit{false}$) are declared terminating, and \item[(b)] $f$ itself is terminating according to Lemma~\ref{prop:term1}; \end{compactenum} \item[(3)] \emph{potentially non-terminating}, otherwise. \end{compactenum} \end{proposition} Our implementation is more efficient than Algorithm~\ref{alg:interprocuniversal} because it avoids computing a termination argument for $f$ if one of its callees is potentially non-terminating. \begin{theorem} If the entry {\function} of a program is declared terminating, then the program terminates universally. If the entry {\function} of a program is declared non-terminating, then the program never terminates. \end{theorem} \begin{proof} By induction over the acyclic call graph using Prop.~\ref{prop:ipterm}. \end{proof} \subsection{Preconditions for Termination}\label{sec:precond} Before introducing conditional termination, we first discuss preconditions for termination.
If a {\function} terminates only conditionally, like {\function} \texttt{h} in Fig.~\ref{fig:example1}, then $\mathit{compTermArg}$ (Lemma~\ref{prop:term1}) will not be able to find a satisfying predicate $RR$. However, we would like to know under which preconditions, i.e.\ for which values of \texttt{y} in the above example, the {\function} terminates. \begin{algorithm}[t] \KwIn{{\function} $f$ with invariant $\mathit{Inv}$, additional termination conditions $\mathit{termConds}$ } \KwOut{precondition $\mathit{Precond}$} $(\mathit{Precond},p) \gets (\mathit{false},\mathit{true})$\; \KwLet $\varphi = \mathit{Init}({\vx^{in}},{\vec{\x}}) \wedge \mathit{Inv}({\vec{\x}})$\; \While{$\mathit{true}$}{ $\psi \gets p \wedge \neg \mathit{Precond}({\vx^{in}}) \wedge \varphi$\; solve $\psi$ for ${\vx^{in}},{\vec{\x}},{\vec{\x}'}$\label{line:bootstrap}\; \lIf{UNSAT}{\Return{$\mathit{Precond}$}} \Else{ \KwLet ${\vxv^{in}}$ be a model of $\psi$\; \KwLet $\mathit{Inv}' = \mathit{compInv}(f,{\vx^{in}}{=}{\vxv^{in}})$\label{line:invcand}\; \KwLet $\mathcal{RR} = \mathit{compTermArg}(f,\mathit{Inv}')$\label{line:termcand}\; \lIf{$\mathcal{RR} = \mathit{true}$}{ $p \gets p \wedge ({\vx^{in}}\neq {\vxv^{in}})$\label{line:blockcand}} \Else{ \KwLet $\theta = \mathit{termConds} \wedge \mathcal{RR}$\; \KwLet $\mathit{Precond}' = \neg\mathit{compNecPrecond}(f,\neg\theta)$\label{line:necprecond}\; $\mathit{Precond} \gets \mathit{Precond} \vee\mathit{Precond}'$\label{line:addprecond}\; } } } \caption{\label{alg:compPrecond}$\mathit{compPrecondTerm}$} \end{algorithm} We can state this problem as defined in Def.~\ref{def:precond}. In Algorithm~\ref{alg:compPrecond} we search for $\mathit{Precond}$, $\mathit{Inv}$, and $RR$ in an interleaved manner. Note that $\mathit{false}$ is a trivial solution for $\mathit{Precond}$; we thus have to aim at finding a good under-approximation of the maximal solution (weakest precondition) for $\mathit{Precond}$.
We bootstrap the process by assuming $\mathit{Precond}=\mathit{false}$ and search for values of ${\vx^{in}}$ (Line~\ref{line:bootstrap}). If such a value ${\vxv^{in}}$ exists, we can compute an invariant under the precondition candidate ${\vx^{in}}={\vxv^{in}}$ (Line~\ref{line:invcand}) and use Lemma~\ref{prop:term1} to search for the corresponding termination argument (Line~\ref{line:termcand}). If we fail to find a termination argument ($\mathcal{RR}=\mathit{true}$), we block the precondition candidate (Line~\ref{line:blockcand}) and restart the bootstrapping process. Otherwise, the algorithm returns a termination argument $\mathcal{RR}$ that is valid for the concrete value ${\vxv^{in}}$ of ${\vx^{in}}$. Now we need to find a sufficiently weak $\mathit{Precond}$ for which $\mathcal{RR}$ guarantees termination. To this end, we compute an over-approximating precondition for those inputs for which we cannot guarantee termination ($\neg\theta$ in Line~\ref{line:necprecond}, which includes additional termination conditions coming from the backward calling context and preconditions of {\function} calls, see Sec.~\ref{sec:condterm}). The negation of this precondition is an under-approximation of those inputs for which $f$ terminates. Finally, we add this negated precondition to our $\mathit{Precond}$ (Line~\ref{line:addprecond}) before we start over the bootstrapping process to find precondition candidates outside the current precondition ($\neg \mathit{Precond}$) for which we might be able to guarantee termination. \paragraph{Example.} Let us consider again function \texttt{h} in Fig.~\ref{fig:example1}. This time, we will assume we have the invariant $0 \leq x \leq M$ (with $M:=2^{32}-1$). We bootstrap by assuming $\mathit{Precond}=\mathit{false}$ and searching for values of $y$ satisfying $\mathit{true} \wedge \neg\mathit{false} \wedge x{=}0 \wedge 0\leq x\leq M$. One possibility is $y=0$. We then compute the invariant under the precondition $y=0$ and get $x=0$. 
Obviously, we cannot find a termination argument in this case. Hence, we start over and search for values of $y$ satisfying $y\neq 0 \wedge \neg\mathit{false} \wedge x{=}0 \wedge 0{\leq} x{\leq} M$. This formula is for instance satisfied by $y=1$. This time we get the invariant $0 {\leq} x {\leq} 10$ and the ranking function $-x$. Thus, we have to solve \smallskip \centerline{ $ \begin{array}{l} \exists \vec{e}: \forall x,x',y: \mathcal{P}(y, \vec{e}) \wedge 0{\leq} x{\leq} M \wedge x'{=}x{+}y \wedge x{<}10\\ \qquad \qquad \Rightarrow \neg (-x {>} -x') \end{array} $ } \smallskip to compute an over-approximating precondition over the template $\mathcal{P}$. In this case, $\mathcal{P}(y, \vec{e})$ turns out to be $y = 0$, therefore its negation $y \neq 0$ is the $\mathit{Precond}$ that we get. Finally, we have to check for further precondition candidates, but $y\neq 0 \wedge \neg(y\neq 0) \wedge x{=}0 \wedge 0{\leq} x{\leq} M$ is obviously UNSAT. Hence, we return the sufficient precondition for termination $y\neq 0$. \subsection{Conditional Termination}\label{sec:condterm} We now extend the formalisation to Algorithm~\ref{alg:interproc}, which additionally requires the computation of under-approximating calling contexts and sufficient preconditions for termination (procedure $\mathit{compPrecondTerm}$, see Alg.~\ref{alg:compPrecond}). First, $\mathit{compPrecondTerm}$ computes in line~\ref{line:invcand} an over-approx\-ima\-ting invariant $\mathit{Inv}^{o}_{fp}$ under the candidate precondition. $\mathit{Inv}^{o}_{fp}$ is computed through Def.~\ref{prop:summaryo} by conjoining the candidate precondition to the antecedent. Then, line~\ref{line:termcand} computes the corresponding termination argument $RR_f$ by applying Lemma~\ref{prop:term1} using $\mathit{Inv}^o_{fp}$ instead of $\mathit{Inv}^o_{f}$. Since the termination argument is under-approximating, we are sure that $f$ terminates for this candidate precondition if $RR_f\neq\mathit{true}$.
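The interplay of candidate sampling, blocking, and generalisation in Alg.~\ref{alg:compPrecond} can be mimicked for the running example with the solver calls replaced by brute force; the input range, the bounded-execution termination test, and the naive generalisation step are all simplifying assumptions, not what 2LS does internally.

```python
# Solver-free sketch of the bootstrapping loop of compPrecondTerm for h.

YS = range(0, 32)                     # toy range of inputs y

def terminates(y, bound=100):
    x = 0
    for _ in range(bound):
        if not (x < 10):
            return True
        x = x + y                     # mathematical integers here
    return False

def comp_precond_term():
    precond, blocked = set(), set()   # Precond as an explicit set of inputs
    while True:
        # bootstrap: pick a candidate input outside the current precondition
        cands = [y for y in YS if y not in precond and y not in blocked]
        if not cands:                 # UNSAT case: no candidates left
            return precond
        y0 = cands[0]
        if not terminates(y0):
            blocked.add(y0)           # no termination argument: block y0
        else:
            # generalise y0: keep every input sharing its terminating behaviour
            precond |= {y for y in YS if terminates(y)}

print(len(comp_precond_term()))
```

On this toy range the loop first blocks $y=0$, then generalises $y=1$ to all non-zero inputs, mirroring the two rounds of the example above.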
\rronly{ \begin{remark} The available under-approximate information $\mathit{CallCtx}^u_f \wedge \mathit{Sums}^u_f \wedge \mathit{Preconds}^u_f$, where \\ $ \begin{array}{@{}rl} \mathit{Sums}^u_f = \bigwedge_{\text{calls }h_j\text{ in }f} & g_{h_j} \Longrightarrow \\ &\mathit{Sum}^u_{h_j}({\vx^{p\_in}}_j,{\vx^{p\_out}}_j) \end{array} $\\ $ \begin{array}{@{}l} \text{and }\mathit{Preconds}^u_f = \bigwedge_{\text{calls }h_j\text{ in }f} g_{h_j} \Rightarrow \mathit{Precond}^u_{h_j}({\vx^{p\_in}}_j) \end{array} $ could be conjoined with the antecedents in Def.~\ref{prop:summaryo} and Lemma~\ref{prop:term1} in order to constrain the search space. However, this is neither necessary for soundness nor does it impair soundness, because the same information is used in Defs.~\ref{prop:precondu} and~\ref{prop:callctxu}. \end{remark} } Then, in line~\ref{line:necprecond} of $\mathit{compPrecondTerm}$, we compute under-approximating (sufficient) preconditions for traces satisfying the termination argument $RR$ via over-approxima\-ting the traces violating $RR$. Now, we are left to specify the formulae corresponding to the following functions: \begin{compactitem} \item $\mathit{compCallCtx}^u$ (Def.~\ref{prop:callctxu}) \item $\mathit{compNecPrecond}$ (Def.~\ref{prop:precondu}) \end{compactitem} We use the superscript ${}^\overtilde{u}$ to indicate negations of under-approxi\-ma\-ting information. \begin{definition}[Line~\ref{line:necprecond} of $\mathit{compPrecondTerm}$]\label{prop:precondu} A precondition for termination $\mathit{Precond}^u_f$ in backward calling context $\mathit{CallCtx}^u_f$ and with forward invariants $\mathit{Inv}^o_f$ is $\mathit{Precond}^u_f \equiv \neg \mathit{Precond}^\overtilde{u}_f$, i.e.
the negation of a satisfiability witness $\mathit{Precond}^\overtilde{u}_f$ for: $$ \begin{array}{r@{}r@{}l} \multicolumn{3}{l}{\exists \mathit{Precond}^\overtilde{u}_{f},\mathit{Inv}^\overtilde{u}_f,\mathit{Sum}^\overtilde{u}_f: \forall {\vx^{in}},{\vec{\x}},{\vec{\x}'},{\vec{\x}''},{\vx^{out}}:} \\ \multicolumn{3}{l}{~~\neg\mathit{CallCtx}^u_f({\vx^{in}},{\vx^{out}}) \wedge \mathit{Inv}^o_f({\vec{\x}}) \wedge \rronly{\mathit{Sums}^o_f \wedge}}\\ \multicolumn{3}{l}{~~~~\pponly{\mathit{Sums}^o_f \wedge} \mathit{Sums}^\overtilde{u}_f \wedge \rronly{\mathit{Assumptions}_f({\vec{\x}}) \wedge} \mathit{Assertions}_f({\vec{\x}}) \Longrightarrow} \\ &\big(& \mathit{Init}({\vx^{in}},{\vec{\x}''}) \wedge \mathit{Inv}^\overtilde{u}_f({\vec{\x}''}) \wedge \mathit{Out}({\vec{\x}},{\vx^{out}}) \\ &&\Longrightarrow \mathit{Inv}^\overtilde{u}_f({\vec{\x}}) \wedge \mathit{Sum}^\overtilde{u}_f({\vx^{in}},{\vx^{out}}) \wedge \mathit{Precond}^\overtilde{u}_f({\vx^{in}})\big) \\ \pponly{ \end{array} $$ $$ \begin{array}{r@{}r@{}l} } &~~~\wedge \big(& (\neg\mathcal{RR}_f({\vec{\x}},{\vec{\x}'}) \vee \mathit{Preconds}^\overtilde{u}_f) \wedge \\ &&\mathit{Inv}^\overtilde{u}_f({\vec{\x}'}) \wedge \mathit{Trans}({\vec{\x}},{\vec{\x}'}) \Longrightarrow \mathit{Inv}^\overtilde{u}_f({\vec{\x}})\big) \end{array} $$ \noindent$ \begin{array}{@{}rl} \text{with }\mathit{Sums}^\overtilde{u}_f = \bigwedge_{\text{calls }h_j\text{ in }f} & g_{h_j} \Longrightarrow \\ &\neg\mathit{Sum}^u[h]({\vx^{p\_in}}_j,{\vx^{p\_out}}_j) \end{array}$ \noindent$ \begin{array}{@{}r@{}l} \text{and }\mathit{Preconds}^\overtilde{u}_f = \bigvee_{\text{calls }h_j\text{ in }f} & g_{h_j} \wedge \\ & \neg\mathit{Precond}^u[h]({\vx^{p\_in}}_j). \end{array} $ \end{definition} This formula is similar to Def.~\ref{prop:summaryo}, but w.r.t.\ backward calling contexts and summaries, and strengthened by the (forward) invariants $\mathit{Inv}^o_f$.
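The under-approximation mechanism behind Def.~\ref{prop:precondu} (negating an over-approximation of the complement) can be illustrated with intervals on a toy version of \texttt{h}; the setting in which the only diverging input is $y=0$ is an assumption matching the running example.

```python
# Under-approximation by negation, as in the computation of Precond^u:
# over-approximate the diverging inputs with an interval, then negate.

M = 2**32 - 1
diverging = {0}                          # exact set, unknown to the analysis

# interval abstraction of the diverging inputs: smallest enclosing interval
lo, hi = min(diverging), max(diverging)  # the interval [0, 0]

# negating the over-approximation under-approximates the terminating inputs
precond_u = lambda y: y < lo or y > hi   # here: 1 <= y <= M

assert not precond_u(0)                  # soundness: the diverging input is excluded
assert all(precond_u(y) for y in (1, 2, M))
print("Precond^u = (1 <= y <= M)")
```

If the interval over-approximation were coarser, say $[0,5]$, the negated precondition would shrink but remain sound: under-approximation precision degrades gracefully, never soundness.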
We denote the negation of the witnesses found for the summary and the invariant by $\mathit{Sum}^u_f \equiv \neg\mathit{Sum}^\overtilde{u}_f$ and $\mathit{Inv}^u_f \equiv \neg\mathit{Inv}^\overtilde{u}_f$, respectively. \begin{lemma} \label{lem:precondu} $\mathit{Precond}^u_f$, $\mathit{Sum}^u_f$ and $\mathit{Inv}^u_f$ are under-approximating. \end{lemma} \begin{proof} We compute an over-approximation of the negation of the precondition w.r.t.\ the negation of the under-approximating termination argument and the negation of further under-approximating information (backward calling context, preconditions of {\function} calls) --- by the soundness of the synthesis (see Thm.~\ref{thm:synthsound} in Sec.~\ref{sec:sa}), this over-approximates the non-terminating traces, and hence under-approximates the terminating ones. Hence, the precondition is a sufficient precondition for termination. The term $\neg\mathcal{RR}_f({\vec{\x}},{\vec{\x}'}) \vee \mathit{Preconds}^\overtilde{u}_f$ characterises non-ter\-mi\-nating states in the invariants of $f$: for these, either the termination argument for $f$ is not satisfied or the precondition for termination of one of the callees does not hold.
\end{proof} Finally, we have to define how we compute the under-appro\-xi\-ma\-ting calling contexts: \begin{definition}[$\mathit{compCallCtx}^u$]\label{prop:callctxu} The backward calling context $\mathit{CallCtx}^u_{h_i}$ for {\function} call $h_i$ in {\function} $f$ in backward calling context $\mathit{CallCtx}^u_f$ and forward invariants $\mathit{Inv}^o_f$ is $\mathit{CallCtx}^u_{h_i} \equiv \neg \mathit{CallCtx}^\overtilde{u}_{h_i}$, the negation of a satisfiability witness for: $$ \begin{array}{r@{}r@{}l} \multicolumn{3}{l}{\exists \mathit{CallCtx}^\overtilde{u}_{h_i},\mathit{Inv}^\overtilde{u}_f: \forall {\vx^{in}},{\vec{\x}},{\vec{\x}'},{\vx^{p\_in}}_i,{\vx^{p\_out}}_i,{\vx^{out}}:} \\ \multicolumn{3}{l}{~~\neg\mathit{CallCtx}^u_f({\vx^{in}},{\vx^{out}}) \wedge \mathit{Inv}^o_f({\vec{\x}}) \wedge \rronly{\mathit{Sums}^o_f \wedge}}\\ \multicolumn{3}{l}{~~ \pponly{\mathit{Sums}^o_f \wedge} \mathit{Sums}^\overtilde{u}_f \wedge \rronly{\mathit{Assumptions}_f({\vec{\x}}) \wedge} \mathit{Assertions}_f({\vec{\x}}) \Longrightarrow} \\ &\big(& \mathit{Out}({\vec{\x}},{\vx^{out}}) \Longrightarrow \mathit{Inv}^\overtilde{u}_f({\vec{\x}})\big) \\ \rronly{ \end{array} $$ $$ \begin{array}{r@{}r@{}l} } &~~~\wedge \big(& \mathit{Inv}^\overtilde{u}_f({\vec{\x}'}) \wedge \mathit{Trans}({\vec{\x}},{\vec{\x}'}) \\ && \Longrightarrow \mathit{Inv}^\overtilde{u}_f({\vec{\x}}) \wedge \mathit{CallCtx}^\overtilde{u}_{h_i}({\vx^{p\_in}}_i,{\vx^{p\_out}}_i)\big) \end{array} $$ \end{definition} \begin{lemma}\label{lem:callctxu} $\mathit{CallCtx}^u_{h_i}$ is under-approximating. \end{lemma} \begin{proof} The computation is based on the negation of the under-approximating calling context of $f$ and the negated under-approximating summaries for the function calls in $f$. By Thm.~\ref{thm:synthsound}, this leads to an over-approximation of the negation of the calling context for $h_i$.
\end{proof} \begin{theorem} A {\function} $f$ terminates for all values of ${\vx^{in}}$ satisfying $\mathit{Precond}^u_f$. \end{theorem} \begin{proof} By induction over the acyclic call graph using Lemmas~\ref{lem:precondu} and~\ref{lem:callctxu}. \end{proof} \subsection{Context-Sensitive Summaries} The key idea of interprocedural analysis is to avoid re-analysing {\function}s that are called multiple times. For that reason, Algorithm~\ref{alg:interproc} first checks whether it can re-use already computed information. To this end, summaries are stored as implications $\mathit{CallCtx}^o \Rightarrow \mathit{Sum}^o$. As the call graph is traversed, the possible calling contexts $\mathit{CallCtx}^o_{h_i}$ for a {\function} $h$ are collected over the call sites~$i$. $\mathit{needToReAnalyze}^o$ (Line~\ref{line:needtoreanalze1} in Alg.~\ref{alg:interproc}) checks whether the current calling context $\mathit{CallCtx}^o_{h_i}$ is subsumed by calling contexts $\bigvee_i\mathit{CallCtx}^o_{h_i}$ that we have already encountered, and if so, $\mathit{Sums}[h]$ is reused; otherwise it needs to be recomputed and $\mathit{join}$ed conjunctively with previously inferred summaries. The same considerations apply to invariants, termination arguments and preconditions. \section{Template-Based Static Analysis}\label{sec:sa} In this section, we give a brief overview of our synthesis engine, which serves as a backend for our approach (it solves the formulae in Definitions~\ref{prop:callctxo}, \ref{prop:summaryo}, \ref{prop:precondu}, and~\ref{prop:callctxu}; see Sec.~\ref{sec:interproc}). Our synthesis engine employs template-based static analysis to compute ranking functions, invariants, summaries, and calling contexts, i.e., implementations of functions $\mathit{compInvSum}^o$ and $\mathit{compCallCtx}^o$ from the second-order constraints defined in Sec.~\ref{sec:interproc}.
To be able to effectively solve second-order problems, we reduce them to first-order by restricting the space of solutions to expressions of the form $\mathcal{T}({\vec{\x}},\vec{d})$ where \begin{itemize} \item $\vec{d}$ are parameters to be instantiated with concrete values and ${\vec{\x}}$ are the program variables. \item $\mathcal{T}$ is a template that gives a blueprint for the shape of the formulas to be computed. Choosing a template is analogous to choosing an abstract domain in abstract interpretation. To allow for a flexible choice, we consider \emph{template polyhedra} \cite{SSM05}. \end{itemize} We state here a soundness result: \begin{theorem}\label{thm:synthsound} Any satisfiability witness $\vec{d}$ of the reduction of the second-order constraint for invariants in Def.~\ref{def:inv} using template~$\mathcal{T}$ \[ \begin{array}{rlr} \exists \vec{d},\forall {\vx^{in}}, {\vec{\x}}, {\vec{\x}'}: & \mathit{Init}({\vx^{in}},{\vec{\x}}) \Longrightarrow \mathcal{T}({\vec{\x}},\vec{d}) \\ \wedge &\mathcal{T}({\vec{\x}},\vec{d})\wedge\mathit{Trans}({\vec{\x}},{\vec{\x}'})\Longrightarrow \mathcal{T}({\vec{\x}'},\vec{d}) \end{array} \] satisfies $\forall {\vec{\x}}: \mathit{Inv}({\vec{\x}}) \Longrightarrow \mathcal{T}({\vec{\x}},\vec{d})$, i.e.\ $\mathcal{T}({\vec{\x}},\vec{d})$ is a sound over-approximating invariant. Similar soundness results hold for summaries and calling contexts. \end{theorem} This ultimately follows from the soundness of abstract interpretation \cite{CC77}. Similar approaches have been described, for instance, by \cite{GS07,GSV08,LAK+14}. However, these methods consider programs over mathematical integers. Ranking functions require specialised synthesis techniques. To achieve both expressiveness and efficiency, we generate linear lexicographic ranking functions~\cite{BMS05,CSZ13}. Our ranking-function synthesis approach is similar to the TAN tool~\cite{KSTW10} but extends the approach from monolithic to lexicographic ranking functions.
Further, unlike TAN, our synthesis engine is much more versatile and configurable, e.g., it also produces summaries and invariants. \pponly{Due to space limitations, w}\rronly{W}e refer to \rronly{Appendix~\ref{sec:appendix}} \pponly{the extended version \cite{extended-version}}, which includes a detailed description of the synthesis engine, our program encoding, encoding of bit-precise arithmetic, and tailored second-order solving techniques for the different constraints that occur in our analysis. In the following section, we discuss the implementation. \section{Implementation}\label{sec:impl} We have implemented the algorithm in 2LS\xspace~\cite{2LSanon}, a static analysis tool for C programs built on the CPROVER framework, using MiniSat~2.2.0 as back-end solver. Other SAT and SMT solvers with incremental solving support would also be applicable. Our approach enables us to use a single solver instance per {\function} to solve a series of second-order queries as required by Alg.~\ref{alg:interproc}. This is essential as our synthesis algorithms make thousands of solver calls. Architectural settings (e.g.\ bitwidths) can be provided on the command line. \pponly{Discussions about technical issues w.r.t.\ bit-preciseness and the computation of intraprocedural termination arguments can be found in the extended version \cite{extended-version}.} \rronly{ \paragraph{Bitvector Width Extension}\label{sec:impl:extension} As mentioned before, the semantics of C allows integers to wrap around when they over/underflow.
Let us consider the following example, for which we want to find a termination argument using Algorithm~\ref{alg:compTerm}: \noindent{ \footnotesize\textbf{void} f() \{ \textbf{for}(\textbf{unsigned char} x; ; x++); \}} The ranking function synthesis needs to compute a value for template parameter $\ell$ such that $\ell\cdot(x{-}x'){>}0$ holds for all $x, x'$ under transition relation $x'{=}x{+}1$ and computed invariant $\mathit{true}$ (for details of the algorithm refer to \rronly{Appendix~\ref{sec:intraproc}} \pponly{the extended version \cite{extended-version}}). \medskip Thus, assuming that the current value for $\ell$ is $-1$, the constraint to be solved (Algorithm~\ref{alg:compTerm} Line~\ref{line:solveterm}) is $\mathit{true} \wedge x'{=}x{+}1 \wedge \neg({-}1\cdot(x{-}x'){>}0)$, or $\neg({-}1\cdot(x{-}(x{+}1)){>}0)$, for short. While for mathematical integers this is SAT, it is UNSAT for signed bit-vectors due to overflows. For $x{=}127$, the overflow happens such that $x{+}1{=}{-}128$. Thus, $127{-}({-}128){>} 0$ becomes $-1 {>} 0$, which makes the constraint UNSAT, and we would incorrectly conclude that $-x$ is a ranking function, which does not hold for signed bitvector semantics. However, if we extend the bitvector width to $k{=}9$ such that the arithmetic in the template does not overflow, then $\neg(-1\cdot((\mathit{signed}_9)127{-}(\mathit{signed}_9)({-}128))>0)$ evaluates to $255>0$, where $\mathit{signed}_k$ is a cast to a $k$-bit signed integer. Now, $x{=}127$ is a witness showing that $-x$ is not a valid ranking function. For similar reasons, we have to extend the bit-width of $k$-bit unsigned integers in templates to $(k{+}1)$-bit signed integers to retain soundness. \paragraph{Optimisations} Our ranking function synthesis algorithm searches for coefficients $\vec{\ell}$ such that a constraint is UNSAT. However, this may result in enumerating all the values for $\vec{\ell}$ in the range allowed by its type, which is inefficient.
In many cases, a ranking function can be found for which $\ell_j \in \{-1,0,1\}$. In our implementation, we have embedded an improved algorithm (Algorithm~\ref{alg:compTerm} in Appendix~\ref{sec:intraproc}) into an outer refinement loop which iteratively extends the range for $\vec{\ell}$ if no ranking function could be found. We start with $\ell_j \in \{-1,0,1\}$, then we try $\ell_j \in [-10,10]$ before extending it to the whole range. \paragraph{Further Bounds} As explained in Algorithm~\ref{alg:compTerm}, we bound the number of lexicographic components (default 3), because otherwise Algorithm~\ref{alg:compTerm} does not terminate if there is no number $n$ such that a lexicographic ranking function with $n$ components proves termination. Since the domains of ${\vec{\x}},{\vec{\x}'}$ in Algorithm~\ref{alg:compTerm} and of ${\vx^{in}}$ in Algorithm~\ref{alg:compPrecond} might be large, we also limit the number of iterations (default 20) of the \emph{while} loops in these algorithms. In the spirit of bounded model checking, these bounds only restrict completeness, i.e., there might exist ranking functions or preconditions which we could have found for larger bounds. The bounds can be given on the command line. } \section{Experiments}\label{sec:exp} We performed experiments to support the following claims: \begin{compactenum} \item Interprocedural termination analysis (IPTA) is faster than monolithic termination analysis (MTA). \item The precision of IPTA is comparable to MTA. \item {2LS\xspace} outperforms existing termination analysis tools. \item {2LS\xspace}'s analysis is bit-precise. \item {2LS\xspace} computes usable preconditions for termination. \end{compactenum} We used the \emph{product line} benchmarks from the \cite{svbenchmarks} benchmark repository. In contrast to other categories, this benchmark set contains programs with non-trivial procedural structure.
This benchmark set contains 597 programs with 1100 to 5700 lines of code (2705 on average),\footnote{Measured using \texttt{cloc} 1.53.} 33 to 136 {\function}s (67 on average), and 4 to 10 loops (5.5 on average). Of these benchmarks, 264 terminate universally, whereas 333 never terminate. The experiments were run on a Xeon X5667 at 3\,GHz running Fedora 20 with 64-bit binaries. Memory and CPU time were restricted to 16\,GB and 1800 seconds per benchmark, respectively (using \cite{Rou11}). Using {2LS\xspace} with interval templates was sufficient to obtain reasonable precision. \paragraph{Modular termination analysis is fast} We compared IPTA with MTA (all {\function}s inlined). Table~\ref{tab:results2} shows that IPTA times out on 2.3\,\% of the benchmarks vs.~39.7\,\% for MTA. The geometric mean speed-up of IPTA w.r.t.~MTA on the benchmarks correctly solved by both approaches is 1.37. In order to investigate how the 30\,m timeout affects MTA, we randomly selected 10 benchmarks that timed out at 30\,m and re-ran them: 1 finished in 32\,m, 3 after more than 1\,h, and 6 did not finish within 2\,h. \paragraph{Modular termination analysis is precise} Again, we compare IPTA with MTA. Table~\ref{tab:results2} shows that IPTA proves 94\,\% of the terminating benchmarks, whereas only 10\,\% were proven by MTA. MTA can prove all never-terminating benchmarks, including 13 benchmarks where IPTA times out. MTA times out on the benchmarks that cause 13 additional \emph{potentially non-terminating} outcomes for IPTA. \begin{table}[t] \centering \begin{tabular}{l|r|rrr@{}l} & \rot{{2LS\xspace} IPTA} & \rot{{2LS\xspace} MTA} & \rot{TAN} & \rot{Ultimate}& \\ \hline terminating & 249 & 26 & 18 & 50& \\ non-terminating & 320 & 333 & 3 & 324&${}^*$ \\ potentially non-term.
& 14 & 1 & 425 & 0& \\ timed out & 14 & 237 & 150 & 43& \\ errors & 0 & 0 & 1 & 180& \\ \hline total run time (h) & 58.7 & 119.6 & 92.8 & 23.9& \end{tabular}~\\[2ex] \caption{\label{tab:results2} Tool comparison (${}^*$ see text).} \end{table} \paragraph{{2LS\xspace} outperforms existing termination analysis tools} We compared {2LS\xspace} with two termination tools for C programs from the SV-COMP termination competition, namely~\cite{tan2014} and~\cite{Ultimate2015}. Unfortunately, the tools~\cite{Aprove2014}, \cite{Loopus}, \cite{FuncTion2015}, \cite{HipTNT}, and~\cite{ARMC2011} have limitations regarding the subset of C that they can handle, which prevent them from analyzing any of the benchmarks out of the box. We describe these limitations in \cite{experiments-log}. Moreover, we did not succeed in generating the correct input files in the intermediate formats required by \cite{MSt2} and \cite{KiTTeL2015} using the recommended frontends \cite{SLAyer11} and \cite{llvm2KiTTeL2015}. TAN~\cite{KSTW10} and KiTTeL/KoAT~\cite{FKS12} support bit-precise C semantics. Ultimate uses mathematical integer reasoning but tries to ensure conformance with bit-vector semantics. Also, Ultimate uses a semantic decomposition of the program~\cite{HHP14} to make its analysis efficient. Table~\ref{tab:results2} lists, for each tool, the number of instances solved, timed out, or aborted because of an internal error. We also give the total run time, which shows that analysis times are roughly halved by the modular/interprocedural approaches ({2LS\xspace} IPTA, Ultimate) in comparison with the monolithic approaches ({2LS\xspace} MTA, TAN). Ultimate spends less time on those benchmarks that it can prove terminating; however, these are only 19\,\% of the terminating benchmarks (vs.~94\,\% for {2LS\xspace}). If Ultimate could solve those 180 benchmarks on which it fails due to unsupported features of C, we would expect its performance to be comparable to {2LS\xspace}.
Ultimate and {2LS\xspace} have different capabilities regarding non-termination. {2LS\xspace} can show that a program never terminates for all inputs, whereas Ultimate can show that there exists a potentially non-terminating execution. To make the comparison fair, we counted benchmarks flagged as potentially non-terminating by Ultimate, but which are actually never-terminating, in the \emph{non-terminating} category in Table~\ref{tab:results2} (marked ${}^*$). \paragraph{{2LS\xspace}'s analysis is bit-precise} We compared {2LS\xspace} with Loopus on a collection of 15 benchmarks (\texttt{ABC\_ex01.c} to \texttt{ABC\_ex15.c}) taken from the Loopus benchmark suite \cite{Loopus}. While they are short (between 7 and 41 LOC), the main characteristic of these programs is that they exhibit different terminating behaviours for mathematical integers and bit-vectors. For illustration, \texttt{ABC\_ex15.c}, shown in Fig.~\ref{fig:example:loopus}, terminates with mathematical integers, but not with machine integers if, for instance, \texttt{m} equals \texttt{INT\_MAX}. Next, we summarise the results of our experiments on these benchmarks when considering machine integers: \begin{itemize} \item only 2 of the programs terminate (\texttt{ABC\_ex08.c} and \texttt{ABC\_ex11.c}), and these are correctly identified by both {2LS\xspace} and Loopus. \item for the remaining 13 non-terminating programs, Loopus claims they terminate, whereas {2LS\xspace} correctly classifies 9 as potentially non-terminating (including \texttt{ABC\_ex15.c} in Fig.~\ref{fig:example:loopus}) and times out for 4.
\end{itemize} \begin{figure} \begin{center} \footnotesize \begin{lstlisting}[basicstyle=\footnotesize] void ex15(int m, int n, int p, int q) { for (int i = n; i >= 1; i = i - 1) for (int j = 1; j <= m; j = j + 1) for (int k = i; k <= p; k = k + 1) for (int l = q; l <= j; l = l + 1) ; } \end{lstlisting} \end{center} \vspace{-.5cm} \caption{ Example \texttt{ABC\_ex15.c} from the Loopus benchmarks.\label{fig:example:loopus} } \vspace{-.5cm} \end{figure} \paragraph{{2LS\xspace} computes usable preconditions for termination} This experiment was performed on benchmarks extracted from Debian packages and the linear algebra library CLapack. The quality of preconditions, i.e.~their usefulness in helping the developer spot problems in the code, is difficult to quantify. We give several examples where functions terminate conditionally. The \texttt{abe} package of Debian contains a function, shown in Fig.~\ref{fig:example:abe}, where the increments of the iteration in a loop are not constant but dynamically depend on the dimensions of an image data structure. Here {2LS\xspace} infers the precondition $img \rightarrow h > 0 \wedge img \rightarrow w > 0$. \begin{figure} \begin{center} \footnotesize \begin{lstlisting}[basicstyle=\footnotesize] void createBack(struct SDL_Surface **back_surf) { struct SDL_Rect pos; struct SDL_Surface *img = images[img_back]->image; for(int x=0; !(x>=(*back_surf)->h); x+=img->h) { for(int y=0; !(y>=(*back_surf)->w); y+=img->w) { pos.x = (signed short int)x; pos.y = (signed short int)y; SDL_UpperBlit(img, NULL, *back_surf, &pos); ... } } } \end{lstlisting} \end{center} \vspace{-.5cm} \caption{ Example \texttt{createBack} from Debian package \texttt{abe}.\label{fig:example:abe} } \vspace{-.5cm} \end{figure} The example in Fig.~\ref{fig:example:busybox} is taken from the benchmark \texttt{basename} in the \texttt{busybox}-category of SVCOMP 2015, which contains simplified versions of Debian packages.
The termination of function \texttt{full\_write} depends on the return value of its callee function \texttt{safe\_write}. Here {2LS\xspace} infers the calling context $cc>0$, i.e.\ the contract for the function \texttt{safe\_write}, such that the termination of \texttt{full\_write} is guaranteed. Given a proof that \texttt{safe\_write} terminates and returns a strictly positive value regardless of the arguments it is called with, we can conclude that \texttt{full\_write} terminates universally. \begin{figure} \begin{center} \begin{lstlisting}[basicstyle=\footnotesize] signed long int full_write(signed int fd, const void *buf, unsigned long int len, unsigned long int cc) { signed long int total = (signed long int)0; for( ; !(len == 0ul); len = len - (unsigned long int)cc) { cc=safe_write(fd, buf, len); if(cc < 0l) { if(!(total == 0l)) return total; return cc; } total = total + cc; buf = (const void *)((const char *)buf + cc); } } \end{lstlisting} \end{center} \vspace{-.5cm} \caption{ Example from SVCOMP 2015 \texttt{busybox}.\label{fig:example:busybox} } \vspace{-.5cm} \end{figure} The program in Fig.~\ref{fig:example:clapack} is a code snippet taken from the summation {\function} \texttt{sasum} within \cite{clapack}, the C version of the popular LAPACK linear algebra library. The loop in {\function} \texttt{f} does not terminate if $incx=0$. If $incx>0$ ($incx<0$) the termination argument is that $i$ increases (decreases). Therefore, $incx \neq 0$ is a termination precondition for \texttt{f}. \begin{figure} \begin{center} \begin{lstlisting}[basicstyle=\footnotesize{}] int f(int *sx, int n, int incx) { int nincx = n * incx; int stemp=0; for (int i=0; incx<0 ? 
i >= nincx : i <= nincx; i+=incx) { stemp += sx[i-1]; } return stemp; } \end{lstlisting} \end{center} \vspace{-.5cm} \caption{ Non-unit increment from \texttt{CLapack}.\label{fig:example:clapack} } \end{figure} \section{Limitations, Related Work and Future Directions} Our approach makes significant progress towards analysing real-world software, advancing the state-of-the-art of termination analysis of large programs. Conceptually, we decompose the analysis into a sequence of well-defined second-order predicate logic formulae with existentially quantified predicates. In addition to \cite{DBLP:conf/pldi/GrebenshchikovLPR12}, we consider context-sensitive analysis, under-approximate backwards analysis, and make the interaction with termination analysis explicit. Notably, these seemingly tedious formulae are actually solved by our generic template-based synthesis algorithm, making it an efficient alternative to predicate abstraction. An important aspect of our analysis is that it is bit-precise. In contrast to the synthesis of termination arguments for linear programs over mathematical integers (rationals) \cite{CPR06b,LWY12,BG13,PR04a,HHLP13,BMS05,CSZ13}, this subclass of termination analyses is substantially less covered. While \cite{KSTW10,CKRW10} present methods based on a reduction to Presburger arithmetic, and a template-matching approach for predefined classes of ranking functions based on a reduction to SAT- and QBF-solving, \cite{DBLP:conf/esop/DavidKL15} only compute intraprocedural termination arguments. There are still a number of limitations to be addressed, all of which connect to open challenges subject to active research. While some are orthogonal (e.g., data structures, strings, refinement) to our interprocedural analysis framework, others (recursion, necessary preconditions) require extensions of it. In this section, we discuss related work, as well as characteristics and limitations of our analysis, and future directions (cost analysis and concurrency).
\paragraph{Dynamically allocated data structures} We currently ignore heap-allocated data. This limitation could be lifted by using specific abstract domains. For illustration, let us consider the following example traversing a singly-linked list. \begin{lstlisting}[numbers=none] List x; while (x != NULL) { x = x->next; } \end{lstlisting} Deciding the termination of such a program requires knowledge about the shape of the data structure pointed to by~$x$; namely, the program only terminates if the list is acyclic. Thus, we would require an abstract domain capable of capturing such a property and of relating the shape of the data structure to its length. Similar to \cite{CSZ13}, we could use \cite{DBLP:conf/popl/MagillTLT10} in order to abstract heap-manipulating programs to arithmetic ones. Another option is using an abstract interpretation based on separation logic formulae which tracks the depths of pieces of heaps, similarly to \cite{DBLP:conf/cav/BerdineCDO06}. \paragraph{Strings and arrays} Similar to dynamic\rronly{ally allocated} data structures, handling strings and arrays requires specific abstract domains. String abstractions that reduce null-terminated strings to integers (indices, length, and size) are usually sufficient in many practical cases; scenarios where termination is dependent on the content of arrays are much harder and would require quantified invariants \cite{DBLP:conf/tacas/McMillan08}. Note that it is favourable to run a safety checker before the termination checker. The latter can then assume that assertions for buffer overflow checks hold, which strengthens invariants and makes termination proofs easier. \paragraph{Recursion} We currently use downward fixed point iterations for computing calling contexts and invariants that involve summaries (see Remark~\ref{rem:ipfixpoint}). This is cheap but gives only imprecise results in the presence of recursion, which would impair the termination analysis.
We could handle recursion by detecting cycles in the call graph and switching to an upward iteration scheme in such situations. Moreover, an adaptation regarding the generation of the ranking function templates is necessary. An alternative approach would be to make use of the theoretical framework presented in \cite{PSW05} for verifying total correctness and liveness properties of while programs with recursion. \paragraph{Template refinement} We currently use interval templates together with heuristics for selecting the variables that should be taken into consideration. This is often sufficient in practice, but it does not exploit the full power of the machinery in place. While counterexample-guided abstraction refinement (CEGAR) techniques are prevalent in predicate abstraction \cite{CGJ+00}, attempts to use them in abstract interpretation are rare \cite{RRT08}. We consider our template-based abstract interpretation, which automatically synthesises abstract transformers, more amenable to refinement techniques than classical abstract interpretations where abstract transformers are implemented manually. \paragraph{Sufficient preconditions to termination} Currently, we compute \emph{sufficient}, i.e.\ under-approximating, preconditions to termination via computing over-approximating preconditions to potential non-termination. The same concept is used by other works on conditional termination \cite{CGL+08,BIK12}. However, they consider only a single {\function} and do not leverage their results to perform interprocedural analysis on large benchmarks, which adds, in particular, the additional challenge of propagating under-approximating information up to the entry {\function} (e.g.\ \cite{GIK13}). Moreover, in contrast to Cook et al.~\cite{CGL+08}, who use a heuristic \textsc{Finite} operator that is left unspecified for bootstrapping their preconditions, our bootstrapping is systematic through constraint solving.
We could compute \emph{necessary} preconditions by computing over-approximating preconditions to potential termination (and negating the result). However, this requires a method for proving that there exist non-terminating executions, which is a well-explored topic. While \cite{GHM+08} dynamically enumerate lasso-shaped candidate paths for counterexamples and then statically prove their feasibility, \cite{CCF+14} prove non-termination via a reduction to safety proving. In order to prove both termination and non-termination, \cite{HLNR10} compose several program analyses (termination provers for multi-path loops, non-termination provers for cycles, and safety provers). \paragraph{Cost analysis} A potential future application of our work is cost and resource analysis. Instances of this type of analysis are worst-case execution time (WCET) analysis \cite{WEE+08}, as well as bound and amortised complexity analysis \cite{ADFG10,BEF+14,SZV14}. The control flow refinement approach \cite{GJK09,CML13} instruments a program with counters and uses progress invariants to compute worst-case or average-case bounds. \paragraph{Concurrency} Our current analysis handles single-threaded C programs. One way of extending the analysis to multi-threaded programs is using the rely-guarantee technique proposed in \cite{Jon83} and explored in several works \cite{CPR07,GPR11,PR12} for termination analysis. In our setting, the predicates for environment assumptions can be used in a similar way as invariants and summaries are used in the analysis of sequential programs. \section{Conclusions}\label{sec:concl} While many termination provers mainly target small, hard programs, the termination analysis of larger code bases has received little attention. We present an algorithm for \emph{interprocedural termination analysis} for non-recursive programs. To our knowledge, this is the first paper that describes in full detail the entire machinery necessary to perform such an analysis.
Our approach relies on a bit-precise static analysis combining SMT solving, template polyhedra and lexicographic, linear ranking function templates. We provide an implementation of the approach in the static analysis tool 2LS\xspace, and demonstrate the applicability of the approach to programs with thousands of lines of code. \bibliographystyle{ieeetr}
\section{\label{sec:md-intro}Introduction} The calculation of the dielectric tensor and the Born effective charge tensor in a finite electric field is very important in the study of bulk ferroelectrics, ferroelectric films, superlattices, lattice vibrations in polar crystals, and so on [1,2,3]. Recently, the response properties to an external electric field have attracted growing interest, both theoretically and practically. In particular, the dielectric tensor and the Born effective charge tensor in a finite electric field are important physical quantities for analyzing and modeling the response of a material to the electric field. In the case of zero electric field, these response properties have already been studied using DFPT (density functional perturbation theory), and excellent results have been obtained [1]. DFPT [4] provides a powerful tool for calculating the second-order derivatives of the total energy of a periodic solid with respect to external perturbations, such as strains, atomic sublattice displacements, or a homogeneous electric field. In contrast to strains and sublattice displacements, for which the perturbing potential remains periodic, the treatment of a homogeneous electric field is subtle, because the corresponding potential contains a term that is linear in real space, thereby breaking the translational symmetry and violating the conditions of Bloch's theorem. Therefore, electric field perturbations have been studied using the long-wave method, in which the linear potential caused by the applied electric field is obtained by considering a sinusoidal potential in the limit that its wave vector goes to zero [5]. In this approach, however, the response tensors can be evaluated only at zero electric field. In a nonzero electric field, the response properties cannot be investigated using methods based on Bloch's theorem, owing to the nonperiodicity of the potential associated with the electric field.
Therefore, several methods for overcoming this difficulty have been developed [2,3]. Ref. [2] introduces an electric-field-dependent energy functional based on Berry's phase and suggests a methodology for evaluating it with a finite-difference scheme. Ref. [3] discusses a proposal for calculating it from the discretized form of the Berry's phase term together with response theory with respect to the perturbation of the finite electric field. In these methods, however, the nonperiodicity of the potential due to the electric field is resolved by introducing WFs (Wannier functions) polarized by the finite electric field. This incurs a large computational cost in calculating the inverse matrix in the perturbation expansion of Berry's phase and yields instability of the results. In this paper, we develop a new discrete method for calculating the dielectric tensor and the Born effective charge tensor in a finite electric field by using Berry's phase and gauge invariance. We present a new way of overcoming the nonperiodicity of the potential in a finite electric field through gauge invariance, and calculate the dielectric tensor and the Born effective charge tensor in a discretized manner different from previous approaches. This paper is organized as follows. In Sec. 2, instead of the preceding treatment in which the total field-dependent energy functional is divided into the Kohn-Sham energy, Berry's phase, and a Lagrange multiplier term, we discuss a method for studying the response properties in a new discrete way by using the polarization written with Berry's phase and the unit-cell-periodic functions polarized by the field. In Sec. 3, we calculate the dielectric tensor and the Born effective charge tensor in a finite electric field by constructing the field-polarized Bloch wave functions and evaluating the linear response of the wave functions with the Sternheimer equation. We also calculate the second-order nonlinear dielectric tensor, which characterizes the nonlinear response with respect to the electric field.
In order to demonstrate the correctness of the method, we also perform calculations for the semiconductors AlAs and GaAs under a finite electric field. In Sec. 4, a summary and conclusions are presented. \section{\label{sec:md-method}New discrete method using Berry's phase and gauge invariance} The response tensors in a finite electric field are given by the second-order derivatives of the field-dependent total energy functional with respect to the atomic sublattice displacements and the homogeneous electric field. Here, the field-dependent energy functional [6] is \begin{equation} \label{eq:md-br1} E[\{ u^{(\vec \varepsilon )} \} ,\vec \varepsilon ] = E_{KS} [\{ u^{(\vec \varepsilon )} \} ] - \Omega \vec \varepsilon \cdot {\bf{P}}[\{ u^{(\vec \varepsilon )} \} ] \end{equation} where $E_{KS}$, $\vec \varepsilon$, and $\Omega$ are the Kohn-Sham energy functional, the finite electric field, and the cell volume, respectively. In addition, $\{ u^{(\vec \varepsilon )} \}$ is the set of unit-cell-periodic functions polarized by the field, and the polarization ${\bf{P}}$ written through Berry's phase is \begin{equation} \label{eq:md-br2} {\bf{P}} = - {{ife} \over {(2\pi )^3 }}\sum\limits_{n = 1}^M {\int\limits_{BZ} {d^3 k\left\langle {u_{n{\bf{k}}}^{(\vec \varepsilon )} } \right|} } \nabla _{\bf{k}} \left| {u_{n{\bf{k}}}^{(\vec \varepsilon )} } \right\rangle \end{equation} where $f$ is the spin degeneracy ($f = 2$). In practice, the polarization is calculated by the following discretized form suggested by King-Smith and Vanderbilt [7]: \begin{equation} \label{eq:md-br3} {\bf{P}} = {{ef} \over {2\pi \Omega }}\sum\limits_{i = 1}^3 {{{{\bf{a}}_i } \over {N_ \bot ^{(i)} }}} \sum\limits_{l = 1}^{N_ \bot ^{(i)} } {{\mathop{\rm Im}\nolimits} \ln \prod\limits_{j = 1}^{N_i } {\det S_{{\bf{k}}_{lj} ,{\bf{k}}_{lj + 1} } } } \end{equation} (Refs. [6] and [7] explain the meaning of each parameter in Eqs.~\ref{eq:md-br2} and \ref{eq:md-br3}.)
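To make the discretized formula concrete, consider the simplest case of a single occupied band ($M=1$) in a one-dimensional crystal of lattice constant $a$ sampled with $N$ equally spaced $k$-points; this reduction is a standard textbook illustration, not a result taken from Refs. [6,7]:

```latex
% One band, one dimension: the overlap determinant reduces to a scalar,
% and the string sum in the discretized formula collapses to one string.
\begin{equation*}
  P \;=\; \frac{ef}{2\pi}\,\frac{a}{\Omega}\,
  \operatorname{Im}\ln \prod_{j=1}^{N}
  \left\langle u_{k_j}^{(\vec \varepsilon )} \,\middle|\,
               u_{k_{j+1}}^{(\vec \varepsilon )} \right\rangle ,
  \qquad k_{N+1} \equiv k_1 \ \text{(up to a reciprocal-lattice vector)} .
\end{equation*}
```

A phase change $|u_{k_j}^{(\vec \varepsilon )}\rangle \to e^{i\phi_j}|u_{k_j}^{(\vec \varepsilon )}\rangle$ enters the closed product once as a bra and once as a ket, so the factors $e^{\pm i\phi_j}$ cancel: the discretized polarization is manifestly invariant under $k$-dependent phase (gauge) changes, which is the gauge invariance exploited below.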
Next, if we consider the orthonormality constraints on the unit-cell-periodic functions polarized by the field, \begin{equation} \label{eq:md-br4} \left\langle {{u_{m{\bf{k}}}^{(\vec \varepsilon )} }} \mathrel{\left | {\vphantom {{u_{m{\bf{k}}}^{(\vec \varepsilon )} } {u_{n{\bf{k}}}^{(\vec \varepsilon )} }}} \right. \kern-\nulldelimiterspace} {{u_{n{\bf{k}}}^{(\vec \varepsilon )} }} \right\rangle = \delta _{mn} \end{equation} the total energy functional is divided into three parts as follows: \begin{equation} \label{eq:md-br5} F = F_{KS} + F_{BP} + F_{LM} \end{equation} where $F_{KS} = E_{KS}$ is the Kohn-Sham energy, $F_{BP} = - \Omega \vec \varepsilon \cdot {\bf{P}}$ is the coupling between the electric field and the polarization through Berry's phase, and the constraints are imposed by the Lagrange multiplier term $F_{LM}$. Next, the set of unit-cell-periodic functions polarized by the field, $\{ u^{(\vec \varepsilon )} \}$, is determined by a variational method. This set of functions is different from the set of unit-cell-periodic functions in zero field. Although, strictly speaking, the calculated ground state is not the exact ground state, this method offers a way to overcome the nonperiodicity of the potential caused by the electric field [3]. However, it does not explicitly exploit the gauge-invariance property, and it incurs a large cost in calculating the inverse matrix in the perturbation expansion of Berry's phase, yielding instability of the results. We apply the perturbation expansion by using DFPT, and investigate the response properties with a new discrete method by using Eq.~\ref{eq:md-br2} and the unit-cell-periodic functions polarized by the field. Since the general perturbation expansion methods were described in Refs. [1,2,3], we consider here the response tensors, namely the dielectric tensor and the Born effective charge tensor, for perturbations with respect to the atomic sublattice displacements and the homogeneous electric field.
In Gaussian system the dielectric tensor is \begin{equation} \label{eq:md-br6} \in _{\alpha \beta } = \delta _{\alpha \beta } + 4\pi \chi _{\alpha \beta } \end{equation} and then electric susceptibility tensor can be written by perturbation expansion. \begin{equation} \label{eq:md-br7} \begin{split} \chi _{\alpha \beta } &= - {1 \over \Omega }{{\partial ^2 F} \over {\partial \varepsilon _\alpha \partial \varepsilon _\beta }} = - {f \over {2(2\pi )^3 }}\int\limits_{BZ} {d^3 k\sum\limits_{n = 1}^M {[\left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|T + v_{ext} \left| {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right\rangle + } } \left\langle {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right|T + v_{ext} \left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle ]\\ &+ \sum\limits_{n = 1}^M {[\left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|(ie{\partial \over {\partial k_\beta }})\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle + \left\langle {u_{n{\bf{k}}}^{(0)} } \right|(ie{\partial \over {\partial k_\beta }})\left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle + \left\langle {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right|(ie{\partial \over {\partial k_\alpha }})\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle }\\ &+ \left\langle {u_{n{\bf{k}}}^{(0)} } \right|(ie{\partial \over {\partial k_\alpha }})\left| {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right\rangle ] + {f \over {2(2\pi )^3 }}\int\limits_{BZ} {d^3 k\sum\limits_{m,n = 1}^M {\Lambda _{mn}^{(0)} ({\bf{k}})[\left\langle {{u_{n{\bf{k}}}^{\varepsilon _\alpha } }} \mathrel{\left | {\vphantom {{u_{n{\bf{k}}}^{\varepsilon _\alpha } } {u_{m{\bf{k}}}^{\varepsilon _\beta } }}} \right. \kern-\nulldelimiterspace} {{u_{m{\bf{k}}}^{\varepsilon _\beta } }} \right\rangle + \left\langle {{u_{n{\bf{k}}}^{\varepsilon _\beta } }} \mathrel{\left | {\vphantom {{u_{n{\bf{k}}}^{\varepsilon _\beta } } {u_{m{\bf{k}}}^{\varepsilon _\alpha } }}} \right. 
\kern-\nulldelimiterspace} {{u_{m{\bf{k}}}^{\varepsilon _\alpha } }} \right\rangle ]} } - {1 \over {2\Omega }}{{\partial ^2 E_{XC} } \over {\partial \varepsilon _\alpha \partial \varepsilon _\beta }} \end{split} \end{equation} Although Eq.~\ref{eq:md-br7} successfully reflects the response properties with respect to the perturbation in a finite electric field, it does not sufficiently describe the periodicity of the crystal, because the operator $ie\nabla _{\bf{k}}$ hidden in Berry's phase must be applied to a gauge-invariant quantity in order to overcome the nonperiodicity of the potential caused by the field [7]. Therefore, using the gauge-invariant form $ie{\partial \over {\partial k_\alpha }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|$ and keeping the $0^{th}$-order term $\Lambda _{mn}^{(0)} ({\bf{k}}) = \varepsilon _{n{\bf{k}}}^{(0)} \delta _{mn}$, the dielectric tensor becomes \begin{equation} \label{eq:md-br8} \begin{split} \chi _{\alpha \beta } &= \left.
{ - {1 \over \Omega }{{\partial ^2 F} \over {\partial \varepsilon _\alpha \partial \varepsilon _\beta }}} \right|_{\varepsilon = \varepsilon ^{(0)} }\\ & = - {f \over {2(2\pi )^3 }}\int\limits_{BZ} {d^3 k\sum\limits_{n = 1}^M {[\left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|T + v_{ext} - \varepsilon _{n{\bf{k}}}^{(0)} \left| {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right\rangle + } } \left\langle {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right|T + v_{ext} - \varepsilon _{n{\bf{k}}}^{(0)} \left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle\\ & + \left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|(ie{\partial \over {\partial k_\beta }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle + \left\langle {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right|(ie{\partial \over {\partial k_\alpha }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle\\ &- \left\langle {u_{n{\bf{k}}}^{(0)} } \right|(ie{\partial \over {\partial k_\beta }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle - \left\langle {u_{n{\bf{k}}}^{(0)} } \right|(ie{\partial \over {\partial k_\alpha }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right\rangle ] - \left. {{1 \over {2\Omega }}{{\partial ^2 E_{XC} } \over {\partial \varepsilon _\alpha \partial \varepsilon _\beta }}} \right|_{\varepsilon = \varepsilon ^{(0)} } \end{split} \end{equation} where BZ(Brillouin Zone) integration is performed by Monkhorst-Pack special point method. Meanwhile, the partial derivative is calculated with following discretized method. 
\begin{equation} \label{eq:md-br9} {\partial \over {\partial k_x }}\left| {u_{m,i,j,k}^{} } \right\rangle \left\langle {u_{m,i,j,k}^{} } \right| = {1 \over {2\Delta k_x }}(\left| {u_{m,i + 1,j,k}^{} } \right\rangle \left\langle {u_{m,i + 1,j,k}^{} } \right| - \left| {u_{m,i - 1,j,k}^{} } \right\rangle \left\langle {u_{m,i - 1,j,k}^{} } \right|) \end{equation} \begin{equation} \label{eq:md-br10} {\partial \over {\partial k_y }}\left| {u_{m,i,j,k}^{} } \right\rangle \left\langle {u_{m,i,j,k}^{} } \right| = {1 \over {2\Delta k_y }}(\left| {u_{m,i,j + 1,k}^{} } \right\rangle \left\langle {u_{m,i,j + 1,k}^{} } \right| - \left| {u_{m,i,j - 1,k}^{} } \right\rangle \left\langle {u_{m,i,j - 1,k}^{} } \right|) \end{equation} \begin{equation} \label{eq:md-br11} {\partial \over {\partial k_z }}\left| {u_{m,i,j,k}^{} } \right\rangle \left\langle {u_{m,i,j,k}^{} } \right| = {1 \over {2\Delta k_z }}(\left| {u_{m,i,j,k + 1}^{} } \right\rangle \left\langle {u_{m,i,j,k + 1}^{} } \right| - \left| {u_{m,i,j,k - 1}^{} } \right\rangle \left\langle {u_{m,i,j,k - 1}^{} } \right|) \end{equation} Additionally, the first-order wave-function response with respect to the finite electric field is calculated with the following Sternheimer equation: \begin{equation} \label{eq:md-br12} P_{c{\bf{k}}} (T + v_{ext} - \varepsilon _{n{\bf{k}}}^{(0)} )P_{c{\bf{k}}} \left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle = - P_{c{\bf{k}}} (ie{\partial \over {\partial k_\alpha }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle \end{equation} Generally, by the ``2n+1'' theorem, the investigation of the second-order energy response requires the wave-function response only up to first order in the perturbation. Therefore, every result can be calculated with only the first-order wave-function response to the finite electric field.
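The discretized derivatives in Eqs.~\ref{eq:md-br9}--\ref{eq:md-br11} are ordinary central differences; as a consistency check (standard numerical analysis, not specific to Refs. [4-7]), a Taylor expansion shows that they are second-order accurate in the mesh spacing:

```latex
% Central difference applied to any quantity f(k) that is smooth in k,
% here the gauge-invariant band projector |u_{mk}><u_{mk}|:
\begin{equation*}
  \frac{f(k+\Delta k) - f(k-\Delta k)}{2\,\Delta k}
  \;=\; f'(k) \;+\; \frac{(\Delta k)^2}{6}\, f'''(k)
  \;+\; O\!\left((\Delta k)^4\right).
\end{equation*}
```

This also motivates differentiating the projector rather than the wave function itself: the projector $\left| {u_{m{\bf{k}}} } \right\rangle \left\langle {u_{m{\bf{k}}} } \right|$, unlike the arbitrary $k$-dependent phase of an individual $\left| {u_{m{\bf{k}}} } \right\rangle$, varies smoothly with ${\bf{k}}$, so the central difference converges.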
In the same way, the Born effective charge tensor is \begin{equation} \label{eq:md-br13} \begin{split} Z_{\kappa ,\alpha \beta }^* &= \left. { - {{\partial ^2 F} \over {\partial \varepsilon _\alpha \partial \tau _{\kappa , \beta }}}} \right|_{\varepsilon = \varepsilon ^{(0)} }\\ & = {{f\Omega } \over {(2\pi )^3 }}\int\limits_{BZ} {d^3 k\sum\limits_{n = 1}^M {[\left\langle {u_{n{\bf{k}}}^{(0)} } \right|(T + v_{ext} )^{\tau _{\kappa ,\beta } } \left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle + \left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|(T + v_{ext} )^{\tau _{\kappa ,\beta } } \left| {u_{n{\bf{k}}}^{(0)} } \right\rangle + \left. {{{\partial ^2 E_{XC} } \over {\partial \tau _{\kappa ,\beta} \partial \varepsilon _\alpha }}} \right|_{\varepsilon = \varepsilon ^{(0)} } ] } } \end{split} \end{equation} Eq.~\ref{eq:md-br13} is also evaluated within DFPT, using the field-polarized wave functions. \section{\label{sec:md-result}Results and Analysis} The calculation of the dielectric permittivity tensor and the Born effective charge tensor is performed in three steps. First, a ground state calculation in a finite electric field is performed using the Berry's phase method implemented in the ABINIT code, and the field-polarized Bloch functions are stored for the later linear-response calculation. Second, the linear-response calculation is performed to obtain the first order response of the Bloch functions. Third, the matrix elements of the dielectric and Born effective charge tensors are computed using these $1^{st}$-order responses. To verify the correctness of our method, we have performed test calculations on two prototypical semiconductors, AlAs and GaAs. In these calculations, we have used the HSC norm-conserving pseudopotential method based on Density Functional Theory within the Local Density Approximation (LDA). A kinetic-energy cutoff of $E_{cut} = 20\,\mathrm{Ry}$ and a $6 \times 6 \times 6$ Monkhorst-Pack mesh for $k$-point sampling were used.
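The $6 \times 6 \times 6$ Monkhorst-Pack mesh used for the $k$-point sampling can be generated from the original Monkhorst-Pack prescription $u_r = (2r - q - 1)/(2q)$, $r = 1, \dots, q$, along each reciprocal axis. A minimal sketch in plain Python, independent of any electronic-structure code:

```python
import numpy as np

def monkhorst_pack(n1, n2, n3):
    """Fractional k-points u_r = (2r - n - 1)/(2n), r = 1..n, per axis."""
    axes = [(2 * np.arange(1, n + 1) - n - 1) / (2 * n) for n in (n1, n2, n3)]
    # Cartesian product of the three 1D grids -> (n1*n2*n3, 3) array
    return np.array(np.meshgrid(*axes, indexing='ij')).reshape(3, -1).T

kpts = monkhorst_pack(6, 6, 6)
print(kpts.shape)        # (216, 3)
```

The grid is symmetric about the zone center, so sums of odd functions of $\mathbf{k}$ over the mesh vanish identically.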
In Table 1, we present the calculated values of the dielectric tensor and the Born effective charge tensor of AlAs and GaAs when the same finite electric field as in Ref.~[3] is applied along the [100] direction. For comparison we list the values calculated with our method, those of the preceding method (Ref.~[3]), and the experimental values. As Table 1 shows, the dielectric tensor calculated with our method is closer to the experimental value [2] than that calculated with the preceding method (Ref.~[3]). However, for the Born effective charge tensor there is almost no difference between our method and the preceding one. This shows that, in the calculation of the Born effective charge tensor, there are two contributions, the $1^{st}$-order response of the potential with respect to atomic sublattice displacements and the response of the wave function polarized by the finite electric field, the latter playing the essential role. \begin{table*}[!h] \begin{center} \caption{\label{tab:md-die}Calculated and experimental values of the dielectric tensor and the Born effective charge tensor in a finite electric field} \begin{tabular}{|c|c|c|c|} \hline Material & Method & $\epsilon$ & $Z^*$ \\ \hline & Our Method & 9.48 & 2.05 \\ AlAs & Preceding Method[3] & 9.72 & 2.03 \\ & Experiment[2] & 8.2 & 2.18 \\ \hline & Our Method & 12.56 & 2.20 \\ GaAs & Preceding Method[3] & 13.32 & 2.18 \\ & Experiment[2] & 10.9 & 2.07 \\ \hline \end{tabular} \end{center} \end{table*} We also calculated the $2^{nd}$-order nonlinear dielectric tensor, a nonlinear response property with respect to the electric field. The $2^{nd}$-order nonlinear dielectric tensor is \begin{equation} \label{eq:md-br14} \chi _{123}^{(2)} = {1 \over 2}{{\partial ^2 P_2 } \over {\partial \varepsilon _1 \partial \varepsilon _3 }} = {1 \over 2}{{\partial \chi _{23} } \over {\partial \varepsilon _1 }} \end{equation} Table 2 shows the calculated value of the $2^{nd}$-order nonlinear dielectric tensor for AlAs.
\begin{table*}[!h] \begin{center} \caption{\label{tab:md-non}Calculated value of the $2^{nd}$-order nonlinear dielectric tensor for AlAs} \begin{tabular}{|c|c|} \hline Method & $\chi _{123}^{(2)}$ (pm/V) \\ \hline Our Method & 67.32 \\ Preceding Method[3] & 60.05 \\ Experiment[8] & 78 $\pm$ 20\\ \hline \end{tabular} \end{center} \end{table*} As shown in Table 2, the value of the $2^{nd}$-order nonlinear dielectric tensor calculated with our method agrees more closely with the experimental value [8] than that of the preceding method (Ref.~[3]). \section{\label{sec:sum}Summary} We have suggested a new method for calculating the dielectric tensor and the Born effective charge tensor in a finite electric field. In particular, in order to overcome the nonperiodicity of the potential caused by the electric field, a new transformation that preserves gauge invariance is introduced. In the future, this methodology can be extended beyond perturbations with respect to the electric field and atomic displacements to other cases, such as strain and the chemical composition of solid solutions. \section*{\label{ack}Acknowledgments} It is a pleasure to thank Jin-U Kang, Chol-Jun Yu, Kum-Song Song, Kuk-Chol Ri and Song-Bok Kim for useful discussions. This work was supported by the Physics faculty in {\bf Kim Il Sung} university of Democratic People's Republic of Korea. \section*{\label{ref}References} [1] C.-J. Yu and H. Emmerich. J. Phys.: Condens. Matter, 19:306203, 2007. [2] I. Souza, J. Iniguez, and D. Vanderbilt. Phys. Rev. Lett., 89:117602, 2002. [3] X. Wang and D. Vanderbilt. Phys. Rev. B, 75:115116, 2007. [4] S. Baroni, Stefano de Gironcoli, and Andrea Dal Corso. Rev. Mod. Phys., 73:515, 2001. [5] X. Gonze and C. Lee. Phys. Rev. B, 55:10355, 1997. [6] R. W. Nunes and X. Gonze. Phys. Rev. B, 63:155107, 2001. [7] R. D. King-Smith and D. Vanderbilt. Phys. Rev. B, 48:4442, 1993. [8] I. Shoji, T. Kondo, and R. Ito. Opt. Quantum Electron, 34:797, 2002. \end{document}
\section{Introduction}\label{sec:intro} Since Einstein discovered the theory of general relativity \cite{Einstein:1915:1,Einstein:1915:2,Einstein:1915:3,Einstein:1915:4,Einstein:1916}, many attempts to solve the field equations were undertaken. Yet only a few analytical solutions to the full field equations are known to date, mostly for highly symmetric matter or field configurations \cite{Stephani:Kramer:MacCallum:Hoenselaers:Herlt:2003,Griffiths:Podolsky:2009}. The most famous solutions of the field equations are the Schwarzschild \cite{Schwarzschild:1916} and the Kerr \cite{Kerr:1963} ones. Even so, an analytic solution for binary systems of black holes (or even neutron stars) is missing and unlikely to be found in the future. Nevertheless, such binaries are very interesting. In particular they constitute the most promising and strongest sources for gravitational waves, one of the most fascinating predictions of the theory of general relativity \cite{Cutler:Apostolatos:Bildsten:others:1993,Reisswig:Husa:Rezzolla:Dorband:Pollney:Seiler:2009}. To observe gravitational waves one needs very sensitive detectors because of the tiny cross section of the waves with matter. Several ground-based detector projects exist for this purpose, e.g. Geo600, VIRGO, and LIGO \cite{GEO600home:2011, VIRGOhome:2011, advancedLIGOhome:2011}. Their sensitivity has increased over the last years due to continuous upgrades, and they will probably detect gravitational waves directly within the next {\em few} years. Construction of these large-scale detectors started after gravitational waves had been observed indirectly by measuring the change of the radial orbital period, e.g. for the binary pulsar PSR 1913+16 by Hulse and Taylor \cite{Hulse:Taylor:1975} in 1978 (Nobel prize 1993).
Further precise observations of orbital period decay exist for the {\em double} pulsar PSR J0737--3039 \cite{Kramer:Stairs:Manchester:McLaughlin:Lyne:others:2006, Kramer:Wex:2009} and the white dwarf binary J0651+2844 \cite{Hermes:others:2012}. Their theoretical prediction is based on the famous quadrupole formula (see e.g. \cite[Eq. (4.12)]{Schafer:1985}) which gives an expression for the orbit-averaged energy loss of the whole system due to gravitational radiation and can thus be translated into a decay of the orbital period. If gravitational wave signals are detected in the near future, a great effort in data analysis will be necessary to extract them from the noisy raw data. Necessary ingredients to achieve this goal on the way towards gravitational wave astronomy are predictions of which kinds of signals can be expected, e.g. from binary sources. The theoretical form of the signals -- which should incorporate all known physical effects to some specific order of magnitude -- is called a template and relies on a solution of the field equations and therefore on the physical parameters of the system. Unfortunately, after around one hundred years of research no dynamical analytical solutions of the field equations for an $n$-body system ($n\ge2$) are known. There are two possibilities to circumvent this problem. The first one is to rely on numerical simulations. The second possibility is to rely on approximation methods to extract solutions for these kinds of problems from the field equations. These include in particular the post-Minkowskian approximation, the post-Newtonian (PN) approximation, extreme mass ratios (including the test-mass case), and the effective-one-body approach. One of the most successful approximation methods is the post-Newtonian approximation, a slow-motion and wide-separation approximation. It is used to treat the finite propagation velocity of the gravitational field approximately as an instantaneous effect and therefore ``freeze'' its dynamics.
Thus the field degrees of freedom are eliminated in this approximation scheme. Afterwards one is left with ordinary differential equations for positions, momenta, and spins of the objects in the system only. It is convenient to encode these equations of motion into a Lagrangian or a Hamiltonian. To get a more quantitative understanding of the post-Newtonian approximation, consider a gravitationally bound binary system in the Newtonian limit. In this case one can relate the Newtonian kinetic energy and the Newtonian gravitational potential through the Virial theorem, namely \begin{align}\label{eq:PNcount} \frac{v^2}{c^2} \sim \frac{G M}{c^2 r} \sim \frac{r_s}{r}\,, \end{align} with $v$ denoting a typical orbital velocity in the system, $c$ meaning the speed of light, $G$ Newton's gravitational constant, $M$ the total mass of the system, $r$ a typical distance between the gravitating objects, and $r_s$ being the Schwarzschild radius of the total system. Every order in $v^2/c^2$ is denoted as one relative post-Newtonian order. The post-Newtonian approximation was used to obtain matter equations of motion for some special systems already shortly after general relativity had been developed, see e.g. the 1PN binary Lagrangian in \mycite{Lorentz:Droste:1917:1,Lorentz:Droste:1917:2}{Lorentz:Droste:1917:1,*Lorentz:Droste:1917:2} and the 1PN equations of motion in e.g. \cite{Einstein:Infeld:Hoffmann:1938}. 
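As a quick numerical illustration of the counting relation \eqref{eq:PNcount}, the following sketch uses round, purely illustrative values for a double neutron star shortly before merger:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2, Newton's gravitational constant
c = 2.998e8            # m s^-1, speed of light
M_sun = 1.989e30       # kg, solar mass

M = 2.8 * M_sun        # total mass of an illustrative double neutron star
r = 1.0e6              # separation of 1000 km (illustrative)
eps = G * M / (c**2 * r)      # ~ v^2/c^2 by the virial relation
print(f"v^2/c^2 ~ {eps:.1e}, so v/c ~ {math.sqrt(eps):.2f}")
```

Even at a separation of only 1000 km the formal expansion parameter is still of order $10^{-3}$, which is why the post-Newtonian series works well over most of the inspiral.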
Furthermore previous results for non-spinning objects obtained within the formalism used in the present article, namely the canonical formalism of Arnowitt, Deser, and Misner (ADM) \cite{Arnowitt:Deser:Misner:1962,*Arnowitt:Deser:Misner:2008}, are the 2PN \cite{Ohta:Okamura:Kimura:Hiida:1974, Damour:Schafer:1985, Damour:Schafer:1988}, 2.5PN \cite{Schafer:1985,Schafer:1986}, 3PN \cite{Jaranowski:Schafer:1998, Kimura:Toiya:1972, Jaranowski:Schafer:1999, Damour:Jaranowski:Schafer:2000,Damour:Jaranowski:Schafer:2001}, and 3.5PN \cite{Jaranowski:Schafer:1997,Konigsdorffer:Faye:Schafer:2003} Hamiltonians. For various other (non-canonical) derivations of post-Newtonian results for non-rotating objects see \cite{Blanchet:2006, Futamase:Itoh:2007, Pati:Will:2000, Damour:EspositoFarese:1995, Goldberger:Rothstein:2006, Gilmore:Ross:2008, Kol:Smolkin:2009, Foffa:Sturani:2011, Foffa:Sturani:2012} and references therein. Regarding the spin corrections to the post-Newtonian approximation, the leading order can be found in \cite{Barker:OConnell:1975, DEath:1975, Barker:OConnell:1979, Thorne:Hartle:1985}. Interestingly the leading-order equations of motion were already obtained earlier within the (more general) post-Minkowskian approximation \cite{Goenner:Gralewski:Westpfahl:1967, Bennewitz:Westpfahl:1971}. The post-Newtonian next-to-leading order in spin was only tackled more recently {\mycite{Tagoshi:Ohashi:Owen:2001, Faye:Blanchet:Buonanno:2006, Damour:Jaranowski:Schafer:2008:1, Steinhoff:Hergt:Schafer:2008:2, Steinhoff:Hergt:Schafer:2008:1, Perrodin:2010, Porto:2010, Levi:2010, Porto:Rothstein:2008:1, Porto:Rothstein:2008:1:err, Levi:2008} {Tagoshi:Ohashi:Owen:2001, Faye:Blanchet:Buonanno:2006, Damour:Jaranowski:Schafer:2008:1, Steinhoff:Hergt:Schafer:2008:2, Steinhoff:Hergt:Schafer:2008:1, Perrodin:2010, Porto:2010, Levi:2010, Porto:Rothstein:2008:1, *Porto:Rothstein:2008:1:err, Levi:2008}}. 
The post-Newtonian next-to-next-to-leading order spin-orbit and spin(1)-spin(2) Hamiltonians are the subject of the present paper. For very rapidly rotating objects they can be comparable in strength to 3.5PN and 4PN corrections, respectively. Half a post-Newtonian order above them the first (leading-order) spin-dependent \emph{radiative} Hamiltonians appear. The spin-orbit and spin(1)-spin(2) ones were already obtained recently \cite{Wang:Steinhoff:Zeng:Schafer:2011}. At the 3.5PN level one should further include all Hamiltonians cubic in the spins derived in \cite{Hergt:Schafer:2008:2, Hergt:Schafer:2008}. Notice that these cubic Hamiltonians are only known for binary black holes so far, whereas all other mentioned Hamiltonians (including the ones derived in the present paper) are valid for general compact objects [or have been generalized to this case, see \mycite{Poisson:1998, Porto:Rothstein:2008:2, Steinhoff:Schafer:2009:1, Hergt:Steinhoff:Schafer:2010:1} {Poisson:1998, Porto:Rothstein:2008:2, *Porto:Rothstein:2008:2:err, Steinhoff:Schafer:2009:1, Hergt:Steinhoff:Schafer:2010:1} for the spin(1)-spin(1) level]. However, the Hamiltonians in the present paper are only obtained for binary systems (but many results for three and more objects already exist \cite{Schafer:1987, Ohta:Kimura:Hiida:1975, Lousto:Nakano:2008, Chu:2009, Hartung:Steinhoff:2010}). Besides spin effects, tidal contributions to the post-Newtonian approximation become very important for general compact objects like neutron stars \cite{Damour:Nagar:2009, Vines:Flanagan:2010, Bini:Damour:Faye:2012}, also see e.g.\ \cite{Steinhoff:Puetzfeld:2012} for the extreme mass ratio case. The present article provides a detailed exposition of how the next-to-next-to-leading order (NNLO) spin-orbit and spin(1)-spin(2) post-Newtonian Hamiltonians can be derived from an extension of the canonical formalism of Arnowitt, Deser, and Misner \cite{Arnowitt:Deser:Misner:1962,*Arnowitt:Deser:Misner:2008}. 
The mentioned extension of the ADM formalism refers to a generalization from (non-rotating) point-masses (PM) to rotating objects \cite{Steinhoff:Schafer:2009:2}, see also \cite{Steinhoff:2011, Steinhoff:Wang:2009, Steinhoff:Schafer:Hergt:2008}. Results for these Hamiltonians were already presented in this journal \cite{Hartung:Steinhoff:2011:1,Hartung:Steinhoff:2011:2}. Their technicalities are also discussed in the (German) PhD thesis of JH \cite{Hartung:2012}. A corresponding Lagrangian potential for the NNLO spin(1)-spin(2) interaction was derived simultaneously by M. Levi in \cite{Levi:2011} via an effective field theory (EFT) approach. A comparison between the EFT and ADM results at NNLO spin(1)-spin(2) has not yet been undertaken, as it is not straightforward (see \cite{Hergt:2011, Hergt:Steinhoff:Schafer:2011} for a discussion at NLO). However, in \cite{Marsat:Bohe:Faye:Blanchet:2012,Bohe:Marsat:Faye:Blanchet:2012} the equations of motion in harmonic gauge were calculated at NNLO spin-orbit level and agreement with our Hamiltonian was found. For unbound systems one can in general not relate the velocities of the objects to the strength of the gravitational coupling, i.e.\ the basic relation \eqref{eq:PNcount} of the post-Newtonian approximation is not applicable. For these kinds of systems (e.g. scattering of black holes) a useful approximation is the so-called post-Minkowskian approximation, which is an expansion in powers of the gravitational coupling constant $G$ only and thus also appropriate for very high velocities and weak fields. The first post-Minkowskian approximation for non-rotating objects was used to derive the Hamiltonian in \cite{Ledvinka:Schafer:Bicak:2008} within the ADM formalism. In principle the expressions given in \cite{Schafer:1986} can be used to derive the ADM Hamiltonian in the second post-Minkowskian approximation if there is no incoming radiation.
However, the integrals have not yet been given in closed form, and since we are only interested in gravitationally bound systems, we resort to the post-Newtonian approximation. The Hamiltonians derived in this article are not the end of the journey. In order to extract useful information (i.e. the parameters of a binary) from the gravitational waves, one needs the solution of the post-Newtonian equations of motion for the binary.\footnote{The following literature and in particular \cite{Tessmer:2011} gives a complete overview of the research area of parameterization. See e.g.\ \cite{Damour:Deruelle:1985} for a point-mass 1PN parameterization, \cite{Schafer:Wex:1993,*Schafer:Wex:1993:err} for a point-mass 2PN parameterization, \cite{Memmesheimer:Gopakumar:Schafer:2004} for a quasi-Keplerian 3PN point-mass parameterization, \cite{Wex:1995,Konigsdorffer:Gopakumar:2005} for point-mass parameterizations under leading order spin-orbit coupling, \cite{Konigsdorffer:Gopakumar:2006} for using a 3PN point-mass parameterization including radiative dynamics for the phasing of gravitational waves, \cite{Tessmer:2009} for a post-equal-mass parameterization of a binary at 3PN point-mass level under leading order spin-orbit coupling, and finally \cite{Tessmer:Hartung:Schafer:2010} for a parameterization up to 2.5PN with orbital angular momentum aligned spins.
\cite{Tessmer:Hartung:Schafer:2012} will incorporate the linear-in-spin Hamiltonians given in the present article into the orbital elements for orbital momentum aligned spins.} For known orbital parameterizations one can further calculate the far-zone radiation field (see e.g.\ \cite{Bonnor:1959,Thorne:1980,Blanchet:Damour:1986} for the general formalism of treating radiation in general relativity, \cite{Blanchet:Schafer:1989, *Blanchet:Schafer:1989:err} for higher order radiation losses in point-mass binaries, \cite{Kidder:1995,Blanchet:Buonanno:Faye:2006, *Blanchet:Buonanno:Faye:2006:err,*Blanchet:Buonanno:Faye:2006:err:2,Buonanno:Faye:Hinderer:2012} for spin effects on the radiation, \cite{Blanchet:Buonanno:Faye:2011} for spin-dependent tail effects, and \cite{Porto:Ross:Rothstein:2012,Porto:Ross:Rothstein:2010} for multipole moments including spin up to 2.5PN). In case of eccentric orbits the radiation consists of several modes which may be extracted by a mode decomposition (see e.g.\ \cite{Turner:Will:1978, Galtsov:Matiukhin:Petukhov:1980,Pierro:Pinto:Spallicci:Laserra:Recano:2001} for decomposition of the radiation field in tensor spherical harmonics, and computing and solving ordinary differential equations for mean motion and eccentricity in a binary without spin; see also \cite{Junker:Schafer:1992, MorenoGarrido:Buitrago:Mediavilla:1994,MorenoGarrido:Mediavilla:Buitrago:1995, Gopakumar:Iyer:2002, Tessmer:Gopakumar:2006, Tessmer:Schafer:2010, Tessmer:Schafer:2011} for higher order mode decomposition of multipole moments also for point-mass binaries only). Furthermore, for known parameterizations one is also able to calculate the loss of energy and angular momentum during the inspiral process, see e.g. \cite{Gopakumar:Iyer:1997,Gopakumar:Iyer:2002,Damour:Gopakumar:Iyer:2004} for time evolution effects on the template banks and \cite{Chandrasekhar:Esposito:1970, Wang:Steinhoff:Zeng:Schafer:2011} for the near-zone luminosity.
This is necessary to construct the mentioned template banks to extract the physical parameters from a noisy signal via a matched filtering procedure. There the ``fitting factor'' is a very sensitive indicator for the performance of the template bank vis-{\`a}-vis the real signal. An introduction to matched filtering can be found in \cite{Finn:1992,Apostolatos:1995}. There exists a plethora of articles referring to circular inspiral without spin, e.g. \cite{Damour:Iyer:Sathyaprakash:2000,Damour:Iyer:Sathyaprakash:2001,*Damour:Iyer:Sathyaprakash:2001:err, Ajith:Babak:Chen:others:2008,*Ajith:Babak:Chen:others:2008:err,Buonanno:Chen:Vallisneri:2003,*Buonanno:Chen:Vallisneri:2003:err}. Though the effects considered in the present article are very small, they probably still have a serious impact on future template banks. The reason for this is that even tiny contributions to the binary's interaction accumulate during the long inspiral phase (where the post-Newtonian approximation is still valid) and thus may become observable for potentially planned space-based detectors in the future. During the {\em very} late inspiral phase these effects will become more important, but the post-Newtonian approximation will break down due to the highly nonlinear behavior of the dynamics and high velocities ($v/c \gtrsim 1/3$). To overcome this problem it is most convenient to extrapolate to this nonlinear regime by resumming the post-Newtonian series. Such a resummation was successfully implemented into the effective-one-body approach, which analytically provides complete gravitational waveforms for binary inspiral that are in good agreement with numerical relativity. This succeeded so far for point-masses \cite{Damour:Nagar:2009:2, Buonanno:etal:2009} and for non-precessing spins \cite{Pan:etal:2009} by calibrations to full numerical simulations, but more work is needed for precessing spins \cite{Damour:2001, Barausse:Buonanno:2009}.
Here the Hamiltonians derived in the present paper should be very useful, and the spin-orbit one was indeed already incorporated into the effective-one-body approach \cite{Nagar:2011,Barausse:Buonanno:2011}. See also \cite{Taracchini:Pan:Buonanno:Barausse:Boyle:Chu:Lovelace:Pfeiffer:Scheel:2012} for a very complete overview of the literature on the effective-one-body approach. Alternative ways of resumming the post-Newtonian series by Pad\'{e} approximants are possible, which is most interesting for certain gauge invariant quantities \cite{Damour:Jaranowski:Schafer:2000:2,Damour:Jaranowski:Schafer:2000:3}. Within the overlap region of post-Newtonian approximation and numerical relativity in which the gravitational field is not too strong and the number of orbits can be handled by numerical simulations the results of both approaches can be compared. The mentioned resummation methods can make these approximate results competitive with numerical relativity% \footnote{See e.g.\ \cite{Faber:2009,Duez:2010,Rosswog:2010} for reviews, \cite{Shibata:Uryu:2000} for the first simulation of binary neutron stars, and e.g. \cite{Thierfelder:Bernuzzi:Brugmann:2011, Gold:Bernuzzi:Thierfelder:Brugmann:Pretorius:2011,Bernuzzi:Nagar:Thierfelder:Brugmann:2012} for very recent studies about them. The first simulations of coalescences of binary black holes were performed in \cite{Pretorius:2005,Baker:Centrella:Choi:Koppitz:vanMeter:2006}. Furthermore see e.g.\ \cite{Lousto:Zlochower:2008,Campanelli:Lousto:Zlochower:2008,Galaviz:Brugmann:Cao:2010} for numerical simulations of a system of more than two black holes and the recent publication \cite{Galaviz:2011} about its chaotic behavior.} also in the late inspiral phase \cite{Damour:Nagar:2009:2, Buonanno:etal:2009}.
Last but not least, a powerful interface between self-force calculations and the ADM Hamiltonians derives from the redshift observable and the first law of binary dynamics \cite{LeTiec:Barausse:Buonanno:2012}, which was extended to include spin recently \cite{Blanchet:Buonanno:LeTiec:2012}. For all computations we used {\scshape xTensor} \cite{MartinGarcia:2002}, a free package for {\scshape Mathematica} \cite{Wolfram:2003}, especially because of its fast index canonicalizer based on the package {\scshape xPerm} \cite{MartinGarcia:2008}. We also used the package {\scshape xPert} \cite{Brizuela:MartinGarcia:MenaMarugan:2009}, which is part of {\scshape xTensor}, for performing the perturbative part of our calculations. Furthermore we wrote several {\scshape Mathematica} packages ourselves for the various steps of the computation including evaluating integrals. The article is organized as follows. \Sec{sec:d1splitADM} shows how to split the spacetime in a $(d+1)$ manner, derives the Hamilton constraint and the momentum constraint from the Einstein-Hilbert action, and shows the definition of the ADM Hamiltonian. In \Sec{sec:expansionconstraints} the constraint equations are expanded using a post-Newtonian power counting scheme and several integrations by parts simplify subsequent integrations. Afterwards a transition from the ADM Hamiltonian to a Routhian via a Legendre transform is performed in \Sec{sec:routhian_waveequation}. The integrands are composed of various fields and their sources. These are explained in detail in \Sec{sec:sources}. From the sources one can obtain the solution of the constraint equations and wave equations in \Sec{sec:fieldsolutions} and is ready to integrate. Due to its importance a detailed explanation of the ultraviolet analysis is also provided there. In \Sec{sec:results} the resulting next-to-next-to-leading order Hamiltonians are given. 
The kinematic consistency check of the Hamiltonians, namely the global post-Newtonian approximate Poincar\'e algebra, is discussed in \Sec{sec:kinematicalconsistency}. Also the center-of-mass vectors are uniquely determined from an ansatz in the same section. Another check is provided in \Sec{sec:testspin}. There the Hamiltonians from \Sec{sec:results} are compared with the Hamiltonian of a test-spin in a stationary exterior gravitational field. After that in \Sec{sec:conclusions} the conclusions are presented and in the {Appendices} more details on some of the calculation procedures used in the former sections are provided. The spacetime has $d$ spatial dimensions, $1$ time dimension, and metric signature $d-1$. In this article a restriction to $d=3$ is explicitly indicated; otherwise $d$ is kept generic. All calculations at the level of the field equations are performed in arbitrary dimensions, because of the necessary ultraviolet (UV) analysis concerning the short-range decay of fields around the sources. Vectors are written in boldface and their components are denoted by Latin indices from the middle of the alphabet. The scalar product between two vectors $\vct{a}$ and $\vct{b}$ is denoted by $\scpm{\vct{a}}{\vct{b}} \equiv (\vct{a} \cdot \vct{b})$. Our units are such that $c=1$. There is no special convention for Newton's gravitational constant $G^{(\dim)}$. (Notice that $G^{(\dim)}$ is the $d$-dimensional coupling strength of the gravitational field. It has the same numerical value as $G$ in $d=3$ dimensions, but different units.)
In the results and the expressions for the sources, $\vmom{a}$ denotes the canonical linear momentum of the $a$th object, $\vx{a}$ the canonical conjugate position of the object, $m_a$ the mass of the object, $\vspin{a}$ and $\spin{a}{i}{j}$ the spin vector and the spin tensor of the object, $\relab{ab}=|\vx{a} - \vx{b}|$ the relative distance between two objects, and $\vnxa{ab} = (\vx{a} - \vx{b})/\relab{ab}$ the direction vector pointing from object $b$ to object $a$. Also important are the distance between source $a$ and field point, $\relab{a} = |\vct{x} - \vx{a}|$, the unit vector pointing from source $a$ to the field point $\vnxa{a} = (\vct{x} - \vx{a})/\relab{a}$, and the perimeter of the triangle formed by source $a$, source $b$, and the field point $\vct{x}$, given by $s_{ab} = \relab{a} + \relab{b} + \relab{ab}$. In the binary case the object labels $a, b$ take only the values $1$ and $2$. The round brackets around the indices of the canonical spin tensor $\spin{a}{i}{j}$ indicate that its components are given in a local Lorentz basis, which is essential for the canonical formalism, see \cite{Steinhoff:Schafer:2009:2, Steinhoff:2011}. \section{ADM Canonical Formalism}\label{sec:d1splitADM} In the following we introduce the ADM canonical formalism \mycite{Arnowitt:Deser:Misner:1962,Arnowitt:Deser:Misner:2008,DeWitt:1967,Regge:Teitelboim:1974} {Arnowitt:Deser:Misner:1962,*Arnowitt:Deser:Misner:2008,DeWitt:1967,Regge:Teitelboim:1974}. We pay special attention to the dependence on the spatial dimensions $d$, as dimensional regularization is important for the consistency of the post-Newtonian approximation when point-like sources are utilized \cite{Damour:Jaranowski:Schafer:2001}. The ADM formalism is extended from non-spinning to spinning point-like sources valid to linear order in spin here, see \cite{Steinhoff:Schafer:2009:2,Steinhoff:2011} for the case $d=3$.
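For concreteness, the distance variables just defined can be instantiated numerically (arbitrary coordinates, $d = 3$):

```python
import numpy as np

x1 = np.array([1.0, 0.0, 0.0])     # position of object 1
x2 = np.array([-1.0, 0.0, 0.0])    # position of object 2
x = np.array([0.0, 2.0, 0.0])      # field point

r12 = np.linalg.norm(x1 - x2)      # relative distance r_{12}
n12 = (x1 - x2) / r12              # unit vector from object 2 to object 1
r1 = np.linalg.norm(x - x1)        # distance from source 1 to the field point
r2 = np.linalg.norm(x - x2)        # distance from source 2 to the field point
s12 = r1 + r2 + r12                # perimeter of the triangle (x1, x2, x)
print(r12, n12, round(s12, 4))
```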
\subsection{Setting the Canonical Formalism} Canonical methods usually require one to single out a time coordinate, thus splitting spacetime into space and time. The geometrically favored way for such a splitting gives rise to the line element \begin{align} \text{d}s^2 &= -\lapse^2 \text{d}t^2 + \gamma_{ij} (\shiftup{i} \text{d}t + \text{d}x^i) (\shiftup{j} \text{d}t + \text{d}x^j)\label{eq:metricdecomp1}\,, \end{align} which corresponds to a $(d+1)$-dimensional metric tensor field given by \begin{align} g_{\mu\nu} = \left(\begin{array}{cc} -\lapse^2 + \shift{i} \shiftup{i} & \shift{i} \\ \shift{j} & \gamma_{ij} \end{array}\right)\,, \qquad g^{\mu\nu} = \left(\begin{array}{cc} -1/\lapse^2 & \shiftup{i}/\lapse^2 \\ \shiftup{j}/\lapse^2 & \gamma^{ij} - \frac{\shiftup{i} \shiftup{j}}{\lapse^2} \end{array}\right)\label{eq:metricdecomp2}\,, \end{align} where $\lapse$ is the lapse function, $\shiftup{i}$ the shift vector, $\shift{i} = \gamma_{ij} \shiftup{j}$, and $\gamma_{ij}$ the metric of the spatial slices, $\gamma^{ij}$ being its inverse \cite{Misner:Thorne:Wheeler:1973,Poisson:2002,Gourgoulhon:2007}. Notice that rewriting the metric tensor field in terms of lapse, shift, and the $d$-dimensional metric $\gamma_{ij}$ corresponds only to another representation of the metric, since together they have the same number of degrees of freedom. On the one hand a symmetric rank two tensor field like $g_{\mu\nu}$ in $d+1$ dimensions has $(d+1)(d+2)/2$ independent entries and on the other hand, a symmetric rank two tensor in $d$ dimensions like $\gamma_{ij}$ has $d(d+1)/2$ degrees of freedom, the vector $\shiftup{i}$ has $d$ independent entries, and the scalar $\lapse$ represents one degree of freedom. Obviously the degrees of freedom match.
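The blockwise decomposition of the metric and the standard ADM form of its inverse can be checked numerically. The following sketch (NumPy, with a randomly chosen lapse, shift, and spatial metric for $d = 3$) assembles $g_{\mu\nu}$ and $g^{\mu\nu}$ and verifies $g^{\mu\rho} g_{\rho\nu} = \delta^{\mu}_{\nu}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
N = 1.3                               # lapse
Ni = 0.1 * rng.normal(size=d)         # shift N^i
A = rng.normal(size=(d, d))
gam = A @ A.T + d * np.eye(d)         # positive-definite spatial metric gamma_ij
gam_inv = np.linalg.inv(gam)
N_low = gam @ Ni                      # N_i = gamma_ij N^j

g = np.empty((d + 1, d + 1))          # covariant metric, assembled blockwise
g[0, 0] = -N**2 + N_low @ Ni
g[0, 1:] = g[1:, 0] = N_low
g[1:, 1:] = gam

ginv = np.empty((d + 1, d + 1))       # ADM inverse: g^{00} = -1/N^2, g^{0i} = N^i/N^2
ginv[0, 0] = -1.0 / N**2
ginv[0, 1:] = ginv[1:, 0] = Ni / N**2
ginv[1:, 1:] = gam_inv - np.outer(Ni, Ni) / N**2

print(np.allclose(g @ ginv, np.eye(d + 1)))
```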
The action of the gravitational field is given by the usual Einstein--Hilbert action, namely \begin{align} W_{{\text{field}}} = \int \text{d}^{d+1}x\, \mathcal{L}_{{\text{field}}} \,, \qquad \mathcal{L}_{{\text{field}}} = \frac{1}{16\pi G^{(\dim)}} \sqrt{-g} \ricci{(d+1)}{} \,, \end{align} where $g = \det(g_{\mu\nu})$ and $\ricci{(d+1)}{}$ is the $(d+1)$-dimensional Ricci scalar, which can be split in a $(d+1)$ manner resulting in \begin{align} \mathcal{L}_{{\text{field}}} &= \frac{1}{16\pi G^{(\dim)}} \lapse \sqrt{\gamma} \left[\ricci{(d)}{} + K_{ij}K^{ij} - (\gamma_{ij} K^{ij})^2\right] + \text{(td)} \,. \label{eq:Ldecomp} \end{align} Here $\ricci{(d)}{}$ is the $d$-dimensional spatial Ricci scalar, $K_{ij}$ is the extrinsic curvature, $\gamma = \det(\gamma_{ij})$, and (td) denotes a total divergence which we ignore for now (it will be discussed in \Sec{sec:ADMHam}). We define \begin{align} \pi^{ij} &= 16\pi G^{(\dim)} \frac{\partial \mathcal{L}_{\text{field}}}{\partial \gamma_{ij,0}} = \sqrt{\gamma} (\gamma^{ij}\gamma^{kl} - \gamma^{ik}\gamma^{jl})K_{kl}\,, \end{align} where $2 \lapse K_{ij} = - \gamma_{ij,0} + 2 N_{(i;j)}$ was used (which deviates from the convention for $K_{ij}$ used in \cite{Poisson:2002}), a comma denotes a partial derivative, and a semicolon denotes the $d$-dimensional covariant derivative. The inversion reads \begin{align} K_{ij} = \frac{1}{\sqrt{\gamma}} \left( \frac{1}{d-1} \gamma_{ij}\gamma_{kl} - \gamma_{ik}\gamma_{jl} \right) \pi^{kl} \,, \end{align} and the dimension finally enters the calculation explicitly.
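The inversion from $\pi^{ij}$ back to $K_{ij}$ can be verified numerically for $d = 3$ with a random spatial metric and a random symmetric extrinsic curvature:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
A = rng.normal(size=(d, d))
gam = A @ A.T + d * np.eye(d)            # spatial metric gamma_ij
gam_inv = np.linalg.inv(gam)
sg = np.sqrt(np.linalg.det(gam))         # sqrt(gamma)

B = rng.normal(size=(d, d))
K = 0.5 * (B + B.T)                      # symmetric extrinsic curvature K_ij

# pi^{ij} = sqrt(gamma) (gamma^{ij} gamma^{kl} - gamma^{ik} gamma^{jl}) K_kl
trK = np.einsum('kl,kl->', gam_inv, K)   # gamma^{kl} K_kl
pi = sg * (gam_inv * trK - gam_inv @ K @ gam_inv)

# K_ij = (1/sqrt(gamma)) (gamma_ij gamma_kl/(d-1) - gamma_ik gamma_jl) pi^{kl}
trpi = np.einsum('kl,kl->', gam, pi)     # gamma_kl pi^{kl}
K_back = (gam * trpi / (d - 1) - gam @ pi @ gam) / sg

print(np.allclose(K, K_back))
```

The $1/(d-1)$ prefactor is exactly what undoes the trace term, which is why the spatial dimension appears explicitly here.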
In order to put the field equations in the form of Hamilton's canonical equations, we perform a Legendre transform, \begin{equation} \mathcal{L}_{{\text{field}}} = \frac{1}{16\pi G^{(\dim)}} \pi^{ij} \gamma_{ij,0} - \lapse \srcfield{} + \shiftup{i} \srcfield{i} + \text{(td)} \,, \end{equation} where \begin{subequations} \begin{align} \srcfield{} &= - \frac{1}{16\pi\sqrt{\gamma} G^{(\dim)}} \left[ \gamma \ricci{(d)}{} - \gamma_{ij} \gamma_{k l} \pi^{ik} \pi^{jl} + \frac{1}{d-1} \left( \gamma_{ij} \pi^{ij} \right)^2 \right]\,,\label{eq:hamconstr} \\ \srcfield{i} &= \frac{1}{8\pi G^{(\dim)}} \gamma_{ij} \pi^{jk}_{~~ ; k} \label{eq:momconstr}\,. \end{align} \end{subequations} Variation of the action with respect to $\gamma_{ij}$ and $\pi^{ij}$ leads to $d(d+1)$ first order evolution equations for these variables. The $d$-metric $\gamma_{ij}$ and the field momentum $\tfrac{1}{16\pi G^{(\dim)}}\pi^{ij}$ are canonically conjugate variables in the vacuum case. However, the action does not contain time derivatives of $\lapse$ and $\shiftup{i}$, so a variation with respect to these variables leads to constraint-type equations (i.e.\ containing no time derivatives). Variation of $\lapse$ gives only one equation called the Hamilton constraint, while variation of $\shiftup{i}$ gives $d$ equations called the momentum constraints. We have arrived at a canonical formalism without any restriction on the coordinates, or gauge, but in the presence of constraints. In the following sections we will eliminate these constraints by simultaneously fixing the gauge.
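The momentum-dependent part of \eqref{eq:hamconstr} can be cross-checked against the kinetic term of \eqref{eq:Ldecomp}: inserting the definition of $\pi^{ij}$ should reproduce $\gamma\left[(\gamma_{ij}K^{ij})^2 - K_{ij}K^{ij}\right]$. A numerical sketch (numpy; $d=3$ and the random data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3  # illustrative

A = rng.standard_normal((d, d))
gam = A @ A.T + d * np.eye(d)          # spatial metric gamma_ij
K = rng.standard_normal((d, d))
K = 0.5 * (K + K.T)                    # extrinsic curvature K_ij

gam_inv = np.linalg.inv(gam)
det_gam = np.linalg.det(gam)
pi = np.sqrt(det_gam) * (np.trace(gam_inv @ K) * gam_inv
                         - gam_inv @ K @ gam_inv)

# momentum part of the Hamilton constraint
lhs = (-np.einsum('ij,kl,ik,jl->', gam, gam, pi, pi)
       + np.einsum('ij,ij->', gam, pi)**2 / (d - 1))

# the same quantity expressed through the extrinsic curvature
trK = np.trace(gam_inv @ K)
KK = np.einsum('ij,kl,ik,jl->', gam_inv, gam_inv, K, K)  # K_{ij} K^{ij}
rhs = det_gam * (trK**2 - KK)

assert np.isclose(lhs, rhs)
```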
In order to couple the ADM formalism to point-like objects possessing masses $m_a$ and $(d+1)$-dimensional spin tensors $S_{a\,\mu\nu} = - S_{a\,\nu\mu}$, it is best to start with a matter action of the form \begin{align} W_{{\text{matter}}} = \sum_a \int \text{d}\tau_a \biggl[ p_{a\,\mu} u^\mu_a + \frac{1}{2} S_{a\,\mu\nu} \Omega^{\mu\nu}_a - \lambda_a (g^{\mu\nu} p_{a\,\mu} p_{a\,\nu} + m^2_a) \biggr]\,, \end{align} where $\tau_a$ is a worldline parameter, $p_{a\,\mu}$ the $(d+1)$-momentum, $u^{\mu}_a = \text{d} z_a^{\mu} / \text{d} \tau_a$, $z_a^{\mu}$ the position, $\Omega^{\mu\nu}_a = - \Omega^{\nu\mu}_a$ the angular velocity tensor, and $\lambda_a$ a Lagrange multiplier belonging to the mass-shell constraint $g^{\mu\nu} p_{a\,\mu} p_{a\,\nu} + m^2_a = 0$ (all for the $a$-th object), see \cite{Steinhoff:Schafer:2009:2,Steinhoff:2011} for details. Notice that the angular velocity tensor is built from a Lorentz matrix encoding the orientation of a body-fixed frame and its covariant derivative. This means one has to handle a derivative coupling of the metric because of Christoffel symbols or Ricci rotation coefficients appearing in the covariant derivative of the Lorentz matrix. Further constraints must be fulfilled by the mentioned Lorentz matrix and the spin; the latter reads $S^{\mu\nu}_a p_{a\,\mu} = 0$. It is important that this action is valid to linear order in spin and is now considered for a generic spatial dimension $d$.
The equations of motion following from this matter action are the well-known Mathisson-Papapetrou equations \mycite{Mathisson:1937,Mathisson:2010,Papapetrou:1951}{Mathisson:1937,*Mathisson:2010,Papapetrou:1951}, see also \cite{Tulczyjew:1959,Dixon:1979}, \begin{align} \frac{\text{D} S^{\mu\nu}_a}{\text{d}\tau_a} &= 2 p^{[\mu}_a u^{\nu]}_a \,, \quad \frac{\text{D} p_{a\mu}}{\text{d}\tau_a} = -\frac{1}{2}\; \ricci{(d+1)}{\mu\rho\beta\alpha} u^\rho_a S^{\beta\alpha}_a \,, \end{align} where $\ricci{(d+1)}{\mu\rho\beta\alpha}$ is the $(d+1)$-dimensional Riemann tensor, and the source of the gravitational field equations is given by Tulczyjew's singular energy-momentum tensor density \cite{Tulczyjew:1959} \begin{align} \sqrt{-g} T^{\mu\nu} &= \sum_a \int \text{d}\tau_a \biggl[ u^{(\mu}_a p^{\nu)}_a \dl{(d+1)a} - \left( S^{\alpha(\mu}_a u^{\nu)}_a \dl{(d+1)a} \right)_{\parallel \alpha} \biggr]\,, \end{align} where $\dl{(d+1)a} = \delta(x^{\mu}-z^{\mu}_a)$. For a review of spin in relativity see e.g.\ \cite{Westpfahl:1967,Westpfahl:1969:1}. Further details on a corresponding action principle can be found in \cite{Goenner:Westpfahl:1967, Romer:Westpfahl:1969, Westpfahl:1969:2, Hanson:Regge:1974, Bailey:Israel:1975}. Next, the matter constraints are eliminated from the action with suitable gauge choices, e.g.\ $\tau_a = t$ where $t$ is the coordinate time and the matter variables are transformed to (reduced) canonical variables denoted by a hat, e.g.\ $\hat{z}^i_a$ or $\hat{p}_{ai}$. This is completely analogous to \cite{Steinhoff:Schafer:2009:2,Steinhoff:2011} and will therefore not be repeated here. But the following facts are important for the present article. The spatial dimension $d$ does not enter the just mentioned calculations explicitly, whereas it appears in the gravitational part of the action.
Further, the matter action becomes linear in lapse and shift, so the gravitational constraints following from their variation now contain matter source terms $\src{}$ and $\src{i}$, \begin{equation} \srcfield{} + \src{} = 0 \,, \quad \srcfield{i} + \src{i} = 0 \label{eq:matterfielddecomp} \,. \end{equation} Finally, due to the coupling of the spin to derivatives of the metric, time derivatives of the gravitational field are present in the matter part. This necessitates a matter contribution $\pimati{ij}$ to the \emph{canonical} field momentum, which now reads \begin{align}\label{eq:picandef} \picani{ij} & = \pi^{ij} + \pimati{ij} \,. \end{align} Explicit expressions for $\src{}$, $\src{i}$, and $\pimati{ij}$ are provided in \Sec{sec:sources} and have the same form as for $d = 3$ in \cite{Steinhoff:Schafer:2009:2,Steinhoff:2011}. The next goal is the elimination of the field constraints within the so-called ADM transverse-traceless gauge (ADMTT). The corresponding gauge conditions read \begin{subequations} \label{eq:diffgaugeconditionsADM} \begin{align} \gamma_{ij,j} - \frac{1}{d} \gamma_{jj,i} &= 0 \label{eq:metricdiffADM}\,, \\ \picani{ii} &= 0 \label{eq:momentumdiffADM} \,. \end{align} \end{subequations} We proceed to work out field decompositions suitable for an elimination of the field constraints. \subsection{Metric Decomposition} The differential gauge condition \eqref{eq:metricdiffADM} for the metric is solved by \begin{align} \gamma_{ij} &= \Psi \delta_{ij} + h^\TT_{ij} \,, \end{align} which can be seen from a transverse-traceless (TT) decomposition of $\gamma_{ij}$. The first part is the conformally flat part of the metric and the last part can be interpreted in the far-zone as the radiation field, which is transverse-traceless, i.e.\ $h^\TT_{ii} = 0 = h^\TT_{ij,j}$.
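That the conformal-plus-TT form indeed solves \eqref{eq:metricdiffADM} is easily checked in Fourier space, where transversality and tracelessness of $h^\TT_{ij}$ remove it from the gauge condition; a small numpy sketch (the mode, the amplitudes, and $d=3$ are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3  # illustrative

k = rng.standard_normal(d)            # Fourier mode: d/dx_i -> i k_i
Q = np.outer(k, k) / (k @ k)
P = np.eye(d) - Q                     # transverse projector

# random TT amplitude: project a symmetric tensor and remove the trace
S = rng.standard_normal((d, d))
S = 0.5 * (S + S.T)
hTT = P @ S @ P - P * np.trace(P @ S) / (d - 1)
assert np.allclose(hTT @ k, 0) and np.isclose(np.trace(hTT), 0)

Psi = rng.standard_normal()           # amplitude of the conformal part
gam = Psi * np.eye(d) + hTT

# gauge condition gamma_{ij,j} - gamma_{jj,i}/d = 0 in Fourier space
assert np.allclose(gam @ k - np.trace(gam) * k / d, 0)
```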
Due to the requirement of a maximally simple expression for the curvature density \cite{Faye:Jaranowski:Schafer:2004}, one can determine the conformal part of the metric in a form appropriate for a post-Newtonian expansion. Consider the static (i.e. momentum independent) part of the Hamilton constraint using \eqref{eq:hamconstr} and \eqref{eq:matterfielddecomp} setting $\pi^{ij} = 0$, \begin{align} \sqrt{\gamma}\; \ricci{(d)}{} &= 16\pi G^{(\dim)} \src{}\,. \end{align} If one inserts the truncation $\gamma_{ij} = \Psi \delta_{ij}$ with the ansatz $\Psi = \psi^\beta$ into the static Hamilton constraint, one obtains \ifprd\begin{widetext}\fi \begin{align} -\frac{1}{4} \beta (d-1) \psi^{-2 + \frac{1}{2} \beta (d - 2)} \left(4 \psi \Delta \psi + (-4 + \beta(d-2)) (\psi_{,i})^2\right) &= 16\pi G^{(\dim)}\src{}\,, \end{align} \ifprd\end{widetext}\fi where $\Delta = \partial_i \partial_i$ and $\partial_i$ denotes a partial coordinate derivative. Demanding either that the nonlinear gradient term $(\psi_{,i})^2$ disappears (yielding a Poisson-type equation) or that the $\psi$ in front has a vanishing exponent in both cases leads to \begin{align} \beta &= \frac{4}{d-2}\,, \end{align} and thus gives the simplest expression for the curvature density $\sqrt{\gamma}\; \ricci{(d)}{}$. Using this solution, the static Hamilton constraint reduces to \begin{align} -\frac{4(d-1)}{d-2}\psi \Delta \psi &= 16\pi G^{(\dim)}\src{}\,. \end{align} Now one can further set $\psi = 1 + \alpha \phi$ and demand that the static part of the Hamilton constraint linear in $\phi$ reduces to a Poisson-type equation without any $d$-dependence. This leads to \begin{align} \alpha &= \frac{d-2}{4(d-1)}\,, \end{align} and \begin{align} -\left(1+\frac{d-2}{4(d-1)}\phi\right) \Delta \phi &= 16\pi G^{(\dim)}\src{}\,.
\end{align} Finally, the optimized metric decomposition reads \begin{align} \gamma_{ij} &= \left(1+\frac{d-2}{4(d-1)}\phi\right)^{4/(d-2)} \delta_{ij} + h^\TT_{ij}\,, \label{eq:gammadecomp} \end{align} see, e.g., \cite{Damour:Jaranowski:Schafer:2001}. One can also borrow the conformal factor from the $d$-dimensional isotropic Schwarzschild metric given in, e.g., \cite[Eq. (22)]{Xanthopoulos:Zannias:1989}. For convenience we introduce \begin{align} \bar{\phi} &= \frac{d-2}{4(d-1)} \phi\,, \end{align} in order to absorb certain dimension-dependent factors in subsequent calculations. \subsection{Momentum Decomposition} For convenience we introduce some integro-differential operators called $\tproj{}{}$-operators, given by \begin{align} \tproj{a}{ij} = \delta_{ij} + (a - 1)\partial_i \partial_j\invlapl{1} = \tproj{a}{ji}\,, \end{align} where $\invlapl{1}$ is the inverse Laplacian with usual boundary conditions. These have the following nice properties \begin{subequations} \begin{align} \frac{1}{n}\sum_n \tproj{a_n}{ij} &= \tproj{\frac{1}{n}\sum_n a_n}{ij}\,,\\ \tproj{a}{ik} \tproj{b}{kj} &= \tproj{ab}{ij}\,,\\ \tproj{a^{-1}}{ij} &= (\tproj{a}{ij})^{-1}\,, \end{align} \end{subequations} and further \begin{subequations} \begin{align} \tproj{1}{ij} &= \delta_{ij}\,,\\ \delta_{ij} \tproj{a}{ij} &= d+a-1\,,\\ \partial_i \tproj{a}{ij} &= a \partial_j\,. \end{align} \end{subequations} Note that for $a=0$ one gets the transverse projector \begin{align} \transv{ij} &\defdby \tproj{0}{ij}\,, \end{align} which has vanishing divergence and is not invertible. From the multiplication property of the $\tproj{}{}$-operators the projection property $\transv{ik}\transv{kj} = \transv{ij}$ follows immediately. Also a longitudinal projector can be constructed, namely \begin{align} \longit{ij} &\defdby \tproj{1}{ij} - \tproj{0}{ij} = \delta_{ij} - \transv{ij}\,. \end{align} Obviously the longitudinal projector also fulfills the projector condition $\longit{ik}\longit{kj} = \longit{ij}$.
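In Fourier space $\partial_i \partial_j \invlapl{1}$ becomes $k_i k_j / k^2$, so all listed properties reduce to finite-dimensional matrix identities. A numpy sketch that exercises them (the parameters $a$, $b$, the mode, and $d=3$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3  # illustrative

k = rng.standard_normal(d)
Q = np.outer(k, k) / (k @ k)  # d_i d_j Delta^{-1}  ->  k_i k_j / k^2

def T(a):
    """T^a_{ij} = delta_ij + (a - 1) d_i d_j Delta^{-1} as a Fourier-space matrix."""
    return np.eye(d) + (a - 1) * Q

a, b = 0.7, -2.3  # arbitrary parameters
assert np.allclose(T(a) @ T(b), T(a * b))          # multiplication property
assert np.allclose(np.linalg.inv(T(a)), T(1 / a))  # inverse property
assert np.isclose(np.trace(T(a)), d + a - 1)       # trace property
assert np.allclose(k @ T(a), a * k)                # divergence property

# a = 0: transverse projector; T^1 - T^0: longitudinal projector
assert np.allclose(T(0) @ T(0), T(0)) and np.allclose(T(0) @ k, 0)
L = T(1) - T(0)
assert np.allclose(L @ L, L)
```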
According to \cite{Steinhoff:Wang:2009,Steinhoff:2011} one can decompose the field momentum as \begin{align} \pi^{ij} &= \pitti{ij} + \pitildei{ij} + \pihati{ij}\,. \label{eq:pidecomp} \end{align} The different parts are given by \ifprd\begin{widetext}\fi \begin{subequations} \begin{align} \pitti{ij} &= \biggl[\transv{k(i}\transv{j)\ell} - \frac{1}{d-1}\transv{ij}\transv{k\ell}\biggr]\pi^{k\ell} \bydefd \TTproj{k\ell}{ij} \pi^{k\ell}\,,\\ \pitildei{ij} &= \biggl[\longit{k(i}\delta_{j)\ell} + \delta_{k(i} \longit{j)\ell} - \longit{k(i}\longit{j)\ell}-\frac{1}{d-1}\transv{ij}\longit{k\ell}\biggr]\pi^{k\ell} \bydefd \LTproj{k\ell}{ij} \pi^{k\ell}\,,\\ \pihati{ij} &= \frac{1}{d-1}\transv{ij} \delta_{k\ell} \pi^{k\ell} \bydefd \Trproj{k\ell}{ij} \pi^{k\ell}\,. \end{align} \end{subequations} \ifprd\end{widetext}\fi Obviously $\pitti{ij}$ has no divergence and is trace free (it is transverse-traceless), $\pitildei{ij}$ is trace free and its divergence contains the divergence of the whole field momentum, and $\pihati{ij}$ is divergence free and its trace is the trace of the whole field momentum. This decomposition is essentially the usual transverse-traceless-decomposition of a symmetric rank two tensor field, but rearranged in a way more suitable for the present computations. For example, $\pihati{ij}$ is fixed by the gauge condition \eqref{eq:momentumdiffADM} as \begin{equation}\label{eq:pitrfix} \pihati{ij} = - \frac{1}{d-1} \transv{ij} \pimati{kk}\,, \end{equation} which follows from its definition and the trace of Eq.\ \eqref{eq:picandef}. Furthermore one can decompose $\pitildei{ij}$ into two different vector potentials, $\pitildei{i}$ and $V^{i}$. The decompositions read \begin{align} \pitildei{ij} &= \pitildei{i}_{,j} + \pitildei{j}_{,i} - \frac{1}{d-1}\tproj{d-1}{ij}\pitildei{k}_{,k}\,,\label{eq:momentumtildepitilde}\\ &= V^i_{,j} + V^j_{,i} - \frac{2}{d} \delta_{ij} V^k_{,k}\label{eq:momentumtildevpot}\,. 
\end{align} Of course $\pitildei{i}$ and $V^{i}$ are interrelated, \begin{equation} \pitildei{i} = \tproj{2\frac{d-1}{d}}{ij} V^j \,, \qquad V^{i} = \tproj{\frac{d}{2(d - 1)}}{ij} \pitildei{j} \,, \label{eq:pitildevpot} \end{equation} and it holds \begin{equation} \pitildei{i} = \invlapl{1} \pitildei{ij}_{,j} \,. \label{eq:pisolve} \end{equation} These two vector potentials have different advantages. From $V^i$ it is possible to compute $\pitildei{ij}$ without any integration. On the other hand $\pitildei{i}$ has a simpler structure and can more easily be obtained from the momentum constraint using \eqref{eq:pisolve} [cf.\ \eqref{eq:pitilde3sol}, \eqref{eq:vpot3sol}, and \eqref{eq:momentumtilde3sol}]. The transverse-traceless parts of the metric $\gamma_{ij}$, $h^\TT_{ij}$, and of the canonical field momentum $\picani{ij}$, $\picantti{ij}$, are denoted as transverse-traceless degrees of freedom or propagating field degrees of freedom in the following. The latter designation will become clear in the next section. \subsection{ADM Hamiltonian}\label{sec:ADMHam} We are now in principle able to solve the $d+1$ constraint equations \eqref{eq:matterfielddecomp} for the $d+1$ variables $\phi$ and $\pitildei{i}$. Though this involves solving a system of nonlinear partial differential equations, it can be tackled analytically within the post-Newtonian approximation up to a certain order. Notice that $\pihati{ij}$ is fixed by \eqref{eq:pitrfix} and that $\pitti{ij}$ can be replaced by \begin{align} \pitti{ij} = \picantti{ij} - \TTproj{k\ell}{ij} \pimati{k\ell} \,.
\label{eq:piTTreplace} \end{align} The independent variables are thus the \emph{reduced} canonical field variables $h^\TT_{ij}$ and $\picantti{ij}$ with Poisson brackets \begin{align} \{h^\TT_{ij}(\vct{x}), \picantti{k\ell}(\vct{x}^\prime)\} &= 16\pi G^{(\dim)} \TTproj{k\ell}{ij}\delta(\vct{x}-\vct{x}^\prime)\,, \end{align} as well as reduced canonical matter variables which enter via the matter parts of the constraint equations \eqref{eq:matterfielddecomp} and are discussed in more detail in \Sec{sec:sources}. It was shown in \mycite{Arnowitt:Deser:Misner:1962,Arnowitt:Deser:Misner:2008,DeWitt:1967,Regge:Teitelboim:1974} {Arnowitt:Deser:Misner:1962,*Arnowitt:Deser:Misner:2008,DeWitt:1967,Regge:Teitelboim:1974} using different methods that the fully reduced Hamiltonian after gauge fixing is given by the ADM energy \begin{align} E_{\text{ADM}} = \frac{1}{16\pi G^{(\dim)}} \oint \text{d}^{d-1} s_i [\gamma_{ij,j} - \gamma_{jj,i}]\,, \end{align} where $\oint \text{d}^{d-1} s_i$ denotes an integral over the asymptotic boundary of a spatial hypersurface at fixed time. (The identical expression for the energy follows from the Landau-Lifshitz superpotential which is related to the well-known Landau-Lifshitz stress-energy-momentum pseudotensor of the gravitational field, see \cite{Landau:Lifshitz:Vol2:4}.) More precisely, the ADM energy $E_{\text{ADM}}$ turns into the ADM Hamiltonian $H_{\text{ADM}}$ if it is expressed in terms of the mentioned reduced canonical variables. Inserting the metric decomposition \eqref{eq:gammadecomp}, the surface integral can be transformed into a volume integral, \begin{align} H_{\text{ADM}} = -\frac{1}{16\pi G^{(\dim)}}\int \text{d}^d x\,\Delta \phi\,, \label{eq:HADM} \end{align} where the asymptotic behavior of certain field components was used, see e.g.\ \cite{Regge:Teitelboim:1974}.
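The step from the surface to the volume integral can be made explicit at linear order: for $\gamma_{ij} = \Psi \delta_{ij}$ the surface integrand is $(1-d)\Psi_{,i}$, and expanding the conformal factor of \eqref{eq:gammadecomp} gives $\Psi \approx 1 + \phi/(d-1)$, so the integrand reduces to $-\phi_{,i}$ and Gauss' theorem yields \eqref{eq:HADM}. A sympy sketch of these two coefficient checks (not part of the original derivation):

```python
import sympy as sp

d, phi = sp.symbols('d phi', positive=True)

# conformal factor of eq. (gammadecomp), expanded to linear order in phi
Psi = (1 + (d - 2) / (4 * (d - 1)) * phi)**(4 / (d - 2))
Psi_lin = Psi.series(phi, 0, 2).removeO()
assert sp.simplify(Psi_lin - (1 + phi / (d - 1))) == 0

# for gamma_ij = Psi delta_ij the surface integrand is
# gamma_{ij,j} - gamma_{jj,i} = (1 - d) Psi_{,i}  ->  -phi_{,i} at linear order,
# so Gauss' theorem turns E_ADM into -(1/(16 pi G)) Int d^d x Delta phi
assert sp.simplify((1 - d) * sp.diff(Psi_lin, phi) + 1) == 0
```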
It was argued by Regge and Teitelboim \cite{Regge:Teitelboim:1974} that one must modify the total divergence in \eqref{eq:Ldecomp} such that it leads to the surface integral expression for the ADM energy $E_{\text{ADM}}$, otherwise the variational principle is not well posed. Indeed, for the same reason one should add the (regularized) York--Gibbons--Hawking, or ``trace K,'' surface term \cite{York:1972, Gibbons:Hawking:1977, Wald:1984, York:1986} already to the Einstein--Hilbert action. The ADM energy then directly arises from a surface term contained in the complete action, see \cite{Brown:York:1993,Hawking:Horowitz:1996} and also \cite{Poisson:2002}. Notice that the Einstein equations can be derived from a variation of the action by disregarding all surface terms, but this is in general not allowed if the variation of the action has prescribed nontrivial behavior at the boundary. The ADM Hamiltonian still depends on the reduced canonical field variables $h^\TT_{ij}$ and $\picantti{ij}$. In \Sec{sec:routhian_waveequation} these remaining field variables will also be eliminated through a Routhian approach to arrive at a conservative matter-only Hamiltonian. \section{Expansion of the Constraints and Integrations by Parts}\label{sec:expansionconstraints} As already stated in \Sec{sec:intro} we will use the post-Newtonian approximation throughout this article. Furthermore, as mentioned above, we utilize the spin-extended ADM formalism. So first of all we have to solve the constraint equations order by order. This requires expanding them in powers of the post-Newtonian approximation parameter \eqref{eq:PNcount} with the decompositions \eqref{eq:gammadecomp} and \eqref{eq:pidecomp} (which are adapted to the ADMTT gauge) inserted. This is the task performed in the following subsection.
\subsection{Order Counting}\label{subsub:ordercounting} The field and source expansions starting at their leading order are given by \ifprd\begin{widetext}\fi \begin{subequations} \label{eq:fieldexpansions} \begin{align} \phi & = \phis{2} + \phis{4} + \phis{6} + \phis{8} + \phis{10} + \dots \,,\\ \pitildei{ij} & = \momls{3}{ij} + \momls{5}{ij} + \momls{7}{ij} + \dots \,,\\ \src{} & = \srcs{2} + \srcs{4} + \srcs{6} + \srcs{8} + \srcs{10} + \dots \,, \\ \src{i} & = \srcis{3}{i} + \srcis{5}{i} + \srcis{7}{i} + \dots \,, \end{align} where the subscript in round brackets denotes the $\cInv{1}$ order. A similar order counting is also valid for all derived fields, like vector potentials. At a later stage of the calculation we also need to expand $h^\TT_{ij}$ and $\picantti{ij}$, \begin{align} h^\TT_{ij} & = h^\TT_{(4)ij} + h^\TT_{(6)ij} + \dots\,,\label{eq:httorders}\\ \picantti{ij} & = \hat{\pi}^{ij}_{(5)\,{\text{TT}}} + \dots\,. \end{align} \end{subequations} \ifprd\end{widetext}\fi At the mentioned stage we also need an order counting for the deviations between field momentum and canonical field momentum due to $\pimati{ij}$ and the tracelessness violation of the field momentum $\pihati{ij}$. As one can see in \Sec{sec:matmom}, $\pihatmati{ij}$ starts at $\Order{\cInv{9}}$ and thus cannot contribute to the expansion of the Hamilton and momentum constraint later, while $\pimati{ij}$ starts at $\Order{\cInv{5}}$. In anticipation of the source expressions in \Sec{sec:sources}, we already introduce the matter variables used there in order to discuss the order counting of the field variables, because all fields depend on the source expressions and thus on the order counting of the matter variables.
The mass $m_a$, canonical matter momentum $\vmom{a}$, and spin variables $\vspin{a}$ are {\em formally} counted as $m_a \sim \Order{\cInv{2}}$, $\vmom{a} \sim \Order{\cInv{3}}$, and $\vspin{a} \sim \Order{\cInv{3}}$ for dimensional reasons only (remember that for {\em maximal} spins one would have $\vspin{a} \sim \Order{\cInv{4}}$ instead, see e.g. \cite[Appendix A]{Steinhoff:Wang:2009}). This counting comes from the fact that after setting $c=G^{(\dim)}=1$ we require all quantities to be in units of length. Let symbols with a bar below them denote the quantities in SI units and the other symbols the quantities in units of length; then it holds $m_a = \tfrac{G^{(\dim)}}{c^2} \underline{m}_a$ for the mass, $t = c \underline{t}$ for the time, $\vmom{a} = \tfrac{G^{(\dim)}}{c^3} \underline{\hat{\vct{\canmomp}}}_{a}$ for the linear momentum, and similarly for the spin variables. Although we mentioned that $G^{(\dim)}$ has different units in $d\ne3$ than in $d=3$, we treat its $\cInv{1}$ order as in $d=3$ for simplicity. So the order counting comes from the $c$ powers inserted to reconstruct the SI units. It should be noted that these counting rules will in general not give correct \emph{absolute} orders in $c$ if the SI units of the final expression are not taken into account. However, \emph{relative} orders are always meaningful, which is all that is relevant for perturbative expansions. Further notice that different counting rules are obtained if one assumes that all quantities are expressed in terms of mass units instead of length units when setting $c=G^{(\dim)}=1$, which is also often used in the literature. \subsection{Hamilton Constraint} The Hamilton constraint \ifprd\begin{widetext}\fi \begin{align} \frac{1}{16\pi\sqrt{\gamma} G^{(\dim)}} \left[ \gamma \; \ricci{(d)}{} - \gamma_{ij} \gamma_{k l} \pi^{ik} \pi^{jl} + \frac{1}{d-1} \left( \gamma_{ij} \pi^{ij} \right)^2 \right] = \src{}\,, \end{align} has to be expanded in a post-Newtonian manner.
After inserting the metric and momentum decomposition, \eqref{eq:gammadecomp} and \eqref{eq:pidecomp}, and all field expansions \eqref{eq:fieldexpansions} it decomposes into several Poisson equations for the post-Newtonian potential, namely \begin{subequations}\label{eq:hamiltonconstraint} \begin{align} -\frac{1}{16 \pi G^{(\dim)}} \Delta \phis{2} & = \srcs{2}\label{subeq:phi2constr}\,,\\ -\frac{1}{16 \pi G^{(\dim)}} \Delta \phis{4} & = \srcs{4} - \phibs{2} \srcs{2}\label{subeq:phi4constr}\,,\\ -\frac{1}{16 \pi G^{(\dim)}} \Delta \phis{6} & = \srcs{6} - \phibs{2} \srcs{4} + ( -\phibs{4} + \phibs{2}^2 ) \srcs{2} - \frac{1}{16 \pi G^{(\dim)}}\biggl( - (\momls{3}{ij})^2 \nonumber\\ & + 4 (\phibs{2} h^\TT_{ij})_{,ij} \biggr)\,,\\ -\frac{1}{16 \pi G^{(\dim)}} \Delta \phis{8} & = \srcs{8} - \phibs{2} \srcs{6} + ( -\phibs{4} + \phibs{2}^2 )\srcs{4} + ( - \phibs{6} + 2 \phibs{2} \phibs{4} \nonumber\\ & - \phibs{2}^3 ) \srcs{2} - \frac{1}{16 \pi G^{(\dim)} } \biggl( \frac{3d - 10}{d - 2} \phibs{2} (\momls{3}{ij})^2 - 2 \momls{3}{ij} \momls{5}{ij} - 2 \momls{3}{ij} \pitti{ij} \nonumber\\ & + \frac{4}{d - 2} h^\TT_{ij} \phibs{2}_{,i}\phibs{2}_{,j} - \frac{1}{4} (h^\TT_{ij,k})^2 -\frac{16}{d - 2} (\phibs{2} \phibs{2}_{,j} h^\TT_{ij})_{,i} + 4 (\phibs{4} h^\TT_{ij})_{,ij} \nonumber\\ & + \frac{1}{2} \Delta (h^\TT_{ij})^2 - \frac{1}{2} (h^\TT_{ik} h^\TT_{jk})_{,ij} \biggr)\,,\\ -\frac{1}{16 \pi G^{(\dim)}} \Delta \phis{10} & = \srcs{10} - \phibs{2} \srcs{8} + ( -\phibs{4} + \phibs{2}^2 )\srcs{6} + ( - \phibs{6} + 2 \phibs{2} \phibs{4} \nonumber\\ & - \phibs{2}^3 ) \srcs{4} + \biggl( -\phibs{8} + 2\phibs{2} \phibs{6} + \phibs{4}^2 - 3 \phibs{4} \phibs{2}^2 + \phibs{2}^4 \nonumber\\ & + \frac{d - 4}{8(d - 1)} (h^\TT_{ij})^2 \biggr)\srcs{2} - \frac{1}{16 \pi G^{(\dim)}} \biggl( \frac{3d - 10}{d - 2} \phibs{4} (\momls{3}{ij})^2 - 2 h^\TT_{ij} \momls{3}{ik} \momls{3}{jk} \nonumber\\ & - (\momls{5}{ij})^2 - 2\momls{3}{ij}\momls{7}{ij} - (\pitti{ij})^2 - 2 \frac{(d - 3)(3d - 10)}{(d - 
2)^2} (\momls{3}{ij})^2 \phibs{2}^2 \nonumber\\ & + \frac{8}{d - 2} h^\TT_{ij} \phibs{2}_{,i} \phibs{4}_{,j} + 2\frac{3d - 10}{d - 2} \momls{3}{ij} \momls{5}{ij} \phibs{2} + 2 \frac{3d - 10}{d - 2} \momls{3}{ij} \pitti{ij} \phibs{2} \nonumber\\ & - 4 \frac{d + 2}{(d - 2)^2} h^\TT_{ij} \phibs{2}_{,i} \phibs{2}_{,j} \phibs{2} - \frac{1}{2} h^\TT_{ij,k} h^\TT_{ik,j} \phibs{2} - \frac{1}{4}\frac{d - 10}{d - 2} (h^\TT_{ij,k})^2 \phibs{2} \biggr) \nonumber\\ & + {(\text{td})}\,. \label{eq:ham10} \end{align} \end{subequations} By virtue of \eqref{eq:HADM} the post-Newtonian Hamiltonians follow from an integration of the right hand sides of these equations. The last equation leads to the formal 3PN ADM Hamiltonian. \subsection{Momentum Constraint} The momentum constraint also has to be expanded in a post-Newtonian manner. First of all one has to write the covariant divergence in a more explicit form. For convenience it is also useful to write as many terms as possible in terms of divergences of a traceless symmetric tensor, see \cite{Steinhoff:Wang:2009}. Notice that we also did not remove the trace parts of the field momentum in \begin{align} \momli{ij}_{\;,j} & = - 8 \pi G^{(\dim)} \;\src{i} + \biggl[ \left(1 - \left(1+\bar{\phi}\right)^{4/(d-2)}\right)(\momli{ij} + \pitti{ij}) + V^k (h^\TT_{kj,i} + h^\TT_{ik,j} - h^\TT_{ij,k}) \nonumber\\ & - \left(1 - \frac{2}{d}\right) V^{k}_{\;,k} h^\TT_{ij} \biggr]_{,j} - \Delta(h^\TT_{ik} V^k) + \frac{1}{2} h^\TT_{\ell j,i} \pitti{\ell j} - (h^\TT_{ik} \pitti{kj})_{,j} \nonumber\\ & + \frac{1}{2} h^\TT_{\ell j,i} \pihati{\ell j} - (h^\TT_{ik} \pihati{kj})_{,j} - \frac{4}{d-2} \left(1+\bar{\phi}\right)^{(6-d)/(d-2)} \left( \pihati{\ell i} \bar{\phi}_{,\ell} - \frac{1}{2} \pihati{\ell \ell} \bar{\phi}_{,i} \right)\,, \end{align} as the tracelessness condition may be violated for the non-canonical field momentum due to the spin.
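The vector-potential machinery of \eqref{eq:momentumtildevpot} and \eqref{eq:pisolve}, which is used to solve the momentum constraints order by order, consists of rational identities in $\partial_i$ and $\invlapl{1}$; such identities can be verified under the consistent formal substitutions $\partial_i \to k_i$ and $\invlapl{1} \to 1/k^2$. A numpy sketch ($d=3$ and the random data are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 3  # illustrative

# formal substitution d_i -> k_i, Delta^{-1} -> 1/k^2 for rational operator identities
k = rng.standard_normal(d)
k2 = k @ k
Q = np.outer(k, k) / k2

V = rng.standard_normal(d)
# pi~^{ij} built from the vector potential V^i, eq. (momentumtildevpot)
pit = np.outer(V, k) + np.outer(k, V) - 2 / d * np.eye(d) * (k @ V)

# pi~^i = Delta^{-1} pi~^{ij}_{,j}, eq. (pisolve)
pvec = pit @ k / k2

# eq. (pitildevpot): pi~^i = T^{2(d-1)/d}_{ij} V^j
a = 2 * (d - 1) / d
assert np.allclose(pvec, (np.eye(d) + (a - 1) * Q) @ V)

# eq. (momentumtildepitilde) rebuilds the same pi~^{ij} from pi~^i
rebuilt = (np.outer(pvec, k) + np.outer(k, pvec)
           - (np.eye(d) + (d - 2) * Q) * (k @ pvec) / (d - 1))
assert np.allclose(rebuilt, pit)
```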
The post-Newtonian expansion is necessary for the later integrations by parts and to obtain the necessary field solutions for the momentum-type fields. The expansion reads \begin{subequations} \label{eq:momentumconstraint} \begin{align} \momls{3}{ij}{}_{,j} & = - 8 \pi G^{(\dim)} \srcis{3}{i} \label{subeq:pi3constr}\,,\\ \momls{5}{ij}{}_{,j} & = - 8 \pi G^{(\dim)} \srcis{5}{i} + \biggl[-\frac{4}{d-2} \momls{3}{ij} \phibs{2}\biggr]_{,j} \label{subeq:pi5constr} \,,\\ \momls{7}{ij}{}_{,j} & = - 8 \pi G^{(\dim)} \srcis{7}{i} + \biggl[ -\frac{4}{d-2} (\pitti{ij} + \momls{5}{ij}) \phibs{2} + \frac{2(d - 6)}{(d - 2)^2} \momls{3}{ij} \phibs{2}^2 - \frac{4}{d - 2} \momls{3}{ij} \phibs{4} \nonumber\\ & - \biggl(1-\frac{2}{d}\biggr) h^\TT_{ij} \vpots{3}{k}_{,k} + \vpots{3}{k} \biggl( h^\TT_{jk,i} + h^\TT_{ik,j} - h^\TT_{ij,k} \biggr) \biggr]_{,j} - \Delta (h^\TT_{ik} \vpots{3}{k}) \label{subeq:pi7constr}\,. \end{align} \end{subequations} \ifprd\end{widetext}\fi Notice that we did not insert the expansion for $h^\TT_{ij}$ and $\pitti{ij}$ since this is only possible after their evolution equations have been obtained and solved order by order later on. \subsection{Integration by Parts} Due to the complicated structure of $-\Delta \phis{10}/(16\pi G^{(\dim)})$, in particular the appearance of $\phibs{8}$, $\phibs{6}$ and $\momls{5}{ij}$, $\momls{7}{ij}$, it is necessary to simplify the integral over the right hand side of \eqref{eq:ham10}. Some of the mentioned fields are not even known in $d=3$ dimensions. The best way to remove them is to integrate by parts certain terms and afterwards use lower order Hamilton and momentum constraints. In the dimensional regularization used here, one may always neglect boundary terms if the integrands are not UV- and IR-divergent simultaneously. Such more subtle terms occur for the first time in 4PN point-mass calculations. In the present calculation we always neglected boundary terms in the integrations by parts.
The parts of the Hamilton constraint coming from the expanded Ricci scalar in the conformal approximation always have a structure where a power of the post-Newtonian potential is coupled to a matter source of the Hamilton constraint. These terms can be simplified by inserting the lower order Hamilton constraint for the source and shifting the emerging Laplacian onto the coupled post-Newtonian potential by integrating by parts twice. This procedure can be used to eliminate $\phibs{8}$ and $\phibs{6}$, and the appropriate calculations are given by \begin{align} \dunderline{- \phibs{8} \srcs{2}} & = -\phibs{2} \srcs{8} + \phibs{2}^2 \srcs{6} + (\phibs{4} \phibs{2} - \phibs{2}^3) \srcs{4} + ( \phibs{6} \phibs{2} \nonumber\\ & - 2 \phibs{2}^2 \phibs{4} + \phibs{2}^4 ) \srcs{2} - \frac{1}{16 \pi G^{(\dim)}} \biggl\{ -\frac{3d - 10}{d - 2} \phibs{2}^2 (\momls{3}{ij})^2 \nonumber\\ & + 2 \phibs{2} \momls{3}{ij} (\momls{5}{ij} + \pitti{ij}) - \frac{4}{d - 2} h^\TT_{ij} \phibs{2}{}_{,i} \phibs{2}{}_{,j} \phibs{2} + \frac{1}{4} \phibs{2} (h^\TT_{ij,k})^2 \nonumber\\ & + \frac{16}{d - 2} (\phibs{2} \phibs{2}{}_{,i} h^\TT_{ij})_{,j} \phibs{2} - 4 (\phibs{4} h^\TT_{ij})_{,ij} \phibs{2} - \frac{1}{2} \phibs{2} \Delta(h^\TT_{ij})^2 \nonumber\\ & + \frac{1}{2} \phibs{2} (h^\TT_{ik} h^\TT_{jk})_{,ij} \biggr\} + {(\text{td})}\,,\label{eq:phi8h2PI}\\ \dunderline{-\phibs{6} \srcs{4}} & = -\phibfo \srcs{6} +\phibs{2} \phibfo \srcs{4} +( \phibs{4} \phibfo - \phibs{2}^2 \phibfo ) \srcs{2} \nonumber\\ & - \frac{1}{16 \pi G^{(\dim)}} \biggl\{ \phibfo (\momls{3}{ij})^2 - 4 \phibfo (\phibs{2} h^\TT_{ij})_{,ij} \biggr\} + {(\text{td})}\,, \label{eq:phi6src4PI} \\ \dunderline{3 \phibs{6} \phibs{2} \srcs{2}} & = \biggl\{ -3\srcs{6} + 3\phibs{2} \srcs{4} - (-3\phibs{4} + 3\phibs{2}^2)\srcs{2} \nonumber\\ & - \frac{1}{16 \pi G^{(\dim)}} \biggl[ 3(\momls{3}{ij})^2 - 12 \phibs{2}{}_{,ij} h^\TT_{ij} \biggr] \biggr\}\phibft + {(\text{td})}\,.
\end{align} The fields $\phibfo$ and $\phibft$ are the momentum-dependent and the momentum-independent parts of $\phibs{4}$ given by $\phibfo = -16\pi G^{(\dim)} \tfrac{d-2}{4(d-1)}\Delta^{-1}\srcs{4}$ and $\phibft = -16\pi G^{(\dim)} \tfrac{d-2}{4(d-1)}\Delta^{-1}(-\phibs{2}\srcs{2})$. It is possible to simplify \eqref{eq:phi8h2PI} once more using \begin{align} \dunderline{-\frac{1}{16\pi G^{(\dim)}}\left(-\frac{1}{2} \phibs{2} \Delta (h^\TT_{ij})^2\right)} & = -\frac{d - 2}{8(d - 1)} (h^\TT_{ij})^2 \srcs{2} + {(\text{td})}\,. \end{align} In the Hamilton constraint there are also several terms coupling different orders of the longitudinal field momentum. These terms can also be simplified by inserting the decomposition \eqref{eq:momentumtildevpot}, removing the derivatives from the vector potential $V^i$ via an integration by parts, and inserting the lower order momentum constraints \eqref{eq:momentumconstraint} into the divergences of the longitudinal field momentum. This procedure is used in order to eliminate $\momls{5}{ij}$ and $\momls{7}{ij}$. The respective integrations by parts are given by \begin{align} \dunderline{-\frac{1}{16 \pi G^{(\dim)}} \left(-2 \momls{7}{ij} \momls{3}{ij}\right)} & = 2 \vpots{3}{i} \srcis{7}{i} - \frac{1}{16 \pi G^{(\dim)}}\biggl\{ -2 h^\TT_{ij} \biggl[ -2\vpots{3}{j}_{,k} \momls{3}{ik} +(\vpots{3}{k} \momls{3}{ij})_{,k} \biggr] \nonumber\\ & + \frac{8}{d - 2} \momls{3}{ij} (\pitti{ij} + \momls{5}{ij}) \phibs{2} - 4 \frac{d - 6}{(d - 2)^2} (\momls{3}{ij})^2 \phibs{2}^2 \nonumber\\ & + \frac{8}{d - 2} (\momls{3}{ij})^2 \phibs{4}\biggr\} + {(\text{td})}\,. \end{align} Last but not least, the $(\momls{5}{ij})^2$ integration by parts is given by \begin{align} \dunderline{- \frac{1}{16 \pi G^{(\dim)}}\left(-( \momls{5}{ij} )^2\right)} & = \vpots{5}{i} \srcis{5}{i} - \frac{1}{16 \pi G^{(\dim)}} \biggl\{ \frac{4}{d - 2} \momls{3}{ij} \momls{5}{ij}\phibs{2} \biggr\} + {(\text{td})}\,.
\end{align} Although it is good to express field integrals in terms of sources and fields, the $\vpots{5}{i}$ potential is still very complicated. Thus we try to express $(\momls{5}{ij})^2$ in yet another way. From the transverse-traceless projection of an arbitrary second rank tensor field $A_{ij}$, namely \begin{align} \TTproj{ij}{k\ell} A_{k\ell} & = \biggl(\delta_{i(k} \delta_{\ell)j} - \delta^{\text{LT}\,ij}_{k\ell} - \delta^{\text{Tr}\,ij}_{k\ell}\biggr)A_{k\ell}\,, \\ & = A_{ij} - \invlapl{1} \biggl((A_{j\ell})_{,\ell i} + (A_{i\ell})_{,\ell j} - \frac{1}{d - 1}\tproj{d-1}{ij} (A_{k\ell})_{,k\ell}\biggr) \nonumber\\ & \quad - \frac{1}{d-1}\left(\delta_{ij} - \partial_i \partial_j \invlapl{1}\right)\delta_{k\ell} A_{k\ell}\,,\label{eq:ttprojection} \end{align} one can get the transverse-traceless projection ($\phibs{2} \momls{3}{k\ell}$ is traceless) \begin{align} \TTproj{ij}{k\ell} (\phibs{2} \momls{3}{k\ell}) &= \phibs{2} \momls{3}{ij} + \frac{d-2}{4} \left\{\momls{5}{ij} + \momlones{5}{ij}\right\}\,, \label{eq:TTphipi} \end{align} via the momentum constraint \eqref{eq:momentumconstraint}. Here \begin{align} \momlones{5}{ij} &= \momlones{5}{i}_{,j} + \momlones{5}{j}_{,i} - \frac{1}{d-1}\tproj{d-1}{ij} \momlones{5}{k}_{,k} \label{eq:momentumtilde51}\,, \end{align} where \begin{align} \momlones{5}{i} &= 8\pi G^{(\dim)}\invlapl{1}\srcis{5}{i}\label{eq:pitilde51}\,. \end{align} Notice that $\momlones{5}{i}$ is defined with a different sign than the usual vector potentials used throughout this article. Now we can express $\momls{5}{ij}$ in terms of $\momlones{5}{ij}$, $\phibs{2}\momls{3}{ij}$ and the transverse-traceless projection \eqref{eq:TTphipi}.
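Equation \eqref{eq:ttprojection} is again a rational identity in derivatives and the inverse Laplacian, so it can be tested in Fourier space against the projector form of the TT part. A numpy sketch ($d=3$ and the random symmetric $A_{ij}$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3  # illustrative

k = rng.standard_normal(d)
Q = np.outer(k, k) / (k @ k)          # d_i d_j Delta^{-1} -> k_i k_j / k^2
P = np.eye(d) - Q                     # transverse projector

A = rng.standard_normal((d, d))
A = 0.5 * (A + A.T)                   # arbitrary symmetric A_ij

# projector form: P_ik P_jl A_kl - P_ij P_kl A_kl / (d - 1)
A_TT = P @ A @ P - P * np.trace(P @ A) / (d - 1)

# explicit form of eq. (ttprojection) with inverse Laplacians
s = np.einsum('kl,kl->', Q, A)        # Delta^{-1} (A_{kl})_{,kl}
A_expl = (A - (Q @ A + A @ Q - (np.eye(d) + (d - 2) * Q) * s / (d - 1))
          - P * np.trace(A) / (d - 1))

assert np.allclose(A_TT, A_expl)
assert np.allclose(A_TT @ k, 0) and np.isclose(np.trace(A_TT), 0)
```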
From these considerations it follows that $(\momls{5}{ij})^2$ is given by \begin{align} \dunderline{- \frac{1}{16 \pi G^{(\dim)}}\left(-( \momls{5}{ij} )^2\right)} & = -\frac{1}{16\piG^{(\dim)}} \biggl\{ \frac{16}{(d-2)^2}\left[ \left(\TTproj{ij}{k\ell}(\phibs{2}\momls{3}{k\ell})\right)^2 -\phibs{2}^2(\momls{3}{ij})^2 \right] \nonumber\\ & -\frac{8}{d-2}\phibs{2}\momls{3}{ij}\momlones{5}{ij} -(\momlones{5}{ij})^2 \biggr\} + {(\text{td})}\,. \end{align} Some of the results above were partially checked by comparison with \cite{Steinhoff:Wang:2009}. \subsection{The formal 3PN ADM Hamiltonian} Performing the mentioned integration by parts leads to the following formal 3PN ADM Hamiltonian which can also be compared to \cite{Steinhoff:Wang:2009}, \begin{align} H_{\text{3PN}} & = \int \text{d}^d x \biggl[ \srcs{10} - 2 \phibs{2} \srcs{8} + (- {\bar{S}_{(4)}{}} -4 \phibs{4} + 2 \phibs{2}^2)\srcs{6} + (6 \phibs{2} \phibs{4} + {\bar{S}_{(4)}{}} \phibs{2} \nonumber\\ & -2\phibs{2}^3) \srcs{4} + \biggl( 4 \phibs{4}^2 + {\bar{S}_{(4)}{}} \phibs{4} - 8 \phibs{4} \phibs{2}^2 - {\bar{S}_{(4)}{}} \phibs{2}^2 + 2\phibs{2}^4 \nonumber\\ & - \frac{1}{4(d - 1)} (h^\TT_{ij})^2 \biggr)\srcs{2} + 2 \vpots{3}{i} \srcis{7}{i} - \frac{1}{16 \pi G^{(\dim)}} \biggl( -(\momlones{5}{ij})^2 + 2\frac{3d - 4}{d - 2} \phibs{4} (\momls{3}{ij})^2 \nonumber\\ & + {\bar{S}_{(4)}{}} (\momls{3}{ij})^2 - \frac{(3d-4)(3d-2)}{(d - 2)^2} (\momls{3}{ij})^2 \phibs{2}^2 - (\pitti{ij})^2 + 8 \phibs{2} \momls{3}{ij} \pitti{ij} \nonumber\\ & - 2 h^\TT_{ij} \biggl[ -2\vpots{3}{j}_{,k} \momls{3}{ik} +(\vpots{3}{k} \momls{3}{ij})_{,k} + \momls{3}{ik} \momls{3}{jk} -4\frac{2d - 3}{d - 2} \phibs{2}_{,i} \phibs{4}_{,j} \nonumber\\ & -4\frac{3d - 4}{(d - 2)^2} \phibs{2} \phibs{2}_{,i} \phibs{2}_{,j} -2{\bar{S}_{(4)}{}}_{,i} \phibs{2}_{,j} \biggr] -8 \frac{d-1}{d-2} \phibs{2}\momlones{5}{ij}\momls{3}{ij} \nonumber\\ & +16 \frac{2d-3}{(d-2)^2} \left(\TTproj{ij}{k\ell}(\phibs{2}\momls{3}{k\ell})\right)^2 + \frac{2}{d - 2} 
(h^\TT_{ij,k})^2 \phibs{2} \biggr) \biggr]\label{eq:H3PN}\,. \end{align} We changed all occurrences of $\phibfo$ to ${\bar{S}_{(4)}{}} = -2 \phibfo$ to gain a result which is comparable to \cite{Damour:Jaranowski:Schafer:2001}. Note that, since we did not expand $h^\TT_{ij}$ and $\picantti{ij}$, we also need some formal 2PN terms which may contribute to the 3PN kinetic energy or the 3PN interaction Hamiltonian after expanding $h^\TT_{ij}$, \begin{align} H_{\text{2PN}} & = \int \text{d}^d x \biggl[\srcstt{8} + \dots - \frac{1}{16 \pi G^{(\dim)} } \biggl(\dots + \frac{4 (d - 1)}{d - 2} \phibs{2}{}_{,i} \phibs{2}{}_{,j} h^\TT_{ij} - \frac{1}{4} (h^\TT_{ij,k})^2 \biggr)\biggr]\label{eq:H2PNTT}\,. \end{align} Now we can split up the Hamiltonian into a kinetic part for the reduced canonical field variables [$h^\TT_{ij}$ and $\picantti{ij}$; after inserting \eqref{eq:piTTreplace}], an interaction part between these canonical fields and constraint fields, and a part independent of the canonical fields, i.e., \begin{align} H_{{\text{ADM}}} = H_{{\text{ADM}}}^{{\text{int}}} + H_{{\text{ADM}}}^{{\text{non-TT}}} + \frac{1}{16\piG^{(\dim)}}\int {\rm d}^d x \left[\frac{1}{4} (h^\TT_{ij,k})^2 + (\picantti{ij})^2\right] \,, \label{eq:Hsplit} \end{align} with \begin{align} H_{{\text{ADM}}}^{{\text{int}}} &= \frac{1}{16 \pi G^{(\dim)}} \int \text{d}^d x \biggl\{ (B_{(4)ij}+\hat{B}_{(6)ij}) h^\TT_{ij} - \frac{4 \pi G^{(\dim)}}{d - 1} (h^\TT_{ij})^2 \srcs{2} + \picantti{ij} C_{ij} \nonumber\\ & - \frac{2}{d - 2}\phibs{2} (h^\TT_{ij,k})^2 \biggr\} \,, \\ H_{{\text{ADM}}}^{{\text{non-TT}}} &= H_{{\text{ADM}}} - H_{{\text{ADM}}}^{{\text{int}}} - \frac{1}{16\piG^{(\dim)}}\int {\rm d}^d x \left[\frac{1}{4} (h^\TT_{ij,k})^2 + (\picantti{ij})^2\right] \,, \end{align} and \begin{align} B_{(4)ij} &= 16\piG^{(\dim)} \frac{\delta}{\delta h^\TT_{i j}} \int{ \text{d}^d x \, \srcs{8} } - \frac{4(d-1)}{d-2} \phibs{2}{}_{, i} \phibs{2}{}_{, j} \,,\\ \hat{B}_{(6)ij} & = 16 \pi G^{(\dim)} 
\frac{\delta}{\delta h^\TT_{ij}} \int \text{d}^d x \biggl[ \srcs{10} -2\phibs{2} \srcs{8} + 2 \vpots{3}{k} \srcis{7}{k} \biggr] + 2 \biggl[ -2\vpots{3}{j}_{,k} \momls{3}{ik} \nonumber\\ & +(\vpots{3}{k} \momls{3}{ij})_{,k} +\momls{3}{ik} \momls{3}{jk} -4\frac{2d - 3}{d - 2} \phibs{2}_{,i} \phibs{4}_{,j} +4\frac{3d - 4}{(d - 2)^2} \phibs{2} \phibs{2}_{,i} \phibs{2}_{,j} \nonumber\\ & -2{\bar{S}_{(4)}{}}_{,i} \phibs{2}_{,j}\biggr]\label{eq:hatB6}\,,\\ C_{ij} & = -2 \pimati{ij} - 8 \phibs{2} \momls{3}{ij}\label{eq:Cmatter}\,. \end{align} $H_{{\text{ADM}}}$ consists of all Hamiltonians starting from the rest mass contribution up to $H_{3\text{PN}}$. The $B_{(4)ij}$ part in $H_{{\text{ADM}}}^{{\text{int}}}$ comes from the formal 2PN Hamiltonian shown in \eqref{eq:H2PNTT}. $H_{{\text{ADM}}}^{{\text{non-TT}}}$ can be obtained by removing all $h^\TT_{ij}$ and $\picantti{ij}$ parts in \eqref{eq:H2PNTT} and \eqref{eq:H3PN}, respectively. Notice that the relevant source terms in the expressions for $B_{(4)ij}$ and $\hat{B}_{(6)ij}$ are at most linear in $h^\TT_{ij}$. This allowed us to single out these contributions by a functional derivative. We have now arrived at the point mentioned in \Sec{sec:ADMHam}: the constraint fields can be eliminated using the lower order Hamilton and momentum constraints, but the dynamical field degrees of freedom still remain in the Hamiltonian. Their elimination and further simplifications of the calculation are the subject of the following section. \section{Routhian and Application of Wave Equation}\label{sec:routhian_waveequation} To obtain a fully reduced matter-only Hamiltonian, we have to remove the dynamical degrees of freedom $h^\TT_{ij}$ and $\picantti{ij}$ by solving the appropriate equations of motion and inserting their solutions into $H_{{\text{ADM}}}$. However, there are some subtleties in this procedure which will be discussed in \Sec{subsec:routhian}.
From Hamilton's equations \begin{align} \frac{1}{16\piG^{(\dim)}} \pdiffq{h^\TT_{ij}}{t} & = \TTproj{ij}{k\ell} \frac{\delta H_{{\text{ADM}}}}{\delta \picantti{k\ell}}\,,\\ \frac{1}{16\piG^{(\dim)}} \pdiffq{ \picantti{ij}}{t} & = -\TTproj{ij}{k\ell} \frac{\delta H_{{\text{ADM}}}}{\delta h^\TT_{k\ell}}\,, \end{align} and the split of the ADM Hamiltonian \eqref{eq:Hsplit} one gets the appropriate wave equations \begin{align} \Box h^\TT_{i j} &= 16\piG^{(\dim)} \TTproj{ij}{kl} \left[ 2 \frac{\delta H_{{\text{ADM}}}^{{\text{int}}}}{\delta h^\TT_{k l}} - \pdiffq{}{t} \frac{\delta H_{{\text{ADM}}}^{{\text{int}}}}{\delta \picantti{kl}} \right] \,, \\ \picantti{i j} &= \frac{1}{2} \left[ \dot{h}^\TT_{i j} - 16\piG^{(\dim)} \TTproj{ij}{kl} \frac{\delta H_{{\text{ADM}}}^{{\text{int}}}}{\delta \picantti{kl}} \right] \,, \end{align} for the dynamical degrees of freedom of the gravitational field (here $\Box = \Delta - \cInv{2} \partial_t^2$; remember that we use a $(d+1)$-metric with signature $d-1$). In order to obtain the explicit wave equations one has to perform the variational derivative of the interaction Hamiltonian, \begin{align} \Box h^\TT_{ij} & = \delta^{\text{TT}\,ij}_{k\ell} \biggl\{ 2 (B_{(4)k\ell}+\hat{B}_{(6)k\ell}) - \frac{16\piG^{(\dim)}}{d - 1} \srcs{2} h^\TT_{k\ell} + \frac{8}{d - 2} (\phibs{2} h^\TT_{k\ell,m})_{,m} - \pdiffq{}{t} C_{k\ell}\biggr\}\label{eq:boxhtt}\,,\\ \picantti{ij} & = \frac{1}{2} \dot{h}^\TT_{ij} - \frac{1}{2}\delta^{\text{TT}\,ij}_{k\ell} C_{k\ell} \label{eq:pitt}\,. \end{align} \subsection{Near-Zone Expansion}\label{subsec:nzexpansion} The wave equation \begin{align} \Box h &= f\,, \end{align} [which represents the components of \eqref{eq:boxhtt}] for a source $f$ has several solutions, depending on the boundary conditions. In field theory, mostly the retarded solution is used. At the order considered in the present article the time symmetric (i.e.
conservative) solution is sufficient.% \footnote{At the considered order $h^\TT_{(5)ij}$ can be neglected since the linear terms are not time symmetric and the quadratic terms arise only due to $(h^\TT_{(5)ij,k})^2$, which is zero in $d=3$ because $h^\TT_{(5)ij}$ is only a function of $t$ (there are no explicit terms in the source at $\cInv{5}$-order); $h^\TT_{(7)ij}$ is of too high order to appear at formal 3PN order, see \cite{Jaranowski:Schafer:1997,Steinhoff:Wang:2009,Wang:Steinhoff:Zeng:Schafer:2011}. We therefore neglected $h^\TT_{(5)ij}$ and $h^\TT_{(7)ij}$ in \eqref{eq:httorders} and hence there are no contributions to \eqref{eq:H3PN} or \eqref{eq:H2PNTT}.} The appropriate solution derives from a near-zone expansion, which is a formal expansion in $c^{-1}$ (see, e.g., \cite{Galtsov:2002,Cardoso:Dias:Lemos:2003,Blanchet:Damour:EspositoFarese:Iyer:2005,Blanchet:Damour:EspositoFarese:2004}). More precisely, the near-zone expansion is a series expansion in the small quantity $r/(c t)\ll 1$, which means that the distance from the field point to the appropriate source point is small compared to the gravitational wavelength. So the near-zone expansion may be used if retardation effects are negligible. Consider the Feynman propagator for a massless particle (which corresponds to the Green's function of the wave equation) \begin{align} G_{F}(\vct{x}, t) &= -\frac{1}{2\pi} \lim_{\varepsilon\to0} \int\text{d}k_0 \frac{1}{(2\pi)^d} \int\text{d}^d k \frac{e^{-i(k_0 x^0 - \scpm{\vct{k}}{\vct{x}})}}{k^2 - k_0^2 - i\varepsilon}\,. \end{align} For the reader's convenience we reintroduce the powers of $c$ in the following expressions. This means $k_0 = \omega/c$ and $x^0 = c t$. As argued above and, e.g., in \cite{Iwanenko:Sokolow:1953,Goldberger:2007}, the condition $r/(ct) \ll 1$ corresponds to $k_0/k \ll 1$, which gives rise to the so-called potential gravitons in contrast to the radiation gravitons, for which $k_0^2 \approx k^2$.
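For the potential modes the distinction can be made concrete at the level of the Fourier amplitude: $1/(k^2 - k_0^2)$ is just a geometric series in $(k_0/k)^2$, which is the algebraic core of the near-zone expansion. A minimal symbolic check of this expansion (illustrative only):

```python
import sympy as sp

k, k0 = sp.symbols('k k0', positive=True)

# Fourier amplitude of the time-symmetric propagator (i*epsilon dropped
# for potential modes, where k0/k << 1)
amp = 1 / (k**2 - k0**2)

# truncated geometric series in (k0/k)^2
N = 5
series = sum(k0**(2*n) / k**(2*(n + 1)) for n in range(N))

# the remainder is exactly of order (k0/k)^(2N)
assert sp.simplify(amp - series - k0**(2*N) / (k**(2*N) * (k**2 - k0**2))) == 0
```

Term by term, the $n$-th summand is precisely the $\omega^{2n}/(c^{2n} k^{2(n+1)})$ contribution appearing in the $\omega$-integrand below.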
For the radiation gravitons the $i\varepsilon$ term becomes important, but for the potential ones it could be neglected and the Fourier amplitude in the propagator is expandable in $k_0/k$, namely \begin{align} G_{F}(\vct{x}, t) &\stackrel{\text{NZ}}{=} -\frac{1}{2\pi c} \int\text{d}\omega e^{-i \omega t}\sum_{n=0}^\infty \frac{\omega^{2n}}{c^{2n}} \frac{1}{(2\pi)^d} \int\text{d}^d k \frac{e^{i\scpm{\vct{k}}{\vct{x}}}}{k^{2(n+1)}}\,.\label{eq:nzfourier} \end{align} This means that a near-zone expansion of the (time symmetric) Feynman propagator cannot contain any contributions leading to radiative losses. The $d$-dimensional $k$-space integral in \eqref{eq:nzfourier} has to be performed by using \begin{align} \int\text{d}^d k \frac{e^{i\scpm{\vct{k}}{\vct{x}}}}{k^{2\alpha}} &= \frac{1}{(4\pi)^{d/2}} \frac{\Gamma\left(\frac{d}{2}-\alpha\right)}{\Gamma(\alpha)} \left(\frac{r^2}{4}\right)^{\alpha-d/2}\,, \end{align} which can be obtained by dimensional regularization (the source of the wave lies at the origin of the coordinate system). This result can be reformulated in inverse Laplacians on a delta-type source by iterating \eqref{eq:invlapdelta} and \eqref{eq:invra} in the {Appendix}, \begin{align} \frac{1}{(2\pi)^d} \int\text{d}^d k \frac{e^{i\scpm{\vct{k}}{\vct{x}}}}{k^{2(n+1)}} &= -\frac{\Gamma\left(2+n-\frac{d}{2}\right)\Gamma\left(\frac{d}{2}-1-n\right)}{\Gamma\left(2-\frac{d}{2}\right)\Gamma\left(\frac{d}{2}-1\right)} (\Delta^{-1})^{n+1}\delta\,. \end{align} For $d\notin 2\mathbb{Z}$ (i.e. 
no odd dimensional spacetime) one can simplify the Gamma functions further by using the identity $\Gamma(1-z)\Gamma(z) = \pi/\sin(\pi z), z\notin\mathbb{Z}$ which leads to \begin{align} \frac{1}{(2\pi)^d} \int\text{d}^d k \frac{e^{i\scpm{\vct{k}}{\vct{x}}}}{k^{2(n+1)}} &= -(-1)^n (\Delta^{-1})^{n+1}\delta\,, \end{align} and therefore after performing the $\omega$ Fourier transform to \begin{align} G_{F}(\vct{x}, t) &\stackrel{\text{NZ}}{=} \sum_{n=0}^\infty \left(\frac{1}{c}\frac{\partial}{\partial t}\right)^{2n} (\Delta^{-1})^{n+1}\delta_{(d+1)}\,, \end{align} for the Feynman propagator. This result could immediately be used to write down the near-zone expanded solution of the wave equation \begin{align} h &= \Box^{-1} f \stackrel{\text{NZ}}{=} \Delta^{-1} \sum_{n=0}^{\infty}(\Delta^{-1})^{n}\left(\frac{1}{c}\frac{\partial}{\partial t}\right)^{2n}f \label{eq:nzexpansion}\,. \end{align} Notice that a near-zone expanded field in general does not converge at spatial infinity. Now we are able to derive the solutions for the transverse-traceless part of the metric at a certain post-Newtonian order in the near-zone. \subsection{Routhian}\label{subsec:routhian} Before we can insert the wave equation \eqref{eq:boxhtt} and its solution [see \eqref{eq:nzexpansion}, and for a more explicit form see \Sec{subsec:solwaveequation}], we have to transform the ADM Hamiltonian into a Routhian, i.e. a Lagrangian in $h^\TT_{ij}$ and $\picantti{ij}$ and a Hamiltonian in the particle degrees of freedom, \begin{align} R[h^\TT_{ij}, \dot{h}^\TT_{ij}] = H_{{\text{ADM}}} - \frac{1}{16\pi G^{(\dim)}} \int \text{d}^d x \, \picantti{ij} \dot{h}^\TT_{ij}\,. 
\end{align} This is necessary because otherwise the equation of motion, e.g., for $\vx{a}$, following from the Hamiltonian (for simplicity we omit the spin variables here) \begin{align} H_{{\text{ADM}}}(\vx{a}, \vmom{a}, h^\TT_{ij}(\vx{b}, \vmom{b}), \picantti{ij}(\vx{b}, \vmom{b}))\,, \end{align} using the Poisson brackets would be \begin{align} \dot{\hat{z}}_a^k &= \pdiffq{H_{{\text{ADM}}}}{\mom{a}{k}} + \frac{\delta H_{\text{ADM}}}{\delta h^\TT_{ij}} \pdiffq{h^\TT_{ij}}{\mom{a}{k}} + \frac{\delta H_{\text{ADM}}}{\delta \picantti{ij}} \pdiffq{\picantti{ij}}{\mom{a}{k}}\,. \end{align} This equation is obviously wrong: $h^\TT_{ij}$ and $\picantti{ij}$ are dynamical degrees of freedom and must not lead to additional terms in the equations of motion, even when their solutions are inserted. On the other hand, using $R(\vx{a}, \vmom{a}, h^\TT_{ij}(\vx{b}, \vmom{b}), \dot{h}^\TT_{ij}(\vx{b}, \dot{\hat{\vct{z}}}_b, \vmom{b}, \dot{\hat{\vct{p}}}_b))$ one has for the same equation of motion \begin{align} \dot{\hat{z}}_a^k &= \pdiffq{R}{\mom{a}{k}} + \underbrace{\frac{\delta R}{\delta h^\TT_{ij}}}_{=0} \pdiffq{h^\TT_{ij}}{\mom{a}{k}}\,, \end{align} which has no additional terms coming from the chain rule, because these vanish when $h^\TT_{ij}$ fulfills the equations of motion in the appropriate approximation. Hence one can obtain the equation of motion in the usual way and does not have to keep track of the field insertions. This is analogous to the construction of a Fokker action, see, e.g., \cite{Damour:EspositoFarese:1995} and references therein. A Fokker-like construction of a matter-only Lagrangian (or Routhian) cannot account for dissipative effects, see \cite{Galley:2012} for a discussion.
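The vanishing of the chain-rule terms can be made explicit in a toy model of our own (not part of the ADM formalism): for a single field degree of freedom $h$ with momentum $\pi$, a prescribed matter variable $p$, and the quadratic Hamiltonian $H = p^2/2 + \pi^2/2 + \omega^2 h^2/2 + g\,h\,p$, one has $\partial H/\partial h = -\dot{\pi} \neq 0$ on solutions, while the variational derivative of the Routhian $R = H - \pi\dot{h}$ (with $\pi = \dot{h}$ eliminated) vanishes identically:

```python
import sympy as sp

t = sp.symbols('t')
w, g = sp.symbols('omega g', positive=True)

# prescribed "matter" trajectory and the matching field solution
p = sp.cos(t)
h = -g * sp.cos(t) / (w**2 - 1)

# h solves its equation of motion  h'' + w^2 h = -g p
assert sp.simplify(sp.diff(h, t, 2) + w**2 * h + g * p) == 0

# partial derivative of H at fixed pi: does NOT vanish on the solution
dH_dh = w**2 * h + g * p
assert sp.simplify(dH_dh) != 0

# Routhian with pi = h' eliminated: R = p^2/2 - h'^2/2 + w^2 h^2/2 + g h p;
# its Euler-Lagrange derivative with respect to h vanishes identically
dR_dh = (w**2 * h + g * p) - sp.diff(-sp.diff(h, t), t)
assert sp.simplify(dR_dh) == 0
```

The symbols $\omega$ and $g$ and the trajectory $p(t)=\cos t$ are arbitrary illustrative choices.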
However, dissipative Hamiltonians in the ADM formalism can be constructed in the following way \cite{Jaranowski:Schafer:1997, Wang:Steinhoff:Zeng:Schafer:2011}: The matter variables entering the solution of $h^\TT_{ij}$ and $\picantti{ij}$ are substituted by new (``primed'') variables, and thus the binary Hamiltonian now contains four types of canonical matter variables. This procedure also prevents the occurrence of wrong contributions in the equations of motion. The primed variables will be treated as explicitly time dependent and lead to an explicitly time dependent Hamiltonian. After calculating the equations of motion for the canonical positions $\vx{a}$ and momenta $\vmom{a}$, the primed variables are again identified with the old ones, which makes another regularization procedure necessary \cite{Jaranowski:Schafer:1997}. Another approach to constructing an action principle for dissipative systems was recently suggested in \cite{Galley:2012}. \subsection{Reduction of the Routhian using the Wave Equation} The part of the Routhian labeled as TT contains $H_{\text{ADM}}^{\text{int}}$, the field kinetic part of the Hamiltonian, and the part coming from the Legendre transform (and thus only terms coming from the transverse-traceless degrees of freedom), \begin{align} R^{{\text{TT}}} &= \frac{1}{16 \pi G^{(\dim)}} \int \text{d}^d x \biggl[ \left(B_{(4)ij} + \hat{B}_{(6)ij}\right) h^\TT_{ij} - \frac{4 \pi G^{(\dim)}}{d - 1} (h^\TT_{ij})^2 \srcs{2} + \picantti{ij} C_{ij} \nonumber\\ & - \frac{2}{d - 2}\phibs{2} (h^\TT_{ij,k})^2 + \frac{1}{4} (h^\TT_{i j , k})^2 + (\picantti{i j})^2 - \dot{h}^\TT_{ij} \picantti{ij} \biggr]\,.
\end{align} Inserting $\Box h^\TT_{ij}$ and $\picantti{ij}$ from \eqref{eq:boxhtt} and \eqref{eq:pitt}, or \begin{align} h^\TT_{ij} (B_{(4)ij} + \hat{B}_{(6)ij}) & = \frac{1}{2} h^\TT_{ij} \Box h^\TT_{ij} + \frac{8\piG^{(\dim)}}{d - 1} (h^\TT_{ij})^2 \srcs{2} - \frac{4}{d - 2} h^\TT_{ij} (\phibs{2} h^\TT_{ij,k})_{,k} \nonumber\\ & + \frac{1}{2} h^\TT_{ij} \pdiffq{}{t} C_{ij} + {(\text{td})}\,, \end{align} leads to \begin{align} R^{{\text{TT}}} &= \frac{1}{16 \pi G^{(\dim)}} \int \text{d}^d x \biggl[ \frac{1}{4} h^\TT_{ij} \Box h^\TT_{ij} + \frac{4\piG^{(\dim)}}{d - 1} (h^\TT_{ij})^2 \srcs{2} + \frac{2}{d - 2}\phibs{2} (h^\TT_{ij,k})^2 \nonumber\\ & + \frac{1}{2} \pdiffq{}{t} \biggl[h^\TT_{ij} C_{ij}\biggr] - \frac{1}{4} C_{ij} \TTproj{ij}{k\ell} C_{k\ell} \biggr]\,,\label{eq:Routhian3PN} \end{align} where the last part will appear in the matter part of the final Routhian, because there is no $h^\TT_{ij}$ and no $\picantti{ij}$ or $\dot{h}^\TT_{ij}$ in $C_{ij}$. Notice that we kept a total time derivative here. If we dropped it, the $\dot{C}_{ij}$ terms would not cancel in the next step. These terms are not impossible to handle, but it is advisable to remove them to simplify the calculation. \subsection{Insertion of the Near-Zone Wave Equation for Further Simplification} Now we need to split up the first expression in the TT part of the Routhian \eqref{eq:Routhian3PN}. The near-zone expansion of the transverse-traceless part of the metric $h^\TT_{ij} = h^\TT_{(4)ij} + h^\TT_{(6)ij} + \dots$, which is explained in detail in \Sec{subsec:nzexpansion}, contributes only via $h^\TT_{(4)ij}$ and $h^\TT_{(6)ij}$ at the 3PN level. This expansion leads to \begin{align} h^\TT_{ij} \Box h^\TT_{ij} & = h^\TT_{(4)\,ij} \Box h^\TT_{(4)\,ij} + 2 h^\TT_{(4)\,ij} \Box h^\TT_{(6)\,ij} + {(\text{td})} + {(\text{ttd})}\,, \end{align} where ${(\text{ttd})}$ denotes a total time derivative.
Notice that there is a difference between $\Box h^\TT_{(6)\,ij}$ and $(\Box h^\TT_{ij})_{(6)}$ in the near-zone expansion as time derivatives raise the formal order of a field in contrast to spatial derivatives. This is a specific feature of the near-zone expansion, as in the far-zone time and space derivatives are equal in magnitude. These considerations lead to the difference in the following box operations, \begin{align} \Box h^\TT_{(6)\,ij} & = \Delta h^\TT_{(6)\,ij} - \partial_t^2 h^\TT_{(6)\,ij} \label{eq:boxhtt6}\,,\\ (\Box h^\TT_{ij})_{(6)} & = \Delta h^\TT_{(6)\,ij} - \partial_t^2 h^\TT_{(4)\,ij} \label{eq:box6htt}\,, \end{align} where $\partial_t^2 h^\TT_{(6)\,ij}$ is of formal $\cInv{8}$ order and hence the total time derivative can be neglected. From this it follows that \begin{align} h^\TT_{ij} \Box h^\TT_{ij} & = h^\TT_{(4)\,ij} \Delta h^\TT_{(4)\,ij} - h^\TT_{(4)\,ij} \partial_t^2 h^\TT_{(4)\,ij} + 2 h^\TT_{(4)\,ij} \underbrace{\Delta h^\TT_{(6)\,ij}}_{(\Box h^\TT_{ij})_{(6)} + \partial_t^2 h^\TT_{(4)\,ij}} + {(\text{td})}\,,\label{eq:httboxhtt} \end{align} such that \begin{align} (h^\TT_{ij} \Box h^\TT_{ij})_{(10)} & = 2 h^\TT_{(4)\,ij} (\Box h^\TT_{ij})_{(6)} + h^\TT_{(4)\,ij} \partial_t^2 h^\TT_{(4)\,ij} + {(\text{td})}\,, \end{align} where $(\Box h^\TT_{ij})_{(6)}$ is given by the $\cInv{6}$ part of Eq. \eqref{eq:boxhtt}. After another integration by parts this immediately leads to (we need only the leading order of $C_{ij}$ which is at $\cInv{5}$) \begin{align} R^{{\text{TT}}}_{\text{3PN}} &= \frac{1}{16 \pi G^{(\dim)}} \int \text{d}^d x \biggl[ h^\TT_{(4)ij} \hat{B}_{(6)ij} -\frac{1}{4} (\dot{h}^\TT_{(4)\,ij})^2 -\frac{4\piG^{(\dim)}}{d - 1} (h^\TT_{(4)ij})^2 \srcs{2} \nonumber\\ & -\frac{2}{d - 2}\phibs{2} (h^\TT_{(4)ij,k})^2 +\frac{1}{2} \dot{h}^\TT_{(4)ij} C_{(5)ij} -\frac{1}{4} C_{(5)ij} \TTproj{ij}{k\ell} C_{(5)k\ell} \biggr]\,. 
\end{align} Rearranging some of the terms into pure matter and pure transverse-traceless parts, one obtains the full final 3PN Routhian, \begin{subequations} \label{eq:FinalRouthian3PN} \begin{align} R^{{\text{matter}}}_{\text{3PN}} &= \int \text{d}^d x \biggl[ \srcsnontt{10} -2\phibs{2} \srcsnontt{8} + (- {\bar{S}_{(4)}{}} -4 \phibs{4} + 2 \phibs{2}^2)\srcs{6} \nonumber\\ & + (6 \phibs{2} \phibs{4} + {\bar{S}_{(4)}{}} \phibs{2} -2\phibs{2}^3) \srcs{4} + \biggl(4 \phibs{4}^2 + {\bar{S}_{(4)}{}} \phibs{4} - 8 \phibs{4} \phibs{2}^2 - {\bar{S}_{(4)}{}} \phibs{2}^2 \nonumber\\ & + 2\phibs{2}^4 \biggr)\srcs{2} +2 \vpots{3}{i} \srcisnontt{7}{i} - \frac{1}{16 \pi G^{(\dim)}} \biggl( -(\momlones{5}{ij})^2 +2\frac{3d - 4}{d - 2} \phibs{4} (\momls{3}{ij})^2 \nonumber\\ & + {\bar{S}_{(4)}{}} (\momls{3}{ij})^2 - \frac{(3d - 4)(3d - 2)}{(d - 2)^2} (\momls{3}{ij})^2 \phibs{2}^2 + \biggl(\frac{4(d - 1)}{d -2} \TTproj{ij}{k\ell} (\phibs{2} \momls{3}{k\ell})\biggr)^2 \nonumber\\ & - 4\frac{2d - 3}{d - 2} \phibs{2} \momls{3}{ij}\momlones{5}{ij} \biggr) \biggr] \,, \label{eq:FinalMatterRouthian3PN}\\ R^{{\text{TT}}}_{\text{3PN}} &= \frac{1}{16 \pi G^{(\dim)}} \int \text{d}^d x \biggl[ h^\TT_{(4)ij} \hat{B}_{(6)ij} -\frac{1}{4} (\dot{h}^\TT_{(4)\,ij})^2 -\frac{4\piG^{(\dim)}}{d - 1} (h^\TT_{(4)ij})^2 \srcs{2} \nonumber\\ & -\frac{2}{d - 2}\phibs{2} (h^\TT_{(4)ij,k})^2 -\dot{h}^\TT_{(4)ij} \pimatis{5}{ij} -4 \phibs{2} \momls{3}{ij} \dot{h}^\TT_{(4)ij} \biggr]\,,\label{eq:FinalTTRouthian3PN} \end{align} \end{subequations} where $\hat{B}_{(6)ij}$ is given by \eqref{eq:hatB6}. Note that one does not need to calculate the $h^\TT_{(6)ij}$ field, which is not fully known in closed form. An explicit form of $\hat{B}_{(6)ij}$ with all sources inserted is derived in the next section, cf.\ \eqref{eq:B6matter}. \section{Sources} \label{sec:sources} The construction of the sources linear in spin follows along the lines of \cite{Steinhoff:Schafer:2009:2}.
This requires the introduction of a local Lorentz frame% \footnote{Such frames were originally invented by \'{E}lie Cartan and named ``rep\`{e}re mobile'', which is French for ``moving frame''. In $d=3$ it is also called ``triad'' or ``dreibein''; in $d=4$ ``tetrad'' or ``vierbein''. For arbitrary integer $d$ it is called ``vielbein''.} as we want flat space Poisson brackets for the spin. In particular we also apply the Schwinger time gauge for the $(d+1)$-dimensional framefield which effectively reduces it to a $d$-dimensional spatial framefield $\triad{i}{j}$. During the following calculations we neglect the $\pimati{ij}$ terms, which are far too high in their order, such that the modified source terms are given by \cite[Eqs. (6.33) and (6.34)]{Steinhoff:2011} \begin{align} \src{} & = \sum_a \biggl[ -\nmom{a} \delta_a +\frac{\mom{a}{j}\gamma^{ji}}{\nmom{a}}\hat{A}_a^{k\ell} \triad{m}{k} e^{(m)}_{\quad \ell,i}\dl{a} \nonumber\\ & +\biggl\{ \frac{1}{2} \biggl( \frac{\triad{r}{\ell}\triad{s}{i}\mom{a}{j}}{\nmom{a}} + \gamma^{mn}\frac{\triad{r}{m}\triad{s}{i}\mom{a}{j}\mom{a}{n}\mom{a}{\ell}}{(\nmom{a})^2(m_a - \nmom{a})} \biggr) \gamma^{k\ell} \biggl( \gamma^{nj}\;\christoffel{(d)}{i}{nk} + \gamma^{in}\;\christoffel{(d)}{j}{nk} \biggr)\delta_a \nonumber\\ & - \biggl( \frac{\mom{a}{\ell}}{m_a - \nmom{a}}\gamma^{ij}\gamma^{k\ell}\triad{r}{j}\triad{s}{k}\delta_a \biggr)_{,i} \biggr\}\spin{a}{r}{s} \biggr]\,,\label{eq:sourcehamilton}\\ \src{i} & = \sum_a \biggl[ \mom{a}{i}\delta_a - \hat{A}_a^{k\ell} \triad{m}{k} e^{(m)}_{\quad \ell,i}\dl{a} + \frac{1}{2} \biggl( \gamma^{mk}\triad{r}{i}\triad{s}{k}\delta_a \nonumber\\ & - \frac{\mom{a}{\ell}\mom{a}{k}}{\nmom{a} (m_a - \nmom{a})} (\gamma^{mk} \delta^{p}_{i} + \gamma^{mp} \delta^{k}_{i}) \gamma^{q\ell} \triad{r}{q}\triad{s}{p}\delta_a \biggr)_{,m}\spin{a}{r}{s} \biggr]\label{eq:sourcemomentum}\,, \end{align} where $m_a$ is the mass of the $a$th object, $\vmom{a}$ its canonical momentum, $\spin{a}{r}{s}$ its canonical 
spin, $\nmom{a} = -\sqrt{m_a^2 + \gamma^{ij}\mom{a}{i}\mom{a}{j}}$, and $\dl{a} = \delta(\vct{x} - \vx{a})$ is the $d$-dimensional Dirac delta located at $\vct{x}=\vx{a}$. Furthermore \begin{align} \hat{A}_a^{k\ell} &= \gamma^{ik}\gamma^{j\ell} \left(\frac{1}{2}\hat{S}_{a\,ij} + \frac{m_a \mom{a}{(i}(nS_a)_{j)}}{\nmom{a} (m_a - \nmom{a})}\right)\label{eq:Ahat}\,,\\ (nS_a)_i &= -\frac{\mom{a}{j}\gamma^{jk}\hat{S}_{a\,ki}}{m_a}\label{eq:nS}\,, \end{align} to linear order in spin. However, the $\hat{A}_a^{k\ell}$-terms do not contribute to the expanded source expressions at the orders considered here. The matter position and matter momentum variables are canonically conjugate to each other, namely \begin{align} \{\xa{a}{i},\mom{b}{j}\} &= \delta_{ij} \delta_{ab}\,, \end{align} and the spin variables also fulfill canonical Poisson bracket relations, namely \begin{align} \{\spin{a}{i}{j}, \spin{a}{k}{\ell}\} = \delta_{ik} \spin{a}{j}{\ell} - \delta_{i\ell} \spin{a}{j}{k} - \delta_{jk} \spin{a}{i}{\ell} + \delta_{j\ell} \spin{a}{i}{k}\,, \end{align} where the canonical spin tensor $\spin{a}{i}{j}$ is related to the canonical spin vector $\hat{\vct{S}}_{a}$ via $\spin{a}{i}{j} = \varepsilon_{ijk} \hat{S}_{a\,(k)}$ and $\varepsilon_{ijk}$ is the Levi-Civita symbol. The appropriate Poisson brackets for the canonical spin vector are given by \begin{align} \{\hat{S}_{a\,(i)}, \hat{S}_{a\,(j)}\} = \varepsilon_{ijk}\hat{S}_{a\,(k)}\,. \end{align} \subsection{Framefield Expansion} We choose to work within a symmetric framefield gauge $\triad{i}{j}=\triad{j}{i}$ \cite{Kibble:1963}, so the dreibein can be written as a matrix square root of the metric, symbolically $\triad{i}{j} = \sqrt{\gamma_{ij}}$, or more explicitly \begin{align} \triad{i}{k} \triad{k}{j} = \gamma_{ij}\label{eq:dreibein}\,. \end{align} Notice that $\gamma_{ij}$ is positive definite and we require the same for $\triad{i}{j}$, such that it is unique.
The second relation, \eqref{eq:dreibein}, can be inverted order by order, namely \begin{subequations} \begin{align} \triads{0}{i}{k} \triads{0}{k}{j} &= \gamma_{(0)\,ij} \stackrel{{\text{ADMTT}}}{\Rightarrow} \triads{0}{i}{j} = \delta_{ij}\,,\label{eq:dreibeinzero}\\ \triads{2}{i}{k} \triads{0}{k}{j} + \triads{0}{i}{k} \triads{2}{k}{j} &= \gamma_{(2)\,ij} \stackrel{{\text{ADMTT}}}{\Rightarrow} \triads{2}{i}{j} = \frac{1}{2}\gamma_{(2)\,ij} = \frac{2}{d-2} \phibs{2} \delta_{ij}\,,\\ \vdots \nonumber \end{align} \end{subequations} and at the end of the day one gets \begin{subequations} \begin{align} \triads{0}{i}{j} & = \delta_{ij}\,, \\ \triads{2}{i}{j} & = \frac{2}{d - 2} \phibs{2} \delta_{ij}\,, \\ \triads{4}{i}{j} & = \biggl(\frac{2}{d - 2} \phibs{4} - \frac{d - 4}{(d - 2)^2} \phibs{2}^2\biggr) \delta_{ij} + \frac{1}{2} h^\TT_{ij} \,, \\ \triads{6}{i}{j} & = \biggl(\frac{2}{d - 2} \phibs{6} - \frac{d - 4}{(d - 2)^2} \phibs{2} \phibs{4} + \frac{2}{3} \frac{(d - 4)(d - 3)}{(d - 2)^3} \phibs{2}^3\biggr) \delta_{ij} - \frac{1}{d - 2} \phibs{2} h^\TT_{ij}\,, \end{align} \end{subequations} for the framefield perturbations. The antisymmetric part of the framefield (which is zero in this gauge) can be interpreted as rotational degrees of freedom in the choice of the local frame. Such a rotation does not change the length of the spins. Recall that an antisymmetric matrix is an infinitesimal generator of rotations and in $d$ dimensions has $\tfrac{1}{2}\,d (d-1)$ independent entries. This is exactly the number of rotation planes in $d$ dimensions. 
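The recursion behind these coefficients can be checked symbolically: writing $\gamma = \delta + \gamma_{(2)} + \gamma_{(4)} + \dots$ and solving \eqref{eq:dreibein} order by order yields $e_{(2)} = \tfrac{1}{2}\gamma_{(2)}$ and $e_{(4)} = \tfrac{1}{2}\gamma_{(4)} - \tfrac{1}{8}\gamma_{(2)}^2$, independently of the specific form of the perturbations. A small sketch with arbitrary symmetric matrices standing in for the metric perturbations (illustrative only; $\epsilon$ is a formal bookkeeping parameter):

```python
import sympy as sp

eps = sp.symbols('epsilon')
I = sp.eye(3)

# arbitrary symmetric perturbations standing in for gamma_(2) and gamma_(4)
g2 = sp.Matrix([[2, 1, 0], [1, 3, 1], [0, 1, 5]])
g4 = sp.Matrix([[1, 0, 2], [0, 4, 1], [2, 1, 2]])

gamma = I + eps**2 * g2 + eps**4 * g4

# order-by-order solution of e.e = gamma with symmetric e:
# e_(0) = I, e_(2) = g2/2, e_(4) = g4/2 - g2**2/8
e = I + eps**2 * g2 / 2 + eps**4 * (g4 / 2 - g2**2 / 8)

residual = (e * e - gamma).applyfunc(sp.expand)
for entry in residual:
    for order in range(6):        # residual starts only at order eps^6
        assert entry.coeff(eps, order) == 0
```

Inserting the ADMTT metric coefficients into this recursion reproduces the framefield perturbations listed above.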
\subsection{Constraint Sources} After performing the expansion of the sources \eqref{eq:sourcehamilton} and \eqref{eq:sourcemomentum} in powers of $\cInv{1}$ and also expanding the metric-framefield relation \eqref{eq:dreibein}, we are able to write down the appropriate post-Newtonian contributions to the source of the Hamilton constraint, namely \begin{subequations} \label{eq:PNSources} \begin{align} \srcs{2} & = \sum_a m_a \dl{a}\,,\nonumber\\ \srcs{4} & = \sum_a \biggl[ \frac{\vmom{a}^2}{2 m_a} \dl{a} + \frac{1}{2 m_a} \mom{a}{i} \spin{a}{i}{j} \dl{a}{}_{,j} \biggr]\,,\\ \srcs{6} & = \sum_a \biggl[ -\frac{(\vmom{a}^2)^2}{8 m_a^3} \dl{a} - \frac{2}{d-2} \frac{\vmom{a}^2}{m_a} \phibs{2}\dl{a} + \frac{2}{d - 2} \frac{\mom{a}{i}}{m_a}\spin{a}{i}{j} \phibs{2}_{,j}\dl{a} - \frac{\vmom{a}^2}{8 m_a^3} \mom{a}{i}\spin{a}{i}{j} \dl{a}{}_{,j} \nonumber\\ & - \frac{2}{d - 2} \frac{\mom{a}{i}}{m_a} \spin{a}{i}{j} (\phibs{2} \dl{a})_{,j} \biggr]\,,\\ \srcs{8} & = \sum_a \biggl[ \frac{(\vmom{a}^2)^3}{16 m_a^5} \dl{a} + \frac{1}{d - 2} \frac{(\vmom{a}^2)^2}{m_a^3}\phibs{2}\dl{a} + \frac{d + 2}{d - 2} \frac{\vmom{a}^2}{m_a} \phibs{2}^2 \dl{a} - \frac{2}{d - 2} \frac{\vmom{a}^2}{m_a} \phibs{4} \dl{a} \nonumber\\ & - \frac{1}{2 m_a} \mom{a}{i} \mom{a}{j} h^\TT_{ij}\dl{a} - \frac{1}{d-2} \frac{\vmom{a}^2}{m_a^3} \mom{a}{i}\spin{a}{i}{j} \phibs{2}_{,j}\dl{a} - \frac{2(d+2)}{(d-2)^2} \frac{\mom{a}{i}}{m_a} \spin{a}{i}{j} \phibs{2}\phibs{2}_{,j} \dl{a} \nonumber\\ & + \frac{2}{d - 2} \frac{\mom{a}{i}}{m_a} \spin{a}{i}{j} \phibs{4}_{,j} \dl{a} + \frac{1}{2 m_a} \mom{a}{i} \spin{a}{j}{k} h^\TT_{ij,k} \dl{a} \biggr] +\sum_a \partial_j \biggl[ \frac{(\vmom{a}^2)^2}{16 m_a^5} \mom{a}{i}\spin{a}{i}{j}\dl{a} \nonumber\\& + \frac{1}{d-2}\frac{\vmom{a}^2}{m_a^3}\mom{a}{i}\spin{a}{i}{j}\phibs{2}\dl{a} + \frac{d+2}{(d-2)^2}\frac{\mom{a}{i}}{m_a}\spin{a}{i}{j}\phibs{2}^2\dl{a} - \frac{2}{d-2}\frac{\mom{a}{i}}{m_a}\spin{a}{i}{j}\phibs{4}\dl{a} \nonumber\\ & + \frac{1}{4 m_a} 
\mom{a}{i}\spin{a}{k}{i}h^\TT_{jk}\dl{a} - \frac{1}{4 m_a} \mom{a}{i}\spin{a}{k}{j}h^\TT_{ik}\dl{a} \biggr]\,,\\ \srcs{10} & = \sum_a\biggl[ - \frac{5 (\vmom{a}^2)^4}{128 m_a^7}\dl{a} - \frac{3}{4(d-2)}\frac{(\vmom{a}^2)^3}{m_a^5} \phibs{2}\dl{a} - \frac{d+6}{2(d-2)^2}\frac{(\vmom{a}^2)^2}{m_a^3}\phibs{2}^2\dl{a} \nonumber\\ & - \frac{2d(d+2)}{3(d-2)^3}\frac{\vmom{a}^2}{m_a}\phibs{2}^3\dl{a} + \frac{1}{d-2}\frac{(\vmom{a}^2)^2}{m_a^3}\phibs{4}\dl{a} + \frac{2(d+2)}{(d-2)^2}\frac{\vmom{a}^2}{m_a}\phibs{2}\phibs{4}\dl{a} \nonumber\\ & - \frac{2}{d-2}\frac{\vmom{a}^2}{m_a}\phibs{6}\dl{a} + \frac{\vmom{a}^2}{4 m_a^3}\mom{a}{i}\mom{a}{j}h^\TT_{ij}\dl{a} + \frac{4}{d-2}\frac{\mom{a}{i}\mom{a}{j}}{m_a}h^\TT_{ij}\phibs{2}\dl{a} \nonumber\\ & + \frac{3}{4(d-2)}\frac{(\vmom{a}^2)^2}{m_a^5}\mom{a}{i}\spin{a}{i}{j}\phibs{2}_{,j}\dl{a} + \frac{d+6}{(d-2)^2}\frac{\vmom{a}^2}{m_a^3}\mom{a}{i}\spin{a}{i}{j}\phibs{2}\phibs{2}_{,j}\dl{a} \nonumber\\ & + \frac{2d(d+2)}{(d-2)^3}\frac{\mom{a}{i}}{m_a}\spin{a}{i}{j}\phibs{2}^2\phibs{2}_{,j}\dl{a} - \frac{1}{d-2}\frac{\vmom{a}^2}{m_a^3}\mom{a}{i}\spin{a}{i}{j}\phibs{4}_{,j}\dl{a} \nonumber\\ & - \frac{2(d+2)}{(d-2)^2}\frac{\mom{a}{i}}{m_a}\spin{a}{i}{j}(\phibs{2}\phibs{4})_{,j}\dl{a} + \frac{2}{d-2}\frac{\mom{a}{i}}{m_a}\spin{a}{i}{j}\phibs{6}_{,j}\dl{a} \nonumber\\ & - \frac{\vmom{a}^2}{4 m_a^3}\mom{a}{i}\spin{a}{j}{k}h^\TT_{ij,k}\dl{a} - \frac{4}{d-2}\frac{\mom{a}{i}}{m_a}\spin{a}{j}{k}h^\TT_{ij,k}\phibs{2}\dl{a} \nonumber\\ & + \frac{3}{d-2}\frac{\mom{a}{i}}{m_a}h^\TT_{ik}\spin{a}{j}{k}\phibs{2}_{,j}\dl{a} - \frac{1}{d-2}\frac{\mom{a}{i}}{m_a}\spin{a}{i}{k}h^\TT_{jk}\phibs{2}_{,j}\dl{a} \biggr] +{(\text{td})}\,. \end{align} \end{subequations} In $\srcs{10}$ there is a $\phibs{6}$ term left. But all occurrences of $\phibs{6}$ can be cast into the form $-\frac{4}{d-2} \srcs{4}\phibs{6}$ which we integrate by parts using \eqref{eq:phi6src4PI}. 
Then $\phibs{6}$ disappears and gets substituted by \begin{align} -\frac{4}{d-2} \srcs{4}\phibs{6} & = -\frac{4}{d-2}\biggl[ \sum_a \biggl\{ \frac{(\vmom{a}^2)^2}{16 m_a^3}{\bar{S}_{(4)}{}}\dl{a} +\frac{d+2}{4(d-2)}\frac{\vmom{a}^2}{m_a}\phibs{2}{\bar{S}_{(4)}{}}\dl{a} \nonumber\\ & +\frac{m_a}{2}(\phibs{4} -\phibs{2}^2 ){\bar{S}_{(4)}{}}\dl{a} +\frac{\vmom{a}^2 \mom{a}{i}\spin{a}{i}{j}}{16 m_a^3} {\bar{S}_{(4)}{}}\dl{a}{}_{,j} \nonumber\\ & +\frac{d+2}{4(d-2)} \frac{\mom{a}{i}\spin{a}{i}{j}}{m_a} \phibs{2}{\bar{S}_{(4)}{}}\dl{a}{}_{,j} \biggr\} \nonumber\\ & -\frac{1}{16\piG^{(\dim)}}\biggl\{\frac{1}{2}(\momls{3}{ij})^2{\bar{S}_{(4)}{}} - 2h^\TT_{ij}{\bar{S}_{(4)}{}}\phibs{2}_{,ij}\biggr\} \biggr] +{(\text{td})}\label{eq:srcH4phi6}\,. \end{align} Furthermore we are also able to write down the sources for the momentum constraint in their full form, which are given by \begin{subequations} \label{eq:PNSourcesMom} \begin{align} \srcis{3}{i} & = \sum_a \biggl[\mom{a}{i} \dl{a} + \frac{1}{2} (\spin{a}{i}{j} \dl{a})_{,j}\biggr]\,,\\ \srcis{5}{i} & = \frac{1}{2} \sum_a \biggl[ -\frac{\mom{a}{k}}{2 m_a^2} (\mom{a}{j}\spin{a}{i}{k} + \mom{a}{i} \spin{a}{j}{k}) \dl{a}\biggr]_{,j}\,,\\ \srcis{7}{i} & = \frac{1}{2} \sum_a \biggl[ \biggl(-\frac{1}{2}(h^\TT_{jk}\spin{a}{i}{k} + h^\TT_{ik}\spin{a}{j}{k}) + \frac{3 \vct{\hat{\canmomp}}_a^2}{8 m_a^4}\mom{a}{k}(\mom{a}{j}\spin{a}{i}{k} + \mom{a}{i} \spin{a}{j}{k}) \nonumber\\ & +\frac{2}{d-2}\frac{\mom{a}{k}}{m_a^2}(\mom{a}{j}\spin{a}{i}{k} + \mom{a}{i} \spin{a}{j}{k})\phibs{2}\biggr) \dl{a}\biggr]_{,j}\,. \end{align} \end{subequations} With the source expressions \eqref{eq:PNSources} including \eqref{eq:srcH4phi6} and \eqref{eq:PNSourcesMom} from above, we may express $B_{(4)ij}$ and $\hat{B}_{(6)ij}$ in terms of the matter variables. 
They are given by \begin{subequations} \begin{align} B_{(4)ij} &= 16\piG^{(\dim)}\sum_a \biggl[ -\frac{\mom{a}{i}\mom{a}{j}}{2m_a}\dl{a}-\frac{\mom{a}{i}\spin{a}{j}{k}}{2m_a}\dl{a,k} \biggr] - \frac{4(d-1)}{d-2}\phibs{2}_{,i}\phibs{2}_{,j}\,,\label{eq:B4matter}\\ \hat{B}_{(6)ij} & = 16\piG^{(\dim)} \sum_a \biggl[ \frac{\vmom{a}^2}{4 m_a^3} \mom{a}{i}\mom{a}{j}\dl{a} + \frac{d+2}{d-2} \frac{\mom{a}{i}\mom{a}{j}}{m_a}\phibs{2}\dl{a} \nonumber\\ & + \frac{\vmom{a}^2}{4 m_a^3}\mom{a}{i}\spin{a}{j}{k}\dl{a,k} + \frac{d+2}{d-2} \frac{\mom{a}{i}}{m_a}\spin{a}{j}{k}(\phibs{2}\dl{a})_{,k} \nonumber\\ & + \frac{d+4}{2(d-2)} \frac{\mom{a}{i}}{m_a}\spin{a}{k}{j} \phibs{2}_{,k} \dl{a} - \frac{d}{2(d-2)} \frac{\mom{a}{k}}{m_a}\spin{a}{k}{j} \phibs{2}_{,i} \dl{a} \nonumber\\ & + \frac{1}{2} \left(\vpots{3}{j}_{,k} + \vpots{3}{k}_{,j}\right) \spin{a}{k}{i} \dl{a} \biggr] \nonumber\\ & + 2\momls{3}{ik}\left(\momls{3}{k}_{,j}-\momls{3}{j}_{,k}\right) + \frac{d-2}{d-1}\momls{3}{ij}\momls{3}{k}_{,k} + 2\momls{3}{ij}_{,k}\vpots{3}{k} \nonumber\\ & + 8 \frac{2d-3}{d-2} \phibs{4}\phibs{2}_{,ij} + 8 \frac{3d-4}{(d-2)^2} \phibs{2}\phibs{2}_{,i}\phibs{2}_{,j} + 4 \frac{d-4}{d-2} {\bar{S}_{(4)}{}} \phibs{2}_{,ij}\label{eq:B6matter}\,. \end{align} \end{subequations} We expressed $\hat{B}_{(6)ij}$ in a more convenient way now, since the $\momls{3}{i}$ vector potential has a much more simple structure than the $\vpots{3}{i}$ vector potential. \subsection{Matter Correction to the Canonical Field Momentum} \label{sec:matmom} Since we eliminated the transverse-traceless part of the canonical field momentum via using the relation between canonical field momentum and velocity of the $h^\TT_{ij}$ field \eqref{eq:pitt}, there are terms containing matter parts of the field momentum left in the Routhian \eqref{eq:FinalTTRouthian3PN}. These can be calculated from e.g. \cite[Eq. 
(2.34)]{Steinhoff:Wang:2009} where $\pimati{ij}$ is given by \begin{align} \pimati{ij} & = 16 \pi G^{(\dim)} \sum_a \pia{a}{ij}\dl{a}\label{eq:pimatter}\,. \end{align} From \cite[Eqs. (3.33) and (3.34)]{Steinhoff:Wang:2009} one gets the closed form expression \begin{align} \pia{a}{ij} & = \frac{1}{2} \gamma^{ik}\gamma^{j\ell} \frac{m_a \mom{a}{(k} nS_{a\,\ell)}}{\nmom{a} (m_a - \nmom{a})}\,. \end{align} The part containing the antisymmetric $\hat{A}^{[ij]}_a$ was neglected because it is of order $\cInv{7}$ ($B^{ij}_{k\ell}$ starts at $\cInv{4}$), which is beyond the order needed here: we only need $\pia{a}{ij}$ up to the order $\cInv{5}$. A power counting (see beginning of subsection \ref{subsub:ordercounting}) tells us that we only have to take the leading-order approximation of the above expression, which reads \begin{align} \pia{(5)a}{ij} & = \frac{\mom{a}{k}}{8 m_a^2} \left(\mom{a}{i}\spin{a}{k}{j} + \mom{a}{j}\spin{a}{k}{i}\right)\label{eq:pimatter5}\,. \end{align} This expression has a vanishing trace, which makes it obvious that we can neglect $\pihati{ij}$, Eq.\ \eqref{eq:pitrfix}, at the considered order. \section{Field Solutions and Integration}\label{sec:fieldsolutions} After obtaining expressions for the sources \eqref{eq:PNSources}, \eqref{eq:PNSourcesMom} and the Routhian \eqref{eq:FinalRouthian3PN}, which yields the Hamiltonian in the matter degrees of freedom after an integration, we need the field solutions to be inserted into the Routhian. These can be derived by solving the lower-order constraint equations in the case of the post-Newtonian potential and the non-propagating parts of the field momentum (see \ref{subsubsec:solconstraints}). For the propagating degrees of freedom the wave equation has to be solved (see \ref{subsec:solwaveequation}). 
\subsection{\texorpdfstring{$d$}{d}-dimensional Solutions of the Constraints}\label{subsubsec:solconstraints} With $K = \frac{\Gamma\left(\frac{d}{2} - 1\right)}{\pi^{\frac{d}{2} - 1}} G^{(\dim)}$, the Hamilton constraint equations \eqref{subeq:phi2constr}, \eqref{subeq:phi4constr}, the momentum constraint equation \eqref{subeq:pi3constr}, and the various transformation formulas \eqref{eq:momentumtildepitilde}, \eqref{eq:momentumtildevpot}, and \eqref{eq:pitildevpot} relating the longitudinal field momentum and its corresponding vector potentials, we find using the inverse Laplacians listed in the {Appendix} \ref{sec:invlaptechnique} that \begin{align} \phis{2} & = 4 K \sum_a \frac{m_a}{r_a^{d - 2}}\label{eq:phi2sol}\,,\\ \phis{4} & = 4 K \sum_a \biggl[ \frac{\vmom{a}^2}{2 m_a} \frac{1}{r_a^{d-2}} + \frac{\mom{a}{i}\spin{a}{i}{j}}{2 m_a} \left(\frac{1}{r_a^{d-2}}\right)_{,j} - K\frac{d-2}{d-1} \sum_{b\ne a} \frac{m_a m_b}{r_{ab}^{d-2} r_b^{d-2}} \biggr]\label{eq:phi4sol}\,, \\ \pipots{3}{i} & = K \sum_a \biggl[ 2\frac{\mom{a}{i}}{r^{d - 2}_a} +\spin{a}{i}{j}\left(\frac{1}{r_a^{d-2}}\right)_{,j} \biggr]\label{eq:pitilde3sol}\,,\\ \vpots{3}{i} & = K \sum_a \left[ 2\frac{\mom{a}{i}}{r^{d - 2}_a} - \frac{d - 2}{2(d - 1)(4 - d)} \mom{a}{j} \left(\frac{1}{r^{d - 4}_a}\right)_{,ij} + \spin{a}{i}{j}\left(\frac{1}{r_a^{d-2}}\right)_{,j} \right]\label{eq:vpot3sol}\,,\\ \momls{3}{ij} & = K \sum_a \biggl[ 2\mom{a}{i} \left(\frac{1}{r^{d - 2}_a}\right)_{,j} + 2\mom{a}{j} \left(\frac{1}{r^{d - 2}_a}\right)_{,i} - \frac{d - 2}{(d - 1)(4 - d)} \mom{a}{k} \left(\frac{1}{r^{d - 4}_a}\right)_{,ijk} \nonumber\\ & - \frac{2}{d - 1} \delta_{ij} \mom{a}{k} \left(\frac{1}{r^{d - 2}_a}\right)_{,k} + \spin{a}{i}{k}\left(\frac{1}{r_a^{d-2}}\right)_{,kj} + \spin{a}{j}{k}\left(\frac{1}{r_a^{d-2}}\right)_{,ki} \biggr]\label{eq:momentumtilde3sol}\,. \end{align} Remember that the momentum constraint can be solved for $\pitildei{i}$ with the help of \eqref{eq:pisolve}. 
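As a small cross-check of these solutions (our own sketch, not part of the original derivation), the radial identities underlying \eqref{eq:phi2sol}--\eqref{eq:momentumtilde3sol} can be verified symbolically: $1/r^{d-2}$ is harmonic away from the source, and $r^{4-d}$ is, up to the factor $2(4-d)$, the inverse Laplacian of $r^{2-d}$:

```python
import sympy as sp

r, d = sp.symbols('r d', positive=True)

# Radial part of the flat-space Laplacian in d dimensions acting on a
# spherically symmetric function f(r):  Delta f = f'' + (d-1)/r * f'.
def radial_laplacian(f):
    return sp.diff(f, r, 2) + (d - 1)/r*sp.diff(f, r)

# 1/r^(d-2) is harmonic away from the particle, so the 1/r_a^(d-2)
# building blocks solve the constraints with pure delta sources.
assert sp.simplify(radial_laplacian(r**(2 - d))) == 0

# The 1/r^(d-4) building block obeys Delta r^(4-d) = 2(4-d) r^(2-d),
# i.e. it is the inverse Laplacian of 2(4-d)/r^(d-2).
assert sp.simplify(radial_laplacian(r**(4 - d)) - 2*(4 - d)*r**(2 - d)) == 0
```

The second identity is the origin of the $1/(4-d)$ prefactors accompanying the $1/r_a^{d-4}$ terms in \eqref{eq:vpot3sol} and \eqref{eq:momentumtilde3sol}.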
The more complicated fields like $\momls{5}{i}$ or $\phis{6}$ have so far been obtained only in $d=3$ dimensions \cite{Jaranowski:Schafer:1998}. Also, the leading order of the transverse-traceless part of the metric is only partially known in $d$ dimensions. We will discuss these issues in the following subsection. \subsection{Solutions of the Wave Equation} \label{subsec:solwaveequation} Consider now the wave equation \eqref{eq:boxhtt} for $h^\TT_{ij}$. There $h^\TT_{ij}$ is given in terms of a post-Newtonian approximate source $S_{ij}$, namely \begin{align} \Boxh^\TT_{ij} &= \TTproj{ij}{k\ell} (S_{(4)k\ell} + S_{(6)k\ell})\,. \end{align} By taking into account the near-zone expansion of $h^\TT_{ij}$, \eqref{eq:nzexpansion}, one gets \begin{align} h^\TT_{(4)ij} &= \TTproj{ij}{k\ell} \Delta^{-1} S_{(4)k\ell}\,,\label{eq:htt4}\\ h^\TT_{(6)ij} &= \TTproj{ij}{k\ell} \Delta^{-1} \left(S_{(6)k\ell} + \Delta^{-1} \partial_t^2 S_{(4)k\ell}\right) = \Delta^{-1}\left(\TTproj{ij}{k\ell} S_{(6)k\ell} + \ddot{h}^\TT_{(4)ij}\right)\,,\label{eq:htt6} \end{align} for the leading-order and next-to-leading-order expressions of $h^\TT_{ij}$ in the near zone. Here the sources are given by \begin{align} S_{(4)ij} &= 2 B_{(4)ij}\,,\\ S_{(6)ij} &= 2 \hat{B}_{(6)ij} - \frac{16\piG^{(\dim)}}{d - 1} \srcs{2} h^\TT_{(4)ij} + \frac{8}{d - 2} (\phibs{2} h^\TT_{(4)ij,k})_{,k} - \pdiffq{}{t} C_{(5)ij}\label{eq:sourcehtt6}\,, \end{align} where $B_{(4)ij}$ and $\hat{B}_{(6)ij}$ are given by \eqref{eq:B4matter} and \eqref{eq:B6matter}, and $C_{(5)ij}$ by \eqref{eq:Cmatter} via \eqref{eq:pimatter} and \eqref{eq:pimatter5}. Notice that we removed the post-Newtonian order-counting parameter $c$ in \eqref{eq:htt4} and \eqref{eq:htt6}. Fortunately, there is no need to evaluate \eqref{eq:htt6} here; in fact, the $h^\TT_{(4)ij}$ dependence of \eqref{eq:sourcehtt6} would render the calculation almost impossible. 
Then the solution of the wave equation at leading order and linear in spin is given by \begin{align} h^\TT_{(4)ij} &= 4 K \sum_a \biggl[ \frac{\mom{a}{i}\mom{a}{j}}{ m_a} \frac{1}{r_a^{d-2}} - \frac{1}{4-d}\frac{\mom{a}{k}\mom{a}{(i}}{ m_a} \left(\frac{1}{r_a^{d-4}}\right)_{,j)k} \nonumber\\ & -\frac{1}{d-1}\delta_{ij} \biggl(\frac{\vmom{a}^2}{m_a} \frac{1}{r_a^{d-2}} -\frac{1}{2(4-d)}\frac{\mom{a}{k}\mom{a}{\ell}}{m_a}\left(\frac{1}{r_a^{d-4}}\right)_{,k\ell}\biggr) \nonumber\\ & +\frac{1}{2(d-1)(4-d)}\frac{\vmom{a}^2}{m_a}\left(\frac{1}{r_a^{d-4}}\right)_{,ij} \nonumber\\ & + \frac{d-2}{8(d-1)(d-4)(d-6)}\frac{\mom{a}{k}\mom{a}{\ell}}{m_a} \left(\frac{1}{r_a^{d-6}}\right)_{,ijk\ell} \nonumber\\ & +\frac{\mom{a}{k}\spin{a}{\ell}{m}}{m_a}\biggl\{ \left(\delta_{k(i}\delta_{j)\ell}\partial_m -\frac{1}{d-1}\delta_{ij}\delta_{k\ell}\partial_m\right)\frac{1}{r_a^{d-2}} \nonumber\\ & +\frac{1}{2(4-d)}\left(\frac{1}{d-1}\delta_{k\ell}\partial_i \partial_j \partial_m - \delta_{\ell(i}\partial_{j)}\partial_k\partial_m\right)\frac{1}{r_a^{d-4}} \biggr\} \biggr] + h^\TT_{(4\,0)ij}\label{eq:htt4sol}\,, \end{align} where $h^\TT_{(4\,0)ij}$ is the momentum (and spin-) independent part of the transverse-traceless part of the metric which is generated by the TT-projection of $\Delta^{-1}(\phibs{2}{}_{,i}\phibs{2}{}_{,j})$ and is only known in $d=3$ see \cite[Eq. 
(A20)]{Jaranowski:Schafer:1998}, namely \begin{align} h^\TT_{(4\,0)ij} &= G^2 \sum_a \sum_{b\ne a} m_a m_b \biggl\{ -\frac{4}{s_{ab}}\left(\frac{1}{r_{ab}} + \frac{1}{s_{ab}}\right)\nxa{ab}{i}\nxa{ab}{j} +\frac{1}{4}\left(\frac{r_a + r_b}{r_{ab}^3} + \frac{12}{s_{ab}^2}\right)\nxa{a}{i}\nxa{b}{j} \nonumber\\ & +2\left(\frac{2}{s_{ab}^2} - \frac{1}{r_{ab}^2}\right)(\nxa{a}{i}\nxa{ab}{j}+\nxa{a}{j}\nxa{ab}{i}) \nonumber\\ & +\biggl[ \frac{5}{8 r_{ab} r_a} -\frac{1}{8 r_{ab}^3} \left(\frac{r_b^2}{r_a} + 3 r_a\right) - \frac{1}{s_{ab}} \left(\frac{1}{r_a} + \frac{1}{s_{ab}}\right) \biggr] \nxa{a}{i}\nxa{a}{j} \nonumber\\ & +\biggl[ \frac{5 r_a}{8 r_{ab}^3} \left(\frac{r_a}{r_b} - 1\right) -\frac{17}{8 r_{ab} r_a} +\frac{1}{2 r_a r_b} +\frac{1}{s_{ab}} \left(\frac{1}{r_a} + \frac{4}{r_{ab}}\right) \biggr] \delta_{ij} \biggr\}\,.\label{eq:htt40} \end{align} Both solutions were also obtained by using the inverse Laplacians in the {Appendix} \ref{sec:invlaptechnique}. Obviously, most parts of $h^\TT_{(6)ij}$ are of the same type as $h^\TT_{(4\,0)ij}$ (see \eqref{eq:htt6} and \eqref{eq:sourcehtt6}). That is the reason why we eliminated it from the integrands. \subsection{Distributional Contributions}\label{subsec:distcontrib} As long as the Riesz-kernel method is not used (where a Dirac delta is replaced by the so-called Riesz kernel) one has to take care of delta parts when differentiating certain functions. Consider for example the field $\phis{2} = 4 K\sum_a \tfrac{m_a}{r_a^{d-2}}$ and differentiate it twice. The ordinary derivative produces no delta-function contribution, and its trace vanishes away from the particles, \begin{align} \delta^{ij} \partial_i^{\text{ord}} \partial_j^{\text{ord}} \phis{2} & = 0\,. \end{align} But as we already know from the constraint equation, the distributional second derivative of $\phis{2}$ must contain a delta part, \begin{align} \partial_i \partial_j \phis{2} & = \partial_i^{\text{ord}} \partial_j^{\text{ord}} \phis{2} -16\piG^{(\dim)}\frac{1}{d} \delta_{ij} \sum_a m_a \dl{a}\,. 
\end{align} Fortunately, there is a result from the theory of distributions \cite{Jaranowski:Schafer:1998} which defines the so-called distributional derivative \begin{align} \partial_i f &= \partial_i^{\text{ord}} f + \frac{(-1)^k}{k!} \frac{\partial^k \delta(\vct{x})}{\partial x^{i_1} \dots \partial x^{i_k}} \oint_\Sigma \text{d}\Omega_{d-1}\, n^i f x^{i_1} \dots x^{i_k}\,. \end{align} Here $f$ is a positively homogeneous function of degree $\lambda$ (i.e., $f(a\vct{x}) = a^\lambda f(\vct{x})$ for $a\ge0$) and $k := -\lambda + 1 - d$ is a non-negative integer. This means $f$ must decay with an exponent linear in the dimension $d$, which does not apply to fields generated by a Riesz-kernel type source (see {Appendix} \ref{subsec:rieszkernel}). There are not only distributional contributions from the field derivatives, but also from the fields themselves (some parts of the higher-order field momenta). \subsection{Ultraviolet-Analysis}\label{subsec:UVana} As for gauge theories in quantum field theory, dimensional regularization should be used in classical general relativity \cite{Damour:Jaranowski:Schafer:2001}. Therefore, all integrals must first be evaluated in generic $d$ dimensions, and then the limit $d \rightarrow 3$ is taken. However, certain integrals are very difficult to solve for generic $d$. In practice one therefore evaluates the integrals in $d=3$ first and then determines possible additional contributions that arise from dimensional regularization. That is, one analyses the $d$-dependence of the integrals close to the singular sources, i.e., in the UV. (Regularization is only important close to singularities.) This is the purpose of the present section. Other necessary integration techniques are provided in Appendix \ref{sec:integrationtechniques}. The UV-analysis in generic dimension $d$ is a necessary ingredient to correctly derive the Hamiltonians at formal 3PN level. 
This includes the 3PN point-mass Hamiltonian (see \cite{Damour:Jaranowski:Schafer:2001}) and the NNLO spin-orbit and spin(1)-spin(2) Hamiltonians considered in the present article. It would also be necessary for the yet unknown NNLO spin(1)-spin(1) Hamiltonian. For integrals only obtained for $d=3$ one has no control over poles in $1/(d-3)$. There are two different problems with such poles: First, the poles do not appear in pure $d=3$ calculations and thus lead to ambiguous results after integrations by parts of integrands containing such poles (in one representation there are poles, in another maybe not). This comes from the fact that some of the pole terms can also give finite contributions which must be added to the $d=3$ result. Second, the poles have to cancel each other in order to extract a finite result from the $d$-dimensional integration in the limit $d\to3$ (or one must be able to absorb all poles through a renormalization procedure as in \cite{Blanchet:Damour:EspositoFarese:2004}). Both problems are well known and also discussed in \cite{Damour:Jaranowski:Schafer:2001}. In the following we will provide some more technical details on how to perform the UV-analysis depending on the structure of the integrand. All integrals involving $h^\TT_{(4\,0)\,ij}$, $\TTproj{ij}{k\ell}(\phibs{2}\momls{3}{k\ell})$ and the higher-order potentials such as $\phibs{6}$ or $\momls{5}{i}$ are not available in $d$ dimensions and were calculated here only in $d=3$ dimensions. In all other integrals the limit $d\to3$ is straightforward, although integrations in $d$ dimensions sometimes involve around one million terms on which the limit must be performed. In the case of the TT-projection of $\phibs{2}\momls{3}{ij}$, the fields are available in $d$ dimensions. Hence, one can split up this part of the Hamiltonian into one-particle TT-projections (which can be performed in $d$ dimensions) and two-particle TT-projections (which can only be evaluated for $d=3$ with the presented methods). 
For the latter one must still perform the UV-analysis. The term UV-analysis in this context refers to the short-range behavior of the integrand around a specific point. This will become clearer in the following explanation. Let us now consider the decay of the integrand $f(r_a, r_b, \vnxa{a}, \vnxa{b})$ around the source $a$. First of all, the integral is split up into a ball integral around one of the sources, say source particle $a$, and an integral over the whole $\mathbb{R}^d$ with this ball removed, \begin{align} \int \text{d}^d x\, f(r_a, r_b, \vnxa{a}, \vnxa{b}) &= \int_{B_{\ell_a}(\vx{a})} \text{d}^d x\, f(r_a, r_b, \vnxa{a}, \vnxa{b}) + \int_{\mathbb{R}^d \setminus B_{\ell_a}(\vx{a})} \text{d}^d x\, f(r_a, r_b, \vnxa{a}, \vnxa{b})\,, \end{align} where $0<\ell_a\ll r_{ab}$. The variables $r_b$ and $\vnxa{b}$ of the other source ($b\neq a$) are expressed in terms of $r_a$, $r_{ab}$ and $\vnxa{ab}$, \begin{align} r_b & = |\vct{x}-\vx{b}| = |\vct{x}-\vx{a} + \vx{a} - \vx{b}| =\sqrt{r_a^2 + r_{ab}^2 + 2 r_a r_{ab} \scpm{\vnxa{a}}{\vnxa{ab}}} \,,\label{eq:rbinsertion}\\ \vnxa{b} & = \frac{r_a}{r_b} \vnxa{a} + \frac{r_{ab}}{r_{b}} \vnxa{ab}\,, \end{align} such that all $\vct{x}$-dependent expressions come from $r_{a}$ and $\vnxa{a}$ type variables. Next we concentrate on the ball integral around $a$, \begin{align} \int_{B_{\ell_a}(\vct{x}_a)} \text{d}^d x f(r_{a}, \vnxa{a}) & = \int \text{d}\Omega_{a,d-1} \int_{0}^{\ell_a}\text{d}r_a\, r_a^{d-1} f(r_{a}, \vnxa{a})\,.\label{eq:ballintegration} \end{align} Now the integrand is expanded in $r_a$ (leaving $\vnxa{a}$ untouched). This is possible because $a$ and $b$ are well separated, and the ball contains only a small neighborhood of the source $a$. Then the integrand takes the form of a series of powers of $r_a$, and one can pick out the terms contributing poles at $d=3$ (those whose $r_a$ exponent equals $-3$ at $d=3$). 
The next step is to count the number of $\vnxa{a}$-vectors in each term and remove terms with an odd number of these vectors. This is due to the averaging procedure coming from the angular integration in \eqref{eq:ballintegration} using the formulas \eqref{eq:averageOmega} in the {Appendix}. Consider for example an integrand $f(r_a, \vnxa{a}) = C(d) r_a^{6-3d} \scpm{\vmom{a}}{\vnxa{a}}\scpm{\vmom{b}}{\vnxa{a}}$ (this integrand is of the form qualified as dangerous in \cite[Eq. (3.1)]{Damour:Jaranowski:Schafer:2001}) then \eqref{eq:ballintegration} gives \begin{align} \int_{B_{\ell_a}(\vct{x}_a)} \text{d}^d x f(r_{a}, \vnxa{a}) & = C(d) \int \text{d}\Omega_{a,d-1} \scpm{\vmom{a}}{\vnxa{a}}\scpm{\vmom{b}}{\vnxa{a}} \int_{0}^{\ell_a}\text{d}r_a\, r_a^{d-1}r_a^{6-3d} \nonumber\\ & = C(d) \mom{a}{i}\mom{b}{j}\int \text{d}\Omega_{a,d-1} \nxa{a}{i}\nxa{a}{j} \int_{0}^{\ell_a}\text{d}r_a\, r_a^{5 - 2d}\,. \end{align} Using \eqref{eq:angavgtwo} and usual integration rules we obtain \begin{align} \int_{B_{\ell_a}(\vct{x}_a)} \text{d}^d x f(r_{a}, \vnxa{a}) & = \frac{C(d)}{d}\Omega_{a,d-1} \scpm{\vmom{a}}{\vmom{b}} \frac{\ell_a^{2(3-d)}}{2(3-d)}\,. \end{align} The last integration step was performed by means of an analytic continuation from $d<3$. At this stage there are three possibilities: The first one is that $C(d)$ contains several factors of $d-3$ which cancel the pole in the last factor and even lead to a vanishing limit when $d \rightarrow 3$. Then the potentially dangerous term is actually not dangerous at all. The second possibility is that $C(d) \sim d-3$ which would also lead to a cancellation of the pole but would give a finite contribution which has to be taken into account to get the correct Hamiltonian. Last but not least $C(d)$ could have such a structure that a pole remains and so this term has to be renormalized or canceled by another pole to give a physically meaningful result. 
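The radial and angular steps of this example can be reproduced in a small symbolic sketch (our own illustration with sympy; the substitution $d = 3 - \epsilon$ with $\epsilon > 0$ implements the analytic continuation from $d<3$, and the $d=3$ angular normalization is checked explicitly):

```python
import sympy as sp

# Analytic continuation from d < 3: substitute d = 3 - epsilon, epsilon > 0.
eps, ell, r = sp.symbols('epsilon ell r', positive=True)
d = 3 - eps

# Radial ball integral of the dangerous term ~ r^(6-3d) with measure r^(d-1):
radial = sp.integrate(r**(d - 1)*r**(6 - 3*d), (r, 0, ell))
assert sp.simplify(radial - ell**(2*(3 - d))/(2*(3 - d))) == 0

# The simple pole as d -> 3: the residue of the radial integral in epsilon
# is 1/2, i.e. radial ~ 1/(2*epsilon) = 1/(2(3-d)).
assert sp.limit(radial*eps, eps, 0) == sp.Rational(1, 2)

# Angular average in d = 3: <n_i n_j> = delta_ij/d, e.g. the zz-component
# integrates to Omega_2/3 = 4*pi/3 over the unit sphere.
th = sp.symbols('theta')
nz2_int = 2*sp.pi*sp.integrate(sp.cos(th)**2*sp.sin(th), (th, 0, sp.pi))
assert sp.simplify(nz2_int - 4*sp.pi/3) == 0
```

Whether the $1/\epsilon$ pole survives then depends only on the behavior of $C(d)$ near $d=3$, exactly as in the three cases distinguished above.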
The procedure mentioned above is only valid if there is no TT-projection (and in particular no $h^\TT_{(4\,0)ij}$) appearing. The analysis of the $h^\TT_{(4\,0)\,ij}$ type integrals works as follows (in this discussion we consider a two-particle system and take $r_1$ as the expansion variable): The $h^\TT_{(4\,0)\,ij}$ part is given in terms of inverse Laplacians and field variables in $d$ dimensions by \begin{align} h^\TT_{(4\,0)\,ij} & = - \frac{8(d-1)}{d-2} \TTproj{ij}{kl}\Delta^{-1} (\phibs{2}{}_{, k} \phibs{2}{}_{, l})\,, \end{align} see \eqref{eq:htt40} for the explicit solution in $d=3$. Now one can insert the $\phibs{2}$ field, given by \begin{align} \phibs{2} = \frac{d-2}{4(d-1)} (m_1 u_1 + m_2 u_2)\,, \end{align} where $u_a = - 16\pi G^{(\dim)}\Delta^{-1} \dl{a}\sim r_a^{2-d}$. After interchanging TT-projector and inverse Laplacian, one gets $\TTproj{ij}{kl} (u_{a,k} u_{a,l})=0$. One can see this by using $u_{a,k}u_{a,\ell} \sim \nxa{a}{k}\nxa{a}{\ell} r_{a}^{2-2d}$, which can be rewritten using \begin{align} \partial_i \partial_j r_{a}^{4-2d} &= -2(d-2)(\delta_{ij}-2(d-1)\nxa{a}{i}\nxa{a}{j})r_{a}^{2-2d}\,, \end{align} as $u_{a,k}u_{a,\ell} \sim \frac{1}{2(d-1)}(\delta_{k\ell}r_{a}^{2-2d} + \frac{1}{2(d-2)}\partial_k \partial_\ell r_{a}^{4-2d})$. This will obviously be projected to zero by the TT-projector, since it consists of a pure trace and a double gradient. Thus, after TT-projection there is only one part left, namely \begin{align} h^\TT_{(4\,0)\,ij} & = - 2 \frac{d-2}{2(d-1)} m_1 m_2 \Delta^{-1} \TTproj{ij}{kl} (u_{1, k} u_{2, l})\,. \end{align} Under the TT-projector one can integrate by parts because $\partial_k \TTproj{ij}{kl} = 0$ and obtains \begin{align} h^\TT_{(4\,0)\,ij} & = 2 \frac{d-2}{2(d-1)} m_1 m_2 \Delta^{-1} \TTproj{ij}{kl} (u_{1} \partial_{k} \partial_{l} u_{2})\,. \end{align} Notice that the derivative should act on the $u_2$ term because all quantities will be Taylor expanded in $r_1$ around $\vct{x} = \vx{1}$ and $u_1$ is already proportional to a power of $r_1$. 
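The Hessian identity for $r_a^{4-2d}$ used in this rewriting can be spot-checked explicitly in $d=3$ Cartesian coordinates (our own sketch with sympy; a check at fixed $d=3$ of course does not replace the generic-$d$ statement):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
X = sp.Matrix([x, y, z])
r = sp.sqrt(x**2 + y**2 + z**2)
d = 3  # explicit spot check in three dimensions

# Left-hand side: Hessian of r^(4-2d) = 1/r^2 at d = 3.
lhs = sp.hessian(r**(4 - 2*d), (x, y, z))

# Right-hand side: -2(d-2)(delta_ij - 2(d-1) n_i n_j) r^(2-2d).
n = X/r
rhs = -2*(d - 2)*(sp.eye(3) - 2*(d - 1)*n*n.T)*r**(2 - 2*d)

assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(3)
```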
At this stage of processing the $h^\TT_{(4\,0)\,ij}$ terms one can use the same analysis procedure as above: Check whether there are powers of $r_1$ with exponents smaller than $-3$ for $d=3$ and expand $\partial_{k} \partial_{l} u_{2}$ around $r_1$ to an order sufficient to reach the critical value in the exponent of $r_1$. The idea is to expand the $r_2$-dependence inside the TT-projector such that the Taylor expansion consists only of repeated inverse Laplacians acting on powers of $r_1$, which can be calculated using \eqref{eq:invra} from the {Appendix}. Also bear in mind that the inverse Laplacian introduces additional powers of $r_1$ via the Green's function. The final ball integration can now be performed as discussed above. The same technique can also be used to perform the UV-analysis of terms involving $\TTproj{ij}{kl}(\phibs{2}\momls{3}{kl})$. To check our UV-analysis code we first reproduced the pole coefficients given in \cite[Table 1]{Damour:Jaranowski:Schafer:2001}. The {\em finite} UV-contribution to the 3PN point-mass Hamiltonian in our representation \eqref{eq:FinalRouthian3PN} is given by \begin{align} \Delta H^{\text{3PN,UV}}_{\text{PM}}(d) &= \frac{2 \Lambda^{6-2d} (d-2)(d+1)(96-40d-28d^2+d^3) \pi^{3-3d/2} \Gamma\left(\frac{d}{2}\right)^3}{3(d-4)(d-1)^4(d+2)} \times \nonumber\\ & \frac{(G^{(\dim)})^3}{\relab{12}^d} m_1 m_2 \left(d\scpm{{\vnxa{12}}}{\vmom{1}}^2 - \vmom{1}^2 + d\scpm{{\vnxa{12}}}{\vmom{2}}^2 - \vmom{2}^2\right)\,, \label{eq:UVcontribPM} \end{align} where $\Lambda$ is a UV-cutoff scale which does not contribute in the limit $d\to3$. A similar analysis for the 2PN point-mass Hamiltonian gave no contribution. We found no net contribution to the spin-dependent Hamiltonians, though poles and finite parts appeared in intermediate expressions. That is, Hadamard regularization would have been sufficient to obtain the correct linear-in-spin Hamiltonians presented in the present work. 
The same situation was also found for the harmonic-gauge calculation of the equations of motion in \cite{Marsat:Bohe:Faye:Blanchet:2012,Bohe:Marsat:Faye:Blanchet:2012}. \section{Results}\label{sec:results} Having discussed the various simplifications above, which reduce the integral of the formal 3PN Routhian to a manageable form, we continue with a short description of the integrands showing up at this order. The integrands can be divided into three different types: \begin{itemize} \item the delta-type $\int {\rm d}^d x f(\vct{x}) \dl{1}$, \item the Riesz-type $\int {\rm d}^d x\, n^{i_1}_1 \dots n^{i_k}_1 n^{j_1}_2 \dots n^{j_\ell}_2 r_1^\alpha r_2^\beta$, \item and the generalized Riesz-type $\int {\rm d}^3 x\, n^{i_1}_1 \dots n^{i_k}_1 n^{j_1}_2 \dots n^{j_\ell}_2 r_1^\alpha r_2^\beta s_{12}^\gamma$. \end{itemize} The solution of these three types of integrals is given in {Appendix} \ref{sec:integrationtechniques}. We used our {\sc Mathematica} code to perform an integration of \eqref{eq:FinalRouthian3PN} directly with all fields inserted up to linear order in spin, neglecting spin(1)$^2$ and spin(2)$^2$ terms afterwards. From this we obtained the fully reduced matter Hamiltonians for point masses, for the spin-orbit and the spin(1)-spin(2) interactions. For the linear-in-spin Hamiltonians no $1/(d-3)$ poles arose that would give rise to finite contributions or to singular terms for $d\to3$. Such poles only appeared in some intermediate steps of the UV-analysis (see \Sec{subsec:UVana}) but canceled identically in the end. For the point-mass parts there appeared a finite contribution given in \eqref{eq:UVcontribPM}. This, together with our integration result, reproduced the result from the literature exactly. 
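As a cross-check of the statement that the cutoff $\Lambda$ does not contribute in the limit $d\to3$, the $d$-dependent prefactor of \eqref{eq:UVcontribPM} can be evaluated at $d=3$ symbolically (the rational value $83/80$ below is our own evaluation of the quoted prefactor, not a number taken from the text):

```python
import sympy as sp

d = sp.symbols('d')
Lam = sp.symbols('Lambda', positive=True)  # UV cutoff scale

# d-dependent prefactor of the finite UV contribution (eq. UVcontribPM).
prefactor = (2*Lam**(6 - 2*d)*(d - 2)*(d + 1)*(96 - 40*d - 28*d**2 + d**3)
             *sp.pi**(3 - 3*d/2)*sp.gamma(d/2)**3
             /(3*(d - 4)*(d - 1)**4*(d + 2)))

# At d = 3: Lambda^(6-2d) -> Lambda^0 = 1 (the cutoff drops out), and the
# pi powers cancel against Gamma(3/2)^3, leaving a finite rational number.
value = sp.simplify(prefactor.subs(d, 3))
assert value == sp.Rational(83, 80)
```

The prefactor is thus regular and $\Lambda$-independent at $d=3$, as claimed.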
Now we are able to write down the NNLO linear-in-spin fully reduced Hamiltonians in terms of matter variables only, by using \eqref{eq:FinalRouthian3PN} for the Routhian, inserting the sources \eqref{eq:PNSources}, $\hat{B}_{(6)ij}$ \eqref{eq:B6matter}, $\pimati{ij}$ \eqref{eq:pimatter5}, the solution for the constraint fields \eqref{eq:phi2sol}--\eqref{eq:momentumtilde3sol}, \eqref{eq:momentumtilde51}, \eqref{eq:pitilde51}, and the propagating degrees of freedom \eqref{eq:htt4sol} and \eqref{eq:htt40}. Notice that certain field derivatives may lead to the distributional contributions mentioned in \ref{subsec:distcontrib}. The integrals can (for a binary) be solved by using the techniques given in {Appendix} \ref{sec:integrationtechniques}. Both Hamiltonians are valid for arbitrary compact objects like black holes or neutron stars. Their center-of-mass frame versions are given in \ref{subsec:comhamiltonians}, where the gyromagnetic ratios for the spin-orbit case are also given in \cite{Nagar:2011,Barausse:Buonanno:2011}. \subsection{Next-to-next-to-leading Order Spin-Orbit Hamiltonian} The spin-orbit Hamiltonian given in this subsection is the higher-order gravitational analogue of the coupling of an electron's spin to its orbital angular momentum in, e.g., a hydrogen atom. In quantum electrodynamics this interaction is responsible for the fine structure in the spectrum. Here the spin is of course not a quantum mechanical quantity; it only characterizes the rotation (i.e., its direction and magnitude) of a gravitating mass in the gravitational field of another mass. Notice that the fine structure constant $\alpha$ of electromagnetic theory is replaced by Newton's gravitational constant $G$ here. 
The result for the NNLO spin-orbit Hamiltonian reads \begin{align} H^{\text{NNLO}}_{\text{SO}} & = \frac{G}{\relab{12}^2} \biggl[ \biggl( \frac{7 m_2 (\vmom{1}^2)^2}{16 m_1^5} + \frac{9 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}\vmom{1}^2}{16 m_1^4} + \frac{3 \vmom{1}^2 \scpm{{\vnunit}}{\vmom{2}}^2}{4 m_1^3 m_2}\nonumber\\ & + \frac{45 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}^3}{16 m_1^2 m_2^2} + \frac{9 \vmom{1}^2 \scpm{\vmom{1}}{\vmom{2}}}{16 m_1^4} - \frac{3 \scpm{{\vnunit}}{\vmom{2}}^2 \scpm{\vmom{1}}{\vmom{2}}}{16 m_1^2 m_2^2}\nonumber\\ & - \frac{3 (\vmom{1}^2) (\vmom{2}^2)}{16 m_1^3 m_2} - \frac{15 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}} \vmom{2}^2}{16 m_1^2 m_2^2} + \frac{3 \scpm{{\vnunit}}{\vmom{2}}^2 \vmom{2}^2}{4 m_1 m_2^3}\nonumber\\ & - \frac{3 \scpm{\vmom{1}}{\vmom{2}} \vmom{2}^2}{16 m_1^2 m_2^2} - \frac{3 (\vmom{2}^2)^2}{16 m_1 m_2^3} \biggr)(({\vnunit} \times \vmom{1})\vspin{1}) +\biggl( - \frac{3 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}\vmom{1}^2}{2 m_1^3 m_2}\nonumber\\ & - \frac{15 \scpm{{\vnunit}}{\vmom{1}}^2\scpm{{\vnunit}}{\vmom{2}}^2}{4 m_1^2 m_2^2} + \frac{3 \vmom{1}^2 \scpm{{\vnunit}}{\vmom{2}}^2}{4 m_1^2 m_2^2} - \frac{\vmom{1}^2 \scpm{\vmom{1}}{\vmom{2}}}{2 m_1^3 m_2} + \frac{\scpm{\vmom{1}}{\vmom{2}}^2}{2 m_1^2 m_2^2}\nonumber\\ & + \frac{3 \scpm{{\vnunit}}{\vmom{1}}^2 \vmom{2}^2}{4 m_1^2 m_2^2} - \frac{(\vmom{1}^2) (\vmom{2}^2)}{4 m_1^2 m_2^2} - \frac{3 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}\vmom{2}^2}{2 m_1 m_2^3}\nonumber\\ & - \frac{\scpm{\vmom{1}}{\vmom{2}} \vmom{2}^2}{2 m_1 m_2^3} \biggr)(({\vnunit} \times \vmom{2})\vspin{1}) +\biggl( - \frac{9 \scpm{{\vnunit}}{\vmom{1}} \vmom{1}^2}{16 m_1^4} + \frac{\vmom{1}^2 \scpm{{\vnunit}}{\vmom{2}}}{m_1^3 m_2}\nonumber\\ & + \frac{27 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}^2}{16 m_1^2 m_2^2} - \frac{\scpm{{\vnunit}}{\vmom{2}}\scpm{\vmom{1}}{\vmom{2}}}{8 m_1^2 m_2^2} - \frac{5 \scpm{{\vnunit}}{\vmom{1}} \vmom{2}^2}{16 
m_1^2 m_2^2}\nonumber\\ & + \frac{\scpm{{\vnunit}}{\vmom{2}}\vmom{2}^2}{m_1 m_2^3} \biggr)((\vmom{1} \times \vmom{2})\vspin{1}) \biggr] \nonumber\\ &+ \frac{G^2}{\relab{12}^3} \biggl[ \biggl( -\frac{3 m_2 \scpm{{\vnunit}}{\vmom{1}}^2}{2 m_1^2} +\left( -\frac{3 m_2}{2 m_1^2} +\frac{27 m_2^2}{8 m_1^3} \right) \vmom{1}^2 +\left( \frac{177}{16 m_1} +\frac{11}{m_2} \right) \scpm{{\vnunit}}{\vmom{2}}^2\nonumber\\ & +\left( \frac{11}{2 m_1} +\frac{9 m_2}{2 m_1^2} \right) \scpm{{\vnunit}}{\vmom{1}} \scpm{{\vnunit}}{\vmom{2}} +\left( \frac{23}{4 m_1} +\frac{9 m_2}{2 m_1^2} \right) \scpm{\vmom{1}}{\vmom{2}}\nonumber\\ & -\left( \frac{159}{16 m_1} +\frac{37}{8 m_2} \right) \vmom{2}^2 \biggr)(({\vnunit} \times \vmom{1})\vspin{1}) +\biggl( \frac{4 \scpm{{\vnunit}}{\vmom{1}}^2}{m_1} +\frac{13 \vmom{1}^2}{2 m_1}\nonumber\\ & +\frac{5 \scpm{{\vnunit}}{\vmom{2}}^2}{m_2} +\frac{53 \vmom{2}^2}{8 m_2} - \left( \frac{211}{8 m_1} +\frac{22}{m_2} \right) \scpm{{\vnunit}}{\vmom{1}} \scpm{{\vnunit}}{\vmom{2}}\nonumber\\ & -\left( \frac{47}{8 m_1} +\frac{5}{m_2} \right)\scpm{\vmom{1}}{\vmom{2}} \biggr)(({\vnunit} \times \vmom{2})\vspin{1}) +\biggl( -\left( \frac{8}{m_1} +\frac{9 m_2}{2 m_1^2} \right)\scpm{{\vnunit}}{\vmom{1}}\nonumber\\ & +\left( \frac{59}{4 m_1} +\frac{27}{2 m_2} \right)\scpm{{\vnunit}}{\vmom{2}} \biggr)((\vmom{1} \times \vmom{2})\vspin{1}) \biggr]\nonumber\\ &+\frac{G^3}{\relab{12}^4} \biggl[ \left( \frac{181 m_1 m_2}{16} + \frac{95 m_2^2}{4} + \frac{75 m_2^3}{8 m_1} \right) (({\vnunit} \times \vmom{1})\vspin{1})\nonumber\\ & - \left( \frac{21 m_1^2}{2} + \frac{473 m_1 m_2}{16} + \frac{63 m_2^2}{4} \right)(({\vnunit} \times \vmom{2})\vspin{1}) \biggr] + (1\leftrightarrow2)\label{eq:HNNLOSO}\,. \end{align} This Hamiltonian is formally at 3PN but for maximally rotating objects the post-Newtonian order goes up to 3.5PN. 
Recently the NNLO spin-orbit contributions to the acceleration and spin precession in harmonic gauge were calculated in \cite{Marsat:Bohe:Faye:Blanchet:2012,Bohe:Marsat:Faye:Blanchet:2012}, and agreement with the equations of motion following from our Hamiltonian was found.\footnote{In \cite{Hartung:Steinhoff:2011:1} there is a typo in the term $-\frac{G}{\relab{12}^2}\frac{15 \scpm{{\vnunit}}{\vmom{1}} \vmom{2}^2}{16 m_1^2 m_2^2} ((\vmom{1} \times \vmom{2})\vspin{1})$. The coefficient has to be $-\frac{5}{16}$ instead of $-\frac{15}{16}$. The result given here is correct. Thanks to S.~Marsat for pointing this out.} From a combinatorial point of view there are 66 algebraically different possible contributions to the Hamiltonian for each object (written in terms of the canonical spin tensor), but 24 of them do not appear in the canonical representation used here. \subsection{Next-to-next-to-leading Order Spin(1)-Spin(2) Hamiltonian} The spin(1)-spin(2) Hamiltonian also has an electromagnetic counterpart. It is the gravitational analogue of, e.g., the coupling between the electron spin and the spin of the atomic nucleus, which is responsible for the hyperfine structure in the electromagnetic spectrum. Of course, in our case the spin(1)-spin(2) interaction leads to a modulation of the gravitational waves, but not to a hyperfine structure in an emitted electromagnetic spectrum. 
The result for the NNLO spin(1)-spin(2) Hamiltonian reads \begin{align} H^{\text{NNLO}}_{\text{SS}} & = \frac{G}{\relab{12}^3}\biggl[ \frac{((\vmom{1} \times \vmom{2})\,\vspin{1})((\vmom{1} \times \vmom{2})\,\vspin{2})}{16 m_1^2 m_2^2} -\frac{9 ((\vmom{1} \times \vmom{2})\,\vspin{1})(({\vnunit} \times \vmom{2})\,\vspin{2})\scpm{{\vnunit}}{\vmom{1}}}{8 m_1^2 m_2^2} \nnl -\frac{3 (({\vnunit} \times \vmom{2})\,\vspin{1})((\vmom{1} \times \vmom{2})\,\vspin{2})\scpm{{\vnunit}}{\vmom{1}}}{2 m_1^2 m_2^2} \nnl +(({\vnunit} \times \vmom{1})\,\vspin{1})(({\vnunit} \times \vmom{1})\,\vspin{2})\biggl( \frac{9 \vmom{1}^2}{8 m_1^4} + \frac{15 \scpm{{\vnunit}}{\vmom{2}}^2}{4 m_1^2 m_2^2} - \frac{3 \vmom{2}^2}{4 m_1^2 m_2^2} \biggr) \nnl +(({\vnunit} \times \vmom{2})\,\vspin{1})(({\vnunit} \times \vmom{1})\,\vspin{2})\biggl( -\frac{3 \vmom{1}^2}{2 m_1^3 m_2} +\frac{3 \scpm{\vmom{1}}{\vmom{2}}}{4 m_1^2 m_2^2} \nnl -\frac{15 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}}{4 m_1^2 m_2^2} \biggr) +(({\vnunit} \times \vmom{1})\,\vspin{1})(({\vnunit} \times \vmom{2})\,\vspin{2})\biggl( \frac{3 \vmom{1}^2}{16 m_1^3 m_2} \nnl -\frac{3 \scpm{\vmom{1}}{\vmom{2}}}{16 m_1^2 m_2^2} -\frac{15 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}}{16 m_1^2 m_2^2} \biggr) \nnl + \scpm{\vmom{1}}{\vspin{1}}\scpm{\vmom{1}}{\vspin{2}}\biggl( \frac{3 \scpm{{\vnunit}}{\vmom{2}}^2}{4 m_1^2 m_2^2} - \frac{\vmom{2}^2}{4 m_1^2 m_2^2} \biggr) \nnl + \scpm{\vmom{1}}{\vspin{1}}\scpm{\vmom{2}}{\vspin{2}}\biggl( -\frac{\vmom{1}^2}{4 m_1^3 m_2} +\frac{\scpm{\vmom{1}}{\vmom{2}}}{4 m_1^2 m_2^2} \biggr) \nnl + \scpm{\vmom{2}}{\vspin{1}}\scpm{\vmom{1}}{\vspin{2}}\biggl( \frac{5\vmom{1}^2}{16 m_1^3 m_2} -\frac{3\scpm{\vmom{1}}{\vmom{2}}}{16 m_1^2 m_2^2} -\frac{9\scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}}{16 m_1^2 m_2^2} \biggr) \nnl + \scpm{{\vnunit}}{\vspin{1}}\scpm{\vmom{1}}{\vspin{2}}\biggl( \frac{9 \scpm{{\vnunit}}{\vmom{1}} \vmom{1}^2}{8 m_1^4} -\frac{3 \scpm{{\vnunit}}{\vmom{2}} \vmom{1}^2}{4 
m_1^3 m_2} -\frac{3 \scpm{{\vnunit}}{\vmom{2}} \vmom{2}^2}{4 m_1 m_2^3} \biggr) \nnl + \scpm{\vmom{1}}{\vspin{1}}\scpm{{\vnunit}}{\vspin{2}}\biggl( -\frac{3 \scpm{{\vnunit}}{\vmom{2}} \vmom{1}^2}{4 m_1^3 m_2} -\frac{15 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}^2}{4 m_1^2 m_2^2} +\frac{3 \scpm{{\vnunit}}{\vmom{1}} \vmom{2}^2}{4 m_1^2 m_2^2} \nnl -\frac{3 \scpm{{\vnunit}}{\vmom{2}} \vmom{2}^2}{4 m_1 m_2^3} \biggr) + \scpm{{\vnunit}}{\vspin{1}}\scpm{{\vnunit}}{\vspin{2}}\biggl( -\frac{3 \scpm{\vmom{1}}{\vmom{2}}^2{}}{8 m_1^2 m_2^2} +\frac{105 \scpm{{\vnunit}}{\vmom{1}}^2 \scpm{{\vnunit}}{\vmom{2}}^2}{16 m_1^2 m_2^2} \nnl -\frac{15 \scpm{{\vnunit}}{\vmom{2}}^2 \vmom{1}^2}{8 m_1^2 m_2^2} +\frac{3 \vmom{1}^2\scpm{\vmom{1}}{\vmom{2}}}{4 m_1^3 m_2} +\frac{3 \vmom{1}^2 \vmom{2}^2}{16 m_1^2 m_2^2} +\frac{15 \vmom{1}^2 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}}{4 m_1^3 m_2} \biggr) \nnl + \scpm{\vspin{1}}{\vspin{2}}\biggl( \frac{\scpm{\vmom{1}}{\vmom{2}}^2}{16 m_1^2 m_2^2} -\frac{9 \scpm{{\vnunit}}{\vmom{1}}^2 \vmom{1}^2}{8 m_1^4} -\frac{5 \scpm{\vmom{1}}{\vmom{2}} \vmom{1}^2}{16 m_1^3 m_2} -\frac{3 \scpm{{\vnunit}}{\vmom{2}}^2\vmom{1}^2}{8 m_1^2 m_2^2} \nnl -\frac{15 \scpm{{\vnunit}}{\vmom{1}}^2 \scpm{{\vnunit}}{\vmom{2}}^2}{16 m_1^2 m_2^2} +\frac{3 \vmom{1}^2 \vmom{2}^2}{16 m_1^2 m_2^2} +\frac{3 \vmom{1}^2 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}}{4 m_1^3 m_2} \nnl +\frac{9 \scpm{\vmom{1}}{\vmom{2}}\scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}}{16 m_1^2 m_2^2} \biggr) \biggr] \nonumber \\ & + \frac{G^2}{\relab{12}^4}\biggl[ (({\vnunit} \times \vmom{1})\,\vspin{1})(({\vnunit} \times \vmom{1})\,\vspin{2})\biggl( \frac{12}{m_1} +\frac{9 m_2}{m_1^2} \biggr) \nnl -\frac{81}{4 m_1}(({\vnunit} \times \vmom{2})\,\vspin{1})(({\vnunit} \times \vmom{1})\,\vspin{2}) -\frac{27}{4 m_1}(({\vnunit} \times \vmom{1})\,\vspin{1})(({\vnunit} \times \vmom{2})\,\vspin{2}) \nnl -\frac{5}{2 m_1}\scpm{\vmom{1}}{\vspin{1}}\scpm{\vmom{2}}{\vspin{2}} 
+\frac{29}{8 m_1}\scpm{\vmom{2}}{\vspin{1}}\scpm{\vmom{1}}{\vspin{2}} -\frac{21}{8 m_1}\scpm{\vmom{1}}{\vspin{1}}\scpm{\vmom{1}}{\vspin{2}} \nnl +\scpm{{\vnunit}}{\vspin{1}}\scpm{\vmom{1}}{\vspin{2}}\biggl\{ \left(\frac{33}{2 m_1} + \frac{9 m_2}{m_1^2}\right)\scpm{{\vnunit}}{\vmom{1}} -\left(\frac{14}{m_1} + \frac{29}{2 m_2}\right)\scpm{{\vnunit}}{\vmom{2}} \biggr\} \nnl +\scpm{\vmom{1}}{\vspin{1}}\scpm{{\vnunit}}{\vspin{2}}\biggl\{ \frac{4}{m_1}\scpm{{\vnunit}}{\vmom{1}} -\left(\frac{11}{m_1} + \frac{11}{m_2}\right)\scpm{{\vnunit}}{\vmom{2}} \biggr\} \nnl +\scpm{{\vnunit}}{\vspin{1}}\scpm{{\vnunit}}{\vspin{2}}\biggl\{ -\frac{12}{m_1} \scpm{{\vnunit}}{\vmom{1}}^2 -\frac{10}{m_1} \vmom{1}^2 +\frac{37}{4 m_1} \scpm{\vmom{1}}{\vmom{2}} \nnl +\frac{255}{4 m_1} \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}} \biggr\} +\scpm{\vspin{1}}{\vspin{2}}\biggl\{ -\left(\frac{25}{2 m_1} + \frac{9 m_2}{m_1^2}\right) \scpm{{\vnunit}}{\vmom{1}}^2 + \frac{49}{8 m_1} \vmom{1}^2 \nnl + \frac{35}{4 m_1} \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}} - \frac{43}{8 m_1} \scpm{\vmom{1}}{\vmom{2}} \biggr\} \biggr]\nonumber \\ & +\frac{G^3}{\relab{12}^5}\biggl[ -\scpm{\vspin{1}}{\vspin{2}}\left( \frac{63}{4} m_1^2 +\frac{145}{8} m_1 m_2 \right) + \scpm{{\vnunit}}{\vspin{1}}\scpm{{\vnunit}}{\vspin{2}}\left( \frac{105}{4} m_1^2 +\frac{289}{8} m_1 m_2 \right) \biggr] \nonumber \\ & + (1\leftrightarrow2)\label{eq:HNNLOS1S2}\,. \end{align} This Hamiltonian is also formally of 3PN order, but for maximally rotating objects the post-Newtonian order goes up to 4PN. Notice that from a combinatorial point of view there are 167 algebraically different possible contributions to the Hamiltonian for all objects (written in terms of the canonical spin tensor), but 75 of them do not appear in the canonical representation used here.
\subsection{Hamiltonians in Center-Of-Mass Frame}\label{subsec:comhamiltonians} For later computations of, e.g., the mentioned orbital parametrizations of a binary system it is convenient to provide the Hamiltonians in the center-of-mass frame ($\vmom{1} = -\vmom{2} = \vmom{}$). In this frame, in dimensionless quantities (see, e.g., \cite{Tessmer:Hartung:Schafer:2010,Tessmer:Hartung:Schafer:2012} for the rescaling), they are given by \begin{subequations} \label{eq:Hredfull} \begin{align} H_{\text{COM SO}}^{\text{NNLO}} &= \frac{1}{4\relab{12}^5} \left[ 21\sqrt{1-4\eta}(\eta+1)\scpm{\vct{\ang}}{\vct{\Delta}} +\frac{1}{2} (-2\eta^2+33\eta+42)\scpm{\vct{\ang}}{\vct{\Sigma}} \right] \nonumber\\& + \frac{\eta}{32 \relab{12}^4} \biggl[ -\sqrt{1-4\eta}\left((256+45\eta)\scpm{{\vnxa{12}}}{\vmom{}}^2 + (314+39\eta)\vmom{}^2\right)\scpm{\vct{\ang}}{\vct{\Delta}} \nonumber\\ & \quad +\left((-256+275\eta)\scpm{{\vnxa{12}}}{\vmom{}}^2 + (-206+73\eta)\vmom{}^2\right)\scpm{\vct{\ang}}{\vct{\Sigma}} \biggr] \nonumber\\& + \frac{\eta}{32 \relab{12}^3} \biggl[ \sqrt{1-4\eta}\bigl( 15\scpm{{\vnxa{12}}}{\vmom{}}^4 + 3(9\eta-4) \scpm{{\vnxa{12}}}{\vmom{}}^2 \vmom{}^2 \nonumber\\ & \quad\quad + 2(22\eta-9) (\vmom{}^2)^2 \bigr)\scpm{\vct{\ang}}{\vct{\Delta}} -\bigl( 15(2\eta-1)\scpm{{\vnxa{12}}}{\vmom{}}^4 \nonumber\\ &\quad\quad +3 (6\eta^2 - 11 \eta + 4) \scpm{{\vnxa{12}}}{\vmom{}}^2 \vmom{}^2 +2 (5\eta^2 - 3\eta + 2) (\vmom{}^2)^2 \bigr)\scpm{\vct{\ang}}{\vct{\Sigma}} \biggr]\,,\\ H_{\text{COM SS}}^{\text{NNLO}} &= \eta\biggl\{ \frac{1}{4\relab{12}^5} \left[ (79\eta + 105) \scpm{{\vnxa{12}}}{\vspin{1}}\scpm{{\vnxa{12}}}{\vspin{2}} -(63 + 19\eta) \scpm{\vspin{1}}{\vspin{2}} \right] \nonumber\\ &\quad +\frac{1}{\relab{12}^4} \biggl[ -\left( \frac{303}{4}\eta \scpm{{\vnxa{12}}}{\vmom{}}^2 + \left(\frac{125}{4}\eta+9\right) \vmom{}^2 \right) \scpm{{\vnxa{12}}}{\vspin{1}}\scpm{{\vnxa{12}}}{\vspin{2}} \nonumber\\ & \quad\quad +\left( -\left(18+\frac{25}{4}\eta\right) \scpm{{\vnxa{12}}}{\vmom{}}^2 +
\left(9 + \frac{47}{2}\eta\right) \vmom{}^2 \right) \scpm{\vspin{1}}{\vspin{2}} \nonumber\\ & \quad\quad - \frac{9}{4} (7\eta+4) \scpm{\vmom{}}{\vspin{1}}\scpm{\vmom{}}{\vspin{2}} \nonumber\\ & \quad\quad +\left(34\eta+\frac{27}{2}\right)\scpm{{\vnxa{12}}}{\vmom{}} (\scpm{\vmom{}}{\vspin{1}}\scpm{{\vnxa{12}}}{\vspin{2}} + \scpm{{\vnxa{12}}}{\vspin{1}}\scpm{\vmom{}}{\vspin{2}}) \nonumber\\ & \quad\quad +\frac{3}{2}\sqrt{1-4\eta}(\eta+3)\scpm{{\vnxa{12}}}{\vmom{}} (\scpm{\vmom{}}{\vspin{1}}\scpm{{\vnxa{12}}}{\vspin{2}} - \scpm{{\vnxa{12}}}{\vspin{1}}\scpm{\vmom{}}{\vspin{2}}) \biggr] \nonumber\\ &\quad +\frac{1}{\relab{12}^3} \biggl[ \frac{1}{8}\biggl( 105 \eta^2 \scpm{{\vnxa{12}}}{\vmom{}}^4 +15 \eta (3\eta - 2) \scpm{{\vnxa{12}}}{\vmom{}}^2 \vmom{}^2 \nonumber\\ &\quad\quad\quad +\frac{3}{2} (10\eta^2 + 13 \eta - 6) (\vmom{}^2)^2 \biggr)\scpm{{\vnxa{12}}}{\vspin{1}}\scpm{{\vnxa{12}}}{\vspin{2}} \nonumber\\ &\quad\quad +\frac{1}{8}\biggl( -3 (8\eta^2 - 37 \eta + 12) \scpm{{\vnxa{12}}}{\vmom{}}^2 \vmom{}^2 + (7\eta^2 - 23 \eta +9) (\vmom{}^2)^2 \biggr)\scpm{\vspin{1}}{\vspin{2}} \nonumber\\ &\quad\quad +\frac{1}{4}\biggl( 9 \eta^2 \scpm{{\vnxa{12}}}{\vmom{}}^2 + \frac{1}{2} (4\eta^2 + 25 \eta - 9) \vmom{}^2 \biggr)\scpm{\vmom{}}{\vspin{1}}\scpm{\vmom{}}{\vspin{2}} \nonumber\\ &\quad\quad -\frac{3}{8}\biggl( + 15 \eta^2 \scpm{{\vnxa{12}}}{\vmom{}}^2 + \frac{1}{2} (10\eta^2 + 21 \eta - 9) \vmom{}^2 \biggr)\scpm{{\vnxa{12}}}{\vmom{}}\nonumber\\& \quad\quad\quad\quad\times(\scpm{\vmom{}}{\vspin{1}}\scpm{{\vnxa{12}}}{\vspin{2}} + \scpm{{\vnxa{12}}}{\vspin{1}}\scpm{\vmom{}}{\vspin{2}}) \nonumber\\ &\quad\quad +\frac{9}{16}\sqrt{1-4\eta}(1 - 2\eta)\scpm{{\vnxa{12}}}{\vmom{}}(\scpm{\vmom{}}{\vspin{1}}\scpm{{\vnxa{12}}}{\vspin{2}} - \scpm{{\vnxa{12}}}{\vspin{1}}\scpm{\vmom{}}{\vspin{2}}) \biggr] \biggr\}\,. 
\end{align} \end{subequations} Here $\vct{\Delta} = \vspin{1}-\vspin{2}$ and $\vct{\Sigma} = \vspin{1}+\vspin{2}$ denote the difference and the sum of the spin vectors, and $\vct{\ang} = \relab{12} {\vnxa{12}} \times \vmom{}$ is the orbital angular momentum. \section{Kinematical Consistency: The Approximate Poincar\'{e} Algebra} \label{sec:kinematicalconsistency} For an asymptotically flat space-time the Poincar\'{e} algebra must be fulfilled at spatial infinity, see, e.g., \cite{Beig:OMurchadha:1987}. The generators of the Poincar\'{e} algebra can be expressed in terms of the canonical variables describing the physical system, i.e., matter variables like linear momenta, position variables, or spins. Also propagating field degrees of freedom enter the generators of the Poincar\'{e} algebra. Throughout this section we set $d=3$. The relations between the generators are given by \begin{subequations} \begin{align} \{P_i, H\} &= 0\,,\quad\{J_i, H\} = 0\label{eq:PAJH}\,,\\ \{J_i, P_j\} &= \epsilon_{ijk} P_k\,,\quad\{J_i, J_j\} = \epsilon_{ijk} J_k\,,\\ \{J_i, G_j\} &= \epsilon_{ijk} G_k\,,\\ \{G_i, H\} &= P_i\,,\label{eq:PAGHeqP}\\ \{G_i, P_j\} &= \cInv{2} \delta_{ij} H\,,\label{eq:PAGPeqH}\\ \{G_i, G_j\} &= -\cInv{2} \epsilon_{ijk} J_k\,,\label{eq:PAGGeqJ} \end{align} \end{subequations} where $\vct{P}$ is the total linear momentum, $J^{ij}$ is the total angular momentum tensor and $J_i = \tfrac{1}{2}\epsilon_{ijk} J^{jk}$ the associated dual vector, $\vct{G}$ is the center-of-mass vector and $H$ the Hamiltonian of the physical system. Total linear momentum $\vct{P}$ and total angular momentum $J^{ij} = - J^{ji}$ are given by \begin{align} \vct{P} = \sum_a \vmom{a}\,, \quad J^{ij} = \sum_a \left[\hat{z}^i_a \mom{a}{j} - \hat{z}^j_a \mom{a}{i} + \spin{a}{i}{j}\right]\,, \end{align} see also, e.g., \cite{Damour:Jaranowski:Schafer:2000, Damour:Jaranowski:Schafer:2008:1}.
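As a side illustration (a symbolic sketch, not part of the source derivation), the purely kinematical relations $\{J_i, P_j\} = \epsilon_{ijk} P_k$ and $\{J_i, J_j\} = \epsilon_{ijk} J_k$ can be verified for the orbital part of a single particle, $J_i = \epsilon_{ijk}\hat{z}^j p^k$, e.g. with sympy:

```python
import sympy as sp

# Canonical variables of a single point mass (orbital part only,
# spin contributions to J^{ij} omitted for brevity)
xs = sp.symbols('z1 z2 z3')
ps = sp.symbols('p1 p2 p3')

def pb(f, g):
    """Poisson bracket {f, g} for one particle."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(xs, ps))

# J_i = eps_{ijk} z^j p^k, the dual vector of the orbital part of J^{ij}
J = [sum(sp.LeviCivita(i, j, k)*xs[j]*ps[k]
         for j in range(3) for k in range(3)) for i in range(3)]

# Check {J_i, P_j} = eps_{ijk} P_k and {J_i, J_j} = eps_{ijk} J_k
for i in range(3):
    for j in range(3):
        assert sp.expand(pb(J[i], ps[j])
                         - sum(sp.LeviCivita(i, j, k)*ps[k] for k in range(3))) == 0
        assert sp.expand(pb(J[i], J[j])
                         - sum(sp.LeviCivita(i, j, k)*J[k] for k in range(3))) == 0
```

The same bracket routine extends to several particles; the spin part of $J^{ij}$ would additionally require the canonical spin brackets.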
For the contributions of the propagating field degrees of freedom see, e.g., \cite{Steinhoff:Wang:2009,Steinhoff:2011}. However, these contributions can be dropped within $\vct{P}$ and $J^{ij}$ here as we are considering the \emph{conservative} matter-only Hamiltonian instead of the ADM Hamiltonian (the latter still depends on the canonical field variables). \subsection{General Considerations for a Center-Of-Mass Vector Ansatz} As in \cite{Damour:Jaranowski:Schafer:2000, Damour:Jaranowski:Schafer:2008:1,Hartung:Steinhoff:2011:1} we use an ansatz for the center-of-mass vectors $\vct{G}$ at next-to-next-to-leading order (at lower orders it is also possible to directly calculate $\vct{G}$ from certain integrals). For constructing the center-of-mass vectors one has to consider the irreducible algebraic quantities which can be generated from $\spin{a}{i}{j}$, $\mom{a}{i}$ and $\nxa{ab}{i}$. Since the Newtonian center-of-mass vector \begin{align} \vct{G}_\text{N} &= \sum_a m_a \vx{a}\,, \end{align} is at $\cInv{2}$ and the Newtonian Hamiltonian \begin{align} H_\text{N} &= \sum_a \frac{\vmom{a}^2}{2 m_a} - \sum_a \sum_{b\ne a} \frac{G m_a m_b}{2 r_{ab}}\,, \end{align} is at $\cInv{4}$, the higher-order corrections to the center-of-mass vector are also one post-Newtonian order below the corresponding Hamiltonian. Thus the momentum and $G$ powers appearing there are also reduced. Let us demonstrate these considerations at the point-mass level: The Newtonian Hamiltonian has only $p^2$ terms at $G^0$ [which could be $\vmom{1}^2$, $\vmom{2}^2$, $\scpm{{\vnunit}}{\vmom{1}}^2$, $\scpm{{\vnunit}}{\vmom{2}}^2$, and $\scpm{\vmom{1}}{\vmom{2}}$]\footnote{Only $\vmom{1}^2$ and $\vmom{2}^2$ actually appear at the Newtonian order; this discussion is only meant to give an idea of the momentum powers appearing at certain post-Newtonian orders.} and $p^0$ terms at $G^1$ (which is only one term). At 1PN there appear $p^4$ terms at $G^0$, $p^2$ terms at $G^1$ and $p^0$ at $G^2$.
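The counting pattern just described can be condensed into a small schematic helper (an illustration only, not part of the formalism): at the formal $n$PN point-mass level one has $G^k$ terms with momentum power $2(n+1-k)$ for $k = 0, \dots, n+1$.

```python
def pm_term_structures(n):
    """(G power, momentum power) pairs of the formal nPN point-mass
    Hamiltonian: with p ~ 1/c and G/r ~ 1/c^2 the total order is
    c^(-(2n+2)), so the momentum power at G^k is 2*(n + 1 - k)."""
    return [(k, 2*(n + 1 - k)) for k in range(n + 2)]

# Newtonian (n = 0): p^2 at G^0 and p^0 at G^1
assert pm_term_structures(0) == [(0, 2), (1, 0)]
# 1PN: p^4 at G^0, p^2 at G^1, p^0 at G^2
assert pm_term_structures(1) == [(0, 4), (1, 2), (2, 0)]
```

This reproduces, in particular, the $p^8$ at $G^0$ through $p^0$ at $G^4$ structure of the 3PN level.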
At 3PN there are $p^8$ terms at $G^0$ and $p^0$ terms at $G^4$. The corresponding center-of-mass vectors contain the following momentum powers: the Newtonian center-of-mass vector mentioned above has $p^0$ at $G^0$, the 1PN one contains $p^2$ at $G^0$ and $p^0$ at $G^1$, and at the 3PN level it contains $p^6$ at $G^0$ and $p^0$ at $G^3$. Now we will discuss how to construct the linear-in-spin corrections to the center-of-mass vectors. Symbolically they can be written in the form \begin{align} \vct{G}_{\text{SO}} &= \text{SO-scalar}\cdot\text{PM-vector} + \text{PM-scalar}\cdot\text{SO-vector}\,,\label{eq:GSO}\\ \vct{G}_{\text{SS}} &= S_1 S_2\text{-scalar}\cdot\text{PM-vector} + \text{SO-scalar}\cdot\text{SO-vector} + \text{PM-scalar}\cdot S_1 S_2\text{-vector}\label{eq:GSS}\,. \end{align} Notice that we are formally working in generic dimensions, where a spin-\emph{vector} cannot be defined. As the Poincar\'e algebra must also hold in generic dimensions, it must be possible to construct the center-of-mass vector in terms of the spin-\emph{tensor}. This is fortunate, as identities such as (6.1) in \cite{Marsat:Bohe:Faye:Blanchet:2012} would complicate the situation if one were forced to work with a spin vector in $d=3$. Let us now summarize the vector quantities which can be built at certain spin levels and may be used to construct the center-of-mass vectors.
\begin{table}[htpb] \begin{tabular}{c|c} Vector & Irreducible Quantities \\ \hline point-mass (PM) & $\vx{a}$, $\vmom{a}$, $\vnxa{ab}$ \\ spin-orbit (SO) (for $\spin{a}{i}{j}$) & $\nxa{ab}{j}\spin{a}{i}{j}$, $\mom{a}{j}\spin{a}{i}{j}$, $\mom{b}{j}\spin{a}{i}{j}$ \\ spin($a$)-spin($b$) ($S_a S_b$) & $\nxa{ab}{k}\spin{a}{k}{j}\spin{b}{i}{j}$, $\mom{a}{k}\spin{a}{k}{j}\spin{b}{i}{j}$, $\mom{b}{k}\spin{a}{k}{j}\spin{b}{i}{j}$ \\ \end{tabular} \caption{Vector quantities at certain spin levels} \label{tab:vectors} \end{table} These vectors may have to be multiplied by scalar quantities, see \eqref{eq:GSO} and \eqref{eq:GSS}. If the number of momentum variables in the spin-orbit or spin(1)-spin(2) scalars given in the following Table \ref{tab:scalars} is not sufficient for the appropriate $G$-order, they have to be filled up with point-mass scalars, i.e., powers of the linear momenta. \begin{table}[htpb] \begin{tabular}{c|c} Scalar & Irreducible Quantities \\ \hline PM & linear momentum powers $p^n$ \\ SO & $(\nxa{ab}{i}\mom{a}{j}\spin{a}{i}{j})$, $(\nxa{ab}{i}\mom{b}{j}\spin{a}{i}{j})$, $(\mom{a}{i}\mom{b}{j}\spin{a}{i}{j})$ \\ $S_1 S_2$ & $(\mom{1}{i}\mom{2}{j}\spin{1}{i}{j})(\mom{1}{i}\mom{2}{j}\spin{2}{i}{j})$, $(\nun{i}\mom{1}{j}\spin{1}{i}{j})(\nun{i}\mom{1}{j}\spin{2}{i}{j})$,\\ & $(\nun{i}\mom{1}{j}\spin{1}{i}{j})(\nun{i}\mom{2}{j}\spin{2}{i}{j})$, $(\mom{1}{i}\mom{2}{j}\spin{1}{i}{j})(\mom{1}{i}\mom{2}{j}\spin{2}{i}{j})$,\\ &$(\nun{i}\mom{1}{j}\spin{1}{i}{j})(\nun{i}\mom{1}{j}\spin{2}{i}{j})$, $(\nun{i}\mom{1}{j}\spin{1}{i}{j})(\nun{i}\mom{2}{j}\spin{2}{i}{j})$,\\ &$(\nun{i}\mom{2}{j}\spin{1}{i}{j})(\nun{i}\mom{1}{j}\spin{2}{i}{j})$, $(\nun{i}\mom{2}{j}\spin{1}{i}{j})(\nun{i}\mom{2}{j}\spin{2}{i}{j})$,\\ &$(\spin{1}{i}{j}\spin{2}{i}{j})$, $(\nun{i}\nun{j}\spin{1}{i}{k}\spin{2}{j}{k})$,\\ &$(\nun{i}\mom{1}{j}\spin{1}{i}{k}\spin{2}{j}{k})$, $(\nun{i}\mom{2}{j}\spin{1}{i}{k}\spin{2}{j}{k})$, \\ &$(\mom{1}{i}\nun{j}\spin{1}{i}{k}\spin{2}{j}{k})$,
$(\mom{2}{i}\nun{j}\spin{1}{i}{k}\spin{2}{j}{k})$,\\ &$(\mom{1}{i}\mom{2}{j}\spin{1}{i}{k}\spin{2}{j}{k})$, $(\mom{2}{i}\mom{1}{j}\spin{1}{i}{k}\spin{2}{j}{k})$, \\ &$(\mom{1}{i}\mom{2}{j}\spin{1}{i}{j})(\nun{i}\mom{1}{j}\spin{2}{i}{j})$, $(\mom{1}{i}\mom{2}{j}\spin{1}{i}{j})(\nun{i}\mom{2}{j}\spin{2}{i}{j})$,\\ &$(\nun{i}\mom{1}{j}\spin{1}{i}{j})(\mom{1}{i}\mom{2}{j}\spin{2}{i}{j})$, $(\nun{i}\mom{2}{j}\spin{1}{i}{j})(\mom{1}{i}\mom{2}{j}\spin{2}{i}{j})$ \\ \end{tabular} \caption{Scalar quantities at certain spin levels} \label{tab:scalars} \end{table} It is also important that every spin is counted like a linear momentum, because both are of the same $\cInv{1}$-order, see \ref{subsub:ordercounting}. This means the formal 3PN spin-orbit and spin(1)-spin(2) Hamiltonians have only contributions up to $G^3$ ($G^4$ contributions cannot contain any spins since they are momentum-independent for point-masses). Notice that the Hamiltonians can only be constructed from the irreducible scalar quantities given above. This is demanded by the Poincar\'e algebra, namely by \eqref{eq:PAJH} ($H$ must be invariant under translations and rotations and is thus a scalar). The $\vx{a}$ contribution to the center-of-mass vector can be fixed by \eqref{eq:PAGPeqH} using the lower-order Hamiltonian. This Hamiltonian can always be written in the form $H = \sum_a h_a$, where the $h_a$ are translation invariant, $\{h_a, \vct{P}\} = 0$. (In the post-Newtonian approximation of general relativity all Hamiltonians indeed have such a structure with translation-invariant $h_a$.)
If we make an ansatz for the center-of-mass vector of the form \begin{align} \vct{G} &= \sum_a h_a \vx{a} + \vct{Y}\label{eq:Gansatz}\,, \end{align} we see that ($\cInv{1} = 1$) \begin{align} \left\{\sum_a h_a \xa{a}{i} + Y^i, \sum_b \mom{b}{j}\right\} &= \sum_a \biggl[\underbrace{\{h_a,P^j\}}_{=0}\xa{a}{i} + h_a \delta_{ij}\biggr] + \underbrace{\{Y^i,P^j\}}_{\stackrel{!}{=}0} = \sum_a h_a \delta_{ij} = H \delta_{ij} \label{eq:GPeqHdemand}\,. \end{align} Equation \eqref{eq:GPeqHdemand} demands that $\{Y^i,P^j\}=0$, so $\vct{Y}$ must be translation invariant. We have thus shown that the part of the center-of-mass vector which is not translation invariant, i.e., $\sum_a h_a \vx{a}$, can be read off from the Hamiltonian. From these considerations it follows that in the spin-orbit case the center-of-mass vector consists of 52 algebraically independent quantities for one object, and in the spin(1)-spin(2) case there are 86 algebraically independent quantities for both objects. Notice that up to the formal 3PN level and linear order in spin all center-of-mass vectors can be fixed uniquely by using the Poincar\'e algebra. \subsection{Next-to-next-to-leading Order linear-in-spin Center-Of-Mass Vectors} Now we take the ansatz for the center-of-mass vector \eqref{eq:Gansatz}, where $\vct{Y}$ has to be constructed from the irreducible quantities given in Tables \ref{tab:vectors} and \ref{tab:scalars} with the decompositions \eqref{eq:GSO} and \eqref{eq:GSS}, but without $\vx{a}$ vectors, and insert it into \eqref{eq:PAGHeqP} with the Hamiltonians \eqref{eq:HNNLOSO} and \eqref{eq:HNNLOS1S2} for the spin-orbit and spin(1)-spin(2) case. In this way all unknown coefficients mentioned above can be fixed uniquely. The center-of-mass vector contributions given here implement the change in the binding energy of the system due to the NNLO linear-in-spin interaction Hamiltonians.
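The demand $\{G^i, P^j\} = H\,\delta_{ij}$ (with $\cInv{1} = 1$) can be checked explicitly for the Newtonian two-body system with the translation-invariant choice $h_a = \vmom{a}^2/2m_a - G m_1 m_2/2\relab{12}$ (a symbolic side check with sympy, not part of the source derivation):

```python
import sympy as sp

# Two particles in three dimensions; G here is the gravitational constant
m1, m2, Gc = sp.symbols('m1 m2 G', positive=True)
x1 = sp.Matrix(sp.symbols('x1_1 x1_2 x1_3'))
x2 = sp.Matrix(sp.symbols('x2_1 x2_2 x2_3'))
p1 = sp.Matrix(sp.symbols('p1_1 p1_2 p1_3'))
p2 = sp.Matrix(sp.symbols('p2_1 p2_2 p2_3'))
r12 = sp.sqrt((x1 - x2).dot(x1 - x2))

# Translation-invariant pieces h_a with H = h_1 + h_2 (Newtonian level)
h1 = p1.dot(p1)/(2*m1) - Gc*m1*m2/(2*r12)
h2 = p2.dot(p2)/(2*m2) - Gc*m1*m2/(2*r12)
H = h1 + h2

coords = list(x1) + list(x2)
moms = list(p1) + list(p2)

def pb(f, g):
    """Poisson bracket over both particles."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(coords, moms))

Gvec = [h1*x1[i] + h2*x2[i] for i in range(3)]   # ansatz with Y = 0
P = [p1[i] + p2[i] for i in range(3)]

# {G_i, P_j} = H delta_ij follows from the translation invariance of h_a
for i in range(3):
    for j in range(3):
        expected = H if i == j else 0
        assert sp.simplify(pb(Gvec[i], P[j]) - expected) == 0
```

At higher orders the same bracket machinery fixes the unknown coefficients of the $\vct{Y}$ ansatz via $\{G_i, H\} = P_i$.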
In the context of mass-energy equivalence this change in the binding energy also results in a modified gravitating mass and thus in a correction to the Newtonian center-of-mass vector $\vct{G}_{\text{N}} = \sum_a m_a \vx{a}$, which does not take any interactions into account. The correction to the center-of-mass vector from NNLO spin-orbit interactions finally reads \begin{align} \vct{G}^{\text{NNLO}}_{\text{SO}} & = \frac{(\vmom{1}^2)^2}{16 m_1^5} (\vmom{1}\times\vspin{1})\nonumber\\ &+ (\vmom{2}\times\vspin{1}) \biggl[ \frac{G}{\relab{12}} \biggl( - \frac{3 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}}{8 m_1 m_2} - \frac{\scpm{\vmom{1}}{\vmom{2}}}{8 m_1 m_2} \biggr) +\frac{G^2}{\relab{12}^2} \biggl( -\frac{47 m_1}{16} -\frac{21 m_2}{8} \biggr) \biggr]\nonumber\\ &+ (\vmom{1}\times\vspin{1}) \biggl[ \frac{G}{\relab{12}} \biggl( \frac{9 m_2 \vmom{1}^2}{16 m_1^3} - \frac{5 \vmom{2}^2}{8 m_1 m_2} \biggr) +\frac{G^2}{\relab{12}^2} \biggl( \frac{57 m_2}{16} +\frac{15 m_2^2}{8 m_1} \biggr) \biggr]\nonumber\\ &+ ({\vnunit}\times\vspin{1}) \biggl[ \frac{G}{\relab{12}} \biggl( \frac{9\scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}^2}{16 m_1 m_2} + \frac{\scpm{{\vnunit}}{\vmom{2}}\scpm{\vmom{1}}{\vmom{2}}}{8 m_1 m_2} + \frac{\scpm{{\vnunit}}{\vmom{1}}\vmom{2}^2}{16 m_1 m_2} \biggr)\nonumber\\ &\quad +\frac{G^2}{\relab{12}^2} \biggl( -\frac{5 m_2}{8} \scpm{{\vnunit}}{\vmom{1}} +\left\{\frac{13 m_1}{8} + \frac{11 m_2}{4}\right\} \scpm{{\vnunit}}{\vmom{2}} \biggr) \biggr]\nonumber\\ &- \frac{G}{\relab{12}} \vmom{1} \frac{\scpm{{\vnunit}}{\vmom{2}}(({\vnunit} \times \vmom{2})\vspin{1})}{2 m_1 m_2}\nonumber\\ &+ \frac{G}{\relab{12}} \vmom{2} \biggl( -\frac{\scpm{{\vnunit}}{\vmom{2}}(({\vnunit} \times \vmom{1})\vspin{1})}{8 m_1 m_2} +\frac{\scpm{{\vnunit}}{\vmom{1}}(({\vnunit} \times \vmom{2})\vspin{1})}{2 m_1 m_2}\nonumber\\ &\quad -\frac{((\vmom{1} \times \vmom{2})\vspin{1})}{8 m_1 m_2} \biggr)\nonumber\\ &+ {\vnunit} \biggl[ \frac{G}{\relab{12}} \biggl( \biggl\{ \frac{m_2 \vmom{1}^2}{16
m_1^3} + \frac{15 \scpm{{\vnunit}}{\vmom{2}}^2}{16 m_1 m_2} - \frac{3 \vmom{2}^2}{16 m_1 m_2} \biggr\}(({\vnunit} \times \vmom{1})\vspin{1})\nonumber\\ &\quad\quad +\biggl\{ -\frac{3 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}}{2 m_1 m_2} -\frac{\scpm{\vmom{1}}{\vmom{2}}}{2 m_1 m_2} \biggr\}(({\vnunit} \times \vmom{2})\vspin{1})\nonumber\\ &\quad\quad +\frac{13 \scpm{{\vnunit}}{\vmom{2}}}{8 m_1 m_2}((\vmom{1} \times \vmom{2})\vspin{1}) \biggr)\nonumber\\ &\quad +\frac{G^2}{\relab{12}^2} \biggl( \left\{\frac{m_2}{2} + \frac{5 m_2^2}{4 m_1}\right\} (({\vnunit} \times \vmom{1})\vspin{1}) +\left\{-2 m_1 - 5 m_2\right\} (({\vnunit} \times \vmom{2})\vspin{1}) \biggr) \biggr]\nonumber\\ &+ \frac{\hat{\vct{z}}_1}{\relab{12}} \biggl[ \frac{G}{\relab{12}} \biggl( \biggl\{ \frac{3 \scpm{{\vnunit}}{\vmom{1}} \scpm{{\vnunit}}{\vmom{2}}}{m_1 m_2} +\frac{\scpm{\vmom{1}}{\vmom{2}}}{m_1 m_2} \biggr\} (({\vnunit} \times \vmom{2})\vspin{1})\nonumber\\ &\quad\quad +\biggl\{ \frac{3 \scpm{{\vnunit}}{\vmom{1}}}{4 m_1^2} -\frac{2 \scpm{{\vnunit}}{\vmom{2}}}{m_1 m_2} \biggr\}((\vmom{1} \times \vmom{2})\vspin{1})\nonumber\\ &\quad\quad +\biggl\{ - \frac{5 m_2 \vmom{1}^2}{8 m_1^3} - \frac{3 \scpm{{\vnunit}}{\vmom{1}} \scpm{{\vnunit}}{\vmom{2}}}{4 m_1^2} - \frac{3 \scpm{{\vnunit}}{\vmom{2}}^2}{2 m_1 m_2}\nonumber\\ &\quad\quad\quad - \frac{3 \scpm{\vmom{1}}{\vmom{2}}}{4 m_1^2} + \frac{3 \vmom{2}^2}{4 m_1 m_2} \biggr\} (({\vnunit} \times \vmom{1})\vspin{1}) \biggr)\nonumber\\ & +\frac{G^2}{\relab{12}^2} \biggl( \left\{ -\frac{11 m_2}{2} -\frac{5 m_2^2}{m_1} \right\} (({\vnunit} \times \vmom{1})\vspin{1}) +\left\{ 6 m_1 + \frac{15 m_2}{2} \right\} (({\vnunit} \times \vmom{2})\vspin{1}) \biggr) \biggr]\nonumber\\ & + (1\leftrightarrow2)\,,\label{eq:GNNLOSO} \end{align} and the NNLO spin(1)-spin(2) part reads \begin{align} \vct{G}^{\text{NNLO}}_{\text{SS}} & = \frac{G^2}{\relab{12}^3} (({\vnunit} \times \vspin{2})\times \vspin{1}) \biggl(\frac{17}{8} m_1 + m_2\biggr) \nonumber \\ & +
\frac{G}{\relab{12}^2}\biggl[ \vmom{1} \biggl( -\frac{\scpm{{\vnunit}}{\vspin{1}}\scpm{\vmom{2}}{\vspin{2}}}{4 m_1 m_2} +\frac{3 \scpm{{\vnunit}}{\vspin{1}} \scpm{{\vnunit}}{\vspin{2}} \scpm{{\vnunit}}{\vmom{2}}}{4 m_1 m_2} \biggr) \nnl + ({\vnunit} \times \vspin{1}) \biggl( -\frac{((\vmom{1}\times\vmom{2})\,\vspin{2})}{4 m_1 m_2} +\frac{3(({\vnunit}\times\vmom{1})\,\vspin{2})\scpm{{\vnunit}}{\vmom{2}}}{4 m_1 m_2} \biggr) \nnl - (\vmom{1} \times \vspin{1}) \frac{(({\vnunit} \times \vmom{2})\,\vspin{2})}{8 m_1 m_2} - (\vmom{2} \times \vspin{1}) \frac{(({\vnunit} \times \vmom{1})\,\vspin{2})}{4 m_1 m_2} \nnl - ((\vmom{1} \times \vspin{2})\times \vspin{1}) \frac{\scpm{{\vnunit}}{\vmom{2}}}{4 m_1 m_2} - ((\vmom{2} \times \vspin{2})\times \vspin{1}) \frac{\scpm{{\vnunit}}{\vmom{1}}}{4 m_1 m_2} \biggr]\nonumber \\ & +\frac{\vx{1}}{\relab{12}} \biggl( \frac{2 G^2 (2 m_1 + m_2)}{\relab{12}^3}\biggl[ \scpm{\vspin{1}}{\vspin{2}} -2\scpm{{\vnunit}}{\vspin{1}}\scpm{{\vnunit}}{\vspin{2}} \biggr] \nnl +\frac{G}{\relab{12}^2}\biggl[ -\frac{3 (({\vnunit}\times\vmom{1})\,\vspin{1})(({\vnunit}\times\vmom{1})\,\vspin{2})}{2 m_1^2} \nnl +\frac{3 (({\vnunit}\times\vmom{2})\,\vspin{1})(({\vnunit}\times\vmom{1})\,\vspin{2})}{2 m_1 m_2} +\frac{3 (({\vnunit}\times\vmom{1})\,\vspin{1})(({\vnunit}\times\vmom{2})\,\vspin{2})}{8 m_1 m_2} \nnl -\frac{\scpm{\vmom{2}}{\vspin{1}}\scpm{\vmom{1}}{\vspin{2}}}{8 m_1 m_2} +\frac{\scpm{\vmom{1}}{\vspin{1}}\scpm{\vmom{2}}{\vspin{2}}}{4 m_1 m_2} +\frac{3 \scpm{\vmom{2}}{\vspin{1}}\scpm{{\vnunit}}{\vspin{2}}\scpm{{\vnunit}}{\vmom{1}}}{2 m_1 m_2} \nnl -\frac{3 \scpm{{\vnunit}}{\vspin{1}}\scpm{\vmom{1}}{\vspin{2}}\scpm{{\vnunit}}{\vmom{1}}}{2 m_1^2} +\frac{3 \scpm{{\vnunit}}{\vspin{1}}\scpm{\vmom{2}}{\vspin{2}}\scpm{{\vnunit}}{\vmom{1}}}{4 m_1 m_2} \nnl +\frac{3 \scpm{\vmom{1}}{\vspin{1}}\scpm{{\vnunit}}{\vspin{2}}\scpm{{\vnunit}}{\vmom{2}}}{4 m_1 m_2} \nnl -\scpm{{\vnunit}}{\vspin{1}}\scpm{{\vnunit}}{\vspin{2}}\biggl\{ \frac{15 \scpm{{\vnunit}}{\vmom{1}} 
\scpm{{\vnunit}}{\vmom{2}}}{4 m_1 m_2} +\frac{3 \scpm{\vmom{1}}{\vmom{2}}}{4 m_1 m_2} \biggr\} \nnl +\scpm{\vspin{1}}{\vspin{2}}\biggl\{ \frac{3 \scpm{{\vnunit}}{\vmom{1}}^2}{2 m_1^2} -\frac{3 \scpm{{\vnunit}}{\vmom{1}}\scpm{{\vnunit}}{\vmom{2}}}{4 m_1 m_2} +\frac{\scpm{\vmom{1}}{\vmom{2}}}{8 m_1 m_2} \biggr\} \biggr] \biggr) + (1\leftrightarrow2)\,.\label{eq:GNNLOSS} \end{align} From this the boost vector $\vct{K} = \vct{G} - t \vct{P}$ can be obtained, which explicitly depends on the time $t$. Notice that in \eqref{eq:GNNLOSO} (in contrast to \eqref{eq:GNNLOSS}) a one-particle term without a factor of $G$ appears. It comes from the displacement of the center of mass due to the rotation and the resulting special-relativistic Lorentz contractions of different parts of the object (which must have a finite size). Further discussion of this issue can be found in \cite{Steinhoff:2011}, where it is illustrated graphically in Fig.~1. In \eqref{eq:GNNLOSS} there is no term without a factor of $G$ because in general relativity all interactions between two spins are transmitted by the gravitational field. \section{Test-Spin near Kerr Black Hole} \label{sec:testspin} In the last section we checked whether our results are compatible with the kinematical restrictions of the Poincar\'e algebra. Here we derive an exact test-spin Hamiltonian against which we compare our results \eqref{eq:HNNLOSO} and \eqref{eq:HNNLOS1S2}. In the following subsections we restrict ourselves to $d=3$, because in the test-spin case only delta integrals have to be evaluated. A partial check of our Hamiltonians against the test-spin case is contained in \cite{Steinhoff:Puetzfeld:2012} for the case of aligned spins. There are various approaches to calculate the motion of a test-spin near a Kerr black hole (see, e.g., \cite{Barausse:Racine:Buonanno:2009} and references therein).
Since in the counting used in \cite{Barausse:Racine:Buonanno:2009} the NNLO spin(1)-spin(2) interaction is at 4PN and therefore not considered therein, one needs to calculate the spin(1)-spin(2) contributions from \begin{align} H_{\text{Testspin}} &= -\hat{A}^{k\ell} \triad{m}{k} e^{(m)}_{\quad \ell,0} + \int \text{d}^3 x\,[\src{}\lapse - \src{i} \shiftup{i}] \label{eq:testspinhamiltonian}\,, \end{align} where $\lapse$, $\shift{i}$, the framefield $\triad{m}{k}$, and the implicitly appearing metric provide the exterior gravitational field, and $\src{}$ and $\src{i}$ represent the test-spin moving in it \cite{Steinhoff:2011}. Note that the framefield in the first term has to be evaluated at the position of the test-spin. Since the only contribution linear in the Kerr spin comes from the shift vector, while all other spin dependencies of the metric are at least quadratic in the Kerr spin, the three-dimensional part of the metric and the lapse are identical to the corresponding components of the isotropic Schwarzschild metric, i.e., the spinless limit of the Kerr metric. The shift is given by the expressions in \cite[Eq. (54)]{Hergt:Schafer:2008:2}. The metric components generated by particle `$1$' are given by \begin{align} g_{00} &= -\left(\frac{1-\frac{m_1}{2 r_1}}{1+\frac{m_1}{2 r_1}}\right)^2\,,\\ g_{ij} &= \left(1+\frac{m_1}{2 r_1}\right)^4 \delta_{ij}\,,\\ g_{0i} &= \frac{2 m_1\, \nxa{1}{k}\kerrspin{1}{i}{k}}{r_1^2 \left(1 + \frac{m_1}{2 r_1}\right)^2}\,. \end{align} It is well known that the metric components can be rewritten into a three-dimensional metric on the spatial hypersurface, lapse $\lapse$, and shift $\shiftup{i} = \gamma^{ij} \shift{j}$, using \eqref{eq:metricdecomp1} and \eqref{eq:metricdecomp2}.
So one can immediately see that \begin{align} \gamma_{ij} = g_{ij} &= \left(1+\frac{m_1}{2 r_1}\right)^4 \delta_{ij}\label{eq:gammaiso}\,,\\ \gamma^{ij} &= \left(1+\frac{m_1}{2 r_1}\right)^{-4} \delta_{ij}\label{eq:gammainviso}\,,\\ \lapse &= \frac{1-\frac{m_1}{2 r_1}}{1+\frac{m_1}{2 r_1}}\,, \end{align} with the squares of the shift neglected, because they are quadratic in the Kerr spin, \begin{align} \shift{i} &= \frac{2 m_1\, \nxa{1}{k}\kerrspin{1}{i}{k}}{r_1^2 \left(1 + \frac{m_1}{2 r_1}\right)^2} = -2 m_1 \kerrspin{1}{i}{k} \left(\frac{1}{r_1 \left(1 + \frac{m_1}{2 r_1}\right)}\right)_{,k}\,, \end{align} see \cite{Hergt:Schafer:2008:2}. Here $\kerrspin{1}{i}{j} = \spin{1}{i}{j}/m_1$ is the Kerr spin belonging to the black hole located at position `1'. Note that to linear order in spin \begin{align} \pi^{ij} = \frac{6 m_1 \nxa{1}{k} \nxa{1}{(i} \kerrspin{1}{j)}{k}}{r_1^3 \left(1 + \frac{m_1}{2 r_1}\right)^4}\,, \end{align} (calculated from the inverse metric and the three-dimensional Christoffel symbols, see \cite[Eq. (65)]{Hergt:Schafer:2008:2}) fulfills the ADM gauge condition, so no further coordinate shift from quasi isotropic coordinates to another coordinate system is necessary. Due to the symmetric framefield gauge $\triad{i}{j} = \sqrt{\gamma_{ij}}$, the framefield is given by \eqref{eq:dreibein} \begin{align} \triad{i}{j} &= \left(1+\frac{m_1}{2 r_1}\right)^2 \delta_{ij}\label{eq:dreibeiniso}\,. \end{align} For the test-spin in a Kerr field it is sufficient to calculate \eqref{eq:testspinhamiltonian} where the sources are given by \eqref{eq:sourcehamilton} and \eqref{eq:sourcemomentum}. Since the metric and the framefield are proportional to $\delta_{ij}$ many terms vanish in the source. 
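As a quick consistency check (a side calculation, not contained in the source), one may verify symbolically that the lapse above reproduces the standard Schwarzschild $-g_{00} = 1 - 2m_1/R$ under the well-known transformation $R = r_1(1+m_1/2r_1)^2$ from the isotropic to the Schwarzschild radial coordinate:

```python
import sympy as sp

r, m1 = sp.symbols('r m1', positive=True)

# Isotropic-coordinate lapse as given in the text
alpha = (1 - m1/(2*r))/(1 + m1/(2*r))

# Schwarzschild radial coordinate corresponding to the isotropic r
R = r*(1 + m1/(2*r))**2

# alpha^2 should equal 1 - 2 m1/R, i.e. -g_00 of the Schwarzschild metric
assert sp.simplify(alpha**2 - (1 - 2*m1/R)) == 0
```

The same substitution maps $(1+m_1/2r)^4\,\delta_{ij}$ onto the standard Schwarzschild spatial metric.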
The only terms remaining are \begin{align} \src{} &= \sum_a \biggl[-\nmom{a} \dl{a} -\frac{1}{2}\frac{\hat{S}_{a\,li}\mom{a}{j}}{\nmom{a}} \gamma^{kl}\gamma^{ij}_{\quad,k}\dl{a} -\left( \frac{\mom{a}{l}}{m_a - \nmom{a}}\gamma^{ij}\gamma^{kl}\hat{S}_{a\,jk}\dl{a} \right)_{,i} \biggr]\,,\\ \src{i} &= \sum_a \biggl[\mom{a}{i}\dl{a} + \frac{1}{2}\biggl( \gamma^{jk} \hat{S}_{a\,ik} \dl{a} +\gamma^{jk}\gamma^{\ell p} \frac{2 \mom{a}{\ell}\mom{a}{(i}\hat{S}_{a\,k)p}}{\nmom{a}(m_a - \nmom{a})} \dl{a} \biggr)_{,j} \biggr]\,. \end{align} Inserting sources, metric components, and framefield, and evaluating them at the test-spin location gives the exact result \begin{align} H^{\text{Kerr}}_{\text{Testspin}} & \stackrel{\Order{a^1}}{=} \frac{1-\frac{m_1}{2 \relab{12}}}{1 + \frac{m_1}{2 \relab{12}}} \sqrt{m_2^2 + \frac{\vmom{2}^2}{\left(1+\frac{m_1}{2 \relab{12}}\right)^4}} \nonumber\\ & - \frac{m_1 \trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}} }{\relab{12}^2 \left(1+\frac{m_1}{2 \relab{12}}\right)^{6}} \left[ \frac{1-\frac{m_1}{2 \relab{12}}}{\sqrt{m_2^2 + \frac{\vmom{2}^2}{\left(1 + \frac{m_1}{2 \relab{12}}\right)^4}}} +\frac{1}{m_2 + \sqrt{m_2^2 + \frac{\vmom{2}^2}{\left(1 + \frac{m_1}{2 \relab{12}}\right)^4}}} \right] \nonumber\\ & + \frac{2 m_1 \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}} }{\relab{12}^2 \left(1 + \frac{m_1}{2 \relab{12}}\right)^6} + \frac{m_1}{\relab{12}^3 \left(1 + \frac{m_1}{2 \relab{12}}\right)^7}\Biggl[ -\left(1 - \frac{5 m_1}{2 \relab{12}}\right)\scpm{\vkerrspin{1}}{\vspin{2}} \nonumber\\ & -3 \left(1 - \frac{m_1}{2 \relab{12}}\right)\biggl\{ -\scpm{\vnxa{12}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}} \nonumber\\ & + \frac{ \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}}\trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}} - \scpm{\vnxa{12}}{\vmom{2}}^2 \scpm{\vkerrspin{1}}{\vspin{2}} + \scpm{\vnxa{12}}{\vmom{2}} \scpm{\vmom{2}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}}} {\left(m_2 + \sqrt{m_2^2 + \frac{\vmom{2}^2}{\left(1 + \frac{m_1}{2 \relab{12}}\right)^4}}\right) \sqrt{m_2^2 +
\frac{\vmom{2}^2}{\left(1 + \frac{m_1}{2 \relab{12}}\right)^4}}\left(1 + \frac{m_1}{2 \relab{12}}\right)^4}\biggr\} \Biggr]\,, \end{align} which, after a post-Newtonian expansion, leads to (the post-Newtonian order given in the subscript is a formal one) \begin{align} H^{\text{Kerr}}_{\text{Testspin},\le 3\text{PN}} &\stackrel{\Order{a^1}}{\approx} m_2 c^2 + \left(\frac{\vmom{2}^2}{2 m_2} - \frac{m_1 m_2}{\relab{12}}\right) + \cInv{2} \biggl( -\frac{(\vmom{2}^2)^2}{8 m_2^3} - \frac{3 m_1 \vmom{2}^2}{2 m_2 \relab{12}} \nonumber\\ & +\frac{m_1}{\relab{12}^2} \biggl[ \frac{m_1 m_2}{2} -2 \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}} -\frac{3 \trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}}}{2 m_2} \biggr] \nonumber\\ & + \frac{m_1}{\relab{12}^3} \left[-\scpm{\vkerrspin{1}}{\vspin{2}}+3 \scpm{\vnxa{12}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}}\right] \biggr) \nonumber\\ & + \cInv{4}\biggl(\frac{(\vmom{2}^2)^3}{16 m_2^5} +\frac{5 m_1 (\vmom{2}^2)^2}{8 m_2^3 \relab{12}} +\frac{m_1 \vmom{2}^2}{m_2 \relab{12}^2} \biggl[ \frac{5 m_1 }{2} +\frac{5 \trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}}}{8 m_2^2} \biggr] \nonumber\\ & + \frac{m_1}{\relab{12}^3} \biggl[ - \frac{m_1^2 m_2}{4} + m_1 \biggl(6 \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}} + \frac{5 \trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}}}{m_2}\biggr) \nonumber\\ & - \frac{3 \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}}\trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}}}{2 m_2^2} + \frac{3 \scpm{\vnxa{12}}{\vmom{2}}^2 \scpm{\vkerrspin{1}}{\vspin{2}}}{2 m_2^2} \nonumber\\ & - \frac{3 \scpm{\vnxa{12}}{\vmom{2}}\scpm{\vmom{2}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}}}{2 m_2^2} \biggr] + \frac{6 m_1^2}{\relab{12}^4} \left[\scpm{\vkerrspin{1}}{\vspin{2}}-2 \scpm{\vnxa{12}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}}\right] \biggr) \nonumber\\ & + \cInv{6}\biggl(-\frac{5(\vmom{2}^2)^4}{128 m_2^7} -\frac{7 m_1 (\vmom{2}^2)^3}{16 m_2^5 \relab{12}} -\frac{m_1 (\vmom{2}^2)^2}{m_2^3 \relab{12}^2} \biggl[ \frac{27 m_1}{16} + \frac{7 \trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}}}{16 m_2^2}
\biggr] \nonumber\\ & +\frac{m_1 \vmom{2}^2}{m_2 \relab{12}^3} \biggl[ -\frac{25 m_1^2}{8} -\frac{27 m_1 \trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}}}{8 m_2^2} +\frac{9 \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}}\trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}}}{8 m_2^3} \nonumber\\ & +\frac{9 \scpm{\vnxa{12}}{\vmom{2}}\scpm{\vmom{2}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}}}{8 m_2^3} -\frac{9 \scpm{\vnxa{12}}{\vmom{2}}^2 \scpm{\vkerrspin{1}}{\vspin{2}}}{8 m_2^3} \biggr] + \frac{m_1^2}{\relab{12}^4} \biggl[ \frac{m_1^2 m_2}{8} \nonumber\\ & +m_1 \biggl( -\frac{21 \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}}}{2} -\frac{75 \trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}}}{8 m_2} \biggr) \nonumber\\ & +\frac{9 \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}}\trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}}}{m_2^2} +\frac{9 \scpm{\vnxa{12}}{\vmom{2}}\scpm{\vmom{2}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}}}{m_2^2} \nonumber\\ & -\frac{9 \scpm{\vnxa{12}}{\vmom{2}}^2 \scpm{\vkerrspin{1}}{\vspin{2}}}{m_2^2} \biggr] + \frac{m_1^3}{\relab{12}^5} \biggl[ -\frac{63}{4}\scpm{\vkerrspin{1}}{\vspin{2}} +\frac{105}{4} \scpm{\vnxa{12}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}} \biggr] \biggr)\,, \end{align} in full agreement with the test-spin limit of the full point-mass Hamiltonian, the spin-orbit Hamiltonian and the spin(1)-spin(2) Hamiltonian up to and including the formal 3PN order. For later checks we provide also the expressions at formal 4PN order. 
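As a quick cross-check of the quoted agreement, the spin-independent part of the exact test-spin Hamiltonian above can be compared numerically with its Newtonian and formal-1PN truncation. The following sketch (in $G=c=1$ units, with illustrative weak-field values, not part of the derivation) confirms that the residual is of the neglected formal-2PN size:

```python
import math

# Spin-independent part of the exact test-body Hamiltonian in a Kerr
# background at O(a^0), in isotropic coordinates (G = c = 1):
#   H = (1 - m1/(2r)) / (1 + m1/(2r)) * sqrt(m2^2 + p^2 / (1 + m1/(2r))^4)
def H_exact(m1, m2, p2, r):
    g = 1 + m1 / (2 * r)
    return (1 - m1 / (2 * r)) / g * math.sqrt(m2**2 + p2 / g**4)

# Truncation through the formal 1PN terms quoted in the text:
#   m2 + (p^2/2m2 - m1 m2/r) + (-p^4/8m2^3 - 3 m1 p^2/(2 m2 r) + m1^2 m2/(2 r^2))
def H_pn(m1, m2, p2, r):
    newt = p2 / (2 * m2) - m1 * m2 / r
    pn1 = -p2**2 / (8 * m2**3) - 3 * m1 * p2 / (2 * m2 * r) \
        + m1**2 * m2 / (2 * r**2)
    return m2 + newt + pn1

# Illustrative weak-field values: m1 ~ p^2 ~ 1e-4, so the neglected
# formal-2PN remainder is O(1e-12).
m1, m2, p2, r = 1e-4, 1.0, 1e-4, 1.0
assert abs(H_exact(m1, m2, p2, r) - H_pn(m1, m2, p2, r)) < 1e-10
```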
The test-spin limit for the next-to-next-to-next-to-leading order (NNNLO) spin-orbit interaction is given by \begin{align} H^{\text{Kerr}}_{\text{Testspin},4\text{PN}, \text{SO}} &= \biggl( \frac{45}{128} \frac{m_1 (\vmom{2}^2)^3}{ m_2^7 \relab{12}^2} + \frac{13}{4}\frac{m_1^2 (\vmom{2}^2)^2}{m_2^5 \relab{12}^3} + \frac{315}{32} \frac{m_1^3 \vmom{2}^2}{m_2^3 \relab{12}^4} + \frac{105}{8} \frac{m_1^4}{m_2 \relab{12}^5}\biggr) \trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}} \nonumber\\ & + 14\frac{m_1^4}{\relab{12}^5} \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}}\,, \end{align} and for the NNNLO spin(1)-spin(2) interaction by \begin{align} H^{\text{Kerr}}_{\text{Testspin},4\text{PN}, \text{SS}} &= \biggl( -\frac{15}{16}\frac{m_1 (\vmom{2}^2)^2}{m_2^6 \relab{12}^3} -9 \frac{m_1^2 \vmom{2}^2}{m_2^4 \relab{12}^4} -\frac{231}{8} \frac{m_1^3}{m_2^2 \relab{12}^5} \biggr)[ \trpm{\vnxa{12}}{\vmom{2}}{\vkerrspin{1}}\trpm{\vnxa{12}}{\vmom{2}}{\vspin{2}} \nonumber\\ & +\scpm{\vnxa{12}}{\vmom{2}}\scpm{\vmom{2}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}} -\scpm{\vnxa{12}}{\vmom{2}}^2 \scpm{\vkerrspin{1}}{\vspin{2}} ] \nonumber\\ & + \frac{14 m_1^4}{\relab{12}^6} \biggl( 2 \scpm{\vkerrspin{1}}{\vspin{2}} -3 \scpm{\vnxa{12}}{\vkerrspin{1}}\scpm{\vnxa{12}}{\vspin{2}} \biggr)\,. \end{align} Notice that the formal 3PN spin(1)-spin(2) test-spin contributions were not given in \cite{Barausse:Racine:Buonanno:2009} (in their counting rules they would be at 4PN level). Further notice that there are no contributions coming from the first term in \eqref{eq:testspinhamiltonian} in the spin-orbit, and the spin(1)-spin(2) case, since in isotropic Schwarzschild coordinates it vanishes identically, see \eqref{eq:Ahat}, \eqref{eq:nS}, \eqref{eq:gammaiso}, \eqref{eq:gammainviso}, and \eqref{eq:dreibeiniso}. There are two possible further checks which use different approaches. 
As mentioned in \cite{Hergt:Steinhoff:Schafer:2011}, a comparison of the effective field theory NNLO spin(1)-spin(2) potential in \cite{Levi:2011} to our NNLO spin(1)-spin(2) Hamiltonian would be a very strong check of both results, since the EFT results are completely independent of the ADM formalism. A further confirming check would be the derivation of both NNLO Hamiltonians using the spin-precession method shown in \cite{Damour:Jaranowski:Schafer:2008:1}. Because of the complicated structure of these comparisons they are postponed to later publications. A very recent check of the NNLO spin-orbit Hamiltonian and the resulting equations of motion was performed in \cite{Marsat:Bohe:Faye:Blanchet:2012, Bohe:Marsat:Faye:Blanchet:2012} in harmonic gauge. Furthermore, in \cite{Bohe:Marsat:Faye:Blanchet:2012} the near-zone metric was determined, which is an important step towards the mentioned template calculations. \section{Conclusions and Outlook}\label{sec:conclusions} We have derived the next-to-next-to-leading order spin-orbit and spin(1)-spin(2) Hamiltonians for binary systems. The spin-orbit Hamiltonian completes the knowledge of binary black hole dynamics up to and including 3.5PN order if the objects are rapidly rotating. For neutron stars the leading order cubic-in-spin Hamiltonians are also needed, as the results in \cite{Hergt:Schafer:2008:2, Hergt:Schafer:2008} are valid for black holes only and tidal deformation effects become very important \cite{Damour:Nagar:2009, Vines:Flanagan:2010, Bini:Damour:Faye:2012}. The Hamiltonians were checked using two methods. The fulfillment of the global approximate Poincar\'e algebra was a major criterion for the correctness of the derived Hamiltonians in the extended ADM formalism. During this check the center-of-mass vectors could be determined uniquely from an ansatz.
Since the approximate Poincar\'e algebra is not sensitive to the static part of the spin(1)-spin(2) Hamiltonian and fixed only the difference of the two coefficients at the highest order in $G$ of the spin-orbit Hamiltonian, we performed further checks. The simplest test is a linear-in-spin approximation of the Hamiltonian of a test-spin moving near a stationary Kerr black hole. We rederived the test-spin Hamiltonian from \cite{Barausse:Racine:Buonanno:2009} in a different manner in \Sec{sec:testspin} (avoiding the use of Dirac brackets). A comparison was straightforward as the same gauge was used. A more elaborate test is the recalculation of both NNLO Hamiltonians via the spin-precession frequency method in \cite{Damour:Jaranowski:Schafer:2008:1} and will be part of a further publication. Also a comparison of the NNLO spin(1)-spin(2) Hamiltonian with the NNLO spin(1)-spin(2) potential given in \cite{Levi:2011} will be part of a further publication and would be a very strong check, because the derivation of this potential is completely independent of the ADM formalism. The most important confirmation to date is the independent derivation of the NNLO spin-orbit acceleration in harmonic gauge \cite{Marsat:Bohe:Faye:Blanchet:2012}. The results given in this article complete the knowledge of the post-Newtonian approximate dynamics for binary black holes up to 3.5PN. For general compact objects like neutron stars the leading order cubic-in-spin Hamiltonians are still unknown. The NNLO spin(1)-spin(2) Hamiltonian is at 4PN if both objects are rapidly rotating, but there are still some tasks left to get the full post-Newtonian approximate dynamics up to and including 4PN.
For general compact objects the leading order quartic-in-spin Hamiltonians are also unknown; they are only known for black holes.\footnote{In \cite{Steinhoff:Puetzfeld:2012} the authors argued that the spin(1)$^4$ Hamiltonians derived in \cite{Hergt:Schafer:2008} are incomplete.} Furthermore the NNLO spin(1)$^2$ Hamiltonian -- which is also at 4PN if the object is maximally rotating and may be stronger than NNLO spin(1)-spin(2) -- is completely unknown. Last but not least the $G^3$ up to the $G^5$ corrections to the 4PN point-mass Hamiltonian are also still unknown \cite{Jaranowski:Schafer:2012}. To get reasonable results for the templates the far-zone radiation field must also be calculated at higher order in the post-Newtonian approximation and at higher orders in spin. The energy and angular momentum loss are also not known at a post-Newtonian order corresponding to next-to-next-to-leading order linear-in-spin. If radiation and fluxes are known at such high orders, a parameterization is also necessary. These three major ingredients are needed for the analytical description of gravitational wave templates, which are very sensitive to higher order post-Newtonian and spin corrections. Analytical results are still important because for spinning binaries the parameter space (masses and spin-directions of the components) is very large and numerical simulations are so time consuming that they cannot be used to cover the whole parameter space.% \footnote{In \cite{Hinder:2010} it is estimated that the simulation of eight orbits of a non-spinning binary consumes ca. $200\,000$ CPU hours for mass ratio 1:1 and up to two million CPU hours for 1:10.} \ifnotadp \ifnotprd \paragraph*{Acknowledgments} \fi \ifprd \acknowledgments \fi \ack \setlength{\bibsep}{0pt} \bibliographystyle{utphys} \input{Technical3PNPaper_refs} \fi \ifadp \begin{acknowledgement} \ack \end{acknowledgement} \bibliographystyle{adp}
\section{Introduction} Inflationary theory provides a natural resolution for the flatness, monopole, and horizon problems of our present universe described by the standard big bang cosmology \cite{inflation}. In particular, our universe is homogeneous and isotropic to a very high degree of precision \cite{data,cobe}. Such a universe can be described by the well known Friedmann-Robertson-Walker (FRW) metric \cite{book}. Moreover, gravitational physics could be different from the standard Einstein models near the Planck scale \cite{string,scale}. For example, quantum gravity or string corrections could lead to some interesting cosmological applications \cite{string}. In particular, investigations have been conducted on the possibility of deriving inflation from higher order gravitational corrections \cite{jb1,jb2,Kanti99,dm95}. For example, a general analysis of the stability conditions of gravity theories could be useful in screening physical models compatible with our physical universe. In particular, the stability conditions for a variety of pure gravity theories, as potential candidates for the inflationary universe, are derived in the flat Friedmann-Robertson-Walker (FRW) space in Refs. \cite{dm95,kp91,kpz99}. In addition, the highly isotropic universe should evolve from some initially anisotropic state before it becomes isotropic to such a high degree of precision. Nonetheless, it is interesting in itself to study the stability of the anisotropic space during the post-inflationary epoch, even if anisotropy can be smoothed out by the proposed inflationary process. One would like to know whether our universe can evolve from a certain anisotropic state to a stable and isotropic final state. In particular, it is known that such an inflationary solution exists for an NS-NS model with a metric, a dilaton, and an axion field \cite{CHM01}. This inflationary solution is also shown to be stable against small perturbations \cite{ck01}.
Note that similar stability analyses have also been carried out for a variety of interesting models \cite{kim,abel}. The importance of higher derivative models has been recognized in many respects. In particular, higher derivative terms are known to be important for Planck scale physics \cite{kim,dm95}. For example, higher order corrections from quantum gravity or string theory have been considered as possible inflationary models \cite{green}. In addition, higher derivative terms also arise as quantum corrections of the matter fields \cite{green}. Therefore, it is important to study the implications of the stability analysis of all possible higher derivative models. Recently, there has also been growing interest in the study of Kantowski-Sachs (KS) type anisotropic spaces \cite{BPN,LH,NO}. We will hence study the existence and stability conditions of an inflationary de Sitter final state for some higher derivative models in Kantowski-Sachs spaces. In particular, a large class of pure gravity models with inflationary KS$/$FRW solutions was presented in Ref. \cite{kao06}. Any KS type solution that approaches an asymptotic FRW final state will be referred to as a KS$/$FRW solution in this paper for convenience. It has been shown that the existence of a stable de Sitter background is closely related to the choices of the coupling constants. Indeed, the pure gravity model given below \cite{kao06} \begin{equation} {\cal L} = -R - \alpha R^2 - \beta R^\mu_\nu R^\nu_\mu + \gamma R^{\mu \nu}_{\;\;\; \beta \gamma} \, R^{\beta \gamma}_{\;\;\; \sigma \rho} \, R^{\sigma \rho}_{\;\;\; \mu \nu} \end{equation} admits an inflationary solution with a constant Hubble parameter determined by $H_0^4 = 1/(4 \gamma)$ if $\gamma >0$. Here $\alpha$, $\beta$, and $\gamma$ are coupling constants.
This shows that (a) the $\gamma$ $R$-cubic term determines the scale of the inflation characterized by the Hubble parameter $H_0$ and (b) the quadratic terms are irrelevant to the scale $H_0$ in the de Sitter phase. The quadratic terms are, however, important to the stability of the de Sitter phase. Indeed, perturbing the KS type metric with $H_i \to H_0 +\delta H_i$, one can show that \begin{equation} \delta H_i = c_i \exp [{-3H_0t \over 2} (1+ \delta_1) ] + d_i \exp [{-3H_0t \over 2} (1- \delta_1) ] \end{equation} for \begin{equation} \delta_1 = \sqrt{1+ 8/[27-9 (6 \alpha+2 \beta)H_0^2 ] } \end{equation} and some arbitrary constants $c_i, d_i$ to be determined by the initial perturbations. Here $H_i\equiv \dot{a}_i/a_i$ with $a_i(t)$ the scale factor in the $i$-direction. We will describe the notation shortly in section II. It is easy to see that any small perturbation $\delta H_i$ will be stable against the de Sitter background if the exponents of both modes, \begin{equation} \Delta_{\pm} \equiv - [3H_0t / 2] [1 \pm \delta_1] , \end{equation} are both negative. This will happen if $\delta_1 < 1$. In such a case, the inflationary de Sitter space will remain a stable background as the universe evolves. More specifically, the constraint $ \beta H_0^2 > 35/18$ is required for both modes to be stable. It can be shown that the stability equation for the anisotropic KS space and the stability equation for the isotropic FRW space in the presence of the same inflationary de Sitter background turn out to be identical \cite{dm95,kp91, kpz99}. Therefore, the stability of isotropic perturbations also ensures the stability of the anisotropic perturbations. The stability of the isotropic perturbations for the FRW space is important for any physical model. Unfortunately, inflationary models that are stable against any isotropic perturbations will have problems with the graceful exit process.
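The decay of the two modes can be checked directly from $\delta_1$: both exponents have negative real part whenever $(6\alpha + 2\beta)H_0^2 > 3$, which gives $\delta_1 < 1$. A minimal numeric sketch (the sample couplings are illustrative, not taken from the paper):

```python
import cmath

def mode_exponents(alpha, beta, H0):
    """Coefficients Delta_pm / t = -(3 H0 / 2)(1 +/- delta_1) of the two
    perturbation modes delta H_i about the de Sitter phase."""
    delta1 = cmath.sqrt(1 + 8 / (27 - 9 * (6 * alpha + 2 * beta) * H0**2))
    return (-(3 * H0 / 2) * (1 + delta1), -(3 * H0 / 2) * (1 - delta1))

# Illustrative couplings with (6 alpha + 2 beta) H0^2 > 3, so delta_1 < 1
# and both modes decay:
dp, dm = mode_exponents(alpha=0.0, beta=3.0, H0=1.0)
assert dp.real < 0 and dm.real < 0
```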
Therefore, the pure gravity model may have trouble dealing with the stability and exit mechanism all together. Instead of the pure gravity theory, a slow rollover scalar field may help resolve this problem. An inflationary de Sitter solution in a scalar-tensor model is expected to have one stable mode (against the perturbation in the $\delta H_i$ direction) and one unstable mode (against the perturbation in the $\delta \phi$ direction). As a result, the inflationary era will come to an end once the unstable mode takes over after a brief period of inflationary expansion. Therefore, we propose to study the effect of such a theory. In particular, we will show in this paper that the roles played by the higher derivative terms in the inflationary phase of our physical universe are dramatically different in the pure gravity theory and the scalar-tensor theory. First of all, the third order term will be shown to determine the expansion rate $H_0$ for the inflationary de Sitter space. The quadratic terms will be shown to have nothing to do with the expansion rate of the background de Sitter space. They will however affect the stability condition of the de Sitter phase. Their roles in the existence and stability conditions of the de Sitter space are hence dramatically different. \section{Non-redundant field equation and Bianchi identity in KS space} Consider the metric of the following form: \begin{equation} ds^2=- dt^2+c^2(t)dr^2 + a^2(t) ( d \theta^2 +f^2(\theta) d \varphi^2) \end{equation} with $f(\theta)= (\theta, \sinh \theta, \sin \theta)$ denoting the flat, open and closed anisotropic spaces known as Kantowski-Sachs type anisotropic spaces. More specifically, the Bianchi I (BI), Bianchi III (BIII), and Kantowski-Sachs (KS) spaces correspond to the flat, open and closed models respectively.
This metric can be rewritten as \begin{equation} \label{metric} ds^2=- dt^2+a^2(t)({dr^2\over {1-kr^2}}+r^2d\theta ^2) + a_z^2(t) dz^2 \end{equation} with $r$ and $\theta$ read as polar coordinates and $z$ as the third coordinate, for convenience and for easier comparison with the FRW metric. Note that $k=0,-1,1$ stands for the flat, open and closed universes respectively, similar to the FRW space. Writing $H_{\mu \nu} \equiv G_{\mu \nu} -T_{\mu \nu}$, the Einstein equation can be written as $D_\mu H^{\mu \nu}=0$ incorporating the Bianchi identity $D_\mu G^{\mu \nu}=0$ and the energy momentum conservation $D_\mu T^{\mu \nu}=0$. Here $G^{\mu \nu}$ and $T^{\mu \nu}$ represent the Einstein tensor and the energy momentum tensor coupled to the system respectively. With the metric (\ref{metric}), it can be shown that the $r$ component of the equation $D_\mu H^{\mu \nu}=0$ implies that \begin{equation} H^r_{\; r}=H^\theta_{\;\theta}. \end{equation} This result also says that any matter coupled to the system has the symmetry property $T^r_{\;r}=T^\theta_{\;\theta}$. In addition, the equations $D_\mu H^{\mu \theta}=0$ and $D_\mu H^{\mu z}=0$ both vanish identically for all kinds of energy momentum tensors. More interesting information comes from the $t$ component of this equation. It says: \begin{equation} (\partial_t + 3 H) H^t_{\; t} = 2 H_1 H^r_{\; r} +H_zH^z_{\; z}. \end{equation} This equation implies that (i) $H^t_{\; t}=0$ implies that $H^r_{\; r}=H^z_{\; z}=0$ and (ii) $H^r_{\; r}=H^z_{\; z}=0$ only implies $(\partial_t + 3 H) H^t_{\; t} =0$ instead of $H^t_t=0$. Case (ii) can be solved to give $ H^t_{\; t} =$ constant $\times \, (a^2a_z)^{-1}$, which approaches zero as $a^2a_z \to \infty$. For the anisotropic KS spaces, the metric contains two independent variables $a$ and $a_z$. The Einstein field equations have, however, three non-vanishing components: $H^t_{\; t} =0$, $H^r_{\; r}=H^\theta_{\;\theta} =0$ and $H^z_{\; z} =0$.
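The decay of $H^t_{\;t}$ in case (ii) can be made explicit: since $3H = 2H_1 + H_z = d\ln(a^2a_z)/dt$, the homogeneous equation $(\partial_t + 3H)f = 0$ is solved by $f \propto (a^2a_z)^{-1}$, which indeed vanishes as $a^2a_z \to \infty$. A finite-difference sketch with an arbitrary illustrative choice of scale factors (these are not solutions of the field equations, just a test of the identity):

```python
import math

# Check that f(t) = C / (a^2 a_z) solves (d/dt + 3H) f = 0, where
# 3H = 2 adot/a + a_zdot/a_z, for the illustrative choice
# a(t) = t^2, a_z(t) = exp(t).
C = 2.5

def f(t):
    return C / (t**4 * math.exp(t))   # a^2 a_z = t^4 e^t

def three_H(t):
    return 4.0 / t + 1.0              # 2 (a'/a) + a_z'/a_z

t, h = 2.0, 1e-5
fdot = (f(t + h) - f(t - h)) / (2 * h)   # central difference
assert abs(fdot + three_H(t) * f(t)) < 1e-8
```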
The Bianchi identity implies that the $tt$ component is not redundant and will hence be retained for a complete analysis. Ignoring either one of the $rr$ or $zz$ components will not affect the final result of the system. In short, the $H^t_t=0$ equation, known as the generalized Friedmann equation, is a non-redundant field equation as compared to the $H^r_r=0$ and $H^z_z=0$ equations. In addition, restoring the $g_{tt}$ component $b^2(t)=1/B_1$ will be helpful in deriving the non-redundant field equation associated with $G_{tt}$, as will be shown shortly. More specifically, the generalized KS metric will be written as: \begin{equation} \label{metricb} ds^2=-b^2(t) dt^2+a^2(t)({dr^2\over {1-kr^2}}+r^2d\theta ^2) + a_z^2(t) dz^2. \end{equation} In principle, the Lagrangian of the system can be reduced from a functional of the metric $g_{\mu \nu}$, ${\cal L}(g_{\mu \nu})$, to a simpler function of $a(t)$ and $a_z(t)$, namely $L(t) \equiv a^2 a_z {\cal L}(g_{\mu \nu}(a(t), a_z(t)))$. The equations of motion should be reconstructed from the variation of the reduced Lagrangian $L(t)$ with respect to the variables $a$ and $a_z$. The result is, however, incomplete because the variations of $a$ and $a_z$ are related to the variations of $g_{rr}$ and $g_{zz}$ respectively. The field equation from varying $g_{tt}$ cannot be derived without restoring the variable $b(t)$ in advance. This is the motivation for introducing the metric (\ref{metricb}), such that the reduced Lagrangian $L(t) \equiv ba^2 a_z {\cal L}(g_{\mu \nu}(b(t),a(t), a_z(t)))$ retains the non-redundant information of the $H^t_t=0$ equation. The non-redundant Friedmann equation can be reproduced by resetting $b=1$ after the variation of $b(t)$ has been done.
After some algebra, all non-vanishing components of the curvature tensor can be computed: \cite{kao06} \begin{eqnarray} R^{ti}_{\;\;\;tj} &=& [{1\over 2}\dot{B}_1H_i+B_1 ( \dot{H}_i+H^2_i) ] \delta^i_j ,\\ R^{ij}_{\;\;\;kl} &=& B_1H_iH_j \; \epsilon^{ijm}\epsilon_{klm}+{k \over a^2} \epsilon^{ijz}\epsilon_{klz} \end{eqnarray} with $H_i \equiv ( \dot{a} /a, \dot{a} /a, \dot{a_z} /a_z)$ $\equiv (H_1,H_2=H_1, H_z)$ for the $r, \theta$, and $z$ components respectively. Given a Lagrangian $L = \sqrt{g} {\cal L}=L(b(t), a(t), a_z(t))$, it can be shown that \begin{eqnarray} L &=& { a^2 a_z \over \sqrt{B_1}} {\cal L} (R^{ti}_{\;\;\;tj}, R^{ij}_{\;\;\;kl}) = { a^2 a_z \over \sqrt{B_1}} {\cal L} (H_i, \dot{H}_i, a^2) \end{eqnarray} The variational equations for this action can be shown to be: \cite{kao06} \begin{eqnarray} \label{key0} {\cal L} +H_i ( {d \over dt} +3H )L^i &=& H_iL_i + \dot{H}_i L^i \\ {\cal L} + ( {d \over dt} +3H )^2 L^z &=& ( {d \over dt} +3H )L_z \end{eqnarray} Here $L_i \equiv \delta {\cal L} /\delta H_i$, $L^i \equiv \delta {\cal L} /\delta {\dot H}_i$, and $3H \equiv \sum_i H_i$. For simplicity, we will write ${\cal L}$ as $L$ from now on in this paper.
As a result, the field equations can be written in a more compact form: \begin{eqnarray} \label{key} DL &\equiv& L +H_i ( {d \over dt} +3H )L^i - H_iL_i - \dot{H}_i L^i =0\\ D_z L &\equiv& L + ( {d \over dt} +3H )^2 L^z - ( {d \over dt} +3H )L_z =0 \label{zeq} \end{eqnarray} \section{higher derivative induced gravity model} In this section, we will study the higher derivative induced gravity model: \begin{equation} { L} = -{\epsilon \over 2} \phi^2 R - \alpha R^2 - \beta R^\mu_\nu R^\nu_\mu + {\gamma \over \phi^2} R^{\mu \nu}_{\;\;\; \beta \gamma} \, R^{\beta \gamma}_{\;\;\; \sigma \rho} \, R^{\sigma \rho}_{\;\;\; \mu \nu} -{1 \over 2} \partial_\mu \phi \partial^\mu \phi - V(\phi) \equiv {\epsilon \over 2} \phi^2 L_0 + L_2 + {\gamma \over \phi^2} L_3 +L_\phi \end{equation} with $L_0 = -R$, $L_2 =-\alpha R^2 - \beta R^\mu_\nu R^\nu_\mu$, $L_3= R^{\mu \nu}_{\;\;\; \beta \gamma} \, R^{\beta \gamma}_{\;\;\; \sigma \rho} \, R^{\sigma \rho}_{\;\;\; \mu \nu}$ and $ L_\phi = -{1 \over 2} \partial_\mu \phi \partial^\mu \phi - V(\phi)$ denoting the lowest order curvature coupling, the higher order terms, and the scalar field Lagrangian respectively. Induced gravity models assume that all dimensionful parameters or coupling constants are induced by a proper choice of dynamical fields. For example, the gravitational constant is replaced by $8\pi G =2/(\epsilon \phi^2)$ with $\phi$ a dynamical field. In addition, the cosmological constant becomes $V(\phi)$ in this model. There is no need for any induced parameters for the quadratic terms $R^2$ and $R_{\mu \nu}^2$ because the coupling constants $\alpha$ and $\beta$ are both dimensionless. The action of the system is also invariant under the global scale transformation: $g_{\mu \nu} \to \Lambda^{-2} g_{\mu \nu}$ and $\phi \to \Lambda \phi$ with arbitrary constant parameter $\Lambda$.
The corresponding Lagrangian can be shown to be: \begin{eqnarray}&& L= \epsilon \phi^2(2A+B+2C+D)- 4 \alpha \left[ 4A^2+B^2+4C^2+D^2+4AB+8AC+4AD+4BC+2BD+4CD \right] \nonumber \\ && - 2 \beta \left[3A^2+B^2+3C^2+D^2+2AB+2AC+2AD+2BC+2CD \right]+ 8{\gamma \over \phi^2} \left[2A^3+B^3+2C^3+D^3 \right]\nonumber \\ &&+{1 \over 2}\dot{\phi}^2-V(\phi) \end{eqnarray} with \begin{eqnarray} && A=\dot{H}_1+H_1^2, \\ && B= H_1^2 + {k \over a^2}, \\ && C=H_1H_z, \\ && D=\dot{H}_z+H_z^2 . \end{eqnarray} This Lagrangian can be shown to reproduce the de Sitter models when we set $H_i \to H_0$ in the isotropic limit. The Friedmann equation reads: \begin{eqnarray} \label{keyphi} {1 \over 2} \epsilon \phi^2 DL_0 +DL_2 + {\gamma \over \phi^2} DL_3 + \epsilon \phi \dot{\phi} H_i {L_0^i} - 2 {\gamma \over \phi^3} \dot{\phi} H_i {L_3^i} = {1 \over 2}{\dot{\phi}}^2 +V(\phi) \end{eqnarray} for the induced gravity model. In addition, the scalar field equation can be shown to be: \begin{equation} \label{phieq} \ddot{\phi} +3 H_0 \dot{\phi} +V'= \epsilon \phi L_0 - 2 {\gamma \over \phi^3} L_3. \end{equation} The leading order de Sitter solution with $\phi=\phi_0$ and $H_i=H_0$ for all directions can be shown to be: \begin{eqnarray} V_0 &\equiv& V(\phi_0) = 3 \epsilon' \phi_0^2 H_0^2 ,\\ V'(\phi_0) &=& 12 \epsilon' \phi_0 H_0^2 \end{eqnarray} under the slow roll-over assumption $\ddot{\phi} << V'$ and $H_0 \dot{\phi} << V'$. Here $\epsilon' \equiv \epsilon [1-8\gamma H_0^4/(\epsilon \phi_0^4)]$. Indeed, it can be shown that in the de Sitter inflationary phase, the ignored part of the scalar field equation evolves as $\ddot{\phi} +3 H_0 \dot{\phi} \sim 0$. This equation leads to the approximate solution \begin{equation} \phi \sim \phi_0 + {\dot{\phi}_0 \over 3 H_0}[1 - \exp (- 3 H_0 t) ] \label{phit} \end{equation} during the de Sitter phase $H_i=H_0$. This result is clearly consistent with the slow roll-over assumption we just made.
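The approximate solution (\ref{phit}) can be verified directly: $\dot{\phi} = \dot{\phi}_0 e^{-3H_0t}$ solves $\ddot{\phi} + 3H_0\dot{\phi} = 0$ with the stated initial data. A small finite-difference sketch (the initial values are illustrative only):

```python
import math

# Check that phi(t) = phi0 + (phidot0 / 3 H0)(1 - exp(-3 H0 t)) solves
# phi'' + 3 H0 phi' = 0 with phi(0) = phi0, using central differences.
phi0, phidot0, H0 = 1.0, 0.1, 2.0   # illustrative values

def phi(t):
    return phi0 + phidot0 / (3 * H0) * (1 - math.exp(-3 * H0 * t))

t, h = 0.5, 1e-4
d1 = (phi(t + h) - phi(t - h)) / (2 * h)
d2 = (phi(t + h) - 2 * phi(t) + phi(t - h)) / h**2
assert abs(d2 + 3 * H0 * d1) < 1e-5   # slow-roll remainder equation
assert abs(phi(0.0) - phi0) < 1e-12   # initial condition
```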
In summary, the leading order equations give us a few constraints on the field parameters: \begin{equation} \label{constraint} 4V_0 =\phi_0 {\partial V \over \partial \phi} (\phi= \phi_0) =12 \epsilon' \phi_0^2 H_0^2 . \end{equation} An appropriate effective spontaneous-symmetry-breaking potential $V$ of the following form \begin{equation} \label{V0} V(\phi)= { \lambda \over 4} (\phi^2-\phi_0^2)^2 +6\epsilon' H_0^2 (\phi^2-\phi_0^2) + 3 \epsilon' H_0^2 \phi_0^2 \end{equation} with arbitrary coupling constant $\lambda$ can be shown to be a good candidate satisfying all the scaling conditions (\ref{constraint}). The value of $H_0$ can be chosen to induce enough inflation for a brief moment as long as the slow rollover scalar field remains close to the initial state $\phi=\phi_0$. The de Sitter phase will hence remain valid and drive the inflationary process for a brief moment determined by the decaying speed of the scalar field. The stability conditions associated with this effective potential implied by the field equations will be studied in the following section. Note that the local extrema of this effective potential can be shown to be $\phi=0$ (local maximum) and $\phi^2=\phi_m^2= \phi_0^2- 12 \epsilon' H_0^2/\lambda <\phi_0^2$ (local minimum). In addition, the minimum value of the effective potential can be shown to be \begin{equation} V_m =V_0 -36 \epsilon'^2 H_0^4/\lambda <V_0. \end{equation} The constraint $V_m>0$ implies that $\lambda \phi_0^2 > 12 \epsilon' H_0^2$. Or equivalently, it implies that $\phi_m^2 >0$. In addition, we will set $\epsilon \phi_m^2/2=1/(8\pi G)=1$ in Planck units later in this paper. When the scalar field settles down to the local minimum $\phi_m$ of the effective potential at large time in the post inflationary era, it will oscillate around the local minimum and kick off the reheating process. The scalar field will eventually become a constant background field with a small cosmological constant $V_m = V(\phi_m)$.
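The quoted extrema follow from $V' = \phi\,[\lambda(\phi^2-\phi_0^2) + 12\epsilon' H_0^2]$ and can be spot-checked numerically; in the sketch below `ep` stands for $\epsilon'$ and all parameter values are illustrative only:

```python
import math

# Numerical check of the effective potential
#   V = (lam/4)(phi^2 - phi0^2)^2 + 6 ep H0^2 (phi^2 - phi0^2)
#       + 3 ep H0^2 phi0^2
# with ep = epsilon'.  Illustrative parameter values:
lam, ep, H0, phi0 = 2.0, 0.1, 1.0, 4.0

def V(phi):
    return (lam / 4) * (phi**2 - phi0**2)**2 \
        + 6 * ep * H0**2 * (phi**2 - phi0**2) + 3 * ep * H0**2 * phi0**2

def dV(phi, h=1e-6):                  # central-difference V'(phi)
    return (V(phi + h) - V(phi - h)) / (2 * h)

V0 = 3 * ep * H0**2 * phi0**2
assert abs(V(phi0) - V0) < 1e-12                  # V(phi0) = 3 ep H0^2 phi0^2
assert abs(dV(phi0) - 12 * ep * phi0 * H0**2) < 1e-4   # V'(phi0)
phi_m = math.sqrt(phi0**2 - 12 * ep * H0**2 / lam)
assert abs(dV(phi_m)) < 1e-4                      # local minimum
assert abs(V(phi_m) - (V0 - 36 * ep**2 * H0**4 / lam)) < 1e-10   # V_m
```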
\section{Stability of higher derivative inflationary solution} Our universe could start out anisotropic and evolve to the present highly isotropic state in the post inflationary era. Therefore, a stable KS$/$FRW solution is necessary for any physical model of our universe. Given an effective action of the sort described by Eq. (\ref{V0}), the scaling constraints (\ref{constraint}) are required for the existence of the de Sitter solution $H_i=H_0$ and the static condition $\phi=\phi_0$. In addition to these constraints, small perturbations, $H_i=H_{i0}+ \delta H_i$ and $\phi=\phi_0+\delta \phi$, against the background de Sitter solution $(H_{i0}, \phi_0)$ may also add a few more constraints from the stability requirement of the system. This perturbation will enable one to understand whether the background solution is stable or not. In particular, it is interesting to learn whether a KS $\to$ FRW (KS/FRW) type evolutionary solution is stable or not. It can be shown that the perturbation equations for the Bianchi models are identical to those for the FRW models. Therefore, any inflationary solution with a stable mode and an unstable mode will provide a natural way for the inflationary universe to exit the inflationary phase. Such models will, however, also be unstable against the anisotropic perturbations. Therefore, an inflationary solution with a stable mode and an unstable mode is also a negative result in our search for a stable and isotropic inflationary model. As a result, such a solution makes it difficult for an anisotropic space to reach an isotropic FRW space once the de Sitter phase starts to collapse. It will be shown shortly that the higher derivative induced gravity theory could hopefully resolve this problem altogether. In practice, perturbing the background de Sitter solution along the $\delta H_i$ direction should be stable for at least a brief moment of the order $\Delta T \sim 60 H_0^{-1}$ for a physical model.
As a result, around $60$ e-folds of inflation can be induced before the de Sitter phase collapses. The resulting universe is then stable against isotropic and anisotropic perturbations. In addition, the scalar field is expected to roll slowly from the initial state $\phi=\phi_0$. Therefore, the perturbation along the $\delta \phi$ direction is expected to be unstable. This provides the system with a natural mechanism for a graceful exit. Hence, we will study the stability equations of the system for small perturbations against the de Sitter background solutions. The first order perturbation equation for $DL$, with $H_i \to H_0+\delta H_i$, can be shown to be: \begin{eqnarray}\label{stable0} \delta ( DL) &=& <H_i L^{ij} \delta \ddot{H}_j> +3H <H_i L^{ij} \delta \dot{H}_j> + \delta <H_i\dot{L}^i>+3H <(H_i L^i_j + L^j) \delta H_j> \nonumber \\ && +<H_iL^i> \delta (3H) -<H_iL_{ij} \delta H_j>\label{stable1} \end{eqnarray} for any $DL$ defined by Eq. (\ref{key}) with all functions of $H_i$ evaluated at some FRW background with $H_i=H_0$. The notation $< A_iB_i> \equiv \sum_{i=1,z} A_iB_i$ denotes the summation over $i=1$ and $z$ for repeated indices. Note that we have absorbed the information of $i=2$ into $i=1$ since they contribute equally to the field equations in the KS type spaces. In addition, $L^{i}_{j} \equiv \delta^2 L / \delta \dot{H}_{i} \delta H_j$ and similarly for $L_{ij}$ and $L^{ij}$ with upper index $^i$ and lower index $_j$ denoting variation with respect to $\dot{H}_i$ and $H_j$ respectively for convenience. In addition, perturbing Eq. (\ref{zeq}) can also be shown to reproduce Eq. (\ref{stable0}) in the de Sitter phase \cite{inflation}. In addition, it can be shown that \begin{eqnarray} <H_i L^{i1}> &=& 2 <H_i L^{iz} > ,\\ <H_i L^i_1> &=& 2 <H_i L^i_z>, \\ L^1 &=& 2 L^z ,\\ <H_iL_{i1}> &=& 2 <H_iL_{iz}>, \end{eqnarray} in the inflationary de Sitter background with $H_0=$ constant.
Therefore, the stability equations (\ref{stable1}) can be greatly simplified. For convenience, we will define the operator ${\cal D}_L$ as \begin{eqnarray} {\cal D}_L \delta H \equiv <H_i L^{i1}> \delta \ddot{H} +3H <H_i L^{i1}> \delta \dot{H} +3H <H_i L^i_1 + L^1> \delta H+2 <H_iL^i> \delta H - <H_iL_{i1}> \delta H=0. \end{eqnarray} As a result, the stability equation (\ref{stable1}) becomes \begin{equation} \delta (DL)= {\cal D}_L (\delta H_1+ \delta H_z/2)={3 \over 2}{\cal D}_L (\delta H) =0 \end{equation} with $H=(2 H_1+H_z)/3$ as the average of $H_i$. Hence the leading order perturbation equation in $\delta H$ and $\delta \phi$ for the Friedmann equation of this model can be shown to be: \begin{equation} {\gamma H_0^4 \over \epsilon \phi_0^4} -6\epsilon (1- 24 {\gamma H_0^4 \over \epsilon \phi_0^4}) \phi_0 H_0 [\delta \dot{\phi} -H_0 \delta \phi] = {\epsilon \over 2} \phi_0^2 \delta (DL_0) +\delta (DL_2)+ {\gamma \over \phi^2} \delta (DL_3)= {3 \over 4} \epsilon \phi_0^2 {\cal D}_0 \delta H +{3 \over 2}{\cal D}_2 \delta H + {3\gamma \over 2 \phi^2} {\cal D}_3 \delta H \end{equation} with ${\cal D}_0 \delta H \equiv {\cal D}_{L_0} \delta H $, ${\cal D}_2 \delta H \equiv {\cal D}_{L_2} \delta H $ and ${\cal D}_3 \delta H \equiv {\cal D}_{L_3} \delta H $ as short-handed notations. This equation can further be shown to be: \begin{equation} \epsilon (1- 24 {\gamma H_0^4 \over \epsilon \phi_0^4}) \phi_0 [\delta \dot{\phi} -H_0 \delta \phi] = 4( 3\alpha +\beta -6 \gamma {H_0^2 \over \phi_0^2}) (\delta \ddot{H} +3H_0 \delta \dot{H}) + (24 \gamma {H_0^4 \over \phi_0^2}-\epsilon \phi_0^2) \delta H. 
\end{equation} Similarly, the leading perturbation of the scalar field equation can be shown to be: \begin{equation} \delta \ddot{\phi} +3H_0 \delta \dot{\phi} +(V'' -12 \epsilon' H_0^2 - 384 \gamma {H_0^6 \over \phi_0^4})\delta \phi = 6 \epsilon' \phi_0 (\delta \dot{H} +4 H_0 \delta H). \end{equation} The variational equation of $a_z$ can be shown explicitly to be redundant in the limit $H_i =H_0+\delta H_i$ and $\phi=\phi_0+ \delta \phi$ following the Bianchi identity. Assuming that $\delta H=\exp[hH_0t] \delta H_0$ and $\delta \phi =\exp[pH_0t] \delta \phi_0$ for some constants $h$ and $p$, one can write the above equations as: \begin{eqnarray} \epsilon (1- 24 {\gamma H_0^4 \over \epsilon \phi_0^4})\phi_0[p -1] \delta \phi &=& 4( 3\alpha +\beta -6 \gamma {H_0^2 \over \phi_0^2}) H_0 [ h^2 +3h + {24 \gamma {H_0^4 / \phi_0^2}-\epsilon \phi_0^2 \over 4( 3\alpha +\beta -6 \gamma {H_0^2 / \phi_0^2})} ]\delta H, \\ H_0\left [ p^2+3p+{V'' \over H_0^2} -12\epsilon' -384 \gamma{ H_0^4 \over \phi_0^4} \right ] \delta \phi &=& 6 \epsilon' \phi_0 [ h +4 ] \delta H. \end{eqnarray} These equations are consistent when all coefficients vanish simultaneously. This implies that $h=-4$ and $p=1$. This set of solutions $(h,p)=(-4,1)$ hence imposes two additional constraints \begin{equation} \label{ch} \epsilon-16 (3 \alpha +\beta) {H_0^2 \over \phi_0^2} + 72 \gamma {H_0^4 \over \phi_0^4} =0, \end{equation} \begin{equation} \label{cp} \lambda = 192 \gamma {H_0^6 \over \phi_0^6} - 2 {H_0^2 \over \phi_0^2} \end{equation} with $2 \lambda \phi_0^2= V_0''-12 \epsilon'H_0^2$. The coupling constant $\lambda$ has to be positive in order for the effective potential $V(\phi)$ to be free from a runaway negative global minimum at $\phi \to \infty$. As a result, the constraints $\epsilon>0$ and $\lambda >0$ imply that \begin{equation} \label{gamma} {2(3 \alpha+\beta)\phi_0^2 \over 9 H_0^2} > \gamma > {\phi_0^4 \over 96 H_0^4}.
\end{equation} Therefore, the inflationary phase will remain stable against small perturbations along the $\delta H(= \exp[-4H_0t] \delta H_0)$ direction. In addition, the inflationary phase also has an unstable mode when we perturb the system along the $\delta \phi (=\exp[H_0t] \delta \phi_0)$ direction. The unstable mode is in fact consistent with the slow rollover assumption. The scalar field is expected to roll slowly off the initial state $\phi_0$ for a brief moment during the inflationary era. This unstable mode is hence responsible for the graceful exit process. Hence such a system, with one stable mode and one unstable mode, is a very good candidate for an inflationary model. The stable mode $\delta H = \exp[-4H_0t] \delta H_0$ implies that the de Sitter background solution will remain stable against small isotropic perturbations. It also implies that the system is stable against any anisotropic perturbation along all $\delta H_i$ directions. Therefore, the induced gravity model with a scalar field can indeed resolve the stability problem of the pure gravity model. \section{conclusion} The existence of a stable de Sitter background is closely related to the choices of the coupling constants. The pure higher derivative gravity model with quadratic and cubic interactions \cite{kao06} admits an inflationary solution with a constant Hubble parameter. Proper choices of the coupling constants allow the de Sitter phase to admit one stable mode and one unstable mode for the anisotropic perturbation. The stable mode favors a strong inflationary period and the unstable mode provides a natural mechanism for the graceful exit process. It is also found that the perturbation against the isotropic FRW background space and the perturbation against the anisotropic KS type background space obey the same perturbation equations. This is true for both pure and induced gravity models.
As a result, the unstable mode in the pure gravity model also means that the isotropic de Sitter background is unstable against anisotropic perturbations. Therefore, small anisotropies could be generated during the de Sitter phase in the pure gravity model. We have shown that, for induced gravity models, a stable mode for perturbations along the anisotropic $\delta H_i$ directions does exist with proper constraints imposed on the coupling constants. In addition, an unstable mode for perturbations against the scalar field background $\phi_0$ also exists under proper constraints. Therefore, the isotropy of the de Sitter background can remain stable during the inflationary process for the induced gravity models. An explicit model with a spontaneous symmetry-breaking $\phi^4$ potential is presented as an example. Proper constraints are derived for reference. It is shown that the scaling conditions (\ref{constraint}) must hold for the initial de Sitter phase solutions in order for the system to admit a constant background solution during the inflationary phase. Additional constraints (\ref{ch}) and (\ref{cp}) are also shown explicitly for this model. In addition, the inequality shown in Eq. (\ref{gamma}) is expected to hold for a physical model with this potential. In summary, we have shown that a stable mode for (an)isotropic perturbations against the de Sitter background does exist for the induced gravity model. The problem of graceful exit can rely on the unstable mode for the scalar field perturbation against the constant phase $\phi_0$. It is also found that the quadratic terms will not affect the inflationary solution characterized by the Hubble parameter $H_0$. These quadratic terms play, however, a critical role in the stability of the de Sitter background. In addition, it is also interesting to find that their coupling constants $\alpha$ and $\beta$ always show up in the linear combination $3\alpha + \beta$ in these stability equations.
Implications of these constraints deserve more attention for the applications to the inflationary models. \section*{Acknowledgments} This work is supported in part by the National Science Council of Taiwan.
\subsection*{Proposed strategy} There are a few notable difficulties in establishing \Cref{conj:vector-borell}, compared with the scalar-valued case. Recall in particular that Borell's theorem is also known when the expectation of $f$ is constrained: among all functions $f: \R^n \to [-1, 1]$ with $\E[f] = v \in [-1, 1]$, the noise stability is minimized (for negative $\rho$) by a linear threshold function with the appropriate expectation, i.e.\ a function of the form $f_{\mathrm{opt}}(x) = \sgn(\inr{a}{x} + b)$. The formulation in Theorem~\ref{thm:borell} is then recovered just by noting that as a function of the constraint $v \in [-1, 1]$, the optimal noise stability is minimized when $v = 0$. In the vector-valued situation $k \ge 2$, we do not solve a constrained version of the problem. In fact, we do not even know of a good guess for the optimal stability among functions with $\E[f] = v \ne 0$. The guess that comes from naively extrapolating the $k=1$ solution, $f(x) = (x_{\le k} - a) / \|x_{\le k} - a\|$, appears not to be optimal. This inconvenient fact rules out certain proof techniques that work for the scalar-valued case---specifically, recent approaches using $\rho$-convexity~\cite{MN:15} and stochastic calculus~\cite{Eld:15}---because if those methods had worked, they would have also shown optimality of $f(x) = (x_{\le k} - a) / \|x_{\le k} - a\|$ in the constrained version. On the other hand, we also do not know how to exploit the older symmetrization-based approaches~\cite{Bor:85} because of the difficulty of symmetrizing vector-valued functions. We will propose a three-step strategy for Conjectures~\ref{conj:vector-borell} and~\ref{conj:vector-borell-positive-rho}. The reason that we have been unable to complete the proof is that one of the three steps is only proven for positive $\rho$, while another step is only proven for negative $\rho$.
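In the scalar case the stability of the conjectured optimizer can be checked directly: for $b=0$, Sheppard's classical formula gives $\E[\sgn(\bx_1)\sgn(\by_1)] = \tfrac{2}{\pi}\arcsin\rho$ for $\rho$-correlated standard Gaussians. A small Monte Carlo sketch, not from the paper (sample size and $\rho$ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = -0.5  # a negative correlation, as in the minimization problem
n_samples = 400_000

# rho-correlated standard Gaussian pairs: y = rho*x + sqrt(1 - rho^2)*z
x = rng.standard_normal(n_samples)
z = rng.standard_normal(n_samples)
y = rho * x + np.sqrt(1 - rho**2) * z

# Noise stability of f(x) = sgn(x_1); Sheppard: E[sgn(x) sgn(y)] = (2/pi) arcsin(rho)
mc = np.mean(np.sign(x) * np.sign(y))
exact = 2 / np.pi * np.arcsin(rho)
assert abs(mc - exact) < 0.01
```

For $\rho = -1/2$ the exact value is $-1/3$, and the empirical average agrees to Monte Carlo accuracy.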
The first step is to consider noise stability for functions $f: S^{n-1} \to S^{n-1}$, and we prove that (in both the positive-$\rho$ and negative-$\rho$ cases) the function $f(x) = x$ has optimal noise stability. The argument here is purely spectral: having shown that the eigenvectors of our noise operator (modified appropriately to live on the sphere) are spherical harmonics, we expand the function $f$ in the basis of spherical harmonics and show that the optimal thing to do is to put all the ``weight'' on ``level-1'' coefficients. One interesting feature of this argument is that it doesn't require $f$ to take values in $S^{n-1}$: we show that $f(x) = x$ has optimal noise stability among all functions $f: S^{n-1} \to \R^n$ satisfying $\E[\|f(\bx)\|^2] = 1$. This step of the proof also works for $0 < \rho \le 1$: in this range, the noise stability is \emph{maximized}, among functions with $\E[f] = 0$, by $f(x) = x$. The second step of the proof is to consider functions $f: \R^n \to S^{n-1}$. We do this by decomposing $\R^n$ into radial ``shells'' and applying the spherical argument on each shell. This step requires $-1 \le \rho \le 0$. The final step, which applies only to $0 \le \rho \le 1$, is a kind of dimension reduction. Specifically, we show that if $f: \R^n \to S^{k-1}$ has optimal stability then $f$ is ``essentially $k$-dimensional'' in the sense that up to a change of coordinates there is a function $g: \R^k \to S^{k-1}$ such that $f(x) = g(x_1, \dots, x_k)$. This essentially reduces the problem to the case of functions $f: \R^k \to S^{k-1}$, which was already handled in the second step. This step of the proof uses tools from the calculus of variations. Essentially, we show that if $f: \R^n \to S^{k-1}$ is not essentially $k$-dimensional then it can be modified in a way that improves the noise stability. Arguments of this kind go back to McGonagle and Ross~\cite{MR:15} in the setting of the Gaussian isoperimetric problem.
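The ``weight on level 1'' phenomenon is easy to observe numerically in the Gaussian analogue of the spectral argument: a Hermite component at level $d$ contributes to the stability with factor $\rho^d$, so for $0 < \rho < 1$ a unit-norm, mean-zero linear function beats any higher-level choice. A hedged Monte Carlo illustration (this is the Gaussian, not the spherical, computation; parameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n_samples = 0.6, 400_000

x = rng.standard_normal(n_samples)
z = rng.standard_normal(n_samples)
y = rho * x + np.sqrt(1 - rho**2) * z  # rho-correlated pair

# Two unit-norm, mean-zero test functions: level-1 (linear) and level-2 Hermite
f1 = lambda t: t                        # E[f1] = 0, E[f1^2] = 1
f2 = lambda t: (t**2 - 1) / np.sqrt(2)  # normalized He_2: E[f2] = 0, E[f2^2] = 1

stab1 = np.mean(f1(x) * f1(y))  # close to rho
stab2 = np.mean(f2(x) * f2(y))  # close to rho^2, strictly smaller
assert abs(stab1 - rho) < 0.02
assert abs(stab2 - rho**2) < 0.02
assert stab2 < stab1
```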
They were developed in the vector-valued (but still isoperimetric) setting by Milman and Neeman~\cite{MN:18}, and then applied to noise stability by Heilman and Tarter~\cite{HT20}. Note that by combining the first two steps (which both apply for $-1 \le \rho \le 0$) we can show that \Cref{conj:vector-borell} holds in the case $k = n$. \section{Dimension reduction} \label{sec:dim-reduction} We will eventually be concerned with $3$-dimensional assignments to points which lie on the sphere $S^{n-1}$ in $\R^n$. \Cref{thm:n-dim-borell} shows that if we are allowed $n$-dimensional assignments, $f_\mathrm{opt}$ minimizes the noise stability for negative $\rho$. In this section, we will show that the optimization over $k$-dimensional assignments (for $k \le n$) reduces to the optimization over $n$-dimensional assignments, \emph{but only for non-negative $\rho$}. We do this by showing that optimally stable functions are ``at most $k$-dimensional,'' in the sense that they can be defined on $\R^k$ and not on $\R^n$. We say that $f: \R^n \to B^k$ is optimally stable with parameter $\rho \in [0, 1]$ if $\rho > 0$ and $\E_{\bx \sim_\rho \by}[\inr{f(\bx)}{f(\by)}]$ is maximal among all functions $f: \R^n \to B^k$ with $\E_{\bx}[f(\bx)] = 0$. \begin{theorem}\label{thm:dimension-reduction} For every $n, k \ge 1$ and every $\rho \in [0, 1]$, there is an optimally stable function $f$. Moreover, if $k \le n$ and $\rho \in (0, 1)$ then for every optimally stable function $f$, after a change of coordinates on $\R^n$, $f(x)$ depends only on $x_1, \dots, x_k$. \end{theorem} Let's address the existence part first, because it's easier. \begin{proof}[Proof of existence in Theorem~\ref{thm:dimension-reduction}] When $\rho \in \{0, 1\}$, existence is trivial because every function is optimally stable; from now on, assume $\rho \in (0, 1)$.
Choose an optimizing sequence $f_n$, i.e.\ a sequence of functions $f_n: \R^n \to B^k$ such that $\E[f_n] = 0$ and $\E[\inr{f_n(\bx)}{f_n(\by)}]$ converges to the optimal value. Since the $f_n$ are uniformly bounded in $L^2(\gamma)$, after passing to a subsequence we may assume that $f_n$ converges weakly to, say, $f$. By testing weak convergence against a constant function, it follows that $\E[f] = 0$. Recall that the noise operator $\mathrm{U}_\rho: L^2(\gamma) \to L^2(\gamma)$ is compact -- for example, because it acts diagonally on the Hermite basis, with eigenvalues that converge to zero. It follows that $\mathrm{U}_\rho f_n$ converges strongly in $L^2(\gamma)$ to $\mathrm{U}_\rho f$ and hence \begin{align*} \E_{\bx \sim_\rho \by} [\inr{f_n(\bx)}{f_n(\by)}] &= \E_{\bx} [\inr{f_n(\bx)}{\mathrm{U}_\rho f_n(\bx)}] \\ &= \E[\inr{f_n}{\mathrm{U}_\rho f_n - \mathrm{U}_\rho f}] + \E[\inr{f_n}{\mathrm{U}_\rho f}] \\ &\to \E[\inr{f}{\mathrm{U}_\rho f}], \end{align*} where the first term converges to zero because $\|f_n\|$ is bounded and $\|\mathrm{U}_\rho f_n - \mathrm{U}_\rho f\| \to 0$, and the second term converges to $\E[\inr{f}{\mathrm{U}_\rho f}]$ by the weak convergence of $f_n$. Since $\E[\inr{f}{\mathrm{U}_\rho f}] = \E_{\bx \sim_\rho \by} [\inr{f(\bx)}{f(\by)}]$, the limit function $f$ is optimally stable. \end{proof} For a differentiable function $f: \R^n \to \R^k$, we write $Df(x)$ for the $k \times n$ matrix of partial derivatives at the point $x \in \R^n$. For $v \in \R^n$, we will write $D_v f(x) \in \R^k$ for the directional derivative of $f$ in the direction $v$. Of course, $D_v f(x)$ is just an abbreviation for $(D f(x)) \cdot v$. \subsection{Outline of the dimension reduction} The main idea behind the proof of Theorem~\ref{thm:dimension-reduction} is perturbative: we show that if the function depends on more than $k$ coordinates, there is a perturbation $\tilde f$ of $f$ that satisfies $\E[\tilde f] = 0$ but has a better noise stability.
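The diagonal action of $\mathrm{U}_\rho$ invoked in the existence proof can be checked pointwise: $\mathrm{U}_\rho$ sends the (probabilists') Hermite polynomial $He_d$ to $\rho^d He_d$, i.e.\ $\E_{\bz}[He_d(\rho x + \sqrt{1-\rho^2}\,\bz)] = \rho^d He_d(x)$ for $\bz \sim N(0,1)$. A quick Monte Carlo sketch at an arbitrary point (values of $\rho$, $x_0$, and $d=3$ chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, x0 = 0.7, 1.3          # arbitrary correlation and evaluation point
he3 = lambda t: t**3 - 3*t  # probabilists' Hermite polynomial He_3

# (U_rho He_3)(x0) = E_z[He_3(rho*x0 + sqrt(1 - rho^2) z)], z ~ N(0, 1)
z = rng.standard_normal(2_000_000)
u_rho = np.mean(he3(rho * x0 + np.sqrt(1 - rho**2) * z))
exact = rho**3 * he3(x0)    # eigenvalue rho^3 on the level-3 eigenspace
assert abs(u_rho - exact) < 0.01
```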
We will consider two families of perturbations: ``value'' perturbations of the form $\tilde f(x) = f(x) + \epsilon \psi(x) + o(\epsilon)$, and ``spatial'' perturbations of the form $\tilde f(x) = f(x + \epsilon \Psi(x) + o(\epsilon))$; our final perturbation will be a combination of these. The perturbation $\tilde f$ will never be written down very explicitly. In most of our analysis, we will rather consider a one-parameter family $f_\epsilon$ of perturbations, and we will establish the \emph{existence} of a good perturbation by studying the derivatives of $f_\epsilon$ at $\epsilon = 0$. There are many technical details, partly because we are considering an infinite-dimensional optimization problem (over all $f: \R^n \to B^k$) and partly because a priori the optimal functions could be almost arbitrarily nasty. However, most of our arguments have simple analogues for finite-dimensional constrained optimization. In particular, suppose that we are trying to maximize a differentiable function $\psi: \R^m \to \R$ while obeying the constraint $g(x) = 0$, for a differentiable $g: \R^m \to \R^k$. Classical Lagrangian theory for this problem implies that if $x_0 \in \R^m$ is a maximizer and $D g(x_0)$ has rank $k$ then there is some $\lambda \in \R^k$ such that $D \psi(x_0) = \lambda^T Dg(x_0)$: if this were not the case, there would be a curve $c: [-\delta, \delta] \to \R^m$ with $c(0) = x_0$, $g(c(t)) \equiv 0$, and $\left.\frac{d}{dt}\right|_{t=0} \psi(c(t)) \ne 0$, contradicting the maximality of $x_0$. The classical theory extends to second-order (at least, if $\psi$ and $g$ are twice-differentiable): if $x_0$ is a maximizer and $D g(x_0)$ has rank $k$ then the matrix $D^2 \psi - \sum_i \lambda_i D^2 g_i$ acts non-positively on the kernel of $D g(x_0)$ (where $\lambda = (\lambda_1, \dots, \lambda_k)$ is the one whose existence was guaranteed by the first-order theory).
This is essentially the constrained-optimization analogue of the statement that a function has a negative-semidefinite Hessian at a maximizer, and it can be proven by showing that if it fails to hold then there is a curve $c: [-\delta, \delta] \to \R^m$ with $c(0) = x_0$, $g(c(t)) \equiv 0$, and $\left.\frac{d^2}{dt^2}\right|_{t=0} \psi(c(t)) > 0$, contradicting the maximality of $x_0$. To prove Theorem~\ref{thm:dimension-reduction}, we first find analogues of the first- and second-order variational principles above. For the first-order conditions, we show (Lemma~\ref{lem:first-variation-lagrangian}) that there exists $\lambda \in \R^k$ such that \begin{equation}\label{eq:first-order-condition} |\mathrm{U}_\rho f - \lambda/2| f = \mathrm{U}_\rho f - \lambda/2. \end{equation} For the second-order conditions, we show that for the same $\lambda$ and for any nice enough vector field $\Psi: \R^n \to \R^n$ satisfying $\E [D_{\Psi(\bx)} f(\bx)] = 0$, \begin{equation}\label{eq:second-variation-for-smooth} \E_{\bx \sim_\rho \by} [\inr{D_{\Psi(\bx)} f(\bx)}{D_{\Psi(\by)} f(\by)}] - \E_\bx[|\mathrm{U}_\rho f - \lambda/2| \cdot |D_{\Psi(\bx)} f(\bx)|^2] \le 0. \end{equation} Note that the expression above is a quadratic function of the vector field $\Psi$, which can be thought of as a ``direction'' along which we perturb $f$. In particular, our second-order condition really says -- as in the finite-dimensional case -- that a certain quadratic form acts non-positively on a certain subspace. Finally, we test~\eqref{eq:second-variation-for-smooth} by substituting constant vector fields $\Psi(x) \equiv v \in \R^n$, and show that either \[ \E_{\bx \sim_\rho \by} [\inr{D_{v} f(\bx)}{D_{v} f(\by)}] - \E_\bx[|\mathrm{U}_\rho f - \lambda/2| \cdot |D_{v} f(\bx)|^2] > 0 \] or $D_v f \equiv 0$. Hence, for every $v \in \R^n$, $\E[D_v f(\bx)] = 0$ implies $D_v f \equiv 0$.
The function $v \mapsto \E[D_v f(\bx)]$ is linear, so if $W \subset \R^n$ is its kernel then $W$ has codimension at least $k$. After applying a change of variables so that $\mathrm{span} \{e_1, \dots, e_k\} \subseteq W^\perp$ the fact that $D_v f \equiv 0$ for $v \in W$ implies that $f$ is a function only of $x_1, \dots, x_k$. \subsection{Technicalities} One problem with the outline above is that we wrote ``$D_v f$'' several times, but no one told us that the optimal function $f$ was differentiable. We get around this difficulty by exploiting the ``smoothness'' of our objectives and constraints. For example, we don't care so much about the derivatives of $f$ as we do about how $\E[f_\epsilon]$ changes as we vary $\epsilon$. But $\E[f_\epsilon]$ has as many derivatives (in $\epsilon$) as we wish, because we may write $\E[f_\epsilon]= \int f(x + \epsilon \Psi(x) + o(\epsilon)) \frac{d\gamma}{dx}\, dx$ and then use a change of variables to pass the spatial perturbation onto the (very smooth) Gaussian density. Organizing the computations with this explicit change of variables is tedious, so what we actually do is to first derive our perturbative formulas for smooth functions $f$, then integrate by parts to push the derivatives onto $\frac{d\gamma}{dx}$. We then get formulas that make sense for non-smooth $f$; we show that they actually hold for non-smooth $f$ by taking smooth approximations. The rest of this section is about the integration-by-parts formulas and uniform approximations that make everything go through rigorously. In particular, we prove several non-smooth analogues of statements that are trivial for differentiable functions. For a $\calC^1$ vector field $W$, define \[ \div_\gamma W(x) = \div W(x) - \inr{W(x)}{x}. \] Note that this satisfies the product rule $\div_\gamma (fW) = f \div_\gamma W + \nabla_W f$ for $\calC^1$ functions $f: \R^n \to \R$. 
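In one dimension, $\div_\gamma$ applied to a field $w(x)$ gives $w'(x) - x\,w(x)$, and its Gaussian mean vanishes by Stein's identity $\E[\bx\, w(\bx)] = \E[w'(\bx)]$. A Monte Carlo sanity check of this fact, with a test field $w(x) = \sin x$ chosen arbitrarily (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(2_000_000)  # samples from the standard Gaussian gamma

# div_gamma w = w'(x) - x*w(x) for w(x) = sin(x); its mean under gamma is 0,
# since E[cos(x)] = E[x sin(x)] by Stein's identity.
w, dw = np.sin(x), np.cos(x)
mean_div_gamma = np.mean(dw - x * w)
assert abs(mean_div_gamma) < 0.01
```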
The point of this definition is the formula \[ \int \div_\gamma W \, d\gamma = 0 \] for compactly supported $W$. Using the product rule, this is equivalent to \begin{equation}\label{eq:derivative} \int f \div_\gamma W\, d\gamma = - \int \nabla_W f \, d\gamma \end{equation} for compactly supported $W$ and/or $f$. Now think of the left hand side as \emph{defining} the derivative of $f$ in a weak sense, noting that it makes sense for non-smooth $f$. Because of the way~\eqref{eq:derivative} expresses derivatives of $f$ in terms of derivatives of $W$, we will need to impose regularity on the vector fields $W$ that we consider. \begin{definition} A vector field $W$ is \emph{tame} if it's bounded, $\calC^\infty$-smooth, and if its derivatives of all orders are bounded. \end{definition} Next, we define our spatial perturbations and our main tool for approximating them by smooth functions: let $W: \R^n \to \R^n$ be a tame vector field and let $\{F_t: t \in \R\}$ be the flow along $W$, defined as the unique function satisfying $F_0(x) = x$ and \[ \frac{dF_t(x)}{dt} = W(F_t(x)) \] for all $t \in \R$ and $x \in \R^n$. Then $F_t$ is a $\calC^\infty$ diffeomorphism for all $t$. Given an optimal function $f: \R^n \to \R^k$, we may consider the competitor function $\spatial tWf$ given by \[ (\spatial tWf)(x) = f(F_t^{-1}(x)). \] It is well-known that functions in $L^2(\gamma)$ can be approximated (for example, by truncating and mollifying) using smooth functions. The point here is that we can do this approximation in such a way that it also applies \emph{uniformly in $t$} to the spatial perturbations $\spatial tWf$. \begin{lemma}\label{lem:uniform-approximation} If $f: \R^n \to \R^k$ is bounded and $W$ is tame then there is a sequence of uniformly bounded functions $f_n \in \calC^\infty_c$ such that \[ \sup_{t \in [-1, 1]} \|\spatial tWf - \spatial tW{f_n}\|_{L_2(\gamma)} \to 0.
\] \end{lemma} \begin{proof} Using a change of variables, we can write \[ \|\spatial tWg\|_{L_2(\gamma)}^2 = \int \|g(x)\|^2 |D F_t(x)| \phi(F_t(x))\, dx, \] where $|D F_t|$ denotes the Jacobian determinant of $F_t$. Now define $\tilde \phi(x) = \sup_{t \in [-1, 1]} |D F_t(x)| \phi(F_t(x))$. Then $\tilde \phi$ is integrable: $|D F_t(x)|$ is uniformly bounded for $t \in [-1, 1]$; also $|F_t(x) - x|$ is uniformly bounded and so \[ \phi(F_t(x)) \le C \exp(-((|x| - C)_+)^2/2) \] for some $C$, which is integrable. Define the finite measure $d \tilde \gamma = \tilde \phi \, dx$; note that our definition of $\tilde \phi$ ensures that \[ \|\spatial tWg\|_{L_2(\gamma)} \le \|g\|_{L_2(\tilde \gamma)} \] for every $t \in [-1, 1]$. Finally, take a uniformly bounded sequence of functions $f_n \in \calC^\infty_c$ such that $f_n \to f$ in $L^2(\tilde \gamma)$. Then the claim follows, because \[ \sup_{t \in [-1, 1]} \|\spatial tWf - \spatial tW{f_n}\|_{L_2(\gamma)} = \sup_{t \in [-1, 1]} \|\spatial tW{(f - f_n)}\|_{L_2(\gamma)} \le \|f - f_n\|_{L_2(\tilde \gamma)}. \] \end{proof} If $f: \R^n \to S^{k-1}$ is differentiable then for any $x, v \in \R^n$, $D_v f(x)$ is tangent to $S^{k-1}$ at $f(x)$; or in other words, $\inr{D_v f}{f} \equiv 0$. Here is an analogue for certain non-smooth $f$. \begin{lemma}\label{lem:tangential-derivative} Suppose $f: \R^n \to B^k$ is measurable and $\psi: \R^n \to [0, \infty)$ is a bounded, Lipschitz function that is differentiable on $\{\psi > 0\}$. Assume that $f(x) \in S^{k-1}$ whenever $\psi(x) > 0$, and that $\psi f$ has a uniformly bounded derivative. Then, for every tame vector field $W$, \[ \sum_i \int f_i \div_\gamma (\psi f_i W) \, d\gamma = 0. \] \end{lemma} (To see why this is an analogue of the easy fact above, note that integration by parts shows that the left hand side is $-\int \psi \inr{D_W f}{f}\, d\gamma$ in the case of smooth $f$.) 
\begin{proof} Since $\div_\gamma(\psi f_i W)$ is integrable, by the dominated convergence theorem it suffices to find uniformly bounded functions $f^\epsilon$ converging pointwise to $f$ such that \begin{equation}\label{eq:tangential-derivative-goal} \lim_{\epsilon \to 0} \sum_i \int f_i^\epsilon \div_\gamma (\psi f_i W) \, d\gamma = 0. \end{equation} Fix a constant $C \ge 1$ larger than the uniform bound on $\psi$ and the Lipschitz constants of both $\psi$ and $\psi f$. For $\epsilon > 0$, let $\eta^\epsilon: [0, \infty) \to [0, \infty)$ be a $\calC^1$ function satisfying \begin{itemize} \item $\eta^\epsilon(s) = s$ for $s \ge \epsilon$, \item $\eta^\epsilon(s) \ge \frac{\epsilon}{2}$ for all $s$, \item $(\eta^\epsilon)'(s) \le 1$ for all $s$, \item $(\eta^\epsilon)'(0) = 0$. \end{itemize} Now define $\psi^\epsilon = \eta^\epsilon \circ \psi$ and $f^\epsilon = \frac{\psi}{\psi^\epsilon} f$. Note that $f^\epsilon = f$ whenever $\psi \ge \epsilon$. Moreover, $\psi^\epsilon$ is $\calC^1$ with \[ |D_v \psi^\epsilon| \le |(\eta^\epsilon)' \circ \psi| |D_v \psi| \le C \] for any unit vector $v$. Since $\eta^\epsilon \ge \frac{\epsilon}{2}$, it follows that $f^\epsilon$ is $\calC^1$ with \begin{equation}\label{eq:tangential-derivative-bound} |D_v f^\epsilon| \le \frac{2}{\epsilon} |D_v (\psi f)| + |\psi f| \frac{|D_v \psi^\epsilon|}{\epsilon/2} \le \frac{4C^2}{\epsilon} \end{equation} for every unit vector $v$. Since $f^\epsilon = f$ whenever $\psi \ge \epsilon$, we have $|f^\epsilon(x)| = 1$ on $\{\psi \ge \epsilon\}$. It follows then that $\inr{D_v f^\epsilon(x)}{f(x)} \equiv 0$ on $\{\psi \ge \epsilon\}$. Therefore, \begin{align*} \sum_i \int f_i^\epsilon \div_\gamma (\psi f_i W) \, d\gamma &= -\int \inr{D_W f^\epsilon}{\psi f} \, d\gamma \\ &= -\int_{\{\psi < \epsilon\}} \inr{D_W f^\epsilon}{\psi f} \, d\gamma.
\end{align*} By~\eqref{eq:tangential-derivative-bound}, \[ \left|\int_{\{\psi < \epsilon\}} \inr{D_W f^\epsilon}{\psi f} \, d\gamma\right| \le 4 C^2 \int_{\{\psi < \epsilon\}} \frac{\psi |W|}{\epsilon} \, d\gamma \le 4 C^2 \int_{\{0 < \psi < \epsilon\}} |W| \, d\gamma. \] Since $|W|$ is uniformly bounded, the final bound converges to zero as $\epsilon \to 0$. This establishes~\eqref{eq:tangential-derivative-goal} and thus completes the proof. \end{proof} Here's a simple bound on the derivatives of $\mathrm{U}_\rho g$ for any $L^2$ function $g$. \begin{lemma}\label{lem:gradient-bound} For any $g \in L^2(\gamma)$ and any $-1 < \rho < 1$, $\mathrm{U}_\rho g$ is $\calC^\infty$ smooth and satisfies \[ \E[\|\nabla^k \mathrm{U}_\rho g\|^2] \le C(\rho, k) \E [g^2] \] for some constant $C(\rho, k) < \infty$, where $\|\nabla^k g\|^2$ denotes the sum of squares of all $k$th order partial derivatives of $g$. \end{lemma} \begin{proof} With the change of variables $z = \rho x + \sqrt{1-\rho^2} y$, we can write \begin{align*} \mathrm{U}_\rho g(x) &= (2\pi)^{-n/2} \int g(\rho x + \sqrt{1-\rho^2} y) e^{-|y|^2/2}\, dy \\ &= (2\pi(1-\rho^2))^{-n/2} \int g(z) e^{-\frac{|z - \rho x|^2}{2(1-\rho^2)}} \, dz. \end{align*} This last formula is clearly differentiable in $x$. To show the claimed bound, recall that if $H_\alpha$ are the orthonormal Hermite functions (where $\alpha$ is a multi-index) then $\frac{\partial}{\partial x_i} H_\alpha = \sqrt{\alpha_i} H_{\alpha - e_i}$. Hence, if $g = \sum_\alpha H_\alpha \hat g_\alpha$ is the Hermite expansion of $g$ then \[ \E\left[\Big(\frac{\partial}{\partial x_i} \mathrm{U}_\rho g\Big)^2\right] = \sum_\alpha \alpha_i \rho^{2|\alpha|} \hat g_\alpha^2 \] and so \[ \E\left[|\nabla \mathrm{U}_\rho g|^2\right] = \sum_\alpha |\alpha| \rho^{2|\alpha|} \hat g_\alpha^2. \] The claimed inequality for $k=1$ follows because, for $x \ge 0$, $x \rho^{2x}$ is bounded by a constant depending on $\rho$; for larger $k$ it follows by induction on $k$.
\end{proof} Here is an integrated-by-parts version of the obvious fact that if $\E[|D_w f|^2] = 0$ then $f(x)$ is ``independent of $w$'' in the sense that $f(x) = f(y)$ whenever $x$ and $y$ differ by a multiple of $w$. \begin{lemma}\label{lem:direction-independent} For $f \in L^2(\gamma)$, $w \in \R^n$, and $0 < \rho < 1$, \[ \sum_i \E [f_i \div_\gamma((D_w \mathrm{U}_\rho f_i) w)] \le 0, \] with equality if and only if there is a function $g: w^\perp \to \R^k$ with $f(x) = g(\Pi_{w^\perp} x)$ almost surely. \end{lemma} \begin{proof} Fix $s < 1$; since $\mathrm{U}_s f$ is sufficiently smooth (e.g.\ by Lemma~\ref{lem:gradient-bound}), \begin{align*} \sum_i \E [\mathrm{U}_s f_i \div_\gamma((D_w \mathrm{U}_\rho f_i) w)] &= -\sum_i \E[\inr{D_w \mathrm{U}_s f_i}{D_w \mathrm{U}_\rho f_i}] \\ &= -\sum_i \E[\inr{D_w \mathrm{U}_s f_i}{D_w \mathrm{U}_{\sqrt{\rho/s}} \mathrm{U}_{\sqrt{\rho s}} f_i}] \\ &= -\sqrt{\frac \rho s} \sum_i \E[\inr{D_w \mathrm{U}_s f_i}{\mathrm{U}_{\sqrt{\rho/s}} D_w \mathrm{U}_{\sqrt{\rho s}} f_i}] \\ &= -\sqrt{\frac \rho s} \sum_i \E[\inr{\mathrm{U}_{\sqrt{\rho/s}} D_w \mathrm{U}_s f_i}{ D_w \mathrm{U}_{\sqrt{\rho s}} f_i}] \\ &= - \sum_i \E[\inr{D_w \mathrm{U}_{\sqrt{\rho s}} f_i}{ D_w \mathrm{U}_{\sqrt{\rho s}} f_i}] \\ &= -\E \|D_w \mathrm{U}_{\sqrt{\rho s}} f\|_2^2. \end{align*} Taking the limit as $s \to 1$, we obtain the identity \[ \sum_i \E [f_i \div_\gamma((D_w \mathrm{U}_\rho f_i) w)] = -\sum_i \E[\inr{D_w f_i}{D_w \mathrm{U}_\rho f_i}] = -\E \|D_w \mathrm{U}_{\sqrt{\rho}} f\|_2^2 \] for any $f \in L^2(\gamma)$. The non-positivity claim follows easily, and it is also clear that zero is attained if and only if $\mathrm{U}_{\sqrt \rho} f$ is independent of $w$. To see that the same is true for $f$, suppose without loss of generality that $w = e_i$. Then $f$ is independent of $w$ if and only if $f$'s Hermite coefficients $\hat f_\alpha$ are zero whenever $\alpha_i > 0$.
Since $\mathrm{U}_{\sqrt \rho}$ acts diagonally and non-degenerately on the Hermite basis, $f$ is independent of $w$ if and only if $\mathrm{U}_{\sqrt \rho} f$ is. \end{proof} \subsection{The first-order conditions} To derive the first-order optimality condition~\eqref{eq:first-order-condition}, we introduce the ``value'' perturbations described in the outline. For $f: \R^n \to \R^k$ and a vector field $W: \R^n \to \R^k$, define (for $t \in \R$) \[ (\valued tWf)(x) = \tilde N(f(x) + t W(x)), \] where $\tilde N(x) = x/\max\{1, \|x\|\}$. \begin{lemma} For any measurable $f: \R^n \to B^k$ and any bounded, measurable vector field $W: \R^n \to \R^k$, \[ \left.\frac{d}{dt}\right|_{t=0} \E[\valued tWf] = \E[W - \inr{f}{W}_+ f 1_{\{\|f\| = 1\}}], \] where $a_+ = \max\{a, 0\}$. \end{lemma} \begin{proof} If $\|f(x)\| < 1$, $\tilde N(f(x) + t W(x)) = f(x) + t W(x)$ for sufficiently small $t$. On the other hand, if $\|f(x)\| = 1$ then Taylor expansion gives \begin{equation}\label{eq:taylor-on-values} \tilde N(f(x) + t W(x)) = f(x) + tW(x) - t\inr{W(x)}{f(x)}_+ f(x) + O(t^2) \end{equation} for any $x \in \R^n$. The $O(t^2)$ term is uniform in $x$ because we assume $W$ to be uniformly bounded, and hence \[ \frac{\E[\tilde N(f + t W) - f]}{t} = \E[W - \inr{f}{W}_+ f 1_{\{\|f\| = 1\}}] + O(t), \] and the claim follows by taking the limit as $t \to 0$. \end{proof} It is important to note that we can perturb $\E[f]$ in all possible directions; this is our analogue of the fact that for the finite-dimensional constrained-optimization theory to hold, the constraint function should have a full-rank derivative. \begin{lemma}\label{lem:spanning-perturbations} For any measurable $f: \R^n \to B^k$, there exists a set $W_1, \dots W_k$ of vector fields such that \[ \left\{\left.\frac{d}{dt}\right|_{t=0} \E[\valued t{W_i}f]: i = 1, \dots, k\right\} \] spans $\R^k$. 
\end{lemma} \begin{proof} If $\{x: \|f(x)\| < 1\}$ has positive measure, the claim is clear because for vector fields $W$ supported on $\{x: \|f(x)\| < 1\}$, $\left.\frac{d}{dt}\right|_{t=0} \E[\valued tWf] = \E[W]$. From now on, we will assume that $f: \R^n \to S^{k-1}$. First, choose $v_0 \in S^{k-1}$ belonging to the support of $f$. For some sufficiently small $\epsilon > 0$, let $A := \{x: |f(x) - v_0| < \epsilon\}$, and note that $A$ has positive measure. Let $w_1, \dots, w_{k-1}$ be a basis for $v_0^\perp$ and let $w_k = -v_0$; then $\{w_1, \dots, w_k\}$ spans $\R^k$. Define (for $i=1,\dots,k$) $W_i = w_i 1_A / \gamma(A)$. First, consider any $i = 1, \dots, k-1$. Since $\inr{W_i}{v_0} \equiv 0$, and since $W_i = 0$ whenever $|f(x) - v_0| \ge \epsilon$, \[ |\inr{f(x)}{W_i(x)}| \le \epsilon |W_i(x)| \] for every $x$. Therefore, \[ \tilde w_i := \left.\frac{d}{dt}\right|_{t=0} \E[\valued t{W_i}f] = \E[W_i] - \E[\inr{f}{W_i}_+ f] = w_i + O(\epsilon). \] On the other hand, for $i=k$ we have \[ |\inr{f(x)}{W_k(x)} + 1| \le \epsilon |W_k(x)|, \] meaning in particular that $\inr{f}{W_k} \le 0$ pointwise as soon as $\epsilon < 1$. Therefore, \[ \tilde w_k := \left.\frac{d}{dt}\right|_{t=0} \E[\valued t{W_k}f] = \E[W_k] = w_k. \] Since $\{\tilde w_1,\dots,\tilde w_k\}$ is an arbitrarily small perturbation of $\{w_1, \dots, w_k\}$, if $\epsilon > 0$ is sufficiently small then $\{\tilde w_1, \dots, \tilde w_k\}$ spans $\R^k$. \end{proof} The next step in establishing the first-order conditions is to show that it's enough to consider derivatives: if it's possible to improve the objective to first-order while preserving the constraints to first-order then it's also possible to improve the objective to first-order while preserving the constraints exactly.
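The pointwise content of the value-perturbation derivative used above can be sanity-checked numerically: for a unit vector $v$ and a direction $w$ with $\inr{w}{v} > 0$, the right derivative of $t \mapsto \tilde N(v + tw)$ at $t = 0$ is $w - \inr{w}{v}_+ v$, matching the integrand in the derivative formula. A finite-difference sketch with arbitrarily chosen $v, w$ (not from the paper; one-sided step since the derivative is one-sided on the sphere):

```python
import numpy as np

def proj(x):
    """The radial projection N~(x) = x / max(1, ||x||) onto the unit ball."""
    return x / max(1.0, np.linalg.norm(x))

v = np.array([1.0, 0.0])   # a point on the unit sphere S^1
w = np.array([0.5, 1.0])   # arbitrary perturbation direction with <w, v> > 0
t = 1e-6                   # small one-sided finite-difference step

fd = (proj(v + t * w) - v) / t
exact = w - max(np.dot(w, v), 0.0) * v   # w - <w, v>_+ v
assert np.allclose(fd, exact, atol=1e-4)
```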
\begin{lemma}\label{lem:first-variation} If $\rho \in (0, 1)$ and $f: \R^n \to B^k$ is optimally stable then for every bounded, measurable vector field $W$, $\left.\frac{d}{dt}\right|_{t=0} \E[\valued tWf] = 0$ implies \[ \left.\frac{d}{dt}\right|_{t=0} \E_{\bx \sim_\rho \by}[\inr{(\valued tWf)(\bx)}{(\valued tWf)(\by)}] = 0. \] \end{lemma} \begin{proof} Let $W$ be any vector field with $\left.\frac{d}{dt}\right|_{t=0} \E[\valued tWf] = 0$, and choose vector fields $W_1, \dots, W_k$ as in Lemma~\ref{lem:spanning-perturbations}. Consider the competitor function $f_{\alpha,\beta}(x) = \tilde N(f(x) + \sum_i \alpha_i W_i(x) + \beta W(x))$. We define $L: \R^{k+1} \to \R^k$ by \[ L(\alpha, \beta) = \E[f_{\alpha,\beta}]. \] Then \begin{equation}\label{eq:diff-beta} \frac{\partial L}{\partial \beta} (0, 0) = \left.\frac{d}{dt}\right|_{t=0} \E[\valued tWf] = 0 \end{equation} and \begin{equation}\label{eq:diff-alpha} \frac{\partial L}{\partial \alpha_i} (0, 0) = \left.\frac{d}{dt}\right|_{t=0} \E[\valued t{W_i}f], \end{equation} which by our choice of $W_i$ implies that $D L$ (as a $k \times (k+1)$ matrix) has rank $k$. By the implicit function theorem, there is some interval $(-\epsilon, \epsilon)$ and a differentiable curve $\eta: (-\epsilon, \epsilon) \to \R^{k+1}$ such that $\eta(0) = 0$, $\eta'(0) \ne 0$ and $L(\eta(t)) = L(0, 0)$ for all $t \in (-\epsilon, \epsilon)$. We'll write $\alpha(t)$ for the first $k$ coordinates of $\eta$, and $\beta(t)$ for the last coordinate. Now, the fact that $\eta'(0) \ne 0$ implies that at least one of $\alpha'(0)$ or $\beta'(0)$ is non-zero. 
But the chain rule and the fact that $L(\eta(t))$ is constant gives \[ 0 = \left.\frac{d}{dt}\right|_{t=0} L(\eta(t)) = \sum_i \frac{\partial L}{\partial \alpha_i}(0,0) \alpha'_i(0) + \frac{\partial L}{\partial \beta} (0,0) \beta'(0); \] the term involving $\beta$ vanishes because of~\eqref{eq:diff-beta}, while~\eqref{eq:diff-alpha} implies that the vectors $\frac{\partial}{\partial \alpha_i} L(0,0)$ are linearly independent. It follows that $\alpha'(0) = 0$, and so we must have $\beta'(0) \ne 0$. Finally, we consider the objective value \[ J(\alpha, \beta) = \E_{\bx \sim_\rho \by} \left[\left\langle f_{\alpha,\beta}(\bx) , f_{\alpha,\beta}(\by) \right\rangle\right]. \] Since $f$ is optimally stable and $f_{\alpha(t),\beta(t)}$ satisfies the constraints, $\left.\frac{d}{dt}\right|_{t=0} J(\alpha(t), \beta(t)) = 0$. On the other hand, the chain rule gives \begin{multline*} 0 = \left.\frac{d}{dt}\right|_{t=0} J(\alpha(t), \beta(t)) = \sum_i \alpha'_i(0) \left.\frac{d}{dt}\right|_{t=0} \E_{\bx \sim_\rho \by}[\inr{(\valued t{W_i}f)(\bx)}{(\valued t{W_i}f)(\by)}] \\ + \beta'(0) \left.\frac{d}{dt}\right|_{t=0} \E_{\bx \sim_\rho \by}[\inr{(\valued t{W}f)(\bx)}{(\valued t{W}f)(\by)}]. \end{multline*} Since we showed that $\alpha'_i(0) = 0$ for all $i$ and $\beta'(0) \ne 0$, we conclude that \[ \left.\frac{d}{dt}\right|_{t=0} \E_{\bx \sim_\rho \by}[\inr{(\valued t{W}f)(\bx)}{(\valued t{W}f)(\by)}] = 0. \qedhere \] \end{proof} There is also a Lagrangian interpretation of Lemma~\ref{lem:first-variation}: \begin{lemma}\label{lem:first-variation-lagrangian} If $\rho \in (0, 1)$ and $f: \R^n \to B^k$ is optimally stable then there exists some $\lambda \in \R^k$ such that for every bounded, measurable vector field $W$, \[ \left.\frac{d}{dt}\right|_{t=0} \E_{\bx \sim_\rho \by}[\inr{(\valued t{W}f)(\bx)}{(\valued t{W}f)(\by)}] = \left\langle \lambda, \left.\frac{d}{dt}\right|_{t=0} \E[\valued tWf] \right\rangle. 
\] \end{lemma} \begin{proof} For a bounded, measurable vector field $W$, let \[ \phi(W) = \left.\frac{d}{dt}\right|_{t=0} \E[\valued tWf] \in \R^k \] and \[ \psi(W) = \left.\frac{d}{dt}\right|_{t=0} \E_{\bx \sim_\rho \by}[\inr{(\valued t{W}f)(\bx)}{(\valued t{W}f)(\by)}] \in \R, \] noting that both $\phi(W)$ and $\psi(W)$ are linear functions of $W$. Let $\calX$ be any finite-dimensional subspace of bounded measurable vector fields for which $\{\phi(W): W \in \calX\}$ spans $\R^k$, and consider the linear map $L: \calX \to \R^{k+1}$ given by $L(W) = (\phi(W), \psi(W))$. By Lemma~\ref{lem:first-variation}, $(0, \dots, 0, 1)$ does not belong to the range of $L$; it follows that there exists $\lambda = \lambda(\calX)$ such that $(-\lambda, 1)$ is orthogonal to the range of $L$ (for example, $\lambda$ can be found by rescaling the residual of the orthogonal projection of $(0, \dots, 0, 1)$ onto the range of $L$). For this $\lambda$, we have $\inr{\lambda}{\phi(W)} = \psi(W)$ for all $W \in \calX$. Note that $\lambda(\calX)$ is unique, because the range of $L$ has dimension at least $k$. Now, if $\calX \subset \calX'$ are two vector spaces satisfying the spanning property above then $\lambda(\calX') = \lambda(\calX)$ (because $(-\lambda(\calX'), 1)$ is orthogonal to $L(\calX')$ and hence also $L(\calX)$, and $\lambda(\calX)$ is the unique vector with that property). It then follows that there is a $\lambda$ satisfying $\inr{\lambda}{\phi(W)} = \psi(W)$ for all bounded, measurable $W$: take any $\calX$ for which $\{\phi(W): W \in \calX\}$ spans $\R^k$, and take $\lambda = \lambda(\calX)$. Then for any bounded, measurable $W$, consider $\calX' = \mathrm{span}(\{W\} \cup \calX)$; since $\lambda(\calX') = \lambda$, it follows that $\inr{\lambda}{\phi(W)} = \psi(W)$. \end{proof} To make Lemma~\ref{lem:first-variation-lagrangian} more useful, we test it on ``local'' vector fields $W$ to extract valuable \emph{pointwise} information about optimally stable functions. 
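The computation that follows is phrased in terms of the noise operator $\mathrm{U}_\rho$. As a quick Monte Carlo illustration of how $\mathrm{U}_\rho$ acts (not part of the argument; we assume the standard representation $\mathrm{U}_\rho f(x) = \E_{\bg}[f(\rho x + \sqrt{1-\rho^2}\, \bg)]$ with $\bg$ standard Gaussian, and the one-dimensional eigenvalue relation $\mathrm{U}_\rho H_j = \rho^j H_j$ on Hermite polynomials, as in the diagonalization mentioned at the start of this section):

```python
import math
import random

random.seed(1)
rho, x0 = 0.6, 1.3
h2 = lambda t: t * t - 1.0              # probabilists' Hermite polynomial H_2

# Monte Carlo estimate of U_rho h2(x0) = E[h2(rho*x0 + sqrt(1-rho^2)*g)], g ~ N(0,1)
n_samples = 200_000
s = math.sqrt(1.0 - rho ** 2)
est = sum(h2(rho * x0 + s * random.gauss(0.0, 1.0)) for _ in range(n_samples)) / n_samples

exact = rho ** 2 * h2(x0)               # eigenvalue relation: U_rho H_2 = rho^2 H_2
print(est, exact)
```

The two printed numbers agree up to Monte Carlo error, matching the diagonal action of $\mathrm{U}_\rho$ on the Hermite basis.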
First, note that the Taylor expansion~\eqref{eq:taylor-on-values} implies that \begin{align*} \left.\frac{d}{dt}\right|_{t=0} \E_{\bx \sim_\rho \by}[\inr{(\valued t{W}f)(\bx)}{(\valued t{W}f)(\by)}] &= 2 \E_{\bx \sim_\rho \by} [\inr{f(\bx)}{W(\by) - \inr{W(\by)}{f(\by)}_+ f(\by) 1_{\{\|f(\by)\| = 1\}}}] \\ &= 2 \E[\inr{W - \inr{W}{f}_+ f 1_{\{\|f\| = 1\}}}{\mathrm{U}_\rho f}]. \end{align*} Therefore, Lemma~\ref{lem:first-variation-lagrangian} implies that there exists $\lambda \in \R^k$ such that \begin{equation}\label{eq:first-variation-in-space} 2 \E[\inr{W - \inr{W}{f}_+ f 1_{\{\|f\| = 1\}}}{\mathrm{U}_\rho f - \lambda/2}] = 0 \end{equation} for every bounded, measurable $W$. \begin{lemma}\label{lem:first-order-conditions} If $\rho \in (0, 1)$ and $f: \R^n \to B^k$ is optimally stable then for the $\lambda$ of Lemma~\ref{lem:first-variation-lagrangian}, we have \[ |\mathrm{U}_\rho f - \lambda/2| f = \mathrm{U}_\rho f - \lambda/2 \text{ a.e.} \] \end{lemma} \begin{proof} Suppose not, and choose some small $\epsilon > 0$ such that the set \[ A := \{x: |\mathrm{U}_\rho f - \lambda/2| > \epsilon \text{ and } |f - N(\mathrm{U}_\rho f - \lambda/2)| > \epsilon\} \] has positive measure. Then find some $v \in B^k$ such that \[ C := \{x: x \in A \text{ and } |f(x) - v| < \epsilon^3\} \] has positive measure. Since $|f - N(\mathrm{U}_\rho f - \lambda/2)| > \epsilon$ on $C$, it follows that $|v - N(\mathrm{U}_\rho f - \lambda/2)| > \epsilon - \epsilon^3$ on $C$, and so (for small enough $\epsilon > 0$) we can find some unit vector $w \in v^\perp$ such that $\inr{N(\mathrm{U}_\rho f - \lambda/2)}{w} \ge \epsilon/2$ on $C$. Now set $W = w 1_C$. On the set $C$, $w \in v^\perp$ and $|f(x) - v| \le \epsilon^3$ imply that $|\inr{W}{f}_+ f| \le \epsilon^3$. 
On the other hand, we also have (still on the set $C$) \[ \inr{W}{\mathrm{U}_\rho f - \lambda/2} = \inr{W}{N(\mathrm{U}_\rho f - \lambda/2)} |\mathrm{U}_\rho f - \lambda/2| \ge \frac{\epsilon}{2} |\mathrm{U}_\rho f - \lambda/2| \ge \frac{\epsilon^2}{2}, \] and if $\epsilon > 0$ is small enough then this contradicts~\eqref{eq:first-variation-in-space}. \end{proof} \subsection{Spatial perturbations} Next, we compute the first and second derivatives of the objectives and constraints for the spatial perturbation $\spatial tWf$, which, recall, is defined by letting $\{F_t: t \in \R\}$ be the flow along $W$, and setting \[ (\spatial tWf)(x) = f(F_t^{-1}(x)). \] \begin{lemma} For any bounded function $f: \R^n \to \R^k$ and any tame vector field $W$, $\E[\spatial tWf]$ is differentiable in $t$ and satisfies \[ \left.\frac{d}{dt}\right|_{t=0} \E[\spatial tWf] = \E [f \div_\gamma W]. \] \end{lemma} \begin{proof} For $f \in \calC_c^1$, this follows by writing out the definition of $\spatial tWf$, differentiating inside the integral, and integrating by parts using~\eqref{eq:derivative}: \[ \left.\frac{d}{dt}\right|_{t=0} \E [\spatial tWf] = \left.\frac{d}{dt}\right|_{t=0} \E[f\circ F_{-t}]= -\E [D_W f] = \E[f \div_\gamma W]. \] Because $\spatial sW{\spatial tWf} = \spatial{s+t}Wf$, this also implies that \begin{equation}\label{eq:smooth-derivative} \frac{d}{dt} \E[\spatial tWf] = \E[\spatial tWf \div_\gamma W] \end{equation} for $f \in \calC_c^1$. Next we handle the case of general bounded $f$. Take an approximating sequence $f_n$ as in Lemma~\ref{lem:uniform-approximation}. Defining $\phi(t) = \E[\spatial tWf]$ and $\phi_n(t) = \E[\spatial tWf_n]$, we see from~\eqref{eq:smooth-derivative} and the uniform boundedness of $f_n$ that $\phi_n'(t)$ is continuous in $t$, and bounded uniformly in $n$ and $t$. Moreover, Lemma~\ref{lem:uniform-approximation} ensures that $\phi_n(t) \to \phi(t)$ uniformly for $t \in [-1, 1]$, and that $\phi'_n(t)$ converges uniformly. 
It follows that $\phi(t)$ is differentiable in $t$ and satisfies \[ \phi'(0) = \lim_{n\to\infty} \phi_n'(0) = \E [f \div_\gamma W]. \] \end{proof} Next, we do a similar computation for the objective function: \begin{lemma} For any bounded, measurable function $f: \R^n \to \R^k$ and any tame vector field $W$, $\E_{\bx \sim_\rho \by}[\inr{\spatial tWf (\bx)}{\spatial tWf(\by)}]$ is differentiable in $t$ and satisfies \[ \left.\frac{d}{dt}\right|_{t=0} \E_{\bx \sim_\rho \by}[\inr{\spatial tWf(\bx)}{\spatial tWf(\by)}] = 2\E [\inr{f}{\mathrm{U}_\rho f} \div_\gamma W + \inr{f}{D_W \mathrm{U}_\rho f}]. \] \end{lemma} \begin{proof} If $f$ is $\calC_c^\infty$, we compute by calculus that \begin{align*} \left.\frac{d}{dt}\right|_{t=0} \E_{\bx \sim_\rho \by}[\inr{\spatial tWf(\bx)}{\spatial tWf(\by)}] &= -2 \E_{\bx \sim_\rho \by}[\inr{D_{W(\bx)} f(\bx)}{f(\by)}] \\ &= -2 \E[\inr{D_{W} f}{\mathrm{U}_\rho f}] \end{align*} For each coordinate, we integrate by parts using~\eqref{eq:derivative}: \[ -2 \E [D_{W} f_i \cdot \mathrm{U}_\rho f_i] = 2\E [f_i \div_\gamma( (\mathrm{U}_\rho f_i) W)] = 2\E [f_i \mathrm{U}_\rho f_i \div_\gamma(W) + f_i D_W \mathrm{U}_\rho f_i]. \] Summing over $i$ completes the proof for $f \in \calC_c^\infty$; note that because $\spatial sW{\spatial tWf} = \spatial{s+t}Wf$, we also have the derivative at $t \ne 0$: \begin{equation*} \frac{d}{dt} \E_{\bx \sim_\rho \by}[\inr{\spatial tWf(\bx)}{\spatial tWf(\by)}] = 2\E [\inr{\spatial tWf}{\mathrm{U}_\rho \spatial tWf} \div_\gamma W + \inr{\spatial tWf}{D_W \mathrm{U}_\rho \spatial tWf}]. \end{equation*} Now consider a bounded function $f$ and choose an approximating sequence $f_n$ as in Lemma~\ref{lem:uniform-approximation}. 
Letting $\phi(t) = \E_{\bx \sim_\rho \by} [\inr{\spatial tWf(\bx)}{\spatial tWf(\by)}]$ and $\phi_n(t) = \E_{\bx \sim_\rho \by} [\inr{\spatial tW{f_n}(\bx)}{\spatial tW{f_n}(\by)}]$, we note from the formula above that $\phi_n'(t)$ is uniformly bounded and converging uniformly (for $t \in [-1, 1]$, where the boundedness of the term involving $D_W \mathrm{U}_\rho \spatial tW{f_n}$ follows from Lemma~\ref{lem:gradient-bound}). Since $\phi_n(t) \to \phi(t)$ uniformly for $t \in [-1, 1]$, it follows that $\phi(t)$ is differentiable and $\phi'(0) = \lim_{n\to\infty} \phi_n'(0)$. \end{proof} \subsection{Second variation} Here, we establish the second-order optimality condition for spatial perturbations. First, let us observe that everything is twice differentiable in $t$: \begin{lemma} For any measurable $f: \R^n \to B^k$ and any tame vector field $W$, both $\E[\spatial tWf]$ and $\E_{\bx \sim_\rho \by} [\inr{\spatial tWf(\bx)}{\spatial tWf(\by)}]$ are twice differentiable in $t$. \end{lemma} \begin{proof} This can be seen simply by writing out the definitions and changing variables so that the derivatives fall only on the Gaussian kernel, much as in the proof of Lemma~\ref{lem:gradient-bound}. \end{proof} \begin{lemma}\label{lem:stability} Suppose $f$ is optimally stable. If $\rho > 0$ and $W$ is a tame vector field such that \[ \left.\frac{d}{dt}\right|_{t=0} \E [\spatial tWf] = 0, \] then \[ \left.\frac{d^2}{dt^2}\right|_{t=0} \E_{\bx \sim_\rho \by} [\inr{\spatial tWf(\bx)}{\spatial tWf(\by)}] - \left\langle \lambda , \left.\frac{d^2}{dt^2}\right|_{t=0} \E[\spatial tWf] \right \rangle \le 0. \] If $\rho < 0$ then under the same assumptions, the left hand side above is \emph{at least} zero. 
\end{lemma} This motivates the definition: \begin{definition} For a tame vector field $W$, define the \emph{index form} \[ Q(W) = \left.\frac{d^2}{dt^2}\right|_{t=0} \E_{\bx \sim_\rho \by} [\inr{\spatial tWf(\bx)}{\spatial tWf(\by)}] - \left\langle \lambda , \left.\frac{d^2}{dt^2}\right|_{t=0} \E[\spatial tWf] \right \rangle. \] \end{definition} \begin{proof} Take vector fields $W_1, \dots, W_k$ as in the proof of Lemma~\ref{lem:first-variation}, and define \[ f_{\alpha,\beta}(x) = \tilde N(f(F_\beta(x)) + \sum_i \alpha_i W_i(x)), \] where $F_\beta$ is the flow along the vector field $W$. As in the proof of Lemma~\ref{lem:first-variation}, defining $L(\alpha, \beta) = \E [f_{\alpha,\beta}]$ implies that $\frac{\partial L}{\partial \beta} (0, 0) = 0$, while $D L(0, 0)$ has rank $k$. Therefore (as in the proof of Lemma~\ref{lem:first-variation}) we can find smooth curves $\alpha(t) \in \R^k$ and $\beta(t) \in \R$ such that $\alpha'(0) = 0$, $\beta'(0) \ne 0$, and $L(\alpha(t), \beta(t)) \equiv L(0, 0)$ for $t$ in some interval $(-\epsilon, \epsilon)$. Taking second derivatives with respect to $t$, we have \begin{equation}\label{eq:stability-second-derivative-of-constraints} 0 = \left.\frac{d^2}{dt^2}\right|_{t=0} L(\alpha(t), \beta(t)) = \sum_i \alpha_i''(0) \frac{\partial L}{\partial \alpha_i}(0,0) + (\beta'(0))^2 \frac{\partial^2 L}{\partial \beta^2}(0,0). \end{equation} Define $J(\alpha, \beta) = \E_{\bx \sim_\rho \by} \inr{f_{\alpha,\beta}(\bx)}{f_{\alpha,\beta}(\by)}$ and let $K(t) = J(\alpha(t), \beta(t))$. First, note that (because $\alpha'(0) = 0$) $K'(0) = \beta'(0) \frac{\partial J}{\partial \beta}(0,0)$. Since $\beta'(0) \ne 0$, we must have $\frac{\partial J}{\partial \beta}(0, 0) = 0$ -- otherwise, there would be some small $t$ (either positive or negative) giving a contradiction to the optimality of $f$. 
Taking another derivative, the optimality of $f$ implies that \[ 0 \ge K''(0) = \sum_i \alpha_i''(0) \frac{\partial J}{\partial \alpha_i}(0, 0) + (\beta'(0))^2 \frac{\partial^2 J}{\partial \beta^2}(0,0). \] Finally, recall from Lemma~\ref{lem:first-variation-lagrangian} that $\frac{\partial J}{\partial \alpha_i} = \inr{\lambda}{\frac{\partial L}{\partial \alpha_i}}$; going back to~\eqref{eq:stability-second-derivative-of-constraints}, we obtain \[ 0 \ge K''(0) = -(\beta'(0))^2 \left\langle \lambda, \frac{\partial^2 L}{\partial \beta^2}(0,0) \right \rangle + (\beta'(0))^2 \frac{\partial^2 J}{\partial \beta^2}(0,0); \] dropping the (positive) $(\beta'(0))^2$ terms and untangling the notation, this is equivalent to the claim. \end{proof} \subsection{The index form for translations} Here, we compute the index form $Q$ for constant vector fields $W \equiv w$, giving the rigorous, integrated-by-parts analogue of~\eqref{eq:second-variation-for-smooth} (at least, for $W \equiv w$, which is all that we will need). \begin{lemma}\label{lem:Q} For any measurable $f: \R^n \to B^k$ and any $w \in \R^n$, \begin{equation*} Q(w) = 2 \sum_i \left( \E [f_i \div_\gamma(\div_\gamma((\mathrm{U}_\rho f_i - \lambda/2) w) w)] - \frac{1}{\rho} \E [f_i \div_\gamma((D_w \mathrm{U}_\rho f_i) w)] \right). \end{equation*} \end{lemma} \begin{proof} Note that $\spatial twf(x) = f(x - tw)$. 
If $f \in \calC_c^\infty$, we simply compute \[ \left.\frac{d^2}{dt^2}\right|_{t=0} \E[\spatial twf] = \E [D_w (D_w f)] \] and \begin{align*} \left.\frac{d^2}{dt^2}\right|_{t=0} & \E_{\bx \sim_\rho \by} [\inr{\spatial twf(\bx)}{\spatial twf(\by)}] \\ &= 2\E_{\bx \sim_\rho \by}[ \inr{D^2_{w,w} f(\bx)}{f(\by)} + \inr{D_w f(\bx)}{D_w f(\by)}] \\ &= 2\E[\inr{D^2_{w,w} f}{\mathrm{U}_\rho f} + \inr{D_w f}{\mathrm{U}_\rho D_w f}] \\ &= 2\E[\inr{D^2_{w,w} f}{\mathrm{U}_\rho f}] + \frac{2}{\rho} \E[\inr{D_w f}{D_w \mathrm{U}_\rho f}], \end{align*} and hence \begin{equation}\label{eq:Q-for-smooth-functions} Q(w) = 2\E [\inr{D^2_{w,w} f}{\mathrm{U}_\rho f - \lambda/2}]+ \frac{2}{\rho} \E[\inr{D_w f}{D_w \mathrm{U}_\rho f}]. \end{equation} Integrating the first term by parts twice gives \begin{align*} \E [D^2_{w,w} f_i \cdot (\mathrm{U}_\rho f_i - \lambda_i/2)] &= -\E [D_w f_i \cdot \div_\gamma((\mathrm{U}_\rho f_i - \lambda_i/2) w)] \\ &= \E [f_i \div_\gamma(\div_\gamma((\mathrm{U}_\rho f_i - \lambda_i/2) w) w)]; \end{align*} integrating the second term of~\eqref{eq:Q-for-smooth-functions} by parts gives \[ \E [D_w f_i \cdot D_w \mathrm{U}_\rho f_i] = -\E [f_i \div_\gamma((D_w \mathrm{U}_\rho f_i) w)]. \] Overall, we obtain \begin{equation*} Q(w) = 2 \sum_i \E\left[ f_i \div_\gamma(\div_\gamma((\mathrm{U}_\rho f_i - \lambda_i/2) w) w) - \frac{1}{\rho} f_i \div_\gamma((D_w \mathrm{U}_\rho f_i) w) \right]. \end{equation*} By the familiar approximation argument (noting that the terms involving second derivatives of $\mathrm{U}_\rho f_i$ are controlled by Lemma~\ref{lem:gradient-bound}), the same formula applies for all bounded, measurable functions $f$. \end{proof} Our formula for $Q(w)$ in Lemma~\ref{lem:Q} doesn't require $f$ to be optimally stable. However, if $f$ is optimally stable, the first-order conditions allow us to find a simpler formula. 
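The Gaussian integrations by parts used above can be sanity-checked numerically in one dimension. The sketch below is an illustration only; it assumes the standard Gaussian divergence $\div_\gamma W = \div W - \inr{x}{W}$, so that for the constant direction $w = 1$ and a scalar field $g$ one has $\div_\gamma(g w) = g' - x g$, and the test functions are arbitrary choices. It verifies $\E[D_w f \cdot g] = -\E[f \div_\gamma(g w)]$ by quadrature:

```python
import math

def gauss_expect(h, lo=-8.0, hi=8.0, n=20_000):
    """E[h(x)] for x ~ N(0, 1), by a midpoint Riemann sum."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += h(x) * math.exp(-x * x / 2.0)
    return total * dx / math.sqrt(2.0 * math.pi)

# arbitrary test functions: f and its derivative D_w f (for w = 1), and g
f, df = (lambda x: math.sin(2.0 * x)), (lambda x: 2.0 * math.cos(2.0 * x))
g, dg = (lambda x: x * x), (lambda x: 2.0 * x)

lhs = gauss_expect(lambda x: df(x) * g(x))
# div_gamma(g w) = g'(x) - x g(x) for the constant field w = 1 in one dimension
rhs = -gauss_expect(lambda x: f(x) * (dg(x) - x * g(x)))
print(lhs, rhs)
```

The two sides agree to quadrature accuracy, matching the sign convention $\E[D_W f] = -\E[f \div_\gamma W]$ used throughout.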
\begin{lemma}\label{lem:Q-simpler} If $f: \R^n \to B^k$ is optimally stable then for any $w \in \R^n$, \[ Q(w) = 2\frac{\rho - 1}{\rho} \sum_i \E[f_i \div_\gamma ((D_w \mathrm{U}_\rho f_i) w)]. \] \end{lemma} \begin{proof} The point is to show that \[ \sum_i \E [f_i \div_\gamma(\div_\gamma((\mathrm{U}_\rho f_i - \lambda_i/2) w) w)] = \sum_i \E [f_i \div_\gamma((D_w \mathrm{U}_\rho f_i) w)]; \] then the claim follows immediately. To show the identity above, note that (by the product rule) \[ \div_\gamma ((\mathrm{U}_\rho f_i - \lambda_i/2) w) = (\mathrm{U}_\rho f_i - \lambda_i/2) \div_\gamma w + D_w \mathrm{U}_\rho f_i; \] plugging this in above, it suffices to show that \begin{equation}\label{eq:Q-simpler-subgoal} \sum_i \E [f_i \div_\gamma((\mathrm{U}_\rho f_i - \lambda_i/2) \div_\gamma w \cdot w)] = 0. \end{equation} But Lemma~\ref{lem:first-order-conditions} implies that $\mathrm{U}_\rho f - \lambda/2 = |\mathrm{U}_\rho f - \lambda/2| f$;~\eqref{eq:Q-simpler-subgoal} then follows from Lemma~\ref{lem:tangential-derivative} with $\psi = |\mathrm{U}_\rho f - \lambda/2|$ and $W = (\div_\gamma w) w$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:dimension-reduction}] Suppose $f$ is optimally stable and suppose that $\rho \in (0, 1)$. By Lemma~\ref{lem:stability}, for any $w$ with \[ \left.\frac{d}{dt}\right|_{t=0} \E[\spatial twf] = 0, \] we have $Q(w) \le 0$. Using the formula for $Q$ in Lemma~\ref{lem:Q-simpler}, for such $w$ we have \[ \sum_i \E[f_i \div_\gamma ((D_w \mathrm{U}_\rho f_i) w)] \ge 0. \] But then Lemma~\ref{lem:direction-independent} implies that $f(x)$ can be written as a function of $\Pi_{w^\perp} x$. Note that the map \[ L(w) = \left.\frac{d}{dt}\right|_{t=0} \E[\spatial twf] \] is a linear map $\R^n \to \R^k$. 
Then $\ker L$ has dimension at least $n-k$. After applying a change of coordinates in $\R^n$, we may assume that $\ker L$ contains the span of $e_{k+1}, \dots, e_n$; then the previous paragraph implies that $f(x)$ depends only on $x_1, \dots, x_k$. \end{proof} \subsection{The case of negative $\rho$} Many of the technical results we developed above apply to the case of negative $\rho$, with a few sign changes. Notably, the sign in Lemma~\ref{lem:first-order-conditions} changes to \[ |\mathrm{U}_\rho f - \lambda/2| f = \lambda/2 - \mathrm{U}_\rho f; \] and because the negative-$\rho$ case is a minimization problem instead of a maximization problem, the sign of the second-order conditions flips also: if $\rho < 0$ then the final inequality of Lemma~\ref{lem:stability} is reversed. Because of these sign changes, the constant vector fields $w$ turn out not to contradict stability. In fact, when $\rho < 0$ we have $Q(w) \le 0$ for every $w$, whether or not $\spatial twf$ preserves expectations to first-order. It remains plausible that there are some other vector fields that will imply Theorem~\ref{thm:dimension-reduction} in the case of negative $\rho$, but we were unable to find them. \section{Integrality gaps}\label{sec:integrality-gaps} In this section, we prove \Cref{thm:integrality-gap-heisenberg,thm:integrality-gap-prod}, which we restate here. \begin{theorem}[Integrality gap for the \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP; \Cref{thm:integrality-gap-heisenberg} restated] \label{thm:integrality-gap-heisenberg-restated} Assuming \cref{conj:vector-borell-intro}, the \text{\sc Quantum} \text{\sc Max-Cut}\xspace semidefinite program $\text{\sc SDP}_{\text{\sc QMC}}\xspace(G)$ has integrality gap $\alpha_{\mathrm{GP}}$. 
\end{theorem} \begin{theorem}[Integrality gap for product state SDP; \Cref{thm:integrality-gap-prod} restated] \label{thm:integrality-gap-prod-restated} Assuming \cref{conj:vector-borell-intro}, the product state semidefinite program $\text{\sc SDP}_{\text{\sc Prod}}\xspace(G)$ has integrality gap $\alpha_{\mathrm{BOV}}$. \end{theorem} Recalling \Cref{def:integrality-gap}, our goal is to compute $$\inf_{\text{instances }\mathcal I\text{ of }\mathcal P}\left\{\frac{\mathrm{OPT}(\mathcal I)}{\mathrm{SDP}(\mathcal I)}\right\},$$ where $\mathcal P$ is the problem of either computing the value or the product state value of a \text{\sc Quantum} \text{\sc Max-Cut}\xspace instance. To upper bound this quantity, we construct a specific instance $\mathcal I$ and give an upper bound for $\mathrm{OPT}(\mathcal I)$ and a lower bound for $\mathrm{SDP}(\mathcal I)$. Note that for the product state case, we only optimize over product states, whereas for the general \text{\sc Quantum} \text{\sc Max-Cut}\xspace we consider all quantum states. However, the specific instance $\mathcal I$ we consider will correspond to a graph of high degree, and for such graphs Brandao and Harrow \cite{BH16} show that it suffices to consider product states. The instance we use for both integrality gaps is the $\rho$-correlated Gaussian graph. This was also used as an integrality gap for the \text{\sc Max-Cut}\xspace problem in the work of~\cite{OW08}, and we will follow their proof closely. As in their proof, we will have to deal with the technicality that the Gaussian graph is actually an infinite graph, and so our final integrality gap instance will involve discretizing the Gaussian graph to produce a finite graph. \subsection{The Gaussian graph as an integrality gap} To begin, we provide a lower bound for the \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP. 
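Both lower bounds below rest on the fact that the normalized inner product $\inr{\bx/\Vert \bx \Vert}{\by/\Vert \by \Vert}$ of $\rho$-correlated Gaussian vectors concentrates near~$\rho$ in high dimensions, as quantified in~\cite{OW08}. A quick Monte Carlo illustration (not part of the proof; the dimension, correlation, seed, and trial count are arbitrary choices):

```python
import math
import random

random.seed(0)
rho, n, trials = 0.5, 300, 1000
s = math.sqrt(1.0 - rho ** 2)

total = 0.0
for _ in range(trials):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    # y ~_rho x: each coordinate is rho * x_i plus sqrt(1 - rho^2) times a fresh Gaussian
    y = [rho * xi + s * random.gauss(0.0, 1.0) for xi in x]
    dot = sum(a * b for a, b in zip(x, y))
    total += dot / (math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))

avg = total / trials
print(avg)  # concentrates near rho as n grows
```

For large $n$ the average is within $O(\sqrt{\log n / n})$ of $\rho$, which is exactly the error term appearing in the lemmas below.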
\begin{lemma}[\text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP Lower Bound]\label{lem:heis-sdp-lower} $$\text{\sc SDP}_{\text{\sc QMC}}\xspace(\mathcal G_\rho^n) \geq \tfrac{1}{4} - \tfrac{3}{4} \rho - O(\sqrt{\log n/n}).$$ \end{lemma} \begin{proof} Consider the feasible solution $f_\mathrm{ident}(x) = x/\Vert x \Vert$ to the \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP. It has value \begin{equation}\label{eq:value-of-ident} \E_{\bx \sim_{\rho} \by}\left[ \frac{1}{4} - \frac{3}{4}\left\langle \frac{\bx}{\Vert \bx \Vert}, \frac{\by}{\Vert \by \Vert}\right\rangle\right] = \frac{1}{4} - \frac{3}{4}\E_{\bx \sim_{\rho} \by}\left\langle \frac{\bx}{\Vert \bx \Vert}, \frac{\by}{\Vert \by \Vert}\right\rangle. \end{equation} Intuitively, we expect $\langle \bx/ \Vert \bx \Vert, \by/ \Vert \by \Vert \rangle$ to roughly be equal to~$\rho$, at least when~$n$ is large. Formally, we will use the inequality \begin{equation*} \E_{\bx \sim_{\rho} \by}\left\langle \frac{\bx}{\Vert \bx \Vert}, \frac{\by}{\Vert \by \Vert}\right\rangle \leq \rho + O(\sqrt{\log n/n}), \end{equation*} which was shown in the proof of~\cite[Theorem~$4.3$]{OW08}. This implies that \begin{equation*} \eqref{eq:value-of-ident} \geq \tfrac{1}{4} - \tfrac{3}{4} \rho - O(\sqrt{\log n/n}). \end{equation*} As this lower-bounds the value of $f_{\mathrm{ident}}$, it also lower-bounds the value of the SDP. \end{proof} An essentially identical proof also yields the following lemma, which gives a lower-bound for the product state SDP. 
\begin{lemma}[Product State SDP Lower Bound] $$\text{\sc SDP}_{\text{\sc Prod}}\xspace(\mathcal{G}_\rho^n) \geq \tfrac{1}{4} - \tfrac{1}{4} \rho - O(\sqrt{\log n/n}).$$ \end{lemma} On the other hand, assuming \Cref{conj:vector-borell-intro}, the optimal product state assignment is given by $f_{\mathrm{opt}}(x) = x_{\leq 3} / \Vert x_{\leq 3} \Vert$, and so by~\Cref{prop:opt-formula} the product state value can be computed exactly as \begin{equation*} \text{\sc Prod}\xspace(\mathcal{G}^n_\rho) = \tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho). \end{equation*} Moreover, $\mathcal{G}_\rho^n$ is a weighted, regular graph of infinite degree. Thus, we should now be able to apply \Cref{thm:BH} (or one of its nonuniform analogues) to show that its maximum energy is exactly equal to its product state value. Strictly speaking, the maximum energy $\text{\sc QMax-Cut}\xspace(\mathcal{G}^n_\rho)$ is not well-defined because $\mathcal{G}^n_\rho$ is an infinite graph. However, we will define $\text{\sc QMax-Cut}\xspace(\mathcal{G}^n_\rho)$ to be $\text{\sc Prod}\xspace(\mathcal{G}^n_\rho)$ and show in the following section that these quantities are indeed approximately equal in the discretized graph. \subsection{Discretizing the Gaussian graph} The following lemma shows that $\mathcal{G}^n_\rho$ can be discretized with a negligible loss in value. The proof is given in \Cref{sec:card-reduc}. \begin{lemma}[Graph Discretization]\label{lem:discretization} Let $G = \mathcal G^n_\rho$ be the $\rho$-correlated Gaussian graph. Then for every $\eps > 0$, there exists a finite, weighted graph~$G'$ such that \begin{IEEEeqnarray*}{rClrCl} \text{\sc SDP}_{\text{\sc QMC}}\xspace(G') &\geq& \text{\sc SDP}_{\text{\sc QMC}}\xspace(G) - \eps, \quad&\quad\text{\sc QMax-Cut}\xspace(G') &\leq& \text{\sc QMax-Cut}\xspace(G) + \eps, \\ \text{\sc SDP}_{\text{\sc Prod}}\xspace(G') &\geq& \text{\sc SDP}_{\text{\sc Prod}}\xspace(G) - \eps, \quad&\quad \text{\sc Prod}\xspace(G') &\leq& \text{\sc Prod}\xspace(G) + \eps. 
\end{IEEEeqnarray*} \end{lemma} With the instance in hand, we now prove that it yields our desired integrality gaps. \begin{proof}[Proof of \Cref{thm:integrality-gap-heisenberg-restated,thm:integrality-gap-prod-restated}] We start with the \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP. Assume \cref{conj:vector-borell-intro}. Then combining \Cref{lem:heis-sdp-lower,lem:discretization} and taking the dimension~$n$ suitably large, there exists a graph~$G$ such that \begin{equation*} \frac{\text{\sc QMax-Cut}\xspace(G)}{\text{\sc SDP}_{\text{\sc QMC}}\xspace(G)} \leq \frac{\tfrac{1}{4} - \tfrac{1}{4}F^*(3,\rho)}{\tfrac{1}{4} -\tfrac{3}{4}\rho} + \eps, \end{equation*} for each $\eps > 0$. Taking the infimum over~$\eps$, the integrality gap of $\text{\sc SDP}_{\text{\sc QMC}}\xspace$ is at most \begin{equation*} \frac{\tfrac{1}{4} - \tfrac{1}{4}F^*(3,\rho)}{\tfrac{1}{4} - \tfrac{3}{4}\rho}. \end{equation*} This is minimized by $\rho = \rho_{\mathrm{GP}}$, in which case it is equal to $\alpha_{\mathrm{GP}}$. Hence, the integrality gap of the \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP is at most~$\alpha_{\mathrm{GP}}$. On the other hand, the integrality gap is at least~$\alpha_{\mathrm{GP}}$ because the GP algorithm shows there always exists a solution of value at least $\alpha_{\mathrm{GP}} \cdot \text{\sc SDP}_{\text{\sc QMC}}\xspace(G)$. As a result, the integrality gap is exactly~$\alpha_{\mathrm{GP}}$, concluding the proof. The case of the product state SDP follows by a similar argument. \end{proof} \section{Algorithmic gap for the BOV algorithm}\label{sec:algorithmic-gap} The main goal of this section is to prove \Cref{thm:algo-gap}, which we restate here. We note that it does not require assuming \cref{conj:vector-borell-intro}. \begin{theorem}[Algorithmic gap for product state SDP; \Cref{thm:algo-gap} restated]\label{thm:algo-gap-restated} The Briët-Oliveira-Vallentin algorithm has algorithmic gap $\alpha_{\mathrm{BOV}}$. 
\end{theorem} Recall that the algorithmic gap is the quantity $$\inf_{\text{graphs }G}\left\{\frac{A_{\mathrm{BOV}}(G)}{\text{\sc Prod}\xspace(G)}\right\},$$ where $A_{\mathrm{BOV}}(G)$ is the average value of the product state output by the BOV algorithm on graph~$G$. This is at least~$\alpha_{\mathrm{BOV}}$ by~\cref{thm:bov-thm}, and so we need to show that it is also at most~$\alpha_{\mathrm{BOV}}$, which entails finding a graph~$G$ in which the BOV algorithm outputs a solution of value $\alpha_{\mathrm{BOV}} \cdot \text{\sc Prod}\xspace(G)$. Our construction is based on a classic algorithmic gap instance for the Goemans-Williamson SDP called the \emph{noisy hypercube graph}, essentially due to Karloff~\cite{Kar99} (cf.\ the exposition in \cite{OD08}). The BOV algorithm solves the product state SDP and rounds its solution using projection rounding. The SDP is only guaranteed to return \emph{some} optimal (or near-optimal) solution, but we are free to choose which of these optimal solutions to provide to the BOV algorithm (see \cite{OD08} for more details). As in the construction of the integrality gaps in \Cref{sec:integrality-gaps}, the algorithmic gap instance and the optimal SDP solution are motivated by the fact that the BOV algorithm performs worst on edges $(u, v)$ where the SDP vectors have inner product $\langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle = \rho_{\mathrm{BOV}}$. The graph is an analogue of the $\rho$-correlated sphere graph on the Boolean hypercube, known as the noisy hypercube. \begin{definition}[Noisy hypercube graph] Let $n$ be a positive integer and $-1 \leq \rho \leq 1$. We define the \emph{$\rho$-noisy hypercube} to be the graph $\mathcal{H}^n_\rho$ with vertex set $\{-1, 1\}^n$ in which a random edge $(\bx, \by)$ is distributed as two $\rho$-correlated Boolean strings. 
\end{definition} As defined, the noisy hypercube does not correspond to a legitimate \text{\sc Quantum} \text{\sc Max-Cut}\xspace instance, as it contains self-loops. For now, we will analyze the noisy hypercube as if this is not an issue, and we will remove the self-loops at the end of the section. The SDP solution we will consider is the ``identity solution'', i.e. the function $f_{\mathrm{ident}}:\{-1, 1\}^n \rightarrow S^{n-1}$ defined by $f_{\mathrm{ident}}(x) = \tfrac{1}{\sqrt{n}} x$ for each $x \in \{-1, 1\}^n$. The following three lemmas (\ref{prod-sdp-value}, \ref{lem:opt-sdp-value}, \ref{lem:proj-rounding-value}) establish an upper bound on $A_{\mathrm{BOV}}(\mathcal{H}^n_\rho)$. \begin{lemma}[Value of the SDP solution]\label{prod-sdp-value} The feasible SDP solution $f_{\mathrm{ident}}(x)$ has value $\tfrac{1}{4} - \tfrac{1}{4} \rho$. \end{lemma} \begin{proof} We compute \begin{equation*} \E_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$n$-dim Boolean strings}}}[\langle f_{\mathrm{ident}}(\bx), f_{\mathrm{ident}}(\by)\rangle] = \tfrac{1}{n}\E_{\bx, \by} \langle \bx, \by\rangle = \tfrac{1}{n} \sum_{i=1}^n \E_{\bx, \by}[\bx_i \by_i] = \rho. \end{equation*} As a result, the value of $f_{\mathrm{ident}}$ is \begin{equation*} \E_{\bx, \by}[\tfrac{1}{4} - \tfrac{1}{4}\langle f_{\mathrm{ident}}(\bx), f_{\mathrm{ident}}(\by)\rangle] = \tfrac{1}{4} - \tfrac{1}{4} \rho.\qedhere \end{equation*} \end{proof} Our next lemma shows that $f_{\mathrm{ident}}$ achieves the optimal SDP value, and so it is fair for the BOV algorithm to receive it as a solution to the product state SDP. \begin{lemma}[Value of the SDP]\label{lem:opt-sdp-value} The value of the SDP is $\text{\sc SDP}_{\text{\sc Prod}}\xspace(\mathcal{H}^n_\rho) = \tfrac{1}{4} - \tfrac{1}{4}\rho$. As a result, $f_{\mathrm{ident}}$ is an optimal SDP solution. 
\end{lemma} \begin{proof} The function $f_{\mathrm{ident}}$ is a feasible solution with value $\tfrac{1}{4}-\tfrac{1}{4}\rho$, and so it suffices to show that $\text{\sc SDP}_{\text{\sc Prod}}\xspace(\mathcal{H}^n_\rho) \leq \tfrac{1}{4} - \tfrac{1}{4}\rho$. This entails showing that \begin{equation*} \E_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$n$-dim Boolean strings}}}[\tfrac{1}{4} - \tfrac{1}{4}\langle f(\bx), f(\by)\rangle] \leq \tfrac{1}{4} - \tfrac{1}{4}\rho, \end{equation*} for all functions $f:\{-1, 1\}^n \rightarrow S^{N-1}$, where $N = 2^n$ is the number of vertices in $\mathcal{H}^n_\rho$. Equivalently, we will show $\E_{\bx, \by}\langle f(\bx), f(\by)\rangle \geq \rho$ for all such~$f$. Write $f = (f_1,\dots,f_N)$ where $f_i : \{-1,1\}^n \rightarrow \mathbb R$ for each~$i$. Then, \begin{equation*} \E_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$n$-dim Boolean strings}}}\langle f(\bx), f(\by)\rangle = \sum_{i=1}^N \E_{\bx, \by} [f_i(\bx) f_i(\by)] \geq \sum_{i=1}^N \rho \cdot \E_{\bx} [f_i(\bx)^2] = \rho \cdot \E_{\bx}\left[\sum_{i=1}^N f_i(\bx)^2\right] = \rho. \end{equation*} The inequality here is due to \Cref{prop:stab-bound}, whose proof we defer to the appendix. This completes the proof. \end{proof} Given $f_{\mathrm{opt}} = f_{\mathrm{ident}}$, the BOV algorithm performs projection rounding and outputs the resulting product state. The following lemma bounds the value of the output. \begin{lemma}[Value of the projection rounding]\label{lem:proj-rounding-value} Given the SDP solution $f_{\mathrm{ident}}$, projection rounding will produce a random product state whose average value is at most \begin{equation*} \tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + O(\sqrt{\log(n)/n}).
\end{equation*} \end{lemma} \begin{proof} By the Chernoff bound, \begin{equation}\label{eq:totally-sick-chernoff-bound} \Pr_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$n$-dim Boolean strings}}}[\langle \bx, \by\rangle \neq \rho \cdot n \pm O(\sqrt{n \log n})] \leq O(1/\sqrt{n}). \end{equation} Let $(x, y)$ be an edge in $\mathcal{H}^n_\rho$ such that $\langle x, y\rangle = \rho \cdot n \pm O(\sqrt{n \log n})$. Then \begin{equation*} \rho_{x, y} := \langle f_{\mathrm{ident}}(x), f_{\mathrm{ident}}(y)\rangle = \rho \pm O(\sqrt{\log n/n}). \end{equation*} On this edge, the random product state produced by projection rounding has average value $\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho_{x, y})$ due to \Cref{thm:exact-formula-for-average-inner-product}. Since $\rho_{x, y}$ is within $O(\sqrt{\log n / n})$ of~$\rho$, and $F^*(3, \cdot)$ is Lipschitz by \Cref{lem:inner_prod_lipschitz}, we can bound the average value of this edge by \begin{equation*} \tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + O(\sqrt{\log n/n}). \end{equation*} As for the remaining edges, we can trivially bound the average value of each by~$1$, and this contributes an extra $O(1/\sqrt{n})$ to the total value of the product state due to \Cref{eq:totally-sick-chernoff-bound}. \end{proof} Now, we compute the optimal product state value for $\mathcal{H}^n_\rho$. To do so, we consider a family of product state solutions $f:\{-1, 1\}^n \rightarrow S^2$ we call \emph{embedded dictators}, in which there exists an $i \in [n]$ such that $f(x) = (x_i, 0, 0)$ for all $x$. \begin{lemma}[Value of embedded dictators]\label{lem:embedded-dictators} Embedded dictators $f(x) = (x_i, 0, 0)$ achieve value $\tfrac{1}{4} - \tfrac{1}{4}\rho$. Hence, the product state value of $\mathcal{H}_\rho^n$ is $\text{\sc Prod}\xspace(\mathcal{H}_\rho^n) = \tfrac{1}{4}-\tfrac{1}{4}\rho$.
\end{lemma} \begin{proof} Consider the product state corresponding to the function $f_{\mathrm{dict}}:\{-1,1\}^n \rightarrow S^2$ given by $f_{\mathrm{dict}}(x) = (x_i, 0, 0)$. It has value \begin{equation*} \tfrac{1}{4} - \tfrac{1}{4}\E_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$n$-dim Boolean strings}}}[\langle f_{\mathrm{dict}}(\bx), f_{\mathrm{dict}}(\by)\rangle] = \tfrac{1}{4} - \tfrac{1}{4}\E_{\bx \sim_\rho \by}[\bx_i \by_i] = \tfrac{1}{4} - \tfrac{1}{4} \rho. \end{equation*} This shows the product state value is at least $\tfrac{1}{4} - \tfrac{1}{4} \rho$. It is also at most $\tfrac{1}{4} - \tfrac{1}{4} \rho$ because this is the value of the product state SDP, which is an upper bound on the product state value. \end{proof} \begin{proof}[Proof of \Cref{thm:algo-gap-restated}] To begin, we will remove the self-loops from $\mathcal{H}^n_\rho$, which have total weight \begin{equation*} w_{\mathrm{loops}} = \Pr_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$n$-dim Boolean strings}}}[\bx = \by] = \left(\tfrac{1}{2} + \tfrac{1}{2}\rho\right)^n. \end{equation*} Let $\calH'$ be the graph with vertex set $\{-1, 1\}^n$ in which a random edge $(\bx, \by)$ is distributed as two $\rho$-correlated Boolean strings, conditioned on $\bx \neq \by$. Then for each edge $(x, y)$, if $w(x, y)$ is its weight in $\mathcal{H}^n_\rho$ and $w'(x, y)$ is its weight in $\calH'$, we have that $w'(x, x) = 0$, and \begin{equation*} w'(x, y) = \frac{1}{1 - w_{\mathrm{loops}}} \cdot w(x, y) \end{equation*} for $x \neq y$. Consider an SDP solution $f:\{-1, 1\}^n \rightarrow S^{N-1}$. It has value~$0$ on each self-loop in $\mathcal{H}^n_\rho$. As a result, if it has value~$\nu$ in $\mathcal{H}^n_\rho$, then it has value $\nu / (1 - w_{\mathrm{loops}})$ in $\calH'$. This argument applies to the value of product states as well. In summary, all values are ``scaled up'' by a factor of $1/(1-w_{\mathrm{loops}})$ in $\calH'$. 
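The edge weights used here are easy to check by brute force for small $n$. The following Python sketch (purely illustrative, and not part of the proof; the helper names are ours, not notation from the text) enumerates the $\rho$-correlated edge distribution of $\mathcal{H}^n_\rho$ and confirms that the self-loop mass is $(\tfrac{1}{2} + \tfrac{1}{2}\rho)^n$ and that $\E\langle \bx, \by\rangle = \rho \cdot n$.

```python
from itertools import product

def edge_weight(x, y, rho):
    # Probability of drawing the ordered pair (x, y) as rho-correlated
    # n-bit strings: coordinates are independent, and each coordinate
    # pair (x_i, y_i) agrees with probability (1 + rho) / 2.
    w = 1.0
    for xi, yi in zip(x, y):
        w *= (1 + rho) / 4 if xi == yi else (1 - rho) / 4
    return w

def self_loop_mass(n, rho):
    # Total weight of self-loops, w_loops = Pr[x = y] = ((1 + rho) / 2)^n.
    cube = list(product([-1, 1], repeat=n))
    return sum(edge_weight(x, x, rho) for x in cube)

def expected_inner_product(n, rho):
    # E<x, y> over a random edge; should equal rho * n.
    cube = list(product([-1, 1], repeat=n))
    return sum(edge_weight(x, y, rho) * sum(a * b for a, b in zip(x, y))
               for x in cube for y in cube)
```

For $n = 3$ and $\rho = -1/2$, for instance, the self-loop mass comes out to $(1/4)^3 = 1/64$, matching the formula for $w_{\mathrm{loops}}$ above.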
This implies that $f_{\mathrm{ident}}$ is still an optimal SDP solution, and by \Cref{lem:proj-rounding-value} and \Cref{lem:embedded-dictators}, the ratio of the average value of the resulting solution to $\text{\sc Prod}\xspace(\calH')$ is \begin{equation*} \frac{\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + O(\sqrt{\log(n)/n})}{\tfrac{1}{4} - \tfrac{1}{4}\rho}. \end{equation*} Taking the infimum over $n$, we can upper bound the algorithmic gap by \begin{equation*} \frac{\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho)}{\tfrac{1}{4} - \tfrac{1}{4}\rho}. \end{equation*} This is minimized at $\rho = \rho_{\mathrm{BOV}}$, in which case it is $\alpha_{\mathrm{BOV}}$, matching the approximation ratio of the BOV algorithm. \end{proof} \ignore{ \section{A dictator test for the product state value} Now we show that the noisy hypercube serves as a dictatorship test for functions of the form $f:\{-1, 1\}^n \rightarrow B^k$. Informally, this means that if $f$ is an embedded dictator, it should have high value, and if it is ``far'' from a dictator (in the sense that it has no ``notable'' input coordinates), then it should have low value. This will be an important ingredient in our Unique-Games hardness proof in~XXX. We have already shown that embedded dictators achieve value $\tfrac{1}{4} - \tfrac{1}{4} \rho$ on the noisy hypercube in~XXX. Now we will upper-bound the value that functions ``far'' from dictators achieve. We will show that their value, up to small error, is at most the optimum product state value on the Gaussian graph $\mathcal{G}^n_\rho$, which we have shown to be $\tfrac{1}{4} - \tfrac{1}{4} F^*(k, \rho)$. \begin{theorem}[Dictatorship test soundness]\label{def:embedded-dic} Let $-1 < \rho \leq 0$. Then for any $\epsilon > 0$, there exists a small enough $\delta = \delta(\epsilon, \rho) > 0$ and large enough $m = m(\epsilon, \rho) \geq 0$ such that the following is true. Let $f: \{-1, 1\}^n \rightarrow B^k$ satisfy $\Inf_i^{\leq m}[f] \leq \delta$ for all $i \in [n]$.
Then $\mathbf{Stab}_\rho[f] \geq F^*(k, \rho) - \eps$. In other words, the value of~$f$ on the noisy hypercube $\mathcal{H}^n_\rho$ is at most $ \tfrac{1}{4} - \tfrac{1}{4} F^*(k, \rho) + \eps. $ \end{theorem} The $k = 1$ case is the Majority is Stablest theorem of~\cite{MOO10}, which serves as the soundness case for the $\text{\sc Max-Cut}\xspace$ dictatorship test; our theorem generalizes Majority is Stablest to larger values of~$k$. The proof follows the same outline as the proof of Majority is Stablest: we apply an ``invariance principle'' to exchange $f$'s Boolean inputs with Gaussians of the same mean and variance. Then we use our results on the noise stability of functions in Gaussian space to upper-bound the value of~$f$. The invariance principle we will use is the following one due to Isaksson and Mossel~\cite{IM12}, which applies to vector-valued functions. \begin{theorem}[Vector-valued invariance principle \cite{IM12}]\label{thm:vector-invariance} Fix $\tau, \gamma \in (0,1)$ and set $d = \tfrac{1}{18}\log\tfrac{1}{\tau} / \ln(2)$. Let $f = (f_1,\dots,f_k)$ be a $k$-dimensional multilinear polynomial such that $\Var[f_j] \leq 1$, $\Var[f_j^{> d}] < (1-\gamma)^{2d}$, and $\Inf^{\leq d}_i[f_j] \leq \tau$ for each $j \in [k]$ and $i \in [n]$. Let $\bx$ be a uniformly random string over $\{-1,1\}^n$ and $\by$ be an $n$-dimensional standard Gaussian random variable. Furthermore, let $\Psi : \mathbb R^k \rightarrow \mathbb R$ be Lipschitz continuous with Lipschitz constant $A$. Then, $$|\E[\Psi(f(\bx))] - \E[\Psi(f(\by))]| \leq C_kA \tau^{\gamma/(18 \ln 2)}$$ where $C_k$ is a parameter depending only on $k$. \end{theorem} \begin{proposition}\label{prop:zeta-lipschitz} Let $\zeta:\R^k\rightarrow \R$ be the function defined by \begin{equation*} \zeta(y) = \left\{\begin{array}{cl} 0 & \text{if $\Vert y \Vert_2 \leq 1$,}\\ \Vert y - \tfrac{y}{\Vert y \Vert} \Vert_2 & \text{otherwise,} \end{array}\right.
\end{equation*} which computes the distance of a point $y \in \mathbb R^k$ from $B^{k}$. Then $\zeta$ is $1$-Lipschitz. \end{proposition} \begin{proof} It suffices to show that $\zeta(y + z) \leq \zeta(y) + \Vert z \Vert_2$ for all $y, z \in \R^k$. This is clearly true if $\Vert y + z \Vert_2 \leq 1$, as $\zeta(y+z) = 0$ in that case. On the other hand, when $\Vert y + z \Vert_2 \geq 1$, \begin{equation*} \zeta(y+z) = \Vert y + z - \tfrac{y+z}{\Vert y+z \Vert} \Vert_2 = \Vert y + z \Vert_2 - 1 \leq \Vert y \Vert_2 + \Vert z \Vert_2 - 1 \leq \zeta(y) + \Vert z \Vert_2, \end{equation*} where the last step uses $\Vert y \Vert_2 - 1 \leq \zeta(y)$, which holds whether or not $y$ lies in $B^k$. \end{proof} \begin{theorem}[Vector Valued Majority is Stablest]\label{thm:vec-mis} Fix $\rho \in (-1,0]$. Then for any $\epsilon > 0$, there exists small enough $\delta = \delta(\epsilon, \rho)$ and large enough $k = k(\epsilon, \rho) \geq 0$ such that if $f : \{-1,1\}^n \rightarrow B^{m-1}$ is any function satisfying $$\Inf^{\leq k}_i[f] = \sum_{j=1}^m \Inf^{\leq k}_i[f_j] \leq \delta \text{ for all }i = 1, \dots, n$$ then $$\E_{x \sim_\rho y} [\langle f(x), f(y)\rangle] = \mathbf{Stab}_\rho[f] \geq F^*(m,\rho) - \epsilon.$$ \end{theorem} \begin{proof} Throughout, we'll use $\bx$ to denote a string in $\{-1,1\}^n$ and $\by$ to denote a vector in $\mathbb R^n$. Since the statement of the vector-valued invariance principle (\Cref{thm:vector-invariance}) requires a function with low high-degree variance, we consider $g = \mathbf{T}_{1-\gamma} f$. Then for each $j \in [m]$, $$\Var[g_j^{\geq d}] = \sum_{|S| \geq d} (1-\gamma)^{2|S|}\widehat {f}_j(S)^2 \leq (1-\gamma)^{2d} \Var[f_j^{\geq d}] \leq (1-\gamma)^{2d}.$$ Also, $\Inf_i[g_j] \leq \Inf_i[f_j]$. Next, we bound the error in the quantity $\mathbf{Stab}_\rho[f]$ when we consider $g$ in place of $f$.
\begin{align} \ABS{\mathbf{Stab}_\rho[f] - \mathbf{Stab}_\rho[g]} &= \ABS{\sum_S \rho^{|S|} \norm{\widehat{f}(S)}_2^2 - \sum_S (\rho(1-\gamma)^2)^{|S|} \norm{\widehat{f}(S)}_2^2}\nonumber\\ &\leq \sum_S \ABS{\rho^{|S|}\big(1 - (1-\gamma)^{2|S|}\big)} \norm{\widehat{f}(S)}_2^2\nonumber\\ &\leq |\rho|(1-(1-\gamma)^2) \sum_S\norm{\hat f(S)}_2^2\nonumber\\ &\leq |\rho|(1-(1-\gamma)^2) \E[\norm{f}_2^2]\label{eq:bound1} \end{align} Since $f$ has range $B^{m-1}$, we have $\E[\norm{f}_2^2] \leq 1$. By choosing $\gamma(\epsilon, \rho)$ to be sufficiently small we can bound this quantity by $\epsilon/2$. Next, since $\mathbf{Stab}_{\rho}$ is a function of the coefficients of the polynomial corresponding to $f$, we have that $\mathbf{Stab}_\rho[g(\bx)] = \mathbf{Stab}_\rho[g(\by)]$. However, although $g(x) \in B^{m-1}$ for all $x \in \{-1, 1\}^n$ (since $f(x) \in B^{m-1}$ by assumption), $g(y)$ may take values outside the unit ball for $y \in \R^n$. This prevents us from directly applying \Cref{thm:k-dim-borell} to~$g$. We'll instead apply the theorem to the function $g':\R^n \rightarrow B^{m-1}$ defined as $$g'(y) = \left\{\begin{array}{cl} g(y) & \text{if $g(y) \in B^{m-1}$,}\\ \frac{g(y)}{\NORM{g(y)}} & \text{otherwise.} \end{array}\right.$$ Applying \Cref{thm:k-dim-borell} yields that $\mathbf{Stab}_{\rho}[g'] \geq F^\star(m,\rho)$. It remains to bound $\ABS{\mathbf{Stab}_{\rho}[g] - \mathbf{Stab}_{\rho}[g']}$.
\begin{align*} \ABS{\mathbf{Stab}_\rho[g] - \mathbf{Stab}_\rho[g']} &= \ABS{\langle g, \mathbf{T}_\rho g\rangle - \langle g', \mathbf{T}_\rho g'\rangle}\\ &= \ABS{\langle g, \mathbf{T}_\rho g\rangle - \langle g', \mathbf{T}_\rho g\rangle + \langle g', \mathbf{T}_\rho g\rangle - \langle g', \mathbf{T}_\rho g'\rangle}\\ &= \ABS{\langle g - g', \mathbf{T}_\rho g\rangle +\langle g', \mathbf{T}_\rho g - \mathbf{T}_\rho g'\rangle}\\ &\leq \ABS{\langle g - g', \mathbf{T}_\rho g\rangle} +\ABS{\langle g', \mathbf{T}_\rho g - \mathbf{T}_\rho g'\rangle}\\ &\leq \norm{g - g'}_2\norm{\mathbf{T}_\rho g}_2 + \norm{g'}_2\norm{\mathbf{T}_\rho g - \mathbf{T}_\rho g'}_2 \tag{by Cauchy-Schwarz}\\ &\leq \norm{g - g'}_2\norm{g}_2 + \norm{g'}_2\norm{g - g'}_2 \tag{because $\mathbf{T}_\rho$ is a contraction}\\ &= \norm{g - g'}_2(\norm{g}_2 + \norm{g'}_2) \\ &\leq 2\norm{g - g'}_2. \tag{because $\Vert g' \Vert_2 \leq \Vert g \Vert_2 \leq 1$} \end{align*} Now we just need to bound $\norm{g - g'}_2$ by $\epsilon/4$. Note that $\norm{g - g'}_2^2 = \E[\zeta(g(\by))]$, where $\zeta$ is the 1-Lipschitz function from \Cref{prop:zeta-lipschitz}. Let $\delta = (\epsilon^2/(36\cdot C_m))^{18/\gamma}$. We can now apply \Cref{thm:vector-invariance} with $\Psi = \zeta$ and $\tau = \delta$, which yields $$\E_{\by}[\zeta(g(\by))] = |\E_{\bx}[\zeta(g(\bx))] - \E_{\by}[\zeta(g(\by))]| \leq \epsilon^2/36,$$ using that $\zeta(g(\bx)) = 0$ for every $\bx \in \{-1,1\}^n$. Then, $\norm{g-g'}_2 \leq \sqrt{\epsilon^2/36} = \epsilon/6$ and $\ABS{\mathbf{Stab}_\rho[g] - \mathbf{Stab}_\rho[g']} \leq \epsilon/2$. Combining this with our bound of $\epsilon/2$ for \Cref{eq:bound1} concludes the proof.
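Since $\zeta$ (equivalently, $\max(\Vert y \Vert_2 - 1,\, 0)$) does the real work in the bound above, a quick numerical spot-check of \Cref{prop:zeta-lipschitz} may be reassuring. The Python sketch below is an illustration only, not a substitute for the proof; the closed form and the random-sampling check are our own conveniences.

```python
import math, random

def zeta(y):
    # Distance from y to the unit ball B^k: zero inside the ball,
    # and ||y|| - 1 = || y - y/||y|| || outside it.
    norm = math.sqrt(sum(t * t for t in y))
    return max(norm - 1.0, 0.0)

def check_lipschitz(k=3, trials=1000, seed=0):
    # Spot-check |zeta(a) - zeta(b)| <= ||a - b|| on random pairs in R^k.
    rng = random.Random(seed)
    for _ in range(trials):
        a = [rng.uniform(-2, 2) for _ in range(k)]
        b = [rng.uniform(-2, 2) for _ in range(k)]
        dist = math.sqrt(sum((s - t) ** 2 for s, t in zip(a, b)))
        assert abs(zeta(a) - zeta(b)) <= dist + 1e-12
    return True
```

The 1-Lipschitz property is immediate from this closed form, since the Euclidean norm itself is 1-Lipschitz and $\max(\cdot - 1, 0)$ only contracts differences.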
\end{proof} \section{Unique Games hardness of the quantum Heisenberg model} \begin{definition}[Unique Label Cover]\label{def:unique-games} The Unique Label Cover problem, $\mathcal L(V, W, E, [M], \{\sigma_{v,w}\}_{(v,w) \in E})$, is defined as follows: the input is a bipartite graph with left vertex set $V$, right vertex set $W$, and edge set $E$. The goal is to assign one `label' to every vertex of the graph, where $[M]$ is the set of allowed labels. The labeling must satisfy constraints given by bijective maps $\sigma_{v,w} : [M] \rightarrow [M]$; there is one such map for every edge $(v,w) \in E$. A labeling $L : V \cup W \rightarrow [M]$ `satisfies' an edge $(v,w)$ if $$\sigma_{v,w}(L(w)) = L(v).$$ \end{definition} \begin{conjecture}[Unique Games Conjecture \cite{Kho02}] For any $\eta, \gamma > 0$, there exists a constant $M = M(\eta, \gamma)$ such that it is $\NP$-hard to distinguish whether a Unique Label Cover problem with label set size $M$ has optimum at least $1 - \eta$ or at most $\gamma$. \end{conjecture} Our hardness result is stated as follows. \begin{theorem}[UG-Hardness of Approximating Quantum \text{\sc Max-Cut}\xspace]\label{thm:ug-hardness} For any $\rho \in [-1,0)$ and $\epsilon > 0$, there exists an instance of the Heisenberg Hamiltonian such that deciding whether the maximum energy is greater than $\frac{1- \rho}{4} - \epsilon$ or less than $\frac{1 - F^\star(3, \rho)}{4} + \epsilon$ is Unique Games-hard. In more standard notation, we say that it is UG-hard to $(\frac{1 - F^\star(3,\rho)}{4} + \epsilon, \frac{1- \rho}{4} - \epsilon)$-approximate Quantum \text{\sc Max-Cut}\xspace. \end{theorem} \begin{remark} $\frac{1 - F^\star(3,\rho)}{4}$ is exactly the product state value obtained by the rounding algorithm of \cite{GP19} on the integrality gap instance in \Cref{sec:integrality-gaps}.
\end{remark} \begin{remark} Minimizing the ratio $\frac{1-\rho}{4}/\frac{1-F^\star(3,\rho)}{4}$ with respect to $\rho$ yields $\alpha_\mathrm{BOV}$, which shows \Cref{thm:main-inapprox}. \end{remark} \begin{theorem}\label{thm:quant-ug-hardness} Fix a CSP over domain $S^{m-1}$ with predicate set $\Psi$. Suppose there exists an $(\alpha,\beta)$-Embedded-Dictator-vs.-No-Notables test using predicate set $\Psi$. Then for all $\delta > 0$, it is ``UG-hard'' to $(\alpha+\delta,\beta- \delta)$-approximate $\text{\sc Max-CSP}\xspace(\Psi)$.\footnote{For simplicity, we assume each constraint in $T$ has width 2, but we could easily extend this to width $c$ constraints.} \end{theorem} \begin{proof} The proof is by reduction from the Unique Games problem. Pick constants $\eta, \gamma$ and $M = M(\eta, \gamma)$ satisfying the Unique Games Conjecture. Let $\mathcal L(U, V, E, [M], \{\pi_{u,v}\}_{(u,v) \in E})$ be a Unique Games instance. The reduction produces a Heisenberg Hamiltonian instance with graph~$G$ whose vertex set is $U \times \{-1, 1\}^M$. A random edge in $G$ is sampled as follows: pick~$\bu \in U$ uniformly at random, and sample two uniformly random neighbors $\bv, \bv' \sim N(\bu)$ independently. Let $\bx$ and $\by$ be $\rho$-correlated $M$-dimensional Boolean strings. Output the edge between $(\bv, \bx \circ \pi_{\bu, \bv})$ and $(\bv', \by \circ \pi_{\bu, \bv'})$, where $w \circ \sigma$ is the string in which $(w \circ \sigma)_i = w_{\sigma(i)}$. A product state assignment to~$G$ corresponds to a function $f_v:\{-1, 1\}^M \rightarrow S^2$ for each $v \in V$. It has value \begin{equation*} \E_{\bu \sim U}\E_{\bv, \bv' \sim N(\bu)} \E_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$M$-dim Boolean strings}}} \left[\tfrac{1}{4} - \tfrac{1}{4}\langle f_{\bv}(\bx \circ \pi_{\bu, \bv}), f_{\bv'}(\by \circ \pi_{\bu, \bv'})\rangle\right].
\end{equation*} \textit{Completeness.} Assume $\mathcal L$ has a labeling $L:U \cup V \rightarrow [M]$ satisfying more than $(1- \eta)$-fraction of the edges. For each $v \in V$, let $f_v(x) = (x_{L(v)}, 0, \dots, 0)$. To analyze the performance of~$f$, let us first fix a vertex $u \in U$ and two neighbors $v, w \in N(u)$, and condition on the case that $L$ satisfies both edges $(u, v)$ and $(u, w)$. This means that $\pi_{v\rightarrow u}(L(v)) = L(u) = \pi_{w\rightarrow u}(L(w))$. Thus, for each $x, y \in \{-1, 1\}^M$, \begin{align*} f_v(x \circ \pi_{v\rightarrow u}) &= ((x \circ \pi_{v \rightarrow u})_{L(v)}, 0, 0) = (x_{\pi_{v \rightarrow u}(L(v))},0,0) = (x_{L(u)},0,0), \end{align*} and similarly $f_w(y \circ \pi_{w \rightarrow u}) = (y_{L(u)}, 0, 0)$. As a result, the value of $f$ conditioned on $u$, $v$, and $w$ is \begin{align*} \E_{\bx, \by} \left[\tfrac{1}{4} - \tfrac{1}{4}\langle f_{v}(\bx \circ \pi_{v \rightarrow u}), f_{w}(\by \circ \pi_{w \rightarrow u})\rangle\right] =\E_{\bx, \by} \left[\tfrac{1}{4} - \tfrac{1}{4}\langle (\bx_{L(u)}, 0, 0), (\by_{L(u)}, 0, 0)\rangle\right], \end{align*} which is just the value of the $L(u)$-th embedded dictator on the noisy hypercube, i.e.\ $1/4 - 1/4 \rho$. Now we average over $\bu, \bv, \bw$. Because $\mathcal L$ is a biregular Unique Games instance, picking a random vertex $\bu \in U$ and neighbor $\bv \in N(\bu)$ is equivalent to picking a uniformly random edge from~$E$. Therefore, by the union bound, the probability that the assignment~$L$ satisfies both edges $(\bu, \bv)$ and $(\bu, \bw)$ is at least $1-2\eta$. As we have seen, conditioned on this event, the assignment~$f$ has value at least $1/4 - 1/4\rho$. As a result, we can lower-bound the value of~$f$ by \begin{equation*} (1-2\eta) \cdot (\tfrac{1}{4} - \tfrac{1}{4}\rho) \geq \tfrac{1}{4} - \tfrac{1}{4}\rho - \eta. \end{equation*} This completes the completeness case. \textit{Soundness.} We will show the contrapositive.
Suppose there is a product state assignment $\{f_v\}_{v \in V}$ to $G$ with value at least $\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + \eps$. We will use this to construct a randomized assignment $\bL:U \cup V \rightarrow [M]$ whose average value is at least~$\gamma$, which implies that the Unique Games instance has value at least~$\gamma$. For each $u \in U$, we define the function $g_u: \{-1, 1\}^M \rightarrow B^2$ as \begin{equation*} g_u(x) = \E_{\bv \sim N(u)}[ f_{\bv}(x \circ \pi_{\bv \rightarrow u}) ]. \end{equation*} Then we can rewrite the value of the assignment $\{f_v\}$ as \begin{align*} \E_{\bu} \E_{\bv, \bw \sim N(\bu)} \E_{\bx, \by}\left[\tfrac{1}{4} - \tfrac{1}{4}\langle f_{\bv}(\bx \circ \pi_{\bv \rightarrow \bu}), f_{\bw}(\by \circ \pi_{\bw \rightarrow \bu})\rangle\right] &= \E_{\bu} \E_{\bx, \by}[\tfrac{1}{4} - \tfrac{1}{4}\langle g_{\bu}(\bx), g_{\bu}(\by)\rangle]\\ &= \E_{\bu}[\tfrac{1}{4} - \tfrac{1}{4} \mathbf{Stab}_\rho[g_{\bu}]]. \end{align*} Since~$f$ has value at least $\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + \eps$, an averaging argument implies that at least an $\eps/2$ fraction of $u \in U$ satisfy $\tfrac{1}{4} - \tfrac{1}{4} \mathbf{Stab}_\rho[g_{u}] \geq \tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + \eps/2$. Rearranging, these $u$'s satisfy \begin{equation*} \mathbf{Stab}_\rho[g_u] \leq F^*(3, \rho) - 2 \eps. \end{equation*} We call any such~$u$ ``good''. We now apply the soundness of our dictatorship test to the good $u$'s: by XXX, any such $u$ has a ``notable'' coordinate, i.e.\ an $i$ such that $\Inf^{\leq m}_i[g_u] > \delta$. Our random assignment will then use this~$i$ as its label for the vertex~$u$: $\bL(u) = i$. (If~$u$ has multiple notable coordinates, then we pick one of these arbitrarily as the label for~$u$.) Next we'll need to obtain labels for the neighbors of~$u$.
We will use the condition that~$u$ is good to derive that many of~$u$'s neighbors~$v$ have notable coordinates. This requires relating the Fourier spectrum of $g_u$ to the Fourier spectra of the neighboring $f_v$'s. To begin, \begin{equation*} \chi_S(x \circ \pi_{v \rightarrow u}) = \prod_{i \in S}(x \circ \pi_{v \rightarrow u})_i = \prod_{i \in S} x_{\pi_{v\rightarrow u}(i)} = \prod_{j \in T} x_j = \chi_T(x), \end{equation*} where $T = \pi_{v \rightarrow u}(S) = \{\pi_{v\rightarrow u}(i) : i \in S\}$. As a result, \begin{equation*} f_v(x \circ \pi_{v \rightarrow u}) = \sum_{S \subseteq [M]} \widehat{f_v}(S) \chi_{S}(x \circ \pi_{v \rightarrow u}) = \sum_{T \subseteq [M]} \widehat{f_v}(\pi_{u \rightarrow v}(T)) \chi_T(x). \end{equation*} Averaging over all $\bv \in N(u)$, \begin{equation*} g_u(x) = \E_{\bv \sim N(u)}[ f_{\bv}(x \circ \pi_{\bv \rightarrow u}) ] = \sum_{T \subseteq [M]} \E_{\bv \sim N(u)}[\widehat{f_{\bv}}(\pi_{u \rightarrow \bv}(T))] \chi_T(x) = \sum_{T \subseteq [M]} \widehat{g_u}(T) \chi_T(x). \end{equation*} Hence, \begin{align*} \delta &< \Inf^{\leq m}_i[g_u]\\ &= \sum_{|T| \leq m: T \ni i} \Vert \widehat{g_u}(T) \Vert^2_2\\ &= \sum_{|T| \leq m: T \ni i} \left\Vert \E_{\bv \sim N(u)}[\widehat{f_{\bv}}(\pi_{u \rightarrow \bv}(T))] \right\Vert^2_2\\ & \leq \sum_{|T| \leq m: T \ni i} \E_{\bv \sim N(u)} \left\Vert \widehat{f_{\bv}}(\pi_{u \rightarrow \bv}(T)) \right\Vert^2_2\tag{by XXX}\\ & = \E_{\bv \sim N(u)}\left[\sum_{|T| \leq m: T \ni i} \left\Vert \widehat{f_{\bv}}(\pi_{u \rightarrow \bv}(T)) \right\Vert^2_2\right]\\ & = \E_{\bv \sim N(u)}[\Inf^{\leq m}_{\pi_{\bv \rightarrow u}(i)}[f_{\bv}]] \end{align*} By another averaging argument, a $\delta/2$-fraction of $u$'s neighbors~$v$ satisfy $\Inf^{\leq m}_{\pi_{v \rightarrow u}(i)}[f_v] \geq \delta/2$. We call these the ``good neighbors''. For each good neighbor~$v$, the set of possible labels $$S_v = \{j : \Inf^{\leq m}_j[f_v] \geq \delta/2 \}$$ is non-empty.
In addition, one of these labels~$j$ satisfies $j = \pi_{v \rightarrow u}(i)$. On the other hand, by \Cref{lem:influence-bound}, $|S_v| \leq XXX$ and so this set is not too large either. For each good neighbor, we assign the label $\bL(v)$ by picking a uniformly random $j \in S_v$. For all other vertices (i.e.\ those which are not good or good neighbors), we assign $\bL$ a random label. Now we consider the expected number of edges in $\mathcal{L}$ satisfied by $\bL$. Given a random edge $(\bu, \bv)$, the probability that $\bu$ is good is at least $\eps/2$; conditioned on this, the probability that $\bv$ is a good neighbor is at least $\delta/2$. Assuming both hold, since $S_{\bv}$ is of size at most XXX and contains one label equal to $\pi_{\bv \rightarrow \bu}(\bL(\bu))$, then $\bL$ satisfies the edge $(\bu, \bv)$ with probability at least XXX. In total, $\bL$ satisfies at least an $$XXX$$ fraction of the edges. Setting $\gamma =XXX$ concludes the proof. \end{proof} } \ignore{ \section{Gap Instances} In this section, we prove Theorems \ref{thm:integrality-gap-heisenberg}, \ref{thm:integrality-gap-prod}, and \ref{thm:algo-gap}. \subsection{Integrality Gaps}\label{sec:integrality-gaps} \sdpintgap* \prodintgap* Recalling \Cref{def:integrality-gap}, our goal is to bound $$\inf_{\text{instances }\mathcal I\text{ of }\mathcal P}\left\{\frac{\mathrm{OPT}(\mathcal I)}{\mathrm{SDP}(\mathcal I)}\right\}$$ where $\mathcal P$ is either the Heisenberg Hamiltonian SDP, or the product state SDP. To upper bound this quantity, we construct a specific instance $\mathcal I$, and give an upper bound for $\mathrm{OPT}(\mathcal I)$ and a lower bound for $\mathrm{SDP}(\mathcal I)$. Note that for product state approximation, we only optimize over product states, whereas for the general Heisenberg Hamiltonian we consider all quantum states.
However, the specific instance $\mathcal I$ we consider will correspond to a high-degree graph, and for such graphs Brand\~{a}o and Harrow \cite{BH16} show it suffices to consider product states. \begin{theorem}[Product-State Approximations \cite{BH16}] Let $G = (V,E)$ be a $D$-regular graph, and let $H$ be a Hamiltonian on $n = |V|$ qubits given by $$H = \E_{(i,j) \in E} H_{i,j} = \frac{2}{nD}\sum_{(i,j) \in E} H_{i,j}$$ where $H_{i,j}$ acts only on qubits $i,j$ and $\norm{H_{i,j}} \leq 1$. Then there exists a state $\ket \psi = \ket{\psi_1} \otimes \cdots \otimes \ket{\psi_n}$ such that $$\tr[H \psi] \geq \lambda_{\textrm{max}}(H) - O(D^{-1/3})$$ \end{theorem} Therefore, by considering Hamiltonians on graphs with sufficiently large degree, we have that $\text{\sc Prod}\xspace(G) \geq \mathrm{OPT}(G) - \epsilon$; hence, for both Theorems \ref{thm:integrality-gap-heisenberg} and \ref{thm:integrality-gap-prod}, it suffices to upper bound $\text{\sc Prod}\xspace(G)$ for the same instance $G$. Next, recall from equations \ref{eq:bov-ineq} and \ref{eq:gp-ineq} that the SDP values are quite similar, with just a multiplicative difference. As such, the analysis for the upper bound in both cases will be highly similar. The instance we construct bears resemblance to the integrality gap instance for the \text{\sc Max-Cut}\xspace SDP given by \cite{FS02}. In particular, we take our graph $G$ to correspond to all points of $S^{n-1}$, the unit sphere in $\mathbb R^n$, with edges drawn from a $\rho$-correlated multivariate Gaussian distribution. \begin{definition}[$\rho$-correlated Sphere Graph]\label{def:graph} The $\rho$-correlated sphere graph, denoted $\mathcal G^n_\rho$, has vertices on $S^{n-1}$ and edges $(\bx/||\bx||,\by/||\by||)$, where, recalling the notation from \Cref{sec:notation}, $\bx \sim_\rho \by$ are $\rho$-correlated $n$-dimensional Gaussians.
\end{definition} Although one might object that $\mathcal G^n_\rho$ is an infinite graph, the following lemma shows $\mathcal G^n_\rho$ can be discretized with a negligible loss in value. The proof is given in \Cref{sec:card-reduc}. \begin{lemma}[Graph Discretization]\label{lem:discretization} Let $G = \mathcal G^n_\rho$ be a $\rho$-correlated sphere graph such that $\mathrm{SDP}(G) \geq c$ and $\text{\sc Prod}\xspace(G) \leq s$. Then, there exists a finite, weighted graph $G'$ (with $n = 1/\epsilon^{O(d)}$ vertices) containing no self-loops and satisfying $\mathrm{SDP}(G') \geq c - \epsilon$ and $\text{\sc Prod}\xspace(G') \leq s + \epsilon$.\ynote{Do we need loopless?} \end{lemma} With the instance $\mathcal G^n_\rho$ in hand, we state our upper bound for $\text{\sc Prod}\xspace(\mathcal G^n_\rho)$. \begin{lemma}[Product State Value Upper Bound]\label{lem:prod-upper} Recall that a product state assignment to $G = \mathcal G^n_\rho$ is given by a function $f : S^{n-1} \rightarrow S^2$ and obtains value \begin{equation}\label{eq:prod-val} \text{\sc Val}\xspace_{G}(f) = \frac{1}{4}\PAREN{1-\E_{\bx \sim_\rho \by}\Inner{f\PAREN{\Normed{\bx}}, f\PAREN{\Normed{\by}}}}. \end{equation} Then, $$\text{\sc Prod}\xspace(G) = \max_{f : S^{n-1} \rightarrow S^2} \text{\sc Val}\xspace_{G}(f) \leq \text{\sc Val}\xspace_{G}(f_\mathrm{opt}).$$ \end{lemma} Setting $k = 3$ in the following theorem gives a characterization of $\text{\sc Val}\xspace_{\mathcal G^n_\rho}(f_\mathrm{opt})$ solely as a function of $\rho$. \begin{theorem}[\cite{BOV10}]\label{thm:hypergeometric} Let $u,v$ be unit vectors in $\mathbb R^n$ with $\langle u,v\rangle = \rho$ and let $\bZ \in \mathbb R^{k\times n}$ be a random matrix whose entries are distributed independently according to the standard normal distribution with mean $0$ and variance $1$.
Then, $$F^\star(k,\rho) := \E\left[\frac{\bZ u}{||\bZ u||}\cdot\frac{\bZ v}{||\bZ v||}\right] = \frac{2}{k}\left(\frac{\Gamma((k+1)/2)}{\Gamma(k/2)}\right)^2\rho \cdot {}_2F_1(1/2,1/2; k/2+1;\rho^2)$$ where ${}_2F_1$ is the hypergeometric function and $\Gamma$ is the standard gamma function. \end{theorem} Next, the following lemma gives a lower bound for the Heisenberg Hamiltonian SDP. \begin{lemma}[Heisenberg Hamiltonian SDP Lower Bound]\label{lem:heis-sdp-lower} The assignment $f_\mathrm{ident}(x) = x$ for $x \in S^{n-1}$ is a feasible solution to the Heisenberg Hamiltonian SDP on $\mathcal G^n_\rho$ and obtains value $\frac{1}{4}(1 -3\rho)$. Then, $$\frac{1}{4}(1-3\rho) \leq \text{\sc SDP}_{\text{\sc QMC}}\xspace(\mathcal G_\rho^n)$$ \end{lemma} An essentially identical proof also yields the following corollary, which gives a lower bound for the product state SDP. \begin{corollary}[Product State SDP Lower Bound]\label{lem:prod-sdp-lower} The assignment $f_\mathrm{ident}(x) = x$ for $x \in S^{n-1}$ is a feasible solution to the product state SDP on $\mathcal G^n_\rho$ and obtains value $\frac{1}{4}(1 -\rho)$. Then, $$\frac{1}{4}(1-\rho) \leq \text{\sc SDP}_{\text{\sc Prod}}\xspace(\mathcal G^n_\rho)$$ \end{corollary} Then, the proof of \Cref{thm:integrality-gap-heisenberg} follows simply from \Cref{lem:prod-upper} and \Cref{lem:heis-sdp-lower}. \begin{proof}[Proof of \Cref{thm:integrality-gap-heisenberg}] Combining \Cref{lem:prod-upper} and \Cref{lem:heis-sdp-lower} yields \begin{equation} \label{eq:prod-ratio} \frac{\text{\sc Prod}\xspace(\mathcal G_\rho^n)}{\text{\sc SDP}_{\text{\sc QMC}}\xspace(\mathcal G_\rho^n)} \leq \frac{\text{\sc Val}\xspace_{\mathcal G_\rho^n}(f_\mathrm{opt})}{\frac{1}{4}(1-3\rho)} = \frac{ \frac{1}{4}\PAREN{1 - F^\star(3,\rho)}}{\frac{1}{4}(1-3\rho)}\\ \end{equation} This quantity is minimized at $\rho = \rho_{\mathrm{GP}}$ and is equal to $\alpha_{\mathrm{GP}}$.
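The closed form in \Cref{thm:hypergeometric} is straightforward to evaluate numerically. The Python sketch below (illustrative only; the function names are ours) sums the Gauss series ${}_2F_1(a,b;c;z) = \sum_m \frac{(a)_m (b)_m}{(c)_m} \frac{z^m}{m!}$ directly, which converges for $|z| < 1$; as a sanity check, for $k = 1$ the formula reduces to the classical $\tfrac{2}{\pi}\arcsin\rho$.

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    # Gauss hypergeometric series sum_m (a)_m (b)_m / (c)_m * z^m / m!,
    # summed term by term; valid for |z| < 1.
    total, term = 0.0, 1.0
    for m in range(terms):
        total += term
        term *= (a + m) * (b + m) / ((c + m) * (m + 1)) * z
    return total

def F_star(k, rho):
    # E[<Zu/||Zu||, Zv/||Zv||>] for unit vectors with <u, v> = rho and
    # a random Gaussian matrix Z in R^{k x n}, via the closed form above.
    pref = (2.0 / k) * (math.gamma((k + 1) / 2) / math.gamma(k / 2)) ** 2
    return pref * rho * hyp2f1(0.5, 0.5, k / 2 + 1, rho ** 2)
```

Minimizing $\frac{1 - F^\star(3,\rho)}{1 - 3\rho}$ (resp.\ $\frac{1 - F^\star(3,\rho)}{1 - \rho}$) over a grid of $\rho \in (-1, 0]$ then gives a numerical approximation to $\alpha_{\mathrm{GP}}$ (resp.\ $\alpha_{\mathrm{BOV}}$).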
\end{proof} Finally, using $\frac{1}{4}(1-\rho)$ in place of $\frac{1}{4}(1-3\rho)$ yields a value of $\alpha_{\mathrm{BOV}}$, obtained at $\rho = \rho_{\mathrm{BOV}}$. To conclude, we give the proofs of \Cref{lem:prod-upper} and \Cref{lem:heis-sdp-lower}. \begin{proof}[Proof of \Cref{lem:prod-upper}] This case is a fairly straightforward consequence of \Cref{thm:k-dim-borell}, with $k = 3$. However, note that the theorem considers functions $f : \R^n \rightarrow B^3$, whereas we have $f : S^{n-1} \rightarrow S^2$. Observe that since $S^{n-1} \subseteq \R^n$ and $S^2 \subseteq B^3$, functions of the latter class are always members of the former. Therefore, \begin{align} \label{eq:stab-upper} \text{\sc Prod}\xspace(\mathcal G^n_\rho) = \max_{f : S^{n-1} \rightarrow S^2} \text{\sc Val}\xspace_{\mathcal G^n_\rho}(f) &\leq \max_{f : \R^n \rightarrow B^3} \frac{1}{4}\PAREN{1 - \E_{\bx\sim_\rho \by} \Inner{f(\bx), f(\by)}}\nonumber\\ &= \max_{f : \R^n \rightarrow B^3} \frac{1}{4}\PAREN{1 - \mathbf{Stab}_\rho[f]}\nonumber\\ &= \frac{1}{4}\PAREN{1 - \mathbf{Stab}_\rho[f_\mathrm{opt}]} \end{align} where (\ref{eq:stab-upper}) follows from the $\rho < 0$ case of \Cref{thm:k-dim-borell}. In order to show that this is in fact equal to $\text{\sc Val}\xspace_{\mathcal G^n_\rho}(f_\mathrm{opt})$, we need to restrict the domain of $f_\mathrm{opt}$ to $S^{n-1}$. Note that $$f_\mathrm{opt}\PAREN{\Normed{x}} = \Normed{\Pi^{(k)}\Normed{x}} = \frac{\frac{1}{\norm x}\Pi^{(k)}x}{\frac{1}{\norm x} \NORM{\Pi^{(k)}x}} = \Normed{\Pi^{(k)}x} = f_\mathrm{opt}(x).$$ Therefore, $\frac{1}{4}(1-\mathbf{Stab}_{\rho}[f_\mathrm{opt}]) = \text{\sc Val}\xspace_{\mathcal G^n_\rho}(f_\mathrm{opt})$ and we get the desired upper bound.
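Projection rounding through a random Gaussian matrix, as in \Cref{thm:hypergeometric}, is also easy to simulate. The following Monte-Carlo sketch is an illustration under our own conventions, not part of any proof: since only the plane spanned by $u$ and $v$ matters, we take $n = 2$, and we estimate $\E[\langle \bZ u/\Vert \bZ u\Vert, \bZ v/\Vert \bZ v\Vert\rangle]$ for $\langle u, v\rangle = \rho$, which should be close to $F^\star(3, \rho)$ when the number of samples is large.

```python
import math, random

def round_pair(rho, n=2, k=3, samples=20000, seed=1):
    # Unit vectors u, v in R^n with <u, v> = rho (n = 2 suffices).
    u = [1.0, 0.0]
    v = [rho, math.sqrt(1 - rho * rho)]
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        # Z has i.i.d. standard Gaussian entries; project and normalize.
        Z = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(k)]
        zu = [sum(Z[i][j] * u[j] for j in range(n)) for i in range(k)]
        zv = [sum(Z[i][j] * v[j] for j in range(n)) for i in range(k)]
        nu = math.sqrt(sum(t * t for t in zu))
        nv = math.sqrt(sum(t * t for t in zv))
        acc += sum(a * b for a, b in zip(zu, zv)) / (nu * nv)
    return acc / samples
```

For $\rho = 0$ the two rounded vectors are independent uniform points on $S^2$, so the estimate should hover near $0$; for negative $\rho$ it is negative, consistent with $F^\star(3,\rho)$ being an odd function of $\rho$.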
\end{proof} \begin{proof}[Proof of \Cref{lem:heis-sdp-lower}] Given the assignment $f_\mathrm{ident}$, the SDP obtains value $$\frac{1}{4}\E_{\bx \sim_\rho \by}[1 - 3\inner{\bx,\by}] = \frac{1}{4}\PAREN{1-3\E_{\bx \sim_\rho \by} \inner{\bx,\by}} = \frac{1}{4}(1 - 3\rho)$$ Since $f_\mathrm{ident}$ is a feasible solution, this is a lower bound on $\text{\sc SDP}_{\text{\sc QMC}}\xspace(G_{\rho,n})$. \end{proof} \subsection{Algorithmic Gaps} Next we prove \Cref{thm:algo-gap}, \algogap* This proof closely resembles that of \cite{kar99}, who gave an algorithmic gap instance for the \text{\sc Max-Cut}\xspace SDP. Recall that we are trying to bound the quantity $$\inf_{\text{instances }\mathcal I\text{ of }\mathcal P}\left\{\frac{\mathrm{A}(\mathcal I)}{\mathrm{OPT}(\mathcal I)}\right\}$$ Furthermore, the algorithm of \cite{BOV10} shows that for any instance $\mathcal I$, $$\alpha_{\mathrm{BOV}} \leq \frac{\mathrm{A}(\mathcal I)}{\text{\sc SDP}_{\text{\sc QMC}}\xspace(\mathcal I)} \leq \frac{\mathrm{A}(\mathcal I)}{\mathrm{OPT}(\mathcal I)}$$ Thus, to show an algorithmic gap of $\alpha_{\mathrm{BOV}}$, it suffices to demonstrate an instance $\mathcal I$ such that $\frac{\mathrm{A}(\mathcal I)}{\mathrm{OPT}(\mathcal I)} \leq \alpha_{\mathrm{BOV}}$. As before, our instance will be a graph, with a Hamiltonian term on each edge. Much like the rounding algorithm of \cite{GW95}, the analysis of \cite{BOV10} is tight for edges $(u,v)$ where $\langle u,v \rangle = \rho_{\mathrm{BOV}}$. This informs our graph construction. \begin{definition}[Discrete Embedded Graph]\label{def:discrete-graph} The discrete embedded graph, denoted $\mathcal B^n_\rho$, has vertex set $\{-\tfrac{1}{\sqrt n},\tfrac{1}{\sqrt n}\}^n$. 
The edge distribution corresponds to the following random experiment, \begin{enumerate} \item Pick $v \in \{-\tfrac{1}{\sqrt n},\tfrac{1}{\sqrt n}\}^n$ uniformly at random \item Set $u = v$ \item For each $i \in [n]$, set $u_i = -u_i$ with probability $\tfrac{1}{2}-\tfrac{1}{2}\rho$ \end{enumerate} \end{definition} As desired, $\E_{(\bu,\bv) \sim \mathcal B^n_\rho} [\inner{\bu, \bv}] = \rho$. On the graph $\mathcal B^n_\rho$, the value of the SDP relaxation is attained by the identity assignment $f_\mathrm{ident}(v) = v$. \begin{lemma}[Value of the SDP Relaxation] Let $G(V,E) = \mathcal B^n_\rho$. Then, $$\text{\sc SDP}_{\text{\sc QMC}}\xspace(G) = \max_{f:V\rightarrow S^{n-1}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{3}{4} \inner{f(\bu), f(\bv)}] \leq \frac{1}{4}(1 - 3\rho) = \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{3}{4} \langle f_\mathrm{ident}(\bu), f_\mathrm{ident}(\bv)\rangle]$$ \end{lemma} \begin{proof} The last equality follows by construction of $G$, so we just need to show the inequality. To do so, we will show $$\min_{f\colon V \rightarrow S^{n-1}} \E_{(\bu,\bv) \sim E}[\langle f(\bu),f(\bv)\rangle] \geq \rho$$ Fix any assignment $f$. Although the domain for $f$ is $\{-\tfrac{1}{\sqrt n}, \tfrac{1}{\sqrt n}\}^n$, by scaling, it suffices to consider functions $f : \{-1,1\}^n \rightarrow \mathbb S^{n-1}$. Write $f = (f_1,\dots,f_n)$ where each $f_i : \{-1,1\}^n \rightarrow \mathbb R$, with the added constraint that for all $x$, $\sum_i f_i(x)^2 = 1$. For any edge $(\bu,\bv)$, $\bu$ and $\bv$ are $\rho$-correlated and by definition, $\E_{(\bu,\bv) \sim E}[\langle f(\bu),f(\bv)\rangle] = \mathbf{Stab}_\rho[f]$. 
Then, \begin{align*} \E_{(\bu,\bv) \sim E}[\langle f(\bu),f(\bv)\rangle] &= \sum_{i=1}^n \E_{(\bu,\bv) \sim E}[f_i(\bu) f_i(\bv)]\\ &= \sum_{i=1}^n \mathbf{Stab}_\rho[f_i]\\ &\geq \sum_{i=1}^n \rho \cdot \E[f_i(\bx)^2] && \text{By \Cref{prop:stab-bound}}\\ &= \rho \E[\sum_{i=1}^n f_i(\bx)^2] = \rho \end{align*} \end{proof} But we also have that $f_\mathrm{ident}$ is a feasible SDP assignment, so $\text{\sc Val}\xspace_{\mathcal B^n_\rho}(f_\mathrm{ident}) = \text{\sc SDP}_{\text{\sc QMC}}\xspace(\mathcal B^n_\rho)$ and $f_\mathrm{ident}$ is a possible output of the SDP solver. Letting $\rho = \rho_{\mathrm{BOV}}$ be the worst angle for the rounding scheme of \cite{BOV10}, we get that \begin{equation}\label{eq:alg-upper-bound} \mathrm A(\mathcal B^n_\rho) = \alpha_{\mathrm{BOV}} \cdot \text{\sc Val}\xspace_{\mathcal B^n_\rho}(f_\mathrm{ident}) = \alpha_{\mathrm{BOV}} \cdot \text{\sc SDP}_{\text{\sc QMC}}\xspace(\mathcal B^n_\rho) \end{equation} Next, consider the assignment $f^{(1)}(x) = (x_1/|x_1|,0,0)$. This assignment achieves value \begin{align*} \text{\sc Val}\xspace_{\mathcal B^n_\rho}(f^{(1)}) = \E_{(\bu,\bv) \sim E}[\tfrac{1}{4}-\tfrac{3}{4}\langle f^{(1)}(\bu),f^{(1)}(\bv)\rangle] = \E_{(\bu,\bv) \sim E}[\tfrac{1}{4}-\tfrac{3}{4}\tfrac{\bu_1}{|\bu_1|}\tfrac{\bv_1}{|\bv_1|}] = \tfrac{1}{4}(1-3\rho) = \text{\sc SDP}_{\text{\sc QMC}}\xspace(\mathcal B^n_\rho) \end{align*} Since $\text{\sc Val}\xspace_{\mathcal B^n_\rho}(f^{(1)})$ is a lower bound on $\mathrm{OPT}(\mathcal B^n_\rho)$, we get that \begin{equation}\label{eq:opt-lower-bound} \text{\sc SDP}_{\text{\sc QMC}}\xspace(\mathcal B^n_\rho) \leq \mathrm{OPT}(\mathcal B^n_\rho). 
\end{equation} Combining equations \ref{eq:alg-upper-bound} and \ref{eq:opt-lower-bound} yields $$\mathrm A(\mathcal B^n_\rho) \leq \alpha_{\mathrm{BOV}} \mathrm{OPT}(\mathcal B^n_\rho) \implies \frac{\mathrm A(\mathcal B^n_\rho)}{\mathrm{OPT}(\mathcal B^n_\rho)} \leq \alpha_{\mathrm{BOV}}$$ } \section{Introduction} Over the last~$30$ years, starting with the proof of the PCP theorem~\cite{AS98,ALM+98}, researchers have gained a nearly complete understanding of the approximability of classical constraint satisfaction problems (CSPs), modulo the still-unproven Unique Games Conjecture (UGC) of Khot~\cite{Kho02}. Due to work of Raghavendra~\cite{Rag08,Rag09}, we know that for each CSP, there is a ``canonical algorithm'' which achieves a certain approximation ratio~$\alpha > 0$, and that, assuming the UGC, it is~$\NP$-hard to do better than~$\alpha$. This canonical algorithm is based on the ``basic'' \emph{semidefinite programming (SDP)} relaxation of the CSP, which is itself derived from the second level\ignore{\footnote{A level-$l$ (or degree-$l$) SoS is a sum of polynomials, each of which are squares, of degree at most $l$.}} of the \emph{sum of squares (SoS)} hierarchy. However, for the \emph{quantum} analogue of CSPs, known as the \emph{local Hamiltonian problem}, our understanding remains incomplete. The quantum analogue of the PCP theorem remains a conjecture, and there is no general theory of optimal algorithms, be they quantum or classical. In addition, there is a natural quantum analogue of the SoS hierarchy, but in spite of recent progress in analyzing it, many basic questions remain open, such as how best to round its solutions. On top of this, approximation algorithms for the local Hamiltonian problem face an additional challenge not present for classical CSPs, namely what kind of quantum state should they output? 
The local Hamiltonian problem is an optimization problem over $n$-qubit states; however, actually outputting a general $n$-qubit state is infeasible on a classical computer, as it requires exponentially many bits to describe. Instead, algorithms typically output states from a subset of quantum states, called an \emph{ansatz}, which can be efficiently represented on a classical computer. By far the most popular is the ansatz of \emph{product states}, i.e.\ states of the form $\ket{\psi_1} \ot \cdots \ot \ket{\psi_n}$, which possess no entanglement but often give surprisingly good approximations to the optimal value~\cite{BH16}. In general, though, we lack a theory of optimal ansatzes for polynomial-time classical algorithms. To study these questions, we focus on a special case of the local Hamiltonian problem known as \text{\sc Quantum} \text{\sc Max-Cut}\xspace, which has been suggested as a useful testbed for designing approximation algorithms~\cite{GP19}. \text{\sc Quantum} \text{\sc Max-Cut}\xspace is a natural maximization variant of the \emph{anti-ferromagnetic Heisenberg XYZ model}, a classic family of $\QMA$-complete~\cite{CM16,PM17} 2-local Hamiltonians first investigated by Heisenberg~\cite{Hei28} which models magnetic systems where interacting particles have opposing spins. The constraints in \text{\sc Quantum} \text{\sc Max-Cut}\xspace involve pairs of qubits and, loosely speaking, enforce that the two qubits have opposing values in each of the Pauli~$X$, $Y$, and $Z$ bases. As a result, it can be viewed as a quantum analogue of the classical \text{\sc Max-Cut}\xspace problem. For the special case of finding the best product state in the \text{\sc Quantum} \text{\sc Max-Cut}\xspace problem, an SDP algorithm of Briët, Oliveira, and Vallentin~\cite{BOV10} gives an approximation ratio of~$0.956$. For the more general case of finding the best (possibly entangled) state, Gharibian and Parekh~\cite{GP19} gave an algorithm with approximation ratio~$0.498$. 
Their algorithm, which uses the product state ansatz, is based on rounding the corresponding ``basic'' SDP, derived from the second level of the \emph{noncommutative sum of squares (ncSoS)} hierarchy, a variant of the SoS hierarchy for optimization problems over matrices. Following this work, Anshu, Gosset, and Morenz~\cite{AGM20} gave an algorithm with an improved approximation ratio of~$0.531$. To achieve this, they used an ansatz which is more expressive than product states---tensor products of one- and two-qubit states---as it is known that product states cannot surpass approximation ratio $0.5$. Parekh and Thompson~\cite{PT21} then showed that a similar algorithm could be captured by the level-4 ncSoS hierarchy, with a slightly improved approximation ratio of~$0.533$. The starting point of our work is a well-known statement from Gaussian geometry called \emph{Borell's inequality}~\cite{Bor:85}, which has played a central role in the inapproximability of the classical \text{\sc Max-Cut}\xspace problem ever since it was introduced to theoretical computer science in the work of Mossel, O'Donnell, and Oleszkiewicz~\cite{MOO10}. The earlier work of Khot, Kindler, Mossel, and O'Donnell~\cite{KKMO07} had shown that the Goemans-Williamson SDP algorithm~\cite{GW95} is the optimal polynomial-time approximation algorithm for \text{\sc Max-Cut}\xspace under the UGC, assuming an unproven statement they dubbed ``Majority is Stablest'' and left as a conjecture. One of the main results of~\cite{MOO10} was to supply a proof of this Majority is Stablest conjecture, which they did by translating it to Gaussian geometry and applying Borell's inequality, thereby settling the UG-hardness of \text{\sc Max-Cut}\xspace. In addition, Borell's inequality serves as an important technical ingredient in the analysis of the standard SDP integrality gap instance for \text{\sc Max-Cut}\xspace~\cite{FS02,OW08}. 
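In the one-dimensional setting underlying these classical results, the extremal correlation is given explicitly by Sheppard's formula: for $\rho$-correlated standard Gaussians, $\E[\sgn(\bu)\sgn(\bv)] = \tfrac{2}{\pi}\arcsin\rho$, the quantity governing the Goemans-Williamson analysis. A quick Monte Carlo sketch (purely illustrative) confirms this:

```python
import math
import random

random.seed(1)

def sheppard_mc(rho, samples=200_000):
    # Estimate E[sgn(u) sgn(v)] for rho-correlated standard Gaussians u, v.
    c = math.sqrt(1 - rho * rho)
    total = 0
    for _ in range(samples):
        u = random.gauss(0.0, 1.0)
        v = rho * u + c * random.gauss(0.0, 1.0)
        total += 1 if u * v > 0 else -1
    return total / samples

rho = -0.6
est = sheppard_mc(rho)
ref = 2 / math.pi * math.asin(rho)  # Sheppard's formula, about -0.410
print(est, ref)
```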
The first contribution of our work is to identify and conjecture a natural generalization of Borell's inequality suitable for applications to \text{\sc Quantum} \text{\sc Max-Cut}\xspace. \begin{conjecture}[Vector-valued Borell's inequality; negative~$\rho$ case]\label{conj:vector-borell-intro} Let $k \leq n$ be positive integers and $-1\leq \rho \leq 0$. Let $f : \R^n \rightarrow S^{k-1}$. In addition, let $f_{\mathrm{opt}}:\R^n\rightarrow S^{k-1}$ be defined by $f_{\mathrm{opt}}(x) = x_{\leq k}/\Vert x_{\leq k}\Vert$, where $x_{\leq k} = (x_1, \ldots, x_k)$. Then \begin{equation*} \E_{\bu \sim_\rho \bv} \langle f(\bu), f(\bv)\rangle \geq \E_{\bu \sim_\rho \bv}\langle f_{\mathrm{opt}}(\bu), f_{\mathrm{opt}}(\bv)\rangle, \end{equation*} where $\bu \sim_\rho \bv$ means that $\bu$ and $\bv$ are $\rho$-correlated Gaussian vectors. \end{conjecture} We also conjecture that the optimizers $f_{\mathrm{opt}}$ are unique, up to some natural symmetries; see \Cref{conj:vector-borell} for the full statement. The original Borell's inequality is the case $k = 1$; in this case $f_{\mathrm{opt}}$ is just defined as $f_{\mathrm{opt}}(x) = \sgn(x_1)$, and the quantity $\E_{\bu \sim_\rho \bv}[f(\bu) f(\bv)]$ is known as the \emph{Gaussian noise stability} of $f$. When the sphere $S^{k-1}$ is replaced by the probability simplex $\Delta^{k-1}$, the analogue of \cref{conj:vector-borell-intro} is a well-known open problem---known as the ``Peace Sign Conjecture''~\cite{IM12}---that was recently solved when $\rho$ is positive but sufficiently close to zero~\cite{HT20}. Although we are unable to prove \cref{conj:vector-borell-intro}, we are able to give evidence in favor of it. For our first piece of evidence, we show that \cref{conj:vector-borell-intro} is true when $n = k$, which we prove via a spectral argument. 
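While we cannot prove the conjecture in general, small numerical experiments are consistent with it. The sketch below (illustrative evidence only, with arbitrary parameter choices $k = 3$, $\rho = -0.8$) compares the conjectured minimizer $f_{\mathrm{opt}}$ against the embedded one-dimensional optimizer $x \mapsto \sgn(x_1)e_1$; the former attains a strictly smaller (more negative) correlation, as the conjecture predicts.

```python
import math
import random

random.seed(2)
rho, k, m = -0.8, 3, 200_000
c = math.sqrt(1 - rho * rho)

val_opt = 0.0  # running sum for E[<f_opt(u), f_opt(v)>]
val_sgn = 0.0  # running sum for E[<g(u), g(v)>], g(x) = sgn(x_1) e_1
for _ in range(m):
    # Only the first k coordinates matter for either test function.
    u = [random.gauss(0.0, 1.0) for _ in range(k)]
    v = [rho * ui + c * random.gauss(0.0, 1.0) for ui in u]
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    val_opt += sum(a * b for a, b in zip(u, v)) / (nu * nv)
    val_sgn += 1.0 if u[0] * v[0] > 0 else -1.0
val_opt /= m
val_sgn /= m
print(val_opt, val_sgn)  # f_opt attains the smaller value
```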
For our second piece of evidence, we instead consider the ``positive $\rho$ case'' of $0 \leq \rho \leq 1$, in which we would like to identify the function which \emph{maximizes} the expression $ \E_{\bu \sim_\rho \bv} \langle f(\bu), f(\bv)\rangle$ rather than minimizes it. In this case, it is clear that our conjectured optimizer $f_{\mathrm{opt}}(x) = x_{\leq k}/\Vert x_{\leq k}\Vert$ is \emph{not} the maximizer, as any constant function will have value~1, but it is plausible that $f_{\mathrm{opt}}$ is still the maximizer among all functions which are mean-zero. Here, we show a dimensionality reduction statement in the positive $\rho$ case, implying that any maximizing mean-zero $f$ is ``intrinsically'' $k$-dimensional; in other words, it suffices to only consider the $n = k$ case. To show this, we extend the recent calculus of variations approach for proving Borell's inequality due to Heilman and Tarter~\cite{HT20} to output dimensions~$k$ larger than~$1$. Unfortunately, since our reduction to the $n = k$ case only holds for positive $\rho$, and our proof of the $n = k$ case only holds for negative $\rho$, these two pieces do not combine to give a full proof of the conjecture. For more details, along with formal statements of these results, see \cref{part:borell}. Next, we show that the vector-valued Borell's inequality allows us to address several questions related to the optimality of the SDP algorithms for \text{\sc Quantum} \text{\sc Max-Cut}\xspace. \begin{theorem}[Main results, informal]\mbox{} Suppose that the vector-valued Borell's inequality is true. Then the following statements hold. \begin{enumerate} \item There exists an integrality gap of $0.498$ for the basic SDP, matching the rounding algorithm of Gharibian and Parekh~\cite{GP19}. This shows that the product state ansatz is optimal for the basic SDP. 
Combined with the work of Anshu, Gosset, and Morenz~\cite{AGM20}, this also shows that the basic SDP does not achieve the optimal approximation ratio. \item It is Unique-Games-hard (UG-hard) to compute a $(0.956+\eps)$-approximation to the value of the best product state, matching an approximation algorithm due to Briët, Oliveira, and Vallentin~\cite{BOV10}. More generally, for any fixed~$k$, it is UG-hard to outperform the approximation algorithm of Briët, Oliveira, and Vallentin~\cite{BOV10} for Rank-$k$ \text{\sc Max-Cut}\xspace ($k=3$ corresponding to the aforementioned $0.956$-approximation). \item It is UG-hard to compute a $(0.956+\eps)$-approximation to the value of the best (possibly entangled) state. \end{enumerate}\label{thm:informal-intro} \end{theorem} To our knowledge, this is the first constant-factor hardness of approximation result for a natural family of local Hamiltonians which does not already contain a hard-to-approximate classical CSP as a special case, modulo our conjectured vector-valued Borell's inequality. We also show that our conjecture implies sharp inapproximability results for \text{\sc Quantum} \text{\sc Max-Cut}\xspace with respect to product states. Finally, in a striking departure from classical CSPs, in which the level-2 SoS relaxation is optimal under the UGC, we show that our conjecture implies that the level-4 ncSoS relaxation for \text{\sc Quantum} \text{\sc Max-Cut}\xspace strictly improves upon the level-2 relaxation. Our results highlight the importance of gaining a better understanding of the level-4 ncSoS relaxation, as well as the importance of settling \cref{conj:vector-borell-intro}. The relevance of Conjecture~\ref{conj:vector-borell-intro} to our main results is discussed in more detail in Section~\ref{sec:tech-overview-overview} below. 
\ignore{ Indeed, without additional assumptions on the form of the Hamiltonian, these results establish that \emph{we should not expect} to find a better product state than that achievable by~\cite{BOV10} for the kind of approximation studied. The first question is also interesting from a physical perspective because it points to limitations on ``local'' theories for describing quantum states. A feasible solution to the SDP studied in~\cite{BGKT19, GP19} can be interpreted as a ``quasi'' quantum state which respects all $1$-local observables but not some $2$-local observables. Our results imply performance bounds on these ``low-order theories.'' We also highlight the usefulness of semidefinite programming as a concrete algorithmic framework for studying the local Hamiltonian problem. For example, although answering questions such as ``what is the optimal ansatz?'' might currently be out of reach, we have shown that they can be answered for a fixed level of the ncSoS hierarchy. In addition, they suggest that understanding the level-4 ncSoS relaxation is a natural future direction. } \paragraph{Related work.} An earlier draft of this work incorrectly claimed a complete proof of the vector-valued Borell's inequality. The bug was in the proof of the dimensionality reduction step, which we incorrectly claimed held for both positive \emph{and} negative $\rho$. We thank Steve Heilman for pointing out the error in this proof. Subsequent to the original posting of this work, Parekh and Thompson gave an algorithm for \text{\sc Quantum} \text{\sc Max-Cut}\xspace based on the level-4 ncSoS relaxation which achieves a $0.5$-approximation and uses the product state ansatz~\cite{PT22}. This is optimal, as there are simple graphs of value 1 in which product states achieve value at most 0.5~\cite{GP19}. 
Combined with our \cref{thm:informal-intro} and assuming the vector-valued Borell's inequality, this shows that level-4 ncSoS outperforms level-2, even when one is restricted to using the product state ansatz. In the local Hamiltonian problem, one is given a set of constraints on a system of~$n$ qubits, and the objective is to find the ``ground state energy'', i.e.\ the optimum energy of a state under these constraints. This is a central problem in both the fields of condensed matter physics and quantum computing, and the study of this problem has led to a rich exchange of ideas between the two; see~\cite{GHLS15} for an excellent survey on the topic. Its importance in condensed matter physics stems from the fact that many interesting real-world systems can be modeled as local Hamiltonians, with the ground state energy corresponding to the energy of the system at zero temperature. In quantum computing, it is the canonical $\QMA$-complete problem~\cite{KSV02}, and is thus intractable to solve exactly with a classical computer unless $\mathsf{BPP} = \QMA$. Classical CSPs form a special case of the local Hamiltonian problem, and so the classical PCP theorem applies to local Hamiltonians as well. This means that given a general instance of the local Hamiltonian problem, it is $\NP$-hard (though not necessarily $\NP$-complete) to estimate its ground-state energy to a certain constant accuracy. The \emph{quantum PCP conjecture}~\cite{AAV13} asserts that this task is in fact $\QMA$-complete. One of the key differences between these two possibilities is that every local Hamiltonian has an efficient quantum witness of its ground state energy, namely its $n$-qubit ground state, but it may not have an efficient \emph{classical} (i.e.\ $\NP$) witness, as the ground state need not be efficiently representable on a classical computer. 
Researchers have designed classical approximation algorithms for various classes of $2$-local Hamiltonian problems, the majority of which use the product state ansatz. These include algorithms for Hamiltonians whose local terms are positive semi-definite~\cite{GK12,HEP20,PT20,PT22}, traceless Hamiltonians~\cite{HM17,BGKT19,NST09,PT20}, and fermionic Hamiltonians~\cite{BGKT19,HOD22}. In condensed matter physics, the use of product states as an ansatz is widespread and known as \emph{mean-field theory}~\cite{GHLS15}; in quantum chemistry, it is known as the \emph{Hartree-Fock method}~\cite{BH16}. The ubiquity of this ansatz stems both from the ease with which it can be analyzed and from folklore that product states well-approximate ground states in some situations. This folklore was formalized in the work of Brandão and Harrow~\cite{BH16}, which showed that product states give a good approximation to the ground states of local Hamiltonians whose interaction graphs are high degree or sufficiently good expanders, due to monogamy of entanglement. As a result, these Hamiltonians cannot serve as hard instances for the quantum PCP conjecture, as product states have an efficient classical description. They then used this structural result to design approximation algorithms for Hamiltonians with interaction graphs which are either planar, dense, or have low threshold rank. This is an example of how the study of approximation algorithms can shed light on the limitations of the quantum PCP conjecture. 
To our knowledge, the only classical approximation algorithms which do not use the product state ansatz are the aforementioned algorithms of~\cite{AGM20,PT21}, which use tensor products of one- and two-qubit states (and which still rely heavily on the product state algorithms), and the intriguing recent algorithms of~\cite{AGM20,AGKS21}, which use an ansatz consisting of states of the form $\ket{u} = U\ket{\psi}$, where $\ket{\psi}$ is a product state and $U$ is a low-depth quantum circuit. These two works give several settings in which product states can be ``improved'' by post-processing them with a low-depth quantum circuit; for example, the former work~\cite{AGM20} shows an algorithm using this ansatz which is guaranteed to outperform any product state algorithms on degree-3 and 4 instances of \text{\sc Quantum} \text{\sc Max-Cut}\xspace. \ignore{ To see why this task is in~$\QMA$, note that every local Hamiltonian has an efficient \emph{quantum} witness for its ground state energy, namely its lowest energy state, which requires only~$n$ qubits. As describe above, though, such a state need not be efficiently representable on a classical computer, and so this problem is not known to be in~$\NP$. We note that the question of whether each local Hamiltonian has an efficient classical witness seems related to the question of what the optimal ansatz is\knote{Is it worth mentioning QMA vs. QCMA here?}, though we are not aware of any formal connection between the two. One distinction is that the ansatz should be able to be efficiently optimized over, whereas this need not be the case with an efficient classical witness. Researchers have designed classical approximation algorithms for various classes of maximum $2$-local Hamiltonian problems, which we summarize in \Cref{table:approx_algs}. As $2$-local Hamiltonian generalizes classical $2$-CSP, most of these problem classes have natural classical counterparts (see Table 1 in~\cite{PT20}). 
Most of these approximation algorithms use a product state ansatz, and the more recent works generate an approximate solution using an SDP relaxation of either the optimal energy or optimal energy over product states.} \part{Introduction} \setcounter{page}{1} \input{intro_ack.tex} \input{tech_overview.tex} \newpage \part{A vector-valued Borell's inequality}\label{part:borell} \input{borell-intro.tex} \input{same_dim.tex} \input{dimension-reduction.tex} \newpage \part{Hardness of \text{\sc Quantum} \text{\sc Max-Cut}\xspace} \input{prelims_for_hardness.tex} \input{gap_instances} \input{ug_hardness.tex} \newpage \part{Appendix} \section{Other Lemmas} \subsection{Cardinality Reduction}\label{sec:card-reduc} \ignore{ \begin{lemma} Let $0 \leq \eps \leq 1$. Given a vertex $u \in S^{n-1}$, consider the hyperspherical cap \begin{equation*} B(u, \eps) = \{v \in S^{n-1} : \Vert u - v \Vert_2^2 \leq \eps\}. \end{equation*} Then the measure of $B(u, \eps)$ can be lower-bounded by~XXX and upper-bounded by~XXX. \end{lemma} \begin{proof} Suppose without loss of generality that $u = (1, 0, \ldots, 0)$. Then the point \begin{equation*} v = (1-\eps^2/2, \sqrt{\eps^2-\eps^4/4},0, \ldots, 0) \end{equation*} has distance exactly~$\eps$ from~$u$. Let $\phi$ be the angle between $u$ and $v$, i.e.\ $\cos(\phi) = \langle u, v\rangle = 1 - \eps^2/2$. By~XXX, the surface areas of the $B(u, \eps)$ is exactly \begin{equation*} \frac{2\pi^{(n-1)/2}}{\Gamma((n-1)/2)} \int_0^{\phi} \sin^{n-2}\theta \mathrm{d}\theta. \end{equation*} respectively. Dividing the first by the second, this means the measure of $B(u, \eps)$ on the sphere is \begin{equation*} \frac{\Gamma(n/2)}{\sqrt{\pi}\Gamma((n-1)/2)} \int_0^{\phi} \sin^{n-2}\theta \mathrm{d}\theta. \end{equation*} \end{proof} } In this section, we prove \Cref{lem:discretization}, using an argument which closely follows \cite[Appendix B]{OW08}. All of these transformations are standard. 
\begin{proof} First, we give a series of transformations to yield a well-behaved finite graph. For each transformation, we argue that the SDP value and product state value are within $\epsilon$ of those of the original graph. Finally, we show that in our final graph $G'$, using \Cref{cor:BH-nonuniform-easy-to-use}, $\text{\sc Prod}\xspace(G') \geq \text{\sc QMax-Cut}\xspace(G') - \epsilon$, concluding the proof. Let $G_0 = G$. We will first construct $G_1$, which restricts the Gaussian graph $\mathcal G^n_\rho$ to a graph on the sphere $S^{n-1}$. Next, $G_2$ will be a graph on a finite vertex set. Following that, our final graph $G' = G_3$ will remove self-loops, leaving us with a weighted, simple graph with finite vertex set. In this proof, we will identify a graph by the distribution on its edges. For sets $u,v \subseteq S^{n-1}$, we will write $G(u,v)$ for the probability weight $G$ puts on edges $(u,v)$. We start with the construction for $G_1$. For $G_0$, let $f : \mathbb R^n \rightarrow S^{n-1}$ be an SDP assignment obtaining $\text{\sc SDP}_{\text{\sc QMC}}\xspace(G_0)$. (Note that this is also an optimal SDP assignment for $\text{\sc SDP}_{\text{\sc Prod}}\xspace(G_0)$.) Let $G_1$ be the graph in which $G_1(u,v) = G_0(f^{-1}(u),f^{-1}(v))$. Then if we take the identity map as the SDP embedding, we see that $$\text{\sc SDP}_{\text{\sc QMC}}\xspace(G_1) \geq \E_{(\bu,\bv) \sim G_1}[\tfrac{1}{4}-\tfrac{3}{4}\langle \bu,\bv\rangle] = \E_{\bu \sim_\rho \bv}[\tfrac{1}{4} - \tfrac{3}{4}\langle f(\bu), f(\bv)\rangle] = \text{\sc SDP}_{\text{\sc QMC}}\xspace(G_0) =: c_{\mathrm{H}}.$$ A similar argument shows that $\text{\sc SDP}_{\text{\sc Prod}}\xspace(G_1) \geq \text{\sc SDP}_{\text{\sc Prod}}\xspace(G_0) =: c_{\text{\sc Prod}}$. Furthermore, for any assignment $h : S^{n-1} \rightarrow S^{2}$ on $G_1$, the assignment $h \circ f$ yields an assignment for $G_0$ and thus $\text{\sc Prod}\xspace(G_1) \leq \text{\sc Prod}\xspace(G_0)$. 
To construct $G_2$, we use an argument originally from \cite{FS02}. Pick some $\epsilon$-net $\mathcal N$ over $S^{n-1}$, so that every point in $S^{n-1}$ is within distance $\eps$ of some point in~$\mathcal N$; it is known that constructions exist with $|\mathcal{N}| \leq (1/\epsilon)^{O(n)}$. Then partition $S^{n-1}$ using Voronoi cells $\{C_v\}_{v \in \mathcal N}$ based on $\mathcal N$. For each $v \in \mathcal N$, the corresponding cell $C_v \subseteq S^{n-1}$ consists of all points in $S^{n-1}$ which are closer to $v$ than to any other $u \in \mathcal N$. Then $G_2$ is the finite graph on vertex set $\mathcal N$ in which $G_2(u,v) = G_1(C_u,C_v)$. We first observe that $$\text{\sc Prod}\xspace(G_2) \leq \text{\sc Prod}\xspace(G_1) = s$$ since any assignment $f$ on $G_2$ can be extended to an assignment of equal value on $G_1$. Furthermore, we claim $$\text{\sc SDP}_{\text{\sc QMC}}\xspace(G_2) \geq c_{\mathrm{H}} - 3\epsilon.$$ To see this, consider the SDP assignment $f:\mathcal N \rightarrow S^{n-1}$ which maps each $v \in \mathcal N$ to itself. We can extend this to a function with domain all of $S^{n-1}$ by setting $f(u) = v$ for each $u \in C_v$. Then \begin{equation}\label{eq:using-weird-assignment} \text{\sc SDP}_{\text{\sc QMC}}\xspace(G_2) \geq \E_{(\bu, \bv) \sim G_2}[\tfrac{1}{4} - \tfrac{3}{4} \langle f(\bu), f(\bv) \rangle] = \E_{(\bx, \by) \sim G_1}[\tfrac{1}{4} - \tfrac{3}{4} \langle f(\bx), f(\by) \rangle]. \end{equation} Let $C_{\bu}$ be the Voronoi cell $\bx$ falls inside and $C_{\bv}$ be the Voronoi cell $\by$ falls inside. Then because $\mathcal N$ is an $\eps$-net, we can write $\bx = (\bu + \boldsymbol{\eta}_1)$ and $\by = (\bv + \boldsymbol{\eta}_2)$, where $\boldsymbol{\eta}_1$ and $\boldsymbol{\eta}_2$ have length at most $\epsilon$. Thus, \begin{equation*} \langle \bx,\by\rangle = \langle \bu + \boldsymbol{\eta}_1, \bv+\boldsymbol{\eta}_2\rangle \geq \langle \bu,\bv\rangle - 3\epsilon = \langle f(\bx), f(\by) \rangle - 3\epsilon. 
\end{equation*} As a result, \begin{equation*} \eqref{eq:using-weird-assignment} \geq \E_{(\bx, \by) \sim G_1}[\tfrac{1}{4} - \tfrac{3}{4} \langle \bx, \by \rangle] - 3\eps = c_{\mathrm{H}} - 3\eps. \end{equation*} A similar argument shows that $\text{\sc SDP}_{\text{\sc Prod}}\xspace(G_2) \geq c_{\text{\sc Prod}} - 3\eps$. Finally, we use a simple construction appearing in \cite{KO06} (and originally due to \cite{ABH+05}) in order to remove self-loops. Conveniently, this construction will also make it easy to show that the product state value and maximum energy are close. Our graph $G_3$ will be parameterized by an integer $M$ which we will select later but which is at least $1/\eps$. For each vertex $v \in G_2$, we will create $M$ many vertices $\{(v, j)\}_{j \in [M]}$, each with weight $\frac{1}{M}$ of the original. To sample a random edge in $G_3$, we simply sample $(\bu, \bv)$ from $G_2$, let $\bi, \bj \in [M]$ be independent, uniformly random, and output the edge between $(\bu, \bi)$ and $(\bv, \bj)$. It is clear that $\text{\sc Prod}\xspace(G_3) \geq \text{\sc Prod}\xspace(G_2)$ because any assignment $f: \mathcal V \rightarrow S^{2}$ can be converted into an assignment $f'$ for $G_3$ of equal value by setting $f'(u, i) = f(u)$. On the other hand, $\text{\sc Prod}\xspace(G_2) \geq \text{\sc Prod}\xspace(G_3)$ as well. To see this, consider a product state assignment $f: \mathcal V \times [M] \rightarrow S^2$ for $G_3$. It has value \begin{align*} \E_{(\bu, \bv) \sim G_2}\E_{\bi, \bj \sim [M]}[\tfrac{1}{4} - \tfrac{1}{4} \langle f(\bu, \bi), f(\bv, \bj) \rangle] = \E_{\bu, \bv \sim G_2}[\tfrac{1}{4} - \tfrac{1}{4} \langle \E_{\bi} f(\bu, \bi), \E_{\bj} f(\bv, \bj)\rangle]. \end{align*} This is the value that the assignment $f': \mathcal V \rightarrow B^3$ defined as $f'(u) = \E_{\bi} f(u, \bi)$ achieves on the graph $G_2$. As $f'$ has range $B^3$, there exists a function with range $S^2$ whose value is at least as high. 
As a result, $\text{\sc Prod}\xspace(G_3) = \text{\sc Prod}\xspace(G_2)$. A similar argument shows that the two SDP values remain the same as well. Now, the total weight of self-loops in this graph is at most $\frac{1}{M}$, and so by removing these edges and scaling the remaining weights to sum to one we produce a graph $G'$ with no self-loops in which the SDP and product state values have increased by at most $\frac{1}{M} \leq \epsilon$. Next, we use \Cref{cor:BH-nonuniform-easy-to-use} to relate the product state value to the optimal state value. Recall that we need to bound the quantity \begin{equation}\label{eq:bh-bound} 20\cdot (n \cdot \max_{(u,i), (v,j)} \{A_{(u,i), (v,j)}\} \cdot \max_{(u,i)} \{p_{(u,i)}\})^{1/8} + \max_{(u,i)} \{p_{(u,i)}\} \end{equation} for $G'$. Here, $n$ is the number of vertices in $G_3$, $A_{(u,i), (v,j)}$ is the probability of an edge ending in $(u,i)$ conditioned on starting from $(v,j)$, and $p_{(u,i)}$ is one half the total weight of edges on $(u,i)$. We'll give a bound for the above quantity \textit{before} removing self-loops (i.e.\ for the graph $G_3$). However, observe that self-loops consist of at most $1/M$ of the total edge weight and thus, $$p'_{u,i} \leq p_{u,i}/(1 - \tfrac{1}{M}),$$ $$A'_{(u,i),(v,j)} \leq A_{(u,i),(v,j)}/(1 - \tfrac{1}{M}),$$ where $p'_{u,i}$ and $A'_{(u,i),(v,j)}$ are the quantities after removing self-loops (i.e.\ for the graph $G'$). Choosing $M$ sufficiently large makes this difference negligible. Now, observe that if $p^{(2)}_{u}$ is the weight function associated with $G_2$, then $p_{u,i} = p^{(2)}_u/M$, since each vertex $(u,i)$ in $G_3$ inherits $1/M$ of the total edge weight of vertex $u$. Thus, by choosing $M$ sufficiently large, we can bound the last additive term by $\epsilon/2$. Next, since whenever $(v,j)$ is connected to a vertex $(u, i)$, it is in fact connected to all vertices $\{(u, k)\}_{k \in [M]}$ with an equal weight, we have an easy upper bound of $1/M$ on $A_{(u,i),(v,j)}$. 
Finally, using that $n = M|\mathcal V|$, where $\mathcal V$ is $G_2$'s vertex set, we can rewrite \Cref{eq:bh-bound} as \begin{align*} 20\cdot \Paren{M|\mathcal V| \cdot \max_{(u,i), (v,j)} \{A_{(u,i), (v,j)}\} &\cdot \frac{1}{M}\max_{u} \{p_u\}}^{1/8} + \epsilon/2\\ &\leq 20\cdot \Paren{|\mathcal V| \cdot \max_{u} \{p_u\} \cdot \frac{1}{M}}^{1/8} + \epsilon/2. \end{align*} Thus, noting that $|\mathcal V| \cdot \max_u \{p_u\}$ is just a constant $C$ which is independent of~$M$, we can bound this quantity by~$\epsilon$ by taking~$M$ sufficiently large. \end{proof} \subsection{Proofs of Various Fourier Properties}\label{sec:fourier-proofs} \begin{proposition}\label{eq:rho-stab-diff} Let $f : \{-1,1\}^n \rightarrow \mathbb R^k$ be a function and let $\rho,\gamma \in [0,1)$. Then $$\ABS{\mathbf{Stab}_\rho [f] - \mathbf{Stab}_{\rho(1-\gamma)^2}[f]} \leq \frac{2\gamma}{1-\rho}\Var[f].$$ \end{proposition} \begin{proof} Let $\epsilon = \rho - (1-\gamma)^2\rho \in [0,\rho)$, so that $\rho(1-\gamma)^2 = \rho - \epsilon$. Then $$\ABS{\mathbf{Stab}_\rho [f] - \mathbf{Stab}_{\rho - \epsilon}[f]} = \sum_{S \subseteq [n]} (\rho^{|S|} - (\rho - \epsilon)^{|S|})\norm{\widehat f(S)}_2^2.$$ When $\epsilon = 0$, the proposition clearly holds, so assume $\epsilon \in (0,\rho)$. When $S = \emptyset$ the term $(\rho^{|S|} - (\rho - \epsilon)^{|S|})$ is just $0$, so we bound it for $S \not= \emptyset$. Let $k = |S|$, and define the function $g(\delta) = (\rho - \epsilon + \delta)^k$. This function is continuous and differentiable in $\delta$, and applying the Mean Value Theorem on $\delta \in [0, \epsilon]$ yields a $\delta' \in (0, \epsilon)$ such that $$\frac{\mathrm{d} g}{\mathrm d \delta}(\delta')= \frac{g(\epsilon) - g(0)}{\epsilon - 0} = \frac{\rho^k - (\rho - \epsilon)^k}{\epsilon}.$$ Furthermore, $g'(\delta') = k(\rho - \epsilon + \delta')^{k-1}$. For any $x \in (0,1]$ and $k \in \mathbb N^+$, we have that $(1-x)^{k-1}k \leq 1/x$.
Letting $1-x = \rho - \epsilon + \delta'$, we get $$\frac{\rho^k - (\rho - \epsilon)^k}{\epsilon} = k(\rho - \epsilon + \delta')^{k-1} \leq \frac{1}{1-\rho + \epsilon - \delta'} \leq \frac{1}{1-\rho} \implies \rho^k - (\rho - \epsilon)^k \leq \frac{\epsilon}{1- \rho}.$$ Thus, $$\ABS{\mathbf{Stab}_\rho [f] - \mathbf{Stab}_{\rho - \epsilon}[f]} \leq \frac{\epsilon}{1-\rho} \sum_{S \subseteq [n] : S \not= \emptyset} \norm{\widehat f(S)}_2^2 = \frac{\epsilon}{1- \rho} \Var[f].$$ Substituting in $\epsilon = \rho - \rho(1-\gamma)^2 \leq 2\gamma$ concludes the proof. \end{proof} \begin{comment} \begin{proof}[Proof of \Cref{lem:influence-bound}] Observe \begin{align*} \sum_{i = 1}^n \Inf_{i}^{\leq 1-\epsilon}[\boldf] &= \sum_{k=0}^{m-1} \sum_{i=1}^n \sum_{S : |S| \leq (1-\epsilon), i \in S} \widehat{f_k}(S)^2\\ &= \sum_{k=0}^{m-1} \sum_{S : |S| \leq (1-\epsilon)} |S| \widehat{f_k}(S)^2\\ &\leq \sum_{k=0}^{m-1} (1-\epsilon) \sum_{S : |S| \leq (1-\epsilon)} \widehat{f_k}(S)^2\\ &\leq \sum_{k=0}^{m-1} (1-\epsilon) \sum_{S} \widehat{f_k}(S)^2\\ &= m(1-\epsilon) \end{align*} Thus by the Pigeonhole Principle, there can be at most $m(1-\epsilon)/\tau$ coordinates with $\Inf_i^{\leq 1-\epsilon}[\boldf] \geq \tau$. \end{proof} \end{comment} \begin{proposition}[Stability Bound]\label{prop:stab-bound} Let $f : \{-1,1\}^n \rightarrow \mathbb R$ and $\rho \in [-1,0]$. Then, $$\mathbf{Stab}_\rho[f] \geq \rho \cdot \E[f(\bx)^2].$$ \end{proposition} \begin{proof} First, note that $\mathbf{Stab}_\rho[f] = \sum_S \rho^{|S|} \widehat f(S)^2$. Since $\rho \in [-1,0]$, we have $\rho^{|S|} \geq \rho$ for every $S$, and so we can lower bound each term to obtain $\rho\cdot \sum_S \widehat f(S)^2$. Finally, using Parseval's theorem, this is exactly equal to $\rho\cdot \E[f(\bx)^2]$.
\end{proof} \subsection{Proofs of Lipschitz Properties} \begin{lemma}[Lipschitz Property of $\Psi$]\label{lem:psi-lipschitz} Let $\Psi : \mathbb R^n \rightarrow \mathbb R$ be defined as, $$\Psi(v) = \left\{\begin{array}{cl} \norm{v}_2^2 & \text{if $\norm{v}_2 \leq 1$,}\\ 1 & \text{otherwise.} \end{array}\right.$$ Then, $\Psi$ is Lipschitz continuous with constant $2$. In particular, for any $u,v \in \mathbb R^n$, $$\abs{\Psi(u) - \Psi(v)} \leq 2 \norm{u - v}_2$$ \end{lemma} \begin{proof} The proof is by reduction to the function $\mathrm{Sq} : \mathbb R \rightarrow \mathbb R$, defined as, $$\mathrm{Sq}(x) = \left\{\begin{array}{cl} 0 & \text{if $x < 0$,}\\ x^2 & \text{if $x \in [0,1]$,}\\ 1 & \text{if $x > 1$.} \end{array}\right.$$ As in the proof of the Majority is Stablest theorem in \cite{OD14}, we see that $\mathrm{Sq}$ is $2$-Lipschitz. Fix any $u,v \in \mathbb R^n$. By applying the Lipschitz property of $\mathrm{Sq}$, we can write $$\abs{\Psi(u) - \Psi(v)} = \abs{\mathrm{Sq}(\norm{u}_2) - \mathrm{Sq}(\norm{v}_2)} \leq 2\abs{\norm{u}_2 - \norm{v}_2}.$$ Finally, applying the reverse triangle inequality, we obtain an upper bound of $2\norm{u-v}_2$, which concludes the proof. \end{proof} \begin{lemma}[Lipschitz Property of $\Phi$]\label{lem:phi-lipschitz} Let $\Phi : \mathbb R^n \rightarrow \mathbb R$ be defined as $\Phi(v) = v - \mathcal R(v)$, where $\mathcal R$ rounds vectors to the unit ball $B^n$ and is defined as $$\mathcal{R}(v) = \left\{\begin{array}{cl} v & \text{if $\norm{v}_2 < 1$,}\\ \tfrac{v}{\norm{v}_2} & \text{otherwise.} \end{array}\right.$$ Then $\Phi$ is Lipschitz continuous with constant $2$. In particular, for any $u,v \in \mathbb R^n$, $$\norm{\Phi(u) - \Phi(v)}_2 \leq 2 \norm{u - v}_2.$$ \end{lemma} \begin{proof} First, we show that $\mathcal R(v)$ is in fact $1$-Lipschitz. Without loss of generality, we can assume vectors $u,v \in \mathbb R^2$ and take $u = r\cdot (1,0)$ and $v = s\cdot(v_1,v_2)$ where $\norm{(v_1,v_2)}_2 = 1$. 
We want to show $$\norm{\mathcal R(u) - \mathcal R(v)}_2 \leq \norm{u - v}_2.$$ Certainly, this holds when $r,s \leq 1$. Consider the case when $r,s \geq 1$. Then \begin{align*} \norm{\mathcal R(u) - \mathcal R(v)}_2 &= \norm{(1,0) - (v_1,v_2)}_2\\ &= \sqrt{(1-v_1)^2 + v_2^2}\\ &= \sqrt{(1-v_1)^2 + 1 - v_1^2}\\ &= \sqrt{2 - 2 v_1}. \end{align*} On the other hand, \begin{align*} \norm{u - v}_2 &= \sqrt{(r-s v_1)^2 + s^2 v_2^2}\\ &= \sqrt{r^2 - 2rs v_1 + s^2v_1^2 + s^2(1-v_1^2)}\\ &= \sqrt{r^2 - 2rs v_1 + s^2}. \end{align*} Thus, it suffices to show $2-2v_1 \leq r^2 - 2rs v_1 + s^2$. Rearranging so that all terms including $v_1$ are on the LHS, we get $$v_1 \cdot 2(rs-1) \leq^? r^2 + s^2 - 2.$$ Since $s,r \geq 1$, the LHS is maximized for $v_1 = 1$ and thus this is true if and only if $$0 \leq^? r^2 + s^2 -2rs.$$ Factoring the RHS yields $(r-s)^2$, which is indeed at least $0$. Finally, we consider the case when $r \leq 1$ and $s \geq 1$ (the case of $r \geq 1$ and $s \leq 1$ is symmetric and we omit it). Then we again have $\norm{u-v}_2 = \sqrt{r^2 - 2rs v_1 + s^2}$. However, we now have $$\norm{\mathcal R(u) - \mathcal R (v)}_2 = \sqrt{(r-v_1)^2 + v_2^2} = \sqrt{(r-v_1)^2 + 1 - v_1^2} = \sqrt{r^2 - 2rv_1 + 1}.$$ As a result, we want to show \begin{align*} r^2 - 2rv_1 + 1 &\leq^? r^2 - 2rsv_1 + s^2\\ v_1 \cdot 2(rs-r) + 1 &\leq^? s^2\\ 2(rs-r) + 1 &\leq^? s^2\tag{LHS maximized when $v_1 = 1$}\\ 2s - 2 + 1 &\leq^? s^2\tag{LHS maximized when $r = 1$}\\ 0 &\leq^? s^2 - 2s + 1 = (s-1)^2. \end{align*} We conclude that $\mathcal R(\cdot)$ is 1-Lipschitz. Now we show that $\Phi(u) = u - \mathcal R(u)$ is $2$-Lipschitz. \begin{align*} \norm{\Phi(u) - \Phi(v)}_2 = \norm{u - \mathcal R(u) - v + \mathcal R(v)}_2 &\leq \norm{u - v}_2 + \norm{\mathcal R(u) - \mathcal R(v)}_2\tag{by the triangle inequality}\\ &\leq 2 \norm{u-v}_2.\tag{by the Lipschitz property of $\mathcal R(\cdot)$} \end{align*} This concludes the proof.
\end{proof} \begin{corollary}\label{cor:phi-lipschitz} The function $\Phi_i : \mathbb R^n \rightarrow \mathbb R$, defined as $\Phi_i(v) = \Phi(v)_i$, is $2$-Lipschitz. \end{corollary} \begin{proof} Take any $u,v \in \mathbb R^n$. Then \begin{equation*} |\Phi_i(u) - \Phi_i(v)| \leq \sqrt{\sum_{j=1}^n (\Phi_j(u) - \Phi_j(v))^2} = \norm{\Phi(u) - \Phi(v)}_2 \leq 2\norm{u -v}_2.\qedhere \end{equation*} \end{proof} \begin{lemma}\label{lem:inner_prod_lipschitz} Let $\bx \sim_\rho \by$ be $\rho$-correlated random variables in $k$ dimensions. The function $$ \E_{\bx \sim_\rho \by} \left \langle \frac{\bx}{||\bx||}, \frac{\by}{||\by||} \right \rangle =\frac{2}{k} \left( \frac{\Gamma((k+1)/2)}{\Gamma(k/2)}\right)^2 \rho\,\, \,_2 F_1[1/2, 1/2, k/2+1, \rho^2] $$ of $\rho$ is: \begin{enumerate} \item non-negative for $\rho \in [0,1]$, \item an odd function, \item $C$-Lipschitz for $\rho\in [-1, 1]$ for some constant $C$ which is a function of $k$ if $k\geq 3$, \item $C$-Lipschitz for $\rho\in [-1+\epsilon, 1-\epsilon]$ for some constant $C(k, \epsilon)$ for any $\epsilon>0$ if $k=1, 2$, and \item convex for $\rho \in [0,1]$. \end{enumerate} \end{lemma} \begin{proof} From the definition of $\,_2 F_1$: \begin{equation}\label{eq:hypergeometric-sum-def} \,_2F_1 [a, b, c, z]=\sum_{n=0}^\infty \frac{(a)_n (b)_n}{(c)_n} \frac{z^n}{n!}, \end{equation} we see that $\,_2 F_1[1/2, 1/2, k/2+1, \rho^2] \geq 0$, which establishes the first property; the second is immediate since the hypergeometric factor depends on $\rho$ only through $\rho^2$, so the function is $\rho$ times an even function. For the third and fourth property, since the function $f(\rho) = \rho\,\, \,_2 F_1[1/2, 1/2, k/2+1, \rho^2]$ is differentiable in $\rho$, it is Lipschitz if we can upper bound the absolute value of the derivative in the interval containing $\rho$.
By 15.2.1 in \cite{AS72}, \begin{equation*} \frac{d}{dz}\,_2F_1 [a, b, c, z] = \frac{ab}{c}\,_2F_1 [a+1, b+1, c+1, z]. \end{equation*} \noindent Applying the product rule, the derivative of $f$ is: \begin{align}\label{eq:d_of_hypergeo} \frac{d}{d\rho} f(\rho) =\frac{\rho^2 \, _2F_1\left(\frac{3}{2},\frac{3}{2};\frac{k}{2}+2;\rho^2\right)}{2 \left(\frac{k}{2}+1\right)}+\, _2F_1\left(\frac{1}{2},\frac{1}{2};\frac{k}{2}+1;\rho^2\right). \end{align} The derivative is non-negative where defined, since by the definition of $\,_2F_1$, \Cref{eq:hypergeometric-sum-def}, it is a convergent sum of non-negative terms, as in the first property. For $k\geq 3$ the derivative is defined at all $\rho\in[-1, 1]$, whereas for $k=1, 2$ the derivative is defined for all $\rho\in (-1, 1)$. Hence, we will consider an interval $\rho\in [-1+\delta, 1-\delta]$ where $\delta=\epsilon$ for $k=1, 2$ and $\delta=0$ for $k\geq 3$. By \Cref{eq:hypergeometric-sum-def}, $\,_2F_1[a, b, c, z]$ is increasing in $z$ for fixed $a$, $b$ and $c$, so we may upper bound the derivative in the interval $\rho\in [-1+\delta, 1-\delta]$ as: $$ \left| \frac{d}{d\rho} f(\rho)\right| \leq \left. \frac{d}{d\rho} f(\rho)\right|_{\rho=1-\delta}= \frac{(1-\delta)^2 \, _2F_1\left(\frac{3}{2},\frac{3}{2};\frac{k}{2}+2;(1-\delta)^2\right)}{2 \left(\frac{k}{2}+1\right)}+\, _2F_1\left(\frac{1}{2},\frac{1}{2};\frac{k}{2}+1;(1-\delta)^2\right), $$ establishing the third and fourth property.
For the last property, we compute the second derivative of $f$ using \Cref{eq:d_of_hypergeo}: \begin{equation*} \frac{d^2}{d\rho^2}f(\rho) = \frac{9\rho^3 \, _2F_1\left(\frac{5}{2},\frac{5}{2};\frac{k}{2}+3;\rho^2\right)}{4 \left(\frac{k}{2}+1\right) \left(\frac{k}{2}+2\right)} + \frac{3\rho \, _2F_1\left(\frac{3}{2},\frac{3}{2};\frac{k}{2}+2;\rho^2\right)}{2 \left(\frac{k}{2}+1\right)}, \end{equation*} which is non-negative for $\rho \in [0,1)$ by the definition \Cref{eq:hypergeometric-sum-def}. Note that $\frac{d^2}{d\rho^2}f(\rho)$ fails to be defined at $\rho=1$ in general since $\,_2F_1[a, b;c;1]$ fails to be absolutely convergent when $a+b>c$ and $5/2+5/2$ may be larger than $k/2+3$ depending on $k$. However, convexity in $[0, 1)$ and continuity in $[0, 1]$ of the function itself imply convexity in $[0, 1]$. \end{proof} \section{Preliminaries}\label{sec:hardness-prelims} \ignore{ \subsection{The \text{\sc Max-Cut}\xspace problem} \begin{definition}[Weighted graph] A \emph{weighted graph} $G = (V, E, w)$ is an undirected graph with weights on the edges specified by $w: E \rightarrow \R^{\geq 0}$. The weights are assumed to sum to~$1$, i.e.\ $\sum_{e \in E} w(e) = 1$, and so they specify a probability distribution on the edges. We will write $\boldsymbol{e} \sim E$ or $(\bu, \bv) \sim E$ for a random edge sampled from this distribution. We will often use the word \emph{graph} as shorthand for weighted graph. \end{definition} \begin{definition}[\text{\sc Max-Cut}\xspace]\label{def:max-cut} Given a graph $G$, \text{\sc Max-Cut}\xspace is the problem of partitioning the vertices of~$G$ into two sets to maximize the probability that a random edge is cut. The optimum value is given by \begin{equation*} \text{\sc Max-Cut}\xspace(G) = \max_{f:V \rightarrow \{\pm 1\}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{2} - \tfrac{1}{2} f(\bu) f(\bv)].
\end{equation*} \end{definition} } \subsection{Pauli matrices} \begin{definition}[Pauli matrices]\label{def:pauli} The \emph{Pauli matrices} are the Hermitian matrices \begin{equation*} X = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}. \end{equation*} \end{definition} \begin{notation} We will generally use $P$ or $Q$ for a variable in $\{I, X, Y, Z\}$. \end{notation} \begin{proposition}[Properties of the Pauli matrices]\label{prop:pauli-props} The Pauli matrices have the following properties. \begin{enumerate} \item $X^2 = Y^2 = Z^2 = I$. \item $XY = i Z$, $YZ = iX$, and $ZX = i Y$. \item $XY = -YX$, $YZ = - ZY$, and $ZX = - XZ$. \item $\tr[X] = \tr[Y] = \tr[Z] = 0$. \end{enumerate} \end{proposition} \subsection{On \text{\sc Quantum} \text{\sc Max-Cut}\xspace and the Heisenberg model}\label{sec:qmaxcut-heisenberg} The \emph{quantum Heisenberg model} is a family of Hamiltonians first studied by Heisenberg in~\cite{Hei28}. Given an unweighted graph $G = (V, E)$, a Hamiltonian from this model is written as \begin{equation*} H = - \E_{(u, v) \in E} (J_X \cdot X_u X_v + J_Y \cdot Y_u Y_v + J_Z \cdot Z_u Z_v) - m \sum_{u \in V} Z_u, \end{equation*} where $P_u$ for $P \in \{X, Y, Z\}$ refers to the Pauli matrix~$P$ applied to the $u$-th qubit, $J_X, J_Y, J_Z$ are real-valued coefficients known as \emph{coupling constants}, and $m$ is a real-valued coefficient known as the \emph{external magnetic field}. As is typical in Hamiltonian complexity, and unlike in \text{\sc Quantum} \text{\sc Max-Cut}\xspace, the ground state energy of this Hamiltonian is defined to be its \emph{minimum} (rather than maximum) eigenvalue, and the ground state is defined to be the corresponding eigenvector. 
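The algebraic identities in \Cref{prop:pauli-props} are relied on repeatedly in what follows; they can also be machine-checked directly. The following is a small pure-Python sketch (the helper names are our own, and no external libraries are assumed):

```python
# Machine check of the Pauli identities (2x2 complex matrices, pure Python).
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(2) for j in range(2))

# 1. Each Pauli squares to the identity.
assert all(close(matmul(P, P), I) for P in (X, Y, Z))
# 2. XY = iZ, YZ = iX, ZX = iY.
assert close(matmul(X, Y), scale(1j, Z))
assert close(matmul(Y, Z), scale(1j, X))
assert close(matmul(Z, X), scale(1j, Y))
# 3. Distinct Paulis anti-commute.
assert close(matmul(X, Y), scale(-1, matmul(Y, X)))
assert close(matmul(Y, Z), scale(-1, matmul(Z, Y)))
assert close(matmul(Z, X), scale(-1, matmul(X, Z)))
# 4. Each Pauli is traceless.
assert all(P[0][0] + P[1][1] == 0 for P in (X, Y, Z))
```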
The \emph{ferromagnetic case} refers to the case when $J_X, J_Y, J_Z \geq 0$, in which case neighboring qubits tend to have the same values in the $X,Y,Z$ bases, and the \emph{anti-ferromagnetic case} is when $J_X, J_Y, J_Z \leq 0$, in which case they have opposing values. The \emph{anti-ferromagnetic Heisenberg XYZ model} (which we will henceforth simply refer to as the ``Heisenberg model''), is the case when $J_X = J_Y = J_Z = -1$ and $m = 0$. It is natural to allow for the graph $G = (V, E, w)$ to be weighted, in which case we can write a Hamiltonian from this model as \begin{equation*} H_G^{\text{\sc Heis}} = \E_{(\bu, \bv) \sim E}[X_{\bu} X_{\bv} + Y_{\bu} Y_{\bv} + Z_{\bu} Z_{\bv}]. \end{equation*} As we have mentioned before, \text{\sc Quantum} \text{\sc Max-Cut}\xspace was defined to be a natural maximization version of the Heisenberg model. Indeed, if $H_G$ is the \text{\sc Quantum} \text{\sc Max-Cut}\xspace instance corresponding to~$G$, then $H_G = (I - H_G^{\text{\sc Heis}})/4$. This means that if $\ket{\psi}$ is the minimum energy state of $H_G^{\text{\sc Heis}}$ and has energy $\nu$, then $\ket{\psi}$ is also the \emph{maximum} energy state of $H_G$ and has energy $(1-\nu)/4$. Where the two variants differ is in their approximability. As we have seen throughout this work, one can achieve a constant-factor approximation to the \text{\sc Quantum} \text{\sc Max-Cut}\xspace objective in polynomial time. On the other hand, the best known approximation algorithm for the Heisenberg model objective is due to~\cite{BGKT19} and achieves a $1/O(\log(n))$ approximation. In particular, if the minimum energy of $H_G^{\text{\sc Heis}}$ is $\nu$, this algorithm finds a product state with energy no greater than $\nu/O(\log(n))$. The source of this difference is the identity term $I \ot I$ in the \text{\sc Quantum} \text{\sc Max-Cut}\xspace objective, which ``inflates'' the energy of a state relative to its energy in the Heisenberg model. 
For example, a tensor product of maximally mixed qubits always has objective value~$1/4$ in \text{\sc Quantum} \text{\sc Max-Cut}\xspace due to these identity terms (which, in turn, implies one can always trivially achieve an approximation ratio of $1/4$). In the Heisenberg model, however, its objective value is~$0$, and so it gives no approximation to the optimum value. An analogous situation occurs in the classical world, where the \text{\sc Max-Cut}\xspace objective \begin{equation*} \max_{f:V\rightarrow \{-1, 1\}} \E_{(\bu, \bv) \sim E}[\tfrac{1}{2} - \tfrac{1}{2} f(\bu) f(\bv)] \end{equation*} has a constant-factor $0.878567$-approximation~\cite{GW95}, but the shifted and rescaled objective \begin{equation*} \min_{f:V\rightarrow \{-1, 1\}} \E_{(\bu, \bv) \sim E}[f(\bu) f(\bv)] \end{equation*} gives us the (anti-ferromagnetic) \emph{Ising model} problem, for which the best-known algorithm is due to Charikar and Wirth~\cite{CW04} and achieves an approximation ratio of $1/O(\log(n))$. The Heisenberg model is ``notoriously difficult to solve even on bipartite graphs, in contrast to \text{\sc Max-Cut}\xspace''~\cite{GP19}. Only a few explicit solutions have been found, several of which are well-known results in the physics literature. These include the Heisenberg model on the cycle graph, whose solution due to Bethe is known as the ``Bethe ansatz''~\cite{Bet31}, and on the complete bipartite graph, known as the ``Lieb-Mattis model''~\cite{LM62}. To our knowledge, \cite[Section 5.2]{CM16} contains a complete list of known explicit solutions. This difficulty in finding solutions for the Heisenberg model was explained by the works of~\cite{CM16,PM17}, who showed that it is a $\QMA$-complete problem. This implies that \text{\sc Quantum} \text{\sc Max-Cut}\xspace is also $\QMA$-complete.
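The single-edge version of the correspondence $H_G = (I - H_G^{\text{\sc Heis}})/4$ can be sanity-checked numerically: the singlet is a $(-3)$-eigenvector of $X \ot X + Y \ot Y + Z \ot Z$ (hence a $1$-eigenvector of the \text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction), while the triplet states are $(+1)$-eigenvectors (hence $0$-eigenvectors). A pure-Python sketch, with helper functions of our own:

```python
import math

# Single-edge check: h_heis = X⊗X + Y⊗Y + Z⊗Z has the singlet as a
# (-3)-eigenvector and the triplet states as (+1)-eigenvectors, so
# h = (I⊗I - h_heis)/4 has energy 1 on the singlet and 0 on the triplets.
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def kron(A, B):
    m, d = len(B), len(A) * len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(d)]
            for i in range(d)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def apply(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

h_heis = mat_add(mat_add(kron(X, X), kron(Y, Y)), kron(Z, Z))

s = 1 / math.sqrt(2)
singlet = [0, s, -s, 0]                           # (|01> - |10>)/sqrt(2)
triplets = [[1, 0, 0, 0], [0, s, s, 0], [0, 0, 0, 1]]

# Singlet: eigenvalue -3 for h_heis, hence energy (1 - (-3))/4 = 1 for h.
assert all(abs(w + 3 * v) < 1e-12 for w, v in zip(apply(h_heis, singlet), singlet))
# Triplets: eigenvalue +1 for h_heis, hence energy (1 - 1)/4 = 0 for h.
for t in triplets:
    assert all(abs(w - v) < 1e-12 for w, v in zip(apply(h_heis, t), t))
```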
\subsection{Alternative expressions for the \text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction} There are several alternative ways of writing the \text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction \begin{equation*} h = \tfrac{1}{4}(I \ot I - X \ot X - Y \ot Y - Z \ot Z) \end{equation*} which are common in the literature. The first involves the singlet state. \begin{definition}[Singlet state] The \emph{two-qubit singlet state} is \begin{equation*} \ket{s} = \tfrac{1}{\sqrt{2}} \ket{01} - \tfrac{1}{\sqrt{2}} \ket{10}. \end{equation*} It is also known as the \emph{two-qubit anti-symmetric state}, and as the element $\ket{\Psi^-}$ of the \emph{Bell basis} of two-qubit states \begin{equation*} \ket{\Phi^{\pm}} = \tfrac{1}{\sqrt{2}} \ket{00} \pm \tfrac{1}{\sqrt{2}} \ket{11}, \quad \ket{\Psi^{\pm}} = \tfrac{1}{\sqrt{2}} \ket{01} \pm \tfrac{1}{\sqrt{2}} \ket{10}. \end{equation*} \end{definition} \ignore{ \begin{definition}[The anti-ferromagnetic Heisenberg interaction] The \emph{anti-ferromagnetic Heisenberg interaction} is the $2$-qubit operator $ h = \ket{s}\bra{s}. $ \end{definition} } The following proposition gives a convenient expression for the \text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction as the projector on the singlet state. \begin{proposition}[Rewriting the \text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction]\label{prop:rewrite-interaction} \begin{equation*} h = \tfrac{1}{4}\cdot (I \ot I - X \ot X) \cdot (I \ot I - Z \ot Z) = \ket{s}\bra{s}. \end{equation*} \end{proposition} \begin{proof} The first equality follows from $(X \ot X) (Z \ot Z) = - Y \ot Y$. We verify the second equality by checking that both sides have the same eigendecomposition. Let \begin{equation}\label{eq:bell-basis} \tfrac{1}{\sqrt{2}}\ket{0,a} + \tfrac{1}{\sqrt{2}}(-1)^b \ket{1, 1+a} \end{equation} be a member of the Bell basis, for $a, b \in \{0, 1\}$. This is a $1$-eigenvector of~$\ket{s}\bra{s}$ if $a, b = 1$ and a $0$-eigenvector otherwise.
Now we verify this holds for the LHS: \begin{align*} & \tfrac{1}{4} \cdot (I \ot I - X \ot X) \cdot (I \ot I - Z \ot Z) \cdot \big(\tfrac{1}{\sqrt{2}}\ket{0,a} + \tfrac{1}{\sqrt{2}}(-1)^b \ket{1, 1+a}\big)\\ ={}&\tfrac{1}{4\sqrt{2}} \cdot (I \ot I - X \ot X) \cdot \big(\ket{0,a} - (-1)^a \ket{0, a} + (-1)^b \ket{1, 1+a}- (-1)^{b} \cdot (-1)^{2+a} \ket{1, 1+a}\big)\\ ={}&\tfrac{1}{4\sqrt{2}} \cdot (1 - (-1)^a) \cdot (I \ot I - X \ot X) \cdot \big(\ket{0,a} + (-1)^b \ket{1, 1+a}\big)\\ ={}&\tfrac{1}{4\sqrt{2}} \cdot (1 - (-1)^a) \cdot \big(\ket{0,a} - \ket{1, 1+a} + (-1)^b \ket{1, 1+a} - (-1)^b \ket{0, a}\big)\\ ={}& \tfrac{1}{4\sqrt{2}}\cdot (1 - (-1)^a) \cdot (1- (-1)^b) \cdot (\ket{0, a} - \ket{1, 1+a}). \end{align*} If $a, b = 1$, then this is equal to~\eqref{eq:bell-basis}, showing that it is a $1$-eigenvector. Otherwise, this is zero, which completes the proof. \end{proof} \ignore{ \begin{definition}[The anti-ferromagnetic Heisenberg model] Let $G = (V, E, w)$ be a graph known as the \emph{interaction graph}. Consider a quantum state in $(\C^2)^{V}$ containing a qubit for each vertex $v \in V$. For any $u, v \in V$, we write $h_{u, v}$ as shorthand for the operator \begin{equation*} h_{u, v} \otimes I_{V \setminus \{u, v\}}, \end{equation*} where~$h$ acts on the~$u$ and~$v$ qubits and~$I$ acts on the rest. The corresponding instance of the \emph{anti-ferromagnetic Heisenberg model} is given by \begin{equation*} H_G = \E_{(\bu, \bv) \sim E} h_{\bu, \bv}. \end{equation*} \end{definition} \begin{definition}[Energy] Let $H_G$ be an instance of the \emph{anti-ferromagnetic Heisenberg model}. Its \emph{maximum energy} is \begin{equation*} \text{\sc QMax-Cut}\xspace(G) = \lambda_{\mathrm{max}} (H_G) = \max_{\ket{\psi} \in (\C^2)^{V}} \bra{\psi} H_G \ket{\psi}. \end{equation*} We may also refer to this as the \emph{value} of $H_G$. 
\end{definition} } Though we will not need it, we note the following additional way of rewriting the \text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction for didactic purposes. The proof is left to the reader. \begin{proposition} \begin{equation*} h = \tfrac{1}{2}\cdot (I \ot I - \mathsf{SWAP}), \end{equation*} where $\mathsf{SWAP}$ is the two-qubit swap gate. \end{proposition} \subsection{Product states}\label{sec:product-states} \ignore{ \begin{definition}[Product state value] The \emph{product state value of $H_G$} is \begin{equation*} \text{\sc Prod}\xspace(G) = \max_{\forall v \in V, \ket{\psi_v} \in \C^2} \bra{\psi_G} H_G \ket{\psi_G}, \end{equation*} where $\ket{\psi_G} = \otimes_{v \in V} \ket{\psi_v}$. \end{definition} \begin{definition}[Balls and spheres] Given a dimension $d \geq 1$, the $d$-dimensional unit ball and sphere are given by \begin{equation*} B^d = \{x \in \R^d \mid \Vert x \Vert \leq 1\}, \qquad S^{d-1} = \{x \in \R^d \mid \Vert x \Vert = 1\}. \end{equation*} \end{definition} } The following definition gives a convenient decomposition for single-qubit quantum states. It can be derived using the properties in \Cref{prop:pauli-props}. \begin{definition}[Bloch spheres and Bloch vectors]\label{def:bloch_vectors} Let $\rho$ be a one qubit density matrix. Then there exists a coefficient vector $c = (c_X, c_Y, c_Z) \in B^3$ such that \begin{equation*} \rho = \frac{1}{2}\cdot (I + c_X X + c_Y Y + c_Z Z). \end{equation*} In addition, $\rho$ is a pure state if and only if $\Vert c\Vert = 1$; equivalently, if $c \in S^2$. We'll refer to the vector $c$ as the \textit{Bloch vector} for $\rho$. \end{definition} Using this, we can now prove the alternate form for the product state value. 
\begin{proposition}[Rewriting the product state value; \Cref{prop:rewrite-product} restated]\label{prop:rewrite-product-restated} \begin{equation*} \text{\sc Prod}\xspace(G) = \max_{f:V \rightarrow S^2} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{1}{4} \langle f(\bu), f(\bv)\rangle]. \end{equation*} \end{proposition} \begin{proof} Let $\ket{\psi_G} = \otimes_{v \in V} \ket{\psi_v}$ be a product state. For each $v \in V$, let $f(v) = (v_X, v_Y, v_Z) \in S^2$ be its Bloch sphere coefficient vector. In other words, \begin{equation*} \ket{\psi_v}\bra{\psi_v} = \frac{1}{2}\cdot (I + v_X X + v_Y Y + v_Z Z). \end{equation*} Then the energy of $\ket{\psi_G}$ is given by \begin{equation}\label{eq:first-step-of-energy-calculation} \tr[H_G \cdot \ket{\psi_G}\bra{\psi_G}] = \E_{(\bu, \bv) \sim E}\tr[h_{\bu, \bv} \cdot \ket{\psi_G}\bra{\psi_G}] = \E_{(\bu, \bv) \sim E}\tr[h_{\bu, \bv} \cdot \ket{\psi_{\bu}}\bra{\psi_{\bu}} \ot \ket{\psi_{\bv}} \bra{\psi_{\bv}}]. \end{equation} For any edge $(u, v) \in E$, the energy of the $(u,v)$-interaction is \begin{align*} &\tr[h_{u, v} \cdot \ket{\psi_u}\bra{\psi_u} \ot \ket{\psi_v} \bra{\psi_v}]\\ ={}& \tr\Big[\frac{1}{4} \cdot ( I \ot I - X \ot X - Y \ot Y - Z \ot Z)\\ &\qquad \qquad \qquad \qquad\cdot \frac{1}{2}\cdot (I + u_X X + u_Y Y + u_Z Z) \ot \frac{1}{2}\cdot (I + v_X X + v_Y Y + v_Z Z)\Big]\\ ={}& \frac{1}{4} \cdot (1 - u_X v_X - u_Y v_Y - u_Z v_Z) \tag{by \Cref{prop:pauli-props}}\\ ={}& \frac{1}{4} \cdot (1 - \langle f(u), f(v)\rangle). \end{align*} Substituting into \Cref{eq:first-step-of-energy-calculation}, the energy of $\ket{\psi_G}$ is \begin{equation*} \E_{(\bu, \bv) \sim E}[\tfrac{1}{4} - \tfrac{1}{4}\langle f(\bu), f(\bv)\rangle]. \end{equation*} This concludes the proof. \end{proof} \begin{remark}\label{rem:prod-state-bad} One consequence of \Cref{prop:rewrite-product-restated} is that $\text{\sc Prod}\xspace(G) \leq 1/2$ always, even though $\text{\sc QMax-Cut}\xspace(G)$ can be as large as~$1$. 
Indeed, $\bra{\psi_1, \psi_2} h \ket{\psi_1,\psi_2} \leq 1/2$ for any qubit states $\ket{\psi_1}, \ket{\psi_2} \in \C^2$, even though $\bra{s} h \ket{s} = 1$ by \Cref{prop:rewrite-interaction}. \end{remark} Although \Cref{rem:prod-state-bad} shows that the product states can in general give a poor approximation to the energy, there are interesting special cases in which they still give good approximations. The following result shows that this holds provided that the degree of~$G$ is large. \begin{theorem}[Corollary 4 of~\cite{BH16}]\label{thm:BH} Let $G = (V, E, w)$ be a $D$-regular graph with uniform edge weights. Then \begin{equation*} \text{\sc Prod}\xspace(G) \geq \text{\sc QMax-Cut}\xspace(G) - O\left(\frac{1}{D^{1/3}}\right). \end{equation*} \end{theorem} Unfortunately, we will not be able to apply this theorem directly because our graphs will not be precisely unweighted, $D$-regular graphs, but instead high-degree graphs with different weights on different edges. In this case, Brandão and Harrow provide the following bound. \begin{theorem}[Theorem 8 of~\cite{BH16}]\label{thm:BH-nonuniform} Let $G = (V, E, w)$ be a weighted graph. Define \begin{enumerate} \item the probability distribution $(p_u)_{u \in V}$ such that $p_u = \tfrac{1}{2} \Pr_{\boldsymbol{e} \sim E}[\text{$\boldsymbol{e}$ contains $u$}]$, \item the $|V| \times |V|$ matrix $A$ such that $A_{u, v} = \Pr_{(\bu', \bv') \sim E}[\bu' = u \mid \bv' = v].$ \end{enumerate} Then the following inequality holds. \begin{equation*} \text{\sc QMax-Cut}\xspace(G) \leq \text{\sc Prod}\xspace(G) + 20\cdot (\tr[A^2] \Vert p \Vert_2^2)^{1/8} + \Vert p \Vert_2^2. \end{equation*} \end{theorem} We note that in the case of a $D$-regular graph, $\tr[A^2] = n/D$ and $\Vert p \Vert_2^2 = 1/n$. We will use the following corollary, which we will find easier to apply. \begin{corollary}\label{cor:BH-nonuniform-easy-to-use} In the setting of \Cref{thm:BH-nonuniform}, the following inequality holds. 
\begin{equation*} \text{\sc QMax-Cut}\xspace(G) \leq \text{\sc Prod}\xspace(G) + 20\cdot (n \cdot \max_{u, v} \{A_{u, v}\} \cdot \max_u \{p_u\})^{1/8} + \max_u \{p_u\}, \end{equation*} where $n$ is the number of vertices in~$G$. \end{corollary} \begin{proof} First, we bound the $\tr[A^2]$ term: \begin{equation*} \tr[A^2] = \sum_{u, v} A_{u, v} \cdot A_{v, u} \leq \max_{v, u} \{A_{v, u}\} \cdot \sum_{u, v} A_{u, v} = \max_{v, u} \{A_{v, u}\} \cdot \sum_{v} 1 = \max_{v, u} \{A_{v, u}\} \cdot n. \end{equation*} Next, we bound the $\Vert p \Vert_2^2$ term: \begin{equation*} \Vert p \Vert_2^2 = \sum_{u} p_u^2 \leq \max_u \{p_u\} \cdot \sum_u p_u = \max_u \{p_u\}. \end{equation*} Substituting these bounds into \Cref{thm:BH-nonuniform} completes the proof. \end{proof} \subsection{Deriving the basic SDPs}\label{sec:sdp_proofs} Now we will show how to derive the basic SDP for \text{\sc Quantum} \text{\sc Max-Cut}\xspace. As a warm-up, we will recall a standard method for deriving the \text{\sc Max-Cut}\xspace SDP \begin{equation}\label{eq:mcsdp-restated} \text{\sc SDP}_{\text{\sc MC}}\xspace(G) = \max_{f:V\rightarrow S^{n-1}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{2} - \tfrac{1}{2} \langle f(\bu), f(\bv)\rangle]. \end{equation} One way of deriving this SDP is as follows: let $f:V\rightarrow \{-1, 1\}$ be an assignment to the vertices. Consider the $n \times n$ matrix $M$ defined as $M(u, v) = f(u) \cdot f(v)$. Then~$M$ is a real, PSD matrix such that $M(v, v) = 1$ for all $v \in V$. Furthermore, we can write the value of~$f$ in terms of~$M$ as \begin{equation}\label{eq:objective-in-terms-of-M} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{2} - \tfrac{1}{2} M(\bu, \bv)]. \end{equation} Now, we relax our problem and consider optimizing \Cref{eq:objective-in-terms-of-M} over all real, PSD matrices~$M$ such that $M(v, v) = 1$. Such an~$M$ can be written as the Gram matrix of a set of real vectors of dimension~$n$; i.e.
there is a function $f:V \rightarrow \R^{n}$ such that $M(u, v) = \langle f(u), f(v)\rangle$. This yields the SDP in \Cref{eq:mcsdp-restated}. \paragraph{The basic SDP for \text{\sc Quantum} \text{\sc Max-Cut}\xspace.} Now we show how to derive the basic SDP for \text{\sc Quantum} \text{\sc Max-Cut}\xspace. To begin, let $G = (V, E, w)$ be an $n$-vertex graph. Let $\ket{\psi} \in (\C^2)^{V}$ be a quantum state. Consider the set of $3n$ vectors $X_u \ket{\psi}$, $Y_u \ket{\psi}$, $Z_u\ket{\psi}$, where $P_u$ denotes the Pauli matrix~$P$ acting on qubit~$u$. The Gram matrix of these vectors, denoted $M(\cdot,\cdot)$, is the $3n \times 3n$ matrix whose rows and columns are indexed by Pauli matrices $P_u$ such that \begin{equation*} M(P_u, Q_v) = \bra{\psi} P_u Q_v\ket{\psi}. \end{equation*} Using $M$, we can express the energy of $\ket{\psi}$ as follows: \begin{align} \bra{\psi}H_G\ket{\psi} = \E_{(\bu, \bv) \sim E} \bra{\psi} h_{\bu, \bv}\ket{\psi} & = \E_{(\bu, \bv) \sim E} \tfrac{1}{4} \cdot \bra{\psi}( I_{\bu} \ot I_{\bv} - X_{\bu} \ot X_{\bv} - Y_{\bu} \ot Y_{\bv} - Z_{\bu} \ot Z_{\bv})\ket{\psi}\nonumber\\ &= \tfrac{1}{4}\cdot\E_{(\bu, \bv) \sim E}[1 - M(X_{\bu}, X_{\bv}) - M(Y_{\bu}, Y_{\bv}) - M(Z_{\bu}, Z_{\bv})].\label{eq:new-objective} \end{align} Let us derive some constraints on this matrix: \begin{enumerate} \item \textbf{PSD:} $M$ is Hermitian and PSD. \item \textbf{Unit length:} For each $P_u$, $M(P_u, P_u) = 1$. \item \textbf{Commuting Paulis:} For each $P_u, Q_v$ such that $u \neq v$, $P_u$ commutes with~$Q_v$. This implies that $ M(P_u, Q_v) = M(Q_v, P_u) $ and is therefore real because $M$ is Hermitian. \item \textbf{Anti-commuting Paulis:} For each $P_u, Q_u$ such that $P \neq Q$, $P_u$ anti-commutes with $Q_u$.\label{item:anticommute} This implies that $ M(P_u, Q_u) = - M(Q_u, P_u) $ and therefore has no real part because $M$ is Hermitian. 
\end{enumerate} Now we relax our problem and consider optimizing \Cref{eq:new-objective} over all matrices~$M$ that satisfy these four conditions. This is a relaxation because not all matrices~$M$ correspond to Gram matrices of vectors of the form $P_u \ket{\psi}$. Prior to stating the SDP, we perform one final simplification. Given such an~$M$, consider the matrix $M' = \tfrac{1}{2}(M + M^T)$. This satisfies all four conditions, has the same energy as~$M$, and moreover satisfies $M'(P_u, Q_u) = 0$ for $P \neq Q$. We can therefore replace~\Cref{item:anticommute} with this stronger condition, which implies that~$M'$ is real. Thus, $M'$ is a real, symmetric $3n \times 3n$ PSD matrix, so we can write it as the Gram matrix of a set of real vectors of dimension $3n$. In other words, there are functions \begin{equation*} f_X, f_Y, f_Z: V \rightarrow \R^{3n} \end{equation*} such that $M'(P_u, Q_v) = \langle f_P(u) , f_Q(v)\rangle$. Putting everything together, we have the following SDP. \begin{proposition}[\text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP]\label{prop:sum-of-squares-equals-sdp} Let $G = (V, E, w)$ be an $n$-vertex graph. The value of the SDP for \text{\sc Quantum} \text{\sc Max-Cut}\xspace can be written as \begin{align} \text{\sc SDP}_{\text{\sc QMC}}\xspace(G) = \max~& \frac{1}{4} \cdot \E_{(\bu, \bv) \sim E}[1 - \langle f_X(\bu), f_X(\bv)\rangle- \langle f_Y(\bu), f_Y(\bv)\rangle - \langle f_Z(\bu), f_Z(\bv)\rangle], \label{eq:heis-first-sdp}\\ \mathrm{s.t.}~& \langle f_P(v), f_Q(v) \rangle = 0, \quad \forall v \in V,~P \neq Q \in \{X, Y, Z\},\nonumber\\ &f_X, f_Y, f_Z:V \rightarrow S^{3n-1}.\nonumber \end{align} \end{proposition} This SDP can also be viewed as the degree-2 relaxation for \text{\sc Quantum} \text{\sc Max-Cut}\xspace in the \emph{non-commutative Sum of Squares (ncSoS)} hierarchy. We give a didactic treatment of this perspective in \Cref{sec:ncsos} (which is not necessary to understand the rest of this paper). 
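The Gram-matrix constraints derived above can be checked numerically on a small example. The following pure-Python sketch fixes an arbitrary entangled $2$-qubit state (our own choice, as are the helper names) and verifies the unit-length, commuting, and anti-commuting conditions for $M(P_u, Q_v) = \bra{\psi} P_u Q_v \ket{\psi}$:

```python
# Check of the Gram-matrix constraints for M(P_u, Q_v) = <psi| P_u Q_v |psi>
# on a fixed, arbitrary 2-qubit state (pure Python, no external libraries).
I2 = [[1, 0], [0, 1]]
PAULIS = {"X": [[0, 1], [1, 0]],
          "Y": [[0, -1j], [1j, 0]],
          "Z": [[1, 0], [0, -1]]}

def kron(A, B):
    m, d = len(B), len(A) * len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(d)]
            for i in range(d)]

def apply(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

psi = [0.5, 0.5j, -0.5, 0.5]        # an arbitrary normalized entangled state
assert abs(inner(psi, psi) - 1) < 1e-12

op1 = {P: kron(A, I2) for P, A in PAULIS.items()}   # Pauli P on qubit 1
op2 = {P: kron(I2, A) for P, A in PAULIS.items()}   # Pauli P on qubit 2

def M(A, B):
    return inner(psi, apply(A, apply(B, psi)))

# Unit length: M(P_u, P_u) = 1.
assert all(abs(M(op1[P], op1[P]) - 1) < 1e-12 for P in PAULIS)
# Commuting Paulis (distinct qubits): entries are real.
assert all(abs(M(op1[P], op2[Q]).imag) < 1e-12 for P in PAULIS for Q in PAULIS)
# Anti-commuting Paulis (same qubit, P != Q): entries are purely imaginary.
assert all(abs(M(op1[P], op1[Q]).real) < 1e-12
           for P in PAULIS for Q in PAULIS if P != Q)
```

Note that the commuting and anti-commuting checks hold for every state, not just this one, since they follow from Hermiticity of the underlying operators.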
We now further simplify the SDP relaxation for \text{\sc Quantum} \text{\sc Max-Cut}\xspace and derive the expression from \Cref{def:heis-sdp}. \begin{proposition}[\text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP, simplified version; \Cref{def:heis-sdp}]\label{prop:equivalence} \begin{equation}\label{eq:sdp-mc-heis} \text{\sc SDP}_{\text{\sc QMC}}\xspace(G) = \max_{f:V\rightarrow S^{n-1}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{3}{4} \langle f(\bu), f(\bv)\rangle]. \end{equation} \end{proposition} \begin{proof} We first show how to convert a solution for~\eqref{eq:heis-first-sdp} into a solution for~\eqref{eq:sdp-mc-heis} without decreasing the value. To begin, we can rewrite~\eqref{eq:heis-first-sdp} as \begin{equation}\label{eq:split-up} \tfrac{1}{3} \cdot \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{3}{4} \langle f_X(\bu), f_X(\bv)\rangle] +\tfrac{1}{3} \cdot \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{3}{4} \langle f_Y(\bu), f_Y(\bv)\rangle] +\tfrac{1}{3} \cdot \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{3}{4} \langle f_Z(\bu), f_Z(\bv)\rangle]. \end{equation} Pick the term $P \in \{X, Y, Z\}$ with the largest value, and set $f = f_P$. Then the value of $f$ in \eqref{eq:sdp-mc-heis} is at least the value of $f_X, f_Y, f_Z$ in \eqref{eq:split-up}. The only caveat is that $f$ maps into $S^{3n-1}$ rather than $S^{n-1}$. However, $f$ only outputs $n$ different vectors, so these can be represented in~$n$-dimensional space while preserving inner products. Next, we show the reverse direction. Let $f:V\rightarrow S^{n-1}$ be a solution to \eqref{eq:sdp-mc-heis}. We define \begin{equation*} f_X(v) = e_1 \otimes f(v), \quad f_Y(v) = e_2 \otimes f(v), \quad f_Z(v) = e_3 \otimes f(v), \end{equation*} where $e_1, e_2, e_3$ are standard basis vectors in $\R^3$. Then $\langle f_P(v), f_Q(v)\rangle = 0$ for $P \neq Q$ because $\langle e_i, e_j\rangle = 0$ for $i \neq j$. In addition, $\langle f_P(u), f_P(v) \rangle = \langle f(u), f(v)\rangle$.
Thus, the value of this assignment is \begin{align*} \eqref{eq:heis-first-sdp} & = \frac{1}{4} \cdot \E_{(\bu, \bv) \sim E}[1 - \langle f_X(\bu), f_X(\bv)\rangle- \langle f_Y(\bu), f_Y(\bv)\rangle - \langle f_Z(\bu), f_Z(\bv)\rangle]\\ & = \frac{1}{4} \cdot \E_{(\bu, \bv) \sim E}[1 - \langle f(\bu), f(\bv)\rangle- \langle f(\bu), f(\bv)\rangle - \langle f(\bu), f(\bv)\rangle]\\ & = \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{3}{4} \langle f(\bu), f(\bv)\rangle]. \end{align*} As a result, the value remains unchanged. This completes the proof. \end{proof} \subsection{Projection rounding} We recall the performance of projection rounding, first stated in \Cref{eq:might-reference-later}. \begin{theorem}[{\cite[Lemma 2.1]{BOV10}}]\label{thm:exact-formula-for-average-inner-product} Let $-1 \leq \rho \leq 1$, and let $u$ and $v$ be two $n$-dimensional unit vectors such that $\langle u, v\rangle = \rho$. Let $\bZ$ be a random $k \times n$ matrix consisting of $kn$ i.i.d.\ standard Gaussians. Then \begin{equation*} F^*(k, \rho) := \E_{\bZ}\left\langle \frac{\bZ u}{\Vert \bZ u\Vert}, \frac{\bZ v}{\Vert \bZ v\Vert}\right\rangle = \frac{2}{k}\left(\frac{\Gamma((k+1)/2)}{\Gamma(k/2)}\right)^2 \langle u, v\rangle \,_2F_1\left(1/2,1/2;k/2 + 1;\langle u, v\rangle^2\right), \end{equation*} where $_2F_1(\cdot, \cdot;\cdot;\cdot)$ is the Gaussian hypergeometric function. \end{theorem} In the \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP, if an edge $(u, v)$ has value $\tfrac{1}{4} - \tfrac{3}{4} \rho$, then projection rounding will produce a solution whose value on this edge is $\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho)$ in expectation.
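The hypergeometric expression for $F^*$ is easy to work with numerically. The following Python sketch (our own; the helper name \texttt{F\_star} is ours, and scipy is assumed) checks the endpoint values $F^*(k, \pm 1) = \pm 1$, recovers the $k = 1$ identity $F^*(1, \rho) = \tfrac{2}{\pi}\arcsin\rho$ familiar from the Goemans--Williamson analysis, and numerically minimizes the two worst-case ratios of rounded edge value to SDP edge value (clearing the common factor $\tfrac14$), giving approximately $0.498$ and $0.956$.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gamma, hyp2f1

def F_star(k, rho):
    # F*(k, rho) from [BOV10, Lemma 2.1]
    c = (2.0 / k) * (gamma((k + 1) / 2) / gamma(k / 2)) ** 2
    return c * rho * hyp2f1(0.5, 0.5, k / 2 + 1, rho ** 2)

# Worst-case ratio of the rounded edge value to the SDP edge value,
# after multiplying numerator and denominator by 4:
gp = minimize_scalar(lambda r: (1 - F_star(3, r)) / (1 - 3 * r),
                     bounds=(-1, 1 / 3 - 1e-9), method='bounded')
bov = minimize_scalar(lambda r: (1 - F_star(3, r)) / (1 - r),
                      bounds=(-1, 1 - 1e-6), method='bounded')
```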
Similarly, in the product state SDP, an edge with value $\tfrac{1}{4} - \tfrac{1}{4}\rho$ will be rounded into a solution with value $\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho)$ on this edge. We can then define our approximation ratios as the worst case rounding over all values of~$\rho$. \begin{definition}[Approximation ratios]\label{def:ratios} The constant $\alpha_{\mathrm{GP}}$ is defined as the solution to the minimization problem \begin{equation*} \alpha_{\mathrm{GP}} = \min_{-1 \leq \rho < 1/3} \frac{\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho)}{\tfrac{1}{4} - \tfrac{3}{4} \rho}, \end{equation*} and the constant $\rho_{\mathrm{GP}}$ is defined as the minimizing value of~$\rho$. In addition, the constant $\alpha_{\mathrm{BOV}}$ is defined as the solution to the minimization problem \begin{equation*} \alpha_{\mathrm{BOV}} = \min_{-1 \leq \rho \leq 1} \frac{\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho)}{\tfrac{1}{4} - \tfrac{1}{4} \rho}, \end{equation*} and the constant $\rho_{\mathrm{BOV}}$ is defined as the minimizing value of~$\rho$. \end{definition} We note that the minimization for $\alpha_{\mathrm{GP}}$ is only over $\rho < 1/3$, because when $\rho \geq 1/3$ the denominator is $\leq 0$. \begin{proposition}[Formula for the optimum value]\label{prop:opt-formula} Let $f_{\mathrm{opt}}:\R^n \rightarrow S^{k-1}$ be defined by $f_{\mathrm{opt}}(x) = x_{\leq k}/\Vert x_{\leq k} \Vert$, where $x_{\leq k} = (x_1, \ldots, x_k)$. Then \begin{equation*} \E_{\bx \sim_\rho \by}\langle f_{\mathrm{opt}}(\bx), f_{\mathrm{opt}}(\by)\rangle = F^*(k, \rho). \end{equation*} \end{proposition} \begin{proof} Let $u, v \in \R^n$ be any unit vectors with $\langle u, v \rangle = \rho$. Let $\bZ$ be an $n \times n$ matrix consisting of~$n^2$ i.i.d.\ standard Gaussians. Then $\bZ u$ and $\bZ v$ are distributed as $\rho$-correlated Gaussians: each has identity covariance because $u$ and $v$ are unit vectors, and $\E[(\bZ u)_i (\bZ v)_j] = \langle u, v\rangle \cdot \delta_{ij} = \rho \cdot \delta_{ij}$.
If $\Pi_{\leq k}$ is the $k \times n$ matrix which projects a vector down to its first $k$ coordinates, we have \begin{align*} \E_{\bx \sim_\rho \by}\langle f_{\mathrm{opt}}(\bx), f_{\mathrm{opt}}(\by)\rangle &= \E_{\bZ} \langle f_{\mathrm{opt}}(\bZ u), f_{\mathrm{opt}}(\bZ v)\rangle\\ &= \E_{\bZ} \left\langle \frac{\Pi_{\leq k} \bZ u}{\Vert \Pi_{\leq k} \bZ u\Vert}, \frac{\Pi_{\leq k} \bZ v}{\Vert \Pi_{\leq k} \bZ v\Vert}\right\rangle. \end{align*} Now note that $\Pi_{\leq k} \bZ$ is distributed as a random $k \times n$ matrix consisting of $kn$ i.i.d.\ standard Gaussians. Applying \Cref{thm:exact-formula-for-average-inner-product}, this is exactly equal to $F^*(k, \rho)$. \end{proof} \subsection{Fourier analysis on the hypercube}\label{sec:fourier-analysis} We will review basic concepts in the Fourier analysis of Boolean functions. See~\cite{OD14} for further details. \begin{definition}[Fourier transform] Let $f:\{-1, 1\}^n \rightarrow \R$ be a function. Then it has a unique representation as a multilinear polynomial known as the \emph{Fourier transform}, given by \begin{equation*} f(x) = \sum_{S \subseteq [n]} \widehat{f}(S) \chi_S(x), \end{equation*} where $\widehat{f}(S)$ is a real coefficient called the \emph{Fourier coefficient}, and $\chi_S(x)$ is the monomial $\prod_{i \in S} x_i$. We extend this definition to functions $f:\{-1, 1\}^n \rightarrow \R^k$ as follows: let $f = (f_1, \ldots, f_k)$. Then \begin{equation*} f(x) = (f_1(x), \ldots, f_k(x)) = \sum_{S \subseteq [n]} (\widehat{f}_1(S), \ldots, \widehat{f}_k(S)) \chi_S(x) = \sum_{S \subseteq [n]} \widehat{f}(S) \chi_S(x), \end{equation*} where $\widehat{f}(S) = (\widehat{f}_1(S), \ldots, \widehat{f}_k(S))$ is a vector-valued Fourier coefficient. \end{definition} \begin{definition}[Variance and influences] Let $f:\{-1, 1\}^n \rightarrow \R^k$.
Then \begin{equation*} \E_{\bx\sim \{-1, 1\}^n} \Vert f(\bx)\Vert_2^2 = \sum_{S \subseteq [n]} \Vert \widehat{f}(S) \Vert_2^2. \end{equation*} Its \emph{variance} is the quantity \begin{equation*} \Var[f] = \E_{\bx \sim \{-1, 1\}^n} \Vert f(\bx) - \E[f] \Vert_2^2 = \sum_{S \subseteq [n], S \neq \emptyset} \Vert \widehat{f}(S)\Vert_2^2. \end{equation*} Given a coordinate~$i$, we define its \emph{influence} as \begin{equation*} \Inf_i[f] = \sum_{S \subseteq [n], S \ni i} \Vert \widehat{f}(S)\Vert_2^2 = \sum_{S \subseteq [n], S \ni i} \sum_j \widehat{f_j}(S)^2 = \sum_j \Inf_i[f_j]. \end{equation*} We will also need truncated versions of these two measures: \begin{equation*} \Inf^{\leq m}_i[f] = \sum_{|S| \leq m, S \ni i} \Vert \widehat{f}(S)\Vert_2^2, \qquad \Var[f^{>m}] = \sum_{|S| > m} \Vert \widehat{f}(S) \Vert_2^2. \end{equation*} \end{definition} \begin{proposition}[Only few noticeable coordinates]\label{prop:influence-bound} Let $f:\{-1, 1\}^n \rightarrow B^k$. Then there are at most $m/\delta$ coordinates~$i$ such that $\Inf^{\leq m}_i[f] \geq \delta$. \end{proposition} \begin{proof} Let $N$ be the set of all such coordinates. Then \begin{multline*} |N| \cdot \delta \leq \sum_{i \in N} \Inf^{\leq m}_i[f] = \sum_{i \in N}\sum_{|S| \leq m, S \ni i} \Vert \widehat{f}(S)\Vert_2^2 = \sum_{|S| \leq m} |S \cap N| \cdot \Vert \widehat{f}(S)\Vert_2^2\\ \leq \sum_{|S| \leq m} m\cdot \Vert \widehat{f}(S)\Vert_2^2 \leq m \cdot \sum_{S \subseteq [n]} \Vert \widehat{f}(S)\Vert_2^2 = m \cdot \E_{\bx} \Vert f(\bx) \Vert_2^2 \leq m. \end{multline*} Rearranging this gives $|N| \leq m /\delta$. 
\end{proof} \begin{definition}[Correlated Boolean variables~{\cite[Definition 2.40]{OD14}}] Given a fixed $x \in \{-1, 1\}^n$, we say that $\by \in \{-1, 1\}^n$ is \emph{$\rho$-correlated to~$x$} if each coordinate $\by_i$ is sampled independently according to the following distribution: \begin{equation*} \by_i = \left\{\begin{array}{rl} x_i & \text{with probability $\tfrac{1}{2} + \tfrac{1}{2}\rho$}\\ -x_i & \text{with probability $\tfrac{1}{2} - \tfrac{1}{2}\rho$}. \end{array}\right. \end{equation*} In addition, we say that $\bx$ and $\by$ are \emph{$\rho$-correlated $n$-dimensional Boolean strings} if $\bx$ is sampled from $\{-1, 1\}^n$ uniformly at random and $\by$ is $\rho$-correlated to~$\bx$. Note that for each $i$, $\E[\bx_i \by_i] = \rho$. \end{definition} \begin{definition}[Noise stability] Let $f: \{-1, 1\}^n \rightarrow \R^k$, and let $-1 \leq \rho \leq 1$. Given an input $x \in \{-1, 1\}^n$, we write \begin{equation*} \mathbf{T}_\rho f(x) = \E_{\substack{\text{$\by$ which is}\\\text{$\rho$-correlated to~$x$}}}[f(\by)] = \sum_{S \subseteq [n]} \rho^{|S|} \widehat{f}(S) \chi_S(x). \end{equation*} Then the \emph{Boolean noise stability} of $f$ at~$\rho$ is \begin{equation*} \stab_\rho[f] = \E_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$n$-dim Boolean strings}}}\langle f(\bx), f(\by)\rangle = \E_{\bx \sim \{-1, 1\}^n} \langle f(\bx), \mathbf{T}_\rho f(\bx)\rangle = \sum_{S \subseteq [n]} \rho^{|S|} \Vert \widehat{f}(S)\Vert_2^2. \end{equation*} This coincides with the Gaussian noise stability of~$f$. To see this, note that the Fourier expansion allows us to extend~$f$'s domain to all of $\R^n$. Then \begin{equation*} \E_{\bx \sim_\rho \by}\langle f(\bx), f(\by) \rangle = \sum_{S \subseteq [n]} \rho^{|S|} \Vert \widehat{f}(S)\Vert_2^2. \end{equation*} Hence, we use $\stab_\rho[f]$ to denote both notions.
\end{definition} \section{Rank-constrained \text{\sc Max-Cut}\xspace}\label{sec:rank-constrained} We now show how our results extend to the rank-constrained \text{\sc Max-Cut}\xspace problem. Recall from \Cref{def:rank-k-maxcut} that rank-$k$ \text{\sc Max-Cut}\xspace is the problem of computing the value \begin{equation*} \text{\sc Max-Cut}\xspace_k(G) = \max_{f:V \rightarrow S^{k-1}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{2} - \tfrac{1}{2} \langle f(\bu), f(\bv)\rangle]. \end{equation*} This was introduced in the work of Briët, Oliveira, and Vallentin~\cite[Section 6]{BOV10} as the Laplacian special case of a more general problem known as the \emph{rank-constrained Grothendieck problem}. Their work lists numerous applications of the general rank-constrained Grothendieck problem, though to our knowledge there are no applications of the Laplacian special case (i.e.\ the rank-constrained \text{\sc Max-Cut}\xspace problem) aside from the $k = 3$ case, which corresponds to the product state value of \text{\sc Quantum} \text{\sc Max-Cut}\xspace, as we have seen. The BOV algorithm that we have already seen for rank-3 \text{\sc Max-Cut}\xspace (equivalently, for the product state value of \text{\sc Quantum} \text{\sc Max-Cut}\xspace) is actually the $k=3$ special case of an algorithm for rank-$k$ \text{\sc Max-Cut}\xspace for general~$k$, which we will also refer to as the ``BOV algorithm'' in this section. For general $k$, the BOV algorithm first solves the standard \text{\sc Max-Cut}\xspace SDP to produce a vector solution $f_{\mathrm{SDP}}:V \rightarrow S^{n-1}$, which it then rounds into a random function $\boldf:V \rightarrow S^{k-1}$ using projection rounding. 
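Projection rounding is also easy to simulate. The following Monte Carlo sketch (our own, assuming numpy and scipy) rounds a pair of unit vectors with inner product $\rho$ through a random $k \times n$ Gaussian matrix and reproduces the closed form $F^*(k, \rho)$ of \Cref{thm:exact-formula-for-average-inner-product} to within sampling error.

```python
import numpy as np
from scipy.special import gamma, hyp2f1

def F_star(k, rho):
    # closed form from [BOV10, Lemma 2.1]
    c = (2.0 / k) * (gamma((k + 1) / 2) / gamma(k / 2)) ** 2
    return c * rho * hyp2f1(0.5, 0.5, k / 2 + 1, rho ** 2)

rng = np.random.default_rng(1)
k, rho, N = 3, -0.6, 300_000
u = np.array([1.0, 0.0])
v = np.array([rho, np.sqrt(1 - rho ** 2)])   # <u, v> = rho
Z = rng.standard_normal((N, k, 2))           # N independent k x 2 Gaussian matrices
Zu, Zv = Z @ u, Z @ v                        # project and normalize
Zu /= np.linalg.norm(Zu, axis=1, keepdims=True)
Zv /= np.linalg.norm(Zv, axis=1, keepdims=True)
est = float(np.mean(np.sum(Zu * Zv, axis=1)))
```

(The ambient dimension is taken to be $2$ since $F^*$ does not depend on $n$.)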
To compute the algorithm's approximation ratio, they go edge-by-edge: for each edge $(u, v) \in E$, if we set $\rho_{u, v} = \langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle$, then $f_{\mathrm{SDP}}$'s value for that edge is $\tfrac{1}{2} - \tfrac{1}{2} \rho_{u, v}$, whereas the expectation of $\boldf$'s value for that edge is $\tfrac{1}{2} - \tfrac{1}{2} F^*(k, \rho_{u, v})$ (see \Cref{thm:exact-formula-for-average-inner-product} for a definition of $F^*(k, \rho)$). This motivates studying the following quantity. \begin{definition}[Approximation ratio for rank-$k$ \text{\sc Max-Cut}\xspace] Let $k \geq 1$. The constant $\akmc{k}$ is defined as the solution to the minimization problem \begin{equation*} \akmc{k} = \min_{-1 \leq \rho \leq 1} \frac{\tfrac{1}{2} - \tfrac{1}{2} F^*(k,\rho)}{\tfrac{1}{2} - \tfrac{1}{2}\rho}, \end{equation*} and the constant $\rkmc{k}$ is defined as the minimizing value of~$\rho$. \end{definition} For $k = 1$, $\akmc{1} = 0.8785\ldots$ and $\rkmc{1} = -0.689\ldots$, corresponding to the Goemans-Williamson algorithm. For $k = 3$, $\akmc{3} = 0.9563\ldots$ and $\rkmc{3} = -0.584\ldots$, corresponding to the rank-3 BOV algorithm. \cite{BOV10} also compute the $k = 2$ values numerically and find $\akmc{2} = 0.9349\ldots$ and $\rkmc{2} = -0.617\ldots$. Having defined these quantities, the expected value of $\boldf$ is \begin{equation*} \E_{\boldf} \E_{(\bu, \bv) \sim E}[\tfrac{1}{2} - \tfrac{1}{2}\langle \boldf(\bu), \boldf(\bv)\rangle] = \E_{(\bu, \bv)\sim E}[\tfrac{1}{2} - \tfrac{1}{2} F^*(k, \rho_{\bu, \bv})] \geq \akmc{k} \cdot \E_{(\bu, \bv)\sim E}[\tfrac{1}{2} - \tfrac{1}{2} \rho_{\bu, \bv}] = \akmc{k} \cdot \text{\sc SDP}_{\text{\sc MC}}\xspace(G), \end{equation*} and so the BOV algorithm has approximation ratio at least $\akmc{k}$. This gives the following theorem.
\begin{theorem}[Performance of the BOV algorithm for rank-$k$ \text{\sc Max-Cut}\xspace~\cite{BOV10}] The BOV algorithm for rank-$k$ \text{\sc Max-Cut}\xspace achieves approximation ratio $\akmc{k}$. \end{theorem} Our results on the product state value of \text{\sc Quantum} \text{\sc Max-Cut}\xspace imply that the BOV algorithm is optimal for rank-3 \text{\sc Max-Cut}\xspace. In fact, our proofs extend in a straightforward manner to show that the BOV algorithm is optimal for all constant values of~$k$. One slight technicality is that our proofs require the worst-case~$\rho$ to be negative, as this is the only regime for which our vector-valued Borell's inequality applies. The following proposition establishes this for rank-$k$ \text{\sc Max-Cut}\xspace. \begin{proposition}[Negative $\rho$ is the worst case] For all $k \geq 1$, $-1 \leq \rkmc{k} \leq 0$. \end{proposition} \begin{proof} The proposition follows from two claims about $F^*(k, \rho)$: (i) that $F^*(k, \rho)$ always has the same sign as~$\rho$, and (ii) that $|F^*(k, \rho)| \leq |\rho|$. Together, these imply that $\tfrac{1}{2} - \tfrac{1}{2} F^*(k, \rho) \geq \tfrac{1}{2} - \tfrac{1}{2} \rho$ whenever $0 \leq \rho \leq 1$, and so their ratio is always at least~$1$ for~$\rho$ in this range. But $\akmc{k}$ is an approximation ratio and so is always between $0$ and~$1$, and thus the minimizing value of~$\rho$ must be in the interval $[-1, 0]$. Now we prove the claims. We recall from \Cref{thm:exact-formula-for-average-inner-product} that \begin{equation*} F^*(k, \rho) =\frac{2}{k} \left( \frac{\Gamma((k+1)/2)}{\Gamma(k/2)}\right)^2 \rho\,\, \,_2 F_1[1/2, 1/2, k/2+1, \rho^2]. \end{equation*} Property (i) follows because, by \Cref{lem:inner_prod_lipschitz}, $F^*(k, \rho) \geq 0$ for $\rho \in [0,1]$, and $F^*$ is an odd function in $\rho$. \Cref{lem:inner_prod_lipschitz} also establishes that $F^*$ is convex for $\rho \in [0,1]$. 
One may directly evaluate the expression for $F^*(k, \rho)$ above to see that $F^*(k, 0) = 0$ and $F^*(k, 1) = 1$. Thus $F^*(k, \rho) \leq \rho$ for $\rho \in [0,1]$; because $F^*$ is odd in $\rho$, $F^*(k, \rho) \geq \rho$ for $\rho \in [-1,0]$, establishing (ii). \end{proof} Having established this, it is straightforward to extend our proofs to the case of rank-$k$ \text{\sc Max-Cut}\xspace, and we omit the details. (One very minor difference in the $k = 1,2$ case for the algorithmic gap is that the Lipschitz guarantee from \Cref{lem:inner_prod_lipschitz} only holds when bounded away from $-1$ and $1$. However, inspecting the proof of the algorithmic gap shows that this suffices.) Our results for rank-$k$ \text{\sc Max-Cut}\xspace are stated as follows. \begin{theorem}[Hardness for rank-$k$ \text{\sc Max-Cut}\xspace] Let $k \geq 1$ be fixed. Then the following three statements hold. \begin{enumerate} \item The \text{\sc Max-Cut}\xspace semidefinite program $\text{\sc SDP}_{\text{\sc MC}}\xspace(G)$, when viewed as a relaxation of $\text{\sc Max-Cut}\xspace_k(G)$, has integrality gap $\akmc{k}$. \item The BOV algorithm for rank-$k$ \text{\sc Max-Cut}\xspace has algorithmic gap $\akmc{k}$. \item Assuming the UGC, it is $\NP$-hard to approximate $\text{\sc Max-Cut}\xspace_k(G)$ to within a factor of $\akmc{k}+\eps$, for all $\eps > 0$. \end{enumerate} \end{theorem} \section{The spherical case}\label{sec:spherical} Here we consider the case of $f: S^{n-1} \to B^n$. Since we want to work with Gaussian noise on $\R^n$, this shell decomposition imposes a specific noise operator on $S^{n-1}$. In this section, we will work with a more general noise operator that includes the ones we will need in later sections. The ``uniform'' measure on the sphere will be denoted $\omega$: this is the unique rotationally invariant probability measure on $S^{n-1}$. 
If $\bu \sim \omega$, we will write $\tilde \omega$ for the distribution of $\bu_1$ (which is the same as the distribution of $\inr{v}{\bu}$ for any $v \in S^{n-1}$). Note that $\tilde \omega$ is a measure on $[-1, 1]$ with density \begin{equation}\label{eq:ultraspheric-density} d\tilde \omega(t) = \frac{1}{Z_n} (1 - t^2)^\frac{n-3}{2} \, dt, \end{equation} where $Z_n$ is a normalizing constant. The proof of Lemma 4.17 in~\cite{FE12} shows how $d\tilde \omega$ arises from $d\omega$. For a function $g: [-1, 1] \to \R$ satisfying \begin{equation}\label{eq:g-integrability} \int_{-1}^1 |g(t)| \,d\tilde \omega(t) < \infty, \end{equation} define the operator $\mathrm{U}_g$, acting on functions $f: S^{n-1} \to \R^k$ by \begin{equation}\label{eq:Ug} \mathrm{U}_g f(u) = \int_{S^{n-1}} g(\inr uv) f(v) \,d\omega(v). \end{equation} To make the definition fully rigorous, note that the integrability condition~\eqref{eq:g-integrability} implies that if $f$ is bounded then $\mathrm{U}_g f$ is defined pointwise. Then Jensen's inequality implies that $\|\mathrm{U}_g f\|_{L^p(\omega)} \le C \|f\|_{L^p(\omega)}$ for every bounded $f$ -- with $C$ being the left-hand side of~\eqref{eq:g-integrability} -- and since bounded functions are dense in $L^p(\omega)$ it follows that $\mathrm{U}_g$ can be uniquely extended to an operator $L^2(\omega) \to L^2(\omega)$. We will be interested in non-negative $g$, and it might also be convenient to imagine $g$ as integrating to 1; i.e., with $\int_{-1}^1 g(t)\, d\tilde \omega(t) = 1$. In this case $\mathrm{U}_g f(u)$ is an average of values of $f$, much like our Gaussian noise operator $\mathrm{U}_\rho$. The main result of this section is that if $g$ is monotonic then the function $f(x) = x$ is optimally stable.
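As a quick numerical illustration of this claim (our own sketch, assuming numpy; the formal development below assumes $n \ge 3$, while the circle case $n = 2$ is treated in a later remark): discretizing $\mathrm{U}_g$ on $N$ equispaced points of $S^1$ yields a circulant matrix, whose eigenvalues are the DFT of its first row and whose frequency-$1$ eigenvectors are exactly the coordinate functions of $f(u) = u$. For the non-increasing indicator $g = 1_{[-1, 0]}$, the frequency-$1$ eigenvalue is the most negative one, as the theorem predicts.

```python
import numpy as np

N = 2000
theta = 2 * np.pi * np.arange(N) / N
g = lambda t: (t <= 0).astype(float)     # non-increasing indicator 1_{[-1, 0]}

# Discretized operator: M[j, k] = g(<u_j, u_k>) / N is circulant, so its
# eigenvalues are the DFT of its first row; lam[d] approximates lambda_d.
row = g(np.cos(theta)) / N
lam = np.real(np.fft.fft(row))
```

In the continuum limit, $\lambda_0 = \tfrac12$, $\lambda_1 = -\tfrac{1}{\pi}$, and $|\lambda_d| \le \tfrac{1}{d\pi}$ for $d \ge 2$, so the frequency-$1$ eigenvalue dominates in absolute value among the non-constant modes.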
\begin{theorem}\label{thm:spherical-noise} If $g: [-1, 1] \to [0, \infty)$ satisfies~\eqref{eq:g-integrability} and is non-decreasing then for every $f: S^{n-1} \to \R^n$ with $\E_{\bu \sim \omega} [\|f(\bu)\|^2] = 1$, \[ \E_{\bu \sim \omega} \inr{f(\bu)}{\mathrm{U}_g f(\bu)} \ge \E_{\bu \sim \omega} \inr{f_{\mathrm{opt}}(\bu)}{\mathrm{U}_g f_{\mathrm{opt}}(\bu)}, \] where $f_{\mathrm{opt}}(u) = u$. On the other hand, if $g$ is non-\emph{increasing} then for every $f: S^{n-1} \to \R^n$ with $\E_{\bu \sim \omega} [\|f(\bu)\|^2] = 1$ and $\E_{\bu \sim \omega}[f] = 0$, \[ \E_{\bu \sim \omega} \inr{f(\bu)}{\mathrm{U}_g f(\bu)} \le \E_{\bu \sim \omega} \inr{f_{\mathrm{opt}}(\bu)}{\mathrm{U}_g f_{\mathrm{opt}}(\bu)}. \] \end{theorem} \subsection{Spherical harmonics} We prove Theorem~\ref{thm:spherical-noise} by decomposing each coordinate of the vector-valued function $f$ into spherical harmonics. We suggest~\cite{FE12,D13} as introductory references for spherical harmonics. \begin{definition}[Spherical harmonics] A \emph{homogeneous polynomial of degree $d$} is a function $p:\R^n \rightarrow \R$ expressible as a linear combination of degree-$d$ monomials. We define the sets $\{\calH_d: d=0, 1, \dots\}$ by first setting $\calH_0$ to be the set of constant functions $S^{n-1} \to \R$, and then inductively defining $\calH_d$ to be the functions $S^{n-1} \to \R$ that can be represented as homogeneous polynomials of degree $d$, and which are orthogonal to $\bigoplus_{k=0}^{d-1} \calH_k$. The elements of $\calH_d$ are called degree-$d$ spherical harmonics. \end{definition} One subtlety (that will not be particularly important for us) is that distinct polynomials may give rise to the same function $S^{n-1} \to \R$; for example, the constant function $f(u) = 1$ can be written both as the constant polynomial 1 and as the degree-2 polynomial $u_1^2 + \cdots + u_n^2$, which evaluates to 1 on the sphere. 
The name \emph{spherical harmonics} comes from the fact that $\calH_d$ can be equivalently defined as the set of homogeneous degree-$d$ polynomials $p$ that are \emph{harmonic} in the sense that $\sum_{i=1}^n \frac{\partial^2}{\partial x_i^2} p(x) = 0$. The first important thing about the spaces $\calH_d$ is that they form an orthogonal decomposition of $L^2(S^{n-1},\omega)$: the $\calH_d$ are orthogonal in the sense that if $f \in \calH_d$ and $g \in \calH_{d'}$ for $d \ne d'$ then $\E_{\bu \sim \omega} [f(\bu) g(\bu)] = 0$; they decompose $L^2(S^{n-1},\omega)$ in the sense that if $H_d: L^2(S^{n-1},\omega) \to \calH_d$ is the orthogonal projection operator then $f = \sum_{d \ge 0} H_d f$ for every $f \in L^2(S^{n-1},\omega)$~\cite{D13}. In this sense, the decomposition into spherical harmonics is analogous to the decomposition of a Boolean function into Fourier levels~\cite{OD14}, or the decomposition of a function in $L^2(\R^n, \gamma)$ into Hermite levels~\cite{OD14}. The second important thing about the spaces $\calH_d$ is the Funk-Hecke formula, which essentially says that noise operators like our $\mathrm{U}_g$ act diagonally on spherical harmonics (much like the usual Boolean and Gaussian noise operators act diagonally on the Fourier and Hermite bases respectively). \begin{theorem}[Funk-Hecke formula~\cite{H1917}; see~\cite{D13}, Theorem 2.9]\label{thm:funk-hecke} For any $g: [-1, 1] \to \R$ satisfying~\eqref{eq:g-integrability}, let $\mathrm{U}_g$ be defined as in~\eqref{eq:Ug} (with $k=1$). Then for every $d \in \{0, 1, \dots\}$ there exists a $\lambda_d$ such that for every $f \in \calH_d$ and every $u \in S^{n-1}$, \[ \mathrm{U}_g f(u) = \lambda_d f(u). \] In other words, spherical harmonics are eigenfunctions of $\mathrm{U}_g$, and the eigenvalues depend only on the degree $d$.
\end{theorem} One particularly nice feature of the Funk-Hecke formula is that because the eigenvalues depend only on the degree $d$, we can compute $\lambda_d$ by choosing the most convenient $f \in \calH_d$ and $u \in S^{n-1}$. This leads us to the Gegenbauer polynomials, a family of univariate polynomials that capture the \emph{zonal} spherical harmonics (see~\cite{D13}, Theorem 2.6), those depending only on one direction. \begin{definition}[Gegenbauer polynomials]\label{def:gegenbauer} Let $\alpha > -\tfrac{1}{2}$ and $d$ be a nonnegative integer. The \emph{Gegenbauer polynomial} with Gegenbauer index~$\alpha$ and degree~$d$ is a univariate, real polynomial denoted $C^{(\alpha)}_d$. The Gegenbauer polynomials correspond to the zonal spherical harmonics of interest to us when $\alpha = \frac{n-2}{2}$ and $n \geq 3$, and we will henceforth make these assumptions on $\alpha$ and $n$. \end{definition} Gegenbauer polynomials may be defined recursively, using generating functions, in terms of the Gaussian hypergeometric function, or as special cases of other polynomials (see~\cite{AS72}, Chapter 22). We will only need a few properties of them. \begin{proposition}[Properties of the Gegenbauer polynomials]\label{prop:gegenbauer} \mbox{} \begin{enumerate} \item \label{item:geg-low-deg} (\cite{AS72}, 22.4.2) We have the following explicit formulas for low-degrees: \begin{equation*} C^{(\alpha)}_0(t) = 1\text{, and } C^{(\alpha)}_1(t) = 2 \alpha t. \end{equation*} \iffalse \item The even-degree polynomials are even and the odd-degree polynomials are odd, i.e. \begin{equation*} C^{(\alpha)}_d(-t) = (-1)^d \cdot C^{(\alpha)}_d(t). \end{equation*} \fi \item \label{item:geq-inequality} (\cite{AS72}, 22.14.2, \cite{S39}, Theorem 7.4.1) For $-1 \leq t \leq 1$ and $\alpha > 0$, \begin{equation*} |C^{(\alpha)}_d(t)| \leq C^{(\alpha)}_d(1) = \frac{(2 \alpha)_d}{d!}, \end{equation*} with a strict inequality if $d \ge 1$ and $-1 < t < 1$. 
\item \label{item:geg-integral} (\cite{AS72}, 22.13.2) For each integer $d \geq 0$, define the quantity \begin{equation*} \mathrm{ratio}_d(t) = \frac{C^{(\alpha)}_d(t)}{C^{(\alpha)}_d(1)} \cdot (1-t^2)^{\alpha - \tfrac{1}{2}}. \end{equation*} Then \begin{equation*} \int \mathrm{ratio}_d(t) \, dt = - \frac{2 (1-t^2)^{\alpha + \tfrac{1}{2}} \alpha}{d (d + 2\alpha)} \cdot \frac{C^{(\alpha + 1)}_{d-1}(t)}{C_d^{(\alpha)}(1)}. \end{equation*} \item \label{item:geg-harmonic} (\cite{D13}, Theorem 2.6) For each integer $d \ge 0$, the function $S^{n-1} \to \R$ defined by $u \mapsto C_d^{(\alpha)}(u_1)$ belongs to $\calH_d$. \end{enumerate} \end{proposition} \subsection{Eigenvalues of the noise operator} The last property of \Cref{prop:gegenbauer} shows the relevance of Gegenbauer polynomials to the computation of $\lambda_d$: letting $h(u) = C_d^{(\alpha)}(u_1)$, we have \[ \lambda_d = \frac{\mathrm{U}_g h(e_1)}{h(e_1)} = \frac{\E_{\bu \sim \omega}[h(\bu) g(\inr{\bu}{e_1})]}{h(e_1)} = \E_{\bt \sim \tilde \omega} \left[\frac{C_d^{(\alpha)}(\bt)}{C_d^{(\alpha)}(1)} g(\bt)\right]. \] Recalling the formula~\eqref{eq:ultraspheric-density} for the density of $\tilde \omega$, we conclude: \begin{corollary}[Eigenvalues of $\mathrm{U}_g$]\label{cor:eigenvalue-formula} \[ \lambda_d = \frac{1}{Z_n} \int_{-1}^1 \mathrm{ratio}_d(t) g(t) \, dt. \] \end{corollary} \begin{remark}[The 2-dimensional case] As noted in \Cref{def:gegenbauer}, we have thus far required $n \geq 3$. When $n=2$, spherical harmonics reduce to Fourier series. If $t = \cos(\theta)$, we may define $C_d(t) := \cos(d\theta) = T_d(t)$, where the latter is the degree-$d$ Chebyshev polynomial of the first kind. Analogues of the properties in \Cref{prop:gegenbauer} hold in this case, allowing us to recover our results for $n=2$. \end{remark} Since we are interested in comparing $\lambda_d$ as $d$ varies, the factor $\frac{1}{Z_n}$ is unimportant for us. 
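For $n = 3$ (so $\alpha = \tfrac12$) the formula of \Cref{cor:eigenvalue-formula} is easy to evaluate: the Gegenbauer polynomials become the Legendre polynomials with $C^{(1/2)}_d(1) = 1$, the weight $(1-t^2)^{\alpha - 1/2}$ is constant, and $Z_3 = 2$. The sketch below (our own, assuming scipy) computes $\lambda_d$ for the non-increasing indicator $g = 1_{[-1, 0]}$, for which the integral restricts to $[-1, 0]$, and confirms that $\lambda_1 < 0$ while $|\lambda_d| < -\lambda_1$ for $d \ge 2$.

```python
from scipy.integrate import quad
from scipy.special import eval_legendre

# n = 3: ratio_d(t) = P_d(t) (Legendre polynomial), Z_3 = 2, and
# g = 1_{[-1, 0]}, so lambda_d = (1/2) * integral_{-1}^{0} P_d(t) dt.
lam = [quad(lambda t: eval_legendre(d, t), -1, 0)[0] / 2 for d in range(8)]
```

By direct integration one finds $\lambda_0 = \tfrac12$, $\lambda_1 = -\tfrac14$, $\lambda_2 = 0$, and $\lambda_3 = \tfrac{1}{16}$, matching the computed values.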
The key bound that we will need essentially amounts to considering the case of the indicator function $g = 1_{[-1, t]}$, defined to be $1$ on $[-1,t]$ and $0$ elsewhere. \begin{lemma}[Key Gegenbauer lemma]\label{lem:key-gegenbauer} For each integer $d \geq 0$, define the quantity \begin{equation*} \nu_d(t) = \int_{-1}^t \mathrm{ratio}_d(w)\, dw, \end{equation*} where $-1 < t < 1$. Then $|\nu_d(t)| < - \nu_1(t)$ for $d \geq 2$. In addition, $\nu_1(t)\leq 0$. \end{lemma} \begin{proof} By \Cref{item:geg-integral} of the Gegenbauer properties, \begin{equation*} \nu_d(t) = - \frac{2 (1-t^2)^{\alpha + \tfrac{1}{2}} \alpha}{d (d + 2\alpha)} \cdot \frac{C^{(\alpha + 1)}_{d-1}(t)}{C_d^{(\alpha)}(1)}. \end{equation*} Using \Cref{item:geg-low-deg}, we can simplify the $d=1$ case as follows: \begin{equation*} \nu_1(t) = - \frac{2 (1-t^2)^{\alpha + \tfrac{1}{2}} \alpha}{ (1 + 2\alpha)} \cdot \frac{C^{(\alpha + 1)}_{0}(t)}{C_1^{(\alpha)}(1)} = - \frac{(1-t^2)^{\alpha + \tfrac{1}{2}}}{ (1 + 2\alpha)}. \end{equation*} This is clearly $\leq 0$, as $\alpha > -\tfrac{1}{2}$. Finally, for $d \geq 2$, by \Cref{item:geq-inequality} we have the bound \begin{align*} |\nu_d(t)| &= \frac{2 (1-t^2)^{\alpha + \tfrac{1}{2}} \alpha}{d (d + 2\alpha)} \cdot \frac{|C^{(\alpha + 1)}_{d-1}(t)|}{C_d^{(\alpha)}(1)}\\ &< \frac{2 (1-t^2)^{\alpha + \tfrac{1}{2}} \alpha}{d (d + 2\alpha)} \cdot \frac{C^{(\alpha + 1)}_{d-1}(1)}{C_d^{(\alpha)}(1)}\\ &= \frac{2 (1-t^2)^{\alpha + \tfrac{1}{2}} \alpha}{d (d + 2\alpha)} \cdot \frac{(2\alpha + 2)_{d-1} \cdot d!}{(d-1)! \cdot (2\alpha)_d}\\ &= \frac{(1-t^2)^{\alpha + \tfrac{1}{2}}}{(2\alpha + 1)}= - \nu_1(t). \end{align*} This completes the proof.
\end{proof} Once we have considered the case of $g = 1_{[-1, t]}$, all other monotonic cases follow simply by expressing monotonic functions as linear combinations of indicator functions: \begin{corollary}\label{cor:eigenvalue-bounds} In the setting of \Cref{thm:spherical-noise}, if $g$ is non-increasing then $\lambda_1 \le 0$ and $|\lambda_d| < -\lambda_1$ for all $d \ge 2$. On the other hand, if $g$ is non-decreasing then $\lambda_1 \ge 0$ and $|\lambda_d| < \lambda_1$ for all $d \ge 2$. \end{corollary} \begin{proof} If $g: [-1, 1] \to [0, \infty)$ is non-increasing then it can be written as a linear combination of non-increasing indicator functions: there is a measure $\mu$ on $[-1, 1]$ such that \[ g(t) = \int_{-1}^1 1_{[-1, s]}(t) \, d\mu(s). \] By \Cref{cor:eigenvalue-formula} and Fubini's theorem, \[ \lambda_d = \frac{1}{Z_n} \int_{-1}^1 \int_{-1}^1 \mathrm{ratio}_d(t) 1_{[-1, s]}(t) \, d\mu(s) \, dt = \frac{1}{Z_n} \int_{-1}^1 \nu_d(s)\, d\mu(s), \] where $\nu_d$ is defined as in \Cref{lem:key-gegenbauer}. The claim then follows from \Cref{lem:key-gegenbauer}. For the case of non-decreasing $g$, note that $\nu_d(1) = 0$ for all $d \ge 1$, for example because of \Cref{item:geg-harmonic} and the fact that spherical harmonics of degree $d \ge 1$ are orthogonal to constant functions (which are the spherical harmonics of degree $0$). Then we represent $g$ as a linear combination of non-decreasing indicator functions by choosing $\mu$ such that \[ g(t) = \int_{-1}^1 1_{[s, 1]}(t) \, d\mu(s) = \int_{-1}^1 1 - 1_{[-1, s]}(t) \, d\mu(s); \] and finally, we have \[ \lambda_d = \frac{1}{Z_n} \int_{-1}^1 \int_{-1}^1 \mathrm{ratio}_d(t) (1 - 1_{[-1, s]}(t)) \, d\mu(s) \, dt = -\frac{1}{Z_n} \int_{-1}^1 \nu_d(s)\, d\mu(s), \] and we conclude as before using \Cref{lem:key-gegenbauer}. \end{proof} Finally, \Cref{thm:spherical-noise} follows from \Cref{cor:eigenvalue-bounds} simply by decomposing the function $f$ in spherical harmonics.
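The closed form for $\nu_d$ can also be checked numerically. The sketch below (our own, assuming scipy; illustrated for $n = 5$, i.e.\ $\alpha = \tfrac32$) compares the closed form against direct quadrature of $\mathrm{ratio}_d$, and verifies the key inequality $|\nu_d(t)| < -\nu_1(t)$ for $d \ge 2$ on a grid of $t$ values.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import gegenbauer, poch

alpha = 1.5                                  # n = 5

def C1(d):
    # C_d^{(alpha)}(1) = (2 alpha)_d / d!
    return poch(2 * alpha, d) / factorial(d)

def ratio(d, t):
    return gegenbauer(d, alpha)(t) / C1(d) * (1 - t ** 2) ** (alpha - 0.5)

def nu_closed(d, t):
    # closed form for nu_d(t) from the proof of the key lemma
    return (-2 * (1 - t ** 2) ** (alpha + 0.5) * alpha / (d * (d + 2 * alpha))
            * gegenbauer(d - 1, alpha + 1)(t) / C1(d))

ts = np.linspace(-0.9, 0.9, 7)
err = max(abs(quad(lambda w: ratio(d, w), -1, t)[0] - nu_closed(d, t))
          for d in range(1, 6) for t in ts)
```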
\begin{proof}[Proof of \Cref{thm:spherical-noise}] Assume first that $g$ is non-increasing, and choose $f: S^{n-1} \to \R^n$ with $\E[\|f\|^2] = 1$. Recall that $H_d: L^2(S^{n-1}) \to \calH_d$ is the orthogonal projection onto degree-$d$ spherical harmonics. We extend $H_d$ to act on vector-valued functions coordinate-wise, so that if $f_1, \dots, f_n$ are the coordinate functions of $f$ then $H_d f = (H_d f_1, \dots, H_d f_n)$. We also have $\mathrm{U}_g f = (\mathrm{U}_g f_1, \ldots, \mathrm{U}_g f_n)$. Recall that $f = \sum_{d \ge 0} H_d f$; then \Cref{thm:funk-hecke} implies that \[ \E_{\bu \sim \omega} \inr{f(\bu)}{\mathrm{U}_g f(\bu)} = \E_{\bu \sim \omega} \left\langle \sum_{d \ge 0} (H_d f)(\bu), \sum_{d' \ge 0} (\mathrm{U}_g H_{d'} f)(\bu) \right\rangle = \sum_{d \ge 0} \lambda_d \E_{\bu \sim \omega}[\|H_d f(\bu)\|^2], \] where the cross-terms with $d \ne d'$ vanish because of the orthogonality of spherical harmonics. On the other hand, the orthogonality of the decomposition $f = \sum_{d \ge 0} H_d f$ implies that \[ \sum_{d \ge 0} \E_{\bu \sim \omega} [\|H_d f(\bu)\|^2] = \E_{\bu \sim \omega} [\|f(\bu)\|^2] = 1. \] Since (by \Cref{cor:eigenvalue-bounds} and the fact that $\lambda_0 = \E_{\bt \sim \tilde \omega}[g(\bt)] \ge 0$) $\lambda_1$ is the most negative of all eigenvalues, \[ \E_{\bu \sim \omega} \inr{f(\bu)}{\mathrm{U}_g f(\bu)} \ge \lambda_1. \] Since $f(u) = u$ is a degree-1 spherical harmonic, we get equality in this case. This completes the proof for non-increasing $g$. When $g$ is non-decreasing, the argument is the same except that the assumption $\E[f] = 0$ implies that $H_0 f = 0$, and then \Cref{cor:eigenvalue-bounds} implies that $\lambda_1$ is the most positive among all remaining eigenvalues. Therefore, \[ \E_{\bu \sim \omega} \inr{f(\bu)}{\mathrm{U}_g f(\bu)} \le \lambda_1, \] and as before we have equality for $f(u) = u$.
\end{proof} \section{The full-dimensional case} Here we consider the case $k = n$, in which $f$ is an assignment from $\R^n$ to the ball $B^n$, and we consider a correlation parameter $-1 \le \rho \le 0$. \begin{theorem}[Vector-valued Borell's inequality; $n$-dimensional outputs]\label{thm:n-dim-borell} Let $f:\R^n \rightarrow B^n$. In addition, let $f_{\mathrm{opt}} :\R^n \rightarrow B^n$ be defined by $f_{\mathrm{opt}}(x) = x / \Vert x\Vert$. Let $-1 \leq \rho \leq 0$. Then \begin{equation*} \stab_\rho[f] \geq \stab_\rho[f_{\mathrm{opt}}]. \end{equation*} Moreover, if $\stab_\rho[f] = \stab_\rho[f_{\mathrm{opt}}]$ then there is an orthogonal matrix $M$ such that $f(x) = f_{\mathrm{opt}}(Mx)$ almost surely. \end{theorem} Our goal is to lower bound the expression \begin{equation*} \E_{\bx \sim_{\rho} \by} \langle f(\bx), f(\by)\rangle. \end{equation*} Fourier analysis is a natural tool to bring to bear on this problem. The Hermite polynomials are a natural basis to consider, since the expectation above is ``diagonal'' in this basis (e.g.~\cite{OD14}, Proposition 11.33); however, the expansion of the optimizer~$f_{\mathrm{opt}}$ in the Hermite basis is complicated (see~\cite{PT20} for a Hermite expansion from which that of $f_{\mathrm{opt}}$ can be derived). As a result, it is difficult to compare the value of $f$ to the value of $f_{\mathrm{opt}}$ using the Hermite basis. Instead, we reparameterize $\bx$ as $\bx = \br \cdot \bu$, where $\br$ is the length (or radius) of~$\bx$ and $\bu$ is the unit vector in the direction of~$\bx$. Similarly, we will reparameterize $\by$ as $\by = \bs \cdot \bv$. For each value~$r$ that the random variable~$\br$ may take, we will think of $f$ as specifying a separate function on the unit sphere $S^{n-1}$. We denote this function as $f_{r}:S^{n-1} \rightarrow B^n$ and define it by \begin{equation*} f_{r}(\bu) := f(r \cdot \bu) = f(\bx).
\end{equation*} Using this, we can rewrite our original expectation as \begin{equation}\label{eq:reparameterized-expectation} \E_{\br, \bs} \E_{\bu, \bv} \langle f_{\br}(\bu), f_{\bs}(\bv)\rangle. \end{equation} What is nice about this reparameterization is that it simplifies our optimizer $f_{\mathrm{opt}}$. In particular, for each fixed $r \geq 0$, $(f_{\mathrm{opt}})_r(u)$ is simply equal to $u$. To analyze \Cref{eq:reparameterized-expectation}, we first condition on fixed values of $r, s \geq 0$. This gives the expression \begin{equation*} \E_{\bu, \bv} \langle f_{r}(\bu), f_{s}(\bv)\rangle. \end{equation*} This is just an expectation involving two functions on the sphere (under a distribution on~$\bu$ and~$\bv$ described in the next section). If we take $\mathrm{U}_{\rho}^{r,s}$ to be the standard Gaussian noise operator (\Cref{def:noise-operator}) conditioned on $r$ and $s$, then we can further rewrite the above expectation as \begin{equation}\label{eq:introduce-noise-operator} \E_{\bu} \langle f_{r}(\bu), \mathrm{U}_\rho^{r, s} f_{s}(\bu)\rangle, \end{equation} where we may think of $\mathrm{U}_{\rho}^{r, s} f_s(u)$ as the average of $f_s(\bv)$ over a random~$\bv$, conditioned on~$r$, $s$, and $\bu=u$. This noise operator turns out to fall into the setting that we considered in the previous section, and so applying \Cref{thm:spherical-noise} for each fixed $r,s$ will allow us to prove \Cref{thm:n-dim-borell}. \subsection{The induced noise operator} If $(\bx, \by)$ are $\rho$-correlated random variables then their probability density function (PDF) can be written as \begin{equation*} G_\rho(x, y)= \frac{1}{A_\rho} e^{-\frac{\Vert x \Vert^2+\Vert y \Vert^2- 2 \rho\langle x, y \rangle }{2(1-\rho^2)}} = \frac{1}{A_\rho} e^{-\frac{\Vert x \Vert^2+\Vert y \Vert^2}{2(1-\rho^2)}} e^{\frac{\rho r s\langle u, v \rangle}{(1-\rho^2)}}, \end{equation*} where $A_\rho$ is a normalizing constant.
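For readers who want to experiment, $\rho$-correlated Gaussian pairs are easy to sample via the standard construction $\by = \rho\,\bx + \sqrt{1-\rho^2}\,\bz$ with independent $\bx, \bz \sim \normal(0, I_n)$. The Monte Carlo sketch below (sample sizes and parameters are arbitrary choices of ours) checks the coordinate-wise covariance $\E[\bx_i \by_i] = \rho$, that the marginal of $\by$ is still standard Gaussian, and that the angular part $\bu = \bx/\Vert\bx\Vert$ of the reparameterization is uniform on the sphere (so its mean is close to the zero vector):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, rho = 3, 200_000, -0.4

# Standard construction of rho-correlated Gaussian pairs.
x = rng.standard_normal((N, n))
z = rng.standard_normal((N, n))
y = rho * x + np.sqrt(1.0 - rho**2) * z

cov = float(np.mean(x * y))        # estimates E[x_i y_i] = rho
var_y = float(np.var(y))           # marginal of y is still N(0, 1)

# Reparameterization x = r * u: the angular part u is uniform on S^{n-1}.
u = x / np.linalg.norm(x, axis=1, keepdims=True)
u_mean_norm = float(np.linalg.norm(u.mean(axis=0)))
```

With these sample sizes the Monte Carlo error is on the order of $10^{-3}$, well within the loose tolerances checked below.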
When we reparameterize according to $(r, s, u, v)$ we obtain: \begin{equation*} G_\rho(r, s, u, v) = \frac{1}{A_\rho} (rs)^{n-1} e^{-\frac{r^2+s^2}{2(1-\rho^2)}}e^{\frac{\rho r s \langle u, v \rangle}{(1-\rho^2)}}, \end{equation*} where the $(rs)^{n-1}$ factor arises from the change of variables. We will be more interested, however, in the conditional distributions: \begin{definition}[Conditioned correlated Gaussians] We denote by $G^{r, s}_\rho(u,v)$ the PDF of $(\bu,\bv)$, with respect to the measure $\omega$, conditioned on the values~$r$,~$s$. We write $(\bu,\bv) \sim \normal^{r, s}_\rho$ for correlated random variables drawn from this distribution. We denote by $G^{r, s}_\rho(v \mid u)$ the PDF of $\bv$, with respect to the measure $\omega$, conditioned on the values~$r$,~$s$, and~$u$. This can be written as \begin{align}\label{eq:conditioned_pdf} G^{r, s}_\rho(v \mid u) &= \frac{1}{A^{r, s}_\rho } e^{\frac{\rho r s \langle u, v\rangle}{ (1-\rho^2)}}, \end{align} where $A^{r, s}_\rho$ is a normalizing constant that depends on $r$, $s$, and $\rho$. \end{definition} We note that \eqref{eq:conditioned_pdf} depends only on the quantity $\langle u, v\rangle$, and it is non-increasing in this quantity because $r,s \geq 0$ and $\rho \leq 0$. \begin{definition}[Conditioned Gaussian noise operator]\label{def:noise-operator-on-shell} The \emph{conditioned Gaussian noise operator} is an operator on $L^2(S^{n-1},\omega)$ which acts on a function $f:S^{n-1}\rightarrow \mathbb{R}$ as: \begin{equation*} \mathrm{U}^{r, s}_\rho f(u) = \E_{(\bu,\bv) \sim \normal^{r,s}_\rho}[f(\bv) \mid \bu = u] = \int_{S^{n-1}} G^{r,s}_\rho(v \mid u) f(v)\, d\omega(v). \end{equation*} \end{definition} Note in particular that for every $r,s \ge 0$ and for every $-1 \le \rho \le 0$, $\mathrm{U}^{r,s}_\rho$ is a noise operator of the form~\eqref{eq:Ug}, for the non-increasing function $g(t) = \frac{1}{A^{r,s}_\rho} e^{\frac{\rho r s t}{1-\rho^2}}$.
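Since $G^{r,s}_\rho(v \mid u) \propto e^{\kappa \langle u, v\rangle}$ with $\kappa = \rho r s/(1-\rho^2)$, the conditional law of $\bv$ given $\bu = u$ is a von Mises--Fisher-type distribution, and $\mathrm{U}^{r,s}_\rho f(u)$ can be estimated by importance-weighting uniform samples on the sphere. The sketch below (the parameters $n = 3$, $r = s = 1.2$, $\rho = -1/2$ are arbitrary choices of ours) applies the operator to the degree-$1$ function $f(v) = v$ and checks that the output is parallel to $u$, with coefficient equal to the Funk--Hecke eigenvalue, which on $S^2$ is $\lambda_1 = \int_{-1}^1 e^{\kappa t} t\,dt \big/ \int_{-1}^1 e^{\kappa t}\,dt$; this coefficient is negative, consistent with \Cref{cor:eigenvalue-bounds} for non-increasing $g$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 400_000
rho, r, s = -0.5, 1.2, 1.2
kappa = rho * r * s / (1.0 - rho**2)

# Estimate U^{r,s}_rho f(u) for f(v) = v by importance sampling:
# U f(u) = E_v[w(v) f(v)] / E_v[w(v)], with weight w(v) = exp(kappa <u, v>).
u = np.zeros(n)
u[0] = 1.0
v = rng.standard_normal((N, n))
v /= np.linalg.norm(v, axis=1, keepdims=True)      # uniform on S^{n-1}
w = np.exp(kappa * (v @ u))                        # importance weights
Uf = (w[:, None] * v).sum(axis=0) / w.sum()        # estimate of U f(u)

# Funk-Hecke prediction on S^2: U f(u) = lambda_1 * u.
t = np.linspace(-1.0, 1.0, 100_001)

def integrate(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

lam1 = integrate(np.exp(kappa * t) * t) / integrate(np.exp(kappa * t))
```

The components of the estimate orthogonal to $u$ vanish up to Monte Carlo error, reflecting the rotational symmetry of the conditional law about the axis $u$.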
We have that for any $u$, \begin{equation*} 1 = \int_{S^{n-1}} G^{r,s}_\rho(v \mid u)\, d\omega(v) = \int^1_{-1} g(t)\,d\tilde \omega(t), \end{equation*} by the definitions of $\omega$ and $\tilde \omega$, demonstrating that $g \geq 0$ satisfies \eqref{eq:g-integrability}. Recalling from \Cref{thm:funk-hecke} that the eigenfunctions of $\mathrm{U}^{r,s}_\rho$ are spherical harmonics, let $\lambda_d^{r,s}$ be the eigenvalue corresponding to $\calH_d$; i.e., $\mathrm{U}^{r,s}_\rho h = \lambda_d^{r,s} h$ for all $h \in \calH_d$. Then \Cref{cor:eigenvalue-bounds} implies that \begin{equation}\label{eq:lambda1-bound} \lambda_1^{r,s} \le -|\lambda_d^{r,s}| \text{ for every $d \ge 1$}, \end{equation} with equality only if $d = 1$. Moreover, the fact that $G^{r,s}_\rho$ is a probability density implies that $\lambda_0^{r,s} = 1$. The following lemma gives our main lower bound. It shows that if $f_r$ and $f_s$ have mean zero, then the average inner product is lower-bounded by $\lambda_1^{r,s}$, exactly the value that the optimizer $f_{\mathrm{opt}}$ would achieve. However, when they are not mean-zero, they can outperform the optimizer; consider $f_r = (1, 0, \ldots, 0)$ and $f_s = (-1, 0, \ldots, 0)$, which have average inner product $-1$. To compensate for this, the lemma includes a correction factor depending on the means of~$f_r$ and~$f_s$. \begin{lemma}[Main lower bound]\label{lem:degree-0-and-1-lower-bound} \begin{equation*} \E_{(\bu, \bv) \sim \normal_\rho^{r, s}} \langle f_r(\bu), f_s(\bv)\rangle \geq \langle \E[f_r], \E[f_s]\rangle + \lambda_1^{r,s}, \end{equation*} with equality if and only if there is an orthogonal matrix $M$ so that $f_r(u) = f_s(u) = M u$.
\end{lemma} \begin{proof} Recalling that $H_d$ is the orthogonal projection onto $\calH_d$ and that $\mathrm{U}^{r,s}_\rho h = \lambda^{r,s}_d h$ for all $h \in \calH_d$, we have \begin{align*} \E_{(\bu, \bv) \sim \normal_\rho^{r, s}} \langle f_r(\bu), f_s(\bv)\rangle &= \E_{\bu \sim \omega} \inr{f_r(\bu)}{\mathrm{U}_\rho^{r,s} f_s(\bu)} \\ &= \sum_{d \ge 0} \E_{\bu \sim \omega} \inr{f_r(\bu)}{\mathrm{U}_\rho^{r,s} H_d f_s(\bu)} \\ &= \sum_{d \ge 0} \lambda_d^{r,s} \E_{\bu \sim \omega} \inr{f_r(\bu)}{H_d f_s(\bu)} \\ &= \sum_{d \ge 0} \lambda_d^{r,s} \E_{\bu \sim \omega} \inr{H_d f_r(\bu)}{H_d f_s(\bu)}. \end{align*} Recall that $\lambda^{r, s}_0 = 1$ and that $H_0 f_r$ is the constant function $\E_{\bu \sim \omega}[f_r(\bu)]$; then \[ \E_{(\bu, \bv) \sim \normal_\rho^{r, s}} \langle f_r(\bu), f_s(\bv)\rangle = \langle \E[f_r], \E[f_s]\rangle + \sum_{d \ge 1} \lambda_d^{r,s} \E_{\bu \sim \omega} \inr{H_d f_r(\bu)}{H_d f_s(\bu)}. \] For the second term, Cauchy--Schwarz (applied twice) and~\eqref{eq:lambda1-bound} imply that \begin{align} \label{eq:same-dim-first-cs} \sum_{d \ge 1} \lambda_d^{r,s} \E_{\bu \sim \omega} \inr{H_d f_r(\bu)}{H_d f_s(\bu)} & \ge -\sum_{d \ge 1} |\lambda_d^{r,s}| \sqrt{\E [\|H_d f_r\|^2] \E[\|H_d f_s\|^2]} \\ \label{eq:same-dim-lambda} & \ge \lambda_1^{r,s} \sum_{d \ge 1} \sqrt{\E [\|H_d f_r\|^2] \E[\|H_d f_s\|^2]} \\ \notag & \ge \lambda_1^{r,s} \sqrt{\sum_{d \ge 1} \E [\|H_d f_r\|^2] \cdot \sum_{d \ge 1} \E[\|H_d f_s\|^2]}. \end{align} Finally, recalling that $\sum_{d \ge 0} \E [\|H_d f_r\|^2] = \E[\|f_r\|^2] \le 1$ (and similarly for $f_s$), and that $\lambda_1^{r,s} \le 0$, we have \[ \sum_{d \ge 1} \lambda_d^{r,s} \E_{\bu \sim \omega} \inr{H_d f_r(\bu)}{H_d f_s(\bu)} \ge \lambda_1^{r,s}. \] This completes the proof of the inequality. Equality holds when $f_r(u) = f_s(u) = u$ because all inequalities in this proof are then equalities.
By the equality cases in~\eqref{eq:lambda1-bound}, we have equality in~\eqref{eq:same-dim-lambda} if and only if $f_r$ and $f_s$ are both affine functions: $f_r(u) = \E[f_r] + M_r u$ for some $n \times n$ matrix $M_r$ and $f_s(u) = \E[f_s] + M_s u$ for some $n \times n$ matrix $M_s$. Then we have equality in~\eqref{eq:same-dim-first-cs} if and only if $M_s$ is a non-negative scalar multiple of $M_r$. Because $f_r(u)$ takes values in $B^n$, we must have $\|M_r\|_{\textrm{op}} \le 1$ and $\|M_s\|_{\textrm{op}} \le 1$. But in order to have equality in $\E[\|H_1 f_r\|^2] \le 1$, we must have $\E[\|H_1 f_r\|^2] = \tfrac{1}{n}\|M_r\|_F^2 = 1$; together with $\|M_r\|_{\textrm{op}} \le 1$, this forces every singular value of $M_r$ to equal $1$, and so $M_r$ is an orthogonal matrix. Similarly $M_s$ must be an orthogonal matrix, and since it is a non-negative multiple of $M_r$ they must be equal. Finally, $\E[\|f_r\|^2] = \|\E[f_r]\|^2 + \E[\|H_1 f_r\|^2] \le 1$, and so if $\E[\|H_1 f_r\|^2]= 1$ then we must have $\E[f_r] = 0$; similarly for $\E[f_s]$. \end{proof} Now we prove \Cref{thm:n-dim-borell}. At a high level, the correction term in \Cref{lem:degree-0-and-1-lower-bound} shows that one can improve on the optimizer for fixed~$r$ and~$s$ by using nonzero means $\E[f_r]$ and $\E[f_s]$. What we will now show is that although this is true for fixed~$r$ and~$s$, when averaged over random~$\br$ and~$\bs$ this correction term no longer helps. In other words, we will show that $\E_{\br, \bs} \langle \E_{\bu}[f_{\br}(\bu)], \E_{\bu}[f_{\bs}(\bu)]\rangle$ is nonnegative, and so it can only increase the average inner product. \begin{proof}[Proof of \Cref{thm:n-dim-borell}] Our goal is to lower-bound \begin{equation}\label{eq:first-equation-in-big-proof} \E_{\bx \sim_\rho \by} \langle f(\bx), f(\by)\rangle = \E_{\br, \bs} \E_{\bu, \bv} \langle f_{\br}(\bu), f_{\bs}(\bv)\rangle.
\end{equation} Setting $m(r) = \E_{\bu} f_r(\bu)$ (we avoid the letter $g$, which is already in use for the noise function), \Cref{lem:degree-0-and-1-lower-bound} implies that this is at least \begin{equation*} \eqref{eq:first-equation-in-big-proof} \geq \E_{\br, \bs}[\langle m(\br), m(\bs)\rangle + \lambda_1^{\br,\bs}] = \E_{\br, \bs}\langle m(\br), m(\bs) \rangle + \E_{\br, \bs} \lambda_1^{\br,\bs}. \end{equation*} The second term is exactly the value attained by the optimizer $f_{\mathrm{opt}}$, by the equality case of~\Cref{lem:degree-0-and-1-lower-bound}. As a result, it suffices to show that the first term is nonnegative. We will begin by rewriting it as \begin{equation}\label{eq:first-rewrite} \E_{\bx \sim_\rho \by} \langle m(\Vert \bx \Vert), m(\Vert \by \Vert)\rangle. \end{equation} Consider the following method of drawing two $\rho$-correlated Gaussian vectors $\bx$ and $\by$: first, sample $\bz, \bz', \bz'' \sim \normal(0,1)^n$ independently. Next, set \begin{equation*} \bx = \rho' \cdot\bz + \sqrt{1 - (\rho')^2} \cdot\bz', \qquad \by = - (\rho'\cdot \bz + \sqrt{1 - (\rho')^2} \cdot\bz''), \end{equation*} where $\rho' = \sqrt{-\rho}$. Then conditioned on $\bz$, the vectors $\bx$ and $-\by$ are independent and identically distributed. Hence, we can write \begin{equation*} \eqref{eq:first-rewrite} = \E_{\bz} \E_{\bx, \by} \langle m(\Vert \bx \Vert), m(\Vert \by \Vert)\rangle = \E_{\bz} \langle \E_{\bx} m(\Vert \bx \Vert), \E_{\by}m(\Vert \by \Vert)\rangle =\E_{\bz} \langle \E_{\bx} m(\Vert \bx \Vert), \E_{\by}m(\Vert -\by \Vert)\rangle. \end{equation*} Note that the last equality holds because $\Vert -\by \Vert=\Vert \by \Vert$. For each~$\bz$, the two terms in the inner product are equal (as $\bx$ and $-\by$ are identically distributed given $\bz$), and so the inner product is nonnegative. This completes the proof of the inequality. To see the equality cases, recall that we applied the bound of \Cref{lem:degree-0-and-1-lower-bound} for every $r$ and $s$. If equality is attained in the inequality, we must have equality in \Cref{lem:degree-0-and-1-lower-bound} for almost every $r$ and $s$.
It follows that the matrix $M$ of \Cref{lem:degree-0-and-1-lower-bound} must be independent of $r$ and $s$, and the claimed characterization of equality cases follows. \end{proof} \subsection{The positive-$\rho$ case} We assumed in this section that $\rho \le 0$. In the case $\rho > 0$, the Gaussian noise model induces a spherical noise model of the form~\eqref{eq:Ug} with an \emph{increasing} function $g$. By the results of Section~\ref{sec:spherical}, \begin{equation}\label{eq:positive-rho-eigen-inequality} \lambda_1^{r,s} \ge |\lambda_d^{r,s}| \end{equation} for all $d \ge 2$, and so \Cref{lem:degree-0-and-1-lower-bound} may be extended to the $\rho > 0$ case, with the opposite inequality. The problem comes from the first term on the right-hand side of \Cref{lem:degree-0-and-1-lower-bound}; this term is non-negative, which is in our favor when $\rho < 0$ but works against us when $\rho > 0$. It is possible that this non-negative term is cancelled out by the difference between the two sides of~\eqref{eq:positive-rho-eigen-inequality}, but we were not able to show this. \section{The non-commutative Sum of Squares hierarchy}\label{sec:ncsos} The Sum of Squares (SoS) hierarchy gives a canonical method for strengthening the basic SDP to achieve better approximation ratios. It features a tunable parameter $d$; as~$d$ is increased, the quality of the approximation improves, but the runtime needed to compute the optimum increases as well. We will give a didactic overview of the SoS hierarchy in order to explain how our basic SDP for \text{\sc Quantum} \text{\sc Max-Cut}\xspace arises naturally as the level-$2$ SoS relaxation. For a more extensive treatment of Sum of Squares, consult the excellent lecture notes~\cite{BS16}. \subsection{Sum-of-squares relaxations for \text{\sc Max-Cut}\xspace} We begin with the sum-of-squares relaxation for the \text{\sc Max-Cut}\xspace problem, which generalizes the basic SDP from \Cref{def:mcsdp}.
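As a concrete reference point for the relaxations that follow, the exact optimum of a tiny \text{\sc Max-Cut}\xspace instance can be computed by brute force over all $\pm 1$ assignments. The toy instance below (the $5$-cycle with uniform edge weights) is our own example, not one from this paper:

```python
from itertools import product

# Brute-force Max-Cut on the 5-cycle C_5 with uniform edge weights w(e) = 1/5.
edges = [(i, (i + 1) % 5) for i in range(5)]
w = 1.0 / len(edges)

def cut_value(x):
    # E_{(u,v) ~ E}[1/2 - (1/2) x_u x_v] for a cut x in {-1, +1}^V
    return sum(w * (0.5 - 0.5 * x[a] * x[b]) for a, b in edges)

best = max(cut_value(x) for x in product([-1, 1], repeat=5))
```

Any $\pm 1$ assignment leaves at least one edge of an odd cycle uncut, so the optimum cuts $4$ of the $5$ edges, giving value $4/5$; the basic SDP, by contrast, is strictly larger on odd cycles, which is what the rounding analysis has to contend with.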
In fact, we will state the SoS hierarchy in terms of a general polynomial optimization problem over Boolean (i.e.\ $\pm 1$) variables. Let $\calI$ be a finite set and $x = \{x_i\}_{i \in \calI}$ be a set of indeterminates indexed by~$\calI$. Let $p(x)$ be a polynomial in the $x_i$'s, and consider the optimization problem \begin{align*} \max &~~p(x) \\ \text{s.t.} &~~x_i^2 = 1,~\forall i \in \calI. \end{align*} For example, \text{\sc Max-Cut}\xspace is the case when $\calI = V$ and $p(x) = \E_{(\bu, \bv) \sim E}[\frac{1}{2}- \frac{1}{2} x_{\bu} x_{\bv}]$. An alternative way to write this maximization is over probability distributions $\mu$ on $\pm 1$ assignments, i.e. functions $\mu: \{-1, 1\}^{\calI} \rightarrow \R^{\geq 0}$ such that $\sum_x \mu(x) = 1$. Then the optimum value is equal to \begin{align*} \max &~~\E_{\mu}[p(\bx)] = \sum_{x \in \{-1, 1\}^{\calI}} \mu(x) \cdot p(x) \\ \text{s.t.} &~~\text{$\mu$ is a probability distribution,} \end{align*} because we can take $\mu$ to have support only on the optimizing $x$'s. Note that because~$\mu$ is a probability distribution, it satisfies two properties: (i) $\E_{\mu}[1] = 1$, (ii) for each polynomial~$q(x)$, $\E_{\mu}[q(\bx)^2] \geq 0$. Indeed, both of these properties hold pointwise, for all~$x$. The SoS hierarchy replaces this optimization over probability distributions with an optimization over ``pseudo-distributions'' while partially maintaining these two properties. \begin{definition}[The SoS hierarchy for Boolean optimization problems] Let $\mu: \{-1, 1\}^\calI \rightarrow \R$ be a function. Given a polynomial $q(x)$, we write \begin{equation*} \widetilde{\E}_\mu[q(x)] = \sum_{x \in \{-1, 1\}^{\calI}} \mu(x) \cdot q(x). \end{equation*} We say that $\mu$ is a \emph{degree-$d$ pseudo-distribution} if $\widetilde{\E}_\mu[1] = 1$ and $\widetilde{\E}_\mu[q(x)^2] \geq 0$ for all polynomials~$q$ of degree at most~$d/2$. 
In this case, we say that $\widetilde{\E}_\mu[\cdot]$ is a \emph{degree-$d$ pseudo-expectation}. The value of the degree-$d$ SoS relaxation is simply the maximum of $\widetilde{\E}_\mu[p(x)]$ over all pseudo-distributions~$\mu$. \end{definition} It can be shown that the value of the degree-$2$ SoS relaxation is equal to the value of the basic SDP. \subsection{Sum-of-squares relaxations for \text{\sc Quantum} \text{\sc Max-Cut}\xspace}\label{sec:heis-sos} Now, we extend the Sum of Squares hierarchy to optimization problems over quantum states. We will consider states consisting of qubits indexed by a finite set $\calI$, i.e.\ unit vectors in $(\C^2)^{\otimes \calI}$. Let $H$ be a square matrix acting on $(\C^2)^{\otimes \calI}$ and consider the optimization problem \begin{align*} \max &~~\tr[\rho \cdot H] \\ \text{s.t.} &~~\text{$\rho$ is a density matrix.} \end{align*} For example, \text{\sc Quantum} \text{\sc Max-Cut}\xspace is the case when $\calI = V$ and $H = \E_{(\bu, \bv) \sim E}[h_{\bu, \bv}]$. Because~$\rho$ is a density matrix, it satisfies three properties: (1) $\rho$ is Hermitian, (2) $\tr[\rho \cdot I] = 1$, and (3) for any matrix $M$, $\tr[\rho \cdot M^\dagger M] \geq 0$. The last of these is because $A = M^\dagger M$ is PSD, and a matrix $\rho$ is PSD if and only if $\tr[\rho \cdot A] \geq 0$ for all PSD matrices~$A$. The SoS hierarchy will instead optimize over ``pseudo-density matrices'' while partially maintaining these three properties. To begin, we first define a matrix analogue of degree. \begin{definition}[The basis of Pauli matrices] The Pauli matrices $\{I, X, Y, Z\}^{\otimes n}$ form an orthogonal basis for the set of $2^n \times 2^n$ matrices. Given $P, Q \in \{I, X, Y, Z\}^{\otimes n}$, they satisfy \begin{equation*} \tr[P Q] = \left\{\begin{array}{cl} 2^n & \text{if $P = Q$,}\\ 0 & \text{otherwise}. \end{array}\right. \end{equation*} Given a $2^n \times 2^n$ matrix $M$, we write $\widehat{M}(P)$ for the coefficient of~$M$ on~$P$ in this basis.
In other words, \begin{equation*} M = \sum_{P \in \{I, X, Y, Z\}^{\otimes n}} \widehat{M}(P) \cdot P. \end{equation*} In addition, $M$ is Hermitian if and only if $\widehat{M}(P)$ is real for all~$P$. \end{definition} \begin{definition}[Degree of a matrix]\label{def:matrix-degree} Given $P \in \{I, X, Y, Z\}^{\otimes n}$, the \emph{degree} of $P$, denoted $|P|$, is the number of qubits on which~$P$ is not the $2 \times 2$ identity matrix. More generally, we say that a $2^n \times 2^n$ matrix~$M$ has degree at most~$d$ if $\widehat{M}(P) = 0$ for all $|P| > d$. \end{definition} Now we describe the analogue of the sum-of-squares hierarchy for quantum states, which is known as the \emph{NPA} or \emph{non-commutative Sum of Squares (ncSoS) hierarchy}. \begin{definition}[The ncSoS hierarchy for quantum optimization problems] Let $\rho$ and $M$ be square matrices acting on $(\C^2)^{\otimes \calI}$. We write \begin{equation*} \widetilde{\E}_{\rho}[M]= \tr[\rho \cdot M]. \end{equation*} We say that $\rho$ is a \emph{degree-$d$ pseudo-density matrix} if $\rho$ is Hermitian, $\widetilde{\E}_\rho[I] = 1$, and $\widetilde{\E}_\rho[M^\dagger M] \geq 0$ for all matrices~$M$ of degree at most~$d/2$ (cf.\ \Cref{def:matrix-degree}). In this case, we say that $\widetilde{\E}_\rho[\cdot]$ is a \emph{degree-$d$ pseudo-expectation}. The value of the degree-$d$ ncSoS relaxation is simply the maximum of $\widetilde{\E}_\rho[H]$ over all pseudo-density matrices~$\rho$. \end{definition} \begin{remark}[Convergence of the SoS relaxation] When $d = 2 n$, where $n = |\calI|$, the SoS relaxation solves the optimization problem exactly. This is because every square matrix $M$ acting on $(\C^2)^{\otimes \calI}$ has degree at most~$n$; thus $ \tr[\rho \cdot M^\dagger M] = \widetilde{\E}_\rho[M^\dagger M] \geq 0$, and so~$\rho$ must be positive semidefinite.
\end{remark} \subsection{Degree-two non-commutative Sum of Squares} Now we analyze the degree-2 ncSoS relaxation for \text{\sc Quantum} \text{\sc Max-Cut}\xspace and show that it coincides with the basic SDP we considered in \Cref{sec:sdp_proofs}. We begin with a definition. \begin{definition}[Degree-$d$ slice of a matrix]\label{def:slice} Given a $2^n \times 2^n$ matrix~$M$, we write $M^{=d}$ for its degree-$d$ component, i.e. \begin{equation*} M^{=d} = \sum_{P : |P| = d} \widehat{M}(P) \cdot P. \end{equation*} \end{definition} Let~$\rho$ be a feasible solution to the degree-2 ncSoS relaxation. We will begin by showing that we may assume without loss of generality that $\rho$ only has degree~$0$ and~$2$ components, i.e.\ that $\rho = \rho^{=0} + \rho^{=2}$. Prior to showing this, we will need a technical lemma. \begin{lemma}\label{prop:pauli-commute-negation} Let $P, Q, R \in \{I, X, Y, Z\}^{\otimes n}$. Then $\tr(P Q R) = (-1)^{|P| + |Q| + |R|} \cdot \tr(P R Q)$. \end{lemma} \begin{proof} First, we prove this for~$n = 1$. When $n = 1$, both sides are zero unless $QR$ is equal to~$P$ up to a multiplicative constant. Suppose this is so. If one of~$P$, $Q$, or $R$ is the identity matrix, then the other two are equal to each other, and so $\tr(P Q R) = \tr(P R Q)$. This satisfies the equality because $|P| + |Q| + |R|$ is either $0$ or $2$ in this case. Otherwise, none of $P$, $Q$, or $R$ is the identity matrix, and so they are distinct Pauli matrices, which means~$Q$ and~$R$ anticommute. So $\tr(P Q R) = - \tr(P R Q)$, satisfying the equality because $|P| + |Q| + |R| =3$ in this case. Now, the general~$n$ case follows from the~$n= 1$ case because \begin{equation*} \tr(P Q R) = \prod_{i=1}^n \tr(P_i Q_i R_i) = \prod_{i=1}^n \Big((-1)^{|P_i| + |Q_i| + |R_i|} \cdot\tr(P_i R_i Q_i)\Big) = (-1)^{|P| + |Q| + |R|} \cdot \tr(P R Q). \end{equation*} This completes the proof.
\end{proof} \begin{proposition}[Restricting to the degree-0 and degree-2 slices] Let $\rho$ be a feasible solution. Then $\rho^{=0} + \rho^{=2}$ is a feasible solution with the same value as~$\rho$. \end{proposition} \begin{proof} To begin, we claim that $\rho' = \rho^{=0} + \rho^{=1} + \rho^{=2}$ is a feasible solution with the same value as~$\rho$. This is because the constraints $\widetilde{\E}_\rho[I] = 1$ and $\widetilde{\E}_\rho[M^\dagger M] \geq 0$ and the objective $\widetilde{\E}_\rho[H_G]$ feature matrices of degree at most~$2$, and so these values are unchanged if we replace~$\rho$ with $\rho^{=0} + \rho^{=1} + \rho^{=2}$. Next, we claim that $\rho'' = \rho^{=0} - \rho^{=1} + \rho^{=2}$ is a feasible solution with the same value as $\rho'$. The constraint $\widetilde{\E}_{\rho''}[I] = 1$ and the value $\widetilde{\E}_{\rho''}[H_G]$ feature matrices which have no degree-1 terms, so negating $\rho^{=1}$ doesn't affect these expressions. As for the remaining constraint, for each degree-1 matrix~$M$, \begin{align*} \widetilde{\E}_{\rho''}[M^\dagger M] & = \tr((\rho^{=0} - \rho^{=1} + \rho^{=2}) \cdot M^\dagger M)\\ & = \sum_{|P| \leq 2} \sum_{|Q|, |R| \leq 1} (-1)^{|P|} \cdot \widehat{\rho}(P) \widehat{M}^\dagger(Q)\widehat{M}(R) \cdot \tr(P Q R) \\ & = \sum_{|P| \leq 2} \sum_{|Q|, |R| \leq 1} (-1)^{|R| + |Q|} \cdot \widehat{\rho}(P) \widehat{M}^\dagger(Q)\widehat{M}(R) \cdot \tr(P R Q) \tag{by \Cref{prop:pauli-commute-negation}}\\ & = \tr((\rho^{=0} + \rho^{=1} + \rho^{=2}) \cdot (M^{=0} - M^{=1}) (M^{=0} - M^{=1})^\dagger)\\ & = \widetilde{\E}_{\rho'}[(M^{=0} - M^{=1}) (M^{=0} - M^{=1})^\dagger] \geq 0. \end{align*} Hence, $\rho''$ satisfies this constraint as well. We conclude by noting that $\tfrac{1}{2}(\rho' + \rho'') = \rho^{=0} + \rho^{=2}$ is a feasible solution with the same value as~$\rho$, because the constraints and the objective are linear functions of~$\rho$. \end{proof} Henceforth, we assume $\rho = \rho^{=0} + \rho ^{=2}$. 
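The sign rule of \Cref{prop:pauli-commute-negation} is also easy to verify numerically by building random tensor products of Pauli matrices; the sketch below (a sanity check, not a substitute for the proof) tests it on $n = 3$ qubits:

```python
import numpy as np

# Check tr(PQR) = (-1)^{|P|+|Q|+|R|} tr(PRQ) for random Pauli tensor products.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def random_pauli(rng, n):
    # a uniformly random element of {I, X, Y, Z}^{(x) n}, with its degree
    idx = rng.integers(0, 4, size=n)
    M = np.array([[1.0 + 0.0j]])
    for k in idx:
        M = np.kron(M, PAULIS[k])
    return M, int(np.count_nonzero(idx))   # degree = # non-identity factors

rng = np.random.default_rng(2)
ok = True
for _ in range(100):
    P, dP = random_pauli(rng, 3)
    Q, dQ = random_pauli(rng, 3)
    R, dR = random_pauli(rng, 3)
    lhs = np.trace(P @ Q @ R)
    rhs = (-1) ** (dP + dQ + dR) * np.trace(P @ R @ Q)
    ok = ok and bool(np.isclose(lhs, rhs))
```

For instance, with $n = 1$ and $(P, Q, R) = (X, Y, Z)$ one gets $\tr(XYZ) = 2i$ and $\tr(XZY) = -2i$, matching the sign $(-1)^3$.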
Using this, we note that the constraint $\widetilde{\E}_\rho[M^\dagger M] \geq 0$ holding for all $M$ which are degree-$1$ is equivalent to it holding only for~$M$ which are homogeneous degree-$1$ (i.e.\ with no degree-$0$ term). This is because if we write $M = M^{=0} + M^{=1} = \widehat{M}(I) \cdot I + M^{=1}$, then \begin{align*} \tr(\rho \cdot M^\dagger M) &= |\widehat{M}(I)|^2 \cdot \tr(\rho) + \widehat{M}(I)^\dagger \cdot \tr(\rho \cdot M^{=1}) + \widehat{M}(I) \cdot \tr(\rho \cdot (M^{=1})^\dagger) + \tr(\rho \cdot (M^{=1})^\dagger M^{=1})\\ &= |\widehat{M}(I)|^2 + \tr(\rho \cdot (M^{=1})^\dagger M^{=1}) \tag{because $\rho$ has no degree-$1$ component}\\ &\geq \tr(\rho \cdot (M^{=1})^\dagger M^{=1}), \end{align*} which is $\geq 0$ because $\widetilde{\E}_\rho[(M^{=1})^\dagger M^{=1}] \geq 0$. Now, we let $R(\cdot, \cdot)$ be the $3n \times 3n$ matrix whose rows and columns are indexed by degree-$1$ Pauli matrices, defined by \begin{equation*} R(P_i, Q_j) = \tr(\rho \cdot P_i Q_j) \end{equation*} for all $P, Q \in \{X, Y, Z\}$ and $i, j \in \{1, \ldots, n\}$. When $i \neq j$, or when $i = j$ and $P = Q$, the product $P_i Q_j$ is a Pauli matrix in $\{I, X, Y, Z\}^{\otimes n}$, and so \begin{equation*} R(P_i, Q_j) = \tr(\rho \cdot P_i Q_j) = 2^n \cdot \widehat{\rho}(P_i Q_j), \end{equation*} which is a real number. On the other hand, when $i = j$ but $P \neq Q$, then $P_i Q_j = P_i Q_i$ is a degree-$1$ Pauli matrix times a phase of $i$ or $-i$. In this case, \begin{equation*} R(P_i, Q_i) = \tr(\rho \cdot P_i Q_i) = 0, \end{equation*} because~$\rho$ has no degree-$1$ component. Put together, these imply that $R$ is a real-valued matrix. We can now rewrite our constraints and objective function in terms of this matrix. First, the constraint $\tr(\rho) = 1$ corresponds to \begin{equation*} R(P_i, P_i) = \tr(\rho \cdot P_i P_i) = \tr(\rho) = 1 \end{equation*} for any $P_i$.
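As a concrete illustration (the choice of state is ours, not from the text), take $n = 2$ and let $\rho = |\psi\rangle\langle\psi|$ for the singlet $\psi = (|01\rangle - |10\rangle)/\sqrt{2}$, which has no degree-$1$ component. The resulting $6 \times 6$ matrix $R$ is real with unit diagonal and PSD, and the quantity $\tfrac14 - \tfrac14 \sum_{P} R(P_1, P_2)$, which is the \text{\sc Quantum} \text{\sc Max-Cut}\xspace objective on the single edge $\{1, 2\}$, evaluates to $1$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def at(P, qubit):
    # P acting on one of the two qubits (0-indexed)
    return np.kron(P, I2) if qubit == 0 else np.kron(I2, P)

# Singlet state on 2 qubits: psi = (|01> - |10>) / sqrt(2).
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# R(P_i, Q_j) = tr(rho P_i Q_j); rows/columns ordered (X_1, Y_1, Z_1, X_2, Y_2, Z_2).
ops = [at(P, q) for q in (0, 1) for P in (X, Y, Z)]
R_complex = np.array([[np.trace(rho @ A @ B) for B in ops] for A in ops])
imag_err = float(np.abs(R_complex.imag).max())   # R should be real
R = R_complex.real

# Quantum Max-Cut objective on the single edge {1, 2}.
obj = 0.25 - 0.25 * (R[0, 3] + R[1, 4] + R[2, 5])
min_eig = float(np.linalg.eigvalsh(R).min())     # R should be PSD
```

Here $R$ works out to the block matrix $\begin{psmallmatrix} I_3 & -I_3 \\ -I_3 & I_3 \end{psmallmatrix}$, with eigenvalues $0$ and $2$.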
Next, the objective function is \begin{align*} \tr(\rho \cdot H_G) & = \E_{(\bi, \bj) \sim E} \tr(\rho \cdot h_{\bi, \bj})\\ & = \E_{(\bi, \bj) \sim E} \tr(\rho \cdot \tfrac{1}{4} \cdot( I_{\bi} \ot I_{\bj} - X_{\bi} \ot X_{\bj} - Y_{\bi} \ot Y_{\bj} - Z_{\bi} \ot Z_{\bj}))\\ &= \tfrac{1}{4} - \tfrac{1}{4} \E_{(\bi, \bj) \sim E} \sum_{P \in \{X, Y, Z\}} \tr(\rho \cdot P_{\bi} \ot P_{\bj})\\ &= \tfrac{1}{4} - \tfrac{1}{4} \E_{(\bi, \bj) \sim E} \sum_{P \in \{X, Y, Z\}} R(P_{\bi}, P_{\bj}). \end{align*} For the last constraint, let $M = \sum_{P_i} \widehat{M}(P_i) \cdot P_i$ be any homogeneous degree-1 matrix. Then \begin{align*} 0 \leq \tr(\rho \cdot M^\dagger M) &= \sum_{P_i, Q_j} \widehat{M}(P_i)^\dagger \widehat{M}(Q_j) \cdot \tr(\rho \cdot P_i Q_j)\\ &= \sum_{P_i, Q_j} \widehat{M}(P_i)^\dagger \widehat{M}(Q_j) \cdot R(P_i,Q_j) = \mathrm{vec}(M)^\dagger\cdot R \cdot\mathrm{vec}(M), \end{align*} where $\mathrm{vec}(M)$ is the height-$3n$ vector with $\mathrm{vec}(M)(P_i) = \widehat{M}(P_i)$. As the $\widehat{M}(P_i)$'s are allowed to be arbitrary complex numbers, this condition is equivalent to $R$ being positive semidefinite. As a result, this matrix has the exact same form and objective as the matrix $M'(\cdot, \cdot)$ from \Cref{sec:sdp_proofs}; following the steps in that proof, one can then convert $R$ into a solution to the basic SDP. This completes the proof. \section{Technical overview} In this section, we give a technical overview of our results. We begin with definitions of the problems we consider. Then, we state SDP relaxations for these problems and rounding algorithms for these SDPs that have been considered in the literature. Finally, we state our results formally and give an overview of our proofs. \subsection{The \text{\sc Max-Cut}\xspace problem} The classical analogue of the \text{\sc Quantum} \text{\sc Max-Cut}\xspace problem is the \text{\sc Max-Cut}\xspace problem. The simplest of all nontrivial CSPs, this is the problem of partitioning the vertices of a graph into two sets in order to maximize the number of edges crossing the partition. \begin{notation} We use \textbf{boldface} to denote random variables. \end{notation} \begin{definition}[Weighted graph] A \emph{weighted graph} $G = (V, E, w)$ is an undirected graph with weights on the edges specified by $w: E \rightarrow \R^{\geq 0}$. The weights specify a probability distribution on the edges, i.e.\ $\sum_{e \in E} w(e) = 1$. We write $\boldsymbol{e} \sim E$ or $(\bu, \bv) \sim E$ for a random edge sampled from this distribution. We generally use \emph{graph} as shorthand for weighted graph. \end{definition} \begin{definition}[\text{\sc Max-Cut}\xspace]\label{def:max-cut} Given a graph $G = (V, E, w)$, a \emph{cut} is a function $f:V\rightarrow\{\pm 1\}$.
The \emph{value} of the cut is \begin{equation*} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{2} - \tfrac{1}{2} f(\bu) f(\bv)]. \end{equation*} \text{\sc Max-Cut}\xspace is the problem of finding the value of the largest cut, i.e.\ the quantity \begin{equation*} \text{\sc Max-Cut}\xspace(G) = \max_{f:V \rightarrow \{\pm 1\}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{2} - \tfrac{1}{2} f(\bu) f(\bv)]. \end{equation*} \end{definition} \text{\sc Max-Cut}\xspace appears on Karp's original list of $\NP$-complete problems~\cite{Kar72}, and it is $\NP$-hard to approximate to a factor better than $\tfrac{16}{17}$~\cite{TSSW00}. Goemans and Williamson gave an algorithm using the basic SDP with approximation ratio $0.878$~\cite{GW95}, and this SDP algorithm was later shown to be optimal by~\cite{KKMO07}, at least assuming the UGC. Even without assuming the UGC, though, it is known that the SoS hierarchy is still the optimal algorithm among a large class of algorithms for \text{\sc Max-Cut}\xspace, namely those given by polynomial-size SDPs~\cite{LRS15}. \subsection{The \text{\sc Quantum} \text{\sc Max-Cut}\xspace problem} The main focus of this work is the \text{\sc Quantum} \text{\sc Max-Cut}\xspace problem, a special case of the local Hamiltonian problem, first introduced by Gharibian and Parekh in~\cite{GP19}. Although the local Hamiltonian problem is typically stated as a minimization problem, they instead defined \text{\sc Quantum} \text{\sc Max-Cut}\xspace to be a maximization problem, as this makes it more convenient to study from an approximation algorithms perspective and resembles \text{\sc Max-Cut}\xspace.
As stated earlier, \text{\sc Quantum} \text{\sc Max-Cut}\xspace is a natural maximization variant of the anti-ferromagnetic Heisenberg XYZ model; we discuss this viewpoint, as well as the Heisenberg model, in greater detail in~\Cref{sec:qmaxcut-heisenberg}. \begin{definition}[The \text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction] The \emph{\text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction} is the $2$-qubit operator $ h = \tfrac{1}{4}(I \ot I - X \ot X - Y \ot Y - Z \ot Z). $ \end{definition} Here~$X$, $Y$, and $Z$ refer to the standard \emph{Pauli matrices}. Intuitively, the \text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction, when applied to a pair of qubits, enforces that they are opposites in the~$X$, $Y$, and~$Z$ bases. \begin{definition}[\text{\sc Quantum} \text{\sc Max-Cut}\xspace] Let $G = (V, E, w)$ be a graph known as the \emph{interaction graph}. The corresponding instance of the \emph{\text{\sc Quantum} \text{\sc Max-Cut}\xspace problem} is the matrix acting on $(\C^2)^{\ot V}$ given by \begin{equation*} H_G = \sum_{(u, v) \in E} w_{u, v} \cdot h_{u, v} =\E_{(\bu, \bv) \sim E} h_{\bu, \bv}. \end{equation*} Here, $h_{\bu, \bv}\in \mathbb{C}^{2^{|V|} \times 2^{|V|}}$ is shorthand for $h \ot I_{V \setminus \{\bu, \bv\}}$, the \text{\sc Quantum} \text{\sc Max-Cut}\xspace interaction applied to the qubits~$\bu$ and~$\bv$, tensored with the identity on the remaining qubits. \end{definition} \begin{definition}[Energy] Let $H_G$ be an instance of \text{\sc Quantum} \text{\sc Max-Cut}\xspace. Given a state $\ket{\psi} \in (\C^2)^{\otimes V}$, its \emph{value} or \emph{energy} is the quantity $\bra{\psi} H_G \ket{\psi}$. The \emph{maximum energy} of $H_G$, also referred to as its \emph{value}, is \begin{equation*} \text{\sc QMax-Cut}\xspace(G) = \lambda_{\mathrm{max}} (H_G) = \max_{\ket{\psi} \in (\C^2)^{\otimes V},\, \Vert \ket{\psi} \Vert = 1} \bra{\psi} H_G \ket{\psi}.
\end{equation*} \end{definition} To see one way in which \text{\sc Quantum} \text{\sc Max-Cut}\xspace and \text{\sc Max-Cut}\xspace are related, let $D_G$ be the diagonal matrix consisting of the diagonal entries of $H_G$ in the computational basis; then $\lambda_{\mathrm{max}} (D_G)$ is half the value of the maximum cut in $G$. The second problem we consider is the special case of \text{\sc Quantum} \text{\sc Max-Cut}\xspace in which the optimization is only over product states; restricting to product states is a common approach in approximation algorithms for the local Hamiltonian problem. \begin{definition}[Product state value] The \emph{product state value of $H_G$} is \begin{equation*} \text{\sc Prod}\xspace(G) = \max_{\forall v \in V, \ket{\psi_v} \in \C^2} \bra{\psi_G} H_G \ket{\psi_G}\text{, where } \ket{\psi_G} = \otimes_{v \in V} \ket{\psi_v}. \end{equation*} \end{definition} There is an alternative expression for the product state value which we will find convenient to use. It is related to the \emph{Bloch sphere} representation of qubits; see \Cref{sec:product-states} for a proof. \begin{definition}[Balls and spheres] Given a dimension $d \geq 1$, the $d$-dimensional unit ball and sphere are given by $B^d = \{x \in \R^d \mid \Vert x \Vert \leq 1\}$, and $S^{d-1} = \{x \in \R^d \mid \Vert x \Vert = 1\}$, respectively. \end{definition} \begin{proposition}[Rewriting the product state value]\label{prop:rewrite-product} \begin{equation*} \text{\sc Prod}\xspace(G) = \max_{f:V \rightarrow S^2} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{1}{4} \langle f(\bu), f(\bv)\rangle]. \end{equation*} \end{proposition} Note the similarity to the definition of \text{\sc Max-Cut}\xspace (\Cref{def:max-cut}): aside from the extra factor of $\tfrac{1}{2}$, the key distinction is that the function~$f$ has range $S^2$ rather than $S^0 = \{-1, 1\}$. As a result, the product state value can be viewed as an additional quantum generalization of \text{\sc Max-Cut}\xspace.
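As a small worked example of these definitions (a sketch of ours, assuming numpy), one can construct $H_G$ for the triangle with uniform edge weights and diagonalize it. The triangle's maximum energy is $1/2$, while the best product state is known to achieve only $3/8$, with Bloch vectors at $120^\circ$ angles; it is a standard instance where entanglement strictly helps.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def h_edge(u, v, n):
    """The Quantum Max-Cut interaction on qubits u, v of an n-qubit system."""
    def embed(P, Q):
        ops = [I2] * n
        ops[u], ops[v] = P, Q
        return reduce(np.kron, ops)
    return (np.eye(2**n) - embed(X, X) - embed(Y, Y) - embed(Z, Z)) / 4

# Triangle graph, each edge with weight 1/3.
n, edges = 3, [(0, 1), (1, 2), (0, 2)]
H = sum(h_edge(u, v, n) for u, v in edges) / len(edges)
qmc = np.linalg.eigvalsh(H).max()
print(round(qmc, 4))  # 0.5, strictly above the product state value 3/8
```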
We may further generalize \text{\sc Max-Cut}\xspace by allowing $f$ to have range $S^{k-1}$, yielding the rank-constrained version of \text{\sc Max-Cut}\xspace studied by Briët, Oliveira, and Vallentin~\cite{BOV10}. \begin{definition}[Rank-$k$ \text{\sc Max-Cut}\xspace]\label{def:rank-k-maxcut} \begin{equation*} \text{\sc Max-Cut}\xspace_k(G) = \max_{f:V \rightarrow S^{k-1}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{2} - \tfrac{1}{2} \langle f(\bu), f(\bv)\rangle]. \end{equation*} \end{definition} Note that $\text{\sc Max-Cut}\xspace_3(G) = 2 \cdot \text{\sc Prod}\xspace(G)$. Indeed, what we refer to as the BOV algorithm for the product state value was actually originally stated in~\cite{BOV10} as an algorithm for $\text{\sc Max-Cut}\xspace_3(G)$, though it applies equally well to both cases. \subsection{Semidefinite programming relaxations} A standard approach for solving \text{\sc Max-Cut}\xspace is through its SDP relaxation. \begin{definition}[The \text{\sc Max-Cut}\xspace SDP]\label{def:mcsdp} Let $G = (V, E, w)$ be an $n$-vertex graph. The value of the \text{\sc Max-Cut}\xspace SDP is \begin{equation*} \text{\sc SDP}_{\text{\sc MC}}\xspace(G) = \max_{f:V\rightarrow S^{n-1}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{2} - \tfrac{1}{2} \langle f(\bu), f(\bv)\rangle]. \end{equation*} \end{definition} This is a relaxation because the optimal objective of the \text{\sc Max-Cut}\xspace SDP is at least as large as $\text{\sc Max-Cut}\xspace(G)$ for all graphs~$G$. It can be approximated in polynomial time, meaning one can compute an $f:V \rightarrow S^{n-1}$ of value $\text{\sc SDP}_{\text{\sc MC}}\xspace(G) - \eps$ in time $\poly(n) \cdot \log(1/\eps)$\footnote{The usual considerations show arbitrary additive approximation in polynomial time, e.g.~see~\cite{VB96}.}. In addition, it is equivalent to the level-2 SoS relaxation for \text{\sc Max-Cut}\xspace. (Note also that $\text{\sc SDP}_{\text{\sc MC}}\xspace(G) = \text{\sc Max-Cut}\xspace_n(G)$.) 
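To see that the relaxation can be strictly loose, consider the $5$-cycle: brute force gives $\text{\sc Max-Cut}\xspace(C_5) = 4/5$, while placing the vertices at consecutive angles of $4\pi/5$ on a circle is a feasible vector solution of value $\tfrac{1}{2} - \tfrac{1}{2}\cos(4\pi/5) \approx 0.9045$, which is known to be the SDP optimum. A sketch of ours, assuming numpy:

```python
import numpy as np
from itertools import product

edges = [(i, (i + 1) % 5) for i in range(5)]  # the 5-cycle, uniform weights

def cut_value(f):
    return np.mean([0.5 - 0.5 * f[u] * f[v] for u, v in edges])

max_cut = max(cut_value(f) for f in product([-1, 1], repeat=5))  # = 0.8

# Feasible SDP embedding: unit vectors at consecutive angles of 4*pi/5.
theta = 4 * np.pi / 5
embed = [np.array([np.cos(theta * i), np.sin(theta * i)]) for i in range(5)]
sdp_value = np.mean([0.5 - 0.5 * embed[u] @ embed[v] for u, v in edges])
# sdp_value ~ 0.9045 > max_cut = 0.8, so the relaxation is not tight on C_5.
```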
There is a similar SDP relaxation for the product state value of \text{\sc Quantum} \text{\sc Max-Cut}\xspace. \begin{definition}[The product state SDP] \begin{equation*} \text{\sc SDP}_{\text{\sc Prod}}\xspace(G) = \max_{f:V\rightarrow S^{n-1}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{1}{4} \langle f(\bu), f(\bv)\rangle]. \end{equation*} \end{definition} \noindent This is in fact the SDP relaxation given by level-2 of the SoS hierarchy applied to the product state value. Note that $\text{\sc SDP}_{\text{\sc Prod}}\xspace(G) = \tfrac{1}{2} \text{\sc SDP}_{\text{\sc MC}}\xspace(G)$. Now we state the SDP relaxation we will use for the maximum energy of \text{\sc Quantum} \text{\sc Max-Cut}\xspace, which is equivalent to the level-2 ncSoS relaxation of the maximum energy. \begin{definition}[The \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP]\label{def:heis-sdp} \begin{equation*} \text{\sc SDP}_{\text{\sc QMC}}\xspace(G) = \max_{f:V\rightarrow S^{n-1}} \E_{(\bu, \bv) \sim E}[ \tfrac{1}{4} - \tfrac{3}{4} \langle f(\bu), f(\bv)\rangle]. \end{equation*} \end{definition} \noindent Deriving this requires a bit more care than either of the \text{\sc Max-Cut}\xspace or product state SDPs. Indeed, it is not even obvious at first glance that this is a legitimate relaxation! We include a proof of this fact in \Cref{sec:sdp_proofs} and a discussion of the ncSoS hierarchy in \Cref{sec:ncsos}. To our knowledge, we are the first to observe this particularly simple form of the \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP. Note that $\text{\sc SDP}_{\text{\sc QMC}}\xspace(G) = \tfrac{3}{2} \cdot \text{\sc SDP}_{\text{\sc MC}}\xspace(G) - \tfrac{1}{2}$. It may be surprising that these three very different optimization problems yield such similar SDPs. Indeed, the optimizing function $f:V\rightarrow S^{n-1}$ is the same in each of them. 
In spite of this, however, the quality of the SDP relaxation is different in each case because each case features a different objective function to compare the SDP against. \subsection{Rounding algorithms}\label{sec:rounding} Most SDP algorithms work by computing the optimal SDP solution and then converting it to a solution to the original optimization problem in a process known as \emph{rounding}. We will look at the standard Goemans-Williamson algorithm, used to round the \text{\sc Max-Cut}\xspace SDP, and two generalizations of this algorithm, used to round the product state and \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDPs. \paragraph{Halfspace rounding.} The Goemans-Williamson algorithm~\cite{GW95} uses a procedure called ``halfspace rounding'' to round the $\text{\sc Max-Cut}\xspace$ SDP optimum $f_{\mathrm{SDP}}:V \rightarrow S^{n-1}$ into a random cut $\boldf:V \rightarrow \{-1, 1\}$. Halfspace rounding works as follows: \begin{enumerate} \item Sample a random vector $\bz = (\bz_1, \ldots, \bz_n)$ from the $n$-dimensional Gaussian distribution. \item For each $u \in V$, set $\boldf(u)$ equal to \begin{equation*} \boldf(u) := \sgn(\langle \bz, f_{\mathrm{SDP}}(u)\rangle) = \frac{\langle \bz, f_{\mathrm{SDP}}(u)\rangle}{|\langle \bz, f_{\mathrm{SDP}}(u)\rangle |}. \end{equation*} \end{enumerate} Goemans and Williamson showed that for each $u, v \in V$, \begin{equation}\label{eq:gw-ineq} \E_{\boldf}[\tfrac{1}{2}-\tfrac{1}{2}\boldf(u) \boldf(v)] \geq \alpha_{\mathrm{GW}} \cdot[\tfrac{1}{2}-\tfrac{1}{2}\langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle], \end{equation} where $\alpha_{\mathrm{GW}} \approx 0.878567$. Taking an average over the edges in~$G$, this shows that the expected value of~$\boldf$ is at least $\alpha_{\mathrm{GW}} \cdot \text{\sc SDP}_{\text{\sc MC}}\xspace(G)$, and hence at least $\alpha_{\mathrm{GW}} \cdot \text{\sc Max-Cut}\xspace(G)$.
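The two steps above, and the per-edge guarantee, can be simulated directly (a toy sketch of ours, assuming numpy): for two unit vectors with inner product $\rho$, the probability that a random halfspace separates them is $\arccos(\rho)/\pi$.

```python
import numpy as np

def halfspace_round(vectors, z):
    """Round each row of `vectors` to a sign via the random direction z."""
    return np.sign(vectors @ z)

rho = -0.69
u = np.array([1.0, 0.0])
v = np.array([rho, np.sqrt(1 - rho**2)])       # <u, v> = rho
vecs = np.stack([u, v])

rng = np.random.default_rng(0)
trials = 200_000
zs = rng.normal(size=(trials, 2))
cuts = halfspace_round(vecs, zs.T)             # shape (2, trials)
p_separated = np.mean(cuts[0] != cuts[1])
# p_separated ~ arccos(rho)/pi ~ 0.742; dividing by (1/2 - 1/2*rho) = 0.845
# gives ~ 0.879, the Goemans-Williamson ratio at this rho.
```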
This shows that the Goemans-Williamson algorithm is an $\alpha_{\mathrm{GW}}$-approximation algorithm. \paragraph{Projection rounding.} Briët, Oliveira, and Vallentin~\cite{BOV10} suggested a generalization of halfspace rounding, which we refer to as ``projection rounding'', in order to round solutions of $\text{\sc SDP}_{\text{\sc Prod}}\xspace(G)$. The goal is to convert an SDP solution $f_{\mathrm{SDP}}:V \rightarrow S^{n-1}$ into a function $\boldf:V \rightarrow S^{2}$, which can then be converted into a product state via~\Cref{def:bloch_vectors}. Projection rounding works as follows: \begin{enumerate} \item Sample a random $3 \times n$ matrix $\bZ$ consisting of $3n$ i.i.d. standard Gaussians. \item For each $u \in V$, set $\boldf(u) = \bZ f_{\mathrm{SDP}}(u) / \Vert \bZ f_{\mathrm{SDP}}(u)\Vert_2$. \end{enumerate} Briët, Oliveira, and Vallentin showed that for each $u, v \in V$, \begin{equation}\label{eq:bov-ineq} \E_{\boldf}[\tfrac{1}{4}-\tfrac{1}{4}\langle\boldf(u), \boldf(v)\rangle] \geq \alpha_{\mathrm{BOV}} \cdot[\tfrac{1}{4}-\tfrac{1}{4}\langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle], \end{equation} where $\alpha_{\mathrm{BOV}} \approx 0.956$. With the same reasoning as above, they conclude the following. \begin{theorem}[Performance of the BOV algorithm~\cite{BOV10}]\label{thm:bov-thm} The Briët-Oliveira-Vallentin algorithm for the product state value achieves approximation ratio $\alpha_{\mathrm{BOV}}$. \end{theorem} Next, Gharibian and Parekh~\cite{GP19} used projection rounding to round solutions of $\text{\sc SDP}_{\text{\sc QMC}}\xspace(G)$ into product states. Like~\cite{BOV10}, this involves rounding $f_{\mathrm{SDP}}:V \rightarrow S^{n-1}$, now the solution of $\text{\sc SDP}_{\text{\sc QMC}}\xspace(G)$, into $\boldf:V \rightarrow S^{2}$.
They establish the inequality \begin{equation}\label{eq:gp-ineq} \E_{\boldf}[\tfrac{1}{4}-\tfrac{1}{4}\langle \boldf(u), \boldf(v)\rangle] \geq \alpha_{\mathrm{GP}} \cdot[\tfrac{1}{4}-\tfrac{3}{4}\langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle], \end{equation} where $\alpha_{\mathrm{GP}} \approx 0.498$. Note that this inequality possesses an asymmetry not present in \cref{eq:gw-ineq,eq:bov-ineq}: the coefficient of $\langle \boldf(u), \boldf(v)\rangle$ on the left-hand side is $\tfrac{1}{4}$ but the coefficient of $\langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle$ on the right-hand side is $\tfrac{3}{4}$. This asymmetry comes about because they are solving the SDP relaxation for the maximum energy, which is an optimization over all states, but only rounding into the set of product states. Nevertheless, this yields the following theorem. \begin{theorem}[Performance of the GP algorithm~\cite{GP19}] The Gharibian-Parekh algorithm for \text{\sc Quantum} \text{\sc Max-Cut}\xspace achieves approximation ratio $\alpha_{\mathrm{GP}}$. \end{theorem} \subsection{SDP and algorithmic gaps} The quality of an SDP relaxation is traditionally measured through its \emph{integrality gap}. \begin{definition}[Integrality gap]\label{def:integrality-gap} Let $\mathcal{P}$ denote a maximization problem and let $\mathrm{SDP}(\cdot)$ be a semidefinite programming relaxation for $\calP$. Given an instance~$\calI$ of $\calP$, its \emph{integrality gap} is the quantity \begin{equation*} \mathrm{GapSDP}(\calI) = \frac{\mathrm{OPT}(\calI)}{\mathrm{SDP}(\calI)}. \end{equation*} The \emph{integrality gap} of the SDP is defined to be the infimum of the integrality gaps over all instances, i.e. \begin{equation*} \inf_{\text{instances $\calI$}}\{\mathrm{GapSDP}(\calI)\}. \end{equation*} \end{definition} The integrality gap of an SDP serves as a bound on the approximation ratio of any algorithm based on rounding its solutions.
This is because one typically analyzes a rounding algorithm by comparing the value of its solution to the value of the SDP, as done for the rounding algorithms in \Cref{sec:rounding}. We refer to this as the \emph{standard analysis} of rounding algorithms. We therefore typically view a rounding algorithm as optimal if its worst-case performance matches the integrality gap. Usually, though, one actually cares about how the rounded solution compares to the optimal value, not the SDP value. To show that one has given a tight analysis of an algorithm's approximation ratio, one must actually exhibit a matching \emph{algorithmic gap}. \begin{definition}[Algorithmic gap] Let $\mathcal{P}$ denote a maximization problem. Let $A$ be an approximation algorithm for $\calP$, and let $A(\calI)$ be the expected value of the solution it outputs on input~$\calI$. Given an instance~$\calI$, its \emph{algorithmic gap} is the quantity \begin{equation*} \mathrm{Gap}_{A}(\calI) = \frac{A(\calI)}{\mathrm{OPT}(\calI)}. \end{equation*} The \emph{algorithmic gap} of~$A$ is defined to be the infimum of the algorithmic gaps over all instances, i.e. \begin{equation*} \inf_{\text{instances $\calI$}}\{\mathrm{Gap}_{A}(\calI)\}. \end{equation*} \end{definition} \subsection{Our results} Our first set of results is a pair of integrality gaps for the \text{\sc Quantum} \text{\sc Max-Cut}\xspace and product state SDPs. \begin{restatable}[Integrality gap for the \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP]{theorem}{sdpintgap}\label{thm:integrality-gap-heisenberg} Assuming \cref{conj:vector-borell-intro}, the \text{\sc Quantum} \text{\sc Max-Cut}\xspace semidefinite program $\text{\sc SDP}_{\text{\sc QMC}}\xspace(G)$ has integrality gap $\alpha_{\mathrm{GP}}$. \end{restatable} Assuming the vector-valued Borell's inequality, this matches the approximation ratio of the GP algorithm and shows that projection rounding is optimal for $\text{\sc SDP}_{\text{\sc QMC}}\xspace(G)$.
In addition, it shows that product states are the optimal ansatz for this SDP, as the GP algorithm outputs product states. Finally, it implies that the GP algorithm is strictly worse than the algorithms of~\cite{AGM20,PT21}, and that level-$4$ of the ncSoS hierarchy strictly improves upon level-$2$ of the ncSoS hierarchy. \begin{restatable}[Integrality gap for product state SDP]{theorem}{prodintgap}\label{thm:integrality-gap-prod} Assuming \cref{conj:vector-borell-intro}, the product state semidefinite program $\text{\sc SDP}_{\text{\sc Prod}}\xspace(G)$ has integrality gap $\alpha_{\mathrm{BOV}}$. \end{restatable} This matches the approximation ratio of the BOV algorithm and shows that projection rounding is optimal for $\text{\sc SDP}_{\text{\sc Prod}}\xspace(G)$, assuming~\cref{conj:vector-borell-intro}. Next, we show an algorithmic gap for the product state SDP. This shows that the ``standard analysis'' of the BOV algorithm is sharp, and so its approximation ratio is $\alpha_{\mathrm{BOV}}$ exactly. We note that this result is unconditional, and therefore not reliant on~\cref{conj:vector-borell-intro}. \begin{restatable}[Algorithmic gap for product state SDP]{theorem}{algogap}\label{thm:algo-gap} The Briët-Oliveira-Vallentin algorithm has algorithmic gap $\alpha_{\mathrm{BOV}}$. \end{restatable} Finally, we prove a Unique Games-hardness result for the product state value. This uses the standard framework of~\cite{KKMO07,Rag08} for translating SDP integrality gaps into inapproximability results. We apply this framework to the integrality gap from \Cref{thm:integrality-gap-prod}. \begin{theorem}[Inapproximability of the product state value]\label{thm:main-inapprox} Assuming \cref{conj:vector-borell-intro} and the Unique Games Conjecture, it is $\NP$-hard to approximate $\text{\sc Prod}\xspace(G)$ to within a factor of $\alpha_{\mathrm{BOV}}+\eps$, for all $\eps > 0$. 
\end{theorem} This shows that the BOV algorithm is optimal, assuming \cref{conj:vector-borell-intro} and the UGC. Next, we observe that the \text{\sc Quantum} \text{\sc Max-Cut}\xspace instances which occur in this proof have interaction graphs of high degree. Hence, by \cite{BH16} their product state value is roughly identical to their maximum energy. As a consequence, we also derive a Unique-Games hardness result for the maximum energy. \begin{theorem}[Inapproximability of \text{\sc Quantum} \text{\sc Max-Cut}\xspace]\label{thm:inapprox_general_state} Assuming \cref{conj:vector-borell-intro} and the Unique Games Conjecture, it is $\NP$-hard to approximate $\text{\sc QMax-Cut}\xspace(G)$ to within a factor of $\alpha_{\mathrm{BOV}}+\eps$, for all $\eps > 0$. \end{theorem} This is our one result which is \emph{not} tight, to our knowledge, as the best known approximation for $\text{\sc Quantum} \text{\sc Max-Cut}\xspace$ achieves approximation ratio $0.533$~\cite{PT21}, which is less than $\alpha_{\mathrm{BOV}} \approx 0.956$. The difficulty is that $\text{\sc SDP}_{\text{\sc QMC}}\xspace(G)$ is not an optimal SDP, as it is outperformed by the algorithms of~\cite{AGM20,PT21}, and so we cannot convert an integrality gap for it into a UG-hardness result. We also generalize our results for the product state value to hold for $\text{\sc Max-Cut}\xspace_k$ for any fixed~$k$. See~\Cref{sec:rank-constrained} for more details. \subsection{Proof overview}\label{sec:tech-overview-overview} Our proof of \Cref{thm:integrality-gap-heisenberg,thm:integrality-gap-prod} is inspired by a well-known integrality gap construction for the \text{\sc Max-Cut}\xspace SDP due to Feige and Schechtman~\cite{FS02} which achieves an integrality gap of~$\alpha_{\mathrm{GW}}$. We will begin with an overview of this construction and a related construction called the ``Gaussian graph'', and then we will discuss how to modify these constructions to give integrality gaps for the two \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDPs. \paragraph{Integrality gap for the \text{\sc Max-Cut}\xspace SDP.} The construction of the Feige-Schechtman \text{\sc Max-Cut}\xspace integrality gap is motivated by the following two desiderata. \begin{enumerate} \item \label{item:desiderata-one} Given an optimal solution $f_{\mathrm{SDP}}:V \rightarrow S^{n-1}$ to the \text{\sc Max-Cut}\xspace SDP, halfspace rounding outputs a random solution~$\boldf$ with expected value exactly equal to $\alpha_{\mathrm{GW}} \cdot \text{\sc SDP}_{\text{\sc MC}}\xspace(G)$.
\item \label{item:desiderata-two} This random solution~$\boldf$ is always an optimal cut, i.e.\ it has value $\text{\sc Max-Cut}\xspace(G)$. \end{enumerate} As we saw in \Cref{sec:rounding}, the random solution~$\boldf$ output by halfspace rounding has expected value at least $\alpha_{\mathrm{GW}} \cdot \text{\sc SDP}_{\text{\sc MC}}\xspace(G)$. Hence, if \Cref{item:desiderata-one} were not true, one of these solutions would have value strictly bigger than $\alpha_{\mathrm{GW}} \cdot \text{\sc SDP}_{\text{\sc MC}}\xspace(G)$, contradicting~$G$ being an integrality gap. Likewise, if \Cref{item:desiderata-two} were not true, there would exist a cut with value at least~$\alpha_{\mathrm{GW}} \cdot \text{\sc SDP}_{\text{\sc MC}}\xspace(G)$. Now we use \Cref{item:desiderata-one} to derive a constraint on $f_{\mathrm{SDP}}$. Recall from \Cref{eq:gw-ineq} that the value of $\boldf$ is at least $\alpha_{\mathrm{GW}}$ times the value of $f_{\mathrm{SDP}}$ in the SDP \emph{for each edge $(u,v)$}. In other words, \begin{equation}\label{eq:gw-ineq-restated} \E_{\boldf}[\tfrac{1}{2}-\tfrac{1}{2}\boldf(u) \boldf(v)] \geq \alpha_{\mathrm{GW}} \cdot[\tfrac{1}{2}-\tfrac{1}{2}\langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle]. \end{equation} This means that if \Cref{item:desiderata-one} is true, \Cref{eq:gw-ineq-restated} must be satisfied with equality, for each edge $(u, v)$. To see what this implies, we first recall the proof of \Cref{eq:gw-ineq-restated}. Letting $\rho_{u,v}$ denote the inner product $\rho_{u,v} = \langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle$, there is an exact formula for the left-hand side, namely \begin{equation*} \E_{\boldf}[\tfrac{1}{2}-\tfrac{1}{2}\boldf(u) \boldf(v)] = \frac{\arccos(\rho_{u,v})}{\pi}. \end{equation*} See \cite{GW95} for a proof of this fact.
Then \Cref{eq:gw-ineq-restated} follows as a consequence of the statement \begin{equation*} \min_{-1\leq \rho \leq 1} \frac{\arccos(\rho)/\pi}{\tfrac{1}{2}-\tfrac{1}{2}\rho} = \alpha_{\mathrm{GW}}. \end{equation*} There is in fact a \emph{unique} minimizer of this expression, which we write as $\rho_{\mathrm{GW}} \approx -0.69$. As a result, \Cref{eq:gw-ineq-restated} is satisfied with equality if and only if $\rho_{u,v} = \rho_{\mathrm{GW}}$. Thus, \Cref{item:desiderata-one} implies that $\rho_{u,v} = \rho_{\mathrm{GW}}$ for each edge $(u,v)$. Motivated by this, Feige and Schechtman consider the \emph{$n$-dimensional sphere graph} $\mathcal{S}^{n-1}_{\rho_{\mathrm{GW}}}$. This is an infinite graph with vertex set $S^{n-1}$ in which two vertices $u, v \in S^{n-1}$ are connected whenever $\langle u, v\rangle \approx \rho_{\mathrm{GW}}$. There is a natural SDP embedding of the sphere graph $f_{\mathrm{SDP}}:S^{n-1} \rightarrow S^{n-1}$, in which $f_{\mathrm{SDP}}(u) = u$, for each $u \in S^{n-1}$. It has value \begin{equation*} \E_{(\bu, \bv) \sim E}[\tfrac{1}{2}-\tfrac{1}{2}\langle f_{\mathrm{SDP}}(\bu), f_{\mathrm{SDP}}(\bv)\rangle] = \E_{(\bu, \bv) \sim E}[\tfrac{1}{2}-\tfrac{1}{2}\langle \bu, \bv \rangle] \approx \tfrac{1}{2} - \tfrac{1}{2} \rho_{\mathrm{GW}}. \end{equation*} Thus, $\text{\sc SDP}_{\text{\sc MC}}\xspace(\mathcal{S}^{n-1}_{\rho_{\mathrm{GW}}}) \gtrsim \tfrac{1}{2} - \tfrac{1}{2} \rho_{\mathrm{GW}}$. In addition, this graph satisfies our two desiderata: \begin{enumerate} \item By construction, $\langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle \approx \rho_{\mathrm{GW}}$ for each edge $(u, v)$. Thus, halfspace rounding will produce a random cut $\boldf$ with average value $\approx \alpha_{\mathrm{GW}} \cdot (\tfrac{1}{2} - \tfrac{1}{2} \rho_{\mathrm{GW}})$. \item Each cut $\boldf$ is of the form $\boldf(u) = \sgn(\langle \bz, f_{\mathrm{SDP}}(u) \rangle) = \sgn(\langle \bz, u \rangle)$, where~$\bz$ is a random Gaussian.
By rotational symmetry, all of these cuts have the same value, and in particular they have the same value as the case when $\bz = e_1$, i.e.\ the cut $f_{\mathrm{opt}}(u) = \sgn(u_1)$. The main technical argument of~\cite{FS02} is that this is in fact the optimal cut. In other words, for every function $f : S^{n-1} \rightarrow \{-1, 1\}$, \begin{equation}\label{eq:whatevs} \E_{(\bu, \bv) \sim E}[f(\bu) f(\bv)] \geq \E_{(\bu, \bv) \sim E}[f_{\mathrm{opt}}(\bu) f_{\mathrm{opt}}(\bv)]. \end{equation} \end{enumerate} \noindent Together, these imply that $\mathcal{S}^{n-1}_{\rho_{\mathrm{GW}}}$ has integrality gap~$\alpha_{\mathrm{GW}}$. \paragraph{Moving to the Gaussian graph.} We will actually use a second, related construction of the integrality gap called the \emph{Gaussian graph}, which will turn out to be more convenient to analyze in our case. It is defined as follows. \begin{definition}[$\rho$-correlated Gaussian graph] Let $n$ be a positive integer and $-1\leq \rho \leq 1$. We define the \emph{$\rho$-correlated Gaussian graph} to be the infinite graph $\mathcal{G}^n_\rho$ with vertex set $\R^n$ in which a random edge $(\bu, \bv)$ is distributed as two $\rho$-correlated Gaussian vectors. \end{definition} For large~$n$, the Gaussian graph, when scaled by a factor of $\tfrac{1}{\sqrt{n}}$, behaves like the sphere graph. For example, if $\bu$ is a random Gaussian vector, then $\tfrac{1}{\sqrt{n}}\bu$ is close to a unit vector with high probability. In addition, if $(\bu, \bv)$ is a random edge, then $\langle \tfrac{1}{\sqrt{n}}\bu, \tfrac{1}{\sqrt{n}}\bv \rangle \approx \rho$ with high probability. As a result, an argument similar to above shows that it has an integrality gap of $\alpha_{\mathrm{GW}}$. The optimal SDP assignment is the function $f_{\mathrm{SDP}}(u) = u/\Vert u \Vert$. 
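These concentration claims are easy to check numerically (a quick sketch of ours, assuming numpy):

```python
import numpy as np

n, rho = 10_000, -0.69
rng = np.random.default_rng(0)
g, h = rng.normal(size=n), rng.normal(size=n)
u, v = g, rho * g + np.sqrt(1 - rho**2) * h   # a rho-correlated edge (u, v)

scaled_norm = np.linalg.norm(u) / np.sqrt(n)
inner = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(scaled_norm, inner)  # close to 1 and to rho = -0.69, respectively
```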
Halfspace rounding produces a cut of the form $\boldf(u) = \sgn(\langle \bz, f_{\mathrm{SDP}}(u)\rangle)$, which is equivalent (up to rotation) to the assignment $f_{\mathrm{opt}}(u) = \sgn(u_1)$. The analogue of \Cref{eq:whatevs} above, stating that $f_{\mathrm{opt}}$ is optimal, is a well-known result in Gaussian geometry known as \emph{Borell's inequality} or \emph{Borell's isoperimetric theorem}. There is a version of this theorem for positive and negative~$\rho$, but we will only need to state the negative~$\rho$ case, as the integrality gap only requires the $\rho_{\mathrm{GW}} \approx -0.69$ case. \begin{theorem}[Borell's isoperimetric theorem, negative $\rho$ case]\label{thm:borell} Let $n$ be a positive integer and $-1\leq \rho \leq 0$. Let $f : \R^n \rightarrow \{-1, 1\}$. In addition, let $f_{\mathrm{opt}}:\R^n\rightarrow \{-1, 1\}$ be defined by $f_{\mathrm{opt}}(x) = \sgn(x_1)$. Then \begin{equation*} \E_{\bu \sim_\rho \bv}[f(\bu) f(\bv)] \geq \E_{\bu \sim_\rho \bv}[f_{\mathrm{opt}}(\bu) f_{\mathrm{opt}}(\bv)]. \end{equation*} \end{theorem} \paragraph{Integrality gap for \text{\sc Quantum} \text{\sc Max-Cut}\xspace.} The integrality gaps we design for the product state and \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDPs are constructed along very similar lines to the \text{\sc Max-Cut}\xspace integrality gaps. In both cases, our goal is to show that projection rounding is the optimal rounding algorithm, which motivates us to study the conditions in which projection rounding performs worst.
As we saw in \Cref{sec:rounding}, projection rounding applied to the product state and \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDPs yields a random function $\boldf:V \rightarrow S^2$ satisfying the following conditions, for each edge $(u,v)$: \begin{align} \E_{\boldf}[\tfrac{1}{4}-\tfrac{1}{4}\langle\boldf(u), \boldf(v)\rangle] &\geq \alpha_{\mathrm{BOV}} \cdot[\tfrac{1}{4}-\tfrac{1}{4}\langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle],\label{eq:bov-ineq-restated}\\ \E_{\boldf}[\tfrac{1}{4}-\tfrac{3}{4}\langle\boldf(u), \boldf(v)\rangle] &\geq \alpha_{\mathrm{GP}} \cdot[\tfrac{1}{4}-\tfrac{3}{4}\langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle],\label{eq:pk-ineq-restated} \end{align} respectively. As in the case of \text{\sc Max-Cut}\xspace, the left-hand sides of these equations depend only on the inner product $\rho_{u, v} = \langle f_{\mathrm{SDP}}(u), f_{\mathrm{SDP}}(v)\rangle$. This was shown by~\cite{BOV10}, who gave the following exact expression for this quantity: \begin{equation}\label{eq:might-reference-later} \E_{\boldf} \langle \boldf(u), \boldf(v)\rangle = \frac{2}{3}\left(\frac{\Gamma(2)}{\Gamma(3/2)}\right)^2\rho_{u, v}\cdot\,_2F_1\left(1/2,1/2;5/2;\rho_{u, v}^2\right), \end{equation} where $_2F_1(\cdot, \cdot;\cdot;\cdot)$ is the Gaussian hypergeometric function. Thus, one can compute the approximation ratios $\alpha_{\mathrm{BOV}}$ and $\alpha_{\mathrm{GP}}$ by finding the ``worst case'' values of $\rho_{u,v}$. For \Cref{eq:bov-ineq-restated}, this is $\rho_{\mathrm{BOV}} \approx -0.584$; for \Cref{eq:pk-ineq-restated}, this is $\rho_{\mathrm{GP}} \approx -0.97$. This suggests finding a graph for the product state SDP and the \text{\sc Quantum} \text{\sc Max-Cut}\xspace SDP in which $\rho_{u, v} = \rho_{\mathrm{BOV}}$ and $\rho_{u, v} = \rho_{\mathrm{GP}}$ for each edge $(u, v)$, respectively. 
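The closed form above can be checked against a direct simulation of projection rounding (a sketch of ours, assuming numpy; the helper implements the $_2F_1$ power series for $|x| < 1$). By rotational invariance, $(\bZ f_{\mathrm{SDP}}(u), \bZ f_{\mathrm{SDP}}(v))$ is distributed as a pair of $\rho$-correlated standard Gaussians in $\R^3$, so the simulation does not need an ambient dimension.

```python
import numpy as np
from math import gamma

def hyp2f1_series(a, b, c, x, terms=200):
    """Gauss hypergeometric 2F1(a, b; c; x) via its power series (|x| < 1)."""
    total, coef = 0.0, 1.0
    for k in range(terms):
        total += coef * x**k
        coef *= (a + k) * (b + k) / ((c + k) * (k + 1))
    return total

def rounded_inner_product(rho, samples=300_000, seed=0):
    """Monte Carlo estimate of E<f(u), f(v)> under projection rounding."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=(samples, 3))
    h = rng.normal(size=(samples, 3))
    a = g                                      # plays the role of Z f_sdp(u)
    b = rho * g + np.sqrt(1 - rho**2) * h      # plays the role of Z f_sdp(v)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    return np.einsum('ij,ij->i', a, b).mean()

rho = -0.584  # the worst-case inner product for the BOV analysis
exact = (2 / 3) * (gamma(2) / gamma(3 / 2))**2 * rho \
        * hyp2f1_series(1 / 2, 1 / 2, 5 / 2, rho**2)
estimate = rounded_inner_product(rho)
# estimate and exact agree to roughly two decimal places
```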
We again use the $\rho$-correlated Gaussian graph, where the optimum SDP assignment is once more the function $f_{\mathrm{SDP}}(u) = u/\Vert u \Vert$. Projection rounding produces a solution of the form $\boldf(u) = \bZ u / \Vert \bZ u \Vert$, where $\bZ$ is a random $3 \times n$ Gaussian matrix. When $n$ is large, this is roughly equivalent to projecting $u$ onto a random 3-dimensional subspace, in which case it is equivalent (up to rotation) to $f_{\mathrm{opt}}(u) = (u_1, u_2, u_3) / \Vert (u_1, u_2, u_3)\Vert$. We must now show that $f_{\mathrm{opt}}$ is indeed the optimal solution. In the case of the product state value, we must show that it is the best among all product states; equivalently, among all functions $f:\R^n \rightarrow S^2$. On the other hand, for the case of the ground state energy, we must show that it is the best among all quantum states, which need not be product. Fortunately, the Gaussian graph is of high (in fact, infinite) degree, and so by \cite{BH16} (suitably extended to weighted graphs) the optimal state is a product state. The optimality of $f_{\mathrm{opt}}$ then follows from our conjectured vector-valued analogue of Borell's isoperimetric theorem (\cref{conj:vector-borell-intro}, in the case $k=3$). \section{Conclusion and open questions} In this work, we have made progress on understanding the approximability of \text{\sc Quantum} \text{\sc Max-Cut}\xspace. However, many interesting questions remain open, such as finding the optimal approximation ratio. We list these below. \begin{enumerate} \item Most obviously, is the vector-valued Borell's inequality true? \item Does there exist an algorithmic gap instance for the GP algorithm with algorithmic gap~$\alpha_{\mathrm{GP}}$? We believe there is but were unable to find one. The key difficulty seems to be that an algorithmic gap instance should be a low-degree graph with an entangled maximum energy state.
Otherwise, the optimizing state would be close to a product state, and in this case the GP algorithm matches the $\alpha_{\mathrm{BOV}}$ approximation ratio of the BOV algorithm. But finding explicit examples of families of \text{\sc Quantum} \text{\sc Max-Cut}\xspace instances with entangled maximum energy states whose optimum value we can even compute is a difficult problem, and only a few such examples are known (see~\cite{F17, M13}). \item Can we perform an optimal analysis of the level-4 ncSoS relaxation of \text{\sc Quantum} \text{\sc Max-Cut}\xspace? This would involve designing a rounding algorithm and finding an integrality gap which matches its performance, as well as identifying the optimal ansatz, which would need to be more powerful than product states. Inspired by the work of~\cite{AGM20}, Parekh and Thompson~\cite{PT21} have considered tensor products of one- and two-qubit states, but it is unclear whether this is the optimal ansatz. \item Could the level-4 ncSoS relaxation actually be optimal for \text{\sc Quantum} \text{\sc Max-Cut}\xspace? \item The BOV algorithm is essentially the optimal algorithm for \text{\sc Quantum} \text{\sc Max-Cut}\xspace on high-degree graphs. What about low-degree graphs? Can we design improved approximation algorithms in this case as well? Recently, for example, Anshu, Gosset, Morenz Korol and Soleimanifar have demonstrated a way to improve the objective value of a given product state when the graph has low degree~\cite{AGKS21}. \item Are there \emph{quantum} approximation algorithms for \text{\sc Quantum} \text{\sc Max-Cut}\xspace whose approximation ratios we can analyze? \item Can we prove a hardness of approximation result for \text{\sc Quantum} \text{\sc Max-Cut}\xspace which improves upon our~\Cref{thm:inapprox_general_state}? This would involve showing a reduction from the Unique Games problem which outputs low-degree instances of \text{\sc Quantum} \text{\sc Max-Cut}\xspace.
This is because high-degree instances have maximum energy states which can be approximated by product states, and so the BOV algorithm produces an $\alpha_{\mathrm{BOV}}$-approximation in this case. However, traditional Unique Games reductions typically produce high-degree instances, and so new techniques may be needed. This is related to our difficulty in producing algorithmic gap instances with entangled maximum energy states, and so designing algorithmic gap instances might be a good first step. \item Is it $\QMA$-hard to approximate \text{\sc Quantum} \text{\sc Max-Cut}\xspace? Proving this unconditionally would also prove the quantum PCP conjecture, putting it beyond the range of current techniques. But it might be possible to show this \emph{assuming} the quantum PCP conjecture is true. \item Since the Heisenberg model has an analytic solution only for certain graphs (see for example the ``Bethe Ansatz''~\cite{Bet31}), physicists use a set of heuristic algorithms for approximating the ground state of Hamiltonians from the Heisenberg model~\cite{B30, A88, F79}. Can we find rigorous theoretical justification for the success of these heuristics in practice? \item How well do the techniques used to design approximation algorithms for \text{\sc Quantum} \text{\sc Max-Cut}\xspace carry over to other families of Hamiltonians? One natural family is the set of Hamiltonians in which each local term is a projective matrix. An $\alpha$-approximation for this case gives an $\alpha$-approximation for any Hamiltonian with positive-semidefinite local terms. This was considered in~\cite{PT20}, where it was shown that a rounding algorithm akin to that of~\cite{GP19} also applies to this case. This was followed up by an approach in \cite{PT22} that applies to more general ansatzes. However, the analysis in these works is not tight, so determining the actual performance of these algorithms remains an open question.
\end{enumerate} \section*{Acknowledgments} We would like to thank Anurag Anshu, Srinivasan Arunachalam, and Penghui Yao for their substantial contributions to this paper. We would also like to thank Steve Heilman for pointing out a bug in a previous draft of the paper. \begin{comment} JW thanks Ryan O'Donnell for discussions on integrality gaps and various proofs of Borell's inequality and Amey Bhangale for help with the Unique Games Conjecture. OP and KT are with Sandia National Laboratories. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA-0003525. This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing and Quantum Algorithms Teams programs. JN is supported by a fellowship from the Alfred P.\ Sloan Foundation, and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2047/1 – 390685813. Most of this research was performed while JW was at the University of Texas at Austin. 
\end{comment} \ignore{ \begin{table}[ht] \caption{Approximation Algorithms for maximum $2$-local Hamiltonian}\label{table:approx_algs} \centering \begin{tabular}{c | c} \hline\hline Problem class & Approximation factor \\ \hline \, & \textcolor{red}{0.956}~\cite{BOV10}$^a$\\ Heisenberg XYZ & \textcolor{red}{0.498}~\cite{GP19}\\ \, & 0.531~\cite{AGM20}\\ \, & 0.533~\cite{PT21}\\ \hline \, & \textcolor{red}{$1/2-\epsilon$} (for dense instances)~\cite{GK12}\\ Positive semi-definite terms & \textcolor{red}{0.328}~\cite{HEP20}\\ \, & \textcolor{red}{0.387} (\textcolor{red}{0.467} for maximally entangled)~\cite{PT20}\\ \hline \, & \textcolor{red}{$\Omega(1/\ell)$}~\cite{HM17}\\ Traceless & \textcolor{red}{$\Omega(1/\log(n))$}~\cite{BGKT19} \\ \, & PTAS (for planar instances)~\cite{NST09}\\ \, & \textcolor{red}{0.187} (for bipartite instances)~\cite{PT20}\\ \hline Dense or & \,\\ Planar or & \textcolor{red}{PTAS}~\cite{BH16}\\ Low threshold rank & \,\\ \hline Fermionic Hamiltonians & \textcolor{red}{$\Omega(1/(n\log(n)))$} \cite{BGKT19}\\ \hline \end{tabular} {\footnotesize The first approximation factor is relative to best product state and not comparable to others. \textcolor{red}{Red} approximation factors use a product state ansatz. The number of qubits or the number of Fermionic modes is $n$, and $\ell$ is the max number of $2$-local terms a qubit participates in. PTAS refers to polynomial-time approximation schemes. } \end{table} } \section{A dictator test for the product state value}\label{sec:dictator-test} Now we show that the noisy hypercube serves as a dictatorship test for functions of the form $f:\{-1, 1\}^n \rightarrow B^k$, assuming \cref{conj:vector-borell-intro}. Informally, this means that if $f$ is an embedded dictator, it should have high value, and if it is ``far'' from a dictator (in the sense that it has no ``notable'' input coordinates) then it should have low value. 
This will be an important ingredient in our Unique-Games hardness proof in~\Cref{sec:ug-hardness} below. We have already shown that embedded dictators achieve value $\tfrac{1}{4} - \tfrac{1}{4} \rho$ on the noisy hypercube in~\Cref{lem:embedded-dictators}. Now we will upper-bound the value that functions ``far'' from dictators achieve. We will show that their value, up to small error, is at most the optimum product state value on the Gaussian graph $\mathcal{G}^n_\rho$, which we have shown to be $\tfrac{1}{4} - \tfrac{1}{4} F^*(k, \rho)$. Throughout this section, we will make heavy use of the various Fourier analytic quantities defined in \Cref{sec:fourier-analysis}. \begin{theorem}[Dictatorship test soundness]\label{thm:dictator-test} Assume \cref{conj:vector-borell-intro}. Let $-1 < \rho \leq 0$. Then for any $\epsilon > 0$, there exists a small enough $\delta = \delta(\epsilon, \rho) > 0$ and large enough $m = m(\epsilon, \rho) \geq 0$ such that the following is true. Let $f: \{-1, 1\}^n \rightarrow B^k$ be any function satisfying $$\Inf^{\leq m}_i[f] = \sum_{j=1}^k \Inf^{\leq m}_i[f_j] \leq \delta,\quad \text{for all $i = 1, \dots, n$}.$$ Then $$\E_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$n$-dim Boolean strings}}} [\langle f(\bx), f(\by)\rangle] = \mathbf{Stab}_\rho[f] \geq F^*(k,\rho) - \epsilon.$$ In other words, the value of~$f$ on the noisy hypercube $\mathcal{H}^n_\rho$ is at most $ \tfrac{1}{4} - \tfrac{1}{4} F^*(k, \rho) + \eps. $ \end{theorem} The $k = 1$ case is the negative~$\rho$ case of the Majority is Stablest theorem of~\cite{MOO10}, which serves as the soundness case for the $\text{\sc Max-Cut}\xspace$ dictatorship test; our theorem generalizes the negative~$\rho$ case of Majority is Stablest to larger values of~$k$.
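Both regimes of the theorem can be checked by brute force for small~$n$: an embedded dictator has $\mathbf{Stab}_\rho = \rho$ exactly, while an embedded majority, the prototypical function with no notable coordinates, is strictly more noise-stable at negative~$\rho$ and hence has lower value. A small sketch (the helper functions are ours, not from the paper):

```python
import itertools
import numpy as np

def stab(f, n, rho):
    # Stab_rho[f] = sum_S rho^{|S|} * ||fhat(S)||^2, via brute-force Fourier
    # transform of f : {-1,1}^n -> B^3 over all 2^n points and all subsets S.
    cube = list(itertools.product([-1, 1], repeat=n))
    vals = [np.asarray(f(x), dtype=float) for x in cube]
    total = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            chi = [np.prod([x[i] for i in S]) for x in cube]
            fhat = np.mean([v * c for v, c in zip(vals, chi)], axis=0)
            total += rho ** len(S) * float(np.dot(fhat, fhat))
    return total

n, rho = 5, -0.5
dictator = lambda x: (x[0], 0.0, 0.0)
majority = lambda x: (float(np.sign(sum(x))), 0.0, 0.0)
print(stab(dictator, n, rho))  # = rho = -0.5 exactly
print(stab(majority, n, rho))  # ~ -0.375: more stable, hence lower value
```

Since the value is $\tfrac{1}{4} - \tfrac{1}{4}\mathbf{Stab}_\rho$, the dictator's lower (more negative) stability translates into a higher value on the noisy hypercube.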
The proof follows the same outline as the proof of Majority is Stablest appearing in~\cite[Chapter 11.7]{OD14}: we apply an ``invariance principle'' to exchange $f$'s Boolean inputs with Gaussians of the same mean and variance. Then we use our \cref{conj:vector-borell-intro} on the noise stability of functions in Gaussian space to upper-bound the value of~$f$. The invariance principle we will use is the following one due to Isaksson and Mossel~\cite{IM12}, which applies to vector-valued functions. \begin{theorem}[Vector-valued invariance principle \cite{IM12}]\label{thm:vector-invariance} Fix $\delta, \gamma \in (0,1)$ and set $m = \tfrac{1}{18}\log\tfrac{1}{\delta}$. Let $f = (f_1,\dots,f_k)$ be a $k$-dimensional multilinear polynomial such that $\Var[f_j] \leq 1$, $\Var[f_j^{> m}] \leq (1-\gamma)^{2m}$, and $\Inf^{\leq m}_i[f_j] \leq \delta$ for each $j \in [k]$ and $i \in [n]$. Let $\bx$ be a uniformly random string over $\{-1,1\}^n$ and $\by$ be an $n$-dimensional standard Gaussian random variable. Furthermore, let $\Psi : \mathbb R^k \rightarrow \mathbb R$ be Lipschitz continuous with Lipschitz constant $A$. Then, $$|\E[\Psi(f(\bx))] - \E[\Psi(f(\by))]| \leq C_kA \delta^{\gamma/(18 \ln 2)}$$ where $C_k$ is a parameter depending only on $k$. \end{theorem} As a first step in proving \Cref{thm:dictator-test}, we claim that we can assume $f$ is odd (i.e.\ $f(x) = -f(-x)$) without loss of generality. \begin{lemma}\label{lem:decreasing-stab} Fix $\rho \in (-1,0]$ and $f : \{-1,1\}^n \rightarrow B^k$. Define the function $g(x) = \tfrac{1}{2}(f(x) - f(-x))$. Then, $g$ is odd, has range~$B^k$, and satisfies $$\mathbf{Stab}_\rho[f] \geq \mathbf{Stab}_\rho[g]$$ and $$\forall i \in [n],\ \Inf_i^{\leq m}[f] \geq \Inf_i^{\leq m}[g].$$ \end{lemma} \begin{proof} Oddness follows from $g(-x) = \tfrac{1}{2}(f(-x) - f(x)) = -g(x)$. In addition, $g(x)$ maps into $B^k$ because it is the average of two points in $B^k$.
The Fourier transform of $f$ is $\sum_{S \subseteq [n]} \hat f(S) \chi_S(x)$. Noting that $\chi_S(x) = \chi_S(-x)$ for $|S|$ even and $-\chi_S(-x)$ for $|S|$ odd, the Fourier transform of $g$ is $$g(x) = \sum_{\text{$|S|$ odd}} \hat f(S) \chi_S(x).$$ Thus, for any $i \in [n]$, $$\Inf_i^{\leq m}[g] = \sum_{\substack{|S| \leq m, S \ni i, \\ \text{$|S|$ odd}}} \norm{\hat f(S)}_2^2 \leq \sum_{|S| \leq m, S \ni i} \norm{\hat f(S)}_2^2 = \Inf_i^{\leq m}[f].$$ Furthermore, using that $\rho^{|S|} \geq 0$ for $|S|$ even, \begin{equation*}\mathbf{Stab}_\rho[g] = \sum_{\text{$|S|$ odd}} \rho^{|S|}\norm{\hat f(S)}_2^2 \leq \sum_{S \subseteq [n]} \rho^{|S|} \norm{\hat f(S)}_2^2 = \mathbf{Stab}_\rho[f].\qedhere \end{equation*} \end{proof} Therefore, to lower bound $\mathbf{Stab}_\rho[f]$, it suffices to lower bound the noise stability of the odd function $g$. Then, the proof of \Cref{thm:dictator-test} is essentially by reduction to the case of \emph{positive} $\rho$, i.e.\ $\rho \in [0,1)$, but only for odd functions. This requires an analogue of \cref{conj:vector-borell-intro} for the case of positive $\rho$ and odd~$f$, which we derive as follows. \begin{corollary}[Vector-valued Borell's inequality; positive $\rho$ and odd $f$ case]\label{cor:vecor-borell-positive-rho} Assume \cref{conj:vector-borell-intro}. Then it holds in the reverse direction when $\rho \in [0,1]$, with the additional assumption that $f$ is odd. In particular, $$\mathbf{Stab}_\rho[f] \leq \mathbf{Stab}_\rho[f_\mathrm{opt}],$$ where $f_{\mathrm{opt}}(x) = x_{\leq k}/\Vert x_{\leq k}\Vert$ and $x_{\leq k} = (x_1, \ldots, x_k)$.
\end{corollary} \begin{proof} Because $f$ is odd, \begin{equation}\label{eq:odd-stab} \mathbf{Stab}_\rho[f] = \E_{\bx \sim_\rho \by}[\langle f(\bx),f(\by)\rangle] = -\E_{\bx \sim_\rho \by}[\langle f(\bx),f(-\by)\rangle] = -\E_{\bx \sim_{-\rho} \by}[\langle f(\bx),f(\by)\rangle] = -\mathbf{Stab}_{-\rho}[f], \end{equation} where we have used the fact that if $\bx$ and $\by$ are $\rho$-correlated, then $\bx$ and $-\by$ are $(-\rho)$-correlated. Recalling that $\rho \geq 0$, we apply \cref{conj:vector-borell-intro} and obtain $\mathbf{Stab}_{-\rho}[f] \geq \mathbf{Stab}_{-\rho}[f_\mathrm{opt}]$. Observing that $f_{\mathrm{opt}}$ is an odd function, we further have $$\mathbf{Stab}_{-\rho}[f] \geq \mathbf{Stab}_{-\rho}[f_\mathrm{opt}] = -\mathbf{Stab}_\rho[f_\mathrm{opt}].$$ Thus, negating the above inequality and combining with \Cref{eq:odd-stab} yields $$\mathbf{Stab}_{\rho}[f] = -\mathbf{Stab}_{-\rho}[f] \leq \mathbf{Stab}_{\rho}[f_\mathrm{opt}].$$ This concludes the proof. \end{proof} Now we prove the main theorem of this section. \begin{proof}[Proof of \Cref{thm:dictator-test}] To begin, we choose parameters \begin{align*} \gamma &= \tfrac{1}{6}(1+\rho)\epsilon, \tag{dictated by \Cref{eq:apply-gamma}}\\ \delta &= \PAREN{\tfrac{\epsilon}{12k C_k}}^{(18 \ln 2) / \gamma}, \tag{so that the error in \Cref{thm:vector-invariance} is at most $\epsilon/(6k)$}\\ m &= \tfrac{1}{18}\log \tfrac{1}{\delta}. \tag{dictated by \Cref{thm:vector-invariance}} \end{align*} Using \Cref{lem:decreasing-stab}, assume without loss of generality that $f$ is odd. Throughout, we'll use $x$ to denote a string in $\{-1,1\}^n$ and $y$ to denote a vector in $\mathbb R^n$.
To prove the claim, we'll perform a series of modifications to the initial function $f$ so that we can apply \Cref{thm:vector-invariance}. At each step, we'll show that each modified function has noise stability close to $\mathbf{Stab}_\rho [f]$. In particular, we'll consider the following functions: \begin{enumerate} \item $g(x) = \mathbf{T}_{1-\gamma} f(x)$, \item $g(y)$, where $y \in \mathbb R^n$, \item $\mathcal R \circ g(y)$, which is $g(y)$ rounded to $B^{k}$. \end{enumerate} \textbf{Step 1.} Since the statement of the vector-valued invariance principle (\Cref{thm:vector-invariance}) requires a function with low high-degree variance, we consider $g = \mathbf{T}_{1-\gamma} f$. Then for each $j \in [k]$, $$\Var[g_j^{> m}] = \sum_{|S| > m} (1-\gamma)^{2|S|}\widehat {f}_j(S)^2 \leq (1-\gamma)^{2m} \sum_{S \subseteq [n]} \widehat{f}_j(S)^2 \leq (1-\gamma)^{2m}.$$ Also, $\Inf_i^{\leq m}[g_j] \leq \Inf_i^{\leq m}[f_j]$ for all $i \in [n]$. Furthermore, note that since $f$ is odd, $g$ is also odd. Next, we bound the error in the quantity $\mathbf{Stab}_\rho[f]$ when we consider $g$ in place of $f$. Since $g = \mathbf{T}_{1-\gamma} f$, we see that $\mathbf{Stab}_\rho[g] = \mathbf{Stab}_{\rho(1-\gamma)^2}[f]$, and so it suffices to bound $|\mathbf{Stab}_{\rho}[f] - \mathbf{Stab}_{\rho(1-\gamma)^2}[f]|$. To do so, we use \Cref{eq:rho-stab-diff}. However, note that this proposition only applies for $\rho > 0$.
But since $f$ is odd, we have, $$|\mathbf{Stab}_{\rho}[f] - \mathbf{Stab}_{\rho(1-\gamma)^2}[f]|=|-\mathbf{Stab}_{-\rho}[f] + \mathbf{Stab}_{-\rho(1-\gamma)^2}[f]|=|\mathbf{Stab}_{-\rho}[f] - \mathbf{Stab}_{-\rho(1-\gamma)^2}[f]|,$$ which allows us to apply \Cref{eq:rho-stab-diff} with $\rho' = -\rho$. This yields \begin{equation}\label{eq:apply-gamma} |\mathbf{Stab}_{\rho}[f] - \mathbf{Stab}_{\rho(1-\gamma)^2}[f]| \leq \tfrac{2\gamma}{1+\rho} \Var[f] \leq \tfrac{2\gamma}{1+\rho}\E[\norm{f}_2^2] \leq\tfrac{2\gamma}{1+\rho}, \end{equation} where we used $\Var[f] \leq \E[\norm{f}_2^2]$ for all functions and the fact that $f$'s range is $B^k$. For our choice of $\gamma$, this is equal to $\epsilon/3$.\\ \textbf{Step 2.} Next, we bound the error accrued when we apply $g$ on Gaussian inputs. Consider the function $$\Psi(v) = \left\{\begin{array}{cl} \norm{v}_2^2 & \text{if $\norm{v}_2 \leq 1$,}\\ 1 & \text{otherwise.} \end{array}\right.$$ By \Cref{lem:psi-lipschitz}, $\Psi$ is $2$-Lipschitz. We'll rewrite $\mathbf{Stab}_\rho[g]$ using $\Psi$ in order to apply \Cref{thm:vector-invariance}. \begin{align*} \mathbf{Stab}_\rho[g] &= \E_\bx [\langle g(\bx), \mathbf{T}_\rho g(\bx)\rangle]\\ &= \E_\bx [\langle \mathbf{T}_{\sqrt{-\rho}} g(\bx), \mathbf{T}_{-\sqrt{-\rho}} g(\bx) \rangle]\\ &= -\E_\bx [\langle \mathbf{T}_{\sqrt{-\rho}} g(\bx), \mathbf{T}_{\sqrt{-\rho}} g(\bx) \rangle] \tag{Since $g$ is odd}\\ &= -\E_\bx \BRAC{\norm{\mathbf{T}_{\sqrt{-\rho}} g(\bx)}_2^2}\\ &= -\E_{\bx}\BRAC{\Psi(\mathbf{T}_{\sqrt{-\rho}}g(\bx))}, \end{align*} where the last step used that $\mathbf{T}_{\sqrt{-\rho}} g(\bx) \in B^k$ and hence is unchanged by $\Psi(\cdot)$.
Applying \Cref{thm:vector-invariance}, we get $$\ABS{\E_{\bx}\BRAC{\Psi(\mathbf{T}_{\sqrt{-\rho}}[g(\bx)])} - \E_{\by}\BRAC{\Psi(\mathbf{T}_{\sqrt{-\rho}}[g(\by)])}} \leq \epsilon/(6k) \leq \epsilon/3.$$ Here, we chose the ``$f$'' function in \Cref{thm:vector-invariance} to be $\mathbf{T}_{\sqrt{-\rho}} g$, which satisfies the low variance and influence properties because~$g$ does. \textbf{Step 3.} The term $-\E_{\by}[\Psi(\mathbf{T}_{\sqrt{-\rho}}[g(\by)])]$ is \textit{almost} ready for application of \cref{conj:vector-borell-intro} through \Cref{cor:vecor-borell-positive-rho}. However, although $g(x)$ is bounded in $B^k$, the same might not hold for $g(y)$, where $y \in \mathbb R^n$. To fix this, we consider $\mathcal R \circ g$, where $$\mathcal R(v) = \left\{\begin{array}{cl} v & \text{if $\norm{v}_2 \leq 1$,}\\ \tfrac{v}{\norm{v}} & \text{otherwise.} \end{array}\right.$$ In other words, $\calR(v)$ rounds a vector $v$ to the unit ball $B^k$. Then \begin{align} &\ABS{\E_{\by}\BRAC{\Psi(\mathbf{T}_{\sqrt{-\rho}}[g(\by)])} - \E_{\by}\BRAC{\Psi(\mathbf{T}_{\sqrt{-\rho}}[\mathcal R \circ g(\by)])}}\nonumber\\ &\leq \E_{\by} \ABS{\Psi(\mathbf{T}_{\sqrt{-\rho}}[g(\by)]) -\Psi(\mathbf{T}_{\sqrt{-\rho}}[\mathcal R \circ g(\by)])}\nonumber\\ &\leq 2\E_{\by} \NORM{\mathbf{T}_{\sqrt{-\rho}}[g(\by)] -\mathbf{T}_{\sqrt{-\rho}}[\mathcal R \circ g(\by)]}_2\tag{$\Psi$ is $2$-Lipschitz}\nonumber\\ &\leq 2\sum_{i=1}^k \E_{\by} \ABS{\mathbf{T}_{\sqrt{-\rho}}[g_i(\by)] -\mathbf{T}_{\sqrt{-\rho}}[(\mathcal R \circ g)_i(\by)]}\tag{$\norm{\cdot}_2 \leq \norm{\cdot}_1$}\nonumber\\ &\leq 2\sum_{i=1}^k \E_{\by} \ABS{g_i(\by) -(\mathcal R \circ g)_i(\by)}\tag{$\mathbf{T}_{\sqrt{-\rho}}$ is a contraction}\nonumber\\ &= 2\sum_{i=1}^k \E_{\by} |\Phi_i(g(\by))|,\label{eq:exp-sum} \end{align} where we define the map $\Phi_i : \mathbb R^k \rightarrow \mathbb R$ to be $\Phi_i(v) = v_i - \mathcal R(v)_i$, which is $2$-Lipschitz by \Cref{cor:phi-lipschitz}.
To evaluate this, we first recall that on \emph{Boolean} inputs~$x$, $g(x) \in B^k$, and so we have that $\Phi_i(g(x)) = 0$. Therefore $$\ABS{\E_{\bx}[\Phi_i(g(\bx))] - \E_{\by}[\Phi_i(g(\by))]} = \ABS{\E_{\by}[\Phi_i(g(\by))]},$$ and applying \Cref{thm:vector-invariance} one last time, we can upper bound this by $\epsilon/(6k)$. The sum in \Cref{eq:exp-sum} is in turn upper bounded by $\epsilon/3$. Finally, applying \Cref{cor:vecor-borell-positive-rho} to \begin{equation*} -\E_{\by}\BRAC{\Psi(\mathbf{T}_{\sqrt{-\rho}}[\mathcal R \circ g(\by)])} = -\mathbf{Stab}_{-\rho}[\mathcal R \circ g] \end{equation*} yields a lower bound of $-\mathbf{Stab}_{-\rho}[f_\mathrm{opt}] = \mathbf{Stab}_\rho[f_\mathrm{opt}]$, for which \Cref{prop:opt-formula} and \Cref{thm:exact-formula-for-average-inner-product} give an explicit formula. Through the three transformations, we accrue an error of at most $\epsilon$, which proves the claim. \end{proof} \begin{comment} \begin{proof} Next, since $\mathbf{Stab}_{\rho}$ is a function of the coefficients of the polynomial corresponding to $f$, we have that $\mathbf{Stab}_\rho[g(\bx)] = \mathbf{Stab}_\rho[g(\by)]$. However, although $g(x) \in B^{k}$ for all $x \in \{-1, 1\}^n$ (since $f(x) \in B^{k}$ by assumption), $g(y)$ may take values outside the unit ball for $y \in \R^n$. This prevents us from directly applying \Cref{thm:vector-borell} to~$g$. We'll instead apply the theorem to the function $g':\R^n \rightarrow B^k$ defined as $$g'(y) = \left\{\begin{array}{cl} g(y) & \text{if $g(y) \in B^{k}$,}\\ \frac{g(y)}{\NORM{g(y)}} & \text{otherwise.} \end{array}\right.$$ Applying \Cref{thm:vector-borell} yields that $\mathbf{Stab}_{\rho}[g'] \geq F^\star(k,\rho)$.
It remains to bound $\ABS{\mathbf{Stab}} \newcommand{\stab}{\Stab_{\rho}[g] - \mathbf{Stab}} \newcommand{\stab}{\Stab_{\rho}[g']}$. \begin{align*} \ABS{\mathbf{Stab}} \newcommand{\stab}{\Stab_\rho[g] - \mathbf{Stab}} \newcommand{\stab}{\Stab_\rho[g']} &= \ABS{\langle g, \mathbf{T}_\rho g\rangle - \langle g', \mathbf{T}_\rho g'\rangle}\\ &= \ABS{\langle g, \mathbf{T}_\rho g\rangle - \langle g', \mathbf{T}_\rho g\rangle + \langle g', \mathbf{T}_\rho g\rangle - \langle g', \mathbf{T}_\rho g'\rangle}\\ &= \ABS{\langle g - g', \mathbf{T}_\rho g\rangle +\langle g', \mathbf{T}_\rho g - \mathbf{T}_\rho g'\rangle}\\ &= \ABS{\langle g - g', \mathbf{T}_\rho g\rangle} +\ABS{\langle g', \mathbf{T}_\rho g - \mathbf{T}_\rho g'\rangle}\\ &\leq \norm{g - g'}_2\norm{\mathbf{T}_\rho g}_2 + \norm{g'}_2\norm{\mathbf{T}_\rho g - \mathbf{T}_\rho g'}_2 \tag{by Cauchy-Schwartz}\\ &\leq \norm{g - g'}_2\norm{g}_2 + \norm{g'}_2\norm{g - g'}_2 \tag{because $\mathbf{T}_\rho$ is a contraction}\\ &= \norm{g - g'}_2(\norm{g}_2 + \norm{g'}_2) \\ &\leq 2\norm{g - g'}_2. \tag{because $\Vert g' \Vert_2 \leq \Vert g \Vert_2 \leq 1$} \end{align*} Now we just need to bound $\norm{g - g'}_2$ by $\epsilon/4$. Note that $\norm{g - g'}_2^2 = \E[\zeta(g(y))]$, where $\zeta$ is the 1-Lipschitz function from \Cref{prop:zeta-lipschitz}. Let $\delta = (\epsilon^2/(36\cdot C_m))^{18/\gamma}$. We can now apply \Cref{thm:vector-invariance} with $\Psi = \zeta$ and $\tau = \delta$, which yields $$\E_{\by}[\zeta(g)] = |\E_{\bx}[\zeta(g(\bx))] - \E_{\by}[\zeta(g(\by))]| \leq \epsilon^2/36$$ Then, $\norm{g-g'}_2 \leq \sqrt{\epsilon^2/36} = \epsilon/6$ and $\ABS{\mathbf{Stab}} \newcommand{\stab}{\Stab_\rho[g] - \mathbf{Stab}} \newcommand{\stab}{\Stab_\rho[g']} \leq \epsilon/2$. Combining this with our bound of $\epsilon/2$ for \Cref{eq:bound1} concludes the proof. 
\end{proof} \end{comment} \section{Unique Games hardness of \text{\sc Quantum} \text{\sc Max-Cut}\xspace}\label{sec:ug-hardness} Now we prove hardness of \text{\sc Quantum} \text{\sc Max-Cut}\xspace. Our starting point is the Unique Games problem. \begin{definition}[Unique Games]\label{def:unique-games} The Unique Games problem is defined as follows. An instance is a tuple $\calI(U, V, E, [M], \{\pi_{u\rightarrow v}\}_{(u, v) \in E})$, corresponding to a bipartite graph with left side vertices~$U$, right side vertices~$V$, and edges~$E$, in addition to a bijection $\pi_{u \rightarrow v}:[M] \rightarrow [M]$ for each $(u, v) \in E$. We will also write $\pi_{v \rightarrow u}$ for $\pi_{u \rightarrow v}^{-1}$. A labeling of the vertices is a function $L : U \cup V \rightarrow [M]$, which satisfies the edge $(u, v) \in E$ if $\pi_{u \rightarrow v}(L(u)) = L(v)$. The value of~$L$ is the fraction of edges it satisfies, and the value of the instance~$\calI$ is the maximum value of any labeling. \end{definition} \begin{conjecture}[Unique Games Conjecture \cite{Kho02}]\label{con:ugc} For any $\gamma > 0$, there exists a constant $M = M(\gamma)$ such that it is $\NP$-hard to distinguish whether an instance of the Unique Games problem with label set size $M$ has value at least $1 - \gamma$ or at most $\gamma$. Furthermore, we may assume the constraint graph $\mathcal{C} = (U \cup V, E)$ is biregular. \end{conjecture} The fact that the constraint graph may be taken to be biregular is a consequence of the result of Khot and Regev~\cite{KR08}, as pointed out by Bansal and Khot~\cite{BK10}. Our hardness result is stated as follows. \begin{theorem}[UG-Hardness of Approximating \text{\sc Quantum} \text{\sc Max-Cut}\xspace]\label{thm:ug-hardness} Assume \cref{conj:vector-borell-intro}. 
For any $\rho \in (-1,0)$ and $\epsilon > 0$, given an instance of \text{\sc Quantum} \text{\sc Max-Cut}\xspace, the Unique Games Conjecture implies that the following two tasks are $\NP$-hard: \begin{enumerate} \item distinguishing if the product state value is greater than $\tfrac{1}{4} - \tfrac{1}{4}\rho - \eps$ or less than $\tfrac{1}{4} - \tfrac{1}{4}F^*(3, \rho) + \eps$, \item distinguishing if the maximum energy is greater than $\tfrac{1}{4} - \tfrac{1}{4}\rho - \eps$ or less than $\tfrac{1}{4} - \tfrac{1}{4}F^*(3, \rho) + \eps$. \end{enumerate} Choosing $\rho = \rho_{\mathrm{BOV}}$, these imply that approximating the product state value and maximum energy to a factor $\alpha_{\mathrm{BOV}}+\eps$ is $\NP$-hard, assuming the Unique Games Conjecture. \end{theorem} \ignore{ \begin{remark} We note that our proof also extends to the $\text{\sc Max-Cut}\xspace_k$ problem. In particular, it shows that for $\rho \in (-1, 0)$, it is $\NP$-hard (assuming the UGC) to distinguish if the $\text{\sc Max-Cut}\xspace_k$ value is greater than $\tfrac{1}{2} - \tfrac{1}{2} \rho - \eps$ or smaller than $\tfrac{1}{2} - \tfrac{1}{2} F^*(k, \rho) + \eps$. This shows that the rank-$k$ projection rounding algorithm of~\cite{BOV10} is optimal for each fixed~$k$. We note that the worst~$\rho$ for each fixed~$k$ is in the range $(-1,0)$, which follows because the sign of $\rho$ and $F^*(k, \rho)$ are the same and $|F^*(k, \rho)| \leq |\rho|$; so it suffices to just prove the inapproximability result for this range. \end{remark} } The proof mostly follows the standard outline for UG-hardness proofs introduced in~\cite{KKMO07}. In particular, the graph produced by the reduction is exactly the same as the one produced in their \text{\sc Max-Cut}\xspace reduction, with the one exception that we will eventually eliminate all self-loops so that it is a well-defined \text{\sc Quantum} \text{\sc Max-Cut}\xspace instance. 
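To make the Unique Games definition above concrete, here is a brute-force computation of the value of a toy instance (the instance is ours and purely illustrative):

```python
import itertools

# Toy Unique Games instance: label set [M] = {0, 1, 2}; each edge (u, v)
# carries a bijection pi_{u->v}, encoded as a tuple (pi(0), pi(1), pi(2)).
M = 3
U, V = ["u0", "u1"], ["v0", "v1"]
edges = {
    ("u0", "v0"): (1, 2, 0),
    ("u0", "v1"): (0, 1, 2),
    ("u1", "v0"): (2, 0, 1),
    ("u1", "v1"): (0, 1, 2),
}

def value(L):
    # fraction of edges (u, v) with pi_{u->v}(L(u)) = L(v)
    return sum(pi[L[u]] == L[v] for (u, v), pi in edges.items()) / len(edges)

# The value of the instance: maximum over all 3^4 labelings.
best = max(value(dict(zip(U + V, labels)))
           for labels in itertools.product(range(M), repeat=4))
print(best)  # -> 0.75: no labeling satisfies all four constraints
```

The first three constraints force $L(\mathtt{u1}) \equiv L(\mathtt{u0}) - 1 \pmod 3$ while the fourth forces $L(\mathtt{u1}) = L(\mathtt{u0})$, so at most three of the four edges can be satisfied.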
The chief new difficulty is that in order to estimate the maximum energy of the graph produced by the reduction, we use \Cref{cor:BH-nonuniform-easy-to-use} to relate it to the product state value; however, the error term this theorem produces is a somewhat odd analogue of degree for weighted graphs, and bounding it is slightly tedious. \begin{proof}[Proof of \Cref{thm:ug-hardness}] The proof is by reduction from the Unique Games problem. To begin, we choose parameters \begin{align*} \delta &= \delta(\eps/2, \rho), \tag{$\delta(\cdot, \cdot)$ from \Cref{thm:dictator-test}}\\ m &= m(\eps/2, \rho), \tag{$m(\cdot, \cdot)$ from \Cref{thm:dictator-test}}\\ M &= \max\{M(\gamma), \tfrac{8 \log(\eps/200)}{\log(1/2 - \rho/2)}\}. \tag{$M(\cdot)$ from \Cref{con:ugc}; dictated by \Cref{eq:m-fact,eq:m-fact-2}}\\ \gamma & = \frac{\eps \delta^2}{16M}, \tag{dictated by \Cref{eq:used-gamma,eq:used-gamma-again}} \end{align*} Let $\calI(U, V, E, [M], \{\pi_{u\rightarrow v}\}_{(u,v) \in E})$ be a biregular instance of the Unique Games problem. The reduction produces a \text{\sc Quantum} \text{\sc Max-Cut}\xspace instance with graph~$G$ whose vertex set is $V \times \{-1, 1\}^M$. A random edge in $G$ is sampled as follows: pick~$\bu \in U$ uniformly at random, and sample two uniformly random neighbors $\bv, \bw \sim N(\bu)$ independently, where $N(\bu)$ is the set of $\bu$'s neighbors. Let $\bx$ and $\by$ be $\rho$-correlated $M$-dimensional Boolean strings. Output the edge between $(\bv, \bx \circ \pi_{\bv\rightarrow \bu})$ and $(\bw, \by \circ \pi_{\bw \rightarrow \bu})$. Given $w \in \{-1, 1\}^M$ and $\sigma:[M]\rightarrow [M]$, we write $w \circ \sigma \in \{-1, 1\}^M$ for the string in which $(w \circ \sigma)_i = w_{\sigma(i)}$. A product state assignment to~$G$ corresponds to a function $f_v:\{-1, 1\}^M \rightarrow S^2$ for each $v \in V$. 
It has value \begin{equation*} \E_{\bu \sim U}\E_{\bv, \bw \sim N(\bu)} \E_{\substack{\text{$(\bx, \by)$ $\rho$-correlated}\\\text{$n$-dim Boolean strings}}} \left[\tfrac{1}{4} - \tfrac{1}{4}\langle f_{\bv}(\bx \circ \pi_{\bv \rightarrow \bu}), f_{\bw}(\by \circ \pi_{\bw \rightarrow \bu})\rangle\right]. \end{equation*} \textit{Completeness.} Assume $\mathcal I$ has a labeling $L:U \cup V \rightarrow [M]$ satisfying more than a $(1- \gamma)$-fraction of the edges. For each $v \in V$, let $f_v(x) = (x_{L(v)}, 0, \dots, 0)$. To analyze the performance of~$f$, let us first fix a vertex $u \in U$ and two neighbors $v, w \in N(u)$, and condition on the case that $L$ satisfies both edges $(u, v)$ and $(u, w)$. This means that $\pi_{v\rightarrow u}(L(v)) = L(u) = \pi_{w\rightarrow u}(L(w))$. Thus, for each $x \in \{-1, 1\}^M$, \begin{align*} f_v(x \circ \pi_{v\rightarrow u}) &= ((x \circ \pi_{v \rightarrow u})_{L(v)}, 0, 0) = (x_{\pi_{v \rightarrow u}(L(v))},0,0) = (x_{L(u)},0,0), \end{align*} and similarly $f_w(y \circ \pi_{w \rightarrow u}) = (y_{L(u)}, 0, 0)$ for each $y \in \{-1, 1\}^M$. As a result, the value of $f$ conditioned on $u$, $v$, and $w$ is \begin{align*} \E_{\bx, \by} \left[\tfrac{1}{4} - \tfrac{1}{4}\langle f_{v}(\bx \circ \pi_{v \rightarrow u}), f_{w}(\by \circ \pi_{w \rightarrow u})\rangle\right] =\E_{\bx, \by} \left[\tfrac{1}{4} - \tfrac{1}{4}\langle (\bx_{L(u)}, 0, 0), (\by_{L(u)}, 0, 0)\rangle\right], \end{align*} which is just the value of the $L(u)$-th embedded dictator on the noisy hypercube, i.e.\ $1/4 - 1/4 \rho$. Now we average over $\bu, \bv, \bw$. Because $\mathcal I$ is a biregular Unique Games instance, it is in particular left-regular, and so picking a random vertex $\bu \in U$ and neighbor $\bv \in N(\bu)$ is equivalent to picking a uniformly random edge from~$E$. Therefore, by the union bound, the probability that the assignment~$L$ satisfies both edges $(\bu, \bv)$ and $(\bu, \bw)$ is at least $1-2\gamma$.
As we have seen, conditioned on this event, the assignment~$f$ has value at least $1/4 - 1/4\rho$. Due to our choice of $\gamma$, we can lower-bound the value of~$f$ by \begin{align} (1-2\gamma) \cdot (\tfrac{1}{4} - \tfrac{1}{4}\rho) &\geq \tfrac{1}{4} - \tfrac{1}{4}\rho - \gamma \nonumber\\ &\geq \tfrac{1}{4} - \tfrac{1}{4}\rho - \tfrac{1}{2} \eps.\label{eq:used-gamma} \end{align} This completes the completeness case. \textit{Soundness.} We will show the contrapositive. Suppose there is a product state assignment $\{f_v\}_{v \in V}$ to $G$ with value at least $\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + \tfrac{1}{2}\eps$. We will use this to construct a randomized assignment $\bL:U \cup V \rightarrow [M]$ whose average value is at least~$\gamma$, which implies that the Unique Games instance has value at least~$\gamma$. For each $u \in U$, we define the function $g_u: \{-1, 1\}^M \rightarrow B^3$ as \begin{equation*} g_u(x) = \E_{\bv \sim N(u)}[ f_{\bv}(x \circ \pi_{\bv \rightarrow u}) ]. \end{equation*} Then we can rewrite the value of the assignment $\{f_v\}$ as \begin{align*} \E_{\bu} \E_{\bv, \bw \sim N(\bu)} \E_{\bx, \by}\left[\tfrac{1}{4} - \tfrac{1}{4}\langle f_{\bv}(\bx \circ \pi_{\bv \rightarrow \bu}), f_{\bw}(\by \circ \pi_{\bw \rightarrow \bu})\rangle\right] &= \E_{\bu} \E_{\bx, \by}[\tfrac{1}{4} - \tfrac{1}{4}\langle g_{\bu}(\bx), g_{\bu}(\by)\rangle]\\ &= \E_{\bu}[\tfrac{1}{4} - \tfrac{1}{4} \mathbf{Stab}_\rho[g_{\bu}]]. \end{align*} Since~$f$ has value at least $\tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + \tfrac{1}{2}\eps$, an averaging argument implies that at least an $\eps/4$ fraction of $u \in U$ satisfy $\tfrac{1}{4} - \tfrac{1}{4} \mathbf{Stab}_\rho[g_{u}] \geq \tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + \eps/4$. Rearranging, these $u$'s satisfy \begin{equation*} \mathbf{Stab}_\rho[g_u] \leq F^*(3, \rho) - \eps. \end{equation*} We call any such~$u$ ``good''.
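The identity used in this rewriting, $\E[\langle g(\bx), g(\by)\rangle] = \mathbf{Stab}_\rho[g] = \sum_S \rho^{|S|}\|\hat{g}(S)\|_2^2$, can be checked numerically. The sketch below (with hypothetical Fourier data: a dict mapping index tuples $S$ to coefficient vectors $\hat{g}(S) \in \mathbb{R}^3$) computes the stability both ways and compares.

```python
from itertools import product

def chi(S, x):
    # Character chi_S(x) = prod_{i in S} x_i
    out = 1
    for i in S:
        out *= x[i]
    return out

def stab_from_coeffs(coeffs, rho):
    # Stab_rho[g] = sum_S rho^{|S|} * ||ghat(S)||_2^2
    return sum(rho ** len(S) * sum(c * c for c in v) for S, v in coeffs.items())

def stab_by_enumeration(coeffs, n, rho):
    # E over rho-correlated (x, y) of <g(x), g(y)>, by exact enumeration
    p_agree = (1 + rho) / 2
    total = 0.0
    for x in product([-1, 1], repeat=n):
        for y in product([-1, 1], repeat=n):
            w = 2.0 ** -n
            for xi, yi in zip(x, y):
                w *= p_agree if xi == yi else 1 - p_agree
            gx = [sum(v[k] * chi(S, x) for S, v in coeffs.items()) for k in range(3)]
            gy = [sum(v[k] * chi(S, y) for S, v in coeffs.items()) for k in range(3)]
            total += w * sum(a * b for a, b in zip(gx, gy))
    return total
```

Cross terms $\E[\chi_S(\bx)\chi_T(\by)]$ with $S \neq T$ vanish, so only the diagonal terms $\rho^{|S|}\|\hat{g}(S)\|_2^2$ survive, as the formula asserts.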
We now apply the soundness of our dictatorship test to the good $u$'s: by \Cref{thm:dictator-test}, any such $u$ has a ``notable'' coordinate, i.e.\ an $i$ such that $\Inf^{\leq m}_i[g_u] > \delta$. Our random assignment will then use this~$i$ as its label for the vertex~$u$: $\bL(u) = i$. (If~$u$ has multiple notable coordinates, then we pick one of these arbitrarily as the label for~$u$.) Next we'll need to obtain labels for the neighbors of~$u$. We will use the condition that~$u$ is good to derive that many of~$u$'s neighbors~$v$ have notable coordinates. This requires relating the Fourier spectrum of $g_u$ to the Fourier spectra of the neighboring $f_v$'s. To begin, for any subset $S \subseteq [M]$, \begin{equation*} \chi_S(x \circ \pi_{v \rightarrow u}) = \prod_{i \in S}(x \circ \pi_{v \rightarrow u})_i = \prod_{i \in S} x_{\pi_{v\rightarrow u}(i)} = \prod_{j \in T} x_j = \chi_T(x), \end{equation*} where $T = \pi_{v \rightarrow u}(S) = \{\pi_{v\rightarrow u}(i) : i \in S\}$. As a result, \begin{equation*} f_v(x \circ \pi_{v \rightarrow u}) = \sum_{S \subseteq [M]} \widehat{f}_v(S) \chi_{S}(x \circ \pi_{v \rightarrow u}) = \sum_{T \subseteq [M]} \widehat{f}_v(\pi_{u \rightarrow v}(T)) \chi_T(x). \end{equation*} Averaging over all $\bv \in N(u)$, \begin{equation*} g_u(x) = \E_{\bv \sim N(u)}[ f_{\bv}(x \circ \pi_{\bv \rightarrow u}) ] = \sum_{T \subseteq [M]} \E_{\bv \sim N(u)}[\widehat{f}_{\bv}(\pi_{u \rightarrow \bv}(T))] \chi_T(x) = \sum_{T \subseteq [M]} \widehat{g}(T) \chi_T(x).
\end{equation*} Hence, \begin{align*} \delta &< \Inf^{\leq m}_i[g_u]\\ &= \sum_{|T| \leq m: T \ni i} \Vert \widehat{g}(T) \Vert^2_2\\ &= \sum_{|T| \leq m: T \ni i} \left\Vert \E_{\bv \sim N(u)}[\widehat{f}_{\bv}(\pi_{u \rightarrow \bv}(T))] \right\Vert^2_2\\ & \leq \sum_{|T| \leq m: T \ni i} \E_{\bv \sim N(u)} \left\Vert \widehat{f}_{\bv}(\pi_{u \rightarrow \bv}(T)) \right\Vert^2_2\tag{because $\Vert \cdot \Vert^2_2$ is convex}\\ & = \E_{\bv \sim N(u)}\left[\sum_{|T| \leq m: T \ni i} \left\Vert \widehat{f}_{\bv}(\pi_{u \rightarrow \bv}(T)) \right\Vert^2_2\right]\\ & = \E_{\bv \sim N(u)}[\Inf^{\leq m}_{\pi_{u \rightarrow \bv}(i)}[f_{\bv}]]. \end{align*} By another averaging argument, at least a $\delta/2$-fraction of $u$'s neighbors~$v$ satisfy $\Inf^{\leq m}_{\pi_{u \rightarrow v}(i)}[f_{v}] \geq \delta/2$. We call these the ``good neighbors''. For each good neighbor~$v$, the set of possible labels $$S_v = \{j : \Inf^{\leq m}_j[f_v] \geq \delta/2 \}$$ is non-empty. In addition, one of these labels~$j$ satisfies $j = \pi_{u \rightarrow v}(i)$. On the other hand, by \Cref{prop:influence-bound}, $|S_v| \leq 2m/\delta$ and so this set is not too large either. For each good neighbor, we assign the label $\bL(v)$ by picking a uniformly random $j \in S_v$. For all other vertices (i.e.\ those which are not good or good neighbors), we assign $\bL$ a random label. Now we consider the expected number of edges in $\mathcal{I}$ satisfied by $\bL$. Given a random edge $(\bu, \bv)$, the probability that $\bu$ is good is at least $\eps/4$; conditioned on this, the probability that $\bv$ is a good neighbor is at least $\delta/2$. Assuming both hold, since $S_{\bv}$ is of size at most $2M/\delta$ and contains one label equal to $\pi_{\bu \rightarrow \bv}(\bL(\bu))$, $\bL$ satisfies the edge $(\bu, \bv)$ with probability at least $\delta/(2M)$.
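The decoding step can be sketched directly (a Python aside with hypothetical Fourier data for $f_v$, not part of the proof): compute the low-degree influences $\Inf^{\leq m}_j[f_v]$ and collect the candidate-label set $S_v$. Since $\sum_j \Inf^{\leq m}_j[f_v] \leq m \sum_S \|\hat{f}_v(S)\|_2^2 \leq m$ for $f_v$ mapping into the unit ball, at most $2m/\delta$ coordinates can have influence at least $\delta/2$.

```python
def low_degree_influence(coeffs, i, m):
    # Inf^{<=m}_i[f] = sum over S with i in S and |S| <= m of ||fhat(S)||_2^2
    return sum(sum(c * c for c in v) for S, v in coeffs.items()
               if i in S and len(S) <= m)

def candidate_labels(coeffs, n, m, delta):
    # S_v = { j in [n] : Inf^{<=m}_j[f_v] >= delta / 2 }
    return [j for j in range(n) if low_degree_influence(coeffs, j, m) >= delta / 2]
```

A uniformly random choice from `candidate_labels(...)` then hits the intended label with probability at least $\delta/(2m)$.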
In total, $\bL$ satisfies at least an \begin{equation}\label{eq:used-gamma-again} \frac{\eps \delta^2}{16 M} = \gamma \end{equation} fraction of the edges. This concludes the proof. \textit{Moving from the product state value to the maximum energy.} First, we modify the graph~$G$ to remove any self-loops. To do this, we modify the distribution on edges $(\bv, \bx \circ \pi_{\bv \rightarrow \bu})$ and $(\bw, \by \circ \pi_{\bw \rightarrow \bu})$ so that $\bx$ and $\by$ are distributed as $\rho$-correlated Boolean strings \emph{conditioned on them not being equal}. This removes all self-loops, as any self-loop in the graph must have $\bv = \bw$ and $\bx \circ \pi_{\bv \rightarrow \bu} = \by \circ \pi_{\bw \rightarrow \bu}$, which implies that $\bx = \by$. (Note that this also removes some edges which are \emph{not} self-loops, namely those for which $\bx = \by$ but $\bv \neq \bw$.) Given that $\rho$-correlated $\bx$ and $\by$ are equal with probability \begin{equation}\label{eq:m-fact} (\tfrac{1}{2} + \tfrac{1}{2} \rho)^M \leq \eps/4, \end{equation} removing this event can only change the product state value of the graph by at most~$\eps/4$. Next, we apply~\Cref{cor:BH-nonuniform-easy-to-use} to bound the value of~$H_G$ over general states in the soundness case. To do so, let us write $E$ for the edges of~$G$, and define $p_{v, x} = \tfrac{1}{2} \Pr_{\boldsymbol{e} \sim E}[\text{$\boldsymbol{e}$ contains $(v, x)$}]$. Note that the distribution of a random edge $(\bv, \bx \circ \pi_{\bv\rightarrow \bu})$ and $(\bw, \by \circ \pi_{\bw \rightarrow \bu})$ is symmetric and never contains self-loops, and so $p_{v, x} = \Pr[(\bw, \by \circ \pi_{\bw\rightarrow \bu}) = (v, x)]$. But the UG instance $\calI$ is biregular, and so $\bw$ is just a uniformly random element of~$V$, and $\by$ is just a uniformly random string in $\{-1, 1\}^M$. Hence, $p_{v, x} =|V|^{-1}2^{-M}$ for each $v, x$, and so $\max_{v, x}\{p_{v, x}\} = |V|^{-1}2^{-M}$.
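The collision probability used here, and the effect of conditioning on $\bx \neq \by$, can be sketched numerically (a Python aside). Each bit agrees independently with probability $(1+\rho)/2$, so $\Pr[\bx = \by] = (\tfrac12 + \tfrac12\rho)^M$; once this is at most $1/2$, the renormalization factor introduced by the conditioning is at most~$2$.

```python
def collision_prob(M, rho):
    # Pr[x == y] for rho-correlated M-bit strings
    return ((1 + rho) / 2) ** M

def conditional_max_prob(M, rho):
    """Max over fixed strings y' of Pr[x = fixed string | x != y'] when rho < 0.
    The likeliest string is -y', hit with probability ((1 - rho)/2)^M."""
    flip_all = ((1 - rho) / 2) ** M
    return flip_all / (1 - collision_prob(M, rho))
```

This matches the bound $\frac{1}{1 - (1/2 + \rho/2)^M} \cdot (1/2 - \rho/2)^M \leq 2 \cdot (1/2 - \rho/2)^M$ used below.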
The next thing we have to bound to apply~\Cref{cor:BH-nonuniform-easy-to-use} is the maximum of \begin{equation*} \Pr[(\bv, \bx \circ \pi_{\bv\rightarrow \bu}) = (v', x') \mid (\bw, \by \circ \pi_{\bw \rightarrow \bu}) = (w', y')] \end{equation*} over all $v', w' \in V$ and $x', y' \in \{-1, 1\}^M$. Note that if we condition on a fixed value for $\bu$ and on the event that $\bv = v'$, then this is just the maximum probability that $\bx \circ \pi_{\bv \rightarrow \bu}$ equals a fixed string, given that $\bx$ is $\rho$-correlated but not equal to~$y'$. Given that $\bx$ is most likely to be $-y'$ since $\rho$ is negative, this probability is \begin{equation*} \tfrac{1}{1 - (\tfrac{1}{2} + \tfrac{1}{2} \rho)^M} \cdot (\tfrac{1}{2} - \tfrac{1}{2} \rho)^M \leq 2 \cdot (\tfrac{1}{2} - \tfrac{1}{2} \rho)^M, \end{equation*} where we normalized by $1 - (\tfrac{1}{2} + \tfrac{1}{2} \rho)^M$ due to the condition that $\bx \neq \by$. Then averaging over $\bu$ and $\bv$ can only decrease this bound. \ignore{ Next, let $A$ be the matrix such that \begin{equation*} A_{(v, x), (w, y)} = \Pr_{((\bv', \bx'), (\bw', \by')) \sim E}[(\bv', \bx') = (v, x) \mid (\bw', \by') = (w, y)]. \end{equation*} $A$ can be viewed as the transition matrix of a random walk on the vertices of~$G$, in which the probability of transitioning from vertex $(w, y)$ to $(v, x)$ is given by $A_{(v, x), (w, y)}$. Let $(\bv_0, \bx_0)$ be a uniformly random starting vertex, and let $(\bv_1, \bx_1)$ and $(\bv_2, \bx_2)$ be the next two steps of the walk. Then \begin{equation*} \tr[A^2] = \sum_{(w, x)} \Pr[(\bv_2, \bx_2) = (w, x) \mid (\bv_0, \bx_0) = (w, x)]. \end{equation*} As a result, because $(\bv_0, \bx_0)$ is uniformly random, \begin{equation*} \tr[A^2] \Vert p \Vert_2^2 = \sum_{(w, x)} \Pr[(\bv_0, \bx_0) = (\bv_2, \bx_2) = (w, x)] = \Pr[(\bv_0, \bx_0) = (\bv_2, \bx_2)]. \end{equation*} Let us condition this probability on fixed values of $(v_0, x_0)$ and $(v_1, x_1)$. 
Then we can bound the probability that $(\bv_2, \bx_2) = (v_0, x_0)$ by the probability that $\bx_2 = x_0$. But given $x_1$, $\bx_2$ will be generated by taking a $\rho$-correlated sample $\by \sim_{\rho} x_1$ and applying some permutation to $\by$. So we can bound this by the maximum probability that a~$\rho$-correlated sample equals a fixed string, which is \begin{equation*} \tfrac{1}{1 - (\tfrac{1}{2} + \tfrac{1}{2} \rho)^M} \cdot (\tfrac{1}{2} - \tfrac{1}{2} \rho)^M \leq 2 \cdot (\tfrac{1}{2} - \tfrac{1}{2} \rho)^M, \end{equation*} where we recall that all edge weights are normalized by $1 - (\tfrac{1}{2} + \tfrac{1}{2} \rho)^M$ because we removed self-loops. } Now we can apply \Cref{cor:BH-nonuniform-easy-to-use}, which states that \begin{equation*} \text{\sc QMax-Cut}\xspace(G) \leq \text{\sc Prod}\xspace(G) + 20\cdot (2 \cdot (\tfrac{1}{2} - \tfrac{1}{2} \rho)^M)^{1/8} + \tfrac{1}{|V| 2^M}, \end{equation*} and the sum of the two error terms is at most $\eps/4$ by our choice of~$M$. Hence, \begin{equation}\label{eq:m-fact-2} \text{\sc QMax-Cut}\xspace(G) \leq \tfrac{1}{4} - \tfrac{1}{4} F^*(3, \rho) + \eps, \end{equation} which completes the proof. \end{proof} \ignore{ \section{Unique Games Hardness of the Quantum Heisenberg Model} \subsection{Definitions and Preliminaries} In this section, we apply \Cref{thm:vector-borell} to show that solving ``difficult'' instances of Unique Label Cover reduces to $\alpha_\mathrm{BOV}$-approximating the product state value of the Heisenberg model. Assuming the Unique Games conjecture of Khot \cite{Kho02}, this yields an $\NP$-hardness result for product state approximation of the Heisenberg model. \begin{definition}[Unique Label Cover]\label{def:unique-games} The Unique Label Cover problem, $\mathcal L(V, W, E, [M], \{\sigma_{v,w}\}_{(v,w) \in E})$, is defined as follows: We are given a biregular bipartite graph with left-side vertices $V$, right-side vertices $W$, and edge set $E$.
The goal is to assign one `label' to every vertex of the graph, where $[M]$ is the set of allowed labels. The labeling must satisfy constraints given by bijective maps $\sigma_{v,w} : [M] \rightarrow [M]$; there is one such map for every edge $(v,w) \in E$. A labeling $L : V \cup W \rightarrow [M]$ `satisfies' an edge $(v,w)$ if $$\sigma_{v,w}(L(w)) = L(v).$$ \end{definition} \begin{conjecture}[Unique Games Conjecture \cite{Kho02}] For any $\eta, \gamma > 0$, there exists a constant $M = M(\eta, \gamma)$ such that it is $\NP$-hard to distinguish whether a Unique Label Cover problem with label set size $M$ has optimum at least $1 - \eta$ or at most $\gamma$. \end{conjecture} Our hardness result is stated as follows. \begin{theorem}[UG-Hardness of Approximating Quantum \text{\sc Max-Cut}\xspace]\label{thm:ug-hardness} For any $\rho \in [-1,0)$ and $\epsilon > 0$, it is Unique Games-hard to decide whether an instance of the Heisenberg Hamiltonian has maximum energy greater than $\frac{1- \rho}{4} - \epsilon$ or less than $\frac{1 - F^\star(3, \rho)}{4} + \epsilon$. In more standard notation, we say that it is UG-hard to $(\frac{1 - F^\star(3,\rho)}{4} + \epsilon, \frac{1- \rho}{4} - \epsilon)$-approximate Quantum \text{\sc Max-Cut}\xspace. \end{theorem} \begin{remark} $\frac{1 - F^\star(3,\rho)}{4}$ is exactly the product state value obtained by the rounding algorithm of \cite{GP19} on the integrality gap instance in \Cref{sec:integrality-gaps}. \end{remark} \begin{remark} Minimizing the ratio $\frac{1-\rho}{4}/\frac{1-F^\star(3,\rho)}{4}$ with respect to~$\rho$ yields $\alpha_\mathrm{BOV}$, which shows \Cref{thm:main-inapprox}. \end{remark} To show this result, we slightly generalize the framework for showing Unique Games hardness results for CSPs, developed in \cite{KKMO07} for the case of classical $\text{\sc Max-Cut}\xspace$.
In that work, the authors show that in order to prove hardness of approximation results for a general CSP over domain $\{-1,1\}$, it suffices to develop a ``Dictator-vs.-No-Notables'' test (also known as Dictator-vs.-Quasirandomness). Such a test distinguishes between ``dictators'', which are functions of the form $f(x_1,\dots,x_n) = x_i$, and functions which have no influential coordinates. \begin{definition}[$(\alpha,\beta)$-Dictator-vs.-No-Notables \cite{KKMO07,OD14}]\label{def:dictator-no-notables} Let $\Psi$ be a finite set of predicates over the domain $\Omega = \{-1,1\}$. Let $0 < \alpha < \beta \leq 1$ and let $\lambda : [0,1] \rightarrow [0,1]$ satisfy $\lambda(\epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$. Suppose that for each $n \in \mathbb N^+$ there is a local tester for functions $f : \{-1,1\}^n \rightarrow \{-1,1\}$ with the following properties: \begin{itemize} \item \textbf{Completeness}. If $f$ is a dictator then the test accepts with probability at least $\beta$. \item \textbf{Soundness}. If $f$ has no $(\epsilon,\epsilon)$-notable coordinates -- i.e., $\Inf_i^{\leq 1-\epsilon}[f] \leq \epsilon$ for all $i \in [n]$ -- then the test accepts with probability at most $\alpha + \lambda(\epsilon)$. \item The tester's accept/reject decision uses predicates from $\Psi$; i.e., the tester can be viewed as an instance of Max-CSP($\Psi$). \end{itemize} \end{definition} In the case of \text{\sc Max-Cut}\xspace, the appropriate test is the Long Code test, which queries $f$ at two $\rho$-correlated points $x,x'$ and then checks that $f(x) \not= f(x')$. Completeness is straightforward to verify. However, showing soundness requires the Majority is Stablest theorem, which bounds the maximum stability of a function with no influential coordinates. The proof of Majority is Stablest uses an ``invariance principle'' to relate functions over discrete domains (e.g.\ $\{-1,1\}^n$) with no highly influential coordinates to functions over continuous domains ($\mathbb R^n$).
\begin{theorem}[Invariance Principle \cite{MOO10, OD14}]\label{thm:invariance} Fix $\epsilon, \gamma \in [0,1]$ and set $d = \log\tfrac{1}{\epsilon}$. Let $f$ be an $n$-variate multilinear polynomial $$f(x) = \sum_{S \subseteq N} \hat f(S) \prod_{i \in S} x_i$$ such that $\Var[f] \leq 1$, $\Var[f^{> d}] < (1-\gamma)^{2d}$, and $\Inf^{\leq d}_i[f] \leq \epsilon$ for all $i \in [n]$. Let $\bx$ be a uniformly random string over $\{-1,1\}^n$ and $\by$ an $n$-dimensional standard Gaussian random variable. Assume $\psi : \mathbb R \rightarrow \mathbb R$ is $c$-Lipschitz continuous. Then $$|\E[\psi(f(\bx))] - \E[\psi(f(\by))]| \leq O(c) \cdot 2^d \epsilon^{1/4}$$ \end{theorem} The Majority is Stablest theorem is the following. \begin{theorem}[Majority is Stablest \cite{MOO10, KKMO07}\footnote{This theorem was proved in \cite{MOO10} for positive $\rho$ and the condition $\Inf_i(f) \leq \delta$. The version we state here is due to \cite{KKMO07} and was originally given as a conjecture.}]\label{thm:mis} Fix $\rho \in (-1,0]$. For any $\epsilon > 0$, there is a small enough $\delta = \delta(\epsilon, \rho)$ and a large enough $k = k(\epsilon, \rho)$ such that if $f : \{-1,1\}^n \rightarrow [-1,1]$ is any function satisfying $\Inf^{\leq k}_i[f] \leq \delta$ for all $i \in [n]$, then $$\mathbb S_\rho(f) \geq \Lambda_\rho(\mu) - \epsilon$$ \end{theorem} Here, $\Lambda_\rho(\mu)$ is the \textit{Gaussian quadrant probability function} \cite{OD14}. Now, given an $(\alpha,\beta)$-Dictator-vs.-No-Notables test, one can show this yields a UG-hardness result for a related constraint satisfaction problem. \begin{theorem}[\cite{KKMO07}]\label{thm:classical-ug-hardness} Fix a CSP over domain $\Omega = \{-1,1\}$ with predicate set $\Psi$. Suppose there exists an $(\alpha,\beta)$-Dictator-vs.-No-Notables test using predicate set $\Psi$. Then for all $\delta > 0$, it is ``UG-hard'' to $(\alpha+\delta,\beta- \delta)$-approximate $\text{\sc Max-CSP}\xspace(\Psi)$.
\end{theorem} \subsubsection{Example: Max Cut} Assuming the existence of a test satisfying \Cref{def:dictator-no-notables}, let's consider how \Cref{thm:classical-ug-hardness} works for classical $\text{\sc Max-Cut}\xspace$. Given a Unique Label Cover instance on graph $G = (V,E)$ with labels $[M]$, we observe that a labeling of a vertex, i.e.\ $L(v)$ for $v \in V$, can be represented by the dictator $f_{L(v)}(x) = x_{L(v)}$, where $f : \{-1,1\}^M \rightarrow \{-1,1\}$. When $G$ has a satisfying assignment, given two neighbors $w,w'$ of $v$, we have that $L(v) = \sigma_{(v,w)}(L(w)) = \sigma_{(v,w')}(L(w'))$. Therefore, we have that $$f_{L(v)}(x) = f_{L(w)}(x \circ \sigma_{(v,w)}) = f_{L(w')}(x \circ \sigma_{(v,w')})$$ where $x \circ \sigma$ is used to represent the string $(x_{\sigma(1)},x_{\sigma(2)},\dots,x_{\sigma(n)})$. In particular, $g_w = f_{L(w)}(x \circ \sigma_{(v,w)})$ yields the same dictator as $g_{w'}$, and applying the $(\alpha,\beta)$-Dictator-vs.-No-Notables test yields success with probability at least $\beta$. Note that since $\text{\sc Max-Cut}\xspace$ is $\text{\sc Max-CSP}\xspace(\not=)$, we use the $\not=$ predicate. On the other hand, if the test succeeds with probability more than $\alpha + \lambda(\epsilon)$, then for some constant fraction of $v \in V$, the test passes with high probability on neighbors $w,w'$ of $v$. By the second property in \Cref{def:dictator-no-notables}, $g_w, g_{w'}$ have some influential coordinates. Decoding these coordinates as a labeling for $v$ yields a labeling for $G$ which satisfies a large fraction of the edges. \subsection{Vector-Valued Majority is Stablest} \subsubsection{Definitions} Throughout this section, we'll work extensively with functions expressed as multilinear polynomials. Given the function $f : \Omega_1 \times \cdots \times \Omega_n \rightarrow \mathbb R$, the corresponding polynomial is of the form $$Q(x) = \sum_S c_S \chi_S$$ where $\chi_S = \prod_{i \in S} x_i$, and $x_i$ is a variable over $\Omega_i$.
The \textit{degree} of $Q(x)$ is $\max \{|S| : c_S \not= 0\}$. We also use the notation $$Q^{\leq d}(x) = \sum_{S : |S| \leq d} c_S \chi_S$$ These polynomials will be evaluated on independent, identically distributed collections of random variables $(x_1,\dots,x_n)$. Furthermore, these variables are restricted to be \textit{orthonormal}. Then, we are able to obtain the following formulas, familiar from the analysis of Boolean functions. \begin{proposition}\label{prop:fourier-properties} Let $\bx = (x_1,\dots,x_n)$ be a collection of orthonormal, i.i.d. random variables. Then, $$\E[Q(\bx)] = c_\emptyset \qquad \E[Q(\bx)^2] = \sum_S c^2_S \qquad \Var[Q(\bx)] = \sum_{S : |S| > 0} c^2_S$$ $$\Inf_i[Q(\bx)] = \sum_{S : i \in S} c^2_S \qquad \mathbf{T}_\rho[Q(\bx)] = \sum_S \rho^{|S|} c_S \chi_S \qquad \mathbf{Stab}_{\rho}[Q(\bx)] = \sum_S \rho^{|S|} c^2_S$$ \end{proposition} Note that the above formulas, with the exception of $\mathbf{T}_\rho$, do not depend on the random variables $\bx$, but only on the polynomial $Q$. $\mathbf{T}_\rho$ should be interpreted as an operator on multilinear polynomials. $Q$ will often correspond to a function $f$, and as a slight abuse of notation, we will use $f$ in place of $Q$ as an argument to the above definitions (e.g.\ $\mathbf{Stab}_\rho[f]$). In this case, $\hat f(S)$ refers to the coefficient $c_S$. The standard definition for the norm of a function defined over random variables $\bx$ is $$\norm{f}_2^2 = \langle f, f \rangle = \E_{x\sim \bx}[f(x)^2]$$ Observe that this is exactly $\sum_S c_S^2 = \sum_S \hat f(S)^2$. The inner product can be extended to two different functions $f,g$ and yields $\langle f, g \rangle = \sum_S \hat f(S) \hat g(S)$. Next, we consider the case of vector-valued functions. We will work most often with functions mapping Boolean strings $\{-1,1\}^n$ to $\mathbb R^m$.
Such a function can be expressed as $m$ different functions for each output coordinate, written as $f = (f_1,\dots,f_m)$. In general, the definitions in \Cref{prop:fourier-properties} can be applied to each $f_i$, and extended to $f$ by taking the sum. For instance, \begin{equation}\label{eq:stab} \mathbf{Stab}_\rho[f] = \sum_{i\in[m]} \mathbf{Stab}_\rho[f_i] = \sum_{i\in[m]} \sum_S \rho^{|S|} {\hat f_i(S)}^2 \end{equation} We might want to consider a vector-valued polynomial, rather than $m$ different polynomials for each $f_i$. We define $$f = \sum_S \hat{f}(S) \chi_S$$ where $\hat{f} = (\hat f_1,\dots,\hat f_m)$. Noting that $\norm{(x_1,\dots,x_m)}_2 = \sqrt{x_1^2 + \cdots + x_m^2}$, we can simplify \Cref{eq:stab} as $$\mathbf{Stab}_\rho[f] = \sum_{i\in[m]} \sum_S \rho^{|S|} {\hat f_i(S)}^2 = \sum_S \rho^{|S|}\norm{\hat{f}(S)}_2^2$$ A special case of the above definitions is that of ``low-degree'' influences: $$\Inf^{\leq d}_i[Q(\bx)] = \sum_{S : |S| \leq d, i \in S} c^2_S$$ \subsubsection{Majority is Stablest} Now, we will generalize the Majority is Stablest theorem to vector-valued functions. Fortunately, a suitable generalization of the invariance principle already exists. \begin{theorem}[Vector Invariance Principle \cite{IM12}]\label{thm:vector-invariance} Fix $\tau, \gamma \in [0,1]$ and set $d = \log\tfrac{1}{\tau}$. Let $f = (f_1,\dots,f_m)$ be an $m$-dimensional multilinear polynomial such that $\Var[f_j] \leq 1$, $\Var[f_j^{> d}] < (1-\gamma)^{2d}$, and $\Inf^{\leq d}_i[f_j] \leq \tau$ for each $j \in [m]$ and $i \in [n]$. Let $\bx$ be a uniformly random string over $\{-1,1\}^n$ and $\by$ an $n$-dimensional standard Gaussian random variable. Furthermore, let $\Psi : \mathbb R^m \rightarrow \mathbb R$ be Lipschitz continuous with Lipschitz constant $A$. Then, $$|\E[\Psi(f(\bx))] - \E[\Psi(f(\by))]| \leq C_mA \tau^{\gamma/18}$$ where $C_m$ is a parameter depending only on $m$.
\end{theorem} Carrying out a proof similar to that of \Cref{thm:mis} yields the following. \begin{theorem}[Vector-Valued Majority is Stablest]\label{thm:dictator-test} Fix $\rho \in (-1,0]$. Then for any $\epsilon > 0$, there exists a small enough $\delta = \delta(\epsilon, \rho)$ and a large enough $k = k(\epsilon, \rho) \geq 0$ such that if $f : \{-1,1\}^n \rightarrow B^{m-1}$ is any function satisfying $$\Inf^{\leq k}_i[f] = \sum_{j=1}^m \Inf^{\leq k}_i[f_j] \leq \delta \text{ for all }i = 1, \dots, n$$ then $$\E_{\bx \sim_\rho \by} [\langle f(\bx), f(\by)\rangle] = \mathbf{Stab}_\rho[f] \geq F^*(m,\rho) - \epsilon$$ \end{theorem} \begin{comment} As a simple corollary, \Cref{thm:dictator-test} can be extended to $\rho \in (-1,0]$. \begin{corollary}\label{cor:vector-mis} \Cref{thm:mis-positive-rho} holds in reverse for $\rho \in (-1,0]$, i.e.\ $\mathbb S_\rho(f) \geq F^\star(k,\rho) - \epsilon$. \end{corollary} \begin{proof} Let $f$ satisfy the conditions of \Cref{thm:mis-positive-rho} and define $g(\bx) = (f(\bx) - f(-\bx))/2$. It's easy to see $g(x) = \sum_{|S|\text{ odd}} \hat f(S)\chi_S(x)$ and thus $\E[g] = 0$ and $\Inf^{\leq d}_i[g] \leq \Inf_i^{\leq d}[f]$. Next, recalling that $\mathbf{Stab}_\rho[f] = \sum_{S} \rho^{|S|}\hat f(S)^2$, we further see that $$\mathbf{Stab}_\rho[f] \geq \mathbf{Stab}_\rho[g] = -\mathbf{Stab}_{-\rho}[g]$$ Applying the previous theorem to $-g$ yields a lower bound of $-F^\star(k,-\rho) - \epsilon$. Recall from \Cref{thm:hypergeometric} that $$F^\star(k,-\rho) := \E\left[\frac{Zu}{||Zu||}\cdot\frac{Zv}{||Zv||}\right]$$ where $\langle u,v\rangle = -\rho$. Then, since $\langle u,-v\rangle = \rho$, $-F^\star(k,-\rho) = F^\star(k,\rho)$, which concludes the proof. \end{proof} We now give the proof of \Cref{thm:mis-positive-rho}. \end{comment} \begin{proof} Throughout, we'll use $\bx$ to denote a string in $\{-1,1\}^n$ and $\by$ to denote a vector in $\mathbb R^n$.
Since the statement of \Cref{thm:vector-invariance} requires a function with low high-degree variance, we consider $g = \mathbf{T}_{1-\gamma} f$. Then for each $j \in [m]$, $$\Var[g_j^{\geq d}] = \sum_{|S| \geq d} (1-\gamma)^{2|S|}\hat{f}_j(S)^2 \leq (1-\gamma)^{2d} \Var[f_j^{\geq d}] \leq (1-\gamma)^{2d}$$ Also, $\Inf_i[g_j] \leq \Inf_i[f_j]$. Next, we bound the error in the quantity $\mathbf{Stab}_\rho[f]$ when we consider $g$ in place of $f$. \begin{align} \ABS{\mathbf{Stab}_\rho[f] - \mathbf{Stab}_\rho[g]} &= \ABS{\sum_S \rho^{|S|} \norm{\widehat{f}(S)}_2^2 - \sum_S \rho^{|S|}(1-\gamma)^{2|S|} \norm{\widehat{f}(S)}_2^2}\nonumber\\ &\leq \sum_S \ABS{\rho}^{|S|}\PAREN{1 - (1-\gamma)^{2|S|}} \norm{\widehat{f}(S)}_2^2\nonumber\\ &\leq \sup_{s \geq 0}\PAREN{\ABS{\rho}^{s}\PAREN{1 - (1-\gamma)^{2s}}} \sum_S\norm{\hat f(S)}_2^2\nonumber\\ &\leq \sup_{s \geq 0}\PAREN{2\gamma s \ABS{\rho}^{s}} \cdot \E[\norm{f}_2^2]\label{eq:bound1} \end{align} using $1 - (1-\gamma)^{2s} \leq 2\gamma s$. $f$ has range $B^{m-1}$ and thus $\E[\norm{f}_2^2] \leq 1$; moreover, $s\ABS{\rho}^s$ is bounded over $s \geq 0$ when $\ABS{\rho} < 1$. By choosing $\gamma(\epsilon, \rho)$ to be sufficiently small we can bound this quantity by $\epsilon/2$. Next, since $\mathbf{Stab}_{\rho}$ is a function of the coefficients of the polynomial corresponding to $f$, we have that $\mathbf{Stab}_\rho[g(\bx)] = \mathbf{Stab}_\rho[g(\by)]$. However, although $g(\bx) \in B^{m-1}$ (since $f(\bx) \in B^{m-1}$ by definition), $g(\by)$ may take values outside the unit ball. This prevents us from directly applying \Cref{thm:vector-borell}. We'll instead apply the theorem to the function $$g'(\by) = \begin{cases} g(\by) & \text{if }g(\by) \in B^{m-1}\\ \frac{g(\by)}{\NORM{g(\by)}} & \text{otherwise} \end{cases}$$ Applying \Cref{thm:vector-borell} yields that $\mathbf{Stab}_{\rho}[g'] \geq \mathbf{Stab}_{\rho}[f_\mathrm{opt}] = F^\star(m,\rho)$.
It remains to bound $\ABS{\mathbf{Stab}_{\rho}[g] - \mathbf{Stab}_{\rho}[g']}$. \begin{align*} \ABS{\mathbf{Stab}_\rho[g] - \mathbf{Stab}_\rho[g']} &= \ABS{\langle g, \mathbf{T}_\rho g\rangle - \langle g', \mathbf{T}_\rho g'\rangle}\\ &= \ABS{\langle g, \mathbf{T}_\rho g\rangle - \langle g', \mathbf{T}_\rho g\rangle - \PAREN{\langle g', \mathbf{T}_\rho g'\rangle - \langle g', \mathbf{T}_\rho g\rangle}}\\ &= \ABS{\langle g - g', \mathbf{T}_\rho g\rangle +\langle g', \mathbf{T}_\rho g - \mathbf{T}_\rho g'\rangle}\\ &\leq \ABS{\norm{g - g'}_2\norm{\mathbf{T}_\rho g}_2 + \norm{g'}_2\norm{\mathbf{T}_\rho g - \mathbf{T}_\rho g'}_2} && \text{Cauchy--Schwarz}\\ &\leq \ABS{\norm{g - g'}_2\norm{g}_2 + \norm{g'}_2\norm{g - g'}_2} && \text{$\mathbf{T}_\rho$ is a contraction}\\ &\leq \norm{g - g'}_2(\norm{g}_2 + \norm{g'}_2)\\ &\leq \norm{g - g'}_2(\norm{g - g' + g'}_2 + \norm{g'}_2)\\ &\leq \norm{g - g'}_2(\norm{g - g'}_2 + 2\norm{g'}_2)\\ &\leq 3\norm{g - g'}_2 \end{align*} where we have used that $\norm{g - g'}_2^2 \leq \norm{g - g'}_2$ for $\norm{g - g'}_2 \leq 1$, and that $\norm{g'}_2 \leq 1$. Now we just need to bound $\norm{g - g'}_2$ by $\epsilon/6$. Note that $\norm{g - g'}_2^2 = \E[\zeta(g(\by))]$, where $\zeta$ is the 1-Lipschitz function yielding the squared distance of a point $y \in \mathbb R^m$ from $B^{m-1}$ \begin{equation*} \zeta(y) = \begin{cases} 0 & \text{if }\norm{y}_2 \leq 1\\ \norm{y - \tfrac{y}{\norm{y}}}_2^2 & \text{otherwise} \end{cases} \end{equation*} Let $\delta = (\epsilon^2/(36\cdot C_m))^{18/\gamma}$.
We can now apply \Cref{thm:vector-invariance} with $\Psi = \zeta$ and $\tau = \delta$, which yields $$\E_{\by}[\zeta(g(\by))] = |\E_{\bx}[\zeta(g(\bx))] - \E_{\by}[\zeta(g(\by))]| \leq \epsilon^2/36$$ (the first equality holds since $g(\bx) \in B^{m-1}$ and hence $\zeta(g(\bx)) = 0$). Then, $\norm{g-g'}_2 \leq \sqrt{\epsilon^2/36} = \epsilon/6$ and $\ABS{\mathbf{Stab}_\rho[g] - \mathbf{Stab}_\rho[g']} \leq \epsilon/2$. Combining this with our bound of $\epsilon/2$ for \Cref{eq:bound1} concludes the proof. \end{proof} \subsection{Embedded Dictator-vs.-No-Notables} Recall that in the case of Quantum \text{\sc Max-Cut}\xspace, a product state assignment corresponds to a Bloch vector $v \in S^2$. With this in mind, we extend the domain of predicates in \Cref{def:dictator-no-notables} to $S^2$. Additionally, rather than working with a tester $T$ which outputs either \texttt{YES}/\texttt{NO} (equivalently $1$/$0$), our tester will output a value $v \in [-1,1]$. As such, rather than characterizing $T$ using a success probability, we work with the expected value of $T$.\footnote{Note that for $1$/$0$-valued testers, the expected value and success probability are exactly the same.} \begin{definition}[Real-Valued Tester] A real-valued tester $T$ for functions $f : \{-1,1\}^n \rightarrow S^{m-1}$ of type $\Psi : (S^{m-1})^r \rightarrow [-1,1]$ is a randomized algorithm which selects $r$ strings $x_1,\dots,x_r \in \{-1,1\}^n$ from a distribution $\mathcal D$, queries $f(x_1),\dots,f(x_r)$, then outputs the result of applying predicate $\Psi$.
The value of such a tester is $$\text{\sc Val}\xspace_T(f) = \E_{\bx_1,\dots,\bx_r \sim \mathcal D}[\Psi(f(\bx_1),\dots,f(\bx_r))]$$ \end{definition} \begin{notation} Given a tester $T$ with a length $l$ predicate and a collection of $l$ functions $F = \{f^{(1)},\dots,f^{(l)}\}$, we use the notation $T(F)$ to denote evaluating $T$ on the collection $F$ as follows: $$\E_{\bx_1,\dots,\bx_r \sim \mathcal D}[\Psi(f^{(1)}(\bx_1),\dots,f^{(l)}(\bx_r))]$$ \end{notation} Next, to generalize the notion of a dictator to vector-valued functions, we introduce \textit{embedded dictators}. \begin{definition}[Embedded Dictator] Let $f : \{-1,1\}^n \rightarrow S^{m-1}$ be a vector-valued function, with $f(x) = (f_1(x),\dots,f_m(x))$. Then, we say $f$ is an embedded dictator if $f_i$ is a dictator for some $i$, and $f_j = 0$ for all other $j \not= i$. \end{definition} Then, we have the following test. \begin{definition}[$(\alpha,\beta)$-Embedded-Dictator-vs.-No-Notables]\label{def:embedded-dic} Let $\Psi$ be a finite set of predicates over the domain $S^2$. Let $-1 \leq \beta < 0 < \alpha \leq 1$ and let $\lambda : [0,1] \rightarrow [0,1]$ satisfy $\lambda(\epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$. Suppose that for each $n \in \mathbb N^+$ there is a real-valued tester for functions $f : \{-1,1\}^n \rightarrow S^2$ with the following properties: \begin{itemize} \item \textbf{Completeness}. If $f$ is an embedded dictator then the test has expected value at least $\beta$. \item \textbf{Soundness}. If $f$ has no $(\epsilon,\epsilon)$-notable coordinates -- i.e., $\Inf_j^{\leq 1-\epsilon}[f] = \sum_{i \in [3]} \Inf_j^{\leq 1-\epsilon}[f_i] \leq \epsilon$ for all $j \in [n]$ -- then the test has expected value at most $\alpha + \lambda(\epsilon)$. \end{itemize} Then this yields a family of testers called an $(\alpha,\beta)$-Embedded-Dictator-vs.-No-Notables test using predicate $\Psi$.
\end{definition} We'll also need a simple lemma, which bounds the maximum number of coordinates with large low-degree influence. For a proof, see \Cref{sec:fourier-proofs}. \begin{lemma}[Bounds on Influential Coordinates]\label{lem:influence-bound} Fix a function $f : \{-1,1\}^n \rightarrow S^{m-1}$. Then the set $$S = \{i \in [n] : \Inf^{\leq 1-\epsilon}_i[f] > \tau\}$$ has size at most $m(1-\epsilon)/\tau$. \end{lemma} Before giving a tester satisfying these properties, we state our analogue of \Cref{thm:classical-ug-hardness}, which shows that such a tester is useful for showing hardness-of-approximation results. \begin{theorem}\label{thm:quant-ug-hardness} Fix a CSP over domain $S^{m-1}$ with predicate set $\Psi$. Suppose there exists an $(\alpha,\beta)$-Embedded-Dictator-vs.-No-Notables test using predicate set $\Psi$. Then for all $\delta > 0$, it is ``UG-hard'' to $(\alpha+\delta,\beta- \delta)$-approximate $\text{\sc Max-CSP}\xspace(\Psi)$.\footnote{For simplicity, we assume each constraint in $T$ has width 2, but we could easily extend this to width $c$ constraints.} \end{theorem} \begin{proof} As in \cite{KKMO07}, the proof will be a reduction from a hard instance of Unique Label Cover $\mathcal L$ to a Max-CSP($\Psi$) instance $\mathcal I$. Pick constants $\eta, \gamma$ and $M = M(\eta, \gamma)$ satisfying the Unique Games Conjecture. Then, take an instance of Unique Label Cover $\mathcal L(V, W, E, [M], \{\sigma_{v,w}\}_{(v,w) \in E})$ and to each vertex $v \in V \cup W$, associate a function $f_v : \{-1,1\}^M \rightarrow S^{m-1}$. Furthermore, let $T = \Psi(f(x_1),f(x_2))$ be a tester satisfying the properties given in \Cref{def:embedded-dic}. To form our instance $\mathcal I$, consider all triples $(v,w, w')$ where $v \in V$ and $w, w' \in N(v) \subseteq W$.
Then, evaluate $T$ on the collection $\{g_w,g_{w'}\}$, which yields an instance of \text{\sc Max-CSP}\xspace($\Psi$) of the form \begin{equation}\label{eq:constraint} \mathcal I = \E_{(\bv,\bw,\bw')}[T(\{g_{\bw},g_{\bw'}\})] \end{equation} As before, $g_{w}(x) = f_w(x \circ \sigma_{v,w})$. Observe that when $(v,w)$ is satisfied under $\sigma_{v,w}$, $g_w(x) = f_v(x)$. In the completeness case, we show that when $\mathcal L$ has value at least $1- \eta$, we can construct $f_v$ such that evaluating each constraint of \Cref{eq:constraint} is analogous to applying $T$ to an embedded dictator. For soundness, we use the contrapositive and show that if $\mathcal I$ has value more than $\alpha + 2\lambda(\epsilon)$, then there is a labeling satisfying at least a $\gamma$ fraction of the edges.\\ \textit{Completeness.} Assume $\mathcal L$ has a labeling $L$ satisfying more than a $(1- \eta)$-fraction of the edges. For each $v \in W \cup V$, let $f_v(x) = (x_{L(v)}, 0, \dots, 0)$. Select triples $(v,w,w')$ by first picking a vertex $v \in V$ and neighbors $w,w' \in N(v) \subseteq W$. By a result in \cite{KR08}, we can assume the graph for $\mathcal L$ is uniform on $V$'s side; then, picking a random vertex $v \in V$ and neighbor $w \in N(v)$ is equivalent to picking a uniformly random edge from $E$. Therefore, by the union bound, the probability that edges $(v,w)$ and $(v,w')$ are both satisfied by the labeling is at least $1 - 2\eta$. Denote this event $\mathrm{Sat}_{v,w,w'}$. 
Then we can rewrite $\mathcal I$ as \begin{align*} \E_{(\bv,\bw,\bw')}[T(\{g_{\bw}, g_{\bw'}\})] &= \E_{(\bv,\bw,\bw')}[T(\{g_{\bw}, g_{\bw'}\}) | \mathrm{Sat}_{\bv,\bw,\bw'}]\Pr[\mathrm{Sat}_{\bv,\bw,\bw'}]\\ &\qquad + \E_{(\bv,\bw,\bw')}[T(\{g_{\bw}, g_{\bw'}\}) | \neg\mathrm{Sat}_{\bv,\bw,\bw'}]\Pr[\neg\mathrm{Sat}_{\bv,\bw,\bw'}]\\ &\geq \E_{(\bv,\bw,\bw')}[T(\{g_{\bw}, g_{\bw'}\}) | \mathrm{Sat}_{\bv,\bw,\bw'}](1-2\eta) \end{align*} When $(v,w)$ and $(v,w')$ are both satisfied, $g_w(x) = g_{w'}(x) = f_v(x)$, and this is equivalent to applying $T$ to the embedded dictator $f_v$. Thus, $\mathcal I$ has value at least $(1-2\eta)\beta = \beta - O(\eta)$. \textit{Soundness.} Assume instance $\mathcal I$ has value more than $\alpha + 2\lambda(\epsilon)$. Let $F = \{g_v\}_{v \in V \cup W}$ be an assignment which obtains this value. We observe that \begin{align*} \text{\sc Val}\xspace_\mathcal{I}(F) &= \E_{(\bv,\bw,\bw')}[T(\{g_{\bw},g_{\bw'}\})]\\ &= \E_{(\bv,\bw,\bw')}\BRAC{\E_{\bx_1,\bx_2 \sim \mathcal D}[\Psi(g_{\bw}(\bx_1),g_{\bw'}(\bx_2))]}\\ &= \E_{\bv}\BRAC{\E_{\bx_1,\bx_2 \sim \mathcal D}[\Psi(\E_{\bw} g_{\bw}(\bx_1),\E_{\bw} g_{\bw}(\bx_2))]}\\ &= \E_{\bv} \BRAC{T\BRAC{\E_{\bw \in N({\bv})} g_{\bw}}} \end{align*} where the second-to-last equality holds because $w, w' \in N(v)$ are selected uniformly and independently at random. Thus, we can write $\text{\sc Val}\xspace_\mathcal{I}(F)$ as $\E_{\bv \in V}[T(h_{\bv})]$, where $h_v = \E_{\bw \in N(v)}g_{\bw}$. Let $V'$ be a subset of $V$ such that for each $v \in V'$, $T(h_v) \geq \alpha + \lambda(\epsilon)$. We call these \textit{good} vertices. Furthermore, let $c = |V'|/|V|$. By writing $\text{\sc Val}\xspace_\mathcal{I}(F)$ as $c\E_{\bv \in V'}[T(h_{\bv})] + (1-c)\E_{\bv \in V \setminus V'}[T(h_{\bv})]$, one can see that $c \geq \lambda(\epsilon)$; that is, at least a $\lambda(\epsilon)$ fraction of the vertices are good. 
Thus, since $T$ is an $(\alpha,\beta)$-Embedded-Dictator-vs.-No-Notables test, for each good vertex $v \in V'$ the set $$\mathrm{Notables}_v = \{j \in [M] : \Inf_j^{\leq 1-\epsilon}[h_v] > \epsilon\}$$ is non-empty. Selecting just one $j$ from each $\mathrm{Notables}_v$ yields an assignment to the good vertices $v \in V'$. Next we'll need to obtain labels for neighbors of the good vertices. Fix a good vertex $v$. Note that the function $\Inf_j$ is convex and therefore, $\epsilon < \Inf_j^{\leq 1-\epsilon}[h_v] \leq \mathbb E_{\bw \in N(v)}[\Inf_j^{\leq 1-\epsilon}[g_{\bw}]]$. Applying the same averaging argument yields that for at least an $\epsilon/2$ fraction of the neighbors $w$, $\Inf_j^{\leq 1-\epsilon}[g_w] \geq \epsilon/2$. Recalling that $g_w(x) = f_w(x \circ \sigma_{(v,w)})$, we equivalently obtain $\Inf_{\sigma^{-1}_{(v,w)}(j)}^{\leq 1-\epsilon}[f_w] \geq \epsilon/2$. This shows that the set of good labels $$S_w = \{j : \Inf^{\leq 1-\epsilon}_j[f_w] \geq \epsilon/2 \}$$ is non-empty. Next, by \Cref{lem:influence-bound}, we note $|S_w| \leq 6(1-\epsilon)/\epsilon$. Assign a label to $w$ by picking a uniformly random $j \in S_w$. Finally, we consider the expected number of satisfied edges. The probability of selecting a good vertex $v \in V$ is at least $\lambda(\epsilon)$. For each such vertex, at least an $\epsilon/2$ fraction of its neighbors $w$ have $\sigma^{-1}_{(v,w)}(j) \in S_w$. Since $S_w$ has size at most $6(1-\epsilon)/\epsilon$, this yields a probability of at least $\epsilon/(6(1-\epsilon))$ of selecting the appropriate label. Thus, in expectation we satisfy at least a $$\frac{\lambda(\epsilon)\epsilon^2}{12(1-\epsilon)}$$ fraction of the edges. Setting $\gamma =\frac{\lambda(\epsilon)\epsilon^2}{12(1-\epsilon)}$ concludes the proof. 
\end{proof} \subsection{Test for Quantum Heisenberg Model} With the reassurance that constructing a test satisfying \Cref{def:embedded-dic} yields an $(\alpha+\delta, \beta-\delta)$-hardness result, we give our test $T$ for the predicate $\Psi(\bx,\by) = 1 - \langle \bx, \by \rangle$ on functions $f \colon \{-1,1\}^n \rightarrow S^{m-1}$. Let $\mathcal D_T$ be the distribution corresponding to the following sampling procedure: \begin{enumerate} \item Pick $x \in \{-1,1\}^n$ uniformly at random. \item Construct $y \in \{-1,1\}^n$ by copying $x$, then flipping each coordinate independently with probability $(1-\rho)/2$, so that $\E[\bx_i \by_i] = \rho$ for each $i$. \end{enumerate} Given $\bx,\by \sim \mathcal D_T$, return $1 - \langle f(\bx), f(\by)\rangle$. Then, $\text{\sc Val}\xspace_T(f) = \E_{\bx,\by \sim \mathcal D_T}[1 - \langle f(\bx), f(\by)\rangle]$. \begin{lemma} Test $T$ is a $(1-F^\star(m, \rho),1- \rho)$-Embedded-Dictator-vs.-No-Notables test for the predicate $\Psi(\bx,\by) = 1 - \langle \bx, \by \rangle$. \end{lemma} \begin{proof} \textit{Completeness}. Let $f$ be an embedded dictator. WLOG, let $f(x) = (x_i, 0, \dots, 0)$. Then, $$\text{\sc Val}\xspace_T(f) = \E_{\bx,\by \sim \mathcal D_T}[1 - \langle f(\bx), f(\by)\rangle] = \E_{\bx,\by \sim \mathcal D_T}[1 - \bx_i \by_i] = 1-\rho$$ \textit{Soundness}. Assume $f$ has no $(\epsilon, \epsilon)$-notable coordinates. Then, the Vector Majority is Stablest Theorem, in the form of \Cref{thm:dictator-test}, yields \begin{align*} \E_{x,y \sim \mathcal D_T}[1 - \langle f(x), f(y)\rangle] \leq 1 - F^\star(m, \rho) + \lambda(\epsilon) \end{align*} where $\lambda$ depends only on $\epsilon$ and approaches $0$ as $\epsilon \rightarrow 0$. \end{proof} }
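The completeness calculation above is easy to check by direct simulation. The sketch below is our own illustration (the function names are ours, not from any library): it samples $\rho$-correlated strings as in the test and estimates $\text{Val}_T$ for an embedded dictator, which concentrates around the completeness value $1-\rho$.

```python
import numpy as np

def sample_correlated(n, rho, rng):
    """Sample x uniform on {-1,1}^n and a rho-correlated copy y:
    each coordinate is flipped independently with probability
    (1 - rho)/2, so that E[x_i y_i] = rho."""
    x = rng.choice([-1, 1], size=n)
    flips = rng.random(n) < (1.0 - rho) / 2.0
    return x, np.where(flips, -x, x)

def embedded_dictator(i, m):
    """f(x) = (x_i, 0, ..., 0): the i-th embedded dictator into R^m."""
    def f(x):
        out = np.zeros(m)
        out[0] = x[i]
        return out
    return f

def estimate_val(f, n, rho, trials, seed=0):
    """Monte Carlo estimate of Val_T(f) = E[1 - <f(x), f(y)>]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        x, y = sample_correlated(n, rho, rng)
        total += 1.0 - float(np.dot(f(x), f(y)))
    return total / trials
```

For an embedded dictator the estimate concentrates around $1-\rho$; for, e.g., the constant function it would concentrate around $0$, illustrating why the test rewards dictators.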
\section{Introduction} There has been rapidly growing interest in the behavior of materials at nanometer scales \cite{nanotechnology01}. One motivation is to construct ever smaller machines \cite{bhushan04}, and a second is to improve material properties by controlling their structure at nanometer scales \cite{valiev02}. For example, decreasing crystallite size may increase yield strength by suppressing dislocation plasticity, and material properties may be altered near free interfaces or grain boundaries. To make progress, this research area requires experimental tools for characterizing nanoscale properties. Theoretical models are also needed both to interpret experiments and to allow new ideas to be evaluated. One common approach for measuring local properties is to press tips with characteristic radii of 10 to 1000 nm into surfaces using an atomic force microscope (AFM) or nanoindenter \cite{carpick97,carpick97b,carpick04,carpick04b,pietrement01,lantz97,schwarz97,schwarz97b,asif01,jarvis93,kiely98,wahl98}. Mechanical properties are then extracted from the measured forces and displacements using classic results from continuum mechanics \cite{johnson85}. A potential problem with this approach is that continuum theories make two key assumptions that must fail as the size of contacting regions approaches atomic dimensions. One is to replace the atomic structure in the bulk of the solid bodies by a continuous medium with internal stresses determined by a continuously varying strain field. The second is to model interfaces by continuous, differentiable surface heights with interactions depending only on the surface separation. Most authors go further and approximate the contacting bodies by smooth spheres. In a recent paper \cite{luan05}, we analyzed the limits of continuum mechanics in describing nanometer scale contacts between non-adhesive surfaces with curvature typical of experimental probes. 
As in studies of other geometries \cite{miller96,landman96,vafek99}, we found that behavior in the bulk could be described by continuum mechanics down to lengths as small as two or three atomic diameters. However, the atomic structure of surfaces had profound consequences for much larger contacts. In particular, atomic-scale changes in the configuration of atoms on nominally cylindrical or spherical surfaces produced factor of two changes in the width of the contacting region and the stress needed to produce plastic yield, and order of magnitude changes in friction and stiffness. In this paper we briefly revisit non-adhesive contacts with an emphasis on the role of surface roughness. We then extend our atomistic studies to the more common case of adhesive interactions. One important result is that the work of adhesion is very sensitive to small changes in the positions of surface atoms. Changes in other quantities generally mirror those for non-adhesive tips, and small differences in the magnitude of these effects can be understood from geometrical considerations. The results are used to test continuum-based methods of analyzing AFM measurements of friction and stiffness \cite{carpick97,carpick97b,carpick04,carpick04b,pietrement01,lantz97,schwarz97,schwarz97b}. We show that the models may appear to provide a reasonable description when limited information about the true contact structure is available. When the full range of information accessible to simulations is examined, one finds that the contact area and pressure distributions may be very different than inferred from the models. Section \ref{sec:continuum} reviews continuum results for contact without and with adhesion, and briefly describes the effect of surface roughness. The methods used in our atomistic simulations and the geometries of the tips are described in Sec. \ref{sec:method}. Section \ref{sec:nonadhere} presents results for purely repulsive interactions and Sec. 
\ref{sec:adhere} describes trends with the strength of adhesion. A summary and conclusions are presented in Sec. \ref{sec:conclusions}. \section{Continuum Contact Mechanics} \label{sec:continuum} As noted above, contact mechanics calculations assume that the contacting solids are described by continuum elasticity so that the discrete atomic structure can be ignored. In most cases the two solids are also assumed to be isotropic with Young's moduli $E_1$ and $E_2$ and Poisson ratios $\nu_1$ and $\nu_2$. Then the results depend only on an effective modulus $E^*$ satisfying: \begin{equation} 1/E^*\equiv(1-\nu_1^2)/E_1 +(1-\nu_2^2)/E_2 . \label{eq:effectmod} \end{equation} Three-dimensional crystalline solids are not isotropic, but the theories can still be applied with an effective $E^*$ that depends on orientation and is determined numerically \cite{johnson85}. Continuum theories also neglect the atomic structure of the surface. In most cases the surfaces are assumed to be spherical, with radii $R_1$ and $R_2$. For elastic, frictionless solids the contact of two spheres is equivalent to contact between a sphere of radius $R = (R_1^{-1}+R_2^{-1})^{-1}$ and a flat solid \cite{johnson85}. From Eq. (\ref{eq:effectmod}), one may then map contact between any two spherical surfaces onto contact between a rigid sphere of radius $R$ and a flat elastic solid of modulus $E^*$. This is the case considered in our simulations, and previous results indicate this mapping remains approximately correct at atomic scales \cite{luan05}. Non-adhesive contact is described by Hertz theory \cite{johnson85}, which assumes solids interact with an infinitely sharp and purely repulsive ``hard-wall'' interaction. The surfaces contact in a circular region of radius $a$ that increases with the normal force or load $N$ pushing the surfaces together as \cite{johnson85}: \begin{equation} \label{eq:hertza} a = \left( \frac{3NR}{4E^*} \right) ^{1/3}\ \ . 
\end{equation} The normal pressure $p$ within the contact has a simple quadratic dependence on the radial distance from the center $r$: \begin{equation} p(r)=\frac{2aE^*}{\pi{R}}\sqrt{1-\frac{r^2}{a^2}}\ \ , \end{equation} and the surfaces separate slowly outside the contact. The normal displacement of the tip $\delta$ is related to $a$ by: \begin{equation} \label{eq:hertzd} \delta_H = a^2/R = (\frac{3NR}{4E^*})^{2/3} \ \ , \end{equation} where the subscript $H$ indicates the Hertz prediction and $\delta_{H}=0$ corresponds to the first contact between tip and substrate. Adhesion can be treated most simply in the opposite limits of very short-range interactions considered by Johnson, Kendall and Roberts (JKR) \cite{johnson71} and of infinite range interactions considered by Derjaguin, Muller and Toporov (DMT) \cite{derjaguin75}. The strength of adhesion is measured by the work of adhesion per unit area $w$. In DMT theory the attractive forces just produce an extra contribution to the normal force, so that $N$ is replaced by $N + 2\pi w R$ in Eqs. (\ref{eq:hertza}) and (\ref{eq:hertzd}). JKR theory treats the edge of the contact as a crack tip and calculates the stress by adding the crack and Hertz solutions. The normal force in Eq. (\ref{eq:hertza}) is then replaced by $N+ 3\pi w R + \left[6\pi w R N + (3\pi w R)^2\right]^{1/2}$ and the equation for $\delta$ is modified (Sec. \ref{sec:adhereload}). The two approaches lead to very different functional relations between $a$ and $N$. For example, the contact radius goes to zero at pulloff for DMT theory, but remains finite for JKR. They also predict different values of the pulloff force, $N_c$, where the surfaces separate. The normalized pulloff force, $N_c/\pi w R$, is -3/2 in JKR theory and -2 for DMT. Finally, the surfaces separate outside the contact with infinite slope in JKR theory, and gradually in DMT theory. 
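As a numerical sanity check on the Hertz relations above, the following sketch (a minimal illustration; parameter values match those quoted later for the simulations, $R=100\ \sigma$, $E^* = 55\ \epsilon/\sigma^3$, and dimensionless load $N/(R^2 E^*)=0.0018$) computes $a$ and $\delta_H$ and verifies that the pressure profile integrates back to the applied load.

```python
import numpy as np

def hertz_contact(N, R, Estar):
    """Hertz relations: contact radius a, normal displacement delta_H,
    and pressure profile p(r) for a rigid sphere of radius R on an
    elastic substrate with effective modulus E*."""
    a = (3.0 * N * R / (4.0 * Estar)) ** (1.0 / 3.0)
    delta = a ** 2 / R
    def p(r):
        return (2.0 * a * Estar / (np.pi * R)) * np.sqrt(
            np.clip(1.0 - (r / a) ** 2, 0.0, None))
    return a, delta, p

# Parameters in LJ units, matching the simulations described later.
R, Estar = 100.0, 55.0
N = 0.0018 * R ** 2 * Estar
a, delta, p = hertz_contact(N, R, Estar)

# Consistency check: integrating p(r) over the contact disc recovers
# the load, i.e. int_0^a p(r) 2 pi r dr = N (trapezoidal rule).
r = np.linspace(0.0, a, 20001)
g = p(r) * 2.0 * np.pi * r
load = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r)))
```

With these parameters $a \approx 11\ \sigma$ and $\delta_H \approx 1.2\ \sigma$, of the same order as the values discussed in the simulations.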
The Maugis-Dugdale (M-D) model \cite{maugis92} provides a simple interpolation between the JKR and DMT limits. The surfaces are assumed to have a hard-wall contact interaction that prevents any interpenetration, plus a constant attractive force per unit area, $\sigma_0$, that extends over a finite distance $h_0$. The work of adhesion is just the integral of the attractive force, implying $\sigma_0 h_0 = w$. The M-D model produces coupled equations for the contact pressure that can be solved to yield a relation between the load, normal displacement, and area. As discussed further in Section \ref{sec:adhere}, the edge of the contact is broadened by the finite interaction range, making it useful to define three characteristic radii that converge to the JKR value for $a$ in the limit of short-range interactions. Maugis introduced a transition parameter \cite{maugis92} \begin{equation} \lambda \equiv \left( \frac{9Rw^2 }{2\pi {E^*}^2 h_0^3} \right)^{1/3}, \end{equation} that measures the ratio of the normal displacement at pulloff from JKR theory to the interaction range $h_0$. Tabor \cite{tabor76} had previously defined a similar parameter, $\mu$, that is about 16\% smaller than $\lambda$ for typical interaction potentials \cite{johnson97}. Johnson and Greenwood \cite{johnson97} have provided an adhesion map characterizing the range of $\lambda$ over which different models are valid. For $\lambda > 5$ the interaction range is short and JKR theory is accurate, while DMT is accurate for $\lambda < 0.1$. For most materials, both $h_0$ and the ratio $w/E^*$ are of order 1 nm. The JKR limit is only reached by increasing $R$ to macroscopic dimensions of micrometers or larger. JKR theory has been tested in experiments with centimeter scale radii using the surface force apparatus (SFA) \cite{horn87} and hemispherical elastomers \cite{Shull02,newby95}. 
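The transition parameter is straightforward to evaluate. In the sketch below (illustrative only; the values of $w$ and $h_0$ are assumed round numbers of order the LJ scale, not taken from the simulations), $\lambda$ lands in the transition regime of the Johnson--Greenwood map, as it does for the adhesive tips studied here.

```python
import math

def maugis_lambda(R, w, Estar, h0):
    """Maugis transition parameter:
    lambda = (9 R w^2 / (2 pi E*^2 h0^3))^(1/3)."""
    return (9.0 * R * w ** 2 /
            (2.0 * math.pi * Estar ** 2 * h0 ** 3)) ** (1.0 / 3.0)

def adhesion_regime(lam):
    """Johnson-Greenwood adhesion map: JKR is accurate for lambda > 5,
    DMT for lambda < 0.1, and M-D interpolates in between."""
    if lam > 5.0:
        return "JKR"
    if lam < 0.1:
        return "DMT"
    return "transition (Maugis-Dugdale)"

# Illustrative values in LJ units: R = 100 sigma, E* = 55 eps/sigma^3,
# and assumed w = 1 eps/sigma^2, h0 = 0.5 sigma.
lam = maugis_lambda(100.0, 1.0, 55.0, 0.5)
```

Reaching the JKR regime ($\lambda > 5$) at fixed $w/E^*$ and $h_0$ requires increasing $R$ by several orders of magnitude, consistent with the statement that JKR only applies at macroscopic radii.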
Scanning probe microscope tips typically have $R$ between 10 and 100 nm, and the value of $\lambda \sim 0.1$ to 1 lies between JKR and DMT limits \cite{carpick97}. The same is true in our simulations, where $\lambda$ for adhesive tips varies between 0.1 and 0.75. For this reason we will compare our results to M-D theory below. We also found it useful to use a simple interpolation scheme suggested by Schwarz \cite{schwarz03}. Both he and Carpick et al. \cite{carpick99} have proposed formulae for the contact radius that interpolate smoothly between DMT and JKR. These approaches have been attractive in analyzing experimental data because of their simple analytic forms. No direct measurement of contact area has been possible in nanometer scale single asperity contacts. Instead, the contact area has been determined by measurements of contact stiffness \cite{carpick97,carpick97b,carpick04,carpick04b,pietrement01,lantz97,johnson97,jarvis93,kiely98,wahl98,asif01}, conductance \cite{lantz97}, or friction \cite{lantz97,carpick97,carpick97b,carpick04,carpick04b,schwarz97,schwarz97b}. The validity of these approaches is not clear \cite{luan05}, and will be tested below. The stiffness against normal displacements of the surfaces can be determined from the derivative of $N$ with respect to $\delta$ in M-D theory. The tangential stiffness $k$ is normally calculated by assuming friction prevents sliding at the interface, even though all theories described above assume zero friction in calculating the contact area. With this assumption $k=8G^* a$, where $G^* $ is the effective bulk shear modulus. Relating the friction to contact area requires assumptions about the friction law. Many authors have assumed that the friction is proportional to area \cite{carpick04,carpick04b,carpick97b,carpick97,pietrement01,lantz97,schwarz97,schwarz97b}, but simulations \cite{luan05,wenning01,muser01prl} and experiments in larger contacts \cite{gee90,berman98} show that this need not be the case. 
The effect of surface roughness on contact has been considered within the continuum framework \cite{johnson85}. In general, results must be obtained numerically. One key parameter is the ratio of the root mean squared (rms) roughness of the surface, $\Delta$, to the normal displacement $\delta$. When $\Delta/ \delta < 0.05 $, results for nonadhesive contacts lie within a few percent of Hertz theory \cite{johnson85}. As $\Delta / \delta$ increases, the contact area broadens and the pressure in the central region decreases. Adhesion is more sensitive to roughness \cite{fuller75}. The analysis is complicated by the fact that $\Delta$ usually depends on the range of lengths over which it is measured. The natural upper bound corresponds to the contact diameter and increases with load, while the lower bound at atomic scales is unclear. The role of roughness is discussed further in Secs. \ref{sec:nonadhere} and \ref{sec:adhere}. \section{Simulation methods} \label{sec:method} We consider contact between a rigid spherical tip and an elastic substrate with effective modulus $E^*$. As noted above, continuum theory predicts that this problem is equivalent to contact between two elastic bodies, and we found this equivalence was fairly accurate in previous studies of non-adhesive contact \cite{luan05}. To ensure that any deviations from the continuum theories described above are associated only with atomic structure, the substrate is perfectly elastic. Continuum theories make no assumptions about the nature of the atomic structure and interactions within the solids. Thus any geometry and interaction potentials can be used to explore the type of deviations from continuum theory that may be produced by atomic structure. We use a flat crystalline substrate to minimize surface roughness, and use tips with the minimum roughness consistent with atomic structure. The interactions are simple pair potentials that are widely used in studies that explore generic behavior \cite{allen87}. 
They illustrate the type of deviations from continuum theory that may be expected, but the magnitude of deviations for real materials will depend on their exact geometry and interactions. Atoms are placed on sites of a face-centered cubic (fcc) crystal with a (001) surface. We define a characteristic length $\sigma$ so that the volume per atom is $\sigma^3$ and the nearest-neighbor spacing is $2^{1/6}\ \sigma$. Nearest neighbors are coupled by ideal Hookean springs with spring constant $\kappa$. Periodic boundary conditions are applied along the surface of the substrate with period $L$ in each direction. The substrate has a finite depth $D$ and the bottom is held fixed. For the results shown below, $L=190.5\ \sigma$ and $D=189.3\ \sigma$. The continuum theories assume a semi-infinite substrate, and we considered smaller and larger $L$ and $D$ to evaluate their effect on calculated quantities. Finite-size corrections for the contact radius and lateral stiffness are negligible for $a/D < 0.1$ \cite{johnson85}, which covers the relevant range of $a/R < 0.2$. Corrections to the normal displacement are large enough to affect plotted values. We found that the leading analytic corrections \cite{johnson01,sridhar04,adams06} were sufficient to fit our results at large loads, as discussed in Sec. \ref{sec:adhereload}. Note that previous simulations of AFM contact have used much shallower substrates ($D\sim 10\ \sigma$) \cite{harrison99,landman90,sorensen96,nieminen92,raffi-tabar92,tomagnini93}. This places them in a very different limit from continuum theories, although they provide interesting insight into local atomic rearrangements. Three examples of atomic realizations of spherical tips are shown in Fig. \ref{fig:tips}. All are identical from the continuum perspective, deviating from a perfect sphere by at most $\sigma$. The smoothest one is a slab of fcc crystal bent into a sphere. 
The amorphous and stepped tips were obtained by cutting spheres from a bulk glass or crystal, and are probably more typical of AFM tips \cite{carpick97,foot1}. Results for crystalline tips are very sensitive to the ratio $\eta$ between their nearest-neighbor spacing and that of the substrate, as well as their crystalline alignment \cite{muser01prl,wenning01}. We will contrast results for an aligned commensurate tip with $\eta=1$ to those for an incommensurate tip where $\eta=0.94437$. To mimic the perfectly smooth surfaces assumed in continuum theory, we also show results for a high density tip with $\eta=0.05$. In all cases $R=100\ \sigma \sim 30$ nm, which is a typical value for AFM tips. Results for larger radius show bigger absolute deviations from continuum predictions, but smaller fractional deviations \cite{luan05}. Atoms on the tip interact with the top layer of substrate atoms via a truncated Lennard-Jones (LJ) potential \cite{allen87} \begin{equation} V_{LJ}=-4\epsilon_i \left[ \left(\frac{\sigma}{r}\right)^6- \left(\frac{\sigma}{r}\right)^{12} \right] -V_{\rm cut},\qquad r<r_{\rm cut} \label{eq:lj} \end{equation} where $\epsilon_i$ characterizes the adhesive binding energy, the potential vanishes for $r >r_{\rm cut}$, and the constant $V_{\rm cut}$ is subtracted so that the potential is continuous at $r_{\rm cut}$. Purely repulsive interactions are created by truncating the LJ potential at its minimum $r_{\rm cut} = 2^{1/6}\ \sigma$. Studies of adhesion use $r_{\rm cut}=1.5\ \sigma$ or $r_{\rm cut}=2.2\ \sigma$ to explore the effect of the range of the potential. In order to compare the effective strength of adhesive interactions and the cohesive energy of the solid substrate, we introduce a unit of energy $\epsilon$ defined so that the spring constant between substrate atoms $\kappa= 50\ \epsilon /\sigma^2$. 
If the solid atoms interacted with a truncated LJ potential with $\epsilon$ and $r_{\rm cut}=1.5\ \sigma$, they would have the same equilibrium lattice constant and nearly the same spring constant, $\kappa=57\ \epsilon/\sigma^2$, at low temperatures and small deformations. Thus $\epsilon_i/\epsilon$ is approximately equal to the ratio of the interfacial binding energy to the cohesive binding energy in the substrate. The elastic properties of the substrate are not isotropic. We measure an effective modulus $E^* = 55.0\ \epsilon/\sigma^3$ for our geometry using Hertzian contact of a high density tip. This is between the values calculated from the Young's moduli in different directions. The sound velocity is also anisotropic. We find longitudinal sound velocities of 8.5 and 9.5 $\sigma/t_{LJ}$ and shear velocities of 5.2 and 5.7 $\sigma/t_{LJ}$ along the (001) and (111) directions, respectively. Here $t_{LJ}$ is the natural characteristic time unit, $t_{LJ}=\sqrt{m\sigma^2/\epsilon}$, where $m$ is the mass of each substrate atom. The effective shear modulus for lateral tip displacements is $G^* = 18.3\ \epsilon/\sigma^3$. The simulations were run with the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code \cite{plimpton95,lammps}. The equations of motion were integrated using the velocity-Verlet algorithm with time step 0.005$\ t_{LJ}$ \cite{allen87}. Temperature, $T$, only enters continuum theory through its effect on constitutive properties, and it is convenient to run simulations at low temperatures to minimize fluctuations. A Langevin thermostat was applied to solid atoms to maintain $T$=0.0001$\ \epsilon/k_B$, where $k_B$ is Boltzmann's constant. This is about four orders of magnitude below the melting temperature of a Lennard-Jones solid. The damping rate was $0.1\ t_{LJ}^{-1}$, and damping was only applied perpendicular to the sliding direction in friction measurements. 
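The truncated interaction of Eq. (\ref{eq:lj}) can be sketched as a minimal stand-alone function (variable names are ours; the shift $V_{\rm cut}$ is chosen so the potential is continuous at the cutoff, as in the text):

```python
def lj_truncated(r, eps_i=1.0, sigma=1.0, r_cut=1.5):
    """Truncated and shifted Lennard-Jones potential of Eq. (lj):
    V(r) = 4 eps_i [(sigma/r)^12 - (sigma/r)^6] - V_cut for r < r_cut,
    with V_cut chosen so that V vanishes continuously at r_cut,
    and V = 0 for r >= r_cut."""
    def pair(x):
        sr6 = (sigma / x) ** 6
        return 4.0 * eps_i * (sr6 * sr6 - sr6)
    if r >= r_cut:
        return 0.0
    return pair(r) - pair(r_cut)

# Truncating at the potential minimum gives the purely repulsive case.
RC_REPULSIVE = 2.0 ** (1.0 / 6.0)
```

Truncating at $r_{\rm cut}=2^{1/6}\ \sigma$ removes the attractive well entirely, while $r_{\rm cut}=1.5\ \sigma$ or $2.2\ \sigma$ retains adhesion with different interaction ranges.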
In simulations, a tip was initially placed about $r_{\rm cut}$ above the substrate. It was then brought into contact by applying a very small load, with the lateral position kept fixed. The load was changed in discrete steps and the system was allowed to equilibrate for 350$\ t_{LJ}$ at each load before making measurements. This interval is about 20 times longer than the time for sound to propagate across the substrate, and allowed full stress equilibration. Results from loading and unloading cycles showed no noticeable hysteresis. To obtain results near the pulloff force, we moved the tip away from the substrate at a very slow velocity $v=0.0003\ \sigma/t_{LJ}$ and averaged results over small ranges of displacement. This approach was consistent with constant load measurements and allowed us to reach the region that is unstable at constant load. To compare to continuum predictions we calculated the stresses exerted on the substrate by the tip. The force from the tip on each substrate atom was divided by the area per atom to get the local stresses. These were then decomposed into a normal stress or pressure $p$ on the substrate, and a tangential or shear stress ${\bf \tau}_{\rm sur}$. The continuum theories described in Sec. \ref{sec:continuum} assume that the projection of the force normal to the undeformed substrate equals the projection normal to the locally deformed substrate. This is valid in the assumed limits of $a/R << 1$ and ${\bf \tau}_{\rm sur}=0$. It is also valid for most of our simulations (within $<2$\%), but not for the case of bent commensurate tips where ${\bf \tau}_{\rm sur}$ becomes significant. Normal and tangential stresses for bent commensurate tips were obtained using the local surface orientation of the nominally spherical tip. Correcting for the orientation changed the normal stress by less than 5\% of the peak value, and the shear stress by less than 20\%. Friction forces are known to vary with many parameters \cite{muser04}. 
Of particular concern is the dependence on extrinsic quantities such as the stiffness of the system that imposes lateral motion. Results at constant normal load are often very different than those at fixed height, motion perpendicular to the nominal sliding direction can raise or lower friction, and the kinetic friction can be almost completely eliminated in very stiff systems \cite{muser03acp,socoliuc04}. A full re-examination of these effects is beyond the scope of this paper. Our sole goal is to measure the friction in a consistent manner that allows us to contrast the load dependent friction for different tip geometries and minimizes artifacts from system compliance. In friction simulations, the tip is sheared at a constant low velocity $v'=0.01\ \sigma/t_{LJ}$ along the (100) direction with a constant normal load. This is typical of AFM experiments where the low normal stiffness of the cantilever leads to a nearly constant normal load, and the high lateral stiffness limits lateral motion in the direction perpendicular to the sliding direction. The measured friction force varies periodically with time as the tip moves by a lattice constant of the substrate. The time-averaged or kinetic friction during sliding is very sensitive to both lateral stiffness and load \cite{socoliuc04}. We focus instead on the peak force, which is less sensitive. In the limit of low sliding velocities this would correspond to the static friction. For bent and stepped commensurate tips there is a single strong friction peak. For incommensurate and amorphous tips, there may be multiple peaks of different size corresponding to escape from different metastable energy minima \cite{muser01prl,muser03acp}. The static friction was determined from the highest of these friction peaks, since lateral motion would stop at any lower force. With a single peak per period, the time between peaks is $\sim \sigma/v'=100\ t_{LJ}$. 
This is several times the sound propagation time, and the measured force should be close to the static friction. For incommensurate tips the time between peaks was an order of magnitude smaller and dynamic effects may be more significant. However, they are not expected to affect the load dependence significantly, and are much too small to affect the dramatic difference between incommensurate and other tips. The total lateral stiffness of the system, $k$, corresponds to the derivative of $F$ with respect to lateral tip displacement evaluated at a potential energy minimum. Since the tip is rigid, $k$ is determined by displacements in the substrate and at the interface. The interfacial stiffness $k_{\rm i}$ and substrate stiffness $k_{\rm sub}$ add in series because stress is transmitted through interfacial interactions to the substrate. Thus the total stiffness is \cite{socoliuc04,luan05}: \begin{equation} k^{-1}=k_{\rm sub}^{-1}+k_{\rm i}^{-1} \ \ . \label{eq:stiff} \end{equation} If the tip were not rigid, it would also contribute a term to the right-hand side of Eq. (\ref{eq:stiff}). We evaluate $k$ from the derivative of $F$ during sliding, making the assumption that the results are in the quasistatic limit. For bent and stepped commensurate tips there is a single potential energy minimum, and for amorphous tips one minimum dominated the periodic force. For incommensurate tips, there are many closely spaced minima and we evaluate $k$ from the derivative in the minimum preceding the largest friction peak. Due to the small magnitude of forces and short time intervals, the relative errors in these values are as big as 50\%. To estimate the lateral stiffness in the substrate, $k_{\rm sub}$, we fix the relative positions of those substrate atoms that lie inside the contact, and move them laterally at a slow velocity. 
The total force between these atoms and the rest of the substrate is measured and its derivative with respect to distance gives the lateral stiffness in the substrate. In principle, there might also be some relative displacement between atoms in the contact that is not captured by this approach, but the results for the substrate stiffness are consistent with continuum predictions. Values of the adhesion energy per unit area $w$ were obtained for flat, rigid surfaces of the same nominal geometry as the tip. For bent crystal tips (Fig. \ref{fig:tips}), the tip was just flattened back into a crystal. For stepped tips, we used an unstepped crystal with the same spacing and interactions. For amorphous tips, an amorphous solid was cleaved with a flat surface rather than a sphere. The resulting surfaces were then brought into contact with the substrate and allowed to equilibrate at zero load. At the low temperatures used here, the adhesion energy is just the potential energy difference between contacting and separated configurations. \section{Nonadhesive contacts} \label{sec:nonadhere} \subsection{Pressure distribution} Figure \ref{fig:hertzpofr} contrasts the distribution of normal pressure $p$ under five tips: (a) dense, (b) bent commensurate, (c) bent incommensurate, (d) amorphous and (e) stepped. In each case, $R=100\ \sigma$ and the dimensionless load is $N/(R^2 E^*) =0.0018$. Hertz theory predicts the same pressure distribution (solid lines) for all tips. Points show the actual local pressure on each substrate atom as a function of radial distance $r$ from the center of the spherical tip, and circles in (c) and (d) show the average over bins of width $\sigma$. Clearly, small deviations in atomic structure lead to large changes in the mean pressure and the magnitude of pressure fluctuations. We find that these deviations become larger as $N$ is decreased and the contact radius drops closer to the atomic size. 
One possible source of deviations from Hertz theory is friction, but we find the mean tangential forces are small in most cases. The exception is the bent commensurate tip (Fig. \ref{fig:hertzpofr}(b)), where the tangential stress rises with $r$ and is comparable to the normal stress near the edge of the contact. This result is not surprising given the high friction measured for commensurate tips below, and reflects the strong tendency for atoms in the substrate to remain locked in epitaxial registry with atoms in the tip. However, the deviation from Hertz theory is in the opposite direction from that expected from friction. Since this contact was made by gradually increasing the load, friction should decrease the contact size rather than broaden it. Another possible origin of the deviations from Hertz theory is surface roughness. From continuum theory (Sec. \ref{sec:continuum}), this is characterized by the ratio of rms surface roughness $\Delta$ to normal displacement $\delta$. The normal displacement for all tips is about the same, $\delta \approx 1.5\ \sigma$, but $\Delta$ is difficult to define. The reason is that there is no unique definition of the surface height for a given set of atomic positions. For example, one might conclude that $\Delta=0$ for the substrate, since all atoms lie on the same plane. However, if a tip atom were moved over the surface with a small load, its height would increase as it moved over substrate atoms and be lowest at sites centered between four substrate atoms \cite{muser03acp}. For the parameters used here, the total height change is about $0.33\ \sigma$. Similar deviations from a sphere are obtained for the bent commensurate and incommensurate tips. The height change decreases as the ratio of the nearest-neighbor spacing to the Lennard-Jones diameter for interfacial interactions decreases, and is only $0.0007\ \sigma$ for the dense tip.
Amorphous and stepped tips have additional roughness associated with variations in the position of atomic centers relative to a sphere. The total variation is about $\sigma$, or about three times the height change as an atom moves over the substrate. A reasonable estimate is that $\Delta/\delta < 0.1$ for the bent commensurate and incommensurate tips, $\Delta/\delta < 10^{-3}$ for the dense tip, and $\Delta/\delta \sim 0.3$ for the amorphous and stepped tips. However, the ambiguity in $\Delta$ is one of the difficulties in applying continuum theory in nanoscale contacts. The closely spaced atoms on the dense tip approximate a continuous sphere, and the resulting pressure distribution is very close to Hertz theory (Fig. \ref{fig:hertzpofr}(a)). Results for the bent commensurate tip are slightly farther from Hertz theory. The deviations can not be attributed to roughness, because fluctuations at a given $r$ are small, and the pressure in the central region is not decreased. The main change is to smear the predicted sharp pressure drop at the edge of the contact. This can be attributed to the finite range of the repulsive potential between surfaces. We can estimate the effective interaction range by the change in height of an atom, $dh = 0.04\ \sigma$, as $p/E^*$ decreases from 0.1 to 0. The effective range is much smaller for the dense tip because $\sim 400$ times as many atoms contribute to the repulsive potential. In Hertz theory \cite{johnson85}, the separation between surfaces only increases with distance $(r-a)$ from the edge of the contact as $(8/3\pi)(r-a)^{3/2} (2a)^{1/2}/R$. Equating this to $dh$ gives $r-a \approx 1\ \sigma$ for the bent commensurate tip, which is very close to the range over which the edge of the contact is shifted from the Hertz prediction. Note that this analysis predicts that the shift in the edge of the contact will grow as $\sqrt{R}$, and simulations with larger $R$ confirm this. 
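The edge-smearing estimate above can be made concrete by inverting the Hertz gap expression for the distance $r-a$ at which the surface separation first equals $dh$. In this sketch, $dh = 0.04\ \sigma$ and $R = 100\ \sigma$ are taken from the text, while the contact radius $a \approx 10\ \sigma$ is an assumed representative value rather than a quoted one:

```python
import math

def edge_shift(dh, a, R):
    """Distance r - a at which the Hertz gap
    h(r) = (8/(3*pi)) * (r - a)**1.5 * sqrt(2*a) / R
    first equals the effective interaction range dh."""
    return (dh * R * 3.0 * math.pi / (8.0 * math.sqrt(2.0 * a))) ** (2.0 / 3.0)

# dh = 0.04 sigma and R = 100 sigma are quoted; a ~ 10 sigma is assumed.
shift = edge_shift(0.04, 10.0, 100.0)
# shift comes out of order 1 sigma, consistent with the observed smearing
# of the contact edge, and grows with R at fixed a and dh.
```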
However, the fractional change in $a$ decreases as $1/\sqrt{R}$. The larger values of pressure at low $r$ result from the greater stiffness of the repulsive potential as $p$ increases. All of the above effects could be included in continuum theory by changing the form of the repulsive potential \cite{greenwood97}. For bent incommensurate and amorphous tips (Fig. \ref{fig:hertzpofr} (c) and (d)), the variations in pressure at a given $r$ are as large as the mean (circles) \cite{footn}. While all atoms on the commensurate tip can simultaneously fit between substrate atoms, minimizing fluctuations in force, atoms on the incommensurate tip sample all lateral positions and experience a range of forces at a given height. The mean pressure for the incommensurate tip remains fairly close to the commensurate results, but there is slightly more smearing at large $r$ due to the variations in relative height of substrate and tip atoms. The mean pressure on the amorphous tip shows the depression at small $r$ and increase at large $r$ that are expected for rough tips in continuum theory \cite{johnson85}. The magnitude of the central drop is about 18\%, which is consistent with $\Delta/\delta \sim 0.2$ in continuum theory (Fig. 13.12 of Ref. \cite{johnson85}). The lack of a noticeable drop for incommensurate tips implies that the effective $\Delta/\delta < 0.03$. The implication is that the incoherent height changes on amorphous tips contribute to the effective roughness in continuum theory, while the atomic corrugation on bent tips does not. The effective roughness in both cases is about $0.1\ \sigma$ smaller than our estimates for $\Delta$ above. Results for stepped tips show the largest deviations from Hertz theory, and they are qualitatively different from those produced by random roughness. The terraced geometry of this tip (Fig. \ref{fig:tips}) is closest to that of a flat punch.
In continuum theory, the pressure on a flat punch is smallest in the center, and diverges as the inverse square root of the distance from the edge. The simulation results show qualitatively similar behavior. The main effect of atomic structure is to cut off the singularity at a distance corresponding to an atomic separation. Similar effects are observed in simulations of other geometries \cite{vafek99,miller96,landman96}. Note that the terraces are only flat because the sphere was cut from a crystal that was aligned with the substrate. We also examined tips cut from a crystal that was slightly tilted away from the (001) direction \cite{foot1}. This produces inclined terraces that contact first along one edge. The resulting pressure distribution is very different, and closest to the continuum solution for contact by an asymmetric wedge. Figure \ref{fig:hertzpofr} has an important general implication about the probability $P(p)$ of finding a local pressure $p$ at a point in the contact. For smoothly curved surfaces, continuum theory predicts that the derivative of the pressure diverges at the edge of the contact \cite{johnson85}. Thus $P(p) \rightarrow 0$ as $p\rightarrow 0$ \cite{persson01}. The finite resolution at atomic scales always smears out the change in $p$, leading to a non-zero value of $P(0)$. Indeed, the approximately constant value of $dp/dr$ near the contact edge in Fig. \ref{fig:hertzpofr} leads to a local maximum in $P$ at $p=0$. Similar behavior is observed for randomly rough atomic contacts \cite{luan05mrs} and in continuum calculations for piecewise planar surfaces \cite{hyun04,pei05}. Plastic deformation is usually assumed to occur when the deviatoric shear stress $\tau_s$ exceeds the yield stress of the material. In Hertz theory, $\tau_s$ reaches a maximum value at a depth of about 0.5$a$. The pressure variations at the surface shown in Fig. 
\ref{fig:hertzpofr} lead to changes in both the magnitude and position of the peak shear stress \cite{luan05}. Factors of two or more are typical for amorphous and stepped tips. Thus tip geometry may have a significant impact on the onset of yield. Of course atomistic effects also influence the yield stress at nanometer scales, and a full evaluation of this effect is left to future work. Saint-Venant's principle implies that the pressure distribution should become independent of tip geometry at depths greater than about $3a$, but the shear stress at these depths is substantially smaller than peak values and yield is unlikely to occur there. \subsection{Variations with load} Figure \ref{fig:hertz} shows the load dependence of (a) normal displacement, (b) radius, (c) friction and (d) lateral stiffness for the same tips as Fig. \ref{fig:hertzpofr}. Each quantity is raised to a power that is chosen so that Hertz theory predicts the indicated straight line. A small finite-depth correction ($\sim 2$\%) is applied to the Hertz prediction for $\delta$ (Eq. (\ref{eq:depth})). As also found for cylindrical contacts \cite{luan05}, the normal displacement shows the smallest deviation from Hertz theory because it represents a mean response of many substrate atoms. Results for all bent crystals are nearly indistinguishable from the straight line. Results for the stepped surface are lower at small loads. Since the entire tip bottom contacts simultaneously, it takes a larger load to push the tip into the substrate. The amorphous results are shifted upwards by a fairly constant distance of about $0.2\ \sigma$. We define the origin of $\delta$ as the tip height where the first substrate atom exerts a repulsive force on the tip. This is strongly dependent on the height of the lowest tip atom, while subsequent deformation is controlled by the mean tip surface. Agreement with Hertz is greatly improved by shifting the amorphous curve by this small height difference.
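The powers used to linearize Fig. \ref{fig:hertz} follow directly from the Hertz relations for a sphere on a flat. The sketch below assumes the standard forms $a^3 = 3NR/4E^*$ and $\delta = a^2/R$ (consistent with the scalings discussed here, though $E^* = 55$ is an arbitrary illustrative value):

```python
def hertz_radius(N, R, Estar):
    """Hertz contact radius for a sphere on a flat: a**3 = 3*N*R/(4*E*)."""
    return (3.0 * N * R / (4.0 * Estar)) ** (1.0 / 3.0)

def hertz_displacement(N, R, Estar):
    """Hertz normal displacement: delta = a**2 / R."""
    a = hertz_radius(N, R, Estar)
    return a * a / R

# Doubling the load doubles a**3 and delta**(3/2), so plotting these
# powers against N produces the straight Hertz lines of Fig. hertz.
R, Estar = 100.0, 55.0  # R from the text; E* is an arbitrary value here
a1, a2 = hertz_radius(1.0, R, Estar), hertz_radius(2.0, R, Estar)
d1, d2 = hertz_displacement(1.0, R, Estar), hertz_displacement(2.0, R, Estar)
```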
Note that the zero of $\delta$ is difficult to determine experimentally and is usually taken as a fit parameter. If this is done, even results for the amorphous system yield values of $R$ and $E^*$ that are within 10\% of their true values. Thus measurements of $\delta$, interpreted with continuum theory for spheres, can provide useful values of elastic properties at nanometer scales. As expected from the observed pressure distributions (Fig. \ref{fig:hertzpofr}), the contact radius is generally larger than the Hertz prediction. The shift is smallest for the dense tip because it approximates a continuous surface and the high density leads to a repulsive potential that rises more than a hundred times more rapidly than for other tips. Results for bent crystal and amorphous tips are shifted upwards by fairly load-independent offsets of $\sim 1 - 3\ \sigma$, leading to large fractional errors at low loads (up to 100\%). The stepped crystal shows qualitatively different behavior, with $a$ rising in discrete steps as sequential terraces come into contact. Note that the size of the first terrace is not unique, but depends on the registry between the bounding sphere and crystalline lattice \cite{foot1}. Larger deviations may be observed when the first step has very few atoms. Such tips may be more likely to be chosen for AFM studies because they tend to give sharper images. In order to predict the friction between surfaces, one must make assumptions about how $F$ depends on area and load. The straight line in Fig. \ref{fig:hertz}(c) corresponds to a friction force that is proportional to load. Static friction values for bent and stepped commensurate surfaces are consistent with this line and a coefficient of friction $\mu \equiv F/N = 0.63$. Analytic \cite{muser01prl} and simple numerical \cite{ringlein04} models show that this is a general feature of mated surfaces where each tip atom has the same lateral position relative to substrate atoms. 
The friction on amorphous and incommensurate surfaces is always lower and scales more closely with the contact area, as indicated by broken line fits to $F \propto N^{2/3}$ and discussed further in Sec. \ref{sec:adhereload} \cite{foot2}. Many authors have made this assumption in fitting experimental data, but it is not obvious why it should hold. The friction per unit area between flat amorphous surfaces decreases as the square root of the area, but rises linearly with the normal pressure \cite{muser01prl}. Wenning and M\"user have noted that these two factors may combine for spherical contacts to give a net friction that rises linearly with area \cite{wenning01}. However, their argument would predict that the frictional force in a cylinder-on-flat geometry would not scale with area, and our previous simulations found that it did \cite{luan05}. Continuum theory predicts that the lateral stiffness $k=8G^* a$, and should follow the straight line in Fig. \ref{fig:hertz}(d). Measured values of the total stiffness (open symbols) are always substantially lower. This is because continuum theory assumes that there is no displacement at the interface, only within the substrate. In reality, the frictional force is always associated with atomic scale displacements of interfacial atoms relative to a local energy minimum \cite{ringlein04,muser03acp}. The derivative of force with displacement corresponds to an interfacial stiffness $k_{\rm i}$ that adds in series with the substrate contribution (Eq. (\ref{eq:stiff})) \cite{socoliuc04,luan05}. Our numerical results show that $k_{\rm i}$ can reduce $k$ by more than an order of magnitude, particularly for tips where $F$ is small. We also evaluated the substrate stiffness $k_{\rm sub}$ by forcing all contacting substrate atoms to move together. These results (filled symbols) lie much closer to the Hertz prediction. 
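The two scalings discussed here, $F \propto N$ for commensurate tips and $F \propto N^{2/3}$ for incommensurate and amorphous tips, can be distinguished by the slope of a log-log fit. The sketch below uses synthetic data only, not simulation values:

```python
import math

def loglog_slope(loads, frictions):
    """Least-squares slope of log F versus log N; slope 1 corresponds to
    F ~ N and slope 2/3 to F ~ N**(2/3), i.e. friction ~ Hertz area."""
    xs = [math.log(n) for n in loads]
    ys = [math.log(f) for f in frictions]
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    num = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    den = sum((x - xm) ** 2 for x in xs)
    return num / den

# Synthetic data: friction proportional to the Hertz area, F ~ N**(2/3).
loads = [10.0, 20.0, 40.0, 80.0]
frictions = [2.0 * n ** (2.0 / 3.0) for n in loads]
slope = loglog_slope(loads, frictions)  # close to 2/3
```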
Only the stepped tip shows large discrepancies, and these are correlated with the large deviation between the measured and predicted contact radii. \section{Adhesive contacts} \label{sec:adhere} \subsection{Pressure distribution} Figure \ref{fig:cutoff} compares the calculated pressure distribution in adhesive contacts with the Maugis-Dugdale prediction (lines). A bent commensurate tip was used to minimize deviations from continuum predictions for a sphere. Results for two different $r_{\rm cut}$ are presented to indicate difficulties in fitting longer-range interactions to M-D theory. The work of adhesion was calculated for unbent surfaces (Sec. \ref{sec:method}) with $\epsilon_i/\epsilon = 0.5$, yielding $w=1.05\ \epsilon/\sigma^2$ and $1.65\ \epsilon/\sigma^2$ for $r_{\rm cut}=1.5\ \sigma$ and $2.2\ \sigma$, respectively. This leaves only one fitting parameter in M-D theory. For the dashed lines, the width of the attractive interaction $h_0 = w/\sigma_0$ was chosen to coincide with the range of the atomic potential. The dotted line shows a fit with $h_0=0.8\ \sigma$ for $r_{\rm cut}=2.2\ \sigma$, which gives better values for the pulloff force, but poorer radii (Sec. \ref{sec:adhereload}). In M-D theory, it is common to identify two radii, $a$ and $c$, with the inner and outer edges of the plateau in the pressure, respectively \cite{maugis92,johnson97}. For $r< a$ the surfaces are in hard-sphere contact, and for $a<r<c$ they are separated and feel the constant attraction. The continuously varying interactions between atoms in our simulations lead to qualitatively different behavior. There is no sharp transition where the surfaces begin to separate, and the attraction shows a smooth rise to a peak, followed by a decay. To facilitate comparison to continuum theories, we introduce the three characteristic radii indicated by arrows for each $r_{\rm cut}$. 
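In the Maugis-Dugdale model the square-well attraction satisfies $w = \sigma_0 h_0$, so once $w$ is measured the width $h_0$ is the single remaining fit parameter. A minimal sketch using the values quoted for $r_{\rm cut}=2.2\ \sigma$:

```python
def dugdale_stress(w, h0):
    """Maugis-Dugdale square-well stress sigma_0 = w / h0, so that the
    work of adhesion w = sigma_0 * h0 is preserved for any choice of h0."""
    return w / h0

# Values quoted in the text for r_cut = 2.2 sigma (reduced units):
w = 1.65
sigma0_range = dugdale_stress(w, 1.2)  # h0 matched to the potential range
sigma0_pull = dugdale_stress(w, 0.8)   # h0 tuned to fit the pulloff force
# A shorter assumed range h0 implies a proportionally larger adhesive stress.
```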
The innermost, $r_a$, corresponds to the point where the interaction changes from repulsive to attractive and can be calculated in M-D theory. The outermost, $r_c$, corresponds to $c$ in M-D theory -- the point where interactions end. The middle, $r_b$, corresponds to the peak attraction. It was also studied by Greenwood \cite{greenwood97} for contact of a featureless sphere that interacted with the flat substrate through Lennard-Jones interactions. He found $r_b$ lay close to the JKR prediction for contact radius at large loads. All three radii converge in the limit of repulsive interactions or the JKR limit of an infinitely narrow interaction range. At small radii the atomistic results for $p$ lie above M-D theory, and they drop below near $r/\sigma=10$. These deviations can be attributed to the increasing stiffness of the repulsion between tip and substrate atoms with increasing pressure. Just as in the non-adhesive case, the stiffer interactions in the center of the tip lead to bigger pressures for a given interpenetration. The change in pressure with separation produces less smearing at the edge of the repulsive region of the contact than for the non-adhesive case (Fig. \ref{fig:hertzpofr}(b)), and the values of $r_a$ are very close to M-D theory. This can be understood from the fact that surfaces separate much more rapidly in JKR theory ($\propto (r-a)^{1/2}$) than in Hertz theory ($\propto (r-a)^{3/2}$) (Sec. \ref{sec:nonadhere}). Thus the same height change in the repulsive region corresponds to a much smaller change in radius. Of course the finite range of the attractive tail of the potential leads to a broad region of adhesive forces out to $r_c$. The continuous variation of $p$ in the attractive tail is only crudely approximated by M-D theory. The difficulty in determining the optimum choice for $h_0$ increases with the range of interactions, as discussed further below. Figure \ref{fig:r05pofr} shows the effect of tip geometry on pressure distribution. 
We found that the work of adhesion was very sensitive to tip geometry. To compensate for this effect, we varied $\epsilon_i$ (Table I) to yield the same $w$ for the tips in Fig. \ref{fig:r05pofr}. Then all tips should have the same pressure distribution in continuum theory. The M-D predictions for $p$ are not shown because even the bent commensurate tips produce significantly different results (Fig. \ref{fig:cutoff}). Instead, we compare other tips to the bent commensurate tip. As for non-adhesive tips, local fluctuations in pressure are small for commensurate tips (Fig. \ref{fig:r05pofr}(a) and (d)) and comparable to the mean pressure for incommensurate or amorphous tips (Fig. \ref{fig:r05pofr}(b) and (c)). Note, however, that the fluctuations become smaller in the adhesive regime (large $r$). This is because the potential varies more slowly with height, so fluctuations in the separation between atoms have less effect on $p$. One consequence is that the outer edge of the contact is nearly the same for commensurate and incommensurate tips. The radii for the amorphous tip are significantly larger than for bent tips, presumably because of a much greater effective roughness. As for nonadhesive tips, the mean pressure on incommensurate tips is close to the commensurate results. Adhesion reduces the roughness-induced drop in pressure in the central region of the amorphous tip. For the stepped tip, the contact radius is dominated by the size of the terraces, but adhesive tails are visible outside the edge of each terrace. Only $r_c$ is easily defined for the stepped tips. This increases in a nearly stepwise manner, and its load dependence is not shown below. For the amorphous and incommensurate tips, radii are determined from the mean pressure at a given radius (open circles). Error bars are less than $0.5\ \sigma$.
\subsection{Variations of radius and displacement with load} \label{sec:adhereload} Figure \ref{fig:r05rofN}(b) compares the measured contact radii for the tips of Fig. \ref{fig:r05pofr} to M-D theory as load is varied. The simulation results for $r_c$ (open symbols) decrease with decreasing load as $r_a$ (closed symbols) decreases to zero. All interactions in the contact are then adhesive. As $r_c$ continues to drop, the area contributing to adhesion drops, and the load rises back toward zero. This regime is not considered in the original M-D theory. The extension to $r_a=0$ by Johnson and Greenwood (JG) \cite{johnson97} is shown by a dashed line in the figure. Along this line the stress in the contact is constant, giving $N=-\sigma_0 \, \pi r_c^2$. As for non-adhesive tips, the contacts tend to be larger than continuum theory predicts. However, the shift for bent tips is smaller than in the non-adhesive case, and the commensurate and incommensurate results are closer, as expected from Fig. \ref{fig:r05pofr}. Larger deviations are observed for the amorphous tip, with radii typically 2 or 3 $\sigma$ larger than predicted. The deviation becomes most pronounced at negative loads, where the amorphous tip remains in contact well below the predicted pulloff force. Figure \ref{fig:r05rofN}(a) compares the value of $r_b$ to the JKR prediction for contact radius. As found by Greenwood \cite{greenwood97}, the numerical results are close to JKR at large loads, but deviate at negative loads because M-D and JKR predict different pulloff forces. Since JKR assumes a singular adhesive stress at the radius of the contact, it seems natural that its predictions lie closest to the position of the peak tensile stress. Figure \ref{fig:e05rofN} shows the calculated radii for bent and amorphous tips with the same interaction energy $\epsilon_i/\epsilon = 0.5$. The small changes in tip geometry lead to a roughly four fold variation in both $w$ and $N_c$ (Table I). 
The largest $w$ is obtained for commensurate tips because all atoms can optimize their binding coherently. Atoms on incommensurate tips sample all positions relative to the substrate, and can not simultaneously optimize the binding energy. The larger height fluctuations on amorphous tips lead to even greater reductions in $w$. In the simulations, the pulloff force, $N_c$, corresponds to the most negative load where the surfaces remain in contact and the $r_i$ can be defined. Its normalized value, $N_c/\pi w R$, is equal to -3/2 in JKR theory, -2 in DMT theory, and lies between these values for M-D theory. Table I shows normalized results for various tips. As expected from the good agreement in Figs. \ref{fig:r05rofN} and \ref{fig:e05rofN}, results for bent tips lie between JKR and DMT values and can be fit with M-D theory. The values for stepped and amorphous tips lie outside the bounds of M-D theory. This is an important result because pulloff forces are frequently used to measure the work of adhesion. Based on continuum theory, one would expect that the uncertainty in such measurements is less than the difference between JKR and DMT predictions. Our results show factor of two deviations for stepped tips, which may be common in scanning probe experiments. Other simulations showed that the stepped tip values are strongly dependent on the size of the first terrace, as well as any tilt of the terraces or incommensurability. It might seem surprising that the stepped tip has a smaller pulloff force than the bent tip, because the entire first terrace can adhere without any elastic energy penalty. However this effect is overcome by the greater contact size for bent tips: The radius of the first terrace, $r_t \sim 6\ \sigma$, is smaller than the values of $r_b$ and $r_c$ at pulloff for bent tips. As the adhesion is decreased, the predicted contact size at pulloff will drop below $r_t$ and the stepped tip may then have a larger pulloff force than predicted. 
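The normalized pulloff force quoted in Table I follows directly from its definition, with the JKR and DMT limits bracketing the Maugis-Dugdale range. A minimal sketch (the values of $w$ and $R$ below are illustrative only):

```python
import math

def normalized_pulloff(N_c, w, R):
    """Dimensionless pulloff force N_c / (pi * w * R): equals -3/2 in JKR
    theory, -2 in DMT theory, and lies in between for Maugis-Dugdale."""
    return N_c / (math.pi * w * R)

# Illustrative parameters in reduced units.
w, R = 0.46, 100.0
N_c_jkr = -1.5 * math.pi * w * R
N_c_dmt = -2.0 * math.pi * w * R
# Tips whose normalized pulloff falls outside [-2, -3/2], like the stepped
# and amorphous tips in Table I, cannot be fit by M-D theory.
```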
This limit can also be reached by increasing the width of the first terrace. For a tip that lies entirely within a sphere of radius $R$, $r_t^2 < R^2-(R-d)^2\approx 2dR$ where $d$ is the layer spacing in the crystal. For our geometry this corresponds to $r_t < 12\ \sigma$, which is about twice the value for our tip. As noted above, terraces with even smaller radii may be preferentially selected for imaging studies and will lead to lower $|N_c|$. As $r_{\rm cut}$ increases, it becomes harder for the M-D model to simultaneously fit both the radii and the pulloff force. Figure \ref{fig:cutoff22} shows simulation data for a bent commensurate tip with $r_{\rm cut}=2.2\ \sigma$. Using the value of $h_0=1.2\ \sigma$ (Fig. \ref{fig:cutoff}) reproduces all radii fairly well at large loads, but gives a substantially different pulloff force, $-906\ \epsilon/\sigma$ instead of $-872\ \epsilon/\sigma$. Decreasing $h_0$ to $0.8\ \sigma$ fits the pulloff force, and improves the fit to $r_a$ at low $N$. However, the predicted values for $r_a$ at large $N$ are slightly too high and the values for $r_c$ are shifted far below ($\sim 2-3\ \sigma$) simulation data. This failure is not surprising given the crude approximation for adhesive interactions in M-D theory. As might be expected, the best values for the pulloff force are obtained by fitting the region near the peak in the force, rather than the weak tail (Fig. \ref{fig:cutoff}). It should be noted that for bent crystalline and amorphous tips, all of our results for $r$ can be fit accurately to M-D theory if $E^*$, $w$, and $R$ are taken as adjustable parameters. The typical magnitude of adjustments is 10 to 30\%, which is comparable to typical experimental uncertainties for these quantities in nanoscale experiments. Indeed one of the common goals of these experiments is to determine any scale dependence in continuum parameters. 
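The geometric bound on the first-terrace radius can be sketched numerically. Here $R = 100\ \sigma$ is quoted, while the layer spacing $d \approx 0.72\ \sigma$ is inferred from the quoted bound $r_t < 12\ \sigma$ and should be treated as an assumption:

```python
import math

def max_terrace_radius(R, d):
    """Bound on the first-terrace radius for a tip inscribed in a sphere:
    r_t**2 < R**2 - (R - d)**2, approximately 2*d*R for d << R."""
    return math.sqrt(R * R - (R - d) ** 2)

# R = 100 sigma is quoted; d ~ 0.72 sigma is inferred, not quoted.
r_t_max = max_terrace_radius(100.0, 0.72)
r_t_approx = math.sqrt(2.0 * 0.72 * 100.0)  # small-d approximation, ~12 sigma
```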
For this reason it would be difficult to test continuum theory in scanning probe experiments even if $r$ could be measured directly. Experiments can more easily access the variation of normal displacement with load. This requires subtracting the height change due to the normal compliance of the machine controlling the tip, which is difficult for standard AFM cantilevers, but possible in stiffer devices \cite{jarvis93,kiely98,asif01}. The absolute zero of $\delta$ is not fixed, but must be fitted to theory. Figure \ref{fig:r05dofN} shows two measures of the normal displacement in our simulations. One, $\delta_{\rm tip}$, corresponds to the experimentally accessible tip displacement. We associate the zero of $\delta_{\rm tip}$ with the point where the first substrate atom exerts a repulsive force. The second, $\delta_{\rm sur}$, is the actual depression of the lowest substrate atom on the surface relative to the undeformed surface. The two differ because of the interfacial compliance normal to the surface, which is assumed to vanish in continuum theory. The simulation results for $\delta$ are more sensitive to the finite sample depth $D$ and lateral periodicity $L $ than other quantities. To account for this, the predictions of M-D and JKR theory are shifted by the leading analytic corrections in $a/D$ \cite{johnson01,sridhar04,adams06}: \begin{equation} \delta = \delta_{H}(1-b\frac{a}{D})+\delta_{\rm adhesion}(1-d\frac{a}{D}) \, , \label{eq:depth} \end{equation} where $\delta_H$ is the Hertz prediction (Eq. (\ref{eq:hertzd})), $b$ and $d$ are fit parameters, and $\delta_{\rm adhesion} =-\sqrt{2w\pi a /{E}^*} $ for JKR theory and $-(2\sigma_0/E^*)\sqrt{c^2- a^2}$ for M-D theory. We obtained $b=0.8$ from simulations with dense, non-adhesive tips (Fig. \ref{fig:hertz}) and $d=0.3$ from simulations with dense, adhesive tips. Results for dense tips are then indistinguishable from the fit lines in Figs. \ref{fig:hertz}(a) and \ref{fig:r05dofN}. 
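Equation (\ref{eq:depth}) can be sketched numerically for the JKR case. This assumes the standard Hertz form $\delta_H = a^2/R$ for Eq. (\ref{eq:hertzd}) and uses the fitted $b=0.8$ and $d=0.3$; the value $E^* = 55$ is an arbitrary assumption:

```python
import math

def delta_corrected(a, R, Estar, w, D, b=0.8, d=0.3):
    """Finite-depth correction of Eq. (depth) for the JKR displacement,
    assuming delta_H = a**2 / R and delta_adh = -sqrt(2*pi*w*a/E*);
    b and d are the constants fitted from dense-tip simulations."""
    delta_H = a * a / R
    delta_adh = -math.sqrt(2.0 * math.pi * w * a / Estar)
    return delta_H * (1.0 - b * a / D) + delta_adh * (1.0 - d * a / D)

# Illustrative reduced-unit values; only R = 100 and w = 0.46 are quoted.
delta_deep = delta_corrected(10.0, 100.0, 55.0, 0.46, 1e12)  # D -> infinity
delta_finite = delta_corrected(10.0, 100.0, 55.0, 0.46, 100.0)
# The finite-depth terms reduce delta for a substrate of depth D ~ L.
```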
Values of $b$ and $d$ from numerical studies of continuum equations are of the same order \cite{johnson01,sridhar04,adams06}, but to our knowledge these calculations have not considered our geometry where $L\approx D$. With the finite-size corrections, continuum results for $\delta$ lie very close to the simulation data for bent tips. Agreement is best for $\delta_{\rm sur}$ because neither it nor continuum theory include the interfacial compliance. The choice of zero for $\delta_{\rm tip}$ is not precisely consistent with M-D theory. The first repulsive interaction would correspond to the first non-zero $r_a$, which occurs at a slightly negative $\delta$ in M-D theory. For the parameters used here, the shift in $\delta$ is about 0.24 $\sigma$, which is about twice the discrepancy between $\delta_{\rm tip}$ and M-D theory at large loads. This implies that the interfacial compliance produces a comparable correction. The effects of interfacial compliance are biggest for the most negative values of $\delta_{\rm tip}$. Here $\delta_{\rm tip}$ decreases monotonically as $N$ increases to zero. In contrast, $\delta_{\rm sur}$ is flat and then increases back towards zero. In this regime, $r_a =0$ and the only interaction comes from the attractive tail of the potential. The net adhesive force gradually decreases ($N$ increases) as the magnitude of the separation between tip and substrate, $\Delta \delta \equiv \delta_{\rm tip}-\delta_{\rm sur}$, increases (inset). Most of this change occurs in $\delta_{\rm tip}$, while the displacement of the surface relaxes back to zero as the attractive force on it goes to zero. The JG extension to M-D theory is expressed in terms of $\delta_{\rm tip}$ and provides a good description of its change with $N$ (dashed line), even with the assumption of a constant attractive $\sigma_0$. For amorphous tips, both values of $\delta$ are substantially above M-D theory. The shifts are bigger than in the non-adhesive case, about $0.35\ \sigma$ vs. 
0.25$\ \sigma$. This is correlated with the larger than predicted pulloff force. Based on the increase in $|N_c|$, the effective work of adhesion appears to be larger by about 30\% than that measured for a flat surface. The stronger adhesive contribution to the total normal force leads to a larger value of $\delta$. One may understand these changes in terms of the effect of surface roughness. Atomic-scale roughness on a nominally flat surface prevents many atoms from optimizing their binding energy. As $\delta$ and the contact area shrink, the long wavelength height fluctuations become irrelevant and no longer prevent the few atoms remaining in the contact from adhering. Thus while the large load values of $\delta$ can be fit to continuum predictions with the measured $w$ and a simple shift in the origin of $\delta$, the small $N$ values correspond to a larger work of adhesion. The magnitude of the increase ($\sim 30$\%) is modest given that the incommensurate tip has about twice as large a $w$ as the amorphous tip for the same interaction energy $\epsilon_i$. The data for stepped tips are qualitatively different than the others. As for the non-adhesive case, $\delta$ is lower than the continuum prediction at large loads, because the flat tip is harder to push into the substrate. The deviation increases rapidly at negative loads, with a sharp drop near $N=0$ where the contact shrinks to the first terrace. As noted above, the radius of the first terrace is smaller than the radius at pulloff predicted by continuum theory, and the pulloff force is less than half the predicted value. \subsection{Friction and lateral stiffness} \label{sec:adherefriction} Scanning probe microscopes can most easily detect friction forces and the lateral compliance of the tip/substrate system \cite{carpick97,carpick97b}. Figure \ref{fig:r05FofN} shows $F$ as a function of load for five different adhesive tips. 
As for non-adhesive tips, tip geometry has a much larger effect on $F$ than other quantities, and values for bent commensurate and incommensurate tips differ by two orders of magnitude. Since the friction was measured at constant load (Sec. \ref{sec:method}), only values in the stable regime, $d\delta_{\rm tip}/dN >0 $ could be obtained. Even in this regime, we found that the tip tended to detach at $N > N_c$. This was particularly pronounced for commensurate tips. Indeed the bent and stepped commensurate tips detached after the first peak in the friction force for all the negative loads shown in Fig. \ref{fig:r05FofN}(a). At loads closer to the pulloff force, detachment occurred at even smaller lateral displacements. As noted above, bent commensurate tips have the strongest adhesion energy because all atoms can simultaneously optimize their binding. For the same reason, the adhesion energy changes rapidly as atoms are displaced laterally away from the optimum position, allowing pulloff above the expected $N_c$. The extent of the change depends on the sliding direction and registry \cite{harrison99,muser03acp,muser04}. We consider sliding in the (100) direction (Sec. \ref{sec:method}), where atoms move from points centered between substrate atoms towards points directly over substrate atoms. This greatly reduces the binding energy, leading to detachment at less than half the pulloff force. Note that changes in binding energy with lateral displacement lead directly to a lateral friction force \cite{muser03acp} and the bent and stepped commensurate tips also have the highest friction. We suspect that tips with higher friction may generally have a tendency to detach farther above $N_c$ during sliding than other tips. At the macro scale, friction is usually assumed to be directly proportional to load. All the tips have substantial friction at zero load, due to adhesion. 
The friction force also varies non-linearly with $N$, showing discrete jumps for the stepped tip, and curvature for the other tips, particularly near $N_c$. The curvature at small $N$ is reminiscent of the dependence of radius on load. Several authors have fit AFM friction data assuming that $F$ scales with contact area, and using a continuum model to determine the area as a function of load \cite{carpick97,carpick97b,pietrement01,lantz97,carpick04,carpick04b,schwarz97,schwarz97b}. The dotted lines in Fig. \ref{fig:r05FofN} show attempts to fit $F$ using the area from JKR theory with $w$ adjusted to fit the pulloff force and the proportionality constant chosen to fit the friction at high loads. None of the data is well fit by this approach. The solid lines show fits to an expression suggested by Schwarz \cite{schwarz03} that allows one to simply interpolate between DMT and JKR limits as a parameter $t_1$ is increased from 0 to 1. Reasonable fits are possible for non-stepped tips when this extra degree of freedom is allowed. Values of the friction per unit area are within 25\% of those determined below from the true area (Table II), but the values of $w$ and $t_1$ do not correspond to any real microscopic parameters. For example, for the three non-stepped tips where a direct measurement gives $w=0.46\ \epsilon/\sigma^2$, the fits give $w=0.41\ \epsilon/\sigma^2$, $0.55\ \epsilon/\sigma^2$ and $0.94\ \epsilon/\sigma^2$ for commensurate, amorphous and incommensurate tips, respectively. The value of $t_1$ varies from 0.1 to 0.5 to 1.5, respectively. Note that the value of $t_1=1.5$ for incommensurate tips is outside the physical range. In this case, and some experiments \cite{carpick04}, the friction rises more slowly than the JKR prediction, while M-D theory and simpler interpolation schemes \cite{schwarz03,carpick99} always give a steeper rise. Such data seem inconsistent with $F$ scaling with area. 
Our simulations allow us to test the relationship between $F$ and area without any fit parameters. However, it is not obvious which radius should be used to determine area. Figure \ref{fig:r05Fofr} shows friction plotted against $r_a^2$, $r_b^2$ and $r_c^2$. The stepped tip is not shown, since only $r_c$ is easily defined and it increases in one discontinuous jump over the range studied. For all other tips the friction is remarkably linear when plotted against any choice of radius squared. In contrast, plots of $N$ vs. $r^2$ show significant curvature. For $F$ to be proportional to area, the curves should also pass through the origin. This condition is most closely met by $r_a^2$, except for the incommensurate case. The idea that friction should be proportional to the area where atoms are pushed into repulsion seems natural. The other radii include attractive regions where the surfaces may separate far enough that the variation of force with lateral displacement is greatly reduced or may even change phase. However, in some cases the extrapolated friction remains finite as $r_a$ goes to zero and in others it appears to vanish at finite $r_a$. Also shown in Fig. \ref{fig:r05Fofr} are results for non-adhesive tips (open circles). Values for incommensurate and amorphous tips were doubled to make the linearity of the data more apparent. Only the bent commensurate data shows significant curvature, primarily at small $N$. As noted in Sec. \ref{sec:nonadhere}, friction is proportional to load for this tip, and the curvature is consistent with this and $a^2 \sim N^{2/3}$. No curvature is visible when adhesion is added, but the fact that the linear fit extrapolates to $F=0$ at $r_a/\sigma \sim 4$ suggests that the linearity might break down if data could be obtained at lower loads. Results for adhesive and non-adhesive amorphous tips can be fit by lines through the origin (not shown), within numerical uncertainty. 
The same applies for non-adhesive bent incommensurate tips, but adding adhesion shifts the intercept to a positive force at $r_a=0$. It would be interesting to determine whether this extrapolation is valid. Friction can be observed with purely adhesive interactions, since the only requirement is that the magnitude of the energy varies with lateral displacement \cite{muser03acp}. However, it may also be that the friction on incommensurate tips curves rapidly to zero as $r_a \rightarrow 0$. Indeed one might expect that for very small contacts the tip should behave more like a commensurate tip, leading to a more rapid change of $F$ with area or load. The only way to access smaller $r_a$ is to control the tip height instead of the normal load. This is known to affect the measured friction force \cite{muser04}, and most experiments are not stiff enough to access this regime. Figure \ref{fig:r05kofr} shows the lateral stiffness as a function of the three characteristic radii. Except for the stepped tip (not shown), $k$ rises linearly with each of the radii. As for non-adhesive tips, the slope is much smaller than the value $8G^*$ predicted by continuum theory (solid lines) because of the interfacial compliance (Eq. (\ref{eq:stiff})). The intercept is also generally different from the origin, although it comes closest to the origin for $r_a$. As for friction, it seems that the repulsive regions produce the dominant contribution to the stiffness. Results for non-adhesive tips are also included in Fig. \ref{fig:r05kofr}. They also are linear over the whole range, and the fits reach $k=0$ at a finite radius $a_k$. This lack of any stiffness between contacting surfaces seems surprising. Note however that linear fits to Fig. \ref{fig:hertz}(b) also would suggest that the radius approached a non-zero value, $a_0$, in the limit of zero load. Moreover, the values of $a_0$ and $a_k$ follow the same trends with tip geometry and have similar sizes. 
The finite values of $a_0$ and $a_k$ can be understood from the finite range of the repulsive part of the interaction. As long as atoms are separated by less than $r_{\rm cut}$ there is a finite interaction and the atoms are considered inside $a$. However, the force falls rapidly with separation, and atoms near this outer limit contribute little to the friction and stiffness. If $\delta h$ is the distance the separation must decrease to get a significant interaction, then $a_0 = (2R\delta h)^{1/2}$ at the point where the first significant force is felt. Taking the estimate of $\delta h=0.04\ \sigma$ from Sec. \ref{sec:nonadhere}, then $a_0 \sim 3\ \sigma$, which is comparable to the observed values of $a_0$ and $a_k$. The shift is smaller for the adhesive case because there are still strong interactions when $r_a$ goes to zero. The larger shifts for the amorphous tip may reflect roughness, since the first point to contact need not be at the origin. We conclude that the linear fits in Fig. \ref{fig:r05kofr} go to zero at finite radius, because $r_a$ overestimates the size of the region that makes significant contributions to forces, particularly for non-adhesive tips. Note that the plots for friction with non-adhesive tips (Fig. \ref{fig:r05Fofr}) are also consistent with an offset, but that the offset appears much smaller when plotted as radius squared. The slope of the curves in Fig. \ref{fig:r05Fofr} can be used to define a differential friction force per unit area or yield stress $\tau_f \equiv \partial F/ \partial \pi r_a^2$. It is interesting to compare the magnitude of these values (Table II) to the bulk yield stress of the substrate $\tau_y$. Assuming Lennard-Jones interactions, the ideal yield stress of an fcc crystal in the same shearing direction is 4 to 10 $\epsilon/\sigma^{3}$, depending on whether the normal load or volume is held fixed. The commensurate tip is closest to a continuation of the sample, and the force on all atoms adds coherently.
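The estimate $a_0 = (2R\delta h)^{1/2}$ above is simple geometry for a sphere approaching a flat, and can be checked in a minimal numerical sketch; the tip radius $R$ used below is illustrative only, since the actual radii are set in Sec. \ref{sec:method}:

```python
import math

def first_contact_radius(R, delta_h):
    """Radius a_0 = sqrt(2 R delta_h) at which a sphere of radius R,
    whose apex has approached a flat to within delta_h, first feels a
    significant repulsive force."""
    return math.sqrt(2.0 * R * delta_h)

# With delta_h = 0.04 sigma and an illustrative R = 100 sigma (assumed
# value, not a parameter from the text), a_0 is close to 3 sigma,
# comparable to the observed offsets a_0 and a_k.
a0 = first_contact_radius(100.0, 0.04)
```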
As a result $\tau_f$ is of the same order as $\tau_y$, even for the non-adhesive tip. Values for adhesive amorphous and incommensurate tips are about one and two orders of magnitude smaller than $\tau_y$, respectively. This reflects the fact that the tip atoms cannot optimize their registry with the substrate. Removing adhesive interactions reduces $\tau_f$ by an additional factor of about four in both cases. In continuum theory, $k=8G^* r$, and the slope of fits in Fig. \ref{fig:r05kofr} could then be used to determine the effective shear modulus $G^*$. However, as noted above, the interfacial compliance leads to much lower stiffnesses. To illustrate the magnitude of the change we quote values of $G' \equiv (1/8) \partial k /\partial r$ in Table II. All values are below the true shear modulus $G^* =18.3\ \epsilon/\sigma^3$ obtained from the substrate compliance alone (Fig. \ref{fig:hertz}). As always, results for bent commensurate tips come closest to continuum theory with $G'/G^* \sim 0.7$. Values for adhesive amorphous and incommensurate tips are depressed by factors of 3 and 20 respectively, and removing adhesion suppresses the value for amorphous tips by another factor of four. Carpick et al. noted that if friction scales with area and $k$ with radius, then the ratio $F/k^2$ should be constant \cite{carpick97thesis,carpick97b,carpick04,carpick04b}. Defining the frictional force per unit area as $\tau_f$ and using the expression for $k$ from continuum theory, one finds $64 {G^*}^2 F/\pi k^2 = \tau_f$. In principle, this allows continuum predictions to be checked and $\tau_f$ to be determined without direct measurement of contact size. Figure \ref{fig:r05ratofN} shows: \begin{equation} \tau_f^{\rm eff} \equiv 64 (G^*)^2 F/\pi k^2 \label{eq:taueff} \end{equation} as a function of $N$ for different tip geometries and interactions. Except for the non-adhesive stepped tip, the value of $F /k^2$ is fairly constant at large loads, within our numerical accuracy.
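Equation (\ref{eq:taueff}) follows from eliminating the contact radius between $F = \tau_f \pi a^2$ and $k = 8 G^* a$. A minimal sketch of this algebra, with illustrative values in reduced Lennard-Jones units:

```python
import math

def tau_f_eff(F, k, G_star):
    """Frictional stress inferred from friction F and lateral stiffness k
    via the continuum relation tau_f^eff = 64 G*^2 F / (pi k^2)."""
    return 64.0 * G_star**2 * F / (math.pi * k**2)

# Consistency check: if F = tau * pi * a^2 and k = 8 G* a, the inferred
# stress recovers tau exactly for any contact radius a.
tau, a, G = 0.5, 10.0, 18.3
F = tau * math.pi * a**2
k = 8.0 * G * a
```

Because $k$ enters squared, an interfacial compliance that halves the measured stiffness inflates the inferred stress by a factor of four, which is the mechanism behind the large overestimates discussed in the text.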
Some of the curves rise at small loads because the radius at which $F$ reaches zero in Fig. \ref{fig:r05Fofr} tends to be smaller than that where $k$ reaches zero in Fig. \ref{fig:r05kofr}. These small radii are where continuum theory would be expected to be least accurate. Note that the deviations are larger for the non-adhesive tips, perhaps because the data extends to smaller radii. The data for stepped tips are of particular interest because the contact radius jumps in one discrete step from the radius of the first terrace to the radius of the second. The friction and stiffness also show discontinuous jumps. Nonetheless, the ratio $F/k^2$ varies rather smoothly and even has numerical values close to those for other tips. The most noticeable difference is that the data for the nonadhesive stepped tip rises linearly with load, while all other tips tend to a constant at high load. These results clearly demonstrate that success at fitting derived quantities like $F$ and $k$ need not imply that the true contact area is following continuum theory. The curves for $\tau_f^{\rm eff}$ in Fig. \ref{fig:r05ratofN} are all much higher than values of the frictional stress $\tau_f$ obtained directly from the friction and area (Table II). Even the trends with tip structure are different. The directly measured frictional stress decreases from bent commensurate to amorphous to bent incommensurate, while $\tau_f^{\rm eff}$ is largest for the amorphous and smallest for the bent commensurate tip. These deviations from the continuum relation are directly related to the interfacial compliance $k_{\rm i}$. The continuum expression for the lateral stiffness neglects $k_{\rm i}$ and gives too small a radius at each load. This in turn over-estimates the frictional stress by up to two orders of magnitude. Similar effects are likely to occur in experimental data. 
Experimental plots of $F/k^2$ have been obtained for silicon-nitride tips on mica and sodium chloride \cite{carpick97thesis,carpick04,carpick04b} and on carbon fibers \cite{pietrement01}. Data for carbon fibers and mica in air showed a rapid rise with decreasing $N$ at low loads \cite{pietrement01,carpick97thesis}. For mica the increase is almost an order of magnitude, which is comparable to our results for non-adhesive bent incommensurate tips. This correspondence may seem surprising given that the experiments measured adhesion in air. However, the adhesive force was mainly from long-range capillary forces that operate outside the area of contact. Following DMT theory, they can be treated as a simple additive load that does not affect the contact geometry. In contrast, data for mica in vacuum is well fit by JKR theory, implying a strong adhesion within the contact \cite{carpick04,carpick04b}. The measured value of $F/k^2$ is nearly constant for this system, just as in our results for most adhesive tips. Results for carbon fibers in vacuum \cite{pietrement01} show a linear rise like that seen for nonadhesive stepped tips. From a continuum analysis of the carbon fiber data, the frictional stress was estimated to be $\tau_f \sim 300$ MPa assuming a bulk shear modulus of $G^*=9.5$ GPa \cite{pietrement01}. Note that Fig. \ref{fig:r05ratofN} would suggest $\tau_f/G^* \sim 0.1$ to 0.3, while the true values (Table II) are as low as $0.0002$. The data on carbon fibers could be fit with the bulk shear modulus, but data on mica and NaCl \cite{carpick97thesis,carpick04} indicated that $G^*$ was 3 to 6 times smaller than bulk values. Our results show that the interfacial compliance can easily lead to reductions of this magnitude and a corresponding increase in $\tau_f$, and that care must be taken in interpreting experiments with continuum models.
\section{Discussion and Conclusion} \label{sec:conclusions} The results described above show that many different effects can lead to deviations between atomistic behavior and continuum theory and quantify how they depend on tip geometry for simple interaction potentials (Fig. \ref{fig:tips}). In general, the smallest deviations are observed for the idealized model of a dense tip whose atoms form a nearly continuous sphere, although this tip has nearly zero friction and lateral stiffness. Deviations increase as the geometry is varied from a bent commensurate to a bent incommensurate to an amorphous tip, and stepped tips exhibit qualitatively different behavior. Tip geometry has the smallest effect on the normal displacement and normal stiffness (Figs. \ref{fig:hertz} and \ref{fig:r05dofN}) because they reflect an average response of the entire contact. Friction and lateral stiffness are most affected (Figs. \ref{fig:hertz} and \ref{fig:r05FofN}), because they depend on the detailed lateral interlocking of atoms at the interface. One difference between simulations and continuum theory is that the interface has a finite normal compliance. Any realistic interactions lead to a gradual increase in repulsion with separation rather than an idealized hard-wall interaction. In our simulations the effective range over which interactions increase is only about 4\% of the atomic spacing, yet it impacts results in several ways. For bent commensurate tips it leads to an increase in pressure in the center of the contact (Figs. \ref{fig:hertzpofr} and \ref{fig:cutoff}). The pressure at the edge of nonadhesive contacts drops linearly over about $2\ \sigma$, while continuum theory predicts a diverging slope. The width of this smearing grows as the square root of the tip radius and leads to qualitative changes in the probability distribution of local pressures \cite{persson01,hyun04,pei05,luan05mrs}. These effects could be studied in continuum theories with soft-wall interactions.
The normal interfacial compliance also leads to the offset in linear fits of $F$ vs. $r_a^2$ and $k$ vs. $r_a$ (Figs. \ref{fig:r05Fofr} and \ref{fig:r05kofr}). Fits to the friction and local stiffness extrapolate to zero at finite values of $r_a$ because atoms at the outer edge of the repulsive range contribute to $r_a$ but interact too weakly to contribute substantially to $F$ and $k$. This effect is largest for non-adhesive tips. Approximating a spherical surface by discrete atoms necessarily introduces some surface roughness. Even bent crystalline tips have atomic scale corrugations, reflecting the variation in interaction as tip atoms move from sites nestled between substrate atoms to sites directly above. Amorphous and stepped tips have longer wavelength roughness associated with their random or layered structures respectively. This longer wavelength roughness has a greater effect on the contacts. For non-adhesive interactions, incommensurate and amorphous tips have a lower central pressure and wider contact radius than predicted for ideal spheres. These changes are qualitatively consistent with continuum calculations for spheres with random surface roughness \cite{johnson85}. However, the effective magnitude of the rms roughness $\Delta$ is smaller than expected from the atomic positions. The correlated deviations from a sphere on stepped tips lead to qualitative changes in the pressure distribution on the surface (Figs. \ref{fig:hertzpofr} and \ref{fig:r05pofr}). However, these changes are also qualitatively consistent with what continuum mechanics would predict for the true tip geometry, which is closer to a flat punch than a sphere. We conclude that the usual approximation of characterizing tips by a single spherical radius is likely to lead to substantial errors in calculated properties. Including the true tip geometry in continuum calculations would improve their ability to describe nanometer scale behavior.
Unfortunately this is rarely done, and the atomic-scale tip geometry is rarely measured. Recent studies of larger tips and larger scale roughness are an interesting step in this direction \cite{thoreson06}. Roughness also has a strong influence on the work of adhesion $w$ (Table I). Values of $w$ were determined independently from interactions between nominally flat surfaces. For a given interaction strength, commensurate surfaces have the highest $w$, because each atom can optimize its binding simultaneously. The mismatch of lattice constants in incommensurate geometries lowers $w$ by a factor of two, and an additional factor of two drop is caused by the small ($\Delta \sim 0.3\ \sigma$) height fluctuations on amorphous surfaces. In continuum theory, these changes in $w$ should produce nearly proportional changes in pulloff force $N_c$, and tips with the same $w$ and $h_0$ should have the same $N_c$. Measured values of $N_c$ differ from these predictions by up to a factor of two. It is particularly significant that the dimensionless pulloff force for amorphous and stepped tips lies outside the limits provided by JKR and DMT theory. Experimentalists often assume that these bounds place tight limits on errors in inferred values of $w$. In the case of amorphous tips the magnitude of $N_c$ is 30\% higher than expected. The higher than expected adhesion in small contacts may reflect a decrease in effective roughness because long-wavelength height fluctuations are suppressed. Stepped tips show even larger deviations from continuum theory that are strongly dependent on the size of the first terraces \cite{foot1}. Tips selected for imaging are likely to have the smallest terraces and the largest deviations from continuum theory. Adding adhesion introduces a substantial width to the edge of the contact, ranging from the point where interactions first become attractive $r_a$ to the outer limits of attractive interactions $r_c$ (Fig. \ref{fig:cutoff}). 
As the range of interactions increases, it becomes increasingly difficult to fit both these characteristic radii and the pulloff force with the simple M-D theory (Fig. \ref{fig:cutoff22}). For short-range interactions, good fits are obtained with the measured $w$ for bent tips. Data for amorphous tips can only be fit by increasing $w$, due to the reduction in effective roughness mentioned above (Fig. \ref{fig:r05rofN}). For stepped tips the contact radius increases in discrete jumps as successive terraces contact the surface. The normal interfacial compliance leads to significant ambiguity in the definition of the normal displacement as a function of load (Fig. \ref{fig:r05dofN}). Continuum theory normally includes only the substrate compliance, while experimental measures of the total tip displacement $\delta_{\rm tip}$ include the interfacial compliance. The substrate compliance was isolated by following the displacement of substrate atoms, $\delta_{\rm sur}$, and found to agree well with theory for bent tips in the repulsive regime. Johnson and Greenwood's extension of M-D theory \cite{johnson97} includes the interfacial compliance in the attractive tail of the potential. It provides a good description of $\delta_{\rm tip}$ in the regime where $r_a=0$. Here $\delta_{\rm tip} - \delta_{\rm sur}$ increases to the interaction range $h_0$. Results for amorphous tips show the greater adhesion noted above. Stepped tips follow continuum theory at large loads but are qualitatively different at negative loads. The most profound effects of tip geometry are seen in the lateral stiffness $k$ and friction $F$, which vary by one and two orders of magnitude respectively. Continuum theories for $k$ do not include the lateral interfacial compliance $k_{\rm i}$. This adds in series with the substrate compliance $k_{\rm sub}$ (Eq. (\ref{eq:stiff})). Except for commensurate tips, $k_{\rm i} \ll k_{\rm sub}$ and the interface dominates the total stiffness \cite{luan05}.
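The series addition of interfacial and substrate compliances in Eq. (\ref{eq:stiff}) can be sketched as two springs in series; the values below are illustrative:

```python
def total_stiffness(k_i, k_sub):
    """Two springs in series: 1/k = 1/k_i + 1/k_sub.
    The total stiffness is always below the softer of the two."""
    return 1.0 / (1.0 / k_i + 1.0 / k_sub)

# When the interface is much softer than the substrate (k_i << k_sub),
# the total approaches k_i, which is why measured lateral stiffnesses
# fall far below the continuum prediction k = 8 G* a.
```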
Experiments have also seen a substantial reduction in the expected lateral stiffness from this effect \cite{socoliuc04,carpick97thesis}. The friction on non-adhesive commensurate tips (bent or stepped) increases linearly with load, as frequently observed for macroscopic objects. In all other cases, $F$ is a nonlinear function of load. Our ability to directly measure contact radii allowed us to show that $F$ scales linearly with contact area for incommensurate, amorphous and adhesive bent commensurate tips. These tips also show a linear scaling of $k$ with radius. While these scalings held for any choice of radius, the linear fits are offset from the origin. It appears that the effective area contributing to friction and stiffness is often a little smaller than the area of repulsive interactions corresponding to $r_a$. As noted above, the offset from $r_a$ appears to correspond to the finite range over which repulsive forces rise at the interface. Experimental data \cite{carpick97,carpick97b,pietrement01,lantz97,carpick04,carpick04b,schwarz97,schwarz97b} for friction and stiffness have been fit to continuum theory with the assumptions that $F \propto r^2$ and $k \propto r$, but without the offsets seen in Figs. \ref{fig:r05Fofr} and \ref{fig:r05kofr}. We showed that our data for bent and amorphous tips could be fit in this way (Fig. \ref{fig:r05FofN}), but that the fit parameters did not correspond to directly measured values. This suggests that care should be taken in interpreting data in this manner. We also examined the ratio $F/k^2$. In continuum theory, this is related to the friction per unit area $\tau_f^{\rm eff}$ through Eq. (\ref{eq:taueff}). Our results for $F/k^2$ (Fig. \ref{fig:r05ratofN}) show the range of behaviors observed in experiments, with a relatively constant value for adhesive cases, a rapid increase at low loads in some nonadhesive cases, and a linear rise for non-adhesive stepped tips. 
The directly measured values of $\tau_f$ (Table II) are smaller than $\tau_f^{\rm eff}$ by up to two orders of magnitude, and have qualitatively different trends with tip geometry. The difference is related to a reduction in the stiffness $k$ due to interfacial compliance. This reduces the inferred value of bulk shear modulus $G'$ and increases the calculated contact area at any given load. We expect that experimental results for $F/k^2$ will produce similar overestimates of the true interfacial shear force. It remains unclear why $F$ and $k$ should follow the observed dependence on $r_a$. Analytic arguments for clean, flat surfaces indicate that $F$ is very sensitive to structure, with the forces on commensurate, incommensurate and disordered surfaces scaling as different powers of area \cite{muser01prl,muser03acp}. Only when glassy layers are introduced between the surfaces does the friction scale in a universal manner \cite{he99,muser01prl,he01tl,he01b}. Wenning and M\"user \cite{wenning01} have argued that the friction on clean, amorphous tips rises linearly with area because of a cancellation of two factors, but have not considered $k$. Naively, one might expect that the length over which the force rises is a constant fraction of the lattice spacing and that $k$ is proportional to $F$. However, the friction traces change with load and do not always drop to zero between successive peaks. We hope that our results will motivate further analytic studies of this problem, and simulations with glassy films and more realistic potentials. While we have only considered single asperity contacts in this paper, it is likely that the results are relevant more broadly. Many experimental surfaces have random roughness on all scales that can be described by self-affine fractal scaling. Continuum models of contact between such surfaces show that the radius of most contacts is comparable to the lower length scale cutoff in fractal scaling \cite{hyun04,pei05}.
This is typically less than a micrometer, suggesting that typical contacts have nanometer scale dimensions where the effects considered here will be relevant. \acknowledgments We thank G. G. Adams, R. W. Carpick, K. L. Johnson, M. H. M\"user and I. Sridhar for useful discussions. This material is based upon work supported by the National Science Foundation under Grants No. DMR-0454947, CMS-0103408, PHY99-07949 and CTS-0320907. \newpage \section{Introduction} There has been rapidly growing interest in the behavior of materials at nanometer scales \cite{nanotechnology01}. One motivation is to construct ever smaller machines \cite{bhushan04}, and a second is to improve material properties by controlling their structure at nanometer scales \cite{valiev02}. For example, decreasing crystallite size may increase yield strength by suppressing dislocation plasticity, and material properties may be altered near free interfaces or grain boundaries. To make progress, this research area requires experimental tools for characterizing nanoscale properties. Theoretical models are also needed both to interpret experiments and to allow new ideas to be evaluated. One common approach for measuring local properties is to press tips with characteristic radii of 10 to 1000 nm into surfaces using an atomic force microscope (AFM) or nanoindenter \cite{carpick97,carpick97b,carpick04,carpick04b,pietrement01,lantz97,schwarz97,schwarz97b,asif01,jarvis93,kiely98,wahl98}. Mechanical properties are then extracted from the measured forces and displacements using classic results from continuum mechanics \cite{johnson85}. A potential problem with this approach is that continuum theories make two key assumptions that must fail as the size of contacting regions approaches atomic dimensions. One is to replace the atomic structure in the bulk of the solid bodies by a continuous medium with internal stresses determined by a continuously varying strain field. 
The second is to model interfaces by continuous, differentiable surface heights with interactions depending only on the surface separation. Most authors go further and approximate the contacting bodies by smooth spheres. In a recent paper \cite{luan05}, we analyzed the limits of continuum mechanics in describing nanometer scale contacts between non-adhesive surfaces with curvature typical of experimental probes. As in studies of other geometries \cite{miller96,landman96,vafek99}, we found that behavior in the bulk could be described by continuum mechanics down to lengths as small as two or three atomic diameters. However, the atomic structure of surfaces had profound consequences for much larger contacts. In particular, atomic-scale changes in the configuration of atoms on nominally cylindrical or spherical surfaces produced factor of two changes in the width of the contacting region and the stress needed to produce plastic yield, and order of magnitude changes in friction and stiffness. In this paper we briefly revisit non-adhesive contacts with an emphasis on the role of surface roughness. We then extend our atomistic studies to the more common case of adhesive interactions. One important result is that the work of adhesion is very sensitive to small changes in the positions of surface atoms. Changes in other quantities generally mirror those for non-adhesive tips, and small differences in the magnitude of these effects can be understood from geometrical considerations. The results are used to test continuum-based methods of analyzing AFM measurements of friction and stiffness \cite{carpick97,carpick97b,carpick04,carpick04b,pietrement01,lantz97,schwarz97,schwarz97b}. We show that the models may appear to provide a reasonable description when limited information about the true contact structure is available. 
When the full range of information accessible to simulations is examined, one finds that the contact area and pressure distributions may be very different than inferred from the models. Section \ref{sec:continuum} reviews continuum results for contact without and with adhesion, and briefly describes the effect of surface roughness. The methods used in our atomistic simulations and the geometries of the tips are described in Sec. \ref{sec:method}. Section \ref{sec:nonadhere} presents results for purely repulsive interactions and Sec. \ref{sec:adhere} describes trends with the strength of adhesion. A summary and conclusions are presented in Sec. \ref{sec:conclusions}. \section{Continuum Contact Mechanics} \label{sec:continuum} As noted above, contact mechanics calculations assume that the contacting solids are described by continuum elasticity so that the discrete atomic structure can be ignored. In most cases the two solids are also assumed to be isotropic with Young's moduli $E_1$ and $E_2$ and Poisson ratios $\nu_1$ and $\nu_2$. Then the results depend only on an effective modulus $E^*$ satisfying: \begin{equation} 1/E^*\equiv(1-\nu_1^2)/E_1 +(1-\nu_2^2)/E_2 . \label{eq:effectmod} \end{equation} Three-dimensional crystalline solids are not isotropic, but the theories can still be applied with an effective $E^*$ that depends on orientation and is determined numerically \cite{johnson85}. Continuum theories also neglect the atomic structure of the surface. In most cases the surfaces are assumed to be spherical, with radii $R_1$ and $R_2$. For elastic, frictionless solids the contact of two spheres is equivalent to contact between a sphere of radius $R = (R_1^{-1}+R_2^{-1})^{-1}$ and a flat solid \cite{johnson85}. From Eq. (\ref{eq:effectmod}), one may then map contact between any two spherical surfaces onto contact between a rigid sphere of radius $R$ and a flat elastic solid of modulus $E^*$. 
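The reduction to an effective modulus in Eq. (\ref{eq:effectmod}) can be illustrated with a short numerical sketch; the moduli below are placeholders, not parameters of our simulations:

```python
def effective_modulus(E1, nu1, E2, nu2):
    """Effective contact modulus: 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2."""
    return 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)

# In the rigid-tip limit (E1 -> infinity) only the substrate contributes,
# consistent with mapping contact between two elastic spheres onto a
# rigid sphere pressed into a flat elastic solid of modulus E*.
```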
This is the case considered in our simulations, and previous results indicate this mapping remains approximately correct at atomic scales \cite{luan05}. Non-adhesive contact is described by Hertz theory \cite{johnson85}, which assumes solids interact with an infinitely sharp and purely repulsive ``hard-wall'' interaction. The surfaces contact in a circular region of radius $a$ that increases with the normal force or load $N$ pushing the surfaces together as \cite{johnson85}: \begin{equation} \label{eq:hertza} a = \left( \frac{3NR}{4E^*} \right) ^{1/3}\ \ . \end{equation} The normal pressure $p$ within the contact has a simple quadratic dependence on the radial distance from the center $r$: \begin{equation} p(r)=\frac{2aE^*}{\pi{R}}\sqrt{1-\frac{r^2}{a^2}}\ \ , \end{equation} and the surfaces separate slowly outside the contact. The normal displacement of the tip $\delta$ is related to $a$ by: \begin{equation} \label{eq:hertzd} \delta_H = \frac{a^2}{R} = \frac{1}{R}\left(\frac{3NR}{4E^*}\right)^{2/3} \ \ , \end{equation} where the subscript $H$ indicates the Hertz prediction and $\delta_{H}=0$ corresponds to the first contact between tip and substrate. Adhesion can be treated most simply in the opposite limits of very short-range interactions considered by Johnson, Kendall and Roberts (JKR) \cite{johnson71} and of infinite range interactions considered by Derjaguin, Muller and Toporov (DMT) \cite{derjaguin75}. The strength of adhesion is measured by the work of adhesion per unit area $w$. In DMT theory the attractive forces just produce an extra contribution to the normal force, so that $N$ is replaced by $N + 2\pi w R$ in Eqs. (\ref{eq:hertza}) and (\ref{eq:hertzd}). JKR theory treats the edge of the contact as a crack tip and calculates the stress by adding the crack and Hertz solutions. The normal force in Eq. (\ref{eq:hertza}) is then replaced by $N+ 3\pi w R + \left[6\pi w R N + (3\pi w R)^2\right]^{1/2}$ and the equation for $\delta$ is modified (Sec. \ref{sec:adhereload}).
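The Hertz relations in Eqs. (\ref{eq:hertza})--(\ref{eq:hertzd}) can be collected in a minimal numerical sketch (any consistent unit system; the values used are illustrative):

```python
import math

def hertz_radius(N, R, E_star):
    """Contact radius a = (3 N R / 4 E*)**(1/3) for non-adhesive contact."""
    return (3.0 * N * R / (4.0 * E_star)) ** (1.0 / 3.0)

def hertz_pressure(r, N, R, E_star):
    """Pressure p(r) = (2 a E* / pi R) sqrt(1 - r^2/a^2) for r <= a."""
    a = hertz_radius(N, R, E_star)
    return 2.0 * a * E_star / (math.pi * R) * math.sqrt(1.0 - (r / a) ** 2)

def hertz_displacement(N, R, E_star):
    """Normal displacement delta_H = a^2 / R."""
    return hertz_radius(N, R, E_star) ** 2 / R
```

As a consistency check, integrating $p(r)$ over the contact circle recovers the applied load $N$, and the pressure falls to zero at $r = a$.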
The two approaches lead to very different functional relations between $a$ and $N$. For example, the contact radius goes to zero at pulloff for DMT theory, but remains finite for JKR. They also predict different values of the pulloff force, $N_c$, where the surfaces separate. The normalized pulloff force, $N_c/\pi w R$, is -3/2 in JKR theory and -2 for DMT. Finally, the surfaces separate outside the contact with infinite slope in JKR theory, and gradually in DMT theory. The Maugis-Dugdale (M-D) model \cite{maugis92} provides a simple interpolation between the JKR and DMT limits. The surfaces are assumed to have a hard-wall contact interaction that prevents any interpenetration, plus a constant attractive force per unit area, $\sigma_0$, that extends over a finite distance $h_0$. The work of adhesion is just the integral of the attractive force, implying $\sigma_0 h_0 = w$. The M-D model produces coupled equations for the contact pressure that can be solved to yield a relation between the load, normal displacement, and area. As discussed further in Section \ref{sec:adhere}, the edge of the contact is broadened by the finite interaction range, making it useful to define three characteristic radii that converge to the JKR value for $a$ in the limit of short-range interactions. Maugis introduced a transition parameter \cite{maugis92} \begin{equation} \lambda \equiv \left( \frac{9Rw^2 }{2\pi {E^*}^2 h_0^3} \right)^{1/3}, \end{equation} that measures the ratio of the normal displacement at pulloff from JKR theory to the interaction range $h_0$. Tabor \cite{tabor76} had previously defined a similar parameter, $\mu$, that is about 16\% smaller than $\lambda$ for typical interaction potentials \cite{johnson97}. Johnson and Greenwood \cite{johnson97} have provided an adhesion map characterizing the range of $\lambda$ over which different models are valid. For $\lambda > 5$ the interaction range is short and JKR theory is accurate, while DMT is accurate for $\lambda < 0.1$. 
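The transition parameter and the quoted adhesion-map thresholds translate into a few lines of code. The classification below is deliberately coarse and uses only the boundaries stated above:

```python
import math

def maugis_lambda(R, w, E_star, h0):
    """Maugis transition parameter
    lambda = (9 R w^2 / (2 pi E*^2 h0^3))^(1/3)."""
    return (9.0 * R * w ** 2
            / (2.0 * math.pi * E_star ** 2 * h0 ** 3)) ** (1.0 / 3.0)

def adhesion_regime(lam):
    """Coarse classification following the Johnson-Greenwood
    adhesion-map thresholds quoted in the text."""
    if lam > 5.0:
        return "JKR"
    if lam < 0.1:
        return "DMT"
    return "transition (use Maugis-Dugdale)"
```

Since $\lambda \propto R^{1/3}$ at fixed $w$, $E^*$ and $h_0$, such a helper makes explicit how slowly the JKR limit is approached as the tip radius grows.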
For most materials, both $h_0$ and the ratio $w/E^*$ are of order 1 nm. The JKR limit is only reached by increasing $R$ to macroscopic dimensions of micrometers or larger. JKR theory has been tested in experiments with centimeter scale radii using the surface force apparatus (SFA) \cite{horn87} and hemispherical elastomers \cite{Shull02,newby95}. Scanning probe microscope tips typically have $R$ between 10 and 100 nm, and the value of $\lambda \sim 0.1$ to 1 lies between JKR and DMT limits \cite{carpick97}. The same is true in our simulations, where $\lambda$ for adhesive tips varies between 0.1 and 0.75. For this reason we will compare our results to M-D theory below. We also found it useful to use a simple interpolation scheme suggested by Schwarz \cite{schwarz03}. Both he and Carpick et al. \cite{carpick99} have proposed formulae for the contact radius that interpolate smoothly between DMT and JKR. These approaches have been attractive in analyzing experimental data because of their simple analytic forms. No direct measurement of contact area has been possible in nanometer scale single asperity contacts. Instead, the contact area has been determined by measurements of contact stiffness \cite{carpick97,carpick97b,carpick04,carpick04b,pietrement01,lantz97,johnson97,jarvis93,kiely98,wahl98,asif01}, conductance \cite{lantz97}, or friction \cite{lantz97,carpick97,carpick97b,carpick04,carpick04b,schwarz97,schwarz97b}. The validity of these approaches is not clear \cite{luan05}, and will be tested below. The stiffness against normal displacements of the surfaces can be determined from the derivative of $N$ with respect to $\delta$ in M-D theory. The tangential stiffness $k$ is normally calculated by assuming friction prevents sliding at the interface, even though all theories described above assume zero friction in calculating the contact area. With this assumption $k=8G^* a$, where $G^* $ is the effective bulk shear modulus. 
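The stiffness route to contact size sketched above can be made explicit. Note that the normal-stiffness helper below uses the non-adhesive Hertz result $dN/d\delta = 2E^*a$ in place of the full M-D derivative, so it is an approximation:

```python
def radius_from_tangential_stiffness(k, G_star):
    """Invert the continuum relation k = 8 G* a for the contact radius.
    Assumes no sliding at the interface and neglects the interfacial
    compliance discussed later in the paper."""
    return k / (8.0 * G_star)

def hertz_normal_stiffness(a, E_star):
    """Normal stiffness dN/d(delta) = 2 E* a in the non-adhesive
    (Hertz) limit; M-D theory modifies this when adhesion matters."""
    return 2.0 * E_star * a
```

Because any interfacial compliance lowers the measured $k$ below $8G^*a$, radii inferred this way are lower bounds on the true contact radius.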
Relating the friction to contact area requires assumptions about the friction law. Many authors have assumed that the friction is proportional to area \cite{carpick04,carpick04b,carpick97b,carpick97,pietrement01,lantz97,schwarz97,schwarz97b}, but simulations \cite{luan05,wenning01,muser01prl} and experiments in larger contacts \cite{gee90,berman98} show that this need not be the case. The effect of surface roughness on contact has been considered within the continuum framework \cite{johnson85}. In general, results must be obtained numerically. One key parameter is the ratio of the root mean squared (rms) roughness of the surface, $\Delta$, to the normal displacement $\delta$. When $\Delta/ \delta < 0.05 $, results for nonadhesive contacts lie within a few percent of Hertz theory \cite{johnson85}. As $\Delta / \delta$ increases, the contact area broadens and the pressure in the central region decreases. Adhesion is more sensitive to roughness \cite{fuller75}. The analysis is complicated by the fact that $\Delta$ usually depends on the range of lengths over which it is measured. The natural upper bound corresponds to the contact diameter and increases with load, while the lower bound at atomic scales is unclear. The role of roughness is discussed further in Secs. \ref{sec:nonadhere} and \ref{sec:adhere}. \section{Simulation methods} \label{sec:method} We consider contact between a rigid spherical tip and an elastic substrate with effective modulus $E^*$. As noted above, continuum theory predicts that this problem is equivalent to contact between two elastic bodies, and we found this equivalence was fairly accurate in previous studies of non-adhesive contact \cite{luan05}. To ensure that any deviations from the continuum theories described above are associated only with atomic structure, the substrate is perfectly elastic. Continuum theories make no assumptions about the nature of the atomic structure and interactions within the solids. 
Thus any geometry and interaction potentials can be used to explore the type of deviations from continuum theory that may be produced by atomic structure. We use a flat crystalline substrate to minimize surface roughness, and use tips with the minimum roughness consistent with atomic structure. The interactions are simple pair potentials that are widely used in studies that explore generic behavior \cite{allen87}. They illustrate the type of deviations from continuum theory that may be expected, but the magnitude of deviations for real materials will depend on their exact geometry and interactions. Atoms are placed on sites of a face-centered cubic (fcc) crystal with a (001) surface. We define a characteristic length $\sigma$ so that the volume per atom is $\sigma^3$ and the nearest-neighbor spacing is $2^{1/6}\ \sigma$. Nearest-neighbors are coupled by ideal Hookean springs with spring constant $\kappa$. Periodic boundary conditions are applied along the surface of the substrate with period $L$ in each direction. The substrate has a finite depth $D$ and the bottom is held fixed. For the results shown below, $L=190.5\ \sigma$ and $D=189.3\ \sigma$. The continuum theories assume a semi-infinite substrate, and we considered smaller and larger $L$ and $D$ to evaluate their effect on calculated quantities. Finite-size corrections for the contact radius and lateral stiffness are negligible for $a/D < 0.1$ \cite{johnson85}, which covers the relevant range of $a/R < 0.2$. Corrections to the normal displacement are large enough to affect plotted values. We found that the leading analytic corrections \cite{johnson01,sridhar04,adams06} were sufficient to fit our results at large loads, as discussed in Sec. \ref{sec:adhereload}. Note that previous simulations of AFM contact have used much shallower substrates ($D\sim 10\ \sigma$) \cite{harrison99,landman90,sorensen96,nieminen92,raffi-tabar92,tomagnini93}. 
This places them in a very different limit than continuum theories, although they provide interesting insight into local atomic rearrangements. Three examples of atomic realizations of spherical tips are shown in Fig. \ref{fig:tips}. All are identical from the continuum perspective, deviating from a perfect sphere by at most $\sigma$. The smoothest one is a slab of f.c.c. crystal bent into a sphere. The amorphous and stepped tips were obtained by cutting spheres from a bulk glass or crystal, and are probably more typical of AFM tips \cite{carpick97,foot1}. Results for crystalline tips are very sensitive to the ratio $\eta$ between their nearest-neighbor spacing and that of the substrate, as well as their crystalline alignment \cite{muser01prl,wenning01}. We will contrast results for an aligned commensurate tip with $\eta=1$ to those for an incommensurate tip where $\eta=0.94437$. To mimic the perfectly smooth surfaces assumed in continuum theory, we also show results for a high density tip with $\eta=0.05$. In all cases $R=100\ \sigma \sim 30$ nm, which is a typical value for AFM tips. Results for larger radius show bigger absolute deviations from continuum predictions, but smaller fractional deviations \cite{luan05}. \begin{figure} \begin{center} \includegraphics[width=8cm]{tips.ps} \caption{ Snapshots of atoms near the center regions (diameter 50$\ \sigma$) of spherical tips with average radius $R$=100$\ \sigma$. From top to bottom, tips are made by bending a crystal, cutting a crystal or cutting an amorphous solid. Three ratios $\eta$ of the atomic spacing in bent crystals to that in the substrate are considered; a dense case $\eta=0.05$, a commensurate case $\eta=1$, and an incommensurate case $\eta=0.94437$. The step structure of cut crystalline tips is not unique, leading to variations in their behavior. 
} \label{fig:tips} \end{center} \end{figure} Atoms on the tip interact with the top layer of substrate atoms via a truncated Lennard-Jones (LJ) potential \cite{allen87} \begin{equation} V_{LJ}=-4\epsilon_i \left[ \left(\frac{\sigma}{r}\right)^6- \left(\frac{\sigma}{r}\right)^{12} \right] -V_{\rm cut},\qquad r<r_{\rm cut} \label{eq:lj} \end{equation} where $\epsilon_i$ characterizes the adhesive binding energy, the potential vanishes for $r >r_{\rm cut}$, and the constant $V_{\rm cut}$ is subtracted so that the potential is continuous at $r_{\rm cut}$. Purely repulsive interactions are created by truncating the LJ potential at its minimum $r_{\rm cut} = 2^{1/6}\ \sigma$. Studies of adhesion use $r_{\rm cut}=1.5\ \sigma$ or $r_{\rm cut}=2.2\ \sigma$ to explore the effect of the range of the potential. In order to compare the effective strength of adhesive interactions and the cohesive energy of the solid substrate, we introduce a unit of energy $\epsilon$ defined so that the spring constant between substrate atoms $\kappa= 50\ \epsilon /\sigma^2$. If the solid atoms interacted with a truncated LJ potential with $\epsilon$ and $r_{\rm cut}=1.5\ \sigma$, they would have the same equilibrium lattice constant and nearly the same spring constant, $\kappa=57\ \epsilon/\sigma^2$, at low temperatures and small deformations. Thus $\epsilon_i/\epsilon$ is approximately equal to the ratio of the interfacial binding energy to the cohesive binding energy in the substrate. The elastic properties of the substrate are not isotropic. We measure an effective modulus $E^* = 55.0\ \epsilon/\sigma^3$ for our geometry using Hertzian contact of a high density tip. This is between the values calculated from the Young's moduli in different directions. The sound velocity is also anisotropic. We find longitudinal sound velocities of 8.5 and 9.5 $\sigma/t_{LJ}$ and shear velocities of 5.2 and 5.7 $\sigma/t_{LJ}$ along the (001) and (111) directions, respectively. 
Here $t_{LJ}$ is the natural characteristic time unit, $t_{LJ}=\sqrt{m\sigma^2/\epsilon}$, where $m$ is the mass of each substrate atom. The effective shear modulus for lateral tip displacements is $G^* = 18.3\ \epsilon/\sigma^3$. The simulations were run with the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code \cite{plimpton95,lammps}. The equations of motion were integrated using the velocity-Verlet algorithm with time step 0.005$\ t_{LJ}$ \cite{allen87}. Temperature, $T$, only enters continuum theory through its effect on constitutive properties, and it is convenient to run simulations at low temperatures to minimize fluctuations. A Langevin thermostat was applied to solid atoms to maintain $T$=0.0001$\ \epsilon/k_B$, where $k_B$ is Boltzmann's constant. This is about four orders of magnitude below the melting temperature of a Lennard-Jones solid. The damping rate was $0.1\ t_{LJ}^{-1}$, and damping was only applied perpendicular to the sliding direction in friction measurements. In simulations, a tip was initially placed about $r_{\rm cut}$ above the substrate. It was then brought into contact by applying a very small load, with the lateral position kept fixed. The load was changed in discrete steps and the system was allowed to equilibrate for 350$\ t_{LJ}$ at each load before making measurements. This interval is about 20 times longer than the time for sound to propagate across the substrate, and allowed full stress equilibration. Results from loading and unloading cycles showed no noticeable hysteresis. To obtain results near the pulloff force, we moved the tip away from the substrate at a very slow velocity $v=0.0003\ \sigma/t_{LJ}$ and averaged results over small ranges of displacement. This approach was consistent with constant load measurements and allowed us to reach the region that is unstable at constant load. To compare to continuum predictions we calculated the stresses exerted on the substrate by the tip. 
The force from the tip on each substrate atom was divided by the area per atom to get the local stresses. These were then decomposed into a normal stress or pressure $p$ on the substrate, and a tangential or shear stress ${\bf \tau}_{\rm sur}$. The continuum theories described in Sec. \ref{sec:continuum} assume that the projection of the force normal to the undeformed substrate equals the projection normal to the locally deformed substrate. This is valid in the assumed limits of $a/R \ll 1$ and ${\bf \tau}_{\rm sur}=0$. It is also valid for most of our simulations (within $<2$\%), but not for the case of bent commensurate tips where ${\bf \tau}_{\rm sur}$ becomes significant. Normal and tangential stresses for bent commensurate tips were obtained using the local surface orientation of the nominally spherical tip. Correcting for the orientation changed the normal stress by less than 5\% of the peak value, and the shear stress by less than 20\%. Friction forces are known to vary with many parameters \cite{muser04}. Of particular concern is the dependence on extrinsic quantities such as the stiffness of the system that imposes lateral motion. Results at constant normal load are often very different from those at fixed height, motion perpendicular to the nominal sliding direction can raise or lower friction, and the kinetic friction can be almost completely eliminated in very stiff systems \cite{muser03acp,socoliuc04}. A full re-examination of these effects is beyond the scope of this paper. Our sole goal is to measure the friction in a consistent manner that allows us to contrast the load-dependent friction for different tip geometries and minimizes artifacts from system compliance. In friction simulations, the tip is sheared at a constant low velocity $v'=0.01\ \sigma/t_{LJ}$ along the (100) direction with a constant normal load.
This is typical of AFM experiments where the low normal stiffness of the cantilever leads to a nearly constant normal load, and the high lateral stiffness limits lateral motion in the direction perpendicular to the sliding direction. The measured friction force varies periodically with time as the tip moves by a lattice constant of the substrate. The time-averaged or kinetic friction during sliding is very sensitive to both lateral stiffness and load \cite{socoliuc04}. We focus instead on the peak force, which is less sensitive. In the limit of low sliding velocities this would correspond to the static friction. For bent and stepped commensurate tips there is a single strong friction peak. For incommensurate and amorphous tips, there may be multiple peaks of different size corresponding to escape from different metastable energy minima \cite{muser01prl,muser03acp}. The static friction was determined from the highest of these friction peaks, since lateral motion would stop at any lower force. With a single peak per period, the time between peaks is $\sim \sigma/v'=100\ t_{LJ}$. This is several times the sound propagation time, and the measured force should be close to the static friction. For incommensurate tips the time between peaks was an order of magnitude smaller and dynamic effects may be more significant. However, they are not expected to affect the load dependence significantly, and are much too small to affect the dramatic difference between incommensurate and other tips. The total lateral stiffness of the system, $k$, corresponds to the derivative of $F$ with lateral tip displacement evaluated at a potential energy minimum. Since the tip is rigid, $k$ is determined by displacements in the substrate and at the interface. The interfacial stiffness $k_{\rm i}$ and substrate stiffness $k_{\rm sub}$ add in series because stress is transmitted through interfacial interactions to the substrate. 
Thus the total stiffness is \cite{socoliuc04,luan05}: \begin{equation} k^{-1}=k_{\rm sub}^{-1}+k_{\rm i}^{-1} \ \ . \label{eq:stiff} \end{equation} If the tip were not rigid, it would also contribute a term to the right-hand side of Eq. (\ref{eq:stiff}). We evaluate $k$ from the derivative of $F$ during sliding, making the assumption that the results are in the quasistatic limit. For bent and stepped commensurate tips there is a single potential energy minimum, and for amorphous tips one minimum dominated the periodic force. For incommensurate tips, there are many closely spaced minima and we evaluate $k$ from the derivative in the minimum preceding the largest friction peak. Due to the small magnitude of forces and short time intervals, the relative errors in these values are as big as 50\%. To estimate the lateral stiffness in the substrate, $k_{\rm sub}$, we fix the relative positions of those substrate atoms that lie inside the contact, and move them laterally at a slow velocity. The total force between these atoms and the rest of the substrate is measured and its derivative with respect to distance gives the lateral stiffness in the substrate. In principle, there might also be some relative displacement between atoms in the contact that is not captured by this approach, but the results for the substrate stiffness are consistent with continuum predictions. Values of the adhesion energy per unit area $w$ were obtained for flat, rigid surfaces of the same nominal geometry as the tip. For bent crystal tips (Fig. \ref{fig:tips}), the tip was just flattened back into a crystal. For stepped tips, we used an unstepped crystal with the same spacing and interactions. For amorphous tips, an amorphous solid was cleaved with a flat surface rather than a sphere. The resulting surfaces were then brought into contact with the substrate and allowed to equilibrate at zero load.
At the low temperatures used here, the adhesion energy is just the potential energy difference between contacting and separated configurations. \section{Nonadhesive contacts} \label{sec:nonadhere} \subsection{Pressure distribution} Figure \ref{fig:hertzpofr} contrasts the distribution of normal pressure $p$ under five tips: (a) dense, (b) bent commensurate, (c) bent incommensurate, (d) amorphous and (e) stepped. In each case, $R=100\ \sigma$ and the dimensionless load is $N/(R^2 E^*) =0.0018$. Hertz theory predicts the same pressure distribution (solid lines) for all tips. Points show the actual local pressure on each substrate atom as a function of radial distance $r$ from the center of the spherical tip, and circles in (c) and (d) show the average over bins of width $\sigma$. Clearly, small deviations in atomic structure lead to large changes in the mean pressure and the magnitude of pressure fluctuations. We find that these deviations become larger as $N$ is decreased, and the contact radius drops closer to the atomic size. \begin{figure} \begin{center} \includegraphics[width=10cm]{hertzpofr.ps} \caption{ (Color online) Local normal pressure vs. radius for five different tip geometries (a) dense tip, (b) bent commensurate crystal, (c) bent incommensurate crystal, (d) amorphous, and (e) stepped crystal. All are non-adhesive, and have the same nominal radius $R=100\ \sigma$, $\epsilon_i/\epsilon=1$, and normalized load $N/(R^2 E^*)=0.0018$. Solid lines show the prediction of Hertz theory, dots show the pressure on each surface atom, and circles in (c) and (d) show the mean pressure in radial bins of width $\sigma$. Squares in (b) show the component of the tangential force directed radially from the center of the contact. The azimuthal component is nearly zero, and tangential forces are much smaller for other tips. 
} \label{fig:hertzpofr} \end{center} \end{figure} One possible source of deviations from Hertz theory is friction, but we find the mean tangential forces are small in most cases. The exception is the bent commensurate tip (Fig. \ref{fig:hertzpofr}(b)), where the tangential stress rises with $r$ and is comparable to the normal stress near the edge of the contact. This result is not surprising given the high friction measured for commensurate tips below, and reflects the strong tendency for atoms in the substrate to remain locked in epitaxial registry with atoms in the tip. However, the deviation from Hertz theory is in the opposite direction from that expected from friction. Since this contact was made by gradually increasing the load, friction should decrease the contact size rather than broaden it. Another possible origin of the deviations from Hertz theory is surface roughness. From continuum theory (Sec. \ref{sec:continuum}), this is characterized by the ratio of rms surface roughness $\Delta$ to normal displacement $\delta$. The normal displacement for all tips is about the same, $\delta \approx 1.5\ \sigma$, but $\Delta$ is difficult to define. The reason is that there is no unique definition of the surface height for a given set of atomic positions. For example, one might conclude that $\Delta=0$ for the substrate, since all atoms lie on the same plane. However, if a tip atom were moved over the surface with a small load, its height would increase as it moved over substrate atoms and be lowest at sites centered between four substrate atoms \cite{muser03acp}. For the parameters used here, the total height change is about 0.33$\ \sigma$. Similar deviations from a sphere are obtained for the bent commensurate and incommensurate tips. The height change decreases as the ratio of the nearest-neighbor spacing to the Lennard-Jones diameter for interfacial interactions decreases, and is only 0.0007$\ \sigma$ for the dense tip.
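The height change of a single tip atom scanned over the substrate can be estimated numerically. The sketch below scans a rigid atom over a square lattice with the nearest-neighbor spacing $2^{1/6}\ \sigma$ used in the text, balancing a purely repulsive truncated LJ force against a small load; the load value and patch size are arbitrary choices of ours, so the resulting corrugation is illustrative only:

```python
import math

SPACING = 2.0 ** (1.0 / 6.0)   # nearest-neighbor spacing in units of sigma
R_CUT = 2.0 ** (1.0 / 6.0)     # purely repulsive truncation of the LJ potential

def repulsive_force_z(x, y, z, n=6):
    """Vertical force on a tip atom at (x, y, z) from an (2n+1)^2 patch
    of substrate atoms in the z = 0 plane (epsilon = sigma = 1)."""
    fz = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            dx, dy = x - i * SPACING, y - j * SPACING
            r = math.sqrt(dx * dx + dy * dy + z * z)
            if r < R_CUT:
                # repulsive force magnitude -dV/dr of the shifted LJ potential
                f = 24.0 * (2.0 / r ** 13 - 1.0 / r ** 7)
                fz += f * z / r
    return fz

def height_at(x, y, load=0.1):
    """Height where the repulsive force balances the load (bisection;
    the force decreases monotonically with z in this bracket)."""
    lo, hi = 0.3, 1.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if repulsive_force_z(x, y, mid) > load:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h_top = height_at(0.0, 0.0)                     # directly above an atom
h_hollow = height_at(SPACING / 2, SPACING / 2)  # four-fold hollow site
corrugation = h_top - h_hollow
```

With these (assumed) parameters the corrugation comes out at a few tenths of $\sigma$, the same order as the height change quoted above.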
Amorphous and stepped tips have additional roughness associated with variations in the position of atomic centers relative to a sphere. The total variation is about $\sigma$, or about three times the height change as an atom moves over the substrate. A reasonable estimate is that $\Delta/\delta < 0.1$ for the bent commensurate and incommensurate tips, $\Delta/\delta < 10^{-3}$ for the dense tip, and $\Delta/\delta \sim 0.3$ for the amorphous and stepped tips. However, the ambiguity in $\Delta$ is one of the difficulties in applying continuum theory in nanoscale contacts. The closely spaced atoms on the dense tip approximate a continuous sphere, and the resulting pressure distribution is very close to Hertz theory (Fig. \ref{fig:hertzpofr}(a)). Results for the bent commensurate tip are slightly farther from Hertz theory. The deviations can not be attributed to roughness, because fluctuations at a given $r$ are small, and the pressure in the central region is not decreased. The main change is to smear the predicted sharp pressure drop at the edge of the contact. This can be attributed to the finite range of the repulsive potential between surfaces. We can estimate the effective interaction range by the change in height of an atom, $dh = 0.04\ \sigma$, as $p/E^*$ decreases from 0.1 to 0. The effective range is much smaller for the dense tip because $\sim 400$ times as many atoms contribute to the repulsive potential. In Hertz theory \cite{johnson85}, the separation between surfaces only increases with distance $(r-a)$ from the edge of the contact as $(8/3\pi)(r-a)^{3/2} (2a)^{1/2}/R$. Equating this to $dh$ gives $r-a \approx 1\ \sigma$ for the bent commensurate tip, which is very close to the range over which the edge of the contact is shifted from the Hertz prediction. Note that this analysis predicts that the shift in the edge of the contact will grow as $\sqrt{R}$, and simulations with larger $R$ confirm this. 
However, the fractional change in $a$ decreases as $1/\sqrt{R}$. The larger values of pressure at low $r$ result from the greater stiffness of the repulsive potential as $p$ increases. All of the above effects could be included in continuum theory by changing the form of the repulsive potential \cite{greenwood97}. For bent incommensurate and amorphous tips (Fig. \ref{fig:hertzpofr} (c) and (d)), the variations in pressure at a given $r$ are as large as the mean (circles) \cite{footn}. While all atoms on the commensurate tip can simultaneously fit between substrate atoms, minimizing fluctuations in force, atoms on the incommensurate tip sample all lateral positions and experience a range of forces at a given height. The mean pressure for the incommensurate tip remains fairly close to the commensurate results, but there is slightly more smearing at large $r$ due to the variations in relative height of substrate and tip atoms. The mean pressure on the amorphous tip shows the depression at small $r$ and increase at large $r$ that are expected for rough tips in continuum theory \cite{johnson85}. The magnitude of the central drop is about 18\%, which is consistent with $\Delta/\delta \sim 0.2$ in continuum theory (Fig. 13.12 of Ref. \cite{johnson85}). The lack of a noticeable drop for incommensurate tips implies that the effective $\Delta/\delta < 0.03$. The implication is that the incoherent height changes on amorphous tips contribute to the effective roughness in continuum theory, while the atomic corrugation on bent tips does not. The effective roughness in both cases is about 0.1$\ \sigma$ smaller than our estimates for $\Delta$ above. Results for stepped tips show the largest deviations from Hertz theory, and they are qualitatively different from those produced by random roughness. The terraced geometry of this tip (Fig. \ref{fig:tips}) is closest to that of a flat punch.
In continuum theory, the pressure on a flat punch is smallest in the center, and diverges as the inverse square root of the distance from the edge. The simulation results show qualitatively similar behavior. The main effect of atomic structure is to cut off the singularity at a distance corresponding to an atomic separation. Similar effects are observed in simulations of other geometries \cite{vafek99,miller96,landman96}. Note that the terraces are only flat because the sphere was cut from a crystal that was aligned with the substrate. We also examined tips cut from a crystal that was slightly tilted away from the (001) direction \cite{foot1}. This produces inclined terraces that contact first along one edge. The resulting pressure distribution is very different, and closest to the continuum solution for contact by an asymmetric wedge. Figure \ref{fig:hertzpofr} has an important general implication about the probability $P(p)$ of finding a local pressure $p$ at a point in the contact. For smoothly curved surfaces, continuum theory predicts that the derivative of the pressure diverges at the edge of the contact \cite{johnson85}. Thus $P(p) \rightarrow 0$ as $p\rightarrow 0$ \cite{persson01}. The finite resolution at atomic scales always smears out the change in $p$, leading to a non-zero value of $P(0)$. Indeed, the approximately constant value of $dp/dr$ near the contact edge in Fig. \ref{fig:hertzpofr} leads to a local maximum in $P$ at $p=0$. Similar behavior is observed for randomly rough atomic contacts \cite{luan05mrs} and in continuum calculations for piecewise planar surfaces \cite{hyun04,pei05}. Plastic deformation is usually assumed to occur when the deviatoric shear stress $\tau_s$ exceeds the yield stress of the material. In Hertz theory, $\tau_s$ reaches a maximum value at a depth of about 0.5$a$. The pressure variations at the surface shown in Fig. 
\ref{fig:hertzpofr} lead to changes in both the magnitude and position of the peak shear stress \cite{luan05}. Factors of two or more are typical for amorphous and stepped tips. Thus tip geometry may have a significant impact on the onset of yield. Of course atomistic effects also influence the yield stress at nanometer scales, and a full evaluation of this effect is left to future work. Saint-Venant's principle implies that the pressure distribution should become independent of tip geometry at depths greater than about $3a$, but the shear stress at these depths is substantially smaller than peak values and yield is unlikely to occur there. \subsection{Variations with load} Figure \ref{fig:hertz} shows the load dependence of (a) normal displacement, (b) radius, (c) friction and (d) lateral stiffness for the same tips as Fig. \ref{fig:hertzpofr}. Each quantity is raised to a power that is chosen so that Hertz theory predicts the indicated straight line. A small finite-depth correction ($\sim 2$\%) is applied to the Hertz prediction for $\delta$ (Eq. (\ref{eq:depth})). \begin{figure} \begin{center} \includegraphics[width=8cm]{hertz.eps} \caption{ Dimensionless plots of powers of (a) normal displacement $\delta$, (b) contact radius $a$, (c) static friction $F$, and (d) lateral stiffness $k$ vs. the cube root of the normal force $N$ for non-adhesive tips with $R=100\ \sigma$. The powers of $\delta$, $a$ and $k$ are chosen so that continuum theory predicts the indicated straight solid lines. In (c), the solid line corresponds to $F\propto N$, while dashed and dotted lines are fits to $F \propto N^{2/3}$. The continuum predictions for $\delta$ and $a$ are followed by a dense tip (stars). Also shown are results for bent commensurate (pentagons), bent incommensurate (squares), amorphous (triangles) and stepped commensurate tips (circles). In (d), the total lateral stiffness (open symbols) lies well below the continuum prediction because of the interfacial compliance.
The stiffness from the substrate alone (filled symbols) scales with radius, as expected from Hertz theory. Numerical uncertainties are comparable to the symbol size. } \label{fig:hertz} \end{center} \end{figure} As also found for cylindrical contacts \cite{luan05}, the normal displacement shows the smallest deviation from Hertz theory because it represents a mean response of many substrate atoms. Results for all bent crystals are nearly indistinguishable from the straight line. Results for the stepped surface are lower at small loads. Since the entire tip bottom contacts simultaneously, it takes a larger load to push the tip into the substrate. The amorphous results are shifted upwards by a fairly constant distance of about 0.2 $\sigma$. We define the origin of $\delta$ as the tip height where the first substrate atom exerts a repulsive force on the tip. This is strongly dependent on the height of the lowest tip atom, while subsequent deformation is controlled by the mean tip surface. Agreement with Hertz is greatly improved by shifting the amorphous curve by this small height difference. Note that the zero of $\delta$ is difficult to determine experimentally and is usually taken as a fit parameter. If this is done, even results for the amorphous system yield values of $R$ and $E^*$ that are within 10\% of their true values. Thus measurements of $\delta$, interpreted with continuum theory for spheres, can provide useful values of elastic properties at nanometer scales. As expected from the observed pressure distributions (Fig. \ref{fig:hertzpofr}), the contact radius is generally larger than the Hertz prediction. The shift is smallest for the dense tip because it approximates a continuous surface and the high density leads to a repulsive potential that rises more than a hundred times more rapidly than for other tips. 
Results for bent crystal and amorphous tips are shifted upwards by fairly load-independent offsets of $\sim 1 - 3\ \sigma$, leading to large fractional errors at low loads (up to 100\%). The stepped crystal shows qualitatively different behavior, with $a$ rising in discrete steps as sequential terraces come into contact. Note that the size of the first terrace is not unique, but depends on the registry between the bounding sphere and crystalline lattice \cite{foot1}. Larger deviations may be observed when the first step has very few atoms. Such tips may be more likely to be chosen for AFM studies because they tend to give sharper images. In order to predict the friction between surfaces, one must make assumptions about how $F$ depends on area and load. The straight line in Fig. \ref{fig:hertz}(c) corresponds to a friction force that is proportional to load. Static friction values for bent and stepped commensurate surfaces are consistent with this line and a coefficient of friction $\mu \equiv F/N = 0.63$. Analytic \cite{muser01prl} and simple numerical \cite{ringlein04} models show that this is a general feature of mated surfaces where each tip atom has the same lateral position relative to substrate atoms. The friction on amorphous and incommensurate surfaces is always lower and scales more closely with the contact area, as indicated by broken line fits to $F \propto N^{2/3}$ and discussed further in Sec. \ref{sec:adhereload} \cite{foot2}. Many authors have made this assumption in fitting experimental data, but it is not obvious why it should hold. The friction per unit area between flat amorphous surfaces decreases as the square root of the area, but rises linearly with the normal pressure \cite{muser01prl}. Wenning and M\"user have noted that these two factors may combine for spherical contacts to give a net friction that rises linearly with area \cite{wenning01}. 
However, their argument would predict that the frictional force in a cylinder-on-flat geometry would not scale with area, and our previous simulations found that it did \cite{luan05}. Continuum theory predicts a lateral stiffness $k=8G^* a$, which should follow the straight line in Fig. \ref{fig:hertz}(d). Measured values of the total stiffness (open symbols) are always substantially lower. This is because continuum theory assumes that there is no displacement at the interface, only within the substrate. In reality, the frictional force is always associated with atomic-scale displacements of interfacial atoms relative to a local energy minimum \cite{ringlein04,muser03acp}. The derivative of force with displacement corresponds to an interfacial stiffness $k_{\rm i}$ that adds in series with the substrate contribution (Eq. (\ref{eq:stiff})) \cite{socoliuc04,luan05}. Our numerical results show that $k_{\rm i}$ can reduce $k$ by more than an order of magnitude, particularly for tips where $F$ is small. We also evaluated the substrate stiffness $k_{\rm sub}$ by forcing all contacting substrate atoms to move together. These results (filled symbols) lie much closer to the Hertz prediction. Only the stepped tip shows large discrepancies, and these are correlated with the large deviation between the measured and predicted contact radii. \section{Adhesive contacts} \label{sec:adhere} \subsection{Pressure distribution} Figure \ref{fig:cutoff} compares the calculated pressure distribution in adhesive contacts with the Maugis-Dugdale prediction (lines). A bent commensurate tip was used to minimize deviations from continuum predictions for a sphere. Results for two different $r_{\rm cut}$ are presented to indicate difficulties in fitting longer-range interactions to M-D theory. The work of adhesion was calculated for unbent surfaces (Sec.
\ref{sec:method}) with $\epsilon_i/\epsilon = 0.5$, yielding $w=1.05\ \epsilon/\sigma^2$ and $1.65\ \epsilon/\sigma^2$ for $r_{\rm cut}=1.5\ \sigma$ and $2.2\ \sigma$, respectively. This leaves only one fitting parameter in M-D theory. For the dashed lines, the width of the attractive interaction $h_0 = w/\sigma_0$ was chosen to coincide with the range of the atomic potential. The dotted line shows a fit with $h_0=0.8\ \sigma$ for $r_{\rm cut}=2.2\ \sigma$, which gives better values for the pulloff force, but poorer radii (Sec. \ref{sec:adhereload}). \begin{figure} \begin{center} \includegraphics[width=8cm]{cutoffpofr.eps} \caption{ (Color online) Local pressure vs. radius for bent commensurate tips with $r_{\rm cut} = 1.5\ \sigma$ (filled squares) and $r_{\rm cut}=2.2\ \sigma$ (open squares) and with $R=100\ \sigma$, $\epsilon_i/\epsilon = 0.5 $ and $N/(R^2 E^*)=0.0016$. Dashed lines show Maugis-Dugdale fits with the measured work of adhesion and $h_0$ fit to the range of the interactions. A dotted line shows a fit for $r_{\rm cut}=2.2\ \sigma$ with $h_0=0.8\ \sigma$ that improves agreement with the measured pulloff force (Fig. \ref{fig:cutoff22}). Characteristic radii $r_a$, $r_b$ and $r_c$ are defined by the locations where the radially averaged pressure first becomes zero, is most negative, and finally vanishes, respectively. } \label{fig:cutoff} \end{center} \end{figure} In M-D theory, it is common to identify two radii, $a$ and $c$, with the inner and outer edges of the plateau in the pressure, respectively \cite{maugis92,johnson97}. For $r< a$ the surfaces are in hard-sphere contact, and for $a<r<c$ they are separated and feel the constant attraction. The continuously varying interactions between atoms in our simulations lead to qualitatively different behavior. There is no sharp transition where the surfaces begin to separate, and the attraction shows a smooth rise to a peak, followed by a decay. 
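The single remaining fit parameter enters through the Dugdale construction, which replaces the smooth adhesive tail by a constant stress $\sigma_0$ acting over a width $h_0$, constrained by $w = \sigma_0 h_0$. A quick sketch with the measured $w$ for $r_{\rm cut}=2.2\ \sigma$ and the two widths considered for this cutoff (the interaction range, $1.2\ \sigma$, and the reduced value $0.8\ \sigma$):

```python
# Dugdale construction: w = sigma0 * h0, with w measured independently,
# so choosing h0 fixes the implied constant adhesive stress sigma0.
w = 1.65  # eps/sigma^2, measured work of adhesion for r_cut = 2.2 sigma

for h0 in (1.2, 0.8):       # interaction-range width vs. pulloff-force fit
    sigma0 = w / h0         # implied constant adhesive stress
    print(f"h0 = {h0} sigma  ->  sigma0 = {sigma0:.3f} eps/sigma^3")
```

A shorter, deeper well and a longer, shallower one carry the same $w$, which is why the radii and the pulloff force cannot always be fit simultaneously.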
To facilitate comparison to continuum theories, we introduce the three characteristic radii indicated by arrows for each $r_{\rm cut}$. The innermost, $r_a$, corresponds to the point where the interaction changes from repulsive to attractive and can be calculated in M-D theory. The outermost, $r_c$, corresponds to $c$ in M-D theory -- the point where interactions end. The middle, $r_b$, corresponds to the peak attraction. It was also studied by Greenwood \cite{greenwood97} for contact of a featureless sphere that interacted with the flat substrate through Lennard-Jones interactions. He found $r_b$ lay close to the JKR prediction for contact radius at large loads. All three radii converge in the limit of repulsive interactions or the JKR limit of an infinitely narrow interaction range. At small radii the atomistic results for $p$ lie above M-D theory, and they drop below near $r/\sigma=10$. These deviations can be attributed to the increasing stiffness of the repulsion between tip and substrate atoms with increasing pressure. Just as in the non-adhesive case, the stiffer interactions in the center of the tip lead to bigger pressures for a given interpenetration. The change in pressure with separation produces less smearing at the edge of the repulsive region of the contact than for the non-adhesive case (Fig. \ref{fig:hertzpofr}(b)), and the values of $r_a$ are very close to M-D theory. This can be understood from the fact that surfaces separate much more rapidly in JKR theory ($\propto (r-a)^{1/2}$) than in Hertz theory ($\propto (r-a)^{3/2}$) (Sec. \ref{sec:nonadhere}). Thus the same height change in the repulsive region corresponds to a much smaller change in radius. Of course the finite range of the attractive tail of the potential leads to a broad region of adhesive forces out to $r_c$. The continuous variation of $p$ in the attractive tail is only crudely approximated by M-D theory. 
The difficulty in determining the optimum choice for $h_0$ increases with the range of interactions, as discussed further below. Figure \ref{fig:r05pofr} shows the effect of tip geometry on pressure distribution. We found that the work of adhesion was very sensitive to tip geometry. To compensate for this effect, we varied $\epsilon_i$ (Table I) to yield the same $w$ for the tips in Fig. \ref{fig:r05pofr}. Then all tips should have the same pressure distribution in continuum theory. The M-D predictions for $p$ are not shown because even the bent commensurate tips produce significantly different results (Fig. \ref{fig:cutoff}). Instead, we compare other tips to the bent commensurate tip. \begin{table} \label{table:work} \caption {Relations between interaction strength $\epsilon_i$, work of adhesion $w$, $N_c$ and dimensionless pulloff force $N_c/(\pi w R)$ for tips with different atomic-scale geometries. The last column gives the dimensionless pulloff force from M-D theory, $N_c^{M-D}$, with the measured $w$ and $h_0=0.5\ \sigma$. Values are accurate to the last significant digit. } \begin{tabular}{|l|c|c|c|c|c|} \hline \hline Tip geometry & ${\epsilon_i} \over {\epsilon}$ & $w ({\epsilon \over \sigma^2}) $ &$|N_c| ({\epsilon \over \sigma})$& ${{|N_c|}\over{\pi wR}}$ & $ { {|N_c^{M-D}|}\over{\pi w R}}$ \\ \hline Commensurate & 0.213 & 0.46 &256 & 1.77 & 1.74 \\ \hline & 0.5 & 1.05 &569 & 1.72 & 1.64 \\ \hline Incommensurate & 0.535 & 0.46 &258 & 1.79 & 1.74 \\ \hline & 0.5 & 0.45 &238 & 1.68 & 1.74 \\ \hline Amorphous & 1.0 & 0.46 &326 & 2.26 & 1.74 \\ \hline & 0.5 & 0.23 &136 & 1.88 & 1.83 \\ \hline Stepped & 0.213 & 0.46 &104 & 0.72 & 1.74 \\ \hline & 0.5 & 1.05 &168 & 0.51 & 1.64 \\ \hline \end{tabular} \end{table} \begin{figure} \begin{center} \includegraphics[width=10cm]{r05pofr.ps} \caption{ (Color online) Normal pressure vs. 
radius for four different tip geometries: (a) bent commensurate crystal, (b) bent incommensurate crystal, (c) amorphous, and (d) stepped crystal. In all cases the tip radius $R=100\ \sigma$, the normalized load $N/(R^2 E^*)=0.0023$ and the surface energy $w = 0.46\ \epsilon/\sigma^2$. Dots show the normal pressure on each surface atom. In (b) and (c), circles show the mean pressure in radial bins of width $\sigma$, and lines show the bent commensurate results from (a). Horizontal dashed lines are at zero pressure. } \label{fig:r05pofr} \end{center} \end{figure} As for non-adhesive tips, local fluctuations in pressure are small for commensurate tips (Fig. \ref{fig:r05pofr}(a) and (d)) and comparable to the mean pressure for incommensurate or amorphous tips (Fig. \ref{fig:r05pofr}(b) and (c)). Note however, that the fluctuations become smaller in the adhesive regime (large $r$). This is because the potential varies more slowly with height, so fluctuations in the separation between atoms have less effect on $p$. One consequence is that the outer edge of the contact is nearly the same for commensurate and incommensurate tips. The radii for the amorphous tip are significantly larger than bent tips, presumably because of a much greater effective roughness. As for nonadhesive tips, the mean pressure on incommensurate tips is close to the commensurate results. Adhesion reduces the roughness-induced drop in pressure in the central region of the amorphous tip. For the stepped tip, the contact radius is dominated by the size of the terraces, but adhesive tails are visible outside the edge of each terrace. Only $r_c$ is easily defined for the stepped tips. This increases in a nearly stepwise manner, and its load dependence is not shown below. For the amorphous and incommensurate tips, radii are determined from the mean pressure at a given radius (open circles). Errorbars are less than 0.5 $\sigma$. 
\subsection{Variations of radius and displacement with load} \label{sec:adhereload} Figure \ref{fig:r05rofN}(b) compares the measured contact radii for the tips of Fig. \ref{fig:r05pofr} to M-D theory as load is varied. The simulation results for $r_c$ (open symbols) decrease with decreasing load as $r_a$ (closed symbols) decreases to zero. All interactions in the contact are then adhesive. As $r_c$ continues to drop, the area contributing to adhesion drops, and the load rises back toward zero. This regime is not considered in the original M-D theory. The extension to $r_a=0$ by Johnson and Greenwood (JG) \cite{johnson97} is shown by a dashed line in the figure. Along this line the stress in the contact is constant, giving $N=-\sigma_0 \, \pi r_c^2$. \begin{figure} \begin{center} \includegraphics[width=8cm]{r05rofN.eps} \caption{ Variation of contact radii with external load for bent commensurate (squares), bent incommensurate (triangles) and amorphous (circles) tips with $R =100\ \sigma$. In (a), the radius $r_b$ where the force is most attractive is compared to JKR theory (solid line). In (b), values of $r_a$ (filled symbols) and $r_c$ (open symbols) are compared to M-D theory (solid lines) with the JG extension for $r_c$ when $r_a=0$ (dashed line). All continuum fits use the independently measured surface energy $w = 0.46\ \epsilon/\sigma^2$ and an interaction range $h_0 =0.5\ \sigma$ that is consistent with the potential range. Numerical uncertainty in radii is comparable to the symbol size. } \label{fig:r05rofN} \end{center} \end{figure} As for non-adhesive tips, the contacts tend to be larger than continuum theory predicts. However, the shift for bent tips is smaller than in the non-adhesive case, and the commensurate and incommensurate results are closer, as expected from Fig. \ref{fig:r05pofr}. Larger deviations are observed for the amorphous tip, with radii typically 2 or 3 $\sigma$ larger than predicted. 
The deviation becomes most pronounced at negative loads, where the amorphous tip remains in contact well below the predicted pulloff force. Figure \ref{fig:r05rofN}(a) compares the value of $r_b$ to the JKR prediction for contact radius. As found by Greenwood \cite{greenwood97}, the numerical results are close to JKR at large loads, but deviate at negative loads because M-D and JKR predict different pulloff forces. Since JKR assumes a singular adhesive stress at the radius of the contact, it seems natural that its predictions lie closest to the position of the peak tensile stress. Figure \ref{fig:e05rofN} shows the calculated radii for bent and amorphous tips with the same interaction energy $\epsilon_i/\epsilon = 0.5$. The small changes in tip geometry lead to a roughly fourfold variation in both $w$ and $N_c$ (Table I). The largest $w$ is obtained for commensurate tips because all atoms can optimize their binding coherently. Atoms on incommensurate tips sample all positions relative to the substrate, and cannot simultaneously optimize the binding energy. The larger height fluctuations on amorphous tips lead to even greater reductions in $w$. \begin{figure} \begin{center} \includegraphics[width=8cm]{e05rofN.eps} \caption{ Contact radii as a function of normal load for (a) bent commensurate, (b) bent incommensurate, and (c) amorphous tips with $\epsilon_i =0.5 \epsilon$. While all have the same interaction strength, the work of adhesion varies by more than a factor of four. Broken lines show M-D predictions for $r_a$ (triangles) and $r_c$ (circles) with the indicated values of $w$ and $h_0=0.5\ \sigma$. The value of $r_b$ (squares) is closest to the JKR result (solid line). Deviations from the continuum predictions show the same trends as in Fig. \ref{fig:r05rofN}. Numerical uncertainty in radii is comparable to the symbol size.
} \label{fig:e05rofN} \end{center} \end{figure} In the simulations, the pulloff force, $N_c$, corresponds to the most negative load where the surfaces remain in contact and the $r_i$ can be defined. Its normalized value, $N_c/\pi w R$, is equal to -3/2 in JKR theory, -2 in DMT theory, and lies between these values for M-D theory. Table I shows normalized results for various tips. As expected from the good agreement in Figs. \ref{fig:r05rofN} and \ref{fig:e05rofN}, results for bent tips lie between JKR and DMT values and can be fit with M-D theory. The values for stepped and amorphous tips lie outside the bounds of M-D theory. This is an important result because pulloff forces are frequently used to measure the work of adhesion. Based on continuum theory, one would expect that the uncertainty in such measurements is less than the difference between JKR and DMT predictions. Our results show factor of two deviations for stepped tips, which may be common in scanning probe experiments. Other simulations showed that the stepped tip values are strongly dependent on the size of the first terrace, as well as any tilt of the terraces or incommensurability. It might seem surprising that the stepped tip has a smaller pulloff force than the bent tip, because the entire first terrace can adhere without any elastic energy penalty. However this effect is overcome by the greater contact size for bent tips: The radius of the first terrace, $r_t \sim 6\ \sigma$, is smaller than the values of $r_b$ and $r_c$ at pulloff for bent tips. As the adhesion is decreased, the predicted contact size at pulloff will drop below $r_t$ and the stepped tip may then have a larger pulloff force than predicted. This limit can also be reached by increasing the width of the first terrace. For a tip that lies entirely within a sphere of radius $R$, $r_t^2 < R^2-(R-d)^2\approx 2dR$ where $d$ is the layer spacing in the crystal. 
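Both the normalized pulloff forces of Table I and the terrace bound above are simple arithmetic. A sketch reproducing them, where the layer spacing $d$ is a placeholder value (it is not quoted in this section):

```python
import math

R = 100.0  # tip radius in sigma

# Normalized pulloff force |N_c|/(pi w R); (w, |N_c|) pairs from Table I.
for tip, w, Nc in [("commensurate", 0.46, 256.0),
                   ("incommensurate", 0.46, 258.0),
                   ("amorphous", 0.46, 326.0),
                   ("stepped", 0.46, 104.0)]:
    print(f"{tip:>14s}: |N_c|/(pi w R) = {Nc / (math.pi * w * R):.2f}")

# Geometric bound on the first-terrace radius: r_t^2 < R^2 - (R - d)^2.
d = 0.7  # placeholder layer spacing in sigma (not quoted in this section)
r_t_max = math.sqrt(R ** 2 - (R - d) ** 2)  # ~ sqrt(2 d R) for d << R
```

JKR and DMT bracket the continuum range at 1.5 and 2.0, which makes the stepped-tip value stand out immediately.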
For our geometry this corresponds to $r_t < 12\ \sigma$, which is about twice the value for our tip. As noted above, terraces with even smaller radii may be preferentially selected for imaging studies and will lead to lower $|N_c|$. As $r_{\rm cut}$ increases, it becomes harder for the M-D model to simultaneously fit both the radii and the pulloff force. Figure \ref{fig:cutoff22} shows simulation data for a bent commensurate tip with $r_{\rm cut}=2.2\ \sigma$. Using the value of $h_0=1.2\ \sigma$ (Fig. \ref{fig:cutoff}) reproduces all radii fairly well at large loads, but gives a substantially different pulloff force, $-906\ \epsilon/\sigma$ instead of $-872\ \epsilon/\sigma$. Decreasing $h_0$ to $0.8\ \sigma$ fits the pulloff force, and improves the fit to $r_a$ at low $N$. However, the predicted values for $r_a$ at large $N$ are slightly too high and the values for $r_c$ are shifted far below ($\sim 2-3\ \sigma$) simulation data. This failure is not surprising given the crude approximation for adhesive interactions in M-D theory. As might be expected, the best values for the pulloff force are obtained by fitting the region near the peak in the force, rather than the weak tail (Fig. \ref{fig:cutoff}). \begin{figure} \begin{center} \includegraphics[width=8cm]{cutoff22.eps} \caption{ Fits of M-D theory (broken lines) to $r_c$ (squares) and $r_a$ (circles), and of JKR theory (solid line) to $r_b$ (triangles) for a bent commensurate tip with $r_{\rm cut}=2.2\ \sigma$. For $h_0 =1.2\ \sigma$ (dashed line) M-D theory fits both $r_c$ and $r_a$ at large loads, but the magnitude of the pulloff force is too large. Decreasing $h_0$ to 0.8$\ \sigma$ (dotted lines) gives excellent agreement with the pulloff force, but not the radii (Fig. \ref{fig:cutoff}). Numerical uncertainty in radii is comparable to the symbol size. 
} \label{fig:cutoff22} \end{center} \end{figure} It should be noted that for bent crystalline and amorphous tips, all of our results for $r$ can be fit accurately to M-D theory if $E^*$, $w$, and $R$ are taken as adjustable parameters. The typical magnitude of adjustments is 10 to 30\%, which is comparable to typical experimental uncertainties for these quantities in nanoscale experiments. Indeed one of the common goals of these experiments is to determine any scale dependence in continuum parameters. For this reason it would be difficult to test continuum theory in scanning probe experiments even if $r$ could be measured directly. Experiments can more easily access the variation of normal displacement with load. This requires subtracting the height change due to the normal compliance of the machine controlling the tip, which is difficult for standard AFM cantilevers, but possible in stiffer devices \cite{jarvis93,kiely98,asif01}. The absolute zero of $\delta$ is not fixed, but must be fitted to theory. Figure \ref{fig:r05dofN} shows two measures of the normal displacement in our simulations. One, $\delta_{\rm tip}$, corresponds to the experimentally accessible tip displacement. We associate the zero of $\delta_{\rm tip}$ with the point where the first substrate atom exerts a repulsive force. The second, $\delta_{\rm sur}$, is the actual depression of the lowest substrate atom on the surface relative to the undeformed surface. The two differ because of the interfacial compliance normal to the surface, which is assumed to vanish in continuum theory. \begin{figure} \begin{center} \includegraphics[width=8cm]{r05dofN.eps} \caption{ Normal displacement as a function of external load measured from (a) the depression of the lowest substrate atom $\delta_{\rm sur}$ and (b) the displacement of the tip relative to the height where the first substrate atom exerts a repulsive force $\delta_{\rm tip}$. The tips are the same as in Fig. \ref{fig:r05rofN}. 
The JKR prediction is indicated by dotted lines, the M-D prediction by solid lines, and the JG extension for $r_a=0$ with dashed lines. All are corrected for the finite dimensions of the substrate as described in the text, and uncertainties are comparable to the symbol size. The difference $\Delta \delta \equiv \delta_{\rm tip}-\delta_{\rm sur}$ reflects the interfacial compliance, and is shown for the bent incommensurate tip in the inset. } \label{fig:r05dofN} \end{center} \end{figure} The simulation results for $\delta$ are more sensitive to the finite sample depth $D$ and lateral periodicity $L $ than other quantities. To account for this, the predictions of M-D and JKR theory are shifted by the leading analytic corrections in $a/D$ \cite{johnson01,sridhar04,adams06}: \begin{equation} \delta = \delta_{H}(1-b\frac{a}{D})+\delta_{\rm adhesion}(1-d\frac{a}{D}) \, , \label{eq:depth} \end{equation} where $\delta_H$ is the Hertz prediction (Eq. (\ref{eq:hertzd})), $b$ and $d$ are fit parameters, and $\delta_{\rm adhesion} =-\sqrt{2w\pi a /{E}^*} $ for JKR theory and $-(2\sigma_0/E^*)\sqrt{c^2- a^2}$ for M-D theory. We obtained $b=0.8$ from simulations with dense, non-adhesive tips (Fig. \ref{fig:hertz}) and $d=0.3$ from simulations with dense, adhesive tips. Results for dense tips are then indistinguishable from the fit lines in Figs. \ref{fig:hertz}(a) and \ref{fig:r05dofN}. Values of $b$ and $d$ from numerical studies of continuum equations are of the same order \cite{johnson01,sridhar04,adams06}, but to our knowledge these calculations have not considered our geometry where $L\approx D$. With the finite-size corrections, continuum results for $\delta$ lie very close to the simulation data for bent tips. Agreement is best for $\delta_{\rm sur}$ because neither it nor continuum theory include the interfacial compliance. The choice of zero for $\delta_{\rm tip}$ is not precisely consistent with M-D theory. 
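The JKR form of the finite-size correction in Eq. (\ref{eq:depth}) can be coded directly. In the sketch below only $b=0.8$, $d=0.3$, $R$ and $w$ correspond to quoted values; the Hertz displacement $\delta_H = a^2/R$ is assumed for Eq. (\ref{eq:hertzd}), which lies outside this excerpt, and the remaining inputs are placeholders:

```python
import math

def delta_corrected(a, R, Estar, w, D, b=0.8, d=0.3):
    """JKR form of the finite-depth correction:
    delta = delta_H (1 - b a/D) + delta_adh (1 - d a/D),
    assuming delta_H = a^2/R and delta_adh = -sqrt(2 pi w a / E*)."""
    delta_H = a * a / R
    delta_adh = -math.sqrt(2.0 * math.pi * w * a / Estar)
    return delta_H * (1.0 - b * a / D) + delta_adh * (1.0 - d * a / D)

# Placeholder numerical inputs in reduced LJ units.
print(delta_corrected(a=10.0, R=100.0, Estar=55.0, w=0.46, D=100.0))
```

As $D \rightarrow \infty$ the correction factors approach unity and the uncorrected continuum prediction is recovered.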
The first repulsive interaction would correspond to the first non-zero $r_a$, which occurs at a slightly negative $\delta$ in M-D theory. For the parameters used here, the shift in $\delta$ is about 0.24 $\sigma$, which is roughly twice the discrepancy between $\delta_{\rm tip}$ and M-D theory at large loads. This implies that the interfacial compliance produces a comparable correction. The effects of interfacial compliance are largest for the most negative values of $\delta_{\rm tip}$. Here $\delta_{\rm tip}$ decreases monotonically as $N$ increases to zero. In contrast, $\delta_{\rm sur}$ is flat and then increases back towards zero. In this regime, $r_a =0$ and the only interaction comes from the attractive tail of the potential. The net adhesive force gradually decreases ($N$ increases) as the magnitude of the separation between tip and substrate, $\Delta \delta \equiv \delta_{\rm tip}-\delta_{\rm sur}$, increases (inset). Most of this change occurs in $\delta_{\rm tip}$, while the displacement of the surface relaxes back to zero as the attractive force on it goes to zero. The JG extension to M-D theory is expressed in terms of $\delta_{\rm tip}$ and provides a good description of its change with $N$ (dashed line), even with the assumption of a constant attractive $\sigma_0$. For amorphous tips, both values of $\delta$ are substantially above M-D theory. The shifts are bigger than in the non-adhesive case, about $0.35\ \sigma$ vs. $0.25\ \sigma$. This is correlated with the larger-than-predicted pulloff force. Based on the increase in $|N_c|$, the effective work of adhesion appears to be larger by about 30\% than that measured for a flat surface. The stronger adhesive contribution to the total normal force leads to a larger value of $\delta$. One may understand these changes in terms of the effect of surface roughness. Atomic-scale roughness on a nominally flat surface prevents many atoms from optimizing their binding energy.
As $\delta$ and the contact area shrink, the long-wavelength height fluctuations become irrelevant and no longer prevent the few atoms remaining in the contact from adhering. Thus, while the large-load values of $\delta$ can be fit to continuum predictions with the measured $w$ and a simple shift in the origin of $\delta$, the small $N$ values correspond to a larger work of adhesion. The magnitude of the increase ($\sim 30$\%) is modest given that the incommensurate tip has about twice as large a $w$ as the amorphous tip for the same interaction energy $\epsilon_i$. The data for stepped tips are qualitatively different from the others. As for the non-adhesive case, $\delta$ is lower than the continuum prediction at large loads, because the flat tip is harder to push into the substrate. The deviation increases rapidly at negative loads, with a sharp drop near $N=0$ where the contact shrinks to the first terrace. As noted above, the radius of the first terrace is smaller than the radius at pulloff predicted by continuum theory, and the pulloff force is less than half the predicted value. \subsection{Friction and lateral stiffness} \label{sec:adherefriction} Scanning probe microscopes can most easily detect friction forces and the lateral compliance of the tip/substrate system \cite{carpick97,carpick97b}. Figure \ref{fig:r05FofN} shows $F$ as a function of load for five different adhesive tips. As for non-adhesive tips, tip geometry has a much larger effect on $F$ than other quantities, and values for bent commensurate and incommensurate tips differ by two orders of magnitude. \begin{figure} \begin{center} \includegraphics[width=8cm]{fvsn.eps} \caption{ (Color online) Static friction $F$ vs. load $N$ for the indicated tip geometries with the same systems as in Fig. \ref{fig:r05rofN}. Attempts to fit the data by assuming $F$ is proportional to the area predicted by JKR theory and the interpolation scheme of Ref.
\cite{schwarz03} are shown by dotted and solid lines, respectively. Dashed lines are separate linear fits for the stepped tip for the cases where one or two terraces are in repulsive contact. Numerical uncertainties are comparable to the symbol size. } \label{fig:r05FofN} \end{center} \end{figure} Since the friction was measured at constant load (Sec. \ref{sec:method}), only values in the stable regime, $d\delta_{\rm tip}/dN >0 $ could be obtained. Even in this regime, we found that the tip tended to detach at $N > N_c$. This was particularly pronounced for commensurate tips. Indeed the bent and stepped commensurate tips detached after the first peak in the friction force for all the negative loads shown in Fig. \ref{fig:r05FofN}(a). At loads closer to the pulloff force, detachment occurred at even smaller lateral displacements. As noted above, bent commensurate tips have the strongest adhesion energy because all atoms can simultaneously optimize their binding. For the same reason, the adhesion energy changes rapidly as atoms are displaced laterally away from the optimum position, allowing pulloff above the expected $N_c$. The extent of the change depends on the sliding direction and registry \cite{harrison99,muser03acp,muser04}. We consider sliding in the (100) direction (Sec. \ref{sec:method}), where atoms move from points centered between substrate atoms towards points directly over substrate atoms. This greatly reduces the binding energy, leading to detachment at less than half the pulloff force. Note that changes in binding energy with lateral displacement lead directly to a lateral friction force \cite{muser03acp} and the bent and stepped commensurate tips also have the highest friction. We suspect that tips with higher friction may generally have a tendency to detach farther above $N_c$ during sliding than other tips. At the macro scale, friction is usually assumed to be directly proportional to load. 
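The two limiting assumptions can be contrasted with through-origin least-squares fits. The data below are fabricated purely to illustrate the procedure (only $\mu = 0.63$ echoes the commensurate value quoted earlier):

```python
import numpy as np

# Fabricated friction-load data obeying Amontons' law with mu = 0.63.
N = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
F = 0.63 * N

# Through-origin least-squares slopes for the two competing laws.
mu = np.sum(F * N) / np.sum(N ** 2)                          # F = mu N
c = np.sum(F * N ** (2.0 / 3.0)) / np.sum(N ** (4.0 / 3.0))  # F = c N^(2/3)
```

For data that truly follow one law, the other fit leaves systematic residuals, and comparing the residual sums of the two fits is the usual way to discriminate between them.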
All the tips have substantial friction at zero load, due to adhesion. The friction force also varies non-linearly with $N$, showing discrete jumps for the stepped tip, and curvature for the other tips, particularly near $N_c$. The curvature at small $N$ is reminiscent of the dependence of radius on load. Several authors have fit AFM friction data assuming that $F$ scales with contact area, and using a continuum model to determine the area as a function of load \cite{carpick97,carpick97b,pietrement01,lantz97,carpick04,carpick04b,schwarz97,schwarz97b}. The dotted lines in Fig. \ref{fig:r05FofN} show attempts to fit $F$ using the area from JKR theory with $w$ adjusted to fit the pulloff force and the proportionality constant chosen to fit the friction at high loads. None of the data is well fit by this approach. The solid lines show fits to an expression suggested by Schwarz \cite{schwarz03} that allows one to simply interpolate between DMT and JKR limits as a parameter $t_1$ is increased from 0 to 1. Reasonable fits are possible for non-stepped tips when this extra degree of freedom is allowed. Values of the friction per unit area are within 25\% of those determined below from the true area (Table II), but the values of $w$ and $t_1$ do not correspond to any real microscopic parameters. For example, for the three non-stepped tips where a direct measurement gives $w=0.46\ \epsilon/\sigma^2$, the fits give $w=0.41\ \epsilon/\sigma^2$, $0.55\ \epsilon/\sigma^2$ and $0.94\ \epsilon/\sigma^2$ for commensurate, amorphous and incommensurate tips, respectively. The value of $t_1$ varies from 0.1 to 0.5 to 1.5, respectively. Note that the value of $t_1=1.5$ for incommensurate tips is outside the physical range. In this case, and some experiments \cite{carpick04}, the friction rises more slowly than the JKR prediction, while M-D theory and simpler interpolation schemes \cite{schwarz03,carpick99} always give a steeper rise. 
Such data seem inconsistent with $F$ scaling with area. \begin{table} \label{table:stress} \caption {Frictional stresses $\tau_f$ and apparent moduli $G'$ evaluated from the derivatives of fits in Figs. \ref{fig:r05Fofr} and \ref{fig:r05kofr}, respectively, as a function of tip geometry and work of adhesion $w$. Values of $G'$ are less than the effective shear modulus of the substrate, $G^* = 18.3\ \epsilon/\sigma^3$, and frictional stresses are much smaller than expected from continuum expressions (Fig. \ref{fig:r05ratofN}). Statistical errorbars are indicated in parentheses. } \begin{tabular}{|l|c|c|c|} \hline \hline Tip geometry & $w (\epsilon \sigma^{-2}) $ &$\tau_f (\epsilon \sigma^{-3})$& $G'(\epsilon \sigma^{-3})$ \\ \hline Commensurate & 0 & 1.35(7) & 13.7(4) \\ \hline & 0.46 & 1.82(4) & 12.8(3) \\ \hline Incommensurate & 0 & 0.0044(7) & 0.70(5) \\ \hline & 0.46 & 0.0151(8) & 1.0(3) \\ \hline Amorphous & 0 & 0.056(4) & 1.53(5) \\ \hline & 0.46 & 0.24(2) & 4.3(6) \\ \hline \end{tabular} \end{table} Our simulations allow us to test the relationship between $F$ and area without any fit parameters. However, it is not obvious which radius should be used to determine area. Figure \ref{fig:r05Fofr} shows friction plotted against $r_a^2$, $r_b^2$ and $r_c^2$. The stepped tip is not shown, since only $r_c$ is easily defined and it increases in one discontinuous jump over the range studied. For all other tips the friction is remarkably linear when plotted against any choice of radius squared. In contrast, plots of $N$ vs. $r^2$ show significant curvature. \begin{figure} \begin{center} \includegraphics[width=8cm]{fvsrn.eps} \caption{ (Color online) Static friction $F$ plotted against radius squared for the indicated tip geometries with same parameters as in Fig. \ref{fig:r05rofN}. Values of $r_a^2$ (filled circles), $r_b^2$ (open squares), and $r_c^2$ (filled triangles) are shown for adhesive tips. Open circles show $F$ vs. 
$a^2$ for non-adhesive tips with data in (b) and (c) multiplied by a factor of two for clarity. Dashed lines show unconstrained linear fits to each data set. Numerical uncertainties are comparable to the symbol size. } \label{fig:r05Fofr} \end{center} \end{figure} For $F$ to be proportional to area, the curves should also pass through the origin. This condition is most closely met by $r_a^2$, except for the incommensurate case. The idea that friction should be proportional to the area where atoms are pushed into repulsion seems natural. The other radii include attractive regions where the surfaces may separate far enough that the variation of force with lateral displacement is greatly reduced or may even change phase. However, in some cases the extrapolated friction remains finite as $r_a$ goes to zero and in others it appears to vanish at finite $r_a$. Also shown in Fig. \ref{fig:r05Fofr} are results for non-adhesive tips (open circles). Values for incommensurate and amorphous tips were doubled to make the linearity of the data more apparent. Only the bent commensurate data shows significant curvature, primarily at small $N$. As noted in Sec. \ref{sec:nonadhere}, friction is proportional to load for this tip, and the curvature is consistent with this and $a^2 \sim N^{2/3}$. No curvature is visible when adhesion is added, but the fact that the linear fit extrapolates to $F=0$ at $r_a/\sigma \sim 4$ suggests that the linearity might break down if data could be obtained at lower loads. Results for adhesive and non-adhesive amorphous tips can be fit by lines through the origin (not shown), within numerical uncertainty. The same applies for non-adhesive bent incommensurate tips, but adding adhesion shifts the intercept to a positive force at $r_a=0$. It would be interesting to determine whether this extrapolation is valid. 
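The unconstrained linear fits discussed above amount to a least-squares regression of $F$ against $\pi r_a^2$. A minimal sketch with synthetic data follows; the slope, offset, and noise level are invented for illustration and are not the measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic friction data: linear in the repulsive area pi*r_a^2 but with a
# negative offset, so the fit does not pass through the origin. The values of
# tau_f_true and offset are illustrative only.
tau_f_true, offset = 0.015, -0.8
r_a = np.linspace(4.0, 20.0, 15)
area = np.pi * r_a**2
F = tau_f_true * area + offset + rng.normal(0.0, 0.05, r_a.size)

# Unconstrained fit F = slope*area + intercept.
slope, intercept = np.polyfit(area, F, 1)

# Radius at which the fit extrapolates to F = 0; a nonzero intercept makes
# this finite even though the underlying slope is recovered accurately.
r_zero = np.sqrt(-intercept / (np.pi * slope))
```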
Friction can be observed with purely adhesive interactions, since the only requirement is that the magnitude of the energy varies with lateral displacement \cite{muser03acp}. However, it may also be that the friction on incommensurate tips curves rapidly to zero as $r_a \rightarrow 0$. Indeed one might expect that for very small contacts the tip should behave more like a commensurate tip, leading to a more rapid change of $F$ with area or load. The only way to access smaller $r_a$ is to control the tip height instead of the normal load. This is known to affect the measured friction force \cite{muser04}, and most experiments are not stiff enough to access this regime. Figure \ref{fig:r05kofr} shows the lateral stiffness as a function of the three characteristic radii. Except for the stepped tip (not shown), $k$ rises linearly with each of the radii. As for non-adhesive tips, the slope is much smaller than the value $8G^*$ predicted by continuum theory (solid lines) because of the interfacial compliance (Eq. (\ref{eq:stiff})). The intercept is also generally nonzero, although it comes closest to the origin for $r_a$. As for friction, it seems that the repulsive regions produce the dominant contribution to the stiffness. \begin{figure} \begin{center} \includegraphics[width=7cm]{kvsr.eps} \caption{ (Color online) Lateral stiffness $k$ plotted against radius for the indicated tip geometries with the same parameters as in Fig. \ref{fig:r05rofN}. Values of $r_a$ (filled circles), $r_b$ (open squares), and $r_c$ (filled triangles) are shown for adhesive tips. Open circles show $k$ vs. $a$ for non-adhesive tips. Broken lines show unconstrained linear fits to each data set, and the solid line indicates the slope predicted by continuum theory. Numerical uncertainties are comparable to the symbol size, except for the adhesive data in (b) where they may be up to 50\%. 
} \label{fig:r05kofr} \end{center} \end{figure} Results for non-adhesive tips are also included in Fig. \ref{fig:r05kofr}. They also are linear over the whole range, and the fits reach $k=0$ at a finite radius $a_k$. This lack of any stiffness between contacting surfaces seems surprising. Note however that linear fits to Fig. \ref{fig:hertz}(b) also would suggest that the radius approached a non-zero value, $a_0$, in the limit of zero load. Moreover, the values of $a_0$ and $a_k$ follow the same trends with tip geometry and have similar sizes. The finite values of $a_0$ and $a_k$ can be understood from the finite range of the repulsive part of the interaction. As long as atoms are separated by less than $r_{\rm cut}$ there is a finite interaction and the atoms are considered inside $a$. However, the force falls rapidly with separation, and atoms near this outer limit contribute little to the friction and stiffness. If $\delta h$ is the distance the separation must decrease to get a significant interaction, then $a_0 = (2R\delta h)^{1/2}$ at the point where the first significant force is felt. Taking the estimate of $\delta h=0.04\ \sigma$ from Sec. \ref{sec:nonadhere}, then $a_0 \sim 3\ \sigma$, which is comparable to the observed values of $a_0$ and $a_k$. The shift is smaller for the adhesive case because there are still strong interactions when $r_a$ goes to zero. The larger shifts for the amorphous tip may reflect roughness, since the first point to contact need not be at the origin. We conclude that the linear fits in Fig. \ref{fig:r05kofr} go to zero at finite radius, because $r_a$ overestimates the size of the region that makes significant contributions to forces, particularly for non-adhesive tips. Note that the plots for friction with non-adhesive tips (Fig. \ref{fig:r05Fofr}) are also consistent with an offset, but that the offset appears much smaller when plotted as radius squared. The slope of the curves in Fig. 
\ref{fig:r05Fofr} can be used to define a differential friction force per unit area or yield stress $\tau_f \equiv \partial F/ \partial (\pi r_a^2)$. It is interesting to compare the magnitude of these values (Table II) to the bulk yield stress of the substrate $\tau_y$. Assuming Lennard-Jones interactions, the ideal yield stress of an fcc crystal in the same shearing direction is 4 to 10 $\epsilon \sigma^{-3}$, depending on whether the normal load or volume is held fixed. The commensurate tip is closest to a continuation of the sample, and the force on all atoms adds coherently. As a result $\tau_f$ is of the same order as $\tau_y$, even for the non-adhesive tip. Values for adhesive amorphous and incommensurate tips are about one and two orders of magnitude smaller than $\tau_y$, respectively. This reflects the fact that the tip atoms cannot optimize their registry with the substrate. Removing adhesive interactions reduces $\tau_f$ by an additional factor of about four in both cases. In continuum theory, $k=8G^* r$, and the slope of fits in Fig. \ref{fig:r05kofr} could then be used to determine the effective shear modulus $G^*$. However, as noted above, the interfacial compliance leads to much lower stiffnesses. To illustrate the magnitude of the change we quote values of $G' \equiv (1/8) \partial k /\partial r$ in Table II. All values are below the true shear modulus $G^* =18.3\ \epsilon/\sigma^3$ obtained from the substrate compliance alone (Fig. \ref{fig:hertz}). As always, results for bent commensurate tips come closest to continuum theory with $G'/G^* \sim 0.7$. Values for adhesive amorphous and incommensurate tips are depressed by factors of 3 and 20, respectively, and removing adhesion suppresses the value for amorphous tips by another factor of four. Carpick et al. noted that if friction scales with area and $k$ with radius, then the ratio $F/k^2$ should be constant \cite{carpick97thesis,carpick97b,carpick04,carpick04b}. 
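The constancy of $F/k^2$ follows directly from the two continuum scalings: if $F=\tau_f \pi r^2$ and $k=8G^* r$, eliminating the contact radius gives
\begin{equation}
F \;=\; \tau_f \pi \left(\frac{k}{8G^*}\right)^{2} \;=\; \frac{\pi \tau_f}{64 {G^*}^2}\, k^2 ,
\end{equation}
so $F/k^2$ is independent of load as long as $\tau_f$ is.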
Defining the frictional force per unit area as $\tau_f$ and using the expression for $k$ from continuum theory, one finds $64 {G^*}^2 F/\pi k^2 = \tau_f$. In principle, this allows continuum predictions to be checked and $\tau_f$ to be determined without direct measurement of contact size. Figure \ref{fig:r05ratofN} shows: \begin{equation} \tau_f^{\rm eff} \equiv 64 (G^*)^2 F/\pi k^2 \label{eq:taueff} \end{equation} as a function of $N$ for different tip geometries and interactions. Except for the non-adhesive stepped tip, the value of $F/k^2$ is fairly constant at large loads, within our numerical accuracy. Some of the curves rise at small loads because the radius at which $F$ reaches zero in Fig. \ref{fig:r05Fofr} tends to be smaller than that where $k$ reaches zero in Fig. \ref{fig:r05kofr}. These small radii are where continuum theory would be expected to be least accurate. Note that the deviations are larger for the non-adhesive tips, perhaps because the data extends to smaller radii. \begin{figure} \begin{center} \includegraphics[width=7cm]{ratvsn.eps} \caption{ (Color online) Ratio of friction to stiffness squared as a function of load for the indicated tip geometries (a) without and (b) with adhesion ($w=0.46\ \epsilon/\sigma^2$). Lines are guides to the eye. Numerical uncertainties are comparable to the symbol size, except for the bent incommensurate data in (b), where they may be as large as 50\%. } \label{fig:r05ratofN} \end{center} \end{figure} The data for stepped tips are of particular interest because the contact radius jumps in one discrete step from the radius of the first terrace to the radius of the second. The friction and stiffness also show discontinuous jumps. Nonetheless, the ratio $F/k^2$ varies rather smoothly and even has numerical values close to those for other tips. The most noticeable difference is that the data for the nonadhesive stepped tip rises linearly with load, while all other tips tend to a constant at high load. 
These results clearly demonstrate that success at fitting derived quantities like $F$ and $k$ need not imply that the true contact area is following continuum theory. The curves for $\tau_f^{\rm eff}$ in Fig. \ref{fig:r05ratofN} are all much higher than values of the frictional stress $\tau_f$ obtained directly from the friction and area (Table II). Even the trends with tip structure are different. The directly measured frictional stress decreases from bent commensurate to amorphous to bent incommensurate, while $\tau_f^{\rm eff}$ is largest for the amorphous and smallest for the bent commensurate tip. These deviations from the continuum relation are directly related to the interfacial compliance $k_{\rm i}$. The continuum expression for the lateral stiffness neglects $k_{\rm i}$ and gives too small a radius at each load. This in turn over-estimates the frictional stress by up to two orders of magnitude. Similar effects are likely to occur in experimental data. Experimental plots of $F/k^2$ have been obtained for silicon-nitride tips on mica and sodium chloride \cite{carpick97thesis,carpick04,carpick04b} and on carbon fibers \cite{pietrement01}. Data for carbon fibers and mica in air showed a rapid rise with decreasing $N$ at low loads \cite{pietrement01,carpick97thesis}. For mica the increase is almost an order of magnitude, which is comparable to our results for non-adhesive bent incommensurate tips. This correspondence may seem surprising given that the experiments measured adhesion in air. However, the adhesive force was mainly from long-range capillary forces that operate outside the area of contact. Following DMT theory, they can be treated as a simple additive load that does not affect the contact geometry. In contrast, data for mica in vacuum is well fit by JKR theory, implying a strong adhesion within the contact \cite{carpick04,carpick04b}. The measured value of $F/k^2$ is nearly constant for this system, just as in our results for most adhesive tips. 
Results for carbon fibers in vacuum \cite{pietrement01} show a linear rise like that seen for nonadhesive stepped tips. From a continuum analysis of the carbon fiber data, the frictional stress was estimated to be $\tau_f \sim 300$ MPa assuming a bulk shear modulus of $G^*=9.5$ GPa \cite{pietrement01}. Note that Fig. \ref{fig:r05ratofN} would suggest $\tau_f/G^* \sim 0.1$ to 0.3, while the true values (Table II) are as low as $0.0002$. The data on carbon fibers could be fit with the bulk shear modulus, but data on mica and NaCl \cite{carpick97thesis,carpick04} indicated that $G^*$ was 3 to 6 times smaller than bulk values. Our results show that the interfacial compliance can easily lead to reductions of this magnitude and a corresponding increase in $\tau_f$, and that care must be taken in interpreting experiments with continuum models. \section{Discussion and Conclusion} \label{sec:conclusions} The results described above show that many different effects can lead to deviations between atomistic behavior and continuum theory and quantify how they depend on tip geometry for simple interaction potentials (Fig. \ref{fig:tips}). In general, the smallest deviations are observed for the idealized model of a dense tip whose atoms form a nearly continuous sphere, although this tip has nearly zero friction and lateral stiffness. Deviations increase as the geometry is varied from a bent commensurate to a bent incommensurate to an amorphous tip, and stepped tips exhibit qualitatively different behavior. Tip geometry has the smallest effect on the normal displacement and normal stiffness (Figs. \ref{fig:hertz} and \ref{fig:r05dofN}) because they reflect an average response of the entire contact. Friction and lateral stiffness are most affected (Figs. \ref{fig:hertz} and \ref{fig:r05FofN}), because they depend on the detailed lateral interlocking of atoms at the interface. One difference between simulations and continuum theory is that the interface has a finite normal compliance. 
Any realistic interaction leads to a gradual increase in repulsion as the separation decreases, rather than the abrupt onset of an idealized hard-wall interaction. In our simulations the effective range over which interactions increase is only about 4\% of the atomic spacing, yet it impacts results in several ways. For bent commensurate tips it leads to an increase in pressure in the center of the contact (Figs. \ref{fig:hertzpofr} and \ref{fig:cutoff}). The pressure at the edge of nonadhesive contacts drops linearly over about $2\ \sigma$, while continuum theory predicts a diverging slope. The width of this smearing grows as the square root of the tip radius and leads to qualitative changes in the probability distribution of local pressures \cite{persson01,hyun04,pei05,luan05mrs}. These effects could be studied in continuum theories with soft-wall interactions. The normal interfacial compliance also leads to the offset in linear fits of $F$ vs. $r_a^2$ and $k$ vs. $r_a$ (Figs. \ref{fig:r05Fofr} and \ref{fig:r05kofr}). Fits to the friction and lateral stiffness extrapolate to zero at finite values of $r_a$ because atoms at the outer edge of the repulsive range contribute to $r_a$ but interact too weakly to contribute substantially to $F$ and $k$. This effect is largest for non-adhesive tips. Approximating a spherical surface by discrete atoms necessarily introduces some surface roughness. Even bent crystalline tips have atomic scale corrugations, reflecting the variation in interaction as tip atoms move from sites nestled between substrate atoms to sites directly above. Amorphous and stepped tips have longer wavelength roughness associated with their random or layered structures, respectively. This longer wavelength roughness has a greater effect on the contacts. For non-adhesive interactions, incommensurate and amorphous tips have a lower central pressure and wider contact radius than predicted for ideal spheres. 
These changes are qualitatively consistent with continuum calculations for spheres with random surface roughness \cite{johnson85}. However, the effective magnitude of the rms roughness $\Delta$ is smaller than expected from the atomic positions. The correlated deviations from a sphere on stepped tips lead to qualitative changes in the pressure distribution on the surface (Figs. \ref{fig:hertzpofr} and \ref{fig:r05pofr}). However, these changes are also qualitatively consistent with what continuum mechanics would predict for the true tip geometry, which is closer to a flat punch than a sphere. We conclude that the usual approximation of characterizing tips by a single spherical radius is likely to lead to substantial errors in calculated properties. Including the true tip geometry in continuum calculations would improve their ability to describe nanometer scale behavior. Unfortunately this is rarely done, and the atomic-scale tip geometry is rarely measured. Recent studies of larger tips and larger scale roughness are an interesting step in this direction \cite{thoreson06}. Roughness also has a strong influence on the work of adhesion $w$ (Table I). Values of $w$ were determined independently from interactions between nominally flat surfaces. For a given interaction strength, commensurate surfaces have the highest $w$, because each atom can optimize its binding simultaneously. The mismatch of lattice constants in incommensurate geometries lowers $w$ by a factor of two, and an additional factor of two drop is caused by the small ($\Delta \sim 0.3\ \sigma$) height fluctuations on amorphous surfaces. In continuum theory, these changes in $w$ should produce nearly proportional changes in pulloff force $N_c$, and tips with the same $w$ and $h_0$ should have the same $N_c$. Measured values of $N_c$ differ from these predictions by up to a factor of two. 
It is particularly significant that the dimensionless pulloff force for amorphous and stepped tips lies outside the limits provided by JKR and DMT theory. Experimentalists often assume that these bounds place tight limits on errors in inferred values of $w$. In the case of amorphous tips the magnitude of $N_c$ is 30\% higher than expected. The higher than expected adhesion in small contacts may reflect a decrease in effective roughness because long-wavelength height fluctuations are suppressed. Stepped tips show even larger deviations from continuum theory that are strongly dependent on the size of the first terraces \cite{foot1}. Tips selected for imaging are likely to have the smallest terraces and the largest deviations from continuum theory. Adding adhesion introduces a substantial width to the edge of the contact, ranging from the point where interactions first become attractive $r_a$ to the outer limits of attractive interactions $r_c$ (Fig. \ref{fig:cutoff}). As the range of interactions increases, it becomes increasingly difficult to fit both these characteristic radii and the pulloff force with the simple M-D theory (Fig. \ref{fig:cutoff22}). For short-range interactions, good fits are obtained with the measured $w$ for bent tips. Data for amorphous tips can only be fit by increasing $w$, due to the reduction in effective roughness mentioned above (Fig. \ref{fig:r05rofN}). For stepped tips the contact radius increases in discrete jumps as successive terraces contact the surface. The normal interfacial compliance leads to significant ambiguity in the definition of the normal displacement as a function of load (Fig. \ref{fig:r05dofN}). Continuum theory normally includes only the substrate compliance, while experimental measures of the total tip displacement $\delta_{\rm tip}$ include the interfacial compliance. 
The substrate compliance was isolated by following the displacement of substrate atoms, $\delta_{\rm sur}$, and found to agree well with theory for bent tips in the repulsive regime. Johnson and Greenwood's extension of M-D theory \cite{johnson97} includes the interfacial compliance in the attractive tail of the potential. It provides a good description of $\delta_{\rm tip}$ in the regime where $r_a=0$. Here $\delta_{\rm tip} - \delta_{\rm sur}$ increases to the interaction range $h_0$. Results for amorphous tips show the greater adhesion noted above. Stepped tips follow continuum theory at large loads but are qualitatively different at negative loads. The most profound effects of tip geometry are seen in the lateral stiffness $k$ and friction $F$, which vary by one and two orders of magnitude respectively. Continuum theories for $k$ do not include the lateral interfacial compliance $k_{\rm i}$. This adds in series with the substrate compliance $k_{\rm sub}$ (Eq. (\ref{eq:stiff})). Except for commensurate tips, $k_{\rm i} \ll k_{\rm sub}$ and the interface dominates the total stiffness \cite{luan05}. Experiments have also seen a substantial reduction in the expected lateral stiffness from this effect \cite{socoliuc04,carpick97thesis}. The friction on non-adhesive commensurate tips (bent or stepped) increases linearly with load, as frequently observed for macroscopic objects. In all other cases, $F$ is a nonlinear function of load. Our ability to directly measure contact radii allowed us to show that $F$ scales linearly with contact area for incommensurate, amorphous and adhesive bent commensurate tips. These tips also show a linear scaling of $k$ with radius. While these scalings held for any choice of radius, the linear fits are offset from the origin. It appears that the effective area contributing to friction and stiffness is often a little smaller than the area of repulsive interactions corresponding to $r_a$. 
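The effect of the series interfacial compliance on the apparent modulus can be illustrated with a small numerical sketch. Here $G^*$ is taken from Table II, while the interfacial stiffness per unit area $\kappa_i$ is an assumed illustrative value, not one measured in the simulations.

```python
import numpy as np

G_star = 18.3                      # effective shear modulus, eps/sigma^3 (Table II)
r = np.linspace(2.0, 20.0, 50)     # contact radii in sigma
k_sub = 8.0 * G_star * r           # continuum substrate stiffness, k_sub = 8 G* r

# Assumed interfacial stiffness per unit area (illustrative value only);
# a weakly pinned interface is much softer than the substrate.
kappa_i = 0.5                      # eps/sigma^4, assumed
k_i = kappa_i * np.pi * r**2

# Springs in series: the softer spring dominates the total stiffness.
k_total = 1.0 / (1.0 / k_sub + 1.0 / k_i)

# Fitting k_total vs r as if k = 8 G' r yields an apparent modulus well
# below the true G*, as in Table II.
G_prime = np.polyfit(r, k_total, 1)[0] / 8.0
```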
As noted above, the offset from $r_a$ appears to correspond to the finite range over which repulsive forces rise at the interface. Experimental data \cite{carpick97,carpick97b,pietrement01,lantz97,carpick04,carpick04b,schwarz97,schwarz97b} for friction and stiffness have been fit to continuum theory with the assumptions that $F \propto r^2$ and $k \propto r$, but without the offsets seen in Figs. \ref{fig:r05Fofr} and \ref{fig:r05kofr}. We showed that our data for bent and amorphous tips could be fit in this way (Fig. \ref{fig:r05FofN}), but that the fit parameters did not correspond to directly measured values. This suggests that care should be taken in interpreting data in this manner. We also examined the ratio $F/k^2$. In continuum theory, this is related to the friction per unit area $\tau_f^{\rm eff}$ through Eq. (\ref{eq:taueff}). Our results for $F/k^2$ (Fig. \ref{fig:r05ratofN}) show the range of behaviors observed in experiments, with a relatively constant value for adhesive cases, a rapid increase at low loads in some nonadhesive cases, and a linear rise for non-adhesive stepped tips. The directly measured values of $\tau_f$ (Table II) are smaller than $\tau_f^{\rm eff}$ by up to two orders of magnitude, and have qualitatively different trends with tip geometry. The difference is related to a reduction in the stiffness $k$ due to interfacial compliance. This reduces the inferred value of the shear modulus $G'$ and increases the calculated contact area at any given load. We expect that experimental results for $F/k^2$ will produce similar overestimates of the true interfacial shear stress. It remains unclear why $F$ and $k$ should follow the observed dependence on $r_a$. Analytic arguments for clean, flat surfaces indicate that $F$ is very sensitive to structure, with the forces on commensurate, incommensurate and disordered surfaces scaling as different powers of area \cite{muser01prl,muser03acp}. 
Only when glassy layers are introduced between the surfaces does the friction scale in a universal manner \cite{he99,muser01prl,he01tl,he01b}. Wenning and M\"user \cite{wenning01} have argued that the friction on clean, amorphous tips rises linearly with area because of a cancellation of two factors, but have not considered $k$. Naively, one might expect that the length over which the force rises is a constant fraction of the lattice spacing and that $k$ is proportional to $F$. However, the friction traces change with load and do not always drop to zero between successive peaks. We hope that our results will motivate further analytic studies of this problem, and simulations with glassy films and more realistic potentials. While we have only considered single asperity contacts in this paper, it is likely that the results are relevant more broadly. Many experimental surfaces have random roughness on all scales that can be described by self-affine fractal scaling. Continuum models of contact between such surfaces show that the radius of most contacts is comparable to the lower length scale cutoff in fractal scaling \cite{hyun04,pei05}. This is typically less than a micrometer, suggesting that typical contacts have nanometer scale dimensions where the effects considered here will be relevant. \acknowledgments We thank G. G. Adams, R. W. Carpick, K. L. Johnson, M. H. M\"user and I. Sridhar for useful discussions. This material is based upon work supported by the National Science Foundation under Grants No. DMR-0454947, CMS-0103408, PHY99-07949 and CTS-0320907. \newpage
\section{Introduction} At present, string theory is the only framework for realizing a unification of gravitation with gauge theory and quantum mechanics. In principle, it should be possible to derive all known physics from the string, as well as potentially provide something new and unexpected. This is the goal of string phenomenology. However, there exist many solutions that may be derived from string theory, all of which are consistent vacua. One of these vacua should correspond to our universe, but then the question becomes why this particular vacuum is selected. One possible approach to this state of affairs is to statistically classify the possible vacua, in essence making a topographical map of the \lq landscape\rq. One then attempts to assess the likelihood that vacua with properties similar to ours will arise.\footnote{For example, see~\cite{Blumenhagen:2004xx, Gmeiner:2005vz, Douglas:2006xy, Gmeiner:2006qw, Gmeiner:2006vb,Dienes:2006ut}.} Another approach is to take the point of view that there are unknown dynamics, perhaps involving a departure from criticality, which determine the vacuum that corresponds to our universe. Regardless of the question of uniqueness, if string theory is correct then it should be possible to find a solution which corresponds \textit{exactly} to our universe, at least in its low-energy limit. Although there has been a great deal of progress in constructing semi-realistic models, this has not yet been achieved. An elegant approach to model construction involving Type I orientifold (Type II) compactifications is one where chiral fermions arise from strings stretching between D-branes intersecting at angles (Type IIA picture) \cite{Berkooz:1996km} or in its T-dual (Type IIB) picture with magnetized D-branes \cite{Bachas:1995ik}. 
Many consistent standard-like and grand unified theory (GUT) models have been constructed~\cite{Blumenhagen:2000wh, Aldazabal:2000dg, Angelantonj:2000hi, Ellis:2002ci} using D-brane constructions. The first quasi-realistic supersymmetric models were constructed in Type IIA theory on a $\mathbf{T^6} /({\mathbb Z}_2 \times {\mathbb Z}_2)$ orientifold~\cite{CveticShiuUranga, Cvetic:2001tj}. Following this, models with standard-like, left-right symmetric (Pati-Salam), and unflipped $SU(5)$ gauge groups were constructed based upon the same framework and systematically studied \cite{Cvetic:P:S:L,Cvetic:2002pj, Cvetic:2004nk,Chen:2006gd}. In addition, several different flipped $SU(5)$~\cite{Barr:1981qv,FSU(5)N,AEHN} models have also been built using intersecting D-brane constructions~ \cite{Ellis:2002ci, Chen:2005ab, Axenides:2003hs,Chen:2005cf,Chen:2005mm,Chen:2006ip}.\footnote{For excellent reviews, see~\cite{Blumenhagen:2005mu} and~\cite{Blumenhagen:2006ci}.} Although much progress has been made, none of these models have been completely satisfactory. Problems include extra chiral and non-chiral matter, and the lack of a complete set of Yukawa couplings, which are typically forbidden by global symmetries. In addition to the chiral matter which arises at brane intersections, D-brane constructions will typically have non-chiral open string states present in the low-energy spectrum associated with the D-brane position in the internal space and Wilson lines. This results in adjoint or additional matter in the symmetric and antisymmetric representations unless the open string moduli are completely frozen. These light scalars are not observed and are not present in the MSSM. While it is possible that these moduli will obtain mass after supersymmetry is broken, it would typically be at the TeV scale. While this would make them unobservable in present experiments, the successful gauge unification in the MSSM would be spoiled by their presence. 
While it may be possible to find some scenarios where the problems created by these fields are ameliorated, it is much simpler to eliminate these fields altogether. One way to do this is to construct intersecting D-brane models where the D-branes wrap rigid cycles.\footnote{This possibility was first explored in~\cite{Dudas:2005jx} and~\cite{Blumenhagen:2005tn}.} Another motivation for the absence of these adjoint states is that this is consistent with a $k=1$ Kac-Moody algebra in models constructed from the heterotic string, some of which may be dual. In this letter, we construct an intersecting D-brane model on the ${\mathbb Z}_2 \times {\mathbb Z}_2'$ orientifold background, also known as the ${\mathbb Z}_2 \times {\mathbb Z}_2$ orientifold with discrete torsion, where the D-branes wrap rigid cycles, thus eliminating the extra adjoint matter. This letter is organized as follows: First, we briefly review intersecting D-brane constructions on the ${\mathbb Z}_2 \times {\mathbb Z}_2'$ orientifold. We then proceed to construct a supersymmetric four-generation MSSM-like model obtained from a Pati-Salam model via spontaneous gauge symmetry breaking. All of the required Yukawa couplings are allowed by global symmetries present in the model. We find that the tree-level gauge couplings are unified at the string scale. \section{Intersecting Branes on the ${\mathbb Z}_2 \times {\mathbb Z}_2$ Orientifold with and without Discrete Torsion} The ${\mathbb Z}_2 \times {\mathbb Z}_2$ orientifold has been the subject of extensive research, primarily because it is the simplest background space which allows supersymmetric vacua. We will essentially follow along with the development given in~\cite{Blumenhagen:2005tn}. The first supersymmetric models based upon the ${\mathbb Z}_2 \times {\mathbb Z}_2$ orientifold were explored in ~\cite{CveticShiuUranga,Cvetic:2001tj,Cvetic:P:S:L,Cvetic:2002pj}. 
In Type IIA theory on the $\mathbf{T^6} /({\mathbb Z}_2 \times {\mathbb Z}_2)$ orientifold, the $\mathbf{T^6}$ is a product of three two-tori and the two orbifold group generators $\theta$, $\omega$ act on the complex coordinates $(z_1,z_2,z_3)$ as \begin{eqnarray} \theta:(z_1,z_2,z_3)\rightarrow(-z_1,-z_2,z_3) \nonumber \\ \omega:(z_1,z_2,z_3)\rightarrow(z_1,-z_2,-z_3) \end{eqnarray} while the antiholomorphic involution $R$ acts as \begin{equation} R(z_1, z_2, z_3)\rightarrow(\bar{z}_1,\bar{z}_2,\bar{z}_3). \end{equation} As it stands, the signs of the $\theta$ action in the $\omega$ sector and vice versa have not been specified, and the freedom to do so is referred to as the choice of discrete torsion. One choice of discrete torsion corresponds to the Hodge numbers $(h_{11},h_{21}) = (3,51)$ and the other to $(h_{11},h_{21}) = (51,3)$. These two different choices are referred to as with discrete torsion $({\mathbb Z}_2 \times {\mathbb Z}_2')$ and without discrete torsion $({\mathbb Z}_2 \times {\mathbb Z}_2)$, respectively. To date, most phenomenological models that have been constructed have been without discrete torsion. Consequently, all of these models have massless adjoint matter present since the D-branes do not wrap rigid 3-cycles. However, in the case of ${\mathbb Z}_2 \times {\mathbb Z}_2'$, the twisted homology contains collapsed 3-cycles, which allows for the construction of rigid 3-cycles. D6-branes wrapping cycles are specified by their wrapping numbers $(n^i, m^i)$ along the fundamental cycles $[a^i]$ and $[b^i]$ on each torus. However, cycles on the torus are, in general, different from the cycles defined on the orbifold space. In the case of the ${\mathbb Z}_2 \times {\mathbb Z}_2$ orientifold, all of the 3-cycles on the orbifold are inherited from the torus, which makes it particularly easy to work with. 
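For reference, recall the standard result (not specific to this model) that two factorizable toroidal 3-cycles with wrapping numbers $(n^i_a, m^i_a)$ and $(n^i_b, m^i_b)$ have intersection number
\begin{equation}
\left[\Pi^{T^6}_a\right] \circ \left[\Pi^{T^6}_b\right] \;=\; \prod_{i=1}^{3}\left(n^i_a m^i_b - m^i_a n^i_b\right),
\end{equation}
which counts the multiplicity of chiral matter localized at the brane intersections.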
The ${\mathbb Z}_2 \times {\mathbb Z}_2'$ orientifold contains 16 fixed points, from which arise 16 additional 2-cycles with the topology of $\mathbf{P}^1 \cong S^2$. As a result, there are 32 collapsed 3-cycles for each twisted sector. A $D6$-brane wrapping collapsed 3-cycles in each of the three twisted sectors will be unable to move away from a particular position on the covering space $\mathbf{T^6}$, which means that the 3-cycle will be rigid. A basis of twisted 3-cycles may be denoted as \begin{eqnarray} [\alpha^{\theta}_{ij,n}] &=& 2[\epsilon^{\theta}_{ij}]\otimes [a^3] \ \ \ \ \ \ \ \ \ [\alpha^{\theta}_{ij,m}] = 2[\epsilon^{\theta}_{ij}]\otimes [b^3], \end{eqnarray} \begin{eqnarray} [\alpha^{\omega}_{ij,n}] &=& 2[\epsilon^{\omega}_{ij}]\otimes [a^1] \ \ \ \ \ \ \ \ \ [\alpha^{\omega}_{ij,m}] = 2[\epsilon^{\omega}_{ij}]\otimes [b^1], \end{eqnarray} \begin{eqnarray} [\alpha^{\theta\omega}_{ij,n}] &=& 2[\epsilon^{\theta\omega}_{ij}]\otimes [a^2] \ \ \ \ \ \ \ \ \ [\alpha^{\theta\omega}_{ij,m}] = 2[\epsilon^{\theta\omega}_{ij}]\otimes [b^2], \end{eqnarray} where $[\epsilon^{\theta}_{ij}]$, $[\epsilon^{\omega}_{ij}]$, and $[\epsilon^{\theta\omega}_{ij}]$ denote the 16 fixed points on $\mathbf{T}^2 \times \mathbf{T}^2$, where $i,j \in \{1,2,3,4\}$. A fractional D-brane wrapping both a bulk cycle as well as the collapsed cycles may be written in the form \begin{eqnarray} \Pi^F_a &=& \frac{1}{4}\Pi^B + \frac{1}{4}\left(\sum_{i,j\in S^a_{\theta}} \epsilon^{\theta}_{a,ij}\Pi^{\theta}_{ij,a}\right)+ \frac{1}{4}\left(\sum_{j,k\in S^a_{\omega}} \epsilon^{\omega}_{a,jk}\Pi^{\omega}_{jk,a}\right) + \frac{1}{4}\left(\sum_{i,k\in S^a_{\theta\omega}} \epsilon^{\theta\omega}_{a,ik}\Pi^{\theta\omega}_{ik,a}\right), \label{fraccycle} \end{eqnarray} where the $D6$-brane is required to run through the four fixed points for each of the twisted sectors. The set of four fixed points may be denoted as $S^g$ for the twisted sector $g$.
The constants $\epsilon^{\theta}_{a,ij}$, $\epsilon^{\omega}_{a,jk}$ and $\epsilon^{\theta\omega}_{a,ki}$ denote the sign of the charge of the fractional brane with respect to the fields which are present at the orbifold fixed points. These signs, as well as the set of fixed points, must satisfy consistency conditions. However, they may be chosen differently for each stack. A bulk cycle on the ${\mathbb Z}_2 \times {\mathbb Z}_2$ orbifold space consists of the toroidal cycle wrapped by the brane $D_a$ and its three orbifold images: \begin{eqnarray} \left[\Pi^B_a \right] &=& \left(1 + \theta + \omega + \theta\omega \right)\Pi^{T^6}_a. \end{eqnarray} Each of these orbifold images is homologically identical to the original cycle, thus \begin{eqnarray} \left[\Pi^B_a \right] &=& 4\left[\Pi^{T^6}_a \right]. \end{eqnarray} If we calculate the intersection number between two branes, we will find \begin{eqnarray} \left[\Pi^B_a\right] \circ \left[\Pi^B_b\right] &=& 4~\left[\Pi^{T^6}_a \right]\circ \left[\Pi^{T^6}_b\right] \end{eqnarray} which indicates that the bulk cycles $\left[\Pi^B_a\right]$ do not span a unimodular basis for the homology lattice $H_3(M,{\mathbb Z})$. Thus, we must normalize these purely bulk cycles as $\left[\Pi^o_a\right] = \frac{1}{2}\left[\Pi^B_a\right]$~\cite{Blumenhagen:2005mu, Blumenhagen:2005tn}. So, in terms of the cycles defined on the torus, the normalized purely bulk cycles of the orbifold are given by \begin{eqnarray} \left[\Pi^o_a\right] &=& \frac{1}{2}\left(1 + \theta + \omega + \theta\omega \right)\left[\Pi^{T^6}_a\right] = 2\left[\Pi^{T^6}_a\right]. \label{eqn:orbthreecycle} \end{eqnarray} Due to this normalization, a stack of $N$ $D6$-branes wrapping a purely bulk cycle will have a $U(N/2)$ gauge group in its world-volume. However, this does not apply to a brane wrapping collapsed cycles, so that a stack of $N$ branes wrapping fractional cycles as in eq.~(\ref{fraccycle}) will have in its world-volume a gauge group $U(N)$.
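Since the intersection numbers above are simple products of wrapping numbers, this bookkeeping is easy to script. The following Python sketch (our own illustration, not part of the original construction) encodes the toroidal intersection number $\left[\Pi^{T^6}_a\right] \circ \left[\Pi^{T^6}_b\right] = \prod_{i=1}^{3}\left(n^i_a m^i_b - m^i_a n^i_b\right)$ together with the factor-of-four relation between bulk and toroidal cycles and the normalization just described:

```python
from math import prod

def torus_intersection(a, b):
    """[Pi^T6_a] o [Pi^T6_b] = prod_i (n_a^i m_b^i - m_a^i n_b^i);
    a and b are lists of per-torus wrapping numbers (n, m)."""
    return prod(n1 * m2 - m1 * n2 for (n1, m1), (n2, m2) in zip(a, b))

def bulk_intersection(a, b):
    # [Pi^B_a] o [Pi^B_b] = 4 [Pi^T6_a] o [Pi^T6_b]
    return 4 * torus_intersection(a, b)

def orbifold_intersection(a, b):
    # the normalized cycles [Pi^o] = (1/2)[Pi^B] restore unimodularity:
    # [Pi^o_a] o [Pi^o_b] = [Pi^T6_a] o [Pi^T6_b]
    return bulk_intersection(a, b) // 4

# example wrapping numbers, chosen arbitrarily for illustration
a = [(1, 0), (0, 1), (1, 1)]
b = [(0, 1), (1, 0), (1, -1)]
print(torus_intersection(a, b), bulk_intersection(a, b))  # prints: 2 8
```

Any pair of cycles may be substituted for the example wrapping numbers; the antisymmetry of the intersection form is automatic.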
Since we will have $D6$-branes wrapping fractional cycles with a bulk component as well as twisted cycles, we will need to be able to calculate the intersection numbers between pairs of twisted 3-cycles. For the intersection number between two twisted 3-cycles of the form $[\Pi^g_{ij,a}] = n^{I_g}_a[\alpha_{ij,n}]+m^{I_g}_a[\alpha_{ij,m}]$ and $[\Pi^h_{kl,b}] = n^{I_h}_b[\alpha_{kl,n}]+m^{I_h}_b[\alpha_{kl,m}]$ we have \begin{eqnarray} [\Pi^g_{ij,a}] \circ [\Pi^h_{kl,b}] &=& 4\delta_{ik}\delta_{jl}\delta^{gh}(n^{I_g}_am^{I_g}_b - m^{I_g}_a n^{I_g}_b) \end{eqnarray} where $I_g$ corresponds to the torus left invariant by the action of the orbifold generator $g$; specifically $I_{\theta} = 3$, $I_{\omega} = 1$, and $I_{\theta\omega} = 2$. Putting everything together, for the intersection number between branes $a$ and $b$ wrapping fractional cycles we find \begin{eqnarray} \Pi^F_a \circ \Pi^F_b = \frac{1}{16}[\Pi^B_a \circ \Pi^B_b + 4(n_a^3m_b^3-m_a^3n_b^3)\sum_{i_aj_a\in S^a_{\theta}}\sum_{i_bj_b\in S^b_{\theta}}\epsilon^{\theta}_{a,i_aj_a}\epsilon^{\theta}_{b,i_bj_b}\delta_{i_ai_b}\delta_{j_aj_b} + \\ \nonumber 4(n_a^1m_b^1-m_a^1n_b^1)\sum_{j_ak_a\in S^a_{\omega}}\sum_{j_bk_b\in S^b_{\omega}}\epsilon^{\omega}_{a,j_ak_a}\epsilon^{\omega}_{b,j_bk_b}\delta_{j_aj_b}\delta_{k_ak_b} + \\ \nonumber 4(n_a^2m_b^2-m_a^2n_b^2)\sum_{i_ak_a\in S^a_{\theta\omega}}\sum_{i_bk_b \in S^b_{\theta\omega}}\epsilon^{\theta\omega}_{a,i_ak_a}\epsilon^{\theta\omega}_{b,i_bk_b}\delta_{i_ai_b}\delta_{k_ak_b}]. \end{eqnarray} The 3-cycle wrapped by the $O6$-planes is given by \begin{equation} 2q_{\Omega R}[a^1][a^2][a^3]-2q_{\Omega R\theta}[b^1][b^2][a^3]-2q_{\Omega R\omega}[a^1][b^2][b^3]-2q_{\Omega R\theta\omega}[b^1][a^2][b^3], \end{equation} where the cross-cap charges $q_{\Omega R g}$ give the RR charge and tension of a given orientifold plane $g$, of which there are two types, $O6^{(-,-)}$ and $O6^{(+,+)}$.
In this case, $q_{\Omega R g} = +1$ indicates an $O6^{(-,-)}$ plane, while $q_{\Omega R g} = -1$ indicates an $O6^{(+,+)}$ plane. The choice of discrete torsion is indicated by the product \begin{equation} q = \prod_g q_{\Omega R g}. \end{equation} The choice of no discrete torsion is given by $q = 1$, while $q = -1$ is the case of discrete torsion, for which an odd number of $O6^{(+,+)}$ planes must be present. The action of $\Omega R$ on the bulk cycles is the same in either case, and simply acts on the wrapping numbers as $n^i_a \rightarrow n^i_a$ and $m^i_a \rightarrow -m^i_a$. In addition, however, there is an action on the twisted 3-cycles: \begin{eqnarray} \alpha^g_{ij,n} \rightarrow -q_{\Omega R}q_{\Omega Rg}\alpha^g_{ij,n}, & \alpha^g_{ij,m} \rightarrow q_{\Omega R}q_{\Omega Rg}\alpha^g_{ij,m}. \end{eqnarray} Using these relations, one can work out the intersection number of a fractional cycle with its $\Omega R$ image: \begin{eqnarray} \Pi'^F_a \circ \Pi^F_a = q_{\Omega R}\left(2q_{\Omega R}\prod_In^I_am^I_a - 2q_{\Omega R\theta}n^3_am^3_a -2q_{\Omega R\omega}n^1_am^1_a - 2q_{\Omega R\theta\omega}n^2_am^2_a\right) \end{eqnarray} while the intersection number with the orientifold planes is given by \begin{eqnarray} \Pi_{O6} \circ \Pi^F_a = 2q_{\Omega R}\prod_I m^I_a - 2q_{\Omega R\theta}n^1_an^2_am^3_a - 2q_{\Omega R\omega}m^1_an^2_an^3_a - 2q_{\Omega R\theta\omega}n^1_am^2_an^3_a.
\end{eqnarray} \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|} \hline & \\ Representation & Multiplicity \\ \hline \hline & \\ $\mathbf{\Yasymm}$ & $\frac{1}{2}(\left[\Pi^o_{a'} \right]\circ \left[\Pi^o_a\right] + \left[\Pi_{O6}\right] \circ \left[\Pi^o_a\right])$\\ & \\ $\mathbf{\Ysymm}$ & $\frac{1}{2}(\left[\Pi^o_{a'}\right] \circ \left[\Pi^o_a\right] - \left[\Pi_{O6}\right] \circ \left[\Pi^o_a\right])$\\ & \\ $(\mathbf{\overline{\fund}_a}, \mathbf{\fund_b})$ & $\left[\Pi^o_a\right] \circ \left[\Pi^o_b\right]$\\ & \\ $(\mathbf{\fund_a}, \mathbf{\fund_b})$ & $\left[\Pi^o_{a'}\right]\circ \left[\Pi^o_{b}\right]$\\ \hline \end{tabular} \end{center} \caption{Net chiral matter spectrum in terms of three-cycles.} \label{chiralmatter} \end{table} A generic expression for the \textit{net} number of chiral fermions in bifundamental, symmetric, and antisymmetric representations consistent with the vanishing of RR tadpoles can be given in terms of the three-cycles~\cite{Blumenhagen:2002wn}, as shown in Table \ref{chiralmatter}. \section{Consistency and SUSY conditions} To construct consistent, supersymmetric vacua which are free of anomalies, certain conditions must be satisfied, which we discuss in the following sections. \subsection{RR and Torsion Charge Cancellation} With the choice of discrete torsion $q_{\Omega R} = -1$, $q_{\Omega R\theta} = q_{\Omega R\omega} = q_{\Omega R\theta\omega} = 1$, the conditions for the cancellation of RR tadpoles become \begin{eqnarray} \sum N_a n_a^1 n_a^2 n_a^3 = -16, & \sum N_a m_a^1 m_a^2 n_a^3 = -16, \\ \nonumber \sum N_a m_a^1 n_a^2 m_a^3 = -16, & \sum N_a n_a^1 m_a^2 m_a^3 = -16, \end{eqnarray} whilst for the twisted charges to cancel, we require \begin{eqnarray} \sum_{a, ij \in S^{\omega}} N_a n_a^1 \epsilon^{\omega}_{ij,a} = 0, \ \ \ \ \sum_{a, jk \in S^{\theta\omega}} N_a n_a^2 \epsilon^{\theta\omega}_{jk,a} = 0, \ \ \ \ \sum_{a, ki \in S^{\theta}} N_a n_a^3 \epsilon^{\theta}_{ki,a} = 0,
\end{eqnarray} where the sum is over \textit{each} fixed point $[\epsilon^g_{ij}]$. As stated in Section 2, the signs $\epsilon^{\theta}_{ij,a}$, $\epsilon^{\omega}_{jk,a}$, and $\epsilon^{\theta\omega}_{ki,a}$ are not arbitrary as they must satisfy certain consistency conditions. In particular, they must satisfy the condition \begin{eqnarray} \sum_{ij \in S^g}\epsilon^g_{a,ij} = 0 \ \ \ \mbox{mod} \ \ \ 4 \end{eqnarray} for each twisted sector. Additionally, the signs for different twisted sectors must satisfy \begin{eqnarray} \epsilon^{\theta}_{a,ij}\epsilon^{\omega}_{a,jk}\epsilon^{\theta\omega}_{a,ik} &=& 1, \\ \nonumber \epsilon^{\theta}_{a,ij}\epsilon^{\omega}_{a,jk} &=& \ \mbox{constant} \ \forall \ j. \end{eqnarray} Note that we may choose the set of signs differently for each stack provided that they satisfy the consistency conditions. A trivial choice of signs which satisfies these constraints is to set them all to $+1$, \begin{equation} \epsilon^{\theta}_{a,ij} = 1 \ \forall \ ij, \ \ \ \ \ \epsilon^{\omega}_{a,jk} = 1 \ \forall \ jk, \ \ \ \ \ \epsilon^{\theta\omega}_{a,ki} = 1\ \forall \ ki. \end{equation} Another possible non-trivial choice of signs consistent with the constraints is given by \begin{equation} \epsilon^{\theta}_{a,ij} = -1 \ \forall \ ij, \ \ \ \ \ \epsilon^{\omega}_{a,jk} = -1 \ \forall \ jk, \ \ \ \ \ \epsilon^{\theta\omega}_{a,ki} = 1\ \forall \ ki. \end{equation} More general sets of these signs may be found in~\cite{Blumenhagen:2005tn}. \subsection{Conditions for Preserving $N=1$ Supersymmetry} The condition to preserve $N=1$ supersymmetry in four dimensions is that the rotation angle of any D-brane with respect to the orientifold plane is an element of $SU(3)$ \cite{Berkooz:1996km,CveticShiuUranga}.
Essentially, this becomes a constraint on the angles made by each stack of branes with respect to the orientifold planes, \textit{viz.} $\theta^1_a + \theta^2_a + \theta^3_a = 0$ mod $2\pi$, or equivalently $\sin(\theta^1_a + \theta^2_a + \theta^3_a)= 0$ and $\cos(\theta^1_a + \theta^2_a + \theta^3_a)= 1$. Applying simple trigonometry, these angles may be expressed in terms of the wrapping numbers as \begin{eqnarray} \tan \theta^i_a=\frac{m^i_a R^i_2}{n^i_a R^i_1} \end{eqnarray} where $R^i_2$ and $R^i_1$ are the radii of the $i^{\mathrm{th}}$ torus. We may translate these conditions into restrictions on the wrapping numbers as \begin{eqnarray} x_A\tilde{A_a}+x_B\tilde{B_a}+x_C\tilde{C_a}+x_D\tilde{D_a}= 0 \nonumber \\ A_a/x_A + B_a/x_B + C_a/x_C + D_a/x_D < 0 \label{susycond} \end{eqnarray} where we have made the definitions \begin{eqnarray} \tilde{A_a} &=& - m^1_am^2_am^3_a, \ \ \ \tilde{B}_a = n^1_an^2_am^3_a, \ \ \ \tilde{C}_a = m^1_an^2_an^3_a, \ \ \ \tilde{D}_a = n^1_am^2_an^3_a, \\ A_a &=& -n^1_an^2_an^3_a, \ \ \ B_a = m^1_am^2_an^3_a, \ \ \ C_a = n^1_am^2_am^3_a, \ \ \ D_a = m^1_an^2_am^3_a, \end{eqnarray} and the structure parameters related to the complex structure parameters $\chi_i = R^i_2/R^i_1$ are \begin{eqnarray} x_A = \lambda, \ \ \ x_B = \frac{\lambda}{\chi_2\cdot\chi_3}, \ \ \ x_C = \frac{\lambda}{\chi_1\cdot\chi_3}, \ \ \ x_D = \frac{\lambda}{\chi_1\cdot\chi_2}, \end{eqnarray} where $\lambda$ is a positive constant. One may invert the above expressions to find the complex structure parameters as \begin{eqnarray} \chi_1 = \sqrt{\frac{x_A x_B}{x_C x_D}}, \ \ \ \chi_2 = \frac{x_C}{x_B}\cdot\chi_1, \ \ \ \chi_3 = \frac{x_D}{x_B}\cdot\chi_1. \end{eqnarray} \subsection{The Green-Schwarz Mechanism} Although the total non-Abelian anomaly cancels automatically when the RR-tadpole conditions are satisfied, additional mixed anomalies, such as the mixed $U(1)$--gravitational anomalies, are not trivially zero \cite{CveticShiuUranga}.
These anomalies are cancelled by a generalized Green-Schwarz (G-S) mechanism which involves untwisted Ramond-Ramond forms. Integrating the G-S couplings of the untwisted RR forms to the $U(1)$ field strength $F_a$ over the untwisted cycles of the $\mathbf{T^6/({\mathbb Z}_2\times {\mathbb Z}'_2)}$ orientifold, we find \begin{eqnarray} \int_{D6^{untw}_a} C_5 \wedge \textrm{tr}F_a \sim N_a \sum_i r_{ai}\int_{M_4} B^i_2 \wedge \textrm{tr}F_a, \end{eqnarray} where \begin{equation} B^i_2 = \int_{[\Sigma_i]} C_5,\;\; [\Pi_a]=\sum^{b_3}_{i=1} r_{ai}[\Sigma_i], \end{equation} and ${[\Sigma_i]}$ is the basis of homology 3-cycles, with $b_3=8$. Under the orientifold action only half of them survive; in other words, $\{r_{ai}\}=\{\tilde{B}_a, \tilde{C}_a, \tilde{D}_a, \tilde{A}_a\}$ in this definition. Thus the couplings of the four untwisted RR forms $B^i_2$ to the $U(1)$ field strength $F_a$ are \cite{Aldazabal:2000dg} \begin{eqnarray} N_a \tilde{B}_a \int_{M_4}B^1_2\wedge \textrm{tr}F_a,&& \; N_a \tilde{C}_a \int_{M_4}B^2_2\wedge \textrm{tr}F_a, \nonumber \\ N_a \tilde{D}_a \int_{M_4}B^3_2\wedge \textrm{tr}F_a,&& \; N_a \tilde{A}_a \int_{M_4}B^4_2\wedge \textrm{tr}F_a. \end{eqnarray} Besides the contribution to the G-S mechanism from the untwisted 3-cycles, the contribution from the twisted cycles should also be taken into account. As in the untwisted case, we integrate the Chern-Simons coupling over the exceptional 3-cycles from the twisted sectors. We choose the sizes of the 2-cycles with the topology of $S^2$ at the orbifold singularities so as to put these integrals on an equal footing with those from the untwisted sector. Consider the twisted sector $\theta$ as an example: \begin{eqnarray} \int_{D6^{tw,\theta}_a}C_5\wedge {\rm tr}F_a \sim N_a \sum_{i,j\in S^a_{\theta}} \epsilon^{\theta}_{a,ij} m^3_a \int_{M_4} B^{\theta ij}_2 \wedge {\rm tr}F_a, \end{eqnarray} where $B^{\theta ij}_2=\int_{[\alpha^{\theta}_{ij,m}]}C_5$, with the orientifold action again taken into account.
Although $i,j$ can each run through $\left\{1,\ldots,4\right\}$, labelling the four fixed points in each sector, they are constrained by the wrapping numbers from the untwisted sector so that only four possibilities remain. A similar argument may be applied to the $\omega$ and $\theta\omega$ twisted sectors: \begin{eqnarray} \int_{D6^{tw,\omega}_a}C_5\wedge {\rm tr}F_a \sim N_a \sum_{j,k\in S^a_{\omega}} \epsilon^{\omega}_{a,jk} m^1_a \int_{M_4} B^{\omega jk}_2 \wedge {\rm tr}F_a. \end{eqnarray} \begin{eqnarray} \int_{D6^{tw,\theta\omega}_a}C_5\wedge {\rm tr}F_a \sim N_a \sum_{i,k\in S^a_{\theta\omega}} \epsilon^{\theta\omega}_{a,ik} m^2_a \int_{M_4} B^{\theta\omega ik}_2 \wedge {\rm tr}F_a. \end{eqnarray} In summary, there are twelve additional couplings of the Ramond-Ramond 2-forms $B^i_2$ to the $U(1)$ field strength $F_a$ from the twisted cycles, giving rise to massive $U(1)$'s. However, from the consistency conditions on the $\epsilon$'s (see section 3.1), which are related to the discrete Wilson lines, these couplings may be dependent or degenerate. So even after including the couplings from the untwisted sector, it is still possible to find a linear combination giving a massless $U(1)$ group. Let us write down these couplings of the twisted sector explicitly: \begin{eqnarray} N_a \epsilon^{\theta}_{a,ij} m^3_a \int_{M_4} B^{\theta ij}_2 \wedge {\rm tr}F_a, \ \ \ N_a \epsilon^{\omega}_{a,jk} m^1_a \int_{M_4} B^{\omega jk}_2 \wedge {\rm tr}F_a, \nonumber \\ N_a \epsilon^{\theta\omega}_{a,ik} m^2_a \int_{M_4} B^{\theta\omega ik}_2 \wedge {\rm tr}F_a. \end{eqnarray} Checking the mixed cubic anomaly by introducing the dual field of $B^i_2$ in the diagram, we find that the contributions from both the untwisted and twisted sectors take the form of intersection numbers and are cancelled when the RR-tadpole conditions above are satisfied. These couplings determine the linear combinations of $U(1)$ gauge bosons that acquire string scale masses via the G-S mechanism.
Thus, in constructing MSSM-like models, we must ensure that the gauge boson of the hypercharge $U(1)_Y$ group does not receive such a mass. In general, the hypercharge is a linear combination of the various $U(1)$s generated from each stack: \begin{equation} U(1)_Y=\sum_a c_a U(1)_a \end{equation} The corresponding field strength must be orthogonal to those that acquire G-S mass. Thus we demand \begin{eqnarray} \sum_a c_a N_a \epsilon^{\omega}_{a,jk} m^1_a&=& 0, \ \ \ \ \sum_a c_a N_a \epsilon^{\theta\omega}_{a,ki} m^2_a = 0, \ \ \ \ \sum_a c_a N_a \epsilon^{\theta}_{a,ij} m^3_a = 0, \end{eqnarray} for the twisted couplings, as well as \begin{eqnarray} \sum_a c_a N_a \tilde{A_a} &=& 0, \ \ \ \ \sum_a c_a N_a \tilde{B_a} = 0, \ \ \ \ \sum_a c_a N_a \tilde{C_a} = 0, \ \ \ \ \sum_a c_a N_a \tilde{D_a} = 0, \label{GSeq} \end{eqnarray} for the untwisted ones. \subsection{K-Theory Constraints} RR charges are not fully classified by homological data, but rather by K-theory. Thus, to cancel all charges, including those visible by K-theory alone, we require the wrapping numbers to satisfy certain constraints. We will not state these constraints here, but refer the reader to~\cite{Blumenhagen:2005tn} where they are given explicitly. \section{MSSM via Pati-Salam} We begin with the seven-stack configuration of D-branes with the bulk wrapping numbers shown in Table~\ref{stacksPS}, which produce the intersection numbers shown in Tables 3-4. We make the choice of cross-cap charges $q_{\Omega R} = -1$, $q_{\Omega R\theta} = q_{\Omega R\omega} = q_{\Omega R\theta\omega} = 1$, and assume for simplicity that each stack passes through the same set of fixed points. The resulting gauge group is that of a four-generation Pati-Salam model. The \lq observable\rq \ matter spectrum is presented in Table 5.
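As a cross-check (ours, not part of the original derivation), the data of Table~\ref{stacksPS} can be verified mechanically against the conditions of sections 3.1 and 3.2. A minimal Python sketch, with the stack data transcribed from the table and the structure parameters taken from its caption:

```python
import math

# Stack data transcribed from Table 2: (N, [(n1,m1), (n2,m2), (n3,m3)])
# for stacks alpha, beta, gamma, 1, 2, 3, 4.
stacks = [
    (4, [(-1, -1), (-1, -1), (-1, -1)]),   # alpha
    (2, [(-1, -1), (-1, -1), (-1, -1)]),   # beta
    (2, [( 1,  1), ( 1,  1), (-1, -1)]),   # gamma
    (2, [( 1,  1), ( 1,  1), (-1, -1)]),   # 1
    (2, [(-1, -1), ( 1,  1), ( 1,  1)]),   # 2
    (2, [( 1, -1), ( 1, -1), (-1,  1)]),   # 3
    (2, [( 1, -1), (-1,  1), ( 1, -1)]),   # 4
]

def tadpoles(stacks):
    """The four untwisted RR tadpole sums; each must equal -16."""
    t = [0, 0, 0, 0]
    for N, ((n1, m1), (n2, m2), (n3, m3)) in stacks:
        t[0] += N * n1 * n2 * n3
        t[1] += N * m1 * m2 * n3
        t[2] += N * m1 * n2 * m3
        t[3] += N * n1 * m2 * m3
    return t

def susy_ok(w, xA, xB, xC, xD, tol=1e-12):
    """Check both SUSY conditions of eq. (susycond) for one stack."""
    (n1, m1), (n2, m2), (n3, m3) = w
    At, Bt, Ct, Dt = -m1*m2*m3, n1*n2*m3, m1*n2*n3, n1*m2*n3
    A,  B,  C,  D  = -n1*n2*n3, m1*m2*n3, n1*m2*m3, m1*n2*m3
    return (abs(xA*At + xB*Bt + xC*Ct + xD*Dt) < tol
            and A/xA + B/xB + C/xC + D/xD < 0)

# structure parameters from the caption of Table 2
xA, xB, xC, xD = math.sqrt(3), math.sqrt(3)/3, math.sqrt(3)/3, math.sqrt(3)/3

assert tadpoles(stacks) == [-16, -16, -16, -16]       # RR tadpoles cancel
assert all(susy_ok(w, xA, xB, xC, xD) for _, w in stacks)  # N=1 preserved
```

Both conditions are satisfied for every stack, as the text claims.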
\begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Stack & N & $(n_1,m_1)$&($n_2,m_2)$&$(n_3,m_3)$ & $\epsilon^{\theta}_{ij}~\forall~ij$ & $\epsilon^{\omega}_{jk}~\forall~jk$ & $\epsilon^{\theta\omega}_{ki}~\forall~ki$\\ \hline\hline $\alpha$ &4& (-1,-1) & (-1,-1) & (-1,-1) & -1 & -1 & 1 \\ $\beta$ &2& (-1,-1) & (-1,-1) & (-1,-1) & \ 1 & \ 1 & 1\\ $\gamma$ &2& ( 1, 1) & ( 1, 1) & (-1,-1) & \ 1 & \ 1 & 1\\ \hline $1$ &2& ( 1, 1) & ( 1, 1) & (-1,-1) & -1 & -1 & 1\\ $2$ &2& (-1,-1) & ( 1, 1) & ( 1, 1) & -1 & -1 & 1\\ $3$ &2& ( 1,-1) & ( 1,-1) & (-1, 1) & -1 & -1 & 1\\ $4$ &2& ( 1,-1) & (-1, 1) & ( 1,-1) & -1 & -1 & 1\\ \hline\hline \end{tabular} \end{center} \caption{Stacks, wrapping numbers, and torsion charges for a Pati-Salam model. With the choice of structure parameters $x_A = \sqrt{3}, x_B = x_C = x_D = \sqrt{3}/3$, $N=1$ SUSY will be preserved. The cycles wrapped by each of the stacks pass through the same set of fixed points.} \label{stacksPS} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline &$\alpha$&$\beta$&$\gamma$&$1$&$2$&$3$&$4$&$\alpha'$&$\beta'$&$\gamma'$&$1'$&$2'$&$3'$&$4'$ \\ \hline\hline $\alpha$ &0& 0 & 0 & 0 & 0 & 0 & 0 & -8 & 4 &-4 & 0 & 0 & 0 & 0 \\ $\beta$ &-& 0 & 0 & 0 & 0 & -4 &-4 & 0 &-8 & 0 & 0 & 0 & 0 & 0 \\ $\gamma$ &-& - & 0 & 0 & 0 & 4 &-4 & 0 & 0 &-8 & 0 & 0 & 0 & 0 \\ $1$ &-& - & - & 0 & 0 & -8 & 0 & 0 &-4 & 4 &-8 & 0 & 0 & 0 \\ $2$ &-& - & - & - & 0 & 0 & 0 & 0 &-4 &-4 & 0 &-8 & 0 & 0 \\ $3$ &-& - & - & - & - & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 \\ $4$ &-& - & - & - & - & - & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 \\ \hline\hline \end{tabular} \end{center} \caption{Intersection numbers between different stacks giving rise to fermions in the bifundamental representation.
The resulting gauge group and chiral matter content is that of a four-generation Pati-Salam model.} \label{intnumPS} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|} \hline Stack & Antisymmetric & Symmetric \\ \hline \hline $\alpha$ & 8 & 0 \\ $\beta$ & 8 & 0 \\ $\gamma$ & 8 & 0 \\ $1$ & 8 & 0 \\ $2$ & 8 & 0 \\ $3$ &-8 & 0 \\ $4$ &-8 & 0 \\ \hline\hline \end{tabular} \end{center} \caption{Intersection numbers between different stacks and their images giving rise to antisymmetric and symmetric representations for a Pati-Salam model.} \label{ASchiralmatterPS} \end{table} \begin{table}[ht] \begin{center} \normalsize \begin{tabular}{|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}| @{}c@{}|@{}c@{}|}\hline Rep. & Multi. &$U(1)_{\alpha}$&$U(1)_{\beta}$& $U(1)_{\gamma}$&$U(1)_1$& $U(1)_2$& $U(1)_3$&$U(1)_4$ & Field \\ \hline \hline $(\mathbf{4}_{\alpha'} ,\mathbf{2}_{\gamma})$ & 4 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & Matter\\ $(\mathbf{\bar{4}}_{\alpha'} ,\mathbf{\bar{2}}_{\beta})$ & 4 &-1 &-1 & 0 & 0 & 0 & 0 & 0 & Matter\\ $(\mathbf{2}_{\beta} ,\mathbf{\bar{2}}_{\gamma})^{\star}$ & - & 0 & 1 &-1 & 0 & 0 & 0 & 0 & EW Higgs\\ $(\mathbf{\bar{4}}_{\alpha} ,\mathbf{2}_{\gamma})^{\star}$ & - &-1 & 0 & 1 & 0 & 0 & 0 & 0 & GUT Higgs\\ $(\mathbf{4}_{\alpha} , \mathbf{\bar{2}}_{\beta})^{\star}$ & - & 1 &-1 & 0 & 0 & 0 & 0 & 0 & GUT Higgs\\ \hline $(\mathbf{6}_{\alpha'\alpha})$ & 8 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & -\\ $(\mathbf{1}_{\beta'\beta})$ & 8 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & $\phi_{\beta\beta}$\\ $(\mathbf{1}_{\gamma'\gamma})$ & 8 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & $\phi_{\gamma\gamma}$\\ \hline\hline \end{tabular} \caption{\label{spectrumPS}The \lq observable\rq \ spectrum of $SU(4)\times SU(2)_L \times SU(2)_R \times [U(2)^4\times U(1)^3]$.
The $\star$'d representations indicate light, non-chiral matter which is present between pairs of fractional branes which wrap homologically identical bulk cycles, but differ in their twisted cycles.} \end{center} \end{table} For Pati-Salam models constructed from bulk D-branes wrapping non-rigid cycles, the gauge symmetry may be broken to the MSSM by the process of brane splitting, which corresponds to assigning a VEV to an adjoint scalar in the field theoretic description. However, this option is not available in the present construction since the adjoint fields have been eliminated due to the rigidization of the cycles. Although the adjoint fields have been eliminated by splitting the bulk D-branes into their fractional constituents, light non-chiral matter in the bifundamental representation may still appear between pairs of fractional branes~\cite{Blumenhagen:2005tn}. These non-chiral states smoothly connect the configuration of fractional D-branes to one consisting of non-rigid D-branes. In the present case, all of the fractional D-branes wrap bulk cycles which are homologically identical, but differ in their twisted cycles. As discussed in~\cite{Blumenhagen:2005tn}, one may compute the overlap between two such boundary states: \begin{equation} \tilde{A}_{a_ia_j} = \int_0^\infty dl\left\langle a_i\right|e^{-2\pi l H_{cl}}\left|a_j\right\rangle + \int_0^\infty dl\left\langle a_j\right|e^{-2\pi l H_{cl}}\left|a_i\right\rangle. \end{equation} Due to the different signs for the twisted sector, it is found that in the loop channel amplitude \begin{equation} A_{a_ia_j} = \int_0^\infty \frac{dl}{l}Tr_{ij+ji} \left(\frac{1+\theta+\omega+\theta\omega}{4}e^{-2\pi l H_{cl}} \right) \end{equation} one massless hypermultiplet appears. Thus, the states required to play the role of the Higgs fields are present in this non-chiral sector. In principle, one should determine that there are flat directions that can give the necessary VEVs to these states.
This process would correspond geometrically to a particular brane recombination, for which CFT techniques fail and only a field theory analysis of D- and F-flat directions is applicable. For instance, assigning a VEV to one of these states should smoothly connect the configuration of fractional branes to one in which there is a stack of bulk D-branes wrapping a non-rigid cycle that has been split by assigning a VEV to an adjoint scalar. Such computations are technically very involved and beyond the scope of the present work, and we defer them to later work. In Tables 6-9, we present an MSSM model which is obtained from the above Pati-Salam model by separating the stacks as \begin{eqnarray} \alpha \rightarrow \alpha_B + \alpha_L, \ \ \ \ \ \beta \rightarrow \beta_{r1} + \beta_{r2}. \end{eqnarray} This does not mean that the stacks are located at different points in the internal space. After all, there are no adjoint scalars which may receive a VEV. Rather, this separation reflects the fact that the Pati-Salam gauge symmetry has been spontaneously broken down to the MSSM by the Higgs mechanism, where we have identified the Higgs states with the $(\mathbf{4},\mathbf{2},1)$ and $(\mathbf{\bar{4}},1, \mathbf{2})$ representations of $SU(4)\times SU(2)_L \times SU(2)_R$ present in the non-chiral sector. The resulting gauge group of the model is then given by $SU(3)\times SU(2)_L \times U(1)_{Y}\times SU(2)^4 \times U(1)^8$, and the MSSM hypercharge is found to be \begin{eqnarray} Q_Y = \frac{1}{6}\left(U(1)_{\alpha_B} - 3U(1)_{\alpha_L} - 3U(1)_{\beta_{r1}} + 3U(1)_{\beta_{r2}}\right). \label{hypercharge} \end{eqnarray} Of course, this is just \begin{eqnarray} Q_Y = \frac{Q_B - Q_L}{2}+ Q_{I_{3R}}, \end{eqnarray} where $Q_B$ and $Q_L$ are baryon number and lepton number respectively, while $Q_{I_{3R}}$ plays the role of the third component of right-handed weak isospin.
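As an illustration, the charge assignments can be checked directly from eq.~(\ref{hypercharge}). The short Python fragment below (our own bookkeeping; the abelian charges are those listed in Table 9) confirms that each MSSM state carries its standard hypercharge:

```python
from fractions import Fraction as F

# U(1) charges (Q_alphaB, Q_alphaL, Q_beta_r1, Q_beta_r2) copied from Table 9
charges = {
    'Q':  ( 1,  0,  0,  0), 'Uc': (-1,  0,  0, -1),
    'Dc': (-1,  0, -1,  0), 'L':  ( 0,  1,  0,  0),
    'Ec': ( 0, -1, -1,  0), 'N':  ( 0, -1,  0, -1),
    'Hd': ( 0,  0,  1,  0), 'Hu': ( 0,  0,  0,  1),
}

def Q_Y(qB, qL, qr1, qr2):
    # eq. (hypercharge): Q_Y = (Q_aB - 3 Q_aL - 3 Q_br1 + 3 Q_br2) / 6
    return F(qB - 3*qL - 3*qr1 + 3*qr2, 6)

expected = {'Q': F(1, 6), 'Uc': F(-2, 3), 'Dc': F(1, 3), 'L': F(-1, 2),
            'Ec': F(1),   'N': F(0),      'Hd': F(-1, 2), 'Hu': F(1, 2)}
for field, q in charges.items():
    assert Q_Y(*q) == expected[field]
```

Every state, including the right-handed neutrino $N$ with $Q_Y = 0$, reproduces its MSSM hypercharge.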
\begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Stack & N & $(n_1,m_1)$&($n_2,m_2)$&$(n_3,m_3)$ & $\epsilon^{\theta}_{ij}~\forall~ij$ & $\epsilon^{\omega}_{jk}~\forall~jk$ & $\epsilon^{\theta\omega}_{ki}~\forall~ki$\\ \hline\hline $\alpha_B$ &3& (-1,-1) & (-1,-1) & (-1,-1) & -1 & -1 & 1\\ $\alpha_L$ &1& (-1,-1) & (-1,-1) & (-1,-1) & -1 & -1 & 1\\ $\beta_{r1}$ &1& (-1, -1) & (-1, -1) & (-1, -1) & \ 1 & \ 1 & 1\\ $\beta_{r2}$ &1& (-1, -1) & (-1, -1) & (-1, -1) & \ 1 & \ 1 & 1\\ $\gamma$ &2& ( 1, 1) & ( 1, 1) & (-1,-1) & \ 1 & \ 1 & 1\\ \hline $1$ &2& ( 1, 1) & ( 1, 1) & (-1,-1) & -1 & -1 & 1\\ $2$ &2& (-1,-1) & ( 1, 1) & ( 1, 1) & -1 & -1 & 1\\ $3$ &2& ( 1,-1) & ( 1,-1) & (-1, 1) & -1 & -1 & 1\\ $4$ &2& ( 1,-1) & (-1, 1) & ( 1,-1) & -1 & -1 & 1\\ \hline\hline \end{tabular} \end{center} \caption{Stacks, wrapping numbers, and torsion charges for an MSSM-like model. The three-cycles wrapped by each of the stacks pass through the same set of fixed points.} \label{stacks} \end{table} As discussed above, up to twelve $U(1)$ factors may obtain a mass \textit{via} the G-S mechanism. In order for the hypercharge to remain massless, it must be orthogonal to each of these factors. In this case, there are only four such factors due to the degeneracy of the stacks.
These $U(1)$'s remain to all orders as global symmetries and are given by \begin{eqnarray} U(1)_A = 3U(1)_{\alpha_B} - U(1)_{\alpha_L} - U(1)_{\beta_{r1}} - U(1)_{\beta_{r2}} + 2U(1)_{\gamma}-2U(1)_1 \\ \nonumber + 2U(1)_2 + 2U(1)_3 + 2U(1)_4, \\ \nonumber \\ \nonumber U(1)_B = -3U(1)_{\alpha_B} + U(1)_{\alpha_L} - U(1)_{\beta_{r1}} - U(1)_{\beta_{r2}}+ 2U(1)_{\gamma} + 2U(1)_1 \\ \nonumber + 2U(1)_2 - 2U(1)_3 + 2U(1)_4, \\ \nonumber \\ \nonumber U(1)_C = 3U(1)_{\alpha_B} - U(1)_{\alpha_L} - U(1)_{\beta_{r1}} - U(1)_{\beta_{r2}} - 2U(1)_{\gamma} + 2U(1)_1 \\ \nonumber - 2U(1)_2 - 2U(1)_3 + 2U(1)_4, \\ \nonumber \\ \nonumber U(1)_D = -3U(1)_{\alpha_B} + U(1)_{\alpha_L} - U(1)_{\beta_{r1}} - U(1)_{\beta_{r2}}- 2U(1)_{\gamma} - 2U(1)_1 \\ \nonumber - 2U(1)_2+ 2U(1)_3 + 2U(1)_4. \label{globalsym} \end{eqnarray} Note that the hypercharge is orthogonal to each of these $U(1)$ factors and so will remain massless. The \lq observable\rq \ sector basically consists of a four-generation MSSM plus right-handed neutrinos. The rest of the spectrum primarily consists of vector-like matter, much of which is singlet under the MSSM gauge group.
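The masslessness of the hypercharge can also be checked mechanically against the orthogonality conditions of section 3.3. The sketch below is our own cross-check, with the data transcribed from Table~\ref{stacks}; the hidden-sector stacks drop out since their hypercharge coefficients $c_a$ vanish:

```python
from fractions import Fraction as F

# Data transcribed from Table 6 for the stacks entering the hypercharge;
# per stack: (N, c_a, wrapping numbers, eps_theta, eps_omega, eps_theta-omega).
stacks = [
    (3, F(1, 6),  [(-1, -1)] * 3,            -1, -1, 1),  # alpha_B
    (1, F(-1, 2), [(-1, -1)] * 3,            -1, -1, 1),  # alpha_L
    (1, F(-1, 2), [(-1, -1)] * 3,             1,  1, 1),  # beta_r1
    (1, F(1, 2),  [(-1, -1)] * 3,             1,  1, 1),  # beta_r2
    (2, F(0),     [(1, 1), (1, 1), (-1, -1)], 1,  1, 1),  # gamma
]

untwisted = [F(0)] * 4   # sums of c_a N_a (A~_a, B~_a, C~_a, D~_a)
twisted   = [F(0)] * 3   # sums of c_a N_a eps m^1, m^2, m^3

for N, c, ((n1, m1), (n2, m2), (n3, m3)), et, eo, eto in stacks:
    At, Bt, Ct, Dt = -m1*m2*m3, n1*n2*m3, m1*n2*n3, n1*m2*n3
    for i, v in enumerate((At, Bt, Ct, Dt)):
        untwisted[i] += c * N * v
    twisted[0] += c * N * eo  * m1   # omega sector
    twisted[1] += c * N * eto * m2   # theta-omega sector
    twisted[2] += c * N * et  * m3   # theta sector

# every G-S coupling of U(1)_Y vanishes, so the hypercharge stays massless
assert all(v == 0 for v in untwisted + twisted)
```

All seven sums vanish identically, confirming that $U(1)_Y$ does not acquire a G-S mass.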
\begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline &$\alpha_B$&$\alpha_L$&$\beta_{r1}$&$\beta_{r2}$&$\gamma$&$1$&$2$&$3$&$4$&$\alpha_B'$&$\alpha_L'$&$\beta_{r1}'$&$\beta_{r2}'$&$\gamma'$&$1'$&$2'$&$3'$&$4'$ \\ \hline\hline $\alpha_B$ &0& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -8 &-8 & 4 & 4 &-4 & 0 & 0 & 0 & 0 \\ $\alpha_L$ &-& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &-8 & 4 & 4 &-4 & 0 & 0 & 0 & 0 \\ $\beta_{r1}$ &-& - & 0 & 0 & 0 & 0 & 0 & -4 &-4 & 0 & 0 &-8 &-8 & 0 & 0 & 0 & 0 & 0 \\ $\beta_{r2}$ &-& - & - & 0 & 0 & 0 & 0 & -4 &-4 & 0 & 0 & 0 &-8 & 0 & 0 & 0 & 0 & 0 \\ $\gamma$ &-& - & - & - & 0 & 0 & 0 & 4 &-4 & 0 & 0 & 0 & 0 &-8 & 0 & 0 & 0 & 0 \\ $1$ &-& - & - & - & - & 0 & 0 & -8 & 0 & 0 & 0 &-4 &-4 & 4 &-8 & 0 & 0 & 0 \\ $2$ &-& - & - & - & - & - & 0 & 0 & 0 & 0 & 0 &-4 &-4 &-4 & 0 &-8 & 0 & 0 \\ $3$ &-& - & - & - & - & - & - & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 \\ $4$ &-& - & - & - & - & - & - & - & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 \\ \hline\hline \end{tabular} \end{center} \caption{Intersection numbers between different stacks giving rise to fermions in the bifundamental representation.
The resulting gauge group and chiral matter content is that of a four-generation MSSM-like model.} \label{intnum} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|} \hline Stack & Antisymmetric & Symmetric \\ \hline \hline $\alpha_B$ & 8 & 0 \\ $\alpha_L$ & 8 & 0 \\ $\beta_{r1}$ & 8 & 0 \\ $\beta_{r2}$ & 8 & 0 \\ $\gamma$ & 8 & 0 \\ $1$ & 8 & 0 \\ $2$ & 8 & 0 \\ $3$ &-8 & 0 \\ $4$ &-8 & 0 \\ \hline\hline \end{tabular} \end{center} \caption{Intersection numbers between different stacks and their images giving rise to antisymmetric and symmetric representations for an MSSM-like model.} \label{ASchiralmatter} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}||@{}c@{}||@{}c@{}| @{}c@{}|@{}c@{}||@{}c@{}||@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}||@{}c@{}||@{}c@{}|}\hline Rep. & Multi. &$U(1)_{\alpha_B}$&$U(1)_{\alpha_L}$& $U(1)_{\beta_{r1}}$ & $U(1)_{\beta_{r2}}$ & $U(1)_{\gamma}$ & $Q_Y$ & $U(1)_A$ & $U(1)_B$ & $U(1)_C$ & $U(1)_D$ & Field \\ \hline \hline $(\mathbf{3}_{\alpha_B'} ,\mathbf{2}_{\gamma})$ & 4 & 1 & 0 & 0 & 0 & 1 & 1/6 & 5 & -1& 1& -5 & $Q$\\ $(\mathbf{\bar{3}}_{\alpha_B'} ,\mathbf{1}_{\beta_{r2}})$ & 4 & -1 & 0 & 0 & -1 & 0 & -2/3 & -2 & 4&-2 & 4& $U^c$\\ $(\mathbf{\bar{3}}_{\alpha_B'} ,\mathbf{1}_{\beta_{r1}})$ & 4 & -1 & 0& -1 & 0 & 0 & 1/3 & -2 & 4& -2& 4& $D^c$\\ $(\mathbf{1}_{\alpha_L'} ,\mathbf{2}_{\gamma})$ & 4 & 0 & 1 & 0 & 0 & 1 & -1/2 & 3 & 1& -1& -3 & $L$ \\ $(\mathbf{1}_{\alpha_L'},\mathbf{1}_{\beta_{r1}})$ & 4 & 0 & -1 & -1 & 0 & 0 & 1 & 0 & 2& 0& 2 & $E^c$\\ $(\mathbf{1}_{\alpha_L},\mathbf{1}_{\beta_{r2}})$ & 4 & 0 & -1 & 0 & -1 & 0 & 0 & 0 & 2& 0& 2& $N$ \\ \hline $(\mathbf{1}_{\beta_{r1}} ,\mathbf{\bar{2}}_{\gamma})^{\star}$ & - & 0 & 0 & 1 & 0 & -1 & -1/2 & -3 & -3 & 1 & 1 & $H_d$ \\ $(\mathbf{\bar{2}}_{\gamma} ,\mathbf{1}_{\beta_{r2}})^{\star}$ & - & 0 & 0 & 0 & 1 & -1 & 1/2 & -3 & -3 & 1 & 1 & $H_u$ \\ \hline \hline $(\mathbf{1}_{\gamma'\gamma})$ & 8 & 0 & 0
& 0 & 0 & -2 & 0 & -4 & -4 & 4& 2& $\phi_{\gamma\gamma}$\\ $(\mathbf{1}_{\beta_{r1}'},\mathbf{1}_{\beta_{r2}})$ & 8 & 0 & 0 & 1 & 1 & 0 & 0 & -2 & -2& -2&-2 & $\phi_{\beta_{r1r2}}$\\ $(\mathbf{3}_{\alpha_B'} ,\mathbf{1}_{\alpha_L})$ & 8 & 1 & 1 & 0 & 0 & 0 & -1/3 & 4 & -4& 4&-4 & $D_1$ \\ $(\mathbf{\bar{3}}_{\alpha_B'\alpha_B})$ & 8 & 2 & 0 & 0 & 0 & 0 & 1/3 & 6 & -6& 6 & -6& $D^c_2$\\ \hline\hline \end{tabular} \caption{\label{MSSMspectrum}The \lq observable\rq \ spectrum of $\left[SU(3)\times SU(2)_L \times U(1)_Y\right]\times U(2)^4 \times U(1)^4$. The $\star$'d representations indicate light, non-chiral matter which exists between pairs of fractional branes which wrap identical bulk cycles, but differ in their twisted cycles.} \end{center} \end{table} Using the states listed in Table 9, we may construct all of the required MSSM Yukawa couplings, \begin{equation} W_Y = y_u H_u Q U^c + y_d H_d Q D^c + y_l H_d L E^c \end{equation} keeping in mind that all of the MSSM fields are charged under the global symmetries defined in eq.~(\ref{globalsym}). Typically, such global symmetries forbid some if not all of the desired Yukawa couplings. In this case, however, all of the Yukawa couplings are allowed by the global symmetries, including a trilinear Dirac mass term for neutrinos, \begin{equation} W_D = \lambda_{\nu}L N H_u. \end{equation} By itself, such a term would imply neutrino masses of the same order as those of the quarks and charged leptons. However, if in addition there exists a Majorana mass term for the right-handed neutrinos, \begin{equation} W_m = M_m N N, \end{equation} a see-saw mechanism may be employed. Such a mass term may in principle be generated by $E2$ instanton effects~\cite{Ibanez:2006da, Blumenhagen:2006xt}. This mechanism may also be employed to generate a $\mu$-term of the order of the EW scale.
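These selection rules can be verified explicitly. The sketch below (our own bookkeeping, with the global charges copied from Table 9) confirms that each of the couplings above is neutral under the four global $U(1)$'s of eq.~(\ref{globalsym}), while the Majorana term $N N$ is not, which is why it must be generated nonperturbatively:

```python
# Global U(1) charges copied from Table 9: field -> (U(1)_A, U(1)_B, U(1)_C, U(1)_D).
g = {
    'Q':  ( 5, -1,  1, -5), 'Uc': (-2,  4, -2,  4), 'Dc': (-2,  4, -2,  4),
    'L':  ( 3,  1, -1, -3), 'Ec': ( 0,  2,  0,  2), 'N':  ( 0,  2,  0,  2),
    'Hu': (-3, -3,  1,  1), 'Hd': (-3, -3,  1,  1),
}

def total(*fields):
    """Net global charge of a product of fields."""
    return tuple(sum(g[f][i] for f in fields) for i in range(4))

# The Yukawa couplings H_u Q U^c, H_d Q D^c, H_d L E^c and the Dirac
# term L N H_u are all neutral under the four global U(1)'s ...
for coupling in [('Hu', 'Q', 'Uc'), ('Hd', 'Q', 'Dc'),
                 ('Hd', 'L', 'Ec'), ('L', 'N', 'Hu')]:
    assert total(*coupling) == (0, 0, 0, 0)

# ... while the Majorana mass term N N carries nonzero global charge,
# consistent with its generation only through instanton effects:
assert total('N', 'N') != (0, 0, 0, 0)
```

The same bookkeeping applied to the stack charges $U(1)_{\alpha_B},\ldots,U(1)_{\gamma}$ gives zero for every allowed coupling as well.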
In addition to the matter spectrum charged under the MSSM gauge groups and total gauge singlets, there is additional vector-like matter transforming under the \lq hidden\rq \ gauge group $U(2)_1 \otimes U(2)_2 \otimes U(2)_3 \otimes U(2)_4$. By choosing appropriate flat directions, we may deform the fractional cycles wrapped by these stacks into bulk cycles such that \begin{equation} U(2)_1 \otimes U(2)_2 \rightarrow U(1); \ \ \ \ \ U(2)_3 \otimes U(2)_4 \rightarrow U(1). \end{equation} Thus, matter transforming under these gauge groups either becomes a total gauge singlet or becomes massive and disappears from the spectrum altogether. The eight pairs of exotic color triplets remaining in the model, which result from the breaking $\mathbf{6}\rightarrow \mathbf{3}\oplus \mathbf{\bar{3}}$, are not truly vector-like due to their different charges under the global symmetries; nevertheless, they may in principle become massive via instanton effects in much the same way a $\mu$-term may be generated. \section{Gauge Coupling Unification} The MSSM predicts the unification of the three gauge couplings at an energy $\sim2\times10^{16}$~GeV. In intersecting D-brane models, the gauge groups arise from different stacks of branes, and so they will not generally have the same volume in the compactified space. Thus, the gauge couplings are not automatically unified. The low-energy $N=1$ supergravity action is essentially determined by the K\"ahler potential $K$, the superpotential $W$ and the gauge kinetic function $f$. All of these functions depend on the background space moduli fields.
\noindent For branes wrapping cycles not invariant under $\Omega R$, the holomorphic gauge kinetic function for a D6 brane stack $P$ is given by~\cite{Blumenhagen:2006ci} \begin{eqnarray} f_P = \frac{1}{2\pi l_s}\left[e^{\phi}\int_{\Pi_P}\mbox{Re}(e^{-i\theta_P}\Omega_3)-i\int_{\Pi_P}C_3\right] \end{eqnarray} from which it follows\footnote{This is closely related to the SUSY conditions.} (with $\theta_P = 0$ for ${\mathbb Z}_2\times{\mathbb Z}_2$) \begin{eqnarray} f_P &=& (n_P^1\,n_P^2\,n_P^3\,s-n_P^1\,m_P^2\,m_P^3\,u^1-n_P^2\,m_P^1\,m_P^3\,u^2- n_P^3\,m_P^1\,m_P^2\,u^3) \label{gaugefunction} \end{eqnarray} where $u^i$ and $s$ are the complex structure moduli and dilaton in the field theory basis. The gauge coupling constant associated with a stack P is given by \begin{eqnarray} g_{D6_P}^{-2} &=& |\mathrm{Re}\,(f_P)|.\label{idb:eq:gkf} \end{eqnarray} \noindent Thus, we identify the $SU(3)$ holomorphic gauge function with stack $\alpha_{B}$, and the $SU(2)$ holomorphic gauge function with stack $\gamma$. The $U(1)_Y$ holomorphic gauge function is then given by taking a linear combination of the holomorphic gauge functions from all the stacks. In this way, it is found~\cite{Blumenhagen:2003jy} that \begin{equation} f_Y = \frac{1}{6}f_{\alpha_B} + \frac{1}{2}f_{\alpha_L} + \frac{1}{2}f_{\beta_{r1}} + \frac{1}{2}f_{\beta_{r2}}. \end{equation} Thus, it follows that the tree-level MSSM gauge couplings will be unified at the string scale \begin{equation} g^2_{s} = g^2_{w} = \frac{5}{3}g^2_Y \end{equation} since each stack will have the same gauge kinetic function. \section{Conclusion} In this letter, we have constructed an intersecting D-brane model on the ${\mathbb Z}_2 \times {\mathbb Z}_2'$ orientifold background, also known as the ${\mathbb Z}_2 \times {\mathbb Z}_2$ orientifold with discrete torsion, where the D-branes wrap rigid cycles, thus eliminating the extra adjoint matter. 
The model constructed is a supersymmetric four-generation MSSM-like model obtained from a spontaneously broken Pati-Salam gauge group, with a minimum of extra matter. All of the required Yukawa couplings are allowed by the global symmetries which arise via a generalized Green-Schwarz mechanism. In addition, we find that the tree-level gauge couplings are unified at the string scale with a canonical normalization. The main drawback of this model is that there are four generations of MSSM matter. However, while the existence of a possible fourth generation is rather tightly constrained, it is not completely ruled out. Of course, the actual fermion masses await a detailed analysis of the Yukawa couplings. The emergence of three light generations may in fact be correlated with the existence of three twisted sectors. If there turns out to be a fourth generation, then it would almost certainly be discovered at the LHC within the next few years. Another interesting possibility is that the presence of discrete torsion will complexify the Yukawa couplings and thereby introduce $CP$ violation into the CKM matrix~\cite{Abel:2002az}. Clearly, much work remains to be done on the detailed phenomenology of this model, and we plan to return to this topic in the near future. With the LHC era just around the corner, it would be nice to have testable string models in hand. \section{Acknowledgements} The work of C-M Chen is supported by the Mitchell-Heep Chair in High Energy Physics. The work of D.V. Nanopoulos is supported by DOE grant DE-FG03-95-Er-40917. We thank Tianjun Li for a critical reading of the manuscript and for helpful suggestions. We would also like to thank Mirjam Cvetic for useful discussions and helpful advice. \newpage
\section{Introduction} \label{sec:intro} Let us consider a stochastic evolution equation of the type \begin{equation} \label{eq:0} du + Au\,dt = F(u)\,dt + B(u)\,dW, \qquad u(0)=u_0, \end{equation} where $A$ is a linear maximal monotone operator on a Hilbert space of functions $H$, the coefficients $F$ and $B$ satisfy suitable integrability assumptions, and $W$ is a cylindrical Wiener process. Precise assumptions on the data of the Cauchy problem \eqref{eq:0} are given in \S\ref{sec:main} below. Our goal is to establish a maximum principle for (local) mild solutions to \eqref{eq:0}, i.e. to provide sufficient conditions on the operator $A$ and on the coefficients $F$ and $B$ such that positivity of the initial datum $u_0$ implies positivity of the solution $u$ (see Theorem~\ref{thm:pos} below). A simpler problem was studied in \cite{cm:pos1}, where the coefficients $F$ and $B$ are assumed to be Lipschitz continuous. Here we simply assume that $F$ and $B$ satisfy rather minimal integrability conditions and that a local mild solution exists. On the other hand, in \cite{cm:pos1} the linear operator $A$ need only generate a positivity preserving semigroup, while here we require that $A$ generates a sub-Markovian semigroup. We refer to \cite{cm:pos1} for a discussion of how our positivity result relates to others for solutions to stochastic partial differential equations. It is however probably worth pointing out that most existing results seem to deal with equations in the variational setting (see, e.g., \cite{Kry:MP-SPDE,Kry:shortIto,Pard}). As an application, we provide an alternative, more direct proof of the positivity of forward rates in the Heath-Jarrow-Morton \cite{HJM} framework with respect to the one in \cite{cm:pos1}. This is obtained, as is now classical, by viewing forward curves as solutions to the so-called Musiela stochastic PDE (see, e.g., \cite{filipo,cm:MF10}).
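Although the results below are purely analytic, the phenomenon they describe is easy to observe numerically. The following toy sketch, entirely illustrative and with all numerical choices ours, integrates a spatially discretized stochastic heat equation with multiplicative noise (a special case of \eqref{eq:0}) from a positive initial datum and checks that positivity is preserved along the discrete trajectory.

```python
import numpy as np

# Toy illustration (not part of the argument): explicit Euler-Maruyama for
# du = u_xx dt + sigma * u dW on (0,1), homogeneous Dirichlet boundary
# conditions, positive initial datum. The time step is chosen so that every
# term of the update is (in practice) nonnegative.
rng = np.random.default_rng(0)

nx = 21
dx = 1.0 / (nx - 1)
dt = 0.25 * dx**2
sigma, nsteps = 0.5, 400

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)      # positive initial datum

for _ in range(nsteps):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    dW = rng.normal(0.0, np.sqrt(dt))     # scalar driving noise
    u = u + dt * lap + sigma * u * dW
    u[0] = u[-1] = 0.0                    # Dirichlet boundary

print(u.min())             # positivity is preserved
```

The noise coefficient $B(u)=\sigma u$ vanishes where the solution vanishes, which is the discrete counterpart of the structural condition (A3) imposed below.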
\section{Assumptions and main result} \label{sec:main} Let $(\Omega,\mathscr{F},\P)$ be a probability space endowed with a complete right-continuous filtration $(\mathscr{F}_t)_{t\in[0,T]}$, with $T>0$ a fixed final time, on which all random elements will be defined. Identities and inequalities between random variables are meant to hold $\P$-almost surely, and two stochastic processes are declared equal, unless otherwise stated, if they are indistinguishable. The $\sigma$-algebra of progressively measurable subsets of $\Omega\times[0,T]$ will be denoted by $\mathscr{R}$. We shall denote a cylindrical Wiener process on a separable Hilbert space $U$ by $W$. Standard notation and terminology of stochastic calculus for semimartingales will be used throughout (see, e.g., \cite{Met}). In particular, given an adapted process $X$ and a stopping time $\tau$, $X^\tau$ will denote the process $X$ stopped at $\tau$. Similarly, if $X$ is also c\`adl\`ag, $X^{\tau-}$ stands for the process $X$ pre-stopped at $\tau$. For any separable Hilbert spaces $E_1$ and $E_2$, we will use the symbols $\mathscr{L}(E_1,E_2)$ and $\mathscr{L}^2(E_1,E_2)$ for the spaces of linear continuous and Hilbert-Schmidt operators from $E_1$ to $E_2$, respectively. The space of continuous bilinear maps from $E_1 \times E_1$ to $E_2$ will be denoted by $\mathscr{L}_2(E_1;E_2)$. The $n$-th order Fr\'echet and G\^ateaux derivatives of a function $\Phi: E_1 \to E_2$ at a point $x \in E_1$ are denoted by $D^n\Phi(x)$ and $D^n_{\mathcal{G}}\Phi(x)$, respectively, omitting the superscript if $n=1$, as usual. \medskip We shall work under the following standing assumptions. \smallskip\par\noindent \textbf{(A1)} There exists an open set $\mathcal{O}$ in $\mathbb{R}^d$, $d \geq 1$, and a Borel measure $\mu$ such that $H=L^2(\mathcal{O},\mu)$. \smallskip\par\noindent The norm and scalar product on $H$ will be denoted by $\norm{\cdot}$ and $\ip{\cdot}{\cdot}$, respectively.
\smallskip\par\noindent \textbf{(A2)} $A$ is a linear maximal monotone operator on $H$ such that its resolvent is sub-Markovian and is a contraction with respect to the $L^1(\mathcal{O},\mu)$-norm. \smallskip\par\noindent Recall that the resolvent of $A$, i.e. the family of linear continuous operators on $H$ defined by \[ J_\lambda := (I+\lambda A)^{-1}, \qquad \lambda>0, \] is said to be sub-Markovian if, for every $\lambda>0$ and every $\phi \in H$ such that $0 \leq \phi \leq 1$ a.e. in $\mathcal{O}$, one has $0 \leq J_\lambda\phi \leq 1$ a.e. in $\mathcal{O}$. \smallskip\par\noindent \textbf{(A3)} $F:\Omega \times[0,T] \times H \to H$ and $B:\Omega \times[0,T] \times H \to \mathscr{L}^2(U,H)$ are $\mathscr{R} \otimes \mathscr{B}(H)$-measurable, and there exists a constant $C>0$ such that \[ -\ip{F(\omega,t,h)}{h_-} + \frac12\norm[\big]{1_{\{h<0\}}B(\omega,t,h)}_{\mathscr{L}^2(U,H)}^2 \leq C \norm{h_-}^2_{L^2(\mathcal{O})} \qquad \forall (\omega,t,h) \in \Omega \times [0,T] \times H. \] In particular, note that choosing $h=0$ yields $F(\cdot, 0)=0$ and $B(\cdot, 0)=0$. \smallskip\par\noindent \textbf{(A4)} $u_0\in L^0(\Omega,\mathscr{F}_0; H)$ \medskip \begin{defi} A local mild solution to the Cauchy problem \eqref{eq:0} is a pair $(u,\tau)$, where $\tau$ is a stopping time with $\tau \leq T$, and $u:[\![0,\tau[\![ \to H$ is a measurable adapted process with continuous trajectories such that, for any stopping time $\sigma<\tau$, one has \begin{itemize} \item[(i)] $S(t-\cdot) F(u) \ind{\cc{0}{\sigma}} \in L^0(\Omega;L^1(0,t;H))$ for all $t \in [0,T]$; \item[(ii)] $S(t-\cdot) B(u) \ind{\cc{0}{\sigma}} \in L^0(\Omega;L^2(0,t;\mathscr{L}^2(U,H)))$ for all $t \in [0,T]$, \end{itemize} and \[ u = S(\cdot)u_0 + \int_0^\cdot S(\cdot-s)F(s,u(s))\,ds + \int_0^\cdot S(\cdot-s)B(s,u(s))\,dW(s). \] \end{defi} The last identity is to be understood in the sense of indistinguishability of processes defined on the stochastic interval $\co{0}{\tau}$. 
Here the stochastic convolution is defined on $\cc{0}{\sigma}$, for every stopping time $\sigma<\tau$, as \[ \biggl( \int_0^t S(t-s) B(s,u(s)) \ind{\cc{0}{\sigma}}(s)\,dW(s) \biggr)_{t\in[0,\sigma]}. \] The main result is the following. \begin{thm} \label{thm:pos} Let $(u,\tau)$ be a local mild solution to the Cauchy problem \eqref{eq:0} such that, for every stopping time $\sigma<\tau$, one has \begin{itemize} \item[(i)] $F(u)\ind{\cc{0}{\sigma}} \in L^0(\Omega;L^1(0,T;H))$; \item[(ii)] $B(u)\ind{\cc{0}{\sigma}} \in L^0(\Omega;L^2(0,T;\mathscr{L}^2(U,H)))$. \end{itemize} If $u_0 \geq 0$ a.e. in $\mathcal{O}$, then $u^{\tau-}(t) \geq 0$ a.e.~in $\mathcal{O}$ for all $t \in [0,T]$. \end{thm} \section{Auxiliary results} \label{sec:aux} The arguments used in the proof of Theorem~\ref{thm:pos} (see \S\ref{sec:proof} below) rely on the following results, that we recall here for the reader's convenience. The first is a continuous dependence result for mild solutions to stochastic evolution equations in the form \eqref{eq:0} with respect to the coefficients and the initial datum. This is a consequence of a more general statement proved in \cite[Corollary~3.4]{KvN2}. Let \begin{align*} (u_{0n})_n &\subset L^0(\Omega,\mathscr{F}_0;H),\\ (f_n)_n, f &\subset L^0(\Omega;L^1(0,T;H)),\\ (G_n)_n, G &\subset L^0(\Omega;L^2(0,T;\mathscr{L}^2(U,H))) \end{align*} be such that the $H$-valued processes $f_n$, $f$, $G_nv$, and $Gv$ are strongly measurable and adapted for all $v \in U$ and $n \in \mathbb{N}$. Then the Cauchy problems \[ du_n + Au_n\,dt = f_n\,dt + G_n\,dW, \qquad u_n(0)=u_{0n}, \] and \[ du + Au\,dt = f\,dt + G\,dW, \qquad u(0)=u_0, \] admit unique mild solutions $u_n$ and $u$, respectively. \begin{prop} \label{prop:micia} Assume that \begin{alignat*}{3} u_{0n} &\longrightarrow u_0 &&\quad \text{in } L^0(\Omega; H),\\ f_n &\longrightarrow f &&\quad \text{in } L^0(\Omega;L^1(0,T;H)),\\ G_n &\longrightarrow G &&\quad \text{in } L^0(\Omega; L^2(0,T;\mathscr{L}^2(U,H))). 
\end{alignat*} Then $u_n \to u$ in $L^0(\Omega;C([0,T];H))$. \end{prop} The second result we shall need is a generalized It\^o formula, the proof of which can be found in \cite{cm:pos1}. \begin{prop} \label{prop:Ito} Let $G \colon H \to \mathbb{R}$ be continuously Fr\'echet differentiable and $DG$ be G\^ateaux differentiable, with $D^2_{\mathcal{G}}G \colon H \to \mathscr{L}_2(H;\mathbb{R})$ such that $(\varphi,\zeta_1,\zeta_2) \mapsto D^2_{\mathcal{G}}G(\varphi)[\zeta_1,\zeta_2]$ is continuous, and assume that $G$, $DG$, and $D^2_{\mathcal{G}}G$ are polynomially bounded. Moreover, let $f \in L^0(\Omega;L^1(0,T;H))$ and $\Phi \in L^0(\Omega;L^2(0,T;\mathscr{L}^2(U,H)))$ be measurable and adapted processes, and $v_0 \in L^0(\Omega,\mathscr{F}_0;H)$. Setting \[ v := v_0 + \int_0^\cdot f(s)\,ds + \int_0^\cdot \Phi(s)\,dW(s), \] one has \begin{align*} G(v) &= G(v_0) + \int_0^\cdot \Bigl( DG(v)f % + \frac12 \operatorname{Tr}\bigl( \Phi^* D^2_{\mathcal{G}}G(v)\Phi \bigr)\Bigr)(s)\,ds\\ &\quad + \int_0^\cdot DG(v(s)) \Phi(s)\,dW(s). \end{align*} \end{prop} Finally, we recall an inequality for maximal monotone linear operators with sub-Markovian resolvent, due to Br\'ezis and Strauss (see \cite[Lemma~2]{BreStr}).\footnote{For a related inequality cf. also \cite[Lemma~5.1]{RW:nonmon}.} \begin{lem} \label{lem:brez-str} Let $\beta \colon \mathbb{R} \to 2^\mathbb{R}$ be a maximal monotone graph with $0 \in \beta(0)$. Let $\varphi \in L^p(\mathcal{O})$ with $A\varphi \in L^p(\mathcal{O})$, and $z \in L^q(\mathcal{O})$ with $z\in\beta(\varphi)$ a.e.~in $\mathcal{O}$, where $p,q \in [1,+\infty]$ and $1/p+1/q=1$. Then \[ \int_\mathcal{O} (A\varphi) z \geq 0. \] \end{lem} We include a sketch of proof for the reader's convenience, assuming for simplicity that $\beta: \mathbb{R} \to \mathbb{R}$ is continuous and bounded. 
Let $j:\mathbb{R} \to \mathbb{R}_+$ be a (differentiable, convex) primitive of $\beta$ and \[ A_\lambda := \frac{1}{\lambda}\bigl( I - (I+\lambda A)^{-1}\bigr) = \frac{1}{\lambda}(I-J_\lambda), \qquad \lambda>0, \] the Yosida approximation of $A$. It is well known that $A_\lambda$ is a linear maximal monotone bounded operator on $H$ and that, for every $v \in \mathsf{D}(A)$, $A_\lambda v \to Av$ as $\lambda \to 0$. Let $v \in \mathsf{D}(A)$. The convexity of $j$ implies, for every $\lambda>0$, \begin{align*} \ip[\big]{A_\lambda v}{\beta(v)}_{L^2} &= \frac{1}{\lambda} \ip[\big]{v - J_\lambda v}{j'(v)}_{L^2}\\ &\geq \frac{1}{\lambda} \biggl( \int_\mathcal{O} j(v) - \int_\mathcal{O} j(J_\lambda v) \biggr) = \frac{1}{\lambda} \bigl( \norm{j(v)}_{L^1} - \norm{j(J_\lambda v)}_{L^1} \bigr). \end{align*} Since $J_\lambda$ is sub-Markovian and $j$ is convex, the generalized Jensen inequality for positive operators (see \cite{Haa07}) and the contractivity of $J_\lambda$ in $L^1$ imply that \[ \norm[\big]{j(J_\lambda v)}_{L^1} \leq \norm[\big]{J_\lambda j(v)}_{L^1} \leq \norm[\big]{j(v)}_{L^1}, \] i.e. that \[ \ip[\big]{A_\lambda v}{\beta(v)}_{L^2} \geq 0 \] for every $\lambda > 0$. Passing to the limit as $\lambda \to 0$ yields $\ip{Av}{\beta(v)}_{L^2} \geq 0$. \section{Proof of Theorem~\ref{thm:pos}} \label{sec:proof} The proof is divided into two parts. First we show that a local mild solution $u$ to \eqref{eq:0} can be approximated by strong solutions to regularized equations. As a second step, we show that such approximating processes are positive, thanks to a suitable version of It\^o's formula. \subsection{Approximation of the solution} Let $(u,\tau)$ be a local mild solution to \eqref{eq:0}.
Let $\sigma$ be a stopping time with $\sigma<\tau$, so that $u:[\![0,\sigma]\!]\to H$ is well defined, and set \begin{align*} \bar{u} &:= u^\sigma \in L^0(\Omega;C([0,T];H)),\\ \bar{F} &:= F(\cdot,u) \ind{\cc{0}{\sigma}} \in L^0(\Omega;L^1(0,T;H)),\\ \bar{B} &:= B(\cdot,u) \ind{\cc{0}{\sigma}} \in L^0(\Omega;L^2(0,T;\mathscr{L}^2(U,H))). \end{align*} Note that, by assumption (A3), $F(\cdot,0)=0$ and $B(\cdot,0)=0$, hence \begin{align*} \bar{F} = F(\cdot,u) \ind{\cc{0}{\sigma}} = F(\cdot,u\ind{\cc{0}{\sigma}}), \\ \bar{B} = B(\cdot,u)\ind{\cc{0}{\sigma}} =B(\cdot,u\ind{\cc{0}{\sigma}}). \end{align*} In particular, one has \begin{equation} \label{mild_bar} \bar{u}(t) := S(t)u_0 + \int_0^t S(t-s)\bar F(s)\,ds + \int_0^t S(t-s)\bar B(s)\,dW(s) \end{equation} for all $t \in [0,T]$ $\P$-a.s., or, equivalently, $\bar{u}$ is the unique global mild solution to the Cauchy problem \[ d\bar{u} + A\bar{u}\,dt =\bar{F}\,dt + \bar{B}\,dW, \qquad \bar{u}(0)=u_0. \] Recalling that $J_\lambda \in \mathscr{L}(H,\mathsf{D}(A))$ for all $\lambda>0$, one has \begin{align*} \bar{F}_\lambda &:= J_\lambda F(\cdot,u) \ind{\cc{0}{\sigma}} = J_\lambda \bar{F} \in L^0(\Omega;L^1(0,T;\mathsf{D}(A))),\\ \bar{B}_\lambda &:=J_\lambda B(\cdot,u) \ind{\cc{0}{\sigma}} = J_\lambda \bar{B} \in L^0(\Omega;L^2(0,T;\mathscr{L}^2(U,\mathsf{D}(A)))),\\ u_{0\lambda} &:= J_\lambda u_0 \in L^0(\Omega,\mathscr{F}_0;\mathsf{D}(A)), \end{align*} where the second assertion is an immediate consequence of the ideal property of Hilbert-Schmidt operators. 
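The regularizing effect of $J_\lambda$ used here is easy to visualize in finite dimension. The sketch below is a toy model of ours, with a discrete Dirichlet Laplacian standing in for the abstract generator $A$; it checks numerically that the resolvent $(I+\lambda A)^{-1}$ is sub-Markovian and converges strongly to the identity as $\lambda \to 0$.

```python
import numpy as np

# Toy finite-dimensional stand-in for A: the discrete Dirichlet Laplacian,
# a (maximal) monotone matrix whose resolvent J_lambda = (I + lambda*A)^{-1}
# is sub-Markovian (a positive matrix with row sums at most 1).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

rng = np.random.default_rng(1)
h = rng.uniform(0.0, 1.0, n)             # datum with 0 <= h <= 1

for lam in (1.0, 0.1, 0.01):
    Jh = np.linalg.solve(np.eye(n) + lam * A, h)
    # sub-Markovian property: 0 <= J_lambda h <= 1
    print(lam, Jh.min(), Jh.max())

# strong convergence J_lambda h -> h as lambda -> 0
err = np.linalg.norm(np.linalg.solve(np.eye(n) + 1e-6 * A, h) - h)
print(err)
```

The same two properties (positivity preservation and strong convergence of $J_\lambda$ to the identity) are exactly the ones exploited in the approximation and limiting arguments below.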
The process $u_\lambda:\Omega\times[0,T]\to H$ defined as \begin{equation} \label{mild_bar_lam} u_\lambda(t) := S(t)u_{0\lambda } + \int_0^t S(t-s)\bar F_\lambda(s)\,ds + \int_0^t S(t-s)\bar B_\lambda(s)\,dW(s), \qquad t\in[0,T], \end{equation} therefore belongs to $L^0(\Omega;C([0,T];\mathsf{D}(A)))$ and is the unique global strong solution to the Cauchy problem \[ du_\lambda + Au_\lambda\,dt = \bar{F}_\lambda\,dt + \bar{B}_\lambda\,dW, \qquad u_\lambda(0)=u_{0\lambda}, \] i.e. \begin{equation} \label{strong} u_\lambda + \int_0^\cdot Au_\lambda(s)\,ds = u_{0\lambda} + \int_0^\cdot \bar{F}_\lambda(s)\,ds + \int_0^\cdot \bar{B}_\lambda(s)\,dW(s) \end{equation} in the sense of indistinguishable $H$-valued processes. Furthermore, since $J_\lambda$ is contractive and converges to the identity in the strong operator topology of $\mathscr{L}(H,H)$ as $\lambda \to 0$, i.e. $J_\lambda h \to h$ for every $h \in H$, one has \begin{align*} u_{0\lambda} \longrightarrow u_0 &\quad \text{in } L^0(\Omega; H),\\ \bar{F}_\lambda \longrightarrow \bar{F} &\quad \text{in } L^0(\Omega; L^1(0,T; H)),\\ \bar{B}_\lambda \longrightarrow \bar{B} &\quad \text{in } L^0(\Omega;L^2(0,T;\mathscr{L}^2(U,H))), \end{align*} where the second convergence follows immediately by the dominated convergence theorem, and the third one by a continuity property of Hilbert-Schmidt operators (see, e.g., \cite[Theorem~9.1.14]{HvNVW2}). Finally, thanks to Proposition~\ref{prop:micia}, we deduce that \begin{equation} \label{conv} u_\lambda \longrightarrow \bar{u} \quad\text{in } L^0(\Omega;C([0,T];H)).
\end{equation} \subsection{Positivity} Let us introduce the functional \begin{align*} G \colon H &\longrightarrow \mathbb{R}_+,\\ G \colon \varphi &\longmapsto \frac12 \int_\mathcal{O} \abs{\varphi_-}^2, \end{align*} as well as the family, indexed by $n \in \mathbb{N}$, of regularized functionals \begin{align*} G_n \colon H &\longrightarrow \mathbb{R}_+,\\ G_n \colon \varphi &\longmapsto \frac12 \int_\mathcal{O} g_n(\varphi), \end{align*} where $g_n:\mathbb{R} \to \mathbb{R}_+$ is convex, twice continuously differentiable, identically equal to zero on $\mathbb{R}_+$, strictly positive and decreasing on $\mathbb{R}_-$, such that $(g_n'')$ is uniformly bounded, and $g'_n(r) \to -r^-$ as $n \to \infty$ for all $r \in \mathbb{R}$. The existence of such an approximating sequence is well known (see, e.g., \cite[\S3]{scar-stef-order} for details). One can verify (see, e.g., \cite{cm:pos1}) that, for every $n \in \mathbb{N}$, $G_n$ is everywhere continuously Fr\'echet differentiable with derivative \begin{align*} DG_n \colon H &\longrightarrow \mathscr{L}(H,\mathbb{R}) \simeq H,\\ DG_n \colon \varphi &\longmapsto g_n'(\varphi), \end{align*} and that $DG_n \colon H \to H$ is G\^ateaux differentiable with G\^ateaux derivative given by \begin{align*} D^2_{\mathcal G}G_n \colon H &\longrightarrow \mathscr{L}(H,H) \simeq \mathscr{L}_2(H;\mathbb{R}),\\ D^2_{\mathcal G}G_n \colon \varphi &\longmapsto \Bigl[ (\zeta_1,\zeta_2) \mapsto \int_{\mathcal{O}} g_n''(\varphi)\zeta_1\zeta_2 \Bigr]. \end{align*} Furthermore, the map $(\varphi,\zeta_1,\zeta_2) \mapsto D^2_{\mathcal{G}}G_n(\varphi)(\zeta_1,\zeta_2)$ is continuous. 
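One admissible choice of such a sequence, given here purely for illustration (the construction referenced in \cite{scar-stef-order} may differ in details), takes $g_n'(r) = -\sqrt{r^2+1/n^2} + 1/n$ for $r<0$ and $g_n'(r)=0$ for $r \geq 0$, so that $\abs{g_n''} \leq 1$ uniformly and $g_n'(r) \to -r^-$ pointwise. The sketch below checks these two properties numerically.

```python
import numpy as np

# Illustrative choice of g_n' (our construction): smooth, zero on R_+,
# with |g_n''| <= 1 uniformly and g_n'(r) -> -r^- pointwise as n -> infty.
def gp(r, n):
    return np.where(r < 0.0, -np.sqrt(r**2 + 1.0 / n**2) + 1.0 / n, 0.0)

r = np.linspace(-5.0, 5.0, 10001)
target = np.where(r < 0.0, r, 0.0)       # -r^- equals r on R_-, 0 on R_+

for n in (1, 10, 1000):
    gpp = np.gradient(gp(r, n), r)       # finite-difference g_n''
    print(n, np.max(np.abs(gpp)))        # stays below 1 (plus rounding)

print(np.max(np.abs(gp(r, 1000) - target)))   # at most 1/1000
```

The uniform bound on $g_n''$ is what later justifies the dominated convergence arguments, while the pointwise limit of $g_n'$ recovers the functional $G$ in the limit.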
Proposition~\ref{prop:Ito} applied to the process $u_\lambda$ defined by \eqref{strong} then yields \begin{equation} \label{ito} \begin{split} &G_n(u_\lambda) + \int_0^\cdot \ip[\big]{Au_\lambda}{DG_n(u_\lambda)}(s)\,ds\\ &\hspace{5em} = G_n(u_{0\lambda}) + \int_0^\cdot DG_n(u_\lambda(s)) \bar{B}_\lambda(s)\,dW(s)\\ &\hspace{5em} \quad + \int_0^\cdot \Bigl( DG_n(u_\lambda) \bar{F}_\lambda + \frac12 \operatorname{Tr}\bigl(\bar{B}_\lambda^* D^2_{\mathcal{G}}G_n(u_\lambda) \bar{B}_\lambda\bigr) \Bigr)(s)\,ds. \end{split} \end{equation} Recalling that $g'_n: \mathbb{R} \to \mathbb{R}$ is increasing, Lemma~\ref{lem:brez-str} implies that \[ \ip{Au_\lambda}{DG_n(u_\lambda)} = \ip{Au_\lambda}{g_n'(u_\lambda)} \geq 0, \] hence also, denoting a complete orthonormal system of $U$ by $(e_j)$, \begin{align*} \int_\mathcal{O} g_n(u_\lambda(t)) &\leq \int_\mathcal{O} g_n(u_{0\lambda}) + \int_0^t g_n'(u_\lambda(s)) \bar{B}_\lambda(s)\,dW(s)\\ &\quad + \int_0^t g_n'(u_\lambda(s)) \bar{F}_\lambda(s)\,ds + \frac12 \int_0^t \sum_{j=0}^\infty\int_\mathcal{O} g_n''(u_\lambda(s)) \abs[\big]{\bar{B}_\lambda(s)e_j}^2\,ds \end{align*} for every $t \in [0,T]$ and $n \in \mathbb{N}$. We are now going to pass to the limit as $n \to \infty$ in this inequality. Recalling that $(g''_n)$ is uniformly bounded and that the paths of $u_\lambda$ belong to $C([0,T]; H)$ $\P$-a.s., the dominated convergence theorem yields \begin{align*} \int_\mathcal{O} g_n(u_\lambda(t)) &\longrightarrow \frac12 \norm[\big]{u_\lambda^-(t)}^2 \quad \forall t \in [0,T],\\ \int_\mathcal{O} g_n(u_{0\lambda}) &\longrightarrow \frac12 \norm[\big]{u_{0\lambda}^-}^2. \end{align*} Note that $u_0$ is positive and $J_\lambda$ is positivity preserving, hence $u_{0\lambda}=J_\lambda u_0$ is also positive and, in particular, $u_{0\lambda}^-$ is equal to zero a.e. in $\mathcal{O}$.
Let us introduce the (real) continuous local martingales $(M^{\lambda,n})_{n \in \mathbb{N}}$, $M^\lambda$, defined as \begin{align*} M^{\lambda,n}_t &:= \int_0^t g_n'(u_\lambda(s)) \bar{B}_\lambda(s)\,dW(s),\\ M^\lambda_t &:= -\int_0^t u_\lambda^-(s) \bar{B}_\lambda(s)\,dW(s). \end{align*} One has, by the ideal property of Hilbert-Schmidt operators, \begin{align*} \bigl[M^{\lambda,n}-M^\lambda,M^{\lambda,n}-M^\lambda\bigr]_t &= \int_0^t \norm[\big]{(g'_n(u_\lambda(s)) + u_\lambda^-(s)) \bar{B}_\lambda(s)}_{\mathscr{L}^2(U,\mathbb{R})}^2\,ds\\ &\leq \int_0^t \norm[\big]{g'_n(u_\lambda(s)) + u_\lambda^-(s)}^2 \norm[\big]{\bar{B}_\lambda(s)}^2_{\mathscr{L}^2(U,H)}\,ds \end{align*} for all $t \in [0,T]$. Recalling that $u_\lambda\in L^0(\Omega;C([0,T];H))$ and $g_n'(r) \to -r^-$ for every $r \in \mathbb{R}$, it follows by the dominated convergence theorem that $[M^{\lambda,n}-M^\lambda,M^{\lambda,n}-M^\lambda] \to 0$, hence that $M^{\lambda,n} \to M^\lambda$, as $n \to \infty$, i.e. that \[ \int_0^t g_n'(u_\lambda(s)) \bar{B}_\lambda(s)\,dW(s) \longrightarrow -\int_0^t u_\lambda^-(s) \bar{B}_\lambda(s)\,dW(s) \] for all $t \in [0,T]$. Similarly, the pathwise continuity of $u_\lambda$ and the dominated convergence theorem yield \[ \int_0^t g_n'(u_\lambda(s)) \bar{F}_\lambda(s)\,ds \longrightarrow -\int_0^t u_\lambda^-(s) \bar{F}_\lambda(s)\,ds \] for all $t \in [0,T]$ as $n \to \infty$. Finally, the pointwise convergence $g_n'' \to \ind{\mathbb{R}_-}$ and the dominated convergence theorem imply that \[ \int_0^t \sum_{j=0}^\infty \int_\mathcal{O} g_n''(u_\lambda(s)) \abs[\big]{\bar{B}_\lambda(s)e_j}^2\,ds \longrightarrow \int_0^t \sum_{j=0}^\infty \int_\mathcal{O} \ind{\{u_\lambda(s)<0\}} \abs[\big]{\bar{B}_\lambda(s)e_j}^2\,ds \] for all $t \in [0,T]$ as $n \to \infty$.
We are thus left with \begin{align*} \norm[\big]{u_\lambda^-(t)}^2 &\leq \int_0^t \Bigl( -2\ip{u_\lambda^-(s)}{\bar{F}_\lambda(s)} + \sum_{j=0}^\infty \int_\mathcal{O} \ind{\{u_\lambda(s)<0\}} \abs[\big]{\bar{B}_\lambda(s)e_j}^2 \Bigr)\,ds\\ &\quad -\int_0^t u_\lambda^-(s) \bar{B}_\lambda(s)\,dW(s). \end{align*} Let us now take the limit as $\lambda \to 0$: it follows from the convergence property \eqref{conv} and the continuous mapping theorem that \[ \norm[\big]{u_\lambda^-(t)}^2 \longrightarrow \norm[\big]{\bar{u}^-(t)}^2. \] Recalling that $\bar{F}_\lambda = J_\lambda F(\bar{u})$, which converges pointwise to $F(\bar{u})$, one has \[ \int_0^t -2\ip[\big]{u_\lambda^-(s)}{\bar{F}_\lambda(s)}\,ds \longrightarrow \int_0^t -2\ip[\big]{\bar{u}^-(s)}{F(s,\bar{u}(s))}\,ds. \] Appealing again to \eqref{conv}, it is not difficult to check that \[ \ind{\{\bar{u}(s)<0\}} \leq \liminf_{\lambda\to0} \ind{\{u_\lambda(s)<0\}} \quad \text{a.e.~in } \mathcal{O} \quad \forall s \in [0,T]. \] Hence it follows from Fatou's lemma that \[ \int_0^t \sum_{j=0}^\infty \int_\mathcal{O} \ind{\{u_\lambda(s)<0\}} \abs[\big]{\bar{B}_\lambda(s)e_j}^2\,ds \longrightarrow \int_0^t \sum_{j=0}^\infty \int_\mathcal{O} \ind{\{\bar{u}(s)<0\}} \abs[\big]{B(s,\bar{u}(s))e_j}^2\,ds. \] Let us now consider the real continuous local martingales $(M^\lambda)_{\lambda>0}$, $M$, defined as \begin{align*} M^\lambda_t &:= -\int_0^t u_\lambda^-(s) \bar{B}_\lambda(s)\,dW(s),\\ M_t &:= -\int_0^t \bar{u}^-(s) \bar{B}(s)\,dW(s).
\end{align*} One has \[ \bigl[M^\lambda-M,M^\lambda-M\bigr]_t = \int_0^t \norm[\big]{u_\lambda^-(s)\bar{B}_\lambda(s) - \bar{u}^-(s)\bar{B}(s)}^2_{\mathscr{L}^2(U,\mathbb{R})}\,ds, \] where, by the ideal property of Hilbert-Schmidt operators and the contractivity of $J_\lambda$, \begin{align*} \norm[\big]{u_\lambda^-\bar{B}_\lambda - \bar{u}^-\bar{B}}_{\mathscr{L}^2(U,\mathbb{R})} &\leq \norm[\big]{(u_\lambda^- - \bar{u}^-)\bar{B}_\lambda}_{\mathscr{L}^2(U,\mathbb{R})} + \norm[\big]{\bar{u}^-(\bar{B}_\lambda - \bar{B})}_{\mathscr{L}^2(U,\mathbb{R})}\\ &\leq \norm[\big]{u_\lambda^- - \bar{u}^-} \norm[\big]{\bar{B}}_{\mathscr{L}^2(U,H)} + \norm[\big]{\bar{u}^-} \norm[\big]{\bar{B}_\lambda - \bar{B}}_{\mathscr{L}^2(U,H)}. \end{align*} Since, as $\lambda \to 0$, $u_\lambda$ converges to $\bar{u}$ in the sense of \eqref{conv} and, as already seen, $\bar{B}_\lambda$ converges to $\bar{B}$ in $L^0(\Omega;L^2(0,T;\mathscr{L}^2(U,H)))$, the dominated convergence theorem yields, for every $t \in [0,T]$, \[ \bigl[M^\lambda-M,M^\lambda-M\bigr]_t \longrightarrow 0, \] thus also \[ \int_0^t u_\lambda^-(s) \bar{B}_\lambda(s)\,dW(s) \longrightarrow \int_0^t \bar u^-(s) B(s,\bar{u}(s))\,dW(s). \] Recalling assumption (A3), one obtains, for every $t \in [0,T]$, \[ \norm{\bar{u}^-(t)}^2 \leq 2C \int_0^t \norm{\bar{u}^-(s)}^2\,ds -2 \int_0^t \bar{u}^-(s) B(s,\bar{u}(s))\,dW(s), \] thus also, integrating by parts, \[ e^{-2Ct} \norm{\bar{u}^-(t)}^2 \leq -2 \int_0^t e^{-2Cs} \bar{u}^-(s) B(s,\bar{u}(s))\,dW(s) =: \tilde M_t. \] The process $\tilde M$ is a positive local martingale, hence a supermartingale, with $\tilde M(0)=0$; the supermartingale property then gives $0 \leq \mathbb{E}\tilde M_t \leq \tilde M_0 = 0$, therefore $\tilde M$ is identically equal to zero. This implies that $\norm{\bar{u}^-(t)}=0$ for all $t \in [0,T]$, hence, in particular, that $\bar{u}(t)$ is positive a.e. in $\mathcal{O}$ for all $t \in [0,T]$. By definition of $\bar{u}$, we deduce that \[ u^{\sigma} \geq 0 \quad\text{a.e.~in } \Omega \times[0,T] \times \mathcal{O} \] for every $\sigma<\tau$.
Since $\sigma$ is arbitrary, this readily implies that \[ u \geq 0 \quad\text{a.e.~in } \co{0}{\tau} \times \mathcal{O}, \] thus completing the proof of Theorem~\ref{thm:pos}. \section{Positivity of forward rates} Musiela's stochastic PDE can be written as \begin{equation} \label{eq:Mus} du + Au\,dt = \beta(t,u)\,dt + \sum_{k=1}^\infty \sigma_k(t,u)\,dw^k(t), \qquad u(0)=u_0, \end{equation} where $-A$ is (formally, for the moment) the infinitesimal generator of the semigroup of translations, $(w^k)_{k\in\mathbb{N}}$ is a sequence of independent standard Wiener processes, $\sigma_k$ is a random, time-dependent superposition operator for each $k \in \mathbb{N}$, as well as $\beta$, and $u$ takes values in a space of continuous functions, so that $u(t,x):=[u(t)](x)$, $x \geq 0$, models the value of the forward rate prevailing at time $t$ for delivery at time $t+x$. In order to exclude arbitrage (or, more precisely, in order for the corresponding discounted bond price process to be a local martingale), $\beta$ needs to satisfy the so-called Heath-Jarrow-Morton no-arbitrage condition \[ \beta(t,v) = \sum_{k=1}^\infty \sigma_k(t,v) \int_0^\cdot [\sigma_k(t,v)](y)\,dy. \] In order for \eqref{eq:Mus} to admit a solution with continuous paths, a by now standard choice of state space is the Hilbert space $H_\alpha$, $\alpha>0$, which consists of absolutely continuous functions $\phi:\mathbb{R}_+ \to \mathbb{R}$ such that \[ \norm[\big]{\phi}^2_{H_\alpha} := \phi(\infty)^2 + \int_0^\infty \abs{\phi'(x)}^2 e^{\alpha x}\,dx < \infty. 
\] Under measurability, local boundedness, and local Lipschitz continuity conditions on $(\sigma_k)$, one can rewrite \eqref{eq:Mus} as \begin{equation} \label{eq:Musa} du + Au\,dt = \beta(t,u)\,dt + B(t,u)\,dW(t), \qquad u(0)=u_0, \end{equation} where $A$ is the generator of the semigroup of translations on $H_\alpha$, $W$ is a cylindrical Wiener process on $U=\ell^2$, and $B:\Omega \times \mathbb{R}_+ \times H \to \mathscr{L}^2(U,H)$ is such that \[ \sum_{k=1}^\infty \int_0^\cdot \sigma_k(s,v(s))\,dw^k(s) = \int_0^\cdot B(s,v(s))\,dW(s). \] Under such assumptions on $(\sigma_k)$, \eqref{eq:Mus} admits a unique local mild solution with values in $H_\alpha$. If $(\sigma_k)$ satisfy stronger (global) boundedness and Lipschitz continuity assumptions, then the local mild solution is in fact global. For details we refer to \cite{filipo}, as well as to \cite{cm:pos1}. Positivity of forward rates, i.e. of the mild solution to \eqref{eq:Mus}, is established in \cite{cm:pos1} by proving positivity of mild solutions in weighted $L^2$ spaces to regularized versions of \eqref{eq:Mus}. Such an approximation argument is employed because the conditions on $(\sigma_k)$ ensuring (local) Lipschitz continuity of the coefficients in the associated stochastic evolution equation \eqref{eq:Musa} in $H_\alpha$ do not imply (local) Lipschitz continuity of the coefficients if the state space is changed to a weighted $L^2$ space. Thanks to Theorem~\ref{thm:pos}, we can give a much shorter, more direct proof of the (criterion for the) positivity of forward rates. Let $L^2_{-\alpha}$ denote the weighted space $L^2(\mathbb{R}_+,e^{-\alpha x}\,dx)$, and note that $H_\alpha$ is continuously embedded in $L^2_{-\alpha}=:H$. Let us check that assumptions (A1), (A2), and (A3) are satisfied. Assumption (A1) holds true with the choice $\mathcal{O}=\mathbb{R}_+$, endowed with the absolutely continuous measure $m(dx):=e^{-\alpha x}\,dx$.
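As a numerical aside, the HJM no-arbitrage drift recalled above is straightforward to evaluate. In the one-factor case with exponentially damped volatility $\sigma_1(x) = \sigma_0 e^{-ax}$ (a standard textbook choice, not specific to this paper, with illustrative parameter values of ours) the drift has the closed form $\beta(x) = \frac{\sigma_0^2}{a} e^{-ax}\bigl(1-e^{-ax}\bigr)$, which the following sketch verifies by quadrature.

```python
import numpy as np

# One-factor illustration of the HJM drift condition
# beta(x) = sigma(x) * int_0^x sigma(y) dy, with illustrative parameters.
sigma0, a = 0.02, 0.3

def sigma(x):
    return sigma0 * np.exp(-a * x)

def beta_closed(x):
    # closed form of sigma(x) * int_0^x sigma(y) dy for this volatility
    return sigma0**2 / a * np.exp(-a * x) * (1.0 - np.exp(-a * x))

x = np.linspace(0.0, 10.0, 2001)
# trapezoidal approximation of int_0^x sigma(y) dy at every grid point
integral = np.concatenate(
    ([0.0], np.cumsum(0.5 * (sigma(x[1:]) + sigma(x[:-1])) * np.diff(x)))
)
beta_num = sigma(x) * integral

print(np.max(np.abs(beta_num - beta_closed(x))))   # quadrature error only
```

Note that such a drift is nonnegative whenever the volatility is of one sign, in line with the positivity phenomenon studied here.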
As far as assumption (A2) is concerned, a simple computation shows that $A+\alpha I$ is monotone on $L^2_{-\alpha}$, and, by standard ODE theory, one also verifies that the range of $A + \alpha I + I$ coincides with the whole space $L^2_{-\alpha}$, therefore $A+\alpha I$ is maximal monotone. Even though $A$ itself is not maximal monotone, this is clearly not restrictive, as the ``correction'' term $\alpha I$ can be incorporated in $\beta$ without loss of generality. To verify that the resolvent $J_\lambda \in \mathscr{L}(H)$ of $A+\alpha I$ is sub-Markovian, let $y \in H$, so that $J_\lambda y \in \mathsf{D}(A)$ is the unique solution $y_\lambda$ to the problem \[ y_\lambda - \lambda y_\lambda' + \lambda\alpha y_\lambda = y. \] If $0\leq y\leq1$ a.e. in $\mathbb{R}_+$, then we have, multiplying both sides by ${(y_\lambda-1)^+}$, in the sense of the scalar product of $H$, that \begin{align*} &(1+\lambda\alpha) \ip[\big]{y_\lambda}{(y_\lambda-1)^+}_{-\alpha} - \lambda \ip[\big]{y'_\lambda}{(y_\lambda-1)^+}_{-\alpha}\\ &\hspace{5em}= \ip[\big]{y}{(y_\lambda-1)^+}_{-\alpha} \leq \ip[\big]{1}{(y_\lambda-1)^+}_{-\alpha}. \end{align*} Here and in the following we denote the scalar product and norm of $L^2_{-\alpha}$ simply by $\ip{\cdot}{\cdot}_{-\alpha}$ and $\norm{\cdot}_{-\alpha}$, respectively. 
Since \begin{equation} \label{eq:iole} \ip[\big]{y_\lambda}{(y_\lambda-1)^+}_{-\alpha} - \ip[\big]{1}{(y_\lambda-1)^+}_{-\alpha} = \norm[\big]{(y_\lambda-1)^+}^2_{-\alpha}, \end{equation} we obtain \[ \norm[\big]{(y_\lambda-1)^+}^2_{-\alpha} - \frac{\lambda}{2} \int_0^{\infty} \frac{d}{dx}((y_\lambda-1)^+)^2(x) e^{-\alpha x}\,dx + \lambda\alpha \ip[\big]{y_\lambda}{(y_\lambda-1)^+}_{-\alpha} \leq 0, \] where, integrating by parts, \begin{align*} &- \frac{\lambda}{2} \int_0^{\infty} \frac{d}{dx}((y_\lambda-1)^+)^2(x) e^{-\alpha x}\,dx\\ &\hspace{5em} = -\frac{\lambda\alpha}{2} \int_0^{\infty} ((y_\lambda(x)-1)^+)^2 e^{-\alpha x}\,dx + \frac{\lambda}{2} ((y_\lambda(0)-1)^+)^2\\ &\hspace{5em} = -\frac{\lambda\alpha}{2} \ip[\big]{y_\lambda}{(y_\lambda-1)^+}_{-\alpha} + \frac{\lambda\alpha}{2} \ip[\big]{1}{(y_\lambda-1)^+}_{-\alpha} + \frac{\lambda}{2} ((y_\lambda(0)-1)^+)^2\\ &\hspace{5em} \geq -\frac{\lambda\alpha}{2} \ip[\big]{y_\lambda}{(y_\lambda-1)^+}_{-\alpha}. \end{align*} Rearranging the terms yields \[ \norm[\big]{(y_\lambda-1)^+}^2_{-\alpha} + \frac{\lambda\alpha}{2} \ip[\big]{y_\lambda}{(y_\lambda-1)^+}_{-\alpha} \leq 0, \] where the second term on the left-hand side is positive by \eqref{eq:iole}. Therefore $\norm{(y_\lambda-1)^+}_{-\alpha}=0$, which implies that $y_\lambda \leq 1$ a.e. in $\mathbb{R}_+$. A completely similar argument, i.e. scalarly multiplying the resolvent equation by $y_\lambda^-$, also shows that $y_\lambda \geq 0$ a.e. in $\mathbb{R}_+$, thus completing the proof that $J_\lambda$ is sub-Markovian. We still need to show that $J_\lambda$ is contractive in $L^1_{-\alpha}$. Let $y,z \in H$ and $y_\lambda:=J_\lambda y$, $z_\lambda:=J_\lambda z$, so that \begin{equation} \label{eq:yzl} (y_\lambda-z_\lambda) - \lambda(y_\lambda - z_\lambda)' + \lambda\alpha(y_\lambda-z_\lambda) = y-z. 
\end{equation} Define the sequences of functions $(\gamma_k), (\hat{\gamma}_k) \subset \mathbb{R}^\mathbb{R}$ as \[ \gamma_k: r \mapsto \tanh(kr), \qquad \hat{\gamma}_k: r \mapsto \int_0^r \gamma_k(s)\,ds, \] and recall that, as $k \to \infty$, $\gamma_k$ converges pointwise to the sign function, and $\hat{\gamma}_k$ converges pointwise to the absolute value function. Scalarly multiplying \eqref{eq:yzl} with $\gamma_k(y_\lambda-z_\lambda)$ yields \begin{align*} &(1+\lambda\alpha) \ip[\big]{y_\lambda-z_\lambda}{\gamma_k(y_\lambda-z_\lambda)}_{-\alpha} - \lambda \ip[\big]{(y_\lambda-z_\lambda)'}% {\gamma_k(y_\lambda-z_\lambda)}_{-\alpha}\\ &\hspace{5em} = \ip[\big]{y-z}{\gamma_k(y_\lambda-z_\lambda)}_{-\alpha} \leq \norm[\big]{y-z}_{L^1_{-\alpha}}, \end{align*} where, integrating by parts, \begin{align*} \ip[\big]{(y_\lambda-z_\lambda)'}{\gamma_k(y_\lambda-z_\lambda)}_{-\alpha} &= \int_0^\infty \bigl( \gamma_k(y_\lambda-z_\lambda)(x) (y_\lambda-z_\lambda)'(x) \bigr) e^{-\alpha x}\,dx\\ &= \int_0^\infty \frac{d}{dx} \hat{\gamma}_k(y_\lambda-z_\lambda)(x) e^{-\alpha x}\,dx\\ &= - \hat{\gamma}_k(y_\lambda(0)-z_\lambda(0)) + \alpha \int_0^\infty \hat{\gamma}_k(y_\lambda-z_\lambda)(x) e^{-\alpha x}\,dx\\ &\leq \alpha \int_0^\infty \hat{\gamma}_k(y_\lambda-z_\lambda)(x) e^{-\alpha x}\,dx. \end{align*} This implies \begin{align*} &\ip[\big]{y_\lambda-z_\lambda}{\gamma_k(y_\lambda-z_\lambda)}_{-\alpha}\\ &\hspace{3em} + \lambda\alpha \ip[\big]{y_\lambda-z_\lambda}{\gamma_k(y_\lambda-z_\lambda)}_{-\alpha} - \lambda\alpha \int_0^\infty \hat{\gamma}_k(y_\lambda-z_\lambda)(x) e^{-\alpha x}\,dx\\ &\hspace{5em} \leq \norm[\big]{y-z}_{L^1_{-\alpha}}. 
\end{align*} Taking the limit as $k \to \infty$, the sum of the second and third terms on the left-hand side converges to zero by the dominated convergence theorem, while the first term on the left-hand side converges to $\norm{y_\lambda-z_\lambda}_{L^1_{-\alpha}}$, thus proving that \[ \norm[\big]{y_\lambda-z_\lambda}_{L^1_{-\alpha}} \leq \norm[\big]{y-z}_{L^1_{-\alpha}}, \] i.e. that the resolvent of $A+\alpha I$ is contractive in $L^1_{-\alpha}$. We have thus shown that assumption (A2) holds for $A+\alpha I$. Moreover, assumption (A3) is satisfied if, for example, \[ \abs{\sigma_k(\omega,t,x,r)} \ind{\{r \leq 0\}} \lesssim r^- \] for all $k \in \mathbb{N}$ and $(\omega,t,x) \in \Omega \times \mathbb{R}_+^2$ (see \cite{cm:pos1}, where also slightly more general sufficient conditions are provided). Since all integrability assumptions of Theorem~\ref{thm:pos} are satisfied, as follows by inspection of the proof of well-posedness in $H_\alpha$ (see~\cite{filipo,cm:pos1,cm:MF10}), we conclude that, under the above assumptions on $(\sigma_k)$, forward rates are positive at all times. \bibliographystyle{amsplain}
\section{Introduction} Relativistic field theories of gauge symmetry type can be formulated as constrained Hamiltonian systems. Using a \( (1 + 3) \)-decomposition of spacetime, the four dimensional field equations usually split into hyperbolic evolution equations and elliptic constraint equations for the Cauchy data. A remarkable observation is that the constraints often can be formulated in terms of the momentum map associated to the action of the symmetry group of the theory. This is usually verified by a fairly routine calculation in each example separately (see, \eg, \parencite{ArmsMarsdenEtAl1981,Arms1981} for pure Yang--Mills theory and \parencite{FischerMarsden1979,ArmsFischerMarsden1975} for general relativity). Going beyond a case-by-case study, the general philosophy that the subset of the phase space cut out by the constraints can be identified with the zero set of a momentum map seems to be true in a large number of models. In this paper, we will argue that the relationship between constraints and momentum maps is not a lucky coincidence but is rooted in a special feature of the four dimensional physical action. This special feature is the main subject of the present paper. Inspired by the Clebsch optimal control problem \parencite{Gay-BalmazRatiu2011}, we study a variational principle associated to a class of degenerate Lagrangians, whose degeneracy results from the action of the symmetry group. The defining feature of this principle is that the Lagrange multipliers are elements of the Lie algebra of the symmetry group and that they couple to the configuration variables via a given Lie algebra action. The resulting equations of motion, which we call the Clebsch--Euler--Lagrange equations, decompose into an evolution and a constraint equation. Next, we define a Clebsch--Legendre transformation similar to the ordinary Legendre transformation leading to what we call the Clebsch--Hamiltonian picture. 
As we will show, the constraints that arise from the degeneracy of the Lagrangian are phrased as momentum map constraints on the Hamiltonian side. We then use the geometric formulation of the Dirac--Bergmann algorithm in the formulation of \textcite{GotayNesterHinds1978} to derive the Clebsch--Legendre transformation from an ordinary Legendre transformation of an extended Lagrangian system. In this process, we isolate two constraint equations, which are later given a geometric interpretation in terms of a symplectic reduction by stages procedure. This analysis of the constraints extends the well-known results for Yang--Mills theory \parencite{BergveltDeKerf1986,BergveltDeKerf1986a} and general relativity \parencite{Giulini2015}. Since constraints which allow a reformulation in terms of symmetric Hamiltonian systems are abundant in field theories, this observation suggests that the Clebsch--Lagrange principle is fundamental for Cauchy problems. The two examples that will be discussed are the Yang--Mills--Higgs equations and the Einstein equation. In both cases, we will complete the following program: \begin{enumerate} \item Using a \( (1+3)\)-splitting of spacetime, formulate the field equations as a Cauchy problem and determine the constraints on the initial data. In particular, find a symplectic manifold on which the dynamics takes place. \item Identify the Lagrange multipliers as elements of the Lie algebra of the symmetry group and determine their action on the phase space. \item Calculate the Lagrangian in the \( (1+3) \)-splitting and show that it is of the Clebsch--Lagrange form. \item Pass to the Clebsch--Hamiltonian picture using the Clebsch--Legendre transformation. In particular, phrase the constraints in terms of the momentum map. \item Relate the Clebsch--Hamiltonian picture to the Hamiltonian system with constraints obtained by the ordinary Legendre transformation. \item Discuss the symmetry reduction as a symplectic reduction by stages. 
\end{enumerate} After accomplishing this program, we are left with a singular symplectic cotangent bundle reduction in infinite dimensions, which is studied in detail in a separate paper \parencite{DiezRudolphReduction}. \paragraph*{Acknowledgments} We thank the anonymous reviewer for his/her comments and suggestions on an earlier draft, which significantly contributed to improving the quality of the paper. We gratefully acknowledge support of the Max Planck Institute for Mathematics in the Sciences in Leipzig and of the University of Leipzig. \section{Clebsch--Lagrange variational principle} \label{sec:clebschLagrange} In \parencite{Gay-BalmazRatiu2011}, \citeauthor{Gay-BalmazRatiu2011} study a class of optimal control problems associated to group actions. The special feature of the \emphDef{Clebsch optimal control problem} is that the control variables are Lie algebra valued and couple to the state variables via the symmetry group action. Let \( G \) be a Lie group that acts smoothly on the smooth manifold \( Q \) and let \( \LieA{g} \) be its Lie algebra. Given a smooth cost function \( l: Q \times \LieA{g} \to \R \), the Clebsch optimal control problem consists in finding curves \( t \mapsto q(t) \) and \( t \mapsto \xi(t) \) such that \begin{equation} s[q, \xi] = \int_0^T l\bigl(q(t), \xi(t)\bigr) \dif t \end{equation} is minimized subject to the optimal control constraint \( \dot q(t) = - \xi(t) \ldot q(t) \) and the endpoint constraints \( q(0) = q_i \) and \( q(T) = q_f \). Here, \( \xi \ldot q \) is the fundamental vector field at \( q \in Q \) generated by the action of \( \xi \in \LieA{g} \). The Pontryagin maximum principle shows that the control variable \( \xi \) satisfies the (generalized) Euler-Poincaré equation, see \parencite[Theorem~7.1]{Gay-BalmazRatiu2011}. This observation allows one to give an optimal control formulation for many systems including the heavy top and the compressible or magnetohydrodynamic fluid flow. 
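To fix ideas, consider the following classical finite-dimensional example (our normalizations; compare the rigid body discussion in \parencite{Gay-BalmazRatiu2011}). Let \( Q = \R^3 \) with \( G = \mathrm{SO}(3) \) acting by rotations, so that \( \xi \ldot q = \xi \times q \) under the identification \( \LieA{g} \isomorph \R^3 \), and let \( l(q, \xi) = \frac{1}{2}\, \mathbb{I}\xi \cdot \xi \) with inertia tensor \( \mathbb{I} \). The optimal control constraint reads \( \dot q = - \xi \times q \), and the Euler--Poincaré equation reduces, up to sign conventions, to the Euler equations of the free rigid body, \begin{equation} \mathbb{I} \dot\xi = \mathbb{I}\xi \times \xi. \end{equation}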
\subsection{Clebsch--Lagrangian picture} We now describe a novel variational principle which draws its inspiration from the Clebsch optimal control problem and which will turn out to be useful for relativistic field theories with constraints. The idea is to dismiss the optimal control constraint and to treat, instead, the quantity \( \dot q(t) + \xi(t) \ldot q(t) \) as an effective velocity in the Lagrangian formulation. As above, let \( Q \) be a smooth manifold and let \( \TBundle Q \) be its tangent bundle. Assume that a Lie group \( G \) acts smoothly on \( Q \). The configuration space \( Q \) as well as the Lie group \( G \) are assumed to be (infinite-dimensional) Fréchet manifolds. We refer to \parencite{Neeb2006,Hamilton1982} for the differential calculus in Fréchet spaces. In order to stay close to physics notation, we will usually denote points in \( \TBundle Q \) by pairs \( (q, \dot{q}) \), where \( q \in Q \) stands for the base point of the vector \( \dot{q} \in \TBundle_q Q \). Sometimes, we will also write a point in \( \TBundle Q \) as a pair \( (q, v) \) with \( q \in Q \) and \( v \in \TBundle_q Q \). Moreover, we use the dot notation \( g \cdot q \) for the action of \( g \in G \) on \( q \in Q \). For the derivative of the action a similar notation using lower dots is employed, \ie, the fundamental vector field \( \xi_* \) generated by \( \xi \in \LieA{g} \) is written as \( \xi_* (q) = \xi \ldot q \in \TBundle_q Q \) and the lifted \( G \)-action on \( \TBundle Q \) has the form \( g \ldot X_q \in \TBundle_{g \cdot q} Q \) for \( X_q \in \TBundle_q Q \). 
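For orientation, in the simplest case of a linear action (a standard special case, recorded here only to illustrate the notation) all three operations are given by matrix multiplication: if \( Q \) is a vector space and \( G \) acts by linear maps, then \begin{equation} g \cdot q = g q, \qquad \xi \ldot q = \xi q, \qquad g \ldot X_q = g X_q, \end{equation} the tangent spaces \( \TBundle_q Q \) being canonically identified with \( Q \) itself.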
Given a smooth function \( L: \TBundle Q \times \LieA{g} \to \R \), we consider the variational principle \( \diF S = 0 \) for curves \( t \mapsto q(t) \in Q \) and \( t \mapsto \xi(t) \in \LieA{g} \), where the physical action \( S \) is of the form \begin{equation}\label{eq:clebschLagrange:action} S[q, \xi] = \int_0^T L\bigl(q(t), \dot q(t) + \xi(t) \ldot q(t), \xi(t) \bigr) \dif t \end{equation} and where the variations of \(t \mapsto q(t) \) are restricted to vanish at the endpoints. Here, \(t \mapsto (q (t), \dot q (t) + \xi(t) \ldot q (t)) \) is viewed as a curve in \( \TBundle Q \). We will refer to this variational principle as the \emphDef{Clebsch--Lagrange principle} and to \( L \) as the \emphDef{Clebsch--Lagrangian}. In order to derive the associated equations of motion, we first recall some notions and results from geometric mechanics (see, \eg, \parencite{AbrahamMarsdenEtAl1980, RudolphSchmidt2012}). In particular, we need the theory of linear connections in vector bundles. A linear connection in a vector bundle \( \pi: E \to M \) is a vector bundle homomorphism \( K: \TBundle E \to E \) over \( \pi: E \to M \) and over \( \TBundle M \to M \) such that the following diagram commutes: \begin{equationcd} \VBundle E \to[r, "K"] & E \to[d, "\id_E"] \\ E \times_M E \to[r, "\pr_2"] \to[u, "\mathrm{vl}"] & E, \end{equationcd} where the vertical tangent bundle \( \VBundle E = \ker \tangent \pi \) is identified with \( E \times_M E \) via the linear structure on \( E \), that is, \begin{equation} \mathrm{vl}: E \times_M E \to \VBundle E, \quad (e, v) \mapsto \difFracAt{}{\varepsilon}{0} (e + \varepsilon v) \in \VBundle_e E. \end{equation} The kernel \( \HBundle E \equiv \ker K \) of \( K \) is then a vector subbundle of \( \TBundle E \) such that the decomposition \( \TBundle E = \HBundle E \oplus \VBundle E \) is a fiberwise topological isomorphism. 
Accordingly, the connection \( K \) yields a bundle isomorphism \begin{equation} \label{eq:connection:decomposition} \TBundle E \to E \times_M \TBundle M \times_M E, \quad (Z_e) \mapsto \bigl(e, \tangent_e \pi(Z_e), K(Z_e)\bigr). \end{equation} The inverse is given by \begin{equation} \label{eq:connection:decompositionInverse} E \times_M \TBundle M \times_M E \to \TBundle E, \quad (e, X, v) \mapsto \mathrm{vl}(e, v) + X^K_e, \end{equation} where \( X^K_e \) denotes the \( K \)-horizontal lift of \( X \) to \( \TBundle_e E \). Given a smooth section \( \phi \) of \( E \), its covariant derivative \( \nabla \phi \) is defined by \begin{equation} \label{eq:connection:covDeriv} \nabla_X \phi = K \bigl(\tangent \phi (X)\bigr) \end{equation} for every \( X \in \VectorFieldSpace(M) \). Using the Leibniz rule, \( \nabla \) is extended to the exterior covariant derivative \( \dif_K: \DiffFormSpace^k(M, E) \to \DiffFormSpace^{k+1}(M, E) \) of \( E \)-valued differential forms. A soldering form on \( E \) is a vector bundle morphism \( \vartheta: \TBundle M \to E \) (usually, \( \vartheta \) is required to be an isomorphism of vector bundles, but we do not need this assumption). The tangent bundle \( E = \TBundle M \) comes equipped with a tautological soldering form given by the identity \( \vartheta: \TBundle M \to \TBundle M \). Given a soldering form \( \vartheta \in \DiffFormSpace^1(M, E) \), the torsion of a connection \( K \) on \( E \) relative to \( \vartheta \) is defined to be \( T = \dif_K \vartheta \in \DiffFormSpace^2(M, E) \). In particular, we have \begin{equation} T (X, Y) = \dif_K \vartheta (X, Y) = \nabla_X \bigl(\vartheta (Y)\bigr) - \nabla_Y \bigl(\vartheta (X)\bigr) - \vartheta (\commutator{X}{Y}), \end{equation} which, for the tangent bundle \( E = \TBundle M \) with the tautological soldering form, reduces to the usual defining relation for the torsion. 
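In local terms (a standard coordinate description, included here for the reader's convenience; the notation \( \Gamma \) is ours), trivialize \( E|_U \isomorph U \times \R^k \) over a chart domain \( U \subseteq \R^n \), so that tangent vectors to \( E \) are tuples \( (x, e; \delta x, \delta e) \). A linear connection then has the form \begin{equation} K(x, e; \delta x, \delta e) = \delta e + \Gamma(x)(\delta x, e), \end{equation} with \( \Gamma(x) \) bilinear, the horizontal subspace being cut out by \( \delta e = - \Gamma(x)(\delta x, e) \). Accordingly, \( \nabla_X \phi = \phi'(X) + \Gamma(X, \phi) \) for a section \( \phi \), and for \( E = \TBundle M \) the torsion relative to the tautological soldering form is \( T(X, Y) = \Gamma(X, Y) - \Gamma(Y, X) \), so that torsion-freeness amounts to the symmetry of \( \Gamma \).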
Let \( f: N \to M \) be a smooth map, and let \( E \to M \) be a vector bundle endowed with a connection \( K \). The pull-back bundle \( f^* E \) is a smooth vector bundle over \( N \). The connection on \( E \) yields via pull-back a connection in \( f^* E \) and thus induces a covariant derivative \begin{equation} \nabla^f_Y: \sSectionSpace(f^* E) \to \sSectionSpace(f^* E), \quad \psi \mapsto K\bigl(\tangent \psi (Y)\bigr) \end{equation} for every \( Y \in \VectorFieldSpace(N) \). In particular, for a curve \( \gamma: \R \supseteq I \to M \), we write \begin{equation} \label{eq:connection:alongCurve} \DifFrac{}{t} \equiv \nabla^\gamma_{\difp_t}: \sSectionSpace(\gamma^* E) \to \sSectionSpace(\gamma^* E) \end{equation} for the covariant derivative along \( \gamma \) in the direction of the canonical vector field \( \difp_t \) on \( I \). Returning to our original setting, let \( Q \) be a smooth manifold and let \( L: \TBundle Q \to \R \) be a smooth Lagrangian. The \emphDef{fiber or vertical derivative} \( \difFibre L: \TBundle Q \to \TBundle' Q \) of \( L \) is defined by \begin{equation} \label{FibDer} \dualPair{\difFibre L (v)}{w} = \difFracAt{}{\varepsilon}{0} L(v + \varepsilon w)\, , \end{equation} where \( v, w \in \TBundle_q Q \), and \( \TBundle' Q \) denotes the fiberwise dual of the tangent bundle. In order to stay close to the usual physics notation, the fiber derivative \( \difFibre L: \TBundle Q \to \TBundle' Q \) will be written as \( \difpFrac{L}{\dot q} \). We stress that the latter symbol will always stand for the mapping given by~\eqref{FibDer}, \begin{equation} \dualPair*{\difpFrac{L}{\dot{q}}(q, v)}{w} = \difFracAt{}{\varepsilon}{0} L(q, v + \varepsilon w) \,, \qquad w \in \TBundle_q Q. \end{equation} In order to define the partial derivative \( \difpFrac{L}{q} \) in an intrinsic way, we need a linear connection in \( \TBundle Q \). 
By~\eqref{eq:connection:decomposition}, every connection \( K: \TBundle (\TBundle Q) \to \TBundle Q \) in the bundle \( \TBundle Q \to Q \) yields a vector bundle isomorphism \begin{equation} \label{eq:clebschLagrange:decompositionTTwo} \TBundle (\TBundle Q) \isomorph \TBundle Q \times_Q \TBundle Q \times_Q \TBundle Q. \end{equation} Accordingly, we will write elements of \( \TBundle (\TBundle Q) \) as tuples \( (q, \dot{q}, \diF q, \diF \dot{q}) \), where \( \dot{q}, \diF q \) and \( \diF \dot{q} \) are elements of \( \TBundle_q Q \). We emphasize that the isomorphism~\eqref{eq:clebschLagrange:decompositionTTwo} and, in particular, the component \( \diF \dot{q} \) depend on the connection \( K \). Now, for \( L: \TBundle Q \to \R \), the derivative of \( L \) at \( (q, \dot{q}) \in \TBundle Q \) with respect to the second component in the decomposition~\eqref{eq:clebschLagrange:decompositionTTwo} will be written as \( \difpFrac{L}{q}(q, \dot{q}): \TBundle Q \to \R \). In contrast to the fiber derivative \( \difpFrac{L}{\dot{q}} \), the partial derivative \( \difpFrac{L}{q} \) depends on the choice of a connection in \( \TBundle Q \). In summary, we express \( \tangent L: \TBundle (\TBundle Q) \to \R \) as \begin{equation} \label{eq:clebschLagrange:tangentLagrange} \tangent L(q, \dot{q}, \diF q, \diF \dot{q}) = \dualPair*{\difpFrac{L}{q}(q, \dot{q})}{\diF q} + \dualPair*{\difpFrac{L}{\dot{q}}(q, \dot{q})}{\diF \dot{q}}. \end{equation} This relation is the intrinsic geometric version of the corresponding coordinate expression found in the physics literature. For \( L: \TBundle Q \times \LieA{g} \to \R \), the derivative of \( L \) at \( (q, \dot q, \xi) \in \TBundle Q \times \LieA{g} \) in the \( \LieA{g} \)-direction yields an element of the topological dual \( \LieA{g}' \) and will be denoted by \( \difpFrac{L}{\xi}(q, \dot q, \xi) \). 
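As an illustration of these derivatives (our example, with standard conventions), let \( Q \) carry a Riemannian metric \( g \), let \( K \) be the associated Levi-Civita connection, and consider the mechanical Lagrangian \( L(q, \dot q) = \frac{1}{2} g_q(\dot q, \dot q) - V(q) \). Since \( g \) is parallel, the kinetic term is constant along \( K \)-horizontal curves, whence \begin{equation} \difpFrac{L}{\dot{q}}(q, \dot q) = g_q(\dot q, \cdot), \qquad \difpFrac{L}{q}(q, \dot q) = - \dif V(q), \end{equation} and the Euler--Lagrange equation \( \DifFrac{}{t} \difpFrac{L}{\dot{q}} = \difpFrac{L}{q} \) takes the familiar form \( \nabla_{\dot q} \dot q = - \operatorname{grad} V \).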
In infinite dimensions, one needs to be careful with the notion of a cotangent bundle, because the canonical candidate --- the fiberwise topological dual \( \TBundle' Q \) of the tangent bundle --- fails to be a \emph{smooth} bundle. Following \parencite{DiezRudolphReduction}, we define the cotangent bundle \( \CotBundle Q \) to be some smooth vector bundle\footnote{ For example, we may take \( \CotBundle Q \) to be \( \TBundle Q \) and the pairing given by a Riemannian structure on \( Q \). In applications, the fiber of \( \TBundle Q \) is often a space of mappings so that a convenient choice of the cotangent bundle consists of regular distributions inside the space of all distributions. } over \( Q \) which is fiberwise in duality with the tangent bundle relative to some chosen pairing \( \dualPairDot: \CotBundle Q \times \TBundle Q \to \R \). In line with our notation for points of \( \TBundle Q \), we will denote points in the cotangent bundle \( \CotBundle Q \) by pairs \( (q, p) \) with \( q \in Q \) and \( p \in \CotBundle_{q} Q \). The dual pairing yields an embedding of the cotangent bundle \( \CotBundle Q \) into the topological dual \( \TBundle' Q \) of the tangent bundle. In the following, we always assume that all occurring dual objects can be represented by points of the cotangent bundle \( \CotBundle Q \). In particular, the derivatives \( \difpFrac{L}{q} \) and \( \difpFrac{L}{\dot{q}} \) are viewed as maps \( \TBundle Q \to \CotBundle Q \) (and not merely to \( \TBundle' Q \)). Regularity of \( L \) then means that \( \difpFrac{L}{\dot{q}} \) is a diffeomorphism between \( \TBundle Q \) and \( \CotBundle Q \). Similarly as for the cotangent bundle, we choose a non-degenerate pairing \( \kappa: \LieA{g}^* \times \LieA{g} \to \R \). This gives an embedding of \( \LieA{g}^* \) into the topological dual \( \LieA{g}' \). 
We silently assume that all occurring dual objects that a priori are only in \( \LieA{g}' \) may actually be represented by elements of \( \LieA{g}^* \). This remark applies in particular to the partial derivative \( \difpFrac{L}{\xi} \). In order to formulate the variational principle, we also need the following technical tool. A \emphDef{local addition} on \( Q \) is a smooth map \( \eta: \TBundle Q \supseteq U \to Q \) defined on an open neighborhood \( U \) of the zero section in \( \TBundle Q \) such that the composition of \( \eta \) with the zero section is the identity on \( Q \), \ie, \( \eta(q, 0) = q \), and the map \( \pr_Q \times \eta: \TBundle Q \supseteq U \to Q \times Q \) is a diffeomorphism onto an open neighborhood of the diagonal, \cf \parencite[Section~42.4]{KrieglMichor1997}. A local addition may be constructed using the exponential map of a Riemannian metric on \( Q \) but it also exists if \( Q \) is an affine space, a Lie group or a space of sections of a fiber bundle. With these preliminaries out of the way, we are now able to state the equations of motion corresponding to the Clebsch--Lagrange variational principle. \begin{thm}\label{prop:clebschLagrange:clebschEulerLagrangeDirect} Let \( Q \) be a smooth Fréchet \( G \)-manifold. Assume that \( Q \) can be endowed with a local addition\footnote{More generally, one could give up the assumption of the existence of a local addition and instead work in the diffeological category \parencite{Iglesias-Zemmour2013}. }. Moreover, assume that \( \TBundle Q \) and \( \CotBundle Q \) are endowed with dual torsion-free linear connections. Then, for a Clebsch--Lagrangian \( L: \TBundle Q \times \LieA{g} \to \R \) the following are equivalent: \begin{thmenumerate} \item The curves \( t \mapsto q(t) \in Q \) and \( t \mapsto \xi(t) \in \LieA{g} \) are solutions of the variational Clebsch--Lagrange problem~\eqref{eq:clebschLagrange:action}. 
\item The following equations hold: \begin{subequations}\label{eq:clebschLagrange:clebschEulerLagrangeDirect} \begin{align+} \DifFrac{}{t} \left( \difpFrac{L}{\dot{q}} \right) - \dualPair*{\difpFrac{L}{\dot{q}}}{\nabla \xi_*} - \difpFrac{L}{q} &= 0, \label{eq:clebschLagrange:clebschEulerLagrangeDirect:evol} \\ \dualPair*{\difpFrac{L}{\dot{q}}}{\zeta \ldot q} + \kappa\left(\difpFrac{L}{\xi}, \zeta\right) &= 0, \label{eq:clebschLagrange:clebschEulerLagrangeDirect:constraint} \end{align+} \end{subequations} for all \( \zeta \in \LieA{g} \), where evaluation of all derivatives at \( (q, \dot q + \xi \ldot q, \xi) \in \TBundle Q \times \LieA{g} \) is understood. Furthermore, the second term in~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect:evol} denotes the functional \( T_q Q \ni X \mapsto \dualPair[\big]{\difpFrac{L}{\dot{q}}}{\nabla_X \xi_*} \in \R \) viewed as an element of \( \CotBundle Q \). \qedhere \end{thmenumerate} \end{thm} The equations~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect} will be referred to as the \emphDef{Clebsch--Euler--Lagrange equations}. \begin{proof} Let \( I = [0, T] \) with \( T > 0 \). Since \( Q \) admits a local addition, the space \( \sFunctionSpace_{q_i q_f}(I, Q) \) of paths in \( Q \) with given endpoints \( q_i \) and \( q_f \) is a smooth infinite-dimensional manifold, \cf \parencite[Theorem~7.6]{Wockel2014}. Similarly, as \( \LieA{g} \) is an affine space, the space \( \sFunctionSpace(I, \LieA{g}) \) of paths in \( \LieA{g} \) is a smooth manifold, too. Since the Lagrangian is smooth, the action \( S \) is a smooth function on \( \sFunctionSpace_{q_i q_f}(I, Q) \times \sFunctionSpace(I, \LieA{g}) \). The variational principle amounts to looking for extrema of \( S \). Let \( \bigl(q_\varepsilon(t), \dot{q}_\varepsilon(t)\bigr) \) be a smooth perturbation of the curve \( \bigl(q(t), \dot{q}(t)\bigr) \) in \( \TBundle Q \). 
In line with usual notation, we write \begin{equation} \label{eq:clebschLagrange:clebschEulerLagrangeDirect:variation} \difFracAt{}{\varepsilon}{0}\bigl(q_\varepsilon(t), \dot{q}_\varepsilon(t)\bigr) \equiv \bigl(\diF q(t), \diF \dot{q}(t)\bigr) \in \TBundle_{q(t)} Q \times \TBundle_{q(t)} Q \end{equation} for the corresponding curve in \( \TBundle(\TBundle Q) \) under the isomorphism~\eqref{eq:clebschLagrange:decompositionTTwo}. The curve \( t \mapsto \bigl(\diF q(t), \diF \dot{q}(t)\bigr) \) should be considered as a tangent vector field along the curve \( t \mapsto q(t) \), which may be viewed as a tangent vector to \( \sFunctionSpace_{q_i q_f}(I, Q) \) at that curve. Let \( \phi: I \times \R \to Q \) be defined by \( \phi(t, \varepsilon) = q_\varepsilon(t) \), and let \( \vartheta \) be the tautological soldering form on \( \TBundle Q \). Note that \( (\phi^* \vartheta)_{t, \varepsilon} (\difp_t) = \dot{q}_\varepsilon (t) \) and \( (\phi^* \vartheta)_{t, \varepsilon} (\difp_\varepsilon) = \diF q_\varepsilon (t) \). Since the connection \( K \) on \( \TBundle Q \) is torsion-free with respect to \( \vartheta \), using~\eqref{eq:connection:alongCurve} and~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect:variation}, we obtain \begin{equation}\begin{split} 0 &= \phi^* T (\difp_t, \difp_\varepsilon) \\ &= \dif_{\phi^* K} (\phi^* \vartheta) (\difp_t, \difp_\varepsilon) \\ &= \nabla^\phi_{\difp_t} \bigl(\phi^* \vartheta (\difp_\varepsilon)\bigr) - \nabla^\phi_{\difp_\varepsilon} \bigl(\phi^* \vartheta (\difp_t)\bigr) - \phi^* \vartheta \bigl(\commutator{\difp_t}{\difp_\varepsilon}\bigr) \\ &= \DifFrac{}{t} (\diF q) - \diF \dot{q}(t). \end{split}\end{equation} That is, \( \diF \dot{q}(t) = \DifFrac{}{t} (\diF q) \). 
Moreover, by~\eqref{eq:connection:covDeriv}, we have \begin{equation} K \left(\difFracAt{}{\varepsilon}{0} \xi \ldot q_\varepsilon\right) = K \bigl(\tangent \xi_* (\diF q)\bigr) = \nabla_{\diF q} \xi_*, \end{equation} where \( \xi_* \) is viewed as a section \( Q \to \TBundle Q \) and where we suppressed the explicit time dependence. In passing, we note that \( \tangent \xi_* (\diF q) \) is the fundamental vector field at \( \diF q \) on \( \TBundle Q \) generated by \( \xi \). Thus, under the isomorphism~\eqref{eq:clebschLagrange:decompositionTTwo}, we obtain \begin{equation}\begin{split} \difFracAt{}{\varepsilon}{0}\bigl(q_\varepsilon(t), \dot{q}_\varepsilon(t) + \xi(t) \ldot q_\varepsilon(t)\bigr) &= \Bigl(\diF q(t), \diF \dot{q}(t) + \nabla_{\diF q (t)} \xi(t)_* \Bigr) \\ &= \left(\diF q(t), \DifFrac{}{t} (\diF q) + \nabla_{\diF q (t)} \xi(t)_* \right). \end{split}\end{equation} Thus, using~\eqref{eq:clebschLagrange:tangentLagrange}, variation of the path in \( Q \) yields \begin{align} &\int_0^T \left( \dualPair*{\difpFrac{L}{q}}{\diF q} + \dualPair*{\difpFrac{L}{\dot{q}}}{\DifFrac{}{t} (\diF q)} + \dualPair*{\difpFrac{L}{\dot{q}}}{\nabla_{\diF q} \xi_*} \right) \dif t = 0, \label{eq:clebschLagrange:proof:geom} \intertext{which after integration by parts on the second term is equivalent to} & \int_0^T \left(\dualPair*{\difpFrac{L}{q}}{\diF q} - \dualPair*{\DifFrac{}{t} \left(\difpFrac{L}{\dot{q}}\right)}{\diF q} + \dualPair*{\difpFrac{L}{\dot{q}}}{\nabla_{\diF q} \xi_*} \right) \dif t = 0. \end{align} Since the variations \( \diF q \) are arbitrary, we get the evolution equation~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect:evol}. Similarly, let \( t \mapsto \diF \xi(t) \in \LieA{g} \) be a tangent vector to \( t \mapsto \xi(t) \) in \( \sFunctionSpace(I, \LieA{g}) \). 
Variation of \( \xi \) yields \begin{align} \dualPair*{\difpFrac{L}{\dot{q}}}{\diF \xi \ldot q} + \kappa\left(\difpFrac{L}{\xi}, \diF \xi\right) = 0, \end{align} which clearly gives~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect:constraint}. \end{proof} \begin{remark}[Clebsch optimal control problem] \label{rem:clebschLagrange:optimalControl} Suppose the Clebsch--Lagrange problem is supplemented by the optimal control constraint \( \dot q = - \xi \ldot q \). Then,~\eqref{eq:clebschLagrange:action} reduces to the variational principle \( \diF s = 0 \) for the action \begin{equation} s[q, \xi] = \int_0^T l(q, \xi) \dif t, \end{equation} where the effective Lagrangian is given by \( l(q, \xi) = L(q, 0, \xi) \). We hence recover the Clebsch optimal control problem discussed at the beginning of the section. For the applications we have in mind, the optimal control constraint is too strong a condition. In the context of Yang--Mills theory, it amounts to requiring that the color-electromagnetic field vanishes and, for Einstein's equation, it is equivalent to a zero extrinsic curvature. \end{remark} \subsection{Clebsch--Hamilton picture} \label{MomMapCon} In this section, we discuss the Hamiltonian counterpart of the Clebsch--Euler--Lagrange equations. Let \( E: \TBundle Q \times \LieA{g} \to \R \), \begin{equation} E(q, \dot{q}, \xi) \defeq \dualPair*{\difpFrac{L}{\dot{q}}(q, \dot{q} + \xi \ldot q, \xi)}{\dot{q} + \xi \ldot q} - L(q, \dot{q} + \xi \ldot q, \xi), \end{equation} be the energy function associated to the Clebsch--Lagrangian \( L \). The \emphDef{Clebsch--Legendre transformation} is defined by \begin{equation} \label{eq:clebschLagrange:clebschLegendre} \mathrm{CL}: \TBundle Q \times \LieA{g} \to \CotBundle Q \times \LieA{g}, \quad (q, \dot{q}, \xi) \mapsto \left(q, \difpFrac{L}{\dot{q}}(q, \dot{q} + \xi \ldot q, \xi), \xi \right). \end{equation} We say that \( L \) is regular if \( \mathrm{CL} \) is a diffeomorphism. 
For a regular Clebsch--Lagrangian \( L \), the associated \emphDef{Clebsch--Hamiltonian} \( H: \CotBundle Q \times \LieA{g} \to \R \) is defined by \( H = E \circ \mathrm{CL}^{-1} \). \begin{remark} Clearly \( \mathrm{CL} \) is a diffeomorphism if and only if the fiber derivative \begin{equation} \TBundle Q \to \CotBundle Q, \quad (q, v) \mapsto \left(q, \difpFrac{L}{\dot{q}}(q, v, \xi)\right) \end{equation} is a diffeomorphism for every \( \xi \in \LieA{g} \). Moreover, the Clebsch--Hamiltonian \( H \) coincides with the Hamiltonian corresponding to the Clebsch--Lagrangian \( L \) via the ordinary Legendre transformation with \( \xi \) viewed as a parameter, that is, \begin{equation}\label{eq:hamiltonian} H \left(q, p , \xi \right) = \dualPair*{p}{v} - L(q, v, \xi)\, , \end{equation} where the relation \( p = \difpFrac{L}{\dot{q}}(q, v, \xi) \) defines \( v \) as a function of \( q, p \) and \( \xi \). \end{remark} Let \( K \) be a torsion-free connection in \( \TBundle Q \) and let \( \bar{K} \) be the dual connection in \( \CotBundle Q \). By~\eqref{eq:connection:decomposition}, \( \bar{K} \) induces a decomposition \begin{equation} \label{eq:clebschLagrange:hamilton:decompCotBundle} \TBundle (\CotBundle Q) \isomorph \CotBundle Q \times_Q \TBundle Q \times_Q \CotBundle Q \end{equation} and we will write points in \( \TBundle (\CotBundle Q) \) as tuples \( (q, p, \delta q, \delta p) \) with \( p, \delta p \in \CotBundle_q Q \) and \( \delta q \in \TBundle_q Q \). Note that the component \( \delta p \) depends on the choice of the connection \( \bar{K} \). 
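Before turning to the general statement, consider a simple example (ours) showing how momentum map constraints arise. Let \( g \) be a \( G \)-invariant Riemannian metric on \( Q \) and let \( L(q, v, \xi) = \frac{1}{2} g_q(v, v) \) be independent of \( \xi \). Then \( p = \difpFrac{L}{\dot{q}} = g_q(\dot q + \xi \ldot q, \cdot) \), the Clebsch--Legendre transformation~\eqref{eq:clebschLagrange:clebschLegendre} is regular with Clebsch--Hamiltonian \( H(q, p, \xi) = \frac{1}{2} g_q^{-1}(p, p) \), and, since \( \difpFrac{L}{\xi} = 0 \), the constraint~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect:constraint} becomes \begin{equation} \dualPair*{p}{\zeta \ldot q} = 0 \qquad \text{for all } \zeta \in \LieA{g}, \end{equation} \ie, the vanishing of the canonical momentum map \( J: \CotBundle Q \to \LieA{g}^* \) of the lifted \( G \)-action, defined by \( \kappa\bigl(J(q, p), \zeta\bigr) = \dualPair{p}{\zeta \ldot q} \).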
Given a smooth function \( H: \CotBundle Q \times \LieA{g} \to \R \) (not necessarily obtained via a Clebsch--Legendre transformation), we decompose the derivative \( \tangent H: \TBundle(\CotBundle Q) \to \R \) relative to~\eqref{eq:clebschLagrange:hamilton:decompCotBundle} as follows: \begin{equation} \label{eq:clebschLagrange:hamilton:decompTHamilton} \tangent H (q, p, \delta q, \delta p) = \dualPair*{\difpFrac{H}{q}(q, p)}{\delta q} + \dualPair*{\delta p}{\difpFrac{H}{p}(q, p)}, \end{equation} where we assume that the partial derivatives are maps \( \difpFrac{H}{q}: \CotBundle Q \to \CotBundle Q \) and \( \difpFrac{H}{p}: \CotBundle Q \to \TBundle Q \). We emphasize that, according to~\eqref{eq:connection:decompositionInverse}, \( \difpFrac{H}{p} \) is the intrinsically defined fiber derivative of \( H \) while \( \difpFrac{H}{q} \) depends on the choice of the connection \( \bar{K} \). \begin{prop}[Equations under Clebsch--Legendre transformation] \label{prop:clebschLagrange:legendreDirect} For a regular Clebsch--Lagrangian \( L: \TBundle Q \times \LieA{g} \to \R \) with associated Clebsch--Hamiltonian \( H: \CotBundle Q \times \LieA{g} \to \R \), the curves \( t \mapsto q(t) \in Q \) and \( t \mapsto \xi(t) \in \LieA{g} \) are solutions of the Clebsch--Lagrange variational problem~\eqref{eq:clebschLagrange:action} if and only if the curve \begin{equation} t \mapsto (q(t), p(t), \xi(t)) = \mathrm{CL}\bigl(q(t), \dot{q}(t), \xi(t)\bigr) \end{equation} satisfies the following equations \begin{subequations}\label{eq:clebschLagrange:hamiltonianDirect}\begin{gather+} \label{eq:clebschLagrange:hamiltonianDirect:hamiltonian} \difpFrac{H}{q}(q, p, \xi) = - \DifFrac{}{t} p - \bar{K}(\xi \ldot p), \qquad \difpFrac{H}{p}(q, p, \xi) = \dot q + \xi \ldot q, \\ \label{eq:clebschLagrange:hamiltonianDirect:constraint} \dualPair*{p}{\zeta \ldot q} = \kappa\left(\difpFrac{H}{\xi}(q, p, \xi), \zeta\right), \end{gather+}\end{subequations} for all \( \zeta \in \LieA{g} \).
\end{prop} \begin{proof} The proof is by direct inspection. Let the curve \( t \mapsto p(t) \) be defined by \( p(t) \defeq \difpFrac{L}{\dot{q}} \bigl(q, \dot q + \xi \ldot q, \xi \bigr)(t) \). Using the Clebsch--Euler--Lagrange equation~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect:evol}, we have \begin{equation}\label{eq:clebschLagrange:hamiltonian:proof:pTime} \DifFrac{}{t} p = \dualPair{p}{\nabla \xi_*} + \difpFrac{L}{q}(q, \dot q + \xi \ldot q, \xi). \end{equation} On the other hand, using the identification~\eqref{eq:hamiltonian} of \( H \) as the Legendre transform of \( L \) with \( v = \dot q + \xi \ldot q \), we get \begin{equation} \difpFrac{H}{q}(q, p, \xi) = \dualPair*{p}{\difpFrac{v}{q}} - \dualPair*{\difpFrac{L}{\dot q}}{\difpFrac{v}{q}} - \difpFrac{L}{q}(q, v, \xi) = - \difpFrac{L}{q}(q, \dot q + \xi \ldot q, \xi). \end{equation} Comparing with~\eqref{eq:clebschLagrange:hamiltonian:proof:pTime}, we see that~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect:evol} is equivalent to \begin{equation} \difpFrac{H}{q}(q, p, \xi) = - \DifFrac{}{t} p + \dualPair{p}{\nabla \xi_*}. \end{equation} Let us rewrite the last term on the right-hand side of this equation. For \( \xi \in \LieA{g} \), choose a curve \( \varepsilon \mapsto g_\varepsilon \) in \( G \) with \( g_0 = e \) and \( \difFracAt{}{\varepsilon}{0} g_\varepsilon = \xi \). Let \( \Upsilon: \R \to Q \), \( \Upsilon(\varepsilon) = g_\varepsilon \cdot q \) be the orbit curve through \( q \in Q \). Considering \( \varepsilon \mapsto g_\varepsilon \ldot \dot{q} \) as a section of \( \Upsilon^* \TBundle Q \) for \( \dot{q} \in \TBundle_q Q \), we have \begin{equation} \nabla_{\difp_\varepsilon}^\Upsilon (g_\varepsilon \ldot \dot{q})|_{\varepsilon = 0} = K \left(\difFracAt{}{\varepsilon}{0} g_\varepsilon \ldot \dot{q}\right) = K (\xi \ldot \dot{q}) = K \bigl(\tangent \xi_* (\dot{q})\bigr) = \nabla_{\dot{q}} \xi_* \, . 
\end{equation} A similar calculation yields \( \bar{\nabla}_{\difp_\varepsilon}^\Upsilon (g_\varepsilon \ldot p)|_{\varepsilon = 0} = \bar{K}(\xi \ldot p) \) for \( p \in \CotBundle_q Q \). Thus, taking the derivative of the identity \( \dualPair{p}{\dot{q}} = \dualPair{g_\varepsilon \cdot p}{g_\varepsilon \ldot \dot{q}} \) with respect to \( \varepsilon \) gives \begin{equation}\label{eq:clebschLagrange:hamiltonian:proof:nablaXi}\begin{split} 0 &= \difFracAt{}{\varepsilon}{0} \dualPair{g_\varepsilon \cdot p}{g_\varepsilon \ldot \dot{q}} \\ &= \dualPair*{p}{\nabla_{\difp_\varepsilon}^\Upsilon (g_\varepsilon \ldot \dot{q})|_{\varepsilon = 0}} + \dualPair*{\bar{\nabla}_{\difp_\varepsilon}^\Upsilon (g_\varepsilon \ldot p)|_{\varepsilon = 0}}{\dot{q}} \\ &= \dualPair{p}{\nabla_{\dot{q}} \xi_*} + \dualPair{\bar{K}(\xi \ldot p)}{\dot{q}}. \end{split}\end{equation} That is \( \dualPair{p}{\nabla \xi_*} = - \bar{K}(\xi \ldot p) \). Hence, in summary, the first equation in~\eqref{eq:clebschLagrange:hamiltonianDirect:hamiltonian} is equivalent to~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect:evol}. Moreover,~\eqref{eq:hamiltonian} implies \begin{equation} \difpFrac{H}{p}(q, p, \xi) = v + \dualPair*{p}{\difpFrac{v}{p}} - \dualPair*{\difpFrac{L}{\dot q}(q, v, \xi)}{\difpFrac{v}{p}} = \dot q + \xi \ldot q, \end{equation} which yields the second equation in~\eqref{eq:clebschLagrange:hamiltonianDirect:hamiltonian}. Similarly, the derivative of \( H \) in the \( \xi \)-direction is given by \begin{equation}\begin{split} \difpFrac{H}{\xi}(q, p, \xi) &= \dualPair*{p}{\difpFrac{v}{\xi}} - \dualPair*{\difpFrac{L}{\dot q}(q, v, \xi)}{\difpFrac{v}{\xi}} - \difpFrac{L}{\xi}(q, v, \xi) \\ &= - \difpFrac{L}{\xi}(q, \dot q + \xi \ldot q, \xi). \end{split}\end{equation} Hence, the constraint~\eqref{eq:clebschLagrange:hamiltonianDirect:constraint} is equivalent to~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect:constraint}. 
\end{proof} \begin{remark} In the proof of \cref{prop:clebschLagrange:legendreDirect}, we have seen that the first equation in~\eqref{eq:clebschLagrange:hamiltonianDirect:hamiltonian} is equivalent to \begin{equation+} \difpFrac{H}{q}(q, p, \xi) = - \DifFrac{}{t} p + \dualPair{p}{\nabla \xi_*}. \qedhere \end{equation+} \end{remark} The constraint equation~\eqref{eq:clebschLagrange:hamiltonianDirect:constraint} has a natural reformulation in terms of the momentum map of the lifted \( G \)-action on the cotangent bundle \( \CotBundle Q \). As usual, \( \CotBundle Q \) carries a canonical \( 1 \)-form \( \theta \) defined by \begin{equation} \label{CanForm} \theta_{(q,p)} (X) = \dualPair{p}{\tangent_{(q,p)} \pi (X)}_q, \end{equation} where \( X \in \TBundle_{(q,p)} (\CotBundle Q) \) and \( \pi \) is the natural projection of \( \CotBundle Q \). Correspondingly, the canonical symplectic form is \( \omega = \dif \theta \). Recall the \( G \)-action on \( \TBundle Q \) written as \( g \ldot \dot{q} \in \TBundle_{g \cdot q} Q \) for \( g \in G \) and \( \dot{q} \in \TBundle_q Q \). We assume that the relation \begin{equation} \dualPair{g \cdot p}{\dot{q}} = \dualPair{p}{g^{-1} \ldot \dot{q}} \end{equation} for \( g \in G, \dot{q} \in \TBundle_q Q \) and \( p \in \CotBundle_q Q \), defines a lift to \( \CotBundle Q \) of the \( G \)-action on \( Q \) (which is automatic in finite dimensions). The lifted action on \( \CotBundle Q \) is symplectic, and the associated \( G \)-equivariant momentum map \( J: \CotBundle Q \to \LieA{g}^* \) (if it exists) is defined by \begin{equation} \label{eq:cotangentBundle:momentumMapDef} \kappa\left(J(q, p), \xi\right) = \dualPair{p}{\xi \ldot q}. \end{equation} In infinite dimensions, the momentum map may not exist in pathological cases, \cf \parencite[Example~2.2]{DiezRudolphReduction}, and we thus have to assume its existence in what follows.
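To see the defining relation~\eqref{eq:cotangentBundle:momentumMapDef} at work in coordinates, the following NumPy sketch takes the illustrative case \( G = \mathrm{SO}(2) \) acting on \( Q = \R^2 \) by rotations (an assumed example, not from the text); the resulting momentum map is the familiar angular momentum, and the sketch also checks its \( G \)-equivariance.

```python
import numpy as np

# infinitesimal generator of the SO(2)-action on Q = R^2: xi . q = xi * S @ q
S = np.array([[0.0, -1.0], [1.0, 0.0]])

def momentum_map(q, p):
    # J(q, p) is fixed by kappa(J(q, p), xi) = <p, xi . q> for all xi in so(2);
    # for rotations of the plane this is the angular momentum q1*p2 - q2*p1
    return p @ (S @ q)

rng = np.random.default_rng(0)
q, p = rng.standard_normal(2), rng.standard_normal(2)

# defining relation, checked for a sample xi
xi = 0.7
assert np.isclose(momentum_map(q, p) * xi, p @ (xi * S @ q))

# equivariance: the cotangent lift of g in SO(2) is (q, p) -> (g q, g p),
# and so(2)* carries the trivial coadjoint action, so J is invariant
theta = 0.3
g = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.isclose(momentum_map(g @ q, g @ p), momentum_map(q, p))
```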
The right-hand side of~\eqref{eq:cotangentBundle:momentumMapDef} is exactly what occurs in the constraint~\eqref{eq:clebschLagrange:hamiltonianDirect:constraint} and we hence arrive at the following reformulation of the Clebsch--Euler--Lagrange equations~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect}. \begin{coro}\label{prop:clebschLagrange:legendre} In the setting of~\cref{prop:clebschLagrange:legendreDirect}, assume additionally that the lifted \( G \)-action on the cotangent bundle \( \CotBundle Q \) has a momentum map \( J: \CotBundle Q \to \LieA{g}^* \) (which is automatic in finite dimensions). Then, the Clebsch--Euler--Lagrange equations are equivalent to the following set of equations: \begin{subequations}\label{eq:clebschLagrange:hamiltonian}\begin{gather+} \label{eq:clebschLagrange:hamiltonian:hamiltonian} \difpFrac{H}{q}(q, p, \xi) = - \DifFrac{}{t} p - \bar{K}(\xi \ldot p), \qquad \difpFrac{H}{p}(q, p, \xi) = \dot q + \xi \ldot q, \\ \label{eq:clebschLagrange:hamiltonian:constraint} J(q, p) = \difpFrac{H}{\xi}(q, p, \xi). \end{gather+}\end{subequations} \end{coro} \noindent The system~\eqref{eq:clebschLagrange:hamiltonian} on \( \CotBundle Q \times \LieA{g} \) will be referred to as the \emphDef{Clebsch--Hamilton equations}. We will refer to~\eqref{eq:clebschLagrange:hamiltonian:constraint} as the \emphDef{momentum map constraint}. It turns out that the Clebsch--Hamilton equations can be written in a form where the dynamics is given by a Hamiltonian vector field (relative to the canonical symplectic structure on \( \CotBundle Q \)). As this needs some additional technical insight, we will discuss this aspect in a separate paper. \begin{example} A special model of a Clebsch--Hamiltonian system is studied in \parencite{GayBalmazHolmRatiu2013}. Let \( (Q, h) \) be a Riemannian manifold, which we assume to be finite-dimensional for simplicity. Moreover, let \( G \) be a (finite-dimensional) Lie group acting on \( Q \). 
Consider the Lagrangian \begin{equation} L(q, \dot q, \xi) = \frac{m}{2} \norm{\dot q}^2 - V(q, \xi), \end{equation} where the norm is taken with respect to the Riemannian metric \( h \). Thus, the Clebsch--Lagrange problem~\eqref{eq:clebschLagrange:action} consists in minimizing the action functional \begin{equation} S[q, \xi] = \int_0^T \left(\frac{m}{2} \norm{\dot q + \xi \ldot q}^2 - V(q, \xi)\right) \dif t. \end{equation} This variational problem is investigated in \parencite[Section~3.2]{GayBalmazHolmRatiu2013}. By~\eqref{eq:hamiltonian}, the associated Clebsch--Hamiltonian is given by \begin{equation} H(q, p, \xi) = \frac{1}{2m} \norm{p}^2 + V(q, \xi). \end{equation} Thus, the Clebsch--Hamilton equations~\eqref{eq:clebschLagrange:hamiltonian} take the following form (with respect to the Levi--Civita connection): \begin{subequations} \begin{align} \DifFrac{}{t} p + \bar{K}(\xi \ldot p) &= - \difpFrac{V}{q}, \\ p^\sharp &= m (\dot q + \xi \ldot q), \\ J(q, p) &= \difpFrac{V}{\xi}, \end{align} \end{subequations} where \( p^\sharp \in \TBundle_q Q \) denotes the metric-dual of \( p \in \CotBundle_q Q \). These dynamical equations have already been derived in~\parencite[Equation~(3.28)]{GayBalmazHolmRatiu2013}. \end{example} In the remainder of this section, we discuss the time dependence of the momentum map constraint. For this purpose, consider the constraint map \( C: \CotBundle Q \times \LieA{g} \to \LieA{g}^* \) defined by \begin{equation} C(q, p, \xi) = J(q, p) - \difpFrac{H}{\xi}(q, p, \xi). \end{equation} Clearly, the momentum map constraint is equivalent to \( C = 0 \). \begin{prop} \label{prop:G-inv-H} Assume that \( H \) is \( G \)-invariant in the sense that \begin{equation} \label{eq:G-inv-H} H(g \cdot q, g \cdot p, \AdAction_g \xi) = H(q, p, \xi) \end{equation} for all \( g \in G \).
Let \( t \mapsto \gamma(t) = \bigl(q(t), p(t), \xi(t) \bigr) \) be a curve in \( \CotBundle Q \times \LieA{g} \) satisfying the evolution equations~\eqref{eq:clebschLagrange:hamiltonian:hamiltonian}. Then, \begin{equation} \difFrac{}{t} C\bigl(\gamma(t)\bigr) = - \CoadAction_{\xi(t)} C\bigl(\gamma(t)\bigl) - \difFrac{}{t} \left(\difpFrac{H}{\xi}\bigl(\gamma(t)\bigr) \right). \end{equation} In particular, a solution \( \gamma(t) \) of the evolution equations~\eqref{eq:clebschLagrange:hamiltonian:hamiltonian} with \( C\bigl(\gamma(t_0)\bigr) = 0 \) is tangent\footnote{That is, \( \tangent_{\gamma(t_0)} C \bigl(\dot{\gamma}(t_0)\bigr) = 0 \).} to the constraint set \( C = 0 \) at \( \gamma(t_0) \) if and only if \begin{equation+} \difFracAt{}{t}{t_0} \left(\difpFrac{H}{\xi}\bigl(\gamma(t)\bigr)\right) = 0. \qedhere \end{equation+} \end{prop} \begin{proof} For clarity, we suppress the explicit time dependence in the subsequent calculation. By differentiating the \( G \)-invariance identity~\eqref{eq:G-inv-H} at the point \( (q, p, \xi) \) with respect to \( g \in G \), using~\eqref{eq:clebschLagrange:hamilton:decompCotBundle} and~\eqref{eq:clebschLagrange:hamilton:decompTHamilton}, we get \begin{equation}\begin{split} 0 &= \dualPair*{\difpFrac{H}{q}}{\zeta \ldot q} + \dualPair*{\bar{K}(\zeta \ldot p)}{\difpFrac{H}{p}}{} + \kappa\left(\difpFrac{H}{\xi}, \adAction_\zeta \xi\right) \end{split}\end{equation} for all \( \zeta \in \LieA{g} \). Here, \( \adAction \) denotes the adjoint action of \( \LieA{g} \), that is, \( \adAction_\zeta \xi = \commutator{\zeta}{\xi} \). 
Using this equation and the evolution equations~\eqref{eq:clebschLagrange:hamiltonian:hamiltonian}, we obtain \begin{equation}\begin{split} \difFrac{}{t} \kappa\bigl(C \bigl(\gamma(t)\bigr), \zeta\bigr) &= \difFrac{}{t} \left(\dualPair{p}{\zeta \ldot q} - \kappa\left(\difpFrac{H}{\xi}, \zeta\right)\right) \\ &= \dualPair*{\DifFrac{}{t} p}{\zeta \ldot q} + \dualPair*{p}{\DifFrac{}{t} (\zeta \ldot q)} - \kappa\left(\difFrac{}{t} \difpFrac{H}{\xi}, \zeta\right) \\ &= \dualPair{\bar{K}(\zeta \ldot p)}{\xi \ldot q} - \dualPair{\bar{K}(\xi \ldot p)}{\zeta \ldot q} + \kappa\left(\difpFrac{H}{\xi}, \adAction_\zeta \xi\right) \\ &\qquad + \dualPair{\bar{K}(\zeta \ldot p)}{\dot q} + \dualPair*{p}{\nabla_{\dot q} \zeta_*} - \kappa\left(\difFrac{}{t} \difpFrac{H}{\xi}, \zeta\right). \end{split}\end{equation} By~\eqref{eq:clebschLagrange:hamiltonian:proof:nablaXi}, the sum of the fourth and the fifth term vanishes. Moreover, by \( G \)-equivariance of \( J \) and by using~\eqref{eq:clebschLagrange:hamiltonian:proof:nablaXi}, we have \begin{equation} \dualPair{\bar{K}(\zeta \ldot p)}{\xi \ldot q} - \dualPair{\bar{K}(\xi \ldot p)}{\zeta \ldot q} = - \kappa(J(q, p), \adAction_\zeta \xi). \end{equation} This yields the assertion. \end{proof} A calculation similar to the one in the proof of~\cref{prop:G-inv-H} yields the following. \begin{prop} \label{prop:clebschLagrange:lagrange:constraintConstantOfMotionEulerPoincare} Assume, instead, that \( H \) is \( G \)-invariant in the sense that \begin{equation} H(g \cdot q, g \cdot p, \xi) = H(q, p, \xi)\, , \end{equation} for all \( g \in G \). Let \( t \mapsto \gamma(t) = \bigl(q(t), p(t), \xi(t) \bigr) \) be a curve in \( \CotBundle Q \times \LieA{g} \) satisfying the Clebsch--Hamilton equations~\eqref{eq:clebschLagrange:hamiltonian}. 
Then the curve \( t \mapsto J\bigl(\gamma(t)\bigr) \in \LieA{g}^* \) satisfies the Euler--Poincaré equation \begin{equation+} \difFrac{}{t} \left( J\bigl(\gamma(t)\bigr) \right) = - \CoadAction_{\xi(t)} \left( J\bigl(\gamma(t)\bigr) \right). \qedhere \end{equation+} \end{prop} \begin{remark}[Clebsch representation] Originally, the Clebsch representation refers to a special parameterization of the velocity field of an ideal incompressible fluid. \Textcite{MarsdenWeinstein1983} have shown that this classical example from fluid dynamics fits into the following geometric framework: Let \( (P, \poissonDot) \) be a Poisson manifold with Hamiltonian \( h: P \to \R \). A \emphDef{Clebsch representation} of \( P \) is a pair consisting of a symplectic manifold \( M \) and a Poisson map \( \psi: M \to P \). If we let \( H = h \circ \psi \), then \( \psi \) intertwines the Hamiltonian vector fields \( X_H \) and \( X_h \). Thus, by introducing possibly redundant variables, the Hamilton--Poisson equations on \( P \) are written in a symplectic Hamiltonian form on \( M \). The most important class of examples is obtained when \( P = \LieA{g}^* \) endowed with the Lie--Poisson bracket and when \( \psi = J \) is the momentum map for a symplectic \( G \)-action on \( M \). Then, \( J \) gives Clebsch variables in which the Euler--Poincaré equations on \( \LieA{g}^* \) are written in a symplectic Hamiltonian form. Now consider the Clebsch--Hamiltonian setup given by a Clebsch--Hamiltonian \( H: \CotBundle Q \times \LieA{g} \to \R \) which is \( G \)-invariant in the sense that \( H(g \cdot q, g \cdot p, \xi) = H(q, p, \xi) \). Then, \cref{prop:clebschLagrange:lagrange:constraintConstantOfMotionEulerPoincare} shows that \( \psi = J: \CotBundle Q \to \LieA{g}^* \) intertwines solutions of the Clebsch--Hamilton equations with solutions of the Euler--Poincaré equation on \( \LieA{g}^* \) parametrized by points \( (q,p) \in \CotBundle Q \). 
In other words, by introducing possibly redundant variables, the (generalized) Euler--Poincaré equations are written in a symplectic Clebsch--Hamilton form. Many equations (especially from hydrodynamics) can be written as Euler--Poincaré equations on some Lie algebra. For these cases, the Clebsch--Hamilton formalism thus provides a framework to construct different Clebsch-like representations. This observation might turn out to be especially advantageous for coupled equations, \eg Yang--Mills plasmas or relativistic fluids, for which the additional variables in \( \CotBundle Q \) admit a physical interpretation. \end{remark} \subsection{Relation to the standard Euler--Lagrange problem} \label{Ham-Pic-gen} In this section, we derive the Clebsch--Legendre transformation~\eqref{eq:clebschLagrange:clebschLegendre} from an ordinary Legendre transformation of an extended Lagrangian system. This approach leads to a constraint analysis using the Dirac--Bergmann theory, which for the special cases of Yang--Mills theory and general relativity recovers the discussion of constraints in \parencite{BergveltDeKerf1986,BergveltDeKerf1986a,Giulini2015}. \subsubsection{The extended phase space} One can view \( \xi \in \LieA{g} \) also as an ordinary configuration variable and thus formulate the Clebsch--Euler--Lagrange variational principle as an ordinary Euler--Lagrange problem. For that purpose, we extend the configuration space to \begin{equation} Q_\ext \defeq Q \times \LieA{g} \end{equation} and define \begin{equation} \label{L-ext} L_\ext : \TBundle Q_\ext \to \R \, , \quad L_\ext(q, \dot q, \xi, \dot \xi) = L(q, \dot q + \xi \ldot q, \xi) \, . \end{equation} The basic structure for the discussion of the Hamiltonian picture is the cotangent bundle \begin{equation} \CotBundle Q_\ext = \CotBundle Q \times (\LieA{g} \times \LieA{g}^*) \end{equation} endowed with the natural product symplectic structure.
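Because \( \dot\xi \) does not enter the right-hand side of~\eqref{L-ext}, the \( \dot\xi \)-momentum of \( L_\ext \) vanishes identically; this is the source of the primary constraint in the constraint analysis below. The following SymPy sketch verifies this for the illustrative kinetic-minus-potential Lagrangian on \( Q = \R^2 \) with \( G = \mathrm{SO}(2) \) (an assumed example, not from the text).

```python
import sympy as sp

q1, q2, dq1, dq2, xi, dxi, m = sp.symbols('q1 q2 dq1 dq2 xi dxi m', real=True)
V = sp.Function('V')(q1, q2, xi)

# infinitesimal SO(2)-action on R^2: xi . q = xi * (-q2, q1)
v1 = dq1 - xi * q2          # components of the shifted velocity qdot + xi . q
v2 = dq2 + xi * q1

# extended Lagrangian L_ext(q, qdot, xi, xidot) = L(q, qdot + xi . q, xi)
L_ext = sp.Rational(1, 2) * m * (v1**2 + v2**2) - V

# the momentum conjugate to xi vanishes identically: the primary constraint nu = 0
nu = sp.diff(L_ext, dxi)
assert nu == 0

# the momenta conjugate to q reproduce the Clebsch-Legendre transformation
p1 = sp.diff(L_ext, dq1)
assert sp.simplify(p1 - m * (dq1 - xi * q2)) == 0
```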
We usually denote points of \( \CotBundle Q_\ext \) by tuples \( (q, p, \xi, \nu) \). First, we observe that the Lie group $\TBundle G$ acts naturally on $Q_\ext$. Under the right trivialization $\tau_\textrm{R}$ of $\TBundle G$, \begin{equation} \tau_\textrm{R}: \LieA{g} \times G \to \TBundle G \, , \quad (\zeta, g) \mapsto \zeta \ldot g \,, \end{equation} the group structure of \( \TBundle G \) is that of a semidirect product \( \LieA{g} \rSemiProduct_{\AdAction} G \). That is, the group multiplication is given by \begin{equation} (\xi, a) \cdot (\zeta, b) = (\xi + \AdAction_a \zeta, ab) \end{equation} for \( \xi, \zeta \in \LieA{g} \) and \( a,b \in G \). The pair \( (\zeta, g) \in \LieA{g} \rSemiProduct_{\AdAction} G \) acts on \( Q_\ext \) by \begin{equation}\label{eq:TGactionOnQext} (\zeta, g) \cdot (q, \xi) \defeq (g \cdot q, \AdAction_g \xi + \zeta). \end{equation} A straightforward calculation shows that the natural lift of the \( \TBundle G \)-action to \( \CotBundle Q_\ext \) has the form \begin{equation}\label{eq:extendedCotangentBundle:liftedAction} (\zeta, g) \cdot (q, p, \xi, \nu) = (g \cdot q, g \cdot p, \AdAction_g \xi + \zeta, \CoAdAction_g \nu). \end{equation} \begin{prop}\label{ext-CotB} Assume that the lifted \( G \)-action on the cotangent bundle \( \CotBundle Q \) has a momentum map \( J: \CotBundle Q \to \LieA{g}^* \) (which is automatic in finite dimensions). Then, the cotangent bundle \( \CotBundle Q_\ext \) carries the structure of a Hamiltonian \( \TBundle G \)-manifold with the equivariant momentum map \( J_\ext: \CotBundle Q_\ext \to \LieA{g}^* \times \LieA{g}^* \) given by \begin{equation+}\label{eq:TGmomentumMap} J_\ext (q, p, \xi, \nu) = (\nu, \CoadAction_\xi \nu + J(q,p)). \qedhere \end{equation+} \end{prop} \begin{proof} Since the \( G \)-action on \( Q \) and the adjoint action are smooth, the \( \TBundle G \)-action on \( Q_\ext \) as defined in~\eqref{eq:TGactionOnQext} is smooth, too.
The lifted \( \TBundle G \)-action on the cotangent bundle \( \CotBundle Q_\ext \) leaves the tautological \( 1 \)-form invariant and thus it is symplectic. Moreover, note that the fundamental vector field on \( Q_\ext \) generated by \( (\sigma, \varrho) \in \LieA{g} \times \LieA{g} \) is given by \begin{equation} (\sigma, \varrho) \ldot (q, \xi) = (\varrho \ldot q, \commutator{\varrho}{\xi} + \sigma). \end{equation} Using the defining relation~\eqref{eq:cotangentBundle:momentumMapDef} of the momentum map, we have \begin{equation}\begin{split} \kappa\left(J_\ext (q, p, \xi, \nu), (\sigma, \varrho) \right) &= \dualPair{(p, \nu)}{(\sigma, \varrho) \ldot (q, \xi)} \\ &= \dualPair{p}{\varrho \ldot q} + \kappa(\nu, -\adAction_\xi \varrho + \sigma) \\ &= \kappa(\nu, \sigma) + \kappa(\CoadAction_\xi \nu + J(q,p), \varrho), \end{split}\end{equation} which verifies the asserted formula~\eqref{eq:TGmomentumMap}. The equivariance of \( J_\ext \) is immediate from~\eqref{eq:cotangentBundle:momentumMapDef}. \end{proof} \subsubsection{The Dirac--Bergmann theory of constraints} The extended Lagrangian \( L_\ext \) is never regular, because the velocity vector \( \dot \xi \) does not appear in \( L_\ext \). In order to handle the degeneracy, we employ the Dirac--Bergmann theory of constraints. Let us briefly review the algorithm in a geometric language as presented in \parencite{GotayNesterHinds1978}. \begin{enumerate} \item Given a degenerate Lagrangian \( L: \TBundle Q \to \R \), let \( M_1 \subseteq \CotBundle Q \) be the image of the fiber derivative of \( L \). Assume that \( M_1 \) is a smooth submanifold. Then, at least locally, \( M_1 \) is characterized by the vanishing of a collection of functions \( c_i \). The equations \( c_i(q, p) = 0 \) are called the \emphDef{primary constraints}.
\item The Hamiltonian \( H \), as a smooth function on \( M_1 \), is defined by the Legendre transformation \begin{equation} H\left(q, \difpFrac{L}{\dot{q}}(q, \dot q)\right) \defeq \dualPair*{\difpFrac{L}{\dot{q}}(q, \dot q)}{\dot q} - L(q, \dot q). \end{equation} Let \( M_2 \subseteq M_1 \) be the subset characterized by the \emphDef{secondary constraints} as follows: \begin{equation} \label{eq:clebsch:hamiltonian:secondaryConstraint} m \in M_2 \text{ if and only if } \dualPair{\dif_m H}{\TBundle_m M_1 \intersect (\TBundle_m M_1)^\omega} = 0, \end{equation} where \( (\TBundle_m M_1)^\omega \) denotes the symplectic orthogonal of \( \TBundle_m M_1 \) in \( \TBundle_m (\CotBundle Q) \) with respect to the canonical symplectic form \( \omega \). Assume that \( M_2 \) is a smooth submanifold of \( M_1 \). \item Iterate the process to get a chain of submanifolds \begin{equationcd} \ldots \to[r] & M_i \to[r] & \ldots \to[r] & M_3 \to[r] & M_2 \to[r] & M_1 \to[r] & \CotBundle Q, \nonumber \end{equationcd} defined by the condition that \( m \in M_i \) is a point of \( M_{i+1} \) if and only if \( \dualPair{\dif_m H}{\TBundle_m M_1 \intersect (\TBundle_m M_i)^\omega} = 0 \). If the algorithm happens to terminate at a non-empty submanifold \( M_f \), then the dynamics is, by construction, tangent to \( M_f \) and is Hamiltonian with respect to the Hamiltonian \( \restr{H}{M_f} \). \end{enumerate} \begin{remark} \Textcite{GotayNesterHinds1978} discuss this algorithm in an infinite-dimensional Banach setting but under the rather restrictive assumption that the symplectic form is strongly symplectic. This assumption never holds for symplectic Fréchet manifolds. Moreover, in \parencite{GotayNesterHinds1978} it is assumed that all subsets \( M_i \) are submanifolds of \( \CotBundle Q \). In our setting, \( M_2 \) turns out to be a momentum map level set and thus it is not a smooth manifold unless the action is free. 
In view of these problems, we take the Dirac--Bergmann algorithm only as a guide and verify directly at the end that the Hamiltonian system so obtained is indeed equivalent to the Clebsch--Hamilton equations, which in turn are equivalent to the degenerate Lagrangian system we started with, see \cref{prop:clebschLagrange:legendre}. \end{remark} We now apply this algorithm to the degenerate Lagrangian $L_\ext: \TBundle Q_\ext \to \R $ defined in~\eqref{L-ext}. The fiber derivative of \( L_\ext \) is given by \begin{equation} \left(\difpFrac{L_\ext}{\dot{q}}, \difpFrac{L_\ext}{\dot{\xi}}\right)(q, \dot q, \xi, \dot \xi) = \left(\difpFrac{L}{\dot{q}}(q, \dot q + \xi \ldot q, \xi), 0\right). \end{equation} Note that this Legendre transformation of the extended system is (essentially) equivalent to the Clebsch--Legendre transformation introduced in~\eqref{eq:clebschLagrange:clebschLegendre}. In the case when \( L \) is regular, the range of the fiber derivative is thus \begin{equation} M_1 \equiv \CotBundle Q \times \LieA{g} \times \set{0} \subseteq \CotBundle Q_\ext = \CotBundle Q \times \LieA{g} \times \LieA{g}^* \, . \end{equation} In other words, the primary constraint entails that the canonical momentum conjugate to \( \xi \) has to vanish, \ie, \( \nu = 0 \). This should not come as a surprise as \( L_\ext \) does not depend on the velocity vector \( \dot \xi \). The Hamiltonian \( H_\ext \) on \( \CotBundle Q \times \LieA{g} \) is defined by the Legendre transformation of \( L_\ext \), \begin{equation} \label{H-ext} H_\ext(q,p,\xi) \defeq \dualPair*{p}{\dot q} - L_\ext(q, \dot q, \xi, 0) \, , \end{equation} where the relation \begin{equation} p = \difpFrac{L_\ext}{\dot{q}}(q, \dot q, \xi, \dot \xi) = \difpFrac{L}{\dot{q}}(q, \dot q + \xi \ldot q, \xi) \end{equation} determines $\dot q$ as a function of $q,p$ and $\xi$.
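To make the Legendre transformation~\eqref{H-ext} concrete, the following SymPy sketch evaluates \( H_\ext \) for the illustrative kinetic-minus-potential example on \( Q = \R^2 \) with \( G = \mathrm{SO}(2) \) and computes its \( \xi \)-derivative; the example data are assumptions made for the illustration, not taken from the text.

```python
import sympy as sp

q1, q2, p1, p2, xi, m = sp.symbols('q1 q2 p1 p2 xi m', real=True)
V = sp.Function('V')(q1, q2, xi)

# SO(2) generator on R^2: xi . q = xi * (-q2, q1)
g1, g2 = -q2, q1

# invert p = dL_ext/dqdot = m * (qdot + xi . q) for qdot
dq1 = p1 / m - xi * g1
dq2 = p2 / m - xi * g2

# H_ext(q, p, xi) = <p, qdot> - L_ext(q, qdot, xi, 0)
v1, v2 = dq1 + xi * g1, dq2 + xi * g2            # = p/m
L = sp.Rational(1, 2) * m * (v1**2 + v2**2) - V
H_ext = p1 * dq1 + p2 * dq2 - L

# concretely, H_ext = |p|^2/(2m) + V - xi * J with J = q1*p2 - q2*p1
J = q1 * p2 - q2 * p1
assert sp.simplify(H_ext - ((p1**2 + p2**2) / (2 * m) + V - xi * J)) == 0

# hence dH_ext/dxi = 0 amounts to the angular momentum constraint J = dV/dxi
assert sp.simplify(sp.diff(H_ext, xi) - (sp.diff(V, xi) - J)) == 0
```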
As the next step of the constraint analysis, we have to determine the set of points \( (q, p, \xi) \in M_1 \) for which the linearized Hamiltonian vanishes on the symplectic orthogonal of \( \TBundle_{(q,p,\xi)} M_1 \). We clearly have \begin{equation} \bigl(\TBundle_{(q,p,\xi)} M_1\bigr)^\omega = \bigl(\TBundle_{(q,p)} (\CotBundle Q) \times \LieA{g} \times \set{0}\bigr)^\omega = \set{0} \times (\LieA{g} \times \set{0}) \subseteq \TBundle_{(q,p)} (\CotBundle Q) \times (\LieA{g} \times \LieA{g}^*), \end{equation} because the canonical symplectic form on \( \CotBundle Q \) is weakly non-degenerate and \( \LieA{g} \times \set{0} \) is a Lagrangian subspace of \( \LieA{g} \times \LieA{g}^* \). Thus, the secondary constraint entails that the derivative of \( H_\ext \) in the \( \xi \)-direction has to vanish, \ie \begin{equation} \label{constr-H-ext} \difpFrac{H_\ext}{\xi}(q,p,\xi) = 0. \end{equation} To summarize, after this step of the constraint analysis, we obtain the following system of constrained Hamilton equations on \( \CotBundle Q_\ext \): \begin{subequations}\label{eq:clebschLagrange:hamiltonianExt}\begin{gather} \label{eq:clebschLagrange:hamiltonianExt:hamiltonian} \difpFrac{H_\ext}{q}(q, p, \xi) = - \DifFrac{}{t} p, \qquad \difpFrac{H_\ext}{p}(q, p, \xi) = \dot q, \\ \label{eq:clebschLagrange:hamiltonianExt:constraints} \nu = 0, \qquad \difpFrac{H_\ext}{\xi}(q,p,\xi) = 0. \end{gather}\end{subequations} The constraints~\eqref{eq:clebschLagrange:hamiltonianExt:constraints} are, in general, not preserved in time and, thus, the Dirac--Bergmann algorithm may yield further constraints. We will return to this question at the end of the section, but first let us relate the constrained Hamiltonian system~\eqref{eq:clebschLagrange:hamiltonianExt} to the Clebsch--Hamiltonian system~\eqref{eq:clebschLagrange:hamiltonian}. For this, we need the following. 
\begin{lemma} The following identity holds: \begin{equation+}\label{eq:compositeHamiltonian} H_\ext(q, p, \xi) = H(q, p, \xi) - \kappa(J(q, p), \xi), \end{equation+} where \( H \) is the Clebsch--Hamiltonian obtained via the Clebsch--Legendre transformation, \cf~\eqref{eq:hamiltonian}. \end{lemma} \begin{proof} Using~\eqref{H-ext}, the momentum map relation and the definition of \( H \) as the Legendre transform of \( L \) (with \( v = \dot q + \xi \ldot q \) in the above notation), we calculate \begin{align} H_\ext(q, p, \xi) &= \dualPair{p}{\dot q} - L_\ext(q, \dot q, \xi, 0) \nonumber \\ &= \dualPair*{p}{\dot q} - L(q, \dot q + \xi \ldot q, \xi) \\ &= \dualPair*{p}{\dot q + \xi \ldot q} - L(q, \dot q + \xi \ldot q, \xi) - \dualPair*{p}{\xi \ldot q}\nonumber \\ &= H \left(q, p, \xi \right) - \kappa(J(q, p), \xi).\nonumber \qedhere \end{align} \end{proof} The identity~\eqref{eq:compositeHamiltonian} allows us to reformulate the equations~\eqref{eq:clebschLagrange:hamiltonianExt} in a form which involves \( H \) only. Using \( \kappa(J(q, p),\xi) = \dualPair{p}{\xi \ldot q} \), we obtain \begin{equation}\begin{split} \difpFrac{H_\ext}{q}(q, p, \xi) &= \difpFrac{H}{q}(q, p, \xi) + \bar{K}(\xi \ldot p), \\ \difpFrac{H_\ext}{p}(q, p, \xi) &= \difpFrac{H}{p}(q, p, \xi) - \xi \ldot q, \end{split}\end{equation} so that the evolution equations~\eqref{eq:clebschLagrange:hamiltonianExt:hamiltonian} take the form \begin{equation} \difpFrac{H}{q}(q, p, \xi) = - \DifFrac{}{t} p - \bar{K}(\xi \ldot p), \qquad \difpFrac{H}{p}(q, p, \xi) = \dot q + \xi \ldot q. \end{equation} Similarly, for the \( \xi \)-derivative of \( H_\ext \), using~\eqref{eq:compositeHamiltonian}, we find \begin{equation} \difpFrac{H_\ext}{\xi}(q, p, \xi) = \difpFrac{H}{\xi}(q, p, \xi) - J(q, p). \end{equation} The constraints~\eqref{eq:clebschLagrange:hamiltonianExt:constraints} are, therefore, equivalent to \begin{equation} \label{eq:Constr} \nu = 0, \qquad J(q, p) = \difpFrac{H}{\xi}(q, p, \xi). 
\end{equation} In summary, the constrained Hamiltonian system~\eqref{eq:clebschLagrange:hamiltonianExt} obtained by the Dirac--Bergmann analysis of the extended system is equivalent to the Clebsch--Hamilton system~\eqref{eq:clebschLagrange:hamiltonian}. \begin{remark} \label{prop:clebschLagrange:constraintsAsMomentumMapExtConstraint} Using \cref{ext-CotB}, we see that the constraints defined by~\eqref{eq:Constr} are equivalent to the momentum map constraint \begin{equation+} J_\ext(q, p, \xi, \nu) = \left(0, \difpFrac{H}{\xi}(q, p, \xi)\right). \qedhere \end{equation+} \end{remark} In \cref{prop:G-inv-H}, we have seen that the dynamics is tangent to the momentum map constraint if \( H \) is \( G \)-invariant in the sense of~\eqref{eq:G-inv-H} and if, additionally, it does not explicitly depend on the \( \xi \)-variable. Thus, in this case, the Dirac--Bergmann algorithm terminates at this stage without introducing further constraints. Let us spell out the details. Since \( H \) does not explicitly depend on \( \xi \), we have \( M_2 = J^{-1}(0) \times \LieA{g} \). Note that \( M_2 \) might have singularities if the action is not free. The \enquote{tangent space} to \( J^{-1}(0) \) at a point \( (q, p) \) is the kernel of \( \tangent_{q, p} J \), which by the bifurcation lemma equals \( {(\LieA{g} \ldot (q, p))}^\omega \), \cf \parencite[Proposition~4.5.14]{OrtegaRatiu2003} for the finite-dimensional case and \parencite{DiezThesis} for the infinite-dimensional setting. Thus, for a \( G \)-invariant Clebsch--Hamiltonian \( H \), the condition\footnote{Here, we have used the identity \( (\LieA{g} \ldot (q, p))^{\omega \omega} = \LieA{g} \ldot (q, p) \), which in infinite dimensions requires additional assumptions of functional-analytic nature.
For a precise formulation of the bifurcation lemma in an infinite-dimensional context see \parencite{DiezThesis}.} \begin{equation} 0 = \dualPair{\dif_{q, p, \xi} H_\ext}{\TBundle_{q, p, \xi} M_1 \intersect (\TBundle_{q, p, \xi} M_2)^\omega} = \dualPair{\dif_{q, p} H}{\LieA{g} \ldot (q,p)}, \end{equation} is automatically satisfied for all \( (q, p, \xi) \in \CotBundle Q \times \LieA{g} \) and the Dirac algorithm terminates at \( M_2 \). \subsubsection{Reduction by stages} \label{sec:clebschLagrange:reductionStages} As we have seen in \cref{prop:clebschLagrange:constraintsAsMomentumMapExtConstraint}, the constraints in the Dirac--Bergmann algorithm can be implemented in terms of the momentum map \( J_\ext \) for the lifted \( \TBundle G \)-action on \( \CotBundle Q_\ext \). Thus it is natural to think of the symplectically reduced space \( \CotBundle Q_\ext \sslash \TBundle G \) as the true phase space of the theory. In the sequel, we use the theory of symplectic reduction by stages \parencite{MarsdenMisiolekEtAl2007} for the tangent group \( \TBundle G \) to realize the two steps in the Dirac--Bergmann process as two separate symmetry reductions. The starting point is the cotangent bundle \( \CotBundle Q_\ext \) of the extended configuration space \( Q_\ext \). Let us assume that the Clebsch--Hamiltonian $H$ is $G$-invariant in the sense of~\eqref{eq:G-inv-H} and that it does not explicitly depend on \( \xi \). Then, the constraints~\eqref{eq:Constr} have the form \( \nu = 0 \) and \( J(q, p) = 0 \), or in other words, \( J_\ext(q,p,\xi,\nu) = 0 \). As we have seen above, the right trivialization \( \tau_\textrm{R} \) identifies \( \TBundle G \) with the semidirect product \( \LieA{g} \rSemiProduct_{\AdAction} G \). It is hence natural to perform symplectic reduction by stages: first quotient out by the Lie algebra \( \LieA{g} \) and then by the Lie group \( G \). 
Due to the particularly simple action of \( \LieA{g} \) on \( Q_\ext \), the reduced phase space at the first stage is symplectomorphic to \( \CotBundle Q \). Indeed, the momentum map for the \( \LieA{g} \)-action on \( \CotBundle Q_\ext \) is simply given by \begin{equation} J_{\LieA{g}} (q, p, \xi, \nu) = \nu. \end{equation} Hence, the condition \( J_{\LieA{g}} = 0 \) cuts out exactly the first constraint submanifold \( M_1 = \CotBundle Q \times \LieA{g} \times \set{0} \subset \CotBundle Q_\ext \). Moreover, by~\eqref{eq:extendedCotangentBundle:liftedAction}, \( \LieA{g} \) acts on \( \CotBundle Q_\ext \) by translation in the \( \LieA{g} \)-factor and thus the quotient \( J_{\LieA{g}}^{-1}(0) \slash \LieA{g} \) is diffeomorphic to \( \CotBundle Q \). The regular cotangent bundle reduction theorem \parencite[Theorem~6.6.1]{OrtegaRatiu2003} shows that the reduced symplectic form coincides with the canonical one (the problems coming from the infinite-dimensional setting can easily be handled, because the \( \LieA{g} \)-bundle \( Q_\ext \to Q \) is trivial). By~\parencite[Lemma~4.2.6]{MarsdenMisiolekEtAl2007}, the momentum map for the residual \( G \)-action on the reduced space \( \CotBundle Q \) is induced by the map \begin{equation} \pr_2 \circ \restr{(J_\ext)}{J_{\LieA{g}}^{-1}(0)} (q, p, \xi) = J(q, p). \end{equation} That is, it coincides with the momentum map for the lifted action on \( \CotBundle Q \). This can also be directly deduced by noting that the residual \( G \)-action coincides with the lifted action on the cotangent bundle. In particular, the momentum level set \( J^{-1}(0) \) coincides with the secondary constraint set \( M_2 \).
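To illustrate the momentum map of a cotangent-lifted action in the simplest finite-dimensional situation (a standard example, included here for orientation only), consider \( G = \SOGroup(3) \) acting on \( Q = \R^3 \) by rotations. The lifted action on \( \CotBundle Q \isomorph \R^3 \times \R^3 \) is \( g \cdot (q, p) = (g q, g p) \), and the momentum map determined by \( \dualPair{J(q, p)}{\xi} = \dualPair{p}{\xi \ldot q} \) is the angular momentum
\begin{equation}
J(q, p) = q \times p,
\end{equation}
where we have identified \( \LieA{so}(3)^* \) with \( \R^3 \) by means of the cross product. The zero level set \( J^{-1}(0) \) consists of the pairs \( (q, p) \) with \( p \) parallel to \( q \) and is singular at the origin, illustrating the remark above that the constraint set may fail to be a manifold when the action is not free.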
As the momentum maps under consideration are equivariant, the reduction by stages theorem for semidirect product actions~\parencite[Theorem~4.2.2]{MarsdenMisiolekEtAl2007} implies that the two-stage reduced space is symplectomorphic to the all-at-once reduced space\footnote{To be more precise, the reduction by stages theorem~\parencite[Theorem~4.2.2]{MarsdenMisiolekEtAl2007} is formulated for the free and proper action of a finite-dimensional group on a finite-dimensional phase space. Nonetheless, the peculiarities of our infinite-dimensional setting can be handled easily due to the simple form of the first reduction.}. We thus have shown the following. \begin{prop} \label{prop:clebschLagrange:reductionStages} The symplectically reduced space \( \check{M} = J^{-1}(0) \slash G \equiv \CotBundle Q \sslash G \) is symplectomorphic to the symplectic quotient \( J_\ext^{-1}(0) \slash \TBundle G \equiv \CotBundle Q_\ext \sslash \TBundle G \) and we have the following reduction by stages diagram: \begin{equation}\begin{tikzcd}[column sep=5.9em, row sep=0.02em] \CotBundle Q_\ext \ar[r, twoheadrightarrow, "{\sslash \, \LieA{g}}"] \ar[rr, twoheadrightarrow, "{\sslash \, \TBundle G}", swap, bend right] & \CotBundle Q \ar[r,twoheadrightarrow, "{\sslash \, G}"] & \check{M}. \end{tikzcd}\end{equation} \end{prop} By this proposition, the symplectic reduction procedure boils down to the symplectic reduction of $\CotBundle Q$ with respect to $G$. This singular reduction will be discussed in detail elsewhere \parencite{DiezRudolphReduction}. Finally, let us comment on the reduction of dynamics. We continue to work in the setting where the Clebsch--Hamiltonian \( H \) is \( G \)-invariant in the sense of~\eqref{eq:G-inv-H} and does not explicitly depend on \( \xi \). In particular, we view \( H \) as a smooth function on \( \CotBundle Q \). Recall that the Hamiltonian $H_\ext$ is only defined on \( J_{\LieA{g}}^{-1}(0) \) and not on the whole of \( \CotBundle Q_\ext \). 
By~\eqref{eq:extendedCotangentBundle:liftedAction}, the $\LieA{g}$-action on \( \CotBundle Q_\ext \) has the form \begin{equation} (\zeta, {\mathbbm 1}) \cdot (q, p, \xi, \nu) = (q, p, \xi + \zeta, \nu) \end{equation} and, thus, $H_\ext$ as given by~\eqref{eq:compositeHamiltonian} is \emph{not} \( \LieA{g} \)-invariant. That is, it does not descend to the first reduced space \( J_{\LieA{g}}^{-1}(0) \slash \LieA{g} = \CotBundle Q \). On the other hand, if also the secondary constraint \( J = 0 \) is imposed, then \( H_\ext \) coincides with $H$ and is \( \TBundle G \)-invariant. Hence, the Hamiltonian \( H_\ext \) \emph{does} descend to the completely reduced space \( \check{M} \isomorph \CotBundle Q_\ext \sslash \TBundle G \) and the reduction of dynamics boils down to the reduction of $H: \CotBundle Q \to \R$ with respect to $G$. Let us summarize. \begin{thm} \label{prop:clebschLagrange:equivalenceHamiltonianReductions} Let \( H: \CotBundle Q \to \R \) be a \( \xi \)-independent Clebsch--Hamiltonian, which is \( G \)-invariant in the sense of~\eqref{eq:G-inv-H}. Assume that the lifted \( G \)-action to \( \CotBundle Q \) has a momentum map \( J \). Then, the following systems of equations are equivalent: \begin{thmenumerate} \item The Clebsch--Hamilton equations~\eqref{eq:clebschLagrange:hamiltonian} on \( \CotBundle Q \times \LieA{g} \) with respect to \( H \). \item The constraint Hamilton equations~\eqref{eq:clebschLagrange:hamiltonianExt} on \( \CotBundle Q_\ext \) with respect to the Hamiltonian \( H_\ext: \CotBundle Q \times \LieA{g} \to \R \) defined by~\eqref{eq:compositeHamiltonian}. \item The Hamilton equations on \( \check{M} = \CotBundle Q \sslash G \isomorph \CotBundle Q_\ext \sslash \TBundle G \) with respect to the reduced Hamiltonian \( \check{H}: \check{M} \to \R \) defined by \begin{equation} \pi^* \check{H} = \restr{H}{J^{-1}(0)}, \end{equation} where \( \pi: J^{-1}(0) \to J^{-1}(0) \slash G = \check{M} \) is the natural projection. 
\qedhere \end{thmenumerate} \end{thm} Since the Clebsch--Hamiltonian \( H: \CotBundle Q \times \LieA{g} \to \R \) is \( \xi \)-independent, it may be viewed as an ordinary Hamiltonian on \( \CotBundle Q \). We emphasize, however, that the ordinary Hamilton equations on \( \CotBundle Q \) with respect to \( H \) are \emph{not} equivalent to the Clebsch--Hamilton equations on \( \CotBundle Q \times \LieA{g} \). This explains to some extent the problem one faces when trying to quantize relativistic field theories: in order to make sense of the dynamics, the Hamiltonian and the momentum constraint have to be quantized simultaneously, or, alternatively, the symmetry has to be reduced completely before quantization. \section{Yang--Mills--Higgs theory} \label{sec:yangMillsHiggs} In this section, we will show how the Yang--Mills--Higgs system fits into the general Clebsch--Lagrange variational framework discussed in \cref{sec:clebschLagrange}. Let \( (M, \eta) \) be a \( 4 \)-dimensional oriented Lorentzian manifold with signature \( (- + +\, +) \). The underlying geometry of a Yang--Mills--Higgs field is that of a principal \( G \)-bundle \( P \to M \), where \( G \) is a connected compact Lie group. In order to establish the notation, we recall the geometric picture and we refer for details to \parencite{RudolphSchmidt2014}. A connection in \( P \) is a splitting of the tangent bundle \( \TBundle P = \VBundle P \oplus \HBundle P \) into the canonical vertical distribution \( \VBundle P \) and a horizontal distribution \( \HBundle P \). Recall that \( \VBundle P \) is spanned by the Killing vector fields \( p \mapsto \xi \ldot p \) for \( \xi \in \LieA{g} \). Equivalently, a connection is given by a \( G \)-equivariant \( 1 \)-form \( A \in \DiffFormSpace^1(P, \LieA{g}) \).
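For orientation, let us record the simplest case (a standard fact; the formula below uses one common sign convention and is not taken from the discussion above): if \( P = M \times G \) is the trivial bundle, then every connection \( 1 \)-form can be written as
\begin{equation}
A_{(x, g)} = \AdAction_{g^{-1}} a_x + \theta_g,
\end{equation}
where \( a \in \DiffFormSpace^1(M, \LieA{g}) \) is the gauge potential familiar from physics and \( \theta \) denotes the left Maurer--Cartan form of \( G \), both pulled back to \( M \times G \) via the respective projections. In particular, the set of connections is an affine space modeled on \( \DiffFormSpace^1(M, \LieA{g}) \).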
A bosonic matter field is a section \( \varphi \) of the associated vector bundle \( F = P \times_G \FibreBundleModel{F} \), where the typical fiber \( \FibreBundleModel{F} \) carries a \( G \)-representation. Thus, the space of configurations of Yang--Mills--Higgs theory consists of pairs \( (A, \varphi) \). It is obviously the product of the infinite-dimensional affine space \( \ConnSpace \) of connections and the space of sections \( \SectionSpaceAbb{F} \) of \( F \). On \( \ConnSpace \times \SectionSpaceAbb{F} \) we have a left action of the group \( \GauGroup = \sSectionSpace(P \times_G G) \) of local gauge transformations, \begin{equation} \label{LocGTr} A \mapsto \AdAction_{\lambda} A - \dif \lambda \, \lambda^{-1}, \quad \varphi \mapsto \lambda \cdot \varphi, \end{equation} for \( \lambda \in \GauGroup \). Next, recall the notion of the covariant exterior derivative, which we denote by \( \dif_A \). Let \( \alpha \in \DiffFormSpace^k(M, F) \) and let \( \tilde{\alpha} \in \DiffFormSpace^k(P, \FibreBundleModel{F}) \) be its associated horizontal form\footnote{This one-to-one correspondence will be used throughout the text without further notice.}. Then, \begin{equation} \dif_A \tilde{\alpha} = \dif \tilde{\alpha} + A \wedgeldot \tilde{\alpha}, \end{equation} where \( \wedgeldot: \DiffFormSpace^r(P, \LieA{g}) \times \DiffFormSpace^k(P, \FibreBundleModel{F}) \to \DiffFormSpace^{r+k}(P, \FibreBundleModel{F}) \) is the natural operation obtained by combining the Lie algebra action \( \LieA{g} \times \FibreBundleModel{F} \to \FibreBundleModel{F}, (\xi, f) \mapsto \xi \ldot f \) with the wedge product operation. An important special case is provided by the curvature \( F_A = \dif_A A \) of the connection \( A \), which is a horizontal \( 2 \)-form of type \( \AdAction \) on \( P \) or, equivalently, a \( 2 \)-form on \( M \) with values in the adjoint bundle \( \AdBundle P = P \times_G \LieA{g} \). 
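The simplest example is electromagnetism (our illustration, with signs read off from the gauge action~\eqref{LocGTr}): for \( G = \mathrm{U}(1) \), the adjoint representation is trivial, so \( \AdBundle P \) is a trivial real line bundle and the operation \( \wedgeldot \) on \( \LieA{g} \)-valued forms vanishes. Hence a gauge transformation \( \lambda = e^{i \chi} \) acts on a connection by
\begin{equation}
A \mapsto A - i \dif \chi,
\end{equation}
and the curvature reduces to \( F_A = \dif A \), which is gauge invariant; the Bianchi identity \( \dif_A F_A = 0 \) becomes the homogeneous Maxwell equation \( \dif F_A = 0 \).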
Next, note that the Lie algebra $\GauAlgebra$ of $\GauGroup$ may be naturally identified with \( \sSectionSpace(\AdBundle P ) \). The infinitesimal action of $\xi \in \GauAlgebra$ on \( \ConnSpace(P) \) is given by \begin{equation}\label{eq:generalGauge:infinitisimalGaugeAction} \xi \ldot A = \adAction_\xi A - \dif \xi = - \dif_A \xi. \end{equation} Now we can formulate the variational principle for the Yang--Mills--Higgs system on \( \ConnSpace \times \SectionSpaceAbb{F} \). For that purpose we fix an \( \AdAction_G \)-invariant scalar product on \( \LieA{g} \) and a \( G \)-invariant scalar product on \( \FibreBundleModel{F} \). The Lagrangian for this model is given by the following top differential form on $M$: \begin{equation}\label{eq:yangMillsHiggs:action} \SectionSpaceAbb{L}_\textrm{YMH}(A, \varphi) = \frac{1}{2} \wedgeDual{F_A}{\hodgeStar F_A} + \frac{1}{2} \wedgeDual{\dif_A \varphi}{\hodgeStar \dif_A \varphi} - V(\varphi) \vol_\eta, \end{equation} where \( V: F \to \R \) denotes the Higgs potential induced from a smooth \( G \)-invariant function \( \FibreBundleModel{V}: \FibreBundleModel{F} \to \R \). In order to underline that the Hodge dual is defined in terms of a linear functional on the space of differential forms, we use the convention that the Hodge dual of a vector-valued differential form \( \alpha \in \DiffFormSpace^k(M, F) \) is the \emph{dual-valued}\footnotemark{} differential form \( \hodgeStar \alpha \in \DiffFormSpace^{4-k}(M, F^*) \). 
\footnotetext{Although this convention is a bit non-standard, the consistent use of dual-valued forms has the advantage that dual objects are clearly marked, which will be helpful later to identify them as points in the cotangent bundle (as opposed to elements of the tangent bundle).}% As we are using \( G \)-invariant scalar products on \( \LieA{g} \) and \( \FibreBundleModel{F} \), we have the equivariance property \begin{equation}\label{equivariance:hodgeStar} \hodgeStar \, (\AdAction_g \alpha) = \CoAdAction_g (\hodgeStar \alpha). \end{equation} Moreover, for \( \alpha \in \DiffFormSpace^k(M, F) \) and \( \beta \in \DiffFormSpace^{4-k}(M, F^*) \), we denote by \( \wedgeDual{\alpha}{\beta} \) the real-valued top-form that arises from combining the wedge product with the natural pairing \( \dualPairDot: F \times F^* \to \R \). By~\eqref{equivariance:hodgeStar}, the Lagrangian \( \SectionSpaceAbb{L}_\textrm{YMH} \) is gauge invariant. The Euler--Lagrange equations corresponding to the Lagrangian~\eqref{eq:yangMillsHiggs:action}, called the \emphDef{Yang--Mills--Higgs equations}, read as follows: \begin{subequations}\label{eq:yangMillsHiggs4d}\begin{align} \dif_A \hodgeStar F_A + \varphi \diamond \hodgeStar \dif_A \varphi = 0, \label{eq:yangMillsHiggs:4d:ym} \\ \dif_A \hodgeStar \dif_A \varphi + V' (\varphi) \vol_\eta = 0, \label{eq:yangMillsHiggs:4d:higgs} \end{align}\end{subequations} where the derivative \( V' (\varphi) \) of \( V \) at the point \( \varphi \) is viewed as a fiberwise linear functional on \( F \) and the diamond product\footnote{The diamond operator often occurs in the study of Lie--Poisson systems and this is where we borrowed the notation from. Note that in a purely algebraic setting, the diamond product boils down to a map \( \diamond: F \times F^* \to \LieA{g}^* \) dual to the Lie algebra action. Hence, it is a momentum map for the \( G \)-action on \( \CotBundle F \).
For example, if we consider the action of \( G = \SOGroup(3) \) on \( F = \R^3 \) and identify \( F^* \) with \( \R^3 \), then the diamond product becomes the classical cross product.} \begin{equation} \diamond: \DiffFormSpace^k(M, F) \times \DiffFormSpace^{\dim M-r-k}(M, F^*) \to \DiffFormSpace^{\dim M-r}(M, \CoAdBundle P) \end{equation} is defined by \begin{equation} \label{eq:yangMillsHiggs:defDiamond} \wedgeDual{\xi}{(\alpha \diamond \beta)} = \wedgeDual{(\xi \wedgeldot \alpha)}{\beta} \in \DiffFormSpace^{\dim M}(M) \end{equation} for all \( \xi \in \DiffFormSpace^r(M, \AdBundle P) \). The diamond product is equivariant with respect to gauge transformations in the sense that \begin{equation} (\lambda \cdot \alpha) \diamond (\lambda \cdot \beta) = \CoAdAction_\lambda (\alpha \diamond \beta) \end{equation} holds for all \( \lambda \in \GauGroup(P) \). Indeed, we have \begin{equation}\begin{split} \wedgeDual{\xi}{((\lambda \cdot \alpha) \diamond (\lambda \cdot \beta))} &= \wedgeDual{(\xi \wedgeldot (\lambda \cdot \alpha))}{(\lambda \cdot \beta)} \\ &= \wedgeDual{\lambda \cdot ((\AdAction_\lambda^{-1}\xi) \wedgeldot \alpha)}{(\lambda \cdot \beta)} \\ &= \wedgeDual{\AdAction_\lambda^{-1}\xi}{(\alpha \diamond \beta)} \\ &= \wedgeDual{\xi}{\CoAdAction_\lambda(\alpha \diamond \beta)} \, , \end{split}\end{equation} for all \( \xi \in \DiffFormSpace^r(M, \AdBundle P) \). In order to write the Yang--Mills--Higgs equations~\eqref{eq:yangMillsHiggs4d} as a dynamical system, we have to single out a time direction and to split the equations into time and space directions. This decomposition is standard in the physics literature and pretty straightforward in local coordinates. A derivation of the following identities using a geometric, coordinate-independent language can be found in \cref{sec:yangMillsHiggs:decomposition}. Let us assume that the spacetime \( (M, \eta) \) is globally hyperbolic and time-oriented.
Then, there exists a $3$-dimensional manifold $\Sigma$ and a diffeomorphism \( \iota: \R \times \Sigma \to M \) such that \begin{equation} \iota^* \eta = - \ell(t)^2 \dif t^2 + g(t), \end{equation} where \( \ell \) is a smooth time-dependent positive function on \( \Sigma \) and \( g \) is a time-dependent Riemannian metric on \( \Sigma \), see \parencite[Theorem~1.3.10]{BarGinouxEtAl2007}. The Hodge operator associated to \( g \) is denoted by \( \hodgeStar_g \). Moreover, let \( \diamond_\Sigma \) denote the diamond product relative to \( \Sigma \). For every \( t \in \R \), we denote by \( \iota_t: \Sigma \to M \) the induced embedding and by \( \Sigma_t \) the image of \( \Sigma \) under \( \iota_t \). The submanifold $\Sigma_t$ is a Cauchy hypersurface at $t$. In what follows, we assume $\Sigma$ to be compact and without boundary. This assumption amounts to requiring that the fields satisfy suitable boundary conditions at spatial infinity. Following \parencite[Section~6A]{GotayIsenbergMarsden2004}, a \emphDef{slicing} of \( P \) over \( \iota \) is a principal \( G \)-bundle \( \vec{\pi}: \vec{P} \to \Sigma \) and a principal bundle isomorphism \( \hat{\iota} \) fitting into the following diagram: \begin{equationcd} \R \times \vec{P} \to[r, "\hat{\iota}"] \to[d, "\id_\R \times \vec{\pi}", swap] & P \to[d, "\pi"] \\ \R \times \Sigma \to[r, "\iota"] & M. \end{equationcd} By \parencite[Corollary~4.9.7]{Husemoller1966}, such a slicing always exists for small times, which suffices for the study of the initial value problem. In order to simplify the presentation, let us assume that the slicing exists for all \( t \in \R \). Relative to the splitting \( \hat{\iota}: \R \times \vec{P} \to P \), the objects living on \( P \) decompose into time-dependent objects living on \( \vec{P} \) and objects normal to \( \vec{P} \).
For example, a function \( f: P \to \R \) on \( P \) yields a time-dependent function \( \vec{f}(t) = f \circ \hat{\iota}(t, \cdot) \) on \( \vec{P} \). In the sequel, we will usually suppress the diffeomorphisms \( \hat{\iota} \) and \( \iota \) in our notation. Accordingly, a vector field on \( M \) can be written as \( X = X^0 \, \difp_t + \vec{X} \), where \( X^0(t) \in \sFunctionSpace(\Sigma) \) and \( \vec{X}(t) \in \sSectionSpace(\TBundle \Sigma ) \) for every \( t \). A connection \( A \) in \( P \) decomposes as \begin{equation} A = A_0 \dif t + \vec{A} \end{equation} into \( A_0(t) \in \sSectionSpace(\AdBundle \vec{P}) \) and a time-dependent connection \( \vec{A}(t) \) in \( \vec{P} \). Moreover, the decomposition of the curvature takes the form \begin{equation} F_A = E \wedge \dif t + B, \end{equation} where, as usual, we have introduced the time-dependent color-electric field \( E \defeq \dif_{\vec{A}} \, A_0 - \dot{\vec{A}} \) and the color-magnetic field \( B \defeq F_{\vec{A}} \). A section \( \varphi \in \sSectionSpace(F) \) is the same as a time-dependent section of \( \vec{F} \) and its covariant differential reads \begin{equation} \dif_A \varphi = \left(\partial_t^{A_0}\varphi\right) \dif t + \dif_{\vec{A}} \varphi, \end{equation} where \( \partial_t^{A_0} \varphi \defeq \dot{\varphi} + A_0 \ldot \varphi \) is the covariant time-derivative. 
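In the abelian case, with \( \ell = 1 \) and trivial bundle (an illustrative specialization; the identification with the scalar potential of electrodynamics involves a sign convention), the above decomposition reproduces classical electromagnetism: for \( G = \mathrm{U}(1) \), the color-electric and color-magnetic fields become
\begin{equation}
E = \dif A_0 - \dot{\vec{A}}, \qquad B = \dif \vec{A},
\end{equation}
that is, the electric field \( 1 \)-form and the magnetic field \( 2 \)-form on \( \Sigma \), with \( A_0 \) playing the role of (minus) the electrostatic potential.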
According to \cref{prop:splitting:yangMillsHiggs}, the Yang--Mills--Higgs equations~\eqref{eq:yangMillsHiggs4d} take the following form: \begin{subequations}\label{eq:yangMillsHiggs3dEvol}\begin{align+} \dif_{\vec{A}} (\ell \hodgeStar_g B) - \partial_t^{A_0} (\ell^{-1} \hodgeStar_{g} E) &= - \ell \, \varphi \diamond_\Sigma (\hodgeStar_g \dif_{\vec{A}} \varphi), \label{eq:yangMillsHiggs3d:evol:ampere} \\ \dif_{\vec{A}} (\ell^{-1} \hodgeStar_{g} E) &= \ell^{-1} \, \varphi \diamond_\Sigma (\hodgeStar_g \partial_t^{A_0} \varphi), \label{eq:yangMillsHiggs3dConstraint} \\ - \partial_t^{A_0} (\ell^{-1} \hodgeStar_g \partial_t^{A_0} \varphi) + \dif_{\vec{A}} (\ell \hodgeStar_g \dif_{\vec{A}} \varphi) &= \ell \, V' (\varphi)\vol_g . \label{eq:yangMillsHiggs3d:evol:higgs} \end{align+}\end{subequations} Note that~\eqref{eq:yangMillsHiggs3d:evol:ampere} is the non-abelian counterpart of Ampère's law and that~\eqref{eq:yangMillsHiggs3dConstraint} is the Gauß constraint. Also note that \( A_0 \) is not a dynamical variable. \subsection{Formulation as a Clebsch--Lagrange system} \label{sec:yangMillls:asClebschLagrange} We will now show how the Yang--Mills--Higgs equations can be derived from the Clebsch--Lagrange variational principle. See \cref{table:gaugeTheory:comparisionWithGeneralTheory} below for a comparison of the general Clebsch--Lagrange theory and its concrete implementation in the Yang--Mills--Higgs case. After the \( (1 + 3) \)-splitting, the configuration space of the theory is \begin{equation} \SectionSpaceAbb{Q} = \set*{(\vec{A}, \varphi) \in \ConnSpace(\vec{P}) \times \sSectionSpace(\vec{F})}. \end{equation} Since \( \SectionSpaceAbb{Q} \) is an affine space, its tangent bundle is trivial with fiber \( \DiffFormSpace^1(\Sigma, \AdBundle \vec{P}) \times \sSectionSpace(\vec{F}) \).
We will denote points in \( \TBundle \SectionSpaceAbb{Q} \) by tuples \( (\vec{A}, \alpha, \varphi, \zeta) \) with \( \alpha \in \DiffFormSpace^1(\Sigma, \AdBundle \vec{P}) \) and \( \zeta \in \sSectionSpace(\vec{F}) \). A natural choice for the cotangent bundle $\CotBundle \SectionSpaceAbb{Q}$ is the trivial bundle over $\SectionSpaceAbb{Q}$ with fiber \begin{equation} \DiffFormSpace^2(\Sigma, \CoAdBundle \vec{P}) \times \DiffFormSpace^3(\Sigma, \vec{F}^*)\, . \end{equation} We denote elements of this fiber by pairs \( (D, \pi) \). The natural pairing with \( \TBundle \SectionSpaceAbb{Q} \) is given by integration over \( \Sigma \), \begin{equation} \dualPair{(D, \pi)}{(\alpha, \zeta)} = \int_\Sigma (\wedgeDual{D}{\alpha} + \wedgeDual{\pi}{\zeta}) \, . \end{equation} As in the finite-dimensional case, the cotangent bundle \( \CotBundle \SectionSpaceAbb{Q} \) carries a natural symplectic structure. Its canonical $1$-form $\theta$ is given by~\eqref{CanForm}. In terms of the global coordinates \( (\vec{A}, \varphi, D, \pi) \) on \( \CotBundle \SectionSpaceAbb{Q} \) it reads\footnote{ Equivalently, one can view \( \vec{A} \) as a global coordinate function \( \CotBundle \SectionSpaceAbb{Q} \to \DiffFormSpace^1(\Sigma, \AdBundle \vec{P}) \) and \( \diF \) as the exterior differential on $ \CotBundle \SectionSpaceAbb{Q} $. In this language, \( \diF \vec{A} \in \DiffFormSpace^1(\CotBundle \SectionSpaceAbb{Q}, \DiffFormSpace^1(\Sigma, \AdBundle \vec{P})) \) and then \begin{equation} \theta = \dualPair{D}{\diF \vec{A}} + \dualPair{\pi}{\diF \varphi}, \quad \Omega = \wedgeDual{\diF D}{\diF \vec{A}} + \wedgeDual{\diF \pi}{\diF \varphi} \end{equation} are the field theoretic counterparts of the formulae \( \theta = p_i \dif q^i \) and \( \Omega = \dif p_i \wedge \dif q^i \) in classical mechanics. 
} \begin{equation}\label{eq:cotangentBundleTautologicalOneForm} \theta_{\vec{A}, \varphi, D, \pi}(\diF \vec{A}, \diF \varphi, \diF D, \diF \pi) = \dualPair{D}{\diF \vec{A}} + \dualPair{\pi}{\diF \varphi}, \end{equation} where \( \diF \vec{A} \in \DiffFormSpace^1(\Sigma, \AdBundle \vec{P}) \), \( \diF \varphi \in \DiffFormSpace^0(\Sigma, \vec{F}) \), \( \diF D \in \DiffFormSpace^2(\Sigma, \CoAdBundle \vec{P}) \) and \( \diF \pi \in \DiffFormSpace^3(\Sigma, \vec{F}^*) \) are viewed as tangent vectors on \( \CotBundle \SectionSpaceAbb{Q} \). The symplectic form \( \Omega \) is given as the exterior differential of \( \theta \), that is, \begin{equation}\label{eq:yangMillsHiggs:symplecticForm}\begin{split} \Omega_{\vec{A}, \varphi, D, \pi}&\left((\diF \vec{A}_1, \diF \varphi_1, \diF D_1, \diF \pi_1), (\diF \vec{A}_2, \diF \varphi_2, \diF D_2, \diF \pi_2)\right) \\ &= \dualPair{\diF D_1}{\diF \vec{A}_2} - \dualPair{\diF D_2}{\diF \vec{A}_1} + \dualPair{\diF \pi_1}{\diF \varphi_2} - \dualPair{\diF \pi_2}{\diF \varphi_1}. \end{split}\end{equation} The group \( \GauGroup(\vec{P}) \) of local gauge transformations acts naturally on \( \SectionSpaceAbb{Q} \). Since \( A_0 \in \sSectionSpace(\AdBundle \vec{P}) \) can be viewed as an element of \( \GauAlgebra(\vec{P}) \), it is natural to take it as the \( \xi \)-variable of the general theory. By~\eqref{eq:generalGauge:infinitisimalGaugeAction}, the Killing vector field generated by \( A_0 \) at \( (\vec{A}, \varphi) \) is given by \begin{equation} A_0 \ldot (\vec{A}, \varphi) = (- \dif_{\vec{A}} A_0, A_0 \ldot \varphi). \end{equation} Hence, in this case, the effective velocities \( \dot q + \xi \ldot q \) are \begin{equation} \dot{\vec{A}} - \dif_{\vec{A}} A_0 = - E \quad\text{and}\quad \dot \varphi + A_0 \ldot \varphi = \partial_t^{A_0} \varphi.
\end{equation} The calculation~\eqref{eq:splitting:lagrangian} shows that after the \( (1 + 3) \)-decomposition the Lagrangian defined in~\eqref{eq:yangMillsHiggs:action} is of the form \( \SectionSpaceAbb{L}_\textrm{YMH} = \dif t \wedge \SectionSpaceAbb{L}_\Sigma \) with \begin{equation} \SectionSpaceAbb{L}_\Sigma = \frac{1}{2 \ell} \wedgeDual{E}{\hodgeStar_{g} E} - \frac{\ell}{2}\wedgeDual{B}{\hodgeStar_g B} + \frac{1}{2 \ell} \wedgeDual{\partial_t^{A_0} \varphi}{\hodgeStar_g \partial_t^{A_0} \varphi} - \frac{\ell}{2} \wedgeDual{\dif_{\vec{A}} \varphi}{\hodgeStar_g \dif_{\vec{A}} \varphi} - \ell \, V(\varphi) \vol_g . \end{equation} This expression shows that the Yang--Mills--Higgs action is of Clebsch--Lagrange form. To make this precise, we define the Clebsch--Lagrangian \( \SectionSpaceAbb{L}: \TBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \to \R \) in the coordinates on $\TBundle \SectionSpaceAbb{Q}$ introduced above by \begin{equation}\label{calL-YM}\begin{multlined}[c][.85\displaywidth] \SectionSpaceAbb{L}(\vec{A}, \alpha, \varphi, \zeta, A_0) = \int_\Sigma \bigg(\frac{1}{2 \ell} \wedgeDual{\alpha}{\hodgeStar_g \alpha} - \frac{\ell}{2}\wedgeDual{B}{\hodgeStar_g B} \\ + \frac{1}{2 \ell} \wedgeDual{\zeta}{\hodgeStar_g \zeta} - \frac{\ell}{2} \wedgeDual{\dif_{\vec{A}} \varphi}{\hodgeStar_g \dif_{\vec{A}} \varphi} - \ell \, V(\varphi) \vol_g \bigg). \end{multlined}\end{equation} Note that \( \SectionSpaceAbb{L} \) does not depend on the Lie algebra variable \( A_0 \). Then, the Yang--Mills--Higgs action defined by~\eqref{eq:yangMillsHiggs:action} takes the form: \begin{equation}\begin{split} \SectionSpaceAbb{S}[\vec{A}, \varphi, A_0] &= \int_0^T \int_\Sigma \SectionMapAbb{L}_\textrm{YMH}(A, \varphi) \\ &= \int_0^T \SectionSpaceAbb{L}(\vec{A}, -E, \varphi, \partial_t^{A_0} \varphi, A_0) \dif t \\ &= \int_0^T \SectionSpaceAbb{L}(\vec{A}, \dot{\vec{A}} - \dif_{\vec{A}} A_0, \varphi, \dot \varphi + A_0 \ldot \varphi, A_0) \dif t. 
\end{split}\end{equation} Comparing with~\eqref{eq:clebschLagrange:action}, we see that the variational principle associated to the Yang--Mills--Higgs action \( \SectionSpaceAbb{S} \) is a Clebsch--Lagrange principle with respect to the Clebsch--Lagrangian \( \SectionSpaceAbb{L} \). Since the Yang--Mills--Higgs equations arise from varying the action \( \SectionSpaceAbb{S} \), the general theory in the form of \cref{prop:clebschLagrange:clebschEulerLagrangeDirect} implies that the Yang--Mills--Higgs equations in their \( (1 + 3) \)-formulation~\eqref{eq:yangMillsHiggs3dEvol} are the Clebsch--Euler--Lagrange equations associated to \( \SectionSpaceAbb{L} \). Let us verify this directly. Since \( \SectionSpaceAbb{L} \) does not depend on \( A_0 \in \GauAlgebra(\vec{P}) \), for the remainder of this subsection we may view it as a function on \( \TBundle \SectionSpaceAbb{Q} \). First, we have to calculate \begin{equation} \left(\difpFrac{\SectionSpaceAbb{L}}{\vec {A}}, \difpFrac{\SectionSpaceAbb{L}}{\varphi} \right) : \TBundle \SectionSpaceAbb{Q} \to \R \end{equation} and to realize these functionals as elements of $\CotBundle \SectionSpaceAbb{Q}$. As \( \diF F_A = \dif_A \diF A \) and \( \diF (\dif_A \beta) = \diF A \wedgeldot \beta + \dif_A \diF \beta \) for a vector-valued \( k \)-form \( \beta \), we obtain \begin{align} \difpFrac{\SectionSpaceAbb{L}}{\vec{A}} &= - \dif_{\vec{A}} \left(\ell \, \hodgeStar_g B \right) - \ell \, \varphi \diamond_\Sigma \hodgeStar_g \dif_{\vec{A}} \varphi \label{eq:yangMills:difLA} \\ \intertext{and} \difpFrac{\SectionSpaceAbb{L}}{\varphi} &= \dif_{\vec{A}} (\ell \hodgeStar_g \dif_{\vec{A}} \varphi) - \ell \, V'(\varphi) \vol_g \,.
\label{eq:yangMills:difLPhi} \end{align} Next, the fiber derivative of \( \SectionSpaceAbb{L} \), viewed as a mapping $ \TBundle \SectionSpaceAbb{Q} \to \CotBundle \SectionSpaceAbb{Q} $, is given by \begin{equation} \left(\difpFrac{\SectionSpaceAbb{L}}{\alpha}, \difpFrac{\SectionSpaceAbb{L}}{\zeta} \right) (\vec{A}, \alpha, \varphi, \zeta) = \left(\ell^{-1} \hodgeStar_g \alpha, \ell^{-1} \hodgeStar_g \zeta \right). \label{eq:yangMills:difLFibre} \end{equation} We note, in particular, that \( \SectionSpaceAbb{L} \) is regular. Evaluating~\eqref{eq:yangMills:difLFibre} at \( (\vec A, \alpha = -E, \varphi, \zeta = \partial_t^{A_0} \varphi )\), we see that the Clebsch--Euler--Lagrange equations~\eqref{eq:clebschLagrange:clebschEulerLagrangeDirect} take the following form here: \begin{align} \label{CL-YM-1} \difFrac{}{t}(\ell^{-1} \hodgeStar_{g} E) + A_0 \ldot (\ell^{-1} \hodgeStar_{g} E) - \dif_{\vec{A}} (\ell \hodgeStar_g B) - \ell \, \varphi \diamond_\Sigma \hodgeStar_g \dif_{\vec{A}} \varphi = 0, \\ \label{CL-YM-2} \difFrac{}{t}(\ell^{-1} \hodgeStar_g \partial_t^{A_0} \varphi) + A_0 \ldot (\ell^{-1} \hodgeStar_g \partial_t^{A_0} \varphi) - \dif_{\vec{A}} (\ell \hodgeStar_g \dif_{\vec{A}} \varphi) + \ell \, V'(\varphi) \vol_g = 0 \,. \end{align} These equations clearly coincide with the Yang--Mills--Higgs equations~\eqref{eq:yangMillsHiggs3d:evol:ampere} and~\eqref{eq:yangMillsHiggs3d:evol:higgs}. Now, let us study the momentum map constraint~\eqref{eq:clebschLagrange:hamiltonian:constraint}. As usual, the action on the configuration space lifts to the cotangent bundle. In the present setting, the \( \GauGroup(\vec{P}) \)-action on the dual variables reads \begin{equation} \lambda \cdot (D, \pi) = (\CoAdAction_\lambda D, \lambda \cdot \pi). 
\end{equation} Thus, the fundamental vector field generated by \( \xi \in \GauAlgebra(\vec{P}) \) on \( \CotBundle \SectionSpaceAbb{Q} \) is given by \begin{equation} \xi \ldot (\vec{A}, D, \varphi, \pi) = (- \dif_{\vec{A}} \xi, \CoadAction_\xi D, \xi \ldot \varphi, \xi \ldot \pi). \end{equation} Contracting with the canonical $1$-form \( \theta \) yields \begin{equation}\begin{split} \theta (\xi \ldot (\vec{A}, D, \varphi, \pi)) &= \int_\Sigma \bigl( - \wedgeDual{D}{\dif_{\vec{A}} \xi} + \wedgeDual{\pi}{(\xi \ldot \varphi)} \bigr) \\ &= \int_\Sigma \bigl( \wedgeDual{\dif_{\vec{A}} D}{\xi} + \wedgeDual{(\varphi \diamond_\Sigma \pi)}{\xi} \bigr), \end{split}\end{equation} from which we read off the momentum map \begin{equation}\label{eq:yangMills:momentumMap} \SectionMapAbb{J}(\vec{A}, D, \varphi, \pi) = \dif_{\vec{A}} D + \varphi \diamond_\Sigma \pi, \end{equation} which takes values in \( \GauAlgebra(\vec{P})^* \isomorph \DiffFormSpace^3(\Sigma, \CoAdBundle \vec{P}) \). As $\SectionSpaceAbb{L}$ does not explicitly depend on $A_0$, the momentum map constraint~\eqref{eq:clebschLagrange:hamiltonian:constraint} takes the form \begin{equation} \label{Gauss-Constr} \SectionMapAbb{J}(\vec{A}, D, \varphi, \pi) = 0. \end{equation} By~\eqref{eq:yangMills:difLFibre} the canonically conjugate momenta are \begin{equation}\label{eq:yangMills:defConjugateMomenta} (D, \pi) = \left(- \ell^{-1} \hodgeStar_g E, \ell^{-1} \hodgeStar_g \partial_t^{A_0} \varphi \right) \, . \end{equation} Inserting this expression into~\eqref{Gauss-Constr} yields the Gauß constraint~\eqref{eq:yangMillsHiggs3dConstraint}. To summarize, we have shown the following. \begin{thm} The Yang--Mills--Higgs action is of Clebsch--Lagrange form.
Moreover, the Yang--Mills--Higgs equations~\eqref{eq:yangMillsHiggs3dEvol} are equivalent to the Clebsch--Lagrange equations on \( \TBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \) associated to the Clebsch--Lagrangian \( \SectionMapAbb{L} \) defined in~\eqref{calL-YM}. \end{thm} \begin{table}[tbp] \scriptsize \centering \begin{tabular}{l l l} \toprule & Clebsch--Lagrange & Yang--Mills--Higgs \\ \midrule Configuration variables & \( q \) & \( \vec{A}, \varphi \) \\ Symmetry variables & \( \xi \in \LieA{g} \)& \( A_0 \in \sSectionSpace(\AdBundle \vec{P}) \) \\ Effective velocities & \( \dot q + \xi \ldot q \) & \( -E, \difp_t^{A_0} \varphi \) \\ Conjugate momenta & \( p \) & \( D, \pi \) \\ \addlinespace \addlinespace \parbox[c]{3.6cm}{Clebsch--Euler--Lagrange \\ equation} & \( \difFrac{}{t} \left( \difpFrac{L}{\dot{q}} \right) + \xi \ldot \difpFrac{L}{\dot{q}} = \difpFrac{L}{q} \) & \( \begin{gathered} \dif_{\vec{A}} (\ell \hodgeStar_g B) - \partial_t^{A_0} (\ell^{-1} \hodgeStar_{g} E) = - \ell \, \varphi \diamond_\Sigma (\hodgeStar_g \dif_{\vec{A}} \varphi), \\ - \partial_t^{A_0} (\ell^{-1} \hodgeStar_g \partial_t^{A_0} \varphi) + \dif_{\vec{A}} (\ell \hodgeStar_g \dif_{\vec{A}} \varphi) = \ell \, V' (\varphi)\vol_g \end{gathered} \) \\ \addlinespace \addlinespace Momentum map constraint & \( J\left(q, \difpFrac{L}{\dot q}\right) = - \difpFrac{L}{\xi} \) & \( \dif_{\vec{A}} D + \varphi \diamond_\Sigma \pi = 0 \) \\ \addlinespace \addlinespace \parbox[c]{3.6cm}{\vspace*{-2ex}Clebsch--Hamilton \\ equations} & \( \begin{aligned} \difpFrac{H}{q}(q, p, \xi) &= - (\dot p + \xi \ldot p), \\ \difpFrac{H}{p}(q, p, \xi) &= \dot q + \xi \ldot q \end{aligned} \) & \( \begin{aligned} \dif_{\vec{A}} H - \partial_t^{A_0} D &= - \varphi \diamond_\Sigma \psi, \\ \partial_t^{A_0}\pi + \dif_{\vec{A}} \psi &= \ell \, V' (\varphi)\vol_g \end{aligned} \) \\ \bottomrule \end{tabular} \caption{Comparison of the general Clebsch--Lagrange theory and the Yang--Mills--Higgs 
system.} \label{table:gaugeTheory:comparisionWithGeneralTheory} \end{table} \subsection{Hamiltonian picture} It is straightforward to spell out the results of \cref{Ham-Pic-gen} for the model under consideration. Therefore, we limit ourselves to the main points. By definition, the Clebsch--Hamiltonian \( \SectionSpaceAbb{H}: \CotBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \to \R \) is the Legendre transform of \( \SectionSpaceAbb{L} \). That is, \begin{equation} \label{calH} \SectionSpaceAbb{H}(\vec{A}, D, \varphi, \pi, A_0) = \dualPair{D}{\alpha} + \dualPair{\pi}{\zeta} - \SectionSpaceAbb{L}(\vec{A}, \alpha, \varphi, \zeta, A_0), \end{equation} where \( \alpha \in \sSectionSpace(\AdBundle(\vec{P})) \) and \( \zeta \in \sSectionSpace(\vec{F}) \) are considered as functions of \( (\vec{A}, D, \varphi, \pi) \) via the condition (\cf,~\eqref{eq:yangMills:difLFibre}) \begin{equation} (D, \pi) = \left(\difpFrac{\SectionSpaceAbb{L}}{\alpha}, \difpFrac{\SectionSpaceAbb{L}}{\zeta} \right) (\vec{A}, \alpha, \varphi, \zeta) = \left(\ell^{-1} \hodgeStar_g \alpha, \ell^{-1} \hodgeStar_g \zeta \right). \end{equation} Since \( \SectionMapAbb{L} \) does not depend on \( A_0 \), the Clebsch--Hamiltonian \( \SectionMapAbb{H} \) is independent of \( A_0 \), too. 
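Before doing so, let us record the inversion of this relation, which is the only computational input needed. Since \( g \) is Riemannian and \( \Sigma \) is \( 3 \)-dimensional, the Hodge operator squares to the identity on forms of every degree, \( \hodgeStar_g \hodgeStar_g = \id \). Hence, the relation \( (D, \pi) = \left(\ell^{-1} \hodgeStar_g \alpha, \ell^{-1} \hodgeStar_g \zeta \right) \) is inverted by
\begin{equation}
(\alpha, \zeta) = \left(\ell \hodgeStar_g D, \ell \hodgeStar_g \pi \right).
\end{equation}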
By~\eqref{calL-YM}, we have \begin{equation}\label{eq:yangMills:hamiltonian}\begin{split} \SectionSpaceAbb{H}(\vec{A}, D, \varphi, \pi, A_0) &= \dualPair{D}{\ell \hodgeStar_g D} + \dualPair{\pi}{\ell \hodgeStar_g \pi} - \SectionSpaceAbb{L}(\vec{A}, \ell \hodgeStar_g D, \varphi, \ell \hodgeStar_g \pi, A_0), \\ & \!\begin{multlined}[b][.55\displaywidth] = \int_\Sigma \bigg( \ell \wedgeDual{D}{\hodgeStar_g D} + \ell \wedgeDual{\pi}{\hodgeStar_g \pi} - \frac{\ell}{2} \wedgeDual{D}{\hodgeStar_g D} + \frac{\ell}{2}\wedgeDual{B}{\hodgeStar_g B} \\ - \frac{\ell}{2} \wedgeDual{\pi}{\hodgeStar_g \pi} + \frac{\ell}{2} \wedgeDual{\dif_{\vec{A}} \varphi}{\hodgeStar_g \dif_{\vec{A}} \varphi} + \ell \, V(\varphi) \vol_g \bigg) \end{multlined} \\ & \!\begin{multlined}[b][.71\displaywidth] = \int_\Sigma \frac{\ell}{2} \bigg( \wedgeDual{D}{\hodgeStar_g D} + \wedgeDual{B}{\hodgeStar_g B} \\ + \wedgeDual{\pi}{\hodgeStar_g \pi} + \wedgeDual{\dif_{\vec{A}} \varphi}{\hodgeStar_g \dif_{\vec{A}} \varphi} + 2 \, V(\varphi) \vol_g \bigg) \, . \end{multlined} \end{split}\end{equation} In summary, the general theory from \cref{Ham-Pic-gen} yields the following. \begin{thm} The Clebsch--Hamilton equations for \( \SectionMapAbb{H} \) are given by the following system of equations on \( \CotBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \): \begin{equation}\label{eq:yangMills:asClebschHamilton}\begin{split} \partial_t^{A_0} D &= - \dif_{\vec{A}} (\ell \hodgeStar_g B) - \ell \varphi \diamond \hodgeStar_g \dif_{\vec{A}} \varphi, \\ \partial_t \vec{A} &= \dif_{\vec{A}} A_0 + \ell \hodgeStar_g D, \\ \partial_t^{A_0}\pi &= \dif_{\vec{A}} (\ell \hodgeStar_g \dif_{\vec{A}} \varphi) - \ell \, V'(\varphi) \vol_g, \\ \partial_t^{A_0}\varphi &= \ell \hodgeStar_g \pi, \\ \dif_{\vec{A}} D &+ \varphi \diamond_\Sigma \pi = 0 \, . \end{split}\end{equation} These equations are equivalent to the Yang--Mills--Higgs equations~\eqref{eq:yangMillsHiggs3dEvol}. 
\end{thm} \begin{proof} Since the Lagrangian \( \SectionMapAbb{L} \) is regular, the claim follows directly from the general equivalence of the Clebsch--Lagrange and Clebsch--Hamilton equations under the Legendre transformation, see \cref{prop:clebschLagrange:legendre}. For completeness, let us also give a direct proof, that is, let us write down the Clebsch--Hamilton equations~\eqref{eq:clebschLagrange:hamiltonian} for the case under consideration. From~\eqref{eq:yangMills:hamiltonian}, we obtain \begin{equation} \difpFrac{\SectionMapAbb{H}}{\vec{A}} = \dif_{\vec{A}} (\ell \hodgeStar_g B) + \ell \varphi \diamond \hodgeStar_g \dif_{\vec{A}} \varphi \end{equation} and \begin{equation} \difpFrac{\SectionMapAbb{H}}{\varphi} = - \dif_{\vec{A}} (\ell \hodgeStar_g \dif_{\vec{A}} \varphi) + \ell \, V'(\varphi) \vol_g. \end{equation} Moreover, we obviously have \begin{equation} \difpFrac{\SectionMapAbb{H}}{D} = \ell \hodgeStar_g D, \qquad \difpFrac{\SectionMapAbb{H}}{\pi} = \ell \hodgeStar_g \pi, \qquad \difpFrac{\SectionMapAbb{H}}{A_0} = 0. \end{equation} As we have already seen, the constraint~\eqref{constr-H-ext} reads \begin{equation} \dif_{\vec{A}} D + \varphi \diamond_\Sigma \pi = 0 \end{equation} and corresponds to the Gauß constraint~\eqref{Gauss-Constr}. Thus the Clebsch--Hamilton equations~\eqref{eq:clebschLagrange:hamiltonian} yield~\eqref{eq:yangMills:asClebschHamilton}. Using expression~\eqref{eq:yangMills:defConjugateMomenta} for the canonical momenta, it is straightforward to see that~\eqref{eq:yangMills:asClebschHamilton} indeed recovers the Yang--Mills--Higgs equations~\eqref{eq:yangMillsHiggs3dEvol}. \end{proof} \begin{remark} Note that the metric \( g \) on \( \Sigma \) in general depends on time. Accordingly, the Hamiltonian is explicitly time-dependent and so it is not a constant of motion for curved spacetimes. 
\end{remark} \subsection{Constraints and reduction by stages} Recall that the Clebsch--Hamilton equations can also be obtained by passing through an extended phase space and implementing the constraints using the Dirac--Bergmann algorithm in the formulation of \parencite{GotayNesterHinds1978}. Here, we apply the general theory of \cref{Ham-Pic-gen} to the Yang--Mills--Higgs case. By definition, the extended configuration space is \( \SectionSpaceAbb{Q}_\ext \defeq \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \). Thus, the natural extended phase space induced by Hodge duality is \begin{equation} \CotBundle \SectionSpaceAbb{Q}_\ext \defeq \CotBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \times \DiffFormSpace^3(\Sigma, \CoAdBundle \vec{P}). \end{equation} We will denote points in \( \CotBundle \SectionSpaceAbb{Q}_\ext \) by tuples \( (\vec{A}, D, \varphi, \pi, A_0, \nu) \). The Clebsch--Lagrangian \( \SectionSpaceAbb{L}: \TBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \to \R \) defined in~\eqref{calL-YM} gives rise to a degenerate Lagrangian $\SectionSpaceAbb{L}_\ext$ via~\eqref{L-ext}, and we can thus perform the Dirac--Bergmann constraint analysis as in \cref{Ham-Pic-gen}. As in the general theory, the primary constraint is \( \nu = 0 \). Let \( \SectionSpaceAbb{M}_1 \isomorph \CotBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \) be the subspace cut out by this constraint. Let $ \SectionSpaceAbb{H}_\ext: \SectionSpaceAbb{M}_1 \to \R $ be the Hamiltonian corresponding to $\SectionSpaceAbb{L}_\ext$. Using~\eqref{eq:compositeHamiltonian} and~\eqref{eq:yangMills:momentumMap}, from~\eqref{H-ext} we read off \begin{equation} \label{calHext} \SectionSpaceAbb{H}_\ext (\vec{A}, D, \varphi, \pi, A_0) = \SectionSpaceAbb{H}(\vec{A}, D, \varphi, \pi, A_0) - \dualPair{\dif_{\vec{A}} D + \varphi \diamond_\Sigma \pi}{A_0}. 
\end{equation} The next step in the constraint analysis leads to the constraint~\eqref{constr-H-ext} and, thus, to the system of Hamilton equations given by~\eqref{eq:clebschLagrange:hamiltonianExt}. We have already seen that the constraint~\eqref{constr-H-ext} coincides with the Gauß constraint. As $\SectionSpaceAbb{H}$ is gauge invariant and does not explicitly depend on $A_0$, \cref{prop:G-inv-H} implies that the Gauß constraint is preserved in time. Consequently, the constraint analysis terminates at that stage. The Hamiltonian \( \SectionMapAbb{H}_\ext \) transforms under local gauge transformations as follows: \begin{equation}\begin{split} \SectionMapAbb{H}_\ext&(\AdAction_\lambda \vec{A} + \lambda \dif \lambda^{-1}, \CoAdAction_\lambda D, \lambda \cdot \varphi, \lambda \cdot \pi, \AdAction_\lambda A_0 - \xi) \\ &= \SectionMapAbb{H}_\ext(\vec{A}, D, \varphi, \pi, A_0) + \int_\Sigma \dualPair{(\dif_{\vec{A}} D + \varphi \diamond_\Sigma \pi)}{\AdAction_{\lambda}^{-1} \xi}. \end{split}\end{equation} In particular, \( \SectionMapAbb{H}_\ext \) is not gauge invariant unless the Gauß constraint is imposed. Using the terminology common in physics, one could say that the extended Hamiltonian is invariant \enquote{on-shell}. According to the discussion of the general reduction theory in \cref{sec:clebschLagrange:reductionStages}, the action of \( \TBundle \GauGroup(\vec{P}) \) on \( \CotBundle \SectionSpaceAbb{Q}_\ext \) plays an important role. We will now show that, in the present context, this action is derived from the action of \( \GauGroup \) on \( \ConnSpace \times \SectionSpaceAbb{F} \). 
Recall from~\eqref{LocGTr} the action of the group of local gauge transformations \( \GauGroup \) on the space of connections \( \ConnSpace \), \begin{equation} \lambda \cdot A = \AdAction_\lambda A - \difLog^R \lambda, \end{equation} where \( \lambda \) is a \( G \)-equivariant map \( P \to G \) and \( \difLog^R \lambda \defeq \dif \lambda \, \lambda^{-1} \in \DiffFormSpace^1(P, \LieA{g}) \) denotes the right logarithmic derivative. Via the $(1+3)$-decomposition, we may equivalently think of a gauge transformation \( \lambda \in \GauGroup \) as a time-dependent gauge transformation \( \lambda(t) \) in \( \vec{P} \). Evaluating $\lambda \cdot A $ on a vector field $Y$ on $P$, decomposed as \( Y = Y^0 \difp_t + \vec{Y} \), we obtain \begin{equation}\begin{split} (\lambda \cdot A) (Y^0 \difp_t + \vec{Y}) &= Y^0 \AdAction_\lambda A_0 - Y^0 \difLog^R_t \lambda + \AdAction_\lambda \vec{A}(\vec{Y}) - \difLog^R \lambda (\vec{Y}) \\ &= Y^0 (\AdAction_\lambda A_0 - \difLog^R_t \lambda) + (\lambda \cdot \vec{A})(\vec{Y}), \end{split}\end{equation} where \( \difLog^R_t \lambda \in \sSectionSpace(\AdBundle \vec{P}) \) denotes the right logarithmic derivative of the path \( t \mapsto \lambda(t) \in \GauGroup(\vec{P}) \). That is, \begin{equation} \difLog^R_t \lambda = \difLog^R \lambda (\difp_t) = \tangent \lambda (\difp_t) \ldot \lambda^{-1} \equiv \dot{\lambda} \, \lambda^{-1} \in \GauAlgebra(\vec{P}). \end{equation} Thus, the action of gauge transformations on connections on \( P \) decomposes into the action \( A_0 \mapsto \AdAction_\lambda A_0 - \difLog^R_t \lambda \) and the natural action on the space $\ConnSpace (\vec P)$ of connections on \( \vec{P} \). The crucial observation is that the action only depends on \( \lambda(t) \in \GauGroup(\vec{P}) \) and on the first derivative \( \difLog^R_t \lambda \). 
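As a simple illustration, consider an exponential family \( \lambda(t) = \exp(t \xi) \) with a fixed \( \xi \in \GauAlgebra(\vec{P}) \). Then
\begin{equation}
\difLog^R_t \lambda = \dot{\lambda} \, \lambda^{-1} = \xi,
\end{equation}
so that \( A_0 \mapsto \AdAction_{\lambda(t)} A_0 - \xi \). In particular, the transformations preserving the temporal gauge \( A_0 = 0 \) are precisely those with \( \difLog^R_t \lambda = 0 \), that is, the time-independent gauge transformations.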
In other words, the action of the group of time-dependent gauge transformations on the pair \( (A_0, \vec{A}) \) factors through the following action of the tangent group \( \TBundle \GauGroup(\vec{P}) \): \begin{equation}\begin{split} (\xi, \lambda) \cdot A_0 &= \AdAction_\lambda A_0 - \xi, \\ (\xi, \lambda) \cdot \vec{A} &= \AdAction_\lambda \vec{A} - \difLog^R \lambda, \end{split}\end{equation} where \( (\xi, \lambda) \in \GauAlgebra(\vec{P}) \times \GauGroup(\vec{P}) \) is viewed as the element of \( \TBundle \GauGroup(\vec{P}) \) under the right trivialization \( (\xi, \lambda) \mapsto \xi \ldot \lambda \). The action of \( \GauGroup \) on the matter field \( \varphi \in \sSectionSpace(F) \) is considerably simpler, because it does not involve derivatives in the time direction and thus comes down to an action of \( \GauGroup(\vec{P}) \) on \( \sSectionSpace(\vec{F}) \) for every moment of time. In summary, we get a natural action of \( \TBundle \GauGroup(\vec{P}) \) on the extended configuration space of the theory. Moreover, \( \TBundle \GauGroup(\vec{P}) = \GauAlgebra(\vec{P}) \rSemiProduct \GauGroup(\vec{P}) \). As \( \SectionSpaceAbb{H} \) is \( \GauGroup(\vec{P}) \)-invariant and does not explicitly depend on \( A_0 \), \cref{prop:clebschLagrange:reductionStages} holds and we have the following commutative diagram of symplectic reductions: \begin{equation}\begin{tikzcd}[column sep=5.9em, row sep=0.02em] \CotBundle \SectionSpaceAbb{Q}_\ext \ar[r, twoheadrightarrow, "{\sslash \GauAlgebra(\vec{P})}"] \ar[rr, twoheadrightarrow, "{\sslash \TBundle \GauGroup(\vec{P})}", swap, bend right] & \CotBundle \SectionSpaceAbb{Q} \ar[r,twoheadrightarrow, "{\sslash \GauGroup(\vec{P})}"] & \check{\SectionSpaceAbb{M}}, \end{tikzcd}\end{equation} where \( \check{\SectionSpaceAbb{M}} = \SectionMapAbb{J}^{-1}(0) \slash \GauGroup(\vec{P}) \) is the reduced phase space of the theory. 
In the first reduction step, we pass from the variables \( (\vec{A}, D, \varphi, \pi, A_0, \nu) \) to the variables \( (\vec{A}, D, \varphi, \pi) \). In this language, the temporal gauge \( A_0 = 0 \) often used in physics acquires the geometric interpretation of a section in \( \CotBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \to \CotBundle \SectionSpaceAbb{Q} \). Moreover, \cref{prop:clebschLagrange:equivalenceHamiltonianReductions} immediately gives the following. \begin{thm} The following systems of equations are equivalent: \begin{thmenumerate} \item The Yang--Mills--Higgs equations~\eqref{eq:yangMillsHiggs4d}. \item The Clebsch--Hamilton equations on \( \CotBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \) with respect to \( \SectionMapAbb{H} \). \item The constraint Hamilton equations on \( \CotBundle \SectionSpaceAbb{Q}_\ext \) with respect to the Hamiltonian \( \SectionMapAbb{H}_\ext \). \item The Hamilton equations on \( \check{\SectionSpaceAbb{M}} \) with respect to the reduced Hamiltonian \( \check{\SectionMapAbb{H}}: \check{\SectionSpaceAbb{M}} \to \R \) defined by \begin{equation} \pi^* \check{\SectionMapAbb{H}} = \restr{\SectionMapAbb{H}}{\SectionMapAbb{J}^{-1}(0)}, \end{equation} where \( \pi: \SectionMapAbb{J}^{-1}(0) \to \check{\SectionSpaceAbb{M}} \) is the natural projection. \qedhere \end{thmenumerate} \end{thm} Since the Clebsch--Hamiltonian \( \SectionMapAbb{H}: \CotBundle \SectionSpaceAbb{Q} \times \GauAlgebra(\vec{P}) \to \R \) is \( A_0 \)-independent, it may be viewed as an ordinary Hamiltonian on \( \CotBundle \SectionSpaceAbb{Q} \). We emphasize that the ordinary Hamilton equations on \( \CotBundle \SectionSpaceAbb{Q} \) with respect to \( \SectionMapAbb{H} \) are \emph{not} equivalent to the Yang--Mills--Higgs equations unless the Gauß constraint is also imposed. The geometry of the reduced phase space \( \check{\SectionSpaceAbb{M}} \) will be studied in \parencite{DiezRudolphReduction}. 
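For orientation, let us spell out the simplest special case: the abelian theory \( G = \mathrm{U}(1) \) with the Higgs field discarded. Then the adjoint and coadjoint representations are trivial, the covariant differential \( \dif_{\vec{A}} \) reduces to the ordinary differential, and the system~\eqref{eq:yangMills:asClebschHamilton} becomes
\begin{equation}
\partial_t D = - \dif (\ell \hodgeStar_g B), \qquad \partial_t \vec{A} = \dif A_0 + \ell \hodgeStar_g D, \qquad \dif D = 0.
\end{equation}
Upon inserting \( D = - \ell^{-1} \hodgeStar_g E \), these are Maxwell's equations in Hamiltonian form, and the momentum map constraint \( \dif D = 0 \) is the source-free Gauß law of electrodynamics.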
\section{General relativity} We will now consider Einstein's equation of general relativity and show how it fits into the general Clebsch--Lagrange variational framework. Let \( M \) be a \( 4 \)-dimensional oriented manifold. We are interested in solutions of Einstein's equation \begin{equation} \Ric_\eta - \frac{1}{2} \RicScalar_\eta \, \eta = 8 \pi T, \end{equation} where \( \Ric_\eta \) and \( \RicScalar_\eta \) denote, respectively, the Ricci curvature and the Ricci scalar curvature of the Lorentzian metric \( \eta \) on \( M \) to be determined. For simplicity, we will restrict attention to the vacuum setting, for which the energy-momentum tensor \( T \) vanishes and Einstein's equation is equivalent to \( \Ric_\eta = 0 \). Equivalently, we are looking for extrema of the Einstein--Hilbert action \begin{equation}\label{eq:generalRelativity:einsteinHilbert4D} \SectionSpaceAbb{S}[\eta] = \frac{1}{2} \int_M \RicScalar_\eta \, \vol_\eta, \end{equation} defined on the space of Lorentzian metrics on \( M \) with signature \( (- + +\, +) \). In order to formulate~\eqref{eq:generalRelativity:einsteinHilbert4D} as a Clebsch--Lagrange variational problem, we proceed similarly to the study of the Yang--Mills--Higgs equations: \begin{enumerate} \item Using a splitting of spacetime, formulate Einstein's equation as a Cauchy problem in the variables \( (S, \ell, g) \), where \( S \) is the shift vector, \( \ell \) is the lapse function and \( g \) is the spatial metric. \item Realize \( S \) and \( \ell \) as elements of the Lie algebra of the diffeomorphism group of \( M \) and determine their action on the space of spatial metrics. \item Calculate the Lagrangian in the \( (1+3) \)-splitting and show that it is of the Clebsch--Lagrange form. \item Determine the momentum map constraint and compare it to the diffeomorphism and Hamiltonian constraint. \item Pass to the Hamiltonian picture using the Clebsch--Legendre transformation. 
\end{enumerate} First, we formulate Einstein's equation as a Cauchy problem, see \eg \parencite[Section~10.2]{Wald1984}. For this purpose, choose a slicing of \( M \), that is, a diffeomorphism \( \iota: \R \times \Sigma \to M \), where \( \Sigma \) is a compact manifold that will play the role of a Cauchy hypersurface. For simplicity, we assume that \( \Sigma \) has no boundary\footnote{We refer to \parencite[Chapter~20]{Blau2018} for a detailed discussion of boundary terms within the ADM formalism and to \parencite{Kijowski1997} for a new canonical description of the gravitational field dynamics in a finite volume with boundary. Clearly, our approach can be adapted to include boundary terms as well.}. For every \( t \in \R \), let \( \iota_t: \Sigma \to M \) denote the smooth curve of embeddings associated to \( \iota \). Assume that these embeddings are spacelike. Let \( \Sigma_t = \iota_t (\Sigma) \) be the corresponding submanifold of \( M \) and denote its unit\footnote{\( \eta(\nu, \nu) = -1 \).} normal vector field by \( \nu \). Since \( \iota \) is a diffeomorphism, we have the following splitting of the tangent space \begin{equation} \TBundle_{\iota(t, x)} M = \R \, \nu_{\iota(t, x)} \oplus \tangent_{(t,x)} \iota (\TBundle_x \Sigma) \end{equation} for every \( t \in \R \) and \( x \in \Sigma \). Accordingly, every vector field on \( M \) decomposes into parts normal and tangent to \( \Sigma_t \). In particular, for the time vector field, we obtain \begin{equation} \difpFracAt{}{t}{t} \iota_t = \tangent \iota (\difp_t) = \ell \, \nu + \tangent \iota_t (S), \end{equation} where \( \ell(t) \in \sFunctionSpace(\Sigma) \) is the \emphDef{lapse function} and \( S(t) \in \VectorFieldSpace(\Sigma) \) is the \emphDef{shift vector field}. Moreover, \( \eta \) induces by pull-back a family \( g(t) = \iota^*_t \eta \) of Riemannian metrics on \( \Sigma \). 
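The simplest illustration of these data is a product spacetime. If the slicing is such that
\begin{equation}
\iota^* \eta = - \dif t^2 + g_0
\end{equation}
for a fixed Riemannian metric \( g_0 \) on \( \Sigma \), then the unit normal is \( \nu = \tangent \iota (\difp_t) \), and comparing with the decomposition of the time vector field yields \( \ell = 1 \), \( S = 0 \) and \( g(t) = g_0 \) for all \( t \).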
In terms of these data, the pull-back of \( \eta \) to \( \R \times \Sigma \) is given by \begin{equation} \iota^* \eta = \Matrix{- \ell^2 + g (S, S) & g(S, \cdot) \\ g(S, \cdot) & g}. \end{equation} In particular, the knowledge of the data \( (S, \ell, g) \) is enough to reconstruct \( \eta \) (in a neighborhood of \( \Sigma_0 \)). \subsection{Formulation as a Clebsch--Lagrange system} In order to make contact with the general Clebsch--Lagrange theory, we need to identify the Lie algebra-valued fields and the configuration space on which the Lie group acts. Since the lapse function and the shift vector field are known to be related to the diffeomorphism invariance of the theory, they are natural candidates for the Lie algebra-valued fields, leaving the metric \( g \) as the configuration variable. To make this idea precise, let \( \SectionSpaceAbb{G} \) be the Lie group consisting of diffeomorphisms \( \phi \) of \( \R \times \Sigma \) that are of the form \( \phi(t, x) = (t + \bar{\phi}(x), \varphi(x)) \), where \( \bar{\phi} \) is a real-valued function on \( \Sigma \) and \( \varphi \) is a diffeomorphism of \( \Sigma \). Using the slicing \( \iota \), we can view \( \SectionSpaceAbb{G} \) as a subgroup of the group \( \DiffGroup(M) \) of diffeomorphisms of \( M \). If \( \R \times \Sigma \to \Sigma \) is viewed as a principal \( \R \)-bundle, then \( \SectionSpaceAbb{G} \) is identified with the group of principal bundle automorphisms. Clearly, \( \SectionSpaceAbb{G} \) has the following semidirect product structure: \begin{equation} \SectionSpaceAbb{G} = \DiffGroup(\Sigma) \lSemiProduct \sFunctionSpace(\Sigma), \end{equation} where the group \( \DiffGroup(\Sigma) \) of diffeomorphisms of \( \Sigma \) acts on \( \sFunctionSpace(\Sigma) \) by pull-back. Thus, the Lie algebra \( \mathrm{L}\SectionSpaceAbb{G} \) of \( \SectionSpaceAbb{G} \) is the semidirect product of \( \VectorFieldSpace(\Sigma) \) and \( \sFunctionSpace(\Sigma) \). 
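The semidirect product structure can be read off directly from the composition law: for \( \phi_i(t, x) = (t + \bar{\phi}_i(x), \varphi_i(x)) \) with \( i = 1, 2 \), we find
\begin{equation}
(\phi_1 \circ \phi_2)(t, x) = \bigl(t + \bar{\phi}_2(x) + \bar{\phi}_1(\varphi_2(x)), \varphi_1(\varphi_2(x))\bigr),
\end{equation}
that is, the multiplication reads \( (\varphi_1, \bar{\phi}_1) \cdot (\varphi_2, \bar{\phi}_2) = (\varphi_1 \circ \varphi_2, \bar{\phi}_2 + \varphi_2^* \bar{\phi}_1) \), with \( \DiffGroup(\Sigma) \) acting on \( \sFunctionSpace(\Sigma) \) by pull-back.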
Hence, \( (S, \ell) \) are naturally elements of \( \mathrm{L}\SectionSpaceAbb{G} \). Correspondingly, the configuration space \( \SectionSpaceAbb{Q} \) of the theory is the space \( \MetricSpace(\Sigma) \) of Riemannian metrics on \( \Sigma \). There is a natural left action of \( \SectionSpaceAbb{G} \) on \( \MetricSpace(\Sigma) \) through the projection onto \( \DiffGroup(\Sigma) \): \begin{equation} (\varphi, \bar{\phi}) \cdot g \defeq (\varphi^{-1})^* g. \end{equation} Accordingly, \( (X, f) \in \mathrm{L}\SectionSpaceAbb{G} \) acts on \( g \in \MetricSpace(\Sigma) \) as \begin{equation} \label{eq:generalRelativity:actionLieAlgebra} (X, f) \ldot g = - \difLie_X g. \end{equation} Recall (\eg, from \parencite[Equation~232]{Giulini2015}) that the time derivative of \( g \) is related to the extrinsic curvature \( k(t) \in \SymTensorFieldSpace^2(\Sigma) \) by \begin{equation} \dot{g} \equiv \difFracAt{}{t}{t} g(t) = \difLie_S g + 2 \ell k, \end{equation} where \( \difLie_S \) is the Lie derivative along \( S \). Thus, by~\eqref{eq:generalRelativity:actionLieAlgebra}, the effective velocity \( \dot{q} + \xi \ldot q \) of the general Clebsch--Lagrange theory is given by \begin{equation} \label{eq:generalRelativity:effectiveVelocity} \dot{g} + (S, \ell) \ldot g = \difLie_S g + 2 \ell k - \difLie_S g = 2 \ell k. \end{equation} Let us recall how the Einstein--Hilbert action looks in terms of the variables \( (S, \ell, g) \), see, \eg, \parencites[Appendix~E.2]{Wald1984}[Sections~5 and~6]{Giulini2015}. Using Gauß' formula, the Ricci scalar curvature \( \RicScalar_\eta \) of \( \eta \) can be written in terms of the Ricci scalar curvature \( \RicScalar_g \) of \( g(t) \) and the second fundamental form \( k(t) \in \SymTensorFieldSpace^2(\Sigma) \) of the embedding \( \iota_t \): \begin{equation} \label{eq:generalRelativity:ricciScalar} \RicScalar_\eta = \RicScalar_g - 2 \Ric_\eta (\nu, \nu) - \norm{k}^2_g + (\tr_g k)^2. 
\end{equation} Moreover, we have the Ricci equation \begin{equation} \label{eq:generalRelativity:ricciEq} \Ric_\eta(\nu, \nu) = (\tr_g k)^2 - \norm{k}^2_g + \divergence_g v, \end{equation} where \( v(t) \in \VectorFieldSpace(\Sigma) \) is a certain time-dependent vector field. We assume that \( M \) is oriented and time-oriented in such a way that \( \iota^* \vol_\eta = \ell \dif t \wedge \vol_g \). Then, inserting~\eqref{eq:generalRelativity:ricciEq} into~\eqref{eq:generalRelativity:ricciScalar} and using the fact that \( \Sigma \) has no boundary, the Einstein--Hilbert action takes the following form relative to the splitting \( \iota \): \begin{equation}\label{eq:generalRelativity:einsteinHilbert3D} \SectionSpaceAbb{S}[g, S, \ell] = \int_0^T \dif t \, \int_\Sigma \ell \left(\RicScalar_{g} + \norm{k}^2_{g} - (\tr_{g} k)^2\right) \, \vol_{g}. \end{equation} The space of metrics on \( \Sigma \) is an open cone in the space \( \SymTensorFieldSpace^2(\Sigma) \) of symmetric \( 2 \)-tensors. Thus, the tangent bundle \( \TBundle \MetricSpace(\Sigma) \) may be identified with \( \MetricSpace(\Sigma) \times \SymTensorFieldSpace^2(\Sigma) \). We will denote elements of \( \TBundle \MetricSpace(\Sigma) \) by pairs \( (g, h) \) with \( g \in \MetricSpace(\Sigma) \) and \( h \in \SymTensorFieldSpace^2(\Sigma) \). In terms of these variables, define the Clebsch--Lagrangian \( \SectionSpaceAbb{L}: \TBundle \MetricSpace(\Sigma) \times \mathrm{L} \SectionSpaceAbb{G} \to \R \) by \begin{equation} \SectionSpaceAbb{L}(g, h, S, \ell) = \int_\Sigma \left( \ell \, \RicScalar_{g} + \frac{1}{4 \ell} \norm{h}^2_{g} - \frac{1}{4 \ell} (\tr_{g} h)^2\right) \, \vol_{g}, \end{equation} which is the counterpart of \( L(q, \dot q, \xi) \) in the general theory. 
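Note that substituting the effective velocity \( h = 2 \ell k \) from~\eqref{eq:generalRelativity:effectiveVelocity} into \( \SectionSpaceAbb{L} \) gives
\begin{equation}
\frac{1}{4 \ell} \norm{2 \ell k}^2_{g} = \ell \norm{k}^2_{g}, \qquad \frac{1}{4 \ell} \bigl(\tr_{g} (2 \ell k)\bigr)^2 = \ell \, (\tr_{g} k)^2,
\end{equation}
so that \( \SectionSpaceAbb{L}(g, 2 \ell k, S, \ell) \) reproduces the integrand of the Einstein--Hilbert action~\eqref{eq:generalRelativity:einsteinHilbert3D}.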
Then, using~\eqref{eq:generalRelativity:effectiveVelocity}, the Einstein--Hilbert action~\eqref{eq:generalRelativity:einsteinHilbert3D} reads as follows: \begin{equation} \label{eq:generalRelativity:einsteinHilbert3DClebschForm} \SectionSpaceAbb{S}[g, S, \ell] = \int_0^T \dif t \, \SectionSpaceAbb{L}(g, 2 \ell k, S, \ell) = \int_0^T \dif t \, \SectionSpaceAbb{L}(g, \dot{g} + (S, \ell) \ldot g, S, \ell). \end{equation} Hence, we see that the Einstein--Hilbert action is of the general Clebsch--Lagrange form~\eqref{eq:clebschLagrange:action} with Clebsch--Lagrangian \( \SectionSpaceAbb{L} \). We emphasize that, in sharp contrast to the Yang--Mills case, the Clebsch--Lagrangian of general relativity depends on the Lie algebra variable \( \xi = (S, \ell) \); more precisely, it depends on \( \ell \) but not on \( S \). Now, consider the cotangent bundle of \( \MetricSpace(\Sigma) \). The dual space to \( \SymTensorFieldSpace^2(\Sigma) \) with respect to the integration pairing is the space \( \SymTensorFieldSpace_2(\Sigma; \VolSpace) \) of volume-form-valued contravariant symmetric \( 2 \)-tensors. Hence, it is natural to define \( \CotBundle \MetricSpace(\Sigma) = \MetricSpace(\Sigma) \times \SymTensorFieldSpace_2(\Sigma; \VolSpace) \) endowed with the natural symplectic structure. We will denote the conjugate momenta by \( \pi \). By \cref{prop:clebschLagrange:legendre}, the associated constraint is given by the momentum map \( \SectionSpaceAbb{J} \) for the lifted \( \DiffGroup(\Sigma) \)-action on \( \CotBundle \MetricSpace(\Sigma) \). As in the general theory, denote the natural integration pairing of \( \mathrm{L}\SectionSpaceAbb{G} \) with \( \mathrm{L}\SectionSpaceAbb{G}^* = \DiffFormSpace^1(\Sigma, \VolSpace) \times \VolSpace(\Sigma) \) by \( \kappa \). Now, consider the momentum map given by~\eqref{eq:cotangentBundle:momentumMapDef}. 
Using~\eqref{eq:generalRelativity:actionLieAlgebra} together with the Koszul identity and metric compatibility, we calculate \begin{align} \kappa\left(\SectionSpaceAbb{J}(g, \pi), (X, f)\right) &= \dualPair{\pi}{(X, f) \ldot g} \\ &= - \int_\Sigma \dualPair{\pi}{\difLie_X g} \\ &= - 2 \int_\Sigma \dualPair{\pi}{\Sym g(\nabla_{(\cdot)} X, \cdot)} \\ &= - 2 \int_\Sigma \dualPair{\pi}{\Sym (\nabla_{(\cdot)} X^\flat) (\cdot)}, \intertext{where \( X^\flat(Y) = g(X, Y) \). Since \( \Sigma \) has no boundary, integration by parts yields} \kappa\left(\SectionSpaceAbb{J}(g, \pi), (X, f)\right) &= 2 \int_\Sigma \dualPair{\divergence_g \pi}{X^\flat}, \end{align} where the divergence of \( \pi \) is defined as the trace of \( \nabla_{(\cdot)} \pi (\cdot, \cdot) \), that is, in abstract index notation \( (\divergence_g \pi)^j = \nabla_i \pi^{ij} \). Hence, the momentum map with respect to the natural integration pairing is given by \begin{equation} \SectionSpaceAbb{J}(g, \pi) = \left( 2 (\divergence_g \pi)^\flat, 0 \right) \in \DiffFormSpace^1(\Sigma, \VolSpace) \times \VolSpace(\Sigma). \end{equation} The fiber derivative \( \difpFrac{\SectionSpaceAbb{L}}{h}: \TBundle \MetricSpace(\Sigma) \times \mathrm{L}\SectionSpaceAbb{G} \to \CotBundle \MetricSpace(\Sigma) \) is given by \begin{equation} \label{eq:generalRelativity:legendreTransformation} \difpFrac{\SectionSpaceAbb{L}}{h} = \frac{1}{2 \ell} \bigl( \,g(h, \cdot) - \tr_g h \, \tr_g (\cdot) \bigr) \vol_g. \end{equation} Moreover, we have \begin{equation} \difpFrac{\SectionSpaceAbb{L}}{S} = 0, \qquad \difpFrac{\SectionSpaceAbb{L}}{\ell} = \left( \RicScalar_{g} - \frac{1}{4 \ell^2} \norm{h}^2_{g} + \frac{1}{4 \ell^2} (\tr_{g} h)^2\right) \, \vol_{g}. 
\end{equation} Using these identities, a straightforward calculation shows that the momentum map constraint~\eqref{eq:clebschLagrange:hamiltonian:constraint} takes the form \begin{subequations}\label{eq:generalRelativity:constraints}\begin{align} \label{eq:generalRelativity:constraints:diffeo} \divergence_g k - \grad_g (\tr_g k) = 0, \\ \label{eq:generalRelativity:constraints:hamiltonian} \RicScalar_{g} - \norm{k}^2_{g} + (\tr_{g} k)^2 = 0. \end{align}\end{subequations} Equations~\eqref{eq:generalRelativity:constraints:diffeo} and~\eqref{eq:generalRelativity:constraints:hamiltonian} are called the \emphDef{diffeomorphism constraint} and the \emphDef{Hamiltonian constraint}, respectively. Hence, in our framework, these constraints are derived from the momentum map constraint~\eqref{eq:clebschLagrange:hamiltonian:constraint}. \begin{remark} We emphasize that the Hamiltonian constraint stands on a different footing than the diffeomorphism constraint. This is due to the fact that the \( \sFunctionSpace(\Sigma) \)-part of \( \SectionSpaceAbb{G} \) does not act on the configuration space. Whereas the diffeomorphism constraint arises from setting the \( \DiffGroup(\Sigma) \)-component of \( \SectionSpaceAbb{J} \) to zero, the Hamiltonian constraint results from \( \difpFrac{\SectionSpaceAbb{L}}{\ell} = 0 \). As a consequence, the diffeomorphism constraint is a genuine momentum map constraint. Related to these facts is the observation that the Hamiltonian constraint fails to generate a Lie algebra \parencite{BergmannKomar1972}, although the lapse function is of course an element of the Lie algebra of vector fields on \( M \). Recently, \textcite{BlohmannFernandesWeinstein2013} argued that the correct algebraic setting incorporating the full symmetry of Einstein's equation is that of groupoids and algebroids. It would hence be interesting to extend the Clebsch--Lagrange variational principle from Lie group actions to actions of Lie groupoids. 
\end{remark} \subsection{Hamiltonian picture} Finally, let us briefly discuss the Hamiltonian picture. According to~\eqref{eq:hamiltonian} the Clebsch--Hamiltonian \( \SectionSpaceAbb{H}: \CotBundle \MetricSpace(\Sigma) \times \mathrm{L} \SectionSpaceAbb{G} \to \R \) is given as the Legendre transform \begin{equation} \SectionSpaceAbb{H}(g, \pi, S, \ell) = \dualPair{\pi}{h} - \SectionSpaceAbb{L}(g, h, S, \ell), \end{equation} where \( h \in \SymTensorFieldSpace^2(\Sigma) \) is defined as a function of \( (g, \pi, S, \ell) \) via the relation \( \pi = \difpFrac{\SectionSpaceAbb{L}}{h} (g, h, S, \ell) \). By~\eqref{eq:generalRelativity:legendreTransformation}, we have \begin{equation} h^\flat = 2 \ell \bar{\pi} - \ell \tr_g \bar{\pi} \, \tr_g (\cdot), \end{equation} where \( \bar{\pi} \cdot \vol_g = \pi \). Hence, we obtain \begin{equation} \SectionSpaceAbb{H}(g, \pi, S, \ell) = \int_\Sigma \ell \left( \norm{\bar{\pi}}^2_{g} - \RicScalar_{g} - \frac{1}{2} (\tr_{g} \bar{\pi})^2\right) \, \vol_{g}. \end{equation} A straightforward calculation shows that the Clebsch--Hamilton equations~\eqref{eq:clebschLagrange:hamiltonian} relative to this Hamiltonian yield the Einstein equation in its dynamical \( (1+3) \)-form. More precisely, the dynamical equations~\eqref{eq:clebschLagrange:hamiltonian:hamiltonian} coincide with standard dynamical equations of general relativity, see \eg \parencite[Equation~E.2.35 and~E.2.36]{Wald1984}, and the constraint equations~\eqref{eq:clebschLagrange:hamiltonian:constraint} yield the momentum and Hamiltonian constraints as in the standard ADM formalism, see \eg \parencite[Equation~E.2.34 and~E.2.33]{Wald1984}. 
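For completeness, let us verify the Legendre transform by a direct computation. From \( h^\flat = 2 \ell \bar{\pi} - \ell \tr_g \bar{\pi} \, \tr_g (\cdot) \) one obtains \( \tr_g h = - \ell \tr_g \bar{\pi} \) and \( \norm{h}^2_{g} = 4 \ell^2 \norm{\bar{\pi}}^2_{g} - \ell^2 (\tr_g \bar{\pi})^2 \). Hence,
\begin{equation}
\dualPair{\pi}{h} = \int_\Sigma \ell \bigl( 2 \norm{\bar{\pi}}^2_{g} - (\tr_{g} \bar{\pi})^2 \bigr) \vol_g, \qquad
\SectionSpaceAbb{L}(g, h, S, \ell) = \int_\Sigma \ell \left( \RicScalar_{g} + \norm{\bar{\pi}}^2_{g} - \frac{1}{2} (\tr_{g} \bar{\pi})^2 \right) \vol_{g},
\end{equation}
and their difference is the stated expression for \( \SectionSpaceAbb{H} \).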
To establish the connection with the ADM formalism, we follow the discussion in \cref{Ham-Pic-gen} and introduce the extended configuration space by \begin{equation} \SectionSpaceAbb{Q}_\ext \defeq \MetricSpace(\Sigma) \times \VectorFieldSpace(\Sigma) \times \sFunctionSpace(\Sigma), \end{equation} whose elements are denoted by \( (g, S, \ell) \). The momentum map constraints~\eqref{eq:Constr} cut out the following subset of the extended phase space \( \CotBundle \SectionSpaceAbb{Q}_\ext \): \begin{equation} \SectionSpaceAbb{M}_1 \equiv \CotBundle \MetricSpace(\Sigma) \times \VectorFieldSpace(\Sigma) \times \sFunctionSpace(\Sigma) \times \set{0} \times \set{0}. \end{equation} We will denote elements of \( \SectionSpaceAbb{M}_1 \) by \( (g, \pi, S, \ell) \). By~\eqref{eq:compositeHamiltonian}, the extended Hamiltonian on \( \SectionSpaceAbb{M}_1 \) is given by \begin{equation}\begin{split} \SectionSpaceAbb{H}_\ext(g, \pi, S, \ell) &= \SectionSpaceAbb{H}(g, \pi, S, \ell) - \kappa\bigl(\SectionSpaceAbb{J}(g, \pi), (S, \ell)\bigr) \\ &= \int_\Sigma \ell \left( \norm{\bar{\pi}}^2_{g} - \RicScalar_{g} - \frac{1}{2} (\tr_{g} \bar{\pi})^2\right) \vol_g - 2 g(\divergence_g \bar{\pi}, S) \vol_g. \end{split}\end{equation} In the presence of a boundary this Hamiltonian will be modified by boundary terms, see \parencite[Equation~20.173]{Blau2018}. Note that \( \SectionSpaceAbb{H}_\ext \) is the usual Hamiltonian in the ADM formalism (\eg, \parencites{ArnowittDeserMisner1959}[Equation~241]{Giulini2015}). According to \cref{Ham-Pic-gen}, Hamilton's equations for \( \SectionSpaceAbb{H}_\ext \) coincide with the Clebsch--Hamilton equations for \( \SectionSpaceAbb{H} \) and thus with Einstein's equation (in the \( (1+3) \)-splitting). Finally, we note that both Hamiltonians \( \SectionSpaceAbb{H} \) and \( \SectionSpaceAbb{H}_\ext \) vanish on the subset cut out by the constraints~\eqref{eq:generalRelativity:constraints}.
This is in accordance with the fact that there is no absolute time in general relativity. A non-vanishing Hamiltonian would generate time evolution with respect to an external time parameter, leading to a violation of the general covariance of the theory. Instead, dynamics is completely governed by the constraints and evolution appears as a \enquote{gauge flow}. \begin{remark} Let us comment on the reduction by stages procedure, see \cref{sec:clebschLagrange:reductionStages}. We have seen that the physical action~\eqref{eq:generalRelativity:einsteinHilbert3D} exhibits the symmetry group \( \SectionSpaceAbb{G} = \DiffGroup(\Sigma) \lSemiProduct \sFunctionSpace(\Sigma) \). According to the discussion in \cref{sec:clebschLagrange:reductionStages}, one would thus like to reduce the symmetry of the system using a symplectic reduction with respect to \( \SectionSpaceAbb{G} \). We should note, however, that in \cref{sec:clebschLagrange:reductionStages} we made the assumption that the Lagrangian does not explicitly depend on the \( \xi \)-variables. This assumption does not hold for general relativity, as \( \SectionSpaceAbb{L} \) depends on \( \ell \). Nonetheless, as a first step, one can restrict attention to the \( \DiffGroup(\Sigma) \)-subgroup and pass to the symplectic quotient \( \CotBundle \SectionSpaceAbb{Q} \sslash \DiffGroup(\Sigma) \). Up to a delicate discussion of the singular strata of the $\DiffGroup(\Sigma)$-action, the latter quotient coincides with the cotangent bundle of Wheeler's superspace. This reduction procedure corresponds to implementing the momentum map constraint. It appears that the Hamiltonian constraint does not admit a similar interpretation in terms of a symplectic reduction.
\end{remark} \begin{remark} In contrast to our approach, \textcite{FischerMarsden1972} have taken the extended configuration space \( \MetricSpace(\Sigma) \times \SectionSpaceAbb{G} \) as the starting point of a geometric analysis of the Einstein equation as a Lagrangian system. Note that passing from \( \mathrm{L} \SectionSpaceAbb{G} \) to \( \SectionSpaceAbb{G} \) completely changes the role of the shift vector field and the lapse function: in our approach they are configuration variables whereas in \parencite{FischerMarsden1972} they are generalized velocities. \end{remark}
\section{Introduction} We consider the problem of embedding a tree $T$ in a given graph $G$. Formally, we look for an injective map $f:V(T) \rightarrow V(G)$ which preserves the edges. We do not require that non-edges are mapped to non-edges, i.e., the copy of $T$ in $G$ need not be induced. Our goal is to find sufficient conditions on $G$ that guarantee it contains all trees of a certain size, with maximum degree as large as a constant fraction (possibly approaching 1) of the minimum degree of $G$. \subsection{Brief history} The problem of embedding paths and trees in graphs has long been one of the fundamental questions in combinatorics. This problem has been extensively studied in extremal combinatorics, in the theory of random graphs, in connection with properties of expanders and with applications to computer science. The goal has always been to find a suitable property of a graph $G$ which guarantees that it contains all possible trees with given parameters. We describe next several examples which we think are representative and give a good overview of previous research in this area. \paragraph{Extremal questions.} The basic extremal question about trees is to determine the number of edges that a graph needs to have in order to contain all trees of a given size. It is an old folklore result that a graph $G$ of minimum degree $d$ contains every tree $T$ with $d$ edges. This can be achieved simply by embedding vertices of $T$ greedily one by one. Since at most $d$ vertices of $G$ are occupied at any point, there is always enough room to embed another vertex of the tree. An old conjecture of Erd\H{o}s and S\'os says that {\em average degree} $d$ is already sufficient to guarantee the same property. More precisely, any graph on $n$ vertices with more than $(d-1)n/2$ edges contains all trees with $d$ edges. A clique of size $d$ is an obvious tight example for this conjecture. The conjecture has been proved in several special cases, e.g.
Brandt and Dobson \cite{BD96} establish it for graphs of girth at least $5$ ({\em girth} is the length of the shortest cycle in a graph). In fact, they prove a stronger statement, that any such graph of minimum degree $d/2$ and maximum degree $\Delta$ contains all trees with $d$ edges and maximum degree at most $\Delta$. More generally, improving an earlier result of \L uczak and Haxell \cite{HL00}, Jiang proved that any graph of girth $2k+1$ and minimum degree $d/k$ contains all trees with $d$ edges and maximum degree at most $d/k$ \cite{Jiang01}. For general graphs, it has been announced by Ajtai, Koml\'os, Simonovits and Szemer\'edi \cite{AKSS} that they have proved the Erd\H{o}s--S\'os conjecture for all sufficiently large trees. A related statement, known as Loebl's $(\frac{n}{2}$--$\frac{n}{2}$--$\frac{n}{2})$ conjecture \cite{ELS94}, is that any graph on $n$ vertices, with at least $n/2$ vertices of degree at least $n/2$, contains all trees with at most $n/2$ edges. Progress on this conjecture has recently been made by Yi Zhao \cite{Zhao07}. Note that in the results discussed so far, the size of the tree is of the same order as the degrees in the graph $G$. Without assuming any additional properties of $G$, this seems to be a natural barrier. \paragraph{Expanding graphs.} Embedding trees of size much larger than the average degree of the graph is possible in graphs satisfying certain expansion properties. The first such result was established by P\'osa using his celebrated rotation/extension technique. Given a subset of vertices $X$ of a graph $G$, let $N(X)$ denote the set of all neighbors of vertices of $X$ in $G$. P\'osa \cite{Posa76} proved that if $|N(X) \setminus X| \geq 2|X|-1$ for every subset $X$ of $G$ with at most $t$ vertices, then $G$ contains a path of length $3t-2$. This technique was extended to trees by Friedman and Pippenger \cite{FP87}.
They proved that if $|N(X)| \geq (d+1) |X|$ for all subsets $X$ of size at most $2t-2$, then $G$ contains every tree of size $t$ and maximum degree at most $d$. The power of this technique is that while $T$ can have degrees close to the minimum degree of $G$, it can be of size much larger than $d$, depending on the expansion guarantee. On the other hand, note that these techniques cannot embed trees of size larger than $|G|/d$, due to the nature of the expansion property. The result of Friedman and Pippenger has several interesting applications. For example, it can be used to show that for fixed $\delta>0$ and $d$, and every $n$, there is a graph $G$ with $O(n)$ edges that, even after deletion of all but $\delta |E(G)|$ edges, continues to contain every tree with $n$ vertices and maximum degree at most $d$. This has immediate corollaries in Ramsey theory. The technique from \cite{FP87} also has an application for infinite graphs. For an infinite graph $G$, its {\em Cheeger constant} is $h(G)=\inf_X\frac{|N(X) \setminus X|}{|X|}$, where $X$ is a nonempty finite subset of vertices of $G$. Using the ideas of Friedman and Pippenger, one can show (see \cite{BS97}) that any infinite graph $G$ with Cheeger constant $d \geq 3$ contains an infinite tree $T$ with Cheeger constant $d-2$. Benjamini and Schramm \cite{BS97} proved a stronger result: any infinite graph with $h(G) > 0$ contains an infinite tree with positive Cheeger constant. They use the notion of {\em tree-indexed random walks} to find such a tree. We will allude to this notion again later. \paragraph{Random and pseudorandom graphs.} The random graph $G_{n,p}$ is a probability space whose points are graphs on a fixed set of $n$ vertices, where each pair of vertices forms an edge, randomly and independently, with probability $p$.
For random graphs, Erd\H{o}s conjectured that with high probability, $G_{n,d/n}$ for a fixed $d$ contains a very long path, i.e., a path of length $(1-\alpha(d)) n$ such that $\lim_{d \rightarrow \infty} \alpha(d) = 0$. This conjecture was proved by Ajtai, Koml\'os and Szemer\'edi \cite{AKS} and, in a slightly weaker form, by Fernandez de la Vega \cite{FD}. Embedding trees, however, is considerably harder. Fernandez de la Vega \cite{Vega88} showed that there are (large) constants $a_1, a_2$ such that $G_{n,d/n}$ contains any {\em fixed} tree $T$ of size $n / a_1$ and maximum degree $\Delta \leq d / a_2$ w.h.p.\ (i.e., with probability tending to 1 as $n \rightarrow \infty$). Note that this is much weaker than containing all trees simultaneously, because a random graph can contain every fixed tree w.h.p., and still miss at least one tree w.h.p. Until recently, there was no result known on embedding all trees simultaneously. Alon, Krivelevich and Sudakov proved in \cite{AKS} that for any $\epsilon>0$, $G_{n, d/n}$ contains all trees of size $(1-\epsilon) n$ and maximum degree $\Delta$ such that $$ d \geq \frac{10^6}{\epsilon} \Delta^3 \log \Delta \log^2 (2/\epsilon).$$ (All logarithms here and in the rest of this paper have natural base.) This result is nearly tight in terms of the size of $T$, and holds for all trees simultaneously. But it is achieved at the price of requiring that degrees in $G$ are much larger than degrees in the tree. A similar result for pseudorandom graphs was also proved in \cite{AKS}. A graph $G$ is called an $(n,d,\lambda)$-graph if $G$ has $n$ vertices, is $d$-regular (hence the largest eigenvalue of the adjacency matrix is $d$) and the second largest eigenvalue is $\lambda$. Such graphs are known to have good expansion and other random-like properties.
Alon, Krivelevich and Sudakov proved that any $(n,d,\lambda)$-graph such that $$ \frac{d}{\lambda} \geq \frac{160}{\epsilon} \Delta^{5/2} \log (2/\epsilon) $$ contains all trees of size $(1-\epsilon) n$ and degrees bounded by $\Delta$. Note that using the expansion properties of $(n,d,\lambda)$-graphs, one could have used the Friedman--Pippenger theorem as well; however, one would not be able to embed trees larger than $n / \Delta$ in this way. \paragraph{Universal graphs.} In a more general context, graphs containing all trees with given parameters can be seen as instances of {\em universal graphs}. For a family of graphs $\cal F$, a graph $G$ is called $\cal F$-universal, if it contains every member of $\cal F$ as a subgraph. The construction of $\cal F$-universal graphs for various families of subgraphs is important in applied areas such as VLSI design, data representation and parallel computing. For trees, there is a known construction of a graph $G$ on $n$ vertices which contains all trees with $n$ vertices and degrees bounded by $d$, such that the maximum degree of $G$ is a function of $d$ only \cite{BCLR89}. \subsection {Our results} We prove several results concerning embedding trees in graphs with no short cycles, graphs without a given complete bipartite subgraph, random graphs and also graphs satisfying a certain pseudorandomness property. We embed trees with parameters very close to trivial upper bounds that cannot be exceeded: maximum degree close to the minimum degree of $G$, and size a constant fraction of the order of $G$~\footnote{By the {\em order} of a graph, we mean the number of vertices. By {\em size}, we mean the number of edges. For trees, the two quantities differ only by $1$.} (or more precisely the minimum possible order of $G$ under given conditions). A summary of our main results follows. Here we assume that $d$ and $n$ are sufficiently large.
\begin{enumerate} \item For any constant $k \geq 2$, $\epsilon \leq \frac{1}{2k}$ and any graph $G$ of girth at least $2k+1$ and minimum degree $d$, $G$ contains every tree $T$ of size $|T| \leq \frac14 \epsilon d^k$ and maximum degree $\Delta \leq (1-2\epsilon) d-2$. \item For any $G$ of minimum degree $d$, not containing $K_{s,t}$ (a complete bipartite graph with parts of size $s \geq t \geq 2$), $G$ contains every tree $T$ of size $|T| \leq \frac{1}{64 s^{1/(t-1)}} d^{1+\frac{1}{t-1}}$ and maximum degree $\Delta \leq \frac{1}{256} d$. \item For a random graph $G_{n,p}$ with $d = pn \geq n^{1/k}$ for some constant $k$, with high probability $G_{n,p}$ contains all trees of size $O(n/k)$ and maximum degree $O(d/k)$. \end{enumerate} It is easy to see that any graph of girth $2k+1$ and minimum degree $d$ has $\Omega(d^k)$ vertices. It is a major open question to determine the smallest possible order of such a graph. For $k = 2,3,5$, there are known constructions, obtained by Erd\H{o}s and R\'enyi \cite{ErdR} and Benson \cite{B}, of graphs of girth $2k+1$, minimum degree $d$ and order $O(d^k)$. It is also widely believed that such constructions should be possible for all fixed $k$. This implies that our first statement is tight up to constant factors for $k = 2,3,5$ and probably for all remaining $k$. Similarly, it is conjectured that for $s \geq t$ there are $K_{s,t}$-free graphs with minimum degree $d$ which have $O(d^{1+\frac{1}{t-1}})$ vertices. For $s > (t-1)!$, such a construction was obtained by Alon, R\'onyai and Szab\'o \cite{ARS} (modifying the construction in \cite{KRS}). Hence, the size of the trees we are embedding in our second result is tight up to constant factors as well. Finally, since the minimum degree of the random graph $G_{n,p}$ is roughly $pn$, it is easy to see that for constant $\alpha>0$ and $p=n^{-\alpha}$ we are embedding trees whose size and maximum degree are proportional to the order and the minimum degree of $G_{n,p}$.
Thus our third result is also nearly optimal. \subsection{Discussion} \paragraph{Local expansion.} Using well-known results from extremal graph theory, one can show that if a graph $G$ contains no subgraph isomorphic to a fixed bipartite graph $H$ (e.g., $C_{2k}$ or $K_{s,t}$) then it has certain expansion properties. More precisely, all small subsets of $G$ have a large boundary. For example, if $G$ is a $C_4$-free graph with minimum degree $d$, then every subset $S$ of $G$ of size at most $d$ expands by a factor of $\Theta(d)$. Otherwise, we would get a $4$-cycle by counting the number of edges between $S$ and its boundary $N(S) \setminus S$. This simple observation appears to be a powerful tool in attacking various extremal problems and was used in \cite{SV} and \cite{KS} to resolve several conjectures about cycle lengths and clique-minors in $H$-free graphs. Therefore, it is natural to ask whether the expansion of $H$-free graphs combined with the result of Friedman and Pippenger can be used to embed large trees. Recall that to embed a tree of size $t$ and maximum degree $d$, Friedman and Pippenger require that sets of size up to $2t-2$ expand at least $d+1$ times. For example, plugging this into the observation we made on the expansion of $C_4$-free graphs only gives an embedding of trees of order $O(d)$ in such graphs. This is quite far from the bound $O(d^2)$ which can be achieved using our approach. Similarly, in graphs of girth $2k+1$, we can embed trees of size $O(d^k)$, rather than $O(d^{k-1})$ as can be guaranteed by using Friedman--Pippenger. Therefore, our work can be seen as an extension of the embedding results for locally expanding graphs. It shows that using structural information about $G$, rather than just local expansion, one can embed in $G$ trees of much larger size.
\paragraph{Extremal results.} Our work sheds some light on why the Erd\H{o}s--S\'os conjecture, which we already discussed in the beginning of the introduction, becomes easier for graphs with no short cycles. This scenario was considered, e.g., in \cite{BD96, HL00, Jiang01}. In particular, assuming that a graph $G$ has girth $2k+1$, $k\geq 2$, and minimum degree $d$, Jiang \cite{Jiang01} showed how to embed in $G$ all trees of size $kd$ with degrees bounded by $d$. Although this is best possible, our result implies that this statement can be tight only for relatively few, very special trees, i.e., those that contain several large stars of degree $d$ or extremely close to $d$. Indeed, if we relax the degree assumption and consider trees with maximum degree at most $(1-\epsilon) d$, then it is possible to embed trees of size $O(d^k)$ rather than $O(d)$. Moreover, a careful analysis of our proof shows that it still works for $\epsilon$ of order of magnitude $k\frac{\log d}{d}$. Therefore, even if we allow the degree of the tree to be as large as $d-ck\log d$ for some constant $c$, we are still able to embed all trees of size $\Omega(kd^{k-1}\log d) \gg kd$. \paragraph{Random graphs.} It is quite easy to prove an analog of the result of Fernandez de la Vega \cite{Vega88} on the embedding of a fixed tree of size proportional to $n$ and maximum degree $O(pn)$ in the dense random graph $G_{n,p}$. Indeed, for constant $\alpha<1$ and edge probability $p = n^{-\alpha}$, this can be done greedily, vertex by vertex, generating the random graph simultaneously with the embedding. On the other hand, this simple approach cannot be used to embed all such trees with high probability, since there are too many trees to use the union bound. We provide the first result for simultaneous embedding of all trees of size $\Theta(n)$ and maximum degree $\Theta(pn)$, in the random graph $G_{n,p}$ for $p = n^{-\alpha}$ and constant $\alpha<1$.
It is also interesting to compare our result with the work of Alon, Krivelevich and Sudakov \cite{AKS}. They embed nearly spanning trees, but with maximum degree which is only a small power (roughly $1/3$) of the degree of $G_{n,p}$. Although our trees are somewhat smaller (by a constant factor), we can handle trees with degrees proportional to the minimum degree of the random graph. \subsection{The algorithm} All our results are proved using variants of the following very simple randomized embedding algorithm. First, choose arbitrarily some vertex $r$ of $T$ to be the {\em root}. Then for every other vertex $u \in V(T)$ there is a unique path in $T$ from $r$ to $u$. The neighbor of $u$ on this path is called the {\em parent} of $u$ and all the remaining neighbors of $u$ are called {\em children} of $u$. The algorithm proceeds as follows. \paragraph{Algorithm 1.} \noindent {\em Start by embedding the root $r$ at an arbitrary vertex $f(r) \in V(G)$. As long as $T$ is not completely embedded, take an arbitrary vertex $u \in V(T)$ which is already embedded but whose children are not. If $f(u)$ has enough neighbors in $G$ unoccupied by other vertices of $T$, embed the children of $u$ by choosing vertices uniformly at random from the available neighbors of $f(u)$ and continue. Otherwise, fail.} \ This algorithm can be seen as a variant of a {\em tree-indexed random walk}, i.e., a random process corresponding to a tree where each vertex assumes a random state depending only on the state of its parent. The notion of a tree-indexed random walk was first introduced and studied by Benjamini and Peres \cite{BP94}. It is also used in the above-mentioned paper of Benjamini and Schramm \cite{BS97} to embed trees with a positive Cheeger constant into infinite expanding graphs. In our case, we consider in fact a {\em self-avoiding} tree-indexed random walk, where each state is chosen randomly, conditioned on being distinct from previously chosen states.
The corresponding concept for a random walk is a well-studied subject in probability (see, e.g., \cite{MS93}). Loosely speaking, we prove that our self-avoiding tree-indexed random walk behaves sufficiently randomly, in the sense that it does not intersect the neighborhood of any vertex more often than expected. To analyze the number of times the random process intersects a given neighborhood, we use large deviation inequalities for supermartingales. \subsection{A supermartingale tail estimate} In all our proofs, we use the following tail estimate. \begin{proposition} \label{thm:supermartingale} Let $X_1, X_2, \ldots, X_n$ be random variables in $[0,1]$ such that for each $k$, $$ {\mathbb E}[X_k \mid X_1, X_2,\ldots, X_{k-1}] \leq a_k.$$ Let $\mu = \sum_{i=1}^{n} a_i$. Then for any $0<\delta \leq 1$, $$ {\mathbb P}[\sum_{i=1}^{n} X_i > (1+\delta) \mu] \leq e^{-\frac{\delta^2 \mu}{3}}.$$ \end{proposition} This can be derived easily from the proof of Theorem 3.12(b) in~\cite{HMRR}. We re-state this theorem here: {\em Let $Y_1,Y_2,\ldots,Y_n$ be a martingale difference sequence with $-a_k \leq Y_k \leq 1-a_k$ for each $k$, for suitable constants $a_k$; and let $a = \frac{1}{n} \sum a_k$. Then for any $\delta>0$, $$ {\mathbb P}[\sum_{k=1}^{n} Y_k \geq \delta a n] \leq e^{-\frac{\delta^2 a n}{2(1+\delta/3)}}.$$ } A martingale difference sequence satisfies ${\mathbb E}[Y_i \mid Y_1,Y_2,\ldots,Y_{i-1}] = 0$. However, it can be seen easily from the proof in \cite{HMRR} that for this one-sided tail estimate, it is sufficient to assume ${\mathbb E}[Y_i \mid Y_1,Y_2,\ldots,Y_{i-1}] \leq 0$. (Such a random process is known as a {\em supermartingale}.) To show Proposition~\ref{thm:supermartingale}, set $Y_k = X_k - a_k$ and $\mu = an = \sum_{k=1}^{n} a_k$. The conditional expectations of $X_k$ are bounded by $a_k$, hence the conditional expectations of $Y_k$ are non-positive as required.
Since $\delta \leq 1$, we also replace $2(1 + \delta/3)$ by $3$, and Proposition~\ref{thm:supermartingale} follows. Note also that we can always replace $\mu$ by a larger value (e.g., by adding auxiliary random variables that are constants with probability 1), and the conclusion still holds. Hence, in Proposition \ref{thm:supermartingale} it is enough to assume $\sum_{i=1}^{n} a_i \leq \mu$. \section{Embedding trees in $C_4$-free graphs} \label{section:C4-free} The purpose of this section is to illustrate on a simple example the main ideas and techniques that we will use in our proofs. We start with $C_4$-free graphs, which are a special case of both classes of graphs we are interested in: graphs without short cycles, and graphs without $K_{s,t}$ (note that $K_{2,2} = C_4$). Let's recall Algorithm 1. For a given rooted tree $T$, we start by embedding the root $r \in V(T)$ at an arbitrary vertex $f(r) \in V(G)$. As long as $T$ is not completely embedded, we take an arbitrary $u \in V(T)$ which is already embedded but whose children are not. If $f(u)$ has enough unoccupied neighbors in $G$, we embed the children of $u$ uniformly at random in the available neighbors of $f(u)$ and continue. Otherwise, we fail. \begin{theorem} \label{thm:C4-free} Let $\epsilon \leq 1/8$, and let $G$ be a $C_4$-free graph of minimum degree at least $d$. For any tree $T$ of size $|T| \leq \epsilon d^2$ and maximum degree $\Delta \leq d - 2 \epsilon d - 2$, Algorithm 1 finds an embedding of $T$ in $G$ with high probability (i.e., with probability tending to 1 as $d \rightarrow \infty$). \end{theorem} \paragraph{Example.} Before we plunge into the proof, let us consider the statement of this theorem in a particular case, where $G$ is the incidence graph of a finite projective plane. Let $q=d-1$ be a prime or a prime power and consider a 3-dimensional vector space over the finite field $\mathbb{F}_q$.
Let $V_1$ be all 2-dimensional linear subspaces of $\mathbb{F}^3_q$ (lines in a projective plane), $V_2$ all 1-dimensional linear subspaces (points in a projective plane), and let a vertex from $V_1$ be adjacent to a vertex from $V_2$ if one of the corresponding subspaces contains the other. This graph $G$ has $n = 2(q^2+q+1)=2(d^2 - d + 1)$ vertices, it is bipartite and $d$-regular. Also, it is easy to see from the definition that $G$ contains no $C_4$. Clearly, we cannot embed in $G$ trees of size larger than $O(d^2)$ or maximum degree larger than $d$. In this respect, our theorem is tight up to constant factors. It is also worth mentioning that in the analysis of our simple algorithm, the trade-off between the size of $T$ and the maximum degree $\Delta$ is close to being tight. Indeed, we show that for $\Delta = (1-\epsilon) d$, our algorithm cannot embed trees of size much larger than $\epsilon d^2$. Suppose we are embedding a tree $T$ of depth $3$, where the degrees of the root and its children (level $1$) are $\sqrt{d}$. On level $2$, the degrees are $\epsilon d$ except one special vertex $z$ of degree $(1-\epsilon)d$. On level $3$, there are only leaves. The size of this tree is $\epsilon d^2 + \Theta(d)$. We can assume that the root is embedded at a vertex corresponding to a point $a$. The level-$1$ vertices are embedded into a set $L_1$ of $\sqrt{d}$ random lines through $a$. The level-$2$ vertices are embedded into a set $P_2$ of $d$ random points on these lines. Every point in the projective plane (except $a$) has the same probability of appearing in $P_2$, hence this probability is $d / (d^2 - d) = 1 / (d-1)$. The level-$3$ vertices are embedded into random lines $L_3$ through points in $P_2$, each line through a point in $P_2$ with probability $\epsilon$. Now every line has probability roughly $\epsilon$ of being in $L_3$, because on average one of its points appears in $P_2$.
Consider the point where we embed the special vertex $z$ and assume this is the last vertex we process in the algorithm. Each of the $d$ lines through this point has probability roughly $\epsilon$ of being occupied by a level-$3$ vertex, so on the average, only $(1-\epsilon)d$ lines are available to host the children of $z$. Therefore, our algorithm cannot succeed in embedding more than $(1-\epsilon) d$ children of $z$. \paragraph{Proof of Theorem~\ref{thm:C4-free}.} Let's fix an ordering in which the algorithm processes the vertices of $T$: $V(T) = \{1,2,\ldots,|V(T)|\}$. Here, $1$ denotes the root and the ordering is consistent with the structure of the tree in the sense that every vertex can appear only after its parent. In step $0$, the algorithm embeds the root. In step $t$, the children of $t$ are embedded randomly in the yet unoccupied neighbors of $f(t) \in V(G)$. If $t$ is a leaf in $T$, the algorithm is idle in step $t$. Our goal is to argue that for large $d$, with high probability, the algorithm never fails. The only way the algorithm can fail is that for a vertex $t \in V(T)$, embedded at $v = f(t) \in V(G)$, we are not able to place its children since too many neighbors of $v$ in $G$ have been occupied by other vertices of $T$. This is the crucial ``bad event'' we have to analyze: {\em Let ${\cal B}_v$ denote the event that at some point, more than $2 \epsilon d + 2$ neighbors of $v$ are occupied by vertices of $T$ other than the children of $f^{-1}(v)$.} If we can show that with high probability, ${\cal B}_v$ does not occur for any $v \in V(G)$, then the algorithm clearly succeeds. To do this, we will modify our algorithm slightly and force it to stop immediately at the moment when the first bad event occurs. Thus, in analyzing ${\cal B}_v$, we can assume that for any $w \neq v$ the event ${\cal B}_w$ has not happened yet.
Our strategy is to prove that the probability of ${\cal B}_v$ for any given vertex $v$, even conditioned on our embedding getting ``dangerously close'' to $v$, is exponentially small in $d$. Then, we argue that the number of vertices which can ever get dangerously close to our embedding (i.e., the number of bad events we have to worry about) is only polynomial in $d$. Therefore, we conclude that with high probability, no bad event occurs. \begin{lemma} \label{lemma:bad-event} Let $\epsilon \leq \frac18$ and $d \geq 24$. For a vertex $v \in V(G)$, condition on any history $\cal H$ of running the algorithm up to a certain point such that at most $2$ vertices of $T$ have been embedded in $N(v)$. Then $$ {\mathbb P}[{\cal B}_v \mid {\cal H}] \leq e^{-\epsilon d / 18}.$$ \end{lemma} \noindent {\bf Proof.}\, For $t = 1,2,\ldots,|V(T)|$, let $X_t$ be an indicator variable of the event that $f(t) \neq v$ but some child of $t$ gets embedded in $N(v)$. Here we use the property that $G$ is $C_4$-free. Note that, if $f(t) = w \neq v$, then $w$ can have at most one neighbor in $N(v)$, otherwise we get a $4$-cycle. Therefore, $t$ can have at most one child embedded in $N(v)$ and $X_t$ represents the number of vertices in $N(v)$ occupied by the children of $t$. We condition on a history $\cal H$ of running the algorithm up to step $h$, such that at most $2$ vertices of $N(v)$ have been occupied so far. The bad event ${\cal B}_v$ can occur only if $X = \sum_{t=h+1}^{|T|} X_t > 2 \epsilon d$. Therefore, our goal is to prove that this happens only with very small probability. Each vertex chooses the embedding of its children randomly, out of at least $d - 2 \epsilon d - 2$ still available choices (here we assume that no bad event ${\cal B}_w$ occurred before ${\cal B}_v$ for any $w \neq v$, or else the algorithm has failed already).
Thus we get $$ {\mathbb E}[X_t] \leq \frac{d_T(t)}{d - 2 \epsilon d - 2} \leq \frac{d_T(t)}{2d/3} $$ where $d_T(t)$ is the number of children of the vertex $t$ in $T$. We also used $\epsilon \leq 1/8$ and $d \geq 24$. This holds even conditioned on any previous history of the algorithm, since the decisions for each vertex are made independently. We are interested in the probability that $X = \sum_{t=h+1}^{|T|} X_t$ exceeds $2 \epsilon d$. Using the fact that $\sum_{t \in T} d_T(t) = |T|-1 \leq \epsilon d^2$, we can bound the expectation of $X$ by $$ {\mathbb E}[X] = \sum_{t=h+1}^{|T|}{\mathbb E}[X_t] \leq \sum_{t \in T} \frac{d_T(t)}{2d/3} \leq \frac32 \epsilon d.$$ We use the supermartingale tail estimate (Proposition~\ref{thm:supermartingale}) with $\delta = \frac13$ and $\mu = \frac32 \epsilon d$: $$ {\mathbb P}[X > 2 \epsilon d] \leq e^{-\delta^2 \mu / 3} = e^{-\mu / 27} = e^{-\epsilon d / 18}.$$ Therefore, the bad event ${\cal B}_v$ happens with probability at most $e^{-\epsilon d / 18}$. \hfill $\Box$ \vspace{2mm} Our final goal is to argue that with high probability, no bad event ${\cal B}_v$ occurs for any vertex $v \in V(G)$. Since the number of vertices could be potentially unbounded by any function of $d$, we cannot apply a straightforward union bound over all vertices in the graph. However, we observe that the number of vertices for which ${\cal B}_v$ can potentially occur is not very large. Define ${\cal D}_v$ to be the event that at some point in the algorithm, two vertices in $N(v)$ are occupied by vertices of $T$. This is the event that the embedding of $T$ gets ``dangerously close'' to $v$. Observe that if ${\cal D}_v$ is ``witnessed'' by the first pair of vertices of $T$ which are placed in $N(v)$, then each pair of vertices of $T$ can witness at most one event ${\cal D}_v$ (otherwise the same pair would lie in the common neighborhood of two vertices, which implies a $C_4$).
Since $T$ has at most $\epsilon d^2$ vertices, the event ${\cal D}_v$ can occur for at most $\epsilon^2 d^4$ vertices in any given run of the algorithm. Clearly, event ${\cal B}_v \subseteq {\cal D}_v$. Let's analyze the probability of ${\cal B}_v$, conditioned on ${\cal D}_v$. The event ${\cal D}_v$ can be written as a union of all histories $\cal H$ of running the algorithm up to the point where two vertices of $T$ get embedded in $N(v)$. By Lemma~\ref{lemma:bad-event}, $$ {\mathbb P}[{\cal B}_v \mid {\cal H}] < e^{-\epsilon d / 18} $$ for any such history $\cal H$. By taking the union of all these histories, we get $$ {\mathbb P}[{\cal B}_v \mid {\cal D}_v] < e^{-\epsilon d / 18}.$$ Now we can estimate the probability that ${\cal B}_v$ ever occurs for any vertex $v$: \begin{eqnarray*} {\mathbb P}[\exists v \in V; {\cal B}_v~\mbox{occurs}] & \leq & \sum_{v \in V} {\mathbb P}[{\cal B}_v] = \sum_{v \in V} {\mathbb P}[{\cal B}_v \mid {\cal D}_v] {\mathbb P}[{\cal D}_v] \leq e^{-\epsilon d / 18} \sum_{v \in V} {\mathbb P}[{\cal D}_v]. \end{eqnarray*} Since ${\cal D}_v$ can occur for at most $\epsilon^2 d^4$ vertices in any given run of the algorithm, we have $\sum_{v \in V} {\mathbb P}[{\cal D}_v] \leq \epsilon^2 d^4$. Thus $$ {\mathbb P}[\exists v \in V; {\cal B}_v~\mbox{occurs}] \leq \epsilon^2 d^4 e^{-\epsilon d / 18} \rightarrow 0,$$ when $d \rightarrow \infty$. Hence the algorithm succeeds with high probability. \hfill $\Box$ \section{Embedding trees in $K_{s,t}$-free graphs} \label{section:Kst-free} Next, we consider the case of graphs which contain no complete bipartite subgraph $K_{s,t}$ with parts of size $s$ and $t$. We assume that $s \geq t$. It is known that the extremal size of such graphs depends essentially only on the value of the smaller parameter $t$. 
Indeed, by the result of K\"ovari, S\'os and Tur\'an \cite{KST} the number of vertices in a $K_{s,t}$-free graph with minimum degree $d$ is at least $c\,d^{t/(t-1)}$, where only the constant $c$ depends on $s$. For relatively high values of $s$ ($s >(t-1)!$) there are known constructions (see, e.g., \cite{KRS, ARS}) of $K_{s,t}$-free graphs achieving this bound. Moreover, it is conjectured that $\Theta(d^{t/(t-1)})$ is the correct bound for all $s \geq t$. This implies that one cannot embed trees larger than $O(d^{t/(t-1)})$ in a $K_{s,t}$-free graph with minimum degree $d$. Also, it is obvious that the maximum degree in the tree should be $O(d)$. In this section we show how to embed trees whose parameters come very close to these natural bounds. It is easier to analyze our algorithms in the case when the maximum degree of the tree is in fact bounded by $O(d/t)$. First, we obtain this weaker result, and then present a more involved analysis which shows that our algorithm also works for trees with maximum degree at most $\frac{1}{256} d$. Our algorithm here is a slight modification of Algorithm 1. \paragraph{Algorithm 2.} {\em For each vertex $v \in V(G)$, fix a set of $d$ neighbors $N_+(v) \subseteq N(v)$. Start by embedding the root of the tree $r \in T$ at an arbitrary vertex $f(r) \in V(G)$. As long as $T$ is not completely embedded, take an arbitrary vertex $u \in V(T)$ which is already embedded but its children are not. If $f(u)$ has enough neighbors in $N_+(f(u))$ unoccupied by other vertices of $T$, embed the children of $u$ one by one, by choosing vertices uniformly at random from the available vertices in $N_+(f(u))$, and continue. Otherwise, fail.} \ The only difference from the original algorithm is that when embedding the children of a vertex, we choose from a predetermined set of $d$ neighbors rather than all possible neighbors.
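For concreteness, the embedding procedure can be sketched in Python as follows. This is our illustration, not part of the paper: the adjacency representation, the helper name \texttt{embed\_tree}, and the explicit slack parameter \texttt{threshold} (the number of spare unoccupied slots required before embedding a batch of children) are our assumptions.

```python
import random

def embed_tree(G_adj, children, root, threshold):
    """Sketch of the randomized embedding in the spirit of Algorithm 2.

    G_adj[v]  -- a fixed list N_+(v) of d neighbors for each host vertex v
    children  -- children[u] lists the children of tree vertex u
    threshold -- spare unoccupied slots required before embedding children
    Returns a map f: tree vertex -> host vertex, or None on failure.
    """
    f = {root: random.choice(list(G_adj))}  # embed the root anywhere
    occupied = {f[root]}
    queue = [root]  # embedded tree vertices whose children are not yet embedded
    while queue:
        u = queue.pop()
        if not children[u]:
            continue
        free = [w for w in G_adj[f[u]] if w not in occupied]
        if len(free) < threshold + len(children[u]):
            return None  # too few unoccupied vertices in N_+(f(u)): fail
        for c in children[u]:  # embed the children one by one, uniformly at random
            w = random.choice(free)
            free.remove(w)
            f[c] = w
            occupied.add(w)
            queue.append(c)
    return f
```

The analysis below shows that, under the stated bounds on $|T|$ and $\Delta$, the failure branch is reached only with small probability.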
Since the maximum degree of $G$ can be very large, this modification is useful in the analysis of our algorithm. It allows us to bound the number of dangerous events. However, we believe that the original algorithm works as well and only our proof requires this modification. \begin{theorem} \label{thm:Kst-free} Let $G$ be a $K_{s,t}$-free graph ($s \geq t$) with minimum degree $d$. For any tree $T$ of size $|T| \leq \frac{1}{64} s^{-1/(t-1)} d^{t/(t-1)}$ and maximum degree $\Delta \leq \frac{1}{64 t} d$, Algorithm 2 finds an embedding of $T$ in $G$ with high probability. \end{theorem} \noindent {\bf Proof.}\, We follow the strategy of defining bad events for each vertex $v \in V(G)$ and bounding the probability that any such event occurs. {\em Let ${\cal B}_v$ denote the event that at some stage of the algorithm, more than $\frac12 d + 2t$ vertices in $N_+(v)$ are occupied by vertices of $T$ other than children of $f^{-1}(v)$.} Note that (as in the previous section), to bound the probability of a bad event, we assume that our algorithm stops immediately at the moment when the first such event occurs. To simplify our analysis, we also assume that the children of every vertex of $T$ are embedded in some particular order, one by one. As long as ${\cal B}_v$ does not occur, we have at least $\frac12 d - 2t$ unoccupied vertices in $N_+(v)$. Since degrees in the tree are bounded by $\frac{1}{64 t} d \leq \frac{1}{64} d$, we have enough space for the children of any vertex to be embedded in $N_+(v)$. As we embed the children one by one, the last child still has at least $\frac12 d - 2t - \frac{1}{64} d \geq \frac{1}{4} d$ choices available (for large enough $d$). The new complication here is that another vertex $w$ could share many neighbors with $v$. Unlike in the case of $K_{2,2}$-free graphs, where any two vertices can share at most one neighbor, in $K_{s,t}$-free graphs (for $s > t \geq 2$), we do not have any bound on the number of shared neighbors.
Therefore we have to proceed more carefully. For every vertex $v$ in $G$, we partition all other vertices into two sets depending on how many neighbors they have in $N_+(v)$: \begin{itemize} \item $ L_v = \{w \neq v: |N_+(v) \cap N_+(w)| \leq 2 s^\frac{1}{t-1} d^\frac{t-2}{t-1} \}. $ \item $ M_v = \{w \neq v: |N_+(v) \cap N_+(w)| > 2 s^\frac{1}{t-1} d^\frac{t-2}{t-1} \}. $ \end{itemize} The idea is that vertices in $L_v$ are harmless because the fraction of their children that can affect $N_+(v)$ is $O(d^{-1/(t-1)})$. Since the trees we are embedding have size $O(d^{1 + 1/(t-1)})$, we show that the expected impact of these children on $N_+(v)$ is $O(d)$. The vertices in $M_v$ have to be treated in a different way, because the fraction of their children in $N_+(v)$ could be very large. However, we prove that the total number of edges between $M_v$ and $N_+(v)$ cannot be too large, otherwise we would get a copy of $K_{s,t}$ in $G$. Therefore, the impact of the children of $M_v$ on $N_+(v)$ can also be controlled. Again, we ``start watching" a bad event for vertex $v$ only at the moment when it becomes dangerous. {\em Let ${\cal D}_v$ denote the event that at least $t$ vertices in $N_+(v)$ are occupied by vertices of tree $T$ other than children of $f^{-1}(v)$.} \begin{lemma} \label{lemma:low-deg} Let $\cal H$ be a fixed history of running the algorithm up to a point where at most $t$ vertices in $N_+(v)$ are occupied. Conditioned on $\cal H$, the probability that children of vertices embedded in $L_v$ will ever occupy more than $\frac14 d+t$ vertices in $N_+(v)$ is at most $e^{-d/24}$. \end{lemma} \noindent {\bf Proof.}\, We use an argument similar to the proof of Lemma~\ref{lemma:bad-event}. Fix an ordering of the vertices of $T$ starting from the root, $i=1,2,\ldots,|T|$, as they are processed by the algorithm. Suppose that vertices $1, \ldots, h$ were embedded during the history $\cal H$.
Let $X_i$ be the indicator variable of the event that $i \in T$ is embedded in $N_+(v)$ and the parent of $i$ was embedded in $L_v$. As long as the algorithm does not fail (i.e., no bad event happened), for each vertex $i \in T$ when it is embedded we have at least $d - \frac12 d - 2t - \frac{1}{64} d \geq \frac14 d$ choices where to place the vertex. This holds even if we condition on any fixed embedding of vertices $j<i$. Moreover, the embedding decisions for different vertices are done independently. Since we assume that the parent of $i$ was embedded in $L_v$, at most $2 s^{1/(t-1)} d^{(t-2)/(t-1)}$ of these choices are in $N_+(v)$. Therefore, conditioned on any previous history $\cal H$ such that $i$ was not embedded yet $$ {\mathbb P}[X_i = 1 \mid {\cal H}] \leq \frac{2 s^\frac{1}{t-1} d^{\frac{t-2}{t-1}}}{\frac14 d} = 8 \left( \frac{s}{d} \right)^\frac{1}{t-1}.$$ Summing up over such vertices $i$ in the tree, whose number is at most $|T| \leq \frac{1}{64} s^{-1/(t-1)} d^{t/(t-1)}$, we have $$ {\mathbb E}\Big[\sum_{i=h+1}^{|T|} X_i \mid {\cal H}\Big] =\sum_{i=h+1}^{|T|} {\mathbb P}[X_i = 1 \mid {\cal H}] \leq |T| \cdot 8 \left( \frac{s}{d} \right)^\frac{1}{t-1} \leq \frac18 d.$$ Since the upper bound on ${\mathbb P}[X_i = 1\mid {\cal H}]$ is still valid even if we also condition on a fixed embedding of all vertices $j<i$, by Proposition~\ref{thm:supermartingale} with $\mu = \frac18 d$ and $\delta = 1$, $$ {\mathbb P}\Big[\sum_{i=h+1}^{|T|} X_i > \frac14 d \mid {\cal H}\Big] < e^{-d/24}. $$ By definition of $\cal H$, during the first $h$ steps of the algorithm only at most $t$ vertices in $N_+(v)$ have been occupied. Therefore, the probability that more than $\frac14 d + t$ vertices are ever occupied is at most $e^{-d/24}$. \hfill $\Box$ \medskip Next, we treat the vertices whose parent is embedded in $M_v$. Recall that each vertex in $M_v$ has many neighbors in $N_+(v)$. However, the number of edges between $M_v$ and $N_+(v)$ cannot be too large.
Observe that there is no $K_{s,t-1}$ in $G$ with $s$ vertices in $N_+(v)$ and $t-1$ vertices in $M_v$, otherwise we would obtain a copy of $K_{s,t}$ by adding $v$ to the part of size $t-1$. Also, this shows that for $t=2$, $M_v$ must be empty. Indeed, by definition any vertex in $M_v$ has at least $2s$ neighbors in $N_+(v)$, which together with vertex $v$ would form $K_{2s,2}$. So in the following, we can assume $s \geq t \geq 3$. The following is a standard estimate in extremal graph theory, whose short proof we include here for the sake of completeness. \begin{lemma} \label{lemma:Kst-extremal} Consider a subgraph $H_v$ containing the edges between $M_v$ and $N_+(v)$, where $|N_+(v)| = d$, every vertex in $M_v$ has at least $2 s^{1/(t-1)} d^{(t-2)/(t-1)}$ neighbors in $N_+(v)$ and the graph does not contain $K_{s,t-1}$ (with $s$ vertices in $N_+(v)$ and $t-1$ vertices in $M_v$). Then $H_v$ has at most $2td$ edges. \end{lemma} \noindent {\bf Proof.}\, Let $m$ denote the number of edges in $H_v$ and assume $m > 2td$. Let $N$ denote the number of copies of $K_{1,t-1}$ (a star with $t-1$ edges) in $H_v$, with $1$ vertex in $N_+(v)$ and $t-1$ vertices in $M_v$. By convexity, the minimum number of $K_{1,t-1}$ in $H_v$ is attained when all vertices in $N_+(v)$ have the same degree $m/|N_+(v)|$. Therefore $$ N \geq |N_+(v)| {\frac{m}{|N_+(v)|} \choose t-1} = d {\frac{m}{d} \choose t-1}.$$ Our assumption that $m > 2td$ implies that $\frac{m}{d}, \frac{m}{d}-1, \ldots, \frac{m}{d}-(t-2) \geq \frac{m}{2d}$ and therefore $$ N \geq d \frac{(\frac{m}{2d})^{t-1}}{(t-1)!} = \frac{m^{t-1}}{(t-1)! 2^{t-1} d^{t-2}}.$$ Since all the degrees in $M_v$ are at least $2 s^{1/(t-1)} d^{(t-2)/(t-1)}$, we have $m \geq 2 s^{1/(t-1)} d^{(t-2)/(t-1)} |M_v|$. Then $m^{t-1} \geq 2^{t-1} s d^{t-2} |M_v|^{t-1}$ and $$ N \geq \frac{m^{t-1}}{(t-1)! 
2^{t-1} d^{t-2}} \geq \frac{s |M_v|^{t-1}}{(t-1)!} \geq s {|M_v| \choose t-1}.$$ Consequently, there must be a $(t-1)$-tuple in $M_v$ which appears in at least $s$ copies of $K_{1,t-1}$. This creates a copy of $K_{s,t-1}$, a contradiction. \hfill $\Box$ \begin{lemma} \label{lemma:high-deg} Let $\cal H$ be a fixed history of running the algorithm up to a point where at most $t$ vertices in $N_+(v)$ are occupied. Then, conditioned on $\cal H$, the probability that children of vertices embedded in $M_v$ will ever occupy more than $\frac14 d+t$ vertices in $N_+(v)$ is at most $t \sqrt{d} e^{-\frac{1}{24t} \sqrt{d}}$. \end{lemma} \noindent {\bf Proof.}\, As we mentioned, we can assume $s \geq t \geq 3$, otherwise $M_v$ is empty. Consider the vertices in $M_v$ and for every $w \in M_v$ denote the number of edges from $w$ to $N_+(v)$ by $d_w$. We know that each vertex $w \in M_v$ has $d_w \geq 2 s^{1/(t-1)} d^{(t-2)/(t-1)} \geq 2 \sqrt{d}$ (using $t \geq 3$). From Lemma~\ref{lemma:Kst-extremal}, we know that the total number of these edges is $\sum_{w \in M_v} d_w \leq 2td$. This implies that $|M_v| \leq 2td / (2 \sqrt{d}) \leq t \sqrt{d}$. For $w \in M_v$, let $X_w$ denote the number of tree vertices embedded in $N_+(v)$ after the history $\cal H$, whose parent is embedded at $w$. We claim that with high probability, $X_w \leq \frac{1}{8t} d_w$. This can be seen as follows. Suppose that $f(x) = w$ for some $x \in V(T)$. The degree of $x$ in $T$ is at most $\frac{1}{64 t} d$ and the children of $x$ are embedded one by one. Hence as we already explained, if no bad event ${\cal B}_w$ happened so far, each child $y$ has at least $\frac 14 d$ choices available for its embedding. Therefore, even conditioned on the embedding of the previous children, the probability that $y$ is embedded in $N_+(v)$ is at most $p = \min \{1, d_w / (\frac14 d)\}$. 
So $X_w$ satisfies the conditions of Proposition~\ref{thm:supermartingale} with $\mu = \frac{1}{64 t} d \cdot d_w / (\frac14 d) = \frac{1}{16 t} d_w$. By Proposition~\ref{thm:supermartingale} with $\delta = 1$, $$ {\mathbb P}[X_w > \frac{1}{8t} d_w] \leq e^{-\mu/3} = e^{-\frac{1}{48t} d_w} \leq e^{-\frac{1}{24 t} \sqrt{d}}, $$ using $d_w \geq 2 \sqrt{d}$. By the union bound, the probability that $X_w > \frac{1}{8t} d_w$ for any $w \in M_v$ is at most $|M_v| e^{-\frac{1}{24t} \sqrt{d}} \leq t \sqrt{d} e^{-\frac{1}{24t} \sqrt{d}}$. Otherwise, $$ \sum_{w \in M_v} X_w \leq \frac{1}{8t} \sum_{w \in M_v} d_w \leq \frac{1}{8t} \cdot 2td = \frac14 d.$$ Together with the $t$ vertices possibly occupied within history $\cal H$, this gives at most $\frac14 d + t$ vertices occupied in $N_+(v)$. \hfill $\Box$ Having finished all the necessary preparations, we are now ready to complete the proof of Theorem~\ref{thm:Kst-free}. The bad event ${\cal B}_v$ can occur only if more than $\frac14 d + t$ vertices are occupied in $N_+(v)$ by children of vertices in $L_v$ or more than $\frac14 d + t$ vertices by children of vertices in $M_v$. As we proved, each of these events has probability smaller than $t \sqrt{d} e^{-\sqrt{d}/(24t)}$, therefore the probability of ${\cal B}_v$ is at most $2t \sqrt{d} e^{-\sqrt{d}/(24t)}$. This holds even if we condition on the event ${\cal D}_v$ (a disjoint union of histories $\cal H$) which occurs at the moment when $t$ vertices in $N_+(v)$ are occupied. Let's estimate the number of events ${\cal D}_v$ which can occur. The event ${\cal D}_v$ is witnessed by a $t$-tuple of vertices of tree $T$ which are embedded in $N_+(v)$. The same $t$-tuple cannot be a witness to $s$ different events ${\cal D}_v$, because then we would have a copy of $K_{s,t}$ in our graph $G$. Therefore, each $t$-tuple can witness at most $s-1$ events and the total number of events ${\cal D}_v$ is bounded by $(s-1) |T|^t \leq s d^{2t}$.
Since ${\cal D}_v$ can occur for at most $s d^{2t}$ vertices in any given run of the algorithm, we have $\sum_{v \in V} {\mathbb P}[{\cal D}_v] \leq s d^{2t}$. Thus \begin{eqnarray*} {\mathbb P}[\exists v \in V; {\cal B}_v~\mbox{occurs}] & \leq & \sum_{v \in V} {\mathbb P}[{\cal B}_v] = \sum_{v \in V} {\mathbb P}[{\cal B}_v \mid {\cal D}_v] {\mathbb P}[{\cal D}_v] \\ & \leq & 2t \sqrt{d} e^{-\frac{1}{24t} \sqrt{d}} \sum_{v \in V} {\mathbb P}[{\cal D}_v] \leq 2st \, d^{2t+\frac12} e^{-\frac{1}{24t} \sqrt{d}} \end{eqnarray*} which tends to $0$ as $d \rightarrow \infty$. \hfill $\Box$ \medskip Finally, we show how to prove the same result for trees whose degrees can be a constant fraction of $d$, independent of $t$. The following is a strengthened version of Theorem~\ref{thm:Kst-free}. \begin{theorem} \label{thm:Kst-free2} Let $G$ be a $K_{s,t}$-free graph ($s \geq t$) of minimum degree $d$. For any tree $T$ of size $|T| \leq \frac{1}{64} s^{-1/(t-1)} d^{t/(t-1)}$ and maximum degree $\Delta \leq \frac{1}{256} d$, Algorithm 2 finds an embedding of $T$ in $G$ with high probability. \end{theorem} \noindent {\bf Proof.}\, The proof is very similar to the proof of Theorem~\ref{thm:Kst-free}, with some additional ingredients. We can assume that $t \geq 5$, otherwise the result follows from Theorem~\ref{thm:Kst-free} directly. We focus on the new issues arising from the fact that degrees in the tree can exceed $O(d/t)$. For a fixed vertex $v$, consider again the set $M_v$ defined by $$ M_v = \{w \neq v: |N_+(v) \cap N_+(w)| > 2 s^\frac{1}{t-1} d^\frac{t-2}{t-1} \}. $$ We know from Lemma~\ref{lemma:Kst-extremal} that the number of edges from $M_v$ to $N_+(v)$ is bounded by $2td$. Before, we argued that since degrees are bounded by $O(d/t)$, the expected contribution of vertices embedded along edges from $M_v$ to $N_+(v)$ cannot be too large. The vertices in $T$ that could cause trouble are those embedded in $M_v$, whose degree is more than $O(d/t)$.
The contribution of the children of these vertices to $N_+(v)$ might be too large. Hence we need to argue that not too many vertices of this type can be embedded in $M_v$. First, observe that using Lemma~\ref{lemma:Kst-extremal} and the definition of $M_v$, the size of $M_v$ is bounded by $$ |M_v| \leq \frac{e(M_v,N_+(v))}{2 s^{\frac{1}{t-1}} d^{\frac{t-2}{t-1}}} \leq \frac{2td}{2 s^{\frac{1}{t-1}} d^{\frac{t-2}{t-1}}} \leq t d^{\frac{1}{t-1}}.$$ Similarly, if we denote by $Q$ the vertices of $T$ with degrees at least $\frac{1}{64 t} d$, the number of such vertices is bounded by $$ |Q| \leq \frac{2|T|}{\frac{1}{64 t} d} \leq \frac{\frac{1}{32} d^{\frac{t}{t-1}}}{\frac{1}{64 t} d} = 2 t d^{\frac{1}{t-1}}. $$ Our goal is to prove that not many vertices from $Q$ can be embedded in $M_v$. For that purpose, we also need to define a new type of ``bad event" ${\cal C}_v$ and ``dangerous event" ${\cal E}_v$. {\em The event ${\cal E}_v$ occurs if any vertex of the tree is embedded in $M_v$. The event ${\cal C}_v$ occurs if after the first vertex embedded in $M_v$, at least $8$ vertices from $Q$ are embedded in $M_v$.} Now, consider any tree vertex $q \in Q$. At the moment when we embed $q$, there are at least $\frac14 d$ choices, unless ${\cal B}_w$ happened for some vertex $w$ and the algorithm has failed already. Since $|M_v| \leq t d^{\frac{1}{t-1}}$, the probability of embedding $q$ into $M_v$, even conditioned on any previous history $\cal H'$, is $$ {\mathbb P}[f(q) \in M_v \mid {\cal H}'] \leq \frac{|M_v|}{\frac14 d} \leq \frac{4t d^{\frac{1}{t-1}}}{d} \leq \frac{4t}{d^{3/4}} $$ for $t \geq 5$. We condition on any history $\cal H$ up to the first vertex embedded in $M_v$, and estimate the probability that at least $8$ vertices from $Q$ are embedded in $M_v$ after this moment. For any particular $8$-tuple from $Q$, this probability is bounded by $(4t / d^{3/4})^{8} = (4t)^8 / d^{6}$. 
The number of possible $8$-tuples in $Q$ is at most $|Q|^{8} \leq (2t d^{1/(t-1)})^{8} \leq (2t)^8 d^2$ for $t \geq 5$. Hence, $$ {\mathbb P}[{\cal C}_v \mid {\cal H}] \leq \frac{(4t)^8}{d^{6}} (2t)^8 d^2 = \frac{8^8 t^{16}}{d^4}.$$ By averaging over all histories up to the moment when the first vertex is embedded in $M_v$, we get ${\mathbb P}[{\cal C}_v \mid {\cal E}_v] \leq 8^8 t^{16} / d^4$. Consider the number of events ${\cal E}_v$ that can ever happen. For any event ${\cal E}_v$, there is a witness vertex $x \in V(T)$, mapped to $f(x) = w \in M_v$. Observe that the definition of $w \in M_v$ is symmetric with respect to $(v,w)$, i.e., we also have $v \in M_w$. We know that $|M_w| \leq t d^{1 / (t-1)}$ for any $w \in V$, therefore each vertex of the tree can be witness to at most $t d^{1 / (t-1)}$ events ${\cal E}_v$. In total, we can have at most $|T| \cdot t d^{1/(t-1)} \leq d^{t/(t-1)} \cdot t d^{1/(t-1)} \leq t d^2$ events ${\cal E}_v$. Since ${\cal E}_v$ can occur for at most $t d^2$ vertices in any given run of the algorithm, we have $\sum_{v \in V} {\mathbb P}[{\cal E}_v] \leq t d^2$. Hence, \begin{eqnarray*} {\mathbb P}[\exists v \in V; {\cal C}_v~\mbox{occurs}] & \leq & \sum_{v \in V} {\mathbb P}[{\cal C}_v] = \sum_{v \in V} {\mathbb P}[{\cal C}_v \mid {\cal E}_v] {\mathbb P}[{\cal E}_v] \\ & \leq & \frac{8^8 t^{16}}{d^4} \sum_{v \in V} {\mathbb P}[{\cal E}_v] \leq \frac{8^8 t^{16}}{d^4} t d^2 \leq \frac{8^8 t^{17}}{d^2} \end{eqnarray*} which tends to $0$ for $d \rightarrow \infty$. So, with high probability, no event ${\cal C}_v$ happens. Given that ${\cal C}_v$ does not occur for any vertex, we can carry out the same analysis we used to prove Theorem~\ref{thm:Kst-free}. The only difference is that each vertex $v$ might have up to $9$ vertices from $Q$ embedded in $M_v$ ($8$ plus the first vertex ever embedded in $M_v$). 
Since the degrees in $T$ are bounded by $\frac{1}{256} d$, even if the children of these vertices were embedded arbitrarily, they can still occupy at most $\frac{9}{256} d$ vertices in $N_+(v)$. The number of vertices in $N_+(v)$ occupied through vertices in $L_v$ or the contribution of the children of vertices in $T$ with degree $O(d/t)$ that were embedded in $M_v$ can be analyzed just like in Theorem~\ref{thm:Kst-free}. Thus, with high probability, at most $\frac12 d + \frac{9}{256} d + 2t < \frac34 d$ vertices are occupied in any neighborhood and so at least $\frac14 d$ vertices are always available to embed any vertex of the tree. \hfill $\Box$ \section{Graphs of fixed girth} \label{section:bounded-girth} In this section we consider the problem of embedding trees into graphs which have no cycle of length shorter than $2k+1$ for some $k > 1$. (If the shortest cycle in a graph has length $2k+1$, such a graph is said to have {\em girth} $2k+1$.) We also assume that the minimum degree in our graph is at least $d$. It is easy to see that such $G$ must have $\Omega(d^k)$ vertices, because up to distance $k$ from any vertex $v$, $G$ looks locally like a tree. It is widely believed that graphs of minimum degree $d$, girth $2k+1$, and order $O(d^k)$ do exist for all fixed $k$ and large $d$. Such constructions are known when $k = 2,3$ and $5$. Since our graph might have order $O(d^k)$, we cannot aspire to embed trees of size larger than $O(d^k)$ in $G$. This is what we achieve. For the purpose of analysis, we need to slightly modify our previous algorithms. \paragraph{Algorithm 3.} {\em For each $v \in V$, fix a set of its $d$ neighbors $N_+(v)$. Assume that $T$ is a rooted tree with root $r$. Start by making $k-1$ random moves from an arbitrary vertex $v_1 \in V$, in each step choosing a random neighbor $v_{i+1} \in N_+(v_i)$. Embed the root of the tree at $f(r) = v_k$.
As long as $T$ is not completely embedded, take an arbitrary vertex $s \in V(T)$ which is embedded but its children are not. If $f(s)$ has enough available neighbors in $N_+(f(s))$ unoccupied by other vertices of $T$, embed the children of $s$ among these vertices uniformly at random. Otherwise, fail.} \medskip The following is our main result for graphs of girth $2k+1$. \begin{theorem} \label{thm:fixed-girth} Let $G$ be a graph of minimum degree $d$ and girth $2k+1$. Then for any constant $\epsilon \leq \frac{1}{2k}$, Algorithm 3 succeeds with high probability in embedding any tree $T$ of size at most $\frac14 \epsilon d^k$ and maximum degree $\Delta(T) \leq d - 2 \epsilon d - 2$. \end{theorem} To prove this theorem, we will generalize the analysis of the $C_4$-free case to allow embedding of substantially larger trees. The solution is to consider multiple levels of neighborhoods for each vertex. Starting from any vertex $v \in V(G)$, we have the property that up to distance $k$ from $v$, $G$ looks like a tree (otherwise we get a cycle of length at most $2k$). Consequently, for any vertex $w$, there can be at most one path of length $k$ from $w$ to $v$. Therefore, embedding a subtree whose root is placed at $w$ cannot impact the neighborhood of $v$ too much. In fact, neighbors to be used in the embedding are chosen only from a subset of $d$ neighbors $N_+(v)$. We can define an orientation of $G$ where each vertex has out-degree exactly $d$, by orienting all edges from $v$ to $N_+(v)$. (Some edges can be oriented both ways.) Then, branches of the tree $T$ are embedded along {\em directed paths} in $G$. \begin{definition} For a rooted tree $T$, with a natural top-to-bottom orientation, let $L_{k-1}(x)$ denote the set of descendants $k-1$ levels down from $x \in V(T)$. For a tree vertex $x \in V(T)$, denote by $X_{v,x}$ the number of vertices in $L_{k-1}(x)$ that end up embedded in $N_+(v)$, before the children of $f^{-1}(v)$ are embedded.
For a vertex $v \in V(G)$, denote by $X_v$ the total number of vertices in $T$ that end up embedded in $N_+(v)$, before the children of $f^{-1}(v)$ are embedded. \end{definition} We extend $T$ to a larger rooted tree $T^*$ by adding a path of length $k-1$ above the root of $T$ and making the endpoint of this path the root of $T^*$. Observe that our embedding algorithm proceeds effectively as if embedding $T^*$, except the first $k-1$ steps do not occupy any vertices of $G$. Each embedded vertex $y \in V(T)$ is a $(k-1)$-descendant of some $x \in V(T^*)$ and hence $V(T) = \bigcup_{x \in V(T^*)} L_{k-1}(x)$. By summing up the contributions over $x \in V(T^*)$, we get $$ X_v = \sum_{x \in V(T^*)} X_{v,x}.$$ Our goal is to apply tail estimates on $X_v$ in order to bound the probabilities of ``bad events". Just like before, we need to be careful in summing up these probabilities, since the size of the graph might be too large for a union bound. We start ``watching out" for the bad event ${\cal B}_v$ only after a ``dangerous event" ${\cal D}_v$ occurs. We also stop our algorithm immediately after the first bad event happens. {\em Event ${\cal B}_v$ occurs when $X_v > 2 \epsilon d + 2$.} Event {\em ${\cal D}_v$ occurs whenever at least two vertices in $N_+(v)$ can be reached by directed paths of length at most $k-1$, avoiding $v$, from the embedding of $T^*$. By the embedding of $T^*$, we also mean the vertices visited in the first $k-1$ steps of the algorithm, which are not really occupied.} Suppose $q_1,q_2$ are the first two vertices in $N_+(v)$ that can be reached by directed paths of length at most $k-1$, avoiding $v$, from the embedding of $T^*$. Then we define a modified random variable $\tilde{X}_{v,x}$ as the number of vertices in $L_{k-1}(x)$, which are embedded in $N_+(v) \setminus \{q_1,q_2\}$, but not through $v$ itself. In other words, these random variables count the vertices occupied in $N_+(v)$, not counting $q_1$ and $q_2$. 
Observe that $X_v \leq \sum_{x \in V(T^*)} \tilde{X}_{v,x} + 2$. \begin{lemma} \label{lemma:girth-occupy} Assume the girth of $G$ is at least $2k+1$. Fix an ordering of the vertices of $T^*$ starting from the root, $(x_1, x_2, x_3, \ldots)$, as they are processed by the algorithm. Let $\cal H$ be a fixed history of running the algorithm until two vertices $q_1, q_2 \in N_+(v)$ can be reached from an embedded vertex by a directed path (avoiding $v$) of length at most $k-1$. Then for any vertex $x_i \in V(T^*)$, $\tilde{X}_{v,x_i}$ is a $0/1$ random variable such that $$ {\mathbb P}[\tilde{X}_{v,x_i}=1 \mid {\cal H}, \tilde{X}_{v,x_1}, \tilde{X}_{v,x_2}, \ldots, \tilde{X}_{v,x_{i-1}}] \leq \frac{|L_{k-1}(x_i)|}{(d-2\epsilon d - 2)^{k-1}}. $$ \end{lemma} \noindent {\bf Proof.}\, First, note that any vertex $x_i$ embedded during the history $\cal H$ has $\tilde{X}_{v,x_i}=0$ (since the only vertices in $N_+(v)$ possibly reachable within $k-1$ steps from $f(x_i)$ are $q_1$ and $q_2$). Therefore we can assume that the embedding of $x_i$ together with the embedding of the subtree of its descendants in $T^*$ is still undecided at the end of ${\cal H}$. Let $\cal K$ denote the event that $x_i$ is embedded so that there is a directed path of length exactly $k-1$ from $f(x_i)$ to $N_+(v)$, which avoids $v$ and has endpoint in $N_+(v)$ other than $q_1, q_2$. Observe that this is the only way $\tilde{X}_{v,x_i}$ could be non-zero. Indeed, if $\tilde{X}_{v,x_i}=1$, then there is a branch of tree $T^*$ of length $k-1$ from $x_i$ to some $y$ that was mapped to a path from $f(x_i)$ to $N_+(v)$ such that the vertex next to last is not $v$. However, such a path from $f(x_i)$ to $N_+(v)$, if it exists, is unique. If we had two different paths like this, we could extend them to two paths of length $k$ between $f(x_i)$ and $v$, which contradicts the girth assumption. Note that $\cal K$ occurs only if this unique path leads to a vertex of $N_+(v)$ other than $q_1$ or $q_2$.
Also, at most one vertex $y \in L_{k-1}(x_i)$ can be embedded in $N_+(v)$. The variable $\tilde{X}_{v,x_i}$ is equal to $1$ when this happens for some $y \in L_{k-1}(x_i)$, and $0$ otherwise. We bound the probability that $\tilde{X}_{v,x_i} = 1$, conditioned on $({\cal H}, \tilde{X}_{v,x_1}, \ldots, \tilde{X}_{v,x_{i-1}})$. In fact, let's condition even more strongly on a fixed embedding $\cal E$ of all vertices of $T$ except for the descendants of $x_i$. We also assume that $\cal E$ satisfies $\cal K$, i.e., $f(x_i)$ is at distance exactly $k-1$ from $N_+(v)$, since otherwise $\tilde{X}_{v,x_i}=0$. We claim that any such embedding implies the values of $\tilde{X}_{v,x_1}, \ldots, \tilde{X}_{v,x_{i-1}}$. For vertices $x_j$ such that $L_{k-1}(x_j)$ does not intersect the subtree of $x_i$, this is clear because the embedding of these vertices is fixed. However, even if $L_{k-1}(x_j)$ intersects the subtree of $x_i$, $\tilde{X}_{v,x_j}$ is still determined, since none of these vertices can be embedded into $N_+(v)$. Indeed, any descendant of $x_i$ which is in $L_{k-1}(x_j)$ must also be in $L_{k'}(x_i)$ for some $k' < k-1$. If the embedding of $L_{k'}(x_i)$ intersects $N_+(v)$, we obtain that there are two paths from $f(x_i)$ to $v$, one of length $k$ and another of length $k'+1<k$. Together they form a cycle of length shorter than the girth, a contradiction. Now fix a vertex $y \in L_{k-1}(x_i)$. Every vertex $x_j \in T^*$, when embedded, chooses uniformly at random one of the available neighbors of the vertex of $G$ in which its parent has been embedded. As long as no bad event happened so far (otherwise the algorithm would have terminated), there are at least $d-2\epsilon d-2$ candidates available for $f(x_j)$. Therefore, each particular vertex has probability at most $1/(d-2\epsilon d-2)$ of being chosen to be $f(x_j)$. The probability that $f(y) \in N_+(v)$ is the probability that our embedding follows a particular path of length $k-1$.
By the above discussion, this probability is at most $1/(d-2\epsilon d-2)^{k-1}$. (Note that by our conditioning, this path might be already blocked by the placement of other vertices; in such a case, the probability is actually $0$.) Using the union bound, we have $$ {\mathbb P}[\tilde{X}_{v,x_i}=1 \mid {\cal E}] \leq \frac{|L_{k-1}(x_i)|}{(d - 2\epsilon d - 2)^{k-1}}. $$ Since the right hand side of this inequality is a constant, independent of the embedding, we get the same bound conditioned on $({\cal H}, \tilde{X}_{v,x_1}, \ldots, \tilde{X}_{v,x_{i-1}}, \cal K)$ and hence also conditioned on $({\cal H}, \tilde{X}_{v,x_1}, \ldots, \tilde{X}_{v,x_{i-1}})$. \hfill $\Box$ Now we are ready to use our supermartingale tail estimate from Proposition~\ref{thm:supermartingale} to bound the probability of a bad event. \begin{lemma} Assume $\epsilon \leq \frac{1}{2k}$ and $|T| \leq \frac14 \epsilon d^k$. For any vertex $v \in V(G)$, condition on the dangerous event ${\cal D}_v$. Then for large enough $d$, the probability that the bad event ${\cal B}_v$ happens is $$ {\mathbb P}[{\cal B}_v \mid {\cal D}_v] \leq e^{-\epsilon d / 3}.$$ \end{lemma} \noindent {\bf Proof.}\, The bad event means that $X_v > 2 \epsilon d + 2$. As before, first we condition on any history $\cal H$ up to the point when ${\cal D}_v$ happens. At this point, two vertices $q_1,q_2 \in N_+(v)$ are within distance $k-1$ of the embedding of $T^*$ constructed so far. We consider these two vertices effectively occupied. Our goal is to prove that the number of additional occupied vertices in $N_+(v)$ is small, namely $\sum_{i=1}^{|T^*|} \tilde{X}_{v,x_i} \leq 2 \epsilon d$. 
By Lemma~\ref{lemma:girth-occupy}, we know that $$ {\mathbb P}[\tilde{X}_{v,x_i} = 1 \mid {\cal H}, \tilde{X}_{v,x_1}, \ldots, \tilde{X}_{v,x_{i-1}}] \leq \frac{|L_{k-1}(x_i)|}{(d - 2\epsilon d - 2)^{k-1}}.$$ Therefore the expectation of $\tilde{X}_{v}=\sum_{i=1}^{|T^*|} \tilde{X}_{v,x_i} $ is bounded by $$ \mathbb{E}[\tilde{X}_{v}]=\sum_{i=1}^{|T^*|} \mathbb{E}[\tilde{X}_{v,x_i}] \leq \sum_{i=1}^{|T^*|} \frac{|L_{k-1}(x_i)|}{(d-2\epsilon d - 2)^{k-1}} \leq \frac{|T|}{(d-2\epsilon d - 2)^{k-1}} < \frac{4|T|}{d^{k-1}} \leq \epsilon d. $$ Here we used that $\epsilon \leq \frac{1}{2k}$, $d$ large enough, and $|T| \leq \frac14 \epsilon d^k$. So we can set $\mu = \epsilon d$, $\delta = 1$ and use Proposition~\ref{thm:supermartingale} to conclude that, $$ {\mathbb P}\big[ \tilde{X}_{v} > 2 \epsilon d \mid {\cal H}\big] \leq e^{-\epsilon d / 3}. $$ The same holds when we condition on the event ${\cal D}_v$, which is the disjoint union of all such histories $\cal H$. Consequently, $X_v \leq \tilde{X}_{v} + 2 \leq 2 \epsilon d + 2$ with high probability, which concludes the proof. \hfill $\Box$ \medskip To finish the proof of Theorem~\ref{thm:fixed-girth}, we show that with high probability, ${\cal B}_v$ does not happen for any vertex $v \in V$. First, let's examine how many events ${\cal D}_v$ can possibly occur for a given run of the algorithm. Every vertex $v$ for which ${\cal D}_v$ happens has a ``witness pair" of vertices in $N_+(v)$ satisfying the condition that they can be reached by directed paths of length at most $k-1$ from the embedding of $T^*$. The number of such vertices is at most $|T^*| d^{k-1} \leq d^{2k}$. Also, observe that the same pair can be a witness to at most $1$ event ${\cal D}_v$, otherwise we have a 4-cycle in $G$ which contradicts the high girth property. Hence the number of possible witness pairs is at most $$ {d^{2k} \choose 2} \leq d^{4k} $$ and each event ${\cal D}_v$ has a unique witness pair. 
Therefore, the expected number of events ${\cal D}_v$ is $$ \sum_v {\mathbb P}[{\cal D}_v] \leq d^{4k}.$$ Now we bound the probability that any bad event ${\cal B}_v$ occurs. \begin{eqnarray*} {\mathbb P}[\exists v \in V; {\cal B}_v~\mbox{occurs}] & \leq & \sum_{v \in V} {\mathbb P}[{\cal B}_v] = \sum_{v \in V} {\mathbb P}[{\cal B}_v \mid {\cal D}_v] {\mathbb P}[{\cal D}_v] \\ & \leq & e^{-\epsilon d / 3} \sum_{v \in V} {\mathbb P}[{\cal D}_v] \leq d^{4k} e^{-\epsilon d / 3}. \end{eqnarray*} For a constant $k$ and $d \rightarrow \infty$, this probability tends to $0$. \hfill $\Box$ \section{Random graphs and the property ${\cal P}(d,k,t)$} \label{section:bounded-paths} The main objective of this section is to obtain nearly optimal tree embedding results for random graphs. In our analysis, we do not actually require true randomness. The important condition that $G$ has to satisfy is a certain ``pseudorandomness" property, stated below. Roughly speaking, the property requires that there are not too many paths between any pair of vertices, compared to how many paths a random graph would have. \paragraph{Property ${\cal P}(d,k,t)$.} Let $d, k$ and $t$ be positive integers. A graph $G$ on $n$ vertices satisfies property ${\cal P}(d,k,t)$ if \begin{enumerate} \item $G$ has minimum degree at least $d$. \item For any $u,v \in V$, the number of paths of length $k$ from $u$ to $v$ is $$ P_{k}(u,v) \leq d^{1/4}.$$ \item For any $u,v \in V$, the number of paths of length $k+1$ from $u$ to $v$ is $$ P_{k+1}(u,v) \leq \frac{d^{k+1}}{t}.$$ \end{enumerate} \paragraph{Remark.} In the second condition, $d^{1/4}$ is somewhat arbitrary. For $k$ constant, it would be enough to require $P_{k}(u,v) = o(d / \log d)$. However, having a larger gap between $P_{k}(u,v)$ and $d$ allows our framework to work for larger (non-constant) values of $k$. 
Observe that $d$-regular graphs of girth $2k+1$ satisfy ${\cal P}(d,k,t=d^k)$, because there is at most one path of length $k$ between any pair of vertices. Thus our embedding results for graphs satisfying this property imply similar statements for regular graphs of fixed girth, although somewhat weaker than those we presented in Section~\ref{section:bounded-girth}. Our main focus in this section is on random graphs. \begin{proposition} \label{random} A random graph $G_{n,p}$ where $\frac12 \geq p \geq n^{a-1}$, $a > 0$ constant, satisfies almost surely ${\cal P}(d,k,t)$ with $t = (1-o(1)) n$, $d=(1-o(1))pn$ and $k \geq 1$ chosen so that $$\frac{1}{4} (pn)^{-3/4} < p^k n^{k-1} \leq \frac{1}{4} (pn)^{1/4}.$$ \end{proposition} \noindent {\bf Proof.}\, Since we assume $pn \geq n^a$, we have $k \leq 1 + 1/a$, otherwise $p^k n^{k-1} = p (pn)^{k-1} \geq pn \gg (pn)^{1/4}$, contradicting our choice of $k$. Hence, $k$ is a constant. The degree of every vertex in $G_{n,p}$ is a binomially distributed random variable with parameters $n$ and $p$. Thus, by standard tail estimates (Chernoff bounds), the probability that it is smaller than $$ d = pn - \sqrt{pn} \log n = (1-o(1)) pn $$ is $e^{-\Omega(\log^2 n)} = o(1/n)$.
Therefore with high probability the minimum degree of $G_{n,p}$ is at least $d$. The expected number of paths of length $k$ from $u$ to $v$ is $$ {\mathbb E}[P_{k}(u,v)] \leq p^k n^{k-1} \leq \frac{1}{4} (pn)^{1/4} $$ by our choice of $k$. We use the Kim-Vu inequality \cite{KimVu} to argue that $P_k(u,v)$ is strongly concentrated. Let $t_e$ be the indicator variable of edge $e$. We can write $$ P_k(u,v) = \sum_{P} \prod_{e \in P} t_e $$ where $P$ runs over all possible paths of length $k$ between $u$ and $v$. Clearly, this is a multilinear polynomial of degree $k$. Let $\frac{\partial}{\partial t_I} P_k(u,v)$ denote the partial derivative of $P_k(u,v)$ with respect to all variables in the set $I$. Using the notation of \cite{KimVu}, we set $$ E_i = \max_{|I|=i} {\mathbb E}\left[\frac{\partial}{\partial t_I} P_k(u,v)\right],$$ $E = \max_{i \geq 0} E_i$ and $E' = \max_{i \geq 1} E_i$. In particular, $E_0$ is the expected value of $P_k(u,v)$. The Kim-Vu inequality states that $$ {\mathbb P}\big[|P_k(u,v) - E_0| > a_k \lambda^k \sqrt{E'E}\big] = O\big(e^{-\lambda + (k-1) \log n}\big) $$ for any $\lambda > 1$ and $a_k = 8^k \sqrt{k!}$. In our case, ${\mathbb E}\left[\frac{\partial}{\partial t_I} P_k(u,v)\right]$ can be seen as the expected number of $u$-$v$ paths of length $k$ with $i$ edges already fixed to be on the path. For any choice of such $i$ edges, if $i<k$, we have at most $n^{k-i-1}$ choices to complete the path and the probability that such a path appears is $p^{k-i}$. Hence, $E_i \leq p^{k-i} n^{k-i-1}$ for $i<k$. For $i=k$, we have $E_k = 1$. Hence, $E = \max_{i \geq 0} E_i \leq p^k n^{k-1} \leq \frac14 (pn)^{1/4}$ and $E' = \max_{i \geq 1} E_i \leq 1$. By the Kim-Vu inequality with $\lambda = (k+2) \log n$, we have $$ {\mathbb P}\big[|P_k(u,v) - E_0| > a'_k (pn)^{1/8} \log^k n\big] = O\big(e^{-3 \log n}\big) = O\big(n^{-3}\big),$$ where $a'_k=(k+2)^k a_k=8^k (k+2)^k \sqrt{k!}$. 
Thus, we get for all pairs $(u,v)$ that with high probability $$P_k(u,v) \leq E_0 + a'_k (pn)^{1/8} \log^k n \leq \frac14 (pn)^{1/4} +a'_k (pn)^{1/8} \log^k n <\frac12 (pn)^{1/4} \leq d^{1/4}.$$ To estimate $P_{k+1}(u,v)$, we use a similar argument. Again, this is a multilinear polynomial $P_{k+1}(u,v) = \sum_P \prod_{e \in P} t_e$, this time of degree $k+1$. The expectation is $E_0 = {\mathbb E}[P_{k+1}(u,v)] \leq p^{k+1} n^k$. Further, we get $E_i \leq p^{k+1-i} n^{k-i}$ for $i < k$, $E_{k+1} = 1$ and therefore, $E = \max_{i \geq 0} E_i = E_0$. Since our choice of $k$ implies that $E_0=(1-o(1))p^{k+1} n^k > (pn)^{1/4}/5$, we also have $$E' = \max_{i \geq 1} E_i = \max\big(p^{k} n^{k-1}, 1\big) \leq 5E_0/(pn)^{1/4}.$$ By Kim-Vu with $\lambda = (k+2) \log n$, $$ {\mathbb P}\big[|P_{k+1}(u,v) - E_0| > a'_k \sqrt{EE'} \log^k n\big] = O\big(e^{-3 \log n}\big) = O\big(n^{-3}\big),$$ where $a'_k =(k+2)^ka_k$ is a constant. Note that $a'_k \sqrt{EE'} \log^k n \leq 5a'_k\log^k n E_0/(pn)^{1/8}=o(E_0)$. Recall also that $d=(1-o(1))pn$ and $t=(1-o(1))n$. Thus, for all pairs $(u,v)$ with high probability $$\hspace{3.5cm} P_{k+1}(u,v) \leq E_0 + o(E_0) \leq (1+o(1)) p^{k+1} n^k \leq d^{k+1} / t \hspace{3.5cm} \Box$$ \paragraph{Algorithm 4.} {\em Start by making $k$ random moves from an arbitrary vertex $v_0 \in V$, in each step choosing a random neighbor $v_{i+1} \in N(v_i)$. Embed the root of the tree $r \in T$ at $f(r) = v_k$. As long as $T$ is not completely embedded, take an arbitrary vertex $u \in V(T)$ which is embedded but its children are not. If $f(u)$ has enough available neighbors in $N(f(u))$ unoccupied by other vertices of $T$, embed the children of $u$ one by one by choosing vertices randomly from the available neighbors of $f(u)$. Otherwise, fail.} \medskip The following is our main theorem. \begin{theorem} \label{thm:bounded-paths} Let $G$ be a graph on $n$ vertices satisfying property ${\cal P}(d,k,t)$ for $d \geq \log^8 n$ and $k \leq \log n$, and let $\epsilon, \delta > 0$ be such that \begin{equation} \label{eq:eps-delta} (2k \epsilon)^{1/k} + \delta + \frac{1}{k} \leq 1. \end{equation} Then for any tree $T$ of maximum degree at most $\delta d$ and size at most $\epsilon t$, the algorithm above finds an embedding of $T$ with high probability. \end{theorem} \noindent This result has an interesting consequence already for $k=1$.
Let $G$ be a graph on $n$ vertices with minimum degree $pn$ such that every two distinct vertices of $G$ have at most $O(p^2 n)$ common neighbors. For $p \gg n^{-1/2}$ there are several known explicit constructions of such graphs and their properties were extensively studied by various researchers (see, e.g., survey \cite{KS1} and its references). Our theorem implies nearly optimal embedding results for such $G$ and shows that it contains every tree of order $\Omega(n)$ with maximum degree $\Omega(pn)$. Considering the extreme values of $\epsilon$ and $\delta$ that satisfy (\ref{eq:eps-delta}), we obtain embeddings of \begin{itemize} \item trees with maximum degree at most a constant fraction of $d$ (e.g., $\frac{1}{4} d$) and size $2^{-\Theta(k)} t$. \item trees with maximum degree $O(d/k)$ and size $O(t/k)$. \end{itemize} Combining Theorem \ref{thm:bounded-paths} with Proposition \ref{random}, we see that for a random graph $G_{n,p}$ with $p = n^{a-1}$ and constant $a > 0$ we can use $d \simeq pn$, $t \simeq n$ and $k \simeq 1/a$. Therefore for such $p$ we are embedding trees whose size and maximum degree are proportional to the order and minimum degree of $G_{n,p}$. This is clearly tight up to constant factors. \medskip Before proving the theorem, we outline the strategy of our proof. Our goal is to argue that there is some $\alpha > 0$ such that no more than $\alpha d$ vertices are ever occupied in any neighborhood $N(v)$, including vertices embedded through $v$ itself. Again, we consider the number $X_v$ of vertices in $N(v)$ occupied by vertices of $T$, other than those embedded as children of $v$. The ``bad event" ${\cal B}_v$ occurs when $X_v > d/k$ and we stop the algorithm immediately after the first such event. At most $\delta d$ vertices can be embedded as children of $v$, therefore assuming that no bad event happens, at most $(1/k + \delta) d$ vertices are eventually occupied in any neighborhood $N(v)$.
Since $1/k + \delta \leq 1 - (2 k \epsilon)^{1/k}$ by (\ref{eq:eps-delta}), we can set $$\alpha = 1 - (2 k \epsilon)^{1/k}.$$ If no bad event occurs, any vertex of $T$ has at least $(1-\alpha) d$ choices available for its embedding. If a bad event occurs, we can assume that the algorithm fails. We estimate the probability of ${\cal B}_v$ by studying the random variable $X_v$. The expectation ${\mathbb E}[X_v]$ is bounded relatively easily, since this is determined by the number of possible ways that a vertex of $T$ can reach the neighborhood $N(v)$. This can be bounded using our property ${\cal P}(d,k,t)$. The more challenging part of the proof is to argue that the probability of ${\cal B}_v$ is very small, since the contributions from different vertices of the tree are not independent. We handle this issue by dividing the contributions into blocks of variables which are effectively independent. We write $X_v = \sum_{j=0}^{k-1} Y_{v,j}$ and use a supermartingale tail estimate to bound each $Y_{v,j}$. The following definitions are similar to those in Section~\ref{section:bounded-girth}. \begin{definition} For a rooted tree $T$, with a natural top-to-bottom orientation, let $L_{k-1}(x)$ denote the set of descendants $k-1$ levels down from $x \in V(T)$. For a vertex $v \in V(G)$, denote by $X_v$ the number of vertices in $T$ that end up embedded in $N(v)$, before the children of $f^{-1}(v)$ are embedded. For a tree vertex $x \in V(T)$, denote by $X_{v,x}$ the number of vertices in $L_{k-1}(x)$ that end up embedded in $N(v)$, before the children of $f^{-1}(v)$ are embedded. \end{definition} As in Section~\ref{section:bounded-girth}, we extend $T$ to a larger tree $T^*$ by adding a path of $k$ auxiliary vertices above the root. Each embedded vertex $y$ is a $(k-1)$-descendant of some $x \in V(T^*)$ and hence $V(T) = \bigcup_{x \in V(T^*)} L_{k-1}(x)$.
By summing up the contributions over $x \in V(T^*)$, we get $$ X_v = \sum_{x \in V(T^*)} X_{v,x}.$$ \begin{lemma} \label{lemma:1-subtree} Assume $G$ satisfies property ${\cal P}(d,k,t)$ and fix a tree vertex $x \in V(T)$. Then $X_{v,x}$ is bounded by $d^{1/4}$ with probability $1$, and $$ {\mathbb E}[X_{v,x} \mid {\cal T}] \leq (1-\alpha)^{-k} |L_{k-1}(x)| \frac{d}{t} $$ where $\cal T$ is any fixed embedding of the entire tree $T$ except for the vertex $x$ and its descendants. \end{lemma} \noindent {\bf Proof.}\, Assume that conditioned on $\cal T$, the parent $q$ of $x$ is embedded at $f(q) = w \in V(G)$. The only way that a vertex $y \in L_{k-1}(x)$ can end up in $N(v)$ (but not through $v$) is when some branch of the tree $T$ from $q$ to $y$ is embedded in a path of length $k$ from $w$ to $N(v)$, avoiding $v$. Such paths can be extended uniquely to paths of length $k+1$ from $w$ to $v$. We know that the number of such paths is bounded by $P_{k+1}(w,v) \leq d^{k+1} / t$. Since there are at least $(1-\alpha) d$ choices when we embed each vertex, the probability of following a particular path of length $k$ is at most $\frac{1}{((1-\alpha)d)^{k}}$. By the union bound, the probability that $y$ is embedded in $N(v)$ is $$ {\mathbb P}[f(y) \in N(v) \mid {\cal T}] \leq \frac{1}{((1-\alpha)d)^{k}} P_{k+1}(w,v) \leq \frac{d}{(1-\alpha)^{k} t}.$$ Finally, $$ {\mathbb E}[X_{v,x} \mid {\cal T}] = \sum_{y \in L_{k-1}(x)} {\mathbb P}[f(y) \in N(v) \mid {\cal T}] \leq \frac{|L_{k-1}(x)| d}{(1-\alpha)^{k} t}.$$ Similarly, the number of paths of length $k-1$ from any vertex $u$ to $N(v)$, avoiding $v$, is the same as the number $P_k(u,v)$ of paths of length $k$ from $u$ to $v$. Even if all these $P_{k}(u,v)$ paths are used in the embedding of $T$, the vertices in $L_{k-1}(x)$ cannot occupy more than $P_{k}(u,v)$ neighbors of $v$. Therefore, we can always bound $X_{v,x} \leq P_{k}(u,v) \leq d^{1/4}$. 
\hfill $\Box$ Next, we want to argue about the concentration of $X_v = \sum_{x \in V(T^*)} X_{v,x}$. Since the placements of different vertices in $T$ are highly correlated, it is not clear whether any concentration result applies directly to this sum. However, we can circumvent this obstacle by partitioning $V(T^*)$ into subsets where the dependencies can work only in our favor. \begin{definition} Let $r^*$ be the root of $T^*$, then every vertex of $T^*$ is in $L_j(r^*)$ for some $j$. Define a partition $V(T^*) = W_0 \cup W_1 \cup \ldots \cup W_{k-1}$ by $$ W_j = \bigcup_{j'=j\pmod k} L_{j'}(r^*).$$ For each vertex $v \in V(G)$ and $0 \leq j < k$, define $$ Y_{v,j} = \sum_{x \in W_j} X_{v,x}.$$ \end{definition} Obviously, we have $X_v = \sum_{x \in V(T^*)} X_{v,x} = \sum_{j=0}^{k-1} Y_{v,j}$. In the following, we argue that each $Y_{v,j}$ has a very small one-sided tail. \begin{lemma} \label{lemma:one-class} Let $\ell_j = \sum_{x \in W_j} |L_{k-1}(x)|$. Then ${\mathbb E}[Y_{v,j}] \leq (1-\alpha)^{-k} \frac{\ell_j}{t} d$ and $$ {\mathbb P}\left[Y_{v,j} > (1-\alpha)^{-k} \left( \frac{\ell_j}{t} + \frac{\epsilon}{k} \right) d \right] < e^{-\frac{\epsilon d^{3/4}}{3k^2 (1-\alpha)^{k}}}.$$ \end{lemma} \noindent {\bf Proof.}\, By Lemma~\ref{lemma:1-subtree}, we know that $ {\mathbb E}[X_{v,x} \mid {\cal T}] \leq (1-\alpha)^{-k} |L_{k-1}(x)| \frac{d}{t}$ where $\cal T$ is any fixed embedding of $T$ except $x$ and its subtree. Therefore, the same also holds without any conditioning. By taking a sum over all $x \in W_j$, $$ {\mathbb E}[Y_{v,j}] = \sum_{x \in W_j} {\mathbb E}[X_{v,x}] \leq (1-\alpha)^{-k} \sum_{x \in W_j} |L_{k-1}(x)| \frac{d}{t} = (1-\alpha)^{-k} \ell_j \frac{d}{t}.$$ For a tail estimate, we use Proposition~\ref{thm:supermartingale}. Write the vertices of $W_j= \{x_1, x_2, \ldots, x_r \}$ in order as they are embedded by the algorithm and write $X_i = d^{-1/4} X_{v,x_i}$. 
The important observation is that the values of $X_1, X_2, \ldots, X_{i-1}$ are determined if we are given the embedding of the tree $T$ except for the vertex $x_i$ and its subtree (let's denote this condition by ${\cal T}_i$). This holds because $X_1, \ldots, X_{i-1}$ depend only on the embedding of vertices $x_1, \ldots, x_{i-1}$ and their subtrees of depth $k-1$. Since all these vertices are either at least $k$ levels above $x_i$ in the tree $T$, or on the same level or below (but not in the subtree of $x_i$), their subtrees of depth $k-1$ are disjoint from the subtree of $x_i$. Hence, conditioning on ${\cal T}_i$ is stronger than conditioning on $X_1,\ldots,X_{i-1}$. Since ${\mathbb E}[X_i \mid {\cal T}_i] = d^{-1/4} {\mathbb E}[X_{v,x_i} \mid {\cal T}_i] \leq (1-\alpha)^{-k} |L_{k-1}(x_i)| \frac{d^{3/4}}{t}$, we can also write $$ {\mathbb E}[X_i \mid X_1,\ldots,X_{i-1}] \leq (1-\alpha)^{-k} |L_{k-1}(x_i)| \frac{d^{3/4}}{t}. $$ The range of $X_{v,x_i}$ is $[0, d^{1/4}]$, hence $X_i \in [0,1]$. Summing over $W_j$, we have $\sum {\mathbb E}[X_i] \leq (1-\alpha)^{-k} \ell_j \frac{d^{3/4}}{t}$, so let's set $\mu = (1-\alpha)^{-k} \ell_j \frac{d^{3/4}}{t}$. By Proposition~\ref{thm:supermartingale}, $$ {\mathbb P}[\sum X_i > (1 + \epsilon') \mu] < e^{-\frac{\epsilon'^2 \mu}{3}}.$$ Using that $\ell_j \leq |T| \leq \epsilon t$, for $\epsilon' = \frac{t \epsilon}{\ell_j k}$, we get $$ {\mathbb P}\left[\sum X_i > \mu + \frac{\epsilon}{k}(1-\alpha)^{-k} d^{3/4} \right] < e^{-\frac{t \epsilon^2 d^{3/4}}{3 \ell_j k^2 (1-\alpha)^{k}}} < e^{-\frac{\epsilon d^{3/4}}{3k^2 (1-\alpha)^k}}. $$ Since $Y_{v,j} = d^{1/4} \sum X_i$, this proves the claim of the lemma. \hfill $\Box$ \begin{lemma} \label{lemma:gen-bad-event} Let ${\cal B}_v$ denote the ``bad event" that $X_v > d/k$. 
Assume that (\ref{eq:eps-delta}) holds and $|T| \leq \epsilon t$. Then for any fixed vertex $v \in V$ the bad event happens with probability $$ {\mathbb P}[{\cal B}_v] < k e^{-\frac{d^{3/4}}{6 k^3}}.$$ \end{lemma} \noindent {\bf Proof.}\, We have $X_v = \sum_{j=0}^{k-1} Y_{v,j}$. Recall that $(1-\alpha)^k = 2k \epsilon$. By Lemma~\ref{lemma:one-class}, $$ {\mathbb P}\left[Y_{v,j} > (1-\alpha)^{-k} \left( \frac{\ell_j}{t} + \frac{\epsilon}{k} \right) d \right] < e^{-\frac{\epsilon d^{3/4}}{3 k^2 (1-\alpha)^{k}}} = e^{-\frac{d^{3/4}}{6 k^3}} $$ for each $j=0,1,2,\ldots,k-1$. By the union bound, the probability that any of these events happens is at most $k e^{-{d^{3/4}}/{6 k^3}}$. If none of them happen, we have $$ X_v = \sum_{j=0}^{k-1} Y_{v,j} \leq (1-\alpha)^{-k} \sum_{j=0}^{k-1} \left( \frac{\ell_j}{t} + \frac{\epsilon}{k} \right) d = (1-\alpha)^{-k} \left( \frac{|T|}{t} + \epsilon \right) d \leq (1-\alpha)^{-k} \cdot 2 \epsilon d = \frac{d}{k}.$$ \hfill $\Box$ To finish the proof of Theorem~\ref{thm:bounded-paths}, we note that $d \geq \log^8 n$ and $k \leq \log n$. The probabilities of bad events ${\cal B}_v$ are bounded by $k e^{-d^{3/4} / 6 k^3} \leq (\log n) \ e^{-\frac16 \log^3 n} \leq 1/n^{\log n}$. There are $n$ potential bad events, so none of them occurs with high probability. \section{Concluding remarks} In this paper we have shown that a very simple randomized algorithm can efficiently find tree embeddings with near-optimal parameters, surpassing some previous results achieved by more involved approaches. Here are a few natural questions which remain open. \begin{itemize} \item It would be interesting to extend our results from graphs of girth $2k+1$ to graphs without cycles of length $2k$. For $k=3$, this follows from our work combined with a result of Gy\"{o}ri. In \cite{G97} he proved that every bipartite $C_6$-free graph can also be made $C_4$-free by deleting at most half of its edges.
Therefore given a $C_6$-free graph with minimum degree $d$, we can first take its maximum bipartite subgraph. This will decrease the number of edges by at most a factor of two. Then we can use the above mentioned result of Gy\"{o}ri to obtain a $C_4$-free and $C_6$-free graph which has at least a quarter of the original edges, i.e., average degree at least $d/4$. In this graph we can find a subgraph where the minimum degree is at least $d/8$ ($1/2$ of average degree). Since it is bipartite, this subgraph has no cycles of length shorter than $7$. This shows that every $C_6$-free graph $G$ with minimum degree $d$ contains a subgraph $G'$ of girth at least $7$ whose minimum degree is a constant fraction of $d$. Using our result, we can embed in $G'$ (and hence also in $G$) every tree of size $O(d^3)$ and maximum degree $O(d)$. More generally, it is proved in \cite{KO05} that any $C_{2k}$-free graph contains a $C_4$-free subgraph with at least a $\frac{1}{2(k-1)}$-fraction of its original edges. Moreover it is conjectured in \cite{KO05} that any $C_{2k}$-free graph contains a subgraph of girth $2k+1$ with at least an $\epsilon_k$-fraction of the edges. If this conjecture is true, it shows that the tree embedding problems for $C_{2k}$-free graphs and graphs of girth $2k+1$ are equivalent up to constant factors. \item For random graphs $G_{n,p}$ our approach works most efficiently when the edge probability $p=n^{a-1}$ for some constant $a>0$. Nevertheless, it can be used to embed trees in sparser random graphs as well. By analyzing more carefully the application of the Kim-Vu inequality, one can show that for every fixed $\epsilon > 0$, a random graph with edge probability $p \geq e^{\log^{1/2+\epsilon} n}/n$ satisfies ${\cal P}(d,k,n/2)$ with $d \simeq pn$ and $k \simeq \log_d n$. However, when $p= n^{-1+o(1)}$ we have $k \rightarrow \infty$ and therefore both the maximum degree and the size of the tree we can embed are only an $o(1)$-fraction of the optimum.
It would be extremely interesting to show that for edge probability $p=n^{-1+o(1)}$, perhaps even $p=c/n$ for some large constant $c>0$, the random graph $G_{n,p}$ still contains every tree with maximum degree $O(pn)$ and size $O(n)$. It would also be nice to weaken our pseudorandomness property ${\cal P}(d,k,t)$ which is defined in terms of numbers of paths between pairs of vertices. The most common definition of pseudorandomness is in terms of edge density between subsets of vertices of a graph. In particular, it would be interesting to extend our results to embedding of trees in graphs whose edge distribution is close to that of a random graph. \item Finally, we wonder if there are any additional interesting families of graphs for which one can show that our simple randomized algorithm succeeds in embedding trees with nearly optimal parameters. \end{itemize}
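As a concluding illustration, the randomized embedding procedure of Algorithm~4 fits in a few lines of code. The sketch below is our own reimplementation for this write-up; the data layout (adjacency dict, children dict) and the failure handling are assumptions, not the authors' code.

```python
import random
from collections import deque

# Illustrative sketch of Algorithm 4 (our own naming and conventions).
# adj: dict mapping each vertex of G to a list of its neighbors.
# children: dict mapping each tree vertex to the list of its children.
def embed_tree(adj, children, root, k, rng=random):
    v = rng.choice(sorted(adj))            # arbitrary starting vertex v_0
    for _ in range(k):                     # k random moves v_0, ..., v_k
        v = rng.choice(adj[v])
    f = {root: v}                          # embedding f : V(T) -> V(G)
    occupied = {v}
    queue = deque([root])
    while queue:                           # embed children level by level
        u = queue.popleft()
        kids = children.get(u, [])
        avail = [w for w in adj[f[u]] if w not in occupied]
        if len(avail) < len(kids):
            return None                    # fail: not enough free neighbors
        for c in kids:
            w = avail.pop(rng.randrange(len(avail)))  # random free neighbor
            f[c] = w
            occupied.add(w)
            queue.append(c)
    return f
```

A returned embedding is valid when it is injective and maps every tree edge to an edge of $G$; on graphs with enough room in every neighborhood the procedure essentially always succeeds, and the analysis above quantifies when that is the case.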
\section{Introduction} Sensitive tests of models of strong interaction dynamics are provided by polarization measurements. The most extensive discussions have been stimulated by the polarized deep inelastic lepton nucleon scattering (DIS) measurements~\cite{DIS}. These results suggest that the angular momentum of the nucleon is not distributed among its parton constituents in the way expected in na{\"{\i}}ve quark models. New information on the non-perturbative dynamics of strong interactions can be obtained by investigating semi-inclusive deep inelastic processes which, in addition to the nucleon parton distribution functions, depend also on fragmentation functions. To investigate the spin transfer phenomenon in quark fragmentation one needs a {\it source} of polarized quarks. The simplest processes involving polarized quark fragmentation are the $e^+e^-$ annihilation, the DIS of polarized charged leptons off unpolarized and polarized targets, and neutrino (anti-neutrino) DIS. The self-analyzing decay properties of the {$\Lambda / \overline{\Lambda}$}~hyperon make this particle particularly interesting for spin physics. A first theoretical study of the {$\Lambda$}~polarization in hard processes \begin{equation} l + N \rightarrow l^\prime + \Lambda / \overline{\Lambda} + X \label{eq:ln} \end{equation} and \begin{equation} e^+ + e^- \rightarrow \Lambda / \overline{\Lambda} +X \label{eq:ee} \end{equation} to investigate the longitudinal spin transfer from polarized quarks (di-quarks) to {$\Lambda / \overline{\Lambda}$}'s was made by Bigi~\cite{bi}. The idea of using {$\Lambda / \overline{\Lambda}$}'s as a quark transverse-spin polarimeter in reaction~(\ref{eq:ln}) was originally proposed by Baldracchini~{\it et al.}~\cite{bgrs}, and later rediscovered by Artru and Mekhfi~\cite{artm}.
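The ``self-analyzing'' property mentioned above refers to the parity-violating weak decay $\Lambda \rightarrow p \pi^-$, in which the angular distribution of the decay proton in the $\Lambda$ rest frame tracks the $\Lambda$ polarization; schematically (a standard formula, recalled here for completeness),
\begin{equation}
\frac{1}{N}\,\frac{dN}{d\cos\theta} = \frac{1}{2}\left(1 + \alpha_\Lambda P_\Lambda \cos\theta\right),
\end{equation}
where $\theta$ is the angle between the decay proton momentum and the polarization axis in the $\Lambda$ rest frame, $\alpha_\Lambda$ is the decay asymmetry parameter, and $P_\Lambda$ the $\Lambda$ polarization.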
The {$\Lambda / \overline{\Lambda}$}~polarization in reactions (\ref{eq:ln}) and (\ref{eq:ee}) has been discussed in several recent works~\cite{gh}--\cite{Jaf96}, which show considerable interest in this problem. Some {$\Lambda / \overline{\Lambda}$}~polarization data already exist for the reaction (\ref{eq:ln}) from neutrino (anti-neutrino) beam experiments~\cite{neut} and for the reaction~(\ref{eq:ee}) at the $Z^0$ pole~\cite{aleph}. New high statistics data are expected soon from several experiments~\cite{hermes,e665,nomad,comp}. The longitudinal polarization transfer mechanism from a polarized lepton to the final hadron in reaction~(\ref{eq:ln}) is based on the idea~\cite{bi} that the exchanged polarized virtual boson will strike preferentially one quark polarization state inside the target nucleon, and that the fragment left behind will contain some memory of the angular momentum removed from the target nucleon, thus resulting in a non-trivial longitudinal polarization of $\Lambda$ hyperons produced in the target fragmentation region ($x_F < 0$, $x_F$ is the Feynman $x$)~\cite{EKK96}. The fragmenting struck quark in turn can transfer its polarization to a {$\Lambda / \overline{\Lambda}$}~hyperon produced in the current fragmentation region ($x_F > 0$). In both cases the underlying dynamics of the hyperon production and polarization cannot be described by perturbative QCD and some phenomenological models have to be considered. A phenomenological study of the {$\Lambda$}~and {$\overline{\Lambda}$}~longitudinal polarization in the reaction \mbox{$\mu^+ + N \rightarrow \mu^{+\prime} + \Lambda / \overline{\Lambda} + X$} has already been presented by us in the {\it COMPASS} proposal~\cite{comp}. Here we present a more detailed and complete study of the {$\Lambda / \overline{\Lambda}$}~polarization in DIS of charged leptons and neutrinos in the current fragmentation region and in $e^+e^-$ annihilation at the $Z^0$ pole.
The measurement of the {$\Lambda / \overline{\Lambda}$}~polarization in these processes can help us to distinguish between different mechanisms of the spin transfer in the quark fragmentation. In Section~2 we describe some models for the spin transfer mechanism that we consider in our studies. In Sections~3 and 4 we present predictions for the {$\Lambda / \overline{\Lambda}$}~polarization in electro-production with polarized leptons on unpolarized and polarized targets, in Section~5 for neutrino and anti-neutrino scattering, and in Section~6 at the $Z^0$ pole. Section~7 contains a discussion of the results presented in this work. \section{Models for spin transfer in quark fragmentation} The quark fragmentation functions as well as the parton distribution functions of the nucleon are well defined objects in quantum field theory. The spin, twist, and chirality structure of the quark fragmentation functions, integrated over the transverse momentum, are discussed and classified in~\cite{jj}. The leading twist unpolarized ($D_q^\Lambda (z)$) and polarized ($\Delta D_q^\Lambda (z)$) quark fragmentation functions to a {$\Lambda$}~hyperon are defined as: \begin{eqnarray} D_q^{\Lambda}(z)&=& D_q^{+ \;\Lambda}(z)+ D_q^{- \;\Lambda}(z) \\ \Delta D_q^{\Lambda}(z)&=& D_q^{+\;\Lambda}(z)- D_q^{-\;\Lambda}(z) \end{eqnarray} where $D_q^{+ \;\Lambda} (z)$ ($D_q^{- \;\Lambda} (z)$) is the spin dependent quark fragmentation function for the {$\Lambda$}~spin parallel (anti-parallel) to that of the initial quark $q$, and $z$ is the quark energy fraction carried by the $\Lambda$ hyperon. We will parametrize the polarized quark fragmentation functions as \begin{equation} \Delta D_q^\Lambda (z) = C_q^\Lambda (z) \cdot D_q^\Lambda (z) \end{equation} where $C_q^\Lambda (z)$ are the spin transfer coefficients. Since much is still unknown about polarized fragmentation functions, we do not consider explicitly their $Q^2$ evolution in this work (see for instance Ref.~\cite{Rav96}).
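As a purely numerical illustration of the parametrization above, one can pick toy shapes for $D_q^\Lambda(z)$ and $C_q^\Lambda(z)$ (both are ad-hoc choices of ours, not fits to data) and check the positivity constraint $|\Delta D_q^\Lambda(z)| \leq D_q^\Lambda(z)$, which is guaranteed for any admissible $|C_q^\Lambda(z)| \leq 1$:

```python
# Toy illustration of Delta D(z) = C(z) * D(z).  Both shapes are ad-hoc
# assumptions for the sketch: D(z) ~ (1-z)^2/z mimics a falling
# fragmentation function, and C(z) = z vanishes as z -> 0 and reaches 1
# at z = 1.
def D_unpol(z):
    return (1.0 - z) ** 2 / z        # assumed unpolarized shape

def C_spin(z):
    return z                         # assumed spin transfer coefficient

def D_pol(z):                        # polarized fragmentation function
    return C_spin(z) * D_unpol(z)

zs = [0.05 * i for i in range(1, 20)]
# positivity: |Delta D| <= D holds whenever |C| <= 1
assert all(abs(D_pol(z)) <= D_unpol(z) + 1e-12 for z in zs)
# by construction the ratio Delta D / D equals C(z)
assert abs(D_pol(0.5) / D_unpol(0.5) - 0.5) < 1e-12
```

Any model for $C_q^\Lambda(z)$, such as those discussed below, can be dropped into `C_spin` in the same way.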
In the literature there exist some models~\cite{bfm80,nh95} for the spin dependent fragmentation functions in which a $z$ dependence of the spin transfer coefficients can be found. For example, in the jet fragmentation model of~\cite{bfm80}, $C_q^\Lambda (z) \sim z$ at small $z$ and $C_q^\Lambda (z) \rightarrow 1$ at $z \rightarrow 1$. In the covariant quark -- di-quark model of~\cite{nh95} $C_u^\Lambda (z) \sim z$ at small $z$, whereas $C_s^\Lambda (z) \sim const$. We will not present here predictions for the {$\Lambda / \overline{\Lambda}$}~polarization obtained with these models, since they contain many free parameters which are not well tuned with existing data. To get quantitative predictions for the {$\Lambda / \overline{\Lambda}$}~polarization in processes (\ref{eq:ln}) and (\ref{eq:ee}) we used a phenomenological approach similar to that of Ref.~\cite{gh}. We consider two different descriptions of the spin transfer mechanism in the quark fragmentation to a {$\Lambda / \overline{\Lambda}$}~hyperon. The first one is based on the non-relativistic quark model SU(6) wave functions, where the {$\Lambda$}~spin is carried only by its constituent \sq~quark. Therefore, the polarization of directly produced {$\Lambda$}'s is determined by that of the \sq~quark only, while {$\Lambda$}'s coming from decays of heavier hyperons inherit a fraction of the parent's polarization, which might originate also from other quark flavors (namely \uq~and \dq). In this scheme the spin transfer is discussed in terms of {\it constituent quarks}. Table~\ref{tab:bgh} shows the spin transfer coefficients $C_q^\Lambda$ for this case~\cite{bi,gh}. As discussed in Section 6 and shown in~\cite{aleph}, this model reproduces fairly well the {$\Lambda / \overline{\Lambda}$}~longitudinal polarization measured at the $Z^0$ pole and at large $z$. However, the interpretation of these data is not unique.
A particular case is given by a simpler assumption that the {$\Lambda$}~hyperon gets its polarization from \sq~quarks only. In the following we will refer to the former description as $BGH$ ({\it for} Bigi, Gustafson, and H\"akkinen) and the latter as $NQM$ ({\it for} na\"{\i}ve quark model). \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline {$\Lambda$}'s parent & $C_u^\Lambda$ & $C_d^\Lambda$ & $C_s^\Lambda$ & $C_{\bar q}^\Lambda$\\ \hline \hline Quark & $0$ & $0$ & $+1$ & $0$ \\ \hline $\Sigma^0$ & $-2/9$ & $-2/9$ & $+1/9$ & $0$ \\ \hline $\Sigma(1385)$ & $+5/9$ & $+5/9$ & $+5/9$ & $0$ \\ \hline $\Xi$ & $-0.3$ & $-0.3$ & $+0.6$ & $0$ \\ \hline \end{tabular} \end{center} \caption{Spin transfer coefficients according to non-relativistic SU(6) quark model.} \label{tab:bgh} \end{table} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline & $C_u^\Lambda$ & $C_d^\Lambda$ & $C_s^\Lambda$ & $C_{\bar q}^\Lambda$ \\ \hline \hline {\it BJ-I} & $-0.20$ & $-0.20$ & $+0.60$ & $0.0$ \\ \hline {\it BJ-II} & $-0.14$ & $-0.14$ & $+0.66$ & $-0.06$ \\ \hline \end{tabular} \end{center} \caption{Spin transfer coefficients according to the Burkardt-Jaffe $g_1^\Lambda$ sum rule.} \label{tab:bjsr} \end{table} The second approach is based on the $g_1^\Lambda$ {\it sum rule} for the first moment of the polarized quark distribution functions in a polarized {$\Lambda$}~hyperon, which was derived by Burkardt and Jaffe~\cite{BuJ93} in the same fashion as for the proton one ($g_1^{\pp}$). We assume that the spin transfer from a polarized quark \qq~to a {$\Lambda$}~is proportional to the {$\Lambda$}~spin carried by that flavor, {\it i.e.} to $g_1^\Lambda$. Table~\ref{tab:bjsr} contains the spin transfer coefficients $C_q^\Lambda$, which were evaluated using the experimental values for $g_1^{\pp}$. 
Two cases are considered~\cite{Jaf96}: in the first one only valence quarks are polarized; in the second case also sea quarks and anti-quarks contribute to the {$\Lambda$}~spin. In the following we will refer to the first one as {\it BJ-I} and the second one as {\it BJ-II}. In this description, {$\Lambda$}'s~originating from strong decays of hyperon resonances are absorbed in the {$\Lambda$}~fragmentation function. A similar description for $\Sigma^0$'s and cascades is not yet available; therefore {$\Lambda$}'s originating from decays of these hyperons have to be excluded in this description. As our calculations have shown, the exclusion of these {$\Lambda$}'s has a small effect on the final polarization result (within a few~\%). \begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig1.eps}} \end{center} \vspace*{-5mm} \caption{$z$ {\it dependence} of the spin transfer coefficients $C_q^\Lambda$ in the {\it BGH} spin transfer mechanism. a) $\Lambda$: solid line - \sq~quark, dashed - \uq, and dotted - \dq; b) ${\overline \Lambda}$: solid line - ${\sf \bar s}$~quark, dashed - ${\sf \bar u}$, dotted - ${\sf \bar d}$.} \label{fig:zeddep} \end{figure} In the $g_1^\Lambda$ {\it sum rule} scheme a negative spin transfer from \uq~and \dq~quarks to a {$\Lambda$}~hyperon is predicted. A negative spin transfer of $-0.09$ from \uq~and \dq~quarks was also predicted in~\cite{AR95} using an effective QCD Lagrangian, and in the covariant quark -- di-quark model of~\cite{nh95}. This effect can be understood qualitatively even if the spin of the {$\Lambda$}~is determined by its constituent \sq~quark only: in some cases the fragmenting \uq~or \dq~quark will become a sea quark of the constituent \sq~quark, and the spin of the constituent \sq~quark will be anti-correlated to the spin of the fragmenting quark~\cite{EKKS95,EKK96}.
Another possibility occurs when the {$\Lambda$}~is produced as a second rank particle in the fragmentation of a \uq~or \dq~quark. If the first rank particle was a pseudoscalar strange meson, then the spin of the ${\sf \bar s}$ quark has to be opposite to that of the \uq~(\dq) quark, and since the \sq${\sf \bar s}$ pair created from the vacuum in the string breaking is assumed to be in a $^3P_0$ state~\cite{And79}, the \sq~quark is also oppositely polarized to the \uq~or \dq~quark. This last mechanism of the spin transfer can be checked by measuring the {$\Lambda$}~polarization for a sample of events containing fast $K$ mesons. We implemented the spin transfer coefficients $C_q^\Lambda$ given in Tables~\ref{tab:bgh} and~\ref{tab:bjsr} in appropriate Monte Carlo event generators for different processes on the basis of the program information on the flavor of the fragmenting quark and the {$\Lambda$}~production process (directly produced or originating from decays). For the simulation of DIS events (charged leptons and neutrinos) we used the {\tt LEPTO v.6.3 - JETSET v.7.4}~\cite{ing,sos} event generator, and for the $e^+ e^-$ annihilation at the $Z^0$ pole the {\tt PYTHIA v.5.7 - JETSET v.7.4}~\cite{sos} event generator. With a suitable choice of input parameters these event generators reproduce well the distributions of various measured physical observables and the particle yields. The quark hadronization is described by the LUND string fragmentation model~\cite{And83}. We used the LUND modified symmetric fragmentation function with default parameter settings~\cite{sos}. Different fragmentation schemes were also considered, like the independent fragmentation options in {\tt JETSET}~\cite{sos}. They lead to similar results and conclusions. We set the strangeness suppression factor to 0.20 in agreement with recent experimental data~\cite{ssup}. In the {\it BGH} approach the spin transfer coefficients for individual channels are $z$ independent. 
However, the effective spin transfer coefficient for a given quark flavor, obtained by summing over all {$\Lambda$}~production channels, appears to have a $z$ dependence. Thus, using the $C_q^\Lambda$ from Table~\ref{tab:bgh} together with appropriate weights for different {$\Lambda$}~production channels, as obtained from the event generators, we automatically introduce a $z$ dependence in $C_q^{\Lambda}(z)$ (see Figure~\ref{fig:zeddep}). For the $g_1^\Lambda$ {\it sum rule} spin transfer mechanism we make the simplest assumption, in which the spin transfer coefficients are $z$ independent. If we choose a $z$ dependence for $C_q^\Lambda$ similar to the one proposed in~\cite{bfm80}, we will obtain smaller (larger) values of the {$\Lambda$}~polarization at small (large) $z$. \section{{$\Lambda$}~and {$\overline{\Lambda}$}~polarization in charged lepton DIS off an unpolarized target} The complete twist-three level description of spin-$1/2$ baryon production in polarized DIS is given in~\cite{Mul96}. Here we will consider this process at leading order, integrated over the final hadron transverse momentum. In this approximation the magnitude of the {$\Lambda$}~longitudinal polarization is given by the simple parton model expression~\cite{comp,EKK96} \begin{equation} P_{\Lambda} \, (x,y,z) = P_{\Lambda}^{\parallel} \, (x,y,z) = \frac{\sum_q e_q^2 \; [P_B D(y) q(x) + P_T \Delta q(x) ] \; \Delta D_q^{\Lambda}(z)} {\sum_q e_q^2 \; [q(x) + P_B D(y) P_T \Delta q(x) ] \; D_q^{\Lambda}(z)}, \label{eq:lambdap} \end{equation} where $P_B$ and $P_T$ are the beam and target longitudinal polarizations, $e_q$ is the quark charge, $q(x)$ and $\Delta q(x)$ are the unpolarized and polarized quark distribution functions, and $D_q^\Lambda (z)$ and $\Delta D_q^\Lambda (z)$ are the unpolarized and polarized fragmentation functions.
\begin{equation} D(y)=\frac{1-(1-y)^2}{1+(1-y)^2} \label{eq:dy} \end{equation} is commonly referred to as the longitudinal depolarization factor of the virtual photon with respect to the parent lepton, where $y$ is the energy fraction of the incident lepton carried by the virtual photon \footnote{Here and in the following the sign of the {$\Lambda$}~polarization is given with respect to the direction of the momentum transfer ({\it i.e.} along the axis of the exchanged virtual boson in DIS).}. For scattering off an unpolarized target Eq.~(\ref{eq:lambdap}) reduces to \begin{equation} P_\Lambda \, (x,y,z) =P_B D(y)\;\frac{\sum_q e_q^2 \; q(x) \; \Delta D_q^{\Lambda}(z)} {\sum_q e_q^2 \; q(x)\; D_q^{\Lambda}(z)}. \label{eq:lambdaup} \end{equation} This expression is intuitively easy to understand since the final quark polarization, $P_{q^\prime}$, in polarized lepton-unpolarized quark scattering is given by the QED expression \begin{equation} P_{q^\prime}=P_B D(y). \label{eq:pq} \end{equation} We implemented Eq.~(\ref{eq:lambdaup}) and the spin transfer coefficients $C_q^\Lambda$ from Tables~\ref{tab:bgh} and~\ref{tab:bjsr} into the {\tt LEPTO} code to predict the {$\Lambda / \overline{\Lambda}$}~polarization for different models of the spin transfer mechanism. Our calculations have been performed in experimental conditions similar to those of the proposed {\it COMPASS} experiment~\cite{comp}: hard DIS ($Q^2 > 4$~GeV$^2$) of negatively polarized $\mu^+$'s ($P_\mu = -0.80$) at $E_\mu = 200~{\rm GeV}$ \footnote{The beam energy chosen in the {\it COMPASS} proposal~\protect\cite{comp} is 100~GeV. No big differences are expected for $E_\mu = 100~{\rm GeV}$ compared to $E_\mu = 200~{\rm GeV}$.} off an unpolarized isoscalar target ($^6$LiD). To select {$\Lambda$}'s produced in the current fragmentation region we require $x_F > 0$ and $z > 0.2$ \footnote{A different selection of the current fragmentation region was proposed by Berger~\cite{Ber87} - {\it Berger criterion}.
Basically, to each $W^2$ value there corresponds a range in $z$, $z_{min} < z < 1$, where it should be possible to measure the fragmentation functions: for instance for $W^2 > 55 \, (> 23)~{\rm GeV}^2$, $z > 0.1 \, (> 0.2)$. In our kinematical conditions this criterion is satisfied automatically by all selected events.}. Additionally, to enrich the sample with events with a large spin transfer in lepton -- quark scattering, we restrict the virtual photon energy range to $0.5 <y <0.9$, which gives $\langle D(y) \rangle \sim 0.8$. In these studies we used the recent $MRSA^\prime$ unpolarized parton distribution functions~\cite{MRSA}. In the selected kinematical region the yields of {$\Lambda$}'s and {$\overline{\Lambda}$}'s are similar, as are their kinematical spectra. Roughly 10~\% of all produced {$\Lambda$}'s and one third of {$\overline{\Lambda}$}'s survive the $x_F$ and $z$ cuts. This sample represents about 1~\% of the total DIS cross section ($\sigma \sim 10~{\rm nb}$ in these conditions). In Figure~\ref{fig:muzed}a we show the $z$ distribution (normalized to the total number of generated events) of {$\Lambda$}'s created in the fragmentation of different quark and anti-quark flavors, as obtained from the {\tt LEPTO} code. Figure~\ref{fig:muzed}b shows the $z$ distribution of directly produced {$\Lambda$}'s, as well as {$\Lambda$}'s coming from $\Sigma^0$ decays, higher spin resonances $\Sigma(1385)$, and cascades. Almost half of the total {$\Lambda$}~sample is produced directly.
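The $y$ cut quoted above can be checked directly from Eq.~(\ref{eq:dy}):
\begin{displaymath}
D(0.5) = \frac{1 - 0.25}{1 + 0.25} = 0.60 \, , \qquad
D(0.9) = \frac{1 - 0.01}{1 + 0.01} \simeq 0.98 \, ,
\end{displaymath}
so that over $0.5 < y < 0.9$ the depolarization factor ranges from $0.60$ to $0.98$, consistent with $\langle D(y) \rangle \sim 0.8$.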
\begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig2.eps}} \end{center} \vspace*{-5mm} \caption{a) normalized $z$ distribution of {$\Lambda$}'s produced in $\mu^+$--DIS originating from the fragmentation of different quark flavors: solid line - \uq~quark, dashed - \dq, dotted - \sq, and dot-dashed - ${\sf \bar u}$; b) normalized $z$ distribution of directly produced {$\Lambda$}'s (solid line), {$\Lambda$}'s coming from $\Sigma(1385)$ resonances (dotted), $\Sigma^0$ decays (dashed), and cascades (dot-dashed).} \label{fig:muzed} \end{figure} In Figure~\ref{fig:unpolarized} we present our results for the {$\Lambda$}~and {$\overline{\Lambda}$}~longitudinal polarization separately. The differences in the {$\Lambda / \overline{\Lambda}$}~polarizations are quite significant between the two schemes for the spin transfer mechanism (the {\it constituent quark} on one hand and the $g_1^\Lambda$ {\it sum rule} on the other), while they appear to be small (only a few~\%) within the same scheme. Integrating in $z$ for $z > 0.2$ we expect a polarization of about $-12~\% \, (-14~\%)$ for {$\Lambda$}'s ({$\overline{\Lambda}$}'s) for the first scheme, and $-2~\% \, (-5~\%)$ for the second one \footnote{These results were obtained for a negative beam polarization of $P_B = -0.80$. For different beam polarizations these results can be rescaled accordingly.}. No significant variations were observed for the {$\Lambda / \overline{\Lambda}$}~polarization values when performing the same analysis for a proton or neutron target in the same kinematical region.
\begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig3.eps}} \end{center} \vspace*{-7mm} \caption{{$\Lambda$}~and {$\overline{\Lambda}$}~longitudinal polarization in the current fragmentation region for DIS of polarized $\mu^+$'s on an unpolarized target for different mechanisms of spin transfer: solid line - $NQM$, dashed - $BGH$, dotted - {\it BJ-I}, and dot-dashed - {\it BJ-II}.} \label{fig:unpolarized} \end{figure} Already at low values of $z$, the {\it constituent quark} scheme predicts a sizeable negative {$\Lambda$}~polarization, while the $g_1^\Lambda$ {\it sum rule} predicts a slightly positive one. At high $z$ both reach large negative values. This behaviour of the {$\Lambda$}~polarization is easily understood, given that at low $z$ ($z < 0.5$) {$\Lambda$}~production is dominated by scattering off \uq~quarks (see Figure~\ref{fig:muzed}a), which in the two schemes contribute to the {$\Lambda$}~polarization with opposite signs (see Table~\ref{tab:bgh} and Table~\ref{tab:bjsr}). At high $z$ ($z > 0.5$) the relative contribution of \sq~quarks, which contribute with the same sign but different magnitudes in the two schemes, increases significantly and eventually dominates at large $z$. In the same way the $x$ dependence of the polarization can also be understood: in the low $x$ region scattering off both \uq~and \sq~quarks contributes to the {$\Lambda$}~production and polarization, while at high $x$ only \uq~quarks contribute. A similar analysis applies to the {$\overline{\Lambda}$}~polarization results. A high luminosity experiment, like {\it COMPASS}~\cite{comp}, might collect in these kinematical conditions a fully reconstructed sample of {$\Lambda / \overline{\Lambda}$}~in excess of $10^5$.
This will allow a precise determination of the {$\Lambda / \overline{\Lambda}$}~polarization in several bins over a wide $z$ range, with a precision of a few~\% in each bin, and will make it possible to distinguish between the two descriptions of the spin transfer mechanism discussed above. \begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig4.eps}} \end{center} \vspace*{-7mm} \caption{{$\Lambda$}~and {$\overline{\Lambda}$}~longitudinal polarization for three different beam energies using the {\it BGH} (left plots) and the {\it BJ-I} (right plots) spin transfer mechanism: solid line - $E_\mu = 200$~GeV, dashed - $E_\mu = 500$~GeV, and dotted - $E_{el} = 30$~GeV.} \label{fig:lampolen} \end{figure} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline $E_{beam}$ (GeV) & $NQM$ & $BGH$ & $BJ-I$ & $BJ-II$ \\ \hline \hline 30 -- $\Lambda$ & $-5.8$ & $-12.0$ & $4.9$ & $2.4$ \\ \hline 30 -- ${\overline \Lambda}$ & $-12.8$ & $-10.4$ & $-5.2$ & $-4.3$ \\ \hline \hline 200 -- $\Lambda$ & $-12.4$ & $-11.5$ & $-1.1$ & $-2.6$ \\ \hline 200 -- ${\overline \Lambda}$ & $-15.8$ & $-13.1$ & $-5.1$ & $-5.4$ \\ \hline \hline 500 -- $\Lambda$ & $-15.9$ & $-14.1$ & $-3.4$ & $-4.8$ \\ \hline 500 -- ${\overline \Lambda}$ & $-17.8$ & $-14.7$ & $-5.6$ & $-6.3$ \\ \hline \hline \end{tabular} \end{center} \caption{{$\Lambda / \overline{\Lambda}$}~longitudinal polarization, in~\%, for different lepton beam energies and an unpolarized deuterium target, $x_F > 0$ and $z > 0.2$.} \label{tab:lampolen} \end{table} We also performed similar calculations for different beam energies, like the {\it HERMES} experiment~\cite{hermes} with the 30~GeV polarized electron beam, and the {\it E665} experiment~\cite{e665} with the 500~GeV polarized muon beam. Table~\ref{tab:lampolen} summarizes these results for $x_F > 0$ and $z > 0.2$.
In Figure~\ref{fig:lampolen} we compare the {$\Lambda / \overline{\Lambda}$}~polarization predictions for these three different beam energies using the {\it BGH} and the {\it BJ-I} spin transfer mechanism. We always assume a beam polarization $P_B = -0.80$, an isoscalar target, $Q^2 > 4~{\rm GeV}^2$, and $0.5 < y < 0.9$. The conclusions are similar to the ones above, except that at each different beam energy a different $x$ interval is covered (the higher the energy, the lower the accessible $x$). In particular, at 30~GeV (and $Q^2 > 4~{\rm GeV}^2$) {$\Lambda$}~production is dominated by scattering off \uq~quarks even at large $z$, and the {$\Lambda$}~polarization varies weakly with $z$ in the whole $z$ interval for the {\it constituent quark} spin transfer mechanism, since the accessible $x$ range hardly extends into the low $x$ region, where \sq~quarks are abundant. \section{{$\Lambda / \overline{\Lambda}$}~polarization in DIS off a polarized target} For a polarized target and polarized lepton beam (see Eq.~(\ref{eq:lambdap})) there are two {\it sources} of the fragmenting quark polarization: the spin transfer from the polarized lepton and from the struck polarized quark in the target. We studied the difference between the {$\Lambda / \overline{\Lambda}$}~polarization for positive (parallel to the beam polarization) and negative (anti-parallel to the beam polarization) target polarization \begin{equation} \Delta P_\Lambda \, = \, P_{\Lambda} \, (+ P_T) \, - \, P_{\Lambda} \, (- P_T) \,. \label{eq:delpol} \end{equation} By reversing the target polarization, the fragmenting quark polarization, $P_{q'}$, changes by \begin{equation} \Delta P_{q'} (x,y) = 2 \, P_q(x) \, \frac{1- (P_B \, D(y))^2}{1-(P_B \, D(y) \, P_q (x))^2} \label{eq:poldif} \end{equation} where \begin{equation} P_q(x) = P_T \frac{\Delta q(x)}{q(x)} \end{equation} is the polarization of the quark in the polarized nucleon.
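Since $|P_q (x)| \leq 1$ and, in the kinematics chosen below, $(P_B \, D(y))^2$ is small, Eq.~(\ref{eq:poldif}) is well approximated by
\begin{displaymath}
\Delta P_{q'} (x,y) \simeq 2 \, P_q(x) \left[ 1 - (P_B \, D(y))^2 \right] \simeq 2 \, P_q(x) \, ,
\end{displaymath}
{\it i.e.} the polarization difference is driven essentially by the target polarization alone.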
In our calculations we use a polarized proton target with $P_T = 0.80$ and a 200~GeV polarized $\mu^+$ beam with $P_B = -0.80$. In most experiments complex target materials are used and the effective nucleon polarization is significantly diluted (for instance for a polarized $^6$LiD target $\langle P_N \rangle \sim 25~\%$). For such targets a smaller sensitivity to the nucleon polarization is therefore expected (roughly $3 \, \times$ smaller). The covered kinematical range is similar to the one in the previous section. To reduce the effects related to the beam polarization, we extend our analysis to the whole accessible $y$ range, $0.1 \leq y \leq 0.9$, which corresponds to a smaller photon depolarization ($\langle D(y) \rangle \sim 0.4$). In $\Delta P_\Lambda$ ($\Delta P_{\overline \Lambda}$) the beam polarization $(P_B \langle D(y) \rangle)$ enters quadratically, and therefore has little effect on the final result (typically $(P_B \langle D(y) \rangle)^2 < 0.1$). The total DIS cross section for this sample is about 35~nb. Figure~\ref{fig:muxbj} shows the normalized $x$ distribution of different quark and anti-quark flavors fragmenting to the selected {$\Lambda$}'s and {$\overline{\Lambda}$}'s. \begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig5.eps}} \end{center} \vspace*{-5mm} \caption{a) normalized $x$ distribution of {$\Lambda$}'s produced in $\mu^+$--DIS from the fragmentation of different quark flavors: solid line - \uq~quark, dashed - \dq, dotted - \sq, and dot-dashed - ${\sf \bar u}$; b) same as (a) for {$\overline{\Lambda}$}'s: solid line - ${\sf \bar u}$~quark, dashed - ${\sf \bar d}$, dotted - ${\sf \bar s}$, and dot-dashed - \uq~($x_F > 0$ and $z > 0.2$).} \label{fig:muxbj} \end{figure} In Figure~\ref{fig:polarized} we present our predictions for the {$\Lambda$}~({$\overline{\Lambda}$}) polarization difference $\Delta P_\Lambda$ ($\Delta P_{\overline \Lambda}$).
As input for the polarized quark distribution functions we used the Brodsky, Burkardt, and Schmidt parametrization~\cite{bbs95}, which predicts a large negative sea quark polarization ($\Delta {\sf s} = -0.10$, and $\Delta s(x) / s(x) \sim -0.20$ at $x \sim 0.1$ and an input scale of $Q^2_0 = 4~{\rm GeV}^2$). Different polarized parton densities were also considered (see Figure~\ref{fig:polpdf}). \begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig6.eps}} \end{center} \vspace*{-7mm} \caption{$\Delta P_\Lambda$ and $\Delta P_{\overline \Lambda}$ (Eq.~\protect\ref{eq:delpol}) for DIS of polarized $\mu^+$'s off a polarized proton target: solid line - $NQM$, dashed - $BGH$, dotted - {\it BJ-I}, and dot-dashed - {\it BJ-II}.} \label{fig:polarized} \end{figure} All the different spin transfer models lead to similar predictions for $\Delta P_\Lambda$ ($\Delta P_{\overline \Lambda}$) within a few~\%, except for {$\Lambda$}'s produced at high $x$. This effect is easily understood, given that at high $x$ {$\Lambda$}~production is dominated by scattering off polarized \uq~quarks, which transfer their polarization to the {$\Lambda$}'s in different ways and with opposite signs (see Table~\ref{tab:bgh} and Table~\ref{tab:bjsr}). The {\it dip} in the {$\Lambda / \overline{\Lambda}$}~polarization distributions just below $x \sim 0.1$ originates from the large negative \sq/${\sf \bar s}$ polarization in this $x$ interval, correlated with a large positive spin transfer coefficient $C_s^\Lambda$, and from the positive \uq/${\sf \bar u}$ polarization with a negative spin transfer coefficient $C_u^\Lambda$; therefore both quark flavors contribute with the same sign. These results were obtained with the polarized parton density parametrization of Brodsky, Burkardt, and Schmidt~\cite{bbs95}.
\begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig7.eps}} \end{center} \vspace*{-5mm} \caption{Comparison of $\Delta P_\Lambda$ for two different polarized parton densities (solid line - Brodsky, Burkardt, and Schmidt~\protect\cite{bbs95} and dashed - Gehrmann and Stirling~\protect\cite{GS94}) for the {\it BGH} (left plot) and the {\it BJ-I} (right plot) spin transfer mechanism.} \label{fig:polpdf} \end{figure} For comparison we also used different parametrizations of the polarized quark distributions, like the Gehrmann and Stirling one~\cite{GS94} with a zero sea quark polarization at the input scale ($Q^2_0 = 4~{\rm GeV}^2$). With this parametrization we obtained considerably smaller polarization differences. Near zero values for the polarization difference $\Delta P_\Lambda$ ($\Delta P_{\overline \Lambda}$) were obtained using the polarized parton densities of Gl\"uck, Reya, Stratmann, and Vogelsang~\cite{GRSV}. Table~\ref{tab:polpdf} summarizes the {$\Lambda / \overline{\Lambda}$}~longitudinal polarization results, integrated in $z$ for $x_F > 0$ and $z > 0.2$, using these three different polarized parton densities. Figure~\ref{fig:polpdf} compares the $\Delta P_\Lambda$ expectations for the first two polarized parton densities using the {\it BGH} and the {\it BJ-I} spin transfer mechanism. \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline Pol. part. dens.
& $NQM$ & $BGH$ & $BJ-I$ & $BJ-II$ \\ \hline \hline $BBS$ & $-7.2$ & $-4.0$ & $-6.5$ & $-6.0$ \\ \hline \cite{bbs95} & $-10.3$ & $-7.7$ & $-5.5$ & $-6.5$ \\ \hline \hline $GS94$ & $0.1$ & $1.1$ & $-2.0$ & $-1.4$ \\ \hline \cite{GS94} & $0.1$ & $0.1$ & $0.0$ & $-0.4$ \\ \hline \hline $GRSV$ & $0.0$ & $0.3$ & $-0.5$ & $-0.3$ \\ \hline \cite{GRSV} & $0.0$ & $0.0$ & $0.0$ & $-0.1$ \\ \hline \hline \end{tabular} \end{center} \caption{$\Delta P_\Lambda$ (upper lines) and $\Delta P_{\overline \Lambda}$ (lower lines), in~\%, for three different polarized parton densities and different spin transfer mechanisms.} \label{tab:polpdf} \end{table} The longitudinal polarization of {$\Lambda / \overline{\Lambda}$}'s produced in the scattering off a polarized nucleon should allow, at least in principle, access to the polarized quark densities in the nucleon, once the spin transfer mechanism for the {$\Lambda / \overline{\Lambda}$}~production is understood. From the study with an unpolarized target (previous Section), the $\Delta D_q^\Lambda$ which best describes the data can be determined, and then used for the polarized target case. However, our studies have shown that typically one would expect at most $|\Delta P_\Lambda| \sim 6~\%$: for a solid target, like in most fixed target experiments, $|\Delta P_\Lambda|$ reduces to only 1--2~\%. In addition, one can determine, realistically, the {$\Lambda / \overline{\Lambda}$}~longitudinal polarization with a precision not better than a percent, because of experimental systematic uncertainties in the measurement of this quantity. These facts indicate a relatively small sensitivity of the {$\Lambda / \overline{\Lambda}$}~polarization to the target polarization, contrary to what is expected for instance in Ref.~\cite{lu}. Therefore, only a crude estimate of the polarized parton densities can be obtained through the study of the {$\Lambda / \overline{\Lambda}$}~polarization.
Nevertheless, a sizeable (negative) {$\Lambda / \overline{\Lambda}$}~polarization would indicate a large (negative) {\it strange} sea polarization. \section{{$\Lambda$}~polarization in neutrino and anti-neutrino production} Particularly interesting conditions for the measurement of polarized fragmentation functions are provided by {$\Lambda$}~production in neutrino and anti-neutrino DIS. In neutrino scattering the flavor changing charged current weak interaction selects left-handed quarks (right-handed anti-quarks), giving 100~\% polarized fragmenting quarks. For this process the analogue of Eq.~(\ref{eq:lambdaup}) reads \begin{equation} P_\Lambda \, (x,z) = \frac{\sum_{q,q^\prime} \epsilon_q w_{qq^\prime} \; q(x) \; \Delta D_{q^\prime}^{\Lambda}(z)} {\sum_{q,q^\prime} w_{qq^\prime} \; q(x) \; D_{q^\prime}^{\Lambda}(z)}, \label{eq:lambdanu} \end{equation} where the $w_{qq^\prime}$ are the $W^+q$ ($W^-q$) weak charge couplings (for instance $w_{su} = \sin \theta_C$, where $\theta_C$ is the Cabibbo angle), $\epsilon_q=-1$ ($\epsilon_{\bar{q}}=+1$) for scattering off (anti-)quarks, $q$ is the struck quark, and $q^\prime$ is the fragmenting quark (of different flavor). Using the {\tt LEPTO} event generator we have performed calculations for the {$\Lambda / \overline{\Lambda}$}~polarization in neutrino and anti-neutrino DIS in the current fragmentation region ($x_F > 0$ and $z > 0.2$), in the same fashion as for the electro-production DIS case, by implementing Eq.~(\ref{eq:lambdanu}) into the event generator code. In our calculations we used a neutrino and an anti-neutrino beam of $E_{\nu({\overline \nu})} =50~{\rm GeV}$ incident on an isoscalar target. In Figure~\ref{fig:nuzed}a we show the $z$ distribution (normalized to the total number of generated events) of {$\Lambda$}'s created in the fragmentation of different quark and anti-quark flavors, as obtained from the {\tt LEPTO} code.
Figure~\ref{fig:nuzed}b shows the $z$ distribution of directly produced {$\Lambda$}'s, as well as {$\Lambda$}'s coming from $\Sigma^0$ decays, higher spin resonances $\Sigma(1385)$, and cascades. These distributions (Figure~\ref{fig:nuzed}b) are similar to the ones obtained with a muon beam (see Figure~\ref{fig:muzed}b). \begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig8.eps}} \end{center} \vspace*{-5mm} \caption{a) normalized $z$ distribution of {$\Lambda$}'s originating from the fragmentation of different quark flavors produced in $\nu$--DIS: solid line - \uq~quark, dashed - \cq, and {$\Lambda$}'s produced in ${\overline \nu}$--DIS: dotted - \dq, and dot-dashed - \sq; b) normalized $z$ distribution of directly produced {$\Lambda$}'s (solid line), {$\Lambda$}'s coming from $\Sigma(1385)$ resonances (dotted), $\Sigma^0$ decays (dashed), and charmed hadrons (dot-dashed) in $\nu$--DIS.} \label{fig:nuzed} \end{figure} The results for the {$\Lambda / \overline{\Lambda}$}~polarization as a function of $z$ are presented in Figure~\ref{fig:neutrino}. In neutrino scattering, {$\Lambda$}~production is dominated by fully polarized fragmenting \uq~quarks (see Figure~\ref{fig:nuzed}), with a small contribution of \cq~quarks ($\sim 10~\%$) at this energy (at a lower energy the contribution of \cq~quarks is smaller). In ${\overline \nu}$--DIS both \dq~and \sq~quark fragmentation contributes to {$\Lambda$}~production and the latter dominates at large $z$. Note that the polarization of \dq~and \sq~quarks is opposite compared to that of \uq~quarks in $\nu$--DIS. This easily explains the observed behavior of the {$\Lambda$}~polarization for the considered mechanisms of the spin transfer shown in Figure~\ref{fig:neutrino}. 
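The neutrino case is particularly transparent: keeping, for illustration, only the Cabibbo-favored ${\sf d} \rightarrow {\sf u}$ transition in Eq.~(\ref{eq:lambdanu}) (a simplification of the full flavor sum), the distribution functions cancel and
\begin{displaymath}
P_\Lambda \, (z) = \frac{\epsilon_{\sf d} \, w_{\sf du} \, d(x) \, \Delta D_{\sf u}^{\Lambda}(z)}{w_{\sf du} \, d(x) \, D_{\sf u}^{\Lambda}(z)} = - \, C_{\sf u}^{\Lambda}(z) \, ,
\end{displaymath}
so that the sign of the measured polarization directly probes the sign of the spin transfer from \uq~quarks, which is opposite in the two schemes.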
\begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig9.eps}} \end{center} \vspace*{-8mm} \caption{{$\Lambda / \overline{\Lambda}$}~polarization in the current fragmentation region in $\nu$--DIS (upper plots) and $\overline{\nu}$--DIS (lower plots): solid line - $NQM$, dashed - $BGH$, dotted - {\it BJ-I}, and dot-dashed - {\it BJ-II}.} \label{fig:neutrino} \end{figure} The differences between the {\it constituent quark} and the $g_1^\Lambda$ {\it sum rule} spin transfer mechanisms are quite significant in neutrino scattering. These two schemes lead to different signs for the $\Lambda$ polarization, contrary to what was found in the $\mu$--beam case (Section~3) and at the $Z^0$ pole (next Section). This effect is mainly due to the different signs in the spin transfer from \uq~quarks, which in this reaction dominate the {$\Lambda$}~production. The {\it NQM} gives zero polarization, since no \sq~quarks are involved, while the {\it BGH} prediction increases with $z$ from negative values to slightly positive values at large $z$, which is correlated with the relative abundances of $\Sigma^\ast$ hyperons. The {\it BJ-I} and {\it BJ-II} prescriptions give almost constant (in $z$) positive values for the {$\Lambda$}~polarization. The longitudinal {$\Lambda$}~polarization in anti-neutrino scattering has been measured by the {\it WA59} experiment~\cite{neut} in the current fragmentation region and in kinematical conditions similar to our analysis. However, the result obtained, $P_{\Lambda}=-0.11 \pm 0.45$, has a large uncertainty and is inconclusive as far as this analysis is concerned. New data on {$\Lambda / \overline{\Lambda}$}~production with a neutrino beam at a mean beam energy of 30~GeV will soon be available from the {\it NOMAD} experiment~\cite{nomad}. At 30~GeV we expect {$\Lambda$}~polarization results similar to those shown in Figure~\ref{fig:neutrino}.
This experiment might collect a sample of several thousand {$\Lambda$}'s, giving a relatively accurate measurement of the {$\Lambda$}~polarization within a few~\%, and thus allowing one to distinguish between the models considered here and to measure directly the polarized fragmentation function $\Delta D_{\sf u}^\Lambda$ for \uq~quarks in {\it clean} conditions. \section{{$\Lambda / \overline{\Lambda}$}~polarization at the $Z^0$ pole} The Standard Model predicts a high degree of longitudinal polarization for quarks and anti-quarks produced in $Z^0$ decays: $P_s=P_d=-0.91, \;P_u=P_c=-0.67$~\cite{KPT94}. Thus, reaction (\ref{eq:ee}) is a source of polarized quarks which can be exploited to investigate the spin transfer dynamics in polarized quark fragmentation. A large {$\Lambda / \overline{\Lambda}$}~longitudinal polarization ($P_{\Lambda}=-0.32 \pm 0.06$ for $z>0.3$) has recently been reported by the {\it ALEPH} collaboration~\cite{aleph}. The authors concluded that the measured {$\Lambda / \overline{\Lambda}$}~longitudinal polarization is well described by the {\it constituent quark} model predictions of Gustafson and H\"akkinen~\cite{gh}. However, as our study shows, the interpretation of these data is not unique. In Figure~\ref{fig:lamz0pol}a we present our predictions for different spin transfer mechanisms for the {$\Lambda / \overline{\Lambda}$}~polarization at the $Z^0$ pole. These predictions are compared with experimental data from~\cite{aleph}. At high $z$ both the {\it BGH} and the $g_1^\Lambda$ {\it sum rule} models describe the experimental data fairly well, while the {\it NQM} mechanism gives too large values for the {$\Lambda / \overline{\Lambda}$}~polarization. At small $z$ the data even favor the $g_1^\Lambda$ {\it sum rule} mechanism. Also in this case more precise experimental data are needed to distinguish between the various models of the spin transfer mechanism in quark fragmentation.
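A rough estimate illustrates why the {\it NQM} overshoots: if at large $z$ the sample consisted only of directly produced {$\Lambda$}'s from \sq~quarks, with $C_s^\Lambda = +1$, one would expect
\begin{displaymath}
P_\Lambda \simeq C_s^\Lambda \, P_s = -0.91 \, ,
\end{displaymath}
to be compared with the measured $-0.32 \pm 0.06$ for $z > 0.3$; the dilution from \uq~and \dq~fragmentation and from decay {$\Lambda$}'s brings the {\it BGH} and $g_1^\Lambda$ {\it sum rule} predictions down towards the observed value.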
\begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{fig10.eps}} \end{center} \vspace*{-8mm} \caption{a) {$\Lambda / \overline{\Lambda}$}~polarization at the $Z^0$ pole for different mechanisms of spin transfer: solid line - $NQM$, dashed - $BGH$, dotted - {\it BJ-I}, and dot-dashed - {\it BJ-II}. The experimental data (full squares) are from~\protect\cite{aleph}. b) comparison between predictions using the {\it BGH} model for the {$\Lambda$}~polarization in our analysis (solid line) and the analysis of~\protect\cite{aleph} assuming that only \sq~quarks contribute to {$\Lambda$}~polarization (dashed), and additionally that only first rank {$\Lambda$}'s inherit a fraction of the fragmenting quark polarization (dotted).} \label{fig:lamz0pol} \end{figure} Our analysis differs from that of~\cite{aleph} in two main points. The authors of~\cite{aleph} assume in their analysis (similarly to Ref.~\cite{gh}) that only fragmenting polarized \sq~quarks contribute to the {$\Lambda$}~polarization ({\it i.e.} $C_u^\Lambda = C_d^\Lambda = 0$ in Table~\ref{tab:bgh}). Additionally, they distinguish between first and lower rank {$\Lambda$}'s produced in the string fragmentation, and they assume that lower rank {$\Lambda$}'s do not inherit any polarization from the fragmenting quark. Their argument that \sq~quarks produced in the fragmentation process have no longitudinal polarization due to parity conservation is not applicable to polarized quark fragmentation. In our study, instead, all quark flavors contribute to the {$\Lambda$}~polarization (according to Table~\ref{tab:bgh}), and we do not distinguish between first and lower rank {$\Lambda$}'s. For comparison we also performed the analysis of~\cite{aleph}. Figure~\ref{fig:lamz0pol}b compares the results thus obtained for the {\it BGH} spin transfer mechanism.
These two different approaches give similar results in the high $z$ region, where {$\Lambda$}~production is dominated by fragmenting \sq~quarks and the {$\Lambda$}~polarization is directly correlated to that of the fragmenting \sq~quark (first rank particle), while at lower $z$ our analysis predicts larger values for the {$\Lambda$}~polarization. \section{Conclusions} In this work we have studied several lepton induced processes in which the longitudinal spin transfer in polarized quark fragmentation can be investigated through the measurement of the longitudinal polarization of the {$\Lambda / \overline{\Lambda}$}~hyperons produced. Two different scenarios for the spin transfer mechanism, the {\it constituent quark} and the $g_1^\Lambda$ {\it sum rule}, were used for numerical estimates of the {$\Lambda / \overline{\Lambda}$}~longitudinal polarization. To distinguish between the various spin transfer mechanisms, it is important to measure the {$\Lambda / \overline{\Lambda}$}~polarization in different processes, since different quark flavors are involved, with different weights, in the fragmentation to a {$\Lambda / \overline{\Lambda}$}~hyperon. For instance, the largest effects are expected in neutrino DIS, where mainly \uq~quarks fragment to a {$\Lambda$}, and the two scenarios also predict different signs for the polarization. Typically, the {\it constituent quark} spin transfer mechanism predicts, in magnitude, larger values for the {$\Lambda / \overline{\Lambda}$}~polarization, while the $g_1^\Lambda$ {\it sum rule} mechanism predicts smaller values. The existing experimental data have large uncertainties on the polarization measurements \footnote{ Note that preliminary results from the {\it DELPHI} collaboration \cite{delphi} indicate a value for the {$\Lambda$}~polarization in reaction (2) compatible with zero.} and cannot discriminate between these models.
Current and future semi-inclusive DIS experiments will soon provide accurate enough data to study these phenomena. Our studies have shown that the {$\Lambda / \overline{\Lambda}$}~polarization in electro-production is less sensitive to the target polarization (in general) and to $\Delta {\sf s}$ than expected in Ref.~\cite{lu}. The main physics reason behind this is that {$\Lambda$}~production is dominated by scattering off \uq~quarks even in the low $x$ region. The production of {$\Lambda$}'s from scattering off \sq~quarks can be enhanced by selecting events in the high $z$ region. However, in this region the {$\Lambda / \overline{\Lambda}$}~yields drop significantly, and measurements will be limited by small statistics. The small sensitivity is also due to the experimental difficulties in measuring the longitudinal {$\Lambda / \overline{\Lambda}$}~polarization to very high accuracy and in the realization of proton targets with a high effective polarization. Nevertheless, a sizeable (negative) {$\Lambda$}~polarization ({\it i.e.} $\Delta P_\Lambda$) would indicate a large (negative) polarization of the sea quarks in the polarized nucleon. \section*{Acknowledgements} Part of this work was initiated with C.A.~Perez. We appreciate his contribution and warmly acknowledge his help. It is also a pleasure to thank our {\it COMPASS} colleagues, A.~Efremov, J.~Ellis, S.~Gerassimov, and P.~Hoodbhoy for valuable discussions and comments. We are grateful to L.~Camilleri for discussions on the {\it NOMAD} experiment, and to P.~Hansen for correspondence on the {$\Lambda / \overline{\Lambda}$}~polarization measurement in {\it ALEPH}.
\section{Introduction} Let $\underline{G}$ be the identity component of the group of biholomorphisms of an irreducible bounded symmetric domain $\underline{\mathcal{D}}$. The scalar holomorphic discrete series of $\underline{G}$ can be realised in the space of holomorphic functions on this domain. By reproducing kernel techniques, M. Vergne and H. Rossi \cite{VERO} have shown (see also \cite{BER1,WAL,FAKO}) that it has an analytic continuation as a family of (projective) irreducible unitary representations $\pi_\alpha$ of $\underline{G}$, parametrised by the so-called Wallach set. Let $r$ be the rank of the domain and $d$ its characteristic number (cf. next section for a definition). Then the Wallach set is the union of the half-line $\alpha>(r-1)\frac{d}{2}$ and a discrete part consisting of the $r$ points $l\frac{d}{2}$, $l=0,\ldots, r-1$. When $\alpha>p-1$, where $p$ is the genus of $\underline{\mathcal{D}}$, the representation spaces are weighted Bergman spaces. Let $\tau$ be an antilinear involution of $\underline{\mathcal{D}}$. Then $\mathcal{D}:=\underline{\mathcal{D}}^\tau$ is a totally geodesic submanifold, hence a Riemannian symmetric space, and $G:=\underline{G}^\tau$ contains its group of displacements. Such a domain is called a real bounded symmetric domain. When one restricts an irreducible unitary representation of a group to a subgroup, the representation need not be irreducible anymore, and its decomposition into irreducibles is called a branching law. In our context two branching problems have been extensively studied: the decomposition of the tensor product representation $\pi_\alpha\widehat\otimes\overline\pi_\alpha$ and the restriction of $\pi_\alpha$ to symmetric subgroups $G=\underline{G}^\tau$, where $\tau$ is an antilinear involution of $\underline{\mathcal{D}}$. A formula for the first problem and for $\alpha>p-1$ was given without proof by Berezin for classical domains in \cite{BER2}. H. Upmeier and A.
Unterberger extended it to all domains and gave a Jordan-theoretic proof \cite{UPUN}. The second problem was solved (for the same parameters) by G. Zhang and (independently) by G. van Dijk and M. Pevzner \cite{ZH3,VDPE}, and also by Y. Neretin for classical groups \cite{NE1}. Those two problems are in fact similar. The restriction map from $\underline{\mathcal{D}}$ to $\mathcal{D}$ (resp. from $\underline{\mathcal{D}}\times\underline{\mathcal{D}}$ to $\underline{\mathcal{D}}$) gives rise to the Berezin transform on $\mathcal{D}$ (resp. $\underline{\mathcal{D}}$), which is a kernel operator. The solution then consists in computing the spectral symbol of the Berezin transform, or, if one prefers, the Fourier transform of the Berezin kernel. In \cite{ZH1} and in \cite[Section 5]{VDPE} the problem of decomposing $\pi_\alpha\widehat\otimes\overline\pi_{\alpha+l}$, where $l\in\mathbb{N}$, is also solved, by the same method. A similar problem is also studied in \cite{FAPE}. For arbitrary parameters, these problems are more complicated, and no general method seems to apply. In \cite{ZHO} the tensor product problem for $\underline{G}=SU(2,2)$ is solved for any parameter. In \cite{ZH2} the representation $\pi_{\frac{d}{2}}\widehat\otimes\overline\pi_{\frac{d}{2}}$ is decomposed for any $\underline{G}$ ($\pi_{\frac{d}{2}}$ is called the minimal representation). In \cite{NE3}, Y. Neretin solves the restriction problem from $U(r,s)$ to $O(r,s)$ ($r\leq s$) for any parameter by analytic continuation of the result for large parameters. If $r=s$ the support of the Plancherel formula remains the same for all $\alpha>r-1$ (here $d=2$), but when $s-r$ is sufficiently large new pieces appear when $\alpha$ crosses $p-1=2(r+s)-1$, and the situation gets worse as $\alpha$ approaches $(r-1)\frac{d}{2}$, as he had already explained in \cite{NE2}. For points in the discrete Wallach set, the situation is not clear.
In his thesis the second author managed to decompose the restriction of $SO(2,n)$ to $SO(1,n)$ for any parameter \cite{SEP2}, as well as the restriction of the minimal representation of $SU(p,q)$ to $SO(p,q)$ \cite{SEP3}, and of the minimal representation of $Sp(n,\mathbb{R})$ (resp. $SU(n,n)$) to $GL^+(n,\mathbb{R})$ (resp. to $GL(n,\mathbb{C})$) \cite{SEP1}. Assume that $\underline{\mathcal{D}}$ is of tube type, i.e. that $\underline{\mathcal{D}}$ is biholomorphic to the tube domain $T_\Omega$ over the symmetric cone $\Omega$. Then the inverse image of $\Omega$ is a real bounded symmetric domain. In this paper, generalising \cite{SEP1}, we establish, for any parameter in the discrete Wallach set, the branching rule for the restriction of the associated representation of $\underline{G}=G_0(T_\Omega)$ to $G=GL_0(\Omega)$. We use the model of Rossi and Vergne which realises the representation given by the $l$-th point in the Wallach set as $L^2(\partial_l\Omega, \mu_l)$, where $\partial_l\Omega$ is the set of positive semidefinite elements in $\partial \Omega$ of rank $l$, and $\mu_l$ is a relatively $G$-invariant measure on $\partial_l\Omega$. A key observation is that for any $x$ in $\partial_l\Omega$, the function $g \mapsto \Delta_{\boldsymbol\nu}(g^*x)$ on $G$, where $\Delta_{\boldsymbol\nu}$ is the power function of the Jordan algebra, transforms like a function in a certain parabolically induced representation. A naive approach to constructing an intertwining operator from $L^2(\partial_l\Omega, \mu_l)$ into a direct sum of parabolically induced representations would then be to weight the functions above by compactly supported smooth functions, i.e., to consider the mappings $f \mapsto \int_{\partial_l\Omega}f(x) \Delta_{\boldsymbol\nu}(g^*x)d\mu_l(x)$, for $f$ in $C_0^\infty(\partial_l\Omega)$. It will become clear that this approach is in fact fruitful. However, there are two problems that have to be dealt with. First of all, it is not obvious that the natural target spaces are unitarisable.
Secondly, and more importantly, the integrals above need not converge for the suitable choice of parameters ${\boldsymbol\nu}$. However, as we shall see, both of these problems can be solved. The paper is organised as follows. In Section~2 we recall some facts about Jordan algebras and symmetric cones that will be needed in the paper. In Section~3 we prove an identity between the restriction of a spherical function for the cone $\Omega$ to a cone of lower rank in its boundary and the corresponding spherical function for the lower rank cone. In Section~4 we define a class of irreducible unitary spherical representations that provides target spaces for the integral operators discussed above. These are constructed using the Levi decomposition of the group $G$, by twisting parabolically induced unitary representations for the semisimple factor of $G$ by a certain character. In Section~5 we construct the intertwining operator as an analytic continuation of the integral operator above. After this has been taken care of, a polar decomposition for the measure $\mu_l$ due to J. Arazy and H. Upmeier \cite{ARUP} allows one to express the restriction of the intertwining operator to $K$-invariant vectors in terms of the Fourier transform for a cone of rank $l$. Using this identification, the inversion formula for the Fourier transform can be used to prove the Plancherel theorem for the branching problem. In the appendix we provide a framework for certain restrictions of distributions to submanifolds, which will be useful for constructing the analytic continuation of the integral that defines the intertwining operator. It should be pointed out that the standard theory for restricting distributions (e.g. \cite[Cor.~8.2.7]{HOR}) does not apply to our situation since the condition on the wave front set of the distribution is not satisfied. Instead we have to use restrictions based on extending test functions in such a way that they are constant in certain directions from the submanifold (cf.
Appendix~\ref{A1}). We finally want to mention that branching problems related to holomorphic involutions of $\underline{\mathcal{D}}$ have also been studied in \cite{REP,KOB,BSA,PEZH}. \bigskip \noindent {\bf Acknowledgement}. The authors would like to thank Karl-Hermann Neeb for enlightening discussions and comments that led to substantial improvement of the presentation. \section{Jordan theoretic preliminaries} Let $V$ be a Euclidean Jordan algebra. It is a commutative real algebra with unit element $e$ such that the multiplication operator $L(x)$ satisfies $[L(x),L(x^2)]=0$, equipped with a scalar product for which $L(x)$ is symmetric. An element is invertible if its quadratic representation $P(x)=2L(x)^2-L(x^2)$ is so. The cone $\Omega$ of invertible squares of $V$ is a symmetric cone: it is homogeneous under the identity component, $G$, of the Lie group $GL(\Omega)=\{g\in GL(V) \mid g\Omega=\Omega\}$, and it is self-dual. It follows that the involution $\Theta(g):=g^{-*}:=(g^*)^{-1}$ (where $g^*$ is the adjoint of $g$ with respect to the scalar product of $V$) preserves $G$ (which is hence reductive). The stabiliser $K=G_e$ of $e$ coincides with the identity component of the group $\Aut(V)$ of automorphisms of $V$ and with the fixed points of $G$ under the involution $\Theta$, and hence is compact. Thus $\Omega$ is a Riemannian symmetric space. The tube $T_\Omega=V\oplus i\Omega$ over $\Omega$ in the complexification of $V$ is a Hermitian symmetric space of the non-compact type, diffeomorphic via the Cayley transform to a (tube type) bounded symmetric domain. Any element of $GL(\Omega)$, when extended complex-linearly, preserves $T_\Omega$. In this fashion $G$ is seen as a subgroup of the identity component $\underline{G}$ of the group of biholomorphisms of $T_\Omega$. We assume that $V$ is simple. Then there exists a positive integer $r$, called the rank of $V$, such that any family of mutually orthogonal minimal idempotents has $r$ elements.
Such a family is called a Jordan frame. Let $$\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$$ be the Cartan decomposition of the Lie algebra $\mathfrak{g}$ of $G$. Then the map $x\mapsto L(x)$ yields an isomorphism $V\rightarrow\mathfrak{p}$. The subspace generated by a Jordan frame (more precisely by the associated multiplication operators) is a maximal abelian subspace of $\mathfrak{p}$ and, conversely, any maximal abelian subspace of $\mathfrak{p}$ determines (up to order) a Jordan frame. From now on we fix a choice of a Jordan frame $(c_1,\dots,c_r)$ and let $$\mathfrak{a}=\langle L(c_j),j=1,\dots, r\rangle$$ and $A=\exp{\mathfrak{a}}$. Any $x$ in $V$ can be written \begin{equation*} x=k\sum_{1\leq j\leq r}{\lambda_jc_j} \end{equation*} where $k\in K$ and the $\lambda_j$ are real numbers, and the family $(\lambda_1,\dots,\lambda_r)$ is unique up to permutation (its members are called the eigenvalues of $x$). Then $x$ belongs to $\Omega$ if and only if $\lambda_j>0$ for all $1\leq j\leq r$, and this spectral decomposition corresponds to the $KAK$ decomposition of $G$, the $A$-component in the decomposition being unique up to conjugation by an element of the Weyl group $W=\mathfrak{S}_r$ of $G$. The rank of $x$ is defined to be the number of its nonzero eigenvalues. There exists on $V$ a $K$-invariant polynomial function $\Delta(x)$ (the determinant) and a $K$-invariant linear function $\tr(x)$ (the trace) that satisfy $$\Delta(x)=\prod_{j=1}^r\lambda_j\quad\text{and}\quad\tr(x)=\sum_{j=1}^r\lambda_j.$$ The determinant defines the character \begin{equation}\label{D:char} \Delta(g):=\Delta(ge) \end{equation} of the group $G$. A Jordan frame gives rise to the important Peirce decomposition. Since multiplications by orthogonal idempotents commute, the space $V$ decomposes into a direct sum of joint eigenspaces for the (symmetric) operators $(L(c_j))_{j=1,\dots, r}$. The eigenvalues of $L(c)$, when $c$ is an idempotent, belong to $\{0,\frac12,1\}$.
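In the prototypical simple Euclidean Jordan algebra $V=\mathrm{Sym}(r,\mathbb{R})$ with $x\circ y=\frac12(xy+yx)$, where $K=SO(r)$ acts by conjugation and the diagonal matrix units form a Jordan frame, the spectral decomposition above is the ordinary eigendecomposition. A short numerical sketch (an illustration only, not part of the argument):

```python
import numpy as np

# V = Sym(2, R): Euclidean Jordan algebra with x o y = (xy + yx)/2.
# Spectral decomposition x = k . sum_j lambda_j c_j with k in K = SO(2)
# acting by conjugation; the rotated frame k c_j k^T consists of
# orthogonal rank-one idempotents.
x = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, k = np.linalg.eigh(x)           # Jordan eigenvalues of x

# Reconstruct x from its spectral decomposition.
frame = [np.outer(k[:, j], k[:, j]) for j in range(2)]
assert np.allclose(sum(l * c for l, c in zip(lam, frame)), x)

# Determinant and trace are the symmetric functions of the eigenvalues,
# as in the displayed formulas of the text.
assert np.isclose(np.prod(lam), np.linalg.det(x))
assert np.isclose(np.sum(lam), np.trace(x))

# x belongs to the symmetric cone Omega iff all eigenvalues are positive.
print(all(lam > 0))   # True: x is positive definite
```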
Let us denote by $V(c,\alpha)$ the eigenspace corresponding to the value $\alpha$. The decomposition into joint eigenspaces is then given by $$V=\bigoplus_{1\leq i\leq j\leq r}V_{ij},$$ where $$V_{ii}=V(c_i,1)\cap\bigcap_{j\neq i}V(c_j,0),$$ and when $i\neq j$, $$V_{ij}=V(c_i,\tfrac12)\cap V(c_j,\tfrac12)\cap\bigcap_{k\not\in \{i,j\}}V(c_k,0).$$ We have $V_{ii}=\mathbb{R} c_i$ and the $V_{ij}$ all have the same dimension $d$, called the degree of the Jordan algebra. We can now describe the roots of $(\mathfrak{g},\mathfrak{a})$. Let $(\delta_j)_{j=1,\dots, r}$ be the dual basis of $(L(c_j))_{j=1,\dots, r}$ in $\mathfrak{a}^*$. Then the roots are $$\alpha^{\pm}_{ij}=\pm \frac{\delta_j-\delta_i}{2},\quad 1\leq i<j\leq r,$$ and the corresponding root spaces are \begin{gather*} \mathfrak{g}^{+}_{ij}=\{a\square c_i\mid a\in V_{ij}\},\\ \mathfrak{g}^{-}_{ij}=\{a\square c_j\mid a\in V_{ij}\}, \end{gather*} where $x\square y=L(L(x)y)+[L(x),L(y)]$. Let $N$ be the nilpotent subgroup $$N=\exp{\bigoplus_{1\leq i<j \leq r}\mathfrak{g}^+_{ij}}.$$ Then $G$ has the Iwasawa decomposition $G=NAK$. For any idempotent $c$, the projection on $V(c,1)$ is $P(c)$, and $V(c,1)$ is a Jordan subalgebra, hence a Euclidean Jordan algebra with neutral element $c$ (note that it is simple, with rank equal to that of $c$). We denote by $\Omega_1(c)$ its symmetric cone. In particular, for $$e_l=\sum_{k=1}^l{c_k},$$ we set $$V^{(l)}=V(e_l,1) \quad \text{and}\quad\Omega^{(l)}=\Omega_1(e_l),$$ and we also denote by $G^{(l)}$ the identity component of $GL(\Omega^{(l)})$, by $K^{(l)}=G^{(l)}_{e_l}$ the stabiliser of $e_l$, and by $\Delta^{(l)}$ the determinant of $V^{(l)}$. The principal minors of $V$ are then defined by the formula $$\Delta_{(j)}(x):=\Delta^{(j)}(P(e_j)(x)).$$ Then $x$ is in $\Omega$ if and only if for all $1\leq j\leq r$, $\Delta_{(j)}(x)>0$.
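In the example $V=\mathrm{Sym}(r,\mathbb{R})$ the minors $\Delta_{(j)}$ are the ordinary leading principal minors, and the positivity criterion above is Sylvester's criterion for positive definiteness. A quick numerical check (illustration only):

```python
import numpy as np

def principal_minors(x):
    """Delta_(j)(x) in V = Sym(r, R): the leading principal minors."""
    return [np.linalg.det(x[:j + 1, :j + 1]) for j in range(x.shape[0])]

x = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

minors = principal_minors(x)                   # here: [2.0, 3.0, 4.0]
in_omega = all(m > 0 for m in minors)          # Sylvester's criterion
pos_def = bool(np.all(np.linalg.eigvalsh(x) > 0))  # direct spectral check
assert in_omega == pos_def
print(minors, in_omega)
```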
Let ${\boldsymbol\nu}\in\mathbb{C}^r$ and set, for $x$ in $\Omega$, $$\Delta_{\bsn}(x)=\Delta_{(1)}^{\nu_1-\nu_2}(x)\Delta_{(2)}^{\nu_2-\nu_3}(x)\dots \Delta_{(r-1)}^{\nu_{r-1}-\nu_r}(x)\Delta_{(r)}^{\nu_r}(x).$$ Using the basis $(\delta_j)$, we can identify ${\boldsymbol\nu}$ with an element of $\mathfrak{a}_\mathbb{C}^*$. Then, if $a(g)$ denotes the projection of $g$ on $A$ in the Iwasawa decomposition, \begin{equation}\label{E:hcef} \Delta_{\bsn}(gx)=e^{{\boldsymbol\nu}\log{a(g)}}\Delta_{\bsn}(x). \end{equation} The action of $G$ on the boundary $\partial \Omega$ of $\Omega$ has $r-1$ orbits, which may be parametrised by the rank of their elements. We denote by $\partial_l\Omega$ the orbit of rank $l$ elements, i.e., $$\partial_l\Omega=Ge_l.$$ There exists on $\partial_l\Omega$ a unique relatively $G$-invariant measure $\mu_l$, which transforms according to $$d\mu_l(gx)=\Delta^{\frac{ld}{2}}(g)d\mu_l(x).$$ The Hilbert space associated to the Wallach point $l\frac{d}{2}$ is, up to renormalisation, isometric to $L^2(\partial_l\Omega,\mu_l)$ \cite[Theorem XIII.4]{FAKO}, and the representation of $G$ in this picture is then given by \begin{equation} \pi^l(g)f=\Delta^{\frac{ld}{4}}(g)f(g^*\cdot). \end{equation} The measures $\mu_l$ were constructed by M. Lassalle \cite{LAS} and can also be obtained as Riesz distributions, thanks to S. Gindikin's theorem \cite[VII.3]{FAKO}. A major tool for our purpose will be the polar decomposition of $\mu_l$ \cite[Theorem 3.2.6]{ARUP}. Let $\Pi_l=K.e_l$ be the set of idempotents of rank $l$.
Then $\partial_l\Omega$ is the disjoint union $$\partial_l\Omega=\bigsqcup_{u\in\Pi_l}\Omega(u).$$ Since elements of $G$ permute the faces of $\overline{\Omega}$ (which are of the form $\overline{\Omega(u)}$ for idempotents $u$), an action is induced on $\Pi_l$, such that the preceding equality defines a $G$-equivariant fibration $$\partial_l\Omega\rightarrow\Pi_l.$$ For any function $f$ in the space $C_0^\infty(\partial_l\Omega)$ of smooth functions with compact support on $\partial_l\Omega$, \begin{equation}\label{E:pd} \int_{\partial_l\Omega}{fd\mu_l}=\int_{K}{dk\int_{\Omega^{(l)}}{\Delta_{(l)}^{\frac{rd}{2}}(x)f(kx)d_*^{(l)}x}}, \end{equation} where $d_*^{(l)}x$ is the unique $G^{(l)}$-invariant measure on $\Omega^{(l)}$. The set $\partial_l\Omega$ is not a submanifold of $V$. However, let $V_{\geq l}$ be the (open) set of elements in $V$ of rank greater than or equal to $l$. To any $l$-element subset $I_l\subset\{1,\dots, r\}$ one can associate the idempotent $e_{I_l}=\sum_{j\in I_l}{c_j}$ and the minor $\Delta_{I_l}(x):=\Delta(P(e_{I_l})x+e-e_{I_l})$. Then \begin{equation*} V_{\geq l}=\bigcup_{I_l\subset\{1,\dots, r\}}\{x\in V\mid \Delta_{I_l}(x)\neq 0\}, \end{equation*} and \cite[Propositions 3 and 7]{LAS} show that $\partial_l\Omega$ is a (closed) submanifold of $V_{\geq l}$. \section{An identity between spherical functions} The spherical functions on $\Omega$ may be defined for ${\boldsymbol\nu}$ in $\mathfrak{a}_\mathbb{C}^*$ by the formula $$\Phi_{\boldsymbol\nu}(x)=\int_{K}{\Delta_{\bsn}(kx)dk}.$$ When ${\boldsymbol\nu}$ satisfies $\Re\nu_1\geq\dots\geq\Re\nu_r\geq0$, a property that we will denote by $\Re{\boldsymbol\nu}\geq0$, the generalised power function $\Delta_{{\boldsymbol\nu}}$ and the spherical function $\Phi_{\boldsymbol\nu}$ extend continuously to $\overline\Omega$.
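The definition of $\Phi_{\boldsymbol\nu}$ as a $K$-average can be tested numerically in the rank-two example $V=\mathrm{Sym}(2,\mathbb{R})$, where $K=SO(2)$ and $dk$ is the normalised Haar measure. The sketch below (an illustration under these identifications, not part of any proof) checks the $K$-invariance of $\Phi_{\boldsymbol\nu}$ and the normalisation $\Phi_{\boldsymbol\nu}(e)=1$.

```python
import numpy as np

# Spherical function Phi_nu(x) = int_K Delta_nu(kx) dk in V = Sym(2, R),
# with K = SO(2) acting by conjugation and dk the normalised Haar measure.
def delta_nu(y, nu):
    # Delta_nu = Delta_(1)^(nu1 - nu2) * Delta_(2)^nu2 for r = 2,
    # with Delta_(1), Delta_(2) the leading principal minors.
    d1, d2 = y[0, 0], np.linalg.det(y)
    return d1 ** (nu[0] - nu[1]) * d2 ** nu[1]

def rot(th):
    return np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

def phi_nu(x, nu, n=2000):
    # Average Delta_nu(k x k^T) over an equispaced grid on SO(2).
    thetas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return float(np.mean([delta_nu(rot(t) @ x @ rot(t).T, nu) for t in thetas]))

nu = (1.0, 0.5)
x = np.array([[2.0, 0.5], [0.5, 1.0]])   # a point of Omega (positive definite)
k0 = rot(0.7)

assert abs(phi_nu(np.eye(2), nu) - 1.0) < 1e-12        # Phi_nu(e) = 1
assert abs(phi_nu(k0 @ x @ k0.T, nu) - phi_nu(x, nu)) < 1e-9  # K-invariance
print("spherical function checks pass")
```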
Now let \begin{equation}\label{D:al} \mathfrak{a}_l=\langle L(c_j),j=1,\dots, l\rangle \end{equation} for $1\leq l\leq r-1$ and assume that ${\boldsymbol\nu}$ belongs to $({\mathfrak{a}_l}_\mathbb{C})^*$, i.e. that its last $(r-l)$ coordinates vanish. Then ${\boldsymbol\nu}$ also defines a spherical function $\Phi^{(l)}_{\boldsymbol\nu}$ of $\Omega^{(l)}$. Let $\alpha$ be a real number. When ${\boldsymbol\nu}$ appears in the argument of an object related to $V^{(l)}$, we will use the convention that ${\boldsymbol\nu}+\alpha:=(\nu_1+\alpha,\dots,\nu_l+\alpha)$. Recall that $\Omega^{(l)}\subset\partial_l\Omega$. \begin{theo}\label{T:En} Let ${\boldsymbol\nu}$ in ${\mathfrak{a}_l}_\mathbb{C}^*$ be such that $\Re{\boldsymbol\nu}\geq0$. Then for all $x$ in $\Omega^{(l)}$, $$\Phi_{{\boldsymbol\nu}}(x)=\gamma_{{\boldsymbol\nu}}^{(l)}\Phi^{(l)}_{{\boldsymbol\nu}}(x), \quad\text{where}\quad \gamma_{{\boldsymbol\nu}}^{(l)}= \frac{\Gamma_{\Omega^{(l)}}(\frac{rd}{2})\Gamma_{\Omega^{(l)}}({\boldsymbol\nu}+\frac{ld}{2})} {\Gamma_{\Omega^{(l)}}(\frac{ld}{2})\Gamma_{\Omega^{(l)}}({\boldsymbol\nu}+\frac{rd}{2})}.$$ \end{theo} \noindent Here $\Gamma_{\Omega^{(l)}}$ is the Gindikin Gamma function of the cone $\Omega^{(l)}$, $$\Gamma_{\Omega^{(l)}}({\boldsymbol\nu})=(2\pi)^{\frac{l(l-1)d}{4}}\prod^{l}_{j=1}\Gamma\left(\nu_j-(j-1)\tfrac{d}{2}\right).$$ The theorem is proved in the case ${\boldsymbol\nu}\in\mathbb{N}^l$ in \cite[Proposition 1.3.2 and Remark 1.3.4]{ARUP}. We use this result and the following lemma, which is based on Blaschke's theorem (see \cite[Lemma A.1]{BK} for a detailed proof). \begin{lem} Let $f$ be a holomorphic function defined on the right half-plane $\{z\in\mathbb{C}\mid \Re z>0\}$. If $f$ is bounded and $f(n)=0$ for all $n\in\mathbb{N}$, then $f$ is identically zero. \end{lem} \begin{proof}[Proof of the theorem] Let us set $z_j=\nu_j-\nu_{j+1}$, $j=1,\dots, l-1$, and $z_l=\nu_l$, so that $\Re z_j\geq 0$ and $\nu_j=\sum_{k=j}^lz_k$.
Let $x\in\Omega^{(l)}$ and let $$F(z_1,\dots,z_l)=\Phi_{{\boldsymbol\nu}(z_1,\dots,z_l)}(x)-\gamma_{{\boldsymbol\nu}(z_1,\dots,z_l)}^{(l)} \Phi^{(l)}_{{\boldsymbol\nu}(z_1,\dots,z_l)}(x).$$ Let us fix $z_j=m_j\in\mathbb{N}$, $j=2,\dots,l$. If $b\geq a>0$, one can see by Stirling's formula that $\frac{\Gamma(z+a)}{\Gamma(z+b)}$ is bounded on the right half plane. It follows that the function $$z\mapsto\frac{\Gamma(z+\sum_{k=2}^{l}m_k+\frac{ld}{2})}{\Gamma(z+\sum_{k=2}^{l}m_k+\frac{rd}{2})}$$ is bounded and hence also $z\mapsto\gamma_{{\boldsymbol\nu}(z,m_2,\dots,m_l)}^{(l)}$. Now \begin{align*} \nm{\Phi_{{\boldsymbol\nu}(z,m_2,\dots,m_l)}(x)}&\leq\int_{K}{\nm{\Delta_{(1)}^z(kx)\Delta_{(2)}^{m_2}(kx)\dots \Delta_{(l)}^{m_l}(kx)}dk}\\ &\leq\sup_K\left(\Delta_{(2)}^{m_2}(kx)\dots\Delta_{(l)}^{m_l}(kx)\right) \left(\sup_K\Delta_{(1)}(kx)\right)^{\Re(z)}, \end{align*} and $$\nm{\Phi^{(l)}_{{\boldsymbol\nu}(z,m_2,\dots,m_l)}(x)}\leq\sup_{K^{(l)}}\left(\Delta_{(2)}^{m_2}(kx)\dots \Delta_{(l)}^{m_l}(kx)\right)\left(\sup_{K^{(l)}}\Delta_{(1)}(kx ) \right)^{\Re(z)}$$ Let $\delta>\sup_K\Delta_{(1)}(kx)\geq\sup_{K^{(l)}}\Delta_{(1)}(kx)$. Then the holomorphic function $f(z)=F(z,m_2,\dots,m_l)\delta^{-z}$ is bounded and vanishes on $\mathbb{N}$, hence on the right half plane, i.e., for every $z\in\mathbb{C}$ with $\Re z>0$ and $m_j\in \mathbb{N}$, $$F(z,m_2,\dots,m_l)=0.$$ By the same argument one shows that for every $z_1\in\mathbb{C}$ with $\Re z_1>0$ and $m_j\in \mathbb{N}$, the map $z\mapsto F(z_1,z,m_3,\dots,m_l)$ vanishes identically, and the proof follows by induction. \end{proof} \section{A series of spherical unitary representations}\label{S:sr} In this section we introduce a family of spherical unitary representations that will occur in the decomposition of $L^2(\partial_l\Omega)$ under the action of $G$. For $1\leq l\leq r-1$ let $$\overline{\mathfrak{n}}_l=\bigoplus_{l\geq i<j}\mathfrak{g}^{-}_{ij}.$$ Note that it is a (nilpotent) Lie algebra and that (cf. 
\eqref{D:al}) $$\mathfrak{a}_l\oplus\mathbb{R} L(e)=\bigcap_{l<i<j}\ker\alpha^\pm_{ij}.$$ The closed subgroup $Z_G(\mathfrak{a}_l)=Z_G(\mathfrak{a}_l\oplus\mathbb{R} L(e))$ normalises $\overline{N}_l=\exp\overline{\mathfrak{n}}_l$, hence $$Q_l=Z_G(\mathfrak{a}_l)\overline{N}_l$$ is a subgroup of $G$. Moreover $Z_G(\mathfrak{a}_l)\cap\overline{N}_l=\{\id\}$, so this decomposition is a semidirect product. The Lie algebra of $Q_l$, $$\mathfrak{q}_l=\mathfrak{z}_\mathfrak{g}(\mathfrak{a}_l)\oplus\overline{\mathfrak{n}}_l=\mathfrak{m}\oplus\mathfrak{a}\oplus\bigoplus_{l<i<j}\mathfrak{g}^\pm_{ij}\oplus\bigoplus_{l\geq i<j}\mathfrak{g}^{-}_{ij},$$ where $\mathfrak{m}=\mathfrak{z}_\mathfrak{k}(\mathfrak{a})$, is a parabolic subalgebra of $\mathfrak{g}$, and since $Q_l$ is the normaliser of $\mathfrak{q}_l$ in $G$ \cite[7.83]{KNA}, it is a closed subgroup of $G$ (the parabolic subgroup associated to $\mathfrak{q}_l$). It is also the stabiliser of the flag of idempotents $(e_1,e_2,\dots,e_l)$. Note also that since $V(c_j,1)=\mathbb{R} c_j$, we have \begin{equation}\label{E:actcent} Z_G(\mathfrak{a}_l)c_j=\mathbb{R}_+ c_j,\quad j=1,\dots, l. \end{equation} \begin{lem}Let $A_l=\exp\mathfrak{a}_l$ and $$M_l=\bigcap_{j=1}^lZ_G(\mathfrak{a}_l)_{c_j}.$$ Then the multiplication map $$M_l\times A_l\times \overline{N}_l\rightarrow Q_l$$ is a diffeomorphism. \end{lem} \begin{proof} It is clear from \eqref{E:actcent} that the product map $M_l\times A_l\rightarrow Z_G(\mathfrak{a}_l)$ is a smooth bijection, hence a diffeomorphism. \end{proof} \noindent Note that the decomposition in the preceding lemma is not exactly the Langlands decomposition of $Q_l$; however, it is better adapted to our purpose. We will let $a_l(q)$ denote the $A_l$-component of $q\in Q_l$ in the preceding decomposition.
For ${\boldsymbol\nu}\in({\mathfrak{a}_l}_\mathbb{C})^*$, let $1\otimes e^{\boldsymbol\nu}\otimes1$ be the character of $Q_l$ defined by $(1\otimes e^{\boldsymbol\nu}\otimes1)(q)=e^{{\boldsymbol\nu}\log a_l(q)}$, and let us denote by $C(G,Q_l,1\otimes e^{{\boldsymbol\nu}}\otimes 1)$ the Fréchet space of continuous complex valued functions on $G$ that are $Q_l$-equivariant with respect to $1\otimes e^{{\boldsymbol\nu}}\otimes 1$, i.e., $$C(G,Q_l,1\otimes e^{{\boldsymbol\nu}}\otimes 1)=\{f\in C(G)\mid \forall q\in Q_l,\ f(gq)= e^{-{\boldsymbol\nu}\log{a_l(q)}}f(g)\}.$$ The induced representation $\Ind_{Q_l}^{G}(1\otimes e^{\boldsymbol\nu}\otimes1)$ is the left regular representation of $G$ on $C(G,Q_l,1\otimes e^{{\boldsymbol\nu}}\otimes 1)$. We will now determine values of ${\boldsymbol\nu}\in({\mathfrak{a}_l}_\mathbb{C})^*$ for which the representation $$\Ind_{Q_l}^{G}(1\otimes e^{\boldsymbol\nu}\otimes 1)\otimes\Delta^{-\frac{ld}{4}}$$ (cf. \eqref{D:char}) can be made unitary and irreducible. The group $G$ admits the Levi decomposition $$G=G'\times\mathbb{R}_{+}$$ where the semisimple part $G'$ is the kernel of the character $\Delta$. Then $K$ is a maximal compact subgroup of $G'$ and the Lie algebra $\mathfrak{g}'$ of $G'$ has Cartan decomposition $\mathfrak{g}'=\mathfrak{k}\oplus\mathfrak{p}'$ with $\mathfrak{p}'=\{L(x)\in\mathfrak{p}\mid \tr x=0\}$, and $\mathfrak{a}'=\mathfrak{a}\cap\mathfrak{p}'$ is maximal abelian in $\mathfrak{p}'$. Let $$\mathfrak{a}_l'=\bigoplus_{1\leq j\leq l}\mathbb{R} \left(L(c_j)-\frac{L(e-e_l)}{r-l}\right)=\mathfrak{a}'\cap\bigcap_{l<i<j}\ker\alpha^\pm_{ij},$$ $$\mathfrak{m}_l'=\mathfrak{m}\oplus\bigoplus_{j=l+1}^{r-1} \mathbb{R} \left(L(c_j)-\frac{L(e-e_l)}{r-l}\right)\oplus \bigoplus_{l<i<j}\mathfrak{g}^\pm_{ij}.$$ Then $$\mathfrak{q}'_l=\mathfrak{m}_l'\oplus \mathfrak{a}_l'\oplus \overline{\mathfrak{n}}_l$$ is a parabolic subalgebra of $\mathfrak{g}'$.
The corresponding parabolic subgroup $Q'_l$ admits the Langlands decomposition $$Q_l'=M_l'A_l'\overline{N}_l,$$ where $A_l'=\exp\mathfrak{a}_l'$ and (cf. \cite[Ch. VII, Propositions 7.25, 7.27 and 7.82]{KNA}) $$M_l'=Z_K(\mathfrak{a}_l')\exp(\mathfrak{m}_l'\cap\mathfrak{p}'),$$ whose Lie algebra is $\mathfrak{m}'_l$. For ${\boldsymbol\nu}\in({\mathfrak{a}'_l}_\mathbb{C})^*$ the induced representation $$\Ind_{Q'_l}^{G'}(1\otimes e^{\boldsymbol\nu}\otimes 1)$$ is defined in the same way as for $G$. \begin{lem} \begin{enumerate}[(i)] \item $Q_l= Q_l'\times\mathbb{R}_{+},$ \item $M_l'\subset M_l.$ \end{enumerate} \end{lem} \begin{proof} To prove (i) we observe that since $\mathbb{R}_+\subset Q_l$, we can write $Q_l=Q'\times\mathbb{R}_{+}$, for some subgroup $Q'\subset G'$. Since $Q_l$ (resp. $Q_l'$) is the normaliser of $\mathfrak{q}_l$ (resp. $\mathfrak{q}_l'$) in $G$ (resp. $G'$), the inclusion $Q_l'\subset Q'$ is obvious and the converse follows from the fact that $\Ad(G)$ preserves $\mathfrak{g}'$. Since $X.c_j=0$ for $X\in\mathfrak{m}_l'$ and $1\leq j\leq l$, the assertion (ii) will follow from $Z_K(\mathfrak{a}_l)=Z_K(\mathfrak{a}_l')$. Let $k\in Z_K(\mathfrak{a}_l')$, i.e., for $j=1,\dots, l$, \begin{equation}\label{E:kframe} k.\left(L(c_j)-\frac{L(e-e_l)}{r-l}\right)=L(kc_j)-\frac{L(e-ke_l)} {r-l}=L(c_j)-\frac{L(e-e_l)}{r-l}. \end{equation} By summing over $j$ one obtains $$L(ke_l)-\frac{l}{r-l}{L(e-ke_l)}=L(e_l)- \frac{l}{r-l}{L(e-e_l)},$$ and hence $L(ke_l)=L(e_l)$. By \eqref{E:kframe} we then have $L(kc_j)=L(c_j)$ for $j=1,\dots, l$, i.e., $k\in Z_K(\mathfrak{a}_l)$. \end{proof} Let us denote by $(\widetilde\delta_j)_{j=1\dots l}$ the dual basis of $(L(c_j)-\frac{L(e-e_l)}{r-l})_{j=1\dots l}$ in $({\mathfrak{a}'_{l}}_\mathbb{C})^*$. By $\delta_j\mapsto\widetilde{\delta}_j$ we define an isomorphism $({\mathfrak{a}_{l}}_\mathbb{C})^*\simeq({\mathfrak{a}'_{l}}_\mathbb{C})^*$, ${\boldsymbol\nu}\mapsto\widetilde{\boldsymbol\nu}$. 
Let $m_{\boldsymbol\nu}$ be the character of $\mathbb{R}_{+}$ defined by \begin{equation}\label{D:mchar} m_{\boldsymbol\nu}(\zeta)=\zeta^{-\frac{rld}{4}+\sum_{1\leq j\leq l}{\nu_j}}. \end{equation} \begin{pro}\label{P:isorep} $$\Ind_{Q_l}^{G}(1\otimes e^{\boldsymbol\nu}\otimes 1)\otimes \Delta^{-\frac{dl}{4}}\simeq\Ind_{Q_l'}^{G'}(1\otimes e^{\widetilde{\boldsymbol\nu}}\otimes 1)\otimes m_{{\boldsymbol\nu}}$$ \end{pro} \begin{proof} Let us denote by $a'_l(q')$ the $A'_l$-component of $q'$ in the Langlands decomposition of $Q_l'$. Then for all $q=q'\xi\in Q_l=Q'_l\times\mathbb{R}_{+}$, \begin{equation}\label{E:Qeq} e^{{\boldsymbol\nu}\log a_l(q)}=\xi^{\sum{\nu_j}}e^{\widetilde{\boldsymbol\nu}\log a'_l(q')}. \end{equation} Indeed, if $q'=m'e^{\sum{s_j(L(c_j)-\frac{L(e-e_l)}{r-l}})}\overline{n}$ is the Langlands decomposition of $q'$ in $Q_l'$, then $$q=\left(m'e^{(-\frac{\sum{s_j}}{r-l}+\log{\xi})L(e-e_l)}\right)e^{\sum{s_jL(c_j)}+\log{\xi}\,L(e_l)}\overline{n}$$ is the Langlands decomposition of $q$ in $Q_l$. If $f\in C(G,Q_l,1\otimes e^{{\boldsymbol\nu}}\otimes1)$, then by \eqref{E:Qeq}, its restriction $\widetilde f$ to $G'$ belongs to the space $C(G',Q_l',1\otimes e^{\widetilde{\boldsymbol\nu}}\otimes1)$. Conversely, if $\widetilde{f}\in C(G',Q_l',1\otimes e^{\widetilde{\boldsymbol\nu}}\otimes1)$, one obtains, again by \eqref{E:Qeq}, a function $f\in C(G,Q_l,1\otimes e^{{\boldsymbol\nu}}\otimes1)$ by setting $f(g'\zeta):=\zeta^{-\sum_j \nu_j}\widetilde{f}(g')$, and we obtain thereby a bijection $C(G,Q_l,1\otimes e^{{\boldsymbol\nu}}\otimes1)\simeq C(G',Q_l',1\otimes e^{\widetilde{\boldsymbol\nu}}\otimes1)$. Now the operator \begin{align*} \mathcal T: C(G,Q_l,1\otimes e^{{\boldsymbol\nu}}\otimes1)\otimes\mathbb{C} & \rightarrow C(G',Q_l',1\otimes e^{\widetilde{\boldsymbol\nu}}\otimes1)\otimes\mathbb{C},\\ f \otimes z & \mapsto \widetilde{f}\otimes z, \end{align*} intertwines the actions of $G$. 
Indeed, if $f\in C(G,Q_l,1\otimes e^{{\boldsymbol\nu}}\otimes1)$ and $g=g'\zeta\in G=G'\times\mathbb{R}_{+}$ then $\widetilde{f(g^{-1}\cdot)}=\widetilde{f(g'^{-1}\zeta^{-1}\cdot)}=\zeta^{\sum{\nu_j}}\widetilde{f}(g'^{-1}\cdot)$, and hence \begin{align*} \mathcal{T}\left(f(g^{-1}\cdot)\otimes\Delta^{-\frac{ld}{4}}(g)z\right) &=\widetilde{f(g^{-1}\cdot)}\otimes \Delta^{-\frac{ld}{4}}(g)z\\ &=\zeta^{\sum{\nu_j}}\widetilde{f}(g'^{-1}\cdot)\otimes \zeta^{- \frac{rld}{4}}z\\ &=\widetilde{f}(g'^{-1}\cdot)\otimes \zeta^{- \frac{rld}{4}+\sum{\nu_j}}z. \end{align*} \end{proof} The representation $\Ind_{Q_l}^{G}(1\otimes e^{\boldsymbol\nu}\otimes 1)\otimes \Delta^{-\frac{ld}{4}}$ extends to a continuous representation (denoted by the same symbol) on the Hilbert completion of $C(G,Q_l,1\otimes e^{\boldsymbol\nu}\otimes1)\otimes\mathbb{C}$ with respect to $$\nm{f\otimes z}^2=\int_{K}{\nm{f(k)}^2dk}\,\nm{z}^2.$$ \begin{lem}\label{L:invMl} For any $m\in M_l$ and any $x\in V$, $$\Delta_{(j)}(mx)=\Delta_{(j)}(x),\quad 1\leq j\leq l.$$ \end{lem} \begin{proof} Let $m\in M_l$. Then for $j=1,\dots, l$, $m$ commutes with $L(e_j)$, and $m\in\Aut{V^{(j)}}$, so \begin{align*} \Delta_{(j)}(mx)=\Delta^{(j)}(P(c_j)(mx))=\Delta^{(j)}(mP (c_j)(x))&=\Delta^{(j)}(P(c_j)(x))\\ &=\Delta_{(j)}(x). \end{align*} \end{proof} \begin{pro} The map $g\mapsto\Delta_{-{\boldsymbol\nu}}(g^*e)$ is a (norm one) $K$-invariant vector in $C(G,Q_l,1\otimes e^{\boldsymbol\nu} \otimes1)$. \end{pro} \begin{proof} Let $q\in Q_l$ and let $q=man\in M_lA_l\overline{N}_l$. 
Then for $g\in G$, one has, since $n^*\in N$ and $a^*=a$, $$\Delta_{-{\boldsymbol\nu}}((gq)^*e)=\Delta_{-{\boldsymbol\nu}}(n^*a^*m^*g^*e)=e^{-{\boldsymbol\nu}\log(a)} \Delta_{-{\boldsymbol\nu}}(m^*g^*e)$$ and because of Lemma \ref{L:invMl}, $$\Delta_{-{\boldsymbol\nu}}((gq)^*e)=e^{-{\boldsymbol\nu}\log (a)}\Delta_{-{\boldsymbol\nu}}(g^*e).$$ \end{proof} \noindent Let $${\boldsymbol\rho}_l=\frac{d}{2}\sum_{i\leq l,\ i<j}{\frac{\delta_i-\delta_j}{2}}= \frac{d}{4}\sum_{j=1}^{l}{(r+1-2j)}\delta_j$$ be the half sum of the negative $\mathfrak{a}_l\oplus\mathbb{R} L(e)$-restricted roots (counted with multiplicities). \begin{theo} For almost every ${\boldsymbol\lambda}\in\mathfrak{a}_l^*$ (with respect to Lebesgue measure), the representation $\Ind_{Q_l}^{G}(1\otimes e^{\boldsymbol\nu}\otimes 1)\otimes \Delta^{-\frac{ld}{4}}$ with ${\boldsymbol\nu}=i{\boldsymbol\lambda}+{\boldsymbol\rho}_l+\frac{ld}{4}$ is an irreducible unitary spherical representation. \end{theo} \begin{proof} First, let us remark that if ${\boldsymbol\nu}=i{\boldsymbol\lambda}+{\boldsymbol\rho}_l+\frac{ld}{4}$ with ${\boldsymbol\lambda}\in\mathfrak{a}_l^*$, then since $$\sum{\nu_j}-\frac{rld}{4}=i\sum{\lambda_j}+ \frac{d}{4}\left(l(r+1+l)-2\frac{l(l+1)}{2}-rl\right)=i\sum{\lambda_j},$$ the representation $m_{\boldsymbol\nu}$ (cf.~\eqref{D:mchar}) is unitary. We now claim that $\widetilde{{\boldsymbol\rho}_l+\frac{dl}{4}}$ is the half sum of the negative $\mathfrak{a}'_l$-restricted roots (counted with multiplicities). Indeed, \begin{align*} \frac{d}2\sum_{i\leq l,\ i<j}\frac{\delta_i-\delta_j}{2}& \left(L(e_k)-\frac{L(e-e_l)}{r-l}\right)\\ &=\frac{d}{4}(r+1-2k)+\frac{d}2 \sum_{i\leq l<j} \frac{\delta_i-\delta_j}{2}\left(-\frac{L(e-e_l)}{r-l}\right)\\ &=\frac{d}{4}(r+1-2k)+\frac{d}4\sum_{i\leq l<j}{(-\delta_j)}\left(- \frac{1}{r-l}\sum_{m>l}{L(c_m)}\right) \\ &=\frac{d}{4}(r+1-2k)+\frac{ld}{4}=\widetilde{{\boldsymbol\rho}_l+\frac{ld}{4}}\left(L(e_k)-\frac{L(e-e_l)}{r-l}\right). 
\end{align*} The theorem now follows from Proposition \ref{P:isorep} and Bruhat's theorem \cite[Theorem 2.6]{VdB}. \end{proof} \noindent Let us write ${\boldsymbol\rho}_l'={\boldsymbol\rho}_l+\frac{ld}{4}$. We now set $$(\pi_{\boldsymbol\nu},\mathcal{H}_{\boldsymbol\nu}):=\Ind_{Q_l}^{G}(1\otimes e^{i{\boldsymbol\nu}+{\boldsymbol\rho}'_l}\otimes 1)\otimes \Delta^{-\frac{ld}{4}},$$ and $$v_{{\boldsymbol\nu}}:=\Delta_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}((\cdot)^*e)\otimes1.$$ Let us compute the positive definite spherical function associated to $\pi_{\boldsymbol\nu}$, that is, $$\Phi(g)=\langle\pi_{\boldsymbol\nu}(g)v_{\boldsymbol\nu},v_{{\boldsymbol\nu}}\rangle_{\boldsymbol\nu}= \int_{K}{\Delta_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}((g^{-1}k)^*e)dk}\,\Delta^{-\frac{ld}{4}}(g).$$ Since $g^{-*}e=\Theta(g)e=(ge)^{-1}$ \cite[Theorem III.5.3]{FAKO}, $\Phi$ can be written, as a function on $\Omega$, $$\Phi(x)=\int_{K}{\Delta_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}(kx^{-1})dk}\,\Delta^{-\frac{ld}{4}}(x)= \Phi_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}(x^{-1})\Delta^{-\frac{ld}{4}}(x),$$ and since $\Phi_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}(x^{-1})=\Phi_{i{\boldsymbol\nu}+{\boldsymbol\rho}'_l+2{\boldsymbol\rho}}(x)$ with ${\boldsymbol\rho}=\frac{d}{4}\sum_{j=1}^r{(2j-r-1)\delta_j}$ \cite[Theorem XIV.3.1 (iv)]{FAKO}, $$\Phi(x)=\Phi_{i{\boldsymbol\nu}+\boldsymbol\eta_l+{\boldsymbol\rho}}(x)\quad\text{where}\quad \boldsymbol\eta_l=\frac{d}{4}\sum_{j=l+1}^r{(2j-l-r-1)\delta_j}.$$ Recall \cite[Theorem XIV.3.1 (iii)]{FAKO} that $\Phi_{{\boldsymbol\nu}'+{\boldsymbol\rho}}=\Phi_{{\boldsymbol\nu}+{\boldsymbol\rho}}$ if and only if ${\boldsymbol\nu}'=w{\boldsymbol\nu}$ for some $w\in W$. Hence the representations $\pi_{\boldsymbol\lambda}$ and $\pi_{{\boldsymbol\lambda}'}$, with ${\boldsymbol\lambda}$, ${\boldsymbol\lambda}'$ in $\mathfrak{a}_l^*$, are equivalent if and only if $i{\boldsymbol\lambda}'+\boldsymbol\eta_l=w(i{\boldsymbol\lambda}+\boldsymbol\eta_l)$. 
Since $\boldsymbol\eta_l$ is real it follows that $\pi_{{\boldsymbol\lambda}'}$ is equivalent to $\pi_{\boldsymbol\lambda}$ if and only if ${\boldsymbol\lambda}'=w{\boldsymbol\lambda}$ with $w\in W_l:=\mathfrak{S}_l$. \section{The intertwining operator and the Plancherel formula} Let ${\boldsymbol\nu}\in\mathbb{C}^l$ be such that $\Re\left(-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)\right)\geq0$. Then for $f\in C_0^\infty(\partial_l\Omega)$ and $g\in G$, the formula \begin{equation}\label{D:intop} T_{\boldsymbol\nu} f(g)=\int_{\partial_l\Omega}{f(x)\Delta_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}(g^*x)d\mu_l(x)} \end{equation} defines a continuous function on $G$. Moreover, it follows from \eqref{E:hcef} and Lemma~\ref{L:invMl} that \begin{equation*} T_{\boldsymbol\nu} f\in C(G,Q_l,1\otimes e^{i{\boldsymbol\nu}+{\boldsymbol\rho}'_l}\otimes1). \end{equation*} \noindent We will also view $T_{\boldsymbol\nu}$ as an operator with values in $C(G,Q_l,1\otimes e^{i{\boldsymbol\nu}+{\boldsymbol\rho}'_l}\otimes1)\otimes\mathbb{C}$ (in the obvious way), and hence in $\mathcal{H}_{\boldsymbol\nu}$. \begin{lem}\label{L:inter} For $\Re\left(-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)\right)\geq0$, the operator $$T_{\boldsymbol\nu}:C^\infty_0(\partial_l\Omega)\rightarrow \mathcal{H}_{\boldsymbol\nu}$$ intertwines $\pi^{l}$ and $\pi_{\boldsymbol\nu}$. \end{lem} \begin{proof} Let us set ${\boldsymbol\nu}'=-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)$. Let $h\in G$. Then \begin{align*} T_{\boldsymbol\nu}(h.f)(g)&=\int_{\partial_l\Omega} {\Delta^{\frac{ld}{4}}(h) f(h^*x) \Delta_{{\boldsymbol\nu}'}(g^*x)d\mu_l(x)}\\ &=\Delta^{-\frac{ld}{4}}(h) \int_{\partial_l\Omega}{f(x)\Delta_{{\boldsymbol\nu}'}(g^*h^{-*}x)d\mu_l(x)}\\ &=\Delta^{-\frac{ld}{4}}(h)\int_{\partial_l\Omega} {f(x)\Delta_{{\boldsymbol\nu}'}((h^{-1}g)^*x)d\mu_l(x)}, \end{align*} i.e., \begin{equation} T_{\boldsymbol\nu}(h.f)(g)=\Delta^{-\frac{ld}{4}}(h) T_{\boldsymbol\nu}(f)(h^{-1}g). 
\label{E:inter} \end{equation} \end{proof} Since ${\boldsymbol\rho}'_l=\frac{d}{4}\sum_{j=1}^l(r+l+1-2j)\delta_j$, we do not have $\Re(-(i{\boldsymbol\lambda}+{\boldsymbol\rho}'_l))\geq0$ when ${\boldsymbol\lambda}\in\mathfrak{a}_l^*$, and the integral \eqref{D:intop} does not converge. This means that the integral has to be interpreted in a suitable sense using analytic continuation in the parameter ${\boldsymbol\nu}$. For this we recall that when ${\boldsymbol\nu}\in\mathbb{C}^l$, the Riesz distribution $R_{{\boldsymbol\nu}+\frac{ld}{2}}$ on $V$ can be defined as the analytic continuation of the following integral $$R_{{\boldsymbol\nu}+\frac{ld}{2}}(F)=\Gamma_{\Omega^{(l)}}\left({\boldsymbol\nu}+\tfrac{ld}{2}\right)^{-1} \int_{\partial_l\Omega}{F(x)\Delta_{\bsn}(x)d\mu_l(x)},\quad F\in\mathcal{S}(V),$$ where $\mathcal{S}(V)$ is the Schwartz space of $V$, and that it has support in $\overline{\partial_l\Omega}$ (cf. \cite[Theorems 5.1 and 5.2]{ISHI}, where the integral is actually defined over $\mathcal{O}_l=\{x \in \partial_l\Omega \mid \Delta_{(l)}(x) \neq 0\}$, but $\mu_l(\partial_l\Omega\setminus\mathcal{O}_l)=0$). The restriction (denoted by the same symbol) to the open set $V_{\geq l}$ is then a distribution with support in the submanifold $\partial_l\Omega$. We can therefore consider the vertical restrictions $R_{{\boldsymbol\nu}+\frac{ld}{2}}\mid_{\partial_l\Omega}$ (cf. Appendix A). Since $R_{{\boldsymbol\nu}+\frac{ld}{2}}$ is a measure with support on $\partial_l\Omega$ for $\Re{\boldsymbol\nu} \geq 0$, these restrictions do not depend on the choice of a tubular neighbourhood (cf. Proposition~\ref{P:invres}). For $\Re(-(i{\boldsymbol\nu}+{\boldsymbol\rho}_l'))\geq 0$, we have \begin{equation*} T_{{\boldsymbol\nu}}f(g)=\Gamma_{\Omega^{(l)}}\left(-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)+\tfrac{ld}2\right) \Delta^{-\frac{ld}{2}}(g)\left(R_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)+\frac{ld}{2}}\right)\mid_{\partial_l\Omega}\left(f(g^{-*}\cdot)\right). 
\end{equation*} Hence, we can let the right hand side define an analytic continuation of the integrals $T_{{\boldsymbol\nu}}f(g)$. It is defined on the complement $\mathcal{Z}$ of the set of poles of the meromorphic function $\Gamma_{\Omega^{(l)}}\big(-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)+\tfrac{ld}{2}\big)$. Since for fixed $f \in C_0^\infty(\partial_l\Omega)$ the map $g\mapsto f(g^{-*}\cdot)$ is continuous, the function $g \mapsto T_{{\boldsymbol\nu}}(f)(g)$ is continuous. \begin{pro} For any ${\boldsymbol\nu} \in\mathcal{Z}$, $$T_{\boldsymbol\nu} f\in C(G,Q_l,1\otimes e^{i{\boldsymbol\nu}+{\boldsymbol\rho}'_l}\otimes1),$$ and the operator $$T_{\boldsymbol\nu}:C^\infty_0(\partial_l\Omega)\rightarrow \mathcal{H}_{\boldsymbol\nu}$$ intertwines $\pi^{l}$ and $\pi_{\boldsymbol\nu}$. \end{pro} \begin{proof} The equation describing the $Q_l$-equivariance as well as eq.~\eqref{E:inter} are analytic in the parameter ${\boldsymbol\nu}$. Hence they hold by analytic continuation since they hold on the open set where $\Re(-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l))>0$. \end{proof} We now recall, in order to fix notation, the definition of the spherical Fourier transform on $\Omega^{(l)}$. If $f$ is a continuous function with compact support on $\Omega^{(l)}$ which is $K^{(l)}$-invariant, its spherical Fourier transform is $$\widehat{f}({\boldsymbol\nu})=\int_{\Omega^{(l)}}{f(x) \Phi^{(l)}_{-{\boldsymbol\nu}+{\boldsymbol\rho}^{(l)}}(x)d_*^{(l)}\!x},$$ where ${\boldsymbol\nu}\in ({\mathfrak{a}_l}_{\mathbb{C}})^*$ and ${\boldsymbol\rho}^{(l)}=\frac{d}{4}\sum_{j=1}^{l}{(2j-l-1)\delta_j}$. Since $f$ has compact support, the function $\widehat f$ is holomorphic on $({\mathfrak{a}_l}_{\mathbb{C}})^*$. For later use we also recall the inversion formula for $f\in C_0^{\infty}(\Omega^{(l)})^{K^{(l)}}$ and ${\boldsymbol\lambda}\in\mathfrak{a}_l^*$ (cf. \cite[Theorem XIV.5.3]{FAKO} and \cite[Ch. 
III, Theorem 7.4]{HEL}): \begin{equation}\label{E:if} f(x)=c^{(l)}_0\int_{\mathfrak{a}_l^*}{\widehat{f}(i{\boldsymbol\lambda}) \Phi^{(l)}_{i{\boldsymbol\lambda}+{\boldsymbol\rho}^{(l)}}(x) \frac{d{\boldsymbol\lambda}}{\nm{c^{(l)}({\boldsymbol\lambda})}^2}}, \end{equation} where $d{\boldsymbol\lambda}$ is the Lebesgue measure on $\mathfrak{a}_l^*\simeq\mathbb{R}^l$, $c^{(l)}({\boldsymbol\lambda})$ is Harish-Chandra's $c$-function for $\Omega^{(l)}$, and $c^{(l)}_0$ is a positive constant. Now let $f\in C_0^\infty(\partial_l\Omega)^K$\!, and observe that $\Omega^{(l)}$, being a fibre of $\partial_l\Omega\rightarrow\Pi_l$, is closed in $\partial_l\Omega$, and hence $f\mid_{\Omega^{(l)}}$ (sometimes still denoted by $f$) has compact support in $\Omega^{(l)}$. Moreover, since any $k\in K^{(l)}$ extends to an element of $K$, the function $f\mid_{\Omega^{(l)}}$ is $K^{(l)}$-invariant. \begin{pro} If $f\in C^\infty_0(\partial_l\Omega)$ is $K$-invariant, then \begin{equation} T_{\boldsymbol\nu} f(g)=\gamma_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}^{(l)}\widehat{f}(i{\boldsymbol\nu}-\tfrac {rd}{4})\Delta_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}(g^*e). \end{equation} \end{pro} \begin{proof} Let us set ${\boldsymbol\nu}'=-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)$ in the following. Again, by analytic continuation, it suffices to prove the equality for $\Re({\boldsymbol\nu}')\geq 0$. Since $f$ and $\mu_l$ are $K$-invariant one has for all $k$ in $K$, $$T_{\boldsymbol\nu} f(g)=\int_{\partial_l\Omega}{f(k^{-1}x) \Delta_{{\boldsymbol\nu}'}(g^*x)d\mu_l(x)} =\int_{\partial_l\Omega}{f(x) \Delta_{{\boldsymbol\nu}'}(g^*kx)d\mu_l(x)}.$$ Hence $$ T_{\boldsymbol\nu} f(g)=\int_K{T_{\boldsymbol\nu} f(g)dk}=\int_{\partial_l\Omega}{f(x) \left(\int_K{\Delta_{{\boldsymbol\nu}'}(g^*kx)dk} \right)d\mu_l(x)}. 
$$ Writing $g^*=th$, $t\in NA$, $h\in K$, we have $$\Delta_{{\boldsymbol\nu}'}(thkx)=\Delta_{{\boldsymbol\nu}'}(hkx)\Delta_{{\boldsymbol\nu}'}(te)=\Delta_{{\boldsymbol\nu}'}(hkx)\Delta_{{\boldsymbol\nu}'}(g^*e),$$ and using the left invariance of the Haar measure of $K$, \begin{align*} \int_K{\Delta_{{\boldsymbol\nu}'}(g^*kx)dk}&=\int_K{\Delta_{{\boldsymbol\nu}'}(hkx)dk}\,\Delta_{{\boldsymbol\nu}'}(g^*e)\\ &=\int_K{\Delta_{{\boldsymbol\nu}'}(kx)dk}\,\Delta_{{\boldsymbol\nu}'}(g^*e)\\ &=\Phi_{{\boldsymbol\nu}'}(x) \Delta_{{\boldsymbol\nu}'}(g^*e), \end{align*} hence, $$T_{\boldsymbol\nu} f(g)=\int_{\Omega^{(l)}}{f(x)\Phi_{{\boldsymbol\nu}'}(x)}d\mu_l(x)\, \Delta_{{\boldsymbol\nu}'}(g^*e).$$ Now Upmeier and Arazy's polar decomposition \eqref{E:pd} for $\mu_l$ yields $$T_{\boldsymbol\nu} f(g)=\int_{\Omega^{(l)}}{f(x)\Phi_{{\boldsymbol\nu}'}(x) \Delta_{(l)}^{\frac{rd}{2}}(x)}d_*^{(l)}\!x\, \Delta_{{\boldsymbol\nu}'}(g^*e),$$ and by Theorem \ref{T:En}, \begin{align*} T_{\boldsymbol\nu} f(g)&=\gamma^{(l)}_{{\boldsymbol\nu}'}\int_{\Omega^{(l)}}{f(x)\Phi^{(l)}_{{\boldsymbol\nu}'}(x) \Delta_{(l)}^{\tfrac{rd}{2}}(x)} d_*^{(l)}\!x\, \Delta_{{\boldsymbol\nu}'}(g^*e)\\ &=\gamma^{(l)}_{{\boldsymbol\nu}'}\int_{\Omega^{(l)}}{f(x)\Phi^{(l)}_{{\boldsymbol\nu}'+ \tfrac{rd}{2}}(x)}d_*^{(l)}\!x\, \Delta_{{\boldsymbol\nu}'}(g^*e)\\ &=\gamma_{{\boldsymbol\nu}'}^{(l)}\widehat{f}(-{\boldsymbol\nu}'+{\boldsymbol\rho}^{(l)}- \tfrac{rd}{2})\Delta_{{\boldsymbol\nu}'}(g^*e). 
\end{align*} Since ${\boldsymbol\rho}_l'+{\boldsymbol\rho}^{(l)}-\frac{rd}{2}=-\frac{rd}{4}$, we eventually get $$T_{\boldsymbol\nu} f(g)=\gamma_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}^{(l)}\widehat{f}(i{\boldsymbol\nu}- \tfrac{rd}{4})\Delta_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}(g^*e).$$ \end{proof} \noindent In the following we set $$\widetilde f({\boldsymbol\nu})=\gamma_{-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)}^{(l)}\widehat{f}(i{\boldsymbol\nu}-\tfrac {rd}{4}),$$ and we note that it defines a meromorphic function whose poles are those of $\Gamma_{\Omega^{(l)}}(-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)+\frac{ld}{2})$. The following lemma will be used in the proof of the Plancherel formula. \begin{lem}[Inversion formula]\label{L:if2} Let $f\in C_0^\infty(\partial_l\Omega)^K$ and let ${\boldsymbol\lambda}\in \mathfrak{a}_l^*$. Then for $x\in\Omega^{(l)}$, \begin{equation*} f(x)=c_0^{(l)}\int_{\mathfrak{a}_l^*}{\widetilde{f}({\boldsymbol\lambda}) \Phi^{(l)}_{i{\boldsymbol\lambda}+{\boldsymbol\rho}^{(l)}-\frac{rd}{4}}(x) (\gamma^{(l)}_{-(i{\boldsymbol\lambda}+{\boldsymbol\rho}_l' )})^{-1} \frac{d{\boldsymbol\lambda}}{\nm{c^{(l)}({\boldsymbol\lambda})}^2}}. \end{equation*} \end{lem} \begin{proof} We have \begin{equation}\label{E:pmir} \widehat{f\Delta_{(l)}^{\frac{rd}{4}}}(i{\boldsymbol\lambda})= \widehat{f}(i{\boldsymbol\lambda}-\tfrac{rd}{4})=\widetilde{f}({\boldsymbol\lambda}) (\gamma^{(l)}_{-(i{\boldsymbol\lambda}+{\boldsymbol\rho}'_l )})^{-1}, \end{equation} hence the inversion formula \eqref{E:if} applied to the function $f\Delta_{(l)}^{\frac{rd}{4}}$ gives the desired formula. \end{proof} We now state the main result of the article. Recall the notations from the end of section~\ref{S:sr}. 
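The shift to $i{\boldsymbol\nu}-\frac{rd}{4}$ rests on the coefficient identity ${\boldsymbol\rho}'_l+{\boldsymbol\rho}^{(l)}-\frac{rd}{2}=-\frac{rd}{4}$, and the unitarity argument earlier used $\sum_j({\boldsymbol\rho}'_l)_j=\frac{rld}{4}$. Both follow from the definitions ${\boldsymbol\rho}'_l={\boldsymbol\rho}_l+\frac{ld}{4}$, $({\boldsymbol\rho}_l)_j=\frac{d}{4}(r+1-2j)$ and $({\boldsymbol\rho}^{(l)})_j=\frac{d}{4}(2j-l-1)$, and can be checked symbolically; the following sympy sketch is only a verification aid, not part of the argument.

```python
import sympy as sp

j, l, r, d = sp.symbols('j l r d', positive=True)

rho_l = sp.Rational(1, 4) * d * (r + 1 - 2*j)        # delta_j-coefficient of rho_l
rho_l_prime = rho_l + l*d/4                          # rho'_l = rho_l + ld/4
rho_upper_l = sp.Rational(1, 4) * d * (2*j - l - 1)  # delta_j-coefficient of rho^{(l)}

# rho'_l + rho^{(l)} - rd/2 = -rd/4, independently of j and l
shift = sp.simplify(rho_l_prime + rho_upper_l - r*d/2)
assert sp.simplify(shift + r*d/4) == 0

# sum over j = 1..l of the coefficients of rho'_l equals rld/4
total = sp.summation(rho_l_prime, (j, 1, l))
assert sp.simplify(total - r*l*d/4) == 0
print("coefficient identities verified")
```

Since both expressions are polynomial in $j$, $l$, $r$, $d$, the symbolic simplification proves the identities for all parameter values at once.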
\begin{theo}[The Plancherel Theorem]\label{T:Pl} Let $p$ be the measure on $\mathfrak{a}_l^*/W_l$ defined by $$dp({\boldsymbol\lambda})=\frac{c_0^{(l)}}{\nm{\gamma^{(l)}_{-(i {\boldsymbol\lambda}+{\boldsymbol\rho}_l') }c^{(l)}({\boldsymbol\lambda})}^2}d{\boldsymbol\lambda}.$$ Then there exists an isomorphism of unitary representations $$T:\left(\pi^{l},L^2(\partial_l\Omega)\right)\simeq\left(\int_{\mathfrak{a}_l^*}{\pi_{{\boldsymbol\lambda}} dp({\boldsymbol\lambda})},\int_{\mathfrak{a}_l^*}{\mathcal{H}_{{\boldsymbol\lambda}} dp({\boldsymbol\lambda})}\right),$$ such that for every $f\in C_0^\infty(\partial_l\Omega)$, $(Tf)_{\boldsymbol\lambda}=T_{\boldsymbol\lambda} f$. \end{theo} \begin{proof} First we prove that for any $K$-invariant function $f$ in $C_0^\infty(\partial_l\Omega)$, \begin{equation}\label{E:pfk} \int_{\partial_l\Omega}{\nm{f(x)}^2d\mu_l(x)}=\int_{\mathfrak{a}_l^*} {\nm{\widetilde{f}({\boldsymbol\lambda})}^2dp({\boldsymbol\lambda})}. \end{equation} For this purpose we use the polar decomposition for $\mu_l$ and the inversion formula of Lemma \ref{L:if2}. 
Then \begin{align*} &\int_{\partial_l\Omega}{\nm{f(x)}^2d\mu_l(x)}=\int_{\Omega^{(l)}}{\nm {f(x)}^2\Delta_{(l)}^{\frac{rd}{2}}(x)d_*^{(l)}\!x}\\ &=\int_{\Omega^{(l)}}{f(x)\Delta_{(l)}^{\frac{rd}{2}}(x)c_0^{(l)}\int_ {\mathfrak{a}_l^*}{\overline{\widetilde{f}({\boldsymbol\lambda})} {\Phi^{(l)}_{i{\boldsymbol\lambda}+{\boldsymbol\rho}^{(l)}-\frac{rd}{4}}(x)} {\overline{(\gamma^{(l)}_{-(i{\boldsymbol\lambda}+{\boldsymbol\rho}_l' )})}}{}^{-1} \frac{d{\boldsymbol\lambda}}{\nm{c^{(l)}({\boldsymbol\lambda})}^2}}d_*^{(l)}\!x}\\ &=\int_{\mathfrak{a}_l^*}{\overline{\widetilde{f}({\boldsymbol\lambda}) (\gamma^{(l)}_{-(i{\boldsymbol\lambda}+{\boldsymbol\rho}_l' )})}{}^{-1} \left(\int_{\Omega^{(l)}}{f(x)\Delta_{(l)}^{\frac{rd}{4}}(x)\Phi^ {(l)}_{-i{\boldsymbol\lambda}+{\boldsymbol\rho}^{(l)}}(x)d_*^{(l)}\!x}\right) \frac{c_0^{(l)}d{\boldsymbol\lambda}}{\nm{c^{(l)}({\boldsymbol\lambda})}^2}}\\ &=\int_{\mathfrak{a}_l^*}{\overline{\widetilde{f}({\boldsymbol\lambda}) (\gamma^{(l)}_{-(i{\boldsymbol\lambda}+{\boldsymbol\rho}_l' )})}{}^{-1} \widehat{f\Delta_{(l)}^{\frac{rd}{4}}}(i{\boldsymbol\lambda}) \frac{c_0^{(l)}d{\boldsymbol\lambda}}{\nm{c^{(l)}({\boldsymbol\lambda})}^2}}\\ &=\int_{\mathfrak{a}_l^*}{\nm{\widetilde{f} ({\boldsymbol\lambda})}^2\frac{c_0^{(l)}d{\boldsymbol\lambda}}{\nm{\gamma^{(l)} _{-(i{\boldsymbol\lambda}+{\boldsymbol\rho}_l')}c^{(l)}({\boldsymbol\lambda})}^2}}. \end{align*} In the last equality we have again used formula \eqref{E:pmir}. The next step is to prove that for a dense subset of functions $f$ in $C_0^\infty(\partial_l\Omega)$, the identity \begin{equation*} \int_{\partial_l\Omega}{\nm{f(x)}^2d\mu_l(x)}=\int_{\mathfrak{a}_l^*} {\nm{T_{\boldsymbol\lambda} f}_{\boldsymbol\lambda}^2dp({\boldsymbol\lambda})} \end{equation*} holds. Recall that $L^1(G)$ is a Banach $*$-algebra when equipped with convolution as multiplication, and $\varphi^*(g):=\overline{\varphi(g^{-1})}$. 
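The argument that follows hinges on the commutativity of the convolution algebra of bi-$K$-invariant functions in $L^1(G)$, i.e., the Gelfand pair property. This phenomenon can be illustrated in a finite-group analogue: for the pair $(\mathfrak{S}_3,\mathfrak{S}_2)$, averaging over $K\times K$ and then convolving is commutative, even though the full convolution algebra is not. The pair $(\mathfrak{S}_3,\mathfrak{S}_2)$ and the random test functions below are purely illustrative assumptions, not objects of the paper.

```python
from itertools import permutations
import random

G = list(permutations(range(3)))               # the symmetric group S_3
def mult(g, h):                                # composition (g h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))
def inv(g):
    t = [0] * 3
    for i, gi in enumerate(g):
        t[gi] = i
    return tuple(t)

K = [g for g in G if g[2] == 2]                # copy of S_2 fixing the point 2

def convolve(phi, psi):                        # (phi * psi)(g) = sum_h phi(h) psi(h^{-1} g)
    return {g: sum(phi[h] * psi[mult(inv(h), g)] for h in G) for g in G}

def project(phi):                              # phi^#: average over K on both sides
    n = len(K) ** 2                            # (equivalent to the text's k_1^{-1} g k_2,
    return {g: sum(phi[mult(k1, mult(g, k2))]  #  since K is a group)
                   for k1 in K for k2 in K) / n
            for g in G}

random.seed(0)
phi = project({g: random.random() for g in G})
psi = project({g: random.random() for g in G})

# bi-K-invariant functions commute under convolution (Gelfand pair property)
a, b = convolve(phi, psi), convolve(psi, phi)
assert all(abs(a[g] - b[g]) < 1e-12 for g in G)
print("bi-K-invariant convolution is commutative for (S_3, S_2)")
```

Here the projected functions are constant on the two double cosets $K$ and $G\setminus K$, so the bi-invariant algebra is two-dimensional and commutative, mirroring the role of $L^1(G)^{\#}$ below.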
Let $L^1(G)^{\#}$ denote the (commutative) closed subalgebra of left and right $K$-invariant functions in $L^1(G)$. There is a natural projection $L^1(G) \rightarrow L^1(G)^{\#}$, \begin{equation*} \varphi \mapsto \varphi^{\#}:=\int_K\int_K \varphi(k_1^{-1} \cdot k_2)dk_1dk_2. \end{equation*} For a unitary representation $(\tau, \mathscr{H})$ of $G$, there is a $*$-representation (also denoted by $\tau$) of $L^1(G)$ on $\mathscr{H}$ given by \begin{equation*} \tau(\varphi)v:=\int_G \varphi(g)\tau(g)vdg, \quad v\in \mathscr{H}. \end{equation*} The representations of $K$ and $L^1(G)$ are related by \begin{equation} \tau(k_1) \tau(\varphi)\tau(k_2)=\tau(\varphi(k_1^{-1} \cdot k_2^{-1})), \quad \varphi \in L^1(G), \quad k_1, k_2 \in K. \label{E:kalg} \end{equation} The subspace $\mathscr{H}^K$ of $K$-invariants is invariant under $L^1(G)^{\#}$. From \eqref{E:kalg}, it follows that for any $\varphi \in L^1(G)$, and $u,v \in \mathscr{H}^K$, \begin{equation} \langle \tau(\varphi)u,v\rangle=\langle \tau(\varphi^{\#})u,v\rangle. \label{E:projkinv} \end{equation} Let $\xi$ be the $K$-invariant cyclic vector in $L^2(\partial_l\Omega)$. We claim that there exists a sequence $\{\xi_n\}_{n=1}^{\infty} \subseteq C_0^\infty(\partial_l\Omega)^K$ such that $\xi_n \rightarrow \xi$ in $L^2(\partial_l\Omega)$. To see this, we first choose a sequence $\{\zeta_n\}_{n=1}^{\infty} \subseteq C_0^\infty(\partial_l\Omega)$ that converges to $\xi$. Next, observe that the orthogonal projection $P$ of $L^2(\partial_l\Omega)$ onto $L^2(\partial_l\Omega)^K$ is given by $f \mapsto \int_K f(k^{-1} \cdot)dk$. Then $P(f)$ is smooth if $f$ is smooth. Moreover, supp $P(f)$ is contained in the image of the map $K \times \mbox{supp}\,f \rightarrow \partial_l\Omega$, $(k,x) \mapsto kx$. It follows that $P(C_0^\infty(\partial_l\Omega)) \subseteq C_0^\infty(\partial_l\Omega)^K$. Hence, the claim holds with $\xi_n:=P(\zeta_n)$. 
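That the averaging $f\mapsto\int_K f(k^{-1}\cdot)dk$ is an orthogonal projection onto the $K$-invariants can be seen in a finite-dimensional analogue: averaging the action operators of a group of orthogonal transformations yields an idempotent, self-adjoint operator whose range is the space of invariant vectors. The cyclic shift action on $\mathbb{R}^n$ below is an illustrative stand-in for the compact group $K$, not the group of the paper.

```python
import numpy as np

n = 6
# Finite stand-in for K: the cyclic group acting on R^n by coordinate shifts
shifts = [np.roll(np.eye(n), k, axis=0) for k in range(n)]

# P f = average of k.f over the group, mirroring f -> int_K f(k^{-1} .) dk
P = sum(shifts) / n

assert np.allclose(P @ P, P)        # idempotent
assert np.allclose(P.T, P)          # self-adjoint, hence an orthogonal projection

# Its range consists exactly of invariant vectors (constants, for this action)
v = np.random.default_rng(0).normal(size=n)
w = P @ v
for S in shifts:
    assert np.allclose(S @ w, w)
print("P is the orthogonal projection onto the invariants")
```

The same computation works for any finite group of permutation (or orthogonal) matrices, which is the finite shadow of the compactness of $K$ used in the text.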
The subspace $$\mathscr{H}_0:=\{\pi^l(\varphi)\xi_n \mid \varphi \in C^{\infty}_0(G),\ n \in \mathbb{N}\}$$ is then dense in $L^2(\partial_l\Omega)$. For $\varphi \in C^{\infty}_0(G)$, $n \in \mathbb{N}$, we have, by \eqref{E:projkinv} and \eqref{E:pfk}, \begin{align*} \langle \pi^l(\varphi)\xi_n, \pi^l(\varphi)\xi_n \rangle_{L^2(\partial_l\Omega)} &=\spl{\pi^l(\varphi^**\varphi)\xi_n,\xi_n}\\ &=\spl{\pi^l((\varphi^**\varphi)^{\#})\xi_n,\xi_n}\\ &=\int_{\mathfrak{a}_l^*}{\langle T_ {\boldsymbol\lambda}(\pi^l((\varphi^**\varphi)^{\#})\xi_n), T_{\boldsymbol\lambda}(\xi_n) \rangle_{\boldsymbol\lambda} dp({\boldsymbol\lambda})}\\ &=\int_{\mathfrak{a}_l^*}{\langle \pi_{\boldsymbol\lambda}((\varphi^**\varphi)^{\#})T_ {\boldsymbol\lambda}(\xi_n), T_{\boldsymbol\lambda}(\xi_n) \rangle_{\boldsymbol\lambda} dp({\boldsymbol\lambda})}\\ &=\int_{\mathfrak{a}_l^*}{\langle \pi_{\boldsymbol\lambda}(\varphi^**\varphi)T_{\boldsymbol\lambda}(\xi_n), T_ {\boldsymbol\lambda}(\xi_n) \rangle_{\boldsymbol\lambda} dp({\boldsymbol\lambda})}\\ &=\int_{\mathfrak{a}_l^*}{\langle \pi_{\boldsymbol\lambda}(\varphi)T_{\boldsymbol\lambda}(\xi_n), \pi_{\boldsymbol\lambda}(\varphi)T_{\boldsymbol\lambda}(\xi_n) \rangle_{\boldsymbol\lambda} dp({\boldsymbol\lambda})}\\ &=\int_{\mathfrak{a}_l^*}{\langle T_{\boldsymbol\lambda}(\pi^l(\varphi)\xi_n), T_ {\boldsymbol\lambda}(\pi^l(\varphi)\xi_n) \rangle_{\boldsymbol\lambda} dp({\boldsymbol\lambda})}. \end{align*} Hence, the operator $T$ defined on $\mathscr{H}_0$ by $T(\pi^l(\varphi)\xi_n)=(\pi_{\boldsymbol\lambda}(\varphi)T_{\boldsymbol\lambda}(\xi_n))_{\boldsymbol\lambda}$ extends uniquely to a $G$-equivariant isometric operator $$T:L^2(\partial_l\Omega) \rightarrow \int_{\mathfrak{a}_l^*}\mathcal{H}_{{\boldsymbol\lambda}} dp({\boldsymbol\lambda}).$$ It now only remains to prove the surjectivity of $T$. Assume therefore that $(\eta_{\boldsymbol\lambda})_{\boldsymbol\lambda}$ is orthogonal to the image of $T$. 
Then for all $\varphi$ in $L^1(G)$ and $h\in L^1(G)^\#$, $$\int_{\mathfrak{a}_l^*}{\langle\pi_{\boldsymbol\lambda}(\varphi*h)(T\xi)_{\boldsymbol\lambda},\eta_{\boldsymbol\lambda} \rangle_{\boldsymbol\lambda} dp({\boldsymbol\lambda})}=0,$$ i.e., $$\int_{\mathfrak{a}_l^*}{\check{h}({\boldsymbol\lambda})\langle\pi_{\boldsymbol\lambda}(\varphi)(T\xi)_{\boldsymbol\lambda}, \eta_{\boldsymbol\lambda} \rangle_{\boldsymbol\lambda} dp({\boldsymbol\lambda})}=0,$$ where $\check{h}({\boldsymbol\lambda})$ is the Gelfand transform of $h$ restricted to $\mathfrak{a}_l^*$. Recall that the set of bounded spherical functions can be identified with the character space of $L^1(G)^\#$, and hence the image of $L^1(G)^\#$ under the Gelfand transform separates points in this space. It thus follows from the Stone-Weierstrass Theorem that the functions $\check{h}$ are dense in the space of continuous functions on $\mathfrak{a}_l^*$ that are invariant under the action of $W_l$. Hence $\langle\pi_{\boldsymbol\lambda}(\varphi)(T\xi)_{\boldsymbol\lambda}, \eta_{\boldsymbol\lambda}\rangle_{\boldsymbol\lambda}=0$ $p$-almost everywhere. By separability of $L^1(G)$, there is a set $U$ with $p(\mathfrak{a}_l^*\setminus U)=0$ such that for all $\varphi$ in $L^1(G)$ and ${\boldsymbol\lambda}\in U$, $\langle\pi_{\boldsymbol\lambda}(\varphi)(T\xi)_{\boldsymbol\lambda}, \eta_{\boldsymbol\lambda}\rangle_{\boldsymbol\lambda}=0$. By cyclicity of $(T\xi)_{\boldsymbol\lambda}$ (note that $(T\xi)_{\boldsymbol\lambda}$ is non-zero $p$-almost everywhere), $\eta_{\boldsymbol\lambda}$ is zero $p$-almost everywhere. \end{proof} \begin{rem} We want to point out that it is actually not necessary to prove the analytic continuation of $T_{\boldsymbol\nu}$ (and hence to use the theory of Riesz distributions) to derive the decomposition of $\pi^l$ (however, the natural operator $T$ above is then replaced by an abstract one). Indeed, by the Cartan-Helgason theorem (\cite[Ch. 
III, Lemma 3.6]{HEL}) we have $\mathcal{H}_{\boldsymbol\lambda}^K=\mathbb{C}\, v_{\boldsymbol\lambda}$ when ${\boldsymbol\lambda}\in\mathfrak{a}_l^*$, and hence we can set $$ T_{\boldsymbol\lambda}:C_0^\infty(\partial_l\Omega)^{K}\rightarrow\mathcal{H}_{\boldsymbol\lambda}^{K},\quad f\mapsto\widetilde{f}({\boldsymbol\lambda})v_{\boldsymbol\lambda},$$ and by \eqref{E:pfk} we thus obtain an operator $T:L^2(\partial_l\Omega)^K\rightarrow\int_{\mathfrak{a}_l^*}{\mathcal{H}_{\boldsymbol\lambda}^K dp({\boldsymbol\lambda})}$. Assume that we can prove that $T$ intertwines the actions of $C_0^\infty(G)^\#$. Then for $\varphi\inC_0^\infty(G)^\#$, \begin{align*} \langle \pi^l(\varphi)\xi,\xi\rangle&=\langle T\pi^l(\varphi) \xi,T\xi\rangle =\langle \pi(\varphi) T\xi,T\xi\rangle\\ &=\int_{\mathfrak{a}_l^*}{\langle \pi_{\boldsymbol\lambda}(\varphi) (T\xi)_{\boldsymbol\lambda},(T\xi)_{\boldsymbol\lambda}\rangle_{\boldsymbol\lambda}}dp({\boldsymbol\lambda})\\ &=\int_{\mathfrak{a}_l^*}{\check{\varphi}({\boldsymbol\lambda})\nm{(T\xi)_{\boldsymbol\lambda}}^2_{\boldsymbol\lambda}}dp( {\boldsymbol\lambda}), \end{align*} where $\check\varphi$ is defined by $\pi_{{\boldsymbol\lambda}}(\varphi)v_{\boldsymbol\lambda}=\check{\varphi}({\boldsymbol\lambda})v_{\boldsymbol\lambda}$. The proof of \cite[Theorem 10]{SEP2} shows that the decomposition of $\pi^l$ then follows. We now prove the intertwining property. It is equivalent to the equality \begin{equation}\label{E:tbp} \widetilde{\pi^{l}(\varphi)f}({\boldsymbol\lambda})=\widetilde{f}({\boldsymbol\lambda})\check{\varphi}({\boldsymbol\lambda}). \end{equation} Let ${\boldsymbol\nu}\in\mathbb{C}^l$. 
Then for $f\in C_0^\infty(\partial_l\Omega)$ and $\varphi\in C_0^\infty(G)^\#$ we have, writing ${\boldsymbol\nu}'=-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l)$, \begin{align*} \pi_{{\boldsymbol\nu}}(\varphi)&\left(\Delta_{{\boldsymbol\nu}'}(( \cdot)^*e)\otimes1\right)(g)=\int_{G}{\varphi(h)\Delta_{\bsn'}(g^*h^{-*}e) \otimes\Delta^{-\frac{ld}{4}}(he)dh}\\ &=\int_{G}{\varphi(h)\Delta^{-\frac{ld}{4}}(he)\left(\int_{K}{ \Delta_{\bsn'}(g^*kh^{-*}e)dk}\right)\otimes1dh}\\ &=\int_{G}{\varphi(h)\Delta^{-\frac{ld}{4}}(he)\left(\int_{K}{ \Delta_{\bsn'}(kh^{-*}e)dk}\right)dh}\left(\Delta_{\bsn'}(g^*e)\otimes1\right)\\ &=\int_{G}{\varphi(h)\Delta^{-\frac{ld}{4}}(he)\Phi_{{\boldsymbol\nu}'}(h^{-*}e)dh} \left(\Delta_{\bsn'}(g^*e)\otimes1\right), \end{align*} i.e., \begin{equation*} \pi_{{\boldsymbol\nu}}(\varphi)v_{\boldsymbol\nu}=\check{\varphi}({\boldsymbol\nu})v_{\boldsymbol\nu}, \end{equation*} where $\check{\varphi}({\boldsymbol\nu})$ is holomorphic on $\mathbb{C}^l$. If $\Re(-(i{\boldsymbol\nu}+{\boldsymbol\rho}'_l))\geq0$, then, by Lemma~\ref{L:inter}, the operator $T_{\boldsymbol\nu}$ intertwines the actions of $C_0^\infty(G)^\#$, and hence $$\widetilde{\pi^{l}(\varphi)f}({\boldsymbol\nu})=\widetilde{f}({\boldsymbol\nu})\check{\varphi}({\boldsymbol\nu}).$$ Thus \eqref{E:tbp} follows by analytic continuation. \end{rem}
\section{INTRODUCTION} \label{intro} In the present paper, we address the problem of quantum interference between the magnetic substates of the hyperfine structure ($F$) states pertaining to different fine structure ($J$) states of a given term, in the presence of magnetic fields of arbitrary strength covering the Hanle, Zeeman, and Paschen--Back (PB) effect regimes. We will refer to this as ``combined interference'' or the ``$F+J$ state interference''. We develop the necessary theory including the effects of partial frequency redistribution (PRD) in the absence of collisions, assuming the lower levels to be unpolarized and infinitely sharp. We refer to this theory as the ``combined theory'' throughout the paper. We consider a two-term atom with hyperfine structure under the assumption that the lower term is unpolarized. In the absence of a magnetic field, the atomic transitions in a two-term atom take place between the degenerate magnetic substates belonging to the $F$ states. An applied magnetic field lifts the degeneracies and modifies the energies of these magnetic substates. The amount of splitting (or the energy change) produced by the magnetic field defines the regimes in which the Zeeman and PB effects act. Depending on the relative magnitudes of the fine structure splitting (FS), the hyperfine structure splitting (HFS), and the magnetic splitting (MS), we distinguish five regimes of magnetic field strength. These regimes are illustrated schematically in Figure~\ref{lev-spl}. In the approach presented in this paper, we account for the interferences between the magnetic substates pertaining to the same $F$ state, the magnetic substates belonging to different $F$ states of the same $J$ state, and the magnetic substates belonging to different $F$ states pertaining to different $J$ states. 
Although all three types of interference are always present, depending on the field strength one or two of them would dominate, as depicted in the different panels of Figure~\ref{lev-spl}. Within the framework of non-relativistic quantum electrodynamics, \citet{cm05} formulated a theory for polarized scattering on a multi-term atom with hyperfine structure in the presence of an arbitrary strength magnetic field under the approximation of complete frequency redistribution (CRD). In the present paper, we restrict our treatment to a two-term atom with HFS and consider the limit of coherent scattering in the atomic frame with Doppler frequency redistribution in the observer's frame. We base our formalism on the Kramers--Heisenberg coherency matrix approach of \citet{s94}. In our combined theory, we do not account for the coherences among the states in the lower term. In a recent paper, \citet{s15} indicated how they may be included by extending the coherency matrix approach to the multi-level case. Based on the concept of ``metalevels'', \citet{landi97} formulated a theory that is able to treat coherent scattering in the atomic rest frame for a two-term atom with hyperfine structure. Recently, \citet{casinietal14} presented a generalized frequency redistribution function for the polarized two-term atom in arbitrary fields, based on a new formulation of the quantum scattering theory. Ours is an alternative approach to the same problem, conceptually more transparent although limited to infinitely sharp and unpolarized lower levels. \citet{bel09} studied the linear polarization produced due to scattering on the D lines of neutral lithium isotopes. They employed the density matrix formalism of \citet[][hereafter LL04]{ll04}, together with the approximation of CRD, to treat the quantum interference between the fine and hyperfine structure states. They restricted their study to the non-magnetic case. 
However, they explored the sensitivity of the Stokes profiles to the microturbulent magnetic fields. For our study in the present paper, we consider the same D lines of lithium isotopes and present in detail the effects of a deterministic magnetic field of arbitrary strength. For this atomic line system, the PB effect in both the fine and the hyperfine structure states occurs for the magnetic field strengths encountered on the Sun. We restrict our treatment to the single scattering case, since our aim here is to explore the basic physical effects of the combined theory. \begin{figure} \begin{center} \includegraphics[scale=0.43]{fig1.eps} \end{center} \caption{Illustration of the magnetic field strength regimes in the combined theory. For illustration purposes, a $^2$P term with nuclear spin 3/2 is considered. The various splittings indicated are not to scale. Panels (a)--(d) show the first four regimes of the field strength. When MS is much greater than FS, we have a complete PB regime for both $J$ and $F$, which we call the fifth regime (not illustrated in the figure). \label{lev-spl}} \end{figure} \section{THE ATOMIC MODEL} \label{atmod} In this section, we describe the structure of the model atom considered for our studies and its interaction with an external magnetic field. We consider a two-term atom, each state of which is designated by the quantum numbers $L$ (orbital), $S$ (electron spin), $J$ ($=L+S$), $I_s$ (nuclear spin), $F$ ($=J+I_s$), and $\mu$ (projection of $F$ onto the quantization axis).
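As an orientation aid (a sketch added here for concreteness, not part of the formalism itself), the basis states $|LSJI_sF\mu\rangle$ of a $^2$P term with nuclear spin $I_s=3/2$ (the case illustrated in Figure~\ref{lev-spl}) can be enumerated directly; the total count must equal $(2L+1)(2S+1)(2I_s+1)=24$:

```python
from fractions import Fraction as Fr

def basis_states(L, S, I_s):
    """Enumerate (J, F, mu) for one term:
    J = |L-S|..L+S, F = |J-I_s|..J+I_s, mu = -F..F (unit steps)."""
    states = []
    J = abs(L - S)
    while J <= L + S:
        F = abs(J - I_s)
        while F <= J + I_s:
            mu = -F
            while mu <= F:
                states.append((J, F, mu))
                mu += 1
            F += 1
        J += 1
    return states

# ^2P term (L=1, S=1/2) with nuclear spin I_s=3/2
states = basis_states(Fr(1), Fr(1, 2), Fr(3, 2))
```

Counting the states that share a given $\mu$ gives the size of the block that has to be diagonalized at that $\mu$; for instance, six of the 24 states have $\mu=0$.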
\subsection{The Atomic Hamiltonian} \label{ath} Under the $L-S$ coupling scheme, the atomic Hamiltonian for a two-term atom with hyperfine structure is given by \begin{eqnarray} && \!\!\!\!\!\!\!\!\!\!\!\mathcal{H}_{A}=\zeta(LS){\bm L}\cdot{\bm S} \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!+\mathcal{A}_J {\bm I_s}\cdot{\bm J}+ \frac{\mathcal{B}_J}{2I_s(2I_s-1)J(2J-1)} \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\times\bigg\{3({\bm I_s}\cdot{\bm J})^2 +\frac{3}{2}({\bm I_s}\cdot{\bm J})-I_s(I_s+1)J(J+1)\bigg\}\ , \nonumber \\ && \label{atom-ham} \end{eqnarray} where $\zeta(LS)$ is a constant having the dimensions of energy, and $\mathcal{A}_J$ and $\mathcal{B}_J$ are the magnetic dipole and electric quadrupole hyperfine structure constants, respectively. The first term in the above equation is a measure of the FS while the second and the third terms provide a measure of the HFS. The eigenvalues of the atomic Hamiltonian represent the energies of the $F$ states, calculated with respect to the energy of the corresponding term. \subsection{The Magnetic and the Total Hamiltonians} \label{mah} An external magnetic field lifts the degeneracies of the magnetic substates of the $F$ states and changes their energies by an amount given by the eigenvalues of the magnetic Hamiltonian \begin{eqnarray} && \mathcal{H}_B=\mu_0 ({\bm J}+{\bm S})\cdot {\bm B}\ . \label{mag-ham} \end{eqnarray} Assuming the quantization axis to be along the magnetic field ({\it z}-axis of the reference system), the matrix elements of the total Hamiltonian, $\mathcal{H}_T=\mathcal{H}_A+\mathcal{H}_B$, can be written as \begin{eqnarray} && \!\!\!\!\!\!\!\!\!\!\!\!\! \langle LSJI_sF\mu|\mathcal{H}_T|LSJ^\prime I_sF^\prime\mu\rangle= \delta_{JJ^\prime}\delta_{FF^\prime}\nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\!\times\bigg[\frac{1}{2}\zeta(LS)\{J(J+1)-L(L+1)-S(S+1)\} \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\! 
+\frac{1}{2}\mathcal{A}_J \mathcal{K} + \frac{\mathcal{B}_J}{8I_s(2I_s-1)J(2J-1)}\nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\!\times\{3\mathcal{K}(\mathcal{K}+1)-4J(J+1)I_s(I_s+1)\}\bigg] \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\!+\mu_0 B (-1)^{L+S+J+J^\prime+I_s-\mu+1} \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\!\times\sqrt{(2J+1)(2J^\prime+1)(2F+1)(2F^\prime+1)} \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\!\times\left ( \begin{array}{ccc} F & F^\prime & 1\\ -\mu & \mu & 0 \\ \end{array} \right ) \left\lbrace \begin{array}{ccc} J & J^\prime & 1\\ F^\prime & F & I_s \\ \end{array} \right\rbrace \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\!\times\bigg[\delta_{JJ^\prime}(-1)^{L+S+J+1} \frac{\sqrt{J(J+1)}}{\sqrt{2J+1}} \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\!+(-1)^{J-J^\prime}\sqrt{S(S+1)(2S+1)} \left\lbrace \begin{array}{ccc} J & J^\prime & 1\\ S & S & L \\ \end{array} \right\rbrace\bigg]\ , \nonumber \\ && \label{tot-ham} \end{eqnarray} where $\mathcal{K}=F(F+1)-I_s(I_s+1)-J(J+1)$, and $\mu_0$ is the Bohr magneton. The total Hamiltonian matrix in the combined theory is no longer a symmetric tridiagonal matrix, unlike the case of the PB effect in fine or hyperfine structure states. Instead, it is a full symmetric matrix and we diagonalize it using the Givens--Householder method described in \citet{ort68}. We test the diagonalization code written for the problem at hand using the principle of spectroscopic stability presented in Appendix~\ref{a-b}. \subsection{Eigenvalues and Eigenvectors} \label{ee} The diagonalization of the total Hamiltonian gives the energy eigenvectors in terms of the linear Zeeman effect regime basis $|LSJI_sF\mu\rangle$ through the expansion coefficients $C^{k}_{JF}$ as \begin{equation} |LSI_s,k\mu\rangle= \sum_{JF}C^{k}_{JF}(LSI_s,\mu) \ |LSJI_sF\mu\rangle\ . 
\label{basis-good} \end{equation} The symbol $k$ labels the different states corresponding to the given values of $(L,S,I_s,\mu)$, and the number of such states is given by \begin{eqnarray} && \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!N_k=\sum^{L+S}_{d=|L-S|} \left[1+d+I_s-{\rm max} (|\mu|,|d-I_s|)\right]\ . \label{kdim} \end{eqnarray} We assume the $C$-coefficients appearing in Equation~(\ref{basis-good}) to be real because the total Hamiltonian is real. We obtain the $C$-coefficients and the corresponding eigenvalues denoted here as $E_k(LSI_s,\mu)$ after diagonalizing the atomic and magnetic Hamiltonians presented in Sections \ref{ath} and \ref{mah}. \section{THE REDISTRIBUTION MATRIX FOR THE COMBINED $J$ AND $F$ STATE INTERFERENCES} The methodology followed to derive the PRD matrix (RM) for the combined case of $J$ and $F$ state interferences in the presence of a magnetic field is similar to that presented in \citet{sow14b} for $F$ state interference alone. For the sake of clarity, we only present the important equations involved in the derivation. In a single scattering event, the scattered radiation is related to the incident radiation through the Mueller matrix given by \begin{equation} {\bf M}={\bf TWT^{-1}}\ . \label{muel} \end{equation} Here, ${\bf T}$ and ${\bf T^{-1}}$ are the purely mathematical transformation matrices and ${\bf W}$ is the coherency matrix for a transition $a\rightarrow b\rightarrow f$ defined by \begin{equation} {\bf W} = \sum_{a}\sum_{f} {\bm w}\otimes {\bm w}^*\ . \label{w-mat} \end{equation} Note that the summations over the initial ($a$) and final ($f$) states are incoherent, and therefore do not allow the lower levels to interfere.
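The structure of Equations~(\ref{muel}) and (\ref{w-mat}) can be illustrated numerically. The sketch below is an illustration we add here; it assumes one common choice for ${\bf T}$ and for the coherency-vector ordering $(E_xE_x^*,E_xE_y^*,E_yE_x^*,E_yE_y^*)$, not necessarily the specific convention of \citet{s94}. With this choice, the textbook Mueller matrix of an ideal linear polarizer is recovered from its Jones matrix:

```python
import numpy as np

# Transformation from the coherency vector (E_x E_x*, E_x E_y*, E_y E_x*, E_y E_y*)
# to the Stokes vector (I, Q, U, V); the sign convention for V is one common choice.
T = np.array([[1, 0, 0, 1],
              [1, 0, 0, -1],
              [0, 1, 1, 0],
              [0, -1j, 1j, 0]])
T_inv = np.linalg.inv(T)

def mueller_from_jones(w):
    """M = T (w x w*) T^{-1}: single-scattering Mueller matrix from a 2x2 Jones matrix.
    The result is mathematically real; np.real just drops numerical round-off."""
    W = np.kron(w, np.conj(w))   # coherency transformation w x w*
    return np.real(T @ W @ T_inv)

# Ideal linear polarizer along x as a consistency check
w_pol = np.array([[1, 0], [0, 0]], dtype=complex)
M = mueller_from_jones(w_pol)
```

In the application at hand, ${\bm w}$ is built from the Kramers--Heisenberg amplitudes for each pair of initial and final magnetic substates, and the outer products are summed incoherently over $a$ and $f$ before the transformation, as in Equation~(\ref{w-mat}).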
${\bm w}$ in Equation~(\ref{w-mat}) is the Jones matrix and its elements are given by the Kramers--Heisenberg formula, which gives the complex probability amplitudes for the scattering $a\rightarrow b\rightarrow f$ as \begin{equation} w_{\alpha\beta} \sim \sum_b \frac{\langle f|{\bf r}\cdot{\bf e}_\alpha|b\rangle \langle b|{\bf r}\cdot{\bf e}_\beta|a\rangle}{\omega_{bf}-\omega-{\rm i}\gamma/2}\ . \label{jones} \end{equation} Here, $\omega=2\pi\xi$ is the circular frequency of the scattered radiation. $\hbar\omega_{bf}$ is the energy difference between the excited and final levels and $\gamma$ is the damping constant. Using Equation~(\ref{basis-good}) in the Kramers--Heisenberg formula, noting that $L_f=L_a$, and using the Wigner--Eckart theorem (see Equations (2.96) and (2.108) of LL04), we arrive at \begin{eqnarray} && \!\!\!\!\!\!\!w_{\alpha\beta} \sim (2L_a+1)\sum_{k_b\mu_b} \sum_{J_aJ_fJ_bJ_{b^{\prime\prime}}F_aF_fF_bF_{b^{\prime\prime}}} \nonumber \\ && \!\!\!\!\!\!\! \times\sum_{qq^{\prime\prime}}(-1)^{q-q^{\prime\prime}} (-1)^{J_f+J_a+J_b+J_{b^{\prime\prime}}} \nonumber \\ && \!\!\!\!\!\!\! \times C^{k_f}_{J_fF_f}(L_aSI_s,\mu_f)C^{k_a}_{J_aF_a}(L_aSI_s,\mu_a) \nonumber \\ &&\!\!\!\!\!\!\! \times C^{k_b}_{J_bF_b}(L_bSI_s,\mu_b) C^{k_b}_{J_{b^{\prime\prime}}F_{b^{\prime\prime}}}(L_bSI_s,\mu_b) \nonumber \\ && \!\!\!\!\!\!\! \times\sqrt{(2F_a+1)(2F_f+1)(2F_b+1)(2F_{b^{\prime\prime}}+1)} \nonumber \\ && \!\!\!\!\!\!\! \times\sqrt{(2J_a+1)(2J_f+1)(2J_b+1)(2J_{b^{\prime\prime}}+1)} \nonumber \\ && \!\!\!\!\!\!\! \times\left ( \begin{array}{ccc} F_b & F_f & 1\\ -\mu_b & \mu_f & -q \\ \end{array} \right ) \left ( \begin{array}{ccc} F_{b^{\prime\prime}} & F_a & 1\\ -\mu_b & \mu_a & -q^{\prime\prime} \\ \end{array} \right ) \nonumber \\ && \!\!\!\!\!\!\!
\times\left\lbrace \begin{array}{ccc} J_f & J_b & 1\\ F_b & F_f & I_s \\ \end{array} \right\rbrace \left\lbrace \begin{array}{ccc} J_a & J_{b^{\prime\prime}} & 1\\ F_{b^{\prime\prime}} & F_a & I_s \\ \end{array} \right\rbrace \nonumber \\ && \!\!\!\!\!\!\! \times\left\lbrace \begin{array}{ccc} L_a & L_b & 1\\ J_b & J_f & S \\ \end{array} \right\rbrace \left\lbrace \begin{array}{ccc} L_a & L_b & 1\\ J_{b^{\prime\prime}} & J_a & S \\ \end{array} \right\rbrace \nonumber \\ && \!\!\!\!\!\!\!\times \ \varepsilon^{\alpha^*}_{q} \varepsilon^{\beta}_{q^{\prime\prime}} \ \Phi_{\gamma}(\nu_{k_b\mu_bk_f\mu_f}-\xi)\ . \label{coh-ele} \end{eqnarray} Here, $\varepsilon$ are the spherical vector components of the polarization unit vectors (${\bf e}_\alpha$ and ${\bf e}_\beta$) with $\alpha$ and $\beta$ referring to the scattered and incident rays, respectively. $\Phi_{\gamma}(\nu_{k_b\mu_bk_f\mu_f}-\xi)$ is the frequency-normalized profile function defined as \begin{equation} \Phi_{\gamma}(\nu_{k_b\mu_bk_f\mu_f}-\xi)=\frac{1/\pi {\rm i}} {\nu_{k_b\mu_bk_f\mu_f}-\xi-{\rm i}\gamma/4\pi}\ , \label{norm-prof} \end{equation} where we have used an abbreviation \begin{eqnarray} && \!\!\!\!\!\!\!\!\!\!\nu_{k_b\mu_bk_f\mu_f}=\nu_{L_bSI_sk_b\mu_b,L_aSI_sk_f\mu_f} \nonumber \\ && \!\!\!\!\!\!\!\!\!\!=\nu_{L_bL_a} + \frac{E_{k_b}(L_bSI_s,\mu_b) -E_{k_f}(L_aSI_s,\mu_f)}{h}\ , \nonumber \\ && \label{freq} \end{eqnarray} with $h$ being the Planck constant. Inserting Equation~(\ref{coh-ele}) into Equation~(\ref{w-mat}), and after elaborate algebra \citep[see for example][]{sow14b}, we obtain the normalized RM, ${\bf R}^{\rm II}_{ij}$, for type-II scattering in the laboratory frame as \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!{\bf R}_{ij}^{\rm II} (x,{\bm n},x^\prime,{\bm n}^\prime;{\bm B})= \frac{3(2L_b+1)}{(2S+1)(2I_s+1)} \nonumber \\ && \!\!\!\!\!\!\!\!\!\! 
\times \sum_{KK^\prime Q} \sum_{k_a\mu_ak_f\mu_fk_b\mu_bk_{b^\prime}\mu_{b^\prime}} \sum_{qq^\prime q^{\prime\prime} q^{\prime\prime\prime}} (-1)^{q-q^{\prime\prime\prime}+Q} \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \sqrt{(2K+1)(2K^\prime+1)} \cos\beta_{k_{b^\prime}\mu_{b^\prime}k_b\mu_b} {\rm e}^{{\rm i}\beta_{k_{b^\prime}\mu_{b^\prime}k_b\mu_b}} \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times [(h^{\rm II}_{k_b\mu_b,k_{b^\prime}\mu_{b^\prime}})_{k_a\mu_ak_f\mu_f}+ {\rm i}(f^{\rm II}_{k_b\mu_b,k_{b^\prime}\mu_{b^\prime}})_{k_a\mu_ak_f\mu_f}] \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \sum_{J_aJ_{a^\prime}J_fJ_{f^\prime}J_bJ_{b^\prime} J_{b^{\prime\prime}}J_{b^{\prime\prime\prime}}} \sum_{F_aF_{a^\prime}F_fF_{f^\prime}F_bF_{b^\prime} F_{b^{\prime\prime}}F_{b^{\prime\prime\prime}}} \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times C^{k_f}_{J_fF_f}(L_aSI_s,\mu_f) C^{k_a}_{J_aF_a}(L_aSI_s,\mu_a) \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times C^{k_b}_{J_bF_b}(L_bSI_s,\mu_b) C^{k_b}_{J_{b^{\prime\prime}}F_{b^{\prime\prime}}}(L_bSI_s,\mu_b) \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times C^{k_f}_{J_{f^\prime}F_{f^\prime}}(L_aSI_s,\mu_f) C^{k_a}_{J_{a^\prime}F_{a^\prime}}(L_aSI_s,\mu_a) \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times C^{k_{b^\prime}}_{J_{b^\prime}F_{b^\prime}}(L_bSI_s,\mu_{b^\prime}) C^{k_{b^\prime}}_{J_{b^{\prime\prime\prime}}F_{b^{\prime\prime\prime}}} (L_bSI_s,\mu_{b^\prime}) \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times (-1)^{J_a+J_{a^\prime}+J_f+J_{f^\prime}+J_b+J_{b^\prime}+J_{b^{\prime\prime}} +J_{b^{\prime\prime\prime}}} \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \sqrt{(2J_a+1)(2J_f+1)(2J_{a^\prime}+1)(2J_{f^\prime}+1)} \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \sqrt{(2J_b+1)(2J_{b^\prime}+1)(2J_{b^{\prime\prime}}+1) (2J_{b^{\prime\prime\prime}}+1)} \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \sqrt{(2F_a+1)(2F_f+1)(2F_{a^\prime}+1)(2F_{f^\prime}+1)} \nonumber \\ && \!\!\!\!\!\!\!\!\!\! 
\times \sqrt{(2F_b+1)(2F_{b^\prime}+1)(2F_{b^{\prime\prime}}+1) (2F_{b^{\prime\prime\prime}}+1)} \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \left ( \begin{array}{ccc} F_b & F_f & 1\\ -\mu_b & \mu_f & -q \\ \end{array} \right ) \left ( \begin{array}{ccc} F_{b^\prime} & F_{f^\prime} & 1\\ -\mu_{b^\prime} & \mu_f & -q^{\prime} \\ \end{array} \right ) \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \left ( \begin{array}{ccc} F_{b^{\prime\prime}} & F_a & 1\\ -\mu_b & \mu_a & -q^{\prime\prime} \\ \end{array} \right ) \left ( \begin{array}{ccc} F_{b^{\prime\prime\prime}} & F_{a^\prime} & 1\\ -\mu_{b^\prime} & \mu_a & -q^{\prime\prime\prime} \\ \end{array} \right ) \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \left ( \begin{array}{ccc} 1 & 1 & K\\ q & -q^{\prime} & -Q \\ \end{array} \right ) \left ( \begin{array}{ccc} 1 & 1 & K^\prime\\ q^{\prime\prime\prime} & -q^{\prime\prime} & Q\\ \end{array} \right ) \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \left\lbrace \begin{array}{ccc} J_f & J_b & 1\\ F_b & F_f & I_s \\ \end{array} \right\rbrace \left\lbrace \begin{array}{ccc} J_{f^\prime} & J_{b^\prime} & 1\\ F_{b^\prime} & F_{f^\prime} & I_s \\ \end{array} \right\rbrace \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \left\lbrace \begin{array}{ccc} J_a & J_{b^{\prime\prime}} & 1\\ F_{b^{\prime\prime}} & F_a & I_s \\ \end{array} \right\rbrace \left\lbrace \begin{array}{ccc} J_{a^\prime} & J_{b^{\prime\prime\prime}} & 1\\ F_{b^{\prime\prime\prime}} & F_{a^\prime} & I_s \\ \end{array} \right\rbrace \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times \left\lbrace \begin{array}{ccc} L_a & L_b & 1\\ J_b & J_f & S \\ \end{array} \right\rbrace \left\lbrace \begin{array}{ccc} L_a & L_b & 1\\ J_{b^\prime} & J_{f^\prime} & S \\ \end{array} \right\rbrace \nonumber \\ && \!\!\!\!\!\!\!\!\!\! 
\times \left\lbrace \begin{array}{ccc} L_a & L_b & 1\\ J_{b^{\prime\prime}} & J_a & S \\ \end{array} \right\rbrace \left\lbrace \begin{array}{ccc} L_a & L_b & 1\\ J_{b^{\prime\prime\prime}} & J_{a^\prime} & S \\ \end{array} \right\rbrace \nonumber \\ && \!\!\!\!\!\!\!\!\!\! \times (-1)^Q \mathcal{T}^K_{-Q}(i,{\bm n}) \mathcal{T}^{K^\prime}_Q(j,{\bm n}^\prime)\ . \label{final-rm} \end{eqnarray} Here, $\mathcal{T}^K_Q(i,\bm n)$ are the irreducible spherical tensors for polarimetry \citep{landi84} with $i=0,1,2,3$ referring to the Stokes parameters, the multipolar index $K=0,1,2,$ and $Q\in[-K,K]$. $\bm n^\prime$ and $\bm n$ represent the directions of the incident and scattered rays, respectively, and $\bm B$ the vector magnetic field. $x^\prime$ and $x$ are the non-dimensional frequencies in Doppler width units (see Appendix~\ref{a-a}). $\beta_{k_{b^\prime}\mu_{b^\prime}k_b\mu_b}$ is the Hanle angle given by \begin{equation} \tan\beta_{k_{b^\prime}\mu_{b^\prime}k_b\mu_b}= \frac{\nu_{k_{b^\prime}\mu_{b^\prime}k_a\mu_a}-\nu_{k_b\mu_bk_a\mu_a}}{\gamma/2\pi}\ . \label{hanle-beta} \end{equation} The explicit forms of the auxiliary functions $h^{\rm II}$ and $f^{\rm II}$ appearing in Equation~(\ref{final-rm}) are given in Appendix~\ref{a-a}. When $I_s=0$, Equation~(\ref{final-rm}) reduces to the PRD matrix for $J$ state interference alone \citep[see Equation~(11) of][]{sow14a}. When FS is neglected, Equation~(\ref{final-rm}) reduces to the expression of RM for pure $F$ state interference \citep[see Equation~(16) of][]{sow14b}. When we neglect both FS and HFS, we recover RM for $L_a\rightarrow L_b\rightarrow L_a$ transition (analogous to a two-level atom case) in the presence of a magnetic field. \section{RESULTS} \label{res} In this section, we present the results obtained from the combined theory for the case of the single scattering of an unpolarized, spectrally flat incident radiation beam by an atom with both non-zero electron and nuclear spins. 
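Before turning to the profiles, it is useful to isolate the role of the factor $\cos\beta\,{\rm e}^{{\rm i}\beta}$ in Equation~(\ref{final-rm}), with $\beta$ the Hanle angle of Equation~(\ref{hanle-beta}). The sketch below (with illustrative numbers added here for clarity, not the lithium values) shows the two limits: splittings small compared to $\gamma/2\pi$ leave the interference term intact, while large splittings suppress it:

```python
import numpy as np

def coherence_factor(delta_nu, gamma):
    """cos(beta) * exp(i beta) with tan(beta) = delta_nu / (gamma / 2 pi);
    delta_nu is the frequency splitting between the two interfering substates."""
    beta = np.arctan(delta_nu / (gamma / (2.0 * np.pi)))
    return np.cos(beta) * np.exp(1j * beta)

gamma = 1.0e8                                 # damping constant in s^-1 (illustrative)
weak = coherence_factor(1.0e3, gamma)         # splitting << gamma/2pi: factor -> 1
strong = coherence_factor(1.0e12, gamma)      # splitting >> gamma/2pi: factor -> 0
```

At a level-crossing the splitting of an interfering pair passes through zero, so the factor locally returns to unity and the coherence is restored, which is why crossings leave clear signatures in the polarization.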
Considering the relevance to solar applications, we choose the D line system at 6708 \AA{} from neutral $^6$Li and $^7$Li isotopes as an example to test the formalism developed. We take the values of the atomic parameters and isotope abundances for this system from Table 1 of \citet{bel09}. \subsection{Level-crossings and Avoided Crossings} \label{sec-1} \begin{figure*} \begin{center} \includegraphics[scale=0.48]{fig2a.eps} \includegraphics[scale=0.48]{fig2b.eps} \includegraphics[scale=0.48]{fig2c.eps} \includegraphics[scale=0.48]{fig2d.eps} \caption{Energies of the HFS magnetic substates as a function of the magnetic field strength for $^6$Li (left column) and $^7$Li (right column). Panels (a) and (b) correspond, respectively, to the $^2$P$_{3/2}$ levels of $^6$Li and $^7$Li, while panels (c) and (d) correspond to the $^2$P$_{1/2}$ levels of $^6$Li and $^7$Li, respectively. The nuclear spins of $^6$Li and $^7$Li are 1 and 3/2, respectively. \label{level-fig-1} } \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.48]{fig3a.eps} \includegraphics[scale=0.48]{fig3d.eps} \includegraphics[scale=0.48]{fig3b.eps} \includegraphics[scale=0.48]{fig3e.eps} \includegraphics[scale=0.47]{fig3c.eps} \includegraphics[scale=0.47]{fig3f.eps} \caption{Energies of the magnetic substates belonging to the $^2$P terms as a function of the magnetic field strength for $^6$Li (a) and $^7$Li (d). Blow-ups of the crossing regions: c1 (b) and c2 (c) in $^6$Li, and c$^\prime$1 (e) and c$^\prime$2 (f) in $^7$Li. In panels (b), (c), (e), and (f), the levels are identified by their magnetic quantum number values $\mu$. \label{level-fig} } \end{center} \end{figure*} In Figures~\ref{level-fig-1} and \ref{level-fig}, we show the dependence of the energies of the levels in the $^2$P terms of the $^6$Li and $^7$Li isotopes on the magnetic field strength.
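The energies plotted in these figures follow from diagonalizing the total Hamiltonian of Equation~(\ref{tot-ham}) at each field strength. A sketch of the matrix element is given below (added for illustration; it uses the Wigner symbols from SymPy, takes all constants in arbitrary common units, and omits guards against triangle-violating quantum-number combinations, which a full implementation would need). As a check, in the $I_s=0$ limit with $F=J$ and $\mu=M_J$ the magnetic part must reduce to the Land\'e value $\mu_0 B\,g_J M_J$, with $g_J=1+[J(J+1)+S(S+1)-L(L+1)]/[2J(J+1)]$:

```python
import numpy as np
from sympy import Rational
from sympy.physics.wigner import wigner_3j, wigner_6j

def half(x):
    """Represent a (half-)integer float exactly for the SymPy Wigner routines."""
    return Rational(int(round(2 * x)), 2)

def h_tot(L, S, Is, J, Jp, F, Fp, mu, zeta, AJ, BJ, mu0B):
    """<L S J Is F mu | H_A + H_B | L S J' Is F' mu> following Equation (tot-ham)."""
    elem = 0.0
    if J == Jp and F == Fp:                       # diagonal atomic part
        K = F*(F + 1) - Is*(Is + 1) - J*(J + 1)
        elem += 0.5*zeta*(J*(J + 1) - L*(L + 1) - S*(S + 1)) + 0.5*AJ*K
        if Is > 0.5 and J > 0.5:                  # quadrupole term needs Is, J > 1/2
            elem += BJ*(3*K*(K + 1) - 4*J*(J + 1)*Is*(Is + 1)) \
                    / (8*Is*(2*Is - 1)*J*(2*J - 1))
    # magnetic part: couples states of equal mu with different J and F
    phase = (-1)**round(L + S + J + Jp + Is - mu + 1)
    w3 = float(wigner_3j(half(F), half(Fp), 1, half(-mu), half(mu), 0))
    w6 = float(wigner_6j(half(J), half(Jp), 1, half(Fp), half(F), half(Is)))
    brak = 0.0
    if J == Jp:
        brak += (-1)**round(L + S + J + 1) * np.sqrt(J*(J + 1)/(2*J + 1))
    brak += (-1)**round(J - Jp) * np.sqrt(S*(S + 1)*(2*S + 1)) \
            * float(wigner_6j(half(J), half(Jp), 1, half(S), half(S), half(L)))
    elem += mu0B * phase * np.sqrt((2*J + 1)*(2*Jp + 1)*(2*F + 1)*(2*Fp + 1)) \
            * w3 * w6 * brak
    return elem

# Lande-limit checks (zeta = A_J = B_J = 0, mu_0 B = 1):
e1 = h_tot(0, 0.5, 0, 0.5, 0.5, 0.5, 0.5, 0.5, 0, 0, 0, 1.0)  # 2S_1/2: g_J = 2,   M_J = 1/2
e2 = h_tot(1, 0.5, 0, 1.5, 1.5, 1.5, 1.5, 1.5, 0, 0, 0, 1.0)  # 2P_3/2: g_J = 4/3, M_J = 3/2
```

Looping $J,J^\prime,F,F^\prime$ over the values allowed at fixed $\mu$ fills the $N_k\times N_k$ block, whose eigenvalues (e.g., from \texttt{numpy.linalg.eigvalsh}) trace out curves like those in Figures~\ref{level-fig-1} and \ref{level-fig} as $B$ is scanned.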
Such figures provide us with information on the field strength regimes in which processes like the Zeeman effect, incomplete PB effect, and complete PB effect operate. They help us to choose the magnetic field strength values for studying the effects of level-crossing on the Stokes profiles. We choose different scales for the {\it x}-axes in different panels to bring out the level-crossings which occur at different field strengths due to the difference in the magnitudes of FS and HFS. The {\it y}-axes in all of the panels in both figures denote the energy shift of the levels from the parent $L=1$ level. In panels (a) and (c) of Figure~\ref{level-fig-1}, we plot the energies of the magnetic substates of the $F$ states belonging to the $^2$P$_{3/2}$ and $^2$P$_{1/2}$ levels of $^6$Li, respectively, as a function of the field strength. Since the nuclear spin of $^6$Li is 1, we have half-integer values for $F$. In these panels, we see that the magnetic substates of the $F$ states of $^2$P$_{3/2}$ cross at nine points while those of $^2$P$_{1/2}$ do not cross. We note a similar behavior in the case of the $F$ states belonging to the $^2$P$_{3/2}$ and $^2$P$_{1/2}$ levels of $^7$Li (see panels (b) and (d), respectively). The magnetic substates of the $F$ states of $^2$P$_{1/2}$ do not cross while those of $^2$P$_{3/2}$ cross at 14 points. In the weak field regime (e.g., $0-60$ G), we see the PB effect for the $F$ states, and in the strong field regime (for kG fields) we see the PB effect for the $J$ states. In Tables~\ref{tab-2} and \ref{tab-3}, we list the quantum numbers of the levels which cross along with their corresponding field strengths for the weak field regime. The numbers indicated in boldface in these tables correspond to those crossings which satisfy $\Delta\mu=\mu_{b^\prime}-\mu_b=\pm2$. We discuss the effects of these level-crossings on the polarization in later sections.
In panels (a) and (d) of Figure~\ref{level-fig}, we plot the energies of the magnetic substates of the $^2$P terms of $^6$Li and $^7$Li as a function of the magnetic field strength. In these panels, the points where the levels cross are denoted as c1 and c2 for $^6$Li and as c$^\prime$1 and c$^\prime$2 for $^7$Li. When we zoom into these crossing points, we see other interesting phenomena (see panels (b), (c), (e), and (f)). For example, at c1, we see a crossing of the bunch of lowermost three levels going downward in Figure~\ref{level-fig-1}(a) with the three levels going upward in Figure~\ref{level-fig-1}(c). Although the magnetic substates of the $F$ states appear to be degenerate in Figure~\ref{level-fig}(a), they are not fully degenerate, as can be seen in Figure~\ref{level-fig}(b). Similar behavior can be seen in Figures~\ref{level-fig}(c), (e), and (f), and the levels correspond to the magnetic substates of the $F$ states shown in Figure~\ref{level-fig-1}. In addition to the usual level-crossings, we see several avoided crossings in Figures~\ref{level-fig}(b), (c), (e), and (f). For example, in panel (b), we see one avoided crossing marked a1, two in panel (c) marked a2 and a3, two in panel (e) marked a$^\prime$1 and a$^\prime$2, and three in panel (f) marked a$^\prime$3, a$^\prime$4, and a$^\prime$5. As we can see from the figure, these avoided crossings take place between the magnetic substates with the same $\mu$ values ($-1/2$ in panel (b), $-3/2$ and $-1/2$ in panel (c), 0 and $-1$ in panel (e), and $-2$, $-1$, and 0 in panel (f)). The levels with the same $\mu$ cannot cross owing to the small interaction that takes place between them. This interaction is determined by the off-diagonal elements of the magnetic hyperfine interaction Hamiltonian which couple the states with different $J$ values \citep{bro67,we67,ari77}. A rapid transformation in the eigenvector basis takes place around the region of avoided crossing. 
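The repulsion between substates of equal $\mu$ can be captured by a schematic two-level model (with illustrative numbers added here; this is not the lithium Hamiltonian): two diabatic levels that would cross as a control parameter $b$, standing in for the field strength, passes through zero are coupled by a small constant off-diagonal element $c$, and the eigenvalues then never touch, with a minimum gap of $2|c|$:

```python
import numpy as np

c = 0.05                                  # weak off-diagonal coupling (same mu, different J)
b_grid = np.linspace(-1.0, 1.0, 2001)     # stand-in for the magnetic field strength

gaps = []
for b in b_grid:
    H = np.array([[b, c], [c, -b]])       # two diabatic levels crossing at b = 0
    e_lo, e_hi = np.linalg.eigvalsh(H)    # eigenvalues are -/+ sqrt(b^2 + c^2)
    gaps.append(e_hi - e_lo)

min_gap = min(gaps)                       # 2*sqrt(b^2 + c^2), minimized at b = 0 -> 2|c|
```

Near $b=0$ the eigenvectors rotate rapidly from one diabatic state to the other, which is the rapid basis transformation mentioned above.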
This is described in \citet{bom80} and in LL04 \citep[see also][]{sow14a,sow14b}. \begin{table*}[ht] \begin{centering} \begin{tabular}{cccccc} \hline \hline $F_b\diagdown F_{b^\prime}$ & & 1/2 & 3/2 & 3/2 & 3/2 \\ \hline \ \ \ & $\mu_b\diagdown \mu_{b^\prime}$ & 1/2 & $-1/2$ & 1/2 & 3/2 \\ \hline 3/2 & $-3/2$ & {\bf 0.57} & ... & ... & ...\\ \hline 5/2 & $-5/2$ & 1.61 & {\bf 1.26} & 0.72 & 0.63 \\ 5/2 & $-3/2$ & ... & ... & {\bf 1.3} & 0.9 \\ 5/2 & $-1/2$ & ... & ... & 2.93 & {\bf 2.25} \\ \hline \end{tabular} \caption{Magnetic field strengths (approximate values in G) for which the magnetic substates of the $F$ states cross in the $^6$Li isotope. For instance, the crossing between $(\mu_b=-3/2,F_b=3/2)$ and $(\mu_{b^\prime}=1/2,F_{b^\prime}=1/2)$ occurs at $B\sim0.57$ G. The numbers highlighted in boldface represent the field strength values for which the level-crossings corresponding to $\Delta\mu=\mu_{b^\prime}-\mu_b=\pm2$ occur. \label{tab-2} } \end{centering} \end{table*} \begin{table*}[ht] \begin{centering} \begin{tabular}{cccccccc} \hline \hline $F_b\diagdown F_{b^\prime}$ & & 1 & 1 & 2 & 2 & 2 & 2 \\ \hline \ \ \ & $\mu_b\diagdown \mu_{b^\prime}$ & 0 & +1 & $-1$ & 0 & +1 & +2 \\ \hline 2 & $-2$ & {\bf 2.2} & 2.6 & ... & ... & ... & ...\\ \hline 3 & $-3$ & 5.2 & 5.95 & {\bf 4.15} & 2.65 & 2.35 & 2.1 \\ 3 & $-2$ & ... & ... & ... & {\bf 3.7} & 3.25 & 2.95 \\ 3 & $-1$ & ... & ... & ... & 8.8 & {\bf 7.25} & 6.0 \\ \hline \end{tabular} \caption{Magnetic field strengths (approximate values in G) for which the magnetic substates of the $F$ states cross in the $^7$Li isotope. For instance, the crossing between $(\mu_b=-2,F_b=2)$ and $(\mu_{b^\prime}=0,F_{b^\prime}=1)$ occurs at $B\sim2.2$ G. The numbers highlighted in boldface represent the field strength values for which the level-crossings corresponding to $\Delta\mu=\mu_{b^\prime}-\mu_b=\pm2$ occur. 
\label{tab-3} } \end{centering} \end{table*} \subsection{Line Splitting Diagrams} \label{sec-2} The line splitting diagram shows the displacement of the magnetic components from the line center (corresponding to the wavelength of the $L=0\rightarrow1\rightarrow0$ transition in the reference isotope $^7$Li) and the strengths of these components for a given field strength. In Figure~\ref{spl-1}, we show the line splitting diagrams for different $B$ values. We take into account the isotope shift and the solar abundances of the two isotopes while computing the strengths and magnetic shifts. As mentioned earlier, the components arising for $B=0$ correspond to the transitions between the unperturbed $F$ states. We see that the hyperfine structure components of the D lines are well separated when $B=0$ due to the relatively large FS. When the magnetic field is applied, the degeneracy of the magnetic substates is lifted. As a result, 70 allowed transitions take place in $^6$Li and 106 in $^7$Li. This explains why the diagrams become crowded as the field strength increases. We see that the magnetic displacements increase with an increase in $B$ as expected. In the diagrams shown, we note that the MS is nonlinear and is a characteristic of the incomplete PB regime. \begin{figure*} \begin{center} \includegraphics[scale=0.4]{fig4a.eps} \includegraphics[scale=0.4]{fig4b.eps} \includegraphics[scale=0.4]{fig4c.eps} \includegraphics[scale=0.4]{fig4d.eps} \includegraphics[scale=0.4]{fig4e.eps} \includegraphics[scale=0.4]{fig4f.eps} \caption{Line splitting diagrams for the two lithium isotopes for the field strengths indicated. The solid lines represent the magnetic components of $^7$Li while the dashed lines represent those of $^6$Li. Vertical dotted lines mark the positions of the D lines of the two isotopes. $\Delta\lambda=0$ corresponds to the line center wavelength of $L=0\rightarrow1\rightarrow0$ transition in $^7$Li. 
\label{spl-1} } \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[scale=0.9]{fig5.eps} \caption{Scattering geometry considered for the results presented in Section~\ref{sec-3}. \label{geom}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.45]{fig6a.eps} \includegraphics[scale=0.45]{fig6b.eps} \includegraphics[scale=0.45]{fig6c.eps} \caption{Single scattered Stokes profiles for the lithium D line system in the absence of a magnetic field: (a) 100\% $^7$Li, (b) 100\% $^6$Li, and (c) $^7$Li and $^6$Li combined according to their percentage abundance. The line types are indicated in the intensity panels. The geometry considered for scattering is $\mu=0$, $\mu^\prime=1$, $\chi=0\degree$, and $\chi^\prime=0\degree$. The vertical dotted lines represent the line center wavelength positions of the $^7$Li D$_2$, $^7$Li D$_1$, $^6$Li D$_2$, and $^6$Li D$_1$ lines in the absence of magnetic fields. \label{st-1} } \end{center} \end{figure} \subsection{Single Scattered Stokes Profiles} \label{sec-3} In this section, we present the Stokes profiles for various $B$ values computed using the combined theory for the single scattering case. We choose a coordinate system (see Figure~\ref{geom}) in which the magnetic field lies in the horizontal ({\it xy}) plane making angles $\theta_B=90\degree$ and $\chi_B=45\degree$. We make this choice following \citet{s98} in order to bring out clearly the effects of the magnetic field. We assume the unpolarized incident ray to be along the vertical ({\it z}-axis) and the scattered ray (or the line of sight) to lie in the horizontal plane along the {\it x}-axis. Thus, the angles for the incident and the scattered rays become $\mu^\prime=1$, $\chi^\prime=0\degree$, $\mu=0$, and $\chi=0\degree$. 
Since the lithium lines are optically thin and only single scattering is considered here, we simply add the Stokes profiles computed for the individual isotopes after weighting them by their respective abundances. In Figures~\ref{st-1} -- \ref{st-4}, we compare the single scattered Stokes profiles for three cases: pure $F$ state interference (dotted lines) represented by a two-level atom with hyperfine structure, pure $J$ state interference (dashed lines) represented by a two-term atom without hyperfine structure, and the combined theory (solid lines) represented by a two-term atom with hyperfine structure. We choose a Doppler width of 60 m\AA{} for all of the components of the multiplet when computing the Stokes profiles. For this particular value of the Doppler width, the theoretical $Q/I$ profile closely resembles the observed $Q/I$ profile \citep[see][]{bel09}. We use the Einstein $A$ coefficient of $3.689\times10^7$ s$^{-1}$ for all of the components. In Figure~\ref{st-1}, we show the Stokes profiles computed in the absence of magnetic fields for 100\% $^7$Li in panel (a), for 100\% $^6$Li in panel (b), and for both the isotopes combined according to their percentage abundance in panel (c). In panels (a) and (b), we see two intensity peaks corresponding to the D lines of the two isotopes. The intensities of the D lines in both the isotopes are of similar magnitude since we have assumed 100\% abundance for the two isotopes. We also note that the wavelength positions of the D lines of $^6$Li are different from those of $^7$Li owing to the isotope shift. In panel (c), we see two distinct peaks in intensity. The first peak to the left is due to the $^7$Li D$_2$ line. The second peak falls at the line center positions of $^7$Li D$_1$ and $^6$Li D$_2$. However, the dominant contribution comes from the $^7$Li D$_1$ due to its relatively larger abundance. A small bump to the right of the second peak is due to the $^6$Li D$_1$ line.
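Because the lines are optically thin and only one scattering is considered, the combined panel is a plain abundance-weighted superposition of the single-isotope profiles. A minimal sketch (with made-up profile values and illustrative abundance fractions, not those of \citet{bel09}):

```python
import numpy as np

# Illustrative Stokes vectors (I, Q, U, V) on a common wavelength grid (3 points here)
stokes_li7 = np.array([[1.0, 0.02, 0.0, 0.0],
                       [0.8, 0.01, 0.0, 0.0],
                       [0.5, 0.00, 0.0, 0.0]])
stokes_li6 = np.array([[0.9, 0.01, 0.0, 0.0],
                       [0.7, 0.02, 0.0, 0.0],
                       [0.4, 0.00, 0.0, 0.0]])

a7, a6 = 0.92, 0.08      # illustrative abundance fractions (a7 + a6 = 1)
stokes_total = a7*stokes_li7 + a6*stokes_li6   # optically thin: profiles simply add
q_over_i = stokes_total[:, 1] / stokes_total[:, 0]
```

Note that $Q/I$ of the blend is an intensity-weighted mean of the individual $Q/I$, so the more abundant isotope dominates the polarization signal, as seen above for $^7$Li.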
A small difference in the intensity at the $^7$Li D$_2$ peak between the dashed lines and the other two cases is seen in panels (a) and (c). It is clear from the figure that this discrepancy is caused by $^7$Li. Comparing the solid, dotted, and dashed profiles, we come to the conclusion that the HFS is at the origin of this discrepancy. This is because the solid and dotted lines computed by including HFS perfectly match and only the dashed lines computed without HFS differ from the other two cases. The discrepancy is very small in the case of $^6$Li because of the smaller HFS in $^6$Li compared to that in $^7$Li. This discrepancy arises from the asymmetric splitting of the HFS components about the given $J$ state and also from the finite widths of the components. This difference decreases (becoming graphically indistinguishable) when a magnetic field is applied (for example, when $B=5$ G as seen in Figure~\ref{st-2}) because of the superposition of a large number of magnetic components. In contrast, this difference is about an order of magnitude larger in the non-magnetic case. As we increase the field strength, the intensity profiles broaden due to an increased separation between the magnetic components. When $B=0$, the $Q/I$ profiles exhibit a multi-step behavior around the line center positions of the D$_1$ and D$_2$ lines of both isotopes. We see the effects of quantum interference clearly in $Q/I$. In the $^7$Li D$_2$ core, significant depolarization is caused by the HFS compared to the case where this splitting is neglected (compare the solid and dashed lines in panels (a) and (c)). A similar depolarization is also exhibited by the core of the $^6$Li D$_2$ line (see panels (b) and (c)). However, in the scale adopted, the solid and dashed lines appear to merge around the core of $^6$Li D$_2$ in panel (c), as the $Q/I$ values of $^6$Li D$_2$ are an order of magnitude smaller than those of $^7$Li D$_2$ because of their relative abundances.
The D$_1$ lines remain unpolarized. As expected, the solid lines merge with the dotted line in the cores of the lithium lines while they coincide with the dashed lines in the wings. When a magnetic field is applied, we see a depolarization in $Q/I$ and a generation of $U/I$ signal in the cores of the lithium lines due to the Hanle effect. We note that the combined theory results match more closely the pure $J$ state interference results for fields of the order of 100 G. This behavior continues until the level-crossing field strength of $B=3238$ G for fine structure is reached. For kG fields, we are well within the complete PB regime for the $F$ states. In this regime, $J$ and $I_s$ couple strongly to the magnetic field and the interaction between $J$ and $I_s$ becomes negligible. Therefore, one would expect the HFS magnetic substates to be fully degenerate, so that the solid and dashed lines should match closely for fields of the order of kG. However, for the level-crossing field strengths, we see considerable differences between the solid and the dashed lines, especially in $U/I$. In order to understand this, we compare the Stokes profiles for $^7$Li and $^6$Li separately in panels (a) and (b) of Figure~\ref{st-3} with the combined profiles in panel (c). We do this to check whether a particular isotope is giving rise to this difference. We note that this difference between the solid and dashed lines prevails in all three panels (i.e., in both isotopes). We attribute this difference in the shape and amplitude between the solid and the dashed lines to HFS, the level-crossings, and avoided crossings between the HFS magnetic substates. When we look at Figure~\ref{level-fig}, we find that the HFS magnetic substates have finite energy differences and are not fully degenerate in the complete PB regime for the $F$ states. We see several crossings as well as a few avoided crossings.
These level-crossings and avoided crossings between the non-degenerate HFS magnetic substates lead to a modification of the coherence and significant Hanle rotation, thereby affecting the shape and amplitude of the $U/I$ profiles. The HFS effects show more prominently in the polarization diagrams which will be discussed in Section~\ref{sec-6}. For the geometry under consideration, this effect is significantly seen for $B=3238$ G. For a level-crossing field strength of $4855$ G, the Stokes profiles show somewhat different behavior. We also note that for fields of the order of kG, differences between the solid and dashed lines remain only in the far left wing (see Figures~\ref{st-3} and \ref{st-4}). From Figure~\ref{st-3} it is clear that this difference in the far blue wings is only due to the $^7$Li isotope (compare panels (a)--(c)). This can be understood with the help of the line splitting diagrams for level-crossing fields in Figure~\ref{spl-1} in comparison with the corresponding diagrams in Figure 3 of \citet[][a direct comparison of the displacements can be made as the zero points in the two figures are the same]{sow14a}. In a two-term atom without HFS, when a magnetic field is applied, the various FS magnetic components are either blue or redshifted from the line center depending on their energies. When HFS is included, the HFS magnetic components are distributed around the positions of the FS magnetic components in the absence of HFS. We find that the positions of the HFS magnetic components in Figure~\ref{spl-1} correspond well with the wavelength positions of the FS magnetic components in Figure 3 of \citet{sow14a}, except for the bunch of magnetic components to the extreme left represented by solid lines. The magnetic field leads to a large blue shift of this bunch, which consists of three $\sigma_b$ ($\Delta\mu=\mu_b-\mu_a=+1$), two $\pi$ ($\Delta\mu=0$) and one $\sigma_r$ ($\Delta\mu=-1$) components. 
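The blue and red displacements of individual magnetic components discussed above follow, to first order, the linear Zeeman formula $\Delta\nu = \mu_B B (g_b\mu_b - g_a\mu_a)/h$, with a wavelength shift $\Delta\lambda \approx -\lambda^2\Delta\nu/c$. A rough numerical sketch follows; the Land\'e factors and magnetic quantum numbers are placeholders for illustration, not the actual lithium values, and the true displacements in the PB regime are nonlinear.

```python
# First-order (linear Zeeman) wavelength displacement of a magnetic
# component. All atomic parameters below are placeholders; the actual
# lithium HFS components shift nonlinearly in the PB regime.
MU_B = 9.274e-24   # Bohr magneton, J/T
H_PL = 6.626e-34   # Planck constant, J s
C_L  = 2.998e8     # speed of light, m/s

def zeeman_shift_nm(lambda_nm, B_gauss, g_b, mu_b, g_a, mu_a):
    B = B_gauss * 1e-4                                  # gauss -> tesla
    dnu = MU_B * B * (g_b * mu_b - g_a * mu_a) / H_PL   # frequency shift, Hz
    lam = lambda_nm * 1e-9
    return -lam * lam * dnu / C_L * 1e9                 # wavelength shift, nm

# A component shifted to higher frequency appears blueshifted (shift < 0):
shift = zeeman_shift_nm(670.8, 3238.0, g_b=1.5, mu_b=0.5, g_a=2.0, mu_a=-0.5)
print(shift)
```

For kG fields this gives shifts of the order of a hundredth of a nm, consistent with the component displacements being visible on the wavelength scale of the D-line system.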
These components (otherwise not present at this wavelength position when HFS is neglected) give rise to the systematic difference in $Q/I$, $U/I$, and $V/I$ in the far blue wing of the D$_2$ line of $^7$Li. However, they do not affect the intensity. The $V/I$ profiles remain somewhat indistinguishable between the three cases considered, except for very weak fields like 5 G as in Figure~\ref{st-2}. $F$ state interference significantly changes the $V/I$ profile at the $^7$Li D$_2$ wavelength position. This is a signature of the alignment-to-orientation (A-O) conversion mechanism \citep[see][and LL04]{landi82} acting in the incomplete PB regime for the $F$ states. As described in LL04, this occurs because of the double summation over $K$ and $K^\prime$ appearing in Equation~(\ref{final-rm}) and because the spherical tensor $\mathcal{T}^K_{Q}(3,{\bm n})$ is non-zero only when $K=1$ (see Equation~(\ref{tkq3}) of Appendix~\ref{a-c}). This means that circular polarization can be generated by resonance scattering even if the atom is not exposed to circularly polarized light. The alignment present in the radiation field is converted to the orientation in the upper term. This orientation in the upper $F$ states gives rise to circularly polarized light. As discussed earlier, small differences appear in the far blue wings for fields equal to or larger than the level-crossing field strengths. Finally, we remark that the discussion presented above concerning the comparison of the single scattered Stokes profiles between the three cases (namely, the pure $J$ state, pure $F$ state, and combined $J$ and $F$ interference) also remains valid for other scattering geometries. \begin{figure*} \begin{center} \includegraphics[scale=0.45]{fig7a.eps} \includegraphics[scale=0.45]{fig7b.eps} \caption{Same as Figure~\ref{st-1} but in the presence of a magnetic field. The left and the right panels correspond to different field strength values. 
The field orientation ($\theta_B=90\degree$, $\chi_B=45\degree$) is the same in both the panels. Refer to Section~\ref{sec-3} for the scattering geometry. \label{st-2} } \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.32]{fig8a.eps} \includegraphics[scale=0.32]{fig8b.eps} \includegraphics[scale=0.32]{fig8c.eps} \caption{Stokes profiles obtained for $B=3238$ G: (a) 100\% $^7$Li, (b) 100\% $^6$Li, and (c) $^7$Li and $^6$Li combined according to their percentage abundance. Refer to Section~\ref{sec-3} for the scattering geometry. When $B=3238$ G, the $U/I$ values are so small for the dotted line case that they become indistinguishable from the zero line. \label{st-3} } \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.45]{fig9a.eps} \includegraphics[scale=0.45]{fig9b.eps} \caption{Stokes profiles obtained for $B=4855$ G and $B=5000$ G. Refer to Section~\ref{sec-3} for the scattering geometry. \label{st-4} } \end{center} \end{figure*} In Figure~\ref{stlc}, we show the Stokes profiles obtained after including a weakly polarized background continuum. We refer the reader to Section 4.3 of \citet{sow14a} for details on how we add the continuum contribution and on the parameters used for the continuum. We compare this figure with Figure 4 of \citet{sow14a} and find that the HFS does not cause any change in the intensities. When $B=0$ the HFS causes a depolarization in the core of $Q/I$ without affecting the shape of the profile. For other field strengths, there is only a slight difference in the amplitude of the profiles as compared to the case without HFS, although their shapes remain the same. The $U/I$ profiles differ both in amplitude and shape for $B=3238$ G. This difference is due to HFS. When HFS is neglected, there is only one level-crossing at this field strength. 
On the other hand, when HFS is included, there are several level-crossings around this field strength (see Figures~\ref{level-fig}(b) and (e)). The $V/I$ profiles have the same shapes and amplitudes as in the case without hyperfine structure. \subsection{Net Circular Polarization (NCP)} \label{sec-5} In this section, we present the plots of NCP defined as $\int V d\lambda$ as a function of the magnetic field strength $B$. Since the PB effect causes nonlinear splitting of the magnetic components with respect to the line center, the Stokes $V$ profiles become asymmetric. As a result of this asymmetry, the integration of the Stokes $V$ over the full line profile yields a non-zero value. In the linear Zeeman and complete PB regimes, the $V$ profiles show perfect antisymmetry, which causes the NCP to become zero. The A-O conversion mechanism discussed in Section~\ref{sec-3} further enhances the asymmetry in Stokes $V$ profiles already caused by nonlinear MS, and thereby contributes to the NCP. This mechanism is particularly efficient when the level-crossings satisfy $\Delta\mu=\mu_{b^\prime}-\mu_b=1$. In Figure~\ref{ncp}, we show the behavior of NCP in different field strength ranges for the scattering geometry: $\mu^\prime=0$, $\chi^\prime=0\degree$, $\mu=1$, $\chi=90\degree$, $\theta_B=0\degree$, and $\chi_B=0\degree$. This choice of the field geometry is made in order to obtain larger values for Stokes $V$. In panel (a), we show the weak field behavior of NCP. We attribute the non-zero NCP in this regime to the PB effect in the $F$ states and the A-O conversion mechanism taking place in the incomplete PB regime for the $F$ states. We find that the NCP increases with increasing field strength, peaks around the level-crossing field strength (see Tables~\ref{tab-2} and \ref{tab-3}), and decreases with further increase in $B$. For fields of the order of kG we see a second peak in NCP whose magnitude is larger than that of the first peak by an order of magnitude.
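The sensitivity of the NCP integral $\int V\,d\lambda$ to profile asymmetry can be checked numerically. The Gaussian lobes below are synthetic, not computed lithium $V$ profiles:

```python
import numpy as np

# NCP = integral of Stokes V over wavelength. An antisymmetric V
# (linear Zeeman / complete PB regimes) integrates to zero, while the
# asymmetric V produced by nonlinear splitting does not.
lam = np.linspace(-5.0, 5.0, 2001)   # offset from line center (arbitrary units)
dl = lam[1] - lam[0]

def lobe(x0, amp=1.0, width=1.0):
    return amp * np.exp(-((lam - x0) / width) ** 2)

v_antisym = lobe(-1.0) - lobe(1.0)          # perfectly antisymmetric
v_asym = lobe(-1.0) - lobe(1.3, amp=0.7)    # lobes no longer cancel

ncp_antisym = np.sum(v_antisym) * dl        # simple Riemann sum
ncp_asym = np.sum(v_asym) * dl
print(ncp_antisym)  # ~0
print(ncp_asym)     # non-zero
```

Any mechanism that breaks the lobe symmetry, such as the nonlinear splitting or the A-O conversion discussed above, therefore shows up directly as a non-zero NCP.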
This is due to the PB effect in the $J$ states and the A-O conversion mechanism occurring in the incomplete PB regime for the $J$ states. With a further increase in the field strength, we enter the complete PB regime for the $J$ states where the NCP becomes zero. Detailed discussions on the various mechanisms producing NCP are presented in LL04. \begin{figure} \begin{center} \includegraphics[scale=0.45]{fig10.eps} \caption{Stokes profiles obtained by including the contribution from the continuum for different values of $B$. Refer to Section~\ref{sec-3} for the scattering geometry. The vertical dotted lines represent the positions of the D lines. \label{stlc} } \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{fig11a.eps} \includegraphics[scale=0.45]{fig11b.eps} \caption{Net circular polarization as a function of the magnetic field strength $B$. The scattering geometry is characterized by: $\mu^\prime=0$, $\chi^\prime=0\degree$, $\mu=1$, $\chi=90\degree$, $\theta_B=0\degree$, and $\chi_B=0\degree$. \label{ncp} } \end{center} \end{figure} \subsection{Polarization Diagrams} \label{sec-6} In Figure~\ref{pd-1}, we present the plots of Stokes $Q/I$ versus Stokes $U/I$ (polarization diagrams) for a given $B$ and $\theta_B$ and for the full range of $\chi_B$. Refer to the figure caption for the incident and scattered ray directions. $\theta_B$ takes the values $0\degree$, $70\degree$, $90\degree$, and $110\degree$. We find that the $\theta_B=70\degree$ and $110\degree$ curves perfectly coincide in all four panels. They take the same values for $Q/I$ and $U/I$ at $\chi_B=0\degree$ and $\chi_B=180\degree$. However, we see that the dependence on $\chi_B$ of the $\theta_B=70\degree$ curve is somewhat different from that of the $\theta_B=110\degree$ curve.
By this, we mean that for the $\theta_B=70\degree$ case, the $Q/I$ value changes in an anti-clockwise direction from the $\chi_B=0\degree$ point while it changes in a clockwise direction from the $\chi_B=0\degree$ point for the $\theta_B=110\degree$ case. The $Q/I$ value increases with increasing $\chi_B$, reaches a maximum and then decreases till $\chi_B=180\degree$. $U/I$ makes a gradual transition from being positive to negative. $Q/I$ again increases with an increase in $\chi_B$ and at $\chi_B=360\degree$ it resumes the same value it had at $\chi_B=0\degree$. $U/I$ now makes a transition from being negative to positive. When $\theta_B=0\degree$ the magnetic field is along the {\it z}-axis and exhibits azimuthal symmetry. Hence, $\theta_B=0\degree$ is just a point in the polarization diagram. For $\theta_B=90\degree$ the diagram is symmetric with respect to the $U/I=0$ line. In Figure~\ref{pd-3}, we compare the polarization diagrams obtained at different wavelength points by varying the field strength $B$ for a two-term atom without HFS (dashed curves) and a two-term atom with HFS (solid curves). The geometry considered is described in the caption to the figure. In panel (a), we see a decrease in $Q/I$ with increasing field strength due to the Hanle effect. For fields greater than 100 G, we enter the Hanle saturation regime. $Q/I$ starts to increase as we approach the level-crossing field strength (around 3 kG). Loops (i.e., a single circular loop for the dashed line and multiple small loops for the solid line) arise due to several level-crossings (see Figure~\ref{level-fig}) where the coherence increases and $Q/I$ tends to approach its non-magnetic value. Comparing the solid and dashed curves in Figure~\ref{pd-3}, the effects of HFS can be clearly seen. First, due to the depolarization caused by HFS, the polarization diagram shrinks in size. Second, multiple small loops are formed (see the solid lines in Figure~\ref{pd-3}). 
These multiple loops arise due to several level-crossings that occur only when HFS is included (see Figure~\ref{level-fig}(b), (c), (e), and (f)). For field strengths larger than the level-crossing field strengths, the $Q/I$ value decreases again and becomes zero around 10 kG. We see the effects due to Rayleigh scattering in strong magnetic fields when we increase the field strength beyond 10 kG \citep[similar to Figure 6(b) of][]{sow14a}. In panel (b), we show the polarization diagram computed at the $^6$Li D$_2$ wavelength position. Since the $^7$Li D$_1$ position nearly coincides with that of $^6$Li D$_2$, we see the combined effect of both lines. However, due to the large abundance of $^7$Li, the behavior of the polarization diagram is dominated by contribution from $^7$Li D$_1$. Since $^7$Li D$_1$ is unpolarized, the small arcs seen for weak fields are due to the $^6$Li D$_2$ line. After the Hanle saturation field strength (30 G), the polarization diagrams essentially show behavior similar to the corresponding polarization diagrams in panel (a). In panel (c), we show the polarization diagram for $^6$Li D$_1$ position. The D$_1$ line remains unpolarized till the level-crossing field strength (around 3 kG) is reached. Around the level-crossing field strength, we see a bigger loop for the case without HFS (dashed line) and a smaller loop for the case with HFS (solid line). \begin{figure*} \begin{center} \includegraphics[scale=0.55]{fig12a.eps} \includegraphics[scale=0.55]{fig12b.eps} \includegraphics[scale=0.55]{fig12c.eps} \includegraphics[scale=0.55]{fig12d.eps} \caption{Polarization diagrams obtained at the D line positions for $B=5$ G and different $\theta_B$ as indicated in the panels. The azimuth $\chi_B$ of the magnetic field is varied from $0\degree$ to $360\degree$. The symbols on the curves mark the $\chi_B$ values: $\ast-0\degree$, $\circ-70\degree$, $\square-180\degree$, and $\vartriangle-270\degree$. 
Since the curves for the $\theta_B=70\degree$ and $110\degree$ coincide, we use symbols that are bigger in size for the $\theta_B=110\degree$ case to distinguish it from the $\theta_B=70\degree$ curve. The geometry considered is $\mu=0$, $\mu^\prime=1$, $\chi=0\degree$, and $\chi^\prime=0\degree$. \label{pd-1} } \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[scale=0.55]{fig13a.eps} \includegraphics[scale=0.55]{fig13b.eps} \includegraphics[scale=0.55]{fig13c.eps} \caption{Polarization diagrams obtained at the D line positions for a given orientation of the magnetic field. The dashed lines correspond to the pure $J$ state interference case without HFS while the solid lines correspond to the combined theory case (including HFS). The magnetic field strength values are marked along the dashed curves in Gauss, with ``k'' meaning a factor of 1000. The asterisks on the solid curves represent the same field strength values as indicated for the dashed curves. The scattering geometry considered is $\mu=1$, $\mu^\prime=0$, $\chi=90\degree$, and $\chi^\prime=0\degree$. \label{pd-3} } \end{center} \end{figure} \section{CONCLUSIONS} \label{conclu} We present a formalism to treat the combined interferences between the magnetic substates of the hyperfine structure states pertaining to different fine structure states of the same term including the effects of PRD in scattering. Using the Kramers--Heisenberg approach, we calculate the polarized scattering cross section (i.e., the redistribution matrix) for this process. We also demonstrate the behavior of the redistribution matrix in a single scattering of the incident unpolarized radiation by the lithium atoms. In the solar case, the combined theory finds applications in modeling of spectral lines like lithium 6708 \AA{} for which the effects of both fine and hyperfine structure are significant. In the absence of magnetic fields, we recover the results already published by \citet{bel09}. 
In the present paper, we illustrate the effects of a deterministic magnetic field on the Stokes profiles of the lithium D line system. We cover the entire field strength regime from the weak-field Hanle regime to the incomplete and complete PB regimes. When the fields are weak, the Stokes profiles exhibit the well-known Hanle signatures at the centers of the lithium D lines, namely, depolarization of $Q/I$ and rotation of the polarization plane. We note that there are Zeeman-like signatures for stronger fields. We show the signatures of level-crossings and avoided crossings in Stokes profiles and polarization diagrams. Unlike the pure $J$ state or $F$ state interferences, when $J$ and $F$ state interferences are treated together, a multitude of level-crossings and avoided crossings occur which produce multiple loops in the polarization diagrams and interesting signatures in the $U/I$ profiles. Non-zero NCP is seen for fields in the incomplete PB regime, which arises not only from non-linear MS but also from the A-O conversion mechanism as already described in LL04. However, its diagnostic potential needs to be explored. We perform the calculations including the effects of PRD. However, its effect manifests itself only when one considers the transfer of the line radiation under solar atmospheric conditions. We thank the referee for very useful, detailed, and constructive comments and suggestions which helped us understand the results better and improve the paper substantially. We acknowledge the use of the HYDRA cluster facility at the Indian Institute of Astrophysics for the numerical computations related to the work presented in this paper.
\section{Singlet free energies} We performed finite temperature lattice calculations in 3- and (2+1)-flavor QCD in the region of small quark masses. In the case of 2+1 flavor QCD we used two light quark masses, $m_q=0.1m_s$ and $0.2m_s$, with $m_s$ being the strange quark mass. In the case of three degenerate flavors the quark masses were roughly $0.15m_s$ and $0.3m_s$. Calculations have been done on $16^3 \times 4$ and $16^3 \times 6$ lattices. The lattice spacing and thus the temperature scale has been fixed using the Sommer scale $r_0=0.469$ fm \cite{gray}. We used the interpolation Ansatz for the dependence of $r_0$ on the lattice gauge coupling $\beta=6/g^2$ given in Ref. \cite{us}. In our simulations we used the exact RHMC algorithm for the 2+1 flavor case, while the standard R-algorithm was used in the 3 flavor calculations. Further details about our simulations can be found in Refs. \cite{us,michael_lat06}. On the gauge configurations separated by 50 trajectories we have calculated the singlet free energy of a static quark anti-quark pair, defined as \begin{equation} \exp(-F_1(r,T)/T+C)= \frac{1}{3} \langle Tr W(\vec{r}) W^{\dagger}(0) \rangle, \label{f1def} \end{equation} with $W(\vec{r})$ being the temporal Wilson line. The above definition requires gauge fixing and we use the Coulomb gauge, as was done in many previous works \cite{ophil02,okacz02,digal03,okaczlat03,okacz04,petrov04,okacz05}. In the zero temperature limit the singlet free energy defined above coincides with the well-known static potential. In fact, the calculations of the static potential in Ref. \cite{milc04} are based on Eq. (\ref{f1def}). At finite temperature $F_1(r,T)$ gives information about the in-medium modification of inter-quark forces and color screening. The singlet free energy $F_1(r,T)$, as well as the zero temperature static potential, is defined up to an additive constant $C$ which depends on the lattice spacing.
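Equation (\ref{f1def}) is inverted in practice as $F_1(r,T) = T\,(C - \ln\frac{1}{3}\langle Tr\, W W^{\dagger}\rangle)$. A minimal numerical sketch follows; the correlator values and the temperature are synthetic stand-ins, not lattice data:

```python
import numpy as np

# Extract the singlet free energy from the Wilson-line correlator:
# F_1(r,T) = T * (C - ln correlator), per Eq. (1) of the text.
def singlet_free_energy(correlator, T, C=0.0):
    return T * (C - np.log(correlator))

T = 0.2                                             # temperature, lattice units (placeholder)
r = np.array([0.25, 0.5, 1.0, 2.0, 3.0])            # quark separations (placeholder)
corr = np.array([0.90, 0.50, 0.20, 0.15, 0.14])     # decays with r, then flattens

F1 = singlet_free_energy(corr, T)
# The flattening of the correlator at large r translates into the
# plateau of F_1 interpreted as string breaking / color screening.
print(F1)
```

The additive constant $C$ plays the role of the normalization fixed at the smallest distance in the analysis described below.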
Since the temperature is varied by changing the lattice spacing, to compare the free energies calculated at different lattice spacings we normalize them at the smallest distance to the following form for the $T=0$ static potential \begin{equation} V(r)=-\frac{0.385}{r}+\frac{1.263}{r_0^2} r. \label{t0pot} \end{equation} The above form gives a very good parameterization of the lattice data for the zero temperature static potential calculated in Ref. \cite{us}. In Fig. \ref{fig:f1} we show the singlet free energy calculated on $16^3 \times 4$ and $16^3 \times 6$ lattices together with the above parameterization of the $T=0$ static potential. As one can see from the figure, $F_1(r,T)$ is temperature independent at very small distances and coincides with the zero temperature potential, as expected. At large distances the singlet free energy approaches a constant value. This can be interpreted as string breaking at low temperatures and color screening at high temperatures. Note that the distance where the free energy effectively flattens off decreases with increasing temperature. This is another indication of color screening. Although the calculations have been done on quite coarse lattices, we see that the results on $F_1(r,T)$ show a fairly good scaling with the lattice spacing. This is shown in Fig. \ref{fig:f1scaling}, where the singlet free energies calculated on $16^3 \times 4$ and $16^3 \times 6$ lattices are compared at temperatures $T \simeq T_c$. \begin{figure} \includegraphics[width=8cm]{sing_01ms.eps} \includegraphics[width=8cm]{sing_01msNt4.eps} \caption{ The singlet free energy calculated on $16^3 \times 6$ (left) and $16^3 \times 4$ (right) lattices for $m_q=0.1m_s$ at different temperatures. The thick line shows the parameterization of the zero temperature potential discussed in the text.
} \label{fig:f1} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{sing_scaling.eps}} \caption{The comparison of the singlet free energy for $m_q=0.1m_s$ calculated on $16^3 \times 6$ and $16^3 \times 4$ lattices at $T \simeq T_c$. } \label{fig:f1scaling} \end{figure} \section{Renormalized Polyakov Loop} The expectation value of the Polyakov loop $\langle L(\vec{r}) \rangle =\langle Tr W(\vec{r}) \rangle$ is the order parameter for the deconfining transition in pure gauge theories. In full QCD dynamical quarks break the relevant $Z(3)$ symmetry explicitly and it is no longer the order parameter. Still, it remains an interesting quantity to study the deconfinement transition as it shows a rapid increase in the crossover region \cite{us,milcthermo,fodor06} and can be used to determine the transition temperature \cite{us,fodor06}. The Polyakov loop defined above strongly depends on the lattice cutoff and has no meaningful continuum limit. On the other hand, a correlator of Polyakov loops is a physical quantity and corresponds to the color averaged free energy up to a normalization constant. It satisfies the cluster decomposition \begin{equation} \exp(-F(r,T)/T+C)=\frac{1}{9} \langle L(\vec{r}) L^{\dagger}(0) \rangle|_{r \rightarrow \infty}= |\langle L (0)\rangle|^2. \end{equation} The normalization constant can be fixed from the color singlet free energy. Moreover, at large distances the color singlet free energy and the color averaged free energy approach the same constant $F_{\infty}(T)$. \begin{figure} \centerline{\includegraphics[width=10cm]{lren_sum.eps}} \caption{ The renormalized Polyakov loop in the vicinity of the transition calculated on $16^3 \times 6$ lattices for different quark masses. Also shown in the figure is the corresponding result from the $16^3 \times 4$ lattice.
\label{fig:lren_sum} } \end{figure} Therefore, following Ref.~\cite{okacz02}, we define the renormalized Polyakov loop as \begin{equation} L_{ren}(T)=\exp(-\frac{F_{\infty}(T)}{2 T}). \end{equation} Our numerical results for the renormalized Polyakov loop for different quark masses and two lattice spacings are summarized in Fig. \ref{fig:lren_sum}. One can see from the figure that $L_{ren}(T)$ shows an almost universal behavior as a function of $T/T_c$ for all quark masses studied by us, including the case of three degenerate flavors. This suggests that, in the region of small quark masses we studied, the flavor and quark mass dependence of the deconfinement transition can be almost entirely understood in terms of the flavor and quark mass dependence of the transition temperature $T_c$. Note that the results obtained on the $16^3 \times 4$ lattice are in remarkably good agreement with the results obtained on $16^3 \times 6$ lattices, indicating again that the cutoff effects are small. It is interesting to compare our results for the renormalized Polyakov loop with the calculations for three degenerate flavors performed with the Asqtad action \cite{petrov04} as well as with the two flavor calculations with the p4 action at larger quark masses \cite{okacz05}. The calculations with the Asqtad action have been done on $12^3 \times 4$ and $12^3 \times 6$ lattices. The critical temperatures are 194(15) MeV for $N_t=6$ and 199(8) MeV for $N_t=4$, $m_q=0.2m_s$ \cite{levkova}. This is consistent with our own estimates from the maximum of the susceptibility of the renormalized Polyakov loop. The two flavor calculations with the p4 action were performed on a $16^3 \times 4$ lattice and for a quark mass of about $1.54m_s$, with $m_s$ being the physical strange quark mass. This comparison is shown in Fig. \ref{fig:lren_comp}, where the renormalized Polyakov loop is plotted in a wider temperature range.
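Under the definition above, the large-distance plateau $F_{\infty}(T)$ maps directly to $L_{ren}$. A small sketch with invented plateau values (not measured lattice numbers):

```python
import math

# Renormalized Polyakov loop from the large-distance plateau of the
# singlet free energy: L_ren(T) = exp(-F_inf(T) / (2T)).
def l_ren(F_inf, T):
    return math.exp(-F_inf / (2.0 * T))

# F_inf typically drops across the transition, so L_ren rises steeply
# through the crossover region. The (T, F_inf) pairs are illustrative:
pairs = [(0.18, 1.2), (0.20, 0.8), (0.25, 0.4), (0.40, 0.2)]
values = [l_ren(F, T) for T, F in pairs]
print(values)
```

The rapid rise of $L_{ren}$ through the crossover region is what makes it a useful probe of the transition even though it is no longer a strict order parameter in full QCD.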
We see some discrepancy in the transition region between the results of our study and the earlier Asqtad calculations. For $N_t=4$ it can easily be explained by the large uncertainty in the determination of the critical temperature: one can pick a temperature in the allowed region which makes the curves lie almost on top of each other. The $N_t=6$ case shows a more dramatic discrepancy and here, apart from an even larger ambiguity in the critical temperature, we have to point out that the Asqtad simulations were done on a much smaller volume. The most noticeable deviations from the results of previous calculations are seen at higher temperatures. These deviations may come from the fact that the parameterization of the zero temperature potential given by Eq. (\ref{t0pot}) may not be appropriate for the small lattice spacings corresponding to high temperatures. Also, the parameterization of the non-perturbative beta function used in the present analysis is based on the analysis of $r_0$ for gauge couplings $\beta=6/g^2$ in the range between 3.3 and 3.4. It may not be accurate for $\beta$ values corresponding to high temperatures. This also could yield a discrepancy in the value of the renormalized Polyakov loop in the high temperature region. \begin{figure} \includegraphics[width=8cm]{lren_tred_nt4.eps} \includegraphics[width=8cm]{lren_tred_nt6.eps} \caption{The comparison of the renormalized Polyakov loop calculated with the p4 and Asqtad actions on $N_t=4$ lattices (left) and $N_t=6$ lattices (right). For $N_t=4$ we also show the results from 2 flavor p4 calculations. } \label{fig:lren_comp} \end{figure} \section{Conclusions} We have calculated the singlet free energy of a static quark anti-quark pair in full QCD in the region of small quark masses. We have found that this quantity shows little cut-off dependence and can be calculated reliably on relatively coarse lattices.
The renormalized Polyakov loop derived from the large distance limit of the singlet free energy has also been calculated and compared with earlier results. We have found that the flavor and quark mass dependence of the renormalized Polyakov loop can be absorbed almost entirely into the flavor and quark mass dependence of the transition temperature.
\section{Introduction} Keystroke dynamics is a behavioral biometric trait aimed at recognizing individuals based on their typing habits. The velocity of pressing and releasing different keys \cite{Banerjee}, the hand postures during typing \cite{Buschek}, and the pressure exerted when pressing a key \cite{10.1007/978-3-030-31321-0_2} are some of the features taken into account by keystroke biometric algorithms aimed to discriminate among users. Although keystroke technologies suffer from high intra-class variability, especially in free-text scenarios (i.e. the input text typed is not previously fixed), the ubiquity of keyboards as a method of text entry makes keystroke dynamics a near universal modality to authenticate users on the Internet. Text entry is prevalent in day-to-day applications: unlocking a smartphone, accessing a bank account, chatting with acquaintances, email composition, posting content on a social network, and e-learning \cite{2020_AAAI_edBB_JH}. As a means of user authentication, keystroke dynamics is economical because it can be easily integrated into existing computer security systems with minimal alteration and user intervention. These properties have prompted several companies to capture and analyze keystrokes. The global keystroke dynamics market is projected to grow from \$$129.8$ million to \$$754.9$ million by 2025, a rate of up to $25\%$ per year \cite{2019alliedmarket}. As an example, Google has recently committed \$$7$ million to fund TypingDNA \cite{2019silicon}, a startup company which authenticates people based on their typing behavior. At the same time, the security challenges that keystroke dynamics promises to solve are constantly evolving and getting more sophisticated every year: identity fraud, account takeover, sending unauthorized emails, and credit card fraud are some examples \cite{2019sec}.
In this context, keystroke biometric algorithms capable of authenticating individuals while interacting with computer applications are more necessary than ever. However, these challenges are magnified when dealing with applications that have hundreds of thousands to millions of users. The literature on keystroke biometrics is extensive, but to the best of our knowledge, these systems have only been evaluated with up to several hundred users. While other popular biometrics such as fingerprint and face recognition have been evaluated at the million-user scale \cite{Schroff}, the performance of keystroke biometrics in large scale scenarios remains unpublished. The aim of this paper is to explore the feasibility and limits of scaling a free-text keystroke biometric authentication system to $100$,$000$ users. The main contributions of this work are threefold: \begin{enumerate} \setlength\itemsep{0em} \item We introduce TypeNet, a free-text keystroke biometrics system based on a Siamese Recurrent Neural Network (RNN) trained on $55$M keystrokes from $68$K users, suitable for user authentication at large scale. \item We evaluate TypeNet in terms of Equal Error Rate (EER) as the number of test users is scaled from $100$ to $100$,$000$ (independent from the training data). TypeNet learns a feature representation of a keystroke sequence without need for retraining if new subjects are added to the database. Therefore, TypeNet is easily scalable. \item We carry out a comparison with previous state-of-the-art approaches for free-text keystroke biometric authentication. The performance achieved by the proposed method outperforms previous approaches in the scenarios evaluated in this work. \end{enumerate} In summary, we present the first evidence in the literature of competitive performance of free-text keystroke biometric authentication at large scale ($100$K test users). 
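The EER used above as the evaluation metric can be computed from genuine and impostor comparison scores. A sketch with synthetic scores follows; in a system like TypeNet these would be distances between keystroke-sequence embeddings, with smaller distances meaning more similar users:

```python
import numpy as np

# Equal Error Rate: the operating point where the false acceptance
# rate (FAR) equals the false rejection rate (FRR). Scores here are
# synthetic Gaussian distances, not real verification scores.
rng = np.random.default_rng(0)
genuine = rng.normal(0.3, 0.1, 5000)    # genuine-pair distances
impostor = rng.normal(0.7, 0.1, 5000)   # impostor-pair distances

thresholds = np.linspace(0.0, 1.0, 1001)
far = np.array([(impostor <= t).mean() for t in thresholds])  # false accepts
frr = np.array([(genuine > t).mean() for t in thresholds])    # false rejects

i = np.argmin(np.abs(far - frr))        # threshold where FAR crosses FRR
eer = (far[i] + frr[i]) / 2.0
print(eer)
```

Because the metric depends only on the two score distributions, it can be recomputed as the impostor pool grows with the number of test users, which is how scaling from $100$ to $100$,$000$ users is evaluated.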
The results reported in this work demonstrate the potential of this behavioral biometric for widespread deployment. The paper is organized as follows: Section \ref{related_works} summarizes related works in free-text keystroke dynamics to set the background. Section \ref{aalto} describes the dataset used for training and testing TypeNet. Section \ref{system_description} describes the processing steps and learning methods in TypeNet. Section \ref{experimental_protocol} details the experimental protocol. Section \ref{experiments_results} reports the experiments and analyzes the results obtained. Section \ref{conclusions} summarizes the conclusions and future work. \section{Related Works and Background} \label{related_works} Keystroke biometric systems are commonly placed into two categories: \textit{fixed-text}, where the keystroke sequence typed by the user is prefixed, such as a username or password, and \textit{free-text}, where the keystroke sequence is arbitrary, such as writing an email or transcribing a sentence with typing errors, and may differ between training and testing. Biometric authentication algorithms based on keystroke dynamics for desktop and laptop keyboards have been predominantly studied in fixed-text scenarios, where accuracies higher than $95\%$ are common \cite{2016_IEEEAccess_KBOC_Aythami}. Approaches based on sample alignment (e.g. Dynamic Time Warping) \cite{2016_IEEEAccess_KBOC_Aythami}, Manhattan distances \cite{Vinnie1}, digraphs \cite{Bergadano}, and statistical models (e.g. Hidden Markov Models) \cite{Ali} have been shown to achieve the best results in fixed-text. Nevertheless, the performance of free-text algorithms is generally far from that reached in the fixed-text scenario, as the complexity and variability of the text entry contribute to intra-subject variations in behavior, challenging the ability to recognize users \cite{Sim}.
Monrose and Rubin \cite{Monrose} proposed in 1997 a free-text keystroke algorithm based on user profiling by using the mean latency and standard deviation of digraphs and computing the Euclidean distance between each test sample and the reference profile. Their correct classification rate worsened from $90\%$ to $23\%$ when they changed both users' profiles and test samples from fixed-text to free-text. Gunetti and Picardi \cite{Gunetti} extended the previous algorithm to n-graphs. They calculated the duration of n-graphs common between training and testing and defined a distance function based on the duration and order of such n-graphs. Their result of $7.33\%$ classification error outperformed the previous state of the art. Nevertheless, their algorithm needs long keystroke sequences (between $700$ and $900$ keystrokes) and many keystroke sequences (up to $14$) to build the user's profile, which limits the usability of that approach. Murphy \etal~\cite{Murphy} more recently collected a very large free-text keystroke dataset ($\sim$$2.9$M keystrokes) and applied the Gunetti and Picardi algorithm, achieving $10.36\%$ classification error using sequences of $1$,$000$ keystrokes and $10$ genuine sequences to authenticate users. More recently than the pioneering works of Monrose and Gunetti, some algorithms based on statistical models have been shown to work very well with free-text, like the POHMM (Partially Observable Hidden Markov Models) \cite{Monaco}. This algorithm is an extension of the traditional Hidden Markov Models (HMMs), but with the difference that each hidden state is conditioned on an independent Markov chain. This algorithm is motivated by the idea that keystroke timings depend both on past events and the particular key that was pressed. Performance achieved using this approach in free-text is close to fixed-text, but it again requires several hundred keystrokes and has only been evaluated with a database containing fewer than $100$ users.
Nowadays, with the proliferation of machine learning algorithms capable of analysing and learning human behaviors from large-scale datasets, the performance of keystroke dynamics in the free-text scenario has been boosted. As an example, \cite{Ceker} combines the existing digraph feature extraction method with a Support Vector Machine (SVM) classifier to authenticate users. This approach achieves almost $0\%$ error rate using samples containing $500$ keystrokes. These results are very promising, even though the approach was evaluated on a small dataset of only $34$ users. More recently, in \cite{Deb} the authors employ a Recurrent Neural Network (RNN) within a Siamese architecture to authenticate users based on $8$ biometric modalities on smartphone devices. They achieved results in free-text of $81.61\%$ TAR (True Acceptance Rate) at $0.1\%$ FAR (False Acceptance Rate) using just $3$-second test windows with a dataset of $37$ users. Previous works in free-text keystroke dynamics have achieved promising results with up to several hundred users (see Table \ref{table:works}), but they have yet to scale beyond this limit and leverage emerging machine learning techniques that benefit from vast amounts of data. Here we take a step forward in this direction of machine learning-based free-text keystroke biometrics by using the largest dataset published to date with $136$M keystrokes from $168$K users. We analyze to what extent deep learning models are able to scale in keystroke biometrics to authenticate users at large scale while attempting to minimize the amount of data per user required for enrollment. \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline Year [Ref] & \#Users & \#Seq.
& Sequence Size & \#Keys\\ \hline\hline 1997 \cite{Monrose} & $31$ & N/A & N/A & N/A\\ 2005 \cite{Gunetti} & $205$ & $1-15$ & $700-900$ keys & $688$K \\ 2016 \cite{Ceker} & $34$ & $2$ & $\sim$ $7$ keys & $442$K \\ 2017 \cite{Murphy} & $103$ & N/A & $1$,$000$ keys & $12.9$M \\ 2018 \cite{Monaco} & $55$ & $6$ & $500$ keys & $165$K \\ 2019 \cite{Deb} & $37$ & $180$K & $3$ seconds & $6.7$M \\ \textbf{2020 Ours} & \boldmath$168$K & \boldmath$15$ & \boldmath $\sim$ $70$ \textbf{keys}& \boldmath$136$\textbf{M}\\ \hline \end{tabular} \end{center} \caption{Comparison among different free-text keystroke datasets employed in relevant related works. N/A = Not Available.} \label{table:works} \end{table} \section{Keystroke Dataset} \label{aalto} All experiments are conducted with the Aalto University Dataset \cite{Dhakal}, which comprises more than $5$GB of keystroke data collected from $168$,$000$ participants during a three-month time span. The acquisition task required subjects to memorize English sentences and then type them as quickly and accurately as they could. The English sentences were selected randomly from a set of $1$,$525$ examples taken from the Enron mobile email and gigaword newswire corpora. The example sentences contained a minimum of $3$ words and a maximum of $70$ characters. Note that the sentences typed by the participants could contain more than $70$ characters because each participant could omit or add characters when typing. For the data acquisition, the authors launched an online application that records the keystroke data from participants who visit their webpage and agree to complete the acquisition task (i.e. the data was collected in an uncontrolled environment). Press (keydown) and release (keyup) event timings were recorded in the browser with millisecond resolution using the JavaScript function \texttt{Date.now}. All participants in the database completed $15$ sessions (i.e.
one sentence for each session) on either a physical desktop or laptop keyboard. The authors also reported demographic statistics: $72$\% of the participants took a typing course, $218$ countries were involved, and $85$\% of the participants have English as their native language. \section{System Description} \label{system_description} \subsection{Pre-processing and Feature Extraction} The raw data captured in each user session includes a time series with three dimensions: the keycodes, press times, and release times of the keystroke sequence. Timestamps are in UTC format with millisecond resolution, and the keycodes are integers between $0$ and $255$ according to the ASCII code. We extract $4$ temporal features for each sequence (see Figure \ref{features} for details): (i) Hold Latency (HL): the elapsed time between press and release key events; (ii) Inter-key Latency (IL): the elapsed time between releasing a key and pressing the next key; (iii) Press Latency (PL): the elapsed time between two consecutive press events; and (iv) Release Latency (RL): the elapsed time between two consecutive release events. These $4$ features are commonly used in both fixed-text and free-text keystroke systems \cite{Alsultan}. Finally, we include the keycodes as an additional feature. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Figures/fig_features.pdf} \caption{Example of the 4 temporal features extracted between two consecutive keys: Hold Latency (HL), Inter-key Latency (IL), Press Latency (PL) and Release Latency (RL).} \label{features} \end{figure} The $5$ features are calculated for each keystroke in the sequence. Let $N$ be the length of the keystroke sequence, such that each sequence provided as input to the model is a time series with shape $N \times 5$ ($N$ keystrokes by $5$ features). All feature values are normalized before being provided as input to the model.
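As a minimal, hypothetical sketch (the helper name and the zero-padding of the first, undefined latency of each sequence are our own choices, not details given in the paper), the $N \times 5$ input matrix can be assembled from raw press/release timestamps as follows:

```python
import numpy as np

def extract_features(keycodes, press_ms, release_ms):
    """Build the N x 5 feature matrix [HL, IL, PL, RL, keycode] from raw data.

    keycodes, press_ms, release_ms are arrays of length N; timestamps are in
    milliseconds, keycodes in [0, 255]. Keycodes are scaled by 255 and timing
    features converted to seconds, as described in the normalization step.
    """
    k = np.asarray(keycodes, dtype=float) / 255.0    # keycode normalized to [0, 1]
    p = np.asarray(press_ms, dtype=float) / 1000.0   # press times, ms -> s
    r = np.asarray(release_ms, dtype=float) / 1000.0 # release times, ms -> s

    hl = r - p                                       # Hold Latency
    il = np.concatenate(([0.0], p[1:] - r[:-1]))     # Inter-key Latency
    pl = np.concatenate(([0.0], np.diff(p)))         # Press Latency
    rl = np.concatenate(([0.0], np.diff(r)))         # Release Latency

    return np.stack([hl, il, pl, rl, k], axis=1)     # shape (N, 5)
```

The first entry of IL, PL and RL is undefined for the first keystroke and is simply set to zero here.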
Normalization is important so that the activation values of neurons in the input layer of the network do not saturate (i.e. all close to $1$). The keycodes are normalized to between $0$ and $1$ by dividing each keycode by $255$, and the $4$ timing features are converted to seconds. This scales most timing features to between $0$ and $1$ as the average typing rate over the entire dataset is $5.1$ $\pm$ $2.1$ keys per second. Only latency features that occur either during very slow typing or long pauses exceed a value of $1$. \subsection{The Deep Model: LSTM Architecture} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Figures/LSTM.png} \caption{Architecture of TypeNet for free-text keystroke sequences. The input \textbf{x} is a time series with shape $M \times 5$ (keystrokes $\times$ keystroke features) and the output $\textbf{f}($\textbf{x}$)$ is an embedding vector with shape $1\times128$.} \label{LSTM} \end{figure} In keystroke dynamics, it is thought that the idiosyncratic behaviors that enable authentication are characterized by the relationship between consecutive key press and release events (e.g. temporal patterns, typing rhythms, pauses, typing errors). In a free-text scenario, keystroke sequences may differ in both length and content. This motivates us to choose a Recurrent Neural Network (RNN) as our keystroke authentication algorithm. RNNs have proven to be among the best algorithms for dealing with temporal data (e.g. \cite{2020_TIFS_BioTouchPass2_Tolosana}, \cite{Tolosana}) and are well suited for free-text keystroke sequences (e.g. \cite{Deb}, \cite{Lu}). Our RNN model is depicted in Figure \ref{LSTM}. It is composed of two Long Short-Term Memory (LSTM) layers of $128$ units. Between the LSTM layers, we perform batch normalization and dropout at a rate of $0.5$ to avoid overfitting. Additionally, each LSTM layer has a dropout rate of $0.2$.
One constraint when training an RNN using standard backpropagation through time applied to a batch of sequences is that the number of elements in the time dimension (i.e. number of keystrokes) must be the same for all sequences. Let us fix the size of the time dimension to $M$. In order to train the model with sequences of different lengths $N$ within a single batch, we truncate the end of the input sequence when $N>M$ and zero pad at the end when $N<M$, in both cases to the fixed size $M$. Error gradients are not computed for those zeros and do not contribute to the loss function at the output layer thanks to the Masking layer indicated in Figure~\ref{LSTM}. Finally, the output of the model $\textbf{f}($\textbf{x}$)$ is an array of size $1 \times 128$ that we will employ later as an embedding feature vector to authenticate users. \section{Experimental Protocol} \label{experimental_protocol} Our goal is to build a keystroke biometric system capable of generalizing to new users not seen during model training. For this, we train our deep model in a Siamese framework, which allows us to employ different users to train and test the authentication system. The RNN must be trained only once on an independent set of users. This model then acts as a feature extractor that provides input to a simple distance-threshold based authentication scheme. After training the RNN once, we evaluate authentication performance for a varying number of users and enrollment samples per user. \subsection{Siamese Training} In Siamese training, the model has two inputs (i.e. two keystroke sequences from either the same or different users), and therefore, two outputs (i.e. embedding vectors).
During the training phase, the model will learn discriminative information from the pairs of keystroke sequences and transform this information into an embedding space where the embedding vectors (the outputs of the model) will be close in case both keystroke inputs belong to the same user (genuine pairs), and far in the opposite case (impostor pairs). For this, we use the \textit{Contrastive loss} function defined specifically for this task \cite{Taigman}. Let $\textbf{x}_{i}$ and $\textbf{x}_{j}$ each be a keystroke sequence that together form a pair which is provided as input to the model. The contrastive loss calculates the Euclidean distance between the model outputs: \begin{equation} \label{disance} d_C(\textbf{x}_{i},\textbf{x}_{j})= \left \| \textbf{f}(\textbf{x}_{i}) - \textbf{f}(\textbf{x}_{j})\right \| \end{equation} where $\textbf{f}(\textbf{x}_{i})$ and $\textbf{f}(\textbf{x}_{j})$ are the model outputs (embedding vectors) for the inputs $\textbf{x}_{i}$ and $\textbf{x}_{j}$, respectively. The model will learn to make this distance small (close to $0$) when the input pair is genuine and large (close to $\alpha$) for impostor pairs by computing the loss function $\mathcal{L}$ defined as follows: \begin{equation} \label{loss} \mathcal{L}= (1-L_{ij})\frac{d_C^2(\textbf{x}_{i},\textbf{x}_{j})}{2}+L_{ij}\frac{\max^2\left \{0, \alpha-d_C(\textbf{x}_{i},\textbf{x}_{j})\right \} }{2} \end{equation} where $L_{ij}$ is the label associated with each pair that is set to $0$ for genuine pairs and $1$ for impostor ones, and $\alpha \geq 0$ is the margin (the maximum margin between genuine and impostor distances). We train the RNN using only the first $68$K users in the dataset. From this subset we generate genuine and impostor pairs using all the $15$ keystroke sequences available for each user. This provides us with $15\times68$K$\times15=15.3$M impostor pair combinations and $15\times14/2=105$ genuine pair combinations for each user.
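For concreteness, the distance and loss of Equations~(\ref{disance}) and (\ref{loss}) can be written out in NumPy as follows (a hypothetical reimplementation for illustration, not the authors' training code; note the label convention of $0$ for genuine and $1$ for impostor pairs):

```python
import numpy as np

def contrastive_loss(emb_i, emb_j, label, alpha=1.5):
    """Contrastive loss for one pair of embedding vectors.

    label = 0 for a genuine pair, 1 for an impostor pair;
    alpha is the margin (set to 1.5 in the paper's training setup).
    """
    d = np.linalg.norm(emb_i - emb_j)                   # Euclidean distance, Eq. (1)
    genuine_term = (1 - label) * d**2 / 2               # pulls genuine pairs together
    impostor_term = label * max(0.0, alpha - d)**2 / 2  # pushes impostors past the margin
    return genuine_term + impostor_term
```

The impostor term vanishes for pairs already separated by more than the margin $\alpha$, so only "hard" impostors contribute to the gradient.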
The pairs were chosen randomly in each training batch, ensuring that the number of genuine and impostor pairs remains balanced ($512$ pairs in total in each batch, including impostor and genuine pairs). Note that the remaining $100$K users will be employed only to test the model, so there is no data overlap between the two groups of users (open-set authentication paradigm). Regarding the training details, the best results were achieved with a learning rate of $0.05$, Adam optimizer with $\beta_{1} = 0.9$, $\beta_{2} = 0.999$ and $\epsilon = 10^{-8}$, and the margin set to $\alpha = 1.5$. The model was trained for $200$ epochs with $150$ batches per epoch and $512$ sequences in each batch. The model was built in \texttt{Keras-Tensorflow}. \subsection{Testing} We authenticate users by comparing gallery samples $\textbf{x}_{g}$ belonging to one of the users in the test set to a query sample $\textbf{x}_{q}$ from either the same user (genuine match) or another user (impostor match). The test score is computed by averaging the Euclidean distances $d_E$ between each gallery embedding vector $\textbf{f}(\textbf{x}_{g})$ and the query embedding vector $\textbf{f}(\textbf{x}_{q})$ as follows: \begin{equation} \label{score} \textit{score}= \frac{1}{G}\sum_{g=1}^{G} d_E(\textbf{f}(\textbf{x}_{g}),\textbf{f}(\textbf{x}_{q})) \end{equation} where $G$ is the number of sequences in the gallery (i.e. the number of enrollment samples). Taking into account that each user has a total of $15$ sequences, we retain $5$ sequences per user as the test set (i.e. each user has $5$ genuine test scores) and let $G$ vary in the range $1 \leq G \leq 10$ in order to evaluate the performance as a function of the number of enrollment sequences. To generate impostor scores, for each enrolled user we choose one test sample from each remaining user. We define $K$ as the number of enrolled users. In our experiments, we vary $K$ in the range $100 \leq K \leq 100$,$000$.
Therefore each user has $5$ genuine scores and $K-1$ impostor scores. Note that we have more impostor scores than genuine ones, a common scenario in keystroke dynamics authentication. The results reported in the next section are computed in terms of Equal Error Rate (EER), which is the value where False Acceptance Rate (FAR, proportion of impostors classified as genuine) and False Rejection Rate (FRR, proportion of genuine users classified as impostors) are equal. The error rates are calculated for each user and then averaged over all $K$ users \cite{2014_IWSB_Aythami_Keystroking}. \setlength{\tabcolsep}{8pt} \renewcommand{\arraystretch}{1.6} \begin{table} \begin{center} \begin{tabular}{cc|c|c|c|c|c|} \cline{3-7} \multicolumn{2}{c}{} &\multicolumn{5}{c|}{\cellcolor[HTML]{C0C0C0}\textbf{\#enrollment sequences per user $G$}} \\ \cline{3-7} \multicolumn{2}{c}{\multirow{-2}{*}{}} & \cellcolor[HTML]{C0C0C0}\textbf{1} & \cellcolor[HTML]{C0C0C0}\textbf{2} & \cellcolor[HTML]{C0C0C0}\textbf{5} & \cellcolor[HTML]{C0C0C0}\textbf{7} & \cellcolor[HTML]{C0C0C0}\textbf{10} \\ \hline \multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}} & \cellcolor[HTML]{C0C0C0}\textbf{30} & 9.53 & 8.00 & 6.43 & 5.95 & 5.49 \\ \cline{2-7} \multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}} & \cellcolor[HTML]{C0C0C0}\textbf{50} & 7.56 & 6.04 & 4.80 & 4.23 & 3.73 \\ \cline{2-7} \multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}} & \cellcolor[HTML]{C0C0C0}\textbf{70} & 7.06 & 5.55 & 4.38 & 3.87 & 3.35 \\ \cline{2-7} \multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}} & \cellcolor[HTML]{C0C0C0}\textbf{100} & 6.98 & 5.49 & 4.29 & 3.85 & 3.33 \\ \cline{2-7} \multicolumn{1}{|c|}{\multirow{-5}{*}{\cellcolor[HTML]{C0C0C0}\rotatebox{90}{\textbf{\#keys per sequence $M$}}}} & \cellcolor[HTML]{C0C0C0}\textbf{150} & 6.97 & 5.46 & 4.29 & 3.85 & 3.33 \\ \hline \end{tabular} \end{center} \caption{Equal Error Rate ($\%$) achieved for different values of the parameters $M$ (sequence length) and $G$ (number of enrollment sequences per 
user).} \label{table:performance} \end{table} \section{Experiments and Results} \label{experiments_results} \subsection{Performance vs User Data} As discussed in the related works section, one key factor when analyzing the performance of a free-text keystroke authentication algorithm is the amount of keystroke data per user employed for enrollment. In this work, we study this factor with two variables: the keystroke sequence length $M$ and the number of gallery sequences used for enrollment $G$. Our first experiment reveals to what extent $M$ and $G$ affect the authentication performance of our model. Note that the input to our model has a fixed size of $M$ after the Masking process shown in Figure~\ref{LSTM}. For this experiment, we set $K = 1$,$000$ where $K$ is the number of enrolled users. Table \ref{table:performance} summarizes the error rates achieved for the different values of sequence length $M$ and enrollment sequences per user $G$. We can observe that for sequences longer than $M = 70$ there is no significant improvement in the performance. Adding three times more key events (from $M = 50$ to $M = 150$) lowers the EER by only $0.57\%$ for all values of $G$. However, adding more sequences to the gallery shows greater improvements, with about $50\%$ relative error reduction when going from $1$ to $10$ sequences, independently of $M$. The best results are achieved for $M = 70$ and $G = 10$ with an error rate of $3.35\%$. For one-shot authentication ($G = 1$), our approach has an error rate of $7.06\%$ using sequences of $70$ keystrokes. These results suggest that our approach achieves a performance close to that of a fixed-text scenario (within $\sim$$5\%$ error rate) even when the data is scarce. For the following experiments, we set $M = 50$ and $G = 5$ to have a good trade-off between performance and amount of user data.
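The evaluation protocol (the score of Equation~(\ref{score}) followed by a per-user EER averaged over all users) can be sketched as below; the simple threshold sweep used to locate the FAR/FRR crossing is our own choice, not a detail given in the paper:

```python
import numpy as np

def score(gallery_embs, query_emb):
    """Eq. (3): mean Euclidean distance from the G gallery embeddings to the query."""
    return float(np.mean(np.linalg.norm(gallery_embs - query_emb, axis=1)))

def user_eer(genuine_scores, impostor_scores):
    """EER for one user. Scores are distances, so genuine comparisons should be LOW."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    frr = np.array([np.mean(genuine > t) for t in thresholds])    # genuine rejected
    far = np.array([np.mean(impostor <= t) for t in thresholds])  # impostors accepted
    i = int(np.argmin(np.abs(far - frr)))                         # closest FAR/FRR crossing
    return (far[i] + frr[i]) / 2

def global_eer(per_user_scores):
    """Average the per-user EERs over all K users, as done in the paper."""
    return float(np.mean([user_eer(g, i) for g, i in per_user_scores]))
```

With finitely many scores per user, the exact FAR/FRR crossing may fall between two thresholds; this sketch simply averages the two rates at the closest point.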
\subsection{Comparison with State-of-the-Art Works} We now compare the proposed TypeNet with our implementation of two state-of-the-art algorithms for free-text keystroke authentication: one based on statistical models, the POHMM (Partially Observable Hidden Markov Models) from \cite{Monaco}, and another algorithm based on digraphs and SVM from \cite{Ceker}. To allow fair comparisons, all models are trained and tested with the same data and experimental protocol: $G = 5$ enrollment sequences per user, $M = 50$ keystrokes per sequence, $K = 1$,$000$ test users. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Figures/ROCs.png} \caption{ROC comparison in free-text biometric authentication between the proposed TypeNet and two state-of-the-art approaches: POHMM from \cite{Monaco} and digraphs/SVM from \cite{Ceker}. $M = 50$ keystrokes per sequence, $G = 5$ enrollment sequences per user, 1 test sequence per user, and $K = 1$,$000$ test users.} \label{ROCs} \end{figure} In Figure \ref{ROCs} we plot the performance of the three approaches with the Aalto dataset described in Section~\ref{aalto}. We can observe that TypeNet outperforms previous state-of-the-art free-text algorithms in this scenario where the amount of enrollment data is reduced ($5 \times M=250$ training keystrokes in comparison to more than $10$,$000$ in related works, see Section~\ref{related_works}), thanks to the Siamese training step. The Siamese RNN has learned to extract meaningful features from the training dataset, which minimizes the amount of data needed for enrollment. The SVM generally requires a large number of training sequences per user ($\sim$$100$), whereas in this experiment we have only $5$ training sequences per user. We hypothesize that the lack of training samples contributes to the poor performance (near chance accuracy) of the SVM. 
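The ROC comparison above is traced from the same distance scores; a minimal way to compute (FAR, TAR) operating points (a hypothetical helper, not the authors' plotting code) is:

```python
import numpy as np

def roc_points(genuine, impostor):
    """Trace (FAR, TAR) pairs over all decision thresholds.

    Scores are distances, so a comparison is accepted when score <= threshold.
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor <= t) for t in thresholds])  # false acceptances
    tar = np.array([np.mean(genuine <= t) for t in thresholds])   # true acceptances
    return far, tar
```

Sweeping the threshold from small to large moves along the curve from the strict (low FAR, low TAR) to the permissive (high FAR, high TAR) end.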
\subsection{User Authentication at Large Scale} In the last experiment, we evaluate to what extent our model is able to generalize without performance decay. For this, we scale the number of enrolled users $K$ from $100$ to $100$,$000$. Recall that for each user we have $5$ genuine test scores and $K - 1$ impostor scores, one against each other test user. The model used for this experiment is the same one trained in the previous section ($68$,$000$ independent users included in the training phase). \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Figures/EERs.png} \caption{EER of our proposed TypeNet when scaling up the number of test users $K$ in one-shot ($G = 1$ enrollment sequences per user) and balanced ($G = 5$) authentication scenarios. $M = 50$ keystrokes per sequence.} \label{EERs} \end{figure} Figure \ref{EERs} shows the authentication results for one-shot enrollment ($G = 1$ enrollment sequence, $M = 50$ keystrokes per sequence) and the balanced scenario ($G = 5$, $M = 50$) for different values of $K$. We can observe that for both scenarios there is a slight performance decay when we scale from $100$ to $5$,$000$ test users, which is more pronounced in the one-shot scenario. However, for a large number of users ($K \geq 10$,$000$), performance stabilizes in both scenarios. These results demonstrate the potential of the Siamese RNN architecture in TypeNet to authenticate users at large scale in free-text keystroke dynamics. \section{Conclusions} \label{conclusions} We have presented TypeNet, a new free-text keystroke biometrics system based on a Siamese RNN architecture, and experimented with it at large scale on a dataset of $136$M keystrokes from $168$K users. Siamese networks have been shown to be effective in face recognition tasks when scaling up to hundreds of thousands of identities. The same capacity has also been shown by TypeNet in free-text keystroke biometrics.
In all scenarios evaluated, especially when there are many users but few enrollment samples per user, the results achieved in this work suggest that our model outperforms previous state-of-the-art algorithms. Our results range from $9.53\%$ to $3.33\%$ EER, depending on the amount of user data enrolled. A good balance between performance and the amount of enrollment data per user is achieved with $5$ enrollment sequences and $50$ keystrokes per sequence, which yields an EER of $4.80\%$ for $1$K test users. Scaling up the number of test users does not significantly affect the performance: the EER of TypeNet decays only $5\%$ in relative terms with respect to the previous $4.80\%$ when scaling up from $1$K to $100$K test users. Evidence of the EER stabilizing around $10$K users demonstrates the potential of this architecture to perform well at large scale. For future work, we will improve the way training pairs are chosen in Siamese training. Recent work has shown that choosing \textit{hard pairs} during the training phase can improve the quality of the embedding feature vectors \cite{Wu}. We plan to test our model with other databases, and investigate smarter ways to combine the multiple sources of information \cite{2018_INFFUS_MCSreview2_Fierrez}, e.g., the multiple distances in Equation~(\ref{score}). \section{Acknowledgment} This work has been supported by projects: PRIMA (MSCA-ITN-2019-860315), TRESPASS (MSCA-ITN-2019-860813), BIBECA (RTI2018-101248-B-I00 MINECO), and by edBB (UAM). A. Acien is supported by a FPI fellowship from the Spanish MINECO. {\small \bibliographystyle{ieee}
\section{Conclusions and Outlook}\label{sec::summary_outlook} This contribution proposes the first 3D beam-to-beam interaction model for molecular interactions such as electrostatic, van der Waals (vdW) or repulsive steric forces between curved slender fibers undergoing large deformations. While the general model is not restricted to a specific beam formulation, in the present work it is combined with the geometrically exact beam theory and discretized via the finite element method. A direct evaluation of the total interaction potential for general 3D bodies requires the integration of contributions from molecule or charge distributions over the volumes of the interaction partners, leading to a 6D integral (two nested 3D integrals) that has to be solved numerically. The central idea of our novel approach is to formulate reduced interaction laws for the resultant interaction potential between a pair of cross-sections of two slender fibers such that only the two 1D integrals along the fibers' length directions have to be solved numerically. This section-to-section interaction potential (SSIP) approach therefore reduces the dimensionality of the required numerical integration from 6D to 2D and yields a significant gain in efficiency, which is what makes the simulation of relevant time and length scales feasible in the first place for many practical applications. Being the key to this SSIP approach, the analytical derivation of the specific SSIP laws is based on careful consideration of the characteristics of the different types of molecular interactions, most importantly their point pair potential law and the range of the interaction. In a first step, the most generic form of the SSIP law, which is valid for arbitrary shapes of cross-sections and inhomogeneous distributions of interacting points (e.\,g.~atoms or charges) within the cross-sections, has been presented, before the assumptions and resulting simplifications behind the specific SSIP laws have been discussed in detail.
For the practically relevant case of homogeneous, disk-shaped cross-sections, specific, ready-to-use SSIP laws for short-range volume interactions such as vdW or steric interactions and for long-range surface interactions such as Coulomb interactions have been proposed. We would like to stress that postulating the general structure of the SSIP law and fitting the free parameters to e.\,g.~experimental data is one of the promising alternatives to the strategy of analytical derivation of the SSIP law as applied in this article. It is also important to emphasize that the general SSIP approach can be seamlessly integrated into an existing finite element framework for solid mechanics. In particular, it depends neither on a specific beam formulation nor on the applied spatial discretization scheme, and in the context of the present work, we have exemplarily used it with geometrically exact Kirchhoff-Love as well as Simo-Reissner type beam finite elements. Likewise, it is independent of the temporal discretization, and we have used it along with static and (Lie group) Generalized-Alpha time stepping schemes as well as inside a Brownian dynamics framework. The accuracy of the proposed SSIP laws as well as the general SSIP approach has been studied in a thorough quantitative analysis using analytical as well as numerical reference solutions for the case of vdW as well as electrostatic interactions. We find that a very high level of accuracy is achieved for long-range interactions such as electrostatics, both for the entire range of separations as well as all mutual angles of the fibers from parallel to perpendicular. In the case of short-range interactions, however, the derived SSIP law without cross-section orientation information slightly overestimates the asymptotic power-law exponent of the interaction potential over separation.
As a pragmatic solution, a calibration of the simple SSIP law has been proposed to fit a given reference solution in the small yet decisive range of separations around the equilibrium distance of the Lennard-Jones (LJ) potential. In the authors' recent contribution~\cite{GrillPeelingPulloff}, this strategy led to very good agreement in the force response on the system level. While this accuracy might already be sufficient for certain real-world applications, our future research work will focus on the derivation of enhanced interaction laws including information about the cross-section orientation with the aim to achieve higher accuracy and the exact asymptotic scaling behavior. The presented set of numerical examples finally demonstrates the effectiveness and robustness of the SSIP approach to model steric repulsion, electrostatic or vdW adhesion. Several important aspects such as the influence of the Gauss integration error and the spatial discretization error as well as local and global equilibrium of forces and conservation of energy are studied in these simulations, including quasi-static and dynamic scenarios as well as arbitrary mutual orientations and separations of the interacting fibers. In order to remedy the characteristic singularity of inverse power interaction laws in the limit of zero separation, we have proposed a numerical regularization of the LJ SSIP law, which leads to a significant increase in robustness and efficiency, saving a factor of five in the number of nonlinear iterations while yielding identical results. \section{Application of the General SSIP Approach to Specific Types of Interactions}\label{sec::method_application_to_specific_types_of_interactions} At this point we would like to return to the fact that the SSIP approach proposed in \secref{sec::method_pot_based_ia} is general in the sense that it does not depend on the specific type of physical interaction.
This section provides the necessary information and formulae to apply the newly proposed, generally valid approach from \secref{sec::method_double_length_specific_integral} to certain types of real-world, physical interactions such as electrostatics or vdW. As mentioned above, the approach requires a closed-form expression for the SSIP~$\tilde{\tilde{\pi}}$. We see two promising alternative ways to arrive at such a reduced interaction law~$\tilde{\tilde{\pi}}$: \begin{enumerate} \item analytical integration, e.\,g., as presented in \secref{sec::ia_pot_double_length_specific_evaluation_vdW} \item postulate~$\tilde{\tilde{\pi}}$ as a general function of separation~$\vr_{1-2}$ and mutual orientation~$\vpsi_{1-2}$ and determine the free parameters via fitting to \begin{enumerate} \item experimental data for specific section-to-section configurations, i.\,e., discrete values of separation~$\vr_{1-2}$ and mutual orientation~$\vpsi_{1-2}$ \item data from (one-time) numerical 4D integration for specific section-to-section configurations, i.\,e., discrete values of~$\vr_{1-2}$ and~$\vpsi_{1-2}$ \item experimental data for the global system response, e.\,g., of the entire fiber pair or a fiber network \end{enumerate} \end{enumerate} As a starting point, we will restrict ourselves to the first option based on analytical integration throughout the remainder of this work. See~\secref{sec::ia_pot_double_length_specific_evaluation_vdW} for an example of the further steps required to derive the final, ready-to-use expressions in the case of vdW interactions. To give but one example of a recent experimental work which could serve as the basis for the second option listed above, we refer to \cite{Hilitski2015}, which measures cohesive interactions between a single pair of microtubules. Postulating an SSIP and studying the global system response in numerical simulations could also be used as a verification of theoretical predictions for the system behavior.
To give an example we refer to the work of theoretical biophysicists studying the structural polymorphism of the cytoskeleton resulting from molecular rod-rod interactions \cite{Borukhov2005}, which is based on a postulated model potential ``that captures the main features of any realistic potential''. In summary, we see a large number of promising future use cases for the proposed SSIP approach. \subsection{Additional assumptions and possible simplification of the most general form of SSIP laws}\label{sec::assumptions_simplifications} Recall that the most general form of the SSIP is uniquely described by a set of six degrees of freedom, three for the relative displacement and three for the relative orientation of the two interacting cross-sections, as presented in the preceding~\secref{sec::method_double_length_specific_integral}. The following assumptions turn out to significantly simplify this most general form of the SSIP law by reducing the number of relevant degrees of freedom from six to four, two or even one. This in turn eases the desirable derivation of analytical closed-form solutions of the SSIP~$\tilde{\tilde{\pi}}$ based on the point pair potentials~$\Phi(r)$ presented in~\secref{sec::theory_molecular_interactions_pointpair}. Specifically, these assumptions are: \begin{enumerate} \item undeformable cross-sections \item circular cross-section shapes \item homogeneous (or, more generally, rotationally symmetric) particle densities~$\rho_1,\rho_2$ in the cross-sections or surface charge densities~$\sigma_1,\sigma_2$ over the circumference \end{enumerate} The first assumption is typical for geometrically exact beam theory and the second and third assumptions are reasonable regarding our applications to biopolymer fibers such as actin or DNA that can often be modeled as homogeneous fibers with circular cross-sections.
Based on these three assumptions, we can conclude that the interaction between two cross-sections is geometrically equivalent to the interaction of two homogeneous, circular disks (or rings in case of surface interactions). The rotational symmetry of the circular disks then implies that the interaction potential is invariant under rotations around their own axes and thus reduces the number of degrees of freedom to four. The relative importance of the remaining degrees of freedom, i.\,e., modes, will be the crucial point in the following discussion, where we turn to the interaction of two slender bodies, i.\,e., consider the entirety of all cross-section pairs. At this point, recall the fundamental distinction between either short-range or long-range interactions as outlined in~\secref{sec::theory_molecular_interactions_twobody_vol}. \begin{figure}[htb] \centering \subfigure[]{ \def\svgwidth{0.41\textwidth} \input{beam_to_beam_interaction_sketch_assumptions_short-range.pdf_tex} \label{fig::beam_to_beam_interaction_sketch_assumptions_short_range} } \hfill \subfigure[]{ \def\svgwidth{0.45\textwidth} \input{beam_to_beam_interaction_sketch_assumptions_long-range.pdf_tex} \label{fig::beam_to_beam_interaction_sketch_assumptions_long_range} } \caption{Sketches to illustrate the simplifications resulting for (a) short- and (b) long-range interactions.} \label{fig::beam_to_beam_interaction_sketch_assumptions} \end{figure} In the case of short-range interactions, the cross-section pairs in the immediate vicinity of the mutual closest points of the slender bodies dominate the total interaction.
As is known from macroscopic beam contact formulations~\cite{wriggers1997,meier2016,Meier2017a}, the criterion for the closest point is that the distance vector~$\vr_{1-2}$ is perpendicular to both centerline tangent vectors~$\vr'_i$, i.\,e., (assuming small shear angles) to the normal vectors of the disks (see~\figref{fig::beam_to_beam_interaction_sketch_assumptions_short_range}). Since only cross-section pairs in the direct vicinity of the closest points are relevant, arbitrary relative configurations (i.\,e.~separations and relative rotations) between those cross-sections shall in the following be discussed on the basis of six alternative degrees of freedom as illustrated in~\figref{fig::beam_to_beam_interaction_sketch_assumptions_short_range}. By considering the cross-sections~$A_{1\text{c}}$ and $A_{2\text{c}}$ at the closest points as reference, the relative configuration between cross-sections in the direct vicinity of $A_{1\text{c}}$ and $A_{2\text{c}}$ can be described via (small) rotations of $A_{1\text{c}}$ around the axes $\vr_1'$ (angle~$\gamma_1$) and $\vr_{1-2} \times \vr_1'$ (angle~$\beta_1$), (small) rotations of $A_{2\text{c}}$ around the axes $\vr_2'$ (angle~$\gamma_2$) and $\vr_{1-2} \times \vr_2'$ (angle~$\beta_2$), (small) relative rotations between $A_{1\text{c}}$ and $A_{2\text{c}}$ around the axis $\vr_{1-2}$ (angle~$\alpha$), and (small) changes in the (scalar) distance $d=\norm{\vr_{1-2}}$. As a consequence of assumptions 1--3 discussed above, the considered interaction potentials are invariant under the rotations $\gamma_1$ and $\gamma_2$. Of the remaining four degrees of freedom, the scalar distance~$d$ clearly has the most significant influence on the interaction potential because changes in~$d$ directly affect the mutual distance~$r$ of all point pairs in the body and, most importantly, the smallest surface separation~$g$ between both bodies.
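The closest-point criterion stated above can be made concrete with a minimal numerical sketch. The following Python snippet (the function name and the restriction to straight, non-parallel centerlines are our own illustrative assumptions, not part of the cited formulations) solves the two orthogonality conditions as a small linear system:

```python
import numpy as np

def closest_points_straight_centerlines(p1, t1, p2, t2):
    """Closest points of two straight centerlines x_i(s_i) = p_i + s_i * t_i.

    Solves the two orthogonality conditions (x1 - x2) . t1 = 0 and
    (x1 - x2) . t2 = 0, i.e., the distance vector is perpendicular to
    both centerline tangents (non-parallel lines assumed).
    """
    d0 = p1 - p2
    A = np.array([[t1 @ t1, -(t1 @ t2)],
                  [t1 @ t2, -(t2 @ t2)]])
    b = -np.array([d0 @ t1, d0 @ t2])
    s1, s2 = np.linalg.solve(A, b)
    return p1 + s1 * t1, p2 + s2 * t2

# Two perpendicular, skew lines: the distance vector of the resulting
# closest points is orthogonal to both tangents by construction.
c1, c2 = closest_points_straight_centerlines(
    np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
    np.array([0.0, 1.0, 1.0]), np.array([0.0, 1.0, 0.0]))
```

For (nearly) parallel centerlines, the system matrix becomes singular, which mirrors the non-uniqueness of the closest points addressed in the remark on such configurations.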
The second most significant influence is expected for the scalar relative rotation angle~$\alpha$ between the cross-section normal vectors, i.\,e., $\cos(\alpha) = \vn_1 \cdot \vn_2 \approx \vr_1' \cdot \vr_2' / (\norm{\vr_1'} \norm{\vr_2'})$. A change in~$\alpha$ does not alter the gap~$g$, but influences the distance of all next nearest point pairs in the immediate vicinity of the closest surface point pair. For the remaining two relative rotations~$\beta_1$ and $\beta_2$, arguments can be made both for a significant and for a rather irrelevant influence on the total interaction potential. On the one hand, the orthogonality conditions~$\vr_i' \cdot \vr_{1-2} = 0$, $i=1,2$ at the closest points are fulfilled in good approximation also for cross-sections in the direct vicinity of the closest points, such that the influence of~$\beta_{1/2}$ could be considered negligible. On the other hand, even small rotations~$\beta_{1/2}$ change the smallest separation of any two point pairs in the immediate neighborhood of the closest point pair as soon as the centroid distance vector $\vr_{1-2}$ rotates out of the two cross-section planes. Therefore, it seems hard to draw a final conclusion with respect to the influence and thus the importance of~$\beta_{1/2}$ based on the qualitative theoretical considerations of this section. To summarize, the scalar distance~$d$ between the cross-section centroids, the scalar relative rotation angle~$\alpha$ between the cross-section normal vectors and possibly also the relative rotation components~$\beta_{1/2}$ are expected to have a perceptible influence on the short-range interaction between slender beams fulfilling assumptions 1--3, with a relative importance that decreases in this order. In favor of the simplest possible model, we will therefore assume at this point that the effect of the relative rotations~$\alpha$, $\beta_1$ and $\beta_2$ is negligible compared to the effect of the scalar separation~$d$.
This allows us to directly use the analytical, closed-form expression for the disk-disk interaction potential as presented in~\secref{sec::theory_molecular_interactions_twobody_vdW}. The error for arbitrary configurations associated with this model assumption will be thoroughly analyzed in \secref{sec::verif_approx}. In this context, it is noteworthy that the first published method for 2D beam-to-rigid half space LJ interaction \cite{Sauer2009} likewise neglects the effect of cross-section orientation. In the subsequent publication \cite{Sauer2014}, the effect of cross-section rotation, i.\,e., interaction moments, has been included and a quantitative analysis considering a peeling experiment of a gecko spatula revealed that the differences in the resulting maximum peeling force and bending moment are below~$8\%$ and~$2\%$, respectively. However, it is unclear whether this assessment also holds for beam-to-beam interactions modeled via the proposed SSIP approach. Including the orientation of the cross-sections thus remains work in progress and will be addressed in a subsequent publication. Finally, it is emphasized that, by virtue of the discussed assumptions, the SSIP law~$\tilde{\tilde{\pi}}$ as well as the total two-body interaction potential~$\Pi_\text{ia}$ can be formulated as a pure function of the beam centerlines~$\vr_1$ and $\vr_2$ without the necessity to consider cross-section orientations via rotational degrees of freedom. This is a significant simplification of the most general case of the SSIP approach and thus facilitates both the remaining derivations in the present work and potential future applications.
\paragraph{Remark on configurations with non-unique closest points} It is well-known from the literature on macroscopic beam contact that the location of the closest points is non-unique for certain configurations of two interacting beams, e.\,g., the trivial case of two straight beams, where an infinite number of closest point pairs exists (see e.\,g.~\cite{meier2016}). Note however that the reasoning presented above also holds in these cases, since the cross-section pairs in either one or several of these regions will dominate the total interaction potential.\\ In the case of long-range interactions, the situation is fundamentally different. Recall that here the large number of cross-section pairs with large separation~$d\gg R$ outweighs the contributions from those few pairs in the vicinity of the closest point and dominates the total interaction. Thus, the regime of large separations is decisive in this case and it has already been shown in the literature on disk-disk interaction (see the brief summary in~\ref{sec::derivation_pot_ia_powerlaw_disks}) that in this regime the influence of the exact orientation of the disks is negligible compared to that of the centroid separation~$d$. In simple terms, this holds because the distance~$\vx_{\text{P}1-\text{P}2}$ between any point in disk~$1$ and any point in disk~$2$ may be approximated by the centroid separation~$d$, if~$d$ is much larger than the disk radii~$R_i$, which, again, holds for the large majority of all possible cross-section pairs. The validity of this assumption will be thoroughly verified by means of numerical reference solutions in~\secref{sec::verif_SSIP_disks_cyls_elstat}. \paragraph{Remark} The following, similar reasoning from the perspective of slender continua comes to the same conclusion.
As visualized in~\figref{fig::beam_to_beam_interaction_sketch_assumptions_long_range}, even pure (rigid body) rotations of slender bodies always entail large displacements of the centerline\footnote{Disregarding rotations of the body around its own axis, which are irrelevant here due to rotational symmetry, as mentioned above.} in the region far away from the center of rotation. The displacement of any material point due to cross-section rotation will be in the order of~$\Omega R$, where~$\Omega$ is the angle of rotation and~$R$ denotes the cross-section radius, whereas the displacement due to centerline displacement will be in the order of~$\Omega L$, where~$L$ is the distance from the center of rotation and thus in the order of the beam length~$l$. Due to the high slenderness~$l/R\gg 1$ of beams, the displacement from translation of the centroid will dominate in the region of large separations with~$L\gg R$, which is the decisive one here, because it includes the large majority of all possible cross-section pairs, as outlined above. Analogous reasoning has originally been applied to the relative importance of translational versus rotational contributions to the mass inertia of beams.\\ To conclude, we have discussed the possibility of defining and using SSIP laws~$\tilde{\tilde{\pi}}$ as a function of the scalar separation of the centroids~$d$ instead of the six degrees of freedom in the most general form. This significantly simplifies the theory because the analytical solutions for the planar disk-disk interaction from the literature can directly be used and the complex treatment of large rotations is avoided. Having considered the additional assumptions above in the context of short-range interactions, the relative importance of cross-section rotations still needs to be verified in the subsequent quantitative analysis of~\secref{sec::verif_approx}.
In the case of long-range interactions between slender bodies, we have argued that the application of such simple SSIP laws~$\tilde{\tilde{\pi}}(d)$ is expected to be a good approximation which will be confirmed by the quantitative analysis of~\secref{sec::verif_SSIP_disks_cyls_elstat}. \subsection{Short-range volume interactions such as van der Waals and steric repulsion}\label{sec::ia_pot_double_length_specific_evaluation_vdW} In the following, a generic short-range volume interaction described by the point-pair potential law \begin{equation}\label{eq::pot_general_powerlaw_pointpair} \Phi_\text{m}(r)= k_\text{m} \, r^{-m}, \, m>3 \end{equation} will be considered, because it includes vdW interaction for exponent~$m=6$ (cf.~eq.~\eqref{eq::pot_ia_vdW_pointpair}) as well as steric repulsion as modeled by LJ for exponent~$m=12$ (cf.~eq.~\eqref{eq::pot_ia_LJ_pointpair}). As outlined already in the preceding section, only the regime of small separations is practically relevant in this case of short-range interactions and we neglect the effect of cross-section rotations throughout this article. At this point, we can thus return to the results for the disk-disk scenario obtained in literature on vdW interactions \cite{langbein1972} and summarized in Table~\ref{tab::pot_ia_vdW_disks_formulae}. In particular, we make use of expression~\eqref{eq::pot_ia_vdW_disk_disk_parallel_smallseparation} or rather the more general form \eqref{eq::approx_small_sep}. The latter is valid for all power-law point pair interaction potentials with a general exponent~$m>7/2$, i.\,e., all interactions where the strength decays ``fast enough''. 
First, let us introduce the following abbreviation containing all constants in the lengthy expression: \begin{equation}\label{eq::vdW_small_sep_def_constant} c_\text{m,ss} \defvariable k_\text{m}\rho_1 \rho_2 \, \frac{2 \pi}{(m-2)^2} \, \sqrt{ \frac{2 R_1 R_2}{R_1+R_2} } \, \frac{\Gamma(m-\tfrac{7}{2}) \Gamma(\tfrac{m-1}{2})}{\Gamma(m-2) \Gamma(\tfrac{m}{2}-1)} \end{equation} Using eq.~\eqref{eq::approx_small_sep} in combination with the general SSIP approach \eqref{eq::ia_pot_double_integration} from \secref{sec::method_double_length_specific_integral}, we directly obtain an expression for the total interaction potential of two deformable fibers in the case of short-range interactions: \begin{align}\label{eq::iapot_small_sep} \Pi_\text{m,ss} &= \int_0^{l_1} \int_0^{l_2} c_\text{m,ss} \, g^{-m+\tfrac{7}{2}} \, \dd s_2 \dd s_1 \qquad \text{for} \quad m>\tfrac{7}{2}\\ & \text{with} \quad g(s_1,s_2) = \norm{ \vr_1(s_1) - \vr_2(s_2) } - R_1 - R_2\label{eq::gap} \end{align} Here, the so-called gap $g$ is the (scalar) surface-to-surface separation, i.\,e., the distance between the beams' centerline curves $\vr_1(s_1)$ and $\vr_2(s_2)$ minus the two radii~$R_i$, as visualized in Figure~\ref{fig::beam_to_beam_interaction_pot_double_integration}. In general, the particle densities $\rho_{1/2}$ may depend on the curve parameters $s_{1/2}$, i.\,e., vary along the fiber, without introducing any additional complexity at this point. For the sake of brevity, these arguments~$s_{1/2}$ will be omitted in the remainder of this section. The variation of the interaction potential required to solve eq.~\eqref{eq::total_virtual_work_is_zero} finally reads \begin{align}\label{eq::var_iapot_small_sep} \delta \Pi_\text{m,ss} = (-m+\tfrac{7}{2}) \int_0^{l_1} \int_0^{l_2} c_\text{m,ss} \left( \delta \vr_1^T - \delta \vr_2^T \right) \frac{\vr_1 - \vr_2}{d} g^{-m+\tfrac{5}{2}} \dd s_2 \dd s_1 \qquad \text{for} \quad m>\tfrac{7}{2}.
\end{align} Here, we used the variation of the gap $\delta g$, which is a well-known expression from the literature on macroscopic beam contact \cite{wriggers1997} and is identical to the variation of the separation of the beams' centerlines $\delta d$ to be used in \eqref{eq::var_iapot_large_sep_surface}, because the cross-sections are assumed to be undeformable: \begin{equation} \delta g = \delta d = \left( \delta \vr_1^T - \delta \vr_2^T \right) \frac{\vr_1 - \vr_2}{d} \end{equation} Solving eq.~\eqref{eq::total_virtual_work_is_zero} generally requires two further steps of discretization and subsequent linearization of this additional contribution~$\delta \Pi_\text{m,ss}$ to the total virtual work. The resulting expressions will be presented in \secref{sec::method_vdW_FE_discretization} and \ref{sec::method_vdW_linearization}, respectively. As discussed along with the general SSIP approach in \secref{sec::method_double_length_specific_integral}, the remaining two nested~1D integrals are evaluated numerically, e.\,g., by means of Gaussian quadrature. See \secref{sec::method_numerical_integration} for details on this algorithmic aspect. \paragraph{Remark on the regularization of the integrand} The inverse power law in the integrand of eq.~\eqref{eq::var_iapot_small_sep} has a singularity in the limit of zero surface-to-surface separation~$g\rightarrow0$. Consequently, a so-called \textit{regularization} of the potential law is needed to numerically handle (the integration of) this term robustly and sufficiently accurately. This approach is well-known e.\,g.~from (beam) contact mechanics (see e.\,g.~\cite{durville2007,Sauer2013,meier2016}) and will be further discussed and elaborated in \secref{sec::regularization}.\\ At the end of this section, we can conclude that we have found specific, ready-to-use expressions for the interaction free energy as well as the virtual work of generic short-range interactions described via the SSIP approach.
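As a minimal illustration of the numerical evaluation of the double line integral by Gaussian quadrature, the following sketch approximates~$\Pi_\text{m,ss}$ for two straight fibers (the helper name and the restriction to straight centerlines are illustrative assumptions of this sketch, not the actual implementation):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def pi_m_ss_straight(p1a, p1b, p2a, p2b, R1, R2, c_m_ss, m, n_gp=16):
    """Gauss-Legendre approximation of the double line integral
    Pi_m,ss = int int c_m,ss * g^(-m + 7/2) ds2 ds1   (for m > 7/2),
    where g = ||r1 - r2|| - R1 - R2 is the surface-to-surface gap,
    for two straight fibers with endpoints p1a..p1b and p2a..p2b."""
    xi, w = leggauss(n_gp)
    t = 0.5 * (xi + 1.0)                       # map [-1, 1] -> [0, 1]
    x1 = p1a + np.outer(t, p1b - p1a)          # quadrature points on fiber 1
    x2 = p2a + np.outer(t, p2b - p2a)          # quadrature points on fiber 2
    jac = 0.25 * np.linalg.norm(p1b - p1a) * np.linalg.norm(p2b - p2a)
    total = 0.0
    for i in range(n_gp):
        d = np.linalg.norm(x1[i] - x2, axis=1)  # centroid distances
        g = d - R1 - R2                         # gaps (must remain positive)
        total += w[i] * np.dot(w, c_m_ss * g ** (-m + 3.5))
    return jac * total
```

As expected for a short-range law, the value decays rapidly with growing surface separation; close to contact ($g \rightarrow 0$) the integrand becomes near-singular, which is exactly where the regularization mentioned in the remark is required.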
Thus, vdW interaction or steric exclusion of slender, deformable continua can now be modeled in an efficient manner, reducing the numerical integral to be evaluated from six to two dimensions. A detailed quantitative study of the approximation quality with regard to the assumptions discussed in the preceding~\secref{sec::assumptions_simplifications} is the subject of \secref{sec::verif_approx}. \subsection{Long-range surface interactions such as electrostatics}\label{sec::ia_pot_double_length_specific_evaluation_elstat} Having discussed short-range volume interactions, we now want to consider one example of \textit{long-range surface potentials}. Since electrostatic interaction is the prime example of surface potential interaction and at the same time of high interest for the application to biopolymers we have in mind, we will focus on this case throughout the following section and mostly speak of \textit{point charges} as the elementary interaction partners. However, the required steps and formulae will be presented as generally as possible in order to allow for a smooth future transfer to other applications. Especially in this context, it is important to stress again that within this model the elementary interaction partners, i.\,e., the charges, must not redistribute within the bodies. Hence, only non-conducting materials can be modeled with the SSIP approach. This however covers our main purpose to model electrostatic interactions between bio-macromolecules such as protein filaments and DNA because charges are not free to move therein. According to the SSIP approach proposed in~\secref{sec::method_double_length_specific_integral}, we aim to use analytical expressions for the two inner integrals over the cross-section circumferences, while the integration along the two beam centerlines will be evaluated numerically (cf.~eq.~\eqref{eq::pot_split_int2D_int4D} in combination with the remark on surface interactions at the end of the corresponding section).
As discussed in~\secref{sec::assumptions_simplifications}, the regime of large separations is the decisive one for beam-to-beam interactions in this case of long-range interactions and the SSIP law can be simplified in good approximation to depend only on the centroid separation~$d$, which will be confirmed numerically in~\secref{sec::verif_SSIP_disks_cyls_elstat}. At this point, we again return to the expressions for the disk-disk interaction based on a generic point pair potential~$\Phi_m(r)=k \, r^{-m}$, as derived in the literature on vdW interactions~\cite{langbein1972} and summarized in~\ref{sec::derivation_pot_ia_powerlaw_disks}. In particular, the relation~\eqref{eq::approx_large_sep} will be used, which is the same approximation used to derive eq.~\eqref{eq::pot_ia_vdW_disk_disk_parallel_largeseparation} that describes the practically rather irrelevant scenario of short-range vdW interactions in the regime of large separations. Note that in the context of electrostatics, this result is well-known as the first term, i.\,e., zeroth pole or \textit{monopole} of the multipole expansion of the ring-shaped charge distribution on each of the disks' circumferences, which represents the effect of the net charge of a (continuous) charge distribution and has no angular dependence (see~\secref{sec::theory_molecular_interactions_twobody_surf}). In simple terms, this monopole-monopole interaction means that the point pair interaction potential~$\Phi(r)$ is evaluated only once for the distance between the centers of the distributions~$r=d=\norm{ \vr_1 - \vr_2 }$ and weighted with the number of all point charges on the two circumferences of the circular cross-sections. The expression for the SSIP law to be used throughout this work would thus be exact for the scenario of the net charge of each cross-section concentrated at the centroid position (or distributed spherically symmetrically around the centroid position).
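The quality of this monopole-monopole simplification can be probed directly. The following sketch (an illustrative check of our own, with unit Coulomb constant; not taken from the verification section) compares the brute-force Coulomb energy of two discretized, coplanar charged rings with the monopole approximation~$q_1 q_2/d$:

```python
import numpy as np

def ring_ring_coulomb(d, R1, R2, q1, q2, n=180):
    """Brute-force Coulomb energy (k = 1) of two uniformly charged rings
    of radii R1, R2 lying in the same plane with centers a distance d
    apart, each discretized by n point charges."""
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ring1 = np.stack([R1 * np.cos(th), R1 * np.sin(th)], axis=1)
    ring2 = np.stack([d + R2 * np.cos(th), R2 * np.sin(th)], axis=1)
    diff = ring1[:, None, :] - ring2[None, :, :]
    r = np.linalg.norm(diff, axis=2)           # all pairwise distances
    return (q1 / n) * (q2 / n) * np.sum(1.0 / r)

# monopole (simplified SSIP) approximation: net ring charges interacting
# at the centroid distance d
exact = ring_ring_coulomb(d=20.0, R1=1.0, R2=1.0, q1=1.0, q2=1.0)
monopole = 1.0 * 1.0 / 20.0
rel_err = abs(exact - monopole) / exact
```

In the decisive regime~$d \gg R_i$, the relative deviation is of order~$(R/d)^2$, i.\,e., well below one percent for~$d = 20R$, whereas it grows markedly as the rings approach each other.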
If the accuracy of the SSIP approach needs to be improved beyond the level resulting from this simplified SSIP law (see \secref{sec::verif_SSIP_disks_cyls_elstat} for the analysis), one could simply include more terms from the multipole expansion of the (ring-shaped) charge distributions to the SSIP law, which would take the relative rotation of the cross-sections into account. Throughout this work and for the applications we have in mind, the simplified SSIP law, which is based on the monopole-monopole interaction of cross-sections, turns out to be an excellent approximation for the true electrostatic interaction law and we thus restrict ourselves to this variant. Two nested 1D integrals over the beams' length dimensions then yield the two-body interaction potential for two fibers with arbitrary centerline shapes \begin{align}\label{eq::iapot_large_sep_surface} \Pi_\text{ia,ls} =\int_0^{l_1} \int_0^{l_2} 2\pi R_1 \sigma_1 \, 2\pi R_2 \sigma_2 \, \Phi(r=d) \dd s_2 \dd s_1 \qquad \text{with} \qquad d(s_1,s_2) = \norm{ \vr_1(s_1) - \vr_2(s_2)}. \end{align} The surface (charge) densities~$\sigma_j$, $j=1,2$ have already been introduced in eq.~\eqref{eq::pot_fullvolint_surface}. Particularly for the case of electrostatics, the surface charge per unit length can be identified as $\lambda_j = 2\pi R_j \sigma_j$, $j=1,2$, and is commonly referred to as \textit{linear charge density}. Note however that eq.~\eqref{eq::iapot_large_sep_surface} holds for all long-range point pair potential laws $\Phi(r)$, e.\,g., all power laws~$\Phi_m(r)=k \, r^{-m}$ with $m\leq3$. In order to obtain the weak form of the continuous problem, the variation of this total interaction energy needs to be derived. 
This variational form can immediately be stated as \begin{align}\label{eq::var_iapot_large_sep_surface} \delta \Pi_\text{ia,ls} &= \int_0^{l_1} \int_0^{l_2} \lambda_1 \lambda_2 \, \pdiff{{\Phi(r=d)}}{d} \, \delta d \, \dd s_2 \dd s_1 \quad \text{with} \quad \delta d = \left( \delta \vr_1^T - \delta \vr_2^T \right) \frac{\vr_1 - \vr_2}{d} \end{align} as the consistent variation of the separation of the beams' centerlines~$d$, which is well-known from macroscopic beam contact formulations \cite{wriggers1997}. By inserting the generic (long-range) power law \begin{equation} \Phi_m(r) = k \, r^{-m}, \, m \leq 3 \end{equation} into \eqref{eq::var_iapot_large_sep_surface}, we obtain the final expression for the variation of the two-body interaction energy of two deformable slender bodies \begin{equation}\label{eq::var_pot_ia_powerlaw_large_sep_surface} \delta \Pi_\text{m,ls} = \int_0^{l_1} \int_0^{l_2} \, \underbrace{k m \, \lambda_1 \lambda_2}_{=:c_\text{m,ls}} \, \left( - \delta \vr_1^T + \delta \vr_2^T \right) \, \frac{\vr_1 - \vr_2}{d^{m+2}} \, \dd s_2 \dd s_1. \end{equation} The specific case of Coulombic surface interactions follows directly for~$m=1$ and~$k=C_\text{elstat}$ (cf.~eq.~\eqref{eq::pot_ia_elstat_pointpair_Coulomb}). At this point, we have once again arrived at the sought-after contribution to the weak form~\eqref{eq::total_virtual_work_is_zero} of the space-continuous problem. The steps of finite element discretization and linearization will again be presented later, in \secref{sec::method_elstat_FE_discretization} and \ref{sec::method_elstat_linearization}, respectively. \paragraph{Remark on volume interactions} Note that there is no conceptual difference if long-range volume interactions were considered instead of the long-range surface interactions presented exemplarily in this section. The only difference lies in the constant prefactor~$c_\text{m,ls}$, which would read~$c_\text{m,ls}=k m A_1 A_2 \rho_1 \rho_2$ instead. 
Rather than the spatial distribution of the elementary interaction points in the volume or on the surface, it is the long-ranged nature of the interactions, which is important for the derivations in this section and allows the use of approximations for large separations (refer to the extensive discussion in~\secref{sec::assumptions_simplifications}). \paragraph{Remark on intra- versus inter-body interactions} The electrostatic interaction of point charges on the same slender body may cause unexpected effects. Assuming equal charges along the beam length leads to repulsive forces which in turn cause tensile axial forces in the beam. At the start of a dynamic simulation, a simply supported beam will undergo axial strain oscillations before eventually an equilibrium state is found. Alternatively, these interactions of charges within the same body may be included in the constitutive model used for the continuous body and to this end be modeled by an increased effective stiffness as has e.\,g.~been suggested by \cite{Sauer2007a,Cyron2013a}. The latter approach has been applied in the numerical examples of \secref{sec::numerical_results}. \section{Linearization of the Virtual Work Contributions from Molecular Interactions}\label{sec::linearization} Generally, the discrete residual vectors~$\vdr_{\text{ia},j}$ from molecular interactions between two beam elements~$j=1,2$ depend on the primary variables~$\hat \vdx_k$ of both beam elements~$k=1,2$. 
Consistent linearization thus yields the following four sub-matrices~$\vdk_{jk}$ to be considered and assembled into the global stiffness matrix, i.\,e., system Jacobian~$\vdK$: \begin{equation} \vdk_{11} \defvariable \diff{ \vdr_{\text{ia},1} }{ {\hat \vdx_1} }, \quad \vdk_{12} \defvariable \diff{ \vdr_{\text{ia},1} }{ {\hat \vdx_2} }, \quad \vdk_{21} \defvariable \diff{ \vdr_{\text{ia},2} }{ {\hat \vdx_1} }, \quad \vdk_{22} \defvariable \diff{ \vdr_{\text{ia},2} }{ {\hat \vdx_2} } \end{equation} Note that the linearization with respect to the primary variables~$\hat \vdx_k$ of both interacting beam elements simplifies due to the fact that the residuals~$\vdr_{\text{ia},j}$ do not depend on the cross-section rotations as discussed along with the derivation of the specific SSIP laws in~\secref{sec::method_application_to_specific_types_of_interactions}. Thus, only the linearization with respect to the centerline degrees of freedom~$\hat \vdd_k$ yields non-zero entries, which are therefore the only contributions presented in the remainder of this section. \subsection{Short-range volume interactions such as van der Waals and steric repulsion}\label{sec::method_vdW_linearization} The linearization of the residual contributions with respect to the primary variables~$\hat \vdx$ of both interacting beam elements is directly obtained from differentiation of eq.~\eqref{eq::res_ia_pot_smallsep_ele1} and~\eqref{eq::res_ia_pot_smallsep_ele2}: \begin{align} \vdk_{\text{m,ss},11} &= (m-\tfrac{7}{2}) \int_0^{l_1} \int_0^{l_2} c_\text{m,ss} \left( - d^{-1} \, g^{-m+\tfrac{5}{2}} \, \vdH_1^\text{T} \vdH_1 + \right. & \nonumber \\ & \left. \left( d^{-3} \, g^{-m+\tfrac{5}{2}} + (m-\tfrac{5}{2}) \, d^{-2} \, g^{-m+\tfrac{3}{2}} \right) \vdH_1^\text{T} \left( \vr_1 - \vr_2 \right) \otimes \left( \vr_1 - \vr_2 \right)^T \vdH_1 \right) \dd s_2 \dd s_1 & \\ \vdk_{\text{m,ss},12} &= (m-\tfrac{7}{2}) \int_0^{l_1} \int_0^{l_2} c_\text{m,ss} \left( d^{-1} \, g^{-m+\tfrac{5}{2}} \, \vdH_1^\text{T} \vdH_2 - \right.
& \nonumber \\ & \left. \left( d^{-3} \, g^{-m+\tfrac{5}{2}} + (m-\tfrac{5}{2}) \, d^{-2} \, g^{-m+\tfrac{3}{2}} \right) \vdH_1^\text{T} \left( \vr_1 - \vr_2 \right) \otimes \left( \vr_1 - \vr_2 \right)^T \vdH_2 \right) \dd s_2 \dd s_1 & \\ \vdk_{\text{m,ss},21} &= (m-\tfrac{7}{2}) \int_0^{l_1} \int_0^{l_2} c_\text{m,ss} \left( d^{-1} \, g^{-m+\tfrac{5}{2}} \, \vdH_2^\text{T} \vdH_1 - \right. & \nonumber \\ & \left. \left( d^{-3} \, g^{-m+\tfrac{5}{2}} + (m-\tfrac{5}{2}) \, d^{-2} \, g^{-m+\tfrac{3}{2}} \right) \vdH_2^\text{T} \left( \vr_1 - \vr_2 \right) \otimes \left( \vr_1 - \vr_2 \right)^T \vdH_1 \right) \dd s_2 \dd s_1 & \\ \vdk_{\text{m,ss},22} &= (m-\tfrac{7}{2}) \int_0^{l_1} \int_0^{l_2} c_\text{m,ss} \left( - d^{-1} \, g^{-m+\tfrac{5}{2}} \, \vdH_2^\text{T} \vdH_2 + \right. & \nonumber \\ & \left. \left( d^{-3} \, g^{-m+\tfrac{5}{2}} + (m-\tfrac{5}{2}) \, d^{-2} \, g^{-m+\tfrac{3}{2}} \right) \vdH_2^\text{T} \left( \vr_1 - \vr_2 \right) \otimes \left( \vr_1 - \vr_2 \right)^T \vdH_2 \right) \dd s_2 \dd s_1. & \end{align} See eq.~\eqref{eq::vdW_small_sep_def_constant} for the definition of the constant~$c_\text{m,ss}$ and eq.~\eqref{eq::centerline_discretization} for the definition of the shape function matrices~$\vdH_j$. Note that the `mixed' matrix products $\vdH_1^\text{T} (\ldots) \vdH_2$ and $\vdH_2^\text{T} (\ldots) \vdH_1$ lead to off-diagonal entries in the tangent stiffness matrix of the system which couple the corresponding degrees of freedom. This is reasonable and necessary because these couplings represent the interaction between the respective bodies. 
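The two-term structure of these stiffness contributions (an identity-like part plus a dyadic product part) can be verified independently by a finite-difference check. The following sketch does this for the analogous point-pair reduction~$\Pi = c\,d^{-m}$, which is a deliberately simplified stand-in for the full, shape-function-weighted expressions above:

```python
import numpy as np

def residual_r1(x1, x2, c=1.0, m=1):
    """r1 = dPi/dx1 for the point-pair potential Pi = c * d^-m, d = ||x1 - x2||."""
    d = np.linalg.norm(x1 - x2)
    return -c * m * d ** (-m - 2) * (x1 - x2)

def stiffness_k11(x1, x2, c=1.0, m=1):
    """Analytical k11 = d r1 / d x1; note the same two-term structure
    (identity part plus dyadic product part) as in the matrices above."""
    d = np.linalg.norm(x1 - x2)
    dyad = np.outer(x1 - x2, x1 - x2)
    return (-c * m * d ** (-m - 2) * np.eye(3)
            + c * m * (m + 2) * d ** (-m - 4) * dyad)

# central finite differences as an independent check of the consistent
# linearization
x1 = np.array([1.0, 0.2, -0.3])
x2 = np.zeros(3)
eps = 1e-6
k_fd = np.zeros((3, 3))
for j in range(3):
    e = np.zeros(3)
    e[j] = eps
    k_fd[:, j] = (residual_r1(x1 + e, x2) - residual_r1(x1 - e, x2)) / (2.0 * eps)
```

The same check carries over to the discretized element matrices by perturbing the nodal degrees of freedom instead of the point coordinates.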
\subsection{Long-range surface interactions such as electrostatics}\label{sec::method_elstat_linearization} In analogy to the previous section, differentiation of eq.~\eqref{eq::res_ia_pot_largesep} yields \begin{align} \vdk_{\text{m,ls},11} &= - \int_0^{l_1} \int_0^{l_2} c_\text{m,ls} \, \vdH_1^\text{T} \frac{1}{d^{2m+4}} \left( \vdH_1 d^{m+2} - (m+2) d^{m} \left( \vr_{1} - \vr_{2} \right) \otimes \left( \vr_{1} - \vr_{2} \right)^T \vdH_1 \right) \dd s_2 \dd s_1 & \nonumber \\ & = \int_0^{l_1} \int_0^{l_2} c_\text{m,ls} \left( - \frac{1}{d^{m+2}} \vdH_1^\text{T} \vdH_1 + \frac{\left(m+2\right)}{d^{m+4}} \vdH_1^\text{T} \left( \vr_{1} - \vr_{2} \right) \otimes \left( \vr_{1} - \vr_{2} \right)^T \vdH_1 \right) \dd s_2 \dd s_1 &\\ \vdk_{\text{m,ls},12} &= \int_0^{l_1} \int_0^{l_2} c_\text{m,ls} \left( \frac{1}{d^{m+2}} \vdH_1^\text{T} \vdH_2 - \frac{\left(m+2\right)}{d^{m+4}} \vdH_1^\text{T} \left( \vr_{1} - \vr_{2} \right) \otimes \left( \vr_{1} - \vr_{2} \right)^T \vdH_2 \right) \dd s_2 \dd s_1 &\\ \vdk_{\text{m,ls},21} &= \int_0^{l_1} \int_0^{l_2} c_\text{m,ls} \left( \frac{1}{d^{m+2}} \vdH_2^\text{T} \vdH_1 - \frac{\left(m+2\right)}{d^{m+4}} \vdH_2^\text{T} \left( \vr_{1} - \vr_{2} \right) \otimes \left( \vr_{1} - \vr_{2} \right)^T \vdH_1 \right) \dd s_2 \dd s_1 &\\ \vdk_{\text{m,ls},22} &= \int_0^{l_1} \int_0^{l_2} c_\text{m,ls} \left( - \frac{1}{d^{m+2}} \vdH_2^\text{T} \vdH_2 + \frac{\left(m+2\right)}{d^{m+4}} \vdH_2^\text{T} \left( \vr_{1} - \vr_{2} \right) \otimes \left( \vr_{1} - \vr_{2} \right)^T \vdH_2 \right) \dd s_2 \dd s_1 & \end{align} See again eq.~\eqref{eq::centerline_discretization} for the definition of the shape function matrices~$\vdH_j$. As mentioned before, the discrete element residual vectors in the specific case of Coulombic interactions directly follow for~$m=1$ and~$c_\text{m,ls} = C_\text{elstat} \lambda_1 \lambda_2$. 
See \secref{sec::theory_electrostatics_pointcharges} for the definition of~$C_\text{elstat}$ and~\secref{sec::ia_pot_double_length_specific_evaluation_elstat} for the definition of the linear charge densities~$\lambda_i$. Again, as mentioned already in~\secref{sec::ia_pot_double_length_specific_evaluation_elstat}, the case of long-range \textit{volume} interactions only requires adapting the constant prefactor via~$c_\text{m,ls}=k m A_1 A_2 \rho_1 \rho_2$. \section{The Section-to-Section Interaction Potential (SSIP) Approach}\label{sec::method_pot_based_ia} Based on the fundamentals of molecular interactions (\secref{sec::theoretical_foundation_molecular_interactions}) as well as geometrically exact beam theory (\secref{sec::fundamentals_beams}), this section will propose the novel SSIP approach to model various types of molecular interactions between deformable fibers undergoing large deflections in 3D. \subsection{Problem statement}\label{sec::problem_statement_general_strategy} For a classical conservative system, the total potential energy consists of the internal and external energy contributions~$\Pi_\text{int}$ and~$\Pi_\text{ext}$. The additional contribution from molecular interaction potentials~$\Pi_\text{ia}$ is simply added to the total potential energy as follows: \begin{equation} \Pi_\text{TPE}=\Pi_\text{int}-\Pi_\text{ext}+\Pi_\text{ia} \overset{!}{=} \text{min.} \end{equation} Note that the existing contributions remain unchanged by this additional term. One noteworthy difference is that internal and external energy are summed over all bodies in the system, whereas the total interaction free energy is summed over all pairs of interacting bodies. According to the \textit{principle of minimum of total potential energy}, the weak form of the equilibrium equations is derived by means of variational calculus. 
The very same equation may alternatively be derived by means of the \textit{principle of virtual work}, which also holds for non-conservative systems: \begin{align}\label{eq::total_virtual_work_is_zero} \delta \Pi_\text{int} - \delta \Pi_\text{ext} + \delta \Pi_\text{ia} = 0 \end{align} Clearly, the evaluation of the interaction potential~$\Pi_\text{ia}$, or rather its variation~$\delta \Pi_\text{ia}$, is the crucial step here. Recall~\eqref{eq::pot_fullvolint} to realize that it generally requires the evaluation of two nested 3D integrals% \footnote{It is important to mention that, assuming additivity of the involved potentials, systems with more than two bodies can be handled by superposition of all pair-wise two-body interaction potentials. It is thus sufficient to consider one pair of beams in the following. The same reasoning applies to more than one type of physical interaction, i.\,e., potential contribution.}. The direct approach using 6D numerical quadrature turns out to be extremely costly and in fact inhibits any application to (biologically) relevant multi-body systems. See \secref{sec::method_numerical_integration} for more details on the complexity and the cost of this naive, direct approach as well as the novel SSIP approach to be proposed in the following. \subsection{The key to dimensional reduction from 6D to 2D}\label{sec::method_double_length_specific_integral} We propose a split of the integral in the length dimensions $l_1, l_2$ on the one hand and the cross-sectional dimensions $A_1,A_2$ on the other hand: \begin{equation}\label{eq::pot_split_int2D_int4D} \Pi_\text{ia} = \iint_{l_1,l_2} \; \underbrace{\iint_{A_1,A_2} \rho_1(\vx_1) \rho_2(\vx_2) \Phi(r) \dd A_2 \dd A_1}_{=: \; \tilde{\tilde{\pi}}(\vr_{1-2},\vpsi_{1-2})} \; \dd s_2 \dd s_1 \qquad \text{with} \quad r=\norm{\vx_1-\vx_2}. 
\end{equation} Exploiting the characteristic slenderness of beams, the 4D integration over both undeformable cross-sections shall be tackled analytically and only the remaining two nested 1D integrals along the centerline curves shall be evaluated numerically to allow for arbitrarily deformed configurations. Generally speaking, we follow the key idea of reduced dimensionality from beam theory and thus aim to express the relevant information about the cross-sectional dimensions by the point-wise six degrees of freedom~$(\vr_i,\vpsi_i)$ of the 1D Cosserat continua without loss of significant information. To this end, we need to consider the resulting interaction between all the elementary interaction partners within two cross-sections, expressed by an interaction potential~$\tilde{\tilde{\pi}}(\vr_{1-2},\vpsi_{1-2})$ that depends on the separation~$\vr_{1-2}$ of the centroid positions and the relative rotation~$\vpsi_{1-2}$ between both material frames attached to the cross-sections. For this reason, the novel approach is referred to as the \textit{section-to-section interaction potential} (SSIP) approach. The SSIP~$\tilde{\tilde{\pi}}$ is a double length-specific quantity in the sense that it measures an energy per unit length of beam~$1$ per unit length of beam~$2$, which is indicated here by the double tilde. The sought-after total interaction potential~$\Pi_\text{ia}$ of two slender deformable bodies thus results from double numerical integration of the double length-specific SSIP along both centerline curves: \begin{align}\label{eq::ia_pot_double_integration} \Pi_\text{ia} = \int \limits_0^{l_1} \int \limits_0^{l_2} \tilde{\tilde{\pi}}(\vr_{1-2},\vpsi_{1-2}) \dd s_2 \dd s_1 \end{align} This relation suggests another, alternative interpretation of the SSIP~$\tilde{\tilde{\pi}}$. 
In analogy to the term~\textit{inter-surface potential}, introduced by~\cite{Argento1997}, $\tilde{\tilde{\pi}}$ can be understood as an \textit{inter-axis potential}, i.\,e., describing the interaction of two spatial curves (with attached material frames). To further illustrate this novel concept, a simple, demonstrative example is shown in~\figref{fig::beam_to_beam_interaction_pot_double_integration}. \begin{figure}[htpb]% \centering \def\svgwidth{0.5\textwidth} \input{beam_to_beam_interaction_pot_double_integration.pdf_tex} \caption{Illustration of the novel SSIP approach: Two cross-sections at integration points~$\xi_{1/2,\text{GP}}$ of beam~$1$ and~$2$, respectively, their separation~$\vr_{1-2}$ and relative rotation~$\vpsi_{1-2}$.} \label{fig::beam_to_beam_interaction_pot_double_integration} \end{figure} In this scenario of two beams with circular cross-section, the SSIP~$\tilde{\tilde{\pi}}(\vr_{1-2},\vpsi_{1-2})$ describes the interaction of two circular disks at arbitrary mutual distance and orientation. To evaluate the two nested 1D integrals along the beam axes numerically, the SSIP needs to be evaluated for all combinations of integration points (denoted here as Gaussian quadrature points (GP), without loss of generality). For one of these pairs~($\xi_{1,\text{GP}}$, $\xi_{2,\text{GP}}$), the relevant geometric quantities are shown as an example. While analytical integration of the inner 4D integral of \eqref{eq::pot_split_int2D_int4D} has already been suggested above as one way to find a closed-form expression for the SSIP~$\tilde{\tilde{\pi}}$, we would like to stress the generality of the SSIP approach at this point. The question of how to find~$\tilde{\tilde{\pi}}$ is independent of the strategy to determine the interaction energy~$\Pi_\text{ia}$ of two slender bodies via numerical double integration as proposed in this section. 
This is important to understand because the SSIP~$\tilde{\tilde{\pi}}$ will obviously depend on the type of interaction, the cross-section shape and a number of other factors. There might also be cases where no analytical solution can be obtained and one has to resort to relations fitted to experimental data. In the scope of this work, several specific expressions for~$\tilde{\tilde{\pi}}$, e.\,g., for vdW as well as electrostatic interactions, will be derived analytically in the following \secref{sec::method_application_to_specific_types_of_interactions}. In its most general form, $\tilde{\tilde{\pi}}$ will be a function of the relative displacement~$\vr_{1-2}$ and the relative rotation~$\vpsi_{1-2}$ between both cross-sections, i.\,e., three translational and three rotational degrees of freedom. This becomes clear if one recalls that the position~$\vx_\text{P}$ of every material point in a slender body can be uniquely described by the six degrees of freedom of a 1D Cosserat continuum (cf.\,eq.~\eqref{eq::position_material_point_Cosserat}). Thus, keeping one cross-section fixed, the position~$\vx_{\text{P}1-\text{P}2}$ of every material point in the second cross-section relative to the (centroid position and material frame of the) first cross-section is again uniquely described by six degrees of freedom~$(\vr_{1-2},\vpsi_{1-2})$. This insight naturally leads to the interesting question under which conditions the SSIP~$\tilde{\tilde{\pi}}$ can be described by a smaller set of degrees of freedom, thus simplifying the expressions. Rotational symmetry of the interacting cross-sections is one common example, in which case the SSIP is invariant under rotations around the cross-section's normal axis. We will return to this topic in~\secref{sec::assumptions_simplifications} as a preparation for the following derivation of specific expressions for the SSIP. 
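To make the double numerical integration of eq.~\eqref{eq::ia_pot_double_integration} concrete, the following sketch evaluates $\Pi_\text{ia}$ for two given centerline curves with nested 5-point Gauss quadrature. The purely distance-dependent SSIP law used in the usage example, a rotation-independent $\tilde{\tilde{\pi}}(r) = r^{-2}$, is an illustrative assumption, not one of the specific expressions derived later.

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
GL_XI = (-0.9061798459386640, -0.5384693101056831, 0.0,
          0.5384693101056831,  0.9061798459386640)
GL_W = (0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
        0.4786286704993665, 0.2369268850561891)

def ssip_total_potential(centerline1, centerline2, l1, l2, ssip):
    """Pi_ia = int_0^l1 int_0^l2 ssip(||r1(s1) - r2(s2)||) ds2 ds1,
    evaluated with nested Gauss quadrature; centerline_i maps the arc-length
    coordinate s_i to a 3D centerline point (straight or curved)."""
    total = 0.0
    for xi1, w1 in zip(GL_XI, GL_W):
        s1 = 0.5 * l1 * (xi1 + 1.0)          # map [-1, 1] -> [0, l1]
        for xi2, w2 in zip(GL_XI, GL_W):
            s2 = 0.5 * l2 * (xi2 + 1.0)      # map [-1, 1] -> [0, l2]
            r = math.dist(centerline1(s1), centerline2(s2))
            total += 0.25 * l1 * l2 * w1 * w2 * ssip(r)
    return total

# Usage: two straight, parallel, unit-length fibers at distance 1 with
# ssip(r) = 1/r^2; the analytical value of this double integral is
# pi/2 - ln(2).
pi_ia = ssip_total_potential(lambda s: (s, 0.0, 0.0),
                             lambda s: (s, 1.0, 0.0),
                             1.0, 1.0, lambda r: r**-2)
```

Note that any distance- and rotation-dependent SSIP can be passed in instead; only the quadrature loop is specific to the dimensionally reduced 2D integral.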
\paragraph{Remark on the included special case of surface interactions} \label{sec::method_surface_pot_ia} Conveniently, the practically highly relevant case of surface potentials is already included as a simpler, special case in the proposed SSIP approach to model molecular interactions between the entire volume of flexible fibers. In simple words, it is sufficient to omit one spatial dimension of analytical integration on each interacting body in the derivation of the required SSIP~$\tilde{\tilde{\pi}}$. More specifically, this means that~$\tilde{\tilde{\pi}}$ may be obtained from solving analytically two nested 1D integrals along both, e.\,g.~ring-shaped, contour lines of the fiber cross-sections. \section{Supplementary information on algorithms and code framework used for the simulations}\label{sec::algorithm_implementation_aspects} \begin{description} \item[\normalfont{\textit{Implementation}}] All novel methods have been implemented in~C++ within the framework of the multi-purpose and multi-physics in-house research code BACI~\cite{BACI2018}. \item[\normalfont{\textit{Integration into existing code framework}}] The novel SSIP approach can be integrated very well into an existing nonlinear finite element solver for solid mechanics. In particular, it does not depend on a specific beam (finite element) formulation and has been used with geometrically exact Kirchhoff-Love as well as Simo-Reissner beam elements. Also, it is independent of the temporal discretization and has been used along with statics, Lie group Generalized-Alpha as well as Brownian dynamics. \item[\normalfont{\textit{Load/time stepping}}] We either applied a fixed step size or an automatic step size adaptation that is outlined in the following. Starting from a given initial step size, a step is repeated with half of the previous step size if and only if the nonlinear solver did not converge within a prescribed number of iterations. 
This procedure may be repeated until convergence is achieved (or until a given finest step size is reached, in which case the algorithm aborts). After four consecutive converged steps with the reduced step size, the step size is doubled. Again, this is repeated until the initial step size is reached. \item[\normalfont{\textit{Nonlinear solver}}] The Newton-Raphson algorithm used throughout this work is based on the package NOX which is part of the Trilinos project~\cite{Trilinos2012}. Unless otherwise stated, the Euclidean norms of the displacement increment vector and of the residual vector are used as convergence criteria. Typically, the corresponding tolerances were chosen as~$10^{-10}$ and~$10^{-7}$, respectively. In some of the numerical examples, an additional Newton step size control is applied. It restricts the step size such that a specified upper bound of the displacement increment per nonlinear iteration is not exceeded. In simple terms, it is meant to prevent any two points on two beams from moving so far within one iteration that they cross each other undetected. For this reason, the value for this upper bound is typically chosen as half of the beam radius. \item[\normalfont{\textit{Linear solver}}] We use the algorithm UMFPACK~\cite{UMFPACK2004}, which is a direct solver for sparse linear systems of equations based on LU-factorization and included in the package Amesos which is part of the Trilinos project~\cite{Trilinos2012}. \item[\normalfont{\textit{Parallel computing}}] The implementation of the novel methods supports parallel computing and is based on the package Epetra which is part of the Trilinos project~\cite{Trilinos2012}. See~\secref{sec::search_parallel_computing} for details on the partitioning of the problem in the context of the search algorithm applied to identify spatially proximate interaction partners. 
\item[\normalfont{\textit{Post-processing and visualization}}] The computer program MATLAB~\cite{MATLAB2017b} was used to post-process and plot simulation data. All visualizations of the simulation results were generated using Paraview~\cite{Paraview}. \end{description} \section{Fundamentals of Intermolecular Forces and Potentials}\label{sec::theoretical_foundation_molecular_interactions} Interactions between molecules may result from various physical origins and are a complex and highly active field of research within the community of theoretical as well as experimental physics. The methods to be derived in this work make use of the most essential and well established findings as summarized e.\,g.~in the textbooks \cite{israel2011} and \cite{parsegian2005}. This section briefly presents a selection of aspects relevant for this work. \subsection{Characterization, terminology and disambiguation}\label{sec::molecular_interactions_classification_general_informations} To begin with, a number of universal aspects characterizing molecular interactions, especially with regard to the numerical methods to be developed in this work, shall be presented. A few simple facts about the examples mostly considered throughout this work, namely electrostatic and vdW interaction, are presented straight away, whereas the details on these and further types of molecular interactions are to be discussed in the subsequent \secref{sec::theory_molecular_interactions_pointpair}. \begin{titleditemize}{A collection of characteristics of molecular interactions with high relevance for this work} \item \begin{description} \item[Type of elementary interaction partners] Interaction may originate from unit charges as in the case of electrostatics. Another popular example is the vdW effect, which is caused by fluctuating dipole interactions occurring in every molecule and hence is related to the molecular density of the material. 
\end{description} \item \begin{description} \item[Spatial distribution of elementary interaction partners] Thinking of the resulting interaction between two bodies as accumulation of all molecular interactions, the question of the locations of all elementary interaction partners arises. Charges can often be found on the bodies' surfaces, whereas molecules relevant for vdW interactions spread over the entire volume of the bodies. This work focuses on solid bodies (i.\,e.~condensed matter) that are non-conducting such that interaction partners will not redistribute, i.\,e., change their position within a body. \end{description} \item \begin{description} \item[Distance-dependency of the fundamental potential law] Generally, the strength of molecular interactions decays with increasing distance. Most frequently, inverse power laws with different exponents or exponential decay can be identified. \end{description} \item \begin{description} \item[Range of interactions] As a result of the previous aspects, a range of significant strength of an interaction can be defined. Rather than an inherent property, the classification of long- versus short-ranged interactions is a theoretical concept to judge the perceptible impact in specific scenarios. Moreover, it is a decisive factor in the derivation of well-suited numerical methods. \end{description} \item \begin{description} \item[Additivity and higher-order contributions] Many approaches, including the one presented in this work, make use of superposition, i.\,e., accumulating all the individual contributions from elementary interaction partners to obtain the total effect of interaction. This assumes that the interactions behave additively, i.\,e., that the sum of all pair-wise interactions describes the overall interaction sufficiently well. More specifically, the presence of other elementary interaction partners in the surrounding must not have a pronounced effect as compared to an isolated system of an interacting pair. 
Otherwise, the sum of all pair-wise interactions would need to be extended by contributions from sets of three, four and more elementary interaction partners. \end{description} \end{titleditemize} Of course, this list is not exhaustive but represents a selection of the most relevant aspects considered in the development of our methods throughout~\secref{sec::method_pot_based_ia} and \ref{sec::method_application_to_specific_types_of_interactions}. \paragraph{Interaction potential and corresponding force}$\,$\\ An interaction potential~$\Phi(r)$, also known as (Gibbs) free energy of the interaction, is defined as the amount of energy required to approach the interaction partners starting from a reference configuration with zero energy at infinite separation. Hence, the following relations between the interaction potential~$\Phi(r)$ and the magnitude of the force $f(r)$ acting upon each of the partners, each in terms of the distance between both interacting partners~$r$, hold true: \begin{align}\label{eq::force_potential_relation} \Phi(r) = - \int \limits_\infty^r f(\tilde{r}) \dd \tilde{r} \qquad \leftrightarrow \qquad f(r) = - \diff{\Phi(r)}{r} \end{align} Although the final quantities of interest are often the resulting, vectorial forces on slender bodies, it is nevertheless convenient and sensible to consider the scalar interaction potential throughout large parts of this work. This is underlined by the fact that nonlinear finite element methods in the context of structural dynamics can be formulated on the basis of energy and work expressions. Equation \eqref{eq::force_potential_relation} expresses the direct and inherent relation between force and potential. Note that the forces emanating from such interaction potentials are conservative and the integral value in \eqref{eq::force_potential_relation} is path-independent. 
Furthermore, the interaction is symmetric in the sense that the force acting upon the first interacting partner $\vf_1(r)$ has the same magnitude but opposite direction compared to the force acting on the second partner $\vf_2(r)$. Using the partners' position vectors~$\vx_1, \vx_2 \in \mathalpha{\mathbb{R}}^3$, we can formulate the vectorial equivalent of the formula above: \begin{align} \vf_1(r) = - \diff{\Phi(r)}{r} \frac{\vx_1 - \vx_2}{\norm{\vx_1-\vx_2}} \qquad \text{and} \qquad \vf_2(r) = \diff{\Phi(r)}{r} \frac{\vx_1 - \vx_2}{\norm{\vx_1-\vx_2}} \qquad \text{with} \qquad r=\norm{\vx_1 - \vx_2} \end{align} \paragraph{Disambiguation}$\,$\\ In order to particularize the very general term \textit{molecular interactions}, we may note that we solely consider interactions between distinct, solid (macro-)molecules, i.\,e., no covalent or other chemical bonds, but rather what is sometimes referred to as~\textit{physical bonds}. Thus, we restrict ourselves to \textit{intermolecular} forces as opposed to \textit{intramolecular} ones. \subsection{Interactions between pairs of atoms, small molecules or point charges}\label{sec::theory_molecular_interactions_pointpair} First principles describing molecular interactions are formulated for a pair of atoms, molecules or point charges. In the following, all types of interactions to be considered in this work are thus first presented for a minimal system consisting of one pair of these elementary interaction partners. 
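Relation~\eqref{eq::force_potential_relation} and the actio-reactio symmetry above can be checked numerically. The following sketch recovers the vectorial pair forces from an arbitrary scalar potential via a central-difference derivative; the vdW-like test potential $\Phi(r) = -r^{-6}$ in the usage example is an illustrative assumption.

```python
import math

def pair_forces(phi, x1, x2, h=1e-6):
    """Vectorial forces f1, f2 on two interaction partners at x1, x2, derived
    from a scalar pair potential phi(r) via f1 = -dPhi/dr * e_12, where the
    derivative is approximated by central differences."""
    r = math.dist(x1, x2)
    dphi_dr = (phi(r + h) - phi(r - h)) / (2.0 * h)
    e12 = [(a - b) / r for a, b in zip(x1, x2)]   # unit vector from x2 to x1
    f1 = [-dphi_dr * c for c in e12]
    f2 = [+dphi_dr * c for c in e12]              # equal magnitude, opposite direction
    return f1, f2

# Usage: Phi(r) = -1/r^6 at separation r = 2; the analytical force on
# partner 1 is -6/2^7 along e_12, i.e., attractive.
f1, f2 = pair_forces(lambda r: -r**-6, (2.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

The sign convention matches the equations above: a potential decreasing with decreasing separation yields a force on partner 1 pointing towards partner 2.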
\subsubsection{Electrostatics}\label{sec::theory_electrostatics_pointcharges} Coulomb's law is one of the most fundamental laws in physics and describes the interaction of a pair of point charges under static conditions by \begin{align}\label{eq::pot_ia_elstat_pointpair_Coulomb} \Phi_\text{elstat}(r) = \frac{Q_1 Q_2}{4\pi \varepsilon_0 \varepsilon} \frac{1}{r}, \qquad \norm{\vf_\text{elstat}(r)} = \frac{Q_1 Q_2}{4\pi \varepsilon_0 \varepsilon} \frac{1}{r^2}, \qquad \vf_{\text{elstat},1}(r) = \frac{Q_1 Q_2}{4\pi \varepsilon_0 \varepsilon} \frac{\vx_1 - \vx_2}{\norm{\vx_1-\vx_2}^3} \end{align} where~$\varepsilon_0$ is the vacuum permittivity and $\varepsilon$ the relative permittivity of the surrounding medium. For the sake of brevity in any later usage, let us define the abbreviation~$C_\text{elstat} \defvariable \left( 4\pi \varepsilon_0 \varepsilon \right)^{-1}$. Depending on the signs of~$Q_1$ and~$Q_2$, electrostatic forces may either be repulsive or attractive. Besides a pair of point charges, Coulomb's law likewise holds for a pair of spherically symmetric charge distributions with resulting charges~$Q_1$ and~$Q_2$, respectively. This is an important insight, since ultimately we are interested in interactions between two bodies with finite extension rather than points. Furthermore, interactions between rigid spheres and rigid bodies are of interest for applications such as particle diffusion in hydrogels. Throughout the entire work, no electrodynamic effects shall be considered. This is a valid assumption as long as bodies are non-conductive and the motion of bodies carrying the attached charges happens on much larger time scales than relevant eigenfrequencies in electrodynamics. Due to the inverse-first power law, the electrostatic potential has quite a long range, meaning that two point charges at a large distance still experience a considerable interaction force as compared to small distances. 
This behavior is even more pronounced for the interaction of two extended bodies, where the multitude of distant point pairs dominates the total interaction energy compared to the few closest point pairs. This property is crucially different from, e.\,g., the vdW interactions considered in the next section. We will account for and indeed make use of this important property in the development of the methods to be presented in this work. \subsubsection{Van der Waals interactions}\label{sec::theory_vdW_pointpair} \textit{Van der Waals} forces originate from charge fluctuations, thus being an electrodynamic effect caused by quantum-mechanical uncertainties in positions and orientations of charges. Depending on the interaction partners, three subclasses can be distinguished as Keesom (two permanent dipoles), Debye (one permanent dipole, one induced dipole) and London dispersion interactions (two transient dipoles). The ubiquitous nature of van der Waals interactions is due to the fact that the latter contribution even arises in neutral, nonpolar, yet polarizable matter, i.\,e., basically every atom or molecule. All three kinds of dipole interactions can be unified in that their interaction free energy follows an inverse-sixth power law in the separation \cite{parsegian2005}: \begin{align}\label{eq::pot_ia_vdW_pointpair} \Phi_\text{vdW}(r) = - \frac{C_\text{vdW}}{r^6} \end{align} This is a pleasantly simple expression, yet intricate when it comes to transferring it to two-body interactions, as we will discuss in~\secref{sec::theory_molecular_interactions_twobody_vdW}. Van der Waals forces are always attractive for two identical or similar molecules, yet may be repulsive for certain other material combinations. \subsubsection{Steric exclusion}\label{sec::theory_steric_exclusion_pointpair} Two approaching atoms or molecules will at some very small separation suddenly experience a seemingly infinite repulsive force. 
This effect is attributed to the overlap of electron clouds and referred to as \textit{steric repulsion}, \textit{steric exclusion} or \textit{hard core repulsion}. Without thorough theoretical foundation, several (almost) infinitely steep repulsive potential laws are empirically used to model this phenomenon. The first option is a \textit{hard wall/core/sphere} potential which has a singularity at zero separation \begin{equation} \lim_{r \to 0} \Phi_\text{c,hs}(r) = \infty \qquad \text{and} \qquad \Phi_\text{c,hs}(r) \equiv 0 \quad \text{for} \quad r>0. \end{equation} Other common choices include a \textit{power-law} potential with a large integer exponent~$n_\text{c,pow}$ \begin{equation} \Phi_\text{c,pow}(r) = C_\text{c,pow} \, r^{-n_\text{c,pow} } \end{equation} and finally an \textit{exponential} potential \begin{equation} \Phi_\text{c,exp}(r) = C_\text{c,exp} \, e^{-r / r_\text{c,exp} }. \end{equation} Note that the former two coincide in the limit~$n_\text{c,pow} \rightarrow \infty$ and increase indefinitely for~$r\rightarrow 0$ while the exponential one does not. Generally, this behavior of steric exclusion is in good agreement with our intuition based on macroscopic solid bodies coming into contact. \subsubsection{Total molecular pair potentials and force fields}\label{sec::theory_total_point_pair_potential} In many systems of interest, any two or more of the aforementioned effects may be relevant at the same time such that a combination of the pair potentials is required. This is typically done by summation of the individual potential contributions and leads to a \textit{total intermolecular pair potential}. Among the large number of possible combinations\footnote{See \cite{israel2011}[p. 138] for a comprehensive list of combinations used as total pair potential laws.}, the \textit{Lennard-Jones} (LJ) potential is probably the most commonly used variant (see \figref{fig::point-point_pot_LJ}). 
\begin{align}\label{eq::pot_ia_LJ_pointpair} \Phi_\text{LJ}(r) = k_{12} r^{-12} + k_6 r^{-6} = -\Phi_\text{LJ,eq} \left( \left( \frac{r_\text{LJ,eq}}{r} \right)^{12} - 2 \,\left( \frac{r_\text{LJ,eq}}{r} \right)^{6} \right) \end{align} \begin{figure}[htp]% \centering \includegraphics[width=0.4\textwidth]{point-point_pot_LJ.pdf} \caption{Lennard-Jones interaction potential for a pair of points, i.\,e.~atoms.} \label{fig::point-point_pot_LJ} \end{figure}% It is a special case of the \textit{Mie potential}~$\Phi_\text{Mie}(r) = C_\text{Mie,m} \, r^{-m} - C_\text{Mie,n} \, r^{-n}$ with exponents being chosen to model the inverse-sixth van der Waals attraction on the one hand and a strong repulsion on the other hand. The parameters can be identified as the minimal value~$\Phi_\text{LJ,eq}<0$ that the Lennard-Jones potential takes at equilibrium separation~$r_\text{LJ,eq}>0$, i.\,e., at the separation where the resulting force is zero. Other important quantities characterizing the LJ force law \begin{align} f_\text{LJ}(r) = -\frac{12\,\Phi_\text{LJ,eq}}{r_\text{LJ,eq}} \, \left( \left( \frac{r_\text{LJ,eq}}{r} \right)^{13} - \left( \frac{r_\text{LJ,eq}}{r} \right)^{7} \right) \end{align} are the minimal force value~$f_\text{LJ,min}$ and corresponding distance~$r_\text{LJ,f$_\text{min}$}$ \begin{align} f_\text{LJ,min} \approx \num{2.6899} \,\frac{\Phi_\text{LJ,eq}}{r_\text{LJ,eq}} \qquad \text{and} \qquad r_\text{LJ,f$_\text{min}$} = \left( \frac{13}{7} \right)^{1/6} \, r_\text{LJ,eq} \approx \num{1.1087} \, r_\text{LJ,eq}. \end{align} The minimal force, i.\,e., the maximal adhesive force, is commonly referred to as~\textit{pull-off force}. Israelachvili~\cite{israel2011} also points out the chance of a fortunate cancellation of errors in total pair potentials, especially close to the limit~$r \rightarrow 0$. 
In this regime, attractive forces tend to be underestimated by the simplified inverse-sixth term, but likewise the steric repulsion is probably stronger than estimated from the power law. Both errors thus tend to cancel rather than accumulate, which increases the model accuracy. \paragraph{Remarks} \begin{enumerate} \item Many of the presented point-point interaction potentials decay rapidly with the distance as shown exemplarily for a law~$\Phi(r) \propto r^{-12}$ in \figref{fig::pot_ia_point_pair_r_exp-12}. % In anticipation of the numerical methods to be proposed in this work, we can already state at this point that these extreme gradients are very challenging for numerical quadrature schemes, which will therefore be discussed in the dedicated \secref{sec::method_numerical_integration}. \item In molecular dynamics, a \textit{force field} is typically used instead of the potential law to model the total interaction of a pair of atoms. Specific forms have been proposed for (coarse-grained) force fields modeling the interaction of macromolecules such as DNA rather than atoms. Since these all-atom approaches differ fundamentally from the continuum model proposed here, we will not discuss force fields any further at this point. \end{enumerate} \begin{figure}[htp]% \centering \subfigure[]{ \includegraphics[width=0.45\textwidth]{potential2D_d1e-1.png} \label{fig::pot_ia_point_pair_r_exp-12_d1e-1} } \subfigure[]{ \includegraphics[width=0.45\textwidth]{potential2D_d2.png} \label{fig::pot_ia_point_pair_r_exp-12_d2} } \caption{Example of a point-point interaction potential~$\Phi(r) \propto r^{-12}$ plotted over a circular domain (blue circle) with (a) small and (b) large distance to the point-like interaction partner (red dot). Note the huge difference in scales.} \label{fig::pot_ia_point_pair_r_exp-12} \end{figure}% \subsection{Two-body interaction: Surface vs. 
volume interaction}\label{sec::theory_molecular_interactions_twobody} In this section, we take the important step from interacting point pairs to interactions between two bodies with defined spatial extension containing many of the fundamental point-like interaction partners considered throughout the preceding \secref{sec::theory_molecular_interactions_pointpair}. The obvious question of the spatial distribution of the interaction partners leads to the important distinction between \textit{surface} and \textit{volume interactions}. As the name suggests, in the first case, the elementary interaction partners are distributed over the surface of the bodies but not in the interior. The most important example from this category is the electrostatic interaction between bodies where the charges sit on the surfaces and are not free to move around. This applies to a large number of charged, non-conductive biopolymer fibers such as actin or DNA. In the second case of volume interactions, the elementary interacting partners are distributed over the entire volume of the bodies. The most important examples here are van der Waals interactions and steric exclusion. As compared to surface interactions, this further increases the dimension of the problem, making it more challenging to tackle by analytical as well as numerical means. In terms of notation, one may also find the expressions \textit{body forces} or \textit{bulk interaction} referring to this category of interactions. Let us briefly look at volume and surface interactions as an abstract concept, leaving aside the specifics of the underlying physical effects that are to be discussed in the subsequent \secref{sec::theory_molecular_interactions_twobody_surf} and \secref{sec::theory_molecular_interactions_twobody_vdW}. Likewise, we assume additivity here and discuss the applicability later with the physical type of interaction. 
Since volume interactions are the more general and challenging case, we will discuss most aspects and approaches first for volume interactions and later only point out the differences for surface interactions throughout this article. \figref{fig::beam_to_beam_interaction_particle_clouds} schematically visualizes the distribution of elementary interaction partners within two macromolecular or macroscopic bodies. \begin{figure}[htpb]% \centering \def\svgwidth{0.35\textwidth} \input{beam_to_beam_interaction_particle_clouds.pdf_tex} \vspace{-0.8cm} \caption{Two arbitrarily shaped, deformable bodies~$\mathcal{B}_1$ and~$\mathcal{B}_2$ with volumes~$V_1,V_2$ and continuous particle densities $\rho_1,\rho_2$.} \label{fig::beam_to_beam_interaction_particle_clouds} \end{figure} Assuming additivity, we apply \textit{pairwise summation} to arrive at the two-body interaction potential \begin{equation} \Pi_\text{ia} = \sum_{i\in \mathcal{B}_1} \sum_{j\in \mathcal{B}_2} \Phi(r_{ij}). \end{equation} Further assuming a continuous atomic density $\rho_i$, $i=1,2$, the total interaction potential can alternatively be rewritten as an integral over the volumes $V_1, V_2$ of both bodies~$\mathcal{B}_1$ and~$\mathcal{B}_2$: \begin{equation}\label{eq::pot_fullvolint} \Pi_\text{ia} = \iint_{V_1,V_2} \rho_1(\vx_1) \rho_2(\vx_2) \Phi(r) \dd V_2 \dd V_1 \qquad \text{with} \quad r = \norm{\vx_1-\vx_2} \end{equation} It can be shown that this continuum approach is the result of \textit{coarse-graining}, i.\,e., smearing out the discrete positions of atoms in a system into a smooth atomic density function $\rho(\vx)$ \cite{Sauer2007a}. 
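The transition from the discrete pairwise summation to the continuous volume integral~\eqref{eq::pot_fullvolint} can be illustrated by a small numerical sketch: two unit cubes are discretized into point clouds of increasing resolution, and the weighted double sum approaches a mesh-independent value. Geometry, unit densities and the vdW-like law $\Phi(r)=-r^{-6}$ are illustrative assumptions.

```python
import itertools, math

def cube_cloud(n, x_offset):
    """n^3 equally spaced points filling a unit cube shifted by x_offset
    along x; each point carries the volume weight 1/n^3 (density rho = 1)."""
    c = [(i + 0.5) / n for i in range(n)]
    pts = [(x + x_offset, y, z) for x, y, z in itertools.product(c, c, c)]
    return pts, 1.0 / n**3

def two_body_potential(n, gap, phi):
    """Discretized form of the pairwise summation / volume integral for two
    unit cubes with a face-to-face gap along the x-axis."""
    pts1, w1 = cube_cloud(n, 0.0)
    pts2, w2 = cube_cloud(n, 1.0 + gap)
    return sum(w1 * w2 * phi(math.dist(p, q)) for p in pts1 for q in pts2)
```

Refining the clouds (e.g., n = 4 versus n = 6) changes the result only mildly for a gap of one cube edge length, illustrating how the discrete sum converges to the continuum integral as the point distribution is smeared out.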
In the case of surface interactions, the dimensionality of the problem reduces and summation or integration is carried out over both bodies' surfaces~$\partial \mathcal{B}_1$, $\partial \mathcal{B}_2$: \begin{equation}\label{eq::pot_fullvolint_surface} \Pi_\text{ia} = \iint_{S_1,S_2} \sigma_1(\vx_1) \sigma_2(\vx_2) \Phi(r) \dd S_2 \dd S_1 \qquad \text{with} \quad r=\norm{\vx_1-\vx_2} \end{equation} Accordingly, surface densities~$\sigma_i(\vx_i)$, $i \in \{1,2\}$, replace the volume densities in this case. \paragraph{The range of two-body interaction forces originating from point pair potentials}$\,$\\ Let us assume a general inverse power law $\Phi(r)=k r^{-m}$ for the point pair interaction potential. It is obvious that the potential becomes infinitely large if the separation~$r$ of the two individual points approaches zero and, on the other hand, that the potential rapidly decays with increasing distance. Turning to two bodies of finite size, i.\,e., two clouds of points, things are more involved, as the following theoretical considerations demonstrate. In short, it can be shown that there is a fundamental difference between potentials with an exponent $m \leq 3$ on the one hand and $m > 3$ on the other hand. Starting with the case~$m > 3$, e.\,g., vdW interactions, the two-body interaction potential goes to infinity if the bodies approach each other until their surfaces touch. This can be illustrated by the simple example of two spheres of radius~$R$, where the vdW interaction potential scales with~$\Pi_\text{vdW} \propto g^{-1}$ (cf.~\cite[p.~255]{israel2011}) with surface-to-surface separation or gap~$g=d-2R$ and the distance between the spheres' centers~$d$.
This singularity of the two-body interaction potential in the limit of zero separation~$g\to 0$ is due to the fact that potentials with $m > 3$ decay so rapidly that the few point pairs with smallest separation outweigh the potentially very large number of all other, distant point pairs in terms of their potential contributions. Therefore, we can conclude that potentials with~$m > 3$ have no significant large-distance contribution and the two-body interaction potential is governed by the separation of any two closest points (and their immediate surroundings). Considering the example of two cylinders later on, we will also see that the vdW interaction potential of two perpendicular cylinders does not change perceptibly if the length of the cylinders is increased (cf.~eq.~\eqref{eq::pot_ia_vdW_cyl_cyl_perpendicular_smallseparation}), which can again be attributed to the short range of vdW interactions. The situation is substantially different for potentials with $m\leq 3$, e.\,g., Coulombic interactions. Here, the total contribution of all distant point pairs dominates over the few closest point pairs and the total interaction potential remains finite even if both bodies are in contact. Looking once again at the simple example of two spheres, Coulomb's law (cf.~eq.~\eqref{eq::pot_ia_elstat_pointpair_Coulomb}) directly shows~$\Pi_\text{elstat} \propto d^{-1}$ and thus no singularity occurs for (nearly) contacting surfaces~$g\to 0$, i.\,e., $d \to 2R$. Also, in contrast to the case of vdW interactions mentioned above, the Coulomb interaction potential of two perpendicular cylinders would increase if their length is increased. The underlying theoretical derivations, which also reveal the transition exponent~$m=3$, were first noted by Newton and can be found e.\,g.~in \cite[p.~11]{israel2011}.
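The threshold $m=3$ can be made tangible with a small numerical experiment (a sketch; lattice spacing, probe position and exponents are chosen arbitrarily): summing $r^{-m}$ over cubic point lattices of growing size shows that the sum saturates for $m>3$ but keeps growing for $m\leq 3$.

```python
import numpy as np

def lattice_sum(m, L, g=1.0):
    """Sum of r**-m over an L x L x L cubic lattice with unit spacing,
    seen from a probe point at distance g in front of one lattice face."""
    ax = np.arange(L)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    # probe sits at (-g, L/2, L/2), i.e. centered in front of the x = 0 face
    r = np.sqrt((X + g)**2 + (Y - L / 2)**2 + (Z - L / 2)**2)
    return (r**-m).sum()

for m in (2, 6):  # long-range (m <= 3) vs. short-range (m > 3)
    print(m, [round(lattice_sum(m, L), 3) for L in (10, 20, 30)])
```

For $m=6$ the sum is essentially converged already for the smallest lattice (the closest point pairs dominate), whereas for $m=2$ it keeps growing roughly linearly with the lattice size, reflecting the dominance of the many distant pairs.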
Due to this crucial difference, potentials with~$m>3$ will be denoted as \textit{short-range} interactions (e.\,g.~the repulsive as well as the attractive part of LJ) and potentials with~$m\leq3$ as \textit{long-range} interactions (e.\,g.~Coulomb) throughout this work. \subsubsection{Electrostatics of non-conductive bodies: An example for long-range surface interactions}\label{sec::theory_molecular_interactions_twobody_surf} The Coulomb interaction is additive, such that the net force acting on an individual point charge in a system of point charges can be calculated from superposition of all pair-wise computed force contributions \cite{israel2011}. Equivalently, the net interaction potential results from summation of all pair potentials. A large body of literature deals with the problem of electrostatic multi-body interaction. One concept of high relevance for the present work is the well-known strategy of multipole expansion, which aims to express the resultant electrostatic potential of a (continuous) charge distribution as an (infinite) series (see e.\,g.~\cite{Prytz2015} for details). The individual terms of the series expansion generally are inverse power laws in the distance with increasing exponent and are referred to as mono-, di-, quadru-, up to $n$-pole moments. At points far from the location of the charge cloud, the series converges quickly and can thus be truncated to good approximation. Regarding the total interaction potential of two charged bodies as formulated in~\eqref{eq::pot_fullvolint} or~\eqref{eq::pot_fullvolint_surface}, this already outlines how to determine~$\Pi_\text{elstat}$ for trivial geometries of the interacting bodies, where the integrals can potentially be solved analytically.
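As a minimal illustration of such a truncated multipole expansion (in units where $1/(4\pi\varepsilon_0)=1$; the charge positions and magnitudes are arbitrary example values), the following sketch compares the exact superposed Coulomb potential of a small charge cloud with its monopole-plus-dipole approximation at a distant evaluation point.

```python
import numpy as np

# illustrative charge cloud: positions (clustered near the origin) and charges
pos = np.array([[0.1, 0.0, 0.0], [-0.1, 0.05, 0.0], [0.0, -0.05, 0.1]])
q = np.array([1.0, 0.5, -0.3])

def phi_exact(x):
    """Superposition of Coulomb point-charge potentials q_i / |x - x_i|."""
    return sum(qi / np.linalg.norm(x - xi) for qi, xi in zip(q, pos))

def phi_multipole(x):
    """First two terms of the multipole expansion about the origin:
    monopole Q/r plus dipole (p . x) / r^3."""
    r = np.linalg.norm(x)
    Q = q.sum()                          # monopole moment (total charge)
    p = (q[:, None] * pos).sum(axis=0)   # dipole moment vector
    return Q / r + p @ x / r**3

x_far = np.array([10.0, 3.0, -2.0])  # far from the cloud: fast convergence
print(phi_exact(x_far), phi_multipole(x_far))  # nearly identical values
```

The truncation error is governed by the omitted quadrupole and higher moments, which decay with additional powers of (cloud extent)/(distance).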
We will return to this concept in the context of deformable slender fibers when proposing the general SSIP approach in the beginning of~\secref{sec::method_pot_based_ia} and make use of a (truncated) multipole expansion of the charged cross-sections for the (simplified) SSIP law for long-range surface interactions to be proposed in~\secref{sec::ia_pot_double_length_specific_evaluation_elstat}. \subsubsection{Van der Waals interaction: An example for short-range volume interactions}\label{sec::theory_molecular_interactions_twobody_vol} \label{sec::theory_molecular_interactions_twobody_vdW} Here, we want to discuss vdW interactions as one example of physically relevant volume interactions that is based on the inverse-sixth power law \eqref{eq::pot_ia_vdW_pointpair}. However, very similar considerations and formulae apply to steric interactions as well as LJ interactions. Today, we know that vdW interactions are generally non-additive. The latest and most accurate models for two-body vdW interactions are based on Lifshitz theory and, among other effects, include retardation, anisotropy and differences in polarizability. Nevertheless, a ``happy convergence'' of the old theory, i.\,e.,~Hamaker's pairwise summation, and the new one, i.\,e.,~Lifshitz theory, allows one to determine the distance dependency from pairwise summation and then estimate the prefactor, i.\,e.,~the Hamaker ``constant''~$A_\text{Ham}$, from the more advanced modern theory. This approach, yielding a so-called Hamaker-Lifshitz hybrid form~\cite{parsegian2005},~\cite[p.~257]{israel2011}, is what motivates us to use pairwise summation in the derivation of the numerical methods to be proposed in the present work. Also, there are some special scenarios (negligible retardation, negligible difference in the optical properties of the bodies, interaction in vacuum, ...) where additivity can be assumed as a good approximation even without adaptation of the Hamaker constant.
Generally, even the simple approach of pairwise summation requires two nested 3D integrals over both bodies' volumes, i.\,e.,~six-dimensional integration. Mainly due to this high dimensionality of the problem, unfortunately, (closed-form) analytical solutions can only be obtained for some simple special cases. Still, careful consideration and selection allow us to exploit some of these analytical expressions in order to develop efficient, reduced methods in \secref{sec::method_double_length_specific_integral}. To give a concise overview of all expressions relevant for the remainder of this work, we provide a collection of closed-form analytical solutions in the following. First, we want to look at two cylinders representing the simplest model for two interacting straight, rigid fibers with circular cross-section. A number of publications consider this scenario and, due to the simplicity of the geometry, their authors were able to derive analytical solutions for some special cases. The resulting expressions are summarized in Table~\ref{tab::pot_ia_vdW_cylinders_formulae} and will be used for verification purposes in \secref{sec::verification_methods}. A second, highly relevant scenario is the one considering two disks. These analytical expressions, summarized in Table~\ref{tab::pot_ia_vdW_disks_formulae}, will be beneficial, and in fact provide the main ingredient, for the SSIP approach to describe molecular interactions between \textit{deformable} fibers modeled as 1D Cosserat continua. \paragraph{Two Cylinders}$\,$\\ To begin with, we consider the cases of parallel and perpendicular cylinders. Generally, the cylinders are assumed to be infinitely long, such that the boundary effects at their ends may be neglected. As the interaction potential for parallel cylinders would be infinite, one typically considers a length-specific interaction potential~$\tilde \pi_\text{vdW,cyl$\parallel$cyl}$ with dimensions of energy per unit length.
This quantity thus describes the interaction of one infinitely long cylinder with a section of unit length of the other infinitely long cylinder. For perpendicular orientation (and all other mutual angles apart from $\alpha=0$), on the other hand, the total interaction potential~$\Pi_\text{vdW,cyl$\perp$cyl}$ remains finite. Even for this simple case of two cylinders, no closed-form analytical solution for the vdW interaction energy can be found for all mutual angles and all separations. One thus resorts to the consideration of the limits of small and large surface-to-surface separations, for which the general solution, an infinite series, converges to the expressions presented in the following Table~\ref{tab::pot_ia_vdW_cylinders_formulae}. \begin{table}[htpb] \begin{center} \begin{tabular}{|c|c|c|}\hline &&\\ & Limit of \textit{small} separations & Limit of \textit{large} separations\\ & $g \ll R_1,R_2$ & $g,d \gg R_1,R_2$\\ &&\\\hline &&\\ parallel& $\tilde \pi_\text{vdW,cyl$\parallel$cyl,ss} = - \frac{ A_\text{Ham} }{ 24 } \, \sqrt{ \frac{ 2 R_1 R_2 }{ R_1 + R_2 } } \, g^{ -\frac{3}{2}}$ \refstepcounter{equation}(\theequation)\label{eq::pot_ia_vdW_cyl_cyl_parallel_smallseparation} & $ \tilde \pi_\text{vdW,cyl$\parallel$cyl,ls} = - \frac{ 3 \pi }{ 8 } \, A_\text{Ham} \, R_1^2 R_2^2 \, d^{-5}$ \refstepcounter{equation}(\theequation)\label{eq::pot_ia_vdW_cyl_cyl_parallel_largeseparation}\\ $\left[ \frac{\text{energy}}{\text{length}} \right]$ & see \cite[p.~255]{israel2011}, \cite[p.~172]{parsegian2005} & see \cite[p.~16, p.~172]{parsegian2005}\\ &&\\\hline &&\\ perp.
& $ \Pi_\text{vdW,cyl$\perp$cyl,ss} = - \frac{ A_\text{Ham} }{ 6 } \, \sqrt{ R_1 R_2 } \, g^{-1} $ \refstepcounter{equation}(\theequation)\label{eq::pot_ia_vdW_cyl_cyl_perpendicular_smallseparation} & $\Pi_\text{vdW,cyl$\perp$cyl,ls} = - \frac{ \pi }{ 2 } \, A_\text{Ham} \, R_1^2 R_2^2 \, d^{ -4}$ \refstepcounter{equation}(\theequation)\label{eq::pot_ia_vdW_cyl_cyl_perpendicular_largeseparation}\\ $\left[ \text{energy} \right]$ & see \cite[p.~255]{israel2011} & see \cite[p.~16]{parsegian2005}\\ &&\\\hline \end{tabular} \end{center} \caption{A collection of analytical solutions for the cylinder-cylinder interaction potential derived via pairwise summation. Here, $R_i$ denote the cylinder radii, $d$ denotes the closest distance between the cylinder axes, $g$ denotes the surface-to-surface separation, i.\,e., gap, and $A_\text{Ham} := \pi^2 \rho_1 \rho_2 C_\text{vdW}$ is the commonly used abbreviation known as the Hamaker constant, where $\rho_i$ denote the particle densities and $C_\text{vdW}$ denotes the constant prefactor of the point-pair potential law (see eq.~\eqref{eq::pot_ia_vdW_pointpair}).} \label{tab::pot_ia_vdW_cylinders_formulae} \end{table} Despite the different dimensions of the quantities for parallel and perpendicular cylinders, we can still compare these expressions, as becomes clear from the following thought experiment. Considering two ``sufficiently long'' cylinders of length~$L$ in parallel orientation, the total interaction potential is well described by~$\Pi_\text{vdW,cyl$\parallel$cyl} = \tilde \pi_\text{vdW,cyl$\parallel$cyl} \cdot L$ and thus shows the same scaling behavior in the separation as \eqref{eq::pot_ia_vdW_cyl_cyl_parallel_smallseparation} and \eqref{eq::pot_ia_vdW_cyl_cyl_parallel_largeseparation}.
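For later reference, the four limiting expressions of Table~\ref{tab::pot_ia_vdW_cylinders_formulae} can be collected in a few lines of code (a sketch for illustration and verification purposes only; all function names and numerical values are our own choices):

```python
import numpy as np

# Limiting expressions for the vdW interaction of two cylinders,
# cf. Table of cylinder-cylinder formulas; A = A_Ham = pi^2 rho1 rho2 C_vdW.

def pi_parallel_small_sep(A, R1, R2, g):
    """Energy per unit length, parallel cylinders, g << R1, R2."""
    return -A / 24.0 * np.sqrt(2.0 * R1 * R2 / (R1 + R2)) * g**-1.5

def pi_parallel_large_sep(A, R1, R2, d):
    """Energy per unit length, parallel cylinders, d >> R1, R2."""
    return -3.0 * np.pi / 8.0 * A * R1**2 * R2**2 * d**-5

def Pi_perp_small_sep(A, R1, R2, g):
    """Total energy, perpendicular cylinders, g << R1, R2."""
    return -A / 6.0 * np.sqrt(R1 * R2) / g

def Pi_perp_large_sep(A, R1, R2, d):
    """Total energy, perpendicular cylinders, d >> R1, R2."""
    return -np.pi / 2.0 * A * R1**2 * R2**2 * d**-4

A, R = 1.0, 1.0  # normalized Hamaker constant and radii, chosen arbitrarily
print(pi_parallel_small_sep(A, R, R, 0.01))  # strong adhesion at small gap
print(pi_parallel_large_sep(A, R, R, 10.0))  # rapidly decaying far field
```

Evaluating these functions over a range of separations directly reproduces the strongly differing power-law slopes discussed in the text.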
In addition, \eqref{eq::pot_ia_vdW_cyl_cyl_perpendicular_smallseparation} and \eqref{eq::pot_ia_vdW_cyl_cyl_perpendicular_largeseparation} are also a good approximation for the perpendicular orientation of these cylinders of finite length~$L$, since the difference in the distant point pairs is negligible. We would like to point out just a few interesting aspects of these equations. First, it is remarkable how the expressions differ in the exponent of the power law describing the distance dependency of the potential. This translates into a diverse and highly nonlinear behavior already for this simplest model system of fiber-fiber interactions composed of two cylinders. Second, the parallel orientation is a very special orientation that gives rise to the strongest possible adhesive forces between two cylinders and at the same time is the only stable equilibrium configuration. Third, the distance scaling behavior of two parallel cylinders at small separations $\tilde \pi_{\text{vdW,cyl}\parallel\text{cyl,ss}} \propto g^{ -\frac{3}{2}}$ lies between the fundamental solutions known for two infinite half spaces $\tilde{\tilde \pi} \propto g^{-2}$ (a double tilde indicates a potential per unit area) and two spheres $\Pi \propto g^{-1}$. Note that, again, multiplication of these laws by a length or area does not alter the scaling law in the distance. Looking at the equations for large separations, we see similar relations, once again with a stronger distance scaling behavior in the parallel case. Generally, the solutions for large separations are expressed more naturally in the inter-axis separation~$d$ rather than the surface-to-surface separation~$g$. \paragraph{Two Disks}$\,$\\ This problem has been studied in the literature on the vdW interaction of straight, rigid cylinders of infinite length \cite{langbein1972}.
In analogy to the cylinder-cylinder scenario, it turns out that even in the simplest case of parallel oriented disks, i.\,e., two disks with parallel normal vectors, no closed-form analytical solution can be found for all separations. Instead, two expressions for the limits of small and large separations~$g$ of the disks as compared to their radii $R_1,R_2$ are presented in the following Table~\ref{tab::pot_ia_vdW_disks_formulae}. \begin{table}[htpb] \begin{center} \begin{tabular}{|c|c|c|}\hline &&\\ & Limit of \textit{small} separations & Limit of \textit{large} separations\\ & $g \ll R_1,R_2$ & $g,d \gg R_1,R_2$\\ &&\\\hline &&\\ parallel & $\tilde{ \tilde{ \pi}}_\text{vdW,disk$\parallel$disk,ss} = - \frac{ 3 A_\text{Ham} }{ 256 } \, \sqrt{ \frac{ 2 R_1 R_2 }{ R_1 + R_2 } } \, g^{ -\frac{5}{2}}$ \refstepcounter{equation}(\theequation)\label{eq::pot_ia_vdW_disk_disk_parallel_smallseparation} & $\tilde{ \tilde{ \pi}}_\text{vdW,disk$\parallel$disk,ls} = - A_\text{Ham} \,R_1^2 \, R_2^2 \, d^{-6}$ \refstepcounter{equation}(\theequation)\label{eq::pot_ia_vdW_disk_disk_parallel_largeseparation}\\ $\left[ \frac{\text{energy}}{\text{length}^2} \right]$ & see \cite{langbein1972} & see \cite{langbein1972}\\ &&\\\hline \end{tabular} \end{center} \caption{A collection of analytical solutions for the disk-disk interaction potential derived via pairwise summation.
Here, $R_i$ denote the disk radii, $d$ denotes the closest distance between the disk midpoints, $g$ denotes the surface-to-surface separation, i.\,e., gap, and $A_\text{Ham} := \pi^2 \rho_1 \rho_2 C_\text{vdW}$ is the commonly used abbreviation known as the Hamaker constant, where $\rho_i$ denote the particle densities and $C_\text{vdW}$ denotes the constant prefactor of the point-pair potential law (see eq.~\eqref{eq::pot_ia_vdW_pointpair}).} \label{tab::pot_ia_vdW_disks_formulae} \end{table} To summarize, a closed-form expression for the two-body vdW interaction potential is only known for some rare special cases, and the ones relevant for fiber-fiber interactions have been identified in the voluminous literature on this topic and presented here in a brief and concise manner. We would like to conclude this section on two-body vdW interactions with a note on the analogy to steric exclusion, i.\,e., contact interactions, as already discussed for point pairs in~\secref{sec::theory_steric_exclusion_pointpair}. This class of physical interactions shares the two central properties of being extremely short-ranged and of being a volume interaction. Starting from an inverse-twelve power law as in the repulsive part of the LJ interaction law, one may apply very similar solution strategies and finally obtain very similar expressions to the ones presented in this section. For the sake of brevity, we refer to the derivations in \ref{sec::formulae_two_body_LJ_interaction} and the analysis of the resulting total LJ interaction that will also be used for the regularization of the reduced potential laws in \secref{sec::regularization}. \section{Fundamentals of Geometrically Exact 3D Beam Theory}\label{sec::fundamentals_beams} \label{sec::beam_theory} This section aims to provide a brief and concise introduction to well-known concepts of beam theory to be used in the remainder of this article.
As is common in engineering mechanics, we use the term \textit{beam} for a mathematical model of a three-dimensional, slender, deformable body for which the following assumption can be made. The much larger extent of the body in its axial direction as compared to all transverse directions often justifies the Bernoulli hypothesis of rigid and therefore undeformable cross-sections. This in turn allows for a reduced-dimensional description as a 1D Cosserat continuum embedded in the 3D Euclidean space. The so-called Simo-Reissner beam theory dates back to the works of Reissner~\cite{reissner1981}, Simo~\cite{simo1985}, and Simo and Vu-Quoc~\cite{simo1986}, who generalized the linear Timoshenko beam theory~\cite{Timoshenko1921} to the geometrically nonlinear regime. Since the Simo-Reissner model, which accounts for the deformation modes of axial tension, bending, torsion and shear, is the most general representative of geometrically exact beam theories, we choose it as the one to be used exemplarily throughout this work. Nevertheless, the novel approach to be proposed is not restricted to a specific beam formulation. We have likewise applied it to formulations of Kirchhoff-Love type, which are known to be advantageous in the regime of high slenderness ratios, where the underlying assumption of negligible shear deformation is met~\cite{Meier2017c,Meier2017b}. Refer to~\secref{sec::method_pot_based_ia_FE_discretization} for more details. \paragraph{Geometry representation}$\,$\\ A certain configuration of the 1D Cosserat continuum is uniquely defined by the centroid position and the orientation of the cross-section at every point of the continuum. The set of all centroid positions is referred to as \textit{centerline} or \textit{neutral axis} and expressed by the curve \begin{equation} s,t \mapsto \vr(s,t) \in \mathalpha{\mathbb{R}}^3 \end{equation} in space and time~$t \in \mathalpha{\mathbb{R}}$.
Each material point along the centerline is represented by a corresponding value of the arc-length parameter~$s \in \left[ 0, l_0 \right] =: \Omega_l \subset \mathalpha{\mathbb{R}}$. Note that this arc-length parameter~$s$ is defined in the stress-free, initial configuration of the centerline curve~\mbox{$\vr_0(s) = \vr(s,t\!=\!0)$}. Thus, the norm of the initial centerline tangent vector yields \begin{equation} \norm{ \vr_0'(s) } := \norm{ \pdiff{ \vr_0(s) }{ s } } \equiv 1, \end{equation} but generally, in the presence of axial tension, $\norm{ \vr'(s,t) } = \norm{ \tpdiff{\vr(s,t)}{s} } \neq 1$. Furthermore, the cross-section orientation at each of these material points is expressed by a right-handed orthonormal frame often denoted as the \textit{material triad}: \begin{equation} s,t \mapsto \vLambda(s,t) := \left[ \vg_1(s,t),\, \vg_2(s,t),\, \vg_3(s,t) \right] \in SO(3) \end{equation} The second and third base vectors follow those material fibers representing the principal axes of the area moment of inertia. Such a triad can equivalently be interpreted as a rotation tensor transforming the base vectors of a global Cartesian frame~$\vE_i \in \mathalpha{\mathbb{R}}^3$, $i \in \{1,2,3\}$, into the base vectors of the material triad $\vg_i \in \mathalpha{\mathbb{R}}^3$, $i \in \{1,2,3\}$, via \begin{equation} \vg_i(s,t) = \vLambda(s,t) \, \vE_i. \end{equation} In summary, a beam's configuration may be uniquely described by a field of centroid positions~$\vr(s,t)$ and a field of associated material triads~$\vLambda(s,t)$, altogether constituting a 1D Cosserat continuum (see \figref{fig::beam_kinematics}). \begin{figure}[htpb]% \centering \def\svgwidth{0.6\textwidth} \input{beam_kinematics.pdf_tex} \caption{Geometry description and kinematics of the Cosserat continuum formulation of a beam: Initial, i.\,e.,~stress-free (blue) and deformed (black) configuration.
Straight configuration in the initial state is chosen exemplarily here without loss of generality.} \label{fig::beam_kinematics} \end{figure} According to this concept of geometry representation, the position~$\vx$ of an arbitrary material point~$P$ of the slender body is obtained from \begin{equation} \label{eq::position_material_point_Cosserat} \vx_\text{P}(s,s_2,s_3,t) = \vr(s,t) + s_2 \, \vg_2(s,t) + s_3 \, \vg_3(s,t). \end{equation} Here, the additional convective coordinates~$s_2$ and $s_3$ specify the location of~$P$ within the cross-section, i.\,e., as a linear combination of the unit direction vectors~$\vg_2$ and $\vg_3$. For a minimal parameterization of the triad, e.\,g.~the three-component rotation pseudo-vector~$\vpsi$ may be used, such that we end up with six independent degrees of freedom~$(\vr,\vpsi)$ to define the position of each material point in the body by means of~\eqref{eq::position_material_point_Cosserat}. \paragraph{Remark on notation} Unless otherwise specified, all vector and matrix quantities are expressed in the global Cartesian basis~$\vE_i$. Differing bases, such as the material frame, are indicated by a subscript $\left[.\right]_{\vg_i}$. Quantities evaluated at time~$t\!=\!0$, i.\,e., in the initial stress-free configuration, are indicated by a subscript $0$, as e.\,g.~in~$\vr_0(s)$. Differentiation with respect to the arc-length coordinate~$s$ is indicated by a prime, e.\,g., for the centerline tangent vector~$\vr'(s,t)=\tpdiff{\vr(s,t)}{s}$. Differentiation with respect to time~$t$ is indicated by a dot, e.\,g., for the centerline velocity vector~$\dot\vr(s,t) = \tpdiff{\vr(s,t)}{t}$. For the sake of brevity, the arguments~$s,t$ will often be omitted in the following. \paragraph{Remark on finite 3D rotations} To a large extent, the challenges and complexity in the numerical treatment of the geometrically exact beam theory can be traced back to the presence of large rotations.
In contrast to common \textit{vector spaces}, the rotation group $SO(3)$ is a \textit{nonlinear manifold} (with Lie group structure) and lacks essential properties such as additivity and commutativity, which makes standard procedures such as the interpolation or the update of configurations much more involved. While \secref{sec::method_pot_based_ia} introduces the concept of section-to-section interaction laws in the most general manner, in \secref{sec::method_application_to_specific_types_of_interactions}, some additional (practically relevant) assumptions are made that allow us to formulate the interaction laws as a pure function of the beam centerline configuration. In turn, this strategy will allow us to avoid the handling of finite rotations and to achieve simpler and more compact numerical formulations. \paragraph{Kinematics, deformation measures and potential energy of the internal, elastic forces}$\,$\\ \figref{fig::beam_kinematics} summarizes the kinematics of geometrically exact beam theory. Based on these kinematic quantities, deformation measures as well as constitutive laws can be defined. Finally, the potential of the internal (elastic) forces and moments~$\Pi_\text{int}$ is expressed uniquely by means of the set of six degrees of freedom~$(\vr,\vpsi)$ at each point of the 1D Cosserat continuum. See e.\,g.~\cite{jelenic1999,crisfield1999,Meier2017c} for a detailed presentation of these steps. \section{Numerical Examples}\label{sec::numerical_results} The set of numerical examples studied in this section aims to verify the effectiveness, accuracy and robustness of the proposed SSIP approach and the corresponding SSIP laws as a computational model for steric repulsion, electrostatic or vdW adhesion, and also combinations thereof. Supplementary information on the code framework and the algorithms used for the simulations is provided in~\ref{sec::algorithm_implementation_aspects}.
\subsection{Verification of the simplified SSIP laws using the examples of two disks and two cylinders}\label{sec::verification_methods} As a follow-up to the general discussion of using simplified SSIP laws in~\secref{sec::assumptions_simplifications} and the proposal of specific closed-form analytic expressions in~\secref{sec::ia_pot_double_length_specific_evaluation_vdW} and \secref{sec::ia_pot_double_length_specific_evaluation_elstat}, this section aims to analyze the accuracy in a quantitative manner. The minimal examples of two disks and two cylinders are considered in order to allow for a clear and sound analysis of either the isolated SSIP laws or their use within the general SSIP approach to modeling beam-to-beam interactions, respectively. \subsubsection{Verification for short-range volume interactions such as van der Waals and steric repulsion}\label{sec::verif_approx} Throughout this section, we consider the example of vdW interaction, but analogous results are expected for steric interaction or any other short-range volume interaction. Specifically, we will focus on the approximation quality of the proposed SSIP law from~\secref{sec::ia_pot_double_length_specific_evaluation_vdW}, which is based on the assumptions and resulting simplifications discussed in depth in~\secref{sec::assumptions_simplifications}. Recall that, besides the obviously most important surface-to-surface separation~$g$, the relative rotation~$\alpha$ of the cross-sections around the closest point (quantified by the angle enclosed by their tangent vectors) and potentially also the rotation components~$\beta_1,\beta_2$ (see \figref{fig::beam_to_beam_interaction_sketch_assumptions_short_range}) have been identified as relevant degrees of freedom, yet are neglected in the simplified SSIP law proposed in~\secref{sec::ia_pot_double_length_specific_evaluation_vdW}. The influence of these factors, separation and rotation, on the approximation quality will thus be analyzed numerically in the following.
Recall also from the discussions in~\secref{sec::assumptions_simplifications} and~\ref{sec::ia_pot_double_length_specific_evaluation_vdW} that only the regime of small separations will be of practical relevance in the case of the short-range interactions considered here. However, we include the regime of large separations in the following analyses, mainly because it will be interesting to see the transition from small to large separations and to confirm that the potential values indeed drop by several orders of magnitude as compared to the regime of small separations. Moreover, it is a question of theoretical interest and has been considered in the literature on vdW interactions~\cite{langbein1972}. This regime of large separations can be treated without any additional effort as described for the case of long-range interactions in~\secref{sec::ia_pot_double_length_specific_evaluation_elstat} (where this regime is the decisive one) if we take into account the corresponding remark on volume interactions. As presented in \secref{sec::theory_molecular_interactions_twobody_vdW}, analytical solutions for the special cases of parallel and perpendicular cylinders, in the regimes of small and large separations, respectively, can be found in the literature \cite{israel2011,parsegian2005} and thus serve as reference solutions in this section. To the best of the authors' knowledge, no analytical reference solution has yet been reported for the intermediate regime in between the limits of large and small separations. Another source of reference solutions is the full numerical integration of the point pair potential over the volumes of the interacting bodies; however, it is of limited use due to the tremendous computational cost. Only a combination of both analytical and numerical reference solutions thus allows for a sound verification of the novel SSIP approach and the proposed SSIP laws.
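To give an impression of what such a full numerical integration entails, the following sketch evaluates the disk-disk interaction potential per unit length squared by Gauss quadrature in polar coordinates on each cross-section, assuming the point-pair law $\Phi(r)=-C\,r^{-6}$ and two coplanar disks; it is an illustrative stand-in with freely chosen function names and parameters, not the actual code framework used for the results below.

```python
import numpy as np

def disk_gauss_points(R, center, n_rad=8, n_ang=16):
    """Tensor-product quadrature points and weights on a disk of radius R:
    Gauss-Legendre in the radial, midpoint rule in the angular direction."""
    cx, cy = center
    xi, w = np.polynomial.legendre.leggauss(n_rad)
    rho = 0.5 * R * (xi + 1.0)                 # radial nodes mapped to (0, R)
    w_rad = 0.5 * R * w
    theta = 2.0 * np.pi * (np.arange(n_ang) + 0.5) / n_ang
    w_ang = 2.0 * np.pi / n_ang
    RR, TT = np.meshgrid(rho, theta, indexing="ij")
    pts = np.column_stack([(cx + RR * np.cos(TT)).ravel(),
                           (cy + RR * np.sin(TT)).ravel()])
    wts = np.outer(w_rad * rho * w_ang, np.ones(n_ang)).ravel()  # rho = Jacobian
    return pts, wts

def ssip_numerical(R1, R2, d, rho1=1.0, rho2=1.0, C=1.0):
    """4D reference solution: integrate rho1*rho2*Phi(r), Phi(r) = -C r^-6,
    over two coplanar disks whose centers are a distance d apart."""
    P1, W1 = disk_gauss_points(R1, (0.0, 0.0))
    P2, W2 = disk_gauss_points(R2, (d, 0.0))
    r = np.linalg.norm(P1[:, None, :] - P2[None, :, :], axis=-1)
    return rho1 * rho2 * np.einsum("i,j,ij->", W1, W2, -C * r**-6.0)

# at large separation the result approaches -A_Ham R1^2 R2^2 d^-6
R, d = 1.0, 10.0
A_Ham = np.pi**2  # pi^2 rho1 rho2 C_vdW with rho1 = rho2 = C_vdW = 1
print(ssip_numerical(R, R, d), -A_Ham * R**4 * d**-6)
```

Already at this moderate separation and quadrature order, the numerical value approaches the large-separation limit; for gaps $g/R \ll 1$, however, the near-singular integrand drives the required number of quadrature points up drastically, which is exactly the cost issue mentioned above.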
In the following analyses, either the SSIP, i.\,e., the interaction potential per unit length squared~$\tilde{\tilde{\pi}}$ of a pair of circular cross-sections, the interaction potential per unit length~$\tilde{\pi}$ of a pair of parallel cylinders or the interaction potential~$\Pi$ of a pair of perpendicular cylinders will be plotted as a function of the dimensionless surface-to-surface separation $g/R$, respectively. For simplicity, the radii of the beams are set to~$R_1=R_2=:R=1$. \paragraph{Parallel disks and cylinders}$\;$\\ \figref{fig::vdW_pot_over_gap_disks_parallel} shows the SSIP~$\tilde{\tilde{\pi}}$ of two disks in parallel orientation, i.\,e., their normal vectors are parallel with mutual angle~$\alpha=0$, as a function of the normalized separation~$g/R$. This is the simplest geometrical configuration and forms the basis of the proposed SSIP laws from~\secref{sec::method_application_to_specific_types_of_interactions}. We thus begin our analysis with the verification of the used analytical solutions in the limit of small (green line, cf.~eq.~\eqref{eq::pot_ia_vdW_disk_disk_parallel_smallseparation}) and large (red line, cf.~eq.~\eqref{eq::pot_ia_vdW_disk_disk_parallel_largeseparation}) separations by means of a numerical reference solution (black dashed line with diamonds) obtained from 4D numerical integration of the point pair potential law~\eqref{eq::pot_ia_vdW_pointpair}. \begin{figure}[htp]% \centering \hfill \subfigure[]{ \includegraphics[width=0.49\textwidth]{disk-disk_parallel_vdW_pot_incl_sketch.pdf} \label{fig::vdW_pot_over_gap_disks_parallel} } \hfill \subfigure[]{ \includegraphics[width=0.15\textwidth]{cross_sec_integration_sectors_v14.pdf} \label{fig::cross_sec_integration_sectors} } \hspace{2.5cm} \caption{(a) VdW interaction potential per unit length squared~$\tilde{\tilde{\pi}}$ of two disks in parallel orientation over normalized surface separation~$g/R$. 
The analytical expressions~\eqref{eq::pot_ia_vdW_disk_disk_parallel_smallseparation} (green line) and~\eqref{eq::pot_ia_vdW_disk_disk_parallel_largeseparation} (red line) used as SSIP laws throughout this work are verified by means of a numerical reference solution (black dashed line with diamonds). (b) Subdivision of circular cross-sections into integration sectors used to compute numerical reference solution. For rapidly decaying potentials, only the areas highlighted in dark and light gray considerably contribute to the total interaction potential.} \end{figure}% \figref{fig::vdW_pot_over_gap_disks_parallel} confirms that both analytical expressions match the numerical reference solution perfectly well in the limit of large and small separations, respectively. As predicted, the interaction potential of two circular disks follows a power law with (negative) exponent $2.5$ for small and $6$ for large separations% \footnote{Note that in the double logarithmic plot, a power law with exponent $m$ is a linear function with slope $m$.}. Note that all plots in this section are normalized with respect to the length scale $R$ and the energy scale $\rho_1\rho_2 C_\text{vdW}$. It is remarkable that the obtained values span several orders of magnitude which illustrates the numerical challenges associated with power laws, especially in the context of numerical integration schemes as discussed already in \secref{sec::method_numerical_integration}. Moreover, it underlines that the regime of large separations is practically irrelevant in the case of short-range interactions, because the potential values are basically zero as compared to those obtained in the small separation regime. Regarding the full range of separations, one may ask where either of the two expressions may be used given a maximal tolerable error threshold. 
As can be concluded from \figref{fig::vdW_pot_over_gap_disks_parallel}, the resulting error is small for separations $g/R<0.1$ (relative error below $8\%$) and $g/R>10$ (relative error below $7\%$). In the region of intermediate separations, the analytical solution for small separations seems to yield an upper bound, whereas the one for large separations seems to yield a lower bound for the interaction potential. Let us have a look at the efficiency gain from using the analytical solutions. The numerical reference solution requires the evaluation of a 4D integral over both cross-sectional areas for a given separation~$g$ and has been carried out in polar coordinates. Assuming Gaussian quadrature with the same number of Gauss points~$n_\text{GP,tot,transverse}$ in the radial and circumferential dimension and for both cross-sections, this requires a total of $(n_\text{GP,tot,transverse})^4$ function evaluations. In contrast, the analytical expressions for the large and small separation limit, respectively, require only one function evaluation. This significant gain in efficiency is most pronounced for small separations, where the number of required Gauss points increases drastically due to the high gradient of the power law that needs to be resolved (see \secref{sec::method_numerical_integration} for details). If the number of Gauss points is not sufficient, this leads to so-called underintegration, and we observed that the obtained curve of the numerical reference solution erroneously flattens (because the contribution of the closest-point pair is not captured) or becomes steeper (because the contribution of the closest-point pair is overestimated). For these reasons, the computation of an accurate numerical reference solution as shown in \figref{fig::vdW_pot_over_gap_disks_parallel} requires considerable effort.
The integration domains were subdivided into integration sectors (see \figref{fig::cross_sec_integration_sectors}) in order to further increase the Gauss point density. But even in this planar disk-disk scenario requiring only 4D integration, we reached a minimal separation of~$g/R \approx \num{5e-3}$, below which the affordable number of Gauss points was not sufficient to correctly evaluate the SSIP~$\tilde{\tilde{\pi}}$ by means of full numerical integration% \footnote{The maximum number of~$n_\text{GP,tot,transverse}=8\times 32=256$ considered in the scope of this work led to several hours of computation time on a desktop PC for the evaluation of~$\tilde{\tilde{\pi}}$ as a numerical reference solution for~\figref{fig::vdW_pot_over_gap_disks_parallel}.}. For these very small separations, only the exact analytical dimensional reduction from 4D to 2D according to Langbein (cf.~\cite{langbein1972} and eq.~\eqref{eq::dimred_langbein}) made it possible to compute an accurate numerical reference solution. The analytical solutions for the disk-disk interaction potential~\eqref{eq::pot_ia_vdW_disk_disk_parallel_smallseparation} (and~\eqref{eq::pot_ia_vdW_disk_disk_parallel_largeseparation}), used as SSIP law in eq.~\eqref{eq::var_iapot_small_sep} (and~\eqref{eq::var_iapot_large_sep_surface}), thus realize a significant increase in efficiency and are in fact the only means of accurately evaluating the interaction potential in the regime of very small separations. Note that such small separations are highly relevant if we consider fibers in contact, since surface separations are expected to lie on the atomic length scale in this case. For instance, the work of Argento et al.~\cite{Argento1997} mentions~$g=\SI{0.2}{\nano\meter}$ as a typical value for contacting solid bodies and states that accurate numerical integration is thus the most challenging and in fact limiting factor for the numerical methods based on inter-surface potentials.
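The underintegration effect described above can be illustrated with a simple 1D analogue (a sketch with assumed parameters, not the paper's 4D disk-disk integral): Gauss--Legendre quadrature of a steep inverse power law whose near-singularity at one interval end plays the role of the closest-point pair.

```python
import numpy as np

# 1D analogue (illustrative only) of the underintegration issue: integrate a
# steep inverse power law f(x) = (g + x)**-6 over [0, 1], where the small
# parameter g plays the role of the surface separation.
def gauss_integrate(f, a, b, n_gp):
    x, w = np.polynomial.legendre.leggauss(n_gp)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

g = 1.0e-3
f = lambda x: (g + x) ** -6
exact = (g ** -5 - (g + 1.0) ** -5) / 5.0   # closed-form antiderivative

coarse = gauss_integrate(f, 0.0, 1.0, 10)   # misses the peak near x = 0
fine = gauss_integrate(f, 0.0, 1.0, 400)    # resolves the near-singularity

# The coarse rule severely underestimates the integral (the "flattened" curve
# mentioned above), while the fine rule is accurate because its nodes cluster
# near the interval endpoints.
```

Since the node spacing of Gauss--Legendre rules near the endpoints shrinks roughly quadratically with the number of points, resolving a separation $g$ requires a node count growing like $g^{-1/2}$ per dimension, which compounds rapidly in 4D or 6D.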
As a reference value for the applications we have in mind, the fiber radius~$R$ varies from several~$\SI{}{\nano\meter}$ for DNA to $\SI{}{\milli\meter}$ for synthetic polymer fibers, resulting in a potentially very small normalized separation~$g/R$. An example for the simulation of adhesive fibers in contact can be found in the authors' recent contribution~\cite{GrillPeelingPulloff}, which studies the peeling and pull-off behavior of two fibers attracting each other either via vdW or electrostatic forces. As a next step, the interaction potential per unit length~$\tilde{\pi}$ of two parallel straight beams is considered. The length of the beams is chosen sufficiently large such that it has no perceptible influence on the results and meets the assumption of infinitely long cylinders made to derive the analytical reference solution from \cite{langbein1972}. Accordingly, a slenderness ratio $\zeta=l/R=50$ is used in the regime of small separations, whereas $\zeta=l/R=1000$ is used for large separations. \begin{figure}[htp]% \centering \includegraphics[width=0.49\textwidth]{cylinder-cylinder_parallel_vdW_pot_perlength_incl_sketch.pdf} \caption{VdW interaction potential per unit length~$\tilde{\pi}$ of two parallel cylinders over normalized surface separation~$g/R$.} \label{fig::vdW_pot_perunitlength_over_gap_cylinders_parallel} \end{figure}% Based on the experience from the disk-disk scenario, it is not surprising that the full 6D numerical integration in this case exceeds the affordable computational resources by orders of magnitude and thus cannot serve as a reliable reference solution. In fact, we were not able to reproduce the theoretically predicted power law scaling in the regime of small separations despite using a number of Gauss points that led to computation times of several days.
However, instead of the numerical reference solution, the analytical solution for infinitely long cylinders in the limit of very small (black dashed line, cf.~eq.~\eqref{eq::pot_ia_vdW_cyl_cyl_parallel_smallseparation}) and very large separations (blue dashed line, cf.~eq.~\eqref{eq::pot_ia_vdW_cyl_cyl_parallel_largeseparation}) serves as a reference in~\figref{fig::vdW_pot_perunitlength_over_gap_cylinders_parallel}. Note that as compared to the case of two circular disks the exponent of the power laws and thus the slope of the curves drops by one due to the integration over both cylinders' length dimension. Interestingly, the SSIP approach using the simplified SSIP law from~\secref{sec::ia_pot_double_length_specific_evaluation_vdW} (green line with crosses) does not yield the correct scaling behavior even in this case of parallel cylinders. This confirms the concerns from~\secref{sec::assumptions_simplifications} that the simplified SSIP law neglecting any relative rotations of the cross-sections deteriorates the accuracy of the approach in the case of short-ranged interactions in the regime of small separations. Due to this specific scenario of parallel cylinders, this deterioration can be attributed solely to the rotation components~$\beta_{1/2}$ (see~\figref{fig::beam_to_beam_interaction_sketch_assumptions_short_range}) since the included angle of the cross-section normal vectors~$\alpha$ is zero for each of the infinitely many pairs of cross-sections. Although the resulting interaction potential shows the correct trend of an inverse power law of the surface separation, the simplified SSIP law thus overestimates the strength of interaction, and the error increases with decreasing separation \footnote{ Note that the numerical integration error has been ruled out as the cause of this behavior by choosing a high number of Gauss points~$n_\text{GP,tot,ele-length}=2\times50=100$ for each of the $64$ elements used to discretize each cylinder.
A further increase of~$n_\text{GP}$ by a factor of five does not change the results using double precision. }. In the regime of large separations, however, the results for the SSIP approach (red line with circles) perfectly match the analytical reference solution (blue dashed line). This confirms the hypothesis from~\secref{sec::assumptions_simplifications} that the relative rotation of cross-sections is negligible in this regime and a high accuracy can be achieved with the simplified SSIP law. Although of little practical importance here due to the negligible absolute values, this is first numerical evidence for the validity of the SSIP approach in general and its high accuracy even in combination with simplified SSIP laws in the particular case of long-range interactions to be considered in the following~\secref{sec::verif_SSIP_disks_cyls_elstat}. \paragraph{Perpendicular disks and cylinders}$\;$\\ Up to now, we have only discussed the situation of parallel orientation of disks and cylinders. In the following, the accuracy of the simplified SSIP laws as well as the SSIP approach for twisted configurations will be analyzed by considering the most extreme configuration of perpendicular disks and cylinders. Again, computing a reference solution by means of full numerical integration was only affordable for the 4D case of two disks.
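The power-law exponents discussed throughout this section appear as straight-line slopes in the double logarithmic plots. A minimal sketch (with an assumed prefactor and the disk-disk exponent $2.5$) of how an exponent can be recovered from sampled potential values:

```python
import numpy as np

# In a double logarithmic plot, pi(g) = C * g**m is a straight line with
# slope m, so the exponent can be read off from finite differences of the
# logarithms of sampled values. Prefactor and samples below are assumptions.
def loglog_slope(g, values):
    return np.diff(np.log(values)) / np.diff(np.log(g))

g = np.logspace(-3, -1, 50)          # small-separation regime, g/R << 1
pi_small = 2.0 * g ** -2.5           # inverse 2.5 power law of the disk pair

slopes = loglog_slope(g, pi_small)   # constant, equal to the exponent -2.5
```

The same check applied to simulation output distinguishes, e.g., the inverse $2.5$ power law of parallel disks from the inverse $2$ power law of perpendicular disks discussed below.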
\begin{figure}[htpb]% \centering \hspace{-0.3cm} \subfigure[]{ \includegraphics[width=0.49\textwidth]{disk-disk_perpendicular_vdW_pot_incl_sketch.pdf} \label{fig::vdW_pot_over_gap_disks_perpendicular} } \hspace{-0.2cm} \subfigure[]{ \includegraphics[width=0.49\textwidth]{cylinder-cylinder_perpendicular_vdW_pot_incl_sketch.pdf} \label{fig::vdW_pot_over_gap_cylinders_perpendicular} } \caption{(a) VdW interaction potential per unit length squared~$\tilde{\tilde{\pi}}$ of two perpendicular disks and (b) vdW interaction potential~$\Pi$ of two perpendicular cylinders, plotted over the normalized surface separation~$g/R$, respectively.} \end{figure}% The results for perpendicular disks shown in \figref{fig::vdW_pot_over_gap_disks_perpendicular} confirm that there is no difference between perpendicular and parallel orientation for large separations, and the scaling behavior of the numerical reference solution (black dashed line with diamonds) with exponent~$6$ is met by the simplified SSIP law (red line, cf.~eq.~\eqref{eq::pot_ia_vdW_disk_disk_parallel_largeseparation}). On the other hand, there is a remarkable difference in the scaling behavior for small separations. While the interaction potential of two parallel disks, which is the underlying assumption of the proposed SSIP law (green line, cf.~eq.~\eqref{eq::pot_ia_vdW_disk_disk_parallel_smallseparation}), follows an inverse~$2.5$ power law, the numerical reference solution (black dashed line with diamonds) suggests that this behavior changes for perpendicular disks to an inverse~$2$ power law. This time, the difference in results can be attributed to the relative rotation~$\alpha$, i.\,e., the angle included by the cross-section normal vectors, and again the error of the proposed simplified SSIP law increases with decreasing separation.
Finally, the scenario of perpendicular cylinders is considered and \figref{fig::vdW_pot_over_gap_cylinders_perpendicular} shows the total interaction potential~$\Pi$ as a function of the normalized smallest surface separation~$g/R$. As discussed before, the computational cost of the full 6D numerical integration is too high to compute a reliable reference solution in the case of two 3D bodies and we resort to the analytical solutions for the limits of very small and very large separations, respectively. Note that in contrast to the case of infinitely long \textit{parallel} cylinders (cf.~\figref{fig::vdW_pot_perunitlength_over_gap_cylinders_parallel}) the total interaction potential of infinitely long perpendicular cylinders is finite and the result thus has dimensions of energy instead of energy per length. Perpendicular cylinders are worth considering because they trigger both sources of error that have been analyzed individually so far: neglecting the relative rotations~$\alpha$ as well as~$\beta_{1/2}$ in the simplified SSIP law. In short, the resulting accuracy is similar to that for either perpendicular disks or parallel cylinders. In the decisive regime of small separations, the SSIP approach based on the simplified SSIP law (green line with crosses) fails to reproduce the correct scaling behavior of the analytical reference solution (black dashed line, cf.~eq.~\eqref{eq::pot_ia_vdW_cyl_cyl_perpendicular_smallseparation}), whereas in the regime of large separations, the SSIP approach based on the simplified SSIP law (red line with circles) perfectly matches the analytical reference solution (blue dashed line, cf.~eq.~\eqref{eq::pot_ia_vdW_cyl_cyl_perpendicular_largeseparation}).
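The observation that integrating over a length dimension lowers the power-law exponent by one can be verified with a small numerical sketch (the inverse-sixth model kernel and all discretization parameters are assumptions for illustration):

```python
import numpy as np

# Integrating an inverse-sixth point-pair-type law d**-6 with
# d = sqrt(a**2 + s**2) over one length coordinate s yields a law
# proportional to a**-5 in the axis separation a: the exponent drops by one.
def integrated_law(a, half_length=1.0e3, n=200000):
    ds = 2.0 * half_length / n
    s = -half_length + (np.arange(n) + 0.5) * ds     # midpoint rule
    return ds * np.sum((a**2 + s**2) ** -3)

a1, a2 = 1.0, 2.0
exponent = np.log(integrated_law(a2) / integrated_law(a1)) / np.log(a2 / a1)
# exponent is close to -5 for a sufficiently long integration domain
```

Applying the same argument twice explains why the disk-disk exponent $6$ at large separations turns into $5$ for parallel cylinders (per unit length) and remains integrable to a finite total potential for perpendicular cylinders.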
\paragraph{Conclusions}$\;$\\ First, this section reveals that full 6D numerical integration to compute the total interaction potential of slender continua is by orders of magnitude too expensive and cannot reasonably be used as a numerical reference solution even in minimal examples of one pair of cylinders. At most, the 4D numerical integration required for disk-disk interactions makes it possible to compute numerical reference solutions for the intermediate regime of separations where no analytical solutions are known. This underlines the importance of reducing the dimensionality of numerical integration to 2D as achieved by the proposed SSIP approach in order to enable the simulation of large systems as well as a large number of time steps. Second, the thorough analysis of the accuracy resulting from using the proposed simplified SSIP law, neglecting the cross-section rotations, reveals that one has to distinguish between the regime of small and large separations. In the decisive regime of small separations, we find that the scaling behavior deviates from the analytical prediction for perpendicular disks and parallel as well as perpendicular cylinders and that the resulting error increases with decreasing separation. A remedy for this limitation could be a calibration, i.\,e., a scaling of the prefactor~$k$ in the simplified SSIP law, to fit a given reference solution within a small range of separations (e.\,g., around the equilibrium distance of the LJ potential). In the authors' recent contribution~\cite{GrillPeelingPulloff}, this pragmatic procedure is shown to reproduce the global system response very well. Still, it would be valuable to include the relative rotations of the cross-sections in the applied SSIP law to obtain the correct asymptotic scaling behavior. To the best of the authors' knowledge, no analytical closed-form expression has been published yet, and such an expression is far from trivial to derive.
Therefore, we leave this as a promising enhancement of the novel approach, which we are currently working on and will address in a future publication. As mentioned before, the regime of large separations is of little practical relevance in the case of short-range volume interactions; however, it is of some theoretical interest, and the corresponding findings and conclusions will also hold true for long-range interactions such as electrostatics, to be considered in the following section. Here, the results are in excellent agreement with the theoretically predicted power laws for parallel as well as perpendicular disks and cylinders. \subsubsection{Verification for long-range surface interactions such as electrostatics}\label{sec::verif_SSIP_disks_cyls_elstat} Turning to long-range interactions, again parallel and perpendicular disks and cylinders will be considered in order to analyze the accuracy of the simplified SSIP law from~\secref{sec::ia_pot_double_length_specific_evaluation_elstat} both individually as well as applied within the general SSIP approach proposed in~\secref{sec::method_double_length_specific_integral}. As before, Coulombic surface interactions are chosen as a specific example; however, the conclusions are expected to hold true also for other types of long-range interactions. As compared to the preceding section, the computation of a numerical reference solution simplifies mainly due to the reduction from volume to surface interactions, but also due to the smaller gradient values that need to be resolved in the regime of small separations, thus requiring fewer integration points. This allows for a verification by means of a numerical reference solution also in the case of cylinder-cylinder interaction.
\begin{figure}[htpb]% \centering \hspace{-0.3cm} \subfigure[]{ \includegraphics[width=0.49\textwidth]{disk-disk_parallel_elstat_pot_incl_sketch.pdf} \label{fig::elstat_pot_over_gap_disks_parallel} } \hspace{-0.2cm} \subfigure[]{ \includegraphics[width=0.49\textwidth]{disk-disk_perpendicular_elstat_pot_incl_sketch.pdf} \label{fig::elstat_pot_over_gap_disks_perpendicular} } \caption{Electrostatic interaction potential per length squared~$\tilde{\tilde{\pi}}$ of (a) two parallel disks and (b) two perpendicular disks, plotted over the normalized surface separation~$g/R$, respectively.} \label{fig::elstat_pot_over_gap_disks} \end{figure}% \figref{fig::elstat_pot_over_gap_disks} shows the results for the simplified SSIP law obtained from the monopole-monopole interaction of two disk-shaped cross-sections in \secref{sec::ia_pot_double_length_specific_evaluation_elstat} (red line) and a numerical reference solution (black dashed line with diamonds). As expected, the proposed SSIP law excellently matches the reference solution in the regime of large separations, both for the parallel as well as the perpendicular configuration. In both cases, the relative error is below $7\%$ already for $g/R=1$. The most important and remarkable result of this section, however, is the following. The inevitable error of the simplified SSIP law in the regime of small separations does not carry over to beam-to-beam interactions as shown in~\figref{fig::elstat_pot_over_gap_cylinders}. For both parallel as well as perpendicular cylinders, the results from the SSIP approach using this simplified SSIP law from~\secref{sec::ia_pot_double_length_specific_evaluation_elstat} (red line with crosses) agree very well with the numerical reference solution (black dashed line with diamonds) over the entire range of separations.
This confirms the theoretical considerations from~\secref{sec::assumptions_simplifications} arguing that the beam-beam interaction will be dominated by the large number of section pairs with large separations, which outweigh the contributions of the few section pairs with smallest separations. A closer look reveals that the relative error for the parallel cylinders is below $0.3\%$ even for the smallest separation~$g/R=10^{-3}$ considered here. For the presumably worst case of perpendicular cylinders, this deviation is even smaller with a relative error of~$0.03\%$, which can be explained by the following two reasons. First, a comparison of~\figref{fig::elstat_pot_over_gap_disks_parallel} and \ref{fig::elstat_pot_over_gap_disks_perpendicular} reveals that the accuracy of the simplified SSIP law in the regime of small separations is higher for perpendicular orientation, which can be regarded as a fortunate coincidence. Second, the large majority of all section pairs have a larger separation (which according to~\figref{fig::elstat_pot_over_gap_disks} is the regime of higher accuracy) than in the case of parallel cylinders. \begin{figure}[htpb]% \centering \hspace{-0.3cm} \subfigure[]{ \includegraphics[width=0.49\textwidth]{cylinder-cylinder_parallel_elstat_pot_incl_sketch.pdf} \label{fig::elstat_pot_over_gap_cylinders_parallel} } \hspace{-0.2cm} \subfigure[]{ \includegraphics[width=0.49\textwidth]{cylinder-cylinder_perpendicular_elstat_pot_incl_sketch.pdf} \label{fig::elstat_pot_over_gap_cylinders_perpendicular} } \caption{Electrostatic interaction potential~$\Pi$ of (a) two parallel cylinders and (b) two perpendicular cylinders, plotted over the normalized surface separation~$g/R$, respectively. The slenderness ratio of the cylinders is $\zeta=L/R=50$.} \label{fig::elstat_pot_over_gap_cylinders} \end{figure}% Note that unlike in the case of short-range interactions, here the total interaction potential is considered also for the parallel cylinders.
Due to the long range of interactions, the interaction potential per length depends on the length of the cylinders and is thus not a representative quantity. For~\figref{fig::elstat_pot_over_gap_cylinders}, a slenderness ratio of~$\zeta=L/R=50$ is chosen as an example. The two nested 1D integrals along the cylinder length dimensions are evaluated using $n_\text{GP,ele-length}=5$ Gauss points for each of the 64 elements used to discretize each cylinder. Additionally, $n_\text{GP,circ}=8\times32=256$ Gauss points over the circumference of each disk are used to compute the numerical reference solution. In all cases, it has been verified that the numerical integration error does not influence the results noticeably. At this point, recall from~\secref{sec::ia_pot_double_length_specific_evaluation_elstat} that the accuracy of the applied SSIP law can still be increased whenever deemed necessary by including more terms of the multipole expansion of the cross-sections. However, because the results of this section show a high level of accuracy and the resulting simplification is significant, the simplified SSIP law seems to be the best compromise for our purposes. To conclude this section, it can thus be stated that the novel SSIP approach as proposed in~\secref{sec::method_double_length_specific_integral} in combination with the simplified SSIP law from~\secref{sec::ia_pot_double_length_specific_evaluation_elstat} is a simple, efficient, and accurate computational model for long-range interactions of slender fibers. In the following, it will be applied to first numerical examples of \textit{deformable} slender fibers in~\secref{sec::num_ex_elstat_attraction_twoparallelbeams} and~\ref{sec::num_ex_twocrossedbeams_elstat_snapintocontact}.
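The qualitative behavior of the monopole-type SSIP law for the electrostatic disk-disk interaction can be reproduced with a small self-contained sketch (all discretization parameters are assumptions, and only the coaxial parallel configuration is considered): the monopole law based on the disks' total charges is compared against direct numerical integration of the Coulomb kernel over both surfaces.

```python
import numpy as np

# Uniformly charged disk sampled with a polar midpoint rule; the resolution
# parameters are assumptions chosen for this sketch.
def disk_points(R=1.0, n_r=24, n_phi=48):
    r_edges = np.linspace(0.0, R, n_r + 1)
    r = 0.5 * (r_edges[:-1] + r_edges[1:])
    phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    rr, pp = np.meshgrid(r, phi, indexing="ij")
    w = rr * (R / n_r) * (2.0 * np.pi / n_phi)       # area elements r dr dphi
    return rr * np.cos(pp), rr * np.sin(pp), w

def coulomb_disks_parallel(d, R=1.0):
    """Reference: double surface integral of 1/r for two coaxial disks."""
    x, y, w = disk_points(R)
    x, y, w = x.ravel(), y.ravel(), w.ravel()
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    r = np.sqrt(dx**2 + dy**2 + d**2)                # disks offset by d in z
    return np.sum(w[:, None] * w[None, :] / r)

def monopole_law(d, R=1.0):
    """Monopole-type law: total charges interacting over the axis distance d."""
    q = np.pi * R**2                                 # unit surface charge density
    return q * q / d

errors = []
for d in (0.5, 1.0, 10.0):
    ref = coulomb_disks_parallel(d)
    errors.append(abs(monopole_law(d) - ref) / ref)
# The relative error of the monopole law decreases monotonically with d.
```

Since every point pair is at distance $\sqrt{\rho^2+d^2}\ge d$, the monopole law always overestimates the potential magnitude here, with a relative error of order $(R/d)^2$ that becomes negligible at large separations, consistent with the large-separation accuracy reported above.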
\subsection{Repulsive steric interaction between two contacting beams}\label{sec::example_contact_repusive_LJpot} This numerical example aims to demonstrate the general ability of our proposed method to preclude penetration of two slender bodies that come into contact under arbitrary mutual orientation in 3D. No adhesive forces are considered in this example. The setup is inspired by Example 1 in~\cite{Meier2017a} where the macroscopic, so-called all-angle beam contact (ABC) formulation is used to account for the non-penetrability constraint. Here, we model the contact interaction based on the repulsive part of the LJ interaction potential~\eqref{eq::pot_ia_LJ_pointpair}. More specifically, we apply the novel SSIP approach as proposed in~\secref{sec::method_double_length_specific_integral} in combination with the SSIP law proposed in~\secref{sec::ia_pot_double_length_specific_evaluation_vdW}. The parameter specifying the strength of repulsion is set to~$k\rho_1\rho_2=10^{-16}$. To be consistent throughout this article, we apply Hermitian Simo-Reissner beam elements instead of the torsion-free Kirchhoff elements used in~\cite{Meier2017a}. As compared to the original example, this requires us to replace the hinged support of the upper beam by clamped end Dirichlet boundary conditions in order to eliminate all rigid body modes in this quasi-static example. As in the original example, three finite elements are used for the upper, deformable beam and one element for the lower, rigid beam.
\begin{figure}[htpb]% \centering \subfigure[initial configuration]{ \includegraphics[width=0.3\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_integrationsegments32_nGP10_step1.png} \label{fig::num_ex_beamrotatingonarc_snapshot_initial_config} } \subfigure[time~$t=1.0$]{ \includegraphics[width=0.3\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_integrationsegments32_nGP10_step1000.png} \label{fig::num_ex_beamrotatingonarc_snapshot_step1000} } \subfigure[time~$t=1.5$]{ \includegraphics[width=0.3\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_integrationsegments32_nGP10_step1500.png} \label{fig::num_ex_beamrotatingonarc_snapshot_step1500} } \subfigure[time~$t=2.0$]{ \includegraphics[width=0.3\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_integrationsegments32_nGP10_step2000.png} \label{fig::num_ex_beamrotatingonarc_snapshot_step2000} } \subfigure[time~$t=2.5$]{ \includegraphics[width=0.3\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_integrationsegments32_nGP10_step2500.png} \label{fig::num_ex_beamrotatingonarc_snapshot_step2500} } \subfigure[time~$t=3.0$]{ \includegraphics[width=0.3\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_integrationsegments32_nGP10_step3000.png} \label{fig::num_ex_beamrotatingonarc_snapshot_step3000} } \caption{Simulation snapshots: a straight deformable beam rotating on a rigid arc.} \label{fig::num_ex_beamrotatingonarc_snapshots} \end{figure} A sequence of the resulting simulation snapshots is shown in~\figref{fig::num_ex_beamrotatingonarc_snapshots}. As expected, the two beams do not penetrate each other in any of the various mutual orientations throughout the simulation. 
\begin{figure}[htpb]% \centering \subfigure[time~$t=1.95$]{ \includegraphics[width=0.242\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_GPs_100_10_forces_step1950.png} \label{fig::num_ex_beamrotatingonarc_LJpot_GPs_50_10_contactforce_distribution_step1950} } \hspace{-10pt} \subfigure[time~$t=1.97$]{ \includegraphics[width=0.242\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_GPs_100_10_forces_step1970.png} \label{fig::num_ex_beamrotatingonarc_LJpot_GPs_50_10_contactforce_distribution_step1970} } \hspace{-10pt} \subfigure[time~$t=1.99$]{ \includegraphics[width=0.242\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_GPs_100_10_forces_step1990.png} \label{fig::num_ex_beamrotatingonarc_LJpot_GPs_50_10_contactforce_distribution_step1990} } \hspace{-10pt} \subfigure[time~$t=2.0$]{ \includegraphics[width=0.242\textwidth]{num_ex_beamrotatingonarc_beam3r_herm2lin3_LJpot_k1e-16_cutoff1e-1_GPs_100_10_forces_step2000.png} \label{fig::num_ex_beamrotatingonarc_LJpot_GPs_50_10_contactforce_distribution_step2000} } \caption{Evolution of contact force distribution in the regime of small contact angles between the beam axes. Each beam element is divided in~$100$ integration segments with~$10$ Gauss points each in the simulation shown here.} \label{fig::num_ex_beamrotatingonarc_contactforce_distribution} \end{figure} \figref{fig::num_ex_beamrotatingonarc_contactforce_distribution} visualizes the contact force distributions% \footnote{More precisely, the vectorial line load with dimensions of force per unit length is visualized as an arrow at each integration point. The force resultant therefore equals the integral over the contour curve defined by the arrows' tips (i.\,e.~the area under this curve), and not the vector sum of all arrows shown. 
This is important to understand because the number of visible arrows per unit length depends on the discretization and is thus higher for the upper, deformable beam.} in the most interesting time span before the beams reach the parallel orientation at time~$t=2.0$. The force distribution quickly changes from a point-like force for large mutual angles to a broad distributed load for parallel beam axes. Note also that the line load has a three-dimensional shape where the out-of-plane component decreases with decreasing mutual angle until both beam axes and thus also the line loads lie in one plane at~$t=2.0$. Another remarkable result is the symmetry between the line loads on both fibers. It nicely confirms that the novel approach indeed fulfills the expected \textit{local} equilibrium of interaction forces to good approximation. In contrast to existing, macroscopic formulations for beam contact, this is not postulated a priori in our approach and hence is a valuable verification at this point. See~\cite{Sauer2013} for a comprehensive discussion of this important topic in the context of contact between 3D solids described by inter-surface potentials. The \textit{global} equilibrium of contact forces on the other hand is fulfilled exactly, as can be concluded from the \textit{global} conservation of linear momentum that can be shown analytically as outlined in~\secref{sec::conservation_properties}. In this numerical example, we found that the sum of all reaction forces in either of the spatial dimensions is indeed zero with a maximal residual of~$10^{-10}$ throughout all simulations considered here, which confirms the statement numerically.
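The vanishing resultant observed above follows from the pairwise structure of potential-based forces. A minimal sketch (hypothetical random point clouds standing in for the Gauss points of the two fibers, and an inverse-sixth pair potential as an assumed example):

```python
import numpy as np

# Forces derived from a pair potential Phi(r) obey Newton's third law pair by
# pair, so the resultant over the whole two-body system vanishes up to
# floating point round-off.
rng = np.random.default_rng(1)
x1 = rng.normal(size=(20, 3))        # points on body 1
x2 = rng.normal(size=(20, 3)) + 5.0  # points on body 2, well separated

d = x1[:, None, :] - x2[None, :, :]
r = np.linalg.norm(d, axis=-1, keepdims=True)
f_pair = 6.0 * d / r**8              # force on body 1 per pair, f = -dPhi/dx1
                                     # for the repulsive potential Phi = r**-6

f1 = f_pair.sum(axis=(0, 1))         # resultant on body 1
f2 = (-f_pair).sum(axis=(0, 1))      # resultant on body 2 (action = -reaction)
residual = np.abs(f1 + f2).max()     # zero up to machine precision
```

In the discretized setting, this exact pairwise cancellation is what makes the global balance hold to machine precision, whereas the local line-load symmetry shown in the figures is only approximate.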
\begin{figure}[htpb]% \centering \subfigure[]{ \includegraphics[width=0.4\textwidth]{num_ex_beamrotatingonarc_reactionforce_z_over_time_LJ_numGP5_varying_num_integration_segments_vs_ABC.pdf} \label{fig::num_ex_beamrotatingonarc_reactionforce_z_over_time_LJ_numGP5_varying_num_integration_segments_vs_ABC} } \subfigure[]{ \includegraphics[width=0.4\textwidth]{num_ex_beamrotatingonarc_interaction_potential_over_time_LJ_numGP5_varying_num_integration_segments.pdf} \label{fig::num_ex_beamrotatingonarc_interaction_potential_over_time_LJ_numGP5_varying_num_integration_segments} } \caption{(a) Reaction force and (b) interaction potential over time.} \label{fig::num_ex_beamrotatingonarc_reactionforce_ia_pot_quadrature} \end{figure} \figref{fig::num_ex_beamrotatingonarc_reactionforce_ia_pot_quadrature} shows the resulting vertical reaction force as well as the interaction potential over time. Due to the inverse-twelve power law and the extremely small separations of the interacting bodies, the numerical integration of the disk-disk interaction forces is very challenging and we studied the influence of the number of Gauss points. For this purpose, the number of integration segments per element with five Gauss points each is set to~$30$, $50$, or $64$. Interestingly, the interaction potential shown in \figref{fig::num_ex_beamrotatingonarc_interaction_potential_over_time_LJ_numGP5_varying_num_integration_segments} seems to be more sensitive to the integration error than the vertical reaction force shown in \figref{fig::num_ex_beamrotatingonarc_reactionforce_z_over_time_LJ_numGP5_varying_num_integration_segments_vs_ABC}, despite the fact that the latter has a higher inverse power law exponent. Presumably, this is due to the fact that the reaction force is dominated by the bending deformation of the beams.
For reference, the reaction force obtained by using the macroscopic ABC formulation is shown as well and is in excellent agreement with the one resulting from the repulsive part of the LJ interaction potential. A more comprehensive comparison of this novel SSIP approach to model contact between beams based on (the repulsive part of) the molecular LJ interaction and existing, macroscopic formulations based on heuristic penalty force laws is a highly interesting subject that is worth investigating in the future. \subsection{Two initially straight, deformable fibers carrying opposite surface charge}\label{sec::num_ex_elstat_attraction_twoparallelbeams} The following example consists of two initially straight and parallel, deformable fibers that attract each other due to their surface charge of opposite sign. Its setup is kept as simple as possible to allow for an isolated and clear analysis of the physical effects as well as the main characteristics of the proposed SSIP approach. In a first step presented here, the interplay of elasticity and electrostatic attraction in the regime of large separations is studied. Additionally, the authors' recent contribution~\cite{GrillPeelingPulloff} considers the scenario of separating these adhesive fibers starting from initial contact and studies a variety of physical effects and influences in depth, which would go beyond the scope of this work. In this numerical example, we are interested in the static equilibrium configurations for varying attractive strength. As shown in~\figref{fig::num_ex_elstat_attraction_twoparallelbeams_problem_setup}, two straight beams of length~$l=5$ are aligned with the global~$y$-axis at an inter-axis separation~$d=5$. Both are simply supported and restricted to move only within the~$xy$-plane and rotate only around the global~$z$-axis. The beams have a circular cross-section with radius~$R=0.02$ which results in a slenderness ratio of~$\zeta=250$.
Cross-section area, area moments of inertia and shear correction factor are computed using the standard formulas for a circle. A hyperelastic material law with Young's modulus~$E=10^{5}$ and Poisson's ratio~$\nu=0.3$ is applied. In terms of spatial discretization, we use five Hermitian Simo-Reissner beam elements per fiber (see \cite{Meier2017b} for details on this element formulation). \begin{figure}[htpb]% \centering \subfigure[Problem setup: undeformed configuration.]{ \def0.3\textwidth{0.45\textwidth} \input{num_ex_elstat_attraction_twoparallelbeams_problem_setup.pdf_tex} \label{fig::num_ex_elstat_attraction_twoparallelbeams_problem_setup} } \hfill \subfigure[Static equilibrium configurations for varying attractive strength. Solution for beam centerlines and corresponding value of the potential law prefactor~$k$ shown in the same color.]{ \includegraphics[width=0.5\textwidth]{num_ex_elstat_attraction_twoparallelbeams_beam3r_herm2lin3_separation5_young1e5_numintseg2_numGP10_varying_prefactor.png} \label{fig::num_ex_elstat_attraction_twoparallelbeams_static_varying_prefactor} } \caption{Two parallel beams with constant surface charge density (left beam positive, right beam negative).} \label{fig::num_ex_elstat_attraction_twoparallelbeams_static} \end{figure} Electrostatic interaction is modeled via the SSIP approach as presented in~\secref{sec::method_double_length_specific_integral} and applied to long-range Coulomb interactions in~\secref{sec::ia_pot_double_length_specific_evaluation_elstat}. Both beams are nonconducting with a constant surface charge density of~$\sigma_1=1.0$ and~$\sigma_2=-1.0$, respectively. For simplicity, we vary the prefactor~$k$ of the underlying Coulomb law~$\Phi(r)=k\, r^{-1}$ to vary the strength of attraction.
However, as becomes clear from~\eqref{eq::var_pot_ia_powerlaw_large_sep_surface}, this is equivalent to a variation of surface charge densities because in our case the product of these quantities is a constant prefactor in all relevant equations. In order to evaluate the electrostatic force and stiffness contributions, Gauss quadrature with two integration segments per element and ten Gauss points per integration segment is applied. This turns out to be fine enough to not change the presented results perceptibly. More precisely, the difference in the displacement of the beam midpoint for~$n_\text{GP}=(2 \times 10)^2$ as compared to~$(2 \times 32)^2$ is below~$10^{-8}$. No cut-off radius is applied here, i.\,e.,~the contributions of all Gauss point pairs are evaluated and included. \figref{fig::num_ex_elstat_attraction_twoparallelbeams_static_varying_prefactor} finally shows the resulting static equilibrium configurations for different levels of attractive strength. As expected, the beams are increasingly deflected and pulled towards each other if the prefactor~$k$ of the applied Coulomb law and thus the attractive strength is increased. Like the problem definition, all solutions are perfectly symmetric with respect to the vertical axis of symmetry located at~$x=d/2$. Moreover, the centerline curves of each individual solution show a horizontal axis of symmetry defined by the position of the two beam midpoints in the respective deformed state. As a consequence, the vertical force components in the system cancel and the vertical reaction forces vanish. This also becomes clear when looking at the visualization of the resulting electrostatic forces as shown for the example of~$k=1.0$ in \figref{fig::num_ex_elstat_attraction_twoparallelbeams_static_resulting_forces_all_elepairs}.
Additionally, the forces acting on the Gauss point of one beam caused by the interaction with one finite element of the other beam are visualized individually in~\figref{fig::num_ex_elstat_attraction_twoparallelbeams_static_forces_all_elepairs}. This representation illustrates the nature of the SSIP approach, which is based on two nested 1D numerical integrals that are evaluated element pair-wise. Accordingly, we can identify five force contributions at each Gauss point, one for each of the five beam elements on the opposing fiber. As expected, the magnitude of these individual forces decays with the distance and the contributions of the closest element pair shown in an isolated manner in \figref{fig::num_ex_elstat_attraction_twoparallelbeams_static_forces_elepair3_8} constitute the largest part of the total electrostatic load on the beams and are clearly larger than the contributions of the next-nearest element pair shown in~\figref{fig::num_ex_elstat_attraction_twoparallelbeams_static_forces_elepair2_8}. However, the comparatively long range of electrostatic forces yields a smooth force distribution along the centerlines and we can identify non-zero force contributions even at the most distant Gauss points right next to the supports in~\figref{fig::num_ex_elstat_attraction_twoparallelbeams_static_resulting_forces_all_elepairs}. As mentioned above, a quantitative analysis of the resulting horizontal reaction forces is presented in~\cite{GrillPeelingPulloff}. 
\begin{figure}[htpb]% \centering \subfigure[Resulting electrostatic forces evaluated at the Gauss points.]{ \includegraphics[width=0.6\textwidth]{num_ex_elstat_attraction_twoparallelbeams_beam3r_herm2lin3_separation5_young1e5_potlawprefactor1e0_numintseg2_numGP10_resulting_forces_all_elepairs.png} \label{fig::num_ex_elstat_attraction_twoparallelbeams_static_resulting_forces_all_elepairs} } \subfigure[Individual electrostatic force contributions of all element pairs.]{ \includegraphics[width=0.6\textwidth]{num_ex_elstat_attraction_twoparallelbeams_beam3r_herm2lin3_separation5_young1e5_potlawprefactor1e0_numintseg2_numGP10_forces_all_elepairs.png} \label{fig::num_ex_elstat_attraction_twoparallelbeams_static_forces_all_elepairs} } \subfigure[Electrostatic force contributions of the closest element pair.]{ \includegraphics[width=0.48\textwidth]{num_ex_elstat_attraction_twoparallelbeams_beam3r_herm2lin3_separation5_young1e5_potlawprefactor1e0_numintseg2_numGP10_forces_elepair3and8.png} \label{fig::num_ex_elstat_attraction_twoparallelbeams_static_forces_elepair3_8} } \subfigure[Electrostatic force contributions of the next-nearest pair.]{ \includegraphics[width=0.48\textwidth]{num_ex_elstat_attraction_twoparallelbeams_beam3r_herm2lin3_separation5_young1e5_potlawprefactor1e0_numintseg2_numGP10_forces_elepair2and8.png} \label{fig::num_ex_elstat_attraction_twoparallelbeams_static_forces_elepair2_8} } \caption{Electrostatic forces acting on the beams for $k=1.0$. Color indicates force magnitude.} \label{fig::num_ex_elstat_attraction_twoparallelbeams_static_force_distribution} \end{figure} To conclude this example of two charged, attractive beams, we briefly look at the nonlinear solver. Newton's method without any adaptations is used here to allow for a clear and meaningful analysis of nonlinear convergence behavior. 
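For illustration, a plain Newton iteration of this kind, combined with an incremental ramp-up of the attractive strength and a dual convergence criterion, can be sketched as follows. This is a minimal sketch, not the solver implemented in our framework: `residual` and `tangent` are hypothetical, problem-specific callables, and the tolerances and step size are illustrative choices.

```python
import numpy as np

def solve_load_stepped(residual, tangent, x0, k_target, dk=0.1,
                       tol_r=1e-10, tol_dx=1e-8, max_iter=50):
    """Ramp the potential-law prefactor k up in equal steps and solve each
    load step with a plain Newton iteration. Convergence requires BOTH a
    small residual norm and a small displacement-increment norm."""
    x = x0.copy()
    k = 0.0
    while k < k_target - 1e-12:
        k = min(k + dk, k_target)
        for _ in range(max_iter):
            r = residual(x, k)
            dx = np.linalg.solve(tangent(x, k), -r)  # Newton update
            x += dx
            if np.linalg.norm(r) < tol_r and np.linalg.norm(dx) < tol_dx:
                break
        else:
            raise RuntimeError(f"Newton did not converge at k={k}")
    return x
```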
The solutions for~$k\leq0.4$ can be found within one load step, which is a remarkable result given the resulting large deflection of the beams shown in~\figref{fig::num_ex_elstat_attraction_twoparallelbeams_static_varying_prefactor} and the strongly nonlinear nature of the system. For stronger attractive forces, the strength of electrostatic attraction was ramped up in up to ten equal steps~$\Delta k = 0.1$. As convergence criteria, we enforced both a tolerance for the Euclidean norm of the residual vector, $\norm{\vdR} < 10^{-10}$, and one for the norm of the iterative displacement update vector, $\norm{\Delta \vdX} < 10^{-8}$. In fact, this combination leads to~$\norm{\vdR} < 10^{-12}$ in almost all equilibrium configurations shown here. \subsection{Two charged deformable fibers dynamically snap into contact}\label{sec::num_ex_twocrossedbeams_elstat_snapintocontact} Due to the high gradients in the inverse power laws, molecular interactions give rise to highly dynamic systems. The following is a first, simple example of such a dynamic system: two oppositely charged fibers, each with a hinged support at one end, that snap into contact. In the initial configuration shown in~\figref{fig::twocrossedbeams_elstat_snapintocontact_problem_setup}, the straight fibers enclose an angle of~$45^\circ$ and their axes are separated by~$5R$ in the out-of-plane direction~$z$. With a cross-section radius~$R=0.02$ and a length of~$l=5$, they have high slenderness ratios of~$\zeta=250$ and~$354$. Each of the fibers is discretized by~$10$ Hermitian Simo-Reissner beam elements and the material parameters are chosen to be~$E=10^5$, $\nu=0.3$, and~$\rho=10^{-3}$. The fibers carry a constant, opposite surface charge~$\sigma_{1/2}=\pm 1.0$ and interact via the Coulomb potential law stated in eq.~\eqref{eq::pot_ia_elstat_pointpair_Coulomb} with the prefactor set to~$C_\text{elstat}=0.4$.
In order to start from a stress-free initial configuration, the charge of one of the fibers is ramped up linearly within the first~$100$ time steps. We apply the SSIP approach as proposed in~\secref{sec::method_double_length_specific_integral} and applied to Coulomb interactions in~\secref{sec::ia_pot_double_length_specific_evaluation_elstat}. A total of~$5$ integration segments per element with~$10$ Gauss points each is used to evaluate these electrostatic contributions. The contact interaction between the fibers is modeled by the line contact formulation proposed in~\cite{meier2016}, using a penalty parameter~$\varepsilon=10^3$ and~$20$ integration segments per element with~$5$ Gauss points each for numerical integration. An undetected crossing of the fiber axes is prevented by applying the modified Newton method limiting the maximal displacement increment per nonlinear iteration to~$R/2$ (see \ref{sec::algorithm_implementation_aspects} for details). \begin{figure}[htpb]% \centering \subfigure[Problem setup.]{ \def0.3\textwidth{0.3\textwidth} \small{ \input{num_ex_elstat_attraction_twocrossedbeams_snapintocontact_problem_setup.pdf_tex} } \label{fig::twocrossedbeams_elstat_snapintocontact_problem_setup} } \subfigure[Energy over time.]{ \includegraphics[width=0.6\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_energy_over_time_dt1e-5_elstat_intseg5_numGP10_linecontact_pen1e3.pdf} \label{fig::twocrossedbeams_elstat_snapintocontact_energy_over_time} } \caption{Two oppositely charged, crossed beams dynamically snap into contact.} \label{fig::twocrossedbeams_elstat_snapintocontact} \end{figure}% In terms of temporal discretization, we apply the Generalized-Alpha scheme for Lie groups as proposed in~\cite{bruels2010} and set the spectral radius at infinite frequencies to~$\rho_\infty=0.9$ for small numerical damping. A small time step size of~$\Delta t=10^{-5}$ is applied to account for the highly dynamic behavior of this system. 
\figref{fig::twocrossedbeams_elstat_snapintocontact_snapshots} shows a sequence of simulation snapshots where the electrostatic forces on both fibers are visualized as green arrows. We observe a large variety of mutual orientations of the two fibers and a strong coupling of adhesive, repulsive and elastic forces that demonstrate the effectiveness and robustness of the proposed SSIP approach. Most importantly, we see that the total system energy is preserved with very little deviation of~$\pm 2 \%$ as shown in~\figref{fig::twocrossedbeams_elstat_snapintocontact_energy_over_time}. Note that the negative energy values result from defining the zero level of the interaction potential at infinite separation as described in~\secref{sec::molecular_interactions_classification_general_informations}. Based on this numerical example, we can thus conclude that the novel SSIP approach proves to be effective as well as robust in a highly dynamic example with arbitrary mutual orientations of the fibers in three dimensions. 
\begin{figure}[htb]% \centering \subfigure[initial configuration]{ \includegraphics[width=0.31\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_0000.png} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots_initial} } \subfigure[time $t=1\times10^{-3}$, ramp-up of charge completed]{ \includegraphics[width=0.31\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_0001.png} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots_0001} } \subfigure[time $t=5\times10^{-3}$]{ \includegraphics[width=0.31\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_0005.png} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots_0005} } \subfigure[time $t=1\times10^{-2}$]{ \includegraphics[width=0.31\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_0010.png} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots_0010} } \subfigure[time $t=2\times10^{-2}$]{ \includegraphics[width=0.31\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_0020.png} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots_0020} } \subfigure[time $t=4\times10^{-2}$]{ \includegraphics[width=0.31\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_0040.png} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots_0040} } \subfigure[time $t=6\times10^{-2}$]{ \includegraphics[width=0.31\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_0060.png} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots_0060} } \subfigure[time $t=8\times10^{-2}$]{ \includegraphics[width=0.31\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_0080.png} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots_0080} } \subfigure[time $t=1\times10^{-1}$]{ \includegraphics[width=0.31\textwidth]{num_ex_twocrossedbeams_elstat_snapintocontact_0100.png} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots_0100} } \caption{Sequence of simulation snapshots. 
Electrostatic forces acting on both fibers shown in green.} \label{fig::twocrossedbeams_elstat_snapintocontact_snapshots} \end{figure} \section{Finite Element Discretization and Selected Algorithmic Aspects}\label{sec::FE_discretization_algorithmic_aspects} Having discussed the space-continuous theory in \secref{sec::method_pot_based_ia} and \ref{sec::method_application_to_specific_types_of_interactions}, we now turn to the step of spatial discretization by means of finite elements. Subsequently, the most important aspects of the required algorithmic framework will be presented briefly and discussed specifically in the light of the novel SSIP approach. This includes the applied regularization technique, multi-dimensional numerical integration, an analysis of the algorithmic complexity as well as the topics of search for interaction partners and parallel computing. \subsection{Spatial discretization based on beam finite elements}\label{sec::method_pot_based_ia_FE_discretization} As presented in \secref{sec::beam_theory}, the centerline position~$\vr$ and the triad~$\vLambda$ arise as the two primary fields of unknowns. Within Simo-Reissner beam theory, both fields are uncorrelated and their discretization can hence be considered independently as follows. The Simo-Reissner finite beam element used throughout this work originates from \cite{jelenic1999,crisfield1999}, although we apply a different centerline interpolation scheme here. We employ cubic Hermite polynomials based on nodal position vectors~$\hat\vdd^1, \hat\vdd^2$ and tangent vectors~$\hat\vdt^1, \hat\vdt^2$ as the primary variables. See \cite{Meier2014} for a detailed discussion of Hermite centerline interpolation in the context of geometrically exact (Kirchhoff) beams and \cite{Meier2017b} for the details on the Hermitian Simo-Reissner beam element that is used within this article. 
Applying this interpolation scheme results in the following discretized centerline geometry and variation: \begin{align} \begin{split}\label{eq::centerline_discretization} \vr(\xi) \approx \vr_\text{h}(\xi) &= \sum_{i=1}^2 H_d^i(\xi) \, \hat \vdd^i + \frac{l}{2} \sum_{i=1}^2 H_t^i(\xi) \, \hat \vdt^i =: \vdH \, \hat \vdd, \\ \delta \vr(\xi) \approx \delta \vr_\text{h}(\xi) &= \sum_{i=1}^2 H_d^i(\xi) \, \delta \hat \vdd^i + \frac{l}{2} \sum_{i=1}^2 H_t^i(\xi) \, \delta \hat \vdt^i =: \vdH \, \delta \hat \vdd \end{split} \end{align} Here, all the degrees of freedom of one element relevant for the centerline interpolation, i.\,e., nodal positions~$\hat\vdd^i$ and tangents~$\hat\vdt^i$, $i=1,2$, are collected in one vector~$\hat\vdd$ and~$\vdH$ is the accordingly assembled matrix of shape functions, i.\,e., Hermite polynomials~$H_d^i$ and $H_t^i$. The newly introduced element-local parameter~$\xi \in [-1;1]$ is bijectively related to the arc-length parameter~$s \in [s_\text{ele,min}; s_\text{ele,max}]$ describing the very same physical domain of the beam as follows; the scalar factor defining this mapping between the two infinitesimal length measures is called the element \textit{Jacobian}~$J(\xi)$: \begin{equation} \dd s = \diff{ s }{ \xi } \dd \xi =: J(\xi) \dd \xi \qquad \text{with} \qquad J(\xi) := \norm{ \diff { \vr_{0,\text{h}}(\xi) }{ \xi } }. \end{equation} Our motivation to use Hermite interpolation is that it ensures $C_1$-continuity, i.\,e., a smooth geometry representation even across element boundaries. This property turned out to be crucial for the robustness of simulations in the context of macroscopic beam contact methods \cite{Meier2017b}, and is just as important if we include molecular interactions as proposed in this article. See \cite{Sauer2011} for a comprehensive discussion of (non-)smooth geometries and adhesive, molecular interactions using 2D solid elements.
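For illustration, the Hermite centerline interpolation of eq.~\eqref{eq::centerline_discretization} can be sketched in Python. The basis polynomials below follow one common convention for cubic Hermite shape functions on the reference interval $[-1,1]$ and are meant as an illustrative sketch rather than the exact element implementation.

```python
import numpy as np

def hermite_centerline(xi, d1, t1, d2, t2, l):
    """Evaluate r_h(xi) = H_d^1 d1 + (l/2) H_t^1 t1 + H_d^2 d2 + (l/2) H_t^2 t2
    for xi in [-1, 1]. The factor l/2 maps the arc-length tangents t1, t2 to
    the element parameter space (cf. the element Jacobian)."""
    Hd1 = 0.25 * (xi**3 - 3.0 * xi + 2.0)     # value 1 at node 1, 0 at node 2
    Hd2 = 0.25 * (-xi**3 + 3.0 * xi + 2.0)    # value 0 at node 1, 1 at node 2
    Ht1 = 0.25 * (xi**3 - xi**2 - xi + 1.0)   # unit slope at node 1
    Ht2 = 0.25 * (xi**3 + xi**2 - xi - 1.0)   # unit slope at node 2
    return (Hd1 * d1 + 0.5 * l * Ht1 * t1
            + Hd2 * d2 + 0.5 * l * Ht2 * t2)
```

For a straight element with aligned nodal tangents, this interpolation exactly reproduces the straight line between the two nodal positions.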
Note however that neither the SSIP approach proposed in \secref{sec::method_pot_based_ia} nor the specific expressions for the interaction free energy and the virtual work are limited to this Hermite interpolation scheme. In fact, all of the following discrete expressions will be equally valid for a large number of other beam formulations, where the discrete centerline geometry is defined by polynomial interpolation, which can generally be expressed in terms of the generic shape function matrix~$\vdH$ introduced above. Recall also, that the proposed SSIP laws from \secref{sec::method_application_to_specific_types_of_interactions} solely depend on the centerline curve description, i.\,e., the rotation field does not appear in the additional contributions and hence its discretization is not relevant in the context of this work. It is therefore sufficient to apply the discretization scheme for the centerline field stated in eq.~\eqref{eq::centerline_discretization} to the expressions for the virtual work contributions~$\delta \Pi_\text{ia}$ presented in \secref{sec::method_application_to_specific_types_of_interactions} and finally end up with the discrete element residual vectors~$\vdr_{\text{ia},1/2}$. The latter need to be assembled into the global residual vector~$\vdR$ as it is standard in the (nonlinear) finite element method. Note that the linearization of all the expressions presented in this~\secref{sec::method_pot_based_ia_FE_discretization} is provided in~\ref{sec::linearization}. 
\subsubsection{Short-range volume interactions such as van der Waals and steric repulsion}\label{sec::method_vdW_FE_discretization} Discretization of the centerline curves according to \eqref{eq::centerline_discretization}, i.\,e., $\vr_j \approx \vr_{\text{h},j} = \vdH_j \, \hat \vd_j$ and $\delta \vr_j^\text{T} \approx \delta \vr_{\text{h},j}^\text{T} = \delta \hat \vd_j^\text{T} \, \vdH_j^\text{T}$, for both beam elements~$j={1,2}$ turns the space-continuous form~\eqref{eq::var_iapot_small_sep} of the two-body virtual work contribution from molecular interactions~$\delta \Pi_\text{m,ss}$ into its discrete counterpart \begin{align}\label{eq::var_discrete_iapot_small_sep} \delta \Pi_\text{m,ss,h} = -(m-\tfrac{7}{2}) \int_0^{l_1} \int_0^{l_2} c_\text{m,ss} \left( \delta \hat \vd_1^\text{T} \vdH_1^\text{T} - \delta \hat \vd_2^\text{T} \vdH_2^\text{T} \right) \frac{\vr_{h,1} - \vr_{h,2}}{d_{h}} g_h^{-m+\tfrac{5}{2}} \dd s_2 \dd s_1. \end{align} Refer to~\eqref{eq::vdW_small_sep_def_constant} for the definition of the constant~$c_\text{m,ss}$. Note that \eqref{eq::var_discrete_iapot_small_sep} only contributes to those scalar residua associated with the centerline, i.\,e., translational degrees of freedom~$\hat \vdd$. This is a logical consequence of the fact that the SSIP law solely depends on the beams' centerline curves, as discussed in detail in \secref{sec::method_application_to_specific_types_of_interactions}. For the sake of brevity, the index 'h', indicating all discrete quantities, will be omitted from here on since all following quantities are considered discrete. 
In eq.~\eqref{eq::var_discrete_iapot_small_sep}, the discrete element residual vectors of the two interacting elements~$j=1,2$ can finally be identified as \begin{align}\label{eq::res_ia_pot_smallsep_ele1} \vdr_{\text{m,ss},1} &= - (m-\tfrac{7}{2}) \int_0^{l_1} \int_0^{l_2} c_\text{m,ss} \, \vdH_1^\text{T} \frac{ \left( \vr_{1} - \vr_{2} \right)}{d} \, g^{-m+\tfrac{5}{2}} \, \dd s_2 \dd s_1 \quad \text{and}\\ \vdr_{\text{m,ss},2} &= (m-\tfrac{7}{2}) \int_0^{l_1} \int_0^{l_2} c_\text{m,ss} \, \vdH_2^\text{T} \frac{ \left( \vr_{1} - \vr_{2} \right)}{d} \, g^{-m+\tfrac{5}{2}} \, \dd s_2 \dd s_1 \label{eq::res_ia_pot_smallsep_ele2}. \end{align} See \secref{sec::method_numerical_integration} for details on the numerical quadrature required to evaluate these expressions. \subsubsection{Long-range surface interactions such as electrostatics}\label{sec::method_elstat_FE_discretization} In analogy to the previous section, we discretize~\eqref{eq::var_pot_ia_powerlaw_large_sep_surface} and obtain the discrete element residual vectors \begin{align}\label{eq::res_ia_pot_largesep} \vdr_{\text{m,ls},1} &= - \int_0^{l_1} \int_0^{l_2} \, c_\text{m,ls} \, \vdH_1^\text{T} \frac{ \left( \vr_{1} - \vr_{2} \right)}{d^{m+2}} \, \dd s_2 \dd s_1 \quad \text{and} \quad \vdr_{\text{m,ls},2} &= \int_0^{l_1} \int_0^{l_2} \, c_\text{m,ls} \, \vdH_2^\text{T} \frac{ \left( \vr_{1} - \vr_{2} \right)}{d^{m+2}} \, \dd s_2 \dd s_1. \end{align} As mentioned already in~\secref{sec::ia_pot_double_length_specific_evaluation_elstat}, the discrete element residual vectors in the specific case of Coulombic interactions follow directly for~$m=1$ and~$c_\text{m,ls} = C_\text{elstat} \lambda_1 \lambda_2$. See \secref{sec::theory_electrostatics_pointcharges} for the definition of~$C_\text{elstat}$ and~\secref{sec::ia_pot_double_length_specific_evaluation_elstat} for the definition of the linear charge densities~$\lambda_i$. 
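To illustrate the structure of these expressions, the following Python sketch evaluates the long-range residual $\vdr_{\text{m,ls},1}$ by two nested 1D Gauss loops. For brevity, it assumes straight two-noded elements with linear shape functions instead of the Hermite elements used in this work, so it is a structural sketch rather than the actual element routine.

```python
import numpy as np

def elepair_residual_longrange(x1, x2, l1, l2, c_m, m, n_gp=10):
    """Discrete element residual r_{m,ls,1} of the long-range SSIP law,
    evaluated by two nested 1D Gauss-Legendre loops along the centerlines.
    x1, x2: (2, 3) arrays of nodal coordinates of the element pair."""
    xi_gp, w_gp = np.polynomial.legendre.leggauss(n_gp)
    res1 = np.zeros(6)
    for xi, wi in zip(xi_gp, w_gp):
        N1 = np.array([0.5 * (1.0 - xi), 0.5 * (1.0 + xi)])
        p1 = N1 @ x1                        # centerline point on element 1
        for eta, wj in zip(xi_gp, w_gp):
            N2 = np.array([0.5 * (1.0 - eta), 0.5 * (1.0 + eta)])
            p2 = N2 @ x2                    # centerline point on element 2
            diff = p1 - p2
            d = np.linalg.norm(diff)
            kernel = -c_m * diff / d**(m + 2)   # integrand of r_{m,ls,1}
            # H_1^T kernel, with Jacobians l/2 from ds = (l/2) dxi
            res1 += wi * wj * 0.25 * l1 * l2 * np.concatenate(
                [N1[0] * kernel, N1[1] * kernel])
    return res1
```

Note that the residual only carries entries for the translational (centerline) degrees of freedom, in line with the discussion above.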
Again, as mentioned in~\secref{sec::ia_pot_double_length_specific_evaluation_elstat}, the case of long-range \textit{volume} interactions only requires adapting the constant prefactor via~$c_\text{m,ls}=k m A_1 A_2 \rho_1 \rho_2$. \subsection{Objectivity and conservation properties}\label{sec::conservation_properties} It can be shown that the proposed SSIP approach from~\secref{sec::method_pot_based_ia} in combination with the SSIP laws from~\secref{sec::method_application_to_specific_types_of_interactions} fulfills the essential mechanical properties of objectivity, global conservation of linear and angular momentum as well as global conservation of energy. Due to the equivalent structure of the resulting space-discrete contributions, e.\,g., equation~\eqref{eq::var_discrete_iapot_small_sep}, as compared to the terms obtained in macroscopic beam contact formulations, we refer to the proof and detailed discussion of these important aspects in~\cite[Appendix B]{Meier2017a}. The fulfillment of conservation properties will furthermore be verified by means of the numerical examples in~\secref{sec::example_contact_repusive_LJpot} and~\secref{sec::num_ex_twocrossedbeams_elstat_snapintocontact}. \subsection{Regularization of SSIP laws in the limit of zero separation}\label{sec::regularization} The singularity of inverse power laws for zero separation is a well-known pitfall when dealing with this kind of interaction law. See e.\,g.~\cite[p.137]{israel2011} for a discussion of this topic in the context of point-point LJ interaction as compared to a hard-sphere model. In numerical methods, one therefore typically applies a regularization that cures the singularity and ensures the robustness of the method. Sauer \cite{Sauer2011} gives an example of a regularized LJ force law between two half-spaces, where the force is linearly extrapolated below a certain separation, which is chosen as~$1.05$ times the equilibrium spacing of the two half-spaces.
Also, existing, macroscopic beam contact formulations rely on the regularization of the seemingly instantaneous and infinite jump in the contact force when two macroscopic beams come into contact (see e.\,g.~\cite{Meier2017a,durville2012}). However, the SSIP laws derived for disk-disk vdW or LJ interaction from~\secref{sec::ia_pot_double_length_specific_evaluation_vdW} have not yet been considered in the literature. Note that LJ is the most general and challenging case considered in this work, since strong adhesive forces compete with even stronger repulsive forces whenever two fibers are about to come into contact. To be more precise, it is not only the strength of these competing forces, but also the high gradients in the force-distance relation that lead to a very stiff behavior of the governing partial differential equations. This alone places high demands on the nonlinear solver, which in combination with the already mentioned singularity at zero separation~$g=0$, and the fact that LJ interaction laws are not defined for configurations~$g<0$ where both fibers penetrate each other, makes it extremely demanding to solve the problem numerically. The results and conclusions discussed throughout this section are mainly based on the extensive numerical peeling and pull-off experiment with two adhesive fibers, which is presented in the authors' recent contribution~\cite{GrillPeelingPulloff}. In the absence of a regularization, only the pragmatic yet effective approach of applying a very restrictive upper bound on the displacement increment per nonlinear iteration (see~\ref{sec::algorithm_implementation_aspects} for details) proved successful in solving for the quasi-static equilibrium configurations without occurrence of any invalid configuration~$g\leq0$ for any integration point pair in any nonlinear iteration.
It must be emphasized that even a single occurrence of the latter is fatal and aborts the simulation, such that the mentioned approach is the only way to compute a solution for the full LJ interaction law, which can in turn serve as a reference solution during the validation of the regularization to be proposed and applied. However, the mentioned approach severely deteriorates the convergence behavior and leads to a large number of nonlinear iterations per time step. Thus, the regularization to be proposed in this section is superior in two respects: it guarantees the avoidance of singular/undefined values and saves a factor of five in the number of iterations of the nonlinear solver. Specifically, we apply a linear extrapolation of the total LJ force law below a certain separation~$g_\text{reg,LJ}$ in a manner very similar to \cite{Sauer2011} with the only difference that it is applied to the length-specific disk-disk force law instead of the force law between two half spaces. Linear extrapolation means that the original expression~$(m-7/2)\, c_\text{m,ss} \, g^{-m+\frac{5}{2}}$ in~\eqref{eq::res_ia_pot_smallsep_ele1} and \eqref{eq::res_ia_pot_smallsep_ele2} is replaced by a linear equation~$a\,g+b$ in the gap~$g$ for all~$g<g_\text{reg,LJ}$. The two constants~$a$ and $b$ are determined from the requirements that the force value as well as the first derivative of the original and the linear expression are identical for the regularization separation~$g=g_\text{reg,LJ}$. \figref{fig::force_disk-disk_LJ_vs_reg_LJ_linextpol} shows both the original (blue) and the regularized (red) LJ disk-disk force law as a function of the smallest surface separation~$g$. \begin{figure}[htp]% \centering \includegraphics[width=0.4\textwidth]{disk-disk_force_LJ_vs_regLJ_linextpol.png} \caption[Comparison of regularized and full LJ disk-disk force law.]{Comparison of regularized (red) and full (blue) LJ disk-disk force law. 
Here, $g_\text{reg,LJ}=g_\text{LJ,eq,disk$\parallel$disk}$ is shown exemplarily\footnotemark.} \label{fig::force_disk-disk_LJ_vs_reg_LJ_linextpol} \end{figure}% \footnotetext{See eq.~\eqref{eq::equilibrium_spacing_LJ_disks_parallel_smallsep} for an analytical expression of~$g_\text{LJ,eq,disk$\parallel$disk}$.} The numerical experiment of adhesive fibers studied in~\cite{GrillPeelingPulloff} reveals that this regularization yields the already mentioned great enhancement in terms of robustness as well as efficiency without any change in the system response. As shown in the comparison of the force-displacement curves therein, the results obtained with the full LJ and with the regularized LJ force law do indeed coincide down to machine precision. This is reasonable and expected, because we chose a regularization parameter~$g_\text{reg,LJ} \leq g_\text{LJ,eq,cyl$\parallel$cyl}$ that is smaller than any separation value~$g$ occurring anywhere in the system in any converged equilibrium state. Thus, the solution never ``sees'' the modification to the LJ force law in the interval~$g<g_\text{reg,LJ}$ and the results are identical. However, since during the nonlinear iterations also non-equilibrium configurations with~$g<g_\text{reg,LJ}$ occur, the nonlinear solution procedure is influenced in an extremely positive way, leading to an overall saving of a factor of five in the number of nonlinear iterations as compared to the full LJ interaction without any regularization. For more details on the comparison including all parameter values we kindly refer the reader to~\cite{GrillPeelingPulloff}.
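In generic form, the construction of the two constants $a$ and $b$ can be sketched as follows. Here, $K$ and $p$ denote the (positive) prefactor and exponent of a single inverse power-law force contribution, i.\,e., stand-ins for $(m-7/2)\,c_\text{m,ss}$ and $m-5/2$; for LJ, the same construction is applied to the total force law. The sketch illustrates the matching conditions, not the actual implementation.

```python
def regularized_force(g, g_reg, K, p):
    """Force law f(g) = K * g**(-p), linearly extrapolated below g_reg.
    The constants a, b of the linear branch a*g + b follow from requiring
    that force value and first derivative match the original law at
    g = g_reg, so the regularized law is C1-continuous."""
    a = -p * K * g_reg**(-p - 1.0)       # slope: f'(g_reg)
    b = K * g_reg**(-p) - a * g_reg      # intercept: value match at g_reg
    if g < g_reg:
        return a * g + b                 # regularized (linear) branch
    return K * g**(-p)                   # original power law
```

By construction, the regularized law is finite for all $g \geq 0$, including zero and (formally) negative gaps, which removes the singularity that would otherwise abort the nonlinear iterations.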
\subsection{Numerical evaluation of n-dimensional integrals of intermolecular potential laws}\label{sec::method_numerical_integration} Generally, we use $n$ nested loops of a 1D Gauss-Legendre quadrature scheme which is the well-established and de-facto standard method in nonlinear finite element frameworks and has been used also in previous publications in the context of molecular interactions \cite{Argento1997,Sauer2013}. Due to the strong nonlinearity, i.\,e., high gradients of the power laws, a large number of quadrature points is required in each dimension to achieve sufficient accuracy. This effect is most critical for high exponents of the potential law, i.\,e., vdW and steric interactions, and small separations of the interacting bodies. We thus implemented the possibility to subdivide the domain of a finite element into~$n_\text{IS}$ integration segments and apply an~$n_\text{GP}$-point Gauss rule on each of them in order to achieve sufficient density of quadrature points in every case. \subsection{Algorithm complexity}\label{sec::algorithm_complexity} Multi-dimensional numerical integration of the intermolecular potential laws as discussed above turns out to be the crucial factor in terms of efficiency. For the following analysis of efficiency, we consider the associated algorithmic complexity. Generally, all possible pairs of elements need to be evaluated, which has~$\bigO \left( n_\text{ele}^2 \right)$ complexity. Let us assume we apply a total of~$n_\text{GP,tot,ele-length}$ integration points along the element length and~$n_\text{GP,tot,transverse}$ integration points in the transversal, i.\,e., cross-sectional in-plane directions. Thus, the complexity of an approach based on full 6D numerical integration over the 3D volumes of the two interacting bodies (cf.~\eqref{eq::pot_fullvolint}) can be stated as \begin{equation} \bigO \left( n_\text{ele}^2 \cdot n_\text{GP,tot,ele-length}^2 \cdot n_\text{GP,tot,transverse}^4 \right). 
\end{equation} In contrast to that, the novel SSIP approach proposed in~\secref{sec::method_double_length_specific_integral} reduces the dimensionality of numerical integration from six to two (cf.~\eqref{eq::ia_pot_double_integration}) and thus yields \begin{equation} \bigO \left( n_\text{ele}^2 \cdot n_\text{GP,tot,ele-length}^2 \right) \end{equation} complexity. The resulting difference between both clearly depends on the problem size, type of interaction and other factors. To get an impression, typical numbers for the total number of quadrature points in transverse dimensions based on the numerical examples of~\secref{sec::numerical_results} are given as $n_\text{GP,tot,transverse} = 10 \ldots 100$. The gain in efficiency thus easily exceeds a factor of~$10^4$ and can be as large as a factor of~$10^8$. In addition to this tremendous saving from the inherent algorithmic complexity, the power law integrand has a smaller exponent due to the preliminary analytical integration in case of the SSIP approach. This in turn allows for a smaller number of integration points~$n_\text{ele} \cdot n_\text{GP,tot,ele-length}$ for each of the two nested 1D integrations along the centerline, given the same level of accuracy. To give an example, the vdW interaction force scales with an exponent of~$-7$ if formulated for two points (cf.~\eqref{eq::pot_ia_vdW_pointpair}) as compared to an exponent of~$-7/2$ for two circular cross-sections (cf.~\eqref{eq::var_iapot_small_sep} for~$m=6$). This makes another significant difference, especially if very small separations, as typically observed for contacting bodies, are considered. The combination of high dimensionality and strong nonlinearity of the integrand renders the direct approach of six-dimensional numerical quadrature to evaluate eq.~\eqref{eq::pot_fullvolint} infeasible for basically any problem of practical relevance.
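The one-dimensional composite Gauss-Legendre quadrature with $n_\text{IS}$ integration segments per element domain, as applied to each of the two nested centerline integrals of the SSIP approach, can be sketched as follows; the steep integrand in the accompanying test is merely an illustrative stand-in for the inverse power laws discussed above.

```python
import numpy as np

def composite_gauss(f, a, b, n_is, n_gp):
    """Subdivide [a, b] into n_is integration segments and apply an
    n_gp-point Gauss-Legendre rule on each segment, in order to achieve
    a sufficient density of quadrature points for steep integrands."""
    xi, w = np.polynomial.legendre.leggauss(n_gp)
    edges = np.linspace(a, b, n_is + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)  # affine map to [-1, 1]
        total += half * np.sum(w * f(mid + half * xi))
    return total
```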
In fact, even a single evaluation of the vdW potential of two straight cylinders to serve as a reference solution turned out to be too computationally costly below some critical, small separation. See~\secref{sec::verif_approx} for details on this numerical example. Note that although there might be more elaborate numerical quadrature schemes for these challenging integrands consisting of rational functions (see e.\,g.~\cite{Gautschi2001}), the basic problem and the conclusions drawn from this comparison of algorithmic complexities remain the same. These cost estimates based on theoretical algorithm complexity and the experience from rather small academic examples considered in~\secref{sec::numerical_results} show that the SSIP approach indeed makes the difference between feasible and intractable computational problems. This directly translates to the applicability to complex biopolymer as well as synthetic fibrous systems that we have in mind and thus significantly extends the range of (research) questions that are accessible by means of numerical simulation. \subsection{Search algorithm and parallel computing}\label{sec::search_parallel_computing} In order to find the relevant pairs of interaction partners, the same search algorithms as in the case of macroscopic contact (between beams or 3D solids) may be applied, however, the obvious difference lies in the search radius. For contact algorithms, a very small search radius covering the immediate surrounding of a considered body is sufficient, whereas for molecular interactions the search radius depends on the type of interaction and must be at least as large as the so-called cut-off radius. Only at separations beyond the cut-off radius, the energy contributions from a particular interaction are assumed to be small enough to neglect them. Depending on the interaction potential and partners, the range and thus cut-off radius can be considerably large which underlines the importance of an efficient search algorithm. 
In the scope of this work, a so-called bucket search strategy has been used, which divides the simulation domain uniformly into a number of cells or buckets and assigns all nodes and elements to these cells to later determine spatially proximate pairs of elements based on the content of neighboring cells. This leads to an algorithmic complexity of~$\bigO (n_\text{ele})$ and the search thus turned out to be insignificant in terms of computational cost as compared to the evaluation of pair interactions as discussed in the preceding section. See \cite{Wriggers2006} for an overview of search algorithms in the context of computational contact mechanics. To speed up simulations of large systems, parallel computing is a well-established strategy of ever-increasing importance. Key to this concept is the ability to partition the problem such that an independent and thus simultaneous computation on several processors is enabled. In our framework, this partitioning is based on the same bucket strategy that handles the search for interaction partners. Regarding the evaluation of interaction forces, a pair (or set) of interacting beam elements is assigned to the processor which owns and thus already evaluates the internal and external force contribution of the involved elements. At processor boundaries, i.\,e., if the two interacting elements are owned by different processors, one processor is chosen to evaluate the interaction forces and the required information such as the element state vector of the element owned by the other processor is communicated beforehand. Upon successful evaluation of the element pair interaction, the resulting contribution to the element residual vector and stiffness matrix is again communicated for the element whose owning processor was not responsible for the pair evaluation.
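The bucket strategy can be sketched in a few lines of Python (a simplified serial version with illustrative names, not the actual parallel framework implementation): points are binned into uniform cells with edge length equal to the cut-off radius, so every partner of a given point is guaranteed to lie in one of the 27 cells surrounding its own.

```python
import itertools
import math
import random

def bucket_pairs(points, cutoff):
    """Find all point pairs closer than `cutoff` via uniform cells (buckets)."""
    cells = {}
    for i, p in enumerate(points):
        key = tuple(math.floor(c / cutoff) for c in p)
        cells.setdefault(key, []).append(i)
    pairs = set()
    for key, members in cells.items():
        # all partners of a point in this cell lie in the 27 neighboring cells
        for off in itertools.product((-1, 0, 1), repeat=3):
            neigh = cells.get(tuple(k + o for k, o in zip(key, off)), ())
            for i in members:
                for j in neigh:
                    if i < j and math.dist(points[i], points[j]) <= cutoff:
                        pairs.add((i, j))
    return pairs

random.seed(0)
pts = [tuple(random.uniform(0.0, 10.0) for _ in range(3)) for _ in range(300)]
pairs = bucket_pairs(pts, cutoff=1.5)
```

Since only a bounded number of neighbor cells is visited per point, the cost grows linearly in the number of points (for roughly uniform densities), in contrast to the quadratic cost of testing all pairs.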
\section{Examples for the Derivation and Analysis of the Two-Body Interaction Potential and Force Laws for Parallel Disks and Cylinders}\label{sec::formulae_two_body_LJ_interaction} The aim of this appendix is to present the mathematical background of analytical solutions for two-body interaction potential as well as force laws. Generally, the strategy of pairwise summation, i.\,e., integration of a point pair potential, is applied. See~\secref{sec::theory_molecular_interactions_twobody_vdW} for a discussion of the applicability of this approach. Exemplarily, we consider the interaction between two parallel disks and two parallel cylinders since these scenarios proved to be most important throughout the derivation of SSIP laws as well as their verification in~\secref{sec::method_application_to_specific_types_of_interactions} and~\ref{sec::verification_methods}, respectively. In addition, we are interested in the total LJ interaction potential and force law in the limit of small separations, because the regularization proposed in~\secref{sec::regularization} is based on these theoretical considerations. Finally, also the equilibrium spacing~$g_\text{LJ,eq,cyl$\parallel$cyl}$ of two infinitely long cylinders interacting via the LJ potential will be derived and has proven helpful in order to choose an almost stress-free initial configuration of two deformable, straight fibers e.\,g.~in the authors' recent contribution~\cite{GrillPeelingPulloff} studying the peeling and pull-off behavior. \subsection{A generic interaction potential described by an inverse power law}\label{sec::derivation_pot_ia_two_body_powerlaw} Instead of $\Phi_\text{vdW}(r)$ from \eqref{eq::pot_ia_vdW_pointpair} or any other particular interaction type, here, we rather use the more general power law~$\Phi_m(r)=k_m \, r^{-m}$ for the point pair potential. 
As noted already in \cite{langbein1972}, this does not introduce any additional complexity in the derivations and the solutions can directly be used for other exponents~$m$. We will make use of this fact when considering LJ interaction between two disks and two cylinders analytically in~\ref{sec::derivation_pot_force_ia_LJ_disks} and \ref{sec::derivation_pot_force_ia_LJ_cylinders}, respectively. These findings are to be used in the context of deriving a proper regularization of the potential laws in \secref{sec::regularization}. \subsubsection{Disk-disk interaction}\label{sec::derivation_pot_ia_powerlaw_disks} The following refers to the analytical solutions for the disk-disk vdW interaction potential from literature that is summarized in~\secref{sec::theory_molecular_interactions_twobody_vdW}. Let us first state the underlying mathematical problem. We would like to find an analytical solution for the required 4D integral~$C_m$ over the circular area of each disk \begin{equation}\label{eq::cross_sec_integral_langbein} C_m \defvariable \iint_{A_1,A_2} \Phi_m(r) \dd A_2 \dd A_1 \qquad \text{with} \quad \Phi_m(r)=k_m \, r^{-m} \end{equation} in order to arrive at the disk-disk interaction potential \begin{equation} \tilde{\tilde{\pi}}_m = \rho_1 \rho_2 \, C_m. \end{equation} \paragraph{Details on 2(a) in~\secref{sec::theory_molecular_interactions_twobody_vdW}: The regime of large separations}$\,$\\ For the limit of large separations $g \gg R_1,R_2$, the solution is quite straightforward and can be explained in simple words as follows. 
The distance of any point in a disk to its center is of order $\bigO(R)$ and thus much smaller than the disks' surface-to-surface separation~$g$: \begin{equation} \tilde{r}_{1/2} = \bigO(R_{1/2}) \ll \bigO(g) \end{equation} \begin{figure}[htp]% \centering \includegraphics[width=0.35\textwidth]{cross_sec_integral.pdf} \caption{Two circular cross-sections, i.\,e.~disks in parallel alignment} \label{fig::cross_sec_parallel_integral} \end{figure} Figure \ref{fig::cross_sec_parallel_integral} illustrates the introduced geometrical quantities. The distance $r$ between any two points $\vx_1$ in disk~$1$ and $\vx_2$ in disk~$2$ may therefore be approximated by the inter-axis distance $d=g+R_1+R_2$: \begin{equation} r = \norm{\vx_1 - \vx_2} = \norm{ \vr_1 + \tilde{\vr}_1 - \vr_2 - \tilde{\vr}_2 } \approx \norm{\vr_1 - \vr_2} = d \end{equation} Double integration over both disks is hence equivalent to a multiplication with the disks' areas~$A_1,A_2$ \begin{equation} C_{m\text{,ls}} \approx A_1 A_2 \, \Phi_m(r=d) \end{equation} and finally we end up with the sought-after expression for the general disk-disk interaction potential in the limit of large separations \begin{equation}\label{eq::approx_large_sep} \tilde{\tilde{\pi}}_{m,\text{disk$\parallel$disk,ls}} \approx \rho_1 \rho_2 \, A_1 A_2 \, \Phi_m(r=d). \end{equation} Note that this approximation is valid for arbitrary pair interaction functions $\Phi(r)$. Moreover, this solution does not even depend on the parallel orientation of the disks. It is valid for all mutual angles of the disks which is important because we will apply it to arbitrary configurations of deflected beams. For the special case of parallel disks, this result can alternatively be obtained by the sound mathematical derivation of \cite[eq. (10)]{langbein1972}. The leading term of his hypergeometric series is identical to the right hand side of equation \eqref{eq::approx_large_sep}. 
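The quality of this large-separation approximation is easily checked numerically. The following Python sketch (illustrative parameters, $k_6=1$, unit disk radii) integrates $\Phi_6(r)=r^{-6}$ over two coplanar unit disks with a polar midpoint rule and compares the result with the right hand side of \eqref{eq::approx_large_sep}:

```python
import numpy as np

def disk_points(center_x, R, nr=8, nphi=16):
    # midpoint rule in polar coordinates: quadrature points and area weights
    r = (np.arange(nr) + 0.5) * R / nr
    phi = (np.arange(nphi) + 0.5) * 2.0 * np.pi / nphi
    rr, pp = np.meshgrid(r, phi, indexing="ij")
    x = center_x + rr * np.cos(pp)
    y = rr * np.sin(pp)
    w = rr * (R / nr) * (2.0 * np.pi / nphi)   # area element r dr dphi
    return x.ravel(), y.ravel(), w.ravel()

R1 = R2 = 1.0
g = 50.0                    # regime of large separations, g >> R1, R2
d = g + R1 + R2             # inter-axis distance
x1, y1, w1 = disk_points(0.0, R1)
x2, y2, w2 = disk_points(d, R2)
r = np.hypot(x1[:, None] - x2[None, :], y1[:, None] - y2[None, :])
C6_num = np.sum(w1[:, None] * w2[None, :] * r ** -6.0)      # brute-force 4D integral
C6_ls = (np.pi * R1**2) * (np.pi * R2**2) * d ** -6.0       # approximation, k_6 = 1
rel_err = abs(C6_num - C6_ls) / C6_num
```

For $g = 50\,R$ the relative deviation is well below one percent, consistent with the expected correction of order $(R/d)^2$.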
\paragraph{Details on 1(a) in~\secref{sec::theory_molecular_interactions_twobody_vdW}: The regime of small separations}$\,$\\ Now, we consider the limit of small separations $g \ll R_1,R_2$. The problem has been studied by Langbein \cite{langbein1972} in the context of vdW attraction of rigid cylinders, rods or fibers. In the following, we will briefly present the central mathematical concept of his derivations. \begin{figure}[htpb]% \centering \includegraphics[width=0.5\textwidth]{cross_sec_integral_langbein.pdf} \caption{Integration over the cross-sections at small separations, figure taken from \cite{langbein1972} with adapted notation.} \label{fig::cross_sec_intgral_langbein} \end{figure}% The basic idea is to choose a favorable set of integration variables $p,t,\varphi,\psi$ as shown in \figref{fig::cross_sec_intgral_langbein}. In this way, the four dimensional integral \eqref{eq::cross_sec_integral_langbein} can be reduced to a double integral because the integrand $\Phi_m(p)$ does not depend on the angles $\varphi$ and $\psi$: \begin{align} C_m &= \int_{A_1} \int_{A_2} \Phi_m(p) \dd A_2 \dd A_1 = \int_p \int_t \int_{\varphi} \int_{\psi} \Phi_m(p) \dd \psi \dd \varphi \dd t \dd p \nonumber \\ & \qquad \qquad \qquad \qquad \quad=\int_p \int_t \Phi_m(p) \, 2 p \varphi(p,t) \,2 t \psi(p,t) \dd t \dd p \label{eq::dimred_langbein} \\ & \text{where} \quad \cos (\varphi) = \frac{p^2 + t^2 - R_1^2}{2pt}, \quad \cos(\psi) = \frac{t^2 + d^2 - R_2^2}{2td}, \quad d=g+R_1+R_2 \nonumber \end{align} For a general potential law $\Phi_m(r=p) = k_m p^{-m}$, this reads \begin{equation} C_m = 4 k_m \int_p \int_t p^{-m+1} \varphi t \psi \dd t \dd p \end{equation} Making use of $g \ll R_1,R_2$ and introducing reduced variables $\bar{p}= p/g$ and $\bar{t}= t/g$ leads to \begin{align} C_{m\text{,ss}} &= 4 k_m \sqrt{ \frac{2 R_1 R_2}{R_1+R_2} } \, \int_{g}^{g+2R_1+2R_2} p^{-m+1} \int_{g}^{p} \sqrt{t-g} \, \arccos \left( \frac{t}{p} \right) \dd t \dd p \\ &= 4 k_m \sqrt{ \frac{2 R_1 
R_2}{R_1+R_2} } \, g^{-m+7/2} \, \int_{1}^{\infty} \bar{p}^{-m+1} \int_{1}^{\bar{p}} \sqrt{\bar{t}-1} \, \arccos \left( \frac{\bar{t}}{\bar{p}} \right) \dd \bar{t} \dd \bar{p} \end{align} Another substitution of variables $x=\bar{t}/\bar{p}$ and interchanging the order of integration finally yields the solution% \footnote{Note that in the original article \cite{langbein1972}, the final form of $C_m$ (eq.~(15) on p.~65) seems to be incorrect. A comparison with \cite[p. 172]{parsegian2005} for the case of vdW potential with~$m=6$ confirms the solution presented here. Additionally, this solution is verified by means of numerical quadrature in \secref{sec::verif_approx} (cf.~\figref{fig::vdW_pot_over_gap_disks_parallel}). }\\ \begin{equation}\label{eq::langbein_Cm_smallsep} C_{m\text{,ss}} = g^{-m+\tfrac{7}{2}} \quad \frac{2 k_m \pi}{(m-2)^2} \quad \sqrt{ \frac{2 R_1 R_2}{R_1+R_2} } \quad \frac{\Gamma(m-\tfrac{7}{2}) \,\Gamma(\tfrac{m-1}{2})}{\Gamma(m-2) \, \Gamma(\tfrac{m}{2}-1)} \qquad \text{for} \quad m>\tfrac{7}{2} \end{equation} Here, $\Gamma$ denotes the gamma function which is defined by $\Gamma(z) = \int_0^{\infty} w^{z-1} e^{-w} \dd w$. Multiplication with the particle densities finally results in the sought-after general disk-disk interaction potential for the regime of small separations \begin{equation}\label{eq::approx_small_sep} \tilde{ \tilde{ \pi}}_{m,\text{disk$\parallel$disk,ss}} = \rho_1 \rho_2 \, C_{m\text{,ss}} \qquad \text{for} \quad m>\tfrac{7}{2} \end{equation} that can be further specified by means of~$m=6$ and~$k_6 = -C_\text{vdW}$ to end up with~$\tilde{ \tilde{ \pi}}_\text{vdW,disk$\parallel$disk,ss}$ as in \eqref{eq::pot_ia_vdW_disk_disk_parallel_smallseparation}. \paragraph{Remarks} \begin{enumerate} \item Note that this solution is valid for exponents $m>7/2$ only. This is in contrast to the approximation for large separations~\eqref{eq::approx_large_sep} which is valid for arbitrary forms of the pair interaction potential~$\Phi(r)$. 
\item Note however the conceptual similarity of this expression to the one valid for the limit of large separations \eqref{eq::approx_large_sep}. Here, we also find a power law, however in the surface-to-surface distance $g$ instead of the inter-axis distance $d$ and with a different exponent. \end{enumerate} \subsubsection{Cylinder-cylinder interaction}\label{sec::derivation_pot_ia_powerlaw_cylinders} Considering the case of two parallel cylinders, we are interested in the length-specific interaction potential \begin{equation}\label{eq::def_length_specific_pot_powerlaw} \tilde{\pi}_{m,\text{cyl$\parallel$cyl}} = \lim_{l_1 \to \infty} \frac{1}{l_1} \, \int_{-l_1/2}^{l_1/2} \int_{-\infty}^\infty \iint_{A_1,A_2} \rho_1 \rho_2 \, \Phi_m(r) \dd A_2 \dd A_1 \dd s_2 \dd s_1 \qquad \text{with} \quad \Phi_m(r)=k_m \, r^{-m}. \end{equation} The integral over~$s_1=-l_1/2 \ldots l_1/2$ yields a factor of~$l_1$ since the integrand is constant along~$s_1$ and thus immediately cancels with the normalization factor~$1/l_1$. Exemplarily, we want to discuss the more interesting and challenging regime of small separations here. Following \cite[p.63]{langbein1972}, one can interchange the order of integration, solve the integral over the infinitely long cylinder length analytically in a first step, which yields the factor~$\sqrt{\pi} \, \Gamma\left(\tfrac{m-1}{2}\right)/\Gamma\left(\tfrac{m}{2}\right)$ reducing to~$3\pi/8$ for~$m=6$, and then make use of the generic solution for~$C_{m,\text{ss}}$ from \eqref{eq::langbein_Cm_smallsep}, but this time with reduced exponent~$m-1$, to end up with \begin{align} \tilde{\pi}_{m,\text{cyl$\parallel$cyl},ss} &= \iint_{A_1,A_2} \int_{-\infty}^\infty \rho_1 \rho_2 \, \Phi_m(r) \dd s_2 \dd A_2 \dd A_1 \\ &= \frac{\sqrt{\pi} \, \Gamma\left(\frac{m-1}{2}\right)}{\Gamma\left(\frac{m}{2}\right)} \, \rho_1 \rho_2 \, k_m \, \frac{C_{m-1,\text{ss}}}{k_{m-1}}.
\label{eq::pot_ia_LJ_cylinders_parallel_smallsep} \end{align} Plugging in~$m=6$ for vdW interaction directly yields the two-body interaction potential per unit length for two parallel cylinders in the regime of small separations~$\tilde \pi_\text{vdW,cyl$\parallel$cyl,ss}$ as stated in eq.~\eqref{eq::pot_ia_vdW_cyl_cyl_parallel_smallseparation}. This generic expression~\eqref{eq::pot_ia_LJ_cylinders_parallel_smallsep} will be exploited when deriving the total LJ interaction law in \ref{sec::derivation_pot_force_ia_LJ_cylinders}. \subsection{Lennard-Jones force laws in the regime of small separations}\label{sec::derivation_pot_force_ia_LJ} As compared to the preceding sections, we now want to turn to the LJ interaction consisting of two power law contributions, one adhesive and one repulsive, respectively. Our motivation is to study the characteristics of the resulting, superposed force laws for disk-disk as well as cylinder-cylinder interactions by means of theoretical analysis of the analytical expressions. These findings shall prove valuable when deriving an effective yet accurate regularization of the LJ potential law for the limit of zero separation in \secref{sec::regularization}. We therefore focus on the regime of small separations throughout this section. Coming from the expressions for the two-body interaction potential~$\tilde{\tilde{\pi}}_{m,\text{disk$\parallel$disk,ss}}$ and~$\tilde{\pi}_{m,\text{cyl$\parallel$cyl,ss}}$ derived for a generic point pair potential~$\Phi_m$ in \ref{sec::derivation_pot_ia_two_body_powerlaw}, we will now sum the adhesive contribution~$m=6$ and the repulsive contribution~$m=12$ and differentiate once to arrive at the desired LJ force laws. 
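Before doing so, the generic small-separation solution \eqref{eq::langbein_Cm_smallsep} can be cross-checked numerically. The following Python sketch (illustrative, SciPy-based; $k_m$ and the geometry factor $\sqrt{2R_1R_2/(R_1+R_2)}$ as well as the power of~$g$ are split off) compares the dimensionless double integral arising in the derivation with the closed-form gamma-function expression:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

def I_numeric(m):
    # dimensionless double integral from the small-separation derivation:
    # I_m = int_1^inf pbar^(1-m) int_1^pbar sqrt(tbar-1) arccos(tbar/pbar) dtbar dpbar
    f = lambda t, p: (p ** (1.0 - m) * np.sqrt(t - 1.0)
                      * np.arccos(np.clip(t / p, -1.0, 1.0)))
    val, _ = dblquad(f, 1.0, np.inf, lambda p: 1.0, lambda p: p)
    return val

def I_closed(m):
    # closed form implied by eq. (langbein_Cm_smallsep):
    # 4 I_m = 2 pi/(m-2)^2 * Gamma(m-7/2) Gamma((m-1)/2) / (Gamma(m-2) Gamma(m/2-1))
    return (np.pi / (2.0 * (m - 2.0) ** 2)
            * gamma(m - 3.5) * gamma((m - 1.0) / 2.0)
            / (gamma(m - 2.0) * gamma(m / 2.0 - 1.0)))

for m in (6.0, 12.0):
    rel = abs(I_numeric(m) - I_closed(m)) / I_closed(m)
    print(m, I_closed(m), rel)
```

For the two exponents relevant to the LJ law, $m=6$ and $m=12$, both evaluations agree to high accuracy.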
\subsubsection{Disk-disk interaction}\label{sec::derivation_pot_force_ia_LJ_disks} As outlined above, we make use of~\eqref{eq::approx_small_sep} for both parts of the LJ interaction and immediately obtain \begin{equation} \tilde{\tilde{\pi}}_\text{LJ,disk$\parallel$disk,ss} = \tilde{k}_6 \, g^{-\frac{5}{2}} + \tilde{k}_{12} \, g^{-\frac{17}{2}} \end{equation} where the following abbreviations for the constant prefactors have been introduced: \begin{align} \tilde{k}_6 \defvariable \frac{\pi}{8} k_6 \rho_1 \rho_2 \sqrt{\frac{2 R_1 R_2}{R_1+R_2}} \, \frac{\Gamma^2\left(\frac{5}{2}\right)}{\Gamma(4)\,\Gamma(2)} \qquad \text{and} \qquad \tilde{k}_{12} \defvariable k_{12} \rho_1 \rho_2 \sqrt{\frac{2 R_1 R_2}{R_1+R_2}} \, \num{5.30e-3} \end{align} For later use in the analysis of the force law, let us restate the conversion from one set of parameters~$k_6, k_{12}$ specifying the point pair LJ potential to the other commonly used set~$\Phi_\text{LJ,eq}, r_\text{LJ,eq}$ according to \eqref{eq::pot_ia_LJ_pointpair}: \begin{equation} k_6 = 2 \Phi_\text{LJ,eq} r_\text{LJ,eq}^{6} \qquad \text{and} \qquad k_{12} = - \Phi_\text{LJ,eq} r_\text{LJ,eq}^{12} \end{equation} Differentiation with respect to the separation yields the disk-disk LJ force law \begin{equation}\label{eq::force_LJ_disks_parallel_smallsep} \tilde{\tilde{f}}_\text{LJ,disk$\parallel$disk,ss} = -\diff{ \tilde{\tilde{\pi}}_\text{ LJ,disk$\parallel$disk,ss } }{ g } = \frac{5}{2} \, \tilde{k}_6 \, g^{-\frac{7}{2}} + \frac{17}{2} \, \tilde{k}_{12} \, g^{-\frac{19}{2}}. \end{equation} See~\secref{sec::regularization} for a plot of the function.
This expression allows us to determine characteristic quantities such as the equilibrium spacing~$g_\text{LJ,eq,disk$\parallel$disk}$, i.\,e., the distance where the force vanishes: \begin{equation}\label{eq::equilibrium_spacing_LJ_disks_parallel_smallsep} g_\text{LJ,eq,disk$\parallel$disk} = \left( - \frac{17}{5} \, \frac{\tilde k_{12}}{\tilde k_6} \right)^\frac{1}{6} \approx \num{0.653513} \, r_\text{LJ,eq}. \end{equation} Since the repulsive contributions from proximate point pairs decay faster than the adhesive ones, we obtain a smaller equilibrium spacing as compared to the scenario of a point pair. Another differentiation allows us to determine the value of the force minimum, i.\,e., the maximal adhesive force, and the corresponding separation \begin{align} \tilde{\tilde{f}}_\text{LJ,disk$\parallel$disk,min} &\approx \num{0.904115} \, \rho_1 \rho_2 \sqrt{\frac{2 R_1 R_2}{R_1+R_2}} \, r_\text{LJ,eq}^{\frac{5}{2}} \, \Phi_\text{LJ,eq}\\ g_{\tilde{\tilde{f}}_\text{LJ,disk$\parallel$disk,min}} &= \left( - \frac{323}{35} \frac{\tilde k_{12}}{\tilde k_6} \right)^\frac{1}{6} \approx \num{0.7718448} \, r_\text{LJ,eq} \approx \num{1.18107} \, g_\text{LJ,eq,disk$\parallel$disk}. \label{eq::gap_force_min_LJ_disks_parallel} \end{align} These quantities turn out to be decisive for the choice of a regularized, i.\,e., altered force law that is to be used instead of the original one in order to cure the numerical problems that come with the singularity at zero separation~$g=0$. In summary, we have found an analytical, closed-form expression for the disk-disk LJ force law~\eqref{eq::force_LJ_disks_parallel_smallsep}, valid in the regime of small separations and for parallel disks. By means of elementary algebra, we were thus able to determine analytical expressions for the characteristic equilibrium spacing as well as value and spacing of the force minimum.
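The characteristic values quoted above can be reproduced with a few lines of Python, using the dimensionless prefactor of \eqref{eq::langbein_Cm_smallsep} for the two LJ exponents (a minimal sketch in units chosen such that $\Phi_\text{LJ,eq} = r_\text{LJ,eq} = 1$; the common geometry factor cancels in the ratio):

```python
from math import gamma, pi

def pref(m):
    # dimensionless prefactor of C_{m,ss} in eq. (langbein_Cm_smallsep),
    # with k_m, the geometry factor and the power of g split off
    return (2.0 * pi / (m - 2.0) ** 2 * gamma(m - 3.5) * gamma((m - 1.0) / 2.0)
            / (gamma(m - 2.0) * gamma(m / 2.0 - 1.0)))

k6, k12 = 2.0, -1.0                              # point-pair LJ, Phi_eq = r_eq = 1
ratio = (pref(12.0) * k12) / (pref(6.0) * k6)    # = k12~/k6~ in units of r_eq^6
g_eq   = (-17.0 / 5.0 * ratio) ** (1.0 / 6.0)    # equilibrium spacing
g_fmin = (-323.0 / 35.0 * ratio) ** (1.0 / 6.0)  # location of the force minimum
print(g_eq, g_fmin)
```

Evaluating this sketch reproduces the quoted values $g_\text{LJ,eq,disk$\parallel$disk} \approx 0.6535\, r_\text{LJ,eq}$ and $g_{\tilde{\tilde{f}}_\text{LJ,min}} \approx 0.7718\, r_\text{LJ,eq}$.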
\subsubsection{Cylinder-cylinder interaction}\label{sec::derivation_pot_force_ia_LJ_cylinders} As in the previous section (\ref{sec::derivation_pot_ia_powerlaw_cylinders}), we want to restrict ourselves to parallel, infinite cylinders and consider the length-specific interaction potential as well as force law. Again, starting from the expression for a generic interaction potential~\eqref{eq::pot_ia_LJ_cylinders_parallel_smallsep}, superposition yields \begin{equation} \tilde{\pi}_\text{LJ,cyl$\parallel$cyl,ss} = \tilde{k}_{\text{cyl},6} \, g^{-\frac{3}{2}} + \tilde{k}_{\text{cyl},12} \, g^{-\frac{15}{2}} \end{equation} where the following abbreviations have been introduced: \begin{align} \tilde{k}_{\text{cyl},6} \defvariable \frac{\pi^2}{24} k_6 \rho_1 \rho_2 \sqrt{\frac{2 R_1 R_2}{R_1+R_2}} \qquad \text{and} \qquad \tilde{k}_{\text{cyl},12} \defvariable \num{5.81868e-4} \, k_{12} \pi^2 \rho_1 \rho_2 \sqrt{\frac{2 R_1 R_2}{R_1+R_2}} \end{align} Differentiation with respect to the separation yields the cylinder-cylinder LJ force law \begin{equation}\label{eq::force_LJ_cylinders_parallel_smallsep} \tilde{f}_\text{LJ,cyl$\parallel$cyl,ss} = -\diff{ \tilde{\pi}_\text{LJ,cyl$\parallel$cyl,ss} }{ g } = \frac{3}{2} \, \tilde{k}_{\text{cyl},6} \, g^{-\frac{5}{2}} + \frac{15}{2} \, \tilde{k}_{\text{cyl},12} \, g^{-\frac{17}{2}} \end{equation} that shall be further analyzed in the following. To begin with, the equilibrium spacing for two parallel cylinders interacting via a LJ potential can be derived as \begin{equation}\label{eq::equilibrium_spacing_LJ_cylinders_parallel_smallsep} g_\text{LJ,eq,cyl$\parallel$cyl} = \left( - 5 \, \frac{\tilde k_{\text{cyl},12}}{\tilde k_{\text{cyl},6}} \right)^\frac{1}{6} \approx \num{0.57169} \, r_\text{LJ,eq}. \end{equation} This is an important result, since it directly provides the non-trivial stress-free configuration of two flexible, initially straight fibers.
We make use of this knowledge e.\,g.~in~\cite{GrillPeelingPulloff}. Again, since the repulsive contribution of proximate point pairs decays faster than the adhesive contribution, this equilibrium spacing is smaller than~$g_\text{LJ,eq,disk$\parallel$disk}$ for the disks, which in turn is smaller than~$r_\text{LJ,eq}$ in the fundamental case of a point pair. The very same value of~$57\%$ of the point pair equilibrium spacing has already been mentioned as a side note by Langbein \cite[p.~62]{langbein1972}, albeit without presenting the detailed, comprehensive derivation. In addition to the equilibrium spacing, we can again determine the value and location of the force minimum \begin{align}\label{eq::minimal_force_LJ_cylinders_parallel} \tilde{f}_\text{LJ,cyl$\parallel$cyl,min} &\approx \num{2.11634} \, \rho_1 \rho_2 \sqrt{\frac{2 R_1 R_2}{R_1+R_2}} \, r_\text{LJ,eq}^{\frac{7}{2}} \, \Phi_\text{LJ,eq}\\ g_{\tilde{f}_\text{LJ,cyl$\parallel$cyl,min}} &= \left( - \frac{255}{15} \, \frac{\tilde k_{\text{cyl},12}}{\tilde k_{\text{cyl},6}} \right)^\frac{1}{6} \approx \num{0.70104} \, r_\text{LJ,eq} \approx \num{1.22625} \, g_\text{LJ,eq,cyl$\parallel$cyl}. \end{align} Here, we find that the force minimum, i.\,e., the maximal adhesive force, is slightly shifted towards a smaller separation as compared to the disk-disk interaction. However, expressed in terms of the respective equilibrium spacing~$g_\text{LJ,eq,cyl$\parallel$cyl}$, the value is slightly larger as compared to~$\num{1.18} \, g_\text{LJ,eq,disk$\parallel$disk}$ from~\eqref{eq::gap_force_min_LJ_disks_parallel}. With these results we conclude the derivation and analysis of LJ force laws in the regime of small separations and summarize the most important results in the following table to serve as a quick access reference.
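The cylinder values can likewise be reproduced numerically. The following minimal Python sketch (units such that $\Phi_\text{LJ,eq} = r_\text{LJ,eq} = 1$) assumes that the analytical integration along the infinite cylinder axis contributes the factor $\sqrt{\pi}\,\Gamma(\tfrac{m-1}{2})/\Gamma(\tfrac{m}{2})$, which equals $3\pi/8$ for the vdW exponent $m=6$, on top of the disk-disk prefactor with reduced exponent $m-1$:

```python
from math import gamma, pi, sqrt

def pref(m):
    # cross-section (disk-disk) prefactor from eq. (langbein_Cm_smallsep)
    return (2.0 * pi / (m - 2.0) ** 2 * gamma(m - 3.5) * gamma((m - 1.0) / 2.0)
            / (gamma(m - 2.0) * gamma(m / 2.0 - 1.0)))

def axial(m):
    # factor from integrating (p^2 + s^2)^(-m/2) along the infinite axis:
    # sqrt(pi) * Gamma((m-1)/2) / Gamma(m/2); reduces to 3 pi/8 for m = 6
    return sqrt(pi) * gamma((m - 1.0) / 2.0) / gamma(m / 2.0)

k6, k12 = 2.0, -1.0                              # point-pair LJ, Phi_eq = r_eq = 1
ratio = (axial(12.0) * pref(11.0) * k12) / (axial(6.0) * pref(5.0) * k6)
g_eq   = (-5.0  * ratio) ** (1.0 / 6.0)          # equilibrium spacing
g_fmin = (-17.0 * ratio) ** (1.0 / 6.0)          # location of the force minimum
print(g_eq, g_fmin)
```

This reproduces both $g_\text{LJ,eq,cyl$\parallel$cyl} \approx 0.5717\, r_\text{LJ,eq}$ and $g_{\tilde{f}_\text{LJ,min}} \approx 0.7010\, r_\text{LJ,eq}$, and the $m=6$ product recovers the $\pi^2/24$ prefactor of $\tilde{k}_{\text{cyl},6}$.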
\subsubsection{Summary}\label{sec::derivation_pot_force_ia_LJ_summary} The following table gives an overview of some important quantities characterizing the LJ force laws for point-point, parallel disk-disk, and parallel cylinder-cylinder interaction. \begin{table}[htpb] \begin{center} \begin{tabular}{|c|c|c|c|}\hline &&&\vspace{-1em}\\ & equilibrium spacing & location of force min. & min.~force value\\ & $r_\text{LJ,eq}$ / $g_\text{LJ,eq}$ & $r_{f_\text{LJ,min}}$ / $g_{f_\text{LJ,min}}$ & $f_\text{LJ,min}$ / $\tilde{\tilde{f}}_\text{LJ,min}$ / $\tilde f_\text{LJ,min}$\\ &&&\vspace{-1em}\\\hline &&&\vspace{-1em}\\ point-point & $1 \, [r_\text{LJ,eq}]$ & $\num{1.11} \, [r_\text{LJ,eq}]$ & $\num{2.69} \, \left[\frac{\Phi_\text{LJ,eq}}{r_\text{LJ,eq}}\right]$\\ &&&\vspace{-1em}\\\hline &&&\vspace{-1em}\\ disk$\parallel$disk & $\num{0.65} \, [r_\text{LJ,eq}]$ & $\num{0.77} \, [r_\text{LJ,eq}]$ & $\num{0.90} \, \left[\rho_1 \rho_2 \sqrt{\frac{2 R_1 R_2}{R_1+R_2}} \, r_\text{LJ,eq}^{\frac{5}{2}} \, \Phi_\text{LJ,eq} \right]$\\ &&&\vspace{-1em}\\\hline &&&\vspace{-1em}\\ cylinder$\parallel$cylinder & $0.57 \, [r_\text{LJ,eq}]$ & $0.70 \, [r_\text{LJ,eq}]$ & $\num{2.12} \, \left[\rho_1 \rho_2 \sqrt{\frac{2 R_1 R_2}{R_1+R_2}} \, r_\text{LJ,eq}^{\frac{7}{2}} \, \Phi_\text{LJ,eq} \right]$\\\hline \end{tabular} \end{center} \caption{Comparison of characteristic quantities of LJ force laws for a pair of points, parallel disks and parallel cylinders.} \label{tab::LJ_force_laws_analysis_comparison} \end{table} \section{Introduction} Biopolymer fibers such as actin, collagen, cellulose and DNA, but also glass fibers or carbon nanotubes are ubiquitous examples for slender, deformable structures to be found on the scale of nano- to micrometers. 
On these length scales, molecular interactions such as electrostatic or van der Waals (vdW) forces are of utmost importance for the formation and functionality of the complex fibrous systems they constitute~\cite{French2010,israel2011,parsegian2005}. Biopolymer networks such as the cytoskeleton or the extracellular matrix, muscle fibers, Gecko spatulae or chromosomes are some of the most popular examples. To foster the understanding of such systems, which in turn allows for innovations in several fields from medical treatment to novel synthetic materials, there is an urgent need for powerful simulation tools. Finite element formulations based on the geometrically exact beam theory~\cite{jelenic1999,crisfield1999,Meier2017c} are known to model the transient (elastic) deformation of these slender structures in an accurate and efficient manner. However, no corresponding numerical methods for above mentioned molecular interactions between deformable fibers have been published yet. We thus aim to develop methods that both accurately as well as efficiently describe these molecular phenomena based on the geometrically exact beam theory in order to ultimately solve relevant practical problems on the scale of complex systems consisting of a large number of fibers in arbitrary arrangement. A comprehensive review of the origin, characteristics and mathematical description of intermolecular forces can nowadays be found in (bio)physical textbooks~\cite{israel2011,parsegian2005}. The critical point is to transfer the first principles formulated for the interaction between atoms or single molecules to the interaction between macromolecules such as slender fibers. 
Here, the analytical approaches to be found in textbooks and also in recent contributions~\cite{Ohshima2009,Jaiswal2012,Stedman2014,Maeda2015} from the field of theoretical biophysics are (naturally) restricted to undeformable, rigid bodies with primitive geometries such as spheres, half spaces or, most relevant in our case, cylinders. Some computational approaches can be found in the literature, but rather aim at including more complex phenomena such as retardation and solvent effects in vdW interactions \cite{Dryden2015}, still limited to rigid bodies. All-atom simulation methods like molecular dynamics do not suffer from this restriction, but their computational cost is by orders of magnitude too high to be applied to the relevant, complex biological systems mentioned in the beginning, and these methods are thus currently limited to time scales of nano- to microseconds~\cite{israel2011}. On the other hand, studying the deformation of elastic, slender bodies has a long history in mechanics and today's geometrically exact finite element formulations for shear-deformable (Simo-Reissner) as well as shear-rigid (Kirchhoff-Love) beams have proven to be both highly accurate and efficient~\cite{jelenic1999,crisfield1999,Meier2017c}. Moreover, contact interaction between beams has been considered in a number of publications, e.\,g.~\cite{wriggers1997,litewka2005,durville2010,Kulachenko2012,Chamekh2014,GayNeto2016a,Konyukhov2016,Weeger2017,meier2016,Meier2017a}. However, all these methods are motivated by the macroscopic perspective of non-penetrating solid bodies rather than the microscopic view considering first principles of intermolecular repulsive forces. The combination of elastic deformation of general 3D bodies and intermolecular forces was first considered by Argento et~al.~\cite{Argento1997} for small deformations, by Sauer and Li~\cite{Sauer2007a} for large deformations and finally by Sauer and Wriggers~\cite{Sauer2009} also for three-dimensional problems.
In order to reduce the high computational cost associated with the required high-dimensional numerical integrals, a possible model reduction from body to surface interaction in case of sufficiently short-ranged interactions as e.\,g.~predominant in (adhesive) contact scenarios has already been addressed in these first publications and has been the focus of subsequent publications~\cite{Sauer2013,Fan2015}. However, since these formulations aim to describe the interaction between 3D bodies of arbitrary shape, the inherent complexity of the problem still requires a four-dimensional integral over both surfaces in case of surface interactions and a six-dimensional integral over both volumes for volume interactions, respectively. In contrast, beam theory describes a slender body as a 1D Cosserat continuum, such that a further reduction in the dimensionality and thus computational cost can be achieved. So far, this idea has been applied to describing the interaction between a beam and an infinite half-space in 2D as a model for the adhesion of a Gecko spatula on a rigid surface~\cite{Sauer2009,Sauer2014} and later also for the interaction of a carbon nanotube with a Lennard-Jones wall in 3D~\cite{Schmidt2015}. In both cases, the influence of the rigid half space can be evaluated analytically and formulated as a distributed load on the beam. To the best of the authors' knowledge, no approach for describing molecular interactions between curved 3D beams for arbitrary configurations and large deformations has been proposed yet. Notable previous approaches to similar problems have made simplifying assumptions. Ahmadi and Menon~\cite{Ahmadi2014} study the clumping of fibers due to vdW adhesion by means of an analytical 2D beam method, yet only include vdW interaction between the hemispherical tips based on an analytical expression for the interaction of two spheres. 
A numerical study of the influence of inter-fiber adhesion on the mechanical behavior of 2D fiber networks assumes an effective adhesion energy per unit length of perfectly parallel fiber segments and solves for the unknown contact length in a second, nested minimization algorithm~\cite{Negi2018}. In this article, we propose the first model specifically for molecular interactions between arbitrarily curved and oriented slender fibers undergoing large deformations in 3D. While the general model is not restricted to a specific beam formulation, in the present work it is combined with the geometrically exact beam theory and discretized via the finite element method. This novel approach is based on reduced section-to-section interaction potential (SSIP) laws that describe the resulting interaction potential between a pair of cross-sections as a closed-form analytical expression. Thus, the two-body interaction potential follows from two nested 1D integrals over this SSIP law along both fibers' axes, which are evaluated numerically. In this way, the proposed, so-called SSIP approach significantly reduces the dimensionality of the required numerical integration from six to two, and hence the associated computational cost. As compared to methods for 3D solid bodies or even to all-atom methods, this gain in efficiency opens up new fields of application, e.\,g., the complex biological systems mentioned above. Regarding the practicability of the SSIP approach, it is also important to emphasize that it can be seamlessly integrated into an existing finite element framework for solid mechanics. In particular, it depends neither on a specific beam formulation nor on the applied spatial discretization scheme, and in the context of the present work it has exemplarily been used with geometrically exact Kirchhoff-Love as well as Simo-Reissner type beam finite elements.
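The core idea of the SSIP approach, namely two nested 1D quadrature loops over a closed-form section-to-section law, can be illustrated with a minimal Python sketch. As SSIP law we use the simple large-separation vdW law for circular cross-sections (the point-pair law scaled by constants, here set to one); all parameter values and the straight-fiber geometry are merely illustrative. For two parallel straight fibers, the numerically integrated potential can be compared against the analytical result for a fiber next to an infinite parallel cylinder:

```python
import numpy as np

def ssip_vdw_large_sep(d):
    # large-separation SSIP law for two circular cross-sections at inter-axis
    # distance d (constants k_6 * rho1 rho2 A1 A2 set to 1 for simplicity)
    return d ** -6.0

def pair_potential(x1a, x1b, x2a, x2b, ssip, n_gp=10, n_seg=40):
    """Two nested 1D Gauss loops over the axes of two straight fibers."""
    xi, wi = np.polynomial.legendre.leggauss(n_gp)

    def axis_points(a, b):
        # composite Gauss points and arc-length weights along segment a -> b
        edges = np.linspace(0.0, 1.0, n_seg + 1)
        pts, wts = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            s = 0.5 * (hi - lo) * xi + 0.5 * (hi + lo)
            pts.append(a + np.outer(s, b - a))
            wts.append(0.5 * (hi - lo) * wi * np.linalg.norm(b - a))
        return np.vstack(pts), np.hstack(wts)

    p1, w1 = axis_points(x1a, x1b)
    p2, w2 = axis_points(x2a, x2b)
    d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=-1)
    return np.einsum("i,j,ij->", w1, w2, ssip(d))

# fiber 1 of unit length; fiber 2 parallel at inter-axis distance d = 2 and
# long enough to mimic an infinite cylinder
d = 2.0
Pi = pair_potential(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                    np.array([-50.0, d, 0.0]), np.array([50.0, d, 0.0]),
                    ssip_vdw_large_sep)
Pi_ref = 1.0 * 3.0 * np.pi / 8.0 * d ** -5.0   # analytical: L1 * 3 pi/8 * d^-5
```

The two nested loops correspond to the two remaining dimensions of numerical integration; everything across the cross-sections is already contained in the closed-form SSIP law.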
Likewise, it is independent of the temporal discretization, and we have used it with static and (Lie group) Generalized-Alpha time stepping schemes as well as within a Brownian dynamics framework. For the proposed SSIP laws, which can either be derived analytically or postulated and fitted, e.\,g., to experimental data, we first present the most general form describing the interaction between arbitrarily shaped cross-sections with an inhomogeneous distribution of the elementary interacting points (e.\,g.~atoms or charges). Subsequently, we focus on homogeneous, circular cross-sections and propose specific, ready-to-use SSIP laws for vdW adhesion, steric repulsion and electrostatic interaction. Based on the fundamental distinction between short-range and long-range interactions, we present the required steps and theoretical considerations underlying the analytical derivation of the SSIP laws in a general manner, starting from first principles in the form of a point-pair interaction potential described by a power law with general exponent. Besides the expressions for the total interaction potential, we also present the corresponding virtual work contributions, their finite element discretization and the consistent linearization. Due to the characteristic singularity of molecular interactions in the limit of zero separation, a suitable numerical regularization of the SSIP laws is also proposed. The remainder of this article is structured as follows. \secref{sec::theoretical_foundation_molecular_interactions} briefly summarizes the fundamental concepts and theory of molecular interactions. Along with the fundamentals of the geometrically exact beam theory to be introduced in \secref{sec::fundamentals_beams}, this forms the basis for the novel SSIP approach to be proposed in \secref{sec::method_pot_based_ia}.
This general approach will then be applied to specific types of physical interactions, namely vdW, steric and electrostatic interactions, in \secref{sec::method_application_to_specific_types_of_interactions}. In \secref{sec::FE_discretization_algorithmic_aspects}, we turn to the finite element discretization of the newly developed numerical methods and discuss some important algorithmic aspects such as the regularization of the reduced interaction laws and the algorithmic complexity of the reduced approach. The accuracy of the proposed SSIP laws as well as of the general SSIP approach to beam-to-beam interaction is then validated by means of analytical as well as numerical reference solutions for academic test cases in~\secref{sec::verification_methods}. In a series of numerical examples including steric repulsion, electrostatic or vdW adhesion, the effectiveness and robustness of the novel approach are verified in the remainder of~\secref{sec::numerical_results}. We conclude the article in \secref{sec::summary_outlook} and provide an outlook on promising future enhancements of the novel approach.
\section{Introduction} The study of ``Lie poset algebras'' was initiated by Coll and Gerstenhaber in \textbf{\cite{CG}}, where the deformation theory of such algebras was investigated. The authors define Lie poset algebras as subalgebras of $A_{n-1}=\mathfrak{sl}(n)$ which lie between the subalgebras of upper-triangular and diagonal matrices; we will refer to such Lie subalgebras of $\mathfrak{sl}(n)$ as \textit{type-A Lie poset algebras}. In \textbf{\cite{CG}}, the authors suggest a way in which to extend the notion of Lie poset algebra to the other classical families of Lie algebras. Interestingly, the resulting Lie algebras can be related to the notion of a ``parset'' as defined by Reiner (see Remark~\ref{rem:reiner} and cf. \textbf{\cite{Reiner1,Reiner2}}). Following the suggestion of \textbf{\cite{CG}}, we define posets which encode the matrix forms of such Lie poset algebras and, here, initiate an investigation into their index and spectral theories. \bigskip Formally, the index of a Lie algebra $\mathfrak{g}$ is defined as \[{\rm ind \hspace{.1cm}} \mathfrak{g}=\min_{F\in \mathfrak{g^*}} \dim (\ker (B_F)),\] \noindent where $B_F$ is the skew-symmetric \textit{Kirillov form} defined by $B_F(x,y)=F([x,y])$, for all $x,y\in\mathfrak{g}$. Of particular interest are those Lie algebras which have index zero; such algebras are called \textit{Frobenius}.\footnote{Frobenius algebras are of special interest in deformation and quantum group theory stemming from their connection with the classical Yang-Baxter equation (see \textbf{\cite{G1,G2}}).} A functional $F\in\mathfrak{g}^*$ for which $\dim(\ker(B_F))={\rm ind \hspace{.1cm}}\mathfrak{g}=0$ is likewise called \textit{Frobenius}. Given a Frobenius Lie algebra $\mathfrak{g}$ and a Frobenius functional $F\in\mathfrak{g}^*$, the map $\mathfrak{g}\to\mathfrak{g}^*$ defined by $x\mapsto B_F(x,-)$ is an isomorphism.
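As an aside, the definition of the index can be made concrete with a short computational sketch (our own illustrative code; the algebra used is the standard two-dimensional non-abelian Lie algebra with $[x_1,x_2]=x_2$, not one of the algebras studied in this paper). For each sampled functional $F$ we assemble the matrix of $B_F$ in the chosen basis and take the minimal kernel dimension, which for this example attains the index.

```python
from fractions import Fraction
from itertools import product

# Structure constants of a 2-dimensional Lie algebra with [x1, x2] = x2
# (an illustrative example; all other brackets follow by skew-symmetry).
# bracket[(i, j)][k] is the coefficient of x_{k+1} in [x_{i+1}, x_{j+1}].
n = 2
bracket = {(0, 1): {1: Fraction(1)}}

def kirillov_matrix(F):
    """Matrix of B_F(x_i, x_j) = F([x_i, x_j]), with F given by its values on the basis."""
    B = [[Fraction(0)] * n for _ in range(n)]
    for (i, j), coeffs in bracket.items():
        val = sum(c * F[k] for k, c in coeffs.items())
        B[i][j], B[j][i] = val, -val   # skew-symmetry
    return B

def rank(M):
    """Rank over the rationals via Gaussian elimination."""
    M, r = [row[:] for row in M], 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# ind g = min over F of dim ker(B_F); a small sample of functionals suffices
# to attain the generic (minimal) kernel dimension in this example.
samples = [[Fraction(a), Fraction(b)] for a, b in product(range(3), repeat=2)]
index = min(n - rank(kirillov_matrix(F)) for F in samples)
```

Here any $F$ with $F(x_2)\neq 0$ is Frobenius, so the computed index is $0$.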
The inverse image of $F$ under this isomorphism, denoted $\widehat{F}$, is called a \textit{principal element} of $\mathfrak{g}$ (see \textbf{\cite{Diatta,Prin}}). In \textbf{\cite{Ooms}}, Ooms shows that the eigenvalues (and multiplicities) of $ad(\widehat{F})=[\widehat{F},-]:\mathfrak{g}\to\mathfrak{g}$ do not depend on the choice of principal element $\widehat{F}$. It follows that the spectrum of $ad(\widehat{F})$ is an invariant of $\mathfrak{g}$, which we call the \textit{spectrum} of $\mathfrak{g}$ (see \textbf{\cite{specD, specAB, unbroken}}). Recently, there has been motivation to determine combinatorial index formulas for certain families of Lie algebras. Families for which such formulas have been found include seaweed algebras and type-A Lie poset algebras (see \textbf{\cite{CHM,Coll2,CM,DK,Elash,Joseph,Panyushev1,Panyushev2,Panyushev3}}). In this article, we consider the analogues of type-A Lie poset algebras in the other classical types: $B_{k}=\mathfrak{so}(2k+1)$, $C_{k}=\mathfrak{sp}(2k)$, and $D_{k}=\mathfrak{so}(2k)$; such algebras are called \textit{type-B, C, and D Lie poset algebras}, respectively. We find that these Lie poset algebras are encoded by certain posets whose underlying sets are of the form $\{-n,\hdots,-1,0,1,\hdots,n\}$ in type B and of the form $\{-n,\hdots,-1,1,\hdots,n\}$ in types C and D. Furthermore, we fully develop the index and spectral theories of type-B, C, and D Lie poset algebras whose underlying posets have the property that there are no relations between pairs of positive integers and no relations between pairs of negative integers. In particular, for this important base case, we develop combinatorial index formulas which rely on an associated planar graph -- called a \textit{relation graph} (see Theorem~\ref{thm:indform}). These formulas allow us to fully characterize Frobenius, type B, C, and D Lie poset algebras in this case \textup(see Theorem~\ref{thm:FrobC}\textup). 
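The relation graphs mentioned above admit a simple computational description; the following sketch (our own helper code, with hypothetical function names not from the paper) builds the relation graph of a height-$(0,1)$ poset from its mixed relations $-i\preceq j$ and tests connectivity. Isolated vertex pairs are omitted in this simplified version.

```python
# Illustrative helper: build the relation graph of a height-(0,1) poset from its
# mixed relations -i <= j, with one vertex per pair {-i, i} labeled by i > 0.
def relation_graph(mixed_relations):
    vertices, edges = set(), set()
    for neg, pos in mixed_relations:
        i, j = -neg, pos                      # vertex labels are positive integers
        vertices |= {i, j}
        edges.add((min(i, j), max(i, j)))     # an edge (k, k) encodes a self-loop -k <= k
    return vertices, edges

def is_connected(vertices, edges):
    """Depth-first search over the undirected relation graph."""
    if not vertices:
        return True
    seen, stack = set(), [min(vertices)]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(b if v == a else a for a, b in edges if v in (a, b))
    return seen == vertices

# Height-(0,1), type-C poset on {-3,...,-1,1,...,3} with
# -2 <= 1, 2, 3; -3 <= 2; and -1 <= 2:
rels = [(-2, 1), (-2, 2), (-2, 3), (-3, 2), (-1, 2)]
V, E = relation_graph(rels)
```

For this example the graph has vertices $\{1,2,3\}$, a self-loop at vertex $2$, and is connected.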
This classification leads to the realization that the spectrum of such algebras is \textit{binary}; that is, it consists of an equal number of 0's and 1's (see Theorem~\ref{thm:spec}). The organization of this paper is as follows. Section~\ref{sec:poset} sets the combinatorial definitions and notation needed from the theory of posets. In Sections~\ref{sec:lieposet} and~\ref{sec:BCDpos} we formally introduce type-B, C, and D Lie poset algebras and the posets which encode them. Sections~\ref{sec:indexform} and~\ref{sec:spec} deal with the index and spectral theories of type-B, C, and D Lie poset algebras. Finally, in a short epilogue, we compare the (less complicated) type-A case with the type-B, C, and D cases developed here. \section{Posets}\label{sec:poset} A \textit{finite poset} $(\mathcal{P}, \preceq_{\mathcal{P}})$ consists of a finite set $\mathcal{P}$ together with a binary relation $\preceq_{\mathcal{P}}$ which is reflexive, anti-symmetric, and transitive. When no confusion will arise, we simply denote a poset $(\mathcal{P}, \preceq_{\mathcal{P}})$ by $\mathcal{P}$, and $\preceq_{\mathcal{P}}$ by $\preceq$. Throughout, we let $\le$ denote the natural ordering on $\mathbb{Z}$. Two posets $\mathcal{P}$ and $\mathcal{Q}$ are \textit{isomorphic} if there exists a bijection $f:\mathcal{P}\to\mathcal{Q}$ such that $x\preceq_{\mathcal{P}}y$ if and only if $f(x)\preceq_{\mathcal{Q}}f(y)$. \begin{remark} If $|\mathcal{P}|=n$, then there exists a poset $(\{1,\hdots,n\},\le')$ and $f:(\mathcal{P},\preceq_{\mathcal{P}})\to(\{1,\hdots,n\},\le')$ such that $\le'\subset\le$ and $f$ is an isomorphism. Thus, given a poset $(\mathcal{P},\preceq_{\mathcal{P}})$ with $|\mathcal{P}|=n$, unless stated otherwise, we assume that $\mathcal{P}=\{1,\hdots,n\}$ and $\preceq_{\mathcal{P}}\subset\le$. \end{remark} If $x,y\in\mathcal{P}$ are such that $x\preceq y$ and there does not exist $z\in \mathcal{P}$ satisfying $z\neq x,y$ and $x\preceq z\preceq y$, then $x\preceq y$ is a \textit{covering relation}.
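As a small computational aside (our own code; the helper names are hypothetical), covering relations can be extracted from a generating set of relations by forming the reflexive-transitive closure and discarding relations with an intermediate element:

```python
from itertools import product

def transitive_closure(elements, relation):
    """Reflexive-transitive closure of a generating set of relations."""
    rel = set(relation) | {(x, x) for x in elements}
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(rel), repeat=2):
            if b == c and (a, d) not in rel:
                rel.add((a, d))
                changed = True
    return rel

def covering_relations(elements, relation):
    rel = transitive_closure(elements, relation)
    covers = set()
    for x, y in rel:
        # x <= y is covering iff x != y and no z outside {x, y} lies strictly between
        if x != y and not any(
            z not in (x, y) and (x, z) in rel and (z, y) in rel for z in elements
        ):
            covers.add((x, y))
    return covers

# P = {1, 2, 3, 4} with 1 <= 2 <= 3, 4
covers = covering_relations([1, 2, 3, 4], {(1, 2), (2, 3), (2, 4)})
```

For this poset, the covering relations $(1,2)$, $(2,3)$, and $(2,4)$ are exactly the edges of its Hasse diagram.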
Covering relations are used to define a visual representation of $\mathcal{P}$ called the \textit{Hasse diagram} -- a graph whose vertices correspond to elements of $\mathcal{P}$ and whose edges correspond to covering relations (see, for example, Figure~\ref{fig:poset}). \begin{Ex}\label{ex:poset} Let $\mathcal{P}=\{1,2,3,4\}$ with $1\preceq 2\preceq 3,4$. In Figure~\ref{fig:poset} we illustrate the Hasse diagram of $\mathcal{P}$. \begin{figure}[H] $$\begin{tikzpicture} \node (1) at (0, 0) [circle, draw = black, fill = black, inner sep = 0.5mm, label=left:{1}]{}; \node (2) at (0, 1)[circle, draw = black, fill = black, inner sep = 0.5mm, label=left:{2}] {}; \node (3) at (-0.5, 2) [circle, draw = black, fill = black, inner sep = 0.5mm, label=left:{3}] {}; \node (4) at (0.5, 2) [circle, draw = black, fill = black, inner sep = 0.5mm, label=right:{4}] {}; \draw (1)--(2); \draw (2)--(3); \draw (2)--(4); \addvmargin{1mm} \end{tikzpicture}$$ \caption{Hasse diagram of a poset}\label{fig:poset} \end{figure} \end{Ex} Given a subset $S\subset\mathcal{P}$, the \textit{induced subposet generated by $S$} is the poset $\mathcal{P}_S$ on $S$, where, for $x,y\in S$, $x\preceq_{\mathcal{P}_S}y$ if and only if $x\preceq_{\mathcal{P}}y$. A totally ordered subset $S\subset\mathcal{P}$ is called a \textit{chain}. Finally, we define the dual of $\mathcal{P}$, denoted $\mathcal{P}^*$, to be the poset on the same set as $\mathcal{P}$ with $j\preceq_{\mathcal{P}^*}i$ if and only if $i\preceq_{\mathcal{P}}j$. \section{Lie poset algebras}\label{sec:lieposet} Let $\mathcal{P}$ be a finite poset and \textbf{k} be an algebraically closed field of characteristic zero, which we may take to be the complex numbers. The (associative) \textit{incidence algebra} $A(\mathcal{P})=A(\mathcal{P}, \textbf{k})$ is the span over $\textbf{k}$ of elements $e_{i,j}$, for $i,j\in\mathcal{P}$ satisfying $i\preceq j$, with product given by setting $e_{i,j}e_{k,l}=e_{i,l}$ if $j=k$ and $0$ otherwise.
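The product rule just stated can be checked directly against multiplication of standard matrix units; the following sketch (our own illustrative code) verifies $e_{1,2}e_{2,3}=e_{1,3}$, $e_{1,2}e_{3,4}=0$, and the induced commutator product:

```python
def E(i, j, n):
    """n x n matrix unit with a single 1 in row i, column j (1-based indices)."""
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(n)] for r in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

def comm(A, B):
    """Commutator [A, B] = AB - BA, the product defining the Lie poset algebra."""
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[r][c] - BA[r][c] for c in range(len(A))] for r in range(len(A))]

n = 4
prod_match = matmul(E(1, 2, n), E(2, 3, n))      # middle indices match: e_{1,2}e_{2,3}
prod_mismatch = matmul(E(1, 2, n), E(3, 4, n))   # middle indices differ: e_{1,2}e_{3,4}
lie_bracket = comm(E(1, 2, n), E(2, 3, n))       # [e_{1,2}, e_{2,3}]
```

Since $e_{2,3}e_{1,2}=0$, the bracket $[e_{1,2},e_{2,3}]$ equals $e_{1,3}$ as well.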
The \textit{trace} of an element $\sum c_{i,j}e_{i,j}$ is $\sum c_{i,i}.$ We can equip $A(\mathcal{P})$ with the commutator product $[a,b]=ab-ba$, where juxtaposition denotes the product in $A(\mathcal{P})$, to produce the \textit{Lie poset algebra} $\mathfrak{g}(\mathcal{P})=\mathfrak{g}(\mathcal{P}, \textbf{k})$. If $|\mathcal{P}|=n$, then both $A(\mathcal{P})$ and $\mathfrak{g}(\mathcal{P})$ may be regarded as subalgebras of the algebra of $n \times n$ upper-triangular matrices over $\textbf{k}$. Such a matrix representation is realized by replacing each basis element $e_{i,j}$ by the $n\times n$ matrix $E_{i,j}$ containing a 1 in the $i,j$-entry and 0's elsewhere. The product between elements $e_{i,j}$ is then replaced by matrix multiplication between the $E_{i,j}$. \begin{Ex}\label{ex:posetmat} Let $\mathcal{P}$ be the poset of Example~\ref{ex:poset}. The matrix form of elements in $\mathfrak{g}(\mathcal{P})$ is illustrated in Figure~\ref{fig:tA}, where the $*$'s denote potential non-zero entries. \begin{figure}[H] $$\kbordermatrix{ & 1 & 2 & 3 & 4 \\ 1 & * & * & * & * \\ 2 & 0 & * & * & * \\ 3 & 0 & 0 & * & 0 \\ 4 & 0 & 0 & 0 & * \\ }$$ \caption{Matrix form of $\mathfrak{g}(\mathcal{P})$, for $\mathcal{P}=\{1,2,3,4\}$ with $1\preceq2\preceq3,4$}\label{fig:tA} \end{figure} \end{Ex} \begin{remark}\label{rem:lpa1} Let $\mathfrak{b}$ be the Borel subalgebra of $\mathfrak{gl}(n)$ consisting of all $n\times n$ upper-triangular matrices and $\mathfrak{h}$ be its Cartan subalgebra of diagonal matrices. Any subalgebra $\mathfrak{g}$ lying between $\mathfrak{h}$ and $\mathfrak{b}$ is then a Lie poset algebra; for $\mathfrak{g}$ is then the span over $\textbf{k}$ of $\mathfrak{h}$ and those $E_{i,j}$ which it contains, and there is a partial order on $\mathcal{P}=\{1,\dots,n\}$ compatible with the linear order by setting $i\preceq j$ whenever $E_{i,j}\in \mathfrak{g}$. 
\end{remark} \begin{remark}\label{rem:lpa2} Restricting $\mathfrak{g}(\mathcal{P})$ to trace-zero matrices results in a subalgebra of $A_{n-1}=\mathfrak{sl}(n)$, referred to as a type-A Lie poset algebra \textup(see \textup{\textbf{\cite{CG,CM,binary}}}\textup). As stated in the introduction, Coll and Mayers \textup{\textbf{\cite{CM}}} initiated an investigation into the index and spectral theories of type-A Lie poset algebras. \end{remark} Considering Remarks~\ref{rem:lpa1} and~\ref{rem:lpa2}, we make the following definition. \begin{definition} Let $\mathfrak{g}$ be one of the classical simple Lie algebras. Let $\mathfrak{b}\subset\mathfrak{g}$ be the Borel subalgebra consisting of all upper-triangular matrices and $\mathfrak{h}$ be its Cartan subalgebra of diagonal matrices. A Lie subalgebra $\mathfrak{p}\subset\mathfrak{g}$ satisfying $\mathfrak{h}\subset\mathfrak{p}\subset\mathfrak{b}$ is called a Lie poset subalgebra of $\mathfrak{g}$. If $\mathfrak{g}$ is $A_{n-1}=\mathfrak{sl}(n)$, $B_n=\mathfrak{so}(2n+1)$, $C_n=\mathfrak{sp}(2n)$, or $D_n=\mathfrak{so}(2n)$, for $n\in\mathbb{N}$, then $\mathfrak{p}$ is called a type-A, type-B, type-C, or type-D Lie poset algebra, respectively. \end{definition} \begin{remark}\label{rem:reiner} Let $\mathcal{P}$ be a poset with underlying set $\{1,\hdots,n\}$ which does not necessarily satisfy $\preceq_{\mathcal{P}}\subset\le$. In \textup{\textbf{\cite{Reiner1}}}, Reiner describes a method for identifying $\mathcal{P}$ with a subset of the root system corresponding to $\mathfrak{sl}(n)$. Reiner then generalizes this construction to produce what he calls a ``parset.'' If $\Phi$ is a root system, then a parset \textup(partial root system\textup) is defined to be a subset $P\subset\Phi$ such that \begin{itemize} \item $\alpha\in P$ implies $-\alpha\notin P$; and \item if $\alpha_1,\alpha_2\in P$ and $c_1\alpha_1+c_2\alpha_2\in\Phi$ for some $c_1,c_2>0$, then $c_1\alpha_1+c_2\alpha_2\in P$.
\end{itemize} One can attach to each type-A, B, C, and D Lie poset algebra an appropriately typed parset by pairing each Lie poset algebra with the parset generated by its roots. Such a correspondence is many-to-one, as can be seen by considering the type-B Lie poset algebras defined by the bases: $$\{E_{1,1}-E_{5,5},E_{2,2}-E_{4,4},E_{1,2}-E_{4,5},E_{1,4}-E_{2,5}\}$$ and $$\{E_{1,1}-E_{5,5},E_{2,2}-E_{4,4},E_{1,2}-E_{4,5},E_{1,4}-E_{2,5},E_{1,3}-E_{3,5}\},$$ which correspond to the same parset. \end{remark} \section{Posets of types B, C, and D}\label{sec:BCDpos} In this section, we provide definitions for posets of types B, C, and D (cf. \textbf{\cite{CM}}). These posets encode matrix forms that define Lie poset algebras of types B, C, and D. \begin{remark} Recall that the subalgebra of upper-triangular matrices of \begin{itemize} \item $\mathfrak{sp}(2n)$ consists of $2n\times 2n$ matrices of the form given in Figure~\ref{fig:matform} with $N=\widehat{N}$, \item $\mathfrak{so}(2n)$ consists of $2n\times 2n$ matrices of the form given in Figure~\ref{fig:matform} with $N=-\widehat{N}$, and \item $\mathfrak{so}(2n+1)$ consists of $(2n+1)\times(2n+1)$ matrices of the form given in Figure~\ref{fig:matform} with $N=-\widehat{N}$ and a 0 on the diagonal separating $M$ and $-\widehat{M}$, \end{itemize} where $\widehat{N}$ denotes the transpose of $N$ with respect to the antidiagonal. \begin{figure}[H] $$\begin{tikzpicture} \matrix [matrix of math nodes,left delimiter={(},right delimiter={)}] { M & N \\ 0 & -\widehat{M} \\ }; \end{tikzpicture}$$ \caption{Matrix form}\label{fig:matform} \end{figure} \end{remark} \begin{remark}\label{rem:utbasis} Throughout the remainder of this article, unless stated otherwise, we assume that the rows and columns of a $2n\times 2n$ \textup(resp., $(2n+1)\times(2n+1)$\textup) matrix are labeled by $\{-n,\hdots,-1,1,\hdots,n\}$ \textup(resp., $\{-n,\hdots,-1,0,1,\hdots,n\}$\textup).
\end{remark} \begin{theorem}\label{thm:utbasisC} A basis for a \begin{itemize} \item type-C Lie poset algebra can be taken which consists solely of elements of the form $E_{-i,-i}-E_{i,i}$, for all $i\in [n]$, $E_{-i,-j}-E_{j,i}$, for $i,j\in [n]$, $E_{-i,j}+E_{-j,i}$, for $i,j\in [n]$, and $E_{-i,i}$, for $i\in [n]$. \item type-D Lie poset algebra can be taken which consists solely of elements of the form $E_{-i,-i}-E_{i,i}$, for all $i\in [n]$, $E_{-i,-j}-E_{j,i}$, for $i,j\in [n]$, and $E_{-i,j}-E_{-j,i}$, for $i,j\in [n]$. \item type-B Lie poset algebra can be taken which consists solely of elements of the form $E_{-i,-i}-E_{i,i}$, for all $i\in [n]$, $E_{-i,-j}-E_{j,i}$, for $i,j\in [n]$, $E_{-i,j}-E_{-j,i}$, for $i,j\in [n]$, and $E_{-j,0}-E_{0,j}$, for $j\in[n]$. \end{itemize} \end{theorem} \begin{proof} We prove the result for type-C Lie poset algebras as the type-B and D cases follow similarly. Let $\mathfrak{g}\subset\mathfrak{sp}(2n)$ be a type-C Lie poset algebra and $$\mathscr{B}_n=\{E_{-i,-j}-E_{j,i}~|~i,j\in [n]\}\cup\{E_{-i,j}+E_{-j,i}~|~i,j\in [n]\}\cup\{E_{-i,i}~|~i\in [n]\}.$$ We claim that if $x\in \mathscr{B}_n$ occurs as a summand with nonzero coefficient in an element of $\mathfrak{g}$, then $x\in\mathfrak{g}$. Since $\mathfrak{g}$ is a type-C Lie poset algebra, $\mathfrak{g}$ contains the Cartan subalgebra $\mathfrak{h}\subset\mathfrak{sp}(2n)$ of diagonal matrices; that is, $E_{-i,-i}-E_{i,i}\in\mathfrak{g}$, for all $i\in [n]$. Thus, letting $\mathfrak{n}\subset\mathfrak{sp}(2n)$ denote the subalgebra of strictly upper-triangular matrices, it is sufficient to show that if $a=\sum_{k=1}^rc_kx_k\in\mathfrak{g}\cap\mathfrak{n}$, where $x_k\in \mathscr{B}_n$, for $k\in[r]$, then $x_k\in\mathfrak{g}\cap\mathfrak{n}$, for some $k\in[r]$. If not, suppose that the given $a\in\mathfrak{g}$ has minimal $r$ such that no summand $x_k$ is contained in $\mathfrak{g}\cap\mathfrak{n}$; surely, $r>2$. 
If there exists $x_k$, a summand of $a$, such that $x_k=E_{-i,i}$, for $i\in[n]$, set $d=E_{-i,-i}-E_{i,i}$; otherwise, there exists $x_k$ such that $x_k=E_{-j,-i}-E_{i,j}$ or $E_{-j,i}+E_{-i,j}$, in which case set $d=E_{i,i}-E_{-i,-i}+E_{-j,-j}-E_{j,j}$ or $d=E_{-i,-i}-E_{i,i}+E_{-j,-j}-E_{j,j}$, respectively. In either case, $[d,a]$ is not a multiple of $a$ but is a linear combination of the same summands $x_k$, for $k\in[r]$. To see this, note that $[d,x_k]=2x_k$, while $[d,x_l]=d_lx_l$ with $d_l\in\{-1,0,1\}$, for $l\neq k$. Thus, there is a linear combination of $a$ and $[d,a]$ which is nonzero and contains no more than $r-1$ of the summands $x_k$, for $k\in[r]$; one of them is consequently already in $\mathfrak{g}\cap\mathfrak{n}$, a contradiction. Thus, since $\{E_{-i,-i}-E_{i,i}~|~i\in[n]\}\cup\mathscr{B}_n$ forms a basis of $\mathfrak{sp}(2n)$, the set $\{E_{-i,-i}-E_{i,i}~|~i\in[n]\}$ can be extended to a basis of $\mathfrak{g}$ with the desired form. \end{proof} \begin{definition}\label{def:BCDposet} A type-C poset is a poset $\mathcal{P}=\{-n,\hdots,-1,1,\hdots, n\}$ such that \begin{enumerate} \item if $i\preceq_{\mathcal{P}}j$, then $i\le j$; and \item if $i\neq -j$, then $i\preceq_{\mathcal{P}}j$ if and only if $-j\preceq_{\mathcal{P}}-i$. \end{enumerate} A type-D poset is a poset $\mathcal{P}=\{-n,\hdots,-1,1,\hdots, n\}$ satisfying 1 and 2 above as well as \begin{enumerate} \setcounter{enumi}{2} \item $i$ does not cover $-i$, for $i\in \{1,\hdots, n\}$. \end{enumerate} A type-B poset is a poset $\mathcal{P}=\{-n,\hdots,-1,0,1,\hdots, n\}$ satisfying 1 through 3 above. \end{definition} \begin{Ex}\label{ex:CHasse} In Figure~\ref{fig:hasse}, we illustrate the Hasse diagram of the type-C \textup(and D\textup) poset $\mathcal{P}=\{-3,-2,-1,1,2,3\}$ with $-2\preceq1,3$; $-3\preceq 2$; and $-1\preceq 2$.
Note that adding 0 to $\mathcal{P}$ and a vertex labeled 0 to the Hasse diagram of Figure~\ref{fig:hasse} results in a type-B poset and its corresponding Hasse diagram. \begin{figure}[H] $$\begin{tikzpicture}[scale = 0.65] \node (1) at (0, 0) [circle, draw = black, fill=black, inner sep = 0.5mm, label=below:{-3}] {}; \node (2) at (1,0) [circle, draw = black, fill=black, inner sep = 0.5mm, label=below:{-2}] {}; \node (3) at (2, 0) [circle, draw = black, fill=black, inner sep = 0.5mm, label=below:{-1}] {}; \node (4) at (0, 1) [circle, draw = black, fill=black, inner sep = 0.5mm, label=above:{3}] {}; \node (5) at (1,1) [circle, draw = black, fill=black, inner sep = 0.5mm, label=above:{2}] {}; \node (6) at (2, 1) [circle, draw = black, fill=black, inner sep = 0.5mm, label=above:{1}] {}; \draw (1)--(5); \draw (4)--(2); \draw (5)--(3); \draw (2)--(6); \end{tikzpicture}$$ \caption{Hasse diagram of a type-C poset}\label{fig:hasse} \end{figure} \end{Ex} \begin{theorem} Type-C \textup(resp., B or D\textup) posets $\mathcal{P}$ are in bijective correspondence with type-C \textup(resp., B or D\textup) Lie poset algebras $\mathfrak{p}$ as follows: \begin{itemize} \item $-i,i\in \mathcal{P}$ if and only if $E_{-i,-i}-E_{i,i}\in \mathfrak{p}$; \item $-i\preceq_{\mathcal{P}}-j$ and $j\preceq_{\mathcal{P}}i$ if and only if $E_{-i,-j}-E_{j,i}\in\mathfrak{p}$; \item $-i\preceq_{\mathcal{P}}j$ and $-j\preceq_{\mathcal{P}}i$ if and only if $E_{-i,j}+E_{-j,i}\in\mathfrak{p}$ \textup(resp., $E_{-i,j}-E_{-j,i}\in\mathfrak{p}$\textup); \end{itemize} and only in type-C \begin{itemize} \item $-i\preceq_{\mathcal{P}}i$ if and only if $E_{-i,i}\in\mathfrak{p}$. \end{itemize} \end{theorem} \begin{proof} We prove the result for type-C posets and Lie poset algebras. The proofs for the types B and D cases are similar, only requiring minor modifications. Let $\mathcal{P}$ be a type-C poset.
The collection of matrices generated by the elements in $\mathfrak{p}$ clearly consists of upper-triangular matrices in $\mathfrak{sp}(|\mathcal{P}|)$ and includes the Cartan subalgebra of diagonal matrices. Furthermore, transitivity in $\mathcal{P}$ guarantees closure of $\mathfrak{p}$ under the Lie bracket. Thus, $\mathfrak{p}$ is a type-C Lie poset algebra. Conversely, if $\mathfrak{p}$ is a type-C Lie poset algebra, then taking the basis guaranteed by Theorem~\ref{thm:utbasisC}, we may use the correspondence outlined in the statement of the current theorem to form a set $\mathcal{P}$ together with a relation $\preceq_{\mathcal{P}}$ between its elements. The relations generated by the given basis elements clearly force $\preceq_{\mathcal{P}}$ to satisfy all properties required of a type-C poset except transitivity -- but transitivity is equivalent to the closure of $\mathfrak{p}$ under the Lie bracket. The result follows. \end{proof} \begin{remark} Note that as in the type-A case, type-C posets $\mathcal{P}$ determine the matrix form of the corresponding type-C Lie poset algebra by identifying which entries of a $|\mathcal{P}|\times|\mathcal{P}|$ matrix can be non-zero. In particular, the $i,j$-entry can be non-zero if and only if $i\preceq_{\mathcal{P}}j$. The same is almost true in types B and D, except one ignores relations of the form $-i\preceq_{\mathcal{P}} i$. \end{remark} \begin{Ex}\label{ex:typeBCD} Let $\mathcal{P}$ be the poset of Example~\ref{ex:CHasse}. The matrix form encoded by $\mathcal{P}$ and defining the corresponding type-C \textup(and D\textup) Lie poset algebra is illustrated in Figure~\ref{fig:tBCD}, where $*$'s denote potential non-zero entries.
\begin{figure}[H] $$\kbordermatrix{ & -3 & -2 & -1 & 1 & 2 & 3 \\ -3 & * & 0 & 0 & * & * & 0 \\ -2 & 0 & * & 0 & * & 0 & * \\ -1 & 0 & 0 & * & 0 & * & * \\ 1 & 0 & 0 & 0 & * & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 & * & 0 \\ 3 & 0 & 0 & 0 & 0 & 0 & * \\ }$$ \caption{Matrix form for $\mathcal{P}=\{-3,-2,-1,1,2,3\}$ with $-2\preceq1,3$; $-3\preceq 2$; and $-1\preceq 2$}\label{fig:tBCD} \end{figure} \end{Ex} \begin{remark} Given a type-C poset $\mathcal{P}$, we denote the corresponding type-C Lie poset algebra by $\mathfrak{g}_C(\mathcal{P})$; furthermore, we define the following basis for $\mathfrak{g}_C(\mathcal{P})$: \begin{align} \mathscr{B}_C(\mathcal{P})=\{E_{-i,-i}-E_{i,i}~|~-i,i\in\mathcal{P}\}&\cup\{E_{-i,-j}-E_{j,i}~|~-i,-j,i,j\in\mathcal{P},-i\preceq -j,j\preceq i\} \nonumber \\ &\cup\{E_{-i,j}+E_{-j,i}~|~-i,-j,i,j\in\mathcal{P},-j\preceq i,-i\preceq j\} \nonumber \\ &\cup\{E_{-i,i}~|~-i,i\in\mathcal{P},-i\preceq i\}. \nonumber \end{align} Similarly, given a type-D \textup(resp., B\textup) poset $\mathcal{P}$ we denote the corresponding type-D \textup(resp., B\textup) Lie poset algebra by $\mathfrak{g}_D(\mathcal{P})$ \textup(resp., $\mathfrak{g}_B(\mathcal{P})$\textup) and define the basis $\mathscr{B}_D(\mathcal{P})$ \textup(resp., $\mathscr{B}_B(\mathcal{P})$\textup) as follows: \begin{align} \mathscr{B}_D(\mathcal{P})=\{E_{-i,-i}-E_{i,i}~|~-i,i\in\mathcal{P}\}&\cup\{E_{-i,-j}-E_{j,i}~|~-i,-j,i,j\in\mathcal{P},-i\preceq -j,j\preceq i\} \nonumber \\ &\cup\{E_{-i,j}-E_{-j,i}~|~-i,-j,i,j\in\mathcal{P},-j\preceq i,-i\preceq j,j<i\}. \nonumber \end{align} \end{remark} \begin{theorem}\label{thm:onlyC} If $\mathcal{P}$ is a type-D poset such that $-i\npreceq i$, for all $i\in \mathcal{P}$, then $\mathfrak{g}_D(\mathcal{P})$ is isomorphic to $\mathfrak{g}_C(\mathcal{P})$. 
\end{theorem} \begin{proof} The result follows by comparing the structure constants of $\mathfrak{g}_D(\mathcal{P})$ and $\mathfrak{g}_C(\mathcal{P})$ corresponding to the bases $\mathscr{B}_D(\mathcal{P})$ and $\mathscr{B}_C(\mathcal{P})$, respectively. \end{proof} \begin{theorem}\label{thm:BD} If $\mathcal{P}$ is a type-B poset for which 0 is not related to any other element of $\mathcal{P}$ and $\mathcal{P}_0=\mathcal{P}_{\mathcal{P}\backslash\{0\}}$, then $\mathfrak{g}_B(\mathcal{P})$ is isomorphic to $\mathfrak{g}_C(\mathcal{P}_0)$. \end{theorem} \begin{proof} In this case $\mathscr{B}_D(\mathcal{P}_0)$ forms a basis for both $\mathfrak{g}_B(\mathcal{P})$ and $\mathfrak{g}_D(\mathcal{P}_0)$. Applying Theorem~\ref{thm:onlyC} establishes the result. \end{proof} We now fix some further combinatorial notation for posets of types B, C, and D. \bigskip Given a type-B, C, or D poset $\mathcal{P}$, let $\mathcal{P}^+=\mathcal{P}_{\mathcal{P}\cap\mathbb{Z}_{>0}}$ and $\mathcal{P}^-=\mathcal{P}_{\mathcal{P}\cap\mathbb{Z}_{<0}}$; that is, $\mathcal{P}^+$ (resp., $\mathcal{P}^-$) is the poset induced by the positive (resp., negative) elements of $\mathcal{P}$. \begin{remark} By property 2 of Definition~\ref{def:BCDposet}, we have that $\mathcal{P}^+$ is isomorphic to $(\mathcal{P}^-)^*$. \end{remark} \noindent Let $Rel_{\pm}(\mathcal{P})$ denote the set of relations $x\preceq_{\mathcal{P}} y$ such that $x\in\mathcal{P}^-$ and $y\in\mathcal{P}^+$. We call $\mathcal{P}$ \textit{separable} if $Rel_{\pm}(\mathcal{P})=\emptyset$, and \textit{non-separable} otherwise. We say that $\mathcal{P}$ is of \textit{height} $(i,j)$ if $i$ (resp., $j$) is one less than the largest cardinality of a chain in $\mathcal{P}^+$ (resp., $\mathcal{P}$). \begin{Ex} If $\mathcal{P}$ is the poset of Example~\ref{ex:CHasse}, then $\mathcal{P}^+=\{1,2,3\}$ and $\mathcal{P}^-=\{-1,-2,-3\}$; both induced posets have no relations.
Further, since $\mathcal{P}^+$ has chains of cardinality at most one, while $\mathcal{P}$ has chains of cardinality at most two, $\mathcal{P}$ is of height $(0,1)$. \end{Ex} To end this section, we introduce a condensed graphical representation for height-$(0,1)$, type-B, C, or D posets which will be used in the following section. \begin{definition}\label{def:RG} Given a height-$(0,1)$, type-B, C, or D poset $\mathcal{P}$, we define the relation graph $RG(\mathcal{P})$ as follows: \begin{itemize} \item each pair of elements $-i,i\in\mathcal{P}$ is represented by a single vertex in $RG(\mathcal{P})$ labeled by $i\in\mathcal{P}^+$ \textup(omitting the vertex representing 0 in type B\textup); \item if $-i\preceq j$ in $\mathcal{P}$, then there is an edge connecting vertex $i$ and vertex $j$ in $RG(\mathcal{P})$. \end{itemize} If $RG(\mathcal{P})$ is connected, then $\mathcal{P}$ is called connected. \end{definition} \begin{remark} Note that $RG(\mathcal{P})$ is well-defined for type-B posets $\mathcal{P}$ since if $\mathcal{P}$ is of height-$(0,1)$, then 0 cannot be related to any other element of $\mathcal{P}$. Further, such relation graphs are equivalent to ``signed digraphs,'' as defined by Reiner \textup(see \textup{\textbf{\cite{Reiner1,Reiner2}}}\textup), with the signs removed. \end{remark} \begin{remark} If $-i\preceq i$ in $\mathcal{P}$, then vertex $i$ defines a self-loop in $RG(\mathcal{P})$. Note that $RG(\mathcal{P})$ can only contain self-loops if $\mathcal{P}$ is a type-C poset. \end{remark} \begin{remark} Note that normally a poset is called connected if its corresponding Hasse diagram is connected. For the purposes of this paper, though, the notion of connected given in Definition~\ref{def:RG} for type-B, C, and D posets is more useful \textup(see Theorem~\ref{thm:disjoint}\textup).
\end{remark} \begin{Ex} In Figure~\ref{fig:relationgr}, we illustrate the \textup(a\textup) Hasse diagram and \textup(b\textup) relation graph corresponding to the height-$(0,1)$, type-C poset $\mathcal{P}=\{-3,-2,-1,1,2,3\}$ with $-2\preceq1,2,3$; $-3\preceq 2$; and $-1\preceq 2$. \begin{figure}[H] $$\begin{tikzpicture}[scale = 0.65] \node (1) at (0, 0) [circle, draw = black, fill=black, inner sep = 0.5mm, label=below:{-3}] {}; \node (2) at (1,0) [circle, draw = black, fill=black, inner sep = 0.5mm, label=below:{-2}] {}; \node (3) at (2, 0) [circle, draw = black, fill=black, inner sep = 0.5mm, label=below:{-1}] {}; \node (4) at (0, 1) [circle, draw = black, fill=black, inner sep = 0.5mm, label=above:{3}] {}; \node (5) at (1,1) [circle, draw = black, fill=black, inner sep = 0.5mm, label=above:{2}] {}; \node (6) at (2, 1) [circle, draw = black, fill=black, inner sep = 0.5mm, label=above:{1}] {}; \node (7) at (1, -1.5) {(a)}; \draw (5)--(2); \draw (1)--(5); \draw (4)--(2); \draw (5)--(3); \draw (2)--(6); \end{tikzpicture}\quad\quad\begin{tikzpicture}[scale = 0.65] \node (1) at (0, 1.25) [circle, draw = black,fill=black, inner sep = 0.5mm, label=below:{3}] {}; \node (2) at (1,1.25) [circle, draw = black,fill=black, inner sep = 0.5mm, label=below:{2}] {}; \node (3) at (2, 1.25) [circle, draw = black,fill=black, inner sep = 0.5mm, label=below:{1}] {}; \node (7) at (1, -0.8) {(b)}; \draw (1)--(2); \draw (2)--(3); \draw (1,1.25) .. controls (0.5,2) and (1.5,2) .. (1,1.25); \end{tikzpicture}$$ \caption{(a) Hasse diagram and (b) relation graph of type-C poset}\label{fig:relationgr} \end{figure} \end{Ex} \section{Index}\label{sec:indexform} In this section, we develop combinatorial formulas for the index of type-B, C, and D Lie poset algebras corresponding to height-$(0,1)$, type-B, C, and D posets, respectively. It will be convenient to use an alternative characterization of the index. Let $\mathfrak{g}$ be an arbitrary Lie algebra with basis $\{x_1,...,x_n\}$.
The index of $\mathfrak{g}$ can be expressed using the \textit{commutator matrix}, $([x_i,x_j])_{1\le i, j\le n}$, over the quotient field $R(\mathfrak{g})$ of the symmetric algebra $Sym(\mathfrak{g})$ as follows (see \textbf{\cite{D}}). \begin{theorem}\label{thm:commat} The index of $\mathfrak{g}$ is given by $${\rm ind \hspace{.1cm}} \mathfrak{g}= n-rank_{R(\mathfrak{g})}([x_i,x_j])_{1\le i, j\le n}.$$ \end{theorem} \begin{Ex} Consider $\mathfrak{g}_C(\mathcal{P})$, for $\mathcal{P}=\{-1,1\}$ with $-1\preceq 1$. A basis for $\mathfrak{g}_C(\mathcal{P})$ is given by $$\mathscr{B}_C(\mathcal{P})=\{x_1=E_{-1,-1}-E_{1,1},\text{ }x_2=E_{-1,1}\},$$ where $[x_1,x_2]=2x_2$. The commutator matrix $([x_i,x_j])_{1\le i, j\le 2}$ is illustrated in Figure~\ref{ex:commatsl2}. Since the rank of this matrix is two, it follows from Theorem~\ref{thm:commat} that $\mathfrak{g}_C(\mathcal{P})$ is Frobenius. \begin{figure}[H] $$\begin{bmatrix} 0 & 2x_2 \\ -2x_2 & 0 \end{bmatrix}$$ \caption{Commutator matrix}\label{ex:commatsl2} \end{figure} \end{Ex} \begin{remark} To ease notation, row and column labels of commutator matrices will be bolded and matrix entries will be unbolded. Furthermore, we will refer to the row corresponding to $\mathbf{x}$ in a commutator matrix -- and by a slight abuse of notation, in any equivalent matrix -- as row $\mathbf{x}$. \end{remark} \begin{remark} For ease of discourse, all results of this section will be stated in terms of type-C Lie poset algebras. Considering Theorems~\ref{thm:onlyC} and~\ref{thm:BD}, all results hold in the type-B and D cases as well.
\end{remark} Throughout this section, given a type-C poset $\mathcal{P}$, we set $$\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))=([x_i,x_j])_{1\le i, j\le n}\text{, where }\{x_1,\hdots,x_n\}=\mathscr{B}_C(\mathcal{P}).$$ \begin{theorem}\label{thm:sep} If $\mathcal{P}$ is a separable, type-C poset, then $${\rm ind \hspace{.1cm}}\mathfrak{g}_C(\mathcal{P})={\rm ind \hspace{.1cm}}\mathfrak{g}(\mathcal{P}^+)={\rm ind \hspace{.1cm}}\mathfrak{g}_A(\mathcal{P}^+)+1.$$ \end{theorem} \begin{proof} Note that $$\{x_1,\hdots,x_n\}=\{E_{i,i}~|~i\in\mathcal{P}^+\}\cup\{E_{i,j}~|~i\preceq_{\mathcal{P}^+}j\}$$ and $$\{y_1,\hdots,y_n\}=\bigg\{\sum_{i=1}^{|\mathcal{P}^+|}E_{i,i}\bigg\}\cup\{E_{i,i}-E_{i+1,i+1}~|~1\le i\le |\mathcal{P}^+|-1\}\cup\{E_{i,j}~|~i\preceq_{\mathcal{P}^+}j\}$$ both form bases for $\mathfrak{g}(\mathcal{P}^+)$, while $$\{z_1,\hdots,z_{n-1}\}=\{E_{i,i}-E_{i+1,i+1}~|~1\le i\le |\mathcal{P}^+|-1\}\cup\{E_{i,j}~|~i\preceq_{\mathcal{P}^+}j\}$$ forms a basis for $\mathfrak{g}_A(\mathcal{P}^+)$. Replacing $E_{-i,-i}-E_{i,i}$ by $E_{i,i}$ and $E_{-j,-i}-E_{i,j}$ by $E_{i,j}$ in $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$ results in $([x_i,x_j])_{1\le i, j\le n}$, establishing the first equality. The second equality follows by comparing the commutator matrices $([y_i,y_j])_{1\le i, j\le n}$ and $([z_i,z_j])_{1\le i, j\le n-1}$, for $\mathfrak{g}(\mathcal{P}^+)$ and $\mathfrak{g}_A(\mathcal{P}^+)$, respectively. \end{proof} \begin{corollary} If $\mathcal{P}$ is a type-C poset such that $\mathfrak{g}_C(\mathcal{P})$ is Frobenius, then $\mathcal{P}$ is non-separable. \end{corollary} \begin{corollary}\label{cor:h00} If $\mathcal{P}$ is a height-$(0,0)$, type-C poset, then $${\rm ind \hspace{.1cm}}\mathfrak{g}_C(\mathcal{P})=|\mathcal{P}^+|.$$ \end{corollary} \begin{proof} This follows since any commutator matrix corresponding to $\mathfrak{g}(\mathcal{P}^+)$ is the $|\mathcal{P}^+|\times|\mathcal{P}^+|$ zero-matrix.
\end{proof} \begin{remark} In light of Corollary~\ref{cor:h00}, the next case to consider is type-C posets $\mathcal{P}$ of height-$(0,1)$. This case is non-trivial and contains the first examples of Frobenius, type-C Lie poset algebras. The remainder of this article concerns the analysis of this case. In particular, using combinatorial index formulas developed in Section~\ref{subsec:indexform}, we are able to fully characterize those height-$(0,1)$, type-C posets which underlie Frobenius, type-C Lie poset algebras \textup(see Theorem~\ref{thm:FrobC}\textup). A spectral analysis follows in Section~\ref{sec:spec}. \end{remark} \subsection{Matrix reduction} In this section, we describe an algorithm for reducing $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$, for $\mathcal{P}$ a connected, height-$(0,1)$, type-C poset. This reduction facilitates the development of a combinatorial index formula for $\mathfrak{g}_C(\mathcal{P})$ in Section~\ref{subsec:indexform}. As a first step in our matrix reduction, we order the row and column labels of $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$, i.e., the elements of $\mathscr{B}_C(\mathcal{P})$, as follows: \begin{enumerate} \item the elements $\mathbf{E_{-i,-i}-E_{i,i}}$ in increasing order of $i$ in $\mathbb{Z}$; \item the elements $\mathbf{E_{-i,j}+E_{-j,i}}$ in increasing lexicographic order of $(i,j)$, for $i<j$, in $\mathbb{Z}\times\mathbb{Z}$. \end{enumerate} \noindent With this ordering, since height-$(0,1)$, type-C posets have no non-trivial transitivity relations, $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$ has the form illustrated in Figure~\ref{fig:h01m}.
\begin{figure}[H] $$\begin{tikzpicture} \matrix [matrix of math nodes,left delimiter={(},right delimiter={)}] { 0 & -B(\mathcal{P})^T \\ B(\mathcal{P}) & 0 \\ }; \end{tikzpicture}$$ \caption{Matrix form of $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$, for $\mathcal{P}$ a height-$(0,1)$, type-C poset}\label{fig:h01m} \end{figure} \noindent Here, $B(\mathcal{P})$ has rows labeled by basis elements of the form $\mathbf{E_{-i,j}+E_{-j,i}}$ and columns labeled by basis elements of the form $\mathbf{E_{-i,-i}-E_{i,i}}$, and $-B(\mathcal{P})^T$ has these labels reversed. Thus, since $rank(B(\mathcal{P}))=rank(B(\mathcal{P})^T)$, to calculate the index, it suffices to determine the rank of $B(\mathcal{P})$. Now, in order to define our matrix reduction, we catalogue the forms of collections of rows in $B(\mathcal{P})$ which correspond to certain substructures of $RG(\mathcal{P})$. Further, we introduce row operations to reduce such collections of rows. To condense illustrations, when no confusion will arise, columns of zeros are omitted. \\* \noindent \textbf{Paths}: If $RG(\mathcal{P})$ has a path consisting of the sequence of vertices $i_1,\hdots,i_n$, then the corresponding rows and columns of $B(\mathcal{P})$ have the form illustrated in Figure~\ref{fig:path}. 
\begin{figure}[H] \[ \scalemath{0.75}{\kbordermatrix{ & \mathbf{E_{-i_1,-i_1}-E_{i_1,i_1}} & \mathbf{E_{-i_2,-i_2}-E_{i_2,i_2}} & \mathbf{E_{-i_3,-i_3}-E_{i_3,i_3}} & \hdots & \mathbf{E_{-i_{n-1},-i_{n-1}}-E_{i_{n-1},i_{n-1}}} & \mathbf{E_{-i_n,-i_n}-E_{i_n,i_n}} \\ \mathbf{E_{-i_1,i_2}+E_{-i_2,i_1}} & -E_{-i_1,i_2}-E_{-i_2,i_1} & -E_{-i_1,i_2}-E_{-i_2,i_1} & 0 & \hdots & 0 & 0 \\ \mathbf{E_{-i_2,i_3}+E_{-i_3,i_2}} & 0 & -E_{-i_2,i_3}-E_{-i_3,i_2} & -E_{-i_2,i_3}-E_{-i_3,i_2} & & \\ \vdots & & & & \ddots & & & \\ \mathbf{E_{-i_{n-1},i_n}+E_{-i_n,i_{n-1}}} & & & & & -E_{-i_{n-1},i_n}-E_{-i_n,i_{n-1}} & -E_{-i_{n-1},i_n}-E_{-i_n,i_{n-1}} \\ }} \] \caption{Matrix block corresponding to a path}\label{fig:path} \end{figure} \begin{definition} If $RG(\mathcal{P})$ contains a path consisting of the sequence of vertices $i_1,\hdots,i_n$, then define the row operation $Path(i_1,\hdots,i_n)$ on $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$ to be $$(\mathbf{E_{-i_{n-1},i_n}+E_{-i_n,i_{n-1}}})+\sum_{j=1}^{n-2}(-1)^j\frac{E_{-i_{n-1},i_n}+E_{-i_n,i_{n-1}}}{E_{-i_{n-j},i_{n-j-1}}+E_{-i_{n-j-1},i_{n-j}}}(\mathbf{E_{-i_{n-j},i_{n-j-1}}+E_{-i_{n-j-1},i_{n-j}}})$$ performed at row $\mathbf{E_{-i_{n-1},i_n}+E_{-i_n,i_{n-1}}}$. \end{definition} \begin{Ex} The result of applying $Path(i_1,\hdots,i_n)$ to the matrix of Figure~\ref{fig:path} is illustrated in Figure~\ref{fig:rpath}.
\begin{figure}[H] \[ \scalemath{0.75}{\kbordermatrix{ & \mathbf{E_{-i_1,-i_1}-E_{i_1,i_1}} & \mathbf{E_{-i_2,-i_2}-E_{i_2,i_2}} & \mathbf{E_{-i_3,-i_3}-E_{i_3,i_3}} & \hdots & \mathbf{E_{-i_{n-1},-i_{n-1}}-E_{i_{n-1},i_{n-1}}} & \mathbf{E_{-i_n,-i_n}-E_{i_n,i_n}} \\ \mathbf{E_{-i_1,i_2}+E_{-i_2,i_1}} & -E_{-i_1,i_2}-E_{-i_2,i_1} & -E_{-i_1,i_2}-E_{-i_2,i_1} & 0 & \hdots & 0 & 0 \\ \mathbf{E_{-i_2,i_3}+E_{-i_3,i_2}} & 0 & -E_{-i_2,i_3}-E_{-i_3,i_2} & -E_{-i_2,i_3}-E_{-i_3,i_2} & & \\ \vdots & & & & \ddots & & & \\ \mathbf{E_{-i_{n-1},i_n}+E_{-i_n,i_{n-1}}} & \pm(E_{-i_{n-1},i_n}+E_{-i_n,i_{n-1}}) & 0 & 0 & \hdots & 0 & -E_{-i_{n-1},i_n}-E_{-i_n,i_{n-1}} \\ }} \] \caption{Reduced matrix block corresponding to a path}\label{fig:rpath} \end{figure} \end{Ex} \bigskip \noindent \textbf{Self-loop}: If $RG(\mathcal{P})$ has a self-loop at vertex $i_1$, then the corresponding rows and columns of $B(\mathcal{P})$ have the form illustrated in Figure~\ref{fig:filledinnode}. \begin{figure}[H] \[ \scalemath{0.75}{\kbordermatrix{ & \mathbf{E_{-i_1,-i_1}-E_{i_1,i_1}} \\ \mathbf{E_{-i_1,i_1}} & -2E_{-i_1,i_1} \\ }} \] \caption{Matrix block corresponding to a self-loop}\label{fig:filledinnode} \end{figure} \noindent \textbf{Cycles}: If $RG(\mathcal{P})$ has a cycle consisting of $n>1$ vertices $i_1,\hdots,i_n$, then the corresponding rows and columns of $B(\mathcal{P})$ have the form illustrated in Figure~\ref{fig:cycle}. 
\begin{figure}[H] \[ \scalemath{0.75}{\kbordermatrix{ & \mathbf{E_{-i_1,-i_1}-E_{i_1,i_1}} & \mathbf{E_{-i_2,-i_2}-E_{i_2,i_2}} & \mathbf{E_{-i_3,-i_3}-E_{i_3,i_3}} & \hdots & \mathbf{E_{-i_{n-1},-i_{n-1}}-E_{i_{n-1},i_{n-1}}} & \mathbf{E_{-i_n,-i_n}-E_{i_n,i_n}} \\ \mathbf{E_{-i_1,i_2}+E_{-i_2,i_1}} & -E_{-i_1,i_2}-E_{-i_2,i_1} & -E_{-i_1,i_2}-E_{-i_2,i_1} & 0 & \hdots & 0 & 0 \\ \mathbf{E_{-i_1,i_n}+E_{-i_n,i_1}} & -E_{-i_1,i_n}-E_{-i_n,i_1} & 0 & 0 & \hdots & 0 & -E_{-i_1,i_n}-E_{-i_n,i_1} \\ \mathbf{E_{-i_2,i_3}+E_{-i_3,i_2}} & 0 & -E_{-i_2,i_3}-E_{-i_3,i_2} & -E_{-i_2,i_3}-E_{-i_3,i_2} & & \\ \vdots & & & & \ddots & & & \\ \mathbf{E_{-i_{n-1},i_n}+E_{-i_n,i_{n-1}}} & & & & & -E_{-i_{n-1},i_n}-E_{-i_n,i_{n-1}} & -E_{-i_{n-1},i_n}-E_{-i_n,i_{n-1}} \\ }} \] \caption{Matrix block corresponding to a cycle}\label{fig:cycle} \end{figure} \begin{definition} If $RG(\mathcal{P})$ contains a cycle consisting of $n>1$ vertices $i_1,\hdots,i_n$, then define the row operation $Row_e(i_1,\hdots,i_n)$ on $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$ to be $$(\mathbf{E_{-i_1,i_n}+E_{-i_n,i_1}})+\sum_{j=1}^{n-1}(-1)^j\frac{E_{-i_{1},i_n}+E_{-i_n,i_{1}}}{E_{-i_{j},i_{j+1}}+E_{-i_{j+1},i_j}}(\mathbf{E_{-i_{j},i_{j+1}}+E_{-i_{j+1},i_j}})$$ performed at row $\mathbf{E_{-i_1,i_n}+E_{-i_n,i_1}}$. \end{definition} \begin{Ex} The result of applying $Row_e(i_1,\hdots,i_n)$ to the matrix of Figure~\ref{fig:cycle}, for $n$ even, is illustrated in Figure~\ref{fig:rcycle1}. 
\begin{figure}[H] \[ \scalemath{0.75}{\kbordermatrix{ & \mathbf{E_{-i_1,-i_1}-E_{i_1,i_1}} & \mathbf{E_{-i_2,-i_2}-E_{i_2,i_2}} & \mathbf{E_{-i_3,-i_3}-E_{i_3,i_3}} & \hdots & \mathbf{E_{-i_{n-1},-i_{n-1}}-E_{i_{n-1},i_{n-1}}} & \mathbf{E_{-i_n,-i_n}-E_{i_n,i_n}} \\ \mathbf{E_{-i_1,i_2}+E_{-i_2,i_1}} & -E_{-i_1,i_2}-E_{-i_2,i_1} & -E_{-i_1,i_2}-E_{-i_2,i_1} & 0 & \hdots & 0 & 0 \\ \mathbf{E_{-i_1,i_n}+E_{-i_n,i_1}} & 0 & 0 & 0 & \hdots & 0 & 0 \\ \mathbf{E_{-i_2,i_3}+E_{-i_3,i_2}} & 0 & -E_{-i_2,i_3}-E_{-i_3,i_2} & -E_{-i_2,i_3}-E_{-i_3,i_2} & & \\ \vdots & & & & \ddots & & & \\ \mathbf{E_{-i_{n-1},i_n}+E_{-i_n,i_{n-1}}} & & & & & -E_{-i_{n-1},i_n}-E_{-i_n,i_{n-1}} & -E_{-i_{n-1},i_n}-E_{-i_n,i_{n-1}} \\ }} \] \caption{Reduced matrix block corresponding to an even cycle}\label{fig:rcycle1} \end{figure} \end{Ex} \begin{remark} Note that if we disregard the newly added zero row in Figure~\ref{fig:rcycle1}, then the configuration of rows is the same as that corresponding to the cycle defined by $i_1,\hdots,i_n$ with the edge between $i_1$ and $i_n$ removed. \end{remark} \begin{definition} If $RG(\mathcal{P})$ contains an odd cycle consisting of $n>1$ vertices $i_1,\hdots,i_n$, then define the row operation $Row_o(i_1,\hdots,i_n)$ on $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$ to be $Row_e(i_1,\hdots,i_n)$ followed by multiplying row $(\mathbf{E_{-i_1,i_n}+E_{-i_n,i_1}})$ by $\frac{E_{-i_n,i_n}}{E_{-i_1,i_n}+E_{-i_n,i_1}}$. \end{definition} \begin{Ex} The result of applying $Row_o(i_1,\hdots,i_n)$ to the matrix of Figure~\ref{fig:cycle}, for $n$ odd, is illustrated in Figure~\ref{fig:rcycle2}. 
\begin{figure}[H] \[ \scalemath{0.75}{\kbordermatrix{ & \mathbf{E_{-i_1,-i_1}-E_{i_1,i_1}} & \mathbf{E_{-i_2,-i_2}-E_{i_2,i_2}} & \mathbf{E_{-i_3,-i_3}-E_{i_3,i_3}} & \hdots & \mathbf{E_{-i_{n-1},-i_{n-1}}-E_{i_{n-1},i_{n-1}}} & \mathbf{E_{-i_n,-i_n}-E_{i_n,i_n}} \\ \mathbf{E_{-i_1,i_2}+E_{-i_2,i_1}} & -E_{-i_1,i_2}-E_{-i_2,i_1} & -E_{-i_1,i_2}-E_{-i_2,i_1} & 0 & \hdots & 0 & 0 \\ \mathbf{E_{-i_1,i_n}+E_{-i_n,i_1}} & 0 & 0 & 0 & \hdots & 0 & -2E_{-i_n,i_n} \\ \mathbf{E_{-i_2,i_3}+E_{-i_3,i_2}} & 0 & -E_{-i_2,i_3}-E_{-i_3,i_2} & -E_{-i_2,i_3}-E_{-i_3,i_2} & & \\ \vdots & & & & \ddots & & & \\ \mathbf{E_{-i_{n-1},i_n}+E_{-i_n,i_{n-1}}} & & & & & -E_{-i_{n-1},i_n}-E_{-i_n,i_{n-1}} & -E_{-i_{n-1},i_n}-E_{-i_n,i_{n-1}} \\ }} \] \caption{Reduced matrix block corresponding to an odd cycle}\label{fig:rcycle2} \end{figure} \end{Ex} \begin{remark} Note that the configuration of rows in Figure~\ref{fig:rcycle2} is the same as that corresponding to the cycle defined by $i_1,\hdots,i_n$ with the edge between $i_1$ and $i_n$ removed and with $i_n$ defining a self-loop. If vertex $i_n$ already defined a self-loop, then row $\mathbf{E_{-i_1,i_n}+E_{-i_n,i_1}}$ can be reduced to a zero row in the obvious way. \end{remark} \noindent Now, we come to the matrix reduction algorithm, where the relation graph $RG(\mathcal{P})$ of a connected, height-$(0,1)$, type-C poset $\mathcal{P}$ is used as a bookkeeping device to guide the reduction. \bigskip \begin{tcolorbox}[breakable, enhanced] \centerline{\textbf{Matrix Reduction Algorithm}: Let $\mathcal{P}$ be a connected, height-$(0,1)$, type-C poset} \bigskip \noindent \textbf{Step 1}: Set $G_1=RG(\mathcal{P})$, $M_1=B(\mathcal{P})$, $\Gamma_1=(G_1,M_1)$, and $k=1$. \\* \noindent \textbf{Step 2}: Check $G_k$ for self-loops. \begin{itemize} \item If $G_k$ has a self-loop at vertex $i$ and vertex $i$ is adjacent to a vertex $j$, go to \textbf{Step 3}. 
\item If $G_k$ contains self-loops and no vertex defining a self-loop is adjacent to any other vertex, halt. \item If $G_k$ has no self-loops, go to \textbf{Step 4}. \end{itemize} \textbf{Step 3}: Set $l=k$. Form $\Gamma_{l+1}=(G_{l+1},M_{l+1})$ as follows: \begin{enumerate} \item Perform $$\mathbf{(E_{-i,j}+E_{-j,i})}-\frac{E_{-i,j}+E_{-j,i}}{2E_{-i,i}}\mathbf{E_{-i,i}}$$ at row $\mathbf{E_{-i,j}+E_{-j,i}}$ in $M_l$. \item \begin{itemize} \item If vertex $j$ does not define a self-loop, then: \begin{enumerate}[label=\arabic*.] \setcounter{enumii}{2} \item Multiply row $\mathbf{E_{-i,j}+E_{-j,i}}$ by $\frac{2E_{-j,j}}{E_{-i,j}+E_{-j,i}}$ in $M_l$. \item Replace the row label $\mathbf{E_{-i,j}+E_{-j,i}}$ by $\mathbf{E_{-j,j}}$ in $M_l$. \item Remove the edge between vertices $i$ and $j$ in $G_l$ and add a self-loop at vertex $j$. \item Set $k=l+1$ and go to \textbf{Step 2}. \end{enumerate} \item If vertex $j$ defines a self-loop, then: \begin{enumerate}[label=\arabic*.] \setcounter{enumii}{2} \item Perform $$\mathbf{(E_{-i,j}+E_{-j,i})}-\frac{E_{-i,j}+E_{-j,i}}{2E_{-j,j}}\mathbf{E_{-j,j}}$$ at row $\mathbf{E_{-i,j}+E_{-j,i}}$ in $M_l$. \item Replace the row label $\mathbf{E_{-i,j}+E_{-j,i}}$ by $\mathbf{0}$ in $M_l$. \item Remove the edge between vertices $i$ and $j$ in $G_l$. \item Set $k=l+1$ and go to \textbf{Step 2}. \end{enumerate} \end{itemize} \end{enumerate} \textbf{Step 4}: Check $G_k$ for odd cycles containing $n>1$ vertices: \begin{itemize} \item If $G_k$ has such an odd cycle: Let $i_1,\hdots,i_n$ be the vertices of a largest odd cycle in $G_k$; if there is more than one, assuming $i_1<i_j$ for $j=2,\hdots,n$, take $(i_1,\hdots,i_n)$ to be the lexicographically least in $\mathbb{Z}^n$. Go to \textbf{Step 5}. \item If $G_k$ has no such odd cycles, go to \textbf{Step 6}. \end{itemize} \textbf{Step 5:} Set $l=k$. Form $\Gamma_{l+1}=(G_{l+1},M_{l+1})$ as follows: \begin{enumerate} \item Perform $Row_o(i_1,\hdots,i_n)$ in $M_l$.
\item Replace the row label $\mathbf{E_{-i_1,i_n}+E_{-i_n,i_1}}$ by $\mathbf{E_{-i_n,i_n}}$ in $M_l$. \item Remove the edge between vertices $i_1$ and $i_n$, and add a self-loop at vertex $i_n$ in $G_l$. \item Set $k=l+1$ and go to \textbf{Step 2}. \end{enumerate} \textbf{Step 6:} Check $G_k$ for even cycles: \begin{itemize} \item If $G_k$ has an even cycle: Let $i_1,\hdots,i_n$ be the vertices of a largest even cycle in $G_k$; if there is more than one, assuming $i_1<i_j$ for $j=2,\hdots,n$, take $(i_1,\hdots,i_n)$ to be the lexicographically least in $\mathbb{Z}^n$. Go to \textbf{Step 7}. \item If $G_k$ has no even cycles, go to \textbf{Step 8}. \end{itemize} \textbf{Step 7:} Set $l=k$. Form $\Gamma_{l+1}=(G_{l+1},M_{l+1})$ as follows: \begin{enumerate} \item Perform $Row_e(i_1,\hdots,i_n)$ in $M_l$. \item Replace the row label $\mathbf{E_{-i_1,i_n}+E_{-i_n,i_1}}$ by $\mathbf{0}$ in $M_l$. \item Remove the edge between vertices $i_1$ and $i_n$ in $G_l$. \item Set $k=l+1$ and go to \textbf{Step 2}. \end{enumerate} \textbf{Step 8:} Set $l=k$. Form $\Gamma_{l+1}=(G_{l+1},M_{l+1})$ as follows: \begin{enumerate} \item Take $i_1$ minimal in $\mathbb{Z}$ such that $i_1$ is a degree one vertex of $G_l$, and let $n$ be the maximal distance from $i_1$ to any other vertex of $G_l$. \item Working from $m=n$ down to 1, for all paths of length $m$ in $G_l$ starting at $i_1$, say $i_1,\hdots,i_m$, perform $Path(i_1,\hdots,i_m)$ at row $\mathbf{E_{-i_m,i_{m-1}}+E_{-i_{m-1},i_m}}$. \item Halt. \end{enumerate} \end{tcolorbox} \begin{remark} Since we only consider finite posets $\mathcal{P}$, the above algorithm must halt in a finite number of steps. \end{remark} \subsection{Index formula}\label{subsec:indexform} In this section, we determine an index formula for type-C Lie poset algebras corresponding to height-$(0,1)$, type-C posets. Throughout, we let $V(\mathcal{P})$ and $E(\mathcal{P})$ denote, respectively, the sets of vertices and edges of $RG(\mathcal{P})$.
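The combinatorial data entering the index formulas of this section is purely graph-theoretic: $|V(\mathcal{P})|$, $|E(\mathcal{P})|$, and whether $RG(\mathcal{P})$ contains an odd cycle (a self-loop counts as an odd cycle of length one). As an informal illustration, not part of the formal development, the following Python sketch computes $|E(\mathcal{P})|-|V(\mathcal{P})|+2\delta_{\circ}$ for a connected relation graph given as an edge list; the function name `type_c_index` and the edge-list encoding are our own conventions. Odd cycles are detected via a breadth-first 2-coloring, since a connected loopless graph contains an odd cycle exactly when it fails to be bipartite.

```python
from collections import deque

def type_c_index(vertices, edges):
    """|E(P)| - |V(P)| + 2*delta for a connected relation graph RG(P),
    where delta = 1 exactly when RG(P) contains no odd cycle.  A
    self-loop counts as an odd cycle; otherwise, odd cycles exist
    precisely when the graph is not bipartite."""
    if any(u == v for u, v in edges):
        has_odd_cycle = True  # a self-loop is an odd cycle of length one
    else:
        adj = {v: [] for v in vertices}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        color, has_odd_cycle = {}, False
        for s in vertices:
            if s in color:
                continue
            color[s] = 0
            queue = deque([s])
            while queue:  # BFS 2-coloring
                u = queue.popleft()
                for w in adj[u]:
                    if w not in color:
                        color[w] = 1 - color[u]
                        queue.append(w)
                    elif color[w] == color[u]:
                        has_odd_cycle = True  # odd closed walk found
    delta = 0 if has_odd_cycle else 1
    return len(edges) - len(vertices) + 2 * delta
```

For the poset of Figure~\ref{fig:relationgr}, whose relation graph has vertices $\{1,2,3\}$, edges $\{1,2\}$ and $\{2,3\}$, and a self-loop at $2$, the sketch returns $3-3+0=0$; for a three-vertex path it returns $2-3+2=1$.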
\begin{lemma}\label{lem:alg} Let $\mathcal{P}$ be a connected, height-$(0,1)$, type-C poset. \begin{itemize} \item If $RG(\mathcal{P})$ contains an odd cycle, then the rank of $B(\mathcal{P})$ in $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$ is $|V(\mathcal{P})|$; \item if $RG(\mathcal{P})$ contains no odd cycles, then the rank of $B(\mathcal{P})$ in $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$ is $|V(\mathcal{P})|-1$. \end{itemize} \end{lemma} \begin{proof} The proof breaks into four cases. \\* \noindent \textbf{Case 1:} If $RG(\mathcal{P})$ contains a self-loop, then the algorithm proceeds by removing edges adjacent to vertices defining self-loops and ensuring, after each edge removal, that the affected adjacent vertices come to define self-loops. Thus, as $RG(\mathcal{P})$ is assumed to be connected, this implies that the algorithm above halts with $\Gamma_n=(G_n,M_n)$, where \begin{itemize} \item $G_n=RG(\mathcal{P}')$ for the poset $\mathcal{P}'$ satisfying $\mathcal{P}'=\mathcal{P}$ as sets and $-i\preceq_{\mathcal{P}'}i$, for all $i\in\mathcal{P}$, and \item $M_n$ is the $B(\mathcal{P}')$ block in $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}'))$ with (potentially) additional zero rows; \end{itemize} that is, each of the $|\mathcal{P}^+|=|V(\mathcal{P})|$ columns of $M_n$ has a unique corresponding row whose unique nonzero entry lies in that column. Thus, the result follows. \\* \noindent \textbf{Case 2:} If $RG(\mathcal{P})$ contains an odd cycle consisting of $n>1$ vertices and no self-loops, then the algorithm starts by removing an edge from an odd cycle and adding a self-loop. From here, the algorithm proceeds, and the result follows, as in Case 1.
\\* \noindent \textbf{Case 3:} If $RG(\mathcal{P})$ is a tree, then the algorithm halts at $\Gamma_n=(G_n,M_n)$, where if $i_1$ is the specified degree one vertex of \textbf{Step 8}, then given $i_j$ and $i_k$ adjacent in $RG(\mathcal{P})$ with $i_j$ contained in the unique path from $i_1$ to $i_k$, we have that row $\mathbf{E_{-i_j,i_k}+E_{-i_k,i_j}}$ is the unique row with nonzero entry in column $\mathbf{E_{-i_k,-i_k}-E_{i_k,i_k}}$; that is, all $|\mathcal{P}^+|-1=|V(\mathcal{P})|-1$ rows of $M_n$ are linearly independent and the result follows. \\* \noindent \textbf{Case 4:} If $RG(\mathcal{P})$ contains an even cycle and no odd cycles, then the algorithm removes edges from even cycles (introducing zero rows), until the resulting graph is a tree. From here, the algorithm proceeds, and the result follows, as in Case 3. \end{proof} As a result of Lemma~\ref{lem:alg}, we get the following. \begin{theorem}\label{thm:indform} If $\mathcal{P}$ is a connected, height-$(0,1)$, type-C poset, then $${\rm ind \hspace{.1cm}}\mathfrak{g}_C(\mathcal{P})=|E(\mathcal{P})|-|V(\mathcal{P})|+2\delta_{\circ},$$ where $\delta_{\circ}$ is the indicator function for $RG(\mathcal{P})$ containing no odd cycles. \end{theorem} \begin{proof} To start, by Theorem~\ref{thm:commat}, we know that $${\rm ind \hspace{.1cm}}\mathfrak{g}_C(\mathcal{P})=dim(\mathcal{C}(\mathfrak{g}_C(\mathcal{P})))-rank(\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))),$$ where $dim(\mathcal{C}(\mathfrak{g}_C(\mathcal{P})))=|E(\mathcal{P})|+|V(\mathcal{P})|$. Furthermore, $rank(\mathcal{C}(\mathfrak{g}_C(\mathcal{P})))=2\cdot rank(B(\mathcal{P}))$.
By Lemma~\ref{lem:alg}, we know that if $RG(\mathcal{P})$ contains an odd cycle, then $rank(B(\mathcal{P}))=|V(\mathcal{P})|$; that is, $${\rm ind \hspace{.1cm}}\mathfrak{g}_C(\mathcal{P})=|E(\mathcal{P})|+|V(\mathcal{P})|-2|V(\mathcal{P})|=|E(\mathcal{P})|-|V(\mathcal{P})|.$$ Otherwise, $rank(B(\mathcal{P}))=|V(\mathcal{P})|-1$ so that $${\rm ind \hspace{.1cm}}\mathfrak{g}_C(\mathcal{P})=|E(\mathcal{P})|+|V(\mathcal{P})|-2(|V(\mathcal{P})|-1)=|E(\mathcal{P})|-|V(\mathcal{P})|+2.$$ The result follows. \end{proof} \begin{remark} Note that if $RG(\mathcal{P})$ is not connected, then the elements of $\mathcal{P}$ corresponding to each connected component $K_i$ of $RG(\mathcal{P})$ induce posets $\mathcal{P}_{K_i}$ which are isomorphic to connected, type-C posets of height-$(0,0)$ or $(0,1)$. \end{remark} \begin{theorem}\label{thm:disjoint} If $\mathcal{P}$ is a height-$(0,1)$, type-C poset such that $RG(\mathcal{P})$ consists of connected components $\{K_1,\hdots,K_n\}$, then $${\rm ind \hspace{.1cm}}\mathfrak{g}_C(\mathcal{P})=\sum_{i=1}^n{\rm ind \hspace{.1cm}}\mathfrak{g}_C(\mathcal{P}_{K_i}).$$ \end{theorem} \begin{proof} Note that basis elements of $\mathfrak{g}_C(\mathcal{P})$ corresponding to different connected components of $RG(\mathcal{P})$ have trivial bracket relations. Thus, $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}))$ can be arranged to be block diagonal with each block corresponding to the basis elements of a connected component of $RG(\mathcal{P})$. Since the block corresponding to $K_i$ is equivalent to $\mathcal{C}(\mathfrak{g}_C(\mathcal{P}_{K_i}))$, for $1\le i\le n$, the result follows. \end{proof} Combining Theorems~\ref{thm:indform} and~\ref{thm:disjoint} with Corollary~\ref{cor:h00} we get the following. 
\begin{theorem} If $\mathcal{P}$ is a height-$(0,1)$, type-C poset, then $${\rm ind \hspace{.1cm}}\mathfrak{g}_C(\mathcal{P})=|E(\mathcal{P})|-|V(\mathcal{P})|+2\eta(\mathcal{P}),$$ where $\eta(\mathcal{P})$ denotes the number of connected components of $RG(\mathcal{P})$ containing no odd cycles. \end{theorem} \begin{theorem}\label{thm:FrobC} If $\mathcal{P}$ is a height-$(0,1)$, type-C poset, then $\mathfrak{g}_C(\mathcal{P})$ is Frobenius if and only if each connected component of $RG(\mathcal{P})$ contains a single cycle which consists of an odd number of vertices. \end{theorem} \begin{proof} Let $\mathcal{P}$ be a height-$(0,1)$, type-C poset. Combining Theorem~\ref{thm:disjoint} and Corollary~\ref{cor:h00}, we find that $\mathfrak{g}_C(\mathcal{P})$ is Frobenius if and only if $\mathfrak{g}_C(\mathcal{P}_{K_i})$ is Frobenius for each connected component $K_i$ of $RG(\mathcal{P})$. Assume $\mathcal{P}$ is connected. Note that $|E(\mathcal{P})|-|V(\mathcal{P})|\ge -1$ with equality exactly when $RG(\mathcal{P})$ is a tree. Thus, by Theorem~\ref{thm:indform}, if $\mathfrak{g}_C(\mathcal{P})$ is Frobenius, then $RG(\mathcal{P})$ must contain an odd cycle. If $RG(\mathcal{P})$ contains an odd cycle, then $|E(\mathcal{P})|-|V(\mathcal{P})|\ge 0$ with equality if and only if $RG(\mathcal{P})$ contains a single odd cycle. Therefore, the result follows. \end{proof} \begin{remark} To ease discourse in the following section, type-C \textup(resp., B or D\textup) posets corresponding to Frobenius, type-C \textup(resp., B or D\textup) Lie poset algebras are referred to as Frobenius, type-C \textup(resp., B or D\textup) posets. \end{remark} \section{Spectrum}\label{sec:spec} In this section, given a Frobenius, type-B, C, or D Lie poset algebra generated by a height-$(0,1)$, type-B, C, or D poset, respectively, we determine the form of a particular Frobenius functional (see Theorem~\ref{thm:FrobFun}) as well as its corresponding principal element (see Theorem~\ref{thm:pe}).
With a principal element in hand, we are then able to determine the form of the spectrum for such Lie algebras (see Theorem~\ref{thm:spec}). \begin{remark} As in the previous section, all results and proofs will be in terms of type-C Lie poset algebras, but all results apply to type-B and D Lie poset algebras as well. \end{remark} \begin{remark} Throughout this section, we let $E^*_{i,j}$ denote the functional which returns the $i,j$-entry of a matrix. \end{remark} \begin{theorem}\label{thm:FrobFun} If $\mathcal{P}$ is a Frobenius, height-$(0,1)$, type-C poset and $$F_{\mathcal{P}}=\sum_{\substack{-i\preceq j \\ i<j}}E^*_{-i,j}+\sum_{-i,i\in\mathcal{P}}\delta_{-i\preceq i}\cdot E^*_{-i,i},$$ where $\delta_{-}$ is the Kronecker delta function, then $F_\mathcal{P}$ is a Frobenius functional on $\mathfrak{g}_C(\mathcal{P})$. \end{theorem} \begin{remark} Throughout this section, we will assume that if $\mathcal{P}$ is a Frobenius, height-$(0,1)$, type-C poset such that $RG(\mathcal{P})$ contains an odd cycle consisting of the vertices $\{i_1,\hdots,i_n\}$, then $i_1<i_2<\hdots<i_n$. Such an assumption does not limit the results of this section, since such a relabeling of the elements of a type-C poset does not alter the isomorphism class of the encoded type-C Lie poset algebra. \end{remark} \begin{lemma}\label{lem:diag} If $\mathcal{P}$ is a Frobenius, height-$(0,1)$, type-C poset, then $B\in\mathfrak{g}_C(\mathcal{P})\cap\ker(B_{F_{\mathcal{P}}})$ must satisfy $E^*_{i,i}(B)=0$, for all $i\in\mathcal{P}$. \end{lemma} \begin{proof} Given $B\in\mathfrak{g}_C(\mathcal{P})\cap\ker(B_{F_{\mathcal{P}}})$, we have \begin{enumerate} \item $F_{\mathcal{P}}([E_{-i,j}+E_{-j,i},B])=-E^*_{-i,-i}(B)+E^*_{j,j}(B)=0$, for all $-j,-i,i,j\in\mathcal{P}$ satisfying $-i\preceq j$ and $i<j$; and \item $F_{\mathcal{P}}([E_{-i,i},B])=-E^*_{-i,-i}(B)+E^*_{i,i}(B)=0$, for $-i,i\in\mathcal{P}$ satisfying $-i\preceq i$.
\end{enumerate} As a result of Condition 1, \begin{equation}\label{eqn:equal} E^*_{-i,-i}(B)=E^*_{j,j}(B), \end{equation} for all $-j,-i,i,j\in\mathcal{P}$ contained in a connected component of $RG(\mathcal{P})$ satisfying $-i\preceq j$ and $i<j$. Considering each connected component $K$ of $RG(\mathcal{P})$ separately, the proof breaks into two cases. \\* \noindent \textbf{Case 1:} $K$ contains a self-loop, say at vertex $i_1$. Condition 2 and the fact that $B\in\mathfrak{sp}(|\mathcal{P}|)$ combine to imply that $E^*_{i_1,i_1}(B)=E^*_{-i_1,-i_1}(B)=0$. Thus, considering (\ref{eqn:equal}) and the fact that $K$ is connected, we may conclude that $E^*_{i,i}(B)=0$ for all $i\in\mathcal{P}$ contained in $K$. \\* \noindent \textbf{Case 2:} $K$ contains an odd cycle consisting of $n>1$ elements $\{i_1,\hdots,i_n\}$ satisfying: $i_1$ is adjacent to $i_{n}$ and $i_{2}$, $i_j$ is adjacent to $i_{j-1}$ and $i_{j+1}$, for $2\le j\le n-1$, and $i_1<\hdots<i_n$. Restricting Condition 1 to $\{i_1,\hdots,i_n\}$ and using the fact that $B\in\mathfrak{sp}(|\mathcal{P}|)$, i.e., $E^*_{i,i}(B)=-E^*_{-i,-i}(B)$, for all $i\in\mathcal{P}$, we find that $$E^*_{-i_1,-i_1}(B)=E^*_{i_2,i_2}(B)$$ $$E^*_{i_2,i_2}(B)=-E^*_{i_3,i_3}(B)$$ $$\hdots$$ $$E^*_{i_{n-1},i_{n-1}}(B)=-E^*_{i_n,i_n}(B)$$ $$-E^*_{i_n,i_n}(B)=E^*_{i_1,i_1}(B);$$ but, since $n$ is odd, this implies that $E^*_{-i_1,-i_1}(B)=E^*_{i_1,i_1}(B)$. The result follows as in Case 1. \end{proof} \begin{lemma}\label{lem:offdiag} If $\mathcal{P}$ is a Frobenius, height-$(0,1)$, type-C poset, then $B\in\mathfrak{g}_C(\mathcal{P})\cap\ker(B_{F_{\mathcal{P}}})$ must satisfy $E^*_{-i,j}(B)=0$, for all $-i,j\in\mathcal{P}$ satisfying $-i\preceq j$. 
\end{lemma} \begin{proof} Note that $B\in\mathfrak{g}_C(\mathcal{P})\cap\ker(B_{F_{\mathcal{P}}})$ must satisfy \begin{eqnarray}\label{one} F_{\mathcal{P}}([E_{-i,-i}-E_{i,i},B])=\sum_{\substack{-i\preceq j\\ i<j}}E_{-i,j}^*(B)+\sum_{\substack{-k\preceq i\\ k<i}}E^*_{-k,i}(B)+2\cdot\delta_{-i\preceq i}\cdot E^*_{-i,i}(B)=0, \end{eqnarray} for all $-i,i\in\mathcal{P}$. First, we show that $E^*_{-i,j}(B)=0$, for all $-i,j\in\mathcal{P}$ such that $-i\preceq j$, $j\neq i$, and $\{i,j\}$ does not define an edge of an odd cycle in $RG(\mathcal{P})$. Let $\Gamma_1=RG(\mathcal{P})$. \\* \noindent \textbf{Step 1}: Consider all $-i,i\in \mathcal{P}$ for which $i$ is a vertex of degree one in $\Gamma_1$; say $i$ is adjacent to $j$. Then $$F_{\mathcal{P}}([E_{-i,-i}-E_{i,i},B])=E^*_{-i,j}(B)=0\text{ }(\text{or }E^*_{-j,i}(B)=0).$$ Since $B\in\mathfrak{sp}(|\mathcal{P}|)$, this further implies that $$E^*_{-j,i}(B)=0\text{ }(\text{or }E^*_{-i,j}(B)=0).$$ Removing each such vertex $i$ and edge $\{i,j\}$ of $\Gamma_1$ results in $\Gamma_2$. \\* \noindent \textbf{Step k}: Consider all $-i,i\in \mathcal{P}$ for which $i$ is a vertex of degree one in $\Gamma_k$; say $i$ is adjacent to $j$. Then, taking into account the results of $\mathbf{Step\text{ }1}$ through $\mathbf{Step\text{ }k-1}$, we must have $$F_{\mathcal{P}}([E_{-i,-i}-E_{i,i},B])=E^*_{-i,j}(B)=0\text{ }(\text{or }E^*_{-j,i}(B)=0).$$ Once again, since $B\in\mathfrak{sp}(|\mathcal{P}|)$, this further implies that $$E^*_{-j,i}(B)=0\text{ }(\text{or }E^*_{-i,j}(B)=0).$$ Removing each such vertex $i$ and edge $\{i,j\}$ of $\Gamma_k$ results in $\Gamma_{k+1}$. \\* \noindent Since $RG(\mathcal{P})$ is finite, there must exist $m$ for which the connected components of $\Gamma_m$ are odd cycles. Thus, $E^*_{-i,j}(B)=0$ for all $-i,j\in\mathcal{P}$ such that $-i\preceq j$, $j\neq i$, and $\{i,j\}$ does not define an edge of an odd cycle in $RG(\mathcal{P})$.
It remains to consider $E^*_{-i,j}(B)$ corresponding to components of $\Gamma_m$. The analysis breaks into two cases. \\* \noindent \textbf{Case 1:} Components consisting of a self-loop at vertex $i$. In this case, utilizing the results of $\mathbf{Step\text{ }1}$ through $\mathbf{Step\text{ }m}$ above, we must have $$F_{\mathcal{P}}([E_{-i,-i}-E_{i,i},B])=2\cdot E^*_{-i,i}(B)=0.$$ \\* \noindent \textbf{Case 2:} Components consisting of an odd cycle with $n>1$ elements $\{i_1,\hdots,i_n\}$, where $i_1$ is adjacent to $i_{n}$ and $i_{2}$, $i_j$ is adjacent to $i_{j-1}$ and $i_{j+1}$, for $2\le j\le n-1$, and $i_1<\hdots<i_n$. Restricting equation~(\ref{one}) to $\{i_1,\hdots,i_n\}$ and utilizing the results of \textbf{Step 1} through \textbf{Step m} above, we find $$F_{\mathcal{P}}([E_{-i_1,-i_1}-E_{i_1,i_1},B])=E^*_{-i_1,i_2}(B)+E^*_{-i_1,i_n}(B)=0$$ $$F_{\mathcal{P}}([E_{-i_2,-i_2}-E_{i_2,i_2},B])=E^*_{-i_1,i_2}(B)+E^*_{-i_2,i_3}(B)=0$$ $$\vdots$$ $$F_{\mathcal{P}}([E_{-i_{n-1},-i_{n-1}}-E_{i_{n-1},i_{n-1}},B])=E^*_{-i_{n-1},i_{n}}(B)+E^*_{-i_{n-2},i_{n-1}}(B)=0$$ $$F_{\mathcal{P}}([E_{-i_{n},-i_n}-E_{i_{n},i_{n}},B])=E^*_{-i_{1},i_{n}}(B)+E^*_{-i_{n-1},i_n}(B)=0.$$ Thus, since $n$ is odd, $$E^*_{-i_1,i_n}(B)=-E^*_{-i_1,i_2}(B)=E^*_{-i_2,i_3}(B)=\hdots=E^*_{-i_{n-1},i_{n}}(B)=-E^*_{-i_{1},i_{n}}(B);$$ that is, $E^*_{-i_1,i_n}(B)=-E^*_{-i_1,i_n}(B)=0$ and thus $E^*_{-i_j,i_{j+1}}(B)=0$, for $j=1,\hdots,n-1$. Since $B\in\mathfrak{sp}(|\mathcal{P}|)$, we also get that $E^*_{-i_j,i_{j+1}}(B)=E^*_{-i_{j+1},i_{j}}(B)=0$, for $j=1,\hdots,n-1$. The result follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:FrobFun}] By Lemma~\ref{lem:diag} and Lemma~\ref{lem:offdiag}, if $B\in \mathfrak{g}_C(\mathcal{P})\cap\ker(B_{F_{\mathcal{P}}})$, then $B=0$. 
\end{proof} \begin{remark} Given a poset $\mathcal{P}$ and the functional $F_{\mathcal{P}}$ as in Theorem~\ref{thm:FrobFun}, to determine the form of the principal element $\widehat{F_{\mathcal{P}}}=\sum_{i\in\mathcal{P}}c_iE_{i,i}$ note that, since $\widehat{F_{\mathcal{P}}}\in\mathfrak{sp}(|\mathcal{P}|)$, it must be the case that $(*)$ $c_i=-c_{-i}$, for all $i\in\mathcal{P}$. Furthermore, since $F_{\mathcal{P}}=F_{\mathcal{P}}\circ ad(\widehat{F_{\mathcal{P}}})$, it must be the case that $(**)$ $c_{-i}-c_j=1$, for $-i,j\in\mathcal{P}$ with $E^*_{-i,j}$ a summand of $F_{\mathcal{P}}$. It should be clear that $(*)$ and $(**)$ combine to completely characterize $\widehat{F_{\mathcal{P}}}$. \end{remark} \begin{theorem}\label{thm:pe} If $\mathcal{P}$ is a Frobenius, height-$(0,1)$, type-C poset, then $\widehat{F_{\mathcal{P}}}=\sum_{i\in\mathcal{P}}c_iE_{i,i}$ satisfies \[c_{i} = \begin{cases} -\frac{1}{2}, & i\in \mathcal{P}^+; \\ & \\ \frac{1}{2}, & i\in \mathcal{P}^-. \end{cases} \] \end{theorem} \begin{proof} For each edge of $RG(\mathcal{P})$ between vertices $i$ and $j$ with $i<j$, we get the conditions $c_{-i}=1+c_j$ and $c_{-j}=1+c_i$. Let $K$ be a connected component of $RG(\mathcal{P})$. We claim that there exists $i\in\mathcal{P}^+$ such that $c_{-i}=1+c_i$. There are two cases. \\* \noindent \textbf{Case 1}: $K$ contains a self-loop. If $K$ contains a self-loop at vertex $i$, then $c_{-i}=1+c_i$, establishing the claim. \\* \noindent \textbf{Case 2}: $K$ contains an odd cycle consisting of $\{i=i_1,\hdots,i_n\}$, for $n>1$. Assume that $i_1$ is adjacent to $i_n$ and $i_2$, $i_j$ is adjacent to $i_{j-1}$ and $i_{j+1}$, for $1<j<n$, and $i_1<\hdots<i_n$. Since $n$ is odd, we must have that $$c_{-i}=c_{-i_1}=1+c_{i_2}=c_{-i_3}=1+c_{i_4}=\hdots=1+c_{i_{n-1}}=c_{-i_n}=1+c_{i_1}=1+c_i.$$ The claim follows. \\* \noindent Now, take an arbitrary $j\in\mathcal{P}^+$ and let the sequence $j=j_0,j_1,\hdots,j_m=i$ describe a path between $j$ and $i$ in $K$.
If $m$ is odd, then $$c_{-j}=c_{-j_0}=1+c_{j_1}=c_{-j_2}=\hdots=c_{-j_{m-1}}=1+c_{j_m}=1+c_{i}=c_{-i}.$$ Otherwise, $$c_{-j}=c_{-j_0}=1+c_{j_1}=c_{-j_2}=\hdots=1+c_{j_{m-1}}=c_{-j_m}=c_{-i}.$$ Thus, for each connected component $K$ of $RG(\mathcal{P})$ we have that $c_{-j}=c_{-k}$ and, consequently, $c_{j}=c_{k}$, for all $j,k$ representing vertices of $K$. For $j$ representing a vertex in $K$, this implies that $c_{-j}=1+c_{j}=1-c_{-j}$; that is, $c_{-j}=\frac{1}{2}$ and $c_{j}=-\frac{1}{2}$. The result follows. \end{proof} \begin{theorem}\label{thm:spec} If $\mathcal{P}$ is a Frobenius, height-$(0,1)$, type-C poset, then $\mathfrak{g}_C(\mathcal{P})$ has a spectrum consisting of an equal number of 0's and 1's. \end{theorem} \begin{proof} Consider the basis $\mathscr{B}_C(\mathcal{P})$ for $\mathfrak{g}_C(\mathcal{P})$. Given the form of $\widehat{F_{\mathcal{P}}}$ found in Theorem~\ref{thm:pe}, we see that basis elements contained in the set $$\{E_{-i,-i}-E_{i,i}~|~-i,i\in\mathcal{P}\}$$ are eigenvectors of $ad(\widehat{F_{\mathcal{P}}})$ with eigenvalue 0, and basis elements contained in the set $$\{E_{-i,j}+E_{-j,i}~|~-i,-j,i,j\in\mathcal{P},-j\preceq i,-i\preceq j\}\cup\{E_{-i,i}~|~-i,i\in\mathcal{P},-i\preceq i\}$$ are eigenvectors of $ad(\widehat{F_{\mathcal{P}}})$ with eigenvalue 1. By Theorem~\ref{thm:FrobC}, we must have that \begin{align} |\mathcal{P}^+|&=|\{E_{-i,-i}-E_{i,i}~|~-i,i\in\mathcal{P}\}| \nonumber \\ &=|\{E_{-i,j}+E_{-j,i}~|~-i,-j,i,j\in\mathcal{P},-j\preceq i,-i\preceq j\}\cup\{E_{-i,i}~|~-i,i\in\mathcal{P},-i\preceq i\}|. \nonumber \end{align} Therefore, since $\mathscr{B}_C(\mathcal{P})$ is a basis for $\mathfrak{g}_C(\mathcal{P})$, the result follows. \end{proof} \section{Epilogue} For type-A Lie poset algebras, there are no restrictions on the underlying poset, so the notion of ``height'' is less complicated. In the type-A setting, the height of a poset $\mathcal{P}$ is defined to be one less than the largest cardinality of a chain.
If $\mathcal{P}$ is a connected, height-one (type-A) poset, then the current authors recently established that the index of the associated type-A Lie poset algebra $\mathfrak{g}_A(\mathcal{P})$ is given by the following nice formula (see Theorem 4, \textbf{\cite{CM}}): \begin{eqnarray}\label{index} {\rm ind \hspace{.1cm}}\mathfrak{g}_A(\mathcal{P})=|E(\mathcal{P})|-|V(\mathcal{P})|+1, \end{eqnarray} \noindent where $E(\mathcal{P})$ and $V(\mathcal{P})$ are, respectively, the sets of edges and vertices of the Hasse diagram of $\mathcal{P}$. For a connected, height-$(0,1)$ type-C poset $\mathcal{Q}$, the Hasse diagram is replaced by the relations graph $RG(\mathcal{Q})$ and the type-C analogue of (\ref{index}) is given by Theorem~\ref{thm:indform}: \begin{eqnarray}\label{indexC} {\rm ind \hspace{.1cm}}\mathfrak{g}_{C}(\mathcal{Q})=|E(\mathcal{Q})|-|V(\mathcal{Q})|+2\delta_o, \end{eqnarray} \noindent where $E(\mathcal{Q})$ and $V(\mathcal{Q})$ are, respectively, the sets of edges and vertices of $RG(\mathcal{Q})$, and $\delta_o$ is the indicator function for the existence of odd cycles in $RG(\mathcal{Q})$. (Of course, equation (\ref{indexC}) remains valid with ``C'' replaced by ``B'' or ``D''.) As with Frobenius, type-C Lie poset algebras corresponding to height-$(0,1)$ posets -- but more generally -- the spectrum of $\mathfrak{g}_A(\mathcal{P})$ is binary when $\mathcal{P}$ is of height \textit{two or less}. More is known. If $\mathcal{P}$ is a \textit{toral} poset (see \textbf{\cite{binary}}) of arbitrary height for which $\mathfrak{g}_A(\mathcal{P})$ is Frobenius, then the spectrum of $\mathfrak{g}_A(\mathcal{P})$ is binary. We conjecture that having a binary spectrum is a property shared by all Frobenius Lie poset algebras in all of the classical types.
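Since the index formulas (\ref{index}) and (\ref{indexC}) depend only on edge and vertex counts together with the presence of an odd cycle, they are easy to evaluate on examples. The following Python sketch is purely illustrative (the graph encoding and all function names are ours, not from the paper); it detects odd cycles by treating a self-loop as one and otherwise testing bipartiteness with a BFS $2$-coloring:

```python
from collections import deque

def index_type_A(num_vertices, edges):
    # ind g_A(P) = |E| - |V| + 1, with |E|, |V| the edge and vertex
    # counts of the Hasse diagram of a connected height-one poset.
    return len(edges) - num_vertices + 1

def has_odd_cycle(num_vertices, edges):
    # A self-loop counts as an odd cycle; otherwise a graph contains
    # an odd cycle iff it is not bipartite (BFS 2-coloring test).
    if any(u == v for u, v in edges):
        return True
    adj = {v: [] for v in range(num_vertices)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for start in range(num_vertices):
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return True
    return False

def index_type_C(num_vertices, edges):
    # ind g_C(Q) = |E| - |V| + 2*delta_o over the relations graph RG(Q).
    delta_o = 1 if has_odd_cycle(num_vertices, edges) else 0
    return len(edges) - num_vertices + 2 * delta_o

# Triangle (odd cycle): 3 - 3 + 2 = 2.
print(index_type_C(3, [(0, 1), (1, 2), (0, 2)]))  # -> 2
# 4-cycle (bipartite): 4 - 4 + 0 = 0.
print(index_type_C(4, [(0, 1), (1, 2), (2, 3), (0, 3)]))  # -> 0
```

A relations graph for which `index_type_C` returns 0 corresponds to the Frobenius case of Theorem~\ref{thm:indform}.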
\section{Introduction} This paper continues to develop the line of ideas in \cite{Pspins, HEPS, AP, 1RSB}, which are all motivated by the M\'ezard-Parisi formula for the free energy in the diluted spin glass models originating in \cite{Mezard}. This formula is closely related to the original Parisi formula \cite{Parisi79, Parisi} for the free energy in the Sherrington-Kirkpatrick model \cite{SK}, but at the same time it is more complicated, because it involves a more complicated functional order parameter that encodes some very special structure of the distribution of all spins (or all multi-overlaps) rather than the distribution of the overlaps. Important progress was made by Franz and Leone in \cite{FL}, who showed that the M\'ezard-Parisi formula gives an upper bound on the free energy (see also \cite{PT}). The technical details of their work are very different but, clearly, inspired by the analogous earlier result of Guerra \cite{Guerra} for the Sherrington-Kirkpatrick model, which led to the first proof of the Parisi formula by Talagrand in \cite{TPF}. Another proof of the Parisi formula was given later in \cite{PPF}, based on the ultrametricity property for the overlaps proved in \cite{PUltra} using the Ghirlanda-Guerra identities \cite{GG} (the general idea that stability properties, such as the Aizenman-Contucci stochastic stability \cite{AC} or the Ghirlanda-Guerra identities, could imply ultrametricity is due to Arguin and Aizenman \cite{AA}). The proof there combined the cavity method in the form of the Aizenman-Sims-Starr representation \cite{AS2} with the description of the asymptotic structure of the overlap distribution that follows from ultrametricity and the Ghirlanda-Guerra identities \cite{GG}.
The M\'ezard-Parisi ansatz in the diluted models builds upon the ultrametric Parisi ansatz in the SK model, so it is very convenient that ultrametricity for the overlaps can be obtained just as easily in the diluted models as in the SK model, simply because the Ghirlanda-Guerra identities can be proved in these models in exactly the same way, by using a small perturbation of the Hamiltonian of the mixed $p$-spin type. However, as we mentioned above, the M\'ezard-Parisi ansatz describes the structure of the Gibbs measure in these models in much more detail, as we shall see below. Some progress toward explaining the features of this ansatz beyond ultrametricity was made in \cite{HEPS, AP}, where the so-called hierarchical exchangeability of the pure states and the corresponding Aldous-Hoover representation were proved. This representation looks very similar to what one expects in the M\'ezard-Parisi ansatz, but lacks some additional symmetry. One example where this additional symmetry can be proved rigorously was given in \cite{1RSB} for the $1$-RSB asymptotic Gibbs measures in the diluted $K$-spin model, where it was obtained as a consequence of the cavity equations for spin distributions developed rigorously in \cite{Pspins}. The main contribution of this paper is to show how this result can be extended to all finite-RSB asymptotic Gibbs measures for all diluted models. Namely, we will show that one can slightly modify the Hamiltonian in such a way that all finite-RSB asymptotic Gibbs measures satisfy the M\'ezard-Parisi ansatz as a consequence of the Ghirlanda-Guerra identities and the cavity equations. \section{Main results} Before we can state our main results, we will need to introduce the necessary notation and definitions, as well as review a number of previous results. Let $K\geq 1$ be an integer fixed throughout the paper.
A random clause with $K$ variables will be a random function $\theta(\sigma_{1},\ldots,\sigma_{K})$ on $\{-1,+1\}^K$ symmetric in its coordinates. The main examples we have in mind are the following. \medskip \noindent {\bf Example 1.} ($K$-spin model) Given an inverse temperature parameter $\beta>0$, the random function $\theta$ is given by $$ \theta(\sigma_1,\ldots,\sigma_K)= \beta g \sigma_1\cdots \sigma_K, $$ where $g$ is a random variable, typically standard Gaussian or Rademacher.\qed \medskip \noindent {\bf Example 2.} ($K$-sat model) Given an inverse temperature parameter $\beta>0$, the random function $\theta$ is given by $$ \theta(\sigma_1,\ldots,\sigma_K)=-\beta\prod_{j\leq K} \frac{1+J_j \sigma_j}{2}, $$ where $(J_j)_{j\geq 1}$ are i.i.d. Bernoulli random variables with $\mathbb{P}(J_j=\pm 1)=1/2.$\qed \medskip We will denote by $\theta_I$ independent copies of the function $\theta$ for various multi-indices $I$. Given a parameter $\lambda>0$, called the connectivity parameter, the Hamiltonian of a diluted model is defined by \begin{equation} H_N^{\mathrm{model}}(\sigma) = \sum_{k\leq \pi(\lambda N)} \theta_k(\sigma_{i_{1,k}},\ldots, \sigma_{i_{K,k}}) \label{Ham2} \end{equation} where $\pi(\lambda N)$ is a Poisson random variable with the mean $\lambda N$, and the coordinate indices $i_{j,k}$ are independent for different pairs $(j,k)$ and are chosen uniformly from $\{1,\ldots, N\}$. The main goal for us will be to compute the limit of the free energy \begin{equation} F_N^{\mathrm{model}} = \frac{1}{N}\mathbb{E} \log \sum_{\sigma\in \Sigma_N} \exp H_N^{\mathrm{model}}(\sigma) \end{equation} as $N$ goes to infinity. The formula for this limit originates in the work of M\'ezard and Parisi in \cite{Mezard}. To state the formula, we need to recall several definitions that will be used throughout the paper.
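For concreteness, here is a short Python sketch (all names are ours; a simulation aid only, not part of any proof) that samples the clause data of the Hamiltonian (\ref{Ham2}) for the $K$-sat model of Example 2 and evaluates $H_N^{\mathrm{model}}$ on a given configuration:

```python
import math
import random

def poisson(mean, rng):
    # Knuth's inversion method; adequate for moderate means.
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_ksat_hamiltonian(N, K, lam, beta, rng):
    # The number of clauses is Poisson(lam * N); each clause draws K
    # coordinate indices uniformly (with repetition) and i.i.d. signs J_j.
    clauses = [
        ([rng.randrange(N) for _ in range(K)],
         [rng.choice((-1, 1)) for _ in range(K)])
        for _ in range(poisson(lam * N, rng))
    ]

    def H(sigma):
        # Each clause contributes theta = -beta * prod_j (1 + J_j sigma_{i_j})/2.
        total = 0.0
        for idx, J in clauses:
            prod = 1.0
            for i, J_j in zip(idx, J):
                prod *= (1 + J_j * sigma[i]) / 2
            total += -beta * prod
        return total

    return H

rng = random.Random(0)
H = sample_ksat_hamiltonian(N=50, K=3, lam=2.0, beta=1.0, rng=rng)
sigma = [rng.choice((-1, 1)) for _ in range(50)]
print(H(sigma) <= 0)  # every K-sat clause term is nonpositive
```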
\medskip \noindent \textbf{Ruelle probability cascades (RPC, \cite{Ruelle}).} Given $r\geq 1$, consider an infinitary rooted tree of depth $r$ with the vertex set \begin{equation} {\cal A} = \mathbb{N}^0 \cup \mathbb{N} \cup \mathbb{N}^2 \cup \ldots \cup \mathbb{N}^r, \label{Atree} \end{equation} where $\mathbb{N}^0 = \{*\}$, $*$ is the root of the tree and each vertex $\alpha=(n_1,\ldots,n_p)\in \mathbb{N}^{p}$ for $p\leq r-1$ has children $$ \alpha n : = (n_1,\ldots,n_p,n) \in \mathbb{N}^{p+1} $$ for all $n\in \mathbb{N}$. Each vertex $\alpha$ is connected to the root $*$ by the path $$ * \to n_1 \to (n_1,n_2) \to\cdots\to (n_1,\ldots,n_p) = \alpha. $$ We will denote the set of vertices in this path by \begin{equation} p(\alpha) = \bigl\{*\, , n_1, (n_1,n_2),\ldots,(n_1,\ldots,n_p) \bigr\}. \label{pathtoleaf} \end{equation} We will denote by $|\alpha|$ the distance of $\alpha$ from the root, i.e. $p$ when $\alpha=(n_1,\ldots,n_p)$. We will write $\alpha \succeq \beta$ if $\beta \in p(\alpha)$ and $\alpha\succ \beta$ if, in addition, $\alpha\not =\beta$, in which case we will say that $\alpha$ is a descendant of $\beta$, and $\beta$ is an ancestor of $\alpha$. Notice that $\beta\in p(\alpha)$ if and only if $*\preceq \beta\preceq \alpha.$ The set of leaves $\mathbb{N}^r$ of ${\cal A}$ will sometimes be denoted by ${\cal L}({\cal A})$. For any $\alpha, \beta\in {\cal A}$, let \begin{equation} \alpha\wedge\beta := |p(\alpha) \cap p(\beta)| - 1 \label{wedge} \end{equation} be the number of common vertices (not counting the root $*$) in the paths from the root to the vertices $\alpha$ and $\beta$. In other words, $\alpha \wedge \beta$ is the distance of the lowest common ancestor of $\alpha$ and $\beta$ from the root. Let us consider parameters \begin{equation} 0= \zeta_{-1} <\zeta_0 <\ldots < \zeta_{r-1} <\zeta_r = 1 \label{zetas} \end{equation} that will appear later in the c.d.f. 
of the overlap in the case when it takes finitely many values (see (\ref{zetap}) below), which is the usual functional order parameter in the Parisi ansatz. For each $\alpha\in {\cal A}\setminus \mathbb{N}^r$, let $\Pi_\alpha$ be a Poisson process on $(0,\infty)$ with the mean measure $$\zeta_{p}x^{-1-\zeta_{p}}dx$$ with $p=|\alpha|$, and we assume that these processes are independent for all $\alpha$. Let us arrange all the points in $\Pi_\alpha$ in the decreasing order, \begin{equation} u_{\alpha 1} > u_{ \alpha 2} >\ldots >u_{\alpha n} > \ldots, \label{us} \end{equation} and enumerate them using the children $(\alpha n)_{n\geq 1}$ of the vertex $\alpha$. Given a vertex $\alpha\in {\cal A}\setminus \{*\}$ and the path $p(\alpha)$ in (\ref{pathtoleaf}), we define \begin{equation} w_\alpha = \prod_{\beta \in p(\alpha)} u_{\beta}, \label{ws} \end{equation} and for the leaf vertices $\alpha \in {\cal L}({\cal A}) = \mathbb{N}^r$ we define \begin{equation} v_\alpha = \frac{w_\alpha}{\sum_{\beta\in \mathbb{N}^r} w_\beta}. \label{vs} \end{equation} These are the weights of the Ruelle probability cascades. For other vertices $\alpha\in {\cal A}\setminus {\cal L}({\cal A})$ we define \begin{equation} v_\alpha = \sum_{\beta\in {\cal L}({\cal A}),\,\beta\succ \alpha} v_\beta. \label{vsall} \end{equation} This definition obviously implies that $v_\alpha = \sum_{n\geq 1} v_{\alpha n}$ when $|\alpha|<r$. Let us now rearrange the vertex labels so that the weights indexed by children will be decreasing. For each $\alpha\in {\cal A}\setminus \mathbb{N}^r$, let $\pi_\alpha: \mathbb{N} \to \mathbb{N}$ be a bijection such that the sequence $(v_{\alpha \pi_\alpha(n)})_{n\geq 1}$ is decreasing. Using these local rearrangements we define a global bijection $\pi: {\cal A}\to {\cal A}$ in a natural way, as follows. 
We let $\pi(*)=*$ and then define \begin{equation} \pi(\alpha n) = \pi(\alpha) \pi_{\pi(\alpha)}(n) \label{permute} \end{equation} recursively from the root to the leaves of the tree. Finally, we define \begin{equation} V_\alpha = v_{\pi(\alpha)} \ \mbox{ for all }\ \alpha\in {\cal A}. \label{Vs2} \end{equation} The weights (\ref{vs}) of the RPC will be accompanied by random fields indexed by $\mathbb{N}^r$ and generated along the tree ${\cal A}$ as follows. \medskip \noindent \textbf{Hierarchical random fields.} Let $(\omega_\alpha)_{\alpha\in {\cal A}}$ be i.i.d. random variables uniform on $[0,1]$. Given a function $h: [0,1]^r \to [-1,1]$, consider a random array indexed by $\alpha= (n_1,\ldots, n_r)\in \mathbb{N}^r$, \begin{equation} h_\alpha = h\bigl( (\omega_\beta)_{\beta\in p(\alpha)\setminus \{*\}}\bigr) = h\bigl(\omega_{n_1},\omega_{n_1 n_2},\ldots, \omega_{n_1\ldots n_r} \bigr). \label{MPfop} \end{equation} Note that, especially in subscripts or superscripts, we will write $n_1\ldots n_r$ instead of $(n_1,\ldots, n_r)$. We will also denote by $(\omega_\alpha^I)_{\alpha\in {\cal A}}$ and \begin{equation} h_\alpha^I = h\bigl( (\omega^I_\beta)_{\beta\in p(\alpha)\setminus \{*\}}\bigr) = h\bigl(\omega^I_{n_1},\omega^I_{n_1 n_2},\ldots, \omega^I_{n_1\ldots n_r} \bigr) \label{MPfopagain} \end{equation} copies of the above arrays that will be independent for all different multi-indices $I.$ The function $h$ above is the second, and more complex, functional order parameter that encodes the distribution of spins inside the pure states in the M\'ezard-Parisi ansatz, as we shall see below. This way of generating the array $(h_\alpha)$ using a function $h: [0,1]^r \to [-1,1]$ is very redundant in the sense that there are many choices of the function $h$ that will produce the same array in distribution.
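Although the cascade involves infinitely many children at every vertex, the weights (\ref{vs}) are easy to simulate after truncating to finitely many children per vertex. The Python sketch below is only a simulation aid (the truncation and all names are ours); it uses the standard fact that the decreasing points (\ref{us}) of a Poisson process with mean measure $\zeta_p x^{-1-\zeta_p}dx$ can be realized as $\Gamma_n^{-1/\zeta_p}$, where $\Gamma_1<\Gamma_2<\ldots$ are the arrival times of a unit-rate Poisson process:

```python
import random

def rpc_weights(zetas, branching, rng):
    # Truncated sample of the cascade weights v_alpha: `zetas` is
    # (zeta_0, ..., zeta_{r-1}), and each vertex keeps only its
    # `branching` largest children u_{alpha 1} > u_{alpha 2} > ...
    leaves = {(): 1.0}  # path -> running product w_alpha, from the root
    for p in range(len(zetas)):
        new = {}
        for path, w in leaves.items():
            gamma = 0.0
            for n in range(branching):
                gamma += rng.expovariate(1.0)   # n-th arrival time Gamma_n
                u = gamma ** (-1.0 / zetas[p])  # decreasing in n
                new[path + (n,)] = w * u
        leaves = new
    total = sum(leaves.values())
    return {path: w / total for path, w in leaves.items()}

rng = random.Random(1)
v = rpc_weights(zetas=(0.3, 0.7), branching=20, rng=rng)
print(len(v), abs(sum(v.values()) - 1.0) < 1e-9)  # 400 leaf weights, normalized
```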
However, if one prefers, there is a non-redundant (unique) way to encode an array of this type by a recursive tower of distributions on the set of distributions that is more common in the physics literature. \medskip \noindent \textbf{Extension of the definition of clause.} Let us extend the definition of the function $\theta$ on $\{-1,+1\}^K$ to $[-1,+1]^K$ as follows. Often we will need to average $\exp \theta(\sigma_1,\ldots,\sigma_K)$ over $\sigma_j \in \{-1,+1\}$ (or some subset of them) independently of each other, with some weights. If we know that the average of $\sigma_j$ is equal to $x_j\in [-1,+1]$ then the corresponding measure is given by $$ \mu_j({\varepsilon}) = \frac{1+x_j}{2} {\rm I}({\varepsilon}=1) + \frac{1-x_j}{2} {\rm I}({\varepsilon}=-1). $$ We would like to denote the average of $\exp \theta$ again by $\exp \theta$, which results in the definition \begin{equation} \exp \theta(x_1,\ldots,x_K) = \sum_{\sigma_1,\ldots,\sigma_K = \pm 1} \exp \theta(\sigma_1,\ldots,\sigma_K) \prod_{j\leq K}\mu_j(\sigma_j). \label{Deftheta} \end{equation} Here is how this general definition looks in the above two examples. In the first example of the $K$-spin model, using that $\sigma_1\cdots \sigma_K \in \{-1,+1\}$, we can write $$ \exp \theta(\sigma_1, \ldots, \sigma_K) = {\mbox{\rm ch}}(\beta g)\bigl(1+\mbox{th}(\beta g)\sigma_1\cdots \sigma_K\bigr) $$ and, clearly, after averaging, $$ \exp \theta(x_1,\ldots, x_K) = {\mbox{\rm ch}}(\beta g)\bigl(1+\mbox{th}(\beta g) x_1\cdots x_K\bigr). $$ In the second example of the $K$-sat model, using that $\prod_{j\leq K} (1+J_j \sigma_j)/2 \in\{0,1\}$, we can write $$ \exp \theta(\sigma_1, \ldots, \sigma_K) = 1+(e^{-\beta}-1) \prod_{j\leq K} \frac{1+J_j \sigma_j}{2} $$ and after averaging, $$ \exp \theta(x_1, \ldots, x_K) = 1+(e^{-\beta}-1) \prod_{j\leq K} \frac{1+J_j x_j}{2}.
$$ \medskip \noindent \textbf{The M\'ezard-Parisi formula.} Let $\pi(\lambda K)$ and $\pi(\lambda(K-1))$ be Poisson random variables with the means $\lambda K$ and $\lambda (K-1)$, respectively, and consider \begin{equation} A_\alpha({\varepsilon}) = \sum_{k\leq \pi( \lambda K)} \theta_{k}(h_\alpha^{1,k},\ldots,h_\alpha^{K-1,k},{\varepsilon}) \label{Aibef} \end{equation} for ${\varepsilon}\in\{-1,+1\}$ and \begin{equation} B_\alpha = \sum_{k\leq \pi(\lambda(K-1))} \theta_{k}(h_\alpha^{1,k},\ldots,h_\alpha^{K,k}). \label{Bef} \end{equation} Let ${\rm Av}$ denote the average over ${\varepsilon}=\pm 1$ and consider the following functional \begin{equation} {\cal P}(r,\zeta,h) = \log 2 + \mathbb{E} \log \sum_{\alpha\in\mathbb{N}^r} v_\alpha\, {\rm Av} \exp A_\alpha({\varepsilon}) - \mathbb{E}\log \sum_{\alpha\in\mathbb{N}^r} v_\alpha \exp B_\alpha \label{CalP} \end{equation} that depends on $r$, the parameters (\ref{zetas}) and the choice of the function $h$ in (\ref{MPfop}). Then the M\'ezard-Parisi ansatz predicts that \begin{eqnarray} \lim_{N\to\infty} F_N^{\mathrm{model}} = \inf_{r,\zeta,h} {\cal P}(r,\zeta,h), \label{FE} \end{eqnarray} at least in the above two examples of the $K$-spin and $K$-sat models. We will see below that all the parameters have a natural interpretation in terms of the structure of the Gibbs measure. \medskip \noindent \textbf{Franz-Leone upper bound.} As we mentioned in the introduction, it was proved in \cite{FL} that \begin{eqnarray} F_N^{\mathrm{model}} \leq \inf_{r,\zeta,h} {\cal P}(r,\zeta,h) \label{FL} \end{eqnarray} for all $N$, in the $K$-spin and $K$-sat models for even $K$. Their proof was rewritten in a slightly different language in \cite{PT} to make it technically simpler, and it was observed by Talagrand in \cite{SG} that the proof actually works for all $K\geq 1$ in the $K$-sat model.
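Returning to the definition (\ref{Deftheta}), it can be checked numerically against the closed form derived above for the $K$-sat clause by brute-force averaging over all $2^K$ spin configurations. A small Python sketch (all names ours, for illustration only; the particular values of $\beta$, $J$ and $x$ are arbitrary):

```python
import itertools
import math

def avg_exp_theta(exp_theta, xs):
    # Average exp(theta(sigma)) over independent sigma_j = +-1 with
    # means x_j, exactly as in the definition of exp theta(x_1,...,x_K).
    total = 0.0
    for sigma in itertools.product((-1, 1), repeat=len(xs)):
        weight = 1.0
        for s, x in zip(sigma, xs):
            weight *= (1 + x) / 2 if s == 1 else (1 - x) / 2
        total += exp_theta(sigma) * weight
    return total

beta, J = 1.3, (1, -1, 1)

def exp_theta_ksat(sigma):
    # exp theta for the K-sat clause with fixed signs J.
    prod = 1.0
    for s, j in zip(sigma, J):
        prod *= (1 + j * s) / 2
    return math.exp(-beta * prod)

xs = (0.4, -0.2, 0.9)
lhs = avg_exp_theta(exp_theta_ksat, xs)
rhs = 1 + (math.exp(-beta) - 1) * math.prod((1 + j * x) / 2 for j, x in zip(J, xs))
print(abs(lhs - rhs) < 1e-12)  # the closed form matches the brute-force average
```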
As a natural starting point for proving the matching lower bound, a strengthened analogue of the Aizenman-Sims-Starr representation \cite{AS2} for diluted models was obtained in \cite{Pspins} in the language of the so-called asymptotic Gibbs measures. We will state this representation in Theorem \ref{Th1} below for a slightly modified Hamiltonian, while also ensuring that the asymptotic Gibbs measures satisfy the Ghirlanda-Guerra identities. To state this theorem, we need to recall a few more definitions. \medskip \noindent \textbf{Asymptotic Gibbs measures.} The Gibbs (probability) measure corresponding to a Hamiltonian $H_N(\sigma)$ on $\{-1,+1\}^N$ is defined by \begin{equation} G_N(\sigma) = \frac{\exp H_N(\sigma)}{Z_N}, \label{GibbsN} \end{equation} where the normalizing factor $Z_N=\sum_{\sigma} \exp H_N(\sigma)$ is called the partition function. To define the notion of the asymptotic Gibbs measure, we will assume that the distribution of the process $(H_N(\sigma))_{\sigma\in\{-1,+1\}^N}$ is invariant under the permutations of the coordinates of $\sigma$; this property is called symmetry between sites, and it clearly holds in all the models we consider. Let $(\sigma^\ell)_{\ell\geq 1}$ be an i.i.d. sequence of replicas from the Gibbs measure $G_N$ and let $\mu_N$ be the joint distribution of the array $(\sigma_i^\ell)_{1\leq i\leq N, \ell\geq 1}$ of all spins for all replicas under the average product Gibbs measure $\mathbb{E} G_N^{\otimes \infty}$, \begin{equation} \mu_N\Bigl( \bigl\{\sigma_i^\ell = a_i^\ell \ :\ 1\leq i\leq N, 1\leq \ell \leq n \bigr\} \Bigr) = \mathbb{E} G_N^{\otimes n}\Bigl( \bigl\{\sigma_i^\ell = a_i^\ell \ :\ 1\leq i\leq N, 1\leq \ell \leq n \bigr\} \Bigr) \label{muN} \end{equation} for any $n\geq 1$ and any $a_i^\ell \in\{-1,+1\}$.
We extend $\mu_N$ to a distribution on $\{-1,+1\}^{\mathbb{N}\times\mathbb{N}}$ simply by setting $\sigma_i^\ell=1$ for $i\geq N+1.$ Let ${\cal M}$ denote the set of all possible limits of $(\mu_N)$ over subsequences with respect to the weak convergence of measures on the compact product space $\{-1,+1\}^{\mathbb{N}\times\mathbb{N}}$. Because of the symmetry between sites, all measures in ${\cal M}$ inherit from $\mu_N$ the invariance under the permutation of both spin and replica indices $i$ and $\ell.$ By the Aldous-Hoover representation \cite{Aldous}, \cite{Hoover2} for such distributions, for any $\mu\in{\cal M}$, there exists a measurable function $s:[0,1]^4\to\{-1,+1\}$ such that $\mu$ is the distribution of the array \begin{equation} s_i^\ell=s(w,u_\ell,v_i,x_{i,\ell}), \label{sigma} \end{equation} where the random variables $w,(u_\ell), (v_i), (x_{i,\ell})$ are i.i.d. uniform on $[0,1]$. The function $s$ is defined uniquely for a given $\mu\in {\cal M}$ up to measure-preserving transformations (Theorem 2.1 in \cite{Kallenberg}), so we can identify the distribution $\mu$ of the array $(s_i^\ell)$ with $s$. Since $s$ takes values in $\{-1,+1\}$, the distribution $\mu$ can be encoded by the function \begin{equation} {\xoverline{s}\hspace{0.1mm}}(w,u,v) = \mathbb{E}_x\, s(w,u,v,x), \label{fop} \end{equation} where $\mathbb{E}_x$ is the expectation in $x$ only. The last coordinate $x_{i,\ell}$ in (\ref{sigma}) is independent for all pairs $(i,\ell)$, so it plays the role of `flipping a coin' with the expected value ${\xoverline{s}\hspace{0.1mm}}(w,u_\ell,v_i)$. Therefore, given the function (\ref{fop}), we can redefine $s$ by \begin{equation} s(w,u_\ell,v_i,x_{i,\ell}) = 2 {\hspace{0.3mm}{\rm I}\hspace{0.1mm}} \Bigl(x_{i,\ell} \leq \frac{1+ {\xoverline{s}\hspace{0.1mm}}(w,u_\ell,v_i) }{2}\Bigr) -1 \label{sigmatos} \end{equation} without affecting the distribution of the array $(s_i^\ell)$.
We can also view the function ${\xoverline{s}\hspace{0.1mm}}$ in (\ref{fop}) in a more geometric way as a random measure on the space of functions, as follows. Let $du$ and $dv$ denote the Lebesgue measure on $[0,1]$ and let us define a (random) probability measure \begin{equation} G = G_w = du \circ \bigl(u\to {\xoverline{s}\hspace{0.1mm}}(w,u,\cdot)\bigr)^{-1} \label{Gibbsw} \end{equation} on the space of functions of $v\in [0,1]$, \begin{equation} H = \bigl\{ \|{\xoverline{s}\hspace{0.1mm}}\|_\infty \leq 1 \bigr\}, \label{spaceH} \end{equation} equipped with the topology of $L^2([0,1], dv)$. We will denote the scalar product in $L^2([0,1], dv)$ by $h^1\cdot h^2$ and the corresponding $L^2$ norm by $\|h\|$. The random measure $G$ in (\ref{Gibbsw}) is called an \emph{asymptotic Gibbs measure}. The whole process of generating spins can be broken into several steps: \begin{enumerate} \item[(i)] generate the Gibbs measure $G=G_w$ using the uniform random variable $w$; \item[(ii)] consider an i.i.d. sequence ${\xoverline{s}\hspace{0.1mm}}^{\ell} = {\xoverline{s}\hspace{0.1mm}}(w,u_{\ell},\cdot)$ of replicas from $G$, which are functions in $H$; \item[(iii)] plug in i.i.d. uniform random variables $(v_i)_{i\geq 1}$ to obtain the array ${\xoverline{s}\hspace{0.1mm}}^\ell(v_i) = {\xoverline{s}\hspace{0.1mm}}(w,u_\ell,v_i)$; \item[(iv)] finally, use this array to generate spins as in (\ref{sigmatos}). \end{enumerate} For a different approach to this definition via exchangeable random measures see also \cite{Austin}. From now on, we will keep the dependence of the random measure $G$ on $w$ implicit, denote i.i.d.
replicas from $G$ by $({\xoverline{s}\hspace{0.1mm}}^\ell)_{\ell\geq 1}$, which are now functions on $[0,1]$, and denote the sequence of spins (\ref{sigmatos}) corresponding to the replica ${\xoverline{s}\hspace{0.1mm}}^\ell$ by \begin{equation} S({\xoverline{s}\hspace{0.1mm}}^\ell) = \Bigl( 2 {\hspace{0.3mm}{\rm I}\hspace{0.1mm}}\Bigl(x_{i,\ell} \leq \frac{1+ {\xoverline{s}\hspace{0.1mm}}^\ell(v_i) }{2}\Bigr) -1 \Bigr)_{i\geq 1} \in \{-1,+1\}^\mathbb{N}. \label{SpinsEll} \end{equation} Because of the geometric nature of the asymptotic Gibbs measures $G$ as measures on the subset of $L^2([0,1],dv)$, the distance and scalar product between replicas play a crucial role in the description of the structure of $G$. We will denote the scalar product between replicas ${\xoverline{s}\hspace{0.1mm}}^\ell$ and ${\xoverline{s}\hspace{0.1mm}}^{\ell'}$ by $R_{\ell,\ell'} = {\xoverline{s}\hspace{0.1mm}}^\ell\cdot {\xoverline{s}\hspace{0.1mm}}^{\ell'}$, which is more commonly called \emph{the overlap} of ${\xoverline{s}\hspace{0.1mm}}^\ell$ and ${\xoverline{s}\hspace{0.1mm}}^{\ell'}$. Let us notice that the overlap $R_{\ell,\ell'}$ is a function of the spin sequences (\ref{SpinsEll}) generated by ${\xoverline{s}\hspace{0.1mm}}^\ell$ and ${\xoverline{s}\hspace{0.1mm}}^{\ell'}$ since, by the strong law of large numbers, \begin{equation} R_{\ell,\ell'} = \int \! {\xoverline{s}\hspace{0.1mm}}^\ell(v) {\xoverline{s}\hspace{0.1mm}}^{\ell'}(v)\, dv = \lim_{j\to\infty} \frac{1}{j}\sum_{i=1}^j S\bigl({\xoverline{s}\hspace{0.1mm}}^\ell\bigr)_i \,S\bigl({\xoverline{s}\hspace{0.1mm}}^{\ell'}\bigr)_i \label{overlapinfty} \end{equation} almost surely. \medskip \noindent \textbf{The Ghirlanda-Guerra identities.} Given $n\geq 1$ and replicas ${\xoverline{s}\hspace{0.1mm}}^1,\ldots, {\xoverline{s}\hspace{0.1mm}}^n$, we will denote the array of spins (\ref{SpinsEll}) corresponding to these replicas by \begin{equation} S^n = \bigl(S({\xoverline{s}\hspace{0.1mm}}^\ell) \bigr)_{1\leq \ell\leq n}.
\label{Sn} \end{equation} We will denote by $\langle\,\cdot\,\rangle$ the average over replicas ${\xoverline{s}\hspace{0.1mm}}^\ell$ with respect to $G^{\otimes \infty}$. In the interpretation of the step (ii) above, this is the same as averaging over $(u_\ell)_{\ell\geq 1}$ in the sequence $({\xoverline{s}\hspace{0.1mm}}(w,u_{\ell},\cdot))_{\ell\geq 1}$. Let us denote by $\mathbb{E}$ the expectation with respect to random variables $w$, $(v_i)$ and $x_{i,\ell}$. We will say that the measure $G$ on $H$ satisfies the Ghirlanda-Guerra identities if for any $n\geq 2,$ any bounded measurable function $f$ of the spins $S^n$ in (\ref{Sn}) and any bounded measurable function $\psi$ of one overlap, \begin{equation} \mathbb{E} \bigl\langle f(S^n)\psi(R_{1,n+1}) \bigr\rangle = \frac{1}{n}\hspace{0.3mm} \mathbb{E}\bigl\langle f(S^n) \bigr\rangle \hspace{0.3mm} \mathbb{E}\bigr\langle \psi(R_{1,2})\bigr\rangle + \frac{1}{n}\sum_{\ell=2}^{n}\mathbb{E}\bigl\langle f(S^n)\psi(R_{1,\ell})\bigr\rangle. \label{GG} \end{equation} Another way to express the Ghirlanda-Guerra identities is to say that, conditionally on $S^n$, the law of $R_{1,n+1}$ is given by the mixture \begin{equation} \frac{1}{n} \hspace{0.3mm}\zeta + \frac{1}{n}\hspace{0.3mm} \sum_{\ell=2}^n \delta_{R_{1,\ell}}, \label{GGgen} \end{equation} where $\zeta$ denotes the distribution of $R_{1,2}$ under the measure $\mathbb{E} G^{\otimes 2}$, \begin{equation} \zeta(\ \cdot\ ) = \mathbb{E} G^{\otimes 2}\bigl(R_{1,2}\in\ \cdot\ \bigr). \label{zeta} \end{equation} The identities (\ref{GG}) are usually proved for the function $f$ of the overlaps $(R_{\ell,\ell'})_{\ell,\ell'\leq n}$ instead of $S^n$, but exactly the same proof yields (\ref{GG}) as well (see e.g. Section 3.2 in \cite{SKmodel}). 
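The coin-flip rule (\ref{sigmatos}) and the law-of-large-numbers identity (\ref{overlapinfty}) are also easy to see in simulation. In the Python sketch below (the functions playing the role of the two replicas are chosen by us purely for illustration), the empirical average of spin products recovers the scalar product $\int_0^1 v\cdot v\,dv=1/3$ up to Monte Carlo error:

```python
import random

def spins(sbar, vs, rng):
    # Coin-flip rule: S_i = 2*I(x_i <= (1 + sbar(v_i))/2) - 1, with
    # x_i i.i.d. uniform on [0,1], so E[S_i | v_i] = sbar(v_i).
    return [2 * (rng.random() <= (1 + sbar(v)) / 2) - 1 for v in vs]

rng = random.Random(2)
n = 200_000
vs = [rng.random() for _ in range(n)]   # site variables v_i shared by replicas
s1 = spins(lambda v: v, vs, rng)        # replica 1 with sbar^1(v) = v
s2 = spins(lambda v: v, vs, rng)        # replica 2 with sbar^2(v) = v
overlap = sum(a * b for a, b in zip(s1, s2)) / n
print(abs(overlap - 1 / 3) < 0.02)  # scalar product int_0^1 v*v dv = 1/3
```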
It is well known that these identities arise from the Gaussian integration by parts of a certain Gaussian perturbation Hamiltonian against the test function $f$, and one is free to choose this function to depend on all spins and not only overlaps. \medskip \noindent \textbf{Modification of the model Hamiltonian.} Next, we will describe a crucial new ingredient that will help us classify all finite-RSB asymptotic Gibbs measures. Let us consider a sequence $(g^d)_{d\geq 1}$ of independent Gaussian random variables satisfying \begin{equation} \mathbb{E} (g^d)^2 \leq 2^{-d}{\varepsilon}^{\mathrm{pert}}, \label{gsvar} \end{equation} where ${\varepsilon}^{\mathrm{pert}}$ is a fixed small parameter, and consider the following random clauses of $d$ variables, $$ \theta^d(\sigma_1,\ldots,\sigma_d) = g^d \frac{1+\sigma_{1}}{2}\cdots \frac{1+\sigma_{d}}{2}. $$ We will denote by $g_{I}^d$ and $\theta_{I}^d$ independent copies over different multi-indices $I$. We will define a perturbation Hamiltonian by \begin{equation} H_N^{\mathrm{pert}}(\sigma)=\sum_{i\leq N} \theta_{i}^1(\sigma_i) +\sum_{d\geq 2} \sum_{k\leq \pi_d(N)}\theta_{k}^d(\sigma_{i_{1,d,k}},\ldots, \sigma_{i_{d,d,k}}), \label{HNpertmain} \end{equation} where $\pi_d(N)$ are Poisson random variables with the mean $N$, independent over $d\geq 2$, and $i_{I}$ are chosen uniformly from $\{1,\ldots, N\}$ independently for different indices $I$. Notice that, because of (\ref{gsvar}), this Hamiltonian is well defined. We will now work with the new Hamiltonian given by \begin{equation} H_N(\sigma) = H_N^{\mathrm{model}}(\sigma) + H_N^{\mathrm{pert}}(\sigma). \label{HamMain} \end{equation} Notice that the second term $H_N^{\mathrm{pert}}$ is of the same order as the model Hamiltonian, but its size is controlled by the parameter ${\varepsilon}^{\mathrm{pert}}$.
For example, if we consider the free energy \begin{equation} F_N = \frac{1}{N}\mathbb{E} \log \sum_{\sigma\in \Sigma_N} \exp H_N(\sigma) \label{FNmod} \end{equation} corresponding to this modified Hamiltonian, letting ${\varepsilon}^\mathrm{pert}$ go to zero will give the free energy of the original model. Finally, as in (\ref{Deftheta}), let us extend the definition of $\theta^d$ by \begin{equation} \exp \theta^d(x_1,\ldots,x_d) = 1+(e^{g^d}-1) \frac{1+x_{1}}{2}\cdots \frac{1+x_{d}}{2} \label{thetadetx} \end{equation} to $[-1,+1]^d$ from $\{-1,+1\}^d.$ \medskip \noindent \textbf{The cavity equations for the modified Hamiltonian.} Let us now recall the cavity equations for the distribution of spins proved in \cite{Pspins}. These equations will be slightly modified here to take into account that the perturbation Hamiltonian $H_N^{\mathrm{pert}}(\sigma)$ will now also contribute to the cavity fields. We will need to pick various sets of different spin coordinates in the array $(s_i^\ell)$ in (\ref{sigma}), and it is inconvenient to enumerate them using one index $i\geq 1$. Instead, we will use multi-indices $I= (i_1,\ldots, i_n)$ for $n\geq 1$ and $i_1,\ldots, i_n\geq 1$ and consider \begin{equation} s_{I}^\ell = s(w,u_\ell, v_{I},x_{I,\ell}), \end{equation} where all the coordinates are uniform on $[0,1]$ and independent over different sets of indices. Similarly, we will denote \begin{equation} {\xoverline{s}\hspace{0.1mm}}_{I}^\ell = {\xoverline{s}\hspace{0.1mm}}^{\ell}(v_I) = {\xoverline{s}\hspace{0.1mm}}(w,u_\ell, v_{I}). \end{equation} For convenience, below we will separate averaging over different replicas $\ell$, so when we average over one replica we will drop the superscript $\ell$ and simply write \begin{equation} {\xoverline{s}\hspace{0.1mm}}_{I} = {\xoverline{s}\hspace{0.1mm}}(v_I) = {\xoverline{s}\hspace{0.1mm}}(w,u, v_{I}). 
\label{sG} \end{equation} Now, take arbitrary integers $n, m, q\geq 1$ such that $n\leq m.$ The index $q$ will represent the number of replicas selected, $m$ will be the total number of spin coordinates and $n$ will be the number of cavity coordinates. For each replica index $\ell\leq q$ we consider an arbitrary subset of coordinates $C_\ell\subseteq \{1,\ldots, m\}$ and split them into cavity and non-cavity coordinates, \begin{equation} C_\ell^1 = C_\ell\cap \{1,\ldots, n\},\,\,\, C_\ell^2=C_\ell\cap \{n+1,\ldots,m\}. \label{C12} \end{equation} The following quantities represent the $i$th coordinate cavity field of the modified Hamiltonian (\ref{HamMain}) in the thermodynamic limit, \begin{align} A_i({\varepsilon}) =& \sum_{k\leq \pi_i(\lambda K)} \theta_{k,i}( {\xoverline{s}\hspace{0.1mm}}_{1,k,i}, \ldots, {\xoverline{s}\hspace{0.1mm}}_{K-1,k,i}, {\varepsilon}) \nonumber \\ & + \theta_i^1({\varepsilon}) +\sum_{d\geq 2} \sum_{k\leq \pi_i(d)}\theta_{k,i}^d({\xoverline{s}\hspace{0.1mm}}_{{1,d,k,i}},\ldots, {\xoverline{s}\hspace{0.1mm}}_{{d-1,d,k,i}},{\varepsilon}), \label{Ai} \end{align} where $\pi_i(d)$ and $\pi_i(\lambda K)$ are Poisson random variables with the means $d$ and $\lambda K$, independent of each other and independent over $d\geq 2$ and $i\geq 1$. Compared to \cite{Pspins}, now we have additional terms in the second line in (\ref{Ai}) coming from the perturbation Hamiltonian (\ref{HNpertmain}). Next, let us denote \begin{equation} {A}_i = \log {\rm Av} \exp {A}_i({\varepsilon}) \ \mbox{ and }\ \xi_i = \frac{{\rm Av} {\varepsilon} \exp {A}_i({\varepsilon}) }{\exp {A}_i}, \end{equation} where ${\rm Av}$ denotes the uniform average over ${\varepsilon} = \pm 1$. Recall that $\langle\,\cdot\,\rangle$ denotes the average with respect to the asymptotic Gibbs measure $G$.
Define \begin{equation} {U}_\ell = \Bigl\langle \prod_{i\in C_\ell^1} \xi_i \prod_{i\in C_\ell^2} {\xoverline{s}\hspace{0.1mm}}_i \,\exp \sum_{i\leq n} {A}_i \Bigr\rangle \ \mbox{ and } \ {V} =\Bigl\langle \exp \sum_{i\leq n} {A}_i \Bigr\rangle. \label{Ulbar2} \end{equation} Then we will say that an asymptotic Gibbs measure $G$ satisfies the cavity equations if \begin{equation} \mathbb{E} \prod_{\ell\leq q} \Bigl\langle\, \prod_{i\in C_\ell} {\xoverline{s}\hspace{0.1mm}}_i \Bigr\rangle =\mathbb{E} \prod_{\ell\leq q} \frac{U_\ell}{V} \label{SC} \end{equation} for all choices of $n,m,q$ and sets $C_\ell^1, C_\ell^2$. \medskip \noindent \textbf{The Aizenman-Sims-Starr type lower bound.} Consider a random measure $G$ on $H$ in (\ref{spaceH}) and let ${\xoverline{s}\hspace{0.1mm}}_I$ be generated by a replica ${\xoverline{s}\hspace{0.1mm}}$ from this measure as in (\ref{sG}). From now on we will denote by $\pi(c)$ a Poisson random variable with the mean $c$ and we will assume that different appearances of these in the same equation are independent of each other and all other random variables. This means that if we write $\pi(a)$ and $\pi(b)$, we assume them to be independent even if $a$ happens to be equal to $b$. Consider \begin{align} A({\varepsilon}) =& \sum_{k\leq \pi(\lambda K)} \theta_{k}( {\xoverline{s}\hspace{0.1mm}}_{1,k}, \ldots, {\xoverline{s}\hspace{0.1mm}}_{K-1,k}, {\varepsilon}) \nonumber \\ & + \theta^1({\varepsilon}) +\sum_{d\geq 2} \sum_{k\leq \pi(d)}\theta_{k}^d({\xoverline{s}\hspace{0.1mm}}_{{1,d,k}},\ldots, {\xoverline{s}\hspace{0.1mm}}_{{d-1,d,k}},{\varepsilon}), \label{Aiag} \end{align} for ${\varepsilon}\in\{-1,+1\}$ and \begin{align} B =& \sum_{k\leq \pi(\lambda (K-1))} \theta_{k}( {\xoverline{s}\hspace{0.1mm}}_{1,k}, \ldots, {\xoverline{s}\hspace{0.1mm}}_{K,k}) \nonumber \\ & +\sum_{d\geq 2} \sum_{k\leq \pi(d-1)}\theta_{k}^d({\xoverline{s}\hspace{0.1mm}}_{{1,d,k}},\ldots, {\xoverline{s}\hspace{0.1mm}}_{{d,d,k}}).
\label{Bag} \end{align} Again, compared to \cite{Pspins}, we have additional terms in the second line in (\ref{Aiag}) and (\ref{Bag}) coming from the perturbation Hamiltonian (\ref{HNpertmain}). Consider the following functional \begin{equation} {\cal P}(G) = \log 2 + \mathbb{E} \log \Bigl\langle {\rm Av} \exp A({\varepsilon}) \Bigr\rangle - \mathbb{E}\log \Bigl\langle \exp B \Bigr\rangle. \label{PPG} \end{equation} The following is a slight modification of the (lower bound part of the) main result in \cite{Pspins} in the setting of the diluted models. \begin{theorem}\label{Th1} The lower limit of the free energy in (\ref{FNmod}) satisfies \begin{equation} \liminf_{N\to \infty} F_N \geq \inf_G {\cal P}(G), \label{PPGeq} \end{equation} where the infimum is taken over random measures $G$ on $H$ that satisfy the Ghirlanda-Guerra identities (\ref{GG}) and the cavity equations (\ref{SC}). \end{theorem} We will call the measures $G$ that appear in this theorem asymptotic Gibbs measures, because that is exactly how they arise in \cite{Pspins}. The main difference from \cite{Pspins} is that we also include the requirement that the measures $G$ satisfy the Ghirlanda-Guerra identities in addition to the cavity equations. This can be ensured in exactly the same way as in the Sherrington-Kirkpatrick model by way of another small perturbation of the Hamiltonian (see e.g. \cite{HEPS}, where this was explained for the $K$-sat model). We are not going to prove Theorem \ref{Th1} in this paper, because it does not require any new ideas which are not already explained in \cite{Pspins, HEPS, 1RSB}, and the main reason we stated it here is to provide the motivation for our main result below. Of course, the proof involves some technical modifications to take into account the presence of the new perturbation term (\ref{HNpertmain}), but these are not difficult. 
Instead, we will focus on the main new idea and the main new contribution of the paper, which is describing the structure of measures $G$ that satisfy the Ghirlanda-Guerra identities and the cavity equations in the case when the overlap $R_{1,2} = {\xoverline{s}\hspace{0.1mm}} \cdot {\xoverline{s}\hspace{0.1mm}} '$ of any two points ${\xoverline{s}\hspace{0.1mm}}$ and ${\xoverline{s}\hspace{0.1mm}}'$ in the support of $G$ takes finitely many, say, $r+1$ values, \begin{equation} 0\leq q_0< q_1<\ldots< q_r \leq 1, \label{finiteoverlap} \end{equation} for any $r\geq 1$ -- the so-called $r$-step replica symmetry breaking (or $r$-RSB) case. To state the main result, let us first recall several known consequences of the Ghirlanda-Guerra identities. \medskip \noindent \textbf{Consequences of the Ghirlanda-Guerra identities.} By Talagrand's positivity principle (see \cite{SG, SKmodel}), if the Ghirlanda-Guerra identities hold then the overlap can take only non-negative values, so the fact that the values in (\ref{finiteoverlap}) are between $0$ and $1$ is not a constraint. Another consequence of the Ghirlanda-Guerra identities (Theorem 2.15 in \cite{SKmodel}) is that with probability one the random measure $G$ is concentrated on the sphere of radius $\sqrt{q_r}$, i.e. $G(\|{\xoverline{s}\hspace{0.1mm}}\|^2 = q_r)=1.$ Since we assume that the overlap takes finitely many values, $G$ is also purely atomic. Finally (see \cite{PUltra} or Theorem 2.14 in \cite{SKmodel}), with probability one the support of $G$ is ultrametric, $ G^{\otimes 3}(R_{2,3} \geq \min(R_{1,2},R_{1,3}))=1. $ By ultrametricity, for any $q_p$, the relation defined by \begin{equation} {\xoverline{s}\hspace{0.1mm}}\sim_{q_p} {\xoverline{s}\hspace{0.1mm}}' \Longleftrightarrow {\xoverline{s}\hspace{0.1mm}}\cdot{\xoverline{s}\hspace{0.1mm}}' \geq q_p \label{qclusters} \end{equation} is an equivalence relation on the support of $G$. We will call these $\sim_q$ equivalence clusters simply $q$-clusters.
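These structural facts are easy to visualize on a finite caricature. In the sketch below (entirely illustrative, with hypothetical values of $r$, the overlaps $q_p$ and the branching), configurations are leaves of a truncated tree, the overlap of two leaves is $q_{\alpha\wedge\beta}$ where $\alpha\wedge\beta$ is the depth of their closest common ancestor, and the ultrametric inequality holds for every triple:

```python
import itertools
import random

# Purely illustrative parameters: r = 3 levels, overlaps q_0 < q_1 < q_2 < q_3.
r, branching = 3, 4
q = [0.0, 0.2, 0.5, 0.9]

random.seed(0)
# A sample of leaves of the truncated tree {0, ..., branching-1}^r.
leaves = [tuple(random.randrange(branching) for _ in range(r)) for _ in range(30)]

def wedge(a, b):
    """Depth of the closest common ancestor of two leaves."""
    d = 0
    while d < r and a[d] == b[d]:
        d += 1
    return d

def overlap(a, b):
    return q[wedge(a, b)]

# Ultrametricity: R_{2,3} >= min(R_{1,2}, R_{1,3}) for every triple of leaves.
ultrametric = all(
    overlap(b, c) >= min(overlap(a, b), overlap(a, c))
    for a, b, c in itertools.product(leaves, repeat=3)
)
```

The inequality is automatic here because the common prefix of $b$ and $c$ is at least as long as the shorter of the common prefixes of $(a,b)$ and $(a,c)$, which is exactly the tree picture behind the $q$-clusters.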
Let us now enumerate all the $q_p$-clusters defined by (\ref{qclusters}) according to Gibbs' weights as follows. Let $H_{*}$ be the entire support of $G$ so that $V_* = G(H_*) =1$. Next, the support is split into $q_1$-clusters $(H_n)_{n\geq 1}$, which are then enumerated in the decreasing order of their weights $V_n = G(H_n)$, \begin{equation} V_1 > V_2 > \ldots > V_n > \ldots > 0. \label{purelabelsfirst} \end{equation} We then continue recursively over $p\leq r-1$ and enumerate the $q_{p+1}$-subclusters $(H_{\alpha n})_{n\geq 1}$ of a cluster $H_\alpha$ for $\alpha\in \mathbb{N}^p$ in the decreasing order of their weights $V_{\alpha n} = G(H_{\alpha n})$, \begin{equation} V_{\alpha 1} > V_{\alpha 2} > \ldots > V_{\alpha n} > \ldots > 0. \label{purelabels} \end{equation} Thus, the weights $(V_\alpha)_{\alpha\in {\cal A}}$ of all these clusters are enumerated by the vertices of the tree ${\cal A}$ in (\ref{Atree}). It is not a coincidence that we used the same notation as in (\ref{Vs2}). It is another well-known consequence of the Ghirlanda-Guerra identities that the distribution of these weights coincides with the reordering of the weights of the Ruelle probability cascades as in (\ref{Vs2}) with the parameters (\ref{zetas}) given by \begin{equation} \zeta_p = \mathbb{E} G^{\otimes 2}\bigl( R_{1,2} \leq q_p \bigr) \label{zetap} \end{equation} for $p=0,\ldots,r.$ The $q_r$-clusters are the points of the support of $G$ -- these are called pure states. They were enumerated by $\alpha\in\mathbb{N}^r$ and, if we denote them by ${\xoverline{s}\hspace{0.1mm}}_\alpha$, \begin{equation} G({\xoverline{s}\hspace{0.1mm}}_\alpha) = V_\alpha \,\,\mbox{ for }\,\, \alpha\in \mathbb{N}^r.
\label{Gdiscrete} \end{equation} Recall that we generate the array ${\xoverline{s}\hspace{0.1mm}}_i^\ell$ (or ${\xoverline{s}\hspace{0.1mm}}_I^\ell$ for general index $I$) by first sampling replicas ${\xoverline{s}\hspace{0.1mm}}^\ell$ from the measure $G$ (which are functions on $[0,1]$) and then plugging in i.i.d. uniform random variables $v_i$, i.e. ${\xoverline{s}\hspace{0.1mm}}_i^\ell = {\xoverline{s}\hspace{0.1mm}}^\ell(v_i)$. In the discrete setting (\ref{Gdiscrete}), this is equivalent to sampling $\alpha$ according to the weights $V_\alpha$ and then plugging in $v_i$ into ${\xoverline{s}\hspace{0.1mm}}_\alpha,$ i.e. ${\xoverline{s}\hspace{0.1mm}}_\alpha(v_i)$. Therefore, in order to describe the distribution of the array $({\xoverline{s}\hspace{0.1mm}}^\ell(v_i))_{i,\ell\geq 1}$ it is sufficient to describe the joint distribution of the arrays $$ \bigl(V_{\alpha}\bigr)_{\alpha\in\mathbb{N}^r} \,\,\mbox{ and }\,\, \bigl({\xoverline{s}\hspace{0.1mm}}_{\alpha}(v_i)\bigr)_{i\geq 1, \alpha\in\mathbb{N}^r}. $$ In addition to the fact that $(V_\alpha)$ corresponds to some reordering of weights of the Ruelle probability cascades, it was proved in \cite{AP, HEPS} that if the measure $G$ satisfies the Ghirlanda-Guerra identities then (see Theorem $1$ and equations (36) and (37) in \cite{HEPS}): \begin{enumerate} \item[(i)] the arrays $(V_{\alpha})_{\alpha\in\mathbb{N}^r}$ and $({\xoverline{s}\hspace{0.1mm}}_{\alpha}(v_i))_{i\geq 1, \alpha\in\mathbb{N}^r}$ are independent; \item[(ii)] there exists a function $h:[0,1]^{2(r+1)}\to[-1,1]$ such that \begin{equation} \Bigl({\xoverline{s}\hspace{0.1mm}}_\alpha(v_i) \Bigr)_{i\geq 1, \alpha\in\mathbb{N}^r} \, \stackrel{d}{=}\, \Bigl( h\bigl( (\omega_\beta)_{\beta\in p(\alpha)}, (\omega_\beta^i)_{\beta\in p(\alpha)} \bigr) \Bigr)_{i\geq 1,\alpha\in\mathbb{N}^r}, \label{sigmaf2} \end{equation} where, as above, $\omega_\alpha$ and $\omega_\alpha^i$ for $\alpha\in{\cal A}$ are i.i.d. uniform random variables on $[0,1]$.
\end{enumerate} The M\'ezard-Parisi ansatz predicts that in the equation (\ref{sigmaf2}) one can replace the function $h$ by a function that does not depend on the coordinates $(\omega_\beta)_{\beta\in p(\alpha)}$, which would produce exactly the same fields as in (\ref{MPfopagain}). We will show that this essentially holds for finite-RSB asymptotic Gibbs measures. \medskip \noindent \textbf{Consequence of the cavity equations.} The main result of the paper is the following. \begin{theorem}\label{Th2} If a random measure $G$ on $H$ in (\ref{spaceH}) satisfies the Ghirlanda-Guerra identities (\ref{GG}) and the cavity equations (\ref{SC}) and the overlap takes $r+1$ values in (\ref{finiteoverlap}) then there exists a function $h:[0,1]^{r+2}\to[-1,1]$ such that \begin{equation} \Bigl({\xoverline{s}\hspace{0.1mm}}_\alpha(v_i) \Bigr)_{i\geq 1, \alpha\in\mathbb{N}^r} \, \stackrel{d}{=}\, \Bigl( h\bigl( \omega_*, (\omega_\beta^i)_{\beta\in p(\alpha)} \bigr) \Bigr)_{i\geq 1,\alpha\in\mathbb{N}^r}, \label{sigmaf3} \end{equation} where $\omega_*$ and $\omega_\alpha^i$ for $\alpha\in{\cal A}$ are i.i.d. uniform random variables on $[0,1]$. \end{theorem} In other words, the cavity equations (\ref{SC}) allow us to simplify (\ref{sigmaf2}) and to get rid of the dependence on the coordinates $\omega_\beta$ for $\beta\in {\cal A}\setminus \{*\}$. Notice that, compared to the M\'ezard-Parisi ansatz, we still have the dependence on $\omega_*$ in (\ref{sigmaf3}). However, from the point of view of computing the free energy this is not an issue at all, because the average in $\omega_*$ is on the outside of the logarithm in (\ref{PPG}) and when we minimize over $G$ in (\ref{PPGeq}), we can replace the average over $\omega_*$ by the infimum. Of course, the infimum over $G$ in (\ref{PPGeq}) could involve measures that are not of finite-RSB type, and this is the main obstacle to finish the proof of the M\'ezard-Parisi formula, if this approach can be made to work. 
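The reordered weights $(V_\alpha)$ discussed above come from the Ruelle probability cascades; for orientation, here is a minimal numerical sketch (our own illustration, not part of the argument) of the one-level case $r=1$, where the weights form a Poisson-Dirichlet sequence obtained by normalizing the points $\Gamma_k^{-1/\zeta}$ of a Poisson process:

```python
import numpy as np

def poisson_dirichlet(zeta, n_points=2000, seed=0):
    """Truncated sample of Poisson-Dirichlet PD(zeta) weights, 0 < zeta < 1.

    If Gamma_1 < Gamma_2 < ... are the arrival times of a rate-1 Poisson
    process, the points u_k = Gamma_k ** (-1/zeta) form a Poisson process on
    (0, infty) with intensity zeta * x ** (-1 - zeta); normalizing the
    (truncated) sequence gives decreasing random weights of the kind used
    to build the cascades.
    """
    rng = np.random.default_rng(seed)
    gamma = np.cumsum(rng.exponential(size=n_points))  # arrival times
    u = gamma ** (-1.0 / zeta)                         # decreasing in k
    return u / u.sum()

v = poisson_dirichlet(zeta=0.5)
```

For the $r$-step cascades one attaches, roughly speaking, an independent copy of such a sequence to every vertex of the tree, with parameter $\zeta_{|\alpha|-1}$ depending only on the depth, and takes products of the weights along the paths $p(\alpha)$.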
If one could restrict the infimum in (\ref{PPGeq}) to measures $G$ that satisfy the finite-RSB condition in (\ref{finiteoverlap}) (in addition to the cavity equations and the Ghirlanda-Guerra identities) then, using Theorem \ref{Th2} and replacing the average over $\omega_*$ by the infimum, we would get a lower bound that essentially matches the Franz-Leone upper bound, except that now we have additional terms in the second line in (\ref{Aiag}) and (\ref{Bag}) compared to (\ref{Aibef}) and (\ref{Bef}) coming from the perturbation Hamiltonian (\ref{HNpertmain}). However, these terms are controlled by ${\varepsilon}^\mathrm{pert}$ in (\ref{gsvar}) and, letting it go to zero, one could remove the dependence of the lower bound on these terms and match the Franz-Leone upper bound. \section{General idea of the proof}\label{Sec2ilabel} The main goal of this paper is to show that the function $h$ that generates the array ${\xoverline{s}\hspace{0.1mm}}_i^\alpha$ in (\ref{sigmaf2}), $$ {\xoverline{s}\hspace{0.1mm}}_i^\alpha = h\bigl( (\omega_\beta)_{\beta\in p(\alpha)}, (\omega_\beta^i)_{\beta\in p(\alpha)} \bigr), $$ can be replaced by a function that does not depend on the coordinates $\omega_\beta$ for $*\prec \beta\preceq \alpha$. We will show this by induction, removing one coordinate at a time from the leaf $\alpha$ up to the root $*$. Our induction assumption will be the following: for $p\in \{0,\ldots, r-1\}$, suppose that, instead of (\ref{sigmaf2}), the array ${\xoverline{s}\hspace{0.1mm}}_i^\alpha$ for $i\geq 1,\alpha\in \mathbb{N}^r$, is generated by \begin{equation} {\xoverline{s}\hspace{0.1mm}}_i^\alpha = h\bigl( (\omega_\beta)_{\beta\in p(\alpha), |\beta|\leq p+1}, (\omega_\beta^i)_{\beta\in p(\alpha)} \bigr) \label{indass} \end{equation} for some function $h$ that does not depend on the coordinates $\omega_\beta$ for $|\beta|\geq p+2$.
Notice that this holds for $p=r-1$, and we would like to show that one can replace $h$ by $h'$ that also does not depend on $\omega_{\beta}$ for $|\beta|=p+1$, without affecting the distribution of the array ${\xoverline{s}\hspace{0.1mm}}_i^\alpha$. Often, we will work with a subtree of ${\cal A}$ that `grows out' of a vertex at the distance $p$ or $p+1$ from the root, which means that all paths from the root to the vertices in that subtree pass through this vertex. In that case, for concreteness, we will fix the vertex to be $[p] = (1,2,\ldots,p)$ or $[p+1]$. We will denote by ${\mathbb{E}_{[p]}}$ the expectation with respect to the random variables $\omega_\beta$, $\omega_\beta^i$ indexed by the descendants of $[p]$, i.e. $[p]\prec \beta$, and by ${\mathbb{E}_{[p],i}}$ the expectation with respect to $\omega_\beta^i$ for $[p]\prec \beta$. Our goal will be to prove the following. \begin{theorem}\label{Sec6iTh1} Under the assumption (\ref{indass}), for any $k\geq 1$ and any $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$, the expectation ${\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_1}\cdots {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_k}$ with respect to $(\omega_\beta^i)_{[p+1]\preceq \beta}$ does not depend on $\omega_{[p+1]}$ almost surely. \end{theorem} Here $i\geq 1$ is arbitrary but fixed and $\alpha_1,\ldots,\alpha_k$ need not be different, so the quantities ${\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_1}\cdots {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_k}$ represent all possible joint moments with respect to $(\omega_\beta^i)_{[p+1]\preceq \beta}$ of the random variables ${\xoverline{s}\hspace{0.1mm}}_i^\alpha$ for $[p+1]\preceq \alpha\in \mathbb{N}^r$. It will take us the rest of the paper to prove this result, and right now we will only explain why it completes the induction step.
The reason is identical to the situation of an exchangeable sequence $X_n = h(\omega,\omega_n)$ (say, bounded in absolute value by one) such that all moments $\mathbb{E}_{\omega_1} X_1^k$ for $k\geq 1$ with respect to $\omega_1$ do not depend on $\omega$. In this case if we choose any function $h'(\omega_1)$ with this common set of moments then the sequences $(h'(\omega_n))$ and $(X_n)$ have the same distribution, which can be seen by comparing their joint moments. For example, we can choose $h'(\,\cdot\,) = h(\omega^*,\,\cdot\,)$ for any $\omega^*$ from the set of measure one on which all moments $\mathbb{E}_{\omega_1} X_1^k$ coincide with their average values $\mathbb{E} X_1^k$. We can do the same in the setting of Theorem \ref{Sec6iTh1}, which can be rephrased as follows: for almost all $(\omega_\beta,\omega_\beta^i)_{\beta\preceq [p]}$ and $\omega_{[p+1]}$, $$ {\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_1}\cdots {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_k} = {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_1}\cdots {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_k} $$ for all $k\geq 1$ and $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$, where ${\mathbb{E}_{[p]}}$ now also includes the average in $\omega_{[p+1]}.$ This means that we can find $\omega_{[p+1]}=\omega_{[p+1]}^*$ such that the equality of all these moments holds for almost all $(\omega_\beta,\omega_\beta^i)_{\beta\preceq [p]}$. 
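The exchangeable-sequence analogy can also be tested numerically. In the toy example below (our own, not from the text), $h(\omega,\omega_n)=\cos 2\pi(\omega+\omega_n)$: since $\omega+\omega_n$ mod $1$ is uniform for any fixed $\omega$, all conditional moments $\mathbb{E}_{\omega_1}X_1^k$ are constants in $\omega$, and the sequence has the same joint moments as $h'(\omega_n)=\cos 2\pi\omega_n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, seq_len = 200_000, 3

omega = rng.uniform(size=(n_samples, 1))            # common randomness
omega_n = rng.uniform(size=(n_samples, seq_len))    # per-coordinate randomness

X = np.cos(2 * np.pi * (omega + omega_n))           # X_n = h(omega, omega_n)
Y = np.cos(2 * np.pi * rng.uniform(size=(n_samples, seq_len)))  # h'(omega_n)

def joint_moment(Z, powers):
    """Empirical joint moment E[Z_1^p1 * Z_2^p2 * Z_3^p3]."""
    return np.mean(np.prod(Z ** np.array(powers), axis=1))

# The two sequences should agree on all joint moments, because the
# conditional moments of X_n given omega do not depend on omega.
moments = [(2, 0, 0), (1, 1, 0), (2, 2, 0), (1, 1, 1)]
diffs = [abs(joint_moment(X, p) - joint_moment(Y, p)) for p in moments]
```

Here the role of the fixed value $\omega^*$ from the text is played by any $\omega$: conditionally on it, the sequence $(X_n)$ is i.i.d. with the same marginal as $(h'(\omega_n))$.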
If we now set $$ h'\bigl( (\omega_\beta)_{\beta\in p(\alpha), |\beta|\leq p}, (\omega_\beta^i)_{\beta\in p(\alpha)} \bigr) = h\bigl( (\omega_\beta)_{\beta\in p(\alpha), |\beta|\leq p}, \omega_{[p+1]}^*, (\omega_\beta^i)_{\beta\in p(\alpha)} \bigr) $$ then by comparing the joint moments one can see that \begin{equation} \bigl({\xoverline{s}\hspace{0.1mm}}_i^\alpha\bigr)_{i\geq 1,\alpha\in\mathbb{N}^r} \stackrel{d}{=} \Bigl(h'\bigl( (\omega_\beta)_{\beta\in p(\alpha), |\beta|\leq p}, (\omega_\beta^i)_{\beta\in p(\alpha)} \bigr) \Bigr)_{i\geq 1,\alpha\in\mathbb{N}^r}, \end{equation} which completes the induction step. The proof of Theorem \ref{Sec6iTh1} will proceed by a certain induction on the shape of the configuration $\alpha_1,\ldots,\alpha_k$, where by the shape of the configuration we essentially mean the matrix $(\alpha_\ell\wedge \alpha_{\ell'})_{\ell,\ell'\leq k}$ (or its representation by a tree that consists of all paths $p(\alpha_\ell)$). It is clear that the quantity $$ {\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_1}\cdots {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_k} $$ depends on $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$ only through the shape. The induction will be somewhat involved and we will explain exactly how it will work toward the end of the paper, once we have all the tools ready. However, we need to mention now that the induction will have an important \textit{monotonicity property}: whenever we have proved the statement of Theorem \ref{Sec6iTh1} for some $\alpha_1,\ldots,\alpha_k$, we have also proved it for any subset of these vertices. At this moment, we will suppose that $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$ are such that the following holds: \begin{enumerate} \item[(M)] For any subset $S\subseteq \{1,\ldots, k\}$, ${\mathbb{E}_{[p],i}} \prod_{\ell\in S}{\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell}$ does not depend on $\omega_{[p+1]}$. 
\end{enumerate} Then over the next sections we will obtain some implications of this assumption using the cavity equations. Finally, in the last section we will show how to use these implications inductively to prove Theorem \ref{Sec6iTh1} for any choice of $\alpha_1,\ldots,\alpha_k$. Of course, the starting point of the induction will be the case of $k=1$ that we will obtain first. In fact, in this case the statement will be even stronger and will not assume that (\ref{indass}) holds (i.e. we only assume (\ref{sigmaf2})). \begin{lemma}\label{Sec2iLem1} For any $p=0,\ldots, r-1$ and any $[p]\prec \alpha \in \mathbb{N}^r$, the expectation ${\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha}$ with respect to $(\omega_\beta^i)_{[p]\prec \beta}$ does not depend on $(\omega_\beta)_{[p]\prec \beta}$ almost surely. \end{lemma} \textbf{Proof.} Consider $[p]\prec \alpha, \beta\in\mathbb{N}^r$ such that $\alpha\wedge \beta = p$. By (\ref{sigmaf2}), it is clear that the overlap of two pure states satisfies \begin{equation} R_{\alpha, \beta}:= {\xoverline{s}\hspace{0.1mm}}_\alpha\cdot {\xoverline{s}\hspace{0.1mm}}_{\beta} = \int_0^1\! {\xoverline{s}\hspace{0.1mm}}_\alpha(v) {\xoverline{s}\hspace{0.1mm}}_{\beta}(v) \, dv \stackrel{d}{=} \mathbb{E}_i \, {\xoverline{s}\hspace{0.1mm}}_i^{\alpha} {\xoverline{s}\hspace{0.1mm}}_i^{\beta}, \label{Sec2eq1} \end{equation} where $\mathbb{E}_i$ denotes the expectation in random variables $\omega_\eta^i$ that depend on the spin index $i$. 
By construction, we enumerated the pure states ${\xoverline{s}\hspace{0.1mm}}_\alpha$ in (\ref{Gdiscrete}) so that \begin{equation} R_{\alpha,\beta} = q_{\alpha\wedge\beta} \label{RabSec2} \end{equation} and, since $\alpha\wedge \beta = p$, we get that, almost surely, $$ q_p = R_{\alpha,\beta} = \mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}_i^{\alpha} {\xoverline{s}\hspace{0.1mm}}_i^{\beta} = \mathbb{E}_i \bigl({\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha} {\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{\beta}\bigr). $$ If we denote $v = (\omega_\eta)_{\eta\preceq [p]}$, $v_1 = (\omega_\eta)_{[p]\prec \eta\preceq \alpha}$, $v_2 = (\omega_\eta)_{[p]\prec \eta\preceq \beta}$ and $u = (\omega_\eta^i)_{\eta\preceq [p]}$ then $$ {\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha} =\varphi(v,v_1,u),\,\, {\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{\beta}=\varphi(v,v_2,u) $$ for some function $\varphi,$ the random variables $v,v_1,v_2,u$ are independent, and the above equation can be written as $$ q_p = \mathbb{E}_u \varphi(v,v_1,u) \varphi(v,v_2,u) $$ for almost all $v,v_1,v_2$. This means that for almost all $v$, the above equality holds for almost all $v_1, v_2$. Let us fix any such $v$ and let $\mu_v$ be the image of the Lebesgue measure on $[0,1]^{r-p}$ by the map $v_1 \to \varphi(v,v_1,\cdot)\in L^2([0,1]^{p+1},du)$. Then the above equation means that, if we sample independently two points from $\mu_v$, with probability one their scalar product in $L^2$ will be equal to $q_p$. This can happen only if the measure $\mu_v$ is concentrated on one point in $L^2$, which means that the function $\varphi$ does not depend on $v_1$. \qed \medskip \noindent Before we start using the cavity equations, we will explain a property of the Ruelle probability cascades that will play the role of the main technical tool throughout the paper. 
\section{Key properties of the Ruelle probability cascades}\label{Sec2label} The property described in this section will be used in two ways - directly, in order to obtain some consequences of the cavity equations, and indirectly, as a representation tool to make certain computations possible. This property is proved in Theorem 4.4 in \cite{SKmodel} in a more general form, but here we will need only a special case as follows. Let us consider a random variable $z$ taking values in some measurable space $({\cal X}, {\cal B})$ (in our case, this will always be some nice space, such as $[0,1]^n$ with the Borel $\sigma$-algebra) and let $z_\alpha$ be its independent copies indexed by the vertices of the tree $\alpha\in {\cal A}\setminus \{*\}$ excluding the root. Recall the parameters $(\zeta_p)_{0\leq p\leq r-1}$ in (\ref{zetas}). Let us consider a measurable bounded function \begin{equation} X_r:{\cal X}^r \to \mathbb{R} \label{Xr} \end{equation} and, recursively over $0\leq p\leq r-1$, define functions $X_p:{\cal X}^p \to\mathbb{R}$ by \begin{equation} X_p(x) = \frac{1}{\zeta_p}\log \mathbb{E}_z \exp \zeta_p X_{p+1}(x,z), \label{ch54Xp} \end{equation} where the expected value $\mathbb{E}_z$ is with respect to $z$. In particular, $X_0$ is a constant. Let us define \begin{equation} W_p(x,y) = \exp \zeta_p\bigl(X_{p+1}(x,y) - X_p(x)\bigr) \label{ch31Wp} \end{equation} for $x\in {\cal X}^p$ and $y\in{\cal X}.$ Let us point out that, by the definition (\ref{ch54Xp}), $\mathbb{E}_z W_p(x,z) = 1$ and, therefore, for each $x\in{\cal X}^p,$ we can think of $W_p(x,\, \cdot\, )$ as a change of density that yields the following conditional distribution on ${\cal X}$ given $x\in{\cal X}^p$, \begin{equation} \nu_p(x,B) = \mathbb{E}_z W_p(x,z)\, {\rm I}(z\in B). 
\label{ch51transitionprime} \end{equation} For $p=0$, $\nu_0$ is just a probability distribution on $({\cal X},{\cal B}).$ Let us now generate the array $\tilde{z}_\alpha$ for $\alpha\in {\cal A}\setminus \{*\}$ iteratively from the root to the leaves as follows. Let $\tilde{z}_n$ for $n\in\mathbb{N}$ be i.i.d. random variables with the distribution $\nu_0$. If we already constructed $\tilde{z}_\alpha$ for $|\alpha|\leq p$ then, given any $\alpha\in \mathbb{N}^p$, we generate $\tilde{z}_{\alpha n}$ independently for $n\geq 1$ from the conditional distribution $\nu_p((\tilde{z}_\beta)_{*\prec \beta\preceq \alpha},\, \cdot\,)$, and these are generated independently over different such $\alpha.$ Notice that the distribution of the array $(\tilde{z}_\alpha)_{\alpha\in{\cal A}\setminus\{*\}}$ depends on the distribution of $z$, the function $X_r$ and the parameters $(\zeta_p)_{0\leq p\leq r-1}$. With this definition, the expectation ${\mathbb{E}} f((\tilde{z}_\alpha)_{\alpha\in C})$ for a finite subset $C\subset {\cal A}\setminus\{*\}$ can be written as follows. For $\alpha\in{\cal A}\setminus \{*\}$, let \begin{equation} W_\alpha = W_{|\alpha|-1}\bigl( (z_\beta)_{*\prec \beta\prec \alpha}, z_\alpha \bigr). \label{Sec2Walpha} \end{equation} Slightly abusing notation, we could also write this simply as $W_\alpha = W_{|\alpha|-1}( (z_\beta)_{*\prec \beta\preceq \alpha})$. Given a finite subset ${C} \subset {\cal A}\setminus\{*\}$, let $$ p({C}) = \bigcup_{\alpha\in {C}} p(\alpha) \setminus \{*\} \,\,\mbox{ and }\,\, W_{{C}} = \prod_{\alpha\in p({C})} W_\alpha. $$ Then, the above definition of the array $(\tilde{z}_\alpha)$ means that \begin{equation} {\mathbb{E}} f\bigl((\tilde{z}_\alpha)_{\alpha\in {C}} \bigr) = \mathbb{E} W_{{C}} f\bigl(({z}_\alpha)_{\alpha\in {C}}\bigr). \label{Sec2expectW} \end{equation} Simply, to average over $(\tilde{z}_\alpha)_{\alpha\in {C}}$ we need to use changes of density over all vertices in the paths from the root leading to the vertices $\alpha\in {C}$.
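The recursion (\ref{ch54Xp}) is easy to exercise numerically. In the toy example below (our own, with hypothetical parameters), $z$ is uniform on $[0,1]$, $r=2$ and $X_2(x_1,x_2)=x_1+x_2$, so the recursion separates and $X_0$ has the closed form $X_0=\sum_{p=0,1}\zeta_p^{-1}\log\bigl((e^{\zeta_p}-1)/\zeta_p\bigr)$, against which a Monte Carlo evaluation can be checked:

```python
import numpy as np

rng = np.random.default_rng(2)
zeta0, zeta1 = 0.3, 0.7        # hypothetical parameters 0 < zeta_0 < zeta_1 < 1
n_inner, n_outer = 50_000, 1_000

def X2(x1, x2):
    """Toy choice of the top function X_r for r = 2."""
    return x1 + x2

def X1(x1):
    """X_1(x1) = (1/zeta_1) log E_z exp(zeta_1 X_2(x1, z)) for z ~ U[0,1]."""
    z = rng.uniform(size=n_inner)
    return np.log(np.mean(np.exp(zeta1 * X2(x1, z)))) / zeta1

# X_0 = (1/zeta_0) log E_z exp(zeta_0 X_1(z)), estimated by nested Monte Carlo.
z_outer = rng.uniform(size=n_outer)
X0 = np.log(np.mean(np.exp(zeta0 * np.array([X1(z) for z in z_outer])))) / zeta0

# Since X2(x1, x2) = x1 + x2, the recursion separates and X_0 is explicit:
exact = sum(np.log((np.exp(zp) - 1.0) / zp) / zp for zp in (zeta0, zeta1))
```

By the definition (\ref{ch54Xp}), the corresponding change of density $W_p(x,\,\cdot\,)$ automatically averages to one in $z$, which is what makes the interpretation (\ref{ch51transitionprime}) as a conditional distribution possible.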
The meaning of the above construction will be explained by the following result. Recall the Ruelle probability cascades $(v_\alpha)_{\alpha\in \mathbb{N}^r}$ in (\ref{vs}) and define new random weights on $\mathbb{N}^r$, \begin{equation} \tilde{v}_\alpha = \frac{v_\alpha \exp X_r((z_\beta)_{*\prec \beta\preceq \alpha})}{\sum_{\alpha\in \mathbb{N}^r}v_\alpha \exp X_r((z_\beta)_{*\prec \beta\preceq \alpha})}, \label{tildevs} \end{equation} by the change of density proportional to $\exp X_r((z_\beta)_{*\prec \beta\preceq \alpha})$. We will say that a bijection $\pi:{\cal A}\to{\cal A}$ of the vertices of the tree ${\cal A}$ preserves the parent-child relationship if children $\alpha n$ of $\alpha$ are mapped into children of $\pi(\alpha)$, $\pi(\alpha n) = (\pi(\alpha),k)$ for some $k\in\mathbb{N}$. Another way to write this is to say that $\pi(\alpha)\wedge \pi(\beta) = \alpha\wedge\beta$ for all $\alpha,\beta\in {\cal A}.$ For example, the bijection $\pi$ defined in (\ref{permute}), (\ref{Vs2}), is of this type. Theorem 4.4 in \cite{SKmodel} gives the following generalization of the Bolthausen-Sznitman invariance property for the Poisson-Dirichlet point process (Proposition A.2 in \cite{Bolthausen}). \begin{theorem} There exists a random bijection $\rho:{\cal A}\to{\cal A}$ of the vertices of the tree ${\cal A}$, which preserves the parent-child relationship, such that \begin{equation} \bigl(\tilde{v}_{\rho(\alpha)}\bigr)_{\alpha\in\mathbb{N}^r} \stackrel{d}{=} \bigl(v_\alpha\bigr)_{\alpha\in\mathbb{N}^r},\,\, \bigl(z_{\rho(\alpha)} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \stackrel{d}{=} \bigl(\tilde{z}_\alpha \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \label{Th3eq} \end{equation} and these two arrays are independent of each other. \end{theorem} This result will be more useful to us in a slightly different formulation in terms of the sequence $(V_\alpha)_{\alpha\in \mathbb{N}^r}$ in (\ref{Vs2}).
Namely, if we denote by \begin{equation} \tilde{V}_\alpha = \frac{V_\alpha \exp X_r((z_\beta)_{*\prec \beta\preceq \alpha})}{\sum_{\alpha\in \mathbb{N}^r}V_\alpha \exp X_r((z_\beta)_{*\prec \beta\preceq \alpha})} \label{tildeVs} \end{equation} then the following holds. \begin{theorem}\label{Th4label} There exists a random bijection $\rho:{\cal A}\to{\cal A}$ of the vertices of the tree ${\cal A}$, which preserves the parent-child relationship, such that \begin{equation} \bigl(\tilde{V}_{\rho(\alpha)}\bigr)_{\alpha\in\mathbb{N}^r} \stackrel{d}{=} \bigl(V_\alpha\bigr)_{\alpha\in\mathbb{N}^r},\,\, \bigl(z_{\rho(\alpha)} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \stackrel{d}{=} \bigl(\tilde{z}_\alpha \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \label{Th4eq} \end{equation} and these two arrays are independent of each other. \end{theorem} \textbf{Proof.} We will apply the following simple observation twice. Suppose that we have a random array $(v_\alpha')_{\alpha\in\mathbb{N}^r}$ of positive weights that add up to one and an array $(z_\alpha')_{\alpha\in{\cal A}\setminus\{*\}}$ generated along the tree similarly to $(\tilde{z}_\alpha)$ above -- namely, ${z}_n'$ for $n\in\mathbb{N}$ are i.i.d. random variables with some distribution $\nu_0$ and, if we already constructed ${z}_\alpha'$ for $|\alpha|\leq p$ then, given any $\alpha\in \mathbb{N}^p$, we generate ${z}_{\alpha n}'$ independently for $n\geq 1$ from some conditional distribution $\nu_p((z_\beta')_{*\prec \beta\preceq \alpha},\, \cdot\,)$, and these are generated independently over different such $\alpha.$ Suppose that $(v_\alpha')_{\alpha\in\mathbb{N}^r}$ and $(z_\alpha')_{\alpha\in{\cal A}\setminus\{*\}}$ are independent. Consider any random permutation $\rho:{\cal A}\to{\cal A}$ that preserves the parent-child relationship, which depends only on $(v_\alpha')_{\alpha\in\mathbb{N}^r}$, i.e. it is a measurable function of this array.
Then the arrays $$ \bigl(v_{\rho(\alpha)}' \bigr)_{\alpha\in\mathbb{N}^r} \,\,\mbox{ and }\,\, \bigl(z_{\rho(\alpha)}' \bigr)_{\alpha\in{\cal A}\setminus\{*\}} $$ are independent and $$ \bigl(z_{\rho(\alpha)}' \bigr)_{\alpha\in{\cal A}\setminus\{*\}} \stackrel{d}{=} \bigl(z_\alpha' \bigr)_{\alpha\in{\cal A}\setminus\{*\}}. $$ This is obvious because, conditionally on $\rho$, the array $z_{\rho(\alpha)}'$ is generated exactly like $z_\alpha'$ along the tree, so its conditional distribution does not depend on $\rho$. One example of such a permutation $\rho$ is the permutation defined in (\ref{vsall}), (\ref{permute}), (\ref{Vs2}), that sorts the cluster weights indexed by $\alpha\in {\cal A}\setminus \mathbb{N}^r$ defined by \begin{equation} v_\alpha' = \sum_{\alpha\prec \beta\in\mathbb{N}^r} v_\beta'. \label{vsallprime} \end{equation} Namely, for each $\alpha\in {\cal A}\setminus \mathbb{N}^r$, we let $\pi_\alpha: \mathbb{N} \to \mathbb{N}$ be a bijection such that the sequence $v_{\alpha \pi_\alpha(n)}'$ is decreasing for $n\geq 1$ (we assume that all these cluster weights are different, as is the case for the Ruelle probability cascades), let $\pi(*)=*$ and define \begin{equation} \pi(\alpha n) = \pi(\alpha) \pi_{\pi(\alpha)}(n) \label{permuteprime} \end{equation} recursively from the root to the leaves of the tree. Let us denote \begin{equation} \mathrm{Sort}\Bigl( \bigl(v_\alpha' \bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_\alpha' \bigr)_{\alpha\in{\cal A}\setminus\{*\}} \Bigr) := \Bigl( \bigl(v_{\pi(\alpha)}' \bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\pi(\alpha)}' \bigr)_{\alpha\in{\cal A}\setminus\{*\}} \Bigr). \label{sort} \end{equation} Notice that this sorting operation depends only on $(v_\alpha')_{\alpha\in\mathbb{N}^r}$, so it does not affect the distribution of $(z_\alpha')_{\alpha\in{\cal A}\setminus\{*\}}$. Now, let us show how (\ref{Th3eq}) implies (\ref{Th4eq}).
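Before turning to the details, it may help to see the sorting operation concretely. The sketch below (our own, on a finite truncated tree with $r=2$) reorders the children of every vertex in the decreasing order of their total cluster weights, recursively from the root to the leaves:

```python
def cluster_mass(t):
    """Total weight of the leaves below a vertex (a nested list of weights)."""
    return sum(cluster_mass(c) for c in t) if isinstance(t, list) else t

def sort_tree(v):
    """Reorder children at every vertex in decreasing order of cluster mass,
    recursively from the root down, mimicking the sorting permutation."""
    if not isinstance(v[0], list):                 # bottom level: leaf weights
        return sorted(v, reverse=True)
    children = sorted(v, key=cluster_mass, reverse=True)
    return [sort_tree(c) for c in children]

# A truncated tree with r = 2: three q_1-clusters with two pure states each.
tree = [[0.1, 0.2], [0.4, 0.05], [0.15, 0.1]]
sorted_tree = sort_tree(tree)
```

The permutation produced this way is a function of the weights alone, which is exactly the property used in the proof: it does not disturb the distribution of any independent array carried along the tree.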
First of all, the permutation $\rho$ in the equation (\ref{Th4eq}) is just the sorting operation described above, $$ \Bigl( \bigl(\tilde{V}_{\rho(\alpha)}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\rho(\alpha)} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr) = \mathrm{Sort} \Bigl( \bigl(\tilde{V}_{\alpha}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\alpha} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr). $$ Let $\pi$ be the permutation in (\ref{permute}), (\ref{Vs2}) and, trivially, $$ \mathrm{Sort} \Bigl( \bigl(\tilde{V}_{\alpha}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\alpha} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr) = \mathrm{Sort} \Bigl( \bigl(\tilde{V}_{\pi^{-1}(\alpha)}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\pi^{-1}(\alpha)} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr), $$ since the sorting operation does not depend on how we index the array. On the other hand, by the definition (\ref{tildeVs}) and the fact that $V_{\pi^{-1}(\alpha)} = v_\alpha$, $$ \tilde{V}_{\pi^{-1}(\alpha)} = \frac{v_\alpha \exp X_r((z_{\pi^{-1}(\beta)})_{\beta\in p(\alpha)})}{\sum_{\alpha\in \mathbb{N}^r}v_\alpha \exp X_r((z_{\pi^{-1}(\beta)})_{\beta\in p(\alpha)})}. $$ Also, since the permutation $\pi$ depends only on $(v_\alpha)$, by the above observation, the arrays $(v_{\alpha})_{\alpha\in\mathbb{N}^r}$ and $(z_{\pi^{-1}(\alpha)})_{\alpha\in{\cal A}\setminus\{*\}}$ are independent and $$ \bigl(z_{\pi^{-1}(\alpha)} \bigr)_{\alpha\in{\cal A}\setminus\{*\}} \stackrel{d}{=} \bigl(z_\alpha \bigr)_{\alpha\in{\cal A}\setminus\{*\}}. 
$$ Comparing with the definition (\ref{tildevs}), this gives that $$ \Bigl( \bigl(\tilde{V}_{\pi^{-1}(\alpha)}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\pi^{-1}(\alpha)} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr) \stackrel{d}{=} \Bigl( \bigl(\tilde{v}_{\alpha}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\alpha} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr), $$ and altogether we have shown that $$ \Bigl( \bigl(\tilde{V}_{\rho(\alpha)}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\rho(\alpha)} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr) \stackrel{d}{=} \mathrm{Sort} \Bigl( \bigl(\tilde{v}_{\alpha}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\alpha} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr). $$ Since we already use the notation $\rho$, let us denote the permutation $\rho$ in (\ref{Th3eq}) by $\rho'$. Then (\ref{Th3eq}) implies \begin{align*} & \mathrm{Sort} \Bigl( \bigl(\tilde{v}_{\alpha}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\alpha} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr) = \mathrm{Sort} \Bigl( \bigl(\tilde{v}_{\rho'(\alpha)}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(z_{\rho'(\alpha)} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr) \\ & \stackrel{d}{=} \mathrm{Sort} \Bigl( \bigl({v}_{\alpha}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(\tilde{z}_{\alpha} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr) = \Bigl( \bigl({V}_{\alpha}\bigr)_{\alpha\in\mathbb{N}^r}, \bigl(\tilde{z}_{\pi(\alpha)} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \Bigr). \end{align*} Finally, since the sorting permutation $\pi$ depends only on the array $(v_\alpha)$, by the above observation, the array $(\tilde{z}_{\pi(\alpha)})$ is independent of $(V_\alpha)$ and has the same distribution as $(\tilde{z}_{\alpha})$. This finishes the proof. \qed \section{Cavity equations for the pure states}\label{Sec3label} In this section, we will obtain some general consequences of the cavity equations (\ref{SC}) that do not depend on any inductive assumptions.
In the next section, we will push this further under the assumption (M) made in Section \ref{Sec2ilabel}. First of all, let us rewrite the cavity equations (\ref{SC}) taking into account the consequences of the Ghirlanda-Guerra identities in (\ref{Gdiscrete}) and (\ref{sigmaf2}). Let us define \begin{align} {\xoverline{s}\hspace{0.1mm}}^\alpha_I \ =\ & \ h\bigl( (\omega_\beta)_{\beta\in p(\alpha)}, (\omega_\beta^I)_{\beta\in p(\alpha)} \bigr), \label{sialpha} \\ A_i^\alpha({\varepsilon}) \ =\ & \sum_{k\leq \pi_i(\lambda K)} \theta_{k,i}( {\xoverline{s}\hspace{0.1mm}}^\alpha_{1,k,i}, \ldots, {\xoverline{s}\hspace{0.1mm}}^\alpha_{K-1,k,i}, {\varepsilon}) \nonumber \\ & \ + \theta_i^1({\varepsilon}) +\sum_{d\geq 2} \sum_{k\leq \pi_i(d)}\theta_{k,i}^d({\xoverline{s}\hspace{0.1mm}}^\alpha_{{1,d,k,i}},\ldots, {\xoverline{s}\hspace{0.1mm}}^\alpha_{{d-1,d,k,i}},{\varepsilon}), \label{Aieps} \\ A_i^\alpha \ =\ & \log {\rm Av} \exp A_i^\alpha({\varepsilon}), \label{Aialpha} \\ \xi_i^\alpha \ =\ & \frac{{\rm Av} {\varepsilon} \exp A_i^\alpha({\varepsilon}) }{\exp A_i^\alpha}, \label{Sec3xiialpha} \end{align} and let $A^\alpha = \sum_{i\leq n} A_i^\alpha$. We will keep the dependence of $A^\alpha$ on $n$ implicit for simplicity of notation. Then (\ref{Ulbar2}) can be rewritten, using the equality in distribution (\ref{sigmaf2}), as \begin{equation} {U}_\ell =\, \sum_{\alpha\in \mathbb{N}^r} V_\alpha \prod_{i\in C_\ell^1} \xi_i^\alpha \prod_{i\in C_\ell^2} {\xoverline{s}\hspace{0.1mm}}_i^\alpha\,\exp A^\alpha \ \mbox{ and } \ {V} =\, \sum_{\alpha\in \mathbb{N}^r } V_\alpha \exp A^\alpha.
\label{Ulbaralpha} \end{equation} Moreover, if we denote \begin{equation} \tilde{V}_\alpha = \frac{V_\alpha \exp A^\alpha}{{V}} = \frac{V_\alpha \exp A^\alpha}{\sum_{\alpha\in \mathbb{N}^r} V_\alpha \exp A^\alpha} \label{Valpha} \end{equation} then the cavity equations (\ref{SC}) take the form \begin{equation} \mathbb{E} \prod_{\ell\leq q} \sum_{\alpha\in \mathbb{N}^r} V_\alpha \prod_{i\in C_\ell} {\xoverline{s}\hspace{0.1mm}}_i^\alpha =\mathbb{E} \prod_{\ell\leq q} \sum_{\alpha\in \mathbb{N}^r} \tilde{V}_\alpha \prod_{i\in C_\ell^1} \xi_i^\alpha \prod_{i\in C_\ell^2} {\xoverline{s}\hspace{0.1mm}}_i^\alpha. \label{SCbaralpha} \end{equation} We can also write this as \begin{equation} \mathbb{E} \sum_{\alpha_1,\ldots, \alpha_q} V_{\alpha_1}\cdots V_{\alpha_q} \prod_{\ell\leq q} \prod_{i\in C_\ell} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell} =\mathbb{E} \sum_{\alpha_1,\ldots, \alpha_q} \tilde{V}_{\alpha_1} \cdots \tilde{V}_{\alpha_q} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} \xi_i^{\alpha_\ell} \prod_{i\in C_\ell^2} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell}. \label{SCnew} \end{equation} We will now use this form of the cavity equations to obtain a different form directly for the pure states that does not involve averaging over the pure states. Let us formulate the main result of this section. Let ${\cal F}$ be the $\sigma$-algebra generated by the random variables that are not indexed by $\alpha\in {\cal A}\setminus \{*\}$, namely, \begin{equation} \theta_{k,i}, \theta_i^1, \theta^d_{k,i}, \pi_i(\lambda K), \pi_i(d), \omega_*, \omega_*^I \label{FF} \end{equation} for various indices, excluding the random variables $\omega_{\alpha}$ and $\omega_{\alpha}^I$ that are indexed by $\alpha\in {\cal A}\setminus \{*\}$. Let ${\cal I}_i$ be the set of indices $I$ that appear in various ${\xoverline{s}\hspace{0.1mm}}_I^\alpha$ in (\ref{Aieps}), i.e. $I$ of the type $(\ell, k,i)$ or $(\ell, d,k,i)$.
Let ${\cal I} = \cup_{i\geq 1} {\cal I}_i$ and let \begin{equation} z_\alpha^i = \bigl(\omega_\alpha^I \bigr)_{I\in{\cal I}_i},\,\, z_\alpha = \bigl( \omega_\alpha, (z_\alpha^i)_{i\geq 1} \bigr)= \bigl( \omega_\alpha, (\omega_\alpha^I)_{I\in{\cal I}} \bigr). \label{zeealpha} \end{equation} Notice that with this notation, conditionally on ${\cal F}$, the random variables $A_i^\alpha$ and $\xi_i^\alpha$ in (\ref{Aialpha}), (\ref{Sec3xiialpha}) for $\alpha\in\mathbb{N}^r$ can be written as \begin{equation} \xi_i^\alpha = \xi_i \bigl( (\omega_\beta,z_\beta^i)_{*\prec \beta \preceq \alpha}\bigr),\,\, A_i^\alpha = \chi_i \bigl( (\omega_\beta, z_\beta^i)_{*\prec \beta \preceq \alpha}\bigr), \label{xichi} \end{equation} for some functions $\xi_i$ and $\chi_i$ (that implicitly depend on the random variables in (\ref{FF})) and $$ A^\alpha = X\bigl( (z_\beta)_{*\prec \beta \preceq \alpha}\bigr) := \sum_{i\leq n} \chi_i \bigl( (\omega_\beta, z_\beta^i)_{*\prec \beta \preceq \alpha}\bigr). $$ In the setting of the previous section, let $X_r = X$ in (\ref{Xr}) and let $(\tilde{z}_\alpha)_{\alpha\in{\cal A}\setminus \{*\}}$ be the array generated along the tree using the conditional probabilities (\ref{ch51transitionprime}). Recall that this means the following. The definition in (\ref{ch54Xp}) can be written as \begin{equation} X_{|\alpha|-1}\bigl( (z_\beta)_{*\prec \beta \prec \alpha}\bigr) =\frac{1}{\zeta_{|\alpha|-1}} \log \mathbb{E}_{z_\alpha} \exp \zeta_{|\alpha|-1} X_{|\alpha|}\bigl( (z_\beta)_{*\prec \beta \preceq \alpha}\bigr), \label{Sec3Xp} \end{equation} where $\mathbb{E}_{z_\alpha}$ is the expectation in $z_\alpha$, and the definition in (\ref{ch31Wp}) can be written as \begin{equation} W_{|\alpha|-1} \bigl( (z_\beta)_{*\prec \beta \prec \alpha},z_\alpha\bigr) = \exp \zeta_{|\alpha|-1}\Bigl(X_{|\alpha|}\bigl( (z_\beta)_{*\prec \beta \preceq \alpha}\bigr) - X_{|\alpha|-1}\bigl( (z_\beta)_{*\prec \beta \prec \alpha}\bigr) \Bigr).
\label{Sec3Walpha} \end{equation} Then the array $(\tilde{z}_\alpha)_{\alpha\in{\cal A}\setminus \{*\}}$ is generated along the tree from the root to the leaves according to the conditional probabilities in (\ref{ch51transitionprime}), namely, given $(\tilde{z}_\beta)_{*\prec \beta \prec \alpha}$ we generate $\tilde{z}_\alpha$ by the change of density $W_{|\alpha|-1} \bigl( (\tilde{z}_\beta)_{*\prec \beta \prec \alpha},\ \cdot \ \bigr)$. Let us emphasize one more time that this entire construction is done conditionally on ${\cal F}$. Also, notice that the coordinates $\omega_\alpha^I$ in (\ref{zeealpha}) were independent for different $I$, but the corresponding coordinates $\tilde{\omega}_\alpha^I$ of $ \tilde{z}_\alpha = \bigl( \tilde{\omega}_\alpha, (\tilde{\omega}_\alpha^I)_{I\in{\cal I}} \bigr) $ are no longer independent, because $X_r$ and the changes of density $W_p$ depend on all of them. As in (\ref{zeealpha}) and (\ref{xichi}), let us denote \begin{equation} \tilde{z}_\alpha^i = \bigl(\tilde{\omega}_\alpha^I \bigr)_{I\in{\cal I}_i},\,\, \tilde{z}_\alpha = \bigl( \tilde{\omega}_\alpha, (\tilde{z}_\alpha^i)_{i\geq 1} \bigr),\,\, \tilde{\xi}_i^\alpha = \xi_i \bigl( (\tilde{\omega}_\beta, \tilde{z}_\beta^i)_{*\prec \beta \preceq \alpha}\bigr). \label{Sec4tildas} \end{equation} We will prove the following. \begin{theorem}\label{Sec4Th} The following equality in distribution holds (not conditionally on ${\cal F}$): \begin{equation} \bigl({\tilde{\xi}}_i^{\alpha} \bigr)_{i\leq n,\alpha\in \mathbb{N}^r} \stackrel{d}{=} \bigl({\xoverline{s}\hspace{0.1mm}}_i^{\alpha} \bigr)_{i\leq n,\alpha\in \mathbb{N}^r}. \label{Sec4ThEq} \end{equation} \end{theorem} \textbf{Proof.} As in (\ref{Sec2eq1}) and (\ref{RabSec2}), we can write $$ R_{\alpha, \beta}= {\xoverline{s}\hspace{0.1mm}}_\alpha\cdot {\xoverline{s}\hspace{0.1mm}}_{\beta} = \int_0^1\!
{\xoverline{s}\hspace{0.1mm}}_\alpha(v) {\xoverline{s}\hspace{0.1mm}}_{\beta}(v) \, dv = \mathbb{E}_i \, {\xoverline{s}\hspace{0.1mm}}_i^{\alpha} {\xoverline{s}\hspace{0.1mm}}_i^{\beta}, $$ where $\mathbb{E}_i$ denotes the expectation in the random variables $\omega_\beta^i$ in (\ref{sialpha}) that depend on the spin index $i$, and $R_{\alpha,\beta} = q_{\alpha\wedge\beta}$. In the cavity equations (\ref{SCnew}), let us now make a special choice of the sets $C_\ell^2$. For each pair $(\ell,\ell')$ of replica indices such that $1\leq \ell<\ell'\leq q$, take any integer $n_{\ell,\ell'}\geq 0$ and consider a set $C_{\ell,\ell'}\subseteq \{n+1,\ldots,m\}$ of cardinality $|C_{\ell,\ell'}|=n_{\ell,\ell'}$. Let all these sets be disjoint, which can be achieved by taking $m=n+\sum_{1\leq \ell<\ell'\leq q} n_{\ell,\ell'}.$ For each $\ell\leq q$, let $$ C_\ell^2 = \Bigl(\bigcup_{\ell'>\ell} C_{\ell,\ell'}\Bigr) \bigcup \Bigl(\bigcup_{\ell'<\ell} C_{\ell',\ell}\Bigr). $$ Then a given spin index $i\in \{n+1,\ldots,m\}$ appears in exactly two sets, say, $C_\ell^2$ and $C_{\ell'}^2$, and the expectation of (\ref{SCnew}) in $(\omega_{\beta}^i)$ will produce a factor $\mathbb{E}_i \,{\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_{\ell'}} = R_{\alpha_\ell,\alpha_{\ell'}}$. For each pair $(\ell,\ell')$, there will be exactly $n_{\ell,\ell'}$ such factors, so averaging in (\ref{SCnew}) in the random variables $(\omega_{\beta}^i)$ for all $i\in \{n+1,\ldots,m\}$ will result in \begin{equation} \mathbb{E} \sum_{\alpha_1,\ldots, \alpha_q} V_{\alpha_1}\cdots V_{\alpha_q} \prod_{\ell<\ell'} R_{\alpha_\ell, \alpha_{\ell'}}^{n_{\ell,\ell'}} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell} =\mathbb{E} \sum_{\alpha_1,\ldots, \alpha_q} \tilde{V}_{\alpha_1} \cdots \tilde{V}_{\alpha_q} \prod_{\ell<\ell'} R_{\alpha_\ell, \alpha_{\ell'}}^{n_{\ell,\ell'}} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} \xi_i^{\alpha_\ell}.
\label{SCagain} \end{equation} Approximating by polynomials, we can replace $\prod_{\ell<\ell'} R_{\alpha_\ell, \alpha_{\ell'}}^{n_{\ell,\ell'}}$ by the indicator of the set \begin{equation} {\cal C} = \bigl\{(\alpha_1,\ldots, \alpha_q) \ | \ R_{\alpha_\ell, \alpha_{\ell'}} = q_{\ell,\ell'} \mbox{ for all } 1\leq \ell<\ell' \leq q\bigr\} \end{equation} for any choice of constraints $q_{\ell,\ell'}$ taking values in $\{q_0,\ldots,q_r\}$. Therefore, (\ref{SCagain}) implies \begin{equation} \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E} V_{\alpha_1}\cdots V_{\alpha_q} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell} = \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E} \tilde{V}_{\alpha_1} \cdots \tilde{V}_{\alpha_q} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} \xi_i^{\alpha_\ell}. \label{SCF} \end{equation} Using the property (i) above the equation (\ref{sigmaf2}), which, as we mentioned, is a consequence of the Ghirlanda-Guerra identities, we can rewrite the left hand side as $$ \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E} V_{\alpha_1}\cdots V_{\alpha_q} \, \mathbb{E} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell}. $$ Moreover, it is obvious from the definition of the array ${\xoverline{s}\hspace{0.1mm}}_i^\alpha$ in (\ref{sialpha}) that the second expectation depends on $(\alpha_1,\ldots, \alpha_q)\in {\cal C}$ only through the overlap constraints $(q_{\ell,\ell'})$, or $(\alpha_\ell\wedge \alpha_{\ell'})$. On the other hand, on the right hand side of (\ref{SCF}) both $\tilde{V}_\alpha$ and $\xi_i^\alpha$ depend on the same random variables through the function $A_i^\alpha({\varepsilon})$.
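As a side check of the polynomial step above: since each overlap takes only the finitely many values $q_0,\ldots,q_r$, the indicator of any single value is an exact Lagrange polynomial in the overlap, so products of powers $R_{\alpha_\ell,\alpha_{\ell'}}^{n_{\ell,\ell'}}$ do generate all indicators of constraint sets. A minimal Python sketch, with a hypothetical set of overlap values:

```python
def indicator_poly(values, target):
    """Lagrange polynomial equal to 1 at target and 0 at the other
    points of the finite set values; a polynomial in the overlap."""
    def p(x):
        out = 1.0
        for v in values:
            if v != target:
                out *= (x - v) / (target - v)
        return out
    return p

qs = [0.0, 0.3, 0.6, 1.0]   # hypothetical overlap values q_0, ..., q_r
p = indicator_poly(qs, 0.6)
assert abs(p(0.6) - 1.0) < 1e-12
assert all(abs(p(q)) < 1e-12 for q in qs if q != 0.6)
```

The indicator of the constraint set ${\cal C}$ is then the product of such one-variable polynomials over the pairs $(\ell,\ell')$.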
If we compare (\ref{tildeVs}) and (\ref{Valpha}) and apply Theorem \ref{Th4label} conditionally on ${\cal F}$, we see that there exists a random bijection $\rho:{\cal A}\to{\cal A}$ of the vertices of the tree ${\cal A}$ which preserves the parent-child relationship and such that \begin{equation} \bigl(\tilde{V}_{\rho(\alpha)}\bigr)_{\alpha\in\mathbb{N}^r} \stackrel{d}{=} \bigl(V_\alpha\bigr)_{\alpha\in\mathbb{N}^r},\,\, \bigl( (\xi_i^{\rho(\alpha)} )_{i\leq n} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \stackrel{d}{=} \bigl( ({\tilde{\xi}}_i^{\alpha})_{i\leq n} \bigr)_{\alpha\in {\cal A}\setminus \{*\}} \end{equation} and these two arrays are independent of each other (all these statements hold conditionally on ${\cal F}$). If we denote by $\mathbb{E}'$ the conditional expectation given ${\cal F}$ then this implies that \begin{align*} & \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E}'\, \tilde{V}_{\alpha_1} \cdots \tilde{V}_{\alpha_q} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} \xi_i^{\alpha_\ell} = \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E}'\, \tilde{V}_{\rho(\alpha_1)} \cdots \tilde{V}_{\rho(\alpha_q)} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} \xi_i^{\rho(\alpha_\ell)} \\ &= \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E}'\, V_{\alpha_1} \cdots V_{\alpha_q} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\tilde{\xi}}_i^{\alpha_\ell} = \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E}'\, V_{\alpha_1} \cdots V_{\alpha_q} \, \mathbb{E}'\, \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\tilde{\xi}}_i^{\alpha_\ell}.
\end{align*} Since the distribution of $(V_\alpha)_{\alpha\in\mathbb{N}^r}$ does not depend on the conditioning and $\mathbb{E}'\, V_{\alpha_1} \cdots V_{\alpha_q} = \mathbb{E} V_{\alpha_1} \cdots V_{\alpha_q}$, taking the expectation gives $$ \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E} \tilde{V}_{\alpha_1} \cdots \tilde{V}_{\alpha_q} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} \xi_i^{\alpha_\ell} = \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E} V_{\alpha_1} \cdots V_{\alpha_q} \, \mathbb{E} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\tilde{\xi}}_i^{\alpha_\ell}. $$ This proves that $$ \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E} V_{\alpha_1}\cdots V_{\alpha_q} \, \mathbb{E} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell} = \sum_{(\alpha_1,\ldots, \alpha_q)\in {\cal C}} \mathbb{E} V_{\alpha_1} \cdots V_{\alpha_q} \, \mathbb{E} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\tilde{\xi}}_i^{\alpha_\ell}. $$ Again, the second expectation in the sum on the right depends on $(\alpha_1,\ldots, \alpha_q)\in {\cal C}$ only through the overlap constraints $(q_{\ell,\ell'})$ and, since the choice of the constraints was arbitrary, we get \begin{equation} \mathbb{E} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell} = \mathbb{E} \prod_{\ell\leq q} \prod_{i\in C_\ell^1} {\tilde{\xi}}_i^{\alpha_\ell} \end{equation} for any $\alpha_1,\ldots, \alpha_q \in \mathbb{N}^r$. Clearly, one can express any joint moment of the elements in these two arrays by choosing $q\geq 1$ large enough and choosing $\alpha_1,\ldots, \alpha_q$ and the sets $C_\ell^1$ properly, so the proof is complete. \qed \section{A consequence of the cavity equations for the pure states}\label{Sec4label} We will continue using the notation of the previous section, except that in this section we will take $n=2$ in Theorem \ref{Sec4Th}.
Let us recall the assumption (M) made at the end of Section \ref{Sec2ilabel}: we consider some $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$ such that the following holds: \begin{enumerate} \item[(M)] For any subset $S\subseteq \{1,\ldots, k\}$, ${\mathbb{E}_{[p],i}} \prod_{\ell\in S}{\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell}$ does not depend on $\omega_{[p+1]}$. \end{enumerate} In this section, we will obtain a further consequence of the cavity equations using that ${\mathbb{E}_{[p],i}} \prod_{\ell\leq k}{\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell}$ does not depend on $\omega_{[p+1]}$, but a similar consequence will hold for any subset of these vertices. Let us denote by ${\mathbb{E}_{[p]}}$ the expectation with respect to the random variables $\omega_\eta, \omega_\eta^I$ indexed by the descendants $\eta\succ [p]$ of $[p]$. We will use the same notation ${\mathbb{E}_{[p]}}$ to denote the expectation with respect to the random variables $\tilde{\omega}_\eta, \tilde{\omega}_\eta^I$ for $\eta\succ [p]$ conditionally on $\tilde{\omega}_\eta, \tilde{\omega}_\eta^I$ for $\eta\preceq [p]$ and all other random variables that generate the $\sigma$-algebra ${\cal F}$ in (\ref{FF}). Given any finite set ${C}\subset \mathbb{N}^r$, let us denote $$ {\xoverline{s}\hspace{0.1mm}}_i^{C} = \prod_{\alpha\in {C}} {\xoverline{s}\hspace{0.1mm}}^\alpha_i,\,\, {\tilde{\xi}}_i^{C} = \prod_{\alpha\in {C}} {\tilde{\xi}}^\alpha_i \,\,\mbox{ and }\,\, \xi_i^{C} = \prod_{\alpha\in {C}} \xi^\alpha_i. $$ Then the following holds for $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$. \begin{lemma}\label{Sec5Lem1} If ${C} = \{\alpha_1,\ldots,\alpha_k\}$ and ${\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{C}$ does not depend on $\omega_{[p+1]}$ then \begin{equation} {\mathbb{E}_{[p]}} {\tilde{\xi}}_1^{{C}} {\tilde{\xi}}_2^{{C}} = {\mathbb{E}_{[p]}} {\tilde{\xi}}_1^{{C}} {\mathbb{E}_{[p]}} {\tilde{\xi}}_2^{{C}} \label{Sec3Lem1eq} \end{equation} almost surely.
\end{lemma} \textbf{Proof.} First of all, $$ {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}} = {\mathbb{E}_{[p]}} \prod_{i\leq 2 } {\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{{C}} = \prod_{i\leq 2 } {\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{{C}} $$ almost surely, since ${\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{C}$ does not depend on $\omega_{[p+1]}$. Similarly, $$ {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_i^{{C}} = {\mathbb{E}_{[p]}} {\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{{C}} = {\mathbb{E}_{[p],i}} {\xoverline{s}\hspace{0.1mm}}_i^{{C}} $$ almost surely and, therefore, \begin{align} 0 = &\ \mathbb{E} \bigl({\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}} - {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}} \bigr)^2 \nonumber \\ = &\ \mathbb{E} \bigl({\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}}\bigr)^2 -2 \mathbb{E} \bigl({\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}}\bigr) \bigl({\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}}\bigr) +\mathbb{E} \bigl({\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}}\bigr)^2. \label{Sec3above} \end{align} Let us now rewrite each of these terms using replicas. Let ${C}_1 = {C}$ and for $j=2,3,4$ let ${C}_j = \{\alpha^j_1,\ldots,\alpha^j_k\}$ for arbitrary $[p+j]\preceq \alpha^j_1,\ldots,\alpha^j_k \in \mathbb{N}^r$ such that $\alpha^j_\ell \wedge \alpha^j_{\ell'} = \alpha_\ell \wedge \alpha_{\ell'}$ for any $\ell,\ell'\leq k.$ In other words, ${C}_j$ are copies of ${C}$ that consist of the descendants of different children of $[p]$.
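The computation that follows rests on the elementary replica identity $\mathbb{E}\bigl(\mathbb{E}[Y \mid {\cal G}]\bigr)^2 = \mathbb{E}\, Y^{(1)} Y^{(2)}$ for copies $Y^{(1)}, Y^{(2)}$ of $Y$ that are i.i.d. conditionally on ${\cal G}$; the copies ${C}_j$ play exactly this role. A toy exact check in Python (the two-point mixture is made up for illustration):

```python
# G is uniform on {0, 1}; given G = g, a copy Y is Bernoulli(p[g]).
p = {0: 0.2, 1: 0.7}

# E (E[Y|G])^2, computed directly
lhs = sum(0.5 * p[g] ** 2 for g in (0, 1))

# E Y1 Y2 for conditionally i.i.d. replicas, by full enumeration of
# the joint law (the replicas share G but are independent given G)
rhs = 0.0
for g in (0, 1):
    for y1 in (0, 1):
        for y2 in (0, 1):
            pr = 0.5 * (p[g] if y1 else 1 - p[g]) * (p[g] if y2 else 1 - p[g])
            rhs += pr * y1 * y2

assert abs(lhs - rhs) < 1e-12
```

The fourth moments appearing below are handled the same way, with four conditionally independent copies.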
Therefore, we can write $$ {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}_j} {\xoverline{s}\hspace{0.1mm}}_2^{{C}_j} = {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}_{j'}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}_{j'}} \,\,\mbox{ and }\,\, {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_i^{{C}_j} = {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_i^{{C}_{j'}} $$ almost surely for any $j,j'\leq 4$ and \begin{align*} \mathbb{E} \bigl({\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}}\bigr)^2 = &\ \mathbb{E} {\xoverline{s}\hspace{0.1mm}}_1^{{C}_1} {\xoverline{s}\hspace{0.1mm}}_2^{{C}_1}{\xoverline{s}\hspace{0.1mm}}_1^{{C}_2} {\xoverline{s}\hspace{0.1mm}}_2^{{C}_2}, \\ \mathbb{E} \bigl({\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}}\bigr) \bigl({\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}}\bigr) = &\ \mathbb{E} {\xoverline{s}\hspace{0.1mm}}_1^{{C}_1} {\xoverline{s}\hspace{0.1mm}}_2^{{C}_1}{\xoverline{s}\hspace{0.1mm}}_1^{{C}_2} {\xoverline{s}\hspace{0.1mm}}_2^{{C}_3}, \\ \mathbb{E} \bigl({\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_1^{{C}} {\mathbb{E}_{[p]}} {\xoverline{s}\hspace{0.1mm}}_2^{{C}}\bigr)^2 = &\ \mathbb{E} {\xoverline{s}\hspace{0.1mm}}_1^{{C}_1} {\xoverline{s}\hspace{0.1mm}}_2^{{C}_2}{\xoverline{s}\hspace{0.1mm}}_1^{{C}_3} {\xoverline{s}\hspace{0.1mm}}_2^{{C}_4}. \end{align*} By Theorem \ref{Sec4Th}, this and (\ref{Sec3above}) imply that $$ \mathbb{E} {\tilde{\xi}}_1^{{C}_1} {\tilde{\xi}}_2^{{C}_1}{\tilde{\xi}}_1^{{C}_2} {\tilde{\xi}}_2^{{C}_2} -2 \mathbb{E} {\tilde{\xi}}_1^{{C}_1} {\tilde{\xi}}_2^{{C}_1}{\tilde{\xi}}_1^{{C}_2} {\tilde{\xi}}_2^{{C}_3} +\mathbb{E} {\tilde{\xi}}_1^{{C}_1} {\tilde{\xi}}_2^{{C}_2}{\tilde{\xi}}_1^{{C}_3} {\tilde{\xi}}_2^{{C}_4} = 0. 
$$ Repeating the above computation backwards for ${\tilde{\xi}}$ instead of ${\xoverline{s}\hspace{0.1mm}}$ gives $$ \mathbb{E} \bigl({\mathbb{E}_{[p]}} {\tilde{\xi}}_1^{{C}} {\tilde{\xi}}_2^{{C}} - {\mathbb{E}_{[p]}} {\tilde{\xi}}_1^{{C}} {\mathbb{E}_{[p]}} {\tilde{\xi}}_2^{{C}} \bigr)^2 = 0 $$ and this finishes the proof. \qed \bigskip \noindent By analogy with (\ref{Sec2Walpha}) and (\ref{Sec2expectW}), let us rewrite the expectation ${\mathbb{E}_{[p]}}$ with respect to the random variables $\tilde{\omega}_\alpha, \tilde{\omega}_\alpha^I$ for $\alpha\succ [p]$ in terms of the expectation with respect to the random variables ${\omega}_\alpha, {\omega}_\alpha^I$ for $\alpha\succ [p]$, writing explicitly the changes of density \begin{equation} W_\alpha = W_{|\alpha|-1} \bigl( (z_\beta)_{*\prec \beta \prec \alpha},z_\alpha\bigr). \label{Sec5Wa} \end{equation} As in Lemma \ref{Sec5Lem1}, let ${C} = \{\alpha_1,\ldots,\alpha_k\}$ for some $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$, let $$ p([p],{C}) = \bigl\{\beta \ \bigr|\ [p+1]\preceq \beta\preceq \alpha, \alpha\in{C} \bigr\} $$ and define \begin{equation} W_{[p],{C}} = \prod_{\alpha\in p([p],{C})} W_\alpha. \label{Sec5WpC} \end{equation} With this notation, we can rewrite (\ref{Sec3Lem1eq}) as \begin{equation} {\mathbb{E}_{[p]}} \xi_1^{{C}} \xi_2^{{C}} W_{[p],{C}} = {\mathbb{E}_{[p]}} \xi_1^{{C}}W_{[p],{C}}\, {\mathbb{E}_{[p]}} \xi_2^{{C}}W_{[p],{C}} \label{Sec5eq1} \end{equation} almost surely. Notice that in (\ref{Sec3Lem1eq}) almost surely meant for almost all random variables in (\ref{FF}) that generate the $\sigma$-algebra ${\cal F}$ and for almost all $\tilde{z}_\alpha$ for $\alpha\preceq [p]$ that are generated conditionally on ${\cal F}$ according to the changes of density in (\ref{Sec3Walpha}). 
However, even though in (\ref{Sec5eq1}) we simply expressed the expectation with respect to $\tilde{z}_\alpha$ for $[p]\prec \alpha$ using the changes of density explicitly, after this averaging both sides depend on ${z}_\alpha$ for $\alpha\preceq [p]$, so almost surely now means for almost all random variables in (\ref{FF}) that generate the $\sigma$-algebra ${\cal F}$ and for almost all ${z}_\alpha$ for $\alpha\preceq [p]$. The reason we can do this is very simple. Notice that $A_i^\alpha({\varepsilon})$ in (\ref{Aieps}) can be bounded by $$ |A_i^\alpha({\varepsilon})| \leq c_i:= \sum_{k\leq \pi_i(\lambda K)} \|\theta_{k,i}\|_{\infty} + |g_i^1| +\sum_{d\geq 2} \sum_{k\leq \pi_i(d)}|g_{k,i}^d|, $$ which, by the assumption (\ref{gsvar}), is almost surely finite (notice also that $c_i$ are ${\cal F}$-measurable). By induction in (\ref{Sec3Xp}), all $|X_{|\alpha|}|\leq c=c_1+c_2$ almost surely and, therefore, all changes of density in (\ref{Sec3Walpha}) satisfy $e^{-2c}\leq W_{|\alpha|}\leq e^{2c}$ almost surely. Therefore, conditionally on ${\cal F}$, the distributions of all $z_\alpha$ and $\tilde{z}_\alpha$ are mutually absolutely continuous, and we can write the equality in (\ref{Sec5eq1}) almost surely in terms of the random variables $z_\alpha$ for $\alpha\preceq [p]$. Next, we will reformulate (\ref{Sec5eq1}) using the assumption (\ref{indass}). To simplify the notation, let us denote for any $\alpha\in {\cal A}\setminus\{*\}$, \begin{equation} \omega_{\preceq\alpha} = (\omega_\beta)_{*\prec \beta\preceq \alpha},\,\, z_{\preceq\alpha}^i = (z_\beta^i)_{*\prec \beta \preceq \alpha},\,\, z_{\preceq\alpha} = (z_\beta)_{*\prec \beta \preceq \alpha} \end{equation} and define $\omega_{\prec\alpha}, z_{\prec\alpha}^i $ and $z_{\prec\alpha}$ similarly.
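The boundedness argument above can be illustrated with a toy one-level version of the recursion (\ref{Sec3Xp}) and the change of density (\ref{Sec3Walpha}); here $X_1(z) = \tanh(z)$ with standard Gaussian $z$ is a made-up stand-in for the actual functions, with bound $c = 1$:

```python
import math
import random

random.seed(0)
zeta = 0.5
zs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# toy one-level example: X_1(z) = tanh(z), so |X_1| <= c = 1
X1 = [math.tanh(z) for z in zs]
# one step of the recursion: X_0 = (1/zeta) log E exp(zeta X_1)
X0 = math.log(sum(math.exp(zeta * x) for x in X1) / len(X1)) / zeta
assert -1.0 <= X0 <= 1.0  # the log-moment average keeps the bound

# change of density W_0(z) = exp(zeta (X_1(z) - X_0)) lies between
# e^{-2c} and e^{2c}, which is what gives mutual absolute continuity
W = [math.exp(zeta * (x - X0)) for x in X1]
assert all(math.exp(-2.0) <= w <= math.exp(2.0) for w in W)
# W_0 averages to one against the law of z by construction
assert abs(sum(W) / len(W) - 1.0) < 1e-9
```

In the actual argument $c = c_1 + c_2$ is random but ${\cal F}$-measurable, and the same two-sided bound on the densities yields the mutual absolute continuity used above.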
Then, we can rewrite (\ref{xichi}) for $[p+1]\preceq \alpha\in \mathbb{N}^r$ as \begin{equation} \xi_i^{\alpha} = \xi_i \bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr),\,\, A_i^{\alpha} =\chi_i \bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr). \label{Sec5xichi} \end{equation} Since in the previous section we set $n=2$, we have $$ A^{\alpha} = \sum_{i\leq 2} \chi_i \bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr). $$ Because of the absence of the random variables $\omega_\alpha$ for $[p+1]\prec \alpha$, the integration in $z_\alpha$ in the recursive definition (\ref{Sec3Xp}) will decouple when $[p+1]\prec \alpha$ into integration over $z_\alpha^1$ and $z_\alpha^2$. Namely, let $\chi_{i,r}=\chi_i$ and, for $[p+1] \preceq \alpha$, let us define by decreasing induction on $|\alpha|$, \begin{equation} \chi_{i,|\alpha|-1}\bigl( \omega_{\preceq [p+1]}, z^i_{\prec \alpha }\bigr) = \frac{1}{\zeta_{|\alpha|-1}} \log \mathbb{E}_{z_{\alpha}^i} \exp \zeta_{|\alpha|-1} \chi_{i,|\alpha|}\bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr). \label{chij} \end{equation} First of all, for $[p+1]\prec \alpha$, by decreasing induction on $|\alpha|$, \begin{align} X_{|\alpha|-1}\bigl( z_{\prec \alpha}\bigr) = &\ \frac{1}{\zeta_{|\alpha|-1}} \log \mathbb{E}_{z_{\alpha}} \exp \zeta_{|\alpha|-1} X_{|\alpha|}\bigl( z_{\preceq \alpha}\bigr) \nonumber \\ \{\mbox{induction assumption}\}= &\ \frac{1}{\zeta_{|\alpha|-1}} \log \mathbb{E}_{z_{\alpha}} \exp \zeta_{|\alpha|-1} \sum_{i\leq 2} \chi_{i,|\alpha|}\bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr) \nonumber \\ \{\mbox{independence}\}= &\ \frac{1}{\zeta_{|\alpha|-1}} \log \prod_{i\leq 2} \mathbb{E}_{z_{\alpha}^i} \exp \zeta_{|\alpha|-1} \chi_{i,|\alpha|}\bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr) \nonumber \\ \{\mbox{definition (\ref{chij})}\}= &\ \sum_{i\leq 2} \chi_{i,|\alpha|-1}\bigl( \omega_{\preceq [p+1]}, z^i_{\prec \alpha}\bigr). 
\label{Xichii} \end{align} When we do the same computation for $\alpha = [p+1]$, the expectation $\mathbb{E}_{z_{[p+1]}}$ also involves $\omega_{[p+1]}$, so we end up with \begin{align} X_{p}\bigl( z_{\preceq [p]}\bigr) = &\ \frac{1}{\zeta_p} \log \mathbb{E}_{\omega_{[p+1]}}\prod_{i\leq 2} \mathbb{E}_{z_{[p+1]}^i} \exp \zeta_p \chi_{i,p+1}\bigl( \omega_{\preceq [p+1]}, z^i_{\preceq [p+1]}\bigr) \nonumber \\ \{\mbox{definition (\ref{chij})}\} = &\ \frac{1}{\zeta_p} \log \mathbb{E}_{\omega_{[p+1]}} \exp \zeta_p \sum_{i\leq 2} \chi_{i,p}\bigl( \omega_{\preceq [p+1]}, z^i_{\preceq [p]}\bigr). \label{Sec5Xp} \end{align} For $[p+1]\preceq \alpha$, let us define for $i=1,2$, \begin{equation} W_{|\alpha|-1}^i \bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr) = \exp \zeta_{|\alpha|-1}\Bigl(\chi_{i,|\alpha|}\bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr) - \chi_{i,|\alpha|-1}\bigl( \omega_{\preceq [p+1]}, z^i_{\prec \alpha}\bigr) \Bigr). \label{Sec5Walphai} \end{equation} Comparing this with the definition (\ref{ch31Wp}) and using (\ref{Xichii}) we get that for $[p+1]\prec \alpha$, \begin{equation} W_{|\alpha|-1} \bigl( z_{\preceq \alpha} \bigr) = \prod_{i\leq 2} W_{|\alpha|-1}^i \bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr). \label{Sec5Walpha} \end{equation} For $\alpha = [p+1]$ this is no longer true, but if we denote \begin{equation} {\xoverline{W}\hspace{-0.6mm}}_{p} = {\xoverline{W}\hspace{-0.6mm}}_{p} \bigl( \omega_{ [p+1]}, z_{\preceq [p]}\bigr) = \exp \zeta_p\Bigl(\sum_{i\leq 2}\chi_{i,p}\bigl( \omega_{\preceq [p+1]}, z^i_{\preceq [p]}\bigr) - X_{p}(z_{\preceq [p]}) \Bigr) \label{Sec5Q} \end{equation} then we can write \begin{equation} W_p \bigl( z_{\preceq [p+1]} \bigr) = {\xoverline{W}\hspace{-0.6mm}}_{p} \bigl( \omega_{ [p+1]}, z_{\preceq [p]}\bigr) \prod_{i\leq 2} W_p^i \bigl( \omega_{\preceq [p+1]}, z^i_{\preceq [p+1]}\bigr).
\label{Sec5Wp} \end{equation} If, similarly to (\ref{Sec5Wa}) and (\ref{Sec5WpC}), we denote \begin{equation} W_\alpha^i = W_{|\alpha|-1}^i \bigl( \omega_{\preceq [p+1]}, z^i_{\preceq \alpha}\bigr) \,\,\mbox{ and }\,\, W_{[p],{C}}^i = \prod_{\alpha\in p([p],{C})} W_\alpha^i \label{Sec5WpCi} \end{equation} then we can rewrite (\ref{Sec5WpC}) as \begin{equation} W_{[p],{C}} = {\xoverline{W}\hspace{-0.6mm}}_{p} \prod_{i\leq 2} W_{[p],{C}}^{i}. \label{Sec5Wpr} \end{equation} Notice that ${\xoverline{W}\hspace{-0.6mm}}_{p}$ does not depend on $z_\beta^i$ for $[p]\prec \beta$, while $W_{[p],{C}}^{1}$ and $\xi_1^{{C}}$ in (\ref{Sec5eq1}) do not depend on $z_\beta^2$ for $[p]\prec \beta$ and $W_{[p],{C}}^{2}$ and $\xi_2^{{C}}$ do not depend on $z_\beta^1$ for $[p]\prec \beta$. This means that if we denote by ${\mathbb{E}_{[p],i}}$ the expectation in the random variables $z_\beta^i$ for $[p]\prec \beta$ and denote \begin{equation} \eta_i^{{C}} = {\mathbb{E}_{[p],i}}\, \xi_i^{{C}} W_{[p],{C}}^{i} \label{Sec5eqsecond} \end{equation} then (\ref{Sec5eq1}) can be rewritten as \begin{equation} \mathbb{E}_{\omega_{[p+1]}} \eta_1^{{C}} \eta_2^{{C}} {\xoverline{W}\hspace{-0.6mm}}_{p} = \mathbb{E}_{\omega_{[p+1]}} \eta_1^{{C}} {\xoverline{W}\hspace{-0.6mm}}_{p} \, \mathbb{E}_{\omega_{[p+1]}} \eta_2^{{C}} {\xoverline{W}\hspace{-0.6mm}}_{p} \label{Sec5eq2} \end{equation} almost surely, because after averaging ${\mathbb{E}_{[p],i}}$ in $z_\beta^i$ for $[p]\prec \beta$, the only random variable left to be averaged in ${\mathbb{E}_{[p]}}$ is $\omega_{[p+1]}$. So far we have just rewritten the equation (\ref{Sec5eq1}) under the induction assumption (\ref{indass}). Now, we will use this to prove the main result of this section. Let us make the following simple observation: recalling the definition of $A_i^\alpha({\varepsilon})$ in (\ref{Aieps}), if we set all but a finite number of the random variables $(\pi_i(d))_{d\geq 2}$ to zero, the equation (\ref{Sec5eq2}) still holds almost surely.
To see this, first of all, notice that because the random variables $\pi_i(d)$ take any natural value with positive probability, we can set a finite number of them to any values we like in (\ref{Sec5eq2}). For example, for any $D,D'\geq 2$, we can set $\pi_1(d)=\pi_2(d)=n_d$ for $d\leq D$ and set $\pi_1(d)=\pi_2(d)=0$ for $D< d< D'$. The remaining part of the last term in $A_i^\alpha({\varepsilon})$ can be bounded uniformly by $$ \sum_{d\geq D'} \sum_{k\leq \pi_i(d)} \| \theta_{k,i}^d \|_\infty \leq \sum_{d\geq D'} \sum_{k\leq \pi_i(d)}|g_{k,i}^d|, $$ where, by the assumption (\ref{gsvar}), we have $\mathbb{E} (g_{k,i}^d)^2 \leq 2^{-d}\epsilon^{\mathrm{pert}},$ which implies that this sum goes to zero almost surely as $D'$ goes to infinity. It follows immediately from this that we can set all but a finite number of $\pi_i(d)$ in (\ref{Sec5eq2}) to zero. Moreover, we will set $\pi_1(\lambda K) = \pi_2(\lambda K) = 0$, since the terms coming from the model Hamiltonian will play no role in the proof; all the information we need is encoded in the perturbation Hamiltonian. From now on we will assume that in (\ref{Sec5eq2}), for a given $D\geq 2$, \begin{equation} \pi_1(\lambda K) = \pi_2(\lambda K) = 0,\,\, \pi_1(d)=\pi_2(d)=n_d \,\,\mbox{ for }\,\, d\leq D,\,\, \pi_1(d)=\pi_2(d)=0 \,\,\mbox{ for }\,\, d>D. \label{fixPoisson} \end{equation} In addition, let us notice that both sides of (\ref{Sec5eq2}) are continuous functions of the variables $g_i^1$ and $g_{k,i}^d$ for $k\leq n_d$ for $d\leq D$, $i=1,2$. This implies that, almost surely over the other random variables, the equation (\ref{Sec5eq2}) holds for all $g_{k,i}^d$ and, in particular, we can set them to be equal to any prescribed values, \begin{equation} g_1^1 = g_2^1 = g^1,\,\, g_{k,1}^d=g_{k,2}^d=g_k^d. \label{fixgs} \end{equation} The following is the main result of this section. \begin{theorem}\label{Sec5Th} The random variables $\eta_i^{{C}}$ do not depend on $\omega_{[p+1]}$.
\end{theorem} Here and below, when we say that a function (or random variable) does not depend on a certain coordinate, this means that the function is equal to the average over that coordinate almost surely. In this case, we want to show that $$ \eta_i^{{C}} = \mathbb{E}_{\omega_{[p+1]}} \eta_i^{{C}} $$ almost surely. \medskip \noindent \textbf{Proof of Theorem \ref{Sec5Th}.} Besides the Poisson and Gaussian random variables in (\ref{fixPoisson}), (\ref{fixgs}) and the random variable $\omega_{[p+1]}$ over which we average in (\ref{Sec5eq2}), the random variables $\eta_i^{{C}}$ for $i=1,2$ depend on $\omega_*$, $\omega_{\preceq [p]}$, $(\omega_*^I)_{I\in {\cal I}_i}$ and $z_{\preceq [p]}^i$, and ${\xoverline{W}\hspace{-0.6mm}}_{p}$ depends on the same random variables for both $i=1,2$. Let us denote $$ u_i = \bigl( (\omega_*^I)_{I\in {\cal I}_i}, z_{\preceq [p]}^i \bigr). $$ We already stated that for almost all $\omega_*$, $\omega_{\preceq [p]}$, $u_1$ and $u_2$, the equation (\ref{Sec5eq2}) holds for all Poisson and Gaussian random variables fixed as in (\ref{fixPoisson}), (\ref{fixgs}). Therefore, for almost all $\omega_*$, $\omega_{\preceq [p]}$ the equation (\ref{Sec5eq2}) holds for almost all $u_1$, $u_2$ and for all Poisson and Gaussian random variables fixed as in (\ref{fixPoisson}), (\ref{fixgs}). Let us fix any such $\omega_*$, $\omega_{\preceq [p]}$. Then, we can write $$ \eta_i^{{C}} = \varphi(u_i,\omega_{[p+1]}) \ \mbox{ and}\ {\xoverline{W}\hspace{-0.6mm}}_{p} = \psi(u_1,u_2,\omega_{[p+1]}) $$ for some functions $\varphi$ and $\psi$. These functions depend implicitly on all the random variables we fixed, and the function $\varphi$ is the same for both $\eta_1^{{C}}$ and $\eta_2^{{C}}$ because we fixed all Poisson and Gaussian random variables in (\ref{fixPoisson}), (\ref{fixgs}) to be the same for $i=1,2$. 
The equation (\ref{Sec5eq2}) can be written as (in the rest of this proof, let us for simplicity of notation write $\omega$ instead of $\omega_{[p+1]}$) \begin{equation} \mathbb{E}_{\omega} \varphi(u_1,\omega) \varphi(u_2,\omega) \psi(u_1,u_2,\omega) = \prod_{i=1,2}\mathbb{E}_{\omega} \varphi(u_i,\omega) \psi(u_1,u_2,\omega) \label{Sec5eq3} \end{equation} for almost all $u_1,u_2$. We want to show that $\varphi(u,\omega)$ does not depend on $\omega$. If we denote $$ c := |g_1^1| + |g_2^1| + \sum_{d\leq D} \sum_{k\leq n_d} |g_{k}^d| $$ then, by (\ref{fixPoisson}) and (\ref{fixgs}), we can bound $A_i^\alpha({\varepsilon})$ in (\ref{Aieps}) by $|A_i^\alpha({\varepsilon})| \leq c$ for $i=1,2$. By induction in (\ref{chij}), $|\chi_{i,|\alpha|}|\leq c$ and, by (\ref{Sec5Xp}), $|X_{p}|\leq 2c$. Therefore, from the definition of ${\xoverline{W}\hspace{-0.6mm}}_{p}$ in (\ref{Sec5Q}), \begin{equation} e^{-4c}\leq {\xoverline{W}\hspace{-0.6mm}}_{p} = \psi(u_1,u_2,\omega) \leq e^{4c}. \label{Sec5eq5} \end{equation} Of course, $|\varphi|\leq 1$. Suppose that for some ${\varepsilon}>0$, there exists a set $U$ of positive measure such that the variance $\mbox{Var}_{\omega}(\varphi(u,\omega))\geq {\varepsilon}$ for $u\in U.$ Given $\delta>0$, let $(S_\ell)_{\ell\geq 1}$ be a partition of $L^1([0,1],d\omega)$ such that $\mbox{diam}(S_\ell)\leq \delta$ for all $\ell.$ Let $$ U_\ell = \bigl\{ u \ |\ \varphi(u,\,\cdot\,) \in S_\ell \bigr\}. $$ For some $\ell$, the measure of $U\cap U_\ell$ will be positive, so for some $u_1,u_2\in U$, \begin{equation} \mathbb{E}_\omega | \varphi(u_1,\omega) - \varphi(u_2,\omega) | \leq \delta.
\label{Sec5eq6} \end{equation} The equations (\ref{Sec5eq5}) and (\ref{Sec5eq6}) imply that $$ \bigl| \mathbb{E}_\omega \varphi(u_1,\omega) \psi(u_1,u_2,\omega) - \mathbb{E}_\omega \varphi(u_2,\omega) \psi(u_1,u_2,\omega) \bigr| \leq e^{4c}\delta $$ and, similarly, $$ \bigl| \mathbb{E}_\omega \varphi(u_1,\omega) \varphi(u_2,\omega) \psi(u_1,u_2,\omega) - \mathbb{E}_\omega \varphi(u_1,\omega)^2 \psi(u_1,u_2,\omega) \bigr| \leq e^{4c}\delta. $$ Since $|\varphi|\leq 1$ and $\mathbb{E}_\omega \psi =1$, the first inequality implies that $$ \Bigl| \prod_{i=1,2}\mathbb{E}_\omega \varphi(u_i,\omega) \psi(u_1,u_2,\omega) - \bigl(\mathbb{E}_\omega \varphi(u_1,\omega) \psi(u_1,u_2,\omega) \bigr)^2 \Bigr| \leq e^{4c}\delta, $$ which, together with the second inequality and (\ref{Sec5eq3}), implies $$ \mathbb{E}_\omega \varphi(u_1,\omega)^2 \psi(u_1,u_2,\omega) - \bigl(\mathbb{E}_\omega \varphi(u_1,\omega) \psi(u_1,u_2,\omega) \bigr)^2 \leq 2 e^{4c}\delta. $$ The left-hand side is a variance with the density $\psi$ and can be written using replicas as $$ \frac{1}{2} \iint\! \bigl(\varphi(u_1,x)- \varphi(u_1,y)\bigr)^2 \psi(u_1,u_2,x)\psi(u_1,u_2,y)\,dx dy. $$ By (\ref{Sec5eq5}) and the fact that $u_1\in U$, we can bound this from below by $$ \frac{1}{2}e^{-8c} \iint\! \bigl(\varphi(u_1,x)- \varphi(u_1,y)\bigr)^2 \,dx dy = e^{-8c}\mbox{Var}_{\omega}(\varphi(u_1,\omega)) \geq e^{-8c}{\varepsilon}. $$ Comparing the lower and upper bounds, $e^{-8c}{\varepsilon} \leq 2 e^{4c}\delta$, we arrive at a contradiction, since $\delta>0$ was arbitrary. Therefore, $\mbox{Var}_{\omega}(\varphi(u,\omega)) = 0$ for almost all $u$ and this finishes the proof. \qed \section{A representation formula via properties of the RPC}\label{Sec6label} Let us summarize what we proved in the previous section.
We considered ${C} = \{\alpha_1,\ldots,\alpha_k\}$ for some $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$ and assumed that ${\mathbb{E}_{[p],i}} \prod_{\ell\leq k}{\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell}$ does not depend on $\omega_{[p+1]}$. Then, as a consequence of this and the cavity equations, we showed that \begin{equation} \eta_i^{{C}} = {\mathbb{E}_{[p],i}}\, \xi_i^{{C}} W_{[p],{C}}^{i} \label{Sec6eqsecond} \end{equation} also does not depend on $\omega_{[p+1]}$ almost surely, where \begin{equation} W_{[p],{C}}^i = \prod_{\alpha\in p([p],{C})} W_\alpha^i \label{Sec6WpCi} \end{equation} and $p([p],{C}) = \bigl\{\beta \ \bigr|\ [p+1]\preceq \beta\preceq \alpha, \alpha\in{C} \bigr\}$. Moreover, this holds for Poisson and Gaussian random variables fixed to arbitrary values as in (\ref{fixPoisson}), (\ref{fixgs}). By the assumption (M), the same statement holds if we replace ${C}$ by any subset ${C'}\subseteq C$. In this section, we will represent the expectation ${\mathbb{E}_{[p],i}}$ in (\ref{Sec6eqsecond}) with respect to $z_\alpha^i$ for $[p]\prec \alpha$ by using the property of the Ruelle probability cascades in Theorem \ref{Th4label}. Essentially, the expectation in (\ref{Sec6eqsecond}) is of the same type as (\ref{Sec2expectW}) if we think of the vertex $[p]$ as a root. Indeed, we are averaging over random variables indexed by the vertices $[p]\prec \alpha$ which form a tree (if we include the root $[p]$) isomorphic to a tree $ \Gamma = \mathbb{N}^0\cup \mathbb{N}^1 \cup \ldots \cup \mathbb{N}^{r-p} $ of depth $r-p$. We can identify a vertex $[p]\preceq \alpha \in {\cal A}$ with the vertex $\{*\} \preceq \gamma \in \Gamma$ such that $\alpha = [p]\gamma$ (for simplicity, we denote by $[p]\gamma$ the concatenation $([p],\gamma)$).
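For concreteness, here is a small instance of this identification; the specific values $r=3$, $p=1$ and $\gamma=(2,5)$ are purely illustrative.

```latex
% Illustrative example of the identification \alpha = [p]\gamma (values chosen arbitrarily).
If $r=3$ and $p=1$ then $\Gamma = \mathbb{N}^0\cup \mathbb{N}^1\cup \mathbb{N}^2$ has depth
$r-p=2$, the root $*$ of $\Gamma$ corresponds to the vertex $[1]$ itself, and the vertex
$\gamma=(2,5)\in\mathbb{N}^2$ of $\Gamma$ is identified with
\begin{equation*}
\alpha = [1]\gamma = \bigl([1],2,5\bigr) \in \mathbb{N}^3 .
\end{equation*}
```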
Similarly to (\ref{indass}), let us define for $\gamma\in\mathbb{N}^{r-p}$, \begin{equation} {\xoverline{s}\hspace{0.1mm}}_I^\gamma = h\bigl( (\omega_\beta)_{\beta\preceq [p]},\omega_{[p+1]}, (\omega_\beta^I)_{\beta\preceq [p]},(\omega_{[p]\beta}^I)_{*\prec \beta \preceq \gamma} \bigr). \label{Sec6sialpha} \end{equation} Notice a subtle point here: the random variables ${\xoverline{s}\hspace{0.1mm}}^\gamma_I$ are not exactly the same as ${\xoverline{s}\hspace{0.1mm}}^\alpha_I$ in (\ref{indass}) for $\alpha = [p]\gamma$. They are exactly the same only if $[p+1]\preceq \alpha$, and in this case we will often write ${\xoverline{s}\hspace{0.1mm}}_I^\gamma= {\xoverline{s}\hspace{0.1mm}}_I^{[p]\gamma}.$ Otherwise, if $[p+j]\preceq \alpha$ for $j\geq 2$ then in (\ref{indass}) we plug in the random variable $\omega_{[p+j]}$ instead of $\omega_{[p+1]}$ as we did in (\ref{Sec6sialpha}). The reason for this will become clear soon but, basically, we are going to represent the average ${\mathbb{E}_{[p],i}}$ in (\ref{Sec6eqsecond}) with respect to $\omega_{[p]\beta}^I$ for $\{*\}\prec \beta$ using the Ruelle probability cascades while $\omega_{[p+1]}$ appears in (\ref{Sec6eqsecond}) on the outside of this average. Similarly to (\ref{Aieps}) -- (\ref{Sec3xiialpha}), let us define for $\gamma\in \mathbb{N}^{r-p}$, \begin{align} A_i^\gamma({\varepsilon}) \ =\ & \theta^1({\varepsilon}) +\sum_{2\leq d\leq D} \sum_{k\leq n_d}\theta_{k}^d({\xoverline{s}\hspace{0.1mm}}^\gamma_{{1,d,k,i}},\ldots, {\xoverline{s}\hspace{0.1mm}}^\gamma_{{d-1,d,k,i}},{\varepsilon}), \label{Sec6Aieps} \\ A_i^\gamma \ =\ & \log {\rm Av} \exp A_i^\gamma({\varepsilon}), \label{Sec6Aialpha} \\ \xi_i^\gamma \ =\ & \frac{{\rm Av} {\varepsilon} \exp A_i^\gamma({\varepsilon}) }{\exp A_i^\gamma}. 
\label{Sec6xiialpha} \end{align} The reason why (\ref{Sec6Aieps}) looks different from (\ref{Aieps}) is that we fixed Poisson and Gaussian random variables as in (\ref{fixPoisson}), (\ref{fixgs}), so $\theta^1$ and $\theta_{k}^d$ are defined in terms of $g^1$ and $g_k^d$ in (\ref{fixgs}). Again, let us emphasize one more time that all these definitions coincide with the old ones when $[p+1]\preceq [p]\gamma$ or, equivalently, when $[1]\preceq \gamma.$ We will keep the dependence on the random variables $(\omega_\beta)_{\beta\preceq [p]},\omega_{[p+1]}, (\omega_\beta^I)_{\beta\preceq [p]}$ implicit and, similarly to (\ref{Sec5xichi}), we will write for $\gamma\in\mathbb{N}^{r-p}$, \begin{equation} A_i^{\gamma} =\chi_i \bigl((z_{[p]\beta}^i)_{*\prec \beta \preceq \gamma}\bigr). \label{Sec6xichi} \end{equation} Let $\chi_{i,r}=\chi_i$ and define for $\gamma\in \Gamma\setminus\{*\}$ by decreasing induction on $|\gamma|$, $$ \chi_{i,p+|\gamma|-1}\bigl( (z_{[p]\beta}^i)_{*\prec \beta \prec \gamma} \bigr) = \frac{1}{\zeta_{p+|\gamma|-1}} \log \mathbb{E}_{z_{[p]\gamma}^i} \exp \zeta_{p+|\gamma|-1} \chi_{i,p+|\gamma|}\bigl( (z_{[p]\beta}^i)_{*\prec \beta \preceq \gamma}\bigr) $$ and $$ W_{p+|\gamma|-1}^i \bigl( (z_{[p]\beta}^i)_{*\prec \beta \preceq \gamma} \bigr) = \exp \zeta_{p+|\gamma|-1}\Bigl(\chi_{i,p+|\gamma|}\bigl( (z_{[p]\beta}^i)_{*\prec \beta \preceq \gamma}\bigr) - \chi_{i,p+ |\gamma|-1}\bigl( (z_{[p]\beta}^i)_{*\prec \beta \prec \gamma}\bigr) \Bigr). $$ For $[1]\preceq \gamma$ these are exactly the same definitions as in (\ref{chij}) and (\ref{Sec5Walphai}), but here we extend these definitions to all $\gamma\in \Gamma\setminus \{*\}.$ \smallskip Let $\tilde{z}_{[p]\beta}^i = (\tilde{\omega}_{[p]\beta}^I)_{I\in{\cal I}_i}$ for $\beta \in \Gamma\setminus\{*\}$ be the array generated according to these changes of density along the tree $\Gamma$ as in Section \ref{Sec2ilabel}.
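One consequence of these definitions worth recording (it mirrors the normalization $\mathbb{E}_\omega \psi = 1$ used in the proof of Theorem \ref{Sec5Th}) is that each change of density integrates to one; a one-line check:

```latex
% Each change of density W^i_{p+|\gamma|-1} has mean one under \mathbb{E}_{z_{[p]\gamma}^i}.
\begin{equation*}
\mathbb{E}_{z_{[p]\gamma}^i} W_{p+|\gamma|-1}^i
= e^{-\zeta_{p+|\gamma|-1}\, \chi_{i,p+|\gamma|-1}}\,
  \mathbb{E}_{z_{[p]\gamma}^i} e^{\zeta_{p+|\gamma|-1}\, \chi_{i,p+|\gamma|}}
= 1,
\end{equation*}
% since, by the recursive definition, \exp \zeta_{p+|\gamma|-1} \chi_{i,p+|\gamma|-1}
% = \mathbb{E}_{z_{[p]\gamma}^i} \exp \zeta_{p+|\gamma|-1} \chi_{i,p+|\gamma|}.
```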
Since $[p]$ acts as a root, we do not generate any $\tilde{\omega}_{\beta}^I$ for $|\beta|\leq p$. Similarly to (\ref{Sec4tildas}), we can write for $\gamma\in\mathbb{N}^{r-p}$, \begin{equation} \xi_i^\gamma = \xi_i \bigl( (z_{[p]\beta}^i)_{*\prec \beta \preceq \gamma}\bigr),\,\, \tilde{\xi}_i^\gamma = \xi_i \bigl( (\tilde{z}_{[p]\beta}^i)_{*\prec \beta \preceq \gamma} \bigr), \label{Sec6tildas} \end{equation} where we continue to keep the dependence on $(\omega_\beta)_{\beta\preceq [p]},$ $\omega_{[p+1]}$ and $(\omega_\beta^I)_{\beta\preceq [p]}$, as well as Poisson and Gaussian random variables we fixed above, implicit. Given the vertices $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$ let $[1] \preceq \gamma_1,\ldots,\gamma_k \in \mathbb{N}^{r-p}$ be such that $\alpha_\ell = [p]\gamma_\ell.$ Then we can write $\eta_i^{C}$ in (\ref{Sec6eqsecond}) as \begin{equation} \eta_i^{C} = {\mathbb{E}_{[p],i}}\, \xi_i^{\alpha_1}\cdots \xi_i^{\alpha_k} W_{[p],{C}}^{i} = \mathbb{E}_{*,i}\, {\tilde{\xi}}_i^{\gamma_1}\cdots {\tilde{\xi}}_i^{\gamma_k}, \label{Sec6rep1} \end{equation} where $\mathbb{E}_{*,i}$ denotes the expectation in $\tilde{z}_{[p]\beta}^i$ for $\beta \in \Gamma\setminus\{*\}$. Below we will represent this quantity using the analogue of Theorem \ref{Th4label}. Let $(v_{\gamma})_{\gamma\in\mathbb{N}^{r-p}}$ be the weights of the Ruelle probability cascades corresponding to the parameters \begin{equation} 0<\zeta_{p}<\ldots<\zeta_{r-1}<1, \end{equation} let $(V_\gamma)_{\gamma\in\mathbb{N}^{r-p}}$ be their rearrangement as in (\ref{Vs2}) and, similarly to (\ref{tildeVs}), define \begin{equation} \tilde{V}_\gamma = \frac{V_\gamma \exp A_i^\gamma}{\sum_{\gamma\in \mathbb{N}^{r-p}}V_\gamma \exp A_i^\gamma}. \label{Sec6tildeVs} \end{equation} Theorem \ref{Th4label} can be formulated in this case as follows. 
\begin{theorem}\label{Sec6Th4label} There exists a random bijection $\rho:\Gamma\to\Gamma$ of the vertices of the tree $\Gamma$, which preserves the parent-child relationship, such that \begin{equation} \bigl(\tilde{V}_{\rho(\gamma)}\bigr)_{\gamma\in\mathbb{N}^{r-p}} \stackrel{d}{=} \bigl(V_\gamma\bigr)_{\gamma\in\mathbb{N}^{r-p}},\,\, \bigl(z^i_{[p]\rho(\gamma)} \bigr)_{\gamma\in \Gamma\setminus \{*\}} \stackrel{d}{=} \bigl(\tilde{z}^i_{[p]\gamma} \bigr)_{\gamma\in \Gamma\setminus \{*\}} \label{Sec6Th4eq} \end{equation} and these two arrays are independent of each other. \end{theorem} The expectation $\mathbb{E}_{*,i}$ in (\ref{Sec6rep1}) depends on $\gamma_1,\ldots, \gamma_k$ only through their overlaps $(\gamma_\ell\wedge \gamma_{\ell'})_{\ell,\ell'\leq k}$. Above, we made the specific choice $\alpha_\ell = [p]\gamma_\ell$, which implies $$ \gamma_\ell\wedge \gamma_{\ell'} = q_{\ell,\ell'} := \alpha_\ell\wedge \alpha_{\ell'} - p. $$ Now, consider the set of arbitrary configurations with these overlaps, $$ {\cal C}= \bigl\{ (\gamma_1,\ldots, \gamma_k) \in (\mathbb{N}^{r-p})^k \ \bigr|\ \gamma_\ell\wedge \gamma_{\ell'} = q_{\ell,\ell'} \bigr\}. $$ Let us denote by $\mathbb{E}_*$ the expectation in $(V_\gamma)_{\gamma\in\mathbb{N}^{r-p}}$ in addition to $\tilde{z}_{[p]\beta}^i$ for $\beta \in \Gamma\setminus\{*\}$ in the definition of $\mathbb{E}_{*,i}$. We will also denote by $\mathbb{E}_*$ the expectation in $(V_\gamma)_{\gamma\in\mathbb{N}^{r-p}}$ and $z_{[p]\beta}^i$ for $\beta \in \Gamma\setminus\{*\}$.
Using Theorem \ref{Sec6Th4label} and arguing as in the proof of Theorem \ref{Sec4Th}, \begin{align} \mathbb{E}_* \sum_{(\gamma_1,\ldots, \gamma_k)\in {\cal C}} \tilde{V}_{\gamma_1}\cdots \tilde{V}_{\gamma_k} \, \xi_i^{\gamma_1}\cdots \xi_i^{\gamma_k} \ = & \ \mathbb{E}_* \sum_{(\gamma_1,\ldots, \gamma_k)\in {\cal C}} V_{\gamma_1}\cdots V_{\gamma_k} \, \mathbb{E}_* {\tilde{\xi}}_i^{\gamma_1}\cdots {\tilde{\xi}}_i^{\gamma_k} \nonumber \\ \ = & \ \eta_i^{C}\ \mathbb{E}_* \sum_{(\gamma_1,\ldots, \gamma_k)\in {\cal C}} V_{\gamma_1}\cdots V_{\gamma_k}. \label{Sec6repres} \end{align} Let us rewrite this equation using a more convenient notation. Let $\sigma^1,\ldots,\sigma^k$ be i.i.d. replicas drawn from $\mathbb{N}^{r-p}$ according to the weights $(V_\gamma)_{\gamma\in\mathbb{N}^{r-p}}$ and let $\langle\,\cdot\, \rangle$ denote the average with respect to these weights. If we denote $Q^k = (\sigma^\ell\wedge \sigma^{\ell'})_{\ell,\ell'\leq k}$ and $Q = (q_{\ell,\ell'})_{\ell,\ell'\leq k}$ then we can write $$ \mathbb{E}_* \sum_{(\gamma_1,\ldots, \gamma_k)\in {\cal C}} V_{\gamma_1}\cdots V_{\gamma_k} = \mathbb{E}_* \bigl\langle {\rm I}(Q^k = Q) \bigr\rangle = \mathbb{P}(Q^k = Q) $$ and $$ \mathbb{E}_* \sum_{(\gamma_1,\ldots, \gamma_k)\in {\cal C}} \tilde{V}_{\gamma_1}\cdots \tilde{V}_{\gamma_k} \, \xi_i^{\gamma_1}\cdots \xi_i^{\gamma_k} = \mathbb{E}_* \frac{\bigl\langle \xi_i^{\sigma^1}\cdots \xi_i^{\sigma^k} {\rm I}(Q^k = Q) \exp \sum_{\ell\leq k} A_i^{\sigma^\ell} \bigr\rangle}{\bigl\langle \exp A_i^{\sigma} \bigr\rangle^k}, $$ and we can rewrite (\ref{Sec6repres}) above as $$ \mathbb{E}_* \frac{\bigl\langle \prod_{\ell\leq k} \xi_i^{\sigma^\ell} {\rm I}(Q^k = Q) \exp \sum_{\ell\leq k} A_i^{\sigma^\ell} \bigr\rangle}{\bigl\langle \exp A_i^{\sigma} \bigr\rangle^k} = \eta_i^{C} \mathbb{P}(Q^k = Q). 
$$ Notice that this computation also works if we replace each factor $\xi_i^{\gamma_\ell}$ in (\ref{Sec6repres}) by any power $(\xi_i^{\gamma_\ell})^{n_\ell}$ and, in particular, by setting $n_\ell = 0$ or $1$ we get the following. For a subset $S\subseteq \{1,\ldots, k\}$, let us denote $C(S)=\{\gamma_\ell \ | \ \ell\in S\}.$ Then $$ \mathbb{E}_* \frac{\bigl\langle \prod_{\ell\in S} \xi_i^{\sigma^\ell} {\rm I}(Q^k = Q) \exp \sum_{\ell\leq k} A_i^{\sigma^\ell} \bigr\rangle}{\bigl\langle \exp A_i^{\sigma} \bigr\rangle^k} = \eta_i^{C(S)} \mathbb{P}(Q^k = Q) . $$ Furthermore, it will be convenient to rewrite the left hand side using (\ref{Sec6Aialpha}) and (\ref{Sec6xiialpha}) as $$ \mathbb{E}_* \frac{\bigl\langle {\rm Av} \prod_{\ell\in S} {\varepsilon}_\ell \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell)\, {\rm I}(Q^k = Q) \bigr\rangle}{\bigl\langle {\rm Av}\, \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell) \bigr\rangle}. $$ We showed as a consequence of the assumption (M) that all $\eta_i^{C(S)}$ do not depend on $\omega_{[p+1]}$ and, therefore, we proved the following. \begin{theorem}\label{Sec6Thend} Under the assumption (M), for any subset $S\subseteq \{1,\ldots, k\}$, \begin{equation} \mathbb{E}_* \frac{\bigl\langle {\rm Av} \prod_{\ell\in S} {\varepsilon}_\ell \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell)\, {\rm I}(Q^k = Q) \bigr\rangle}{\bigl\langle {\rm Av}\, \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell) \bigr\rangle} \label{Sec6ThendEq} \end{equation} does not depend on $\omega_{[p+1]}$ almost surely. 
\end{theorem} \section{Generating Gaussian random fields}\label{Sec7label} We begin by simplifying (\ref{Sec6Aieps}) further: we take $n_2=1$ and, for one fixed $d\geq 3$, set $n_d = n$ and all other $n_{d'}=0$, so that $$ A_i^\gamma({\varepsilon}) = \theta^1({\varepsilon}) + \theta_{1}^2({\xoverline{s}\hspace{0.1mm}}^\gamma_{{1,2,1,i}},{\varepsilon}) +\sum_{j\leq n}\theta_{j}^d({\xoverline{s}\hspace{0.1mm}}^\gamma_{{1,d,j,i}},\ldots, {\xoverline{s}\hspace{0.1mm}}^\gamma_{{d-1,d,j,i}},{\varepsilon}). $$ One can easily see that the definition of $\theta^d$ in (\ref{thetadetx}) implies that, for ${\varepsilon}\in \{-1,+1\},$ $$ \theta^d(x_1,\ldots,x_{d-1},{\varepsilon}) =\frac{1+{\varepsilon}}{2} \log \Bigl( 1+(e^{g^d}-1) \frac{1+x_{1}}{2}\cdots \frac{1+x_{d-1}}{2} \Bigr) $$ and, therefore, we can rewrite $$ A_i^\gamma({\varepsilon}) = \frac{1+{\varepsilon}}{2}\Bigl( g^1 + \log\Bigl(1+(e^{g_1^2}-1)\frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{1,2,1,i}}{2}\Bigr) +\sum_{j\leq n} \log\Bigl(1+(e^{g_j^d}-1)\prod_{\ell\leq d-1}\frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{\ell,d,j,i}}{2}\Bigr) \Bigr). $$ At this point, for simplicity of notation, we will drop some unnecessary indices. We will write ${\xoverline{s}\hspace{0.1mm}}^\gamma_{i}$ instead of ${\xoverline{s}\hspace{0.1mm}}^\gamma_{1,2,1,i}$ and, since $d$ is fixed for the moment, write ${\xoverline{s}\hspace{0.1mm}}^\gamma_{\ell,j,i}$ instead of ${\xoverline{s}\hspace{0.1mm}}^\gamma_{\ell,d,j,i}$. Also, we will denote $$ x_j:= e^{g_j^d}-1 \in (-1,\infty), \,\, y := e^{g_1^2}-1 \in (-1,\infty). $$ Then, we can write \begin{equation} A_i^\gamma({\varepsilon}) = \frac{1+{\varepsilon}}{2}\Bigl( g^1 + \log\Bigl(1+y \frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{i}}{2}\Bigr) +\sum_{j\leq n} \log\Bigl(1+x_j \prod_{\ell\leq d-1}\frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{\ell,j,i}}{2}\Bigr) \Bigr).
\label{SecA1} \end{equation} By Theorem \ref{Sec6Thend}, under the assumption (M), the quantities in (\ref{Sec6ThendEq}) do not depend on $\omega_{[p+1]}$ almost surely. In particular, as we discussed above, this almost sure statement can be assumed to hold for all $y,x_j \in (-1,\infty)$ by continuity. We will now take \begin{equation} x_j = x\frac{\eta_j}{\sqrt{n}} \label{Sec7etas} \end{equation} for $x\in (-1,1)$ and independent Rademacher random variables $\eta_j$ and show that, by letting $n\to\infty$, we can replace the last sum in (\ref{SecA1}) by a Gaussian field in the statement of Theorem \ref{Sec6Thend}. Let us denote $$ S_{j,i}^\gamma = \prod_{\ell\leq d-1}\frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{\ell,j,i}}{2}. $$ Then, with the choice $x_j = x\eta_j/\sqrt{n}$, we can use Taylor's expansion to write \begin{equation} \sum_{j\leq n} \log\Bigl(1+x_j \prod_{\ell\leq d-1}\frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{\ell,j,i}}{2}\Bigr) = \frac{x}{\sqrt{n}} \sum_{j\leq n} \eta_j S_{j,i}^\gamma - \frac{x^2}{2n} \sum_{j\leq n} (S_{j,i}^\gamma)^2 + O(n^{-1/2}). \label{Sec7napprox} \end{equation} The last term $O(n^{-1/2})$ is uniform in all parameters, so it will disappear in (\ref{Sec6ThendEq}) when we let $n$ go to infinity. For the first term, we will use the classical CLT to replace it by a Gaussian term and for the second term we will use the SLLN, which will produce a term that will cancel out in the numerator and denominator in (\ref{Sec6ThendEq}). However, before we do that, we will need to change the definition of the expectation $\mathbb{E}_*$ slightly. Recall that, by (\ref{Sec6sialpha}), $$ {\xoverline{s}\hspace{0.1mm}}^\gamma_{\ell, j,i} = h\bigl( (\omega_\beta)_{\beta \preceq [p]}, \omega_{[p+1]}, (\omega_\beta^{\ell, j,i})_{\beta \preceq [p]} , (\omega_{[p]\beta}^{\ell, j,i} )_{*\prec \beta \preceq \gamma} \bigr).
$$ In (\ref{Sec6ThendEq}) we already average in the random variables $\omega_{[p]\beta}^{\ell, j,i}$ for $*\prec \beta \preceq \gamma$ but, clearly, the statement of Theorem \ref{Sec6Thend} holds if $\mathbb{E}_*$ also includes the average in $(\omega_\beta^{\ell, j,i})_{\beta\preceq [p]}$ and the Rademacher random variables $\eta_j$ in (\ref{Sec7etas}). From now on we assume this. Note that $\mathbb{E}_*$ still does not include the average with respect to the random variables $(\omega_\beta^i)_{\beta\preceq [p]}$ that appear in ${\xoverline{s}\hspace{0.1mm}}_i^\gamma$ in (\ref{SecA1}), \begin{equation} {\xoverline{s}\hspace{0.1mm}}^\gamma_{i} = h\bigl( (\omega_\beta)_{\beta\preceq [p]}, \omega_{[p+1]}, (\omega_\beta^i)_{\beta\preceq [p]}, (\omega_{[p]\beta}^{i} )_{*\prec \beta \preceq \gamma} \bigr). \label{Sec7sbar} \end{equation} Of course, one cannot apply the CLT and SLLN in (\ref{Sec6ThendEq}) directly, because there are infinitely many terms indexed by $\gamma\in\mathbb{N}^{r-p}$. However, this is not a serious problem because most of the weight of the Ruelle probability cascades $(V_\gamma)$ is concentrated on finitely many indices $\gamma$, and it is not difficult to show that (\ref{Sec6ThendEq}) is well approximated by the analogous quantity in which the series over $\gamma$ is truncated at finitely many terms. Moreover, this approximation is uniform over $n$ in (\ref{Sec7napprox}). This is why the representation of $\eta_i^{C(S)}$ in the previous section using the Ruelle probability cascades plays such a crucial role. We will postpone the details until later in this section and first explain what happens in (\ref{Sec7napprox}) for finitely many $\gamma$.
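Before doing so, let us record where the remainder in (\ref{Sec7napprox}) comes from; a sketch of the term-by-term bookkeeping:

```latex
% Second-order Taylor expansion of each summand; \eta_j^2 = 1 and 0 \le S_{j,i}^\gamma \le 1.
\begin{equation*}
\log\bigl(1+x_j S_{j,i}^\gamma\bigr)
= x_j S_{j,i}^\gamma - \frac{x_j^2}{2}\bigl(S_{j,i}^\gamma\bigr)^2 + O\bigl(|x_j|^3\bigr)
= \frac{x\eta_j}{\sqrt{n}}\, S_{j,i}^\gamma
  - \frac{x^2}{2n}\bigl(S_{j,i}^\gamma\bigr)^2 + O\bigl(n^{-3/2}\bigr),
\end{equation*}
% and summing over j \le n gives the remainder O(n^{-1/2}) in (\ref{Sec7napprox}),
% uniformly in all parameters because |x| < 1.
```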
First of all, by the SLLN, for any fixed $(\omega_\beta)_{\beta\preceq [p]}$ and $\omega_{[p+1]}$, $$ \lim_{n\to\infty}\frac{1}{n} \sum_{j\leq n} (S_{j,i}^\gamma)^2 = \mathbb{E}_* (S_{1,i}^\gamma)^2 = \prod_{\ell\leq d-1} \mathbb{E}_* \Bigl(\frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{\ell,1,i}}{2}\Bigr)^2 = \Bigl(\mathbb{E}_* \Bigl(\frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{1,1,i}}{2}\Bigr)^2\Bigr)^{d-1} $$ almost surely. Of course, we can now simplify the notation by replacing ${\xoverline{s}\hspace{0.1mm}}^\gamma_{1,1,i}$ with ${\xoverline{s}\hspace{0.1mm}}^\gamma_{i}$ and replacing $\mathbb{E}_*$ by the expectation $\mathbb{E}_i$ with respect to $\omega_\beta^i$ for $\beta\in{\cal A}$, $$ \lim_{n\to\infty}\frac{1}{n} \sum_{j\leq n} (S_{j,i}^\gamma)^2 = \Bigl(\mathbb{E}_i \Bigl(\frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{i}}{2}\Bigr)^2\Bigr)^{d-1} = 2^{2-2d}\Bigl( 1+2\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i}+\mathbb{E}_i ({\xoverline{s}\hspace{0.1mm}}^\gamma_{i})^2 \Bigr)^{d-1}. $$ Lemma \ref{Sec2iLem1} in Section \ref{Sec2ilabel} (for $p=0$ there) yields that $\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i}$ depends only on $\omega_*$, \begin{equation} q_0(\omega_*): = \mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i}, \label{Sec7q0} \end{equation} and, since $\mathbb{E}_i ({\xoverline{s}\hspace{0.1mm}}^\gamma_{i})^2$ clearly does not depend on $\gamma$, we can take $[1]\preceq \gamma$, in which case $$ \mathbb{E}_i ({\xoverline{s}\hspace{0.1mm}}^\gamma_{i})^2 = \mathbb{E}_i ({\xoverline{s}\hspace{0.1mm}}^{[p]\gamma}_{i})^2 = R_{[p]\gamma,[p]\gamma} = q_r. $$ This means that for any $\gamma\in \mathbb{N}^{r-p}$, \begin{equation} \lim_{n\to\infty}\frac{1}{n} \sum_{j\leq n} (S_{j,i}^\gamma)^2 = 2^{2-2d}\bigl(1+2q_0(\omega_*)+q_r \bigr)^{d-1} \label{Sec7LLN} \end{equation} almost surely. 
So, in the limit, these terms will cancel out in (\ref{Sec6ThendEq}) -- at least when we truncate the summation over $\gamma$ to finitely many $\gamma$, as we shall do below. \smallskip Next, let us look at the first sum in (\ref{Sec7napprox}) for $\gamma\in F$ for a finite set $F\subset \mathbb{N}^{r-p}$. By the classical multivariate CLT (applied for a fixed $(\omega_\beta)_{\beta\preceq [p]}$ and $\omega_{[p+1]}$), \begin{equation} \frac{1}{\sqrt{n}} \sum_{j\leq n} \eta_j \bigl(S_{j,i}^\gamma\bigr)_{\gamma\in F} \stackrel{d}{\longrightarrow} (g^\gamma)_{\gamma\in F}, \label{Sec7CLT} \end{equation} where $(g^\gamma)_{\gamma\in F}$ is a centered Gaussian random vector with the covariance $$ \mathbb{E} g^\gamma g^{\gamma'} = \mathbb{E}_* S_{1,i}^\gamma S_{1,i}^{\gamma'} = \Bigl( \mathbb{E}_i \frac{1+{\xoverline{s}\hspace{0.1mm}}_i^\gamma}{2}\cdot\frac{1+{\xoverline{s}\hspace{0.1mm}}_i^{\gamma'}}{2} \Bigr)^{d-1} = \Bigl( \frac{1}{4}\bigl(1+\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i}+\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i}+\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i} {\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i} \bigr)\Bigr)^{d-1}. $$ First of all, as above $\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i} = \mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i} = q_0(\omega_*).$ To compute $\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i} {\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i}$, we need to consider two cases.
First, suppose that $\gamma\wedge\gamma'\geq 1.$ Since $\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i} {\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i}$ clearly depends only on $\gamma\wedge\gamma'$, we can suppose that $[1]\preceq \gamma,\gamma'$, in which case ${\xoverline{s}\hspace{0.1mm}}^{\gamma}_{i} = {\xoverline{s}\hspace{0.1mm}}^{[p]\gamma}_{i}$, ${\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i} = {\xoverline{s}\hspace{0.1mm}}^{[p]\gamma'}_{i}$ and $$ \mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i} {\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i} = \mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^{[p]\gamma}_{i} {\xoverline{s}\hspace{0.1mm}}^{[p]\gamma'}_{i} = R_{[p]\gamma,[p]\gamma'} = q_{p+\gamma\wedge\gamma'}. $$ In the second case, $\gamma\wedge\gamma' = 0$, when computing $\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i} {\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i}$ we can first average ${\xoverline{s}\hspace{0.1mm}}_i^\gamma$ with respect to $ (\omega_{[p]\beta}^{i} )_{*\prec \beta \preceq \gamma}$ and ${\xoverline{s}\hspace{0.1mm}}_i^{\gamma'}$ with respect to $ (\omega_{[p]\beta}^{i} )_{*\prec \beta \preceq \gamma'}$, since these are independent. However, by Lemma \ref{Sec2iLem1}, both of these averages do not depend on $\omega_{[p+1]}$. This means that, after taking these averages, we can replace $\omega_{[p+1]}$ in (\ref{Sec7sbar}) by $\omega_{[p+j]}$ if $[j]\preceq \gamma$, and the same for $\gamma'$. As a result, we can again write $$ \mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i} {\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i} = \mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^{[p]\gamma}_{i} {\xoverline{s}\hspace{0.1mm}}^{[p]\gamma'}_{i} = R_{[p]\gamma,[p]\gamma'} = q_p = q_{p+\gamma\wedge\gamma'} $$ almost surely. 
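As a sanity check, taking $\gamma'=\gamma$ in this computation (so that $\gamma\wedge\gamma' = r-p$ and $\mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\gamma_{i} {\xoverline{s}\hspace{0.1mm}}^{\gamma'}_{i} = q_r$) recovers the SLLN limit (\ref{Sec7LLN}):

```latex
% Variance of the limiting Gaussian field matches the normalization in (\ref{Sec7LLN}).
\begin{equation*}
\mathbb{E} \bigl(g^\gamma\bigr)^2
= \Bigl( \frac{1}{4}\bigl(1+2q_0(\omega_*)+q_r\bigr) \Bigr)^{d-1}
= 2^{2-2d}\bigl(1+2q_0(\omega_*)+q_r \bigr)^{d-1}
= \lim_{n\to\infty}\frac{1}{n} \sum_{j\leq n} \bigl(S_{j,i}^\gamma\bigr)^2
\quad \mbox{a.s.}
\end{equation*}
```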
If we introduce the notation $$ c_{j}(\omega_*) = \frac{1+2q_0(\omega_*) + q_{p+j}}{4} \,\,\mbox{ for }\,\, j=0,\ldots, r-p $$ then we proved that the covariance is given by \begin{equation} \mathbb{E} g^\gamma g^{\gamma'} = \mathbb{E}_* S_{1,i}^\gamma S_{1,i}^{\gamma'} = c_{\gamma\wedge\gamma'}(\omega_*)^{d-1}. \label{Sec7cov} \end{equation} Let us show right away that \begin{equation} 0\leq c_0(\omega_*) < \ldots < c_{r-p}(\omega_*). \label{Sec7cs} \end{equation} In particular, this means that the covariance in (\ref{Sec7cov}) is increasing with $\gamma\wedge\gamma'$ and the Gaussian field $(g^\gamma)_{\gamma\in\mathbb{N}^{r-p}}$ is the familiar field that accompanies the Ruelle probability cascades in the pure $d$-spin non-diluted models. Of course, the only statement that requires a proof is the following. \begin{lemma} The inequality $c_0(\omega_*)\geq 0$ holds almost surely. \end{lemma} \textbf{Proof.} First of all, let us notice that, by Lemma \ref{Sec2iLem1}, we can also write the definition of $q_0(\omega_*)$ in (\ref{Sec7q0}) as $ q_0(\omega_*) = \mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\alpha_{i} $ for any vertex $\alpha\in\mathbb{N}^r$, where $\mathbb{E}_i$ denotes the expectation in the random variables $\omega_\beta^i$ for $\beta\in{\cal A}.$ Let $\mathbb{E}_i'$ denote the expectation with respect to $\omega_\beta^i$ for $\beta\in{\cal A}\setminus \{*\}$, excluding the average in $\omega_*^i$. Again, by Lemma \ref{Sec2iLem1}, we can write $\mathbb{E}_i' {\xoverline{s}\hspace{0.1mm}}^\alpha_{i}$ as $q_0(\omega_*,\omega_*^i)$. 
If we take any $\alpha,\alpha'\in \mathbb{N}^{r}$ such that $\alpha\wedge\alpha'=0$ then $$ q_0(\omega_*)^2 = (\mathbb{E}_i \mathbb{E}_i'{\xoverline{s}\hspace{0.1mm}}^\alpha_{i})^2 \leq \mathbb{E}_i (\mathbb{E}_i'{\xoverline{s}\hspace{0.1mm}}^\alpha_{i})^2 = \mathbb{E}_i (\mathbb{E}_i'{\xoverline{s}\hspace{0.1mm}}^\alpha_{i} \mathbb{E}_i'{\xoverline{s}\hspace{0.1mm}}^{\alpha'}_{i}) = \mathbb{E}_i {\xoverline{s}\hspace{0.1mm}}^\alpha_{i} {\xoverline{s}\hspace{0.1mm}}^{\alpha'}_{i} =q_{\alpha\wedge\alpha'} = q_0 $$ almost surely. Therefore $$ 4c_0(\omega_*)\geq 1-2\sqrt{q_0} +q_0 = (1-\sqrt{q_0})^2\geq 0. $$ This finishes the proof. \qed \medskip From now on let $\mathbb{E}_*$ also include the expectation in the Gaussian field $(g^\gamma)_{\gamma\in\mathbb{N}^{r-p}}$ (conditionally on $\omega_*$) with the covariance (\ref{Sec7cov}). Let us denote by \begin{equation} a_i^\gamma({\varepsilon}) = \frac{1+{\varepsilon}}{2}\Bigl( g^1 + \log\Bigl(1+y \frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{i}}{2}\Bigr) +x g^\gamma \Bigr) \label{Sec7a1} \end{equation} the quantity that will replace $A_i^\gamma({\varepsilon})$ in (\ref{SecA1}) in the limit $n\to\infty$. We are ready to prove the following. \begin{theorem}\label{Sec7Th} Under the assumption (M), for any subset $S\subseteq \{1,\ldots, k\}$, \begin{equation} \mathbb{E}_* \frac{\bigl\langle {\rm Av} \prod_{\ell\in S} {\varepsilon}_\ell \exp \sum_{\ell\leq k} a_i^{\sigma^\ell}({\varepsilon}_\ell)\, {\rm I}(Q^k = Q) \bigr\rangle}{\bigl\langle {\rm Av}\, \exp \sum_{\ell\leq k} a_i^{\sigma^\ell}({\varepsilon}_\ell) \bigr\rangle} \label{Sec7ThEq} \end{equation} does not depend on $\omega_{[p+1]}$ almost surely. 
\end{theorem} \textbf{Proof.} We only need to show that the quantity in (\ref{Sec6ThendEq}), with $A_i^\gamma({\varepsilon})$ as in (\ref{SecA1}) and the choice of $x_j$ as in (\ref{Sec7etas}), converges to (\ref{Sec7ThEq}), where $\mathbb{E}_*$ was redefined above (\ref{Sec7sbar}) to include the average over $(\omega_\beta^{\ell, j,i})_{\beta\preceq [p]}$ and the Rademacher random variables $\eta_j$ in (\ref{Sec7etas}), as well as the Gaussian field $(g^\gamma)_{\gamma\in\mathbb{N}^{r-p}}$ with the covariance (\ref{Sec7cov}). Let $(V_m')_{m\geq 1}$ be the RPC weights $(V_\gamma)_{\gamma\in\mathbb{N}^{r-p}}$ arranged in decreasing order. For some fixed large $M\geq 1$, let us separate the averages $\langle\, \cdot\, \rangle$ in the numerator and denominator in (\ref{Sec6ThendEq}) into two sums over $V_m'$ for $m\leq M$ and for $m>M$ (for each replica). Let us denote the corresponding averages by $\langle\, \cdot\, \rangle_{\leq M}$ and $\langle\, \cdot\, \rangle_{>M}$. Let \begin{align*} a \ =&\ \bigl\langle {\rm Av} \prod_{\ell\in S} {\varepsilon}_\ell \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell)\, {\rm I}(Q^k = Q) \bigr\rangle_{\leq M}, \\ b \ =&\ \bigl\langle {\rm Av} \prod_{\ell\in S} {\varepsilon}_\ell \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell)\, {\rm I}(Q^k = Q) \bigr\rangle_{> M}, \\ c \ =&\ \bigl\langle {\rm Av}\, \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell) \bigr\rangle_{\leq M}, \\ d \ =&\ \bigl\langle {\rm Av}\, \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell) \bigr\rangle_{> M}. \end{align*} Note that $|a|\leq c$, $|b|\leq d$ and $$ \Bigl|\frac{a+b}{c+d} - \frac{a}{c}\Bigr| = \Bigl|\frac{bc-ad}{c(c+d)}\Bigr| \leq \Bigl|\frac{b}{c+d}\Bigr|+\Bigl|\frac{d}{c+d}\Bigr| \leq \frac{2d}{c+d}.
$$ This means that the difference between (\ref{Sec6ThendEq}) and \begin{equation} \mathbb{E}_* \frac{a}{c} = \mathbb{E}_* \frac{\bigl\langle {\rm Av} \prod_{\ell\in S} {\varepsilon}_\ell \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell)\, {\rm I}(Q^k = Q) \bigr\rangle_{\leq M}}{\bigl\langle {\rm Av}\, \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell) \bigr\rangle_{\leq M}} \label{Sec7trunc} \end{equation} can be bounded by $2\mathbb{E}_* d/(c+d)$. By (\ref{Sec7napprox}), the SLLN in (\ref{Sec7LLN}) and CLT in (\ref{Sec7CLT}), with probability one, (\ref{Sec7trunc}) converges to \begin{equation} \mathbb{E}_* \frac{\bigl\langle {\rm Av} \prod_{\ell\in S} {\varepsilon}_\ell \exp \sum_{\ell\leq k} a_i^{\sigma^\ell}({\varepsilon}_\ell)\, {\rm I}(Q^k = Q) \bigr\rangle_{\leq M}}{\bigl\langle {\rm Av}\, \exp \sum_{\ell\leq k} a_i^{\sigma^\ell}({\varepsilon}_\ell) \bigr\rangle_{\leq M}} \label{Sec7limM} \end{equation} as $n\to\infty$. Next, we show that $\mathbb{E}_* d/(c+d)$ is small for large $M$, uniformly over $n$. Since $d/(c+d)\in [0,1]$, it is enough to show that $d$ is small and $c$ is not too small with high probability. To show that $d$ is small, we will use Chebyshev's inequality and show that $\mathbb{E}_* d$ is small. By Jensen's inequality, $$ d = \bigl\langle {\rm Av}\, \exp \sum_{\ell\leq k} A_i^{\sigma^\ell}({\varepsilon}_\ell) \bigr\rangle_{> M} = \bigl\langle {\rm Av}\, \exp A_i^{\sigma}({\varepsilon}) \bigr\rangle_{> M}^k \leq \bigl\langle {\rm Av}\, \exp k\, A_i^{\sigma}({\varepsilon}) \bigr\rangle_{> M}. $$ If we denote $\delta_M = \mathbb{E} \sum_{m>M} V_m'$ then, since the weights $(V_\gamma)$ and the random variables in $A_i^\gamma({\varepsilon})$ are independent, $$ \mathbb{E}_*d \leq \delta_M \sup_{\gamma} {\rm Av}\, \mathbb{E}_* \exp k\, A_i^{\gamma}({\varepsilon}). 
$$ Using that $\log(1+t)\leq t$, we can bound $A_i^{\gamma}({\varepsilon})$ with the choice (\ref{Sec7etas}) by $$ A_i^{\gamma}({\varepsilon}) \leq |g^1| + \log\bigl(1+|y|\bigr) + \frac{1+{\varepsilon}}{2}\frac{x}{\sqrt{n}} \sum_{j\leq n} \eta_j S_{j,i}^\gamma. $$ Using that, for a Rademacher random variable $\eta$, we have $\mathbb{E} e^{\eta t} = {\mbox{\rm ch}}(t) \leq e^{t^2/2}$ we get that $$ \mathbb{E}_\eta \exp k\frac{1+{\varepsilon}}{2}\frac{x}{\sqrt{n}} \sum_{j\leq n} \eta_j S_{j,i}^\gamma \leq e^{k^2/2} $$ and, therefore, $$ \sup_{\gamma} {\rm Av}\, \mathbb{E}_* \exp k\, A_i^{\gamma}({\varepsilon}) \leq c_k:=\exp\Bigl( k|g^1| + k\log\bigl(1+|y|\bigr) +\frac{k^2}{2} \Bigr). $$ We showed that $\mathbb{E}_* d \leq c_k \delta_M$, and this bound does not depend on $n$. Since $\delta_M$ is small for $M$ large, $d$ is small with high probability uniformly over $n.$ On the other hand, to show that $c$ is not too small, we simply bound it from below by $$ c\geq \bigl(V_\gamma\, {\rm Av} \exp A_i^{\gamma}({\varepsilon})\bigr)^k $$ for $\gamma$ corresponding to the largest weight, $V_1' = V_\gamma.$ The weight $V_1'$ is strictly positive and its distribution does not depend on $n$. Also, using (\ref{Sec7napprox}), we can bound $A_i^{\gamma}({\varepsilon})$ from below by $$ A_i^{\gamma}({\varepsilon}) \geq -|g^1| + \log\bigl(1-|y|\bigr) - \Bigl|\frac{1}{\sqrt{n}} \sum_{j\leq n} \eta_j S_{j,i}^\gamma \Bigr| - L $$ for some absolute constant $L$. Even though the index $\gamma$ here is random, because it corresponds to the largest weight $V_1'$, we can control this quantity using Hoeffding's inequality for Rademacher random variables conditionally on other random variables to get $$ \mathbb{P}\Bigl( \Bigl|\frac{1}{\sqrt{n}} \sum_{j\leq n} \eta_j S_{j,i}^\gamma \Bigr| \geq t \Bigr) \leq 2e^{-t^2/2}. 
$$ Therefore, for any $\delta>0$ there exists $\Delta>0$ (that depends on $|g^1|$, $|y|$, $k$ and the distribution of $V_1'$) such that $\mathbb{P}(c\geq \Delta)\geq 1-\delta.$ Altogether, we showed that $\mathbb{E}_* d/(c+d)$ is small for large $M$, uniformly over $n$. To finish the proof, we need to show that (\ref{Sec7limM}) approximates (\ref{Sec7ThEq}) for large $M$. Clearly, this can be done by the same argument (only easier) that we used above to show that (\ref{Sec7trunc}) approximates (\ref{Sec6ThendEq}). \qed \section{Final consequences of the cavity equations} Theorem \ref{Sec7Th} implies that, under the assumption (M), \begin{equation} \mathbb{E}_* \frac{\bigl\langle {\rm Av} \prod_{\ell\leq k}(1+ {\varepsilon}_\ell) \exp \sum_{\ell\leq k} a_i^{\sigma^\ell}({\varepsilon}_\ell)\, {\rm I}(Q^k = Q) \bigr\rangle}{\bigl\langle {\rm Av}\, \exp \sum_{\ell\leq k} a_i^{\sigma^\ell}({\varepsilon}_\ell) \bigr\rangle} \label{Sec8Eq1} \end{equation} does not depend on $\omega_{[p+1]}$ almost surely. This follows from (\ref{Sec7ThEq}) by multiplying out $\prod_{\ell\leq k}(1+ {\varepsilon}_\ell).$ Using that for ${\varepsilon}\in\{-1,+1\}$, $$ \exp t \frac{1+{\varepsilon}}{2} = 1+ (e^t-1)\frac{1+{\varepsilon}}{2}, $$ one can see from (\ref{Sec7a1}) that $$ 2{\rm Av} \exp a_i^\gamma({\varepsilon}) = 1+e^{g^1+xg^\gamma} \Bigl(1+y \frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{i}}{2}\Bigr) $$ and\smallskip $$ {\rm Av} (1+{\varepsilon}) \exp a_i^\gamma({\varepsilon}) = e^{g^1+xg^\gamma} \Bigl(1+y \frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{i}}{2}\Bigr).
$$ \medskip If, for simplicity of notation, we denote $z:=e^{g^1}\in(0,\infty)$ and \begin{equation} S_i^\gamma = \frac{1+{\xoverline{s}\hspace{0.1mm}}^\gamma_{i}}{2} \label{Sec8Sgamma} \end{equation} then (\ref{Sec8Eq1}) can be written, up to a factor $2^k z^k$, as $$ \mathbb{E}_* \frac{\bigl\langle \prod_{\ell\leq k} \exp(xg^{\sigma^\ell})(1+yS_i^{\sigma^\ell})\, {\rm I}(Q^k = Q) \bigr\rangle}{\bigl\langle 1+z \exp(xg^{\sigma})(1+yS_i^{\sigma}) \bigr\rangle^k}. $$ As before, the statement that this quantity does not depend on $\omega_{[p+1]}$ almost surely holds for all $x\in (-1,1)$, $y>-1$ and $z>0$, by continuity. Therefore, if we take the derivative with respect to $z$ and then let $z\downarrow 0$, then the quantity we get (up to a factor $-k$), $$ \mathbb{E}_* \Bigl\langle \prod_{\ell\leq k+1} \exp(xg^{\sigma^\ell})(1+yS_i^{\sigma^\ell})\, {\rm I}(Q^k = Q) \Bigr\rangle, $$ does not depend on $\omega_{[p+1]}$ almost surely. This is a polynomial in $y$ of order $k+1$ and if we take the derivative $\partial^{k+1}/\partial y^{k+1}$ we get that $$ \mathbb{E}_* \Bigl\langle \prod_{\ell\leq k+1} \exp(x g^{\sigma^\ell}) \,S_i^{\sigma^\ell} \, {\rm I}(Q^k = Q) \Bigr\rangle, $$ does not depend on $\omega_{[p+1]}$ almost surely. Let us now take the expectation $\mathbb{E}_g$ with respect to the Gaussian field $(g^\gamma)$. By (\ref{Sec7cov}), $$ \mathbb{E}_g \exp x\sum_{\ell\leq k+1} g^{\sigma^\ell} = \exp \frac{x^2}{2}\sum_{\ell,\ell'\leq k+1} c_{\sigma^\ell\wedge \sigma^{\ell'}}(\omega_*)^{d-1} $$ and, therefore, the quantity $$ \mathbb{E}_* \Bigl\langle \prod_{\ell\leq k+1} S_i^{\sigma^\ell} \, {\rm I}(Q^k = Q) \exp \frac{x^2}{2}\sum_{\ell,\ell'\leq k+1} c_{\sigma^\ell\wedge \sigma^{\ell'}}(\omega_*)^{d-1} \Bigr\rangle $$ does not depend on $\omega_{[p+1]}$ almost surely.
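Since ${\rm Av}$ denotes the uniform average over $\varepsilon\in\{-1,+1\}$, the two identities for ${\rm Av}\exp a_i^\gamma(\varepsilon)$ and ${\rm Av}(1+\varepsilon)\exp a_i^\gamma(\varepsilon)$ used above can be verified numerically. The following standalone sketch (variable names and sampled parameter values are illustrative only) checks both identities at random parameter values:

```python
import math
import random

def a_gamma(eps, g1, ggamma, x, y, S):
    # a_i^gamma(eps) = ((1+eps)/2) * (g^1 + log(1 + y*S) + x*g^gamma), with S = (1 + s)/2
    return (1 + eps) / 2 * (g1 + math.log(1 + y * S) + x * ggamma)

def check_av_identities(g1, ggamma, x, y, S):
    # Av is the uniform average over eps in {-1, +1}
    av_exp = 0.5 * sum(math.exp(a_gamma(e, g1, ggamma, x, y, S)) for e in (-1, 1))
    av_eps_exp = 0.5 * sum((1 + e) * math.exp(a_gamma(e, g1, ggamma, x, y, S)) for e in (-1, 1))
    rhs = math.exp(g1 + x * ggamma) * (1 + y * S)
    # identity 1:  2 Av exp a = 1 + e^{g^1 + x g} (1 + y S)
    # identity 2:  Av (1+eps) exp a = e^{g^1 + x g} (1 + y S)
    return abs(2 * av_exp - (1 + rhs)), abs(av_eps_exp - rhs)

random.seed(0)
errors = [check_av_identities(random.gauss(0, 1), random.gauss(0, 1),
                              random.uniform(-0.99, 0.99), random.uniform(-0.99, 2.0),
                              random.uniform(0.0, 1.0))
          for _ in range(1000)]
max_err = max(max(pair) for pair in errors)
```

Both identities hold exactly, so the errors are pure floating-point noise.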
Notice that because of the indicator ${\rm I}(Q^k = Q)$, all the overlaps $\sigma^\ell\wedge\sigma^{\ell'}$ are fixed for $\ell,\ell' \leq k$, so the factor $$ \exp \frac{x^2}{2}\sum_{\ell,\ell'\leq k} c_{\sigma^\ell\wedge \sigma^{\ell'}}(\omega_*)^{d-1} $$ can be taken outside of $\mathbb{E}_* \langle\,\cdot \,\rangle$ and cancelled out, yielding that $$ \mathbb{E}_* \Bigl\langle \prod_{\ell\leq k+1} S_i^{\sigma^\ell} \, {\rm I}(Q^k = Q) \exp \frac{x^2}{2}\sum_{\ell\leq k} c_{\sigma^\ell\wedge \sigma^{k+1}}(\omega_*)^{d-1} \Bigr\rangle $$ does not depend on $\omega_{[p+1]}$ almost surely. Taking the derivative with respect to $x^2/2$ at zero gives that \begin{equation} \mathbb{E}_* \Bigl\langle \prod_{\ell\leq k+1} S_i^{\sigma^\ell} \, {\rm I}(Q^k = Q) \sum_{\ell \leq k} c_{\sigma^\ell\wedge \sigma^{k+1}}(\omega_*)^{d-1} \Bigr\rangle \label{Sec9polyd} \end{equation} does not depend on $\omega_{[p+1]}$ almost surely. We proved this statement for a fixed but arbitrary $d\geq 3$, but it also holds for $d=1$ by setting $x=0$ in the previous equation. Let us take arbitrary $f_j$ for $j=0,\ldots,r-p$ and consider a continuous function $f$ on $[0,1]$ such that $f(c_j(\omega_*)) = f_j.$ Approximating this function by polynomials, (\ref{Sec9polyd}) implies that $$ \mathbb{E}_* \Bigl\langle \prod_{\ell\leq k+1} S_i^{\sigma^\ell} \, {\rm I}(Q^k = Q) \sum_{\ell \leq k} \sum_{j=0}^{r-p}f_j {\rm I}(\sigma^\ell\wedge \sigma^{k+1}=j) \Bigr\rangle $$ does not depend on $\omega_{[p+1]}$ for all $(f_j)$ almost surely. 
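The Gaussian computation above rests on the standard identity $\mathbb{E} \exp\bigl(x\sum_\ell g_\ell\bigr) = \exp\bigl(\frac{x^2}{2}\sum_{\ell,\ell'}{\rm Cov}(g_\ell,g_{\ell'})\bigr)$ for a centered Gaussian vector. A quick Monte Carlo sketch for a toy pair of correlated standard Gaussians (the parameters are made up for the illustration and this is not the covariance structure (\ref{Sec7cov}) itself):

```python
import math
import random

def mgf_lhs(x, rho, n_samples, rng):
    # Monte Carlo estimate of E exp(x*(g1+g2)) for unit-variance
    # Gaussians g1, g2 with correlation rho
    total = 0.0
    for _ in range(n_samples):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        g1 = z1
        g2 = rho * z1 + math.sqrt(1 - rho * rho) * z2
        total += math.exp(x * (g1 + g2))
    return total / n_samples

def mgf_rhs(x, rho):
    # exp((x^2/2) * sum of all covariances) = exp((x^2/2) * (2 + 2*rho))
    return math.exp(0.5 * x * x * (2 + 2 * rho))

rng = random.Random(42)
x, rho = 0.3, 0.6
lhs = mgf_lhs(x, rho, 200_000, rng)
rhs = mgf_rhs(x, rho)
```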
Taking the derivative in $f_j$ shows that $$ \mathbb{E}_* \Bigl\langle \prod_{\ell\leq k+1} S_i^{\sigma^\ell} \, {\rm I}(Q^k = Q) \sum_{\ell \leq k} {\rm I}(\sigma^\ell\wedge \sigma^{k+1}=j) \Bigr\rangle $$ does not depend on $\omega_{[p+1]}$ almost surely and, therefore, $$ \mathbb{E}_* \Bigl\langle \prod_{\ell\leq k+1} S_i^{\sigma^\ell} \, {\rm I}(Q^k = Q) \sum_{\ell \leq k} {\rm I}(\sigma^\ell\wedge \sigma^{k+1}\geq j) \Bigr\rangle $$ does not depend on $\omega_{[p+1]}$ almost surely for all $j=0,\ldots, r-p$. Let us now express this quantity as a linear combination over all possible overlap configurations that the new replica $\sigma^{k+1}$ can form with the old replicas $\sigma^1,\ldots,\sigma^k$. Given a $k\times k$ overlap constraint matrix $Q=(q_{\ell,\ell'})_{\ell,\ell'\leq k}$, let ${\cal E}(Q)$ be the set of admissible extensions $Q'=(q_{\ell,\ell'}')_{\ell,\ell'\leq k+1}$ of $Q$ to $(k+1)\times(k+1)$ overlap constraint matrices. In other words, $q_{\ell,\ell'}' = q_{\ell,\ell'}$ for $\ell,\ell'\leq k$, and there exist $\gamma_1,\ldots,\gamma_{k+1}\in \mathbb{N}^{r-p}$ such that $\gamma_\ell\wedge \gamma_{\ell'} = q_{\ell,\ell'}'$ for $\ell,\ell'\leq k+1.$ If we denote \begin{equation} n_j(Q') = \sum_{\ell \leq k} {\rm I}(q_{\ell,k+1}'\geq j) \label{Sec8nj} \end{equation} and denote $Q^{k+1} = (\sigma^\ell\wedge\sigma^{\ell'})_{\ell,\ell'\leq k+1}$ then we showed that $$ \sum_{Q'\in{\cal E}(Q)} n_j(Q')\, \mathbb{E}_* \Bigl\langle \prod_{\ell\leq k+1} S_i^{\sigma^\ell} \, {\rm I}(Q^{k+1} = Q') \Bigr\rangle $$ does not depend on $\omega_{[p+1]}$ almost surely for all $j=0,\ldots, r-p$.
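Note that the count $n_j(Q')$ in (\ref{Sec8nj}) depends on the extension $Q'$ only through its last column. A minimal sketch, with a made-up example:

```python
def n_j(new_overlaps, j):
    # n_j(Q') = number of old replicas ell <= k whose overlap q'_{ell,k+1}
    # with the new replica sigma^{k+1} is at least j
    return sum(1 for q in new_overlaps if q >= j)

# toy extension: k = 3 old replicas, overlaps of the new replica with them
new_col = [2, 0, 1]
counts = [n_j(new_col, j) for j in range(4)]  # j = 0, 1, 2, 3
```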
Since the RPC weights $(V_\gamma)$ are independent of $({\xoverline{s}\hspace{0.1mm}}_i^\gamma)$, we can rewrite this as \begin{equation} \sum_{Q'\in{\cal E}(Q)} n_j(Q') \mathbb{P}(Q^{k+1} = Q')\, M(Q') \label{Sec8at} \end{equation} does not depend on $\omega_{[p+1]}$ almost surely for all $j=0,\ldots, r-p$, where \begin{equation} M(Q') = \mathbb{E}_* \prod_{\ell\leq k+1} S_i^{\gamma_\ell} \end{equation} for any $\gamma_1,\ldots,\gamma_{k+1}\in \mathbb{N}^{r-p}$ such that $\gamma_\ell\wedge \gamma_{\ell'} = q_{\ell,\ell'}'$ for $\ell,\ell'\leq k+1.$ Recall that this statement was proved under the induction assumption (M) in Section \ref{Sec2ilabel}, so let us express (\ref{Sec8at}) in the notation of Section \ref{Sec2ilabel} and connect everything back to the assumption (M), which we repeat one more time. Given $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$, we assumed that: \begin{enumerate} \item[(M)] for any subset $S\subseteq \{1,\ldots, k\}$, ${\mathbb{E}_{[p],i}} \prod_{\ell\in S}{\xoverline{s}\hspace{0.1mm}}_i^{\alpha_\ell}$ does not depend on $\omega_{[p+1]}$. \end{enumerate} If similarly to (\ref{Sec8Sgamma}) we denote, for $\alpha\in\mathbb{N}^r$, \begin{equation} S_i^\alpha = \frac{1+{\xoverline{s}\hspace{0.1mm}}^\alpha_{i}}{2} \label{Sec8Salpha} \end{equation} then the assumption (M) is, obviously, equivalent to \begin{enumerate} \item[(M)] for any subset $S\subseteq \{1,\ldots, k\}$, ${\mathbb{E}_{[p],i}} \prod_{\ell\in S}S_i^{\alpha_\ell}$ does not depend on $\omega_{[p+1]}$. \end{enumerate} Let us define $[1] \preceq \gamma_1,\ldots,\gamma_{k}\in \mathbb{N}^{r-p}$ by $\alpha_\ell = [p]\gamma_\ell$, and let $Q$ be the overlap matrix \begin{equation} q_{\ell,\ell'} = \gamma_\ell\wedge \gamma_{\ell'}. \label{Sec8Qk} \end{equation} The assumption (M) depends on $\alpha_1,\ldots,\alpha_k$ only through this matrix $Q$, so one should really view it as a statement about such $Q$.
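The overlap matrix (\ref{Sec8Qk}) can be computed explicitly if one thinks of each index $\gamma\in\mathbb{N}^{r-p}$ as a path in the tree and of $\gamma\wedge\gamma'$ as the length of the longest common prefix of the two paths. A small illustrative sketch with made-up paths:

```python
def wedge(g1, g2):
    # gamma ∧ gamma': length of the longest common prefix of two index paths
    m = 0
    for u, v in zip(g1, g2):
        if u != v:
            break
        m += 1
    return m

def overlap_matrix(gammas):
    return [[wedge(a, b) for b in gammas] for a in gammas]

# three paths with r - p = 3, all starting with coordinate 1 (i.e. [1] ⪯ gamma)
gammas = [(1, 2, 4), (1, 2, 7), (1, 5, 1)]
Q = overlap_matrix(gammas)
```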
Fix $1\leq j \leq r-p$ and consider any $Q'\in{\cal E}(Q)$ such that $n_j(Q')\not = 0.$ Since $$ n_j(Q') \not = 0 \Longleftrightarrow \max_{\ell\leq k} q_{\ell,k+1}'\geq j, $$ this means that $q_{\ell,k+1}' \geq j\geq 1$ for some $\ell\leq k$. Since our choice of $\alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$ was such that $q_{\ell,\ell'} = \gamma_\ell\wedge \gamma_{\ell'}\geq 1$ for all $\ell,\ell'\leq k$, this also implies that $q_{\ell,\ell'}'\geq 1$ for all $\ell,\ell'\leq k+1$. In particular, we can find $[1] \preceq \gamma_{k+1}\in \mathbb{N}^{r-p}$ such that $\gamma_\ell\wedge \gamma_{k+1} = q_{\ell,k+1}'$ for $\ell\leq k$. Let $\alpha_{k+1} = [p]\gamma_{k+1}.$ Recall that, whenever $\alpha = [p]\gamma$ for $[1] \preceq \gamma\in \mathbb{N}^{r-p}$, the definitions (\ref{Sec6sialpha}) and (\ref{indass}) imply that ${\xoverline{s}\hspace{0.1mm}}^\alpha_i = {\xoverline{s}\hspace{0.1mm}}_i^\gamma$. Therefore, in this case, we can rewrite the definition of $M(Q')$ below (\ref{Sec8at}) as \begin{equation} M(Q') = \mathbb{E}_* \prod_{\ell\leq k+1} S_i^{\gamma_\ell} = {\mathbb{E}_{[p],i}} \prod_{\ell\leq k+1} S_i^{\alpha_\ell}. \end{equation} Let us summarize what we proved. \begin{theorem}\label{Sec8Th} If the matrix $Q$ defined in (\ref{Sec8Qk}) satisfies the assumption (M) then \begin{equation} \sum_{Q'\in{\cal E}(Q)} n_j(Q') \mathbb{P}(Q^{k+1} = Q')\, M(Q') \label{Sec8ThEq} \end{equation} does not depend on $\omega_{[p+1]}$ almost surely for all $j=1,\ldots, r-p$. \end{theorem} \section{Main induction argument}\label{Sec9label} Finally, we will now use Theorem \ref{Sec8Th} to prove our main goal, Theorem \ref{Sec6iTh1} in Section \ref{Sec2ilabel}. To emphasize that our inductive proof will have a monotonicity property (M), we can rephrase Theorem \ref{Sec6iTh1} as follows. 
\begin{theorem}\label{Sec9Th1} Under the assumption (\ref{indass}), for any $k\geq 1$, any $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$ and any $S\subseteq \{1,\ldots, k\}$, the expectation ${\mathbb{E}_{[p],i}} \prod_{\ell\in S} S_i^{\alpha_\ell}$ with respect to $(\omega_\beta^i)_{[p+1]\preceq \beta}$ does not depend on $\omega_{[p+1]}$ almost surely. \end{theorem} It is much easier to describe the proof if we represent a configuration $[p+1]\preceq \alpha_1,\ldots,\alpha_k \in \mathbb{N}^r$ not by a matrix $Q = (\gamma_\ell\wedge\gamma_{\ell'})$ with $\alpha_\ell = [p]\gamma_\ell$ but by a subtree of ${\cal A}$ growing out of the vertex $[p+1]$ with branches leading to the leaves $\alpha_1,\ldots,\alpha_k$, and with an additional layer encoding their multiplicities (see Fig. \ref{Fig1}). If we think of $[p+1]$ as a root of this subtree being at depth zero, then the leaves are at depth $r-p-1.$ However, the multiplicity of any particular vertex $\alpha$ in the set $\{\alpha_1,\ldots,\alpha_k\}$ can be greater than one, so we will attach that number of children to each vertex $\alpha$ to represent multiplicities, so the depth of the tree will be $r-p$. Whenever we say that we remove a leaf $\alpha$ from the tree, we mean that we remove one multiplicity of $\alpha$. Notice that removing a leaf from the tree also removes the path to that leaf, of course, keeping the shared paths leading to other leaves that are still there. We will say that this \textit{tree is good} if $$ M(Q) = {\mathbb{E}_{[p],i}} \prod_{\ell\leq k} S_i^{\alpha_\ell} $$ does not depend on $\omega_{[p+1]}$. 
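The tree representation just described can be made concrete: one builds the prefix tree of the index paths and attaches one extra child per unit of multiplicity, so that every leaf of multiplicity $m$ carries $m$ children in the additional layer. The following toy sketch (the paths and the depth convention, counting edges below the root, are illustrative assumptions) checks that the multiplicity layer adds exactly one level to the tree:

```python
from collections import Counter

def build_tree(paths):
    # prefix tree of index paths; an extra layer of children encodes multiplicities
    root = {}
    for path, mult in Counter(paths).items():
        node = root
        for coord in path:
            node = node.setdefault(coord, {})
        for m in range(mult):
            node[("mult", m)] = {}  # multiplicity layer
    return root

def depth(node):
    # number of edges on the longest root-to-leaf path
    return 0 if not node else 1 + max(depth(child) for child in node.values())

# four leaves below the root, paths of length 2, one leaf with multiplicity 2
paths = [(1, 2), (1, 2), (1, 3), (2, 1)]
tree = build_tree(paths)
d = depth(tree)  # path length 2 plus one multiplicity edge
```

Removing one multiplicity of a leaf corresponds to deleting one child in the last layer (and the now-unshared part of the path above it).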
\begin{figure}[t] \centering \psfrag{a_0}{ {\small root $[p+1]$ at depth zero}}\psfrag{a_2}{ {\small level $(r-p-1)$ of $\alpha\in \mathbb{N}^r$}}\psfrag{AcodeR}{ {\small level $(r-p)$ of multiplicities}} \includegraphics[width=0.45\textwidth]{figure1.eps} \caption{\label{Fig1} Representing a configuration $\alpha_1,\ldots,\alpha_k$ by a tree.} \end{figure} First we are going to prove the following property, illustrated in Fig. \ref{Fig2}, that we will call property ${\cal N}_j$ for $j=0,\ldots, r-p-1.$ Let us consider an arbitrary configuration of paths leading from the root $[p+1]$ to some set of vertices at depth $j$. Let us call this part of the tree ${\cal T}_j$, which is now fixed. We pick one designated vertex at depth $j$ (right-most vertex at depth $j$ in Fig. \ref{Fig2}). To all other vertices at depth $j$ we attach arbitrary trees $\cal T$ leading to some arbitrary finite sets of leaves in $\mathbb{N}^r$ and their multiplicities. We will use the same generic notation $\cal T$ to represent an arbitrary tree, even though they can all be different. The designated vertex has some fixed number of children, say $n_j$, and to each of these children we also attach an arbitrary tree $\cal T$. Property ${\cal N}_j$ will be the following statement. \begin{enumerate} \item[(${\cal N}_j$)] Fix any ${\cal T}_j$ and the number of children $n_j$ of a designated vertex. Suppose that all trees that we just described are good (this means for all choices of trees $\cal T$, possibly empty). Then any tree obtained by adding a single new path leading from a designated vertex at depth $j$ to some new vertex $\alpha\in \mathbb{N}^r$ with multiplicity one (as in Fig. \ref{Fig2}) is also good. 
\end{enumerate} \begin{figure}[t] \centering \psfrag{a_0}{ {\small root $[p+1]$ at depth zero}}\psfrag{a_1}{\small $n_j$ fixed children}\psfrag{a_2}{ {\small designated vertex at depth $j$}}\psfrag{AcodeR}{ {\small new path leading to some $\alpha\in \mathbb{N}^r$}}\psfrag{T_i}{\small ${\cal T}$} \includegraphics[width=0.4\textwidth]{figure2.eps} \caption{\label{Fig2} Illustrating property ${\cal N}_j$. Solid lines represent the subtree ${\cal T}_j$ and $n_j$ children of the designated vertex at depth $j$. Each $\cal T$ represents an arbitrary tree leading to some set of leaves with their multiplicities. Dashed line represents a new path from the designated vertex to a new vertex $\alpha\in \mathbb{N}^r$, which has multiplicity one.} \end{figure} \begin{lemma} For any $j=0,\ldots, r-p-1,$ any ${\cal T}_j$ and $n_j$, the property ${\cal N}_j$ holds. \end{lemma} \textbf{Proof.} Let us fix any particular choice of trees $\cal T$ attached to non-designated vertices at depth $j$ and $n_j$ children of the designated vertex. By the assumption in property ${\cal N}_j$, this tree, as well as any tree obtained by removing a finite number of leaves, is good. This precisely means that the assumption (M) holds for this tree, or for the sets of leaves with their multiplicities encoded by this tree. Let $Q$ be the matrix described above Theorem \ref{Sec8Th} corresponding to this set of leaves. Then, Theorem \ref{Sec8Th} implies that \begin{equation} \sum_{Q'\in{\cal E}(Q)} n_j(Q') \mathbb{P}(Q^{k+1} = Q')\, M(Q') \label{Sec9Lem1eq} \end{equation} does not depend on $\omega_{[p+1]}$. Obviously, each $Q'\in {\cal E}(Q)$ corresponds to a new tree constructed by adding one more new vertex $\alpha$ to our tree or increasing the multiplicity of some old vertex by one. Moreover, if $n_j(Q') \not = 0$ then the overlap $\alpha \wedge \alpha_\ell$ with one of the old vertices $\alpha_\ell$ must be greater than or equal to $j$.
This means that this new vertex will be attached to the tree somewhere at depth $j$ or below. One of the possibilities is described in Fig. \ref{Fig2}, when $\alpha$ is attached by a new path to the designated vertex at depth $j$. All other possibilities -- attaching $\alpha$ below one of the non-designated vertices at depth $j$ or below one of the $n_j$ children of the designated vertex -- would simply modify one of the trees $\cal T$ in Fig. \ref{Fig2}. But such a modification results in a good tree, by the assumption in property ${\cal N}_j$. Since the sum in (\ref{Sec9Lem1eq}) is a linear combination of all these possibilities, the term corresponding to adding a new path as in Fig. \ref{Fig2} must be good, which finishes the proof. \qed \medskip \noindent Next we will prove another property that we will denote ${\cal P}_j$ for $j=0,\ldots,r-p-1$, described in Fig. \ref{Fig3}. As in Fig. \ref{Fig2}, we consider an arbitrary configuration ${\cal T}_j$ of paths leading from the root $[p+1]$ to some set of vertices at depth $j$ and we pick one designated vertex among them. To all other vertices at depth $j$ we attach arbitrary trees $\cal T$, while to the designated vertex we attach a single path leading to some vertex $\alpha\in\mathbb{N}^r$ with multiplicity one. Property ${\cal P}_j$ will be the following statement. \begin{enumerate} \item[(${\cal P}_j$)] Suppose that the following holds for any fixed tree ${\cal T}_j$ up to depth $j$. Suppose that any tree as in Fig. \ref{Fig3} is good, as well as any tree obtained by removing any finite number of leaves from this tree. Then any tree obtained by replacing a single path below the designated vertex at depth $j$ by an arbitrary tree $\cal T$ is also good. \end{enumerate} We will now prove the following. \begin{lemma} Property ${\cal P}_j$ holds for any $j=0,\ldots,r-p-1$. \end{lemma} \textbf{Proof.} First of all, notice that property ${\cal P}_{r-p-1}$ follows immediately from property ${\cal N}_{r-p-1}$. 
In property ${\cal N}_{r-p-1}$, arbitrary trees $\cal T$ below non-designated vertices at depth $r-p-1$ represent their arbitrary multiplicities, the trees $\cal T$ below the children of the designated vertex are empty, and the multiplicity of the designated vertex is $n_j$. Property ${\cal N}_{r-p-1}$ then implies that we can increase this multiplicity by one to $n_j+1.$ Starting from multiplicity one and using this repeatedly, we can make this multiplicity arbitrary. This is exactly the property ${\cal P}_{r-p-1}$. \begin{figure}[t] \centering \psfrag{a_0}{ {\small root $[p+1]$ at depth zero}}\psfrag{a_1}{\small single path to $\alpha\in \mathbb{N}^r$}\psfrag{a_2}{ {\small designated vertex at depth $j$}}\psfrag{T_i}{\small ${\cal T}$} \includegraphics[width=0.35\textwidth]{figure3.eps} \caption{\label{Fig3} Illustrating property ${\cal P}_j$. Solid lines represent the subtree ${\cal T}_j$ and one path from a designated vertex at depth $j$ to a leaf $\alpha\in \mathbb{N}^r$ with multiplicity one. Property ${\cal P}_j$ allows one to replace this single path by an arbitrary tree $\cal T$.} \end{figure} Next, we are going to show that property ${\cal P}_{j+1}$ implies property ${\cal P}_j$. The proof of this is illustrated in Fig. \ref{Fig4}. Given any tree as in Fig. \ref{Fig3}, let us denote by $\delta_j$ the designated vertex at depth $j$. Consider the subtree ${\cal T}_{j+1}$ up to depth $j+1.$ It forms the same pattern, with the child of $\delta_j$ playing the role of the designated vertex at depth $j+1.$ Therefore, by property ${\cal P}_{j+1}$ we can replace the single path below this vertex by an arbitrary tree $\cal T$. By property ${\cal N}_j$, if we attach another path to $\delta_j$, the resulting new tree is good. Then we can again treat the child of $\delta_j$ along this new path as a designated vertex at depth $j+1$, apply property ${\cal P}_{j+1}$ and replace the path below this vertex by an arbitrary tree.
If we continue to repeatedly use property ${\cal N}_j$ to attach another path to $\delta_j$ and then use property ${\cal P}_{j+1}$ to replace the part of this path below depth $j+1$ by an arbitrary tree, we can create an arbitrary tree below $\delta_j$, and this tree is good by construction. This is precisely property ${\cal P}_j$, so the proof is completed by decreasing induction on $j.$ \qed \medskip \noindent Finally, this implies Theorem \ref{Sec9Th1} (and Theorem \ref{Sec6iTh1}). As we explained in Section \ref{Sec2ilabel}, by induction on $p$ this implies Theorem \ref{Th2}. \smallskip \noindent \textbf{Proof of Theorem \ref{Sec9Th1}.} By Lemma \ref{Sec2iLem1}, the tree consisting of one path from $[p+1]$ (at depth zero) to some vertex $\alpha\in\mathbb{N}^{r}$ (at depth $r-p-1$) with multiplicity one is good. Using property ${\cal P}_0$ implies that an arbitrary finite tree is good, which finishes the proof. \qed \begin{figure}[t] \centering \psfrag{a_0}{ {\small root $[p+1]$ at depth zero}}\psfrag{a_1}{$\ldots$}\psfrag{a_2}{ {\small designated vertex $\delta_j$ at depth $j$}}\psfrag{T_i}{\small ${\cal T}$} \includegraphics[width=0.4\textwidth]{figure4.eps} \caption{\label{Fig4} Illustrating the proof of property ${\cal P}_j$. First we replace the single path in Fig. \ref{Fig3} by an arbitrary tree below the child of the designated vertex using ${\cal P}_{j+1}$. Then we iteratively add a new path using property ${\cal N}_j$ and then replace this path below depth $j+1$ by an arbitrary tree using ${\cal P}_{j+1}$.} \end{figure}
\section*{Introduction} In natural environments such as oceans and lakes, bacteria and other microbes navigate chemical landscapes that can change dramatically over the timescales relevant to their motility \cite{taylor:2012}. Such environments differ in fundamental ways from the static chemical gradients typically considered in studies of microbial chemotaxis (e.g.,\cite{kalinin:2009,ahmed:2010}). From the perspective of microbes, chemical cues in nature often appear as localized pulses with short duration \cite{stocker:2012,blackburn:1998}. For example, oil droplets from spills and natural seeps, organic matter exuded by lysed phytoplankton or excreted by other organisms, and marine particles are common sources of short-lived, micro-scale ($\sim$10-1000 $\mu$m) chemical pulses \cite{stocker:2012}. Motile bacteria respond to such cues by swimming up the gradients that are generated when pulses diffuse (e.g., \cite{blackburn:1998, seymour:2009, stocker:2008, seymour:2010}). When a pulse appears, for example through the lysis of a phytoplankton cell, the distribution of chemoattractants (often, dissolved organic matter) changes rapidly over both space and time \cite{fenchel:2002}. Because background conditions are highly dilute, bacteria experience the early stages of a spreading pulse as a noisy chemical gradient with low absolute concentration. In marine environments, ephemeral, micro-scale pulses of dissolved chemicals provide a substantial and perhaps dominant fraction of the resources used by heterotrophic bacteria \cite{stocker:2012,fenchel:2002,barbara:2003}. {The advantage that chemotaxis confers cells in such dynamic environments \cite{frankel:2014,celani:2010,taylor:2012} may help explain why chemotactic responses to transient nutrient sources are so common among marine bacteria \cite{barbara:2003, blackburn:1998,seymour:2009,seymour:2010}. 
} {Although chemotaxis appears to be an important driver of bacterial competition \cite{taylor:2012}, evolution \cite{celani:2010,frankel:2014}, and nutrient cycling \cite{stocker:2012,fenchel:2002}, the details of bacterial chemotaxis behaviour are poorly characterized for all but a few well-studied species of bacteria. An important shared feature of bacterial chemotaxis systems, however, is that the measurements of chemical concentration that underpin chemotaxis behavior are subject to considerable noise \cite{mora:2010,andrews:2006}. In particular, stochasticity in the times at which individual molecules of chemoattractant arrive at the bacterium's surface sets an upper bound on the precision with which the cell can measure changes in concentration \cite{berg:1977,bialek:2005}. Here, we demonstrate how this physical limit on the precision of temporal gradient sensing constrains when and where bacteria can respond to chemical pulses. Using this approach, we develop a general theory to predict the fundamental length and timescales over which chemotactic bacteria can respond to chemical pulses. Because it requires few assumptions about the underlying mechanisms responsible for chemotactic behaviour, the theory can be applied to the diverse assemblages of bacteria that occur in natural marine and freshwater environments.} We first discuss gradient estimation by a cell in a dynamic chemoattractant field. We then derive theoretical bounds on the regions of the environment in which bacteria can respond to gradients, and characterize the spatio-temporal evolution of these regions as a function of physical and biological parameters. Finally, we show that changes in swimming speed in response to measurements of absolute concentration -- a bacterial behaviour known as chemokinesis \cite{barbara:2003, garren:2013} -- can greatly enhance a cell's ability to measure gradients in a dynamic chemoattractant field. 
\section*{Model development} \subsection*{Signal and noise in temporal gradient sensing} Unlike large eukaryotic cells, which can directly measure spatial gradients in chemical concentration \cite{endres:2008}, many chemotactic bacteria navigate by measuring temporal changes in concentration as they swim \cite{macnab:1972, segall:1986}. They use these measurements to detect concentration gradients and to navigate toward more favourable conditions (toward resources, away from noxious substances). Regardless of the biochemical and behavioural mechanisms a cell uses to navigate, gradient-based navigation can only be as precise as a cell's estimate of the gradient itself; downstream transduction will, in general, only add noise \cite{bialek:2005}. One can, therefore, establish performance bounds within which real bacterial cells must operate by considering physical limits on the accuracy and precision of gradient sensing by an idealized cell. We begin by considering gradient detection by such a cell: the perfectly absorbing sphere originally described by Berg and Purcell \cite{berg:1977}. This cell swims through a dynamic chemoattractant landscape, absorbing all molecules that reach its surface (Fig.~\ref{fig1}a). In reality, bacteria absorb some ligands they use for chemotaxis, whereas others are bound only temporarily. However, absorbing ligand always leads to more accurate measurement of both absolute concentration and changes in concentration over time because molecules cannot be re-bound once they have been absorbed \cite{endres:2008, mora:2010}. We therefore assume molecules are absorbed, yielding an upper limit on measurement accuracy \cite{endres:2008}. \begin{figure}[h] \captionsetup{font=small,width=16cm} \begin{center} \includegraphics[width=9.2cm]{Fig1.pdf} \end{center} \caption{Measurement of ramp rate $c_1$ by an idealized cell.
(a) During a time interval of length $T$, a cell travels from a region of low concentration to a region of higher concentration, absorbing chemoattractant molecules at times $\{t_i \}$ (red spikes in time series). (b) In a static concentration field $C(x)$, $c_1$ is equal to concentration slope $g$ (slope of orange line) times swimming speed $v$. (c) In a dynamic concentration field $C(x,t)$, $c_1 \approx vg + \partial C/\partial t$; $g$ is confounded with temporal changes in concentration ($\partial C/\partial t$) and the cell may perceive a decreasing concentration (red dashed line) although the true concentration slope is positive. Figures in colour online.} \label{fig1} \end{figure} Like the well-studied enteric bacterium, \emph{Escherichia coli}, marine bacteria perform chemotaxis by altering the length of relatively straight ``runs'', which are interspersed with random re-orientation events (``tumbles'' for \textit{E. coli} \cite{brown:1974}, ``flicks'' for marine bacteria \cite{xie:2011,xie:2015}). As a cell swims, receptors on the cell's surface bind chemoattractant molecules and a signal from the receptors is transduced through a biochemical network to one or more flagellar motors, which control the speed and direction of the flagellar rotations that drive locomotion. Changes in receptor occupancy alter the probability that the direction of flagellar rotation will reverse, leading to a re-orientation \cite{jiang:2010}, and the outcome of this is that bacteria extend runs when they perceive an increasing concentration of chemoattractant. A requirement for chemotaxis, therefore, is that the cell is capable of detecting meaningful changes in mean concentration \cite{andrews:2006} over some measurement interval of length $T$. This task is complicated by significant stochastic variation in the times at which molecules arrive at the cell's surface.
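As noted in the caption of Fig.~\ref{fig1}c, the ramp rate perceived along a swimming path decomposes as $c_1 = v\,\partial C/\partial x + \partial C/\partial t$. This chain rule can be checked numerically for a one-dimensional Gaussian pulse spreading by diffusion (an illustrative sketch; the pulse model and all parameter values are assumptions made for the example only):

```python
import math

def pulse(x, t, M=1.0, D=1.0):
    # concentration of a 1-D point pulse of M molecules released at x = 0, t = 0
    return M / math.sqrt(4 * math.pi * D * t) * math.exp(-x * x / (4 * D * t))

def perceived_ramp_rate(x0, t, v, h=1e-6):
    # total time derivative of concentration along the path x(t) = x0 + v*t,
    # by a centered finite difference
    return (pulse(x0 + v * (t + h), t + h) - pulse(x0 + v * (t - h), t - h)) / (2 * h)

def chain_rule_rate(x0, t, v, D=1.0):
    # v * dC/dx + dC/dt for the Gaussian pulse, from the analytic partials
    x = x0 + v * t
    c = pulse(x, t, D=D)
    dcdx = c * (-x / (2 * D * t))
    dcdt = c * (x * x / (4 * D * t * t) - 1 / (2 * t))
    return v * dcdx + dcdt

x0, t, v = 0.5, 0.8, 2.0
fd = perceived_ramp_rate(x0, t, v)
cr = chain_rule_rate(x0, t, v)
```

In this example the advective term dominates, so the cell swimming outward perceives a decreasing concentration even while the pulse is still spreading.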
The length of the measurement interval $T$ is bounded above by the characteristic timescale of stochastic re-orientations (e.g., rotational diffusion, active re-orientation \cite{berg:1977}), which for cells in the size range of \emph{E. coli} and many marine bacteria, ranges from hundreds of milliseconds \cite{blackburn:1998} to several seconds \cite{alon:1998}. A cell has little to gain by using the history of molecule encounters that extends beyond this timescale because rotational diffusion and active stochastic reorientation (e.g., tumbles, flicks) cause random changes in the cell's trajectory, decorrelating the cell's orientation, and rendering old information useless to the cell for determining whether it is currently travelling up or down a chemoattractant gradient (this issue is discussed in detail in \cite{berg:1977}). We therefore assume the measurement timescale $T$ is shorter than the timescale of stochastic reorientation and neglect processes such as rotational diffusion. For such short $T$, the chemoattractant concentration along the swimming cell's path, $c(t)$, can be linearized to $c(t) \approx c_0 + c_1 (t-t_0)$ over the time interval $(t_0-T/2,t_0+T/2)$. The cell experiences this concentration as a noisy time series of encounters with chemoattractant molecules (Fig.~\ref{fig1}a), from which it must estimate the concentration ramp rate, $c_1$, to determine whether concentration is increasing or decreasing. Using maximum likelihood, one can show that the optimal way for a perfectly absorbing sphere of radius $a$ to estimate $c_1$ (concentration $\times$ time$^{-1}$) using a sequence of molecule absorptions is, to leading order \cite{mora:2010}: $\hat{c_1} = \frac{n\sum_i (t_i - t_0)}{4\pi D a T\sum_i (t_i - t_0)^2}$, where $\hat{c_1}$ is the cell's estimate of the ramp rate, $n$ is the number of molecules absorbed over the measurement interval, $D$ is the diffusivity of the chemoattractant, and $t_i$ is the absorption time of the $i$th molecule. 
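This estimator can be checked against synthetic absorption times. The sketch below (all parameter values are hypothetical, chosen only so the window is well populated) draws inhomogeneous Poisson arrivals by thinning and applies the estimator:

```python
import numpy as np

# Illustrative check of the maximum-likelihood ramp-rate estimator
# c1_hat = n * sum(tau_i) / (4*pi*D*a*T * sum(tau_i^2)), with tau_i = t_i - t0.
rng = np.random.default_rng(0)
D, a, T = 1.0e3, 1.0, 0.1        # diffusivity (um^2/s), radius (um), window (s)
c0, c1 = 1.0, 2.0                # molecules/um^3; ramp satisfies c0 >> c1*T
rate = lambda tau: 4 * np.pi * D * a * (c0 + c1 * tau)   # absorption rate (1/s)

def estimate_ramp_rate(rng):
    """Simulate Poisson absorptions over (-T/2, T/2) by thinning, estimate c1."""
    rate_max = rate(T / 2)
    n_cand = rng.poisson(rate_max * T)
    tau = rng.uniform(-T / 2, T / 2, n_cand)
    tau = tau[rng.uniform(0, rate_max, n_cand) < rate(tau)]  # thinning step
    n = len(tau)
    return n * tau.sum() / (4 * np.pi * D * a * T * (tau**2).sum())

estimates = np.array([estimate_ramp_rate(rng) for _ in range(400)])
print(f"true c1 = {c1}, mean estimate = {estimates.mean():.2f}, "
      f"empirical std = {estimates.std():.2f}")
```

With these values a window contains $\sim 10^3$ absorptions, and the mean estimate converges to the true ramp rate while the spread of individual estimates remains of order one, anticipating the variance bound discussed next.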
Importantly, $\hat{c_1}$ has typical measurement variance no less than: \begin{equation} Var(\hat{c_1}) = \frac{ 3 c_0 }{\pi D a T^3}, \label{eq:var} \end{equation} where $c_0$ is the true background concentration in the vicinity of the cell at time $t_0$, and the variance of $\hat{c_1}$ does not depend on the true ramp rate $c_1$ as long as $c_0 \gg c_1 T$ (Supplementary Text, {see also Equation (S44) in ref. \cite{mora:2010}}). This formulation assumes that a cell can ``count" many molecules in a typical observation window, which amounts to assuming that the timescale at which receptors bind chemoattractant molecules is fast relative to the length of the observation window, $T$. Receptor binding kinetics are typically {very} fast (millisecond timescales, e.g. \cite{zhang:2005, jiang:2010}), so this assumption will generally hold unless $T$ is {extremely} short. {To summarize, measurements of concentration involve three timescales that are relevant to our model formulation, which are naturally separated in chemotactic bacteria \cite{jiang:2010}: (1) the timescale of absorptions, which is typically short ($\sim$1 ms \cite{jiang:2010}), (2) the measurement window $T$, which is of intermediate length, and (3) the timescale of active re-orientations, which must be longer than $T$ if the bacterium is to perform chemotaxis \cite{berg:1977}.} Variance in the ramp rate estimate (Eq.~(\ref{eq:var})) is solely due to stochastic arrivals of chemoattractant molecules and does not include additional sources of noise resulting, for example, from noise in the biochemical network responsible for ramp rate estimation \cite{bialek:2005,lestas:2010}. Eq.~(\ref{eq:var}) thus provides a lower bound on uncertainty about the true ramp rate and a constraint within which real cells must operate, regardless of the precise biochemical mechanism through which they implement ramp rate estimation.
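Eq.~(\ref{eq:var}) implies a noise floor for the ramp rate that falls off as $T^{-3/2}$. A minimal numerical sketch (parameter values are hypothetical, not taken from the text):

```python
import numpy as np

# Noise floor of the ramp-rate estimate, Var(c1_hat) = 3*c0 / (pi*D*a*T^3).
D = 1.0e3   # chemoattractant diffusivity (um^2/s)
a = 1.0     # cell radius (um)
T = 0.1     # measurement window (s)
c0 = 1.0    # background concentration (molecules/um^3)

var_c1 = 3 * c0 / (np.pi * D * a * T**3)
noise_std = np.sqrt(var_c1)      # smallest resolvable ramp rate at SNR ~ 1
print(f"Var(c1_hat) = {var_c1:.3f}, "
      f"noise floor = {noise_std:.3f} molecules um^-3 s^-1")

# Doubling T cuts the noise floor by 2*sqrt(2): the std scales as T^(-3/2).
assert np.isclose(np.sqrt(3 * c0 / (np.pi * D * a * (2 * T)**3)),
                  noise_std / 2**1.5)
```

The strong $T^{-3/2}$ dependence is why the measurement window, capped by the re-orientation timescale, is such a critical resource for the cell.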
Below we use Eq.~(\ref{eq:var}) to define the regions of space where it is possible for cells to use measurements of concentration to climb chemoattractant gradients. Outside these regions, cells may attempt to perform chemotaxis; however, we will show that for several ecologically relevant types of pulses, the signal-to-noise ratio of a cell's estimate of the concentration slope decays sharply (like a Gaussian) far from the origin of a chemoattractant pulse. This strong decrease in the signal-to-noise ratio with increasing distance implies that chemotactic cells far from the origin of a pulse will be responding primarily to noise and will not exhibit biased motion. \subsection*{Gradient estimation in a time-varying environment} For a cell swimming at speed $v$, the instantaneous local slope of the concentration profile along the cell's path, which we will refer to as the concentration slope $g$, is given by $g = \nabla C(\mathbf{x}) \cdot \mathbf{v}/v$, where $\mathbf{v}$ is the cell's velocity. The concentration slope is the quantity that is useful for climbing gradients, for example, by providing a signal for cells to lengthen runs in run-and-tumble chemotaxis \cite{brown:1974}; however, a cell the size of a bacterium ($\sim$1 $\mu$m) cannot measure $g$ directly \cite{mora:2010}. It must instead infer $g$ from its estimate of the ramp rate $\hat{c_1}$. In a time-invariant concentration field $c_1=gv$, and the maximum likelihood estimator of $g$ is proportional to the ramp rate estimator: $\hat{g}=\hat{c_1}/v$ (Fig.~1b, Supplementary Text). In a time-varying environment the concentration that a swimming cell experiences, $c(t) \approx c_0 + (v g + \partial C/\partial t) (t-t_0)$, is influenced by local temporal changes in concentration, $\partial C/\partial t$ (Fig.~1c); the ramp rate is given by $c_1=vg + \partial C/\partial t$. 
In this case, the time series of molecule absorptions does not contain the information needed to estimate both $g$ and $\partial C/\partial t$ and any estimator the cell uses to measure the concentration slope $g$ will be biased (Supplementary Text). For example, estimating $g$ as $\hat{g} = \hat{c_1}/v $ means that $\hat{g} \rightarrow g + (\partial C / \partial t) /v$ in the limit of many molecule absorptions. Correcting this bias would require that the cell have an independent estimate of $\partial C / \partial t$. In the absence of such an estimate, the cell can reduce bias by travelling faster, but not by increasing the length of its measurement window $T$ (Supplementary Text). This highlights an important connection between swimming speed and measurement accuracy that we explore in more detail below. Bias in the concentration slope estimate becomes important far from the origin of a pulse where cells can perceive an increasing concentration even if they are travelling down the concentration gradient, and near the origin, where cells can perceive a falling concentration even if they are travelling up a gradient (Fig.~\ref{fig1}c). \subsection*{Conditions for chemotaxis and responses to chemical pulses} If a cell is to use measurements of ramp rate to climb a concentration gradient, two conditions must be met. First, the cell must be in a region of the environment where typical values of the perceived ramp rate exceed noise: i.e., the signal-to-noise ratio (SNR) of the ramp rate estimator, $|c_1|Var(\hat{c_1})^{-1/2} \geq \delta_0$, where $\delta_0$ is a constant threshold on the SNR (Supplementary Text). Second, the ramp rate $c_1=vg + \partial C/\partial t$ and the concentration slope $g$ must have the same sign. 
Applying Eq.~(\ref{eq:var}) and rearranging, these conditions are: \begin{gather} \frac{|vg + \frac{\partial C }{\partial t}|}{\sqrt{c_0}} \geq \delta := \delta_0\sqrt{\frac{3}{\pi D a T^3}} ,\nonumber \\ \mathrm{and}\label{eq:conditions}\\ \; \mathrm{sign}(c_1) = \mathrm{sign}(g). \nonumber \end{gather} For a chemoattractant field with concentration $C(\mathbf{x},t)$, Conditions (\ref{eq:conditions}) define the regions where cells can reliably determine the sign of the concentration slope, a requirement for gradient-based navigation. Using Conditions (\ref{eq:conditions}), we explore how bacteria perceive three types of pulses that occur in natural environments: pulses that arise from surfaces, pulses that arise as thin chemical filaments, and pulses created by small point releases. Localized point pulses are created by many natural sources, including the lysis of small cells and excretions by larger organisms \cite{stocker:2012,blackburn:1998}. Thin chemical filaments and sheets occur when turbulence stirs dissolved chemicals. The distribution of chemicals is stretched and folded into sheets and filaments at length scales down to the Batchelor scale \cite{stocker:2012}. Mixing below the Batchelor scale is dominated by diffusion. This length scale is $l_B = (\nu D^2/ \epsilon)^{1/4}$, where $\nu$ is kinematic viscosity, $D$ is mass diffusivity, and $\epsilon$ is the turbulent dissipation rate. As $\epsilon$ changes, $l_B$ changes slowly, implying that small point pulses and filaments or sheets spread primarily by diffusion across a broad range of flows. {Across a range of realistic levels of turbulence ($\epsilon \sim 10^{-9}$ to $10^{-6}$ W kg$^{-1}$ \cite{doubell:2014}) the average shear rate is of order $10^{-3}$ to 1 s$^{-1}$. Except for the highest values in this range, these shear rates are typically too low to cause significant re-orientation of bacteria as they swim \cite{rusconi:2014}.
We therefore focus on the regime in which the environment is steady over the length scales considered here.} {To illustrate the utility of our theory, we consider how bacteria respond to chemical point pulses, filaments, and sheets.} These canonical geometries can be viewed as basic components of more complex chemical landscapes at larger scales (e.g., the types of landscapes considered in \cite{taylor:2012}). Extending our results to alternative geometries follows from straightforward calculations. At time $t = 0$, a single pulse appears with planar ($N$ = 1, sheet), cylindrical ($N$ = 2, filament), or spherical ($N$ = 3, point pulse) symmetry. The size of the pulse is $M$ (molecules per unit area of sheet [$N$ = 1], per unit filament length [$N$ = 2], or per individual point pulse [$N$ = 3]). The three-dimensional chemoattractant field $C$ is governed by $\partial C /\partial t = D \Delta C$ and the concentration is: \begin{equation} C(r, t, N) = \frac{M}{(4\pi D t)^{N/2}}e^{-\frac{r^2}{4Dt}}, \label{conc_radial} \end{equation} where $D$ ($\mu$m$^2$ s$^{-1}$) is diffusivity, $r$ ($\mu$m) is the distance from the surface ($N = 1$), filament axis ($N = 2$), or centre of the point source ($N = 3$). A cell moving in this chemoattractant field with velocity $v$ ($\mu$m$\,$s$^{-1}$) will experience a typical rate of change in concentration of $c_1\approx \nabla C \cdot \mathbf{v} + \partial C / \partial t$. \begin{figure}[t!] \captionsetup{font=small,width=16cm} \begin{center} \includegraphics[width=12cm]{Fig2.jpg} \end{center} \caption{Gradient estimation in a dynamic environment. (a) Solid orange curve shows the true concentration profile at $t = t_0$. Solid green curve shows the signal-to-noise ratio (SNR) of $\hat{c_1}$ a cell would experience if this concentration profile were static. Dotted red curve shows SNR for a cell swimming directly toward origin of pulse. Dashed blue curve shows SNR for a cell swimming directly away from origin of pulse.
Concentration and SNR normalized to maximum value of one. {(b) Square-root of concentration ($\sqrt{C(r,t)}$) at $t = t_0$ (orange) and individual estimates of this concentration ($\sqrt{c(t)}$, grey) made by a cell swimming toward pulse origin. Each individual estimate is computed by calculating $\hat{c_0}$ and $\hat{c_1}$ (see Supplementary Text for equations) from a time series of random Poisson molecule arrivals \cite{asmussen:2007} with an arrival rate given by the true instantaneous concentration at the bacterium's position $C(\mathbf{x},t)$.} (c) Relative bias of concentration slope estimate ($ |\partial C /\partial t | / [ |vg| + |\partial C/\partial t | ]$) measured by slow (solid curve; $v = 30\, ~\mu$m$\, \text{s}^{-1}$) and fast swimming cells (dotted curve; $v = 96$ $\mu$m$\,\text{s}^{-1}$). In all panels, concentration governed by Eq.~(\ref{conc_radial}) with $N = 3$, $M = 10^{11}$ molecules, $v = 30\, \mu$m s$^{-1}$, $a = 1 \, \mu$m, $T = 0.1 \,$s, $t_0 = 45 \,$s, and $\delta_0 = 1$. Pulse sizes in all figures correspond roughly to the quantity of free amino acids released from a lysed phytoplankton cell of $\sim 10$ $\mu$m in diameter \cite{blackburn:1998}. } \label{noisy_grad} \end{figure} For chemoattractant pulses with concentration described by Eq.~(\ref{conc_radial}) (Fig.~\ref{noisy_grad}a, solid orange curve), the signal-to-noise ratio (Fig.~\ref{noisy_grad}a, solid green curve) divides the domain surrounding a pulse into three regions. Far from the pulse, the concentration gradient is shallow and the absolute concentration is low: cells cannot accurately measure changes in concentration because they encounter few molecules during a typical observation window (Fig.~\ref{noisy_grad}b, bottom panel). At an intermediate distance from the pulse origin, the gradient is largest in magnitude and cells encounter many molecules during a typical observation window: the SNR is greatest in this region (Fig.~\ref{noisy_grad}b, middle panel). 
Near the pulse origin the gradient is again shallow and variance in the concentration slope estimate is substantial (Fig.~\ref{noisy_grad}b, top panel). Moreover, in this region, concentration changes rapidly over time and the concentration slope and ramp rate may differ in sign (i.e., bias in the concentration slope estimate is large, Fig.~\ref{noisy_grad}b, top panel; Fig.~\ref{noisy_grad}c). \section*{Results} Cells far from a chemoattractant pulse cannot resolve true changes in concentration above noise (Fig.~\ref{noisy_grad}a, SNR drops below threshold $\delta_0$ for large distance). The distance beyond which $\hat{c_1}$ becomes dominated by noise is given implicitly by \begin{equation} \delta= \left| vg(r,t) + \frac{\partial C(r,t)}{ \partial t} \right| C(r,t)^{-1/2}, \label{del} \end{equation} where the term in brackets is the magnitude of the true ramp rate $c_1$ that a cell at distance $r$ with local concentration slope $g(r,t)$ experiences. Because the chemoattractant field is changing, the magnitude of the ramp rate a cell measures will depend on its direction of travel. Far from the pulse, a cell travelling directly inward (Fig.~\ref{noisy_grad}a, red dotted curve) will experience a greater SNR than a cell travelling outward (Fig.~\ref{noisy_grad}a, blue dot-dash curve). Beyond the inflection point in the concentration profile, the r.h.s. of Eq.~(\ref{del}) is maximized for cells travelling directly up the concentration gradient (i.e., toward the pulse center; Fig.~\ref{noisy_grad}a, red dotted curve). The outer boundary beyond which cells cannot reliably perceive changes in concentration is given implicitly by Eq.~(\ref{del}) with $g= -\partial C/\partial r$. We refer to the largest distance that satisfies this equation as the outer boundary of sensitivity, $r_o$ (Fig.~\ref{noisy_grad}a, red point). At distances $r>r_o$, perceived changes in concentration are dominated by noise, regardless of a cell's direction of travel. 
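This division of the domain can be reproduced directly from Eqs.~(\ref{eq:var}) and (\ref{conc_radial}). The sketch below evaluates the SNR of $\hat{c_1}$ for a cell swimming straight toward the origin of a point pulse ($N = 3$), with hypothetical parameter values in the spirit of Fig.~\ref{noisy_grad}:

```python
import numpy as np

# SNR of the ramp-rate estimate around a spherical (N = 3) pulse for a cell
# swimming directly toward the origin: SNR = |v*g + dC/dt| / sqrt(Var(c1_hat)).
D, a, T, M, t, v = 1.0e3, 1.0, 0.1, 1.0e11, 45.0, 30.0   # hypothetical values

def snr_inward(r):
    C = M / (4 * np.pi * D * t) ** 1.5 * np.exp(-r**2 / (4 * D * t))
    g = C * r / (2 * D * t)                # inward slope along path, g = -dC/dr
    dCdt = C * (r**2 / (4 * D * t**2) - 1.5 / t)
    return abs(v * g + dCdt) * np.sqrt(np.pi * D * a * T**3 / (3 * C))

# Near origin, intermediate distance, far from the pulse (um):
for r in (50.0, 425.0, 1500.0):
    print(f"r = {r:6.0f} um: SNR = {snr_inward(r):.3f}")
```

The SNR is small both near the origin (shallow gradient, rapid temporal change) and far away (few molecules), and peaks at intermediate distances, reproducing the three regions described above.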
Bacteria use gradients to navigate toward regions of high attractant concentration, but also to maintain position near local maxima \cite{celani:2010}. In order to do this, a cell travelling down the concentration gradient must experience a decreasing concentration, which provides the signal the cell uses to modify swimming behaviour \cite{xie:2015}. Near the origin, the SNR is maximized for a cell that is travelling directly down the concentration gradient (Fig.~2a, blue dash-dot curve). For $t$ greater than a critical time, $t_s$, there is an inner boundary at a distance $r_i$ from the origin of the pulse (Fig.~2a, blue point), within which the SNR drops below threshold. For $t>t_s$ the location of this inner boundary is given implicitly by Eq.~(\ref{del}) with $g= \partial C(r,t)/ \partial r$ (Supplementary Text). The boundaries $r_o$ and $r_i$ define a dynamic region (Fig.~\ref{fig:inner_outer}, blue region in inset), outside of which bacteria cannot reliably respond to chemoattractant gradients because either the ramp rate is too noisy to resolve, or the ramp rate and the concentration slope have different signs (i.e., Conditions (\ref{eq:conditions}) are violated). Figure \ref{fig:inner_outer} shows the dynamics of $r_o$ and $r_i$ for bacteria swimming at three different speeds. For all swimming speeds, the outer boundary $r_o$ initially expands before rapidly contracting (Fig.~\ref{fig:inner_outer}, red curves). The time dependence of this boundary can be obtained by substituting Eq.~(\ref{conc_radial}) into Eq.~(\ref{del}), solving for $r_o$, and expanding the resulting product-log solution (Supplementary Text): \begin{figure}[t!] \captionsetup{font=small,width=16cm} \begin{center} \includegraphics[width=9.3cm]{Fig3.pdf} \end{center} \caption{Inner (blue) and outer (red) boundaries of the region in which cells reliably perceive gradients. Dashed line shows $v = 30\, \mu$m$\,$s$^{-1}$, maximum swimming speed of \emph{E.
coli} \cite{barbara:2003}; dash-dot line shows $v = 66\, \mu \text{m s}^{-1}$, typical cruising speed of \emph{Vibrio coralliilyticus}; dotted line shows $v = 96 \, \mu \text{m s}^{-1}$, maximum speed of \emph{V. coralliilyticus} after initiating chemokinesis \cite{garren:2013}. Other parameters as in Fig.~2. Solid grey curve is outer boundary, $r_c$, of region within which cells can resolve absolute concentration. Solid black curve is $\sqrt{4Dt}$, the radius at which the SNR is maximized for a static profile (green curve in Fig.~2). Inset shows relative sizes of region where cells can detect gradients ($r_i < r <r_o$, blue region), and region where cells can resolve absolute concentration ($r < r_c$, grey region inward) at $t = 90\,$s ($v = 66$ $\mu$m$\,$s$^{-1}$).} \label{fig:inner_outer} \end{figure} \begin{equation} r_o \approx \sqrt{4 D t \log \left[ \frac{-\log (kt^{1 + N/2})}{k t^{1 + N/2}} \right] }, \label{roapprox} \end{equation} where $k = (4 \pi D)^{N/2} \delta_0^2 /(2 \pi a M v^2 T^3)$. Swimming speeds of motile bacteria typically range from $30 \, \mu$m$\,$s$^{-1}$ to over $100\, \mu$m$\,$s$^{-1}$~\cite{barbara:2003}. For many relevant chemoattractants, $D \sim 10^3\, \mu$m$^2$ s$^{-1}$, and the number of molecules released in a pulse, $M$, is generally large; for example, a point pulse created by the lysis of even a small phytoplankton cell (a common source of nutrients for marine bacteria) contains upwards of $10^{11}$ free amino acid molecules \cite{blackburn:1998}. This means that $k \ll 1$ such that the logarithmic term in Eq.~\eqref{roapprox} varies slowly with time for early times, and leading-order behaviour is initially governed by $\sqrt{t}$. Pulse size $M$ appears only inside the logarithmic terms in Eq.~(\ref{roapprox}), indicating that $r_o$ scales weakly with pulse size.
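A minimal sketch evaluating Eq.~(\ref{roapprox}) (with hypothetical parameter values matching Fig.~3) illustrates both the near-$\sqrt{t}$ early growth and the weak dependence on $M$:

```python
import numpy as np

# Outer boundary of sensitivity r_o for a point pulse (N = 3).
D, a, T, v, delta0, N = 1.0e3, 1.0, 0.1, 66.0, 1.0, 3   # hypothetical values

def r_outer(t, M):
    k = (4 * np.pi * D) ** (N / 2) * delta0**2 / (2 * np.pi * a * M * v**2 * T**3)
    x = k * t ** (1 + N / 2)          # valid while x << 1 (early times)
    return np.sqrt(4 * D * t * np.log(-np.log(x) / x))

# Early-time growth is close to sqrt(t): the double-log factor varies slowly.
print(f"r_o(10 s) = {r_outer(10, 1e11):.0f} um, "
      f"r_o(40 s) = {r_outer(40, 1e11):.0f} um")

# Doubling pulse size M barely moves the boundary (M enters only through logs).
print(f"relative change for 2x pulse: "
      f"{r_outer(10, 2e11) / r_outer(10, 1e11) - 1:.3f}")
```

With these values, doubling $M$ shifts $r_o$ by only a few percent, consistent with the logarithmic scaling noted above.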
For example, doubling the size of a small point pulse ($N = 3$) increases the volume of water in which gradients are perceived by only 50\% (assuming $M$ increases from $10^{11}$ to $2 \times 10^{11}$ molecules, $\delta_0 = 1$, and $v=66$ $\mu$m s$^{-1}$). Figure \ref{fig:ro_approx} shows the dynamics of $r_o$ for surface, filament, and point pulses. Eq.~(\ref{roapprox}) agrees well with the exact solution for $r_o$ obtained by solving Eq.~(\ref{del}) numerically (Fig.~\ref{fig:ro_approx}, compare solid and dashed lines). Eventually the inner and outer boundaries of sensitivity intersect (Fig.~\ref{fig:inner_outer}), and cells can no longer reliably glean navigational information from the chemoattractant field. We refer to the time at which this occurs as $t^*$. Finding the time when the SNR falls below threshold $\delta_0$ everywhere shows that \begin{equation} t^* \approx \alpha(Mv^2T^3)^{\frac{2}{N+2}}, \label{tstar} \end{equation} where $\alpha = (\pi^{(1-N/2)} a e^{-1})^{2/(N+2)} [3(4D)^{N/2} \delta_0]^{-2/(N+2)}$ and the approximation assumes $|vg| \gg \partial C/\partial t$ at the point in space where the SNR is maximized (Supplementary Text). This relation illustrates the relative contribution of measurement time $T$ and speed $v$ to the time scale of perceptible changes in concentration, $t^*$. Moreover, Eq.~\eqref{tstar} shows that $t^*$ is proportional to $M^{2/(N + 2)}$; the scaling of $t^*$ with pulse size is sublinear for all pulse geometries, meaning that doubling the size of a pulse always less than doubles the time over which it can be perceived. \begin{figure}[t!] \captionsetup{font=small,width=16cm} \begin{center} \includegraphics[width=11cm]{Fig4.pdf} \end{center} \caption{Scaling of the outer boundary of sensitivity $r_o$ for pulses emitted from surfaces (light grey), filaments (grey), and point sources (black). Solid curves are numerical solution to Eq.~(\ref{del}). Dashed curves given by Eq.~(\ref{roapprox}).
Solid black line is proportional to $\sqrt{t}$. Solid curves truncated when the SNR falls below $\delta_0$. Dashed curves truncated at $t^*$ (Eq.~(\ref{tstar})). $M$ scaled so that pulses with different geometries have the same concentration profile at $t = 10$ s ($M = 8.0 \times 10^5$ molecules per $\mu$m$^2$ surface for surface source; $M = 2.8 \times 10^8$ molecules per $\mu$m length for line source; $M = 10^{11}$ molecules for point source); $v = 66\, \mu$m s$^{-1}$; other parameters as in Fig.~2.} \label{fig:ro_approx} \end{figure} The locations of inner and outer boundaries (Fig.~\ref{fig:inner_outer}) are governed, in part, by swimming speed. {Many bacteria alter swimming speed in response to stimuli, and} a natural question, therefore, is whether a cell could adjust its speed adaptively to achieve high sensitivity {to chemical gradients.} Some species exhibit a behaviour known as chemokinesis: cells swim at a speed that depends on the local concentration of chemoattractant, often swimming at a high speed when absolute concentration is high, and a low speed when concentration is low \cite{barbara:2003,garren:2013}. In the presence of a resolvable gradient, the interpretation of chemokinesis is straightforward: cells can climb the gradient faster if they swim at a higher speed (at the expense of a higher energetic cost of motility). However, chemokinesis may also have a second role. The SNR of the ramp rate is smaller than the SNR of the absolute concentration, $c_0$, implying that cells may be able to accurately detect whether absolute concentration has crossed a threshold before they can resolve changes in concentration over time. The mean rate of arrival of molecules to the surface of a sphere of radius $a$ is $4\pi D ac(t)$ \cite{berg:1977}. Poisson molecule arrivals imply that the SNR of absolute concentration $c_0$ is $c_0 Var (\hat{c_0} )^{-1/2} = c_0 [4\pi D aTc_0]^{-1/2}$. 
Using this ratio, we define a third boundary, $r_c$, beyond which the SNR of $\hat{c_0}$ falls below threshold, $\delta_0$: \begin{equation} r_c = \sqrt{8Dt \log(\eta t^{-N/2})}, \label{eq:rc} \end{equation} where $\eta = \delta_0^{-1} (MaT)^{1/2} (4\pi D)^{1/2-N/4}$. This boundary has the same leading order behaviour in time as $r_o$, but extends well beyond $r_o$ (Fig.~3, solid grey curve); for example, assuming $r_o$ is at its maximum value (Fig.~3), the volume within which cells can accurately measure absolute concentration in the water surrounding a small point pulse ($N = 3$) is six times larger than the volume in which cells can resolve changes in concentration (assuming $M = 10^{11}$ molecules \cite{blackburn:1998}, $\delta_0 = 1$, $v = 66$ $\mu$m s$^{-1}$). Note that we use the same threshold ($\delta_0$) on the SNR of $\hat{c_0}$ and $\hat{c_1}$ for the purpose of comparison but thresholds on these ratios need not be equal. By increasing their swimming speeds when concentration exceeds a threshold, cells can increase their sensitivity to changes in concentration (first Condition (\ref{eq:conditions}); Fig.~\ref{fig:inner_outer}) and reduce bias in estimation of the concentration slope (Fig.~2c). The effect of increasing swimming speed is to expand the region of space over which the cell can resolve gradients, $r_i < r < r_o$, and to extend the time $t^*$ beyond which gradients become too noisy for the cell to measure (Fig.~\ref{fig:inner_outer}, compare curves for different swimming speeds; Fig.~5). Effects of changes in speed may be substantial. For example, the coral pathogen \emph{Vibrio coralliilyticus} increases its speed by as much as 45\% when chemoattractant concentration is high \cite{garren:2013}. The temporal evolution of a chemoattractant pulse appears very different to a bacterium swimming at $66 \; \mathrm{\mu m \, s^{-1}}$ (typical cruising speed of \textit{V. 
coralliilyticus} and other \textit{Vibrio} spp.; Fig.~\ref{fig:kinesis}, blue regions) than it does to a bacterium travelling at speeds closer to $100 \; \mathrm{\mu m \, s^{-1}}$ (swimming speeds of chemokinetic \textit{V. coralliilyticus}\cite{barbara:2003,garren:2013}; Fig.~\ref{fig:kinesis}, orange regions). \begin{figure}[t!] \captionsetup{font=small,width=16cm} \begin{center} \includegraphics[width=11.5cm]{Fig5.pdf} \end{center} \caption{Effect of swimming speed on the time evolution of the region where chemotaxis is possible. Colored regions show a two-dimensional cross-section of the region in which cells can resolve chemoattractant gradients (i.e. Conditions (\ref{eq:conditions}) are satisfied). Blue regions are those experienced by a cell travelling at a cruising speed typical of the bacterium \textit{V. coralliilyticus} ($\sim 66 \, \mathrm{\mu m \;s^{-1}}$). Orange regions are those experienced by a \emph{V. coralliilyticus} cell travelling at a high speed after initiating chemokinesis ($\sim 96\, \mu$m s$^{-1}$) \cite{garren:2013}. Other parameters as in Fig.~2. Note the \emph{blind spot} that forms at the centre of the region as the inner boundary of sensitivity, $r_i$, expands.} \label{fig:kinesis} \end{figure} \section*{Discussion} Bacteria must cope with considerable noise and estimation bias when navigating dynamic chemical landscapes. The advantage conferred by an early response to chemical pulses suggests that there may be selection for high accuracy and sensitivity in the chemotaxis response \cite{taylor:2012,stocker:2012}. Our framework provides a means of studying how the basic components of bacterial navigation strategies (swimming speed, measurement time) and physical parameters (e.g., chemoattractant diffusivity, pulse size) influence when and where bacteria can perform chemotaxis. 
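As a concrete illustration of these constraints, Eqs.~(\ref{tstar}) and (\ref{eq:rc}) can be evaluated for a point pulse; the parameter values below are hypothetical, chosen to match Figs.~3--5:

```python
import numpy as np

# Pulse lifetime t* and concentration-detection boundary r_c for a point
# pulse (N = 3), using hypothetical Fig. 3-5 parameter values.
D, a, T, v, delta0, M, N = 1.0e3, 1.0, 0.1, 66.0, 1.0, 1.0e11, 3

alpha = (np.pi ** (1 - N / 2) * a / np.e) ** (2 / (N + 2)) \
        * (3 * (4 * D) ** (N / 2) * delta0) ** (-2 / (N + 2))
t_star = alpha * (M * v**2 * T**3) ** (2 / (N + 2))

eta = (M * a * T) ** 0.5 * (4 * np.pi * D) ** (0.5 - N / 4) / delta0
r_c = lambda t: np.sqrt(8 * D * t * np.log(eta * t ** (-N / 2)))

# t* grows with swimming speed as v^(4/(N+2)), so chemokinesis extends the
# window over which a pulse remains perceptible.
print(f"t* = {t_star:.0f} s, r_c(90 s) = {r_c(90):.0f} um")
```

For these values $t^*$ is on the order of $10^2$ s and $r_c$ extends beyond a millimetre, indicating length and time scales over which an individual pulse shapes bacterial behaviour.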
Expressions for the outer boundary of sensitivity, $r_o$ (Eq.~\ref{roapprox}), and the time after which gradients created by a pulse are no longer perceptible, $t^*$ (Eq.~\ref{tstar}), may prove particularly useful as they constrain the length and timescales over which bacteria can perceive individual chemical pulses. The relationship between the size of the pulse, pulse geometry, and the length and timescales over which the pulse is perceptible provides a basis for modeling more realistic environments where many pulses appear with characteristic sizes, geometries, and temporal statistics. For example, an empirical estimate of the typical inter-pulse interval in, say, a marine environment \cite{stocker:2012} can be compared to $t^*$ to determine whether the environment is highly granular or relatively homogeneous from the perspective of bacteria. For the canonical pulse geometries considered here (Eq.~(\ref{conc_radial})), the signal-to-noise ratio of the concentration ramp rate decays sharply far from the origin of a pulse (Fig.~2a, blue, red, and green curves). In particular, substituting Eq.~(\ref{conc_radial}) into the expression for the SNR of $\hat{c_1}$ (r.h.s. of Eq.~(\ref{del})) shows that the SNR decays like a Gaussian for large $r$ ($ \mathrm{SNR} \propto \exp[-r^2/(8Dt)]$ for large $r$). This sharp transition in the SNR means that, near the outer boundary of sensitivity, there is a stark division between cells that have access to useful chemotactic information ($ r < r_o$) and cells that do not ($r > r_o$). Using $r_o$ to partition bacterial cells into subpopulations that are near and far from chemical pulses could greatly simplify models of bacterial competition and population dynamics in complex environments \cite{taylor:2012}. Our theory makes a number of predictions that could be tested with chemotaxis experiments.
First, the theory predicts that for times $t < t^*$, the mean orientation of bacterial swimming trajectories outside the region $r_i < r <r_o$ should be unbiased. Because the conditions considered in this work correspond to an upper bound on sensory accuracy, the region within which cells exhibit biased motion may be a sub-region of $r_i < r <r_o$. A second prediction is that, for times greater than $t^*$, bacteria should not exhibit biased motion anywhere in the environment because each cell's estimate of the gradient will be dominated by noise, regardless of where it is located relative to the origin of the pulse. Again, because of the assumptions used to derive $t^*$, the observed time at which the average directional bias of a bacterial population drops to zero may be shorter than $t^*$. One of the implications of our model for temporal gradient sensing is that sensory acuity is intimately linked to swimming speed (Eq.~\ref{del}, Fig.~5). Because swimming at high speed is costly \cite{taylor:2012,berg:1977}, bacteria likely benefit by changing speed in an adaptive way, cruising at low speed in the absence of a chemical signal, and speeding up when concentration exceeds a threshold. The connection between speed and measurement accuracy may explain the counterintuitive observation that some species of marine bacteria swim at high speeds even near local maxima in chemoattractant concentration \cite{barbara:2003}; bias in the concentration slope estimate is high near local maxima (Fig.~\ref{noisy_grad}b). A cell cannot decrease bias by lengthening measurement time, but it can reduce bias by swimming faster, suggesting that bacteria may use chemokinesis to enhance {chemotactic accuracy} near the \emph{blind spot} that forms at the centre of spreading chemical pulses (Fig.~5 $t$ = 120 s, $t$ = 140 s; Supplementary Text). 
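The claim that speed, not integration time, controls this bias can be checked directly: in the many-absorption limit $\hat{g} \rightarrow g + (\partial C/\partial t)/v$, so the relative bias is $|\partial C/\partial t|/(|vg| + |\partial C/\partial t|)$ (cf. Fig.~\ref{noisy_grad}c). A sketch with hypothetical parameter values:

```python
import numpy as np

# Relative bias of the slope estimate g_hat = c1_hat / v near a spherical
# (N = 3) pulse. Parameter values are hypothetical.
D, M, t = 1.0e3, 1.0e11, 45.0          # um^2/s, molecules, s

def relative_bias(r, v):
    C = M / (4 * np.pi * D * t) ** 1.5 * np.exp(-r**2 / (4 * D * t))
    g = C * r / (2 * D * t)            # slope along an inward path, g = -dC/dr
    dCdt = C * (r**2 / (4 * D * t**2) - 1.5 / t)
    return abs(dCdt) / (v * abs(g) + abs(dCdt))

for v in (30.0, 96.0):                 # cruising vs chemokinetic swimming speed
    print(f"v = {v:4.0f} um/s: bias at r = 100 um: {relative_bias(100.0, v):.2f},"
          f" at r = 1 um: {relative_bias(1.0, v):.2f}")
```

Swimming faster shrinks the bias at any given position, while near the origin ($r \rightarrow 0$) the bias approaches one regardless of speed, consistent with the blind spot at the centre of a spreading pulse.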
{More generally, our framework suggests that bacteria can improve chemotactic performance by using chemokinesis and chemotaxis in concert.} The hypothesis that bacteria initiate chemokinesis in response to absolute concentration to enhance sensitivity to gradients could be investigated by independently varying the concentration gradient and absolute concentration of a chemoattractant, for example using a microfluidic device \cite{son:2015}. Our framework uses fundamental limits on the accuracy of chemical sensing \cite{mora:2010,bialek:2005} to determine when and where chemotaxis is feasible, and provides a tool for modeling bacterial behaviour in more realistic dynamic environments. Importantly, it is agnostic to the details of bacterial movement patterns and chemosensory machinery and can therefore provide general principles that apply to the broad range of bacterial species in real ecological communities that navigate using temporal gradient sensing. \section*{Acknowledgements} This work was supported by Army Research Office Grants W911NG-11-1-0385 and W911NF-14-1-0431 to S.A.L., a James S. McDonnell Foundation Fellowship to A.M.H., a Swiss National Science Foundation postdoctoral fellowship to F.C., a Human Frontier Science Program Cross-Disciplinary fellowship to D.R.B., and a Gordon and Betty Moore Marine Microbial Initiative Investigator Award (GBMF3783) to R.S.
\section{Introduction} \label{sect1} The Wigner crystal \cite{wigner} appears when the energy of Coulomb repulsion between charges of the same sign becomes dominant compared to the kinetic energy of charge motion. On a one-dimensional (1D) straight line this crystal can move ballistically as a whole at an arbitrarily small velocity. Here, we discuss the properties of Wigner crystal sliding in 1D in the presence of a periodic potential and in a 1D snaked nanochannel, following the recent works \cite{fki,snake}. An example of a snaked nanochannel of sinusoidal form is shown in Fig.~\ref{fig1}. The snaked form of a channel is very similar to the Little suggestion \cite{little} on possibilities of superconductivity in organic molecules. As described in \cite{little,jerome}, it is assumed that organic molecules form an effective wiggled or snaked channel with an effective density of electrons $\nu$ which slides along the channel, opening a new view on possibilities of superconductivity in such materials. The question of sliding in such a channel is rather nontrivial, being linked with fundamental results of dynamical systems \cite{chirikov,lichtenberg} which we briefly discuss. In fact, in a local approximation of small charge oscillations the forces depend linearly on displacements, corresponding to a string of particles, linked locally by linear springs and placed in a periodic potential. The density of particles or charges corresponds effectively to a fixed rotation number in a dynamical symplectic map which describes the recurrent positions of particles in a static configuration with a minimum of energy. For linear springs this map reduces exactly to the Chirikov standard map \cite{chirikov}. This model is also known as the Frenkel-Kontorova model, whose detailed description is given in \cite{obraun}.
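This reduction can be illustrated numerically. The sketch below iterates the Chirikov standard map $p_{n+1}=p_n+K\sin x_n$, $x_{n+1}=x_n+p_{n+1}$ for a single illustrative orbit (initial conditions and $K$ values chosen for illustration only), showing bounded motion below the critical coupling $K_c \approx 0.9716$, where KAM curves survive, and unbounded chaotic drift above it:

```python
import numpy as np

# Chirikov standard map: p' = p + K*sin(x), x' = x + p' (x taken mod 2*pi).
# Below K_c ~ 0.9716, invariant KAM curves confine the momentum (analogue of
# the sliding phase); at large K the orbit diffuses (cantori/pinned regime).
def standard_map_orbit(K, x0=1.0, p0=0.2, steps=10_000):
    """Iterate one orbit and return the maximum |p| reached."""
    x, p = x0, p0
    p_max = abs(p)
    for _ in range(steps):
        p = p + K * np.sin(x)
        x = (x + p) % (2 * np.pi)
        p_max = max(p_max, abs(p))
    return p_max

print(f"K = 0.5: max|p| = {standard_map_orbit(0.5):.2f}")  # bounded by KAM curves
print(f"K = 5.0: max|p| = {standard_map_orbit(5.0):.2f}")  # chaotic diffusion
```

In the Frenkel-Kontorova interpretation, $x_n$ plays the role of the particle positions and $p_n$ of the spring stretch between neighbours, so the destruction of KAM curves with growing $K$ mirrors the transition from sliding to pinned chains discussed below.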
At small channel deformation $a$, or small amplitude $K$ of the periodic potential, the particles can slide freely in the periodic potential; this corresponds to the regime of invariant Kolmogorov-Arnold-Moser (KAM) curves, whose rotation number determines the particle density. In this regime the spectrum of small excitations is a phonon spectrum with the dispersion relation $\omega=c_s k$, where $k$ is the dimensionless wave vector and $c_s$ the dimensionless sound velocity. Above a certain critical strength of the deformation the KAM curve at a given $\nu$ is destroyed, being replaced by an invariant Cantor set known as a cantorus \cite{aubry}. In this regime the excitations above the ground state have a gap, $\omega^2=(c_s k)^2+\Delta^2$, and the chain becomes pinned by the potential. The gap $\Delta$ is proportional to the Lyapunov exponent of dynamical orbits on such cantori \cite{lichtenberg,aubry}. The transition between the sliding and pinned phases is known as the Aubry transition \cite{obraun}. In the pinned phase the Aubry theorem guarantees that at fixed $\nu$ there is a unique ground state, whose static equilibrium configuration corresponds to the cantori, with the positions of particles forming a devil's staircase. However, from the physical viewpoint this ground state is rather hard to reach, since in its vicinity there are exponentially many equilibrium configurations whose energy is exponentially close to the energy of the ground state. Numerical studies show that already for 100 particles the energy difference can be as small as $10^{-25}$ in dimensionless units \cite{fk1}. This phase was called the dynamical spin glass (or dynamical instanton glass), since such properties appear in spin glasses \cite{parisi}, which have random, disordered on-site energies and interactions. In contrast, these properties of the Aubry phase appear in the absence of any disorder, being of purely dynamical origin, generated by the cantori in a strictly periodic potential.
\begin{figure} \includegraphics[width=7.6cm]{fig1.eps} \caption{ } \label{fig1} \end{figure} Studies of the properties of the quantum Frenkel-Kontorova model were started in \cite{fkq1} and significantly advanced in \cite{fkq2}. It was shown in \cite{fkq2} that quantum fluctuations lead to a melting of the pinned phase at sufficiently large values of the dimensionless Planck constant $\hbar$. This transition is a zero-temperature ($T=0$) quantum phase transition. At small $\hbar \ll 1$ and $T=0$ the phonon mode is frozen, but quantum tunneling gives transitions between quasi-degenerate equilibrium classical configurations, which can be viewed as instantons. At small $\hbar \ll 1$ the density of instantons is small and their interactions are weak. When $\hbar$ increases the instanton density grows, and above a certain critical $\hbar_c \sim 1$ the quantum melting of the pinned phase takes place at zero temperature, leading to a vanishing gap, the appearance of a quantum phonon mode, and quantum sliding of the chain \cite{fkq2}. The results obtained for the Wigner crystal in a periodic potential \cite{fki} in the classical and quantum regimes confirm this qualitative picture. At a fixed amplitude $K$ of the periodic potential the classical Wigner crystal is pinned at small charge densities $\nu < \nu_{c1}$ \cite{fki}. Indeed, as $\nu \rightarrow 0$ we have the problem of a single electron with zero kinetic energy, which is obviously pinned by the periodic potential. \section{Sliding Wigner snake} \label{sect2} The situation is different in the case of a snaked nanochannel: noninteracting electrons, corresponding to the limit $\nu \ll 1$, move freely inside the wiggled channel, and pinning of the Wigner crystal appears only above a certain critical charge density $\nu > \nu_{c2}$. An example of the sliding and pinned regimes is shown in Fig.~\ref{fig2}. The data clearly show that the sliding phase at $a=0.6$ has a smooth hull function and a sound-like dispersion law for small oscillations of the crystal.
In contrast, in the pinned regime at $a=1.2$ the hull function has the form of a fractal devil's staircase and the spectrum of small oscillations is gapped. \begin{figure} \includegraphics[width=7.6cm]{fig2.eps} \caption{ } \label{fig2} \end{figure} More detailed results on the dependence of the gap $\Delta$ on the charge density $\nu$ and the deformation $a$ are described in \cite{snake}. In \cite{snake} it is also shown that for moderate deformations $a <1$ the charge positions in a static configuration are described by the symplectic dynamical map \begin{eqnarray} \label{eq1} {\bar v} & = & v + 2 a^2 (1-\cos {\bar v}) \sin 2 \phi \; ,\nonumber\\ {\bar \phi} & = & \phi + {\bar v} + a^2 \sin {\bar v} \cos 2 \phi \; , \end{eqnarray} where $v=x_{i}-x_{i-1}$ and $\phi=x_i$ are conjugate action-phase variables, and the bar marks their values after one iteration. The map is implicit but symplectic (see e.g. \cite{lichtenberg}). Examples of Poincar\'e sections of this map at two values of the deformation $a$ are shown in Fig.~\ref{fig3}. Phase-space regions with scattered points correspond to chaotic dynamics and the pinned phase, while the smooth invariant KAM curves correspond to the sliding phase. \begin{figure} \includegraphics[width=7.6cm]{fig3.eps} \caption{ } \label{fig3} \end{figure} In many aspects the properties of the Wigner crystal in snaked nanochannels are similar to those of the Frenkel-Kontorova model \cite{fk1} and of the Wigner crystal in a periodic potential \cite{fki}: in the pinned phase there are exponentially many static configurations which are exponentially close in energy, corresponding to the dynamical glass phase. However, there are also some specific features: for rational values of the density $\nu=\nu_m=1/m$, where $m$ is an integer, the Wigner snake can slide freely, since a displacement does not modify the Coulomb energy of the electron interactions.
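For illustration, a minimal numerical sketch of the map (\ref{eq1}) is given below (Python, not the original code of \cite{snake}); the first equation defines ${\bar v}$ implicitly, and it is solved here by simple fixed-point iteration, which is assumed to converge for the moderate deformations considered here:

```python
import math

def snake_map(v, phi, a, tol=1e-13, max_iter=100):
    """One iteration of the implicit symplectic map of Eq. (1).
    The first equation is implicit in v_bar; it is solved by
    fixed-point iteration starting from v_bar = v."""
    v_bar = v
    for _ in range(max_iter):
        v_next = v + 2.0 * a * a * (1.0 - math.cos(v_bar)) * math.sin(2.0 * phi)
        if abs(v_next - v_bar) < tol:
            v_bar = v_next
            break
        v_bar = v_next
    phi_bar = phi + v_bar + a * a * math.sin(v_bar) * math.cos(2.0 * phi)
    return v_bar, phi_bar

# Poincare-section points are generated by repeated iteration; for an
# undeformed channel (a = 0) the map reduces to free sliding,
# v_bar = v and phi_bar = phi + v.
```

Iterating this map at $a=0.6$ and $a=1.2$ and plotting $({\bar \phi} \bmod 2\pi, {\bar v})$ reproduces the qualitative structure of the Poincar\'e sections discussed above.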
In analogy with the results presented in \cite{fki}, we expect that the quantum Wigner crystal shows a zero-temperature quantum phase transition, going from a pinned phase at $\hbar <\hbar_c \sim 1$ to a sliding phase at $\hbar > \hbar_c $. However, a direct demonstration of this fact requires further numerical simulations using the quantum Monte Carlo methods described in \cite{fki,fkq1,fkq2}. \section{Discussion} \label{sect3} In the previous Section we considered the Wigner crystal in a snaked nanochannel without any internal potential. It is natural to assume that the more realistic case of molecular organic conductors, as illustrated by the Little suggestion in Fig.~\ref{fig1}, has not only a channel deformation but also a periodic potential inside the channel. Thus the case of organic conductors corresponds to a snaked channel with a periodic potential inside it. The combination of the results of \cite{fki,snake} shows that for a given deformation and amplitude of the periodic potential we have the sliding phase in a certain range of charge densities $\nu$: \begin{equation} \nu_{c1} < \nu <\nu_{c2} \;\;\; . \label{eq2} \end{equation} We suppose that the sliding KAM phase may correspond to the effective superconducting behavior of electron transport in organic conductors. Indeed, the pressure diagram of organic conductors shown in Fig.~\ref{fig4}, taken from \cite{brazovski}, demonstrates that superconductivity exists only in a finite range of pressure. We assume that pressure varies the effective charge density inside the molecular channels of the Little suggestion in Fig.~\ref{fig1}. This leads us to the KAM concept of superconductivity of electrons without attractive forces: the Wigner crystal of electrons slides freely inside a snaked molecular crystal channel if the charge density lies inside the KAM phase defined by (\ref{eq2}). Of course, further studies are required for the development of this concept. In fact,
the sliding KAM phase can be viewed as a superfluid phase of electrons. Indeed, we see that in the KAM phase there is a spectrum of excitations with a finite sound velocity $c_s$. Thus, according to the Landau criterion \cite{landau}, the sliding of electrons with velocities $v<c_s$ is superfluid. Hence, the transition from the sliding KAM phase to the pinned Aubry phase corresponds to the transition from superfluid to insulator. In this superfluid liquid the charge carriers have charge $e$ and not $2e$, as is the case for BCS pairs. Perhaps the effect of interactions between electrons in parallel snaked channels should be taken into account to obtain $2e$ pairs. We note that it is known that repulsive interactions can create a superfluid phase in disordered 1D systems, e.g. in the repulsive Hubbard model with disorder \cite{scalettar}. \begin{figure} \includegraphics[width=7.4cm]{fig4.eps} \caption{ } \label{fig4} \end{figure} The existence of the dynamical spin glass phase with a pinned Wigner crystal shows that there should be very slow relaxation processes, corresponding to very slow transitions between quasi-degenerate static equilibrium configurations. In fact, experiments with organic conductors show very slow variations of the conductivity, which take place on a scale of days. Such experimental results have been reported at ECRYS-2011 by K.~Miyagawa \cite{miyagawa} and P.~Monceau \cite{monceau}. Usually it is argued that the glassy phase appears due to impurities. We think that the origin of this phenomenon is not related to disorder and impurities, whose presence should be rather small in the organic crystals used in the experiments \cite{miyagawa,monceau}. In contrast, this glassy phase appears as a result of the dynamical spin glass phase described in \cite{fki,fkq1,fkq2,snake}, which exists in strictly periodic structures without any impurities or disorder.
Finally, we note that in \cite{fki} it was proposed to study the dynamical spin glass with cold ions in optical lattices, which can model the problem of the Wigner crystal in a periodic potential. Such experiments with cold ions are now under active discussion \cite{wunderlich,tosatti} and their experimental realization is on the way \cite{haffner}. This work is supported in part by the ANR PNANO project NANOTERRA.
\section{Introduction} Quantum annealing (QA) attempts to exploit quantum fluctuations to solve computational problems faster than is possible with classical computers \cite{kadowaki_quantum_1998,Brooke1999,brooke_tunable_2001,farhi_quantum_2001,morita:125210,RevModPhys.80.1061,EPJ-ST:2015}. As an approach designed to solve optimization problems, QA is a special case of adiabatic quantum computation (AQC) \cite{farhi_quantum_2000}, a universal model of quantum computing \cite{aharonov_adiabatic_2007,PhysRevLett.99.070502,Gosset:2014rp,Lloyd:2015fk}. In AQC, a system is designed to follow the instantaneous ground state of a time-dependent Hamiltonian whose final ground state encodes the solution to the problem of interest. This results in a certain amount of stability, since the system can thermally relax to the ground state after an error, as well as resilience to errors, since the presence of a finite energy gap suppresses thermal and dynamical excitations \cite{childs_robustness_2001,PhysRevLett.95.250503,TAQC,Lloyd:2008zr,amin_decoherence_2009,Albash:2015nx}. Despite this inherent robustness to certain forms of noise, AQC requires error correction to ensure scalability, just like any other form of quantum information processing \cite{Lidar-Brun:book}. Various error correction proposals for AQC and QA have been made \cite{jordan2006error,PhysRevLett.100.160506,PhysRevA.86.042333,Young:13,Sarovar:2013kx,Young:2013fk,PAL:13,Ganti:13,Bookatz:2014uq,Mizel:2014sp,PAL:14,Vinci:2015jt,Mishra:2015ye,MNAL:15}, but an accuracy-threshold theorem for AQC is not yet known, unlike in the circuit model (e.g., \cite{Aliferis:05}). A direct AQC simulation of a fault-tolerant quantum circuit leads to many-body (high-weight) operators that are difficult to implement \cite{Young:13,Sarovar:2013kx}, or to myriad other problems \cite{Lloyd:2015fk}.
Nevertheless, a scalable method to reduce the effective temperature would go a long way towards approaching the ideal of closed-system AQC, where only non-adiabatic transitions constitute the source of errors. Motivated by the availability of commercial QA devices featuring hundreds of qubits \cite{Dwave,Johnson:2010ys,Berkley:2010zr,Harris:2010kx}, we focus on error correction for QA. There is a consensus that these devices are significantly and adversely affected by decoherence, noise, and control errors \cite{q108,SSSV,Albash:2014if,q-sig2,Crowley:2014qp,Martin-Mayor:2015dq,King:2015zr,vinci2014hearing}, which makes them particularly interesting for the study of tailored, practical error correction techniques. Such techniques, known as quantum annealing correction (QAC) schemes, have already been experimentally shown to significantly improve the performance of quantum annealers \cite{PAL:13,PAL:14,Vinci:2015jt,Mishra:2015ye}, and theoretically analyzed using a mean-field approach \cite{MNAL:15}. However, these QAC schemes are not easily generalizable to arbitrary optimization problems since they induce an encoded graph that is typically of a lower degree than the qubit-connectivity graph of the physical device. Moreover, they typically impose a fixed code distance, which limits their efficacy. To overcome these limitations, here we present a family of error-correcting codes for QA, based on a ``nesting'' scheme, that has the following properties: (1) it can handle arbitrary Ising-model optimization problems, (2) it can be implemented on present-day QA hardware, and (3) it is capable of an effective temperature reduction controlled by the code distance. Our ``nested quantum annealing correction'' (NQAC) scheme thus provides a very general and practical tool for error correction in quantum optimization.
We test NQAC by studying antiferromagnetic complete graphs numerically, as well as on a D-Wave Two (DW2) processor featuring $504$ flux qubits connected by $1427$ tunable composite qubits acting as Ising-interaction couplings, arranged in a non-planar Chimera-graph lattice \cite{Bunyk:2014hb} (complete graphs were also studied for a spin glass model in Ref.~\cite{Venturelli:2014nx}). We demonstrate that our encoding scheme yields a steady improvement in the probability of reaching the ground state as a function of the nesting level, even after minor-embedding the complete graph onto the physical graph of the quantum annealer. We also demonstrate that NQAC outperforms classical repetition code schemes that use the same number of physical qubits.\\ \section{Quantum Annealing and Encoding the Hamiltonian} In QA the system undergoes an evolution governed by the following time-dependent, transverse-field Ising Hamiltonian: \begin{equation} H(t) = A(t) H_X + B(t) H_{\mathrm{P}}\ , \qquad t\in[0,t_f] \ , \label{eq:adiabatic} \end{equation} with respectively monotonically decreasing and increasing ``annealing schedules'' $A(t)$ and $B(t)$. The ``driver Hamiltonian'' $H_X = -\sum_i \sigma_i^x$ is a transverse field whose amplitude controls the tunneling rate. The solution to an optimization problem of interest is encoded in the ground state of the Ising problem Hamiltonian $H_{\mathrm{P}}$, with \begin{equation} \label{eq:HP} H_{\mathrm{P}} = \sum_{i \in \mathcal{V}} h_i \sigma^z_i + \sum_{(i,j) \in \mathcal{E}} J_{ij}\sigma^z_i\sigma^z_j\, , \end{equation} where the sums run over the weighted vertices $\mathcal{V}$ and edges $\mathcal{E}$ of a graph $G = (\mathcal{V},\mathcal{E})$, and $\sigma_i^{x,z}$ denote the Pauli operators acting on qubit $i$.
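For the small instances considered in this work, the classical ground states of $H_{\mathrm{P}}$ in Eq.~\eqref{eq:HP} can be found by exhaustive enumeration. The following Python sketch (illustrative only, not part of the original study; function names are our own) does this for the antiferromagnetic $K_4$, recovering its six degenerate ground states:

```python
from itertools import product

def ising_energy(spins, h, J):
    """Evaluate the classical Ising energy of Eq. (2) for +/-1 spins."""
    E = sum(h[i] * s for i, s in enumerate(spins))
    E += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return E

def ground_states(n, h, J):
    """Exhaustive search over all 2^n spin configurations (small n only)."""
    energies = {s: ising_energy(s, h, J) for s in product([-1, 1], repeat=n)}
    E0 = min(energies.values())
    return E0, [s for s, E in energies.items() if E == E0]

# Antiferromagnetic K_4: all pairs coupled with J_ij = 1, no local fields.
J_K4 = {(i, j): 1.0 for i in range(4) for j in range(i + 1, 4)}
E0, gs = ground_states(4, [0.0] * 4, J_K4)
```

The six ground states are the "two up, two down" configurations, each satisfying four of the six antiferromagnetic couplings, consistent with the random-guess baseline of $6/16$ quoted for this instance below.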
The D-Wave devices use an array of superconducting flux qubits to physically realize the system described in Eqs.~\eqref{eq:adiabatic} and \eqref{eq:HP} on a fixed ``Chimera" graph (see Fig.~\ref{fig:log-nesting}) with programmable local fields $\{h_i\}$, couplings $\{J_{ij}\}$, and annealing time $t_f$ \cite{Johnson:2010ys,Berkley:2010zr,Harris:2010kx}. For closed systems, the adiabatic theorem \cite{Kato:50,Jansen:07} guarantees that if the system is initialized in the ground state of $H(0) = A(0) H_X$, a sufficiently slow evolution relative to the inverse minimum gap of $H(t)$ will take the system with high probability to the ground state of the final Hamiltonian $H(t_f) =B(t_f) H_{\mathrm{P}}$. Dynamical errors then arise due to diabatic transitions, but they can be made arbitrarily small via boundary cancellation methods that control the smoothness of $A(t)$ and $B(t)$, as long as the adiabatic condition is satisfied \cite{lidar:102106,Wiebe:12,Ge:2015wo}. For open systems, specifically a system that is weakly coupled to a thermal environment, the final state is a mixed state $\rho (t_f)$ that is close to the Gibbs state associated with $H(t_f)$ if equilibration is reached throughout the annealing process \cite{Avron:2012tv,Albash:2015nx,Venuti:2015kq}. In the adiabatic limit the open system QA process is thus better viewed as a Gibbs distribution sampler. The main goal of QAC is to suppress the associated thermal errors and restore the ability of QA to act as a ground state solver. In addition QAC should suppress errors due to noise-driven deviations in the specification of $H_{\mathrm{P}}$ \cite{Young:2013fk}. 
Error correction is achieved in QAC by mapping the logical Hamiltonian $H(t)$ to an appropriately chosen encoded Hamiltonian $\bar H(t)$: \begin{equation} \bar H(t) = A(t) H_X + B(t) \bar H_{\mathrm{P}}\ , \qquad t\in[0,t_f] \ , \label{eq:encoded} \end{equation} defined over a set of physical qubits $\bar N$ larger than the number of logical qubits $N = |\mathcal{V}|$. Note that $\bar H_{\mathrm{P}}$ also includes penalty terms, as explained below. The logical ground state of $H_{\mathrm{P}}$ is extracted from the encoded system's state $\bar \rho(t_f)$ through an appropriate decoding procedure. A successful error correction scheme should recover the logical ground state with a higher probability than a direct implementation of $H_{\mathrm{P}}$, or than a classical repetition code using the same number of physical qubits $\bar N$. Due to practical limitations of current QA devices that prevent the encoding of $H_X$, only $H_{\mathrm{P}}$ is encoded in QAC. \begin{figure}[ht] \begin{center} \subfigure[\ Logical graph: 1st level.]{\includegraphics[width=0.23\textwidth]{4x4_ideal_logical_plot_1}\label{fig:4x4_ideal_logical_plot_1}} \subfigure[\ 1st level ME.]{\includegraphics[width=0.23\textwidth]{4x4_ideal_physical_1}\label{fig:4x4_ideal_physical_1}} \subfigure[\ Nested graph: 4th level.]{\includegraphics[width=0.23\textwidth]{4x4_ideal_logical_plot_4}\label{fig:4x4_ideal_logical_plot_4}} \subfigure[\ 4th level ME.]{\includegraphics[width=0.23\textwidth]{4x4_ideal_physical_4}\label{fig:4x4_ideal_physical_4}} \caption{Illustration of the nesting scheme. In the left column, a $C$-level nested graph is constructed by embedding a $K_N$ into a $K_{C\times N}$, with $N=4$ and $C=1$ (top) and $C=4$ (bottom). Red, thick couplers are energy penalties defined on the nested graph between the $(i,c)$ nested copies of each logical qubit $i$. The right column shows the nested graphs after ME on the DW2 Chimera graph. 
Brown, thick couplers correspond to the ferromagnetic chains introduced in the process. } \label{fig:log-nesting} \end{center} \end{figure} \begin{figure*}[ht] \begin{center} {\includegraphics[width=0.33\textwidth]{4x4_ideal_per_logical-}\label{fig:4x4_ideal_per_logical-}} {\includegraphics[width=0.33\textwidth]{Overlapped_K4}\label{Overlapped_K4}} {\includegraphics[width=0.33\textwidth]{MuC}\label{fig:EBoost_SQA}} \caption{Experimental and numerical results for the antiferromagnetic $K_4$, after encoding, followed by ME and decoding. Left: DW2 success probabilities $P_C(\alpha)$ for eight nesting levels $C$. Increasing $C$ generally increases $P_C(\alpha)$ at fixed $\alpha$. Middle: Rescaled $P_C(\alpha\mu_C)$ data, exhibiting data-collapse. Right: scaling of the energy boost $\mu_C$ \textit{vs} the maximal energy boost $\mu_C^{\max}$, for both the DW2 and SQA. Purple circles: DW2 results. Blue stars: SQA for the case of no ME (i.e., for the problem defined directly over $K_{C\times N}$ and no coupler noise). Red up-triangles: SQA for the Choi ME \cite{Choi2} (for a full Chimera graph), with $\sigma = 0.05$ Gaussian noise on the couplings. Yellow right-triangles: SQA for the DW2 heuristic ME \cite{Cai:2014nx,Boothby2015a} (applied to a Chimera graph with $8$ missing qubits) with $\sigma = 0.05$ Gaussian noise on the couplings. The flattening of $\mu_C$ suggests that the energy boost becomes less effective at large $C$. However, this can be remedied by increasing the number of SQA sweeps (see Appendix \ref{sec:Num_Add}), fixed here at $10^4$. Thus the lines represent best fits to only the first four data points, with slopes $0.98$, $0.91$, $0.62$ and $0.69$ respectively. In all panels $N_{\mathrm{phys}}\in [8,288]$.} \label{fig:exp-k4-nesting} \end{center} \end{figure*} In order to allow for the most general $N$-variable Ising optimization problem, we now define an encoding procedure for problem Hamiltonians $H_{\mathrm{P}}$ supported on a complete graph $K_N$. 
The first step of our construction involves a ``nested" Hamiltonian $\tilde H_{\mathrm{P}}$ that is defined by embedding the logical $K_N$ into a larger $K_{C\times N}$. The integer $C$ is the ``nesting level" and controls the amount of hardware resources (qubits, couplers, and local fields) used to represent the logical problem. $\tilde H_{\mathrm{P}}$ is constructed as follows. Each logical qubit $i$ ($i = 1,\dots,N$) is represented by a $C$-tuple of encoded qubits $(i,c)$, with $c = 1,\dots,C$. The ``nested" couplers $\tilde J_{(i,c),(j,c')}$ and local fields $\tilde h_{(i,c)}$ are then defined as follows: \bes \label{eq:nesting} \begin{align} \tilde J_{(i,c),(j,c')} &= J_{ij}\,, \quad \forall c,c', i\neq j\ , \\ \tilde h_{(i,c)} &= C h_{i}\,, \quad \forall c, i\ , \label{eqt:h} \\ \tilde J_{(i,c),(i,c')} &= -\gamma \,, \quad \forall c\neq c' \ . \end{align} \ees This construction is illustrated in the left column of Fig.~\ref{fig:log-nesting}. Each logical coupling $J_{ij}$ has $C^2$ copies $\tilde J_{(i,c),(j,c')}$, thus boosting the energy scale at the encoded level by a factor of $C^2$. Each local field $h_{i}$ has $C$ copies $\tilde h_{(i,c)}$; the factor $C$ in Eq.~\eqref{eqt:h} ensures that the energy boost is equalized with the couplers. For each logical qubit $i$, there are $C(C-1)/2$ ferromagnetic couplings $\tilde J_{(i,c),(i,c')}$ of strength $\gamma>0$ (to be optimized), representing energy penalties that promote agreement among the $C$ encoded qubits, i.e., that bind the $C$-tuple as a single logical qubit $i$. The second step of our construction is to implement the fully connected problem $\tilde H_{\mathrm{P}}$ on given QA hardware, with a lower-degree qubit connectivity graph. This requires a minor embedding (ME) \cite{Kaminsky-Lloyd,Choi2,klymko_adiabatic_2012,Cai:2014nx,Boothby2015a}. 
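The nesting rules of Eq.~\eqref{eq:nesting} translate directly into code. The sketch below (Python, illustrative; the helper name `nest` is our own) builds the nested fields and couplers from a logical problem on $K_N$:

```python
def nest(h, J, C, gamma):
    """Construct the nested problem of Eq. (4) from a logical Ising
    problem (h, J) on K_N.  Encoded qubits are labeled by pairs (i, c)."""
    N = len(h)
    # Local fields are amplified by C to equalize the energy boost.
    h_nested = {(i, c): C * h[i] for i in range(N) for c in range(C)}
    J_nested = {}
    # C^2 copies of every logical coupling J_ij ...
    for (i, j), Jij in J.items():
        for c in range(C):
            for cp in range(C):
                J_nested[((i, c), (j, cp))] = Jij
    # ... and C(C-1)/2 ferromagnetic energy penalties per logical qubit.
    for i in range(N):
        for c in range(C):
            for cp in range(c + 1, C):
                J_nested[((i, c), (i, cp))] = -gamma
    return h_nested, J_nested

# Example: nest the antiferromagnetic K_4 at level C = 3.
J_K4 = {(i, j): 1.0 for i in range(4) for j in range(i + 1, 4)}
h_n, J_n = nest([0.0] * 4, J_K4, C=3, gamma=1.0)
```

Counting terms confirms the scaling quoted in the text: each of the $6$ logical couplings acquires $C^2 = 9$ copies, and each of the $4$ logical qubits acquires $C(C-1)/2 = 3$ penalty couplings.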
The procedure involves replacing each qubit in $\tilde H_{\mathrm{P}}$ by a ferromagnetically-coupled chain of qubits, such that all couplings in $\tilde H_{\mathrm{P}}$ are represented by inter-chain couplings. The intra-chain coupling represents another energy penalty that forces the chain qubits to behave as a single logical qubit. The physical Hamiltonian obtained after this ME step is the final encoded Hamiltonian $\bar H_{\mathrm{P}}$. We can minor-embed a $K_{C\times N}$ nested graph by representing each qubit $(i,c)$ as a physical chain of length $L = \lceil C N/4 \rceil+1$ on the Chimera graph \cite{Choi2}. This is illustrated in the right column of Fig.~\ref{fig:log-nesting}. The number of physical qubits necessary for a ME of a $K_{C\times N}$ is $N^{\mathrm{phys}}_{C} = CNL \sim C^2N^2/4$. At the end of a QA run implementing the encoded Hamiltonian $\bar H_{\mathrm{P}}$ and a measurement of the physical qubits, a decoding procedure must be employed to recover the logical state. For the sake of simplicity we only consider majority vote decoding over both the length-$L$ chain of each encoded qubit $(i,c)$ and the $C$ encoded qubits comprising each logical qubit $i$ (decoding over the length-$L$ chain first, then over the $C$ encoded qubits, does not affect performance; see Appendix~\ref{sec:Exp_Meth}). The encoded and logical qubits can thus be viewed as forming repetition codes with, respectively, distance $L$ and $C$. Other decoding strategies are possible wherein the encoded or logical qubits do not have this simple interpretation; e.g., energy minimization decoding, which tends to outperform majority voting \cite{Vinci:2015jt}.
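The two-stage majority-vote decoding can be sketched as follows (Python, illustrative; the readout layout and function names are our own, and ties are broken at random as described in the text):

```python
import random

def majority(values):
    """Majority vote over +/-1 values; a tie is broken at random."""
    s = sum(values)
    if s == 0:
        return random.choice([-1, 1])
    return 1 if s > 0 else -1

def decode(readout, N, C, L):
    """Two-stage majority-vote decoding: first over each length-L physical
    chain, then over the C encoded copies of each logical qubit.
    readout maps (i, c, l) -> +/-1 measurement outcomes."""
    logical = []
    for i in range(N):
        copies = [majority([readout[(i, c, l)] for l in range(L)])
                  for c in range(C)]
        logical.append(majority(copies))
    return logical

# A single flipped physical qubit is corrected by the chain-level vote.
readout = {(0, c, l): 1 for c in range(2) for l in range(3)}
readout[(0, 0, 2)] = -1
```

With $C=2$ and $L=3$ the flipped qubit is outvoted within its chain, so the decoded logical spin is still $+1$.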
In the unlikely event of a tie, we assign a random value of $+1$ or $-1$ to the logical qubit.\\ \begin{figure*}[ht] \begin{center} {\includegraphics[width=0.33\textwidth]{8x8_ideal_random_per_logical-}\label{fig:8x8_ideal_random_per_logical-}} {\includegraphics[width=0.33\textwidth]{8x8_ideal_random_per_logical-corrected-}\label{fig:8x8_corrected}} {\includegraphics[width=0.33\textwidth]{Choi_K8_1_pGS_ME=1_MVAll_nSW=20k_beta=01_v4}\label{fig:8x8_sqa}} \caption{Random antiferromagnetic $K_8$: experimental and numerical results. Left: success probabilities $P_C(\alpha)$ for four nesting levels. Middle: success probabilities $P_C'(\alpha)$ adjusted for classical repetition. Right: numerical results for SQA simulations with $20000$ sweeps, $\sigma = 0.05$ Gaussian noise on the couplings, and with the Choi embedding, showing five nesting levels. Inset: scaling of the energy boost $\mu_C$ \textit{vs} the maximal energy boost $\mu_C^{\max}$, for both the DW2 and SQA. Yellow circles: DW2 results. Blue crosses and red up-triangles: SQA for the Choi ME with $10000$ (crosses) and $20000$ (up-triangles) sweeps, and with $\sigma = 0.05$ Gaussian noise on the couplings. The flattening of $\mu_C$ for $C>4$ suggests that the energy boost becomes less effective at large $C$, but increasing the number of sweeps recovers the effectiveness. The lines represent best fits to only the first four data points, with respective slopes $\eta/2=0.65$, $0.75$, and $0.85$.} \label{fig:exp-k8-nesting} \end{center} \end{figure*} \section{Results} \textbf{Free energy} -- Using a mean-field analysis similar to the approach pursued in Ref.~\cite{MNAL:15} we can compute the partition function associated with the nested Hamiltonian $A(t)H_X + B(t)\tilde H_{\mathrm{P}}$ for the case with uniform antiferromagnetic couplings. 
This leads to the following free energy density in the low temperature and thermodynamic limits (see Appendix~\ref{sec:MeanField}): \bea \hspace{-.5cm} \beta F = C^2 \beta \left( \sqrt{\left[ A(t)/C \right]^2 + \left[ 2 {\gamma} B(t) m \right]^2 } - {\gamma} B(t) m^2 \right) \label{eq:freeE} \eea where $m$ is the mean-field magnetization. There are two noteworthy aspects of this result. First, the driver term is rescaled as $A(t) \mapsto C^{-1} A(t)$. This shifts the crossing between the $A$ and $B$ annealing schedules to an earlier point in the evolution and is related to the fact that QAC encodes only the problem Hamiltonian term proportional to $B(t)$. Consequently the quantum critical point is moved to earlier in the evolution, which benefits QAC since the effective energy scale at this new point is higher~\cite{MNAL:15}. Second, the inverse temperature is rescaled as $\beta \mapsto C^2 \beta$. This corresponds to an effective temperature reduction by $C^2$, a manifestly beneficial effect. The same conclusion, of a lower effective temperature, is reached by studying the numerically computed success probability associated with thermal distributions (see Appendix~\ref{sec:Num_Add}). We shall demonstrate that this prediction is borne out by our experimental results, though it is masked to some extent by complications arising from the ME and noise. \\ \textbf{NQAC results} -- The hardness of an Ising optimization problem, using a QA device, is controlled by its size $N$ as well as by an overall energy scale $\alpha$ \cite{q-sig2}. The smaller this energy scale, the higher the effective temperature and the more susceptible QA becomes to (dynamical and thermal) excitations out of the ground state and misspecification noise on the problem Hamiltonian. This provides us with an opportunity to test NQAC.
Since in our experiments we were limited by the largest complete graph that can be embedded on the DW2 device, a $K_{32}$ (see Appendix~\ref{sec:two-MEs} for details), we tuned the hardness of a problem by studying the performance of NQAC as a function of $\alpha$ via $H_{\mathrm{P}} \mapsto \alpha H_{\mathrm{P}}$, with $0<\alpha\leq 1$. Note that we did not rescale $\gamma$; instead $\gamma$ was optimized for optimal post-decoding performance (see Appendix~\ref{sec:Exp_Add}). It is known that for the DW2, intrinsic coupler control noise can be taken to be Gaussian with standard deviation $\sigma\sim 0.05$ of the maximum value for the couplings \cite{King:2015zr}. Thus we may expect that, without error correction, Ising problems with $\alpha \lesssim 0.05$ are dominated by control noise. We applied NQAC to completely antiferromagnetic ($h_i=0$ $\forall i$) Ising problems over $K_4$ ($J_{ij} = 1$ $\forall i,j$), and $K_8$ (random $J_{ij} \in [0.1,1]$ with steps of 0.1) with nesting up to $C=8$ and $C=4$, respectively. We denote by $P_C(\alpha)$ the probability to obtain the logical ground state at energy scale $\alpha$ for the $C$-level nested implementation (see Appendix~\ref{sec:Exp_Meth} for data collection methods). The experimental QA data in Fig.~\ref{fig:exp-k4-nesting} (left) shows a monotonic increase of $P_C(\alpha)$ as a function of the nesting level $C$ over a wide range of energy scales $\alpha$. As expected, $P_C(\alpha)$ drops from $P_C(1) = 1$ (solution always found) to $P_C(0) = 6/16$ (random sampling of $6$ ground states, where $4$ out of the $6$ couplings are satisfied, out of a total of $16$ states). Note that $P_1(\alpha)$ (no nesting) drops by $\sim50\%$ when $\alpha \sim 0.1$, which is consistent with the aforementioned $\sigma\sim 0.05$ control noise level, while $P_8(\alpha)$ exhibits a similar drop only when $\alpha \sim 0.01$. 
This suggests that NQAC is particularly effective in mitigating the two dominant effects that limit the performance of quantum annealers: thermal excitations and control errors. To investigate this more closely, the middle panel of Fig.~\ref{fig:exp-k4-nesting} shows that the data from the left panel can be collapsed via $P_C(\alpha) \mapsto P_C(\alpha/\mu_C)$, where $\mu_C$ is an empirical rescaling factor discussed below (see also Appendix~\ref{sec:mu_C}). This implies that $P_1(\mu_C \alpha)\approx P_C(\alpha)$, and hence that the performance enhancement obtained at nesting level $C$ can be interpreted as an energy boost $\alpha \mapsto \mu_C \alpha$ with respect to an implementation without nesting. The existence of this energy boost is a key feature of NQAC, as anticipated above. Recall [Eq.~\eqref{eq:nesting}] that a nested graph $K_{C\times N}$ contains $C^2$ equivalent copies of the same logical coupling $J_{ij}$. Hence a level-$C$ nesting before ME can provide a maximal energy boost $\mu_C^{\max}=C^{\eta^{\max}}$, with $\eta^{\max}=2$. This simple argument agrees with the reduction of the effective temperature by $C^2$ based on the calculation of the free energy~\eqref{eq:freeE}. The right panel of Fig.~\ref{fig:exp-k4-nesting} shows $\mu_C$ as a function of $\mu_C^{\max}$, yielding $\mu_C \sim C^\eta$ with $\eta \approx {1.37}$ (purple circles). To understand why $\eta < \eta^{\max}$, we performed simulated quantum annealing (SQA) simulations (see Appendix~\ref{sec:Num_Meth} for details). We observe in Fig.~\ref{fig:exp-k4-nesting} (right) that without ME and control errors, the boost scaling matches $\mu_C^{\max}$ (blue stars). When including ME and control errors a performance drop results (red triangles). Both factors thus contribute to the sub-optimal energy boost observed experimentally. However, the optimal energy boost is recovered for a fully thermalized state with a sufficiently large penalty (see Appendix~\ref{sec:Num_Add}). 
To match the experimental DW2 results using SQA we replace the Choi ME designed for full Chimera graphs \cite{Choi2} by the heuristic ME designed for Chimera graphs with missing qubits \cite{Cai:2014nx,Boothby2015a}, and achieve a near match (yellow triangles) (see Appendix~\ref{sec:two-MEs} for more details on ME). \\ \textbf{Performance of NQAC \textit{vs} classical repetition} -- Recall that $N^{\mathrm{phys}}_{C} = CNL$ is the total number of physical qubits used at nesting level $C$; let ${C_{\max}}$ denote the highest nesting level that can be accommodated on the QA device for a given $K_N$, i.e., $C_{\max}NL\leq N_{\mathrm{tot}} < (C_{\max}+1)NL$, where $N_{\mathrm{tot}}$ is the total number of physical qubits ($504$ in our experiments). Then $M_C = \lfloor N^{\mathrm{phys}}_{C_{\max}}/N^{\mathrm{phys}}_{C}\rfloor$ is the number of copies that can be implemented in parallel. For NQAC at level $C$ to be useful, it must be more effective than a classical repetition scheme where $M_C$ copies of the problem are implemented in parallel. If a single implementation has success probability $P_C(\alpha)$, the probability to succeed at least once with $M_C$ statistically independent implementations is $P_C'(\alpha) = 1-[1-P_C(\alpha)]^{M_C}$. It turns out that the antiferromagnetic $K_4$ problem, for which a random guess succeeds with probability $6/16$, is too easy [i.e., $P_{C}'(\alpha)$ approaches $1$ too rapidly], and we therefore consider a harder problem: an antiferromagnetic $K_8$ instance with couplings randomly generated from the set $J_{ij} \in \{0.1,0.2,\dots,0.9,1\}$ (see Appendix~\ref{sec:Exp_Add} for more details and data on this and additional instances). Problems of this type turn out to have a sufficiently low success probability for our purposes, and can still be nested up to $C=4$ on the DW2 processor. Results for $P_C(\alpha)$ are shown in Fig.~\ref{fig:exp-k8-nesting} (left), and again increase monotonically with $C$, as in the $K_4$ case. 
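The repetition-adjusted success probability is a simple counting argument; a sketch (the single-run success probabilities below are illustrative placeholders, not our measured $P_C(\alpha)$):

```python
def num_copies(C, C_max=4):
    """M_C = floor(N_phys_Cmax / N_phys_C); with N_phys_C = C*N*L the
    N*L factors cancel, leaving floor(C_max / C)."""
    return C_max // C

def boosted_success(p_single, C, C_max=4):
    """Probability of at least one success among M_C independent copies."""
    return 1.0 - (1.0 - p_single) ** num_copies(C, C_max)

# Illustrative numbers: if C=2 nesting lifts the single-run success
# probability from 0.10 to 0.25, it still wins after the correction:
p1_adj = boosted_success(0.10, C=1)  # 4 parallel copies
p2_adj = boosted_success(0.25, C=2)  # 2 parallel copies
print(round(p1_adj, 4), round(p2_adj, 4))  # 0.3439 0.4375
```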
For each $C$, $P_C(\alpha)$ peaks at a value of $\alpha$ for which the maximum allowed strength of the energy penalties $\gamma = 1$ is optimal ($\gamma>1$ would be optimal for larger $\alpha$, as shown in Appendix~\ref{sec:Exp_Add}; the growth of the optimal penalty with problem size, and hence chain length, is a typical feature of minor-embedded problems \cite{Venturelli:2014nx}). An energy-boost interpretation of the experimental data of Fig.~\ref{fig:exp-k8-nesting} is possible for $\alpha$ values to the left of the peak; to the right of the peak, the performance is hindered by the saturation of the energy penalties. Figure~\ref{fig:exp-k8-nesting} (middle) compares the success probabilities $P_C'(\alpha)$ adjusted for classical repetition, where we have set ${C_{\max}}=4$, and shows that $P_2'(\alpha) > P_1'(\alpha)$, i.e., even after accounting for classical parallelism $C=2$ performs better than $C=1$. However, we also find that $P_4'(\alpha)< P_3'(\alpha) \leq P_2'(\alpha)$, so no additional gain results from increasing $C$ in our experiments. This can be attributed to the fact that even the $K_8$ problem still has a relatively large $P_1(\alpha)$. Experimental tests on QA devices with more qubits will thus be important to test the efficacy of higher nesting levels on harder problems. To test the effect of increasing $C$, and also to study the effect of varying the annealing time, we present in Fig.~\ref{fig:exp-k8-nesting} (right) the performance of SQA on a random $K_8$ antiferromagnetic instance with the Choi ME. The results are qualitatively similar to those observed on the DW2 processor with the heuristic ME [Fig.~\ref{fig:exp-k8-nesting} (left)]. Interestingly, we observe a drop in the peak performance at $C=5$ relative to the peak observed for $C=4$. We attribute this to both a saturation of the energy penalties and a suboptimal number of sweeps. 
The latter is confirmed in Fig.~\ref{fig:exp-k8-nesting} (right, inset), where we observe that the scaling of $\mu_C$ with $C$ is better for the case with more sweeps, i.e., again $\mu_C\sim C^\eta$, and $\eta$ increases with the number of sweeps.\\ \section{DISCUSSION} Nested QAC offers several significant improvements over previous approaches to the problem of error correction for QA. It is a flexible method that can be used with any optimization problem, and allows the construction of a family of codes with arbitrary code distance. We have given experimental and numerical evidence that nesting is effective by performing studies with a D-Wave QA device and numerical simulations. We have demonstrated that the protection from errors provided by NQAC can be interpreted as arising from an increase (with nesting level $C$) in the energy scale at which the logical problem is implemented. This represents a very useful tradeoff: the effective temperature drops as we increase the number of qubits allocated to the encoding, so that these two resources can be traded. Thus NQAC can be used to combat thermal excitations, which are the dominant source of errors in QA, and are the bottleneck for scalable QA implementations. We have also demonstrated that an appropriate nesting level can outperform classical repetition with the same number of qubits, with improvements to be expected when next-generation QA devices with larger numbers of physical qubits become available. We, therefore, believe that our results are of immediate and near-future practical use, and constitute an important step toward scalable QA.\\ \section*{ACKNOWLEDGEMENTS} \noindent We thank Prof.~Hidetoshi Nishimori and Dr.~Shunji Matsuura for valuable comments, and Dr.~Aidan Roy for providing the minor embeddings used in the experiments with the D-Wave Two. Access to the D-Wave Two was made available by the USC-Lockheed Martin Quantum Computing Center. 
Part of the computing resources were provided by the USC Center for High Performance Computing and Communications. This work was supported under ARO grant number W911NF-12-1-0523, ARO MURI Grant Nos. W911NF-11-1-0268 and W911NF-15-1-0582, and NSF grant number INSPIRE-1551064.\\
\section{Introduction}\label{sec1} Radio pulsars~\citep{Hewish-1968} are rapidly rotating neutron stars whose physical facets differ strongly from those encountered on Earth: pulsars have, for example, magnetic fields that are twelve orders of magnitude stronger than those of Earth; yet they are three orders of magnitude smaller. There are now more than 2500 pulsars known within our Galaxy, including its globular clusters\footnote{\url{http://www.atnf.csiro.au/people/pulsar/psrcat} (catalogue version 1.54)}~\citep{Manchester-2005}. Normal radio pulsars emit radio waves along the field lines over their magnetic poles; as the pulsar continuously spins and radiates, its radio emission is ultimately caught by radio telescopes as periodic pulses~\citep[see][for a general overview of pulsar radio emission]{LorimerKramer-2004}. Such pulses can be \bfref{integrated} to get a \bfref{higher signal-to-noise (S/N)} pulse profile. Rotating radio transients~\citep[RRATs,][]{McLaughlin-2006}, in contrast, emit only occasionally; \bfref{these sources are more easily found in single pulse searches than in periodicity searches}~\citep{Cordes-2003, Keane-2011}. These thus require a more detailed analysis of individual pulses. Similarly, it might be possible to receive extremely bright giant pulses from relatively young pulsars in distant galaxies, another benefit of single-pulse searches~\citep{McLaughlin-2003}. Catching single pulses is, however, challenging as the astronomical data is usually contaminated with radio frequency interference (RFI). Such detrimental signals can be produced by terrestrial sources in a frequency range coincident with that of the observation. \begin{table*}[t] \centering \caption{Past surveys dedicated to \bfref{the periodicity (PS) and single-pulse (SPS)} pulsar search\bfref{es} in nearby galaxies. 
\bfref{Tabulated are: galaxy, its distance, the telescope, central frequency, bandwidth in MHz, sampling time in $\bmref{\mu}$s, total dwell time in hours, and the obtained average (for PS) and peak (for SPS) flux densities in Jy.}} \begin{threeparttable} \scalebox{0.595}{ \begin{tabular}{c c c c c c c c c c} \hline\hline Galaxy & $d$ & Telescope & $F_\textrm{cntr}$ & $\Delta\nu$ & $T_\textrm{samp}$ & $T_\textrm{int}$ & $S_\textrm{min}$\,[PS] & $S_\textrm{peak}$\,[SPS] & Reference \\ & (Mpc) & & (MHz) & (MHz) & ($\bmref{\mu}$s) & (hrs) & (Jy) & (Jy) & \\ \hline \multirow{2}{*}{M33} & \multirow{2}{*}{0.84} & \multirow{2}{*}{Arecibo} & 430 & 10 & 102.4 & 3.0 & $\bmref{0.2\times10^{-3}}$ & \bfref{0.5} & 1 \\ & & & 1440 & 100 & 64 (100) & 2.0 (3.0) & $\bmref{5.3\times10^{-6}}$ & \bfref{0.1} & 2 \\[0.3cm] NGC253, NGC300, Fornax & 3.0, 2.0, 16.9 & \multirow{2}{*}{Parkes} & \multirow{2}{*}{435} & \multirow{2}{*}{32} & \multirow{2}{*}{420} & \multirow{2}{*}{3.0} & \multirow{2}{*}{\bfref{(\dots)}} & \multirow{2}{*}{$\bmref{0.09}$} & \multirow{2}{*}{1} \\ NGC6300, NGC7793 & 16.9, 3.5 & & & & & & & & \\[0.3cm] Leo I (dSph galaxy) & 0.25 & \multirow{2}{*}{GBT} & 350 & 100 & 81.92 & 20.0 & $\bmref{\sim2.0\times10^{-4}}$ & $\bmref{\sim0.04}$ & 3 \\ IC 10 & 0.66 & & 820 & 200 & 204.8 & 16.0 & $\bmref{1.5\times10^{-5}}$ & \bfref{0.02} & 4 \\[0.3cm] Willman 1, Boo dw, UMi dw & 0.046, 0.062, 0.075 & \multirow{6}{*}{GBT (Arecibo)} & \multirow{6}{*}{820 (327)} & \multirow{6}{*}{50 (50)} & \multirow{6}{*}{81.92 (128)} & 3.3, 4.4, 5.3 & \multirow{6}{*}{\bfref{(\dots)}} & \bfref{0.5 (1.54), 0.3 (1), 1.6 (5), each }$\bmref{\times10^3}$ & \multirow{6}{*}{5} \\ Dra dw, Scl dw, Sex dw & 0.078, 0.093, 0.099 & & & & & 3.6, 2.8, 3.25 & & \bfref{2.3 (7), 2.3 (7.1), 2.3 (7.1), each }$\bmref{\times10^3}$ & \\ CVn I, Leo I-II, UMa II & 0.23, 0.28, 0.23, 0.25 & & & & & 6.2, 21.9, 10.9, 2.0 & & \bfref{4 (12), 6 (19), 4 (12), 19 (57), each }$\bmref{\times10^3}$ & \\ And II, III, VI, XI-XIV & 0.9 &
& & & & 1.4, 1.1, 2.5, 3.9, 3.7, 3.0, 2.0 & & \bfref{64 (198), each }$\bmref{\times10^3}$ & \\ Leo T, Leo A & 0.4, 0.78 & & & & & 4.2, 6.1 & & \bfref{13 (40), 44 (136), each }$\bmref{\times10^3}$ & \\ IC 1613, LGS 3, Peg dw & 0.9, 0.93, 0.93 & & & & & 3.5, 3.9, 2.5 & & \bfref{63 (195), 67 (207), 67 (207), each }$\bmref{\times10^3}$ & \\[0.3cm] M31 & 0.8 & WSRT & 328 & 10 & 409.6 & 32.0 & $\bmref{0.3\times10^{-3}}$ & \bfref{2.8} & 6 \\ \hline \end{tabular} } \tablebib{ (1)~\citet{McLaughlin-2003}; (2)~\citet{Bhat-2011}; (3)~\citet{RubioHerrera-2013}; (4)~\citet{Noori-2014}; \\ (5)~\citet{Vlad-2013} and Kondratiev~\citetext{priv. comm.}; (6)~\citet{Rubio-Herrera-2013}. } \end{threeparttable} \label{Surveys} \end{table*} Several other factors usually play important roles in the apparent brightness of the pulsar signal. First off, the distance $d$ to the source clearly affects the pulsar flux density: $S\,\bmref{\propto}\,d^{-2}$. Furthermore, after the pulsar wave front has travelled through the \bfref{inter}stellar medium, the lower frequencies arrive later than higher frequencies (dispersion, $t_\mathrm{arrival}\,\bmref{\propto}\,\nu^{-2}$). Multi-path propagation also introduces a delayed power-law tail in the integrated pulse profile (scattering, $\tau_\mathrm{scatter}\,\bmref{\propto}\,\nu^{-4}$). The discovery of new pulsars, either periodic or transient, provides better insights into, for example the source birth rate, the progenitor populations, the spatial and flux density distributions. The subsequent timing of the most stable radio pulsars can lead to very good tests of space-time curvature that probe gravitational effects and of supranuclear density which can establish well-fitted equations of neutron star state. Moreover, the study of frequency-dependent scattering and dispersion \bfref{can result in a more detailed understanding of the content and density profile of the free electrons in the interstellar medium~\citep{Cordes-2002}}. 
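For reference, the dispersive delay follows the standard cold-plasma formula $\Delta t \approx 4.15\,\mathrm{ms}\times\mathrm{DM}\times(\nu_\mathrm{lo}^{-2}-\nu_\mathrm{hi}^{-2})$, with frequencies in GHz and DM in pc\,cm$^{-3}$; a small numerical sketch (the DM and band below are chosen for illustration):

```python
K_DM_MS = 4.15  # ms GHz^2 cm^3 / pc; standard (approximate) dispersion constant

def dispersion_delay_ms(dm, nu_lo_ghz, nu_hi_ghz):
    """Arrival-time delay (ms) of the low frequency relative to the high one."""
    return K_DM_MS * dm * (nu_lo_ghz**-2 - nu_hi_ghz**-2)

# A DM = 50 pc/cm^3 pulse observed across a 110-190 MHz band arrives
# more than 11 s later at the bottom of the band than at the top:
delay = dispersion_delay_ms(50.0, 0.110, 0.190)
print(round(delay / 1000.0, 1), "s")  # 11.4 s
```

The steep $\nu^{-2}$ dependence is why low-frequency searches must correct for dispersion so finely.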
For the extragalactic search, apart from the analysis of intergalactic matter, the presence of pulsars in other galaxies allows us to establish the link between the galaxy evolution and the pulsar population synthesis there. From such a link we can infer whether different galactic progenitors create different neutron star populations. There have been numerous pulsar discoveries among relatively close stellar overdensities, for example globular clusters\footnote{\url{http://www.naic.edu/~pfreire/GCpsr.html}}~\citep[see][for a review]{Camilo-2005}, \bfref{and nearest neighbour galaxies of the Local Galactic Group:} the Small and Large Magellanic Clouds~\citep{Crawford-2001, Ridley-2013}; but also non-detections towards, for example, dwarf spheroidal galaxies~\citep{RubioHerrera-2013}. In addition, several efforts have been made to capture periodic as well as single pulse signals from nearby galaxies, with different telescopes. Unfortunately none of these searches discovered new sources. For the deepest of such past surveys, we list the frequencies and sensitivities in Table~\ref{Surveys}. These frequencies range from 328-1440 MHz. \bfref{Based on the facts that at lower frequencies pulsar beams get wider and pulsar fluxes get higher~\citep{Stappers-2011}, }there is a possibility that deep, lower-frequency surveys could find pulsars that these past efforts have missed. \begin{table*}[ht!] \centering \caption{LOFAR Observations of nearby galaxies M33, M81, and M82. 
For tied-array rings (TARs) only the coordinates of the central tied-array beam (TAB) are provided.} \begin{tabular}{c | c | c | c | c | c | c} \hline\hline ObsID & Galaxy & Angular size & Dwell time & \textnumero~TARs\,/\,TABs & RA & DEC \\ & & (arcmin) & (hrs) & & (J2000) & (J2000) \\ \hline L274117 & M33 & 70.8 $\times$ 41.7 & \multirow{5}{*}{4} & 5 TARs & 01:33:50.90 & +30:39:35.8 \\ \cline{1-3} \cline{5-7} \multirow{4}{*}{L342964} & M81 & 26.9 $\times$ 14.1 & & 3 TARs & 09:55:33.19 & +69:03:55.0 \\ \cline{2-3} \cline{5-7} & \multirow{3}{*}{M82} & \multirow{3}{*}{11.2 $\times$ 4.3} & & 1 TAB & 09:56:06.10 & +69:41:15.0 \\ \cline{5-7} & & & & 1 TAB & 09:55:52.20 & +69:40:47.0 \\ \cline{5-7} & & & & 1 TAB & 09:55:38.75 & +69:40:20.0 \\ \hline \end{tabular} \label{Obs} \end{table*} The LOw Frequency Array~\citep[LOFAR,][]{Haarlem-2013} is a radio telescope capable of tracking several nearby galaxies, at long wavelengths. The LOFAR high-band antenna's (HBA, 120 -- 240 MHz) frequency range approximately coincides with the peak of pulsar flux density distribution~\citep[100 -- 200\,MHz,][]{Stappers-2011}, which is an advantage for a pulsar search. Compared to previous low-frequency pulsar searches, further improvements such as the much increased bandwidth ($\approx 70\,\mbox{MHz}$), the high core gain ($\approx 8.8\,\mbox{K/Jy}$), and the relatively low antenna temperature~\citep[$\approx 160\,\mbox{K}$, see Eq. 3 from][]{Leeuwen-2010} mean that LOFAR can, on the one hand, find many new interesting pulsars~\citep[see][]{Coenen-2014}, and on the other hand, better investigate their fundamental emission peculiarities~\citep[see][]{Stappers-2011}. In general, pulsar beams are thought to be wider at lower frequencies, giving LOFAR \bfref{the advantage in capturing radio pulses} over higher-frequency surveys. Finally, the LOFAR multi-beaming allows for efficient, deep integrations \bfref{at a sensitivity and field of view unparalleled in the world}. 
The main downside of searching for pulsars with LOFAR is that dispersion and scattering degrade the inherently sharp pulsar peaks. In this paper, we describe the searches we have performed with LOFAR for pulsars in the spiral Triangulum Galaxy M33 (distance $d\approx 0.73-0.94\,\mbox{Mpc}$), Bode's Galaxy M81 ($d\approx 3.50-3.74\,\mbox{Mpc}$), and the starburst Cigar Galaxy M82 ($d\approx 3.5-3.8\,\mbox{Mpc}$). Given their northern positions (see Table~\ref{Obs}), these three galaxies can be well targeted with LOFAR, as the projected effective area of LOFAR's fixed dipoles, and thus the sensitivity, is the highest close to zenith~\citep[see][]{Leeuwen-2010}. The origin and current environment of a galaxy inevitably affect its star-formation rate and mass-energy distribution. As a result, the pulsar population in nearby galaxies can be different from that of our Milky Way, and a comparison between the two populations is interesting. \bfref{Furthermore, measuring a dispersion measure (DM) toward an extragalactic pulsar immediately indicates the total amount of electron matter along the line of sight. Once an ensemble of such pulsars is found in a certain galaxy, one may be able to disentangle the host and interstellar components, and measure the free electron density of the intergalactic matter, a quantity highly interesting for fast radio burst (FRB) studies~\citep{Spitler-2016}\bffinal{, for example.}} Our paper is organised as follows: Section~\ref{sec2} outlines the general characteristics of the observational pointings; Section~\ref{sec3} describes the actual search procedure. The search results and sensitivity estimates are given in Section~\ref{sec4}, and we present our conclusions in Section~\ref{sec5}. \section{Observations}\label{sec2} The ability of LOFAR to produce multiple highly sensitive tied-array beams (TABs) provides an opportunity to fully cover a targeted nearby galaxy and therefore carry out a deep search over its entire area.
Our first targeted observation was directed at the M33 galaxy, while the second captured both M81 and M82. We used the LOFAR HBA coherent core, coherently combining the central \bffinal{20 and 23} LOFAR core high-band antenna (HBA) stations \bffinal{for the first and second observations, respectively}. These operate on the same central distributed clock and are thus the largest possible portion of the telescope that can be coherently combined in real time~\citep{Stappers-2011}. Figure~\ref{Beams} shows the tiling that was used, consisting of multiple rings of TABs, with radius successively increasing by 0.075 degrees. We formed five such rings for M33 during the first observation and three rings for M81 during the second observation. This resulted in 90 beams for the M33 galaxy and 37 beams for the M81 galaxy. Galaxy M82 was covered using three beams. Each observation was four hours long. The LOFAR survey pointings, as well as the integration time and number of beams, are listed in Table~\ref{Obs}. \begin{figure}[t!] \centering\subfigure[M33 galaxy]{ \includegraphics[width=0.7\linewidth]{M33.pdf}\label{m33} } \vfill \centering\subfigure[M81 and M82 galaxies]{ \includegraphics[width=0.7\linewidth]{M81.pdf}\label{m81m82} } \captionsetup{justification=centering} \caption{Our targeted nearby galaxies:~\subref{m33} M33 -- beam 70 is missing due to one of the LOFAR processing-cluster nodes failing;~\subref{m81m82} M81/M82. The beam shape is shown in the bottom left corner of each panel.} \label{Beams} \end{figure} Both observations were made with 68.5\,\mbox{MHz} of bandwidth around 146\,MHz. That is the maximum bandwidth available when covering the entire galaxy with tied-array beams, thus achieving the highest possible sensitivity. Further observational characteristics are listed in Table~\ref{Setup}. The recorded 32-bit data were reduced to 8 bit and cleaned of RFI with the standard pulsar pipeline~\citep{Alexov-2010}. \begin{table}[t!]
\centering \caption{LOFAR Observational setup.} \scalebox{0.7}{ \begin{tabular}{l r} \hline\hline \bfref{Parameter} & \bfref{Value} \\ \hline Observational dates: & March 10 and May 9, 2015 \\ Telescope: & LOFAR \\ Receiver: & HBA \\ Backend: & COBALT \\ Number of tied-array beams \\ $-$ First observation: & 90 \\ $-$ Second observation: & 40 \\ Polarisations/beam: & 2 \\ Central frequency: & 146\,MHz \\ Frequency bandwidth: & 68.5\,MHz \\ Frequency channels: & 11232 \\ Sample time: & 1310\,$\mu$s \\ Integration time: & 14400\,s \\ \hline \end{tabular} } \label{Setup} \end{table} \section{Data analysis}\label{sec3} Through the LOFAR long term archive~\citep[LTA,][]{Renting-2011}, data were transferred to the Dutch national supercomputer Cartesius\footnote{\url{https://userinfo.surfsara.nl/systems/cartesius}}. There, we performed periodicity and single-pulse searches using~\texttt{PRESTO}~\citep{Ransom-2001}\bfref{, over the course of about 350,000 core-hours of Cartesius compute time.} \subsection{Periodicity search} Our search for periodic pulses was carried out independently and in parallel for each LOFAR beam. \bfref{All three galaxies have high inclinations: M33~\citep[55$^{\circ}$,][]{Hodge-2011}, M81~\citep[32$^{\circ}$,][]{Immler-2001}, and M82~\citep[77$^{\circ}$,][]{Mayya-2005}, which was one of the selection and ranking criteria~\citep{Leeuwen-2010}. The DM contributions from the Milky Way in their directions are about 50.1, 40.9, and 41.3\,pc\,cm$^{-3}$, respectively. Data were dedispersed over a DM grid extending up to a high DM of 1000\,pc\,cm$^{-3}$, to account for potential giant molecular clouds situated along the line of sight, to allow for the errors on the free-electron models, and to remain sensitive to any low-frequency FRBs.} We excised RFI-affected frequencies, identified both per beam (narrow-band signals and signals at zero DM) and from interference common to all LOFAR data.
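Dedispersing up to DM $= 1000$\,pc\,cm$^{-3}$ implies substantial smearing within each frequency channel; a sketch using the standard approximation $t_\mathrm{chan}\approx 8.3\,\mu\mathrm{s}\times\mathrm{DM}\times\Delta\nu_\mathrm{MHz}/\nu_\mathrm{GHz}^{3}$ and the setup of Table~\ref{Setup} (illustrative, not code from our pipeline):

```python
def channel_smearing_us(dm, chan_bw_mhz, nu_ghz):
    """Dispersion smearing (in microseconds) within a single channel."""
    return 8.3 * dm * chan_bw_mhz / nu_ghz**3

chan_bw = 68.5 / 11232  # MHz per channel, ~6.1 kHz
for dm in (50, 1000):
    print(dm, round(channel_smearing_us(dm, chan_bw, 0.146) / 1000.0, 2), "ms")
# At DM ~ 50 the smearing (~0.8 ms) is below the 1.31 ms sample time,
# while at DM = 1000 it reaches ~16 ms, so downsampling in time at high
# trial DMs costs essentially no sensitivity.
```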
Our aim in this search was to detect the brightest sources in the target galaxies; these are most likely normal or young pulsars, not recycled millisecond pulsars (MSPs). We thus focused on the somewhat slower, non-recycled section of the search parameter space. To limit processing time we started the search with a sample time of 2.6 ms. Each TAB required $\approx 4\times10^4$ DM steps to optimally correct for the possible dispersion smearing. Higher trial DMs were combined with successive downsampling in time. We then Fourier transformed the dedispersed time series and applied a periodicity search without attempting any corrections for binary acceleration. Finally, we folded the data on up to 200 of the best candidates per beam, for subsequent manual inspection. We first tested our search pipeline on two known pulsars that are characterised in Table~\ref{test}. Both these pulsars were found successfully. However, despite many promising candidates with high DM (around 600\,pc\,cm$^{-3}$) and small spin periods (around 100 ms), the main search revealed no persuasive new extragalactic pulsars. \begin{table} \centering \caption{Test pulsars found with LOFAR as part of testing the search pipeline. \bfref{Tabulated are: integration time, number of used subbands, pulse period, DM, pulse duty cycle at 50\% of peak, expected flux density at 150\,MHz from the ATNF catalogue, and measured peak S/N.}} \scalebox{0.68}{ \begin{tabular}{c c c c c c c c} \hline\hline Pulsar & $t_\textrm{int}$ & subbands & $P$ & DM & $w_{50} / P$ & $S_\mathrm{150}$ & $S/N_\textrm{peak}$ \\ & (s) & & (ms) & (pc\,cm$^{-3}$) & & (Jy) & \\ \hline PSR J0332+5434 & 260 & 128 & 714.501 & 26.77 & 9.2$\times10^{-3}$ & 8.8 & 99.21 \\ PSR J0953+0755 & 600 & 288 & 253.063 & 2.969 & 3.7$\times10^{-2}$ & 2.3 & 63.92 \\ \hline \end{tabular} } \label{test} \end{table} \subsection{Single-pulse search} To find sporadic sources, we started from the dedispersed time series that we obtained earlier during the periodicity search step.
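Single pulses are commonly extracted from such dedispersed time series by matched filtering with boxcars of increasing width, which recovers S/N for pulses broader than one sample; a minimal sketch of the idea (illustrative, not the \texttt{PRESTO} implementation):

```python
import numpy as np

def boxcar_snr(series, widths):
    """Best boxcar matched-filter response over the given widths (samples).
    Assumes roughly zero-mean input with unit-variance noise."""
    best = 0.0
    for w in widths:
        kernel = np.ones(w) / np.sqrt(w)  # keeps the noise variance at 1
        best = max(best, np.convolve(series, kernel, mode="same").max())
    return best

rng = np.random.default_rng(0)
ts = rng.standard_normal(10_000)
ts[5000:5010] += 2.0  # inject a weak, 10-sample-wide pulse
print(round(boxcar_snr(ts, [1, 2, 4, 8, 16]), 1))  # well above the noise floor
```

A matched-width boxcar boosts the peak response of a width-$w$ pulse by roughly $\sqrt{w}$ relative to a single sample.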
We then searched for non-periodic pulses via matched filtering with boxcar functions~\citep{Ransom-2001} and grouped all found pulses from all LOFAR TABs into a single database to simplify their subsequent characterisation~\citep[Michilli et al., 2016, in prep., see also ][for an analogous approach]{Deneva-2016}. Both test pulsars were also successfully found with the single-pulse search routine and identified in the dedispersed raw data. Next, we ranked all the pulses from the database by their signal-to-noise ratio ($S/N_\mathrm{min}\geq$ 8), their extragalactic origin (DM $\geq$ DM$_z$, where DM$_z$ = 25 pc cm$^{-3}$ is the azimuthal Galactic DM contribution derived from the NE2001 model\footnote{\url{http://www.nrl.navy.mil/rsd/RORF/ne2001/}}~\citep{Cordes-2002} scaled to LOFAR frequencies) as well as their duration ($W\leq50$ ms). Similar criteria were used by~\citet{Petroff-2015} in their search for extragalactic bursts. We looked both for repeated pulses at similar DM and for solitary bursts. The main criterion for a pulse being genuine was a smooth decrease of S/N on both sides of the best DM (see Fig.~\ref{Pulses}). The most promising candidates, both single and repeated, were then inspected directly in frequency versus time `waterfall' plots made from the raw data (see Fig.~\ref{Waterfall}). Most candidates turned out to be narrow-band RFI that remained after TAB RFI masking. In the end, no reliable astrophysical single pulses were found. \begin{figure}[t!]
\centering\subfigure[pulse $\#$1]{ \includegraphics[width=0.48\linewidth]{pulse_shape1.pdf}\label{m33_pulse1} }\vfill \centering\subfigure[pulse $\#$2]{ \includegraphics[width=0.48\linewidth]{pulse_shape2.pdf}\label{m81m82_pulse1} }\hfill \centering\subfigure[pulse $\#$3]{ \includegraphics[width=0.48\linewidth]{pulse_shape3.pdf}\label{m81m82_pulse2} }\hfill \captionsetup{justification=centering} \caption{Typical DM versus time behaviour for aperiodic candidates -- the S/N decreases on either side of the peak due to the incorrect DM value:~\subref{m33_pulse1} for the M33 galaxy;~\subref{m81m82_pulse1} and~\subref{m81m82_pulse2} for the M81 and M82 galaxies \bfref{(the lack of characteristic points at higher DMs is due to the downsampling)}.} \label{Pulses} \end{figure} \begin{figure}[t!] \centering\subfigure[DM = 27.07\,pc\,cm$^{-3}$]{ \includegraphics[width=0.75\linewidth]{J0332+5434_1.pdf}\label{DM27.07} }\vfill \centering\subfigure[DM = 26.77\,pc\,cm$^{-3}$]{ \includegraphics[width=0.75\linewidth]{J0332+5434_2.pdf}\label{DM26.77} }\vfill \captionsetup{justification=centering} \caption{Example of the `waterfall' frequency versus time representation for the test pulsar PSR J0332+5434:~\subref{DM27.07} with an incorrect DM;~\subref{DM26.77} with the correct DM. \bfref{The red triangle denotes the expected position of the dispersion-corrected pulse signal in the time series.}} \label{Waterfall} \end{figure} \section{Search results and implications}\label{sec4} The search did not result in any new pulsars or transients from M33, M81 or M82. \bfref{However, not all pulsars are beamed our way. The beaming fraction~\citep{Smith-1969} quantifies the chance of receiving radio emission from the pulsars of a particular class. For young pulsars that might be born in other galaxies with typical periods of 100\,ms, only about 50\% are beamed toward Earth~\citep{Lorimer-2008}.
We provide sensitivity estimates only for those extragalactic pulsars that can be potentially detected.} Below we therefore derive, for each, the limit on its pseudo-luminosity. The minimum detectable flux for a periodicity search~\citep[\bffinal{ps,}][]{Bhattacharya-1998} is \begin{equation} S_{\mbox{\footnotesize{min}, ps}} = \beta\frac{T_\mathrm{sys}}{G\sqrt{n_\mathrm{p}\,\Delta\nu\,t_\mathrm{int}}} \times S/N_\mathrm{min} \times \sqrt{\frac{W}{P-W}}. \end{equation} Here $\beta$ is a digitisation factor (normally $\beta\gtrsim1$), $T_\mathrm{sys}$ is the system temperature ($\mbox{K}$), $G$ is the telescope gain ($\mbox{K/Jy}$), $n_\mathrm{p}$ is the number of summed polarisations, $\Delta\nu$ is the bandwidth ($\mbox{Hz}$), and $t_\mathrm{int}$ is the observation dwell time ($\mbox{s}$). $S/N_\mathrm{min}$ corresponds to a pulse signal-to-noise threshold \bffinal{($\bmfinal{S/N_\mathrm{min}=10}$ in our case)}, and the final term quantifies how the sensitivity increases when the pulse flux is concentrated in a short pulse width $W$ every period $P$. \bffinal{The system temperature and the LOFAR Core gain were estimated using the Hamaker-Carozzi beam model with 50\% systematic uncertainties~\citep{Kondratiev-2015}: $\bmfinal{T_\mathrm{sys} = (7.4\pm1.1)\times10^2}$\,K, $\bmfinal{G = A_\mathrm{eff} / (2 \cdot k) \simeq4\pm2}$\,K/Jy. Here $\bmfinal{A_\mathrm{eff} = a_\mathrm{eff}\times N_\textrm{stations}^{\gamma} \times (1 - f_\textrm{bad})}$ is the total effective area, $\bmfinal{a_\mathrm{eff}}$ is the effective area of a 48-tile HBA station, $N_\textrm{stations}$ is the number of HBA stations (20 for M33 and 23 for M81/M82), $\bmfinal{\gamma=0.85}$ is the coherence factor, $\bmfinal{f_\textrm{bad}\simeq5\%}$ is the fraction of broken tiles, and $\bmfinal{k}$ is the Boltzmann constant.} We obtained a noise rms flux density of \bffinal{$\bmfinal{0.15\pm0.08}$\,mJy for M33 and $\bmfinal{0.13\pm0.07}$\,mJy for M81/M82}.
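The quoted rms values follow directly from the radiometer factor in the equation above; a sketch using the central values of the system parameters (ignoring the 50\% systematic uncertainties):

```python
import math

def radiometer_rms_mjy(t_sys_k, gain_k_jy, n_pol, bw_hz, t_int_s):
    """Radiometer-equation rms flux density, in mJy."""
    return 1e3 * t_sys_k / (gain_k_jy * math.sqrt(n_pol * bw_hz * t_int_s))

# Central values: T_sys ~ 740 K, G ~ 4 K/Jy (23 stations), n_p = 2,
# 68.5 MHz bandwidth, 4 h integration:
rms_m81 = radiometer_rms_mjy(740.0, 4.0, 2, 68.5e6, 14400.0)
# For M33 the gain scales with station count as (20/23)^0.85:
rms_m33 = radiometer_rms_mjy(740.0, 4.0 * (20 / 23) ** 0.85, 2, 68.5e6, 14400.0)
print(round(rms_m81, 2), round(rms_m33, 2))  # 0.13 0.15
```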
We next determine the expected observed pulse widths $W$ as a function of \bffinal{pulse} period $P$ and dispersion measure DM. We derive the intrinsic width $w_\mathrm{av}$ by applying an ATNF-averaged pulse duty cycle $\bmfinal{\langle w_\mathrm{50}/P \rangle \simeq 0.05}$ to the period $P$. We then include instrumental broadening effects: the DM stepsize smearing $w_\mathrm{dm}$, sub-band stepsize smearing $w_\mathrm{sub}$, intra-channel smearing $w_\mathrm{chan}$\bfref{, and the LOFAR sampling-time limit $w_\mathrm{samp}$}. Scattering may also severely increase the observed pulse width, and thus hinder detections at low frequencies. According to the NE2001 model, the maximum Galactic scatter broadening limits $w_\mathrm{scatter}$ in the directions of the M33 and M81/M82 galaxies are 1.3\,ms and 0.9\,ms, respectively. We assume \bfref{two scattering screens (one in our Galaxy, and another in the host galaxy) -- by assuming a similar structure for both, we thus double the resulting scatter broadening times.} Furthermore, each LOFAR sensitivity curve depends on the DM searched -- we show curves for DM = 0, 10, 100, and 1000\,pc\,cm$^{-3}$. \begin{figure*}[t!] \subfigure[Our Galaxy]{ \includegraphics[width=0.5\linewidth]{FD_Galaxy_normal.pdf}\label{galaxy} } \subfigure[Targeted galaxies]{ \includegraphics[width=0.5\linewidth]{FD_Galaxy_scaled.pdf}\label{ngalaxy} } \caption{\bffinal{Diagrams of flux density at 150\,MHz $S_{150}$ versus period ${P}$. Included are all the ATNF catalogue pulsars that have reported periods and flux densities. The LOFAR sensitivity curves for DM = 0, 10, 100, 1000\,pc\,cm$^{-3}$ are drawn, with the error contour for the most realistic DM for M33/M81/M82, at 1000\,pc\,cm$^{-3}$. The most-luminous Galactic pulsar, PSR J1305-6455, is denoted with a star. Panel \subref{galaxy} shows that our survey was sensitive enough to detect almost the entire Galactic pulsar population, had they been in the field of view.
In \subref{ngalaxy} we illustrate our sensitivity to these same pulsars at the actual distance of M82/M81 (left y-axis) and M33 (right y-axis).}} \label{gal_flux} \end{figure*} For a given DM, the broadening width is thus only dependent on the pulse period: \begin{equation*} W(P) = \sqrt{w_\mathrm{av}^2(P) + w_\mathrm{dm}^2 + w_\mathrm{sub}^2 + w_\mathrm{chan}^2 + w_\mathrm{samp}^2 + \left(2\,w_\mathrm{scatter}\right)^2} \end{equation*} which allows for the derivation of the sensitivity \begin{equation} S_{\mbox{\footnotesize{min}, ps}} \approx \bmfinal{0.2\pm0.1} \times 10 \times \sqrt{\frac{W(P)}{P-W(P)}}\,\mbox{mJy}. \end{equation} In addition to the periodicity sensitivity derived above, the minimum detectable flux for a single-pulse search~\citep[\bffinal{sps,}][]{Cordes-2002} is \begin{equation} \begin{split} S_{\mbox{\footnotesize{min}, sps}} = \frac{T_\mathrm{sys}}{G\sqrt{n_\mathrm{p}\,\Delta\nu\,W}} \times S/N_\mathrm{min} = \\ = \frac{T_\mathrm{sys}}{G\sqrt{n_\mathrm{p}\,\Delta\nu\,t_\mathrm{int}}} \times S/N_\mathrm{min} \times \sqrt{\frac{t_\mathrm{int}}{W}}. \end{split} \end{equation} Given the noise rms flux density limit and an upper limit on the single-pulse width ($0.05\,\mbox{s}$), we estimate the sensitivity \begin{equation} S_{\mbox{\footnotesize{min}, sps}} \approx \bmfinal{0.2\pm0.1} \times 8 \times 0.5\cdot10^3\,\mbox{mJy} \approx \bmfinal{0.8\pm0.4}\,\mbox{Jy}. \end{equation} To compare our limits to the known Galactic population, we can estimate how \bfref{bright} that population would appear to LOFAR, if placed in the targeted galaxies. For this purpose we first extracted from the ATNF pulsar archive the flux densities of all known pulsars at 400 MHz, which we scaled down to the LOFAR 150 MHz central frequency using a spectral index $\alpha = -1.8\pm0.2$~\citep{Maron-2000}. For reference we also added the millisecond~\citep{Kondratiev-2015} and non-recycled~\citep{Bilous-2015} pulsars that were previously detected with LOFAR, to our sensitivity plots.
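The rescalings behind Fig.~\ref{gal_flux} are simple power laws in frequency and distance; a sketch reproducing the order of magnitude of the numbers quoted below (the far-edge distances and the 10 mJy catalogue flux are illustrative choices):

```python
def scale_flux(s_jy, d_from_kpc, d_to_kpc, nu_from_ghz=None, nu_to_ghz=None,
               alpha=-1.8):
    """Inverse-square distance scaling, optionally with a power-law spectrum."""
    s = s_jy * (d_from_kpc / d_to_kpc) ** 2
    if nu_from_ghz is not None and nu_to_ghz is not None:
        s *= (nu_to_ghz / nu_from_ghz) ** alpha
    return s

# Scaling a 10 mJy catalogue flux from 400 MHz to 150 MHz boosts it ~5.8x:
s_150 = scale_flux(0.010, 1.0, 1.0, nu_from_ghz=0.400, nu_to_ghz=0.150)

# PSR J1305-6455 (S_150 ~ 0.17 Jy at ~30 kpc) moved to the far edges of
# M33 (~0.94 Mpc) and M81 (~3.74 Mpc):
s_m33 = scale_flux(0.17, 30.0, 940.0)
s_m81 = scale_flux(0.17, 30.0, 3740.0)
print(round(1e3 * s_m33, 2), round(1e3 * s_m81, 3))  # 0.17 0.011
```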
We see that our LOFAR survey observation is sensitive enough to catch most actual Galactic pulses, \bffinal{and only} fails to find \bffinal{the fastest} millisecond pulsars at high DMs, that is, where \bfref{the average pulse width is larger than the initial sampling time ($\bmfinal{w_{50} \geq t_\mathrm{samp}}$) or} the broadening width is larger than the pulse period itself ($W \geq P$, Fig.~\ref{galaxy}). We note that many of these were nevertheless detected by~\citet{Kondratiev-2015}; this is explained by their use of, for example, coherent dedispersion and folding on known ephemerides, which greatly improve sensitivity over our blind-search filterbank pipeline. For nearby galaxies, the inverse-square scaling with distance quickly diminishes the pulsar flux density. To estimate extragalactic pulsar flux densities, we have placed all ATNF catalogue pulsars in the roughly equidistant pair M81/M82 and in M33 (Fig.~\ref{ngalaxy}). We compare our sensitivity curves with the M81/M82/M33 analogue of the very bright, distant ($d \simeq 30\,\mbox{kpc}$) Galactic pulsar J1305-6455 ($P \simeq 0.57\,\mbox{s}, S_\textrm{150, Galaxy} \simeq 0.17\,\mbox{Jy}$): we find $S_\textrm{150, M33} \simeq 0.17\,\mbox{mJy}$, $S_\textrm{150, M81/M82} \simeq 0.01\,\mbox{mJy}$. \bffinal{Our LOFAR sensitivity limit for this pulsar at} DM=1000\,pc\,cm$^{-3}$ is about \bffinal{0.53\,mJy} \bffinal{(Fig.~\ref{galaxy})}. We see that for M81/M82 the difference between the LOFAR sensitivity and the flux density of the brightest Galactic analogue pulsar is a little over \bffinal{an order of magnitude}. \bffinal{For M33, LOFAR is within a factor of a few of being able to detect the brightest Galactic analogue.
With only slightly better sensitivity, pulsar populations similar to that of the Galaxy could be resolved in M33.} \section{Conclusions}\label{sec5} We have conducted a deep LOFAR search for radio pulsars and other time-domain transients in the nearby galaxies M33, M81, and M82 with the highest currently possible sensitivity at low frequencies. Using 130 LOFAR beams in total, we have searched up to DMs of 1000\,pc\,cm$^{-3}$, starting with a 2.6\,ms sampling time and four hours of integration time. The detection of the two known test pulsars validated our search pipeline. We did not, however, detect any convincing new sources. We therefore established upper limits to the tip of the luminosity distributions of our target galaxies. \bffinal{Compared to the Milky Way population, we conclude that no pulsars brighter by a factor of a few (in M33) or by an order of magnitude (in M81 or M82) are shining our way.} \begin{acknowledgements} KM would like to thank E. Orru for scheduling the LOFAR observations, D. Michilli for providing single-pulse analysis tools, \bffinal{and F. Crawford for a valuable check of our calculations}. \bfref{We also thank the referee for useful comments that clarified the paper.} LOFAR, the low frequency array designed and constructed by ASTRON, has facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the International LOFAR Telescope (ILT) foundation under a joint scientific policy. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 617199, and from the Netherlands Research School for Astronomy (NOVA4-ARTS). This work was carried out on the Dutch national e-infrastructure with the support of SURF Cooperative. Computing time was provided by NWO Physical Sciences. \end{acknowledgements}
\section{Introduction} \label{sec:intro} Within the Lambda cold dark matter ($\Lambda$CDM) cosmological model, galaxies form from gas that collapses in the center of gravitationally bound dark matter (DM) halos~\citep{White1978}. Galaxies do not form, however, in every halo. In the absence of external heating sources, galaxy formation is restricted to the so-called atomic-cooling (AC) halos, i.e., halos that shock heat the gas to a temperature $T \gtrsim 10^{4} \rm \ K$, above which radiative cooling becomes efficient. In the presence of external energetic sources that suppress atomic cooling and heat the gas~\cite[e.g.][]{Efstathiou1992}, such as the ultraviolet background (UVB) radiation field that keeps the universe (re)ionized, galaxy formation proceeds in halos that exceed a time-dependent critical mass above which the gas becomes self-gravitating. The value of this critical mass has been estimated in the past using either idealized Jeans arguments or numerical simulations~\cite[e.g.][]{Rees1986, Thoul1996, Quinn1996},\footnote{Note that the critical mass that results from simulations is usually expressed as a Jeans mass, or equivalently as a fixed halo circular velocity cutoff below which galaxies do not form.} leading to the understanding that for galaxies to form after cosmic reionization (CR), their host halos must exceed the AC limit by a factor of a few. The AC limit is, therefore, useful to determine which halos host galaxies at high redshift, before the universe underwent CR~\citep[e.g.][]{Oh2002, Xu2013, Wise2014, Xu2016}, whereas the critical mass describes the onset of galaxy formation after CR. The onset of galaxy formation is thus deeply linked to the growth of DM halos, as only those halos that exceed the critical mass can host a luminous galaxy in their center.
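The AC threshold quoted above can be recast as a halo circular velocity scale through the standard virial-temperature relation (a sketch; the mean molecular weight $\mu \approx 0.6$, appropriate for ionized primordial gas, is an illustrative assumption made here):

```latex
% Standard virial temperature -- circular velocity relation.
% mu ~ 0.6 (ionized primordial gas) is an illustrative assumption.
\begin{equation*}
T_\mathrm{vir} = \frac{\mu m_\mathrm{p} V_\mathrm{c}^{2}}{2 k_\mathrm{B}}
\quad\Longrightarrow\quad
T_\mathrm{vir} \gtrsim 10^{4}\,\mathrm{K}
\;\;\Leftrightarrow\;\;
V_\mathrm{c} \gtrsim 17\,\mathrm{km\,s^{-1}}.
\end{equation*}
```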
These ideas have been shown to agree qualitatively with results of hydrodynamical cosmological simulations~\cite[e.g.][and references therein]{Okamoto2009, Sawala2016, Benitez-Llambay2015, Benitez-Llambay2017, Fitts2017, Maccio2017}, and are of fundamental importance to explain the scarcity of observed luminous satellites compared to results of collisionless simulations~\citep{Klypin1999, Bullock2000}. However, halos do not have uniform density, nor is the intergalactic medium isothermal, so the successful characterization of the critical mass necessarily requires more advanced modeling. \citet{Benitez-Llambay2020} (hereafter BLF) have recently derived the critical mass for the onset of galaxy formation considering the nonlinear collapse of DM halos and the distinctive temperature-density relation of the intergalactic medium, removing the freedom inherent to Jeans arguments. Their critical mass (hereafter BLF mass), which differs from that arising from idealized Jeans modeling, is remarkably accurate compared to results of nonlinear hydrodynamical cosmological simulations. The existence of a critical mass for galaxy formation, coupled with the growth history of DM halos, has some interesting consequences. First, galaxies must form stochastically in halos of present-day mass under the critical mass~(\citetalias{Benitez-Llambay2020}). Second, galaxies residing in more massive halos today must form, on average, earlier than those inhabiting less massive halos, giving rise to a ``downsizing'' effect in a $\Lambda$CDM universe~\cite[e.g.][]{Neistein2006}. Finally, there must be a population of galaxies residing in DM halos with a present-day mass comparable to the critical mass that have undergone their formation only recently. Robustly quantifying these expectations is, however, not trivial, as it requires precise knowledge of the assembly history of DM halos and the critical mass for galaxy formation. In this Letter, we address the last issue.
In particular, we apply the recent~\citetalias{Benitez-Llambay2020} model (briefly described in Sec.~\ref{Sec:model}) together with a high-resolution hydrodynamical cosmological simulation to demonstrate that the existence of a population of late-forming dwarf galaxies that form after redshift $z=3$ is a robust cosmological outcome of the $\Lambda$CDM model, and highly insensitive to the simulation details, provided stars form in gas that self-gravitates. This prediction stems from the existence of a nontrivial time-dependent critical DM halo mass below which galaxy formation cannot take place and the stochastic growth that characterizes DM halos in $\Lambda$CDM, which generally depends on the cosmological parameters~\cite[e.g.][and references therein]{Lacey1993, VanDenBosch2002, Correa2015}. We quantify the abundance of late-forming dwarfs expected in a $\Lambda$CDM universe, and possible observational counterparts, in Sec.~\ref{Sec:Discussion}. \section{The Model and Simulation} \subsection{The BLF model} \label{Sec:model} The~\citetalias{Benitez-Llambay2020} model establishes the value of the critical mass for the onset of galaxy formation as a function of time. In this model, galaxy formation takes place in AC halos before CR, and in halos in which gas cannot remain in hydrostatic equilibrium afterward. To determine which halos undergo gravitational collapse to form a galaxy after CR, the model assumes that the gas that falls into dark halos is described by an effective equation of state, which is established by the photoheating background at low densities ($n_{\rm H} \lesssim 10^{-4.5} \rm \ cm^{-3}$), and by the interplay between photoheating and cooling at high densities. 
This nontrivial model thus avoids the common assumption that galaxy formation takes place in halos of given (fixed) virial\footnote{We define virial quantities as those measured at the virial radius, $r_{200}$, defined as the radius of a sphere whose mean enclosed density is 200 times the critical density of the universe.} temperature, a condition that arises from idealized Jeans arguments.\footnote{The condition that galaxies form in halos for which the sound-crossing time exceeds the free-fall time leads to the well-known condition for galaxy formation, $V_{\rm c} \gg c_{s}$, in which $c_{s}$ is the sound speed of an ``isothermal'' intergalactic medium, and $V_{\rm c}$ is the halo circular velocity at $r_{\rm 200}$.} We refer the reader to the original \citetalias{Benitez-Llambay2020} paper for further details and a derivation of the critical mass. \begin{figure} \centering \includegraphics{Figures/Fig1.pdf} \caption{Stellar vs. halo mass relation for our sample of simulated ``central'' galaxies. Dotted-dashed and dashed lines show abundance-matching expectations from~\cite{Guo2010} and~\cite{Moster2013}, respectively. Galaxies are colored according to their current stellar mass ($y$-axis).} \label{Fig:Mstr_M200} \end{figure} \subsection{The Simulation} We use a high-resolution hydrodynamical cosmological simulation carried out with the {\tt P-Gadget 3} code~\citep[last described in][]{Springel2005} along with the {\tt EAGLE} model of galaxy formation~\citep{Schaye2015}. The simulation is the same as that introduced in~\cite{Benitez-Llambay2020} and we list here only the physical prescriptions relevant for our work. We refer the reader to the original papers for further details. The simulation evolves a periodic cubic volume of side length $20 \rm \ Mpc$, filled with $2\times 1024^3$ DM and gas particles, so that the DM and gas particle masses are $m_{\rm dm} = 2.9 \times 10^{5} \ M_{\odot}$ and $m_{\rm gas} = 5.4 \times 10^{4} \ M_{\odot}$, respectively.
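For reference, the virial definitions adopted in the footnote above can be written explicitly as:

```latex
% Explicit form of the virial definitions used throughout:
% r_200 encloses a mean density of 200 times the critical density.
\begin{equation*}
M_{200} = \frac{4}{3}\pi r_{200}^{3} \times 200\,\rho_\mathrm{crit}(z),
\qquad
V_\mathrm{c}(r_{200}) = \sqrt{\frac{G M_{200}}{r_{200}}}.
\end{equation*}
```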
The adopted Plummer-equivalent gravitational softening is $\epsilon=195 \rm \ pc$. Gas particles are turned into collisionless star particles once they exceed a density threshold of $n_{\rm thr} = 1 \rm \ cm^{-3}$, a sufficiently high value that ensures that the gas within DM halos becomes self-gravitating before turning into stars. Unlike the original {\tt EAGLE} simulations, our density threshold for star formation does not depend on metallicity. Our simulation also includes radiative cooling and heating, as tabulated by~\citet{Wiersma2009}, which in turn includes the~\citet{Haardt2001} UVB radiation field. CR is modeled by turning on the UVB at redshift $z_{\rm re}=11.5$ and, to ensure that the gas is quickly heated to $\sim 10^{4} \rm \ K$ at $z_{\rm re}$, an energy of $2 \rm \ eV$ per proton mass is instantaneously injected into every gas particle at that time. DM halos are identified in the simulation using {\tt HBT+}~\citep{Han2018}, which provides a catalog of ``central'' and ``satellite'' DM halos, identified using a friends-of-friends algorithm with percolation length $b=0.2$ in units of the mean interparticle separation, and assigning ``bound'' particles to each halo based on their binding energies. \subsection{Sample Selection} We select simulated galaxies as ``central'' DM halos that contain more than one stellar particle within their galactic radius, $r_{\rm gal} = 0.2 \times r_{200}$, which yields a lower galaxy stellar mass limit of $\sim 5.4 \times 10^{4} \ M_{\odot}$. As shown in Fig.~\ref{Fig:Mstr_M200}, this criterion imposes a minimum present-day halo mass $M_{200} \sim 10^{9} \ M_{\odot}$. We also restrict our sample to DM halos with virial mass $M_{200} \lesssim 10^{11} M_{\odot}$, as objects above this limit are not of interest for our study because all these systems exceed the AC limit prior to CR~\citep{Benitez-Llambay2020}, thus making it impossible for these halos to host galaxies that form late.
In the selected mass range, the stellar masses of our simulated dwarfs are bounded from above and below by the abundance-matching (AM) constraints from~\citet{Moster2013} and~\citet{Guo2010}, respectively (see Fig.~\ref{Fig:Mstr_M200}). At large halo masses, the masses of the central galaxies are underestimated in our simulation compared to AM expectations, a limitation of the original {\tt EAGLE} model that does not preclude the analysis that follows. \section{Results} \subsection{The Simulated Tail of Late-forming Dwarfs} In order to look for a population of late-forming galaxies, we shall consider the galaxy formation time, $t_{\rm f}$, defined as the time at which a galaxy first formed a star particle in the simulation. In practice, $t_{\rm f}$ corresponds to the formation time of the oldest star particle found within $r_{\rm gal}$ at $z=0$. Fig.~\ref{Fig:formation_time} shows $t_{\rm f}$ as a function of present-day virial mass for our galaxy sample. Galaxies inhabiting more massive halos at redshift $z=0$ form earlier than those hosted by less massive counterparts, although the median $t_{\rm f}$ (shown by the thin red solid line) is weakly dependent on present-day halo mass. More interesting is the fact that the scatter in $t_{\rm f}$ increases significantly at low masses, peaking at about the present-day value of the~\citetalias{Benitez-Llambay2020} mass (vertical dashed line). The large scatter at low masses originates from a tail of late-forming dwarfs that we arbitrarily define as galaxies with $t_{\rm f} \gtrsim 2.2 \rm \ Gyr$ (or $z \lesssim 3$), and that constitutes less than $\sim 8 \%$ of the population of dwarfs with stellar masses $M_{\rm gal} \gtrsim 5.4 \times 10^{5} \ M_{\odot}$ in the halo mass range $3 \times 10^{9} \lesssim M_{200} / M_{\odot} \lesssim 10^{10}$.
But what determines, in general, the $t_{\rm f}$ versus $M_{200}$ relation shown in Fig.~\ref{Fig:formation_time}, and more particularly, what is the origin of the tail of late-forming dwarfs identified in our simulation? In light of the discussion of Sec.~\ref{sec:intro}, one may speculate that the main trend of the $t_{\rm f}$ versus $M_{200}$ relation displayed by our galaxy sample is related to the interplay between the mean assembly history of their host halos and the evolution of the critical mass for galaxy formation. We demonstrate that this is indeed the case in Fig.~\ref{Fig:critical_mass}, which shows $t_{\rm f}$ as a function of halo mass, but as measured at the galaxy formation time rather than at $z=0$, $M_{200,t_{\rm f}}$. We compare the values measured in the simulation to the time-dependent ~\citetalias{Benitez-Llambay2020} mass (red dashed line). This critical mass displays a sharp transition from $M_{\rm crit} \sim 3 \times 10^{7} \ M_{\odot}$ to $M_{\rm crit} \sim 10^{8} \ M_{\odot}$ at $z=z_{\rm re}$, and results from the fact that galaxy formation is largely limited by atomic hydrogen cooling before CR, and by the ability of the gas to undergo gravitational collapse afterward. Fig.~\ref{Fig:critical_mass} makes it clear that galaxy formation occurs predominantly in halos whose virial mass at $t_{\rm f}$ is comparable to the critical mass. Interestingly, the~\citetalias{Benitez-Llambay2020} mass traces the simulated $t_{\rm f}$ versus $M_{200,t_{\rm f}}$ relation remarkably well, even though neither the~\citetalias{Benitez-Llambay2020} model nor the {\tt EAGLE} model were tuned to do so. \begin{figure} \centering \includegraphics{Figures/Fig2.pdf} \caption{Galaxy formation time, $t_{\rm f}$, as a function of the present-day host halo mass (symbols are colored as in Fig.~\ref{Fig:Mstr_M200}). 
The thin red solid line indicates the running median of the distribution; the thick magenta dashed line shows the time when a mean EPS halo of mass $M_{200}$ first exceeds the~\citetalias{Benitez-Llambay2020} mass. Orange lines are similar to the thick dashed line, but they show the median and the 10th and the 90th percentiles that result from comparing individual (rather than average) EPS halo growth histories to the~\citetalias{Benitez-Llambay2020} mass. See the text for further discussion.} \label{Fig:formation_time} \end{figure} \begin{figure} \centering \includegraphics{Figures/Fig3.pdf} \caption{Galaxy formation time, $t_{\rm f}$, as a function of the halo virial mass at $t_{\rm f}$. Galaxies are colored as in Fig.~\ref{Fig:Mstr_M200}. The red dashed line shows the time-dependent critical mass for galaxy formation that results from the~\citetalias{Benitez-Llambay2020} model (see Sec.~\ref{Sec:model}). The critical mass exhibits a sudden increase from $M_{200} \sim 10^{7.5} \ M_{\odot}$ to $M_{200} \sim 10^{8.0} \ M_{\odot}$ at CR. Dashed lines show the $\Lambda$CDM mean mass growth histories calculated using the EPS formalism, for halos of present-day virial mass $10^{9}, 10^{10}$, and $10^{11} \ M_{\odot}$.} \label{Fig:critical_mass} \end{figure} \subsection{Origin of the Overall Mass Dependence of Galaxy Formation Time} The black dashed lines in Fig.~\ref{Fig:critical_mass} show three mean $\Lambda$CDM mass growth histories derived using the Extended Press-Schechter (EPS) formalism~\cite[][]{Bond1991}, for halos of present-day mass, $10^{11}$, $10^{10}$ and $10^{9} \ M_{\odot}$. These curves help to clarify the overall mass dependence of the galaxy formation time observed in Fig.~\ref{Fig:formation_time}. Consider, for example, the upper dashed curve, which corresponds to the mean mass growth of a halo with present-day mass, $M_{200}=10^{9} \ M_{\odot}$. 
This demonstrates that the bulk population of $10^{9} M_{\odot}$ halos cannot host a galaxy in their center, as the mean mass growth history for these halos is under the~\citetalias{Benitez-Llambay2020} mass at all times. On the other hand, massive halos today, those that greatly exceed the present-day critical mass for galaxy formation, have exceeded the~\citetalias{Benitez-Llambay2020} mass since before CR. Consequently, these halos host galaxies that have formed very early on, explaining to some extent the observed ``downsizing'', as argued in Sec.~\ref{sec:intro}. The typical formation time for those intermediate-mass halos that eventually exceed the critical mass depends monotonically on halo mass. These arguments indicate that it is possible, in principle, to understand the overall mass-dependent galaxy formation time. To this end, we must compare the mean mass growth of $\Lambda$CDM halos to the time-dependent critical mass for galaxy formation, and define $t_{\rm f}$ as the time when the mean halo mass first exceeds the~\citetalias{Benitez-Llambay2020} mass. The thick magenta dashed line in Fig.~\ref{Fig:formation_time} shows the result of this exercise. The agreement between this curve and the median $t_{\rm f}$ that results from the simulation (thin solid line), particularly at high masses, demonstrates that our interpretation is correct. However, this simple exercise results in a divergent $t_{\rm f}$ toward the present-day value of the~\citetalias{Benitez-Llambay2020} mass (vertical dashed line), as no galaxy formation can take place below this mass scale, as anticipated. This reasoning would imply that all galaxies inhabiting halos with present-day mass $M_{200} \sim M_{\rm crit,0}$ are young, contrary to the simulation results, in which only a low fraction of the systems form at late times. 
It is thus evident that although the mean mass growth of $\Lambda$CDM halos is useful to understand the overall trend observed in Fig.~\ref{Fig:formation_time}, it is not sufficient to account for the predominantly old population of galaxies inhabiting halos with present-day mass $M_{200} \lesssim M_{\rm crit,0}$. The existence of these galaxies in our simulation is a direct consequence of the fact that galaxy formation becomes an increasingly rare event at low halo masses, so the proper modeling of the galaxy formation time at these masses must necessarily take into account the intrinsic scatter in the growth of $\Lambda$CDM halos. \subsection{Origin of the Scatter in the Galaxy Formation Time} To account for the scatter in the mass growth of halos, we now consider individual (rather than average) EPS growth histories in bins of present-day halo mass in the range, $10^{9} \lesssim M_{200} / M_{\odot} \lesssim 10^{11}$. We sample the scatter in the assembly of halos by constructing 500 growth histories per mass bin. As in the previous section, we define $t_{\rm f}$ as the time at which the EPS mass first crosses the~\citetalias{Benitez-Llambay2020} mass. The thick orange lines of Fig.~\ref{Fig:formation_time} show the result of this calculation, in particular the median and the 10th and 90th percentiles of the distributions. By considering the intrinsic scatter in the growth history of halos we eliminate the divergence of $t_{\rm f}$ toward the present-day value of the critical mass (vertical line). This is because the great majority of these halos formed their galaxies at earlier times, albeit at much later times than massive halos. This improved model naturally reproduces the increasingly large scatter in $t_{\rm f}$ at low masses. 
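The individual EPS growth histories used above can be Monte Carlo sampled from the standard conditional first-crossing distribution of~\cite{Lacey1993} (a sketch in the usual notation, where $S \equiv \sigma^{2}(M)$ is the linear mass variance and $\delta(z)$ the linearly extrapolated collapse threshold):

```latex
% Conditional first-crossing (EPS) distribution from which
% individual halo growth histories are sampled;
% S = sigma^2(M), delta(z) = delta_c/D(z).
\begin{equation*}
f(S_1,\delta_1 \,|\, S_0,\delta_0)\,\mathrm{d}S_1 =
\frac{1}{\sqrt{2\pi}}\,
\frac{\delta_1-\delta_0}{\left(S_1-S_0\right)^{3/2}}\,
\exp\left[-\frac{\left(\delta_1-\delta_0\right)^{2}}{2\left(S_1-S_0\right)}\right]
\mathrm{d}S_1,
\end{equation*}
```

with $S_1 > S_0$ (i.e., progenitor mass $M_1 < M_0$) and $\delta_1 > \delta_0$ (i.e., $z_1 > z_0$).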
The peak in the 90th percentile around the critical mass is in remarkable agreement with the results from the simulation, so we can safely conclude that the mass-dependent galaxy formation time is simply understood from the interplay between the critical mass for galaxy formation and the time when the halos first exceeded this mass. Within this picture, an unavoidable conclusion is that in $\Lambda$CDM there must be a low fraction of halos that have exceeded this critical mass for galaxy formation particularly late ($z<3$), thus triggering the first episode of star formation at late times. Late-forming dwarfs can thus be regarded as robust cosmological outcomes, their formation time being largely insensitive to the details of the modeling included in numerical simulations.\footnote{This is true provided star formation proceeds in gas that self-gravitates, a requirement that arises from the definition of the critical mass in the~\citetalias{Benitez-Llambay2020} model.} Finally, DM halos less massive than the critical mass today host predominantly old galaxies, similar to more massive halos. As opposed to late-forming dwarfs, these galaxies populate the low fraction of halos that collapsed unusually early to become more massive than the critical mass at earlier times~\citep[see][for a detailed discussion on this]{Benitez-Llambay2020}. Some of these halos would host the observed population of ancient ultrafaint dwarfs~\cite[e.g.][]{Simon2019}, which we do not resolve in our simulation. \begin{figure} \centering \includegraphics{Figures/Fig4.pdf} \caption{Frequency of late-forming galaxies, i.e., those that formed at redshift $z\lesssim 3$, as a function of halo mass. The black histogram shows the result of our simulation. The other lines display the expected fraction of late-forming dwarfs for different values of the redshift of reionization, $z_{\rm re}$, as indicated in the legend.
If reionization occurs earlier than $z_{\rm re}=8$, our predictions become insensitive to the particular value of $z_{\rm re}$.} \label{Fig:impact_reionization} \end{figure} \section{Discussion and Conclusions} \label{Sec:Discussion} Our analysis demonstrates that the existence of a small population of late-forming galaxies, here defined as dwarfs that started forming stars at $z<3$, is a robust outcome of the $\Lambda$CDM model. This is because their formation depends on the ability of the gas to undergo gravitational collapse, which does not depend on the particularities of the galaxy formation model assumed in simulations. Indeed, Fig.~\ref{Fig:critical_mass} demonstrates that the galaxy formation time is a fair measure of the time when the $\Lambda$CDM halos first exceeded the critical mass above which gas cannot remain in hydrostatic equilibrium. Moreover, we showed in Fig.~\ref{Fig:formation_time} that the~\citetalias{Benitez-Llambay2020} model, together with the EPS formalism, enables us to understand not only the dependence of the galaxy formation time on halo mass but also its scatter. Our results are also robust to the assumed redshift of reionization. Fig.~\ref{Fig:impact_reionization} shows the frequency of late-forming dwarfs as measured in our simulation and as obtained from comparing EPS mass growth histories to the~\citetalias{Benitez-Llambay2020} mass, assuming that the universe undergoes reionization at $z_{\rm re}=6$, $8$, $10$, and $12$. Clearly, the frequency of late-forming dwarfs is largely insensitive to the exact value of $z_{\rm re}$ provided $z_{\rm re} \gtrsim 8$, a lower limit consistent with recent Planck results~\citep{Planck2020}. Finally, we note that our results rely on the idea that galaxy formation largely proceeds in AC halos prior to cosmic reionization~\citep[e.g.,][and references therein]{Bromm2011}.
Numerical simulations that include physical ingredients missing from our simulation that are important for addressing this issue --such as molecular hydrogen cooling and radiative feedback effects-- support the idea that the first galaxies indeed form predominantly in atomic cooling halos~\cite[e.g.][and references therein]{Oh2002, Greif2008, Wise2014}. In particular, the work by~\cite{Wise2014} shows that the fraction of halos that host galaxies prior to cosmic reionization vanishes in a narrow range of halo mass below the AC limit. Our results thus rest on assumptions that appear plausible and warrant further scrutiny. \subsection{Late-forming Dwarfs in Other Simulations} Interestingly, late-forming dwarfs have already been spotted in cosmological hydrodynamical simulations. For example, using a high-resolution zoom-in simulation of the formation of the Local Group from the {\tt CLUES} project,~\cite{Benitez-Llambay2015} identified two dwarf galaxies with a significant delay in their formation, and they ascribed their origin to the impact of CR. Similarly, using a sample of 15 zoom-in cosmological simulations carried out with the {\tt FIRE} code,~\cite{Fitts2017} identified one dwarf galaxy that formed at $z<1$. They, too, ascribed the unusual delayed formation of this galaxy to the effect of CR. Although not discussed by the authors, a small fraction of the isolated low-mass dwarf galaxies analyzed by~\cite{Garrison-Kimmel2019} also formed particularly late compared to most dwarfs in their simulations. Finally, recent works have shown that dwarf galaxies that form early can undergo late episodes of star formation once their halos experience significant mergers, a phenomenon related to the fact that DM halos can exceed the critical mass more than once during their lifetime~\cite[e.g.][and references therein]{Benitez-Llambay2016, Rey2020}.
This suggests that the critical mass for galaxy formation not only establishes the onset of galaxy formation for starless halos but also determines the ability of luminous halos to collect sufficient gas to sustain star formation in their center, and may shape the star formation history of dwarfs.\footnote{The gas in DM halos less massive than the critical mass is stable against gravitational collapse and therefore unable to form stars~\citep{Benitez-Llambay2020}, provided the stellar content of the halo is gravitationally irrelevant.} All these results thus strongly support the idea that late-forming dwarfs are rare objects that arise naturally in numerical simulations, regardless of the adopted modeling. \subsection{Predicted Number Density} The number density of late-forming dwarfs depends on the abundance of halos that exceed the~\citetalias{Benitez-Llambay2020} mass at $z\lesssim 3$. In Fig.~\ref{Fig:number_density} we show the galaxy stellar mass function of the simulated galaxy population, and of galaxies that formed their first stars after redshift $z_{\rm f}$. As anticipated throughout our Letter, only a low fraction of galaxies make up the population of late-forming dwarfs (i.e., those with $z_{\rm f} < 3$). Moreover, the fraction of late-forming dwarfs decreases steadily with decreasing formation redshift and with increasing stellar mass. Finding a massive dwarf ($M_{\rm gal} > 10^{7} \ M_{\odot}$) undergoing its formation today is thus extremely rare; in fact, dwarfs undergoing their formation at the present day are expected to be faint, with masses comparable to the faint and ultrafaint dwarfs observed nearby ($M_{\rm gal} \lesssim 10^{6} \ M_{\odot}$).
Although this particular statement might depend on the specific galaxy formation model included in our simulation, the good match between the observed galaxy stellar mass function from~\cite{Baldry2012} and that measured in our simulation (brown line in Fig.~\ref{Fig:number_density}) suggests that the stellar masses of the simulated dwarfs are robust, at least in the mass range of overlap. \begin{figure} \centering \includegraphics{Figures/Fig5.pdf} \caption{Stellar mass function of the simulated galaxies (brown dashed line), split by the galaxy formation redshift, $z_{\rm f}$, as indicated in the legend. Late-forming dwarfs (defined here as those with $z_{\rm f} < 3$) make up less than $20\%$ of the simulated galaxy population at the low-mass end, and they contribute even less at larger stellar masses.} \label{Fig:number_density} \end{figure} \subsection{Observational Counterparts of the Late-forming Dwarfs} It is natural to expect that late-forming galaxies exhibit systematically smaller sizes than the majority of dwarfs. This is because late-forming dwarfs form from gas that dissipates its thermal energy and sinks to the center at late times, preventing their young stars from being stirred up by mergers, as opposed to older dwarfs~\cite[see, e.g.,][for a discussion of this effect in dwarf galaxies]{Benitez-Llambay2016}. Our simulation indicates that the half-mass radius of late-forming dwarfs resolved with more than 50 particles is indeed, on average, $30 \%$ smaller than that of older dwarfs of similar present-day stellar mass. Besides their compactness, these galaxies form from generally pristine gas. Therefore, the most obvious candidates for the $\Lambda$CDM late-forming dwarfs are some of the most metal-poor star-forming blue compact dwarfs (BCDs) known in the Local Volume. BCDs such as I Zwicky 18 or DDO 68 are indeed characterized by compact sizes, ongoing intense star-formation activity in their center, and surprisingly low metallicity.
Due to the difficulty of reconstructing star-formation histories with high resolution at early times, it is, however, unclear whether they contain a substantial population of old stars~\cite[e.g.][]{Papaderos2002, Izotov2004, Pustilnik2005, Pustilnik2008, Jamet2010}. Consider, for example, the case of I Zwicky 18. The vigorous star formation at its center at a rate of $\sim 16 \times 10^{-2} \ M_{\odot} \rm \ yr^{-1} \ kpc^{-2}$, together with its large gas supplies, low dust abundance~\cite[e.g.][]{Wu2007}, low metallicity of $\sim 1/50 \ Z_{\odot}$~\cite[e.g.][]{Aloisi1999}, and low circular velocity of $(38 \pm 4.4) \rm \ km \ s^{-1}$~\citep{Lelli2012}, make this galaxy an ideal analog for the most massive galaxies found in our sample of simulated late-forming dwarfs, whose gaseous halos started their runaway gravitational collapse only recently. Also, the extreme properties of I Zwicky 18 have led some authors to argue that this dwarf would be a present-day analog of star-forming galaxies found at high redshift~\cite[e.g.][]{Papaderos2012}. However, the idea that I Zwicky 18 is a dwarf undergoing its first formation today has been disputed on the grounds that, similarly to other observed BCDs~\cite[e.g.][]{Schulte-Ladbeck2001, Annibali2003, Vallenari2005, Aloisi2005, Makarov2017}, I Zwicky 18 contains RGB stars that place a lower limit on its formation time of about 1 Gyr ago~\citep[e.g.][and references therein]{Aloisi2007, Annibali2013, Sacchi2016}. This limit, albeit old in the context of stellar population synthesis models, only constitutes a small fraction of the cosmic timescales spanned by the late-forming dwarfs discussed here. On statistical grounds, however, the scarcity of massive late-forming dwarfs in $\Lambda$CDM indicates that most BCDs observed in the local universe cannot be undergoing their formation for the first time today.
If our volume is representative of the local universe, our simulation suggests that the great majority of late-forming dwarfs should be much fainter than I Zwicky 18, and much fainter than most BCDs. Indeed, BCDs have stellar masses in the range $10^{7} \lesssim M_{\rm gal}/M_{\odot} \lesssim 10^{9}$, whereas Fig.~\ref{Fig:number_density} shows that simulated late-forming dwarfs have stellar masses $M_{\rm gal} \lesssim 10^{7} \ M_{\odot}$. Whether one of the known BCDs with extreme properties constitutes a true analog of the late-forming dwarfs analyzed here could be answered with deeper photometric observations that reach the oldest main-sequence turnoff~\cite[see, e.g., the review by][]{Gallart2005}. The higher resolution and sensitivity afforded by upcoming observational facilities, in particular the James Webb Space Telescope and the Extremely Large Telescope, will enable precise measurements of resolved star formation histories in these types of galaxies at larger distances, helping to constrain their formation epoch and placing them in the context of the population analyzed in our work. On the other hand, upcoming surveys such as those by the Vera Rubin Observatory may be able to detect ultrafaint dwarfs beyond our Local Group, some of which may well be the late-forming dwarfs discussed here. These endeavors should not be taken lightly, as late-forming dwarfs could provide a powerful avenue to elucidate the past growth of low-mass DM halos, probe a distinctive halo mass-scale, and test the core of our understanding of galaxy formation at the smallest scales. \newpage \acknowledgments We thank the anonymous referee for a thoughtful review of our manuscript, which led to the improvement of our presentation. This work is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (GA 757535) and UNIMIB's Fondo di Ateneo Quota Competitiva (project 2020-ATESP-0133). 
This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (https://www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grants ST/P002293/1, ST/R002371/1, and ST/S002502/1, Durham University, and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. The simulation used in this work was performed using DiRAC’s director discretionary time awarded to A.B.L. A.B.L. is grateful to Prof. Mark Wilkinson for awarding this time.
\section{Introduction} Let $k$ be an algebraically closed field of characteristic zero and $k[x_1,\ldots,x_n]$ be the standard graded polynomial ring in $n$ variables. Given a degree $d$ form $F$, the {\it Waring Problem for Polynomials} asks for the least value of $s$ for which there exist linear forms $L_1,\ldots,L_s$ such that \[F=\sum_1^s L_i^d \ . \] This value of $s$ is called the {\it Waring rank} of $F$ (or simply the {\it rank} of $F$) and will be denoted by $\mathrm{rk}(F)$. There was a long-standing conjecture describing the rank of a generic form $F$ of degree $d$, but it was only verified relatively recently, in the famous work of J. Alexander and A. Hirschowitz \cite{AH95}. However, for a given {\it specific} form $F$ of degree $d$ the value of $\mathrm{rk}(F)$ is not known in general. Moreover, in the general situation, there is no effective algorithmic way to compute the rank of a given form. Algorithms exist in special cases, e.g. for $n=2$ and any $d$ (the classical algorithm attributed to Sylvester) and for $d=2$ and any $n$ (finding the canonical form of a quadratic form). Given this state of affairs, several attempts have been made to compute the rank of specific forms. One particular family of examples that has attracted attention is the collection of monomials. A few cases where the ranks of specific monomials are computed can be found in \cite{LM} and in \cite{LandsbergTeitler2010}. In \cite{RS2011} the authors determine $\mathrm{rk}(M)$ for the monomials \[M=\left(x_1\cdot\ldots\cdot x_n\right)^m\] for any $n$ and $m$. In particular, they show that $\mathrm{rk}(M)=(m+1)^{n-1}$.
In this paper we completely solve the Waring Problem for monomials in Proposition \ref{monmialsumofpoercorol}, showing that \begin{eqnarray} \label{monomio} \ \ \ \ \mathrm{rk}({x_1}^{a_1}\cdot\ldots\cdot {x_n}^{a_n})= \frac 1{a_1+1} \Pi_{i=1}^n (a_i+1)= \left\{ \begin{matrix} 1 & \hbox{ for } n=1 \\ \Pi_{i=2}^n (a_i+1) & \hbox{ for } n\geq 2 \end{matrix} \right. , \end{eqnarray} where $1\leq a_1\leq\ldots\leq a_n$. A lengthier proof of this result was first obtained in \cite{CCG11} and then, in a different form, in \cite{BBT2012}. Our approach to solving the Waring Problem for specific polynomials follows a well known path, namely the use of the Apolarity Lemma \ref{apolarityLEMMA} to relate the computation of $\mathrm{rk}(F)$ to the study of ideals of reduced points contained in the ideal $F^\perp$. Using these ideas we obtain a complete solution to the Waring Problem for polynomials which are sums of coprime monomials, see Theorem \ref{mainthm}. More precisely, if $F=M_1+\ldots +M_r$ where the monomials $M_i$ are such that $\gcd(M_i,M_j)=1$ for $i\neq j$ and $\mathrm{deg}(F)>1$, then \[\mathrm{rk}(F)=\mathrm{rk}(M_1)+\ldots +\mathrm{rk}(M_r).\] Using our knowledge of the rank we obtain two interesting applications. We show that, only in three variables and for degree high enough, certain monomials provide examples of forms having rank higher than the generic form, see Proposition \ref{nleq3monomial}. Finally, we find a minimal sum of powers decomposition for forms which are sums of pairwise coprime monomials. In the case of monomials this result appeared in \cite{CCG11} and was then improved in \cite{BBT2012}. The main results of this paper were obtained in July 2011 when the authors were visiting the University of Coimbra in Portugal. The authors wish to thank GNSAGA of INDAM for the financial support during their visit.
\section{Basic facts} We consider $k$, an algebraically closed field of characteristic zero, and the polynomial rings { \[S=k[x_{1,1} ,\ldots, x_{1,n_1}, \ldots \ldots ,x_{r,1} ,\ldots, x_{r,n_r} ],\] \[T=k[X_{1,1} ,\ldots, X_{1,n_1}, \ldots \ldots ,X_{r,1} ,\ldots, X_{r,n_r} ].\] } We make $T$ act on $S$ via differentiation, i.e. we think of $X_{i,j} =\partial/ \partial x_{i,j}$ (see, for example, \cite{Ge} or \cite{IaKa}). We refer to a polynomial in $T$ as $\partial$, instead of using capital letters. In particular, for any form $F$ in $S_d$ we define the ideal $F^\perp\subseteq T$ as follows: \[F^\perp=\left\{\partial\in T : \partial F=0\right\}.\] Given a homogeneous ideal $I\subseteq T$ we denote by $$HF(T/I,i)= \dim_kT_i -\dim_k I_i$$ its {\it Hilbert function} in degree $i$. It is well known that for all $i \gg 0$ the function $HF(T/I,i)$ is a polynomial function with rational coefficients, called the {\it Hilbert polynomial} of $T/I$. We say that an ideal $I\subseteq T$ is {\it one dimensional} if the Krull dimension of $T/I$ is one, or equivalently if the Hilbert polynomial of $T/I$ is some integer constant, say $s$. The integer $s$ is then called the {\it multiplicity} of $T/I$. If, in addition, $I$ is a radical ideal, then $I$ is the ideal of a set of $s$ distinct points. We will use the fact that if $I$ is a one dimensional saturated ideal of multiplicity $s$, then $HF(T/I, i)$ is always $\leq s$. Our main tool is the {\it Apolarity Lemma}, the proof of which can be found in \cite[Lemma 1.31]{IaKa}. \begin{lem}\label{apolarityLEMMA} A homogeneous degree $d$ form $F\in S$ can be written as \[F=\sum_{i=1}^s L_i^d ,\ \ \ L_i \hbox{ pairwise linearly independent}\] if and only if there exists $I\subseteq F^\perp$ such that $I$ is the ideal of a set of $s$ distinct points. \end{lem} We conclude with the following trivial, but useful, remark showing that the rank of a form does not change when variables are added to the polynomial ring.
\begin{rem}\label{leastvarREM}{ The computation of the rank of $F$ is independent of the polynomial ring in which we consider $F$. To see this, consider a degree $d$ form $F\in k[x_1,\ldots,x_n]$ and suppose we know $\mbox{rk}(F)$. We can also consider $F\in k[x_1,\ldots,x_n,y]$ and we can look for a sum of powers decomposition of $F$ in this extended ring. If \[F(x_1,\ldots,x_n)=\sum_1^r \left(L_i(x_1,\ldots,x_n,y)\right)^d,\] then, by setting $y=0$, we readily get $r\geq \mbox{rk}(F)$. Thus, by adding variables we cannot get a sum of powers decomposition involving fewer summands. Moreover, if $r$ is the minimal length of a sum of powers decomposition of $F$ in the extended ring, we readily get $r= \mbox{rk}(F)$. In particular, given a monomial \[M=x_1^{a_1}\cdot\ldots\cdot x_n^{a_n},\] with $1\leq a_1\leq\ldots\leq a_n$, it is enough to work in $k[x_1,\ldots,x_n]$ in order to compute $\mbox{rk}(M)$.}\end{rem} \section{Main result} It is useful to recall the following. Let $I\subseteq {T}$ be an ideal and $\partial\in T_1$ a linear homogeneous differentiation. If $\partial$ is not a zero divisor in $T/I$ then \begin{eqnarray} \label{HF} HF(T/I,t)=\sum_{i=0}^t HF(T/(I+(\partial)),i). \end{eqnarray} We first compute the rank of any monomial. Thus, we only consider the case $r=1$ and, just for this result, we drop the double index notation, i.e. we abuse notation and we let $S=k[x_1,\ldots,x_n]$ and $T=k[X_1,\ldots,X_n]$. \begin{prop}\label{monmialsumofpoercorol} Let $n\geq 1$ and $1\leq a_1\leq \ldots \leq a_n$. If \[M=x_1^{a_1}\cdot\ldots\cdot x_n^{a_n},\] then $\mbox{rk}(M)=\frac 1{a_1+1} \Pi_{i=1}^n (a_i+1)$. \end{prop} \begin{proof} If $n=1$, then $M$ is the power of a variable and $\mbox{rk}(M)=1$; we can then assume $n>1$.
The perp ideal of $M$ is $M^\perp=(X_1^{a_1+1},\ldots,X_n^{a_n+1})$ and hence \[I= (X_n^{a_n+1}-X_1^{a_n+1},\ldots,X_2^{a_2+1}-X_1^{a_2+1})\subseteq M^\perp .\] As $I$ is the ideal of a complete intersection scheme of $\frac 1{a_1+1} \Pi_{i=1}^n (a_i+1)$ distinct points, the Apolarity Lemma yields \[\mbox{rk}(M)\leq \frac{1}{a_1+1} \Pi_{i=1}^n (a_i+1).\] Now let $I\subseteq M^\perp$ be the ideal of any scheme of $s$ distinct points; to complete the proof it is enough to show that $s\geq \frac{1}{a_1+1} \Pi_{i=1}^n (a_i+1)$. To do this, we set $I'=I:(X_1)$ and we notice that $I'$ is the ideal of a scheme of $s'\leq s$ distinct points; notice that $s'>0$ as $X_1\not\in I$. Clearly we have \[I'+(X_1)\subseteq J=M^\perp:(X_1)+(X_1)=(X_1,X_2^{a_2+1},\ldots,X_n^{a_n+1}).\] Hence, for $t\gg 0$ we get \[s'=HF(T/I',t)=\sum_{i=0}^t HF(T/(I'+(X_1)),i)\geq \] \[\sum_{i=0}^t HF(T/J,i)=\frac 1{a_1+1} \Pi_{i=1}^n (a_i+1),\] where the last equality holds as $J$ is a complete intersection ideal. The conclusion then follows as $s\geq s'$. \end{proof} We now state and prove our main result. \begin{thm}\label{mainthm} Consider the degree $d$ form \[F= M_1+ \ldots + M_r \] \[= x_{1,1} ^{a_{1,1}}\cdot\ldots\cdot x_{1,n_1}^{a_{1,n_1}}+ \ldots \ldots + x_{r,1} ^{a_{r,1}} \cdot\ldots\cdot x_{r,n_r}^{a_{r,n_r}} , \] where \[a_{i,1}+ \ldots+{a_{i,n_i}}=d, \ \ \ \ 1\leq a_{i,1}\leq\ldots \leq a_{i,n_i}, \ \ \ \ (1 \leq i \leq r ) . \] If $d=1$ then $\mathrm{rk}(F)=1$. If $d\geq 2$, then $$\mathrm{rk}(F)=\sum _{i=1}^r \mathrm{rk}(M_i) .$$ \end{thm} \begin{proof} The case $d=1$ is trivial as $F$ is a linear form, thus we only have to prove the $d\geq 2$ case. { For $d=2$, $F$ is a quadratic form. Since its associated matrix is {congruent} to a diagonal matrix of rank $\sum _{i=1}^r n_i$, the conclusion follows. } For $r=1$ the form $F$ is a monomial and the theorem is proved in Proposition \ref{monmialsumofpoercorol}. Hence we have only to consider the cases $d>2$ and $r>1$.
By writing each monomial $M_i$ as a sum of powers we get a sum of powers decomposition of $F$, thus we have $\mathrm{rk}(F)\leq \sum _{i=1}^r\mathrm{rk}(M_i)$. Hence, using Lemma \ref{apolarityLEMMA}, it is enough to show that if $ F^\perp$ contains the ideal of a scheme of $s$ distinct points, then $s\geq \sum _{i=1}^r\mathrm{rk}(M_i)$. Let $I \subseteq F^\perp$ be the ideal of a scheme $\mathbb X $ of $s$ distinct points, and let $ \mathbb X' \subseteq \mathbb X$ be the subset of the $s'$ points of $\mathbb X$ not lying on $\{X_{1,1}=\dots = X_{r,1}=0\}$. Let $I'$ be the ideal of $\mathbb X' $, i.e., \[ I'=I:( X_{1,1},\dots , X_{r,1} ) . \] We will prove that $ s' \geq \sum _{i=1}^r\mathrm{rk}(M_i)$, so the conclusion will follow as $s\geq s'$. The generic linear derivation $\alpha_1 X_{1,1}+\ldots + \alpha_r X_{r,1}$ (where $\alpha_i \in k $) is not a zero divisor in $T/I'$. Without loss of generality, and possibly rescaling the variables, we may assume that $$\partial= X_{1,1}+\ldots + X_{r,1}$$ is not a zero divisor in $T/I'$. Hence, for $t\gg 0$ we get \begin{equation} \label{0} s' = HF(T/I',t) = \sum_{i=0}^t HF(T/(I'+ ( \partial ) ),i) . \end{equation} Let $w$, ($0 \leq w \leq r$), be the number of $1$'s in the set $\{ a_{1,1}, \dots, a_{r,1} \}$. We may assume that \[ a_{1,1}= \dots= a_{w,1} =1.
\] We have \[I' + (\partial) \subseteq (F^\perp:( X_{1,1},\dots , X_{r,1} ))+ (\partial ) \] \[= (F^\perp:( X_{1,1}) )\cap \ldots \cap (F^\perp:( X_{r,1})) + (\partial ) \] \[\subseteq (x_{1,2} ^{a_{1,2}} \cdot\ldots\cdot x_{1,n_1}^{a_{1,n_1}})^\perp \cap \ldots \ldots \cap ( x_{w,2} ^{a_{w,2}} \cdot\ldots\cdot x_{w,n_w}^{a_{w,n_w}} )^\perp \] \[ \cap ( ( x_{{w+1},1} ^{a_{{w+1},1}-1} \cdot x_{{w+1},2} ^{a_{{w+1},2}} \cdot \ldots\cdot x_{{w+1},n_{w+1}}^{a_{{w+1},n_{w+1}}} )^\perp + (X_{{w+1},1}) ) \] \[ \cap \ldots \ldots \cap ( ( x_{r,1} ^{a_{r,1}-1} \cdot x_{r,2} ^{a_{r,2}} \cdot \ldots\cdot x_{r,n_r}^{a_{r,n_r}} )^\perp + ( X_{r,1} )) \] \[ = J_1\cap \ldots \cap J_r, \] where \[J_{1}=(X_{1,1},X_{1,2}^{a_{1,2} +1}, \ldots, X_{1,n_1}^{a_{1,n_1}+1}, X_{2,1}, \ldots, X_{2,n_2}, \ldots \ldots , X_{r,1}, \ldots, X_{r,n_r} );\] \[J_{2}=(X_{1,1},\ldots, X_{1,n_1}, X_{2,1}, X_{2,2} ^{a_{2,2}+1}\ldots, X_{2,n_2}^{a_{2,n_2}+1}, \ldots \ldots , X_{r,1}, \ldots, X_{r,n_r} );\] \[ \vdots \] \[J_{r}=(X_{1,1},\ldots, X_{1,n_1}, \ldots \ldots , X_{r,1}, X_{r,2} ^{a_{r,2}+1} \ldots, X_{r,n_r} ^{a_{r,n_r}+1} ).\] Observe that, if $n_i=1$, then $J_i$ is the maximal ideal. So we have \begin{equation} \label{1} I' + (\partial) \subseteq J_1\cap \ldots \cap J_r. \end{equation} The only linear forms in $ J_1\cap \ldots \cap J_r$ are $X_{1,1},$ $ X_{2,1},$ $ \ldots , $ $X_{r,1}$, hence \begin{equation} \label{2} \dim ( J_1\cap \ldots \cap J_r)_1 = r. \end{equation} Now we will prove by contradiction that in $I'$ there are no linear forms. Assume that $ L \in I' $ is a linear form, $$L = \alpha _{1,1} X_{1,1}+\ldots+ \alpha _{1,n_1} X_{1,n_1}+ \cdots + \alpha _{r,1} X_{r,1}+ \cdots+ \alpha _{r,n_r} X_{r,n_r}. 
$$ Since $I' = I : ( X_{1,1},\dots , X_{r,1} ) $ we have $$X_{1,1}L,\dots , X_{r,1} L \in I .$$ Hence $X_{i,1}L \in F^{\perp} $, ($1 \leq i \leq r$), so for $1 \leq i \leq w$ we get $$ \alpha _{i,2}= \ldots = \alpha _ {i,n_i} =0,$$ and for $ i > w$ we get $$ \alpha _{i,1}= \ldots = \alpha _ {i,n_i} =0.$$ Hence $$L = \alpha _{1,1} X_{1,1}+\ldots+\alpha _{w,1} X_{w,1}. $$ Let $ \mathbb X'' = \mathbb X \setminus \mathbb X'$, that is, $\mathbb X''$ is the subset of the points of $\mathbb X$ lying on $\{X_{1,1}=\dots = X_{r,1}=0\}$. Obviously $ \{L=0 \}\supseteq \mathbb X'' $; moreover $L\in I'$ vanishes on $\mathbb X'$, so $L$ vanishes on all of $\mathbb X$ and it follows that $L \in I$. Since $I \subseteq F^\perp $, and in $ F^\perp$ there are no linear forms, we get a contradiction. So we have \begin{equation} \label{3} \dim ( I' + (\partial) )_1 =1 . \end{equation} Now by (\ref{0}), (\ref{1}), (\ref{2}), (\ref{3}), we get \[ s' = \sum_{i=0}^t HF(T/(I'+ ( \partial ) ),i) \] \[=\dim T_1-1+ \sum_{i\neq1; \ i=0}^t HF(T/(I'+ ( \partial ) ),i) \] \[\geq \dim T_1-1+ \sum_{i\neq1; \ i=0}^t HF(T/ J_1\cap \ldots \cap J_r,i) \] \[=\dim T_1-1+ \sum_{ i=0}^t HF(T/ J_1\cap \ldots \cap J_r,i) -(\dim T_1 -r). \] Hence \begin{equation} \label {7} s' \geq \sum_{ i=0}^t HF(T/ J_1\cap \ldots \cap J_r,i) +r-1. \end{equation} We now need the following claim. {\it Claim:} For $t \gg 0$, \[\sum_{i=0}^t HF(T/ J_1\cap \ldots \cap J_r ,i)= \sum_{i=0}^t HF(T/ J_1 ,i)+ \ldots +\sum_{i=0}^t HF(T/ J_r ,i)-r+1 . \] \begin{proof}[Proof of the Claim:] To prove the claim we proceed by induction on $r$. If $r=1$ the claim is obvious. Let $r>1$ and consider the following short exact sequence: \[ 0\longrightarrow T/ J_1\cap \ldots \cap J_r \longrightarrow T/J_1\oplus T/ J_2\cap \ldots \cap J_r \longrightarrow T/(J_1+ J_2\cap \ldots \cap J_r)\longrightarrow 0. \] By the inductive hypothesis, and since $J_1+ J_2\cap \ldots \cap J_r$ is the maximal ideal, we get the conclusion.
\end{proof} Now we notice that, since each $J_i$ is generated by a regular sequence of length $n_1 + \dots +n_r$, for $t \gg 0$ we have \[\sum_{i=0}^t HF(T/J_1,i)= \frac 1{a_{1,1}+1}\Pi_{j=1}^{n_1 } ( a_{1,j}+1) = \mathrm{rk}(M_1), \] \[ \vdots \] \[\sum_{i=0}^t HF(T/J_r,i)= \frac 1{a_{r,1}+1}\Pi_{j=1}^{n_r } ( a_{r,j}+1) = \mathrm{rk}(M_r). \] Hence by (\ref{7}) and the claim the conclusion immediately follows. \end{proof} \begin{rem}\label{leastvarrem}{ Let $F=\sum_1^r M_i$ be as in Theorem \ref{mainthm} and $\mathbb{X}$ be a set of $s$ distinct points such that $I_{\mathbb{X}}\subset F^\perp$. If $\mathbb X \cap \{X_{1,1}=\dots = X_{r,1} =0\}\neq \emptyset$, and $\mathbb{X}'$ denotes the subset of the $s'$ points of $\mathbb X$ not lying on that linear space, by the proof of the theorem we see that $s\geq s'+1 \geq\mathrm{rk}(F)+1$. In particular, $\mathbb{X}$ does not have the least possible cardinality if it intersects the special linear space $\{X_{1,1}=\dots = X_{r,1} =0\}$.} \end{rem} \section{Applications} We now present a few applications of our results. \subsection{On the rank of the generic form} It is well known, see \cite{AH95}, that for the generic degree $d$ form $F$ in $n$ variables one has \[\mathrm{rk}(F)=\left\lceil{{d+n-1\choose d}\over n}\right\rceil.\] However, the rank of a given specific form can be bigger or smaller than that number. Moreover, it is trivial to see that every form of degree $d$ is a sum of ${d+n\choose d}$ powers of linear forms. But, in general, it is not known how big the rank of a degree $d$ form can be. Using monomials we can try to produce explicit examples of forms having rank bigger than that of the generic form. We give a complete description of the situation for the case of three variables. In that case, for $d \gg 0$, there are degree $d$ monomials with rank bigger than that of the generic form, see Proposition \ref{nleq3monomial}. However, for more than three variables, this is no longer the case, see Remark \ref{ngeq4monomial}.
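The three-variable claim just made is easy to check by brute force for moderate degrees. The following sketch (plain Python, an illustration added for this discussion and not part of the original text) maximizes $(a_2+1)(a_3+1)$ over degree-$d$ monomials with $1\leq a_1\leq a_2\leq a_3$, compares the maximum with the generic rank $\lceil \binom{d+2}{2}/3 \rceil$ (restricting to $d\geq 5$ to stay clear of the exceptional quartic case of Alexander--Hirschowitz), and checks the closed form stated in the proposition below.

```python
from math import ceil, comb

def max_monomial_rank(d):
    # rk(x1^a1 x2^a2 x3^a3) = (a2+1)(a3+1) over 1 <= a1 <= a2 <= a3, a1+a2+a3 = d
    return max((a2 + 1) * (d - a1 - a2 + 1)
               for a1 in range(1, d // 3 + 1)
               for a2 in range(a1, (d - a1) // 2 + 1))

def generic_rank(d):
    # Alexander-Hirschowitz value for ternary degree-d forms (valid for d >= 5)
    return ceil(comb(d + 2, 2) / 3)

for d in range(3, 60):
    closed_form = ((d + 1) // 2) ** 2 if d % 2 else (d // 2) * (d // 2 + 1)
    assert max_monomial_rank(d) == closed_form   # formula of the proposition
    if d >= 5:
        assert max_monomial_rank(d) > generic_rank(d)  # monomials beat the generic rank
print("maximal monomial rank exceeds the generic rank for 5 <= d < 60")
```

The two maxima grow like $d^2/4$ and $d^2/6$ respectively, consistent with the asymptotic ratio $3/2$ stated in the proposition.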
\begin{prop}\label{nleq3monomial} Let $n=3$ and let $d>2$ be an integer. Then \[ \mbox{max}\left\{\mbox{rk}(M): M \in S_d\mbox{ is a monomial}\right\}= \left\{\begin{array}{ll}\left({d+1\over 2}\right)^2 & d\mbox{ is odd} \\ \\ {d\over 2}\left({d\over 2}+1\right) & d\mbox{ is even}\end{array}\right. \] and this number is asymptotically ${3\over 2}$ times the rank of the generic degree $d$ form in three variables, i.e. \[\mbox{max}\left\{\mbox{rk}(M): M \in S_d\mbox{ is a monomial}\right\}\simeq {3\over 2}\left\lceil{{d+2\choose 2}\over 3}\right\rceil\] for $d\gg 0$. \end{prop} \begin{proof} We consider monomials $x_1^{a_1}x_2^{a_2}x_3^{a_3}$ with the conditions $1\leq a_1\leq a_2\leq a_3$, \[a_1+a_2+a_3=d\] and we want to maximize the function $f(a_2,a_3)=(a_2+1)(a_3+1)$. Considering $a_1$ as a parameter we are reduced to an optimization problem in the plane where the constraint is given by a segment and the target function is a branch of a hyperbola. For any given $a_1$, it is easy to see that the maximum is achieved when $a_2$ and $a_3$ are as close as possible to ${d-a_1 \over 2}$. Moreover, the maximum over $a_1$ is attained at $a_1=1$. In conclusion $\mathrm{rk}(M)$ is maximal for the monomial \[ M=x_1x_2^{d-1\over 2}x_3^{d-1\over 2} (d \mbox{ odd}) \mbox{ or } M=x_1x_2^{{d\over 2} - 1}x_3^{{d\over 2}} (d\mbox{ even}).\] With a straightforward computation, one easily sees that the rank of the generic form is asymptotically ${d^2 \over 6}$, while the maximal rank of a degree $d$ monomial is asymptotically ${d^2 \over 4}$. The conclusion follows. \end{proof} \begin{rem}\label{ngeq4monomial} { If $n\geq 4$ and $d\gg 0$, the degree $d$ monomials do not provide examples of high rank forms.
For example, let $d=(n-1)k+1$ and consider a highest rank degree $d$ monomial \[M=x_1x_2^k\cdot \ldots\cdot x_n^k.\] If $F$ is a generic degree $d$ form, then \[\mathrm{rk}(M)\simeq {n!\over (n-1)^{n-1} }\mathrm{rk}(F)\] for $d\gg 0$ and we note that ${n!\over (n-1)^{n-1} }\leq 1$ if $n\geq 4$. Hence, for each $n\geq 4$, there are infinitely many values of $d$ for which no degree $d$ monomial has rank bigger than that of the generic form.} \end{rem} \subsection{Sum of powers decomposition for polynomials} Since we now know the rank of any given monomial, we can give a description of one of its minimal sum of powers decompositions. An explicit form for the scalars $\gamma$ can be found in \cite{BBT2012} and it was also noticed by G. Whieldon \cite{Whieldon}. In Remark \ref{sumofcoprimerem} we see how to use this to obtain a minimal sum of powers decomposition for the sum of coprime monomials. \begin{prop}\label{sumofpowerdecmon}{ For integers $1\leq a_1\leq \ldots \leq a_n$ consider the monomial \[M=x_1^{a_1}\cdot\ldots\cdot x_n^{a_n}\] and let $\mathcal{Z}(i)=\{z\in\mathbb{C} : z^{a_i+1}=1\}$. Then \[M=\sum_{\epsilon(i)\in\mathcal{Z}(i), i=2,\ldots,n}\gamma_{\epsilon(2),\ldots,\epsilon(n)} \left(x_1+\epsilon(2)x_2+\ldots +\epsilon(n)x_n\right)^d\] where the $\gamma_{\epsilon(2),\ldots,\epsilon(n)}$ are scalars and this decomposition involves the least number of summands.} \end{prop} \begin{proof} { Another consequence of \cite[Lemma 1.15]{IaKa} allows one to write a form as a sum of powers of linear forms. If $I\subset M^\perp$ is an ideal of $s$ points, then \[M=\sum_{j=1}^{s}\gamma_j \left(\alpha_j(1)x_1+\alpha_j(2)x_2+\ldots +\alpha_j(n)x_n\right)^d\] where the $\gamma_j$ are scalars and $[\alpha_j(1):\ldots:\alpha_j(n)]$ are the coordinates of the points having defining ideal $I$.
Given $M$ we can choose the following ideal of points \[I=(X_2^{a_2+1} - X_1^{a_2+1}, X_3^{a_3+1} - X_1^{a_3+1}, \ldots , X_n^{a_n+1} - X_1^{a_n+1} ) .\] It is straightforward to see that the points defined by $I$ have coordinates \[[1:\epsilon(2):\ldots:\epsilon(n)]\] where $\epsilon(i)\in\mathcal{Z}(i)$. Renaming the scalars and taking all possible combinations of the roots of $1$ we get the desired $\mathrm{rk}(M)=\Pi_{i=2}^n(a_i+1)$ points and the result follows.} \end{proof} \begin{rem}{In order to find an explicit decomposition for a given monomial it is enough to solve a linear system of equations to determine the $\gamma_j$. For example, in the very simple case of $M=x_0x_1x_2$, we only deal with square roots of $1$ and we get: \[x_0x_1x_2={1\over 24}(x_0+x_1+x_2)^3-{1\over 24}(x_0+x_1-x_2)^3-{1\over 24}(x_0-x_1+x_2)^3+{1\over 24}(x_0-x_1-x_2)^3.\]} \end{rem} \begin{rem}\label{sumofcoprimerem}{Using Proposition \ref{sumofpowerdecmon} we can easily find a minimal sum of powers decomposition for the sum of coprime monomials. If $F=M_1+\ldots +M_r$, then a minimal sum of powers decomposition of $F$ is obtained by decomposing each $M_i$ as described in Proposition \ref{sumofpowerdecmon}.} \end{rem} \begin{rem}{ Let $F=\sum_1^r M_i$ be the sum of coprime monomials, and $F=\sum_1^{\mathrm{rk}(F)} L_i^d$ be a minimal sum of powers decomposition of $F$. By Remark \ref{leastvarrem} we get that each linear form $L_i$ must involve at least one of the variables $x_{1,1},\ldots,x_{r,1}$, where these are the variables with the least exponent in each $M_i$. A particular instance of this property, for $r=1$, was noticed in \cite{BBT2012}.}\end{rem}
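The explicit decomposition of $x_0x_1x_2$ above can be verified symbolically. The following sketch (sympy, an illustration added for this discussion) checks both the apolarity containment, i.e. that the generators of the ideal of points annihilate the monomial under the differentiation action, and the decomposition identity itself.

```python
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
M = x0 * x1 * x2

# Apolarity containment: the generators X1^2 - X0^2 and X2^2 - X0^2 of the
# ideal of points kill M under the differentiation action.
assert sp.diff(M, x1, 2) - sp.diff(M, x0, 2) == 0
assert sp.diff(M, x2, 2) - sp.diff(M, x0, 2) == 0

# The four points [1 : ±1 : ±1] give the decomposition of the remark.
decomp = ((x0 + x1 + x2)**3 - (x0 + x1 - x2)**3
          - (x0 - x1 + x2)**3 + (x0 - x1 - x2)**3) / 24
assert sp.expand(decomp - M) == 0
print("decomposition verified: rk(x0*x1*x2) <= 4")
```

The same pattern extends to any monomial, with $(a_i+1)$-th roots of unity in place of $\pm 1$.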
\section{Introduction} \label{s:intro} \subsection{Natural Gas Compressor Stations and Cancer Mortality} During the last several decades, the United States (US) has witnessed a sharp increase in the incidence of thyroid cancer, which now accounts for 1-1.5\% of all newly diagnosed cancer cases \citep{pellegriti2013worldwide}. Increased exposure of the population to radiation and carcinogenic environmental pollutants is blamed, in part, for this increase. During the last several decades, US natural gas (NG) production has also increased rapidly. NG production and distribution systems have recently received attention as a potential source of human exposure to carcinogenic pollutants and endocrine-disrupting chemicals \citep{kassotis2016endocrine}. Recent epidemiological studies have found links between NG production and leukemia and between NG production and thyroid cancer \citep{finkel2016shale,mckenzie2017childhood}. The relationship between NG systems and thyroid cancer could be of particular interest due to their coincident rise. Most previous studies of the health effects of NG systems have focused on associations between exposure to production sites (e.g., drilling wells) and health outcomes \citep{finkel2016shale,rasmussen2016association,mckenzie2017childhood}. In this study, we turn our attention instead to the potential health effects of NG distribution systems. Specifically, we aim to provide the first data-driven epidemiological investigation of the causal effects of proximity to NG compressor stations on thyroid cancer and leukemia mortality rates. NG compressor stations are pumping stations located at 40-70 mile intervals along NG pipelines. They keep pressure in the pipelines so that NG flows in the desired direction \citep{messersmith2015}. The operations at compressor stations have raised health concerns for residents of nearby communities \citep{EHP2015}. 
In this paper, we exclude from consideration the health impacts of accidents at compressor stations and focus on the potentially harmful exposures to nearby communities resulting from the normal operations of compressor stations. Fugitive emissions, or unintended leaking of chemicals from the compressor station equipment, are known to occur but are not well-characterized. NG compressor stations also routinely conduct ``blowdowns'', in which pipelines and equipment are vented to reduce pressure \citep{EHP2015-2} and any chemicals present in the pipeline are reportedly released into the air in a 30-60 meter plume of gas \citep{EHP2015}. Little is known about the specific types of chemicals emitted. While airborne emissions from compressor stations are regulated by the EPA under the Clean Air Act \citep{messersmith2015}, air quality studies in Pennsylvania and Texas have discovered harmful chemicals in excess of standards near NG compressor stations \citep{wolfeagle2009,padep2010}. These chemicals include methane, ethane, propane, and numerous benzene compounds. Benzene is a known carcinogen \citep{maltoni1989benzene,golding1999possible} and many of these compounds are known or suspected endocrine disruptors \citep{epaEDSP}. Motivated by these findings, we present an investigation of the causal effects of compressor stations on county-level thyroid cancer and leukemia mortality rates. From a sample of 978 counties from the mid-western region of the US, we obtained their NG compressor stations exposure status, their thyroid cancer and leukemia mortality rates, and many suspected confounders of this relationship. While we would like to apply a classic non-parametric causal inference analysis rooted in the potential outcomes approach \citep{rubin_estimating_1974}, the data exhibit non-overlap, i.e., in some areas of the confounder space, there is little or no variability in the exposure status of the units. 
Due to this non-overlap, any attempt to adjust for confounding when estimating the population average causal effect must rely upon model-based extrapolation, because we have insufficient data to infer the missing potential outcomes in those regions of the confounder space. Thus non-parametric causal inference methods may yield unreliable results. In this paper, we seek to make two methodological contributions to the causal inference literature, in the context of the potential outcomes framework. First, we introduce a flexible, data-driven definition of sample propensity score overlap and non-overlap regions. Second, we propose a novel approach to estimating population average causal effects in the presence of non-overlap. Using this approach, the sample is split into a region of overlap (RO) and a region of non-overlap (RN) and distinct models, appropriate for the amount of data support in each region, are developed and applied to estimate the causal effects in the two regions separately. We have found that the proposed approach leads to improved estimation of the population average causal effects compared to existing methods. Moreover, we apply this method to estimate the population average causal effect of compressor station exposure on thyroid cancer and leukemia mortality. \subsection{Causal Inference Notation and Assumptions} We first introduce notation that will be used throughout this article. For subject $i$, $(i=1,\cdots, N)$, $Y_i^{obs}$ will denote the observed outcome (here it will be assumed to be a continuous random variable; in Section~\ref{ss:binaryout} we introduce analogous notation for the binary outcomes setting), $E_i$ will denote a binary treatment or exposure, and $\boldsymbol{X}_i$ will denote a vector of observed confounders.
Under the stable unit treatment value assumption \citep{rubinrandomization1980}, potential outcomes $Y_i(1)$ and $Y_i(0)$, corresponding to the outcome that would be observed under scenarios $E_i=1$ and $E_i=0$, respectively, exist for each unit. Only one of these potential outcomes can be observed, such that $Y_i^{obs}=E_i Y_i(1)+(1-E_i) Y_i(0)$. We denote each unit's missing potential outcome as $Y_i^{mis}$, i.e., $Y_i^{mis}= (1-E_i) Y_i(1)+ E_i Y_i(0)$. An individual causal effect refers to the difference in potential outcomes for an individual, i.e., $\Delta_i=Y_i(1)-Y_i(0)$. The sample average causal effect is $\Delta_S=\frac{1}{N} \sum_{i=1}^N \Delta_i$, the population conditional average causal effect is $\Delta_{P|\boldsymbol{x}}=E[Y(1)-Y(0)|\boldsymbol{x}]$, and the population average causal effect is $\Delta_P=E_{\boldsymbol{X}}[E[Y(1)-Y(0)|\boldsymbol{X}]]$. The identifiability of population level causal effects in observational studies typically relies upon the assumptions of (1) unconfoundedness and (2) positivity. Unconfoundedness implies that all confounders of the relationship between exposure and outcome are observed, i.e., $ E_i \independent (Y_i(1),Y_i(0)) |\boldsymbol{X}_i$. We assume throughout that unconfoundedness holds. Positivity, stated mathematically as $0<P(E_i=1|\boldsymbol{X}_i)<1$, is the assumption that each individual has positive probability of obtaining either exposure status. We also assume that positivity holds, i.e., that each individual in the population is eligible to receive either exposure status. A number of existing papers discuss positivity violations \citep{cole_constructing_2008,westreich_invited_2010,petersen_diagnosing_2012,damour2017overlap}. We relax the assumption of overlap, closely related to positivity, which is required to non-parametrically estimate sample and population average causal effects. The term overlap refers to the overlap of the confounder distributions across the exposure groups. 
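These estimands and assumptions can be made concrete in a simulation where both potential outcomes are generated, so that $\Delta_P$ is known. The sketch below (numpy; the data-generating process is an arbitrary illustration, and inverse-propensity weighting is used only as a familiar benchmark estimator, not as the method proposed in this paper) shows that the naive contrast of group means is confounded, while adjustment via the propensity score recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.normal(size=n)                      # a single confounder
e = 1 / (1 + np.exp(-2 * X))                # true propensity score P(E=1|X)
E = rng.random(n) < e                       # exposure assignment
Y0 = X + rng.normal(scale=0.5, size=n)      # potential outcome Y(0)
Y1 = X + 2 + rng.normal(scale=0.5, size=n)  # potential outcome Y(1): true effect = 2
Y = np.where(E, Y1, Y0)                     # only one potential outcome is observed

naive = Y[E].mean() - Y[~E].mean()          # confounded contrast, overstates the effect
ipw = np.mean(E * Y / e) - np.mean((1 - E) * Y / (1 - e))
print(f"naive = {naive:.2f}, IPW = {ipw:.2f}, truth = 2")
```

Here positivity holds by construction ($0 < e < 1$ everywhere), but in finite samples the tails of $X$ contain few units of one exposure group, which is exactly the finite-sample non-overlap discussed below.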
Non-overlap occurs when every unit in the population is eligible to receive either exposure, but, by chance, few or no units from one exposure group are observed in some confounder strata \citep{westreich_invited_2010}. Non-overlap can be a population level feature or a finite sample issue only. Here, we address problems arising from finite sample non-overlap, i.e., scenarios in which the population exhibits complete overlap but, in some areas of the confounder space, data are sparse for one or both exposure groups, leading to representative samples with non-overlap. In the presence of non-overlap, sample and population average causal effect estimates generally suffer from bias and increased variance unless they are able to rely on the additional assumption of correct model specification \citep{king2005dangers,petersen_diagnosing_2012}. The overlap assumption can be evaluated by comparing the empirical distribution of the estimated propensity score, $\hat{\xi}_i=\hat{P}(E_i=1 \mid\boldsymbol{X})$, between the exposure groups \citep{rosenbaum_central_1983,austin_introduction_2011}, assuming the propensity score is well-estimated. The further assumption that non-overlap is a finite sample feature only is generally untestable but can sometimes be evaluated using subject-matter expertise. While the assumption of unconfoundedness becomes more plausible as the number of covariates grows, the likelihood of non-overlap increases \citep{cole_constructing_2008,damour2017overlap}; thus, non-overlap is an increasing problem in our era of high dimensional data. 
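Our data-driven definition of the RO and RN is developed later; as a point of reference, the simplest conventional diagnostic defines the overlap region by the common range of the estimated propensity scores across exposure groups. A minimal sketch follows (numpy; the simulated scores and the common-range rule are illustrative assumptions, not the proposal of this paper).

```python
import numpy as np

def overlap_region(ps, exposed):
    """Common-range rule: the RO is the propensity score interval in which
    both exposure groups are represented; everything outside it is the RN."""
    lo = max(ps[exposed].min(), ps[~exposed].min())
    hi = min(ps[exposed].max(), ps[~exposed].max())
    in_ro = (ps >= lo) & (ps <= hi)
    return in_ro, (lo, hi)

rng = np.random.default_rng(1)
ps = rng.random(1000)                  # stand-in for estimated propensity scores
exposed = rng.random(1000) < ps        # exposure is more likely at high scores
in_ro, (lo, hi) = overlap_region(ps, exposed)
print(f"RO = [{lo:.2f}, {hi:.2f}]; {(~in_ro).sum()} of 1000 units fall in the RN")
```

Rules of this kind are sensitive to the extreme order statistics of the estimated scores, which is part of the motivation for a more flexible definition.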
\subsection{Existing Methods for Estimating Causal Effects in the Presence of Non-Overlap} Methodological proposals for reducing the bias and variance of causal effect estimates in the presence of propensity score non-overlap are abundant in the causal inference literature \citep{cole_constructing_2008,crump_dealing_2009,petersen_diagnosing_2012,li_balancing_2016}; however, to our knowledge, all of the existing methods modify the estimand and its interpretation so that neither sample nor population average causal effect estimates can be obtained. In many medical and health research settings, such as evaluation of treatments, the aim of the research is to help clinicians choose between various forms of treatment for patients who are likely to adhere to any of the available treatments. In these contexts, convenience samples are common and modified estimands may be equally informative as or more informative than sample or population level estimands. However, in environmental health applications like the one considered in this paper, study samples are often carefully selected to reflect a population of interest to policymakers. Here the primary aim is to estimate the burden of disease attributable to certain contaminants for the whole sample and/or underlying population in order to ultimately inform regulatory policies. Examples of such applications include the effect of power plant emissions on cardiovascular hospitalizations \citep{zigler2016hei}, the effect of unconventional natural gas extraction on childhood cancer incidence \citep{mckenzie2017childhood}, and the effect of gestational chemical exposures on birth outcomes \citep{ferguson2014environmental}. The most commonly recommended approach for handling propensity score non-overlap is ``trimming'' or discarding observations in regions of poor data support \citep{ho2007matching,petersen_diagnosing_2012,gutman2015estimation}. 
Several papers provide guidance on how to determine which observations should be trimmed, most in the context of matching \citep{cochran1973controlling,lalonde1986evaluating,dehejia1999causal,ho2007matching,crump_dealing_2009}. An appealing feature of trimming is its flexibility---it can be applied in conjunction with any causal estimation procedure. However, trimming allows only for the estimation of the average causal effect {\it in the trimmed sample}. Moreover, trimming changes the asymptotic properties of estimators in ways that are often overlooked \citep{yang2018asymptotic}. Recent methodological developments in the context of weighting approaches to causal inference may provide more interpretable estimands than trimming. Li et al. (\citeyear{li_balancing_2016}, \citeyear{li2018addressing}) introduce overlap weights, which weight each member of the sample proportional to its probability of inclusion in its counterfactual exposure group. The estimand corresponding to the overlap weights is what Li et al. call the ``average exposure effect for the overlap population'', and this overlap population, the sub-population that contains substantial proportions of both exposed and unexposed individuals, is of interest in many clinical and public health settings. \citet{zigler2017posterior} implement a similar overlap weighting approach in a Bayesian framework. In contrast to the existing literature, which emphasizes the removal or downweighting of data in regions of poor support, we propose a method that (1) minimizes model dependence where possible and (2) performs model-based extrapolation in a principled manner where necessary, yielding estimates of the population level estimand with small bias and appropriately large uncertainty. This method will be most valuable in environmental health and other applications where preserving population-level inference in spite of non-overlap is critical. 
Our Bayesian modeling approach estimates individual causal effects ($\Delta_i$) in the region of overlap (RO) and the region of non-overlap (RN) separately. In the first stage of this procedure, a non-parametric Bayesian Additive Regression Trees (BART) model \citep{chipman_bart_2010} is fit to the data in the RO to estimate causal effects $\Delta_i$ for each observation in the RO, where data support is abundant. In the second stage, a spline (SPL) is fitted to the estimated $\Delta_i$ in the RO to capture trends in the causal effect surface. The SPL is used to extrapolate those trends to estimate $\Delta_i$ for observations in the RN, where insufficient data support requires reliance on model specifications/extrapolation. The data in the RN are excluded from all model fitting, so that the models are not influenced by data-sparse regions; however, after model fitting to data in the RO, the observed potential outcomes in the RN are employed as covariate values to aid in prediction of causal effects in the RN, so that we maximally leverage the sparse data in the RN as well. Because of the flexibility of both BART and SPL, we will show that our model captures non-linearities and causal effect heterogeneity. In Section~\ref{s:methods}, we provide a data-driven definition of the RO and RN, and we introduce our method, called BART+SPL. Simulations in Section~\ref{s:sims} demonstrate how BART+SPL can yield improved population average causal effect estimates relative to existing methods and provide guidance in specifying tuning parameters. In Section~\ref{s:app}, BART+SPL is applied to estimate the effect of natural gas (NG) compressor station exposure on thyroid cancer and leukemia mortality rates. We conclude with a summary of our findings in Section~\ref{s:discuss}. 
\section{Methods} \label{s:methods} \subsection{Definition of Overlap and Non-Overlap Regions} \label{ss:overlap} Although \citet{king2005dangers} proposed the use of the convex hull of the data to define overlap and non-overlap regions as early as 2005, this appears to be an unpopular criterion, likely due to its extreme conservatism. \citet{crump_dealing_2009} noted the absence of a systematic definition of the RO in the causal inference literature. While they offer a definition, their goal is to identify a region of the data that will produce a minimum variance average causal effect estimate rather than to provide a general definition of the region where overlap is observed. Their definition may be ideal in the context of trimming in conjunction with non-parametric causal estimators; however, it is unlikely to be appropriate in more general settings. BART itself has also been proposed as a method for identifying the RN \citep{hill2013assessing}. Because BART yields relatively large uncertainties for predicted values of units in regions of poor data support, \citet{hill2013assessing} suggest trimming observations with posterior uncertainty values greater than some threshold. We intend to provide a more general characterization of the RO and RN here. We use the estimated propensity scores, $\hat{\xi}_i$, to define the RO, $O$, and the RN, $O^\perp$, in the sample. Throughout this section, we assume that the propensity score model has been correctly specified (or that the true propensity score is known, in which case $\xi_i$ can be substituted for $\hat{\xi}_i$ in the following); however, we demonstrate and discuss the performance of our method under propensity score model misspecification using simulations in Section~\ref{s:sims}. Let $\hat{\xi}_{(j)}$ denote the $j^{th}$ order statistic of the $\hat{\xi}_i$ and $P=\left[\hat{\xi}_{(1)},\hat{\xi}_{(N)}\right]$ be the subspace of $\left(0,1\right)$ over which the $\hat{\xi}_i$ are observed. 
Our definition allows every point in $P$ to be assigned to either $O$ or $O^\perp$. The user must pre-specify two parameters, denoted $a$ and $b$, which are used to identify $O$. $O^\perp$ is then defined as the complement of $O$, relative to $P$. Consider any point $o \in P$. The idea behind our definition of overlap is that, if more than $b$ units from each exposure group have estimated propensity scores lying within some open interval of size $a$ covering $o$, then $o$ is included in the region of overlap. Thus, $a$ is an interval length, i.e., a portion of the range of the estimated propensity score, and $b$ is a portion of the sample size, representing the number of estimated propensity scores from each exposure group that must lie sufficiently close, i.e., within an interval of length $a$, to any given point in order for the point to be added to the RO. Framing this definition in a way that can be operationalized, it says that there is sufficient data support (overlap) at a point $o$ if, for each exposure group separately, we can form a set that includes (1) $o$ and (2) more than $b$ estimated propensity scores and lies entirely within an interval of size less than $a$, i.e., has range less than $a$. We now introduce notation that will be used in the definition. Let $N_e$ denote the number of units in exposure group $e$ and $\hat{\xi}^e_{(i)}$ denote the $i^{th}$ propensity score order statistic in exposure group $e$. Using this notation, we propose the following definition for the region of overlap: \[O=\left\lbrace o \in P: \text{for }e=0,1,\text{ range}(\left\lbrace o, \hat{\xi}^e_{(i)},...,\hat{\xi}^e_{(i+b)} \right\rbrace)<a \text{ for some } i=1,...,N_e-b \right\rbrace\] This formalizes the notion introduced above of finding a set of $o$ and more than $b$ propensity scores, $\left\lbrace o, \hat{\xi}^e_{(i)},...,\hat{\xi}^e_{(i+b)} \right\rbrace$, with range less than $a$ for each exposure group. 
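This definition can be transcribed almost literally into code. The sketch below (a minimal Python implementation; the function name and example scores are ours) checks membership of a point $o$ in $O$ by sliding a window of $b+1$ consecutive order statistics within each exposure group:

```python
import numpy as np

def in_overlap(o, ps_exposed, ps_unexposed, a, b):
    """Return True if the point o belongs to the region of overlap O.

    o is in O if, for each exposure group, the set formed by o together
    with some b+1 consecutive propensity score order statistics of that
    group has range less than a.
    """
    for ps in (ps_exposed, ps_unexposed):
        xs = np.sort(np.asarray(ps))
        covered = False
        for i in range(len(xs) - b):     # window xs[i], ..., xs[i+b]
            # range of the set {o, xs[i], ..., xs[i+b]}
            if max(xs[i + b], o) - min(xs[i], o) < a:
                covered = True
                break
        if not covered:
            return False
    return True

# toy example: unexposed scores become sparse above 0.4
exposed   = [0.20, 0.30, 0.40, 0.80, 0.85]
unexposed = [0.10, 0.25, 0.35]
print(in_overlap(0.30, exposed, unexposed, a=0.2, b=1))  # True
print(in_overlap(0.80, exposed, unexposed, a=0.2, b=1))  # False
```

$O^\perp$ is then simply the set of points in $P$ for which this check fails; each membership test is linear in the group sample size.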
In comparison with previous definitions, our overlap definition provides a flexible, transparent, and data-driven approach to identifying regions of poor data support, with relatively easy-to-understand tuning parameters that give users the ability to decide what constitutes ``sufficient'' data support in the context of their own methods and application. Another contribution of this overlap definition to the wider literature is that it allows for regions of non-overlap in the interior of the propensity score distribution. In Figure 2 in Section A.3 of the Appendix, we provide an illustration of the types of non-overlap that can be captured by this definition. We define the RO and RN for BART+SPL using this definition. However, we note that BART+SPL is designed to handle exclusively non-overlap in the tails of the propensity score distribution. In Section~\ref{ss:choosea}, we provide guidance on how to specify $a$ and $b$ when applying BART+SPL. \subsection{Tree Ensembles for Causal Inference} \label{ss:trees} Regression trees are a class of non-parametric machine learning procedures, primarily used for prediction, in which a model is constructed by recursively partitioning the covariate space. Tree ensemble methods like BART \citep{chipman_bart_2010} gained popularity as a more stable generalization of regression trees. Appealing features of tree ensembles include (1) their internal construction of the model, eliminating the need to pre-specify the functional form of the association between the response and predictors and (2) their ability to capture complex non-linear associations and high order interactions \citep{strobl2009introduction}. BART is highly regarded for its consistently strong performance under the ``default'' model specifications, reducing its dependence on subjective tuning and time consuming cross validation procedures. 
Letting $j$ index the $J$ trees in the ensemble ($j=1,...,J$), a BART is a sum of trees model of the form \begin{equation} Y=\sum_{j=1}^J g(\boldsymbol{X};\mathcal{T}_j,\mathcal{M}_j)+\epsilon, \end{equation} where $g$ is a function that sorts each unit into one of a set of $m_j$ terminal nodes, associated with mean parameters $\mathcal{M}_j=\{\mu_1,...,\mu_{m_j}\}$, based on a set of decision rules, $\mathcal{T}_j$. $\epsilon$ is a random error term that is typically assumed to be $N(0,\sigma^2)$ when the outcome is continuous. BART has also been extended to the binary outcome setting through the addition of the probit link function. BART's strong predictive performance has been reported in many contexts \citep{zhou2008extracting,he2009profiling,chipman_bart_2010,liu2010prediction,bonato2011bayesian,hill_bayesian_2011,huang2015predicting,liu2015ensemble,kindo2016multinomial,sparapani2016nonparametric}. BART was first introduced as a tool for causal inference by \citet{hill_bayesian_2011}, who suggested fitting a BART, including the estimated propensity score as a covariate, and using it to predict missing potential outcomes. Despite BART's highly accurate potential outcome prediction in regions of the data with strong support, its predictions have been shown to sometimes contain greater bias than those of parametric and classic causal inference approaches in the presence of non-overlap \citep{hill_bayesian_2011,hill2013assessing}. Because BART relies on binary cuts of the observed predictors, it is unable to extend trends beyond the support of the observed data and therefore extrapolates poorly. \subsection{BART+SPL} In this section, we describe BART+SPL, our proposed Bayesian approach for estimating causal effects in the presence of propensity score non-overlap. The first stage of the procedure, which we call the imputation stage, utilizes a BART to impute the missing potential outcomes and estimate individual causal effects in the RO. 
In the second stage, which we call the smoothing stage, a spline is fit to the BART-estimated individual causal effects in the RO and is invoked to extrapolate the causal effect trends to the individuals in the RN, leveraging the information from the observed potential outcomes for observations in the RN. Our approach for continuous outcomes is described in Sections~\ref{sss:imputation} and~\ref{sss:smoothing} in the context of a single iteration of a Bayesian MCMC sampler for the sake of clarity, and in Section~\ref{sss:estimation}, we explain how the draws from the sampler can be invoked to estimate causal effects and associated uncertainties. This model can be implemented through the MCMC procedure described in Section A.1 of the Appendix. In Section~\ref{ss:binaryout}, we introduce an extension to binary outcomes. \subsubsection{Imputation Stage} \label{sss:imputation} In the first stage of BART+SPL, we adopt the common practice of treating the unobserved potential outcome for each individual as missing data, and we construct a BART model to impute these missing values for individuals in the RO. We introduce the subscripts $q$ and $r$ to index data from subjects in the RO and RN, respectively, e.g., $Y_{q}^{obs}$ is the observed outcome of individual $q$ in the RO and $Y_{r}^{mis}$ is the missing potential outcome of individual $r$ in the RN $(q=1,...,Q;r=1,...,R)$. Subscript $O$ and subscript $O^\perp$ refer to vectors/matrices of the values of all individuals in $O$ and $O^\perp$, respectively, e.g., $\mathbf{Y}_{O}^{mis}=\left[Y_1^{mis},...,Y_Q^{mis}\right]'$ and $\mathbf{Y}_{O^\perp}^{mis}=\left[Y_1^{mis},...,Y_R^{mis}\right]'$. In this stage, all of our modeling efforts are focused on the data in the RO. $Y_{q}^{mis}$ is first imputed using a BART model of the form \[ Y_{q}^{obs}=\sum_{j=1}^J g(E_{q},\hat{\xi}_{q},\boldsymbol{X}_{q};\mathcal{T}_j,\mathcal{M}_j)+\epsilon_{q}, \] where $\epsilon_{q}\sim N(0,\sigma_B^2)$. 
To do so, the Bayesian backfitting algorithm of \citet{chipman_bart_2010} is utilized to collect a sample from the posterior distribution of $\theta=\{\sigma_B^2,\mathcal{T}_j,\mathcal{M}_j; j=1,...,J\}$, $p(\theta|\mathbf{Y}_{O}^{obs})$. An imputed value of $\mathbf{Y}_{O}^{mis}$, denoted $\tilde{\mathbf{Y}}_{O}^{mis}$, is obtained by sampling from its posterior predictive distribution (ppd), $p(\mathbf{Y}_{O}^{mis}|\mathbf{Y}_{O}^{obs})=\int p(\mathbf{Y}_{O}^{mis}|\mathbf{Y}_{O}^{obs},\theta)p(\theta|\mathbf{Y}_{O}^{obs})d\theta$. $\mathbf{Y}_{O}^{obs}$ and $\tilde{\mathbf{Y}}_{O}^{mis}$ are used to construct a sample of the individual causal effects in $O$, $\tilde{\mathbf{\Delta}}_{O}$. \subsubsection{Smoothing Stage} \label{sss:smoothing} In the second stage, a smoothing model is fit to the BART-estimated individual causal effects in the RO, and the model is employed to estimate the individual causal effects in the RN by extrapolating the trends identified in the RO. With this approach, we impose the assumption that any trends in the individual causal effects (as a function of the propensity score and/or the covariates) identified in the RO can be extended into the RN. By modeling the causal effect surface in this stage rather than the separate potential outcome surfaces, we take advantage of the potentially increased smoothness of the causal effects that may occur in practice. Through tuning, we ensure that the variance in the RN is inflated to reflect the high uncertainty in the region, leading to mildly conservative confidence regions in most realistic scenarios. Assume for now that the RN includes only exposed individuals so that $E_r=1$ for all $r$. Define $Y_{q}^*(1)=Y_{q}^{obs}$ if $E_{q}=1$ and $Y_{q}^*(1)=\tilde{Y}_{q}^{mis}$ otherwise (remember $\tilde{Y}_{q}^{mis}$ denotes the BART-imputed missing potential outcome for $q$). Thus, $Y_{q}^*(1)$ is the observed or imputed potential outcome corresponding to $E=1$ for each unit in the RO. 
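The bookkeeping that turns one posterior predictive draw $\tilde{\mathbf{Y}}_{O}^{mis}$ into a draw of the individual causal effects, and that forms $Y_{q}^*(1)$, reduces to a pair of elementwise operations. A minimal sketch (function name and numbers are ours):

```python
import numpy as np

def combine_draw(y_obs, y_mis_draw, e):
    """One ppd draw of the missing potential outcomes -> one draw of the
    individual causal effects and of Y*(1) for units in the RO.

    Delta = Y(1) - Y(0) = (2E - 1) * (Y_obs - Y_mis): exposed units
    contribute Y_obs - Y_mis, unexposed units contribute Y_mis - Y_obs.
    """
    delta = (2 * e - 1) * (y_obs - y_mis_draw)
    y1_star = np.where(e == 1, y_obs, y_mis_draw)  # observed or imputed Y(1)
    return delta, y1_star

# toy draw: one exposed and one unexposed unit
delta, y1_star = combine_draw(np.array([3.0, 2.0]),
                              np.array([1.5, 2.5]),
                              np.array([1, 0]))
# delta = [1.5, 0.5], y1_star = [3.0, 2.5]
```

Repeating this for each MCMC draw propagates the imputation uncertainty in $\tilde{\mathbf{Y}}_{O}^{mis}$ directly into $\tilde{\mathbf{\Delta}}_{O}$.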
Let $rcs(z)$ denote a restricted cubic spline basis for $z$. Employing the imputed values obtained in the previous stage, we construct the following smoothing stage model: \[ \tilde{\Delta}_{q}=\boldsymbol{W' \beta}+\epsilon_{q}, \quad \boldsymbol{W}=\left[\begin{array}{c} rcs(\hat{\xi}_{q})\\ rcs(Y_{q}^*(1))\\ \boldsymbol{X}_{q} \end{array}\right], \] where $\epsilon_{q}\sim N(0,\sigma_S^2+I(\hat{\xi}_{q}\in O^\perp)\tau_{q})$. $\sigma_S^2$ is the residual variance for all units in the RO and $\tau_{q}$ is an added variance component only applied to units in the RN. The purpose of $\tau_{q}$ is to inflate the variance of units with an estimated propensity score in the RN, to adequately reflect the higher uncertainty in regions of little data support. In this model, $\tau_{q}$ is clearly unidentifiable, as the model is fit using exclusively data in the RO, and it will only come into play when invoking the ppd to predict in the RN. Here, we choose to treat it as a tuning parameter, and below we describe our recommended tuning parameter specification. We recommend restricted cubic splines in the smoothing model, because they generally demonstrated superior performance when applied to simulated data; however, other spline choices may provide improved performance under some conditions. We also recommend excluding a small portion of the data at the tails of the RO (i.e., tails of the propensity score in the RO) from spline model fitting, either through the use of boundary knots or by omitting these data from the model, because BART's predictions in the tails of variables can be unstable and can negatively influence the SPL's performance. We collect a posterior sample of the spline parameters, $\psi=\{\boldsymbol{\beta},\sigma_S^2\}$. Recall the motivation for the smoothing stage is to use the trends from the RO to predict the individual causal effects in the RN. 
Thus, the sampled $\psi$ and the covariate values for individuals in the RN are used to obtain a sample from the ppd of $\mathbf{\Delta}_{O^\perp}$, $p(\mathbf{\Delta}_{O^\perp}|\mathbf{\Delta}_{O})=\int p(\mathbf{\Delta}_{O^\perp}|\mathbf{\Delta}_{O},\psi)p(\psi|\mathbf{\Delta}_{O})d\psi$. The sample is denoted $\tilde{\mathbf{\Delta}}_{O^\perp}$. Note that including $Y_{q}^*(1)$ as a predictor permits the model to capture the relationship between the causal effects and $Y(1)$, the potential outcome observed for all units in the RN. This allows the observed potential outcomes in the RN to aid in the extrapolation, so that this information is not wasted. In the case that both exposed and unexposed units fall in the RN, we define $Y_{q}^*(0)$ analogously to $Y_{q}^*(1)$ and construct a second model, identical to the one above except replacing $rcs(Y_{q}^*(1))$ with $rcs(Y_{q}^*(0))$. The ppd from the first model is then used to predict individual causal effects for exposed units in the RN and the ppd from the second model is used for unexposed units. Our recommended specification of $\tau_{q}$ is motivated by the aim to have (1) the variance of individual causal effects increase monotonically as the observation's distance from the RO (i.e., region of strong data support) increases and (2) the increase in variance be in proportion to the scale of the data. Thus, the suggested tuning parameter specification is $\tau_{q}=(10d_{q})t_O$, where $t_O=\text{range}(\tilde{\mathbf{\Delta}}_{O})$ and $d_{q}$ is the distance from the observation's propensity score to the nearest propensity score in the RO. The effect of this tuning parameter is that, for every 0.1 unit further we go into the RN, the variance of the individual causal effects increases by the range of the causal effects in the RO. 
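The recommended specification $\tau_{q}=(10d_{q})t_O$ is straightforward to compute; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def tau_inflation(ps_rn, ps_ro, delta_ro_draw):
    """Variance inflation tau_q = (10 * d_q) * t_O for units in the RN.

    d_q: distance from an RN unit's propensity score to the nearest
         propensity score in the RO.
    t_O: range of the BART-estimated causal effects in the RO.
    """
    t_O = np.ptp(delta_ro_draw)                              # range(Delta_O)
    d = np.min(np.abs(np.subtract.outer(ps_rn, ps_ro)), axis=1)
    return 10.0 * d * t_O

tau = tau_inflation(np.array([0.80, 0.90]),              # RN propensity scores
                    np.array([0.10, 0.30, 0.50, 0.70]),  # RO propensity scores
                    np.array([1.0, 3.0, 2.0]))           # effect draws in RO
# tau = [2.0, 4.0]: each 0.1 units into the RN adds one full range t_O = 2.0
```

The example confirms the interpretation in the text: a unit 0.1 beyond the RO boundary receives one range of extra variance, a unit 0.2 beyond receives two.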
While it could lead to somewhat conservative credible interval coverage in ``simple'' situations (e.g., when the trends in the RN are easily predicted from the trends in the RO), we have found in simulations that this choice of tuning parameter consistently provides both reasonably-sized credible intervals and acceptable coverage. \subsubsection{Estimation and Uncertainty Quantification} \label{sss:estimation} We can iterate the two stages described above $M$ times to obtain $\left\lbrace \mathbf{\Delta}^{(1)}_{O},...,\mathbf{\Delta}^{(M)}_{O} \right\rbrace$ from the imputation stage and $\left\lbrace \mathbf{\Delta}^{(1)}_{O^\perp},...,\mathbf{\Delta}^{(M)}_{O^\perp}\right\rbrace$ from the smoothing stage (note that we have traded the tilde notation from above for the $(m)$ notation to differentiate the samples from the $M$ iterations). By iterating between the two stages, we are able to account for the uncertainty in the estimation of $\mathbf{\Delta}_O$ from the first stage and pass it on to the second stage, where $\mathbf{\Delta}_O$ is used as the outcome. Thus, the uncertainty in the estimate of $\mathbf{\Delta}_{O^\perp}$ reflects the uncertainty both from stage one and stage two. For units in the RO and the RN, individual causal effects are estimated as $\hat{\Delta}_{q}=\frac{1}{M}\sum_{m=1}^M \Delta_{q}^{(m)}$ and $\hat{\Delta}_{r}=\frac{1}{M}\sum_{m=1}^M \Delta_{r}^{(m)}$, respectively, i.e., the posterior mean over the $M$ samples. Credible intervals for the individual causal effects can be obtained by extracting the appropriate percentiles from these $M$ samples. Samples of $\Delta_S$ are produced by $\Delta^{(m)}_S=\frac{1}{N}(\sum_{q=1}^Q \Delta^{(m)}_{q} + \sum_{r=1}^R \Delta^{(m)}_{r})$ for $m=1,...,M$, and $\hat{\Delta}_S=\frac{1}{M} \sum_{m=1}^M \Delta^{(m)}_S$. As above, percentiles of the $M$ samples provide a credible interval for $\Delta_S$. In order to estimate $\Delta_P$, an additional integration over the predictors is required. 
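These summaries are simple functions of the $M$ retained draws; one concrete way to carry out the integration over the predictors for $\Delta_P$ is a Dirichlet$(1,\dots,1)$-weighted average, i.e., the Bayesian bootstrap. A minimal sketch (all sizes and draws below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M, Q, R = 200, 50, 10                    # iterations, RO units, RN units

# invented M x N matrix of individual causal effect draws (RO and RN stacked)
draws = rng.normal(loc=1.0, scale=0.5, size=(M, Q + R))

# sample average causal effect: one draw per iteration, then mean/percentiles
delta_S_draws = draws.mean(axis=1)                   # Delta_S^(m)
delta_S_hat = delta_S_draws.mean()                   # posterior mean
ci_S = np.percentile(delta_S_draws, [2.5, 97.5])     # 95% credible interval

# population average causal effect: Bayesian bootstrap within each iteration
B = 100
delta_P_draws = np.empty(M)
for m in range(M):
    w = rng.dirichlet(np.ones(Q + R), size=B)        # B Dirichlet weight vectors
    boots = w @ draws[m]                             # B weighted means
    delta_P_draws[m] = rng.choice(boots)             # retain one at random
delta_P_hat = delta_P_draws.mean()
```

The Dirichlet weights sum to one, so each bootstrap replicate is a reweighted sample mean that reflects uncertainty in the covariate distribution itself.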
\citet{wang2015accounting} discuss the necessity of such an integration step when estimating population average causal effects with models that permit non-linearity and/or heterogeneity, and they propose the application of the Bayesian bootstrap to execute it. We adopt the same approach here. For each sample of the individual causal effects, $\left\lbrace \mathbf{\Delta}^{(m)}_{O}, \mathbf{\Delta}^{(m)}_{O^\perp} \right\rbrace$, the Bayesian bootstrap is performed $B$ times (where $B$ is a large constant) and the average of each bootstrap sample is taken to obtain $B$ draws from the posterior distribution of the population average causal effect. We randomly select one of these samples and call it $\Delta^{(m)}_P$, so that in the end we have collected $\left\lbrace \Delta^{(1)}_{P},...,\Delta^{(M)}_{P} \right\rbrace$. The population average causal effect is estimated as $\hat{\Delta}_P=\frac{1}{M} \sum_{m=1}^M \Delta^{(m)}_P$ and the credible interval formed using percentiles. \subsection{BART+SPL with Binary Outcomes} \label{ss:binaryout} By invoking BART probit \citep{chipman_bart_2010} in the imputation stage and utilizing a simple arcsine transformation in the smoothing stage, we can straightforwardly extend BART+SPL to the binary outcomes setting. While most of our notation will remain the same for binary outcomes, we note a few changes. Individual causal effects are traditionally defined as the difference in each individual's potential outcomes, as above; however, in the binary outcomes setting, estimating those differences, which can only take on values -1, 0 and 1, may be challenging and may sacrifice information. Because, in our approach, we are treating the potential outcomes as random variables, it is reasonable and desirable to instead define the individual causal effects as differences in some features of the distributions of the potential outcomes, although doing so requires a slight abuse of traditional terminology/notation. 
For binary outcomes, we define the individual causal effects as $\Delta_i=P(Y_i(1)=1)-P(Y_i(0)=1)$ and the estimands as $\Delta_S=\frac{1}{N} \sum_{i=1}^N \Delta_i$, $\Delta_{P|\boldsymbol{x}}=P(Y(1)=1|\boldsymbol{x})-P(Y(0)=1|\boldsymbol{x})$, and $\Delta_P=E_{\boldsymbol{X}}[P(Y(1)=1|\boldsymbol{X})-P(Y(0)=1|\boldsymbol{X})]$. Here we fit a BART probit to estimate individual causal effects in $O$, fit a spline model to the arcsine transform of these estimates (which are bounded between -1 and 1), and use the spline to estimate individual causal effects for units in $O^\perp$. While we provide below explicit forms for the imputation and smoothing models in the binary setting, we refer the reader back to the previous section for the full sampling procedure details, which follow analogously to the continuous outcomes setting. In the imputation stage, the BART probit model fit to the RO data has the following form: \[ P(Y_{q}^{obs}=1)=\Phi(\sum_{j=1}^J g(E_{q},\hat{\xi}_{q},\boldsymbol{X}_{q};\mathcal{T}_j,\mathcal{M}_j)), \] where $\Phi(\cdot)$ is the standard Normal cumulative distribution function. With this model, posterior samples $\tilde{P}(Y_{q}^{obs}=1)$ and $\tilde{P}(Y_{q}^{mis}=1)$ can be drawn and used to form a posterior sample of the individual causal effect, $\tilde{\Delta}_{q}$. For the smoothing stage, as above, assume without loss of generality that all units in $O^\perp$ have $E_r=1$. Define $Y_{q}^*(1)=Y_{q}^{obs}$ if $E_{q}=1$ and $Y_{q}^*(1)=I(\tilde{P}(Y_{q}^{mis}=1)>0.5)$ otherwise. Then the smoothing model is \[ \text{arcsine}(\tilde{\Delta}_{q})=\boldsymbol{W' \beta}+\epsilon_{q}, \quad \boldsymbol{W}=\left[\begin{array}{c} rcs(\hat{\xi}_{q})\\ Y_{q}^*(1)\\ \boldsymbol{X}_{q} \end{array}\right], \] where $\epsilon_{q}\sim N(0,\sigma_S^2)$. Note that, unlike in the continuous case, no tuning parameter is included in the variance, as simulations indicated it was not needed to obtain reasonable coverage in the binary setting. 
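Since the binary-outcome causal effects are bounded in $[-1,1]$, the arcsine transform maps them to $[-\pi/2,\pi/2]$ for spline fitting, and predictions are mapped back with $\sin$, which guarantees the back-transformed effects stay inside $[-1,1]$. A minimal sketch (all values invented):

```python
import numpy as np

# hypothetical BART-probit causal effect estimates in the RO
delta_ro = np.array([-0.2, 0.1, 0.4])

# the smoothing model is fit on the arcsine scale ...
z = np.arcsin(delta_ro)

# ... and a (hypothetical) spline prediction in the RN is mapped back
# with sin, so the back-transformed effect always lies in [-1, 1]
z_pred = 1.2
delta_pred = np.sin(z_pred)      # about 0.932
```

Even an extrapolated spline prediction that wanders on the transformed scale produces a valid difference of probabilities after back-transformation.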
Individual causal effects on the arcsine scale for units in $O^\perp$ can be obtained from the posterior predictive distribution and back-transformed to the desired scale. As described in the previous section, a second analogous smoothing model can be fit if $O^\perp$ contains units from both the exposed and unexposed groups. Average causal effect estimation and uncertainty quantification proceed identically to the continuous case. \section{Simulations} \label{s:sims} In this section, we conduct simulation studies to evaluate the performance of BART+SPL relative to existing methods in the presence of non-overlap and to provide guidance on how to specify parameters $a$ and $b$ in the non-overlap definition to obtain optimal performance. In Section~\ref{ss:performance}, we simulate data with a small number of confounders and varying degrees of non-overlap, and we compare BART+SPL's population average causal effect estimation performance to that of a standard BART and of an existing spline-based method for causal inference. In Section~\ref{ss:hdperformance}, we generate data with non-overlap and high dimensional covariates and compare the performance of BART+SPL and the spline-based method. Finally, data with varying amounts of non-overlap are generated and BART+SPL is implemented with various specifications of $a$ and $b$ to provide insight on the optimal choices in Section~\ref{ss:choosea}. R code \citep{R2016} to implement BART+SPL and to reproduce all simulations is available on Github at \url{https://github.com/rachelnethery/overlap}. \subsection{Performance of BART+SPL Relative to Existing Methods} \label{ss:performance} We purposely simulate data under a challenging situation of: a) propensity score non-overlap; b) non-linearity of the potential outcomes in the propensity score; and c) heterogeneous causal effects. 
We wish to evaluate the relative performance of our method when utilizing a true propensity score and when utilizing a misspecified propensity score estimate. We first discuss the simulation structure when utilizing the true propensity score. We let $N=500$ and assign half of the subjects to $E=1$. We generate two confounders that are highly associated with the exposure $(E)$, one binary $(X_1: X_1|E=1\sim Bernoulli(.5), X_1|E=0\sim Bernoulli(.4))$ and one continuous $(X_2: X_2|E=1\sim N(2+c,\sqrt{1.25+0.1c}), X_2|E=0\sim N(1,1))$. Given these specifications, the true propensity scores can easily be calculated using Bayes' rule. The potential outcomes are constructed as $Y_i(1)=-3(1+\exp(-10(X_{2i}-1)))^{-1}+0.25X_{1i}-X_{1i}X_{2i}$ and $Y_i(0)=-1.5X_{2i}$. We label the simulations with the true propensity score as 3.1A. For the simulations with a misspecified propensity score estimate, we again let $N=500$ and assign half of the subjects to $E=1$. We generate a binary confounder $(X_1: X_1|E=1\sim Bernoulli(.5), X_1|E=0\sim Bernoulli(.4))$ and a continuous confounder $(X_2: X_2|E=1\sim N(2+c,4), X_2|E=0\sim N(1,1))$. The potential outcomes are $Y_i(1)=3(1+\exp(-10(X_{2i}-1)))^{-1}+0.25X_{1i}-0.1X_{1i}X_{2i}+0.5$ and $Y_i(0)=0.2X_{2i}+0.1X_{2i}^2+1$. As is common in the literature, we use a simple logistic regression model of the form $\text{logit}(P(E_i=1))=\beta_0+\beta_1X_{1i}+\beta_2X_{2i}$ to estimate the propensity scores. This model is clearly misspecified, because, for example, the true relationship between $E$ and $X_2$ is not linear. In practice, when the form of the propensity score model is unknown, we encourage the use of flexible models for estimation \citep{westreich2010propensity}, such as BART, neural networks, or support vector machines, in order to reduce the chance of propensity score model misspecification. The use of BART for propensity score estimation is demonstrated in the application to real data in Section~\ref{s:app}. 
Various flexible propensity score estimation methods could be tested and the method that achieves the best covariate balance selected. We label the simulations with the misspecified propensity score as 3.1B. We have selected these simulation structures so that, for a single value of $c$, the type and degree of propensity score non-overlap in 3.1A and 3.1B should be similar. Both 3.1A and 3.1B produce data sets with lack of overlap in the right tail of the propensity score distribution (i.e., individuals from the unexposed group are unobserved or very sparse), and with varying degrees of non-overlap controlled by $c$. Our simulations are designed to produce non-overlap in the right tail of the propensity score distribution and our motivation is to demonstrate how our method performs in the presence of different features in this RN. Thus, simulated datasets are utilized in the results below only if any intervals of non-overlap outside the right tail contain 10 observations or fewer (cumulatively), and, in these datasets, the intervals of non-overlap outside the right tail are ignored (i.e., treated as part of the RO). In this way, we ensure that the results solely reflect how the tested methods respond to the features of the intended RN. We consider three separate simulated scenarios, i.e. three different specifications of $c$, within 3.1A and 3.1B. We let $c=0$ (simulations 3.1A-i and 3.1B-i), $c=0.35$ (simulations 3.1A-ii and 3.1B-ii), and $c=0.7$ (simulations 3.1A-iii and 3.1B-iii). Example datasets from each are illustrated in Figure 3 in Section A.3 of the Appendix. With $c=0$, the RN is quite small, and the trend in the individual causal effects in the RN is mildly non-linear. With $c=0.35$, the RN is somewhat larger and the trends exhibited by the individual causal effects in the RN are moderately non-linear. With $c=0.7$, a substantial portion of the sample lies in the RN and the causal effects in the RN are highly non-linear. 
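For simulation 3.1A, the true propensity score follows directly from Bayes' rule, since half the sample is exposed ($P(E=1)=1/2$ cancels) and the confounder distributions are fully specified. A minimal sketch (function name is ours):

```python
import math

def true_propensity_31A(x1, x2, c=0.0):
    """True propensity score for simulation 3.1A via Bayes' rule."""
    def norm_pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    # P(X1 = x1 | E) * f(X2 = x2 | E); the 1/2 prior on E cancels
    f1 = 0.5 * norm_pdf(x2, 2 + c, math.sqrt(1.25 + 0.1 * c))  # P(X1=x1|E=1)=0.5 either way
    f0 = (0.4 if x1 == 1 else 0.6) * norm_pdf(x2, 1.0, 1.0)
    return f1 / (f1 + f0)

# units with large X2 are almost surely exposed: non-overlap in the right tail
print(round(true_propensity_31A(1, 4.0), 3))   # 0.953
print(round(true_propensity_31A(0, 1.0), 3))   # 0.333
```

Evaluating the function over a grid of $X_2$ values makes the right-tail non-overlap visible: unexposed units become vanishingly rare as the score approaches one.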
We use our definition of overlap with $a=.1$ and $b=7$ to define the RO and RN for each simulated dataset. We implement BART+SPL on 1,000 simulated datasets under each condition. \citet{gutman2015estimation} recommended a spline-based multiple imputation approach for estimating average causal effects. They also suggested trimming in conjunction with their method for samples suffering from non-overlap. We compared the performance of BART+SPL versus Gutman and Rubin's method both with and without trimming (T-GR and U-GR, respectively) and also BART with and without trimming (T-BART and U-BART, respectively). Detailed results of the untrimmed analyses appear in Table~\ref{tab:results1}, and the distributions of the average causal effect estimates from the trimmed and untrimmed analyses can be compared in Figure 4 in Section A.3 of the Appendix. The simulation results demonstrate the dominant performance of BART+SPL compared to U-BART and U-GR under a wide range of challenging conditions. However, in extreme scenarios with unpredictable trends and large portions of the sample in the RN, even the performance of BART+SPL may deteriorate, as demonstrated by simulation 3.1A-iii, where BART+SPL gives high percent bias, and simulation 3.1B-iii, where BART+SPL gives high percent bias and poor coverage. Nonetheless, BART+SPL's performance still exceeds that of its competitors. For both BART and GR, the trimmed estimates, which are no longer estimators of the population-level causal effects, are further from the true population average causal effects than the untrimmed estimates. 
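The replicate-level operating characteristics reported throughout this section (absolute bias, absolute percent bias, 95\% credible interval coverage, and MSE) can be computed with a small helper along these lines. This is a sketch using one common convention; the paper's exact definitions may differ slightly:

```python
import numpy as np

def summarize_replicates(estimates, lower, upper, truths):
    """Operating characteristics across simulated datasets.

    estimates, lower, upper: point estimate and 95% credible interval bounds
    for the average causal effect in each replicate; truths: the corresponding
    true effects.
    """
    estimates, lower, upper, truths = map(np.asarray, (estimates, lower, upper, truths))
    err = estimates - truths
    return {
        "abs_bias": abs(err.mean()),
        "abs_bias_pct": 100.0 * abs(err.mean()) / abs(truths.mean()),
        "coverage": np.mean((lower <= truths) & (truths <= upper)),
        "mse": np.mean(err ** 2),
    }
```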
\begin{table}[ht] \centering \caption{Absolute (Abs) bias, 95\% credible interval coverage, and mean square error (MSE) in estimation of the population average causal effects in simulations from Section 3.1.} \begin{tabular}{crrrrr} \hline Simulation Setting & Method & Abs Bias & Abs Bias (\%) & Coverage & MSE \\ \hline \multirow{3}{*}{3.1A-i} & U-GR & 0.12 & 46.46 & 0.33 & 1.25 \\ & U-BART & 0.03 & 10.72 & 0.99 & 0.07 \\ & BART+SPL & 0.01 & 5.63 & 1.00 & 0.05 \\ \hline \multirow{3}{*}{3.1A-ii} & U-GR & 0.17 & 97.31 & 0.23 & 1.69 \\ & U-BART & 0.05 & 31.95 & 0.89 & 0.12 \\ & BART+SPL & 0.02 & 12.58 & 1.00 & 0.09 \\ \hline \multirow{3}{*}{3.1A-iii} & U-GR & 0.23 & 724.78 & 0.19 & 2.34 \\ & U-BART & 0.08 & 427.55 & 0.71 & 0.22 \\ & BART+SPL & 0.03 & 100.80 & 1.00 & 0.14 \\ \hline \multirow{3}{*}{3.1B-i} & U-GR & 0.26 & 50.42 & 0.00 & 0.64 \\ & U-BART & 0.13 & 25.47 & 0.12 & 0.33 \\ & BART+SPL & 0.11 & 21.64 & 0.62 & 0.27 \\ \hline \multirow{3}{*}{3.1B-ii} & U-GR & 0.32 & 66.58 & 0.00 & 0.73 \\ & U-BART & 0.18 & 36.73 & 0.04 & 0.48 \\ & BART+SPL & 0.15 & 30.95 & 0.55 & 0.36 \\ \hline \multirow{3}{*}{3.1B-iii} & U-GR & 0.41 & 94.79 & 0.00 & 0.89 \\ & U-BART & 0.24 & 55.00 & 0.01 & 0.68 \\ & BART+SPL & 0.21 & 48.03 & 0.35 & 0.49 \\ \hline \end{tabular} \label{tab:results1} \end{table} We also conducted a simulation study to evaluate the performance of BART+SPL for binary outcomes. The simulated data structures utilized were similar to those described above. The data and the results are described in Section A.2 of the Appendix. BART+SPL performed similarly to or better than the competing methods (U-GR and U-BART) in each of our simulations with $N=500$. However, even without non-overlap, BART probit can fail to provide improvements over parametric methods when sample sizes are small to moderate, and thus we recommend that BART+SPL for binary outcomes only be applied to large datasets (i.e., $N \geq 500$). 
\subsection{BART+SPL with High Dimensional Covariates} \label{ss:hdperformance} One of the most widely recognized limitations of BART, which was noted by its developers \citep{chipman_bart_2010}, is its poor performance when the number of predictors, $p$, is large. The decline in performance is most significant when many irrelevant predictors (i.e., predictors unrelated to the outcome) are included. Thus, in this section, we seek to examine whether and how BART+SPL should be applied in settings where the number of potential confounders is large. Although BART has been extended to permit sparsity in the $p>N$ setting \citep{linero_bayesian_2016}, we do not consider the $p>N$ case here. For these simulations, we let $N=500$ and assign half of the sample to $E=1$. We then generate 10 confounders, 5 binary and 5 continuous. The binary confounders have distribution $X_1|E=1,...,X_5|E=1\sim Bernoulli(.45)$, $X_1|E=0,...,X_5|E=0\sim Bernoulli(.4)$, and the continuous confounders have distribution $X_6|E=1,...,X_{10}|E=1\sim N(2,4)$, $X_6|E=0,...,X_{10}|E=0\sim N(1.3,1)$. We consider the following three scenarios: only these 10 confounders are present (simulation 3.2A), these 10 confounders as well as 25 randomly generated ``potential confounders'' are present (simulation 3.2B), and these 10 confounders as well as 50 randomly generated ``potential confounders'' are present (simulation 3.2C). Of course, in real applications, we often do not know a priori which of the potential confounders are true confounders; hence, we include them all in the modeling. A propensity score is formed using predicted probabilities from the logistic regression $\text{logit}(P(E_i=1))=\beta_0+\boldsymbol{Z}_i\boldsymbol{\beta}$, where $\boldsymbol{Z}_i$ is a vector of the true and potential confounders. 
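The covariate construction above can be sketched as follows. Two details are our assumptions, since the text does not pin them down: the second Normal argument is treated as a standard deviation (matching the notation in simulation 3.1A), and the irrelevant ``potential confounders'' are given standard normal distributions:

```python
import numpy as np

def simulate_32_covariates(n=500, n_irrelevant=0, seed=0):
    """Covariates for simulations 3.2A/B/C (n_irrelevant = 0, 25, or 50)."""
    rng = np.random.default_rng(seed)
    e = np.repeat([1, 0], n // 2)
    exposed = e == 1
    x = np.empty((n, 10))
    # 5 binary confounders: Bernoulli(.45) if exposed, Bernoulli(.40) otherwise.
    x[exposed, :5] = rng.binomial(1, 0.45, (exposed.sum(), 5))
    x[~exposed, :5] = rng.binomial(1, 0.40, ((~exposed).sum(), 5))
    # 5 continuous confounders: N(2, 4) vs N(1.3, 1), second argument read as sd.
    x[exposed, 5:] = rng.normal(2.0, 4.0, (exposed.sum(), 5))
    x[~exposed, 5:] = rng.normal(1.3, 1.0, ((~exposed).sum(), 5))
    if n_irrelevant > 0:
        # Irrelevant 'potential confounders': standard normal (our choice).
        x = np.hstack([x, rng.normal(size=(n, n_irrelevant))])
    return e, x
```

The propensity score would then be estimated by a logistic regression of `e` on all columns of `x`, true and irrelevant confounders alike.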
The potential outcomes are generated so that they exhibit non-linear trends in the estimated propensity score-- $Y_i(0)=.5(X_{1i}+X_{2i}+X_{3i}+X_{4i}+X_{5i})+15(1+\exp(-8X_{6i}+1))^{-1}+X_{7i}+X_{8i}+X_{9i}+X_{10i}-5$ and $Y_i(1)=X_{1i}+X_{2i}+X_{3i}+X_{4i}+X_{5i}-.5(X_{6i}+X_{7i}+X_{8i}+X_{9i}+X_{10i})$. The features of these data are illustrated in Figure 6 in Section A.3 of the Appendix. The simulations are designed to have a large RN in the right tail of the estimated propensity score, with moderate non-linearity in the causal effect in the RN. The RO and RN are defined using tuning parameters $a=.1$ and $b=7$. We simulate 1,000 datasets from each of the three scenarios described above. We apply both BART+SPL and the untrimmed Gutman and Rubin spline method (GR) to each dataset. Results are provided in Table~\ref{tab:hd}. \begin{table}[ht] \centering \caption{Absolute (Abs) bias, 95\% credible interval coverage, and mean square error (MSE) in estimation of the population average causal effects in simulations from Section 3.2.} \begin{tabular}{crrrrr} \hline Simulation Setting & Method & Abs Bias & Abs Bias (\%) & Coverage & MSE \\ \hline \multirow{2}{*}{3.2A}& GR & 0.9801 & 5.6382 & 0.0320 & 13.1447 \\ & BART+SPL & 0.5608 & 3.2198 & 0.9850 & 8.9744 \\ \hline \multirow{2}{*}{3.2B}& GR & 0.5627 & 3.2363 & 0.4000 & 14.3195 \\ & BART+SPL & 0.5574 & 3.2012 & 0.9740 & 8.4942 \\ \hline \multirow{2}{*}{3.2C}& GR & 0.3640 & 2.0958 & 0.7280 & 16.0908 \\ & BART+SPL & 0.6368 & 3.6591 & 0.9570 & 9.6857 \\ \hline \end{tabular} \label{tab:hd} \end{table} These results reflect BART's struggle in the presence of irrelevant predictors. When only the 10 true confounders are included in the modeling, BART+SPL outperforms GR and demonstrates similar performance as in Section~\ref{ss:performance}. However, when irrelevant predictors are introduced, GR's bias decreases while BART+SPL's remains constant or increases. 
With 50 irrelevant predictors, GR's bias is substantially lower than BART+SPL's (although, notably, its coverage and MSE remain inferior). These results should serve as a warning that BART+SPL is only likely to improve on existing methods in settings where the set of true confounders can be posited a priori with some confidence. \subsection{Guidelines for Defining the RN} \label{ss:choosea} The simulation results in this section are intended to provide guidance on both the degree of non-overlap that threatens BART's performance and the degree of non-overlap that threatens BART+SPL's performance. They also suggest appropriate default specifications of tuning parameters $a$ and $b$ in the non-overlap definition. To impose strict control on the size of the RO and RN, in these simulations we utilize a single confounder rather than a propensity score. Based on the above simulations, we expect the performance of BART+SPL to be comparable with varying numbers of confounders, as long as few irrelevant covariates are included. Thus, the insights gained from these simulations are likely to scale well to scenarios with larger confounder sets. We let $N=500$ and assign half of the sample to $E=1$. We generate the confounder as $X|E=1\sim N(2.5,4),\text{ } X|E=0\sim N(v,w)$, where $v$ and $w$ control the degree of non-overlap. Unlike in the previous simulations, in these we generate a single, fixed instance of the confounder and simply add random noise to (a function of) it to create the potential outcomes for each simulation. We consider two potential outcome scenarios, one of which produces data that are relatively simple to model (with BART) while the other produces data that are challenging to model. 
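Before turning to those two outcome scenarios, the role of $v$ and $w$ in creating right-tail non-overlap can be previewed with a crude proxy of our own: the share of exposed draws lying beyond the largest unexposed draw. This is only an illustration, not the paper's $(a,b)$-based overlap definition, and reading the second Normal argument as a variance here is an assumption:

```python
import numpy as np

def right_tail_rn_fraction(v, w, n=500, seed=0):
    """Share of exposed units beyond the largest unexposed confounder value.

    X | E=1 ~ N(2.5, var=4); X | E=0 ~ N(v, var=w). A crude stand-in for the
    paper's (a, b)-based overlap definition, for illustration only.
    """
    rng = np.random.default_rng(seed)
    x_exposed = rng.normal(2.5, 2.0, n // 2)
    x_unexposed = rng.normal(v, np.sqrt(w), n // 2)
    return float(np.mean(x_exposed > x_unexposed.max()))

# Least to most non-overlap, matching the three (v, w) settings above:
fractions = [right_tail_rn_fraction(v, w)
             for v, w in [(1.4, 1.96), (0.75, 1.44), (0.0, 1.0)]]
```

Shrinking $v$ and $w$ pulls the unexposed distribution left and tightens it, so the exposed right tail is left increasingly unsupported. We now return to the two potential outcome scenarios.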
The former, which we label simulation 3.3A, is created by assigning $Y(0)=1.5+\frac{X+(X^2/2!)}{20}+\text{N}(0,0.06)$ and $Y(1)=\frac{1}{1+e^{-(X-1)}}+\text{N}(0,0.06)$ and the latter, which we call simulation 3.3B, by $Y(0)=1.5+\frac{X+(X^2/2!)+(X^3/3!)}{20}+\text{N}(0,0.06)$ and $Y(1)=\frac{1}{1+e^{-(X-1)}}+\text{N}(0,0.06)$. In both simulation 3.3A and 3.3B, we achieve different degrees of non-overlap, primarily non-overlap in the right tail of the confounder, by manipulating $v$ and $w$. In order from least to most non-overlap, we consider $\left\lbrace v=1.4,w=1.96\right\rbrace$, $\left\lbrace v=0.75,w=1.44\right\rbrace$, and $\left\lbrace v=0,w=1\right\rbrace$. Moreover, in each scenario, we test the following three specifications of $\left\lbrace a,b\right\rbrace$ in the overlap definition, in order from most to least conservative: $\left\lbrace a=0.05*(range(X)),b=10 \right\rbrace$, $\left\lbrace a=0.1*(range(X)),b=10 \right\rbrace$, and $\left\lbrace a=0.15*(range(X)),b=3 \right\rbrace$. Considering each combination of $\left\lbrace v,w\right\rbrace$ and $\left\lbrace a,b\right\rbrace$ leads to 9 different settings for each of simulation 3.3A and 3.3B, for a total of 18 simulation settings. The proportion of the sample falling into the RN, denoted $\pi$, ranges from $\pi=2$\% to $\pi=34$\% in these simulations. Of course, the impact of non-overlap on average causal effect estimates depends not only on the proportion of the sample falling in the RN but also likely on the extremity of the observations in the RN relative to the RO. In our simulations, as $\pi$ increases, the average distance between observations in the RO and the RN also increases. An example dataset from both simulation 3.3A and simulation 3.3B is presented in Figure 7 in Section A.3 of the Appendix. \begin{table}[h!] 
\centering \caption{Absolute (Abs) bias, 95\% credible interval coverage, and mean square error (MSE) in estimation of the population average causal effects using BART+SPL and BART applied to simulation 3.3A. RO-1 refers to simulations with the RO defined as $a=0.05*(range(X)),b=10$, RO-2 refers to simulations with the RO defined as $a=0.1*(range(X)),b=10$, and RO-3 refers to simulations with the RO defined as $a=0.15*(range(X)),b=3$.} \begin{tabular}{rrrrrrr} \hline $v,w$ & Method & $\pi$ & Abs Bias & Abs Bias (\%) & Coverage & MSE \\ \hline \multirow{4}{*}{$v=1.4,w=1.96$} & BART+SPL, RO-1 & 17 & 0.03 & 2.51 & 1.00 & 0.08 \\ & BART+SPL, RO-2 & 8 & 0.03 & 2.23 & 1.00 & 0.07 \\ & BART+SPL, RO-3 & 2 & 0.03 & 2.43 & 1.00 & 0.08 \\ & BART & & 0.03 & 2.98 & 0.81 & 0.09 \\ \hline \multirow{4}{*}{$v=0.75,w=1.44$} & BART+SPL, RO-1 & 24 & 0.05 & 4.14 & 1.00 & 0.09 \\ & BART+SPL, RO-2 & 14 & 0.04 & 3.44 & 1.00 & 0.08 \\ & BART+SPL, RO-3 & 6 & 0.05 & 4.06 & 1.00 & 0.09 \\ & BART & & 0.06 & 5.50 & 0.47 & 0.11 \\ \hline \multirow{4}{*}{$v=0,w=1$} & BART+SPL, RO-1 & 34 & 0.08 & 6.65 & 1.00 & 0.11 \\ & BART+SPL, RO-2 & 21 & 0.05 & 4.60 & 1.00 & 0.09 \\ & BART+SPL, RO-3 & 11 & 0.06 & 4.71 & 1.00 & 0.09 \\ & BART & & 0.07 & 6.28 & 0.59 & 0.11 \\ \hline \end{tabular} \label{tab:chooseaA} \end{table} We apply a standard BART (ignoring the non-overlap) and BART+SPL to $1,000$ simulated datasets under each of the 18 conditions. Table~\ref{tab:chooseaA} and Table~\ref{tab:chooseaB} contain the results for simulations 3.3A and 3.3B, respectively. While BART+SPL nearly always performs better in terms of each metric than BART, the most notable difference in the BART+SPL and BART results is the difference in coverage probabilities, with BART+SPL consistently obtaining conservative coverage and BART's coverage deteriorating as the degree of non-overlap increases. Even when only 2\% of the data falls into the RN, BART's coverage is unreliable. 
Thus, BART could provide misleading inference even with small amounts of non-overlap. BART+SPL's coverage is conservative, but reliable, in all the simulations assessed; however, its bias tends to increase as the degree of non-overlap increases. Thus, it appears that, if some bias in the point estimate can be tolerated, BART+SPL can be expected to provide conservative inference in (non-pathological) scenarios with over 25\% of the data in the RO. However, based on the observation that BART+SPL's bias is greater than 5\% in simulation 3.3B under each overlap definition with $\left\lbrace v=0.75,w=1.44 \right\rbrace$ and $\left\lbrace v=0,w=1 \right\rbrace$, a more conservative option might be to sacrifice the population-level estimand and perform a trimmed or weighted analysis when more than 15\% of the data falls in the RN. Finally, we note that these simulations suggest that BART+SPL is quite robust to the specification of $a$ and $b$, as we see relatively small discrepancies in bias and coverage. However, some of the results of 3.3B indicate that one should avoid defining the RO too conservatively, as discarding too much information can lead to modest increases in bias. The moderate choice of $a=0.1*range(X)$ and $b=10$ provides the best results in most of the simulations, and we therefore recommend this as the default specification. \begin{table}[ht] \centering \caption{Absolute (Abs) bias, 95\% credible interval coverage, and mean square error (MSE) in estimation of the population average causal effects using BART+SPL and BART applied to simulation 3.3B. 
RO-1 refers to simulations with the RO defined as $a=0.05*(range(X)),b=10$, RO-2 refers to simulations with the RO defined as $a=0.1*(range(X)),b=10$, and RO-3 refers to simulations with the RO defined as $a=0.15*(range(X)),b=3$.} \begin{tabular}{rrrrrrr} \hline $v,w$ & Method & $\pi$ & Abs Bias & Abs Bias (\%) & Coverage & MSE \\ \hline \multirow{4}{*}{$v=1.4,w=1.96$} & BART+SPL, RO-1 & 17 & 0.05 & 4.49 & 1.00 & 0.12 \\ & BART+SPL, RO-2 & 8 & 0.04 & 4.15 & 1.00 & 0.11 \\ & BART+SPL, RO-3 & 2 & 0.04 & 3.97 & 1.00 & 0.10 \\ & BART & & 0.05 & 4.69 & 0.57 & 0.12 \\ \hline \multirow{4}{*}{$v=0.75,w=1.44$} & BART+SPL, RO-1 & 24 & 0.07 & 6.27 & 1.00 & 0.14 \\ & BART+SPL, RO-2 & 14 & 0.06 & 5.72 & 1.00 & 0.13 \\ & BART+SPL, RO-3 & 6 & 0.06 & 5.83 & 1.00 & 0.12 \\ & BART & & 0.07 & 6.71 & 0.35 & 0.14 \\ \hline \multirow{4}{*}{$v=0,w=1$} & BART+SPL, RO-1 & 34 & 0.09 & 8.27 & 1.00 & 0.16 \\ & BART+SPL, RO-2 & 21 & 0.06 & 5.72 & 1.00 & 0.12 \\ & BART+SPL, RO-3 & 11 & 0.06 & 5.12 & 1.00 & 0.11 \\ & BART & & 0.06 & 5.67 & 0.74 & 0.12 \\ \hline \end{tabular} \label{tab:chooseaB} \end{table} \section{The Effect of Natural Gas Compressor Stations on County-Level Thyroid Cancer and Leukemia Mortality} \label{s:app} We collected 2014 thyroid cancer and leukemia mortality rate estimates for each county in the US from the Global Health Data Exchange. The data and methods used to develop these estimates have been described previously \citep{mokdad2017trends}. We also obtained the locations of NG compressor stations from publicly available data compiled by \cite{ornl2017}. While the dataset is not guaranteed to be complete, it is, to our knowledge, the most comprehensive documentation of compressor station locations in existence, with 1,359 compressor station locations verified using imagery. 
In order to test a causal hypothesis, we need to assume that exposure to compressor station-related emissions preceded 2014 (the year for which cancer mortality rates are observed) by at least the minimum latency period for thyroid cancer and leukemia. The CDC reports the minimum latency period for thyroid cancer as 2.5 years and the minimum latency for leukemia as 0.4 years \citep{wtc2015}. Although the dataset does not contain dates of origin for the compressor stations, it does contain peak operation dates. Of the compressor stations in the dataset, 84\% have peak operation dates in or before 2012; thus, it seems reasonable to assume that most of the compressor stations in the dataset operated at least 2.5 years prior to 2014. Our county-level exposure variable is an indicator of whether a compressor station is present in the county. We collected county-level demographic, socio-economic, and behavioral confounder data from the American Community Survey 2014 5-year estimates \citep{census2018} and the 2014 County Health Rankings and Roadmaps \citep{rwdf2018}. Data were accessed using Social Explorer. The confounders used are rate of primary care physicians, percent of less than 65 year olds uninsured, percent diabetic, percent current smokers, percent of people with limited access to healthy foods, percent obese, food environment index, population density, percent male, percent less than age 55, percent white, average household size, percent with bachelor's degree or higher, percent unemployed, median household income, Gini index of inequality, percent owner-occupied housing units, median rent as proportion of income, and average commute time to work. All the data used in this analysis are publicly available, and the data and R code to reproduce the analysis are posted on Github at \url{https://github.com/rachelnethery/overlap}. 
We note that the sensitivity of this analysis to detect exposure effects will be low, because any true health effects of exposure to compressor station emissions are likely more spatially concentrated than the county level. Although a higher spatial resolution analysis would be preferable, obtaining important behavioral confounder data at a finer spatial resolution is challenging. In an effort to improve the detectability of effects, we focus our analysis on roughly the mid-western region of the US (counties with centroid longitudes between -110 and -90), where few other sources of pollution exist compared to the coastal regions \citep{di2017air}. A focus on this region is also reasonable because NG production has a longer history in this region compared to other US production regions, likely leading to greater exposure. We begin with a dataset of 1,309 counties, and, after discarding counties with any missing confounders, are left with N=978 counties. Of these, 291 counties are exposed (i.e., contain at least one compressor station) while 687 are unexposed. Table 7 in Section A.3 of the Appendix shows the differences in the exposed and unexposed populations. Notably, exposed counties have, on average, higher percent uninsured, lower population density, lower percent white, lower education, and higher percent unemployment. We estimate a propensity score by applying a BART probit with exposure status as the response and all the confounders as predictors. The histogram in Figure~\ref{fig:psNG} illustrates the non-overlap in the resulting propensity score, with the solid vertical lines denoting the start of the intervals of non-overlap that are detected in each tail of the propensity score using our overlap definition and $a=0.1*range(\hat{\xi})$ and $b=10$. With these specifications, 12\% of the sample falls into the RN. BART+SPL is needed to obtain population-level causal effect estimates in this setting. \begin{figure}[h!] 
\centering \includegraphics[scale=.8]{natgas_hist.pdf} \caption{Estimated propensity score histograms stratified by exposure status and overlaid. Bold vertical lines represent the start of non-overlap intervals in both tails of the distribution.} \label{fig:psNG} \end{figure} The following outcome variables are considered: (1) 2014 thyroid cancer mortality rate, (2) the change in thyroid cancer mortality rate from 1980 to 2014, (3) 2014 leukemia mortality rate, and (4) the change in leukemia mortality rate from 1980 to 2014. The 2014 rates are log-transformed prior to analysis. We analyze each outcome using both BART+SPL and trimmed BART. Counties in the trimmed sample are more urban and densely populated, on average, than the population represented by the full sample (trimmed sample average population density is 108.40 per mi$^2$ compared to 99.38 in the full sample). Average causal effect estimates and 95\% credible intervals from each analysis can be found in Table~\ref{tab:aceNG}. The BART+SPL analysis estimates population average causal effects, and the trimmed BART estimates trimmed sample average causal effects. Only two of these analyses find statistically significant effects of NG compressor stations-- the trimmed BART analyses of the change in thyroid cancer mortality rates from 1980 to 2014 and of the change in leukemia mortality rates from 1980 to 2014. As discussed above, we must interpret these as significant effects only in the trimmed sample, which is on average more urban than the population of interest. The population-level estimates from BART+SPL have wider credible intervals for two reasons. First, the additional marginalization over the confounders required to obtain population-level estimates increases the variance. Second, to estimate at the population level, we must account for the additional uncertainty induced by the non-overlap, which BART+SPL does by inflating variances in the RN. 
With these wide credible intervals, evidence of an effect must be very compelling in order to achieve statistical significance. However, we note that the point estimate from each analysis is positive, indicating a harmful effect of compressor stations on health, and in each case the point estimates are quite similar in the trimmed and untrimmed analyses. \begin{table}[ht] \centering \caption{Average causal effects of natural gas compressor station presence on 2014 county-level thyroid cancer and leukemia mortality rates and the change in thyroid cancer and leukemia mortality rates from 1980 to 2014.} \begin{tabular}{rrrr} \hline Outcome & Method & Effect & 95\% CI \\ \hline \multirow{2}{*}{2014 Thyroid Rates} & BART+SPL & 0.001 & -0.017, 0.020 \\ & BART & 0.003 & -0.007, 0.012 \\ \hline \multirow{2}{*}{Change in Thyroid Rates 1980-2014} & BART+SPL & 0.992 & -0.308, 2.237 \\ & BART & 1.089 & 0.130, 2.038 \\ \hline \multirow{2}{*}{2014 Leukemia Rates} & BART+SPL & 0.006 & -0.013, 0.025 \\ & BART & 0.005 & -0.004, 0.014\\ \hline \multirow{2}{*}{Change in Leukemia Rates 1980-2014} & BART+SPL & 0.913 & -0.361, 2.206 \\ & BART & 0.988 & 0.014, 1.958 \\ \hline \end{tabular} \label{tab:aceNG} \end{table} In Table 8 and Table 9 in Section A.3 of the Appendix, we provide the results of two sensitivity analyses using alternate specifications of $a$ and $b$, one resulting in a larger RN and one in a smaller RN. The BART+SPL results demonstrate little sensitivity to these choices, with point estimates changing little and inference remaining the same in each analysis. This robustness agrees with findings in simulated data in Section~\ref{ss:choosea}. The inference from the trimmed BART, however, is sensitive to the $a$ and $b$ choices, with the significant leukemia effect being attenuated in one sensitivity analysis and both significant effects being attenuated in the other. 
This sensitivity is not surprising, given that changes in the observations trimmed correspond to changes in the estimand. Because the trimmed BART is sensitive to these subjective choices, we should interpret the results with caution. The significant and near-significant findings presented here suggest that the health effects of compressor station exposure warrant further study with higher quality data. The data utilized here have numerous limitations that could be improved upon by future studies. In particular, an analysis at higher spatial resolution is needed to detect geographically concentrated effects that may be washed out at the county level. Moreover, counties with compressor stations may also be more likely to be located in NG production regions; thus, an analysis at the county level may not be able to distinguish the effects of compressor station exposure from the effects of NG drilling and production-related exposures. Finally, an investigation of cancer diagnosis rates may be more informative about the possible dangers of NG compressor station exposure than our investigation of cancer mortality rates. However, cancer diagnosis rates are difficult to obtain across large geographic regions. \section{Discussion} \label{s:discuss} In this paper, we have introduced a general definition of propensity score non-overlap and have proposed a Bayesian modeling approach to estimate population average causal effects and corresponding uncertainties in the presence of non-overlap. A novel feature of our proposed approach is its separation of the tasks of estimating causal effects in the region of overlap and the region of non-overlap and its delegation of these tasks to two distinct models. 
A BART model is selected to perform estimation of individual causal effects in the region of overlap where there is strong data support, thanks to its non-parametric nature, its ability to capture heterogeneous effects, and its strong predictive capacity. In the region of non-overlap, where reliance on model specification is required to estimate causal effects, individual causal effects are estimated by extrapolating trends from the region of overlap via a parametric spline model. This approach, which we call BART+SPL, can be applied to data with either continuous or binary outcomes and can be implemented in a fully Bayesian manner, so that uncertainties from both stages are captured. We demonstrated via simulations that BART+SPL outperforms both stand-alone BART and stand-alone spline causal inference approaches in estimation of population average causal effects under a wide range of conditions involving propensity score non-overlap. However, due to BART's limitations in high dimensional settings, BART+SPL may give more biased results than existing methods when many irrelevant predictors are present. While we have focused primarily on the use of our overlap definition with BART+SPL, it can also be used to define the RO for trimming, and it may provide a more transparent and reproducible approach than trimming by eye. The rich causal inference literature on caliper selection for matching procedures may provide insight on how to specify $a$ and $b$ for the use of this overlap definition with trimming \citep{stuart_matching_2010,lunt_selecting_2014}. However, as demonstrated by the trimmed BART results in Section~\ref{s:app}, our overlap definition is unlikely to produce an interpretable trimmed estimand. Therefore, other strategies that prioritize the interpretability of the resulting estimand may be preferable, such as overlap weighting \citep{li_balancing_2016}. 
We again note that BART+SPL is intended to handle finite sample non-overlap only, because the appropriateness of estimating population average causal effects may be called into question when the population exhibits non-overlap. When non-overlap is a finite sample feature, it will disappear asymptotically. It is for this reason that we have avoided discussions of double robustness. Because double robustness is an asymptotic property, and BART+SPL is a finite sample method, we cannot make claims about double robustness in the classic sense. However, we have demonstrated in simulations that, relative to the competing methods considered, BART+SPL is most robust to both outcome and propensity score model misspecification in the presence of non-overlap. A key contribution of our work is the introduction of a tuning parameter used to inflate the variance of causal effect estimates in regions of poor data support, so that this variance adequately reflects the high estimation uncertainty in such regions. We have recommended a choice of tuning parameter that linearly increases the variance as the distance into the RN increases, and the scale of these increases is tailored to correspond reasonably to the scale of the estimated causal effects in regions of good data support. Simulation results demonstrated that the resulting credible intervals are much more reliable than those produced by competing methods in the presence of non-overlap. However, in ``simple'' scenarios (i.e., scenarios where trends in the causal effects in the RN are easily predictable based on trends in the RO), this tuning parameter can produce conservative uncertainties. Although, to our knowledge, the use of Gaussian Process Regression (GPR) for causal inference has not previously been discussed in the literature, there are clear connections between the features of our approach and the features of GPR; thus, a comparison of the two methods is warranted. 
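To make the comparison concrete, the GPR property at issue, predictive uncertainty that inflates in regions of poor data support, can be reproduced with a minimal, self-contained implementation. This sketch is ours, for illustration only; note that the GP posterior variance depends only on the input locations, not the outcomes:

```python
import numpy as np

def gp_posterior_sd(x_train, x_test, length_scale=1.0, sigma_f=1.0, noise=1e-4):
    """Posterior predictive sd of a zero-mean GP with a squared-exponential kernel."""
    def kern(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length_scale**2)

    K = kern(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = kern(x_test, x_train)
    # Posterior covariance: K(x*, x*) - K(x*, X) K(X, X)^{-1} K(X, x*)
    cov = kern(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
    return np.sqrt(np.clip(np.diag(cov), 0.0, None))

x_train = np.linspace(0.0, 1.0, 20)           # data support confined to [0, 1]
sd_inside = gp_posterior_sd(x_train, np.array([0.5]))[0]
sd_outside = gp_posterior_sd(x_train, np.array([3.0]))[0]
# sd_outside >> sd_inside: uncertainty grows far from the observed inputs.
```

Even this toy version shows the $O(N^3)$ linear solve that makes GPR costly at scale, a point we return to below.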
GPR is a traditional Bayesian non-parametric regression technique that employs kernel functions to interpolate and predict the outcome at unobserved points. It naturally accommodates non-linearities, identifies regions of poor data support, and inflates uncertainties in those regions. While these features might make it an attractive and parsimonious approach to causal inference in the presence of non-overlap, BART+SPL provides the following three advantages over GPR that may render it more appealing, particularly to applied scientists: (1) less sensitivity to tuning choices, (2) more intuitive tuning parameters, and (3) greater computational scalability. BART is highly regarded for its strong and consistent performance under default tuning specifications, and, in Section~\ref{ss:choosea}, BART+SPL also demonstrated little sensitivity to tuning choices. The results of GPR are known to be highly sensitive to the choice of kernel and tuning parameters. Moreover, GPR's tuning parameters are often difficult to understand, as they are embedded in kernel functions within covariance matrices; therefore, tuning typically requires guess-and-check work. BART+SPL involves tuning parameters with straightforward interpretations relating to the definition of adequate data support. Finally, GPR requires manipulation of an $N \times N$ covariance matrix in each MCMC iteration, making it time-consuming or infeasible with large datasets, while BART+SPL is much more scalable. Our work introduces an exciting direction for methodological developments in the context of causal inference with propensity score non-overlap. For instance, other machine learning methods \citep{cristianini2000introduction,breiman2001random,schmidhuber2015deep} may also have properties that make them well-suited to handle non-overlap, and the replacement of BART with one of these other methods could be explored. 
Moreover, the limitations of BART in high dimensions provide an opportunity for improvement on our method. Improvements might be made through the use of other machine learning methods, through the integration of sparsity-inducing priors with BART as developed by \cite{linero_bayesian_2016}, or through the addition of pre-processing procedures to first identify a minimal confounder set. In the spirit of Bayesian Adjustment for Confounding \citep{wang2012bayesian}, propensity score model and outcome model variable selection could be accomplished simultaneously. For instance, a propensity score model and a BART+SPL model could be fit jointly, both utilizing sparsity inducing priors, where the propensity score model priors are informed by the strength of the relationship between the outcome and each covariate, and the BART+SPL model priors are informed by the strength of the relationship between the exposure and each covariate. Finally, theoretical results remain to be developed for methods that perform causal effect estimation in the RO and RN using distinct models. These could be avenues for future work. \section{Acknowledgements} Support for this work was provided by NIH grants 5T32ES007142-35, R01ES028033, P01CA134294, R01GM111339, R35CA197449, R01ES026217, P50MD010428, and R01MD012769. The authors also received support from EPA grants 83615601 and 83587201-0, and Health Effects Institute grant 4953-RFA14-3/16-4. \bibliographystyle{Chicago}
\section{Introduction} The \emph{dynamic path-planning} problem consists in finding a suitable plan for each new configuration of the environment by recomputing a collision-free path using the new information available at each time step~\cite{Hwang92}. This kind of problem is faced, for example, by a robot trying to navigate through an area crowded with people, such as a shopping mall or supermarket. The problem has been widely addressed in its several flavors, such as cellular decomposition of the configuration space~\cite{Stentz95}, partial environmental knowledge~\cite{Stentz94}, high-dimensional configuration spaces~\cite{Kavraki96} or planning with non-holonomic constraints~\cite{Lavalle99}. However, even simpler variations of this problem are complex enough that they cannot be solved with deterministic techniques, and therefore they are worth studying. This paper is focused on finding and traversing a collision-free path in two-dimensional space, for a holonomic robot~\footnote{A holonomic robot is a robot in which the number of controllable degrees of freedom is equal to the total degrees of freedom.}, without kinodynamic restrictions~\footnote{Kinodynamic planning is a problem in which velocity and acceleration bounds must be satisfied.}, in two different scenarios: \begin{itemize} \item Dynamic environment: several unpredictably moving obstacles or adversaries. \item Partially known environment: some obstacles only become visible when approached by the robot. \end{itemize} Apart from the new obstacle(s) in the second scenario, we assume that we have perfect information about the environment at all times.
We will focus on continuous-space algorithms and will not consider algorithms that use discretized representations of the configuration space, such as D*~\cite{Stentz95}, because for high-dimensional problems the configuration space becomes intractable in terms of both memory and computation time, and there is the extra difficulty of choosing the discretization size, trading off accuracy against computational cost. The offline RRT is efficient at finding solutions, but these are far from optimal and must be post-processed for shortening, smoothing or other qualities that might be desirable in each particular problem. Furthermore, replanning RRTs are costly in terms of computation time, as are evolutionary and cell-decomposition approaches. Therefore, the novelty of this work is the combination of the feasibility benefits of the RRTs, the repairing capabilities of local search, and the computational inexpensiveness of greedy algorithms, into our lightweight multi-stage algorithm. In the following sections, we present several path planning methods that can be applied to the problem described above. In section \ref{sec:RRT} we review the basic offline, single-query RRT, a probabilistic method that builds a tree along the free configuration space until it reaches the goal state. Afterward, we introduce the most popular replanning variants of the RRT: ERRT in section \ref{sec:ERRT}, DRRT in section \ref{sec:DRRT} and MP-RRT in section \ref{sec:MPRRT}. Then, in section \ref{sec:hillclimbing} we present our new hybrid multi-stage algorithm, with experimental results and comparisons in section \ref{sec:results}. Finally, the conclusions and further work are discussed in section \ref{sec:conclusions}.
\section{Previous and Related Work} \label{sec:stateofart} \subsection{Rapidly-Exploring Random Tree}\label{sec:RRT} One of the most successful probabilistic sampling methods for offline path planning currently in use is the Rapidly-exploring Random Tree (RRT), a single-query planner for static environments, first introduced in \cite{Lavalle98}. RRTs work towards finding a continuous path from a state $q_{init}$ to a state $q_{goal}$ in the free configuration space $C_{free}$, by building a tree rooted at $q_{init}$. A new state $q_{rand}$ is sampled uniformly at random from the configuration space $C$. Then the nearest node, $q_{near}$, in the tree is located, and if $q_{rand}$ and the shortest path from $q_{rand}$ to $q_{near}$ are in $C_{free}$, then $q_{rand}$ is added to the tree. The tree growth is stopped when a node is found near $q_{goal}$. To speed up convergence, the search is usually biased towards $q_{goal}$ with a small probability.\\ In \cite{Kuffner00}, two new features are added to RRTs. First, the EXTEND function is introduced, which, instead of trying to add $q_{rand}$ directly to the tree, makes a motion towards $q_{rand}$ and tests for collisions. A greedier approach is then introduced, which repeats EXTEND until an obstacle is reached. This ensures that most of the time we will be adding states to the tree, instead of just rejecting new random states. The second extension is the use of two trees, rooted at $q_{init}$ and $q_{goal}$, which are grown towards each other. This significantly decreases the time needed to find a path. \subsection{ERRT}\label{sec:ERRT} The execution extended RRT presented in \cite{Bruce02} introduces two RRT extensions to build an on-line planner: the \emph{waypoint cache} and the \emph{adaptive cost penalty search}, which improve re-planning efficiency and the quality of generated paths.
The waypoint cache is implemented by keeping a constant-size array of states; whenever a plan is found, all the states in the plan are placed in the cache with random replacement. Then, when the tree is no longer valid, a new tree must be grown, and there are three possibilities for choosing a new target state: with probability P[\textit{goal}], the goal is chosen as the target; with probability P[\textit{waypoint}], a random waypoint is chosen; and with the remaining probability a uniform random state is chosen as before. The values used in \cite{Bruce02} are P[\textit{goal}]$=0.1$ and P[\textit{waypoint}]$=0.6$.\\ In the other extension --- the adaptive cost penalty search --- the planner dynamically modifies a parameter $\beta$ to help it find shorter paths. A value of $1$ for $\beta$ will always extend from the root node, while a value of $0$ is equivalent to the original algorithm. Unfortunately, the solution presented in \cite{Bruce02} lacks implementation details and experimental results on this extension. \subsection{Dynamic RRT} \label{sec:DRRT} The Dynamic Rapidly-exploring Random Tree (DRRT) described in \cite{Ferguson06} is a probabilistic analog to the widely used D* family of algorithms. It works by growing a tree from $q_{goal}$ to $q_{init}$. The principal advantage is that the root of the tree does not have to be changed during the lifetime of the planning and execution. Also, in some problem classes the robot has limited-range sensors, so moving (or new) obstacles are typically near the robot and not near the goal. In general, this strategy tends to trim smaller branches that lie farther away from the root.
When new information concerning the configuration space is received, the algorithm removes the newly-invalid branches of the tree and grows the remaining tree, focusing, with a certain probability (empirically tuned to $0.4$ in \cite{Ferguson06}), on the vicinity of the recently trimmed branches, by using a structure similar to the waypoint cache of the ERRT. In experimental results, DRRT vastly outperforms ERRT. \subsection{MP-RRT}\label{sec:MPRRT} The Multipartite RRT presented in \cite{Zucker07} is another RRT variant which supports planning in unknown or dynamic environments. The MP-RRT maintains a forest $F$ of disconnected sub-trees which lie in $C_{free}$, but which are not connected to the root node $q_{root}$ of $T$, the main tree. At the start of a given planning iteration, any nodes of $T$ and $F$ which are no longer valid are deleted, and any disconnected sub-trees which are created as a result are placed into $F$. With given probabilities, the algorithm tries to connect $T$ to a new random state, to the goal state, or to the root of a tree in $F$. In \cite{Zucker07}, a simple greedy smoothing heuristic is used, which tries to shorten paths by skipping intermediate nodes. The MP-RRT is compared to an iterated RRT, ERRT and DRRT, in 2D, 3D and 4D problems, with and without smoothing. For most of the experiments, MP-RRT modestly outperforms the other algorithms, but in the 4D case with smoothing, the performance gap in favor of MP-RRT is much larger. The authors attribute this to MP-RRT being able to construct much more robust plans in the face of dynamic obstacle motion. Another algorithm that uses the concept of forests is the Reconfigurable Random Forests (RRF) presented in \cite{Li02}, but without the success of MP-RRT.
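All of the variants reviewed above share the same core loop: sample a random state, find the nearest tree node, and EXTEND towards the sample. The following minimal 2-D sketch illustrates that loop for a point robot; the helper names, step size and world bounds are our own illustrative choices, not the implementation of any of the cited papers.

```python
import math
import random

def nearest(tree, q):
    """Nearest node of the tree to configuration q (brute force)."""
    return min(tree, key=lambda p: math.dist(p, q))

def step_towards(q_near, q_rand, eps=0.5):
    """EXTEND primitive: move at most eps from q_near towards q_rand."""
    d = math.dist(q_near, q_rand)
    if d <= eps:
        return q_rand
    t = eps / d
    return (q_near[0] + t * (q_rand[0] - q_near[0]),
            q_near[1] + t * (q_rand[1] - q_near[1]))

def rrt(q_init, q_goal, collision_free, goal_bias=0.05, max_iters=5000):
    """Grow a tree from q_init until a node lands near q_goal."""
    tree = {q_init: None}                      # node -> parent
    rnd = random.Random(0)
    for _ in range(max_iters):
        q_rand = q_goal if rnd.random() < goal_bias else \
                 (rnd.uniform(0, 10), rnd.uniform(0, 10))
        q_near = nearest(tree, q_rand)
        q_new = step_towards(q_near, q_rand)
        if collision_free(q_near, q_new) and q_new not in tree:
            tree[q_new] = q_near
            if math.dist(q_new, q_goal) < 0.5:  # close enough: extract path
                path, q = [], q_new
                while q is not None:
                    path.append(q)
                    q = tree[q]
                return path[::-1]
    return None
```

The `collision_free` predicate abstracts the environment; the variants above differ mainly in what happens to the tree when that predicate changes between planning iterations.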
\section{A Multi-stage Probabilistic Algorithm}\label{sec:hillclimbing} In highly dynamic environments, with many (or a few but fast) relatively small moving obstacles, regrowing trees are pruned too fast, cutting away important parts of the trees before they can be replaced. This dramatically reduces the performance of the algorithms, making them unsuitable for this class of problems. We believe that better performance can be obtained by slightly modifying an RRT solution, using simple obstacle-avoidance operations on the new colliding points of the path by informed local search. The path can then be greedily optimized once it has reached the feasibility condition. \subsection{Problem Formulation} At each time-step, the proposed problem can be defined as an optimization problem with satisfiability constraints. Therefore, given a path, our objective is to minimize an evaluation function (e.g. distance, time, or number of path points), subject to the $C_{free}$ constraint. Formally, let the path $\rho=p_1p_2\ldots p_n$ be a sequence of points, where $p_i \in \mathbb{R}^n$ is an $n$-dimensional point ($p_1 = q_{init}, p_n = q_{goal}$), let $O_t\in \mathcal{O}$ be the set of obstacle positions at time $t$, and let $eval:\mathbb{R}^n \times \mathcal{O} \mapsto \mathbb{R}$ be an evaluation function of the path depending on the obstacle positions. Then, our ideal objective is to obtain the optimal path $\rho^*$ that minimizes our $eval$ function subject to a feasibility restriction, in the form \begin{equation} \displaystyle\rho^*=\arg\min_{\rho}[eval(\rho,O_t)] \textrm{ with } feas(\rho,O_t) = C_{free} \label{eq:problem} \end{equation} where $feas(\cdot,\cdot)$ is a \emph{feasibility} function that equals $C_{free}$ iff the path $\rho$ is collision free for the obstacles $O_t$. For simplicity, we use very naive $eval(\cdot,\cdot)$ and $feas(\cdot,\cdot)$ functions, but this could easily be extended to more complex evaluation and feasibility functions.
The $feas(\rho,O_t)$ function assumes that the robot is a point object (dimensionless) in the space; therefore, if all segments $\overrightarrow{p_i p_{i+1}}$ of the path do not collide with any object $o_j \in O_t$, we say that the path is in $C_{free}$. The $eval(\rho,O_t)$ function is the length of the path, i.e. the sum of the distances between consecutive points. This could easily be changed to any metric, such as the time it would take to traverse the path, accounting for smoothness, clearance or several other optimization criteria. \subsection{A Multi-stage Probabilistic Strategy} If solving equation \ref{eq:problem} is not a simple task in static environments, solving dynamic versions turns out to be even more difficult. In dynamic path planning we cannot wait until reaching the optimal solution, because we must deliver a ``good enough'' plan within some time quantum. Hence, a heuristic approach must be developed to tackle the on-line nature of the problem. The heuristic algorithms presented in sections \ref{sec:ERRT}, \ref{sec:DRRT} and \ref{sec:MPRRT} extend a method developed for static environments, which produces a poor response to highly dynamic environments and an unwanted complexity of the algorithms. We propose a multi-stage combination of three simple heuristic probabilistic techniques to solve each part of the problem: feasibility, initial solution and optimization. \begin{figure}[ht] \begin{center} \includegraphics[width=0.5\textwidth]{diag} \caption[\textbf{A Multi-stage Strategy for Dynamic Path Planning}]{\textbf{A Multi-stage Strategy for Dynamic Path Planning}. This figure describes the life-cycle of the multi-stage algorithm presented here.
The RRT, informed local search, and greedy heuristic are combined to produce an inexpensive solution to the dynamic path planning problem.} \label{fig:diag} \end{center} \end{figure} \subsubsection{Feasibility} The key point in this problem is the hard constraint in equation \ref{eq:problem}, which must be met before even thinking about optimizing. The problem is that in highly dynamic environments a path turns rapidly from feasible to unfeasible --- and the other way around --- even if our path does not change. We propose a simple \emph{informed local search} to obtain paths in $C_{free}$. The idea is to randomly search for a $C_{free}$ path by modifying the nearest colliding segment of the path. As we include in the search some knowledge of the problem, the term \emph{informed} is used to distinguish it from blind local search. The details of the operators used for the modification of the path are described in section \ref{sec:implementation}. \subsubsection{Initial Solution} The problem with local search algorithms is that they repair a solution that is assumed to be near the feasibility condition. Trying to produce feasible paths from scratch with local search (or even with evolutionary algorithms~\cite{Xiao97}) is not a good idea, due to the randomness of the initial solution. Therefore, we propose feeding the informed local search with a \emph{standard RRT} solution at the start of the planning, as can be seen in figure~\ref{fig:diag}. \subsubsection{Optimization} Without an optimization criterion, the path could grow infinitely large in time or size. Therefore, the $eval(\cdot,\cdot)$ function must be minimized when a (temporarily) feasible path is obtained. A simple \emph{greedy} technique is used here: we test each point in the solution to check whether it can be removed while maintaining feasibility; if so, we remove it and check the following point, continuing until reaching the last one.
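As a concrete illustration, the path-length $eval$ and segment-based $feas$ functions of the problem formulation can be sketched as follows. This is a sketch under our own simplifying assumption that obstacles are circles given as $(x, y, r)$ triples; the function names are ours.

```python
import math

def eval_path(path):
    """Path length: sum of distances between consecutive points."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def seg_point_dist(p, q, c):
    """Distance from point c to the segment from p to q."""
    px, py = q[0] - p[0], q[1] - p[1]
    norm2 = px * px + py * py
    if norm2 == 0:                      # degenerate segment: a single point
        return math.dist(p, c)
    t = max(0.0, min(1.0, ((c[0] - p[0]) * px + (c[1] - p[1]) * py) / norm2))
    return math.dist((p[0] + t * px, p[1] + t * py), c)

def feas(path, obstacles):
    """True iff every path segment clears every circular obstacle
    (the dimensionless-robot assumption of the text)."""
    return all(seg_point_dist(p, q, (ox, oy)) > r
               for p, q in zip(path, path[1:])
               for (ox, oy, r) in obstacles)
```

Richer metrics (traversal time, clearance, smoothness) would only change `eval_path`; the feasibility check stays the same.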
\subsection{Algorithm Implementation} \label{sec:implementation} \newcounter{temp} \newcounter{algo} \setcounter{algo}{0} \setcounter{temp}{\value{figure}} \setcounter{figure}{\value{algo}} \renewcommand{\figurename}{Algorithm} \begin{figure}[ht] \caption{\bf Main()} \label{alg:main} \begin{algorithmic}[1] \REQUIRE $q_{robot} \leftarrow$ is the current robot position \REQUIRE $q_{goal} \leftarrow$ is the goal position \WHILE{$q_{robot} \neq q_{goal}$} \STATE {\bf updateWorld}$(time)$ \STATE {\bf process}$(time)$ \ENDWHILE \end{algorithmic} \end{figure} \renewcommand{\figurename}{Figure} \addtocounter{algo}{1} \setcounter{figure}{\value{temp}} The multi-stage algorithm proposed in this paper works by alternating environment updates and path planning, as can be seen in Algorithm~\ref{alg:main}. The first stage of the path planning (see Algorithm \ref{alg:process}) is to find an initial path using a RRT technique, ignoring any cuts that might happen during environment updates. Thus, the RRT ensures that the path found does not collide with static obstacles, but might collide with dynamic obstacles in the future. When a first path is found, the navigation is done by alternating a simple informed local search and a simple greedy heuristic as is shown in Figure~\ref{fig:diag}. 
\setcounter{temp}{\value{figure}} \setcounter{figure}{\value{algo}} \renewcommand{\figurename}{Algorithm} \begin{figure}[ht] \caption{\bf process$(time)$} \label{alg:process} \begin{algorithmic}[1] \REQUIRE $q_{robot} \leftarrow$ is the current robot position \REQUIRE $q_{start} \leftarrow$ is the starting position \REQUIRE $q_{goal} \leftarrow$ is the goal position \REQUIRE $T_{init} \leftarrow$ is the tree rooted at the robot position \REQUIRE $T_{goal} \leftarrow$ is the tree rooted at the goal position \REQUIRE $path \leftarrow$ is the path extracted from the merged RRTs \STATE $q_{robot} \leftarrow q_{start}$ \STATE $T_{init}.init(q_{robot})$ \STATE $T_{goal}.init(q_{goal})$ \WHILE{time elapsed $<$ time} \IF{first path not found} \STATE {\bf RRT}$(T_{init},T_{goal})$ \ELSE \IF{path is not collision free} \STATE firstCol $\leftarrow$ collision point closest to robot \STATE arc$(path, firstCol)$ \STATE mut$(path, firstCol)$ \ENDIF \ENDIF \ENDWHILE \STATE {\bf postProcess}$(path)$ \end{algorithmic} \end{figure} \addtocounter{algo}{1} \renewcommand{\figurename}{Figure} \setcounter{figure}{\value{temp}} \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{arc} \caption[\textbf{The arc operator}]{\textbf{The arc operator}. This operator draws an offset value $\Delta$ over a fixed interval called vicinity. Then, one of the two axes is selected to perform the arc, and two new consecutive points are added to the path. $n_1$ is placed at $\pm \Delta$ from point $b$ and $n_2$ at $\pm \Delta$ from point $c$, both over the same selected axis. The axis, sign and value of $\Delta$ are chosen randomly from a uniform distribution.} \label{fig:arc} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{mut} \caption[\textbf{The mutation operator}]{\textbf{The mutation operator}. This operator draws two offset values $\Delta_x$ and $\Delta_y$ over a vicinity region.
Then the same point $b$ is moved in both axes from $b=[b_x,b_y]$ to $b'=[b_x \pm \Delta_x, b_y\pm \Delta_y]$, where the signs and offset values are chosen randomly from a uniform distribution.} \label{fig:mut} \end{center} \end{figure} The second stage is the informed local search, which is a two-step function composed of the \emph{arc} and \emph{mutate} operators (Algorithms \ref{alg:arc} and \ref{alg:mut}). The first one tries to build a square arc around an obstacle, by inserting two new points between two points in the path that form a segment colliding with an obstacle, as shown in Figure~\ref{fig:arc}. The second step is a mutation operator that moves a point close to an obstacle to a random point in the vicinity, as graphically explained in Figure~\ref{fig:mut}. The mutation operator is inspired by the ones used in the Adaptive Evolutionary Planner/Navigator (EP/N) presented in \cite{Xiao97}, while the arc operator is derived from the arc operator in the Evolutionary Algorithm presented in \cite{Alfaro05}.\\ Even though the local search usually produces good results for minor changes in the environment, it does not when faced with significant changes, and it is quite prone to getting stuck behind an obstacle. To overcome this limitation, our algorithm recognizes this situation and restarts an RRT from the current location before continuing with the navigation phase.
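The mutation step can be sketched in Python as follows. This is a sketch, not the framework's implementation: `segment_free` is an assumed collision-checking predicate, the vicinity size is arbitrary, and `first_col` is assumed to be an interior index of the path.

```python
import random

def mut(path, first_col, segment_free, vicinity=1.0, rnd=random):
    """Perturb the first colliding point within a vicinity box; keep the
    move only if the two adjacent segments become collision free."""
    x, y = path[first_col]
    cand = (x + rnd.uniform(-vicinity, vicinity),
            y + rnd.uniform(-vicinity, vicinity))
    before, after = path[first_col - 1], path[first_col + 1]
    if segment_free(before, cand) and segment_free(cand, after):
        path[first_col] = cand          # accept the mutated point
        return True
    return False                        # reject: keep the old point
```

The arc operator follows the same accept/reject pattern, but inserts two new points on a randomly chosen axis instead of moving an existing one.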
\setcounter{temp}{\value{figure}} \setcounter{figure}{\value{algo}} \renewcommand{\figurename}{Algorithm} \begin{figure}[ht] \caption{\bf arc$(path, firstCol)$} \label{alg:arc} \begin{algorithmic}[1] \REQUIRE vicinity $\leftarrow$ some vicinity size \STATE randDev $\leftarrow$ random$(-vicinity, vicinity)$ \STATE point1 $\leftarrow$ path[firstCol] \STATE point2 $\leftarrow$ path[firstCol+1] \IF{random$() \% 2$} \STATE newPoint1 $\leftarrow$ (point1[X]+randDev,point1[Y]) \STATE newPoint2 $\leftarrow$ (point2[X]+randDev,point2[Y]) \ELSE \STATE newPoint1 $\leftarrow$ (point1[X],point1[Y]+randDev) \STATE newPoint2 $\leftarrow$ (point2[X],point2[Y]+randDev) \ENDIF \IF{path segments point1-newPoint1-newPoint2-point2 are collision free} \STATE add new points between point1 and point2 \ELSE \STATE drop new point2 \ENDIF \end{algorithmic} \end{figure} \addtocounter{algo}{1} \renewcommand{\figurename}{Figure} \setcounter{figure}{\value{temp}} \setcounter{temp}{\value{figure}} \setcounter{figure}{\value{algo}} \renewcommand{\figurename}{Algorithm} \begin{figure}[ht] \caption{\bf mut$(path, firstCol)$} \label{alg:mut} \begin{algorithmic}[1] \REQUIRE vicinity $\leftarrow$ some vicinity size \STATE path[firstCol][X] $+=$ random$(-vicinity, vicinity)$ \STATE path[firstCol][Y] $+=$ random$(-vicinity, vicinity)$ \IF{path segments before and after path[firstCol] are collision free} \STATE accept new point \ELSE \STATE reject new point \ENDIF \end{algorithmic} \end{figure} \renewcommand{\figurename}{Figure} \addtocounter{algo}{1} \setcounter{figure}{\value{temp}} The third and last stage is the greedy optimization heuristic, which can be seen as a post-processing for path shortening, that eliminates intermediate nodes if doing so does not create collisions, as is described in the Algorithm \ref{alg:postProcess}. 
\setcounter{temp}{\value{figure}} \setcounter{figure}{\value{algo}} \renewcommand{\figurename}{Algorithm} \begin{figure}[ht] \caption{\bf postProcess$(path)$} \label{alg:postProcess} \begin{algorithmic}[1] \STATE i $\leftarrow$ 0 \WHILE{i $<$ path.size$()$-2} \IF{segment path[i] to path[i+2] is collision free} \STATE delete path[i+1] \ELSE \STATE i $\leftarrow$ i+1 \ENDIF \ENDWHILE \end{algorithmic} \end{figure} \renewcommand{\figurename}{Figure} \addtocounter{algo}{1} \setcounter{figure}{\value{temp}} \section{Experiments and Results}\label{sec:results} The multi-stage strategy proposed here has been developed to navigate highly dynamic environments, and our experiments are aimed at that purpose. Hence, we have tested our algorithm in a highly dynamic situation on two maps, shown in figures \ref{fig:office-dynamic} and \ref{fig:800-dynamic}. For completeness' sake, we have also tested on the same two maps, but modified to be partially known environments. In addition, we have run the DRRT and MP-RRT algorithms in the same situations in order to compare the performance of our proposal. \subsection{Experimental Setup} The first environment for our experiments consists of two maps with 30 moving obstacles the same size of the robot, with random speeds between 10\% and 55\% of the speed of the robot. These \emph{dynamic environments} are shown in figures \ref{fig:office-dynamic} and \ref{fig:800-dynamic}.\\ \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{office-dynamic} \caption{The dynamic environment, Map 1. The \emph{green} square is our robot, currently at the start position. The \emph{blue} squares are the moving obstacles. The \emph{blue} cross is the goal.} \label{fig:office-dynamic} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{800-dynamic} \caption{The dynamic environment, Map 2. The \emph{green} square is our robot, currently at the start position.
The \emph{blue} squares are the moving obstacles. The \emph{blue} cross is the goal.} \label{fig:800-dynamic} \end{center} \end{figure} The second environment uses the same maps, but with a few obstacles, three to four times the size of the robot, that become visible when the robot approaches each one of them. These \emph{partially known environments} are shown in figures \ref{fig:office-partial} and \ref{fig:800-partial}.\\ \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{office-partial} \caption{The partially known environment, Map 1. The \emph{green} square is our robot, currently at the start position. The \emph{yellow} squares are the suddenly appearing obstacles. The \emph{blue} cross is the goal.} \label{fig:office-partial} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{800-partial} \caption{The partially known environment, Map 2. The \emph{green} square is our robot, currently at the start position. The \emph{yellow} squares are the suddenly appearing obstacles. The \emph{blue} cross is the goal.} \label{fig:800-partial} \end{center} \end{figure} The three algorithms were run a hundred times in each environment. The cutoff time was five minutes for all tests, after which the robot was considered not to have reached the goal.
Results are presented concerning: \begin{itemize} \item {\it success rate:} the percentage of times the robot arrived at the goal \item {\it number of nearest neighbor lookups performed by each algorithm (N.N.):} one of the possible bottlenecks for tree-based algorithms \item {\it number of collision checks performed (C.C.),} which in our specific implementation takes a significant percentage of the running time \item {\it time} it took the robot to reach the goal \end{itemize} \subsection{Implementation Details} The algorithms were implemented in C++ using a framework \footnote{MoPa homepage: https://csrg.inf.utfsm.cl/twiki4/bin/view/CSRG/MoPa} developed by the same authors.\\ There are several variations that can be found in the literature when implementing RRTs. For all our RRT variants, the following are the details on where we departed from the basics: \begin{itemize} \item We always use two trees rooted at $q_{init}$ and $q_{goal}$. \item In our EXTEND function, if the point cannot be added without collisions to a tree, we add the mid point between the nearest tree node and the nearest collision point to it. \item In each iteration, we try to add the new randomly generated point to both trees, and if successful in both, the trees are merged, as proposed in \cite{Kuffner00}. \item We found that there are significant performance differences between allowing or not the robot to advance towards the node nearest to the goal when the trees are disconnected, as proposed in \cite{Zucker07}. The problem is that the robot can become stuck if it enters a small concave zone of the environment (like a room in a building) while there are moving obstacles inside that zone, but otherwise this behavior can lead to better performance. Therefore we present results for both kinds of behavior: DRRT-adv and MPRRT-adv move even when the trees are disconnected, while DRRT-noadv and MPRRT-noadv only move when the trees are connected.
\end{itemize} In MP-RRT, the forest was handled simply replacing the oldest tree in it if the forest had reached the maximum size allowed. Concerning the parameter selection, the probability for selecting a point in the vicinity of a point in the waypoint cache in DRRT was set to 0.4 as suggested in \cite{Ferguson06}. The probability for trying to reuse a sub tree in MP-RRT was set to 0.1 as suggested in \cite{Zucker07}. Also, the forest size was set to 25 and the minimum size of a tree to be saved in the forest was set to 5 nodes. \subsection{Dynamic Environment Results} The results in tables \ref{table:dynamic1} and \ref{table:dynamic2} show that it takes our algorithm considerably less time than it takes the DRRT and MP-RRT to get to the goal, with far less collision checks. It was expected that nearest neighbor lookups would be much lower in the multi-stage algorithm than in the other two, because they are only performed in the initial phase, not during navigation. \begin{table}[ht] \caption{Dynamic Environment Results, Map 1.} \label{table:dynamic1} \centering \begin{tabular}{|c||c|c|c|c|} \hline Algorithm & Success \% & C.C. & N.N. & Time[s]\\ \hline Multi-stage & 99 & 23502 & 1122 & 6.62 \\ \hline DRRT-noadv & 100 & 91644 & 4609 & 20.57 \\ \hline DRRT-adv & 98 & 107225 & 5961 & 23.72 \\ \hline MP-RRT-noadv & 100 & 97228 & 4563 & 22.18\\ \hline MP-RRT-adv & 94 & 118799 & 6223 & 26.86\\ \hline \end{tabular} \end{table} \begin{table}[ht] \caption{Dynamic Environment Results, Map 2.} \label{table:dynamic2} \centering \begin{tabular}{|c||c|c|c|c|} \hline Algorithm & Success \% & C.C. & N.N. 
& Time[s]\\ \hline Multi-stage & 100 & 10318 & 563 & 8.05\\ \hline DRRT-noadv & 99 & 134091 & 4134 & 69.32\\ \hline DRRT-adv & 100 & 34051 & 2090 & 18.94\\ \hline MP-RRT-noadv & 100 & 122964 & 4811 & 67.26\\ \hline MP-RRT-adv & 100 & 25837 & 2138 & 16.34\\ \hline \end{tabular} \end{table} \subsection{Partially Known Environment Results} The results in tables \ref{table:partial1} and \ref{table:partial2} show that our multi-stage algorithm, although designed for dynamic environments, is also faster than the other two in a partially known environment, though not as much as in the previous cases. \begin{table}[ht] \caption{Partially Known Environment Results, Map 1.} \label{table:partial1} \centering \begin{tabular}{|c||c|c|c|c|} \hline Algorithm & Success \% & C.C. & N.N. & Time[s]\\ \hline Multi-stage & 100 & 12204 & 1225 & 7.96\\ \hline DRRT-noadv & 100 & 37618 & 1212 & 11.66\\ \hline DRRT-adv & 99 & 12131 & 967 & 8.26\\ \hline MP-RRT-noadv & 99 & 49156 & 1336 & 13.82\\ \hline MP-RRT-adv & 97 & 26565 & 1117 & 11.12\\ \hline \end{tabular} \end{table} \begin{table}[ht] \caption{Partially Known Environment Results, Map 2.} \label{table:partial2} \centering \begin{tabular}{|c||c|c|c|c|} \hline Algorithm & Success \% & C.C. & N.N. & Time[s]\\ \hline Multi-stage & 100 & 12388 & 1613 & 17.66\\ \hline DRRT-noadv & 99 & 54159 & 1281 & 32.67\\ \hline DRRT-adv & 100 & 53180 & 1612 & 32.54\\ \hline MP-RRT-noadv & 100 & 48289 & 1607 & 30.64\\ \hline MP-RRT-adv & 100 & 38901 & 1704 & 25.71\\ \hline \end{tabular} \end{table} \section{Conclusions}\label{sec:conclusions} The new multi-stage algorithm proposed here has a very good performance in very dynamic environments. It behaves particularly well when several small obstacles are moving around seemingly randomly. 
This is explained by the fact that if the obstacles are constantly moving, they will sometimes move out of the way by themselves, which our algorithm takes advantage of, but the RRT-based ones do not: they just drop branches of the tree that could have been useful again just a few moments later.\\ In partially known environments the multi-stage algorithm outperforms the RRT variants, but the difference is not as large as in dynamic environments. \subsection{Future Work} There are several areas of improvement for the work presented in this paper. The most promising seems to be to experiment with different on-line planners, such as the EP/N presented in \cite{Xiao97}, a version of the EvP (\cite{Alfaro05} and \cite{Alfaro08}) modified to work in continuous configuration space, or a potential field navigator. Also, the local search presented here could benefit from the use of more sophisticated operators. Another area of research that could be tackled is extending this algorithm to other types of environments, ranging from totally known and very dynamic, to static partially known or unknown environments. An extension to higher dimensional problems would be one logical way to go, as RRTs are known to work well in higher dimensions. Finally, as RRTs are suitable for kinodynamic planning, we only need to adapt the on-line stage of the algorithm to obtain a new multi-stage planner for problems with kinodynamic constraints. \IEEEtriggeratref{3}
\section{Introduction} In biology, medicine, social science, economics, and many other disciplines, the representation of relevant problems through complex graphs of interrelated concepts and entities motivates the increasing interest of the scientific community in Network Representation Learning~\citep{Zhang20}. Indeed, by learning low-dimensional representations of network vertices that reflect the network topology and the structural relationships between nodes, we can translate the non-Euclidean graph representation of nodes and edges into a fully Euclidean embedding space that can be easily ingested by vector-based machine learning algorithms to efficiently carry out network analytic tasks, ranging from vertex classification and edge prediction to unsupervised clustering, node visualization and recommendation systems~\citep{Grover16,Zhang16,Martinez17,Wang17,Wang18}. To this aim, in the past decade most research efforts focused on homogeneous networks, proposing matrix factorization-based methods~\citep{Natarajan14}, random walk based methods~\citep{Perozzi14,Grover16}, edge modeling methods~\citep{Jian15}, Generative Adversarial Nets~\citep{WangH19}, and deep learning methods~\citep{Cao16,Hamilton17}. Nevertheless, the highly informative representation provided by graphs that include different types of entities and relationships is the reason behind the development of increasingly complex networks, including Knowledge Graphs~\citep{Dai20}, sometimes referred to as multiplex-heterogeneous networks~\citep{valdeolivas2019random}, or simply as heterogeneous networks~\citep{Dong20} (Section \ref{sec:HetGraph}), where different types of nodes and edges are used to integrate and represent the information carried by multiple sources. Following these advancements, Heterogeneous Network Representation Learning (HNRL) algorithms have recently been proposed to process such complex, heterogeneous graphs~\citep{Dong20}.
The core issue with HNRL is to simultaneously capture the structural properties of the network and the semantic properties of the heterogeneous nodes and edges; in other words, we need node and edge type-aware embeddings that can preserve both the structural and the semantic properties of the underlying heterogeneous graph. In this context, from an algorithmic point of view, two main lines of research have recently emerged, both inspired by homogeneous network representation learning~\citep{Dong20}: the first one leverages results obtained by methods based on the "distributional hypothesis"\footnote{The distributional hypothesis was originally proposed in linguistics \citep{fries1954meaning,harris1954distributional}. It assumes that \virgolette{linguistic items with similar distributions have similar meanings}, from which it follows that words (elements) used and occurring in the same contexts tend to purport similar meanings~\citep{harris1954distributional}.}, firstly exploited to capture the semantic similarity of words~\citep{Mikolov13}, and then extended to capture the similarity between graph nodes~\citep{Grover16}; the second one exploits neural networks specifically designed to process graphs, using e.g. convolutional filters~\citep{Kipf17}, and more generally direct supervised feature learning through graph convolutional networks~\citep{Hamilton17}. The methods of the first research line share the assumption that nodes having the same structural context or being topologically close in the network (homophily) are also close in the embedding space. Some of those methods separately process each homogeneous network included in the original heterogeneous graph.
As an example, in~\citep{Tang15} the heterogeneous network is firstly projected into several homogeneous bipartite networks; then, an embedding representing the integrated multi-source information is computed by a joint optimization technique combining the skip-gram models individually defined on each homogeneous graph. A similar decomposition is initially applied in \citep{Zitnik17}, where the original heterogeneous graph is split into a set of hierarchically structured homogeneous graphs. Each homogeneous graph is then processed through node2vec~\citep{Grover16}, and the embedding of the heterogeneous network is finally obtained by using recursive regularization, which encourages the different embeddings to be similar to their parent embedding. Another approach in this context constrains the random walks used to collect node contexts for the embeddings into specific {\em meta-paths}: the walker can step only between pre-specified pairs of vertices, thus better capturing the structural and semantic characteristics of the nodes~\citep{Dong17}. Other related approaches combine vertex pair embedding with meta-path embeddings~\citep{Park19}, or improve the heterogeneous Spacey random walk algorithm by imposing meta-path, graph and schema constraints~\citep{He19}. Differently from the "distributional hypothesis" approach, which usually applies shallow neural networks to learn the embeddings, graph neural network-based (GNN) approaches apply deep neural-network encoders to provide more complex representations of the underlying graph~\citep{Wu20}. In this approach the deep neural network recursively aggregates information from the neighborhood of each node, in such a way that the node neighborhood itself defines a computation graph that learns how to propagate information across the graph to compute the node features~\citep{Hamilton17,Gilmer17}.
As it often happens for the distributional approach, the usual strategy used by GNNs to deal with heterogeneous graphs is to decompose them into their homogeneous components. For instance, Relational Graph Convolutional Networks~\citep{Schlichtkrull18} maintain distinct weight matrices for each edge type, while Heterogeneous Graph Neural Networks~\citep{Zhang19a} apply first-level Recurrent Neural Networks (RNN) to separately encode features for each type of neighbouring node, and then a second-level RNN to combine them. Also Decagon~\citep{Zitnik18}, which has been successfully applied to model polypharmacy side effects, uses a graph decomposition approach by which node embeddings are separately generated by edge type and the resulting computation graphs are then aggregated. Other approaches add meta-path edges to augment the graph~\citep{Wang19b} or learn attention coefficients that weight the importance of different types of vertices~\citep{Chen18}. A drawback of all the aforementioned GNN approaches is that some relations may not have sufficient occurrences, thus leading to poor relation-specific weights in the resulting GNN. To overcome this problem, a Heterogeneous GNN~\citep{Hu20} that uses a Transformer-like self-attention architecture has recently been proposed. Despite the impressive advancements achieved in recent years by all the aforementioned methods (distributional approaches and GNN-based approaches), both show drawbacks and limitations. \bigskip Indeed, methods based on the "distributional hypothesis", which base the embeddings on random neighborhood sampling, usually rely on the manual exploration of heterogeneous structures, i.e. they require human-designed meta-paths to capture the structural and semantic dependencies between the nodes and edges of the graph. This requires human intervention and non-automatic pre-processing steps for designing the meta-paths and the overall network scheme.
Moreover, similarly to heterogeneous GNNs, in most cases they separately treat each homogeneous network extracted from the original heterogeneous one, and are not able to focus on specific types of nodes or edges that constitute the objective of the underlying prediction task (e.g. prediction of a specific edge type). As for GNNs, an open issue is represented by their computational complexity, which is exacerbated by the intrinsic complexity of heterogeneous graphs, thus posing severe scaling limitations when dealing with big heterogeneous graphs. Moreover, in most cases heterogeneous GNN models use different weight matrices for each type of edge or node, thus augmenting the complexity of the learning model. Some GNN methods augment the graphs by leveraging human-designed meta-paths, thus showing the same limitation of distributional approaches, i.e. the need for human intervention and non-automatic pre-processing steps. To overcome some of these drawbacks we propose a general framework to deal with complex heterogeneous networks, in the context of the previously discussed "distributional hypothesis" random-walk based research line. The proposed approach, which we named {\em Het-node2vec} to remark its derivation from the classical {\em node2vec} algorithm~\citep{Grover16}, can process heterogeneous multigraphs characterized by multiple types of nodes and edges and can scale up to big networks, due to its intrinsic parallel nature. {\em Het-node2vec} does not require manual exploration of heterogeneous structures and meta-paths to deal with heterogeneous graphs, but directly models the heterogeneous graph as a whole, without splitting it into its homogeneous components.
It can focus on specific edges or nodes of the heterogeneous graph, thus introducing a sort of ``attention'' mechanism~\citep{Bahdanau15}, conceptually borrowed from the deep neural network literature, but realized in an original and simple way in the world of random walk visits of heterogeneous graphs. Our proposed approach is particularly appropriate when we need to predict edge or node types that are underrepresented in the heterogeneous network, since the algorithm can focus on specific types of edges or nodes, even when they are largely outnumbered by the other types. At the same time the proposed algorithms learn embeddings that are aware of the different types of nodes and edges of the heterogeneous network and of the topology of the overall network. In the next section we summarize {\em node2vec}, since {\em Het-node2vec} can be considered as its extension to heterogeneous graphs. Then, we present {\em Het-node2vec} in three flavours, to process respectively graphs having Heterogeneous Nodes and Homogeneous Edges ({\em HeNHoE-2vec}), Homogeneous Nodes and Heterogeneous Edges ({\em HoNHeE-2vec}), and fully Heterogeneous graphs having both Heterogeneous Nodes and Edges, through {\em HeNHeE-2vec}, a more general model that integrates {\em HeNHoE-2vec}~ and {\em HoNHeE-2vec}. \section{Homogeneous node2vec} The classical {\em node2vec} algorithm~\citep{Grover16} applies a $2^{nd}$ order random walk to obtain samples in the neighborhood of a given node. Let $X(t)$ be a random variable representing the node $v \in V$ visited at time (step) $t$ during a random walk (RW) across a graph $G=(V,E)$. Considering a $2^{nd}$ order RW, we are interested in estimating the probability $P$ of a transition from $X(t)=v$ to $X(t+1)=x$, given that node $r$ was visited in the previous step of the RW, i.e. $X(t-1)=r$: \begin{equation} P(X(t+1)=x | X(t)=v, X(t-1)=r) \end{equation} \begin{figure}[t!]
\centering \includegraphics[scale=0.2]{imgs/homo-RW.png} \caption{A step (at time $t$) of a $2^{nd}$ order random walk in a homogeneous graph presented in \citep{Grover16}. At time $t-1$ the random walk was in $X(t-1)= r$, and has just moved from node $r$ to node $v$ (image taken from \citep{Grover16}). Then, the probability of moving to any neighboring node is given by $\alpha w_{v,x}$, where $w_{v,x}$ is the edge weight, and $\alpha$ depends on a \virgolette{return} parameter, $p$, and on an \virgolette{outward} parameter $q$.} \label{fig:node2vec} \end{figure} According to \citep{Grover16} (see Fig.~\ref{fig:node2vec}), this probability can be modeled by a (normalized) transition matrix $\Pi$ with elements: $$\pi_{vx}= \alpha_{p,q} w_{v,x}$$ where $w_{v,x}$ is the weight of the edge $(v,x)\in E$ and $\alpha_{p,q}$ is defined as: $$\alpha_{p,q} = \left\{ \begin{matrix} \frac{1}{p} \text{ if } d_{r,x}=0\\\\ 1 \text{ if } d_{r,x}=1\\\\ \frac{1}{q} \text{ if } d_{r,x}=2\\ \end{matrix}\right. $$ where: \begin{itemize} \item $d_{r,x}$ is the \virgolette{distance} from node $r$ to node $x$, that is the length of the shortest path from $r$ to $x$, whereby $d \in \{0,1,2 \}$; \item $p$ is called the \textit{return (inward)} hyper-parameter, and controls the likelihood of immediately revisiting a node in the walk; \item $q$ is the \textit{in-out} hyper-parameter, which controls the probability of exploring more distant parts of the graph. \end{itemize} The advantage of this parametric "biased" RW is that by tuning the $p$ and $q$ parameters we can leverage both the homophily (through a Depth-First Sampling (DFS)-like visit) and the structural (through a Breadth-First Sampling (BFS)-like visit) characteristics of the graph under study.
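The biasing coefficient $\alpha_{p,q}$ and the resulting unnormalized weight $\pi_{vx}$ can be sketched in a few lines of Python (the function names below are illustrative, not part of any published implementation):

```python
def alpha(d_rx: int, p: float, q: float) -> float:
    """node2vec bias for a candidate node x at shortest-path
    distance d_rx in {0, 1, 2} from the previously visited node r."""
    if d_rx == 0:    # step back to r
        return 1.0 / p
    elif d_rx == 1:  # x is a common neighbour of r and v
        return 1.0
    elif d_rx == 2:  # x moves away from r
        return 1.0 / q
    raise ValueError("d_rx must be 0, 1 or 2")

def unnormalized_pi(d_rx: int, w_vx: float, p: float, q: float) -> float:
    """Unnormalized transition weight pi_vx = alpha_{p,q} * w_vx."""
    return alpha(d_rx, p, q) * w_vx
```

For instance, with $p=2$ and $q=0.5$ the walk is biased away from the previous node $r$ and towards DFS-like steps.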
More precisely: \begin{itemize} \item setting $p < \min(1,q)$ promotes the tendency of the $2^{nd}$ order RW to backtrack a step, thus keeping the walk local by returning to the source node $r$; \item parameter $q$ allows us to simulate a DFS ($q<1$), thus capturing the homophily characteristics of the node, or a BFS ($q>1$), thus capturing the structural characteristics of the node. \end{itemize} \section{Heterogeneous node2vec} \label{sec:HetGraph} {\em Het-node2vec}~ introduces $2^{nd}$ order RWs that are node and edge type-aware, in the sense that the RWs are biased according to the different types of nodes and edges in the graph. This is accomplished by introducing ``switching'' parameters that control the way the RW can move between different node and edge types, thus adding a further bias to the RW towards specific types of nodes and edges. From this standpoint our approach resembles the method proposed in~\citep{valdeolivas2019random}, where ``jumps'' between different types of edges or nodes (inspired by L\'evy flights \citep{guo2016levy,riascos2012long,chen2020lfgcn}) are stochastically run across a heterogeneous graph. Nevertheless, the authors focused on a different problem using a different algorithm: they proposed a first order random walk with restart in heterogeneous multigraphs to predict node labels, while our approach performs biased second order random walks for graph representation learning purposes. {\em Het-node2vec} adopts a sort of ``attention'' mechanism~\citep{Velickovic2018} in the world of RWs, in the sense that RW samples are concentrated on the parts of the graph most important for the problem under investigation, while other less important parts receive less attention, i.e. they are visited less intensely by the RW algorithm. In this section we introduce three algorithms that extend node2vec to heterogeneous networks: \begin{enumerate}[a.]
\item {\em HeNHoE-2vec}: Heterogeneous Nodes and Homogeneous Edges to vector \item {\em HoNHeE-2vec}: Homogeneous Nodes and Heterogeneous Edges to vector \item {\em HeNHeE-2vec}: Heterogeneous Nodes and Heterogeneous Edges to vector \end{enumerate} The algorithms differ in the way the "biased" random walk is implemented, but share the common idea that the random walk can switch to nodes of a different type ({\em HeNHoE-2vec}), to edges of a different type ({\em HoNHeE-2vec}), or to both nodes and edges of a different type ({\em HeNHeE-2vec}). In particular {\em HoNHeE-2vec}~ and {\em HeNHeE-2vec}~ can manage multigraphs, i.e. graphs characterized by multiple edges of different types between the same two nodes. It is worth noting that {\em HeNHoE-2vec}~ and {\em HoNHeE-2vec}~ are special cases of {\em HeNHeE-2vec}. \subsection{Heterogeneous networks} \begin{figure}[h!] \centering \includegraphics[scale=0.30]{imgs/Het-Net.jpg} \caption{A heterogeneous multigraph with nodes and edges of different types. Different colours are used to represent node and edge types. Multiple types of edges may connect the same pair of nodes. } \label{fig:HetNet} \end{figure} Fig.~\ref{fig:HetNet} represents a heterogeneous network with different types of nodes and edges, where different colours represent different node and edge types. The proposed algorithms are able to manage heterogeneous networks and multigraphs, i.e. graphs where the same pair of nodes may be connected by multiple types of edges. For didactic purposes, the graph shown in Figure~\ref{fig:HetNet} is composed of four different subgraphs that illustrate the requirements for the three algorithms. $G_1$ is a multigraph having nodes of the same type (cyan nodes) but edges of different types (edges having different colours), where multiple edges may connect the same pair of nodes: for this subgraph we can apply {\em HoNHeE-2vec}, since this algorithm can manage homogeneous nodes and heterogeneous edges.
$G_2$ is a graph having different types of nodes, but the same type of edges (green edges): for this subgraph we can apply {\em HeNHoE-2vec}, since this algorithm can manage heterogeneous nodes and homogeneous edges. $G_3$ is a multigraph having both different types of nodes and edges, where the same pair of nodes may be connected by multiple edges: for this subgraph we can apply {\em HeNHeE-2vec}, since this algorithm can manage heterogeneous nodes and heterogeneous edges. Finally $G_4$ is a graph with both homogeneous nodes and edges (violet nodes and red edges): for this subgraph the classical {\em node2vec} suffices, since this algorithm can be applied to graphs having homogeneous nodes and edges. Note that the overall graph depicted in Fig.~\ref{fig:HetNet} can be managed with {\em HeNHeE-2vec}, since this algorithm generalizes {\em node2vec}, {\em HoNHeE-2vec}~ and {\em HeNHoE-2vec}. \subsection{{\em HeNHoE-2vec}. Heterogeneous Nodes Homogeneous Edges to vector algorithm} \label{subsec:HeHo} \subsubsection{The algorithm} This algorithm extends node2vec by specifying the probabilities for the RW to switch between nodes of different types, adding the possibility to switch to a node $x_o$ having a type different from that of the source node $v_s$. If we have a total of $n$ node types, we can either define the probability of switching between any pair of types, or we can define up to $n-2$ types as 'special' and specify the probability of switching from a non-special node to a special node or vice versa, and analogously with edges. In most cases, we would define one category as 'special'.
More precisely, given a function that returns the type of each node, $\phi: V \rightarrow \Sigma_V = \{V_1, ..., V_N\}$, where $V_1, ..., V_N$ represent different types of nodes, we may have a move from $v_s$ to $x_o$ such that $\phi(x_o) \neq \phi(v_s)$, using a {\em switching} parameter $s$ to modulate the probability to move toward a heterogeneous node. Note that the subscript \textit{s} stands for the \textit{same type} as the source node $v_s$, while \textit{o} stands for an \textit{other type} (see Fig.~\ref{fig:HeHo}). Specifically, we have: \begin{itemize} \item $r_s$ or $r_o$: the node visited before $v_s$ (preceding node) having ($r_s$) the same type as $v_s$, i.e. $\phi(r_s)=\phi(v_s)$, or a different type ($r_o$), i.e. $\phi(r_o) \neq \phi(v_s)$; \item $x_s$: the nodes in the neighborhood of $v_s$ having the same type as $v_s$: $\phi(x_s)=\phi(v_s)$; \item $x_o$: the nodes in the neighborhood of $v_s$ having a type different from $v_s$: $\phi(x_o) \neq \phi(v_s)$. \end{itemize} The transition matrix used to decide where to \virgolette{move} next is computed by introducing a \virgolette{\textit{switch}} parameter $s$, used in the calculation of $\alpha_{p,q,s}$: \begin{align} &P(X(t+1)=x | X(t)=v, X(t-1)=r) = \pi_{vxr} = \alpha_{p,q, s} w_{v,x} \label{eq:2ndRW1}\\ \vspace{0.6cm} &\alpha_{p,q, s} = \left\{ \begin{matrix}\label{eq:2ndRW2} \text{if } \phi(v) = \phi(x): \hspace{.5cm} \left\{ \begin{matrix} \alpha = \frac{1}{p} \text{ if } d_{r,x}=0\\\\ \alpha = 1 \text{ if } d_{r,x}=1\\\\ \alpha = \frac{1}{q} \text{ if } d_{r,x}=2\\ \end{matrix}\right.\\\\ \text{else } : \hspace{2.1cm} \left\{ \begin{matrix} \alpha = \frac{1}{ps} \text{ if } d_{r,x}=0\\\\ \alpha = \frac{1}{s} \text{ if } d_{r,x}=1\\\\ \alpha = \frac{1}{qs} \text{ if } d_{r,x}=2\\ \end{matrix}\right.\\ \end{matrix}\right.
\end{align} Eq.~\ref{eq:2ndRW2} shows an implementation of a $2^{nd}$ order RW for both the situations in which: a) at step $t+1$ we move to a node having the same type as $v_s$ ($\phi(v) = \phi(x)$); b) at step $t+1$ we move toward a node having a different type ($\phi(v) \neq \phi(x)$). To better understand the dynamics of a $2^{nd}$ order RW in a heterogeneous network we consider two distinct cases (Fig.~\ref{fig:HeHo}): \begin{enumerate}[a.] \item At step $t-1$ we start from a node of the same type as $X(t) = v_s$, and hence $X(t-1)=r_s$; \item At step $t-1$ we start from a node having a type different from $X(t) = v_s$, and hence $X(t-1)=r_o$. \end{enumerate} These two cases in a $2^{nd}$ order RW are mutually exclusive, in the sense that for a fixed $t$ we may have either $X(t-1)=r_s$ or $X(t-1)=r_o$. The first case ($X(t-1)=r_s$) is depicted in Fig.~\ref{fig:HeHo} (A). When walking in the homogeneous graph to which $v_s$ belongs, the $\alpha$ values originally defined in \citep{Grover16} are used to weight edges toward $x_s$; when we move to another type of node, all the corresponding edges starting from $v_s$ are weighted through the switching parameter $s$ (eq.~\ref{eq:2ndRW2}). \begin{figure}[ht!] \begin{tabular}{cc} \includegraphics[scale=0.6]{imgs/HeHo1.jpg} & \includegraphics[scale=0.6]{imgs/HeHo2.jpg} \\ (A) & (B)\\ \end{tabular} \caption{{\em HeNHoE-2vec}~ $2^{nd}$ order $RW$. (A) starting from $X(t-1) = r_s$; (B) starting from $X(t-1)= r_o$. Green nodes are those having the same type as $X(t)=v_s$, while pink nodes are those having a different type from $v_s$, i.e. ``switch'' nodes. On the edges the corresponding values of $\alpha_{p,q,s}$ are reported. } \label{fig:HeHo} \end{figure} The second case ($X(t-1)=r_o$) is depicted in Fig.~\ref{fig:HeHo} (B).
In this situation the node visited at step $t-1$ has a type different from that of $v_s$: for instance node $X(t-1)=r_o$ is a pink node while node $X(t)=v_s$ is green, and the $\alpha_{p,q,s}$ values vary according to eq.~\ref{eq:2ndRW2}. The general idea of the proposed approach is to use the classical {\em node2vec} when moving inside a node-homogeneous sub-graph; however, we consider the possibility to switch to other sub-graphs having different types of nodes, based on the value of an additional switching hyper-parameter $s$. Essentially, if $s < 1$, a switch between heterogeneous sub-graphs is encouraged. Moreover, if $q < 1$, Depth-First-Search sampling (DFS) is encouraged, where consecutively sampled nodes tend to have an increasing distance from the starting node. Conversely, if $s>1$, switches are discouraged and the RW tends to more cautiously stay in the same homogeneous graph. This especially happens when also $q > 1$, encouraging a Breadth-First visit. Summarizing, with respect to the original {\em node2vec} setting, which considers the interplay between the parameter $q$, controlling depth-first or breadth-first like steps, and the ``return'' parameter $p$, controlling the probability of returning to the previous node, {\em HeNHoE-2vec}~ introduces the novel ``switching'' parameter $s$, which allows a node type-aware RW across the graph. \subsubsection{Simple, Multiple and Special node switching.} \label{subsub:special-switch} A single new parameter $s$ allows us to maintain memory of the type difference between the preceding node and the node to move to. We name this modality of switching between different types of nodes {\em simple switching}. Nevertheless, note that by using only one $s$ parameter for all the switches between different node types, we lose the specificity of switching between two specific types of nodes of the overall heterogeneous network.
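As an illustrative sketch of the simple switching scheme of eq.~\ref{eq:2ndRW2} (the identifiers below are ours; the boolean flag encodes whether $\phi(v)=\phi(x)$):

```python
def alpha_pqs(d_rx: int, same_node_type: bool,
              p: float, q: float, s: float) -> float:
    """HeNHoE-2vec bias: node2vec's alpha, further divided by the
    switching parameter s when x has a type different from v."""
    if d_rx == 0:
        a = 1.0 / p
    elif d_rx == 1:
        a = 1.0
    elif d_rx == 2:
        a = 1.0 / q
    else:
        raise ValueError("d_rx must be 0, 1 or 2")
    return a if same_node_type else a / s
```

Setting $s<1$ makes every cross-type move proportionally more likely, regardless of the distance $d_{r,x}$.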
\paragraph{Multiple switching} To overcome this problem we can introduce a different $s_{ij}$ for each switch from type $V_i$ to type $V_j$ (and vice versa). This approach, which we name {\em multiple switching}, can introduce more control on the specificity of the {\em switching} process, but at the expense of a more complex resulting model. Indeed, if we have $n$ different types of nodes we can introduce up to $\binom{n}{2}=\mathcal{O}(n^2)$ different node switching parameters. \paragraph{Special node switching} To avoid the complexity induced by multiple switching parameters, and to focus on specific node types, we can define a subset of the $n$ node types as {\em special} and simply specify the probability of switching from a non-special node to a special node or vice versa. This can be easily accomplished by relabeling the node types included in the special set as ``special'' and those not included as ``non special'': in this way we fall back on the simple switching algorithm (eq.~\ref{eq:2ndRW2}) with only two types of nodes defined. It is easy to see that this {\em special node switching} is a special case of the {\em multiple switching}: for instance, if we have four node types, where $V_1$ nodes are {\em special} and the other ones non special, we can set $s_{12}=s_{13}=s_{14}=c$, and by setting $c <1$ we can focus on switching to nodes of type $V_1$. \paragraph{Versus specific switching} We note that we have considered only undirected edges. Given an edge $(x,y)$ with $\phi(x) \neq \phi(y)$, say $x \in V_1$ and $y \in V_2$, we have the same switching parameter $s$ or $s_{12}$ both when moving from $x$ to $y$ and from $y$ to $x$. This could be fine, but in this way we cannot capture a {\em versus specific switching}, i.e. a switching that depends on the direction of the move. For instance, if we would like to focus on $V_1$ nodes and we are in $y \in V_2$, by setting $s \ll 1$ we improve the probability to switch to $x \in V_1$.
But if the neighbours of $x$ are nodes of type $V_1$, it is likely that at the next step we will come back to $y$. In this way we cannot specifically focus the RW on nodes of type $V_1$. To overcome this problem we can define different switching values $s_{12} < s_{21}$, thus modeling in a different way the switch from $V_1$ to $V_2$ ($s_{12}$) with respect to the switch from $V_2$ to $V_1$ ($s_{21}$). This approach can also be applied to special nodes, by setting two different switching parameters for moving from special to non-special and from non-special to special nodes. \subsubsection{Computing transition probabilities} The value of the hyper-parameter $s$ (or $s_{i,j}$) should be \virgolette{tuned} in order to obtain \virgolette{good} node/edge embeddings, that is embeddings exploiting the topology of the overall heterogeneous network. This can be done through Bayesian optimization, grid search or random search. If we set $s<1$ or $s>1$ we can respectively encourage or discourage the visit of different types of nodes. On the contrary, by setting $s=1$ we in practice transform a heterogeneous network into a homogeneous one, and eq.~\ref{eq:2ndRW2} reduces to the original $2^{nd}$ order random walk of node2vec. Moreover, if we set $p=1$ and $q=1$ we obtain a $1^{st}$ order random walk. Finally, if we set $s=p=q=1$ we obtain a $1^{st}$ order random walk in a homogeneous network, since we treat nodes in the same way without considering their different types. We may have at most six different $\alpha_{p,q,s}$ values associated with the edges that connect $X(t) = v_s$ to either nodes of the same type or nodes of a different type than $v_s$, namely $X(t+1) = x$, coming either from a node $X(t-1)=r_s$ or $X(t-1)=r_o$. These $\alpha$ values are used to compute the unnormalized transition probability matrix $\Pi$ associated with the heterogeneous network.
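A row of $\Pi$ restricted to the neighbours of $v_s$ can then be turned into a probability distribution, as sketched by the hypothetical helper below (it assumes the unnormalized weights $\pi_{x,v_s,r}$ are stored in a dictionary keyed by neighbour):

```python
def normalize_row(unnormalized: dict) -> dict:
    """Turn the unnormalized weights pi_{x,v,r} over the neighbours
    N(v) into transition probabilities that sum to one."""
    total = sum(unnormalized.values())
    return {x: pi / total for x, pi in unnormalized.items()}
```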
For instance, looking at Fig.~\ref{fig:HeHo}, the unnormalized probability of moving from $v_s$ to the node $x_s^2$ of the same (green) subgraph is: \begin{eqnarray} &P\big(X(t+1) = x_s^2 | X(t) = v_s, X(t-1) = r_o\big) =& \pi_{x_s^2,v_s,r_o} = \alpha_{p,q,s} w_{v_s, x_s^2}= \frac{1}{q}w_{v_s, x_s^2} \nonumber \end{eqnarray} As another example, the unnormalized probability of moving from $v_s$ to a node $x_o^2$ belonging to a different subgraph is: \begin{eqnarray} &P\big(X(t+1) = x_o^2 | X(t) = v_s, X(t-1) = r_o\big) = & \pi_{x_o^2,v_s,r_o} = \alpha_{p,q,s} w_{v_s, x_o^2}= \frac{1}{qs}w_{v_s, x_o^2} \nonumber \end{eqnarray} As a final example, the unnormalized probability of coming back to node $r_o$ is: \begin{eqnarray} & P\big(X(t+1) = r_o | X(t) = v_s, X(t-1) = r_o\big) =& \pi_{r_o,v_s,r_o} = \alpha_{p,q,s} w_{v_s, r_o}= \frac{1}{ps}w_{v_s,r_o} \nonumber \end{eqnarray} To obtain the normalized transition probabilities $\hat{\pi}$, we divide each unnormalized transition probability by the sum of the unnormalized ones connecting $v_s$ to its neighbours $N(v_s)$: \begin{displaymath} \hat{\pi}_{x,v_s,r} = \frac{\pi_{x,v_s,r}}{\sum_{v \in N(v_s)} \pi_{v,v_s,r}} \end{displaymath} \subsection{{\em HoNHeE-2vec}. Homogeneous Nodes Heterogeneous Edges to vector algorithm} \label{subsec:HoHe} \subsubsection{The algorithm} This algorithm explicitly considers multigraphs, and can be applied to graphs with homogeneous nodes and heterogeneous edges, including edges of different types between the same two nodes. We can define a function $\psi: E \rightarrow \Sigma_E$, where $\Sigma_E$ is the set of edge types. We say that we have an ``edge switch'' in a $2^{nd}$ order random walk when, having $X(t-1)=r$, $X(t)=v$, $X(t+1)=x$, the edge $(r,v)$ has a different type than $(v,x)$, i.e. $\psi(r,v) \neq \psi(v,x)$.
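In a multigraph, $\psi$ can be sketched as a map keyed by edge identifiers, where a third component distinguishes parallel edges between the same pair of nodes; the edge-switch test $\psi(r,v) \neq \psi(v,x)$ then reads as follows (names, keys and edge types below are hypothetical, for illustration only):

```python
# A multigraph edge-type map psi: keys are (u, v, k), where k
# distinguishes parallel edges between the same pair of nodes.
psi = {
    ("r", "v", 0): "interacts",
    ("v", "x", 0): "interacts",
    ("v", "x", 1): "regulates",  # parallel edge of another type
}

def edge_switch(e_in, e_out, psi) -> bool:
    """True when the walk enters v through e_in and leaves through
    e_out of a different type, i.e. psi(r,v) != psi(v,x)."""
    return psi[e_in] != psi[e_out]
```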
Looking at Fig.~\ref{fig:HoHe}, to model the $2^{nd}$ order transition probabilities of moving from $v$ to one of its neighbouring nodes $x$, i.e. \begin{equation} P(X(t+1)=x | X(t)=v, X(t-1)=r) = \pi_{vx} = \beta_{p,q,e} w_{vx} \label{eq:probHoHe} \end{equation} where $w_{vx}$ is the weight of the edge $(v,x)$, we need to define the coefficient $\beta_{p,q,e}$ that biases the RW according to the three parameters $\langle p,q,e \rangle$. Basically the algorithm is similar to {\em HeNHoE-2vec}, but this time a new parameter $e$ controls the probability of ``edge switching'' instead of ``node switching'' and can model the multigraph case with homogeneous nodes. \begin{figure} \centering \includegraphics[scale=0.6]{imgs/HoHe.jpg} \caption{$2^{nd}$ order $RW$ according to the {\em HoNHeE-2vec}~ algorithm. The node $v$ represents the node visited at step $t$, while $r$ is the node visited at step $t-1$. Red edges represent {\em switching} edges. The weights over the edges represent the $\beta_{p,q,e}$ values according to eq.~\ref{eq:HoHe}. } \label{fig:HoHe} \end{figure} \subsubsection{Computing transition probabilities} In Fig.~\ref{fig:HoHe} red edges refer to an edge switch, i.e. $\psi(r,v) \neq \psi(v,x)$, and the corresponding values of $\beta$ are shown.
Depending on the distance of the node $x$ from $r$ and on the possible "switch" to another type of edge, we may have six different values of $\beta$ and of the resulting unnormalized transition probabilities $\pi_{v,x}$(see Fig.~\ref{fig:HoHe}): \begin{eqnarray} \nonumber P\big(X(t+1)=r, \psi(r,v) = \psi(v,r) | X(t)=v, X(t-1)=r\big) = \pi_{vr} &=&\frac{1}{p} w_{vr}\\ \nonumber P\big(X(t+1)=r, \psi(r,v) \neq \psi(v,r) | X(t)=v, X(t-1)=r\big) =\pi_{(vr)}&=&\frac{1}{pe} w_{vr}\\ \nonumber P\big(X(t+1)=x^1, \psi(r,v) = \psi(v,x^1) | X(t)=v, X(t-1)=r\big) =\pi_{vx^1}&=& w_{vx^1}\\ \nonumber P\big(X(t+1)=x^1, \psi(r,v) \neq \psi(v,x^1) | X(t)=v, X(t-1)=r\big) =\pi_{(vx^1)}&=& \frac{1}{e} w_{vx^1}\\ \nonumber P\big(X(t+1)=x^2, \psi(r,v) = \psi(v,x^2) | X(t)=v, X(t-1)=r\big) =\pi_{vx^2}&=& \frac{1}{q} w_{vx^2}\\ \nonumber P\big(X(t+1)=x^2, \psi(r,v) \neq \psi(v,x^2) | X(t)=v, X(t-1)=r\big) =\pi_{(vx^2)}&=& \frac{1}{qe} w_{vx^2}\\ \end{eqnarray} The notation $\pi_{(vx)}$ using round brackets around the edge $(v,x)$ indicates that in the step from $v$ to $x$ an edge switch has occurred. 
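The six cases above reduce to node2vec's $\alpha$, divided by $e$ whenever an edge switch occurs; a minimal sketch (the function name is ours, not from a published implementation):

```python
def beta_pqe(d_rx: int, same_edge_type: bool,
             p: float, q: float, e: float) -> float:
    """HoNHeE-2vec bias: node2vec's alpha, further divided by the
    edge switching parameter e when psi(r,v) != psi(v,x)."""
    if d_rx == 0:
        b = 1.0 / p
    elif d_rx == 1:
        b = 1.0
    elif d_rx == 2:
        b = 1.0 / q
    else:
        raise ValueError("d_rx must be 0, 1 or 2")
    return b if same_edge_type else b / e
```

Multiplying the returned value by the edge weight $w_{vx}$ yields the unnormalized transition probability $\pi_{vx}$ of eq.~\ref{eq:probHoHe}.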
\vspace{0.4cm} Summarizing, considering a network with homogeneous nodes and heterogeneous edges $G=(V,E,\psi)$, with $\psi:E \rightarrow \Sigma_E$, the $2^{nd}$ order transition probabilities: $$P\left(X(t+1)=x | X(t)=v, X(t-1)=r\right) = \pi_{vx} = \beta_{p,q,e} w_{v,x}$$ where $e$ is the "edge switching" parameter, can be modeled by computing $\beta_{p,q,e}$ in the following way: \begin{align} &\beta_{p,q,e} = \left\{ \begin{matrix}\label{eq:HoHe} \text{if } \psi(r,v) = \psi(v,x): \hspace{.5cm} \left\{ \begin{matrix} \beta = \frac{1}{p} \text{ if } d_{r,x}=0\\ \beta = 1 \text{ if } d_{r,x}=1\\\\ \beta = \frac{1}{q} \text{ if } d_{r,x}=2\\ \end{matrix}\right.\\\\ \text{else }: \hspace{2.7cm} \left\{ \begin{matrix} \beta = \frac{1}{pe} \text{ if } d_{r,x}=0\\\\ \beta = \frac{1}{e} \text{ if } d_{r,x}=1\\\\ \beta = \frac{1}{qe} \text{ if } d_{r,x}=2\\ \end{matrix}\right.\\ \end{matrix}\right. \end{align} \subsubsection{Characteristics of the algorithm} In eq.~\ref{eq:HoHe} the "if $\psi(r,v) = \psi(v,x)$" branch manages a $2^{nd}$ order RW when no edge switch is performed: in practice this is the classical {\em node2vec} algorithm. The else branch manages instead the switch to another type of edge, i.e. the case when $\psi(r,v) \neq \psi(v,x)$. The $e>0$ parameter controls the probability of ``switching'' between two different types of edges: low values of $e$ increase the probability to switch in the random walk from an edge of a given type to an edge of a different type. By setting $e=1$, we treat the heterogeneous edges as homogeneous and in practice we reduce the algorithm to the classical {\em node2vec}. The meaning of the $p$ and $q$ parameters is the same as in {\em node2vec}. Analogously to {\em HeNHoE-2vec}, we may introduce multiple $e$ parameters, one for each pair of edge types, to model switches from a specific type of edge to another specific type.
This can introduce more control in the biased random walk, but at the expense of a more complex model. To reduce complexity and to focus on specific edge types, we can subdivide the edge types into ``special'' and ``non special'' ones, as discussed for the case of heterogeneous nodes (Section~\ref{subsub:special-switch}). This algorithm can be applied to the synthetic lethality link prediction problem~\citep{ONeil17}, when we have homogeneous nodes and heterogeneous edges. In this use case, we would define SLI edges as 'special'. \subsection{{\em HeNHeE-2vec}. Heterogeneous Nodes Heterogeneous Edges to vector algorithm} \label{subsec:HeHe} \subsubsection{The algorithm} This algorithm can be applied to multigraphs $G=(V,E,\phi,\psi)$ with $\phi:V \rightarrow \Sigma_V$ and $\psi:E \rightarrow \Sigma_E$, having different types of nodes and edges. The general idea behind this algorithm is to allow the RW to ``switch'' both between different types of nodes (as in {\em HeNHoE-2vec}) and between different types of edges (as in {\em HoNHeE-2vec}). To this end we need $4$ parameters, i.e. $p$ and $q$, the original {\em node2vec} parameters that control the BFS- and DFS-like visit of the graph and the probability to come back to the previous node $X(t-1)$, and two additional parameters $s$ and $e$ that control the probability to switch respectively to another type of node or edge. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.6]{imgs/HeHe1.jpg}& \includegraphics[scale=0.6]{imgs/HeHe2.jpg}\\ (A) & (B)\\ \end{tabular} \end{center} \caption{$2^{nd}$ order $RW$ according to the {\em HeNHeE-2vec}~ algorithm. The node $v_s$ represents the node visited at step $t$. Red edges represent {\em switching} edges. The weights over the edges represent the $\gamma_{p,q,s,e}$ values according to eq.~\ref{eq:HeHe}. (A) RW starting at $X(t-1)=r_s$, where nodes $r_s$ and $v_s$ share the same type; (B) RW starting at $X(t-1)=r_o$, where nodes $r_o$ and $v_s$ have different types.
} \label{fig:HeHe} \end{figure} Looking at Fig.~\ref{fig:HeHe}, $v_s$ represents the node visited by the RW at the current step of a $2^{nd}$ order RW, i.e. $X(t)=v_s$. For the node visited at the previous step, there are two possibilities: we can start from a node $X(t-1)=r_s$ having the same type as $v_s$ (Fig.~\ref{fig:HeHe}~(A)), or from a node $X(t-1)=r_o$ having a different type from $v_s$ (Fig.~\ref{fig:HeHe}~(B)): \begin{enumerate} \item $X(t-1)=r_s$, with $\phi(r_s)=\phi(v_s)$ \item $X(t-1)=r_o$, with $\phi(r_o) \neq\phi(v_s)$ \end{enumerate} At step $t+1$ we can move back from $v_s$ to $r_s$ if $X(t-1)=r_s$ or to $r_o$ if $X(t-1)=r_o$; or we can move to a node $x$ of the same type as $v_s$ at distance $1$ ($x_s^1$) or at distance $2$ ($x_s^2$) from $r_s$ or $r_o$; or we can alternatively move to a node $x$ having a different type from $v_s$, at distance $1$ ($x_o^1$) or at distance $2$ ($x_o^2$) from $r_s$ or $r_o$: \begin{enumerate} \item $X(t+1)=r_s$, with $\phi(r_s)=\phi(v_s)$ \item $X(t+1)=r_o$, with $\phi(r_o) \neq \phi(v_s)$ \item $X(t+1)=x_s^1$, with $\phi(x_s^1)=\phi(v_s)$ \item $X(t+1)=x_s^2$, with $\phi(x_s^2)=\phi(v_s)$ \item $X(t+1)=x_o^1$, with $\phi(x_o^1) \neq \phi(v_s)$ \item $X(t+1)=x_o^2$, with $\phi(x_o^2) \neq \phi(v_s)$ \end{enumerate} Hence, looking at Fig.~\ref{fig:HeHe}, to compute \begin{displaymath} P\left(X(t+1)=x | X(t)=v, X(t-1)=r\right) = \pi_{vx} = \gamma_{p,q,s,e} w_{v,x} \end{displaymath} we must consider that from $v_s$ we can switch to a different type of node or edge at distance $0, 1$ or $2$ from the node visited at step $t-1$, so that we have twelve possibilities: \begin{center} \[ \begin{array}{ll} 1. \; \pi_{v_s r_s} = \frac{1}{p} w_{v_s r_s} & 2. \; \pi_{(v_s r_s)} = \frac{1}{pe} w_{v_s r_s} \\ 3. \; \pi_{v_s r_o} = \frac{1}{ps} w_{v_s r_o} & 4. \; \pi_{(v_s r_o)} = \frac{1}{pse} w_{v_s r_o} \\ 5. \; \pi_{v_s x_s^1} = w_{v_s x_s^1} & 6. \; \pi_{(v_s x_s^1)} = \frac{1}{e} w_{v_s x_s^1}\\ 7. 
\; \pi_{v_s x_s^2} = \frac{1}{q} w_{v_s x_s^2} & 8. \; \pi_{(v_s x_s^2)} = \frac{1}{qe} w_{v_s x_s^2}\\ 9. \; \pi_{v_s x_o^1} = \frac{1}{s} w_{v_s x_o^1} & 10. \; \pi_{(v_s x_o^1)} = \frac{1}{se} w_{v_s x_o^1}\\ 11. \; \pi_{v_s x_o^2} = \frac{1}{qs} w_{v_s x_o^2} & 12. \; \pi_{(v_s x_o^2)} = \frac{1}{qse} w_{v_s x_o^2}\\ \end{array} \] \label{eq:tableHeHe} \end{center} We recall that the notation $\pi_{(vx)}$ means the probability of moving from node $v$ to $x$ with an edge switch. For instance the sixth equation of the previous table means: \begin{displaymath} \pi_{(v_s x_s^1)} = P\left(X(t+1)=x_s^1, \psi(r_s,v_s) \neq \psi (v_s, x_s^1) | X(t)=v_s, X(t-1)=r_s\right) \end{displaymath} \vspace{0.4cm} Summarizing, considering a network with heterogeneous nodes and heterogeneous edges $G=(V,E,\phi, \psi)$, with $\phi:V \rightarrow \Sigma_V$ and $\psi:E \rightarrow \Sigma_E$, the $2^{nd}$ order transition probabilities $$P\left(X(t+1)=x | X(t)=v, X(t-1)=r\right) = \pi_{vx} = \gamma_{p,q,s,e} w_{v,x} $$ where $s$ is the ``node switching'' and $e$ the ``edge switching'' parameter, can be modeled by computing $\gamma_{p,q,s,e}$ through the {\em HeNHeE-2vec}~ algorithm: \begin{align} &\gamma_{p,q,s,e} = \left\{ \begin{matrix}\label{eq:HeHe} \text{if } \psi(r,v_s) = \psi(v_s,x): \hspace{.5cm} \left\{ \begin{matrix} \text{if }\phi(x) = \phi(v_s): \hspace{.5cm} \left\{ \begin{matrix} \gamma = \frac{1}{p} \text{ if } d_{r,x}=0\\ \gamma = 1 \text{ if } d_{r,x}=1\\ \gamma = \frac{1}{q} \text{ if } d_{r,x}=2\\ \end{matrix} \right.\\\\ \text{else }: \hspace{2.2cm} \left\{ \begin{matrix} \gamma = \frac{1}{ps} \text{ if } d_{r,x}=0\\ \gamma = \frac{1}{s} \text{ if } d_{r,x}=1\\ \gamma = \frac{1}{qs} \text{ if } d_{r,x}=2\\ \end{matrix} \right.\\\\ \end{matrix} \right.\\\\ \text{else }: \hspace{2.9cm} \left\{ \begin{matrix} \text{if } \phi(x) = \phi(v_s) : \hspace{0.5cm} \left\{ \begin{matrix} \gamma = \frac{1}{pe} \text{ if } d_{r,x}=0\\ \gamma = \frac{1}{e} \text{ if } d_{r,x}=1\\ 
\gamma = \frac{1}{qe} \text{ if } d_{r,x}=2\\ \end{matrix} \right.\\\\ \text{else } : \hspace{2.2cm} \left\{ \begin{matrix} \gamma = \frac{1}{pse} \text{ if } d_{r,x}=0\\ \gamma = \frac{1}{se} \text{ if } d_{r,x}=1\\ \gamma = \frac{1}{qse} \text{ if } d_{r,x}=2\\ \end{matrix} \right.\\\\ \end{matrix} \right. \end{matrix} \right.\\ \end{align} \subsubsection{Characteristics of the algorithm} {\em HeNHeE-2vec}~ is a general algorithm to perform $2^{nd}$ order RW in heterogeneous multigraphs, characterized by the following features: \begin{itemize} \item can be applied to multigraphs having multiple types of nodes and edges \item can explore the graph according to a continuum between a BFS and a DFS visit (by tuning the $q$ parameter) \item can control the probability of walking a step back to the original source node (by the $p$ parameter) \item can control the probability to switch to different types of nodes (by the $s$ parameter) \item can control the probability to switch to different types of edges (by the $e$ parameter) \end{itemize} {\em HeNHeE-2vec}~ is a generalization of the previously presented algorithms. Indeed: \begin{itemize} \item if we set $e=1$ we obtain {\em HeNHoE-2vec}. \item if we set $s=1$ we obtain {\em HoNHeE-2vec}. \item if we set $e=1$ and $s=1$ we obtain {\em node2vec}. \end{itemize} Analogously to {\em HeNHoE-2vec}~ and {\em HoNHeE-2vec}, we may introduce multiple $s$ and $e$ switching parameters, one for each pair of different types of nodes and edges. This approach on the one hand allows us to better control the biased RW, but on the other hand introduces a more complex model, with a large number of parameters to be tuned. Also in this case, analogously to Section~\ref{subsub:special-switch}, we can introduce ``special'' nodes and edges. The algorithm is inherently parallel since we can generate multiple samples of $2^{nd}$ order RW in general heterogeneous networks starting at the same time from each node of the graph. 
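The twelve cases above collapse to three multiplicative factors: the plain {\em node2vec} distance factor ($1/p$, $1$, or $1/q$), an extra $1/s$ on a node-type switch, and an extra $1/e$ on an edge-type switch. The following minimal Python sketch of this rule is ours (function names and data layout are illustrative, not taken from any released implementation):

```python
def gamma(p, q, s, e, d_rx, same_node_type, same_edge_type):
    """Bias factor gamma_{p,q,s,e} for a candidate step v -> x,
    given the node r visited at the previous step of the 2nd-order walk.

    d_rx           -- graph distance d(r, x), one of 0, 1, 2
    same_node_type -- True iff phi(x) == phi(v)
    same_edge_type -- True iff psi(r, v) == psi(v, x)
    """
    g = {0: 1.0 / p, 1: 1.0, 2: 1.0 / q}[d_rx]  # plain node2vec factor
    if not same_node_type:
        g /= s  # node-type switch
    if not same_edge_type:
        g /= e  # edge-type switch
    return g


def step_distribution(candidates, weights, p, q, s, e):
    """Normalize pi_vx = gamma * w_vx over the candidate successors x.
    candidates maps x -> (d_rx, same_node_type, same_edge_type)."""
    pi = {x: gamma(p, q, s, e, *c) * weights[x] for x, c in candidates.items()}
    z = sum(pi.values())
    return {x: v / z for x, v in pi.items()}
```

With $s=e=1$ the factors reduce to the plain {\em node2vec} biases, consistent with the degeneration properties listed above.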
The complexity of a single $2^{nd}$ order RW step for a single node of the graph is $\mathcal{O}(1)$. To perform one step for each node of the graph, the complexity is $\mathcal{O}(n)$ with $n = |V|$. To perform a random walk of length $k$, the complexity is $\mathcal{O}(kn)$, and to perform $m$ RW of length $k$ for each node the complexity is $\mathcal{O}(kmn)$. If $km \ll n$ then the complexity is $\mathcal{O}(n)$, and if $km \simeq n$, then the complexity is $\mathcal{O}(n^2)$. As an example of possible relevant applications of {\em HeNHeE-2vec}, we could consider the analysis of KG-COVID-19~\citep{REESE2021}, or the analysis of the heterogeneous Monarch network~\citep{Shefchek20}. \section{Conclusion} We proposed {\em Het-node2vec}, an algorithmic framework to run biased $2^{nd}$ order random walks on heterogeneous networks, to obtain embeddings that are node- and edge-type aware. The proposed approach can be viewed as an extension of the classical {\em node2vec} algorithm, since the random walk is further biased according to the type of nodes and edges. This is accomplished through simple ``switching'' mechanisms, controlled by appropriate ``switching parameters'', by which a random walk can stochastically move between different types of nodes and edges, and can focus on specific types of nodes and edges that are of particular interest for the specific learning problem under investigation. The algorithms are general enough to be applied to unsupervised and supervised problems involving graph representation learning, ranging from edge and node label prediction to graph clustering, in application domains ranging from biology and medicine to economics and the social sciences, and more generally to the analysis of complex heterogeneous systems. \bibliographystyle{plainnat}
\subsection{Duality Reviewed} For the sake of completeness we recall the formal definition of a duality. We assume sufficient familiarity with category theory. A convenient reference is \cite{book3}, pp.\ 745--818. \msk Let $\cal A$ and $\cal B$ be categories and consider two contravariant functors (see \cite{book3}, Definition 3.25) $F\colon{\cal A}\to{\cal B}$ and $U\colon{\cal B}\to {\cal A}$. \begin{Definition}\quad (a)\quad It is said that $F$ and $U$ are {\it adjoint on the right} if for each object $A$ in $\cal A$ there is a natural $\cal A$-morphism $\eta_A\colon A\to UFA$ such that for each $\cal A$-morphism $f\colon A\to UB$ there is a unique $\cal B$-morphism $f'\colon B\to FA$ such that $f=(Uf')\circ \eta_A$. $$ \begin{matrix}&{\cal A}&&\hbox to 7mm{} &{\cal B}\\ \noalign{\vskip3pt} \noalign{\hrule} \noalign{\vskip3pt} \mdvertspace A&\mapright{\eta_A}&UFA&\hbox to 7mm{} &FA\\ \lmapdown{\forall f}&&\mapdown{Uf'}&\hbox to 7mm{}&\mapup{\exists! f'}\\ \muvertspace UB&\lmapright{\id}&UB&\hbox to 7mm{}&B.\\ \end{matrix} $$ \bsk The natural morphism $\eta_A$ is called the {\it evaluation morphism}. \nin (b) {\it The categories $\cal A$ and $\cal B$ are said to be dual} if there are two contravariant functors $F$ and $U$, adjoint on the right, such that $\eta_A$ is an isomorphism for all objects $A$ of $\cal A$. \end{Definition} \msk \nin In the general literature, authors frequently speak of ``duality'' as soon as they have two ``naturally defined'' contravariant functors that are adjoint on the right without the postulate that the natural morphism $\eta_A$ is an isomorphism in all cases---for example if they consider the category of Banach spaces as dual to itself under the passage of a Banach space to the dual Banach space. 
\bsk \subsection{Vector Spaces Reviewed} It is a basic fact that a perfect duality theory exists between the category of real or complex vector spaces $\cal V$ and the linear maps between them on the one hand and the so-called weakly complete topological vector spaces $\cal W$ and the continuous linear maps between them on the other (see \cite{dahmen, probook, book3, hofmor}). Yet this fact is less commonly mentioned than one might expect. In the following let $\K$ denote the field of either the real or the complex numbers. Most of the duality theory works over arbitrary topological fields, but for applications in the theory of, say, compact groups, a restriction to these fields suffices. \begin{Definition} \label{d:wvect} A topological $\K$-vector space $V$ is called {\it weakly complete} if there is a vector space $E$ such that $V$ is isomorphic as a topological vector space to the space $E^*\defi\Hom(E,\K)$ of linear scalar valued functionals endowed with the topology of pointwise convergence from the containment $\Hom(E,\K)\subseteq \K^E$. \end{Definition} \nin Together with all continuous linear maps between them, weakly complete topological vector spaces form a category denoted $\cal W$. In particular, since every $\K$-vector space $E$ is isomorphic to the direct sum $\K^{(J)}$ for some set $J$ due to the existence of a basis (via the Axiom of Choice), we have from these definitions the following insight: \begin{Remark} \quad Every weakly complete vector space is isomorphic as a topological vector space to $\K^J$ for some set $J$. \end{Remark} \msk \nin According to this remark the cardinality of $J$ is the only characteristic invariant of a weakly complete vector space. \msk Conversely, if a weakly complete vector space $V$ is given, a vector space $E$ as required by Definition \ref{d:wvect} is easily obtained by considering the (algebraic) vector space of its topological dual $V'=\Hom(V,\K)$ of all continuous linear functionals. 
Indeed, we have: \msk \begin{Theorem} \label{th:vect} The categories $\cal V$ and $\cal W$ are dual to each other with the contravariant functors $$E\mapsto E^*:{\cal V}\to {\cal W},\hbox{ respectively, } V\mapsto V':{\cal W}\to {\cal V}.$$ \end{Theorem} \nin For a vector space $E$, the evaluation morphism $\eta_E\colon E\to (E^*)'$, for $v\in E$ and $\omega\in (E^*)'$, is given by $\eta_E(v)(\omega)=\omega(v)$. In an analogous way one obtains the evaluation morphism $\eta_V\colon V\to (V')^*$. \msk For proofs see \cite{book3}, Theorem 7.30, and \cite{probook}, Appendix 2: weakly complete topological vector spaces. This duality theorem is one of the ``good'' ones for the category $\cal W$ of weakly complete topological vector spaces since it can be used for the understanding of the category $\cal W$ by transporting statements in and about it to well known purely algebraic statements on vector spaces and linear maps. \bsk \subsection{Monoidal categories} The category $\cal V$ has a ``multiplication,'' namely, the tensor product $(E,F)\mapsto E\otimes F$. Indeed this product is associative and commutative and it has any one dimensional vector space as identity object (represented conveniently by $\K$ itself). In order to meet the demands of general category theory, there is a formalism around these concepts which allows us to speak of {\it commutative monoidal categories} as is exemplified by a discussion in \cite{book3}, Appendix 3, Paragraphs A3.61--A3.92, or in \cite{hofbig}, Section 2. {\it Associativity} of the product $\otimes$ is implemented by a natural isomorphism $\alpha_{EFG}\colon E\otimes(F\otimes G)\to (E\otimes F)\otimes G$ and {\it commutativity} by a natural isomorphism $\kappa_{EF}\colon E\otimes F\to F\otimes E$. (Cf.\ \cite{book3}, pp.~787ff.) 
The simplest example of a commutative monoidal category is the category of sets and functions with the cartesian binary product $(X,Y)\mapsto X\times Y$ with the singleton sets as identity objects. \msk If a functor between two monoidal categories respects the available monoidal structures, it will be called {\it multiplicative}. (See \cite{book3}, Definition A3.66.) \msk For the present duality theory of vector spaces it is relevant that not only the category $\cal V$ of vector spaces has a tensor product, but that the category $\cal W$ of weakly complete vector spaces has a tensor product $(V,W)\mapsto V\otimes W$ as well (see \cite{heil} (1997) and \cite{dahmen} (2007)). The tensor product of weakly complete vector spaces has universal properties which are completely analogous to those well known for vector spaces. Indeed, we have the following proposition: \msk \begin{Proposition} \label{p:multi} The respective tensor products endow each of $\cal V$ and $\cal W$ with the structure of a commutative monoidal category, and the contravariant functors $$E\mapsto E^*\colon {\cal V}\to{\cal W}\hbox{ and } V\mapsto V' \colon {\cal W}\to{\cal V}$$ are multiplicative. \end{Proposition} \msk (See e.g.\ \cite{dahmen}). In particular, there are natural isomorphisms $$(E_1\otimes E_2)^*\cong (E_1)^*\otimes(E_2)^*\hbox{ and } (V_1\otimes V_2)' \cong (V_1)'\otimes(V_2)'.$$ \subsection{Monoids and Comonoids---Algebras and Coalgebras} Let us consider a commutative monoidal category $({\cal A}, \otimes)$. Good examples for the purpose of the discussion are the categories of sets with the cartesian product or, alternatively, the category of compact spaces and the (topological) cartesian product. A {\it multiplication} on an object $A$ of $\cal A$ is an $\cal A$-morphism $m\colon A\otimes A\to A$. 
Its {\it associativity} can be expressed in terms of $\cal A$-morphisms in a commutative diagram \def\arr{\hbox to 30pt{\rightarrowfill}} \def\larr{\hbox to 30pt{\leftarrowfill}} \def\ot{\otimes} $$\begin{matrix A\ot(A\ot A)&\mapright{\alpha_{AAA}}&(A\ot A)\ot A\\ \lmapdown{\id_A\ot m}&&\mapdown{m\otimes \id_A}\\ A\otimes A && A\otimes A\\ \lmapdown{m}&&\mapdown{m}\\ A&\lmapright{\id_A}&A.\\ \end{matrix} $$ \msk\nin The multiplication $m$ is called {\it commutative} if the following diagram commutes: $$ \begin{matrix \mdvertspace A\ot A&\mapright{\kappa_{AA}}&A\ot A\\ \lmapdown{m}&&\mapdown{m}\\ \muvertspace A&\lmapright{\id_A}&\hphantom{.}A.\\ \end{matrix} $$ \msk (Cf.\ \cite{book3}, Definition A3.62). A set $X$ with an associative multiplication is commonly called a semigroup, a topological space $C$ with an associative multiplication is a topological semigroup. A category $\cal A$ is said to have identity objects $\IdObject$ (such as singletons in the case of the categories of sets or topological spaces), if for each object $A$ there are natural isomorphisms $\iota_A\colon \IdObject\ot A\to A$ and $\iota'_A\colon A\ot \IdObject\to A$. If an object $A$ has a multiplication $m\colon A\ot A\to A$, then an identity of $(A,m)$ is a morphism $u\colon \IdObject\to A$ such that the following diagram commutes: $$ \begin{matrix} \mdvertspace \IdObject\ot A&\mapright{u\ot\id_A}&A\ot A \mapleft{\id_A\ot u}&A\ot \IdObject\\ \lmapdown{\iota_A}&&\lmapdown{m}&&\mapdown{\iota'_A}\\ \muvertspace A&\lmapright{\id_A}&A&\lmapleft{\id_A}&\hphantom{.}A.\\ \end{matrix} $$ \msk \begin{Definition} \label{d:monoid} In a commutative monoidal category $\cal A$, an object $A$ with an associative multiplication $m\colon A\otimes A\to A$ and an identity $u\colon \IdObject\to A$ is called a {\it monoid} in $\cal A$. 
\end{Definition} \msk If we take for $(\cal A, \otimes)$ the category $(\cal V, \otimes)$ of vector spaces with the tensor product then \nin {\it a monoid in $(\cal V,\otimes)$ is precisely a unital associative algebra over $\K$,} \noindent (in other words, an associative $\K$-algebra with identity). Usually, the multiplication $m\colon A\otimes A\to A$ is expressed in the form $$(a,b)\mapsto ab\defi m(a\otimes b):A\times A\to A,$$ while the identity $u\colon \K\to A$ gives rise to the element $1_A=u(1)$ satisfying $a1_A=1_Aa=a$ for all $a\in A$. In exactly the same spirit we consider the category $\cal W$ of weakly complete vector spaces. Here \nin {\it a monoid in $\cal W$ is precisely a unital associative weakly complete topological algebra over $\K$.} \section{Weakly Complete Associative Unital Algebras} It is perhaps useful for our purpose to emphasize the utter simplicity of this concept in a separate definition: \begin{Definition} \label{d:basic-definition} A {\it weakly complete unital algebra} is an associative algebra $A$ over $\K$ with identity, whose underlying vector space is weakly complete, and whose multiplication $(a,b)\mapsto ab:A\times A\to A$ is continuous. \end{Definition} \msk \begin{Example} \label{e:3.2} (1a)\quad The product algebra $\K^J$ for a set $J$ (with componentwise operations) is a weakly complete unital algebra. \ssk (1b) More generally, if $\{A_j:j\in J\}$ is any family of finite dimensional unital $\K$-algebras, then $A=\prod_{j\in J} A_j$ is a weakly complete unital algebra over $\K$. \ssk (1c) Even more generally: Let $J$ be a directed set and $$\{p_{jk}\colon A_k\to A_j\quad|\quad j\le k, \hbox{ in }J\}$$ a projective system of morphisms of finite dimensional unital $\K$-algebras. Then the projective limit $A=\lim_{j\in J}A_j$ is a weakly complete unital $\K$-algebra. (See \cite{book3}, Definition 1.25ff., pp.17ff., also \cite{probook}, pp.77ff.) 
\msk (2)\quad The algebra $\K[[X]]$ of all formal power series $a_0+a_1X+a_2X^2+\cdots$ with $(a_0,a_1,a_2,\dots)\in\K^{\N_0}$ and the topology making $$(a_n)_{n\in\N_0}\mapsto \sum_{n=0}^\infty a_nX^n:\K^{\N_0}\to \K[[X]]$$ an isomorphism of weakly complete vector spaces is a weakly complete unital algebra. \msk (3)\quad Let $V$ be any weakly complete topological vector space. Endow $A=V\times\K$ with componentwise addition and the multiplication $$ (v_1,r_1)(v_2,r_2)=(r_1\.v_2+r_2\.v_1,r_1r_2).$$ Then $A$ is a weakly complete unital algebra. \end{Example} \msk The weakly complete unital algebras form a category $\cal{W\hskip-2pt A}$ in a natural way with morphisms preserving multiplication and identities in the usual way. \msk If $\cal A$ and $\cal B$ are commutative monoidal categories, then a functor $F\colon{\cal A}\to{\cal B}$ is a {\it functor of monoidal categories} or, more briefly, a {\it multiplicative} functor, if it firstly induces a functor on the underlying categories and if it secondly respects multiplication and identities so that there are natural isomorphisms $$F(A_1\otimes_{\cal A}A_2)\cong F(A_1)\otimes_{\cal B}F(A_2),\hbox{ and}$$ $$ F(\IdObject_{\cal A})\cong \IdObject_{\cal B}.$$ It is then routine to prove the following fact: \msk \begin{Proposition} \label{p:monoids} \quad {\rm(a)} A multiplicative functor $F\colon {\cal A}\to{\cal B}$ between commutative monoidal categories maps monoids in $\cal A$ to monoids in $\cal B$ and induces a functor between the categories of monoids in the respective categories. \msk \nin{\rm(b)}\quad In particular, if $F$ implements an equivalence of categories {\rm (see \cite{book3}, Definition A3.39)}, then the respective categories of monoids are likewise equivalent. 
\end{Proposition} \msk Let us consider the two categories $\cal W$ and $\cal V$ and denote by ${\cal V}^{\rm op}$ the opposite category in which all arrows are formally reversed (cf.\ \cite{book3}, Definition A3.25 and paragraph preceding it). Then by Theorem \ref{th:vect}, the functors $$E\mapsto E^*:{\cal V}^{\rm op}\to {\cal W},\hbox{ respectively, } V\mapsto V':{\cal W}\to {\cal V}^{\rm op}$$ implement an equivalence of categories, and these functors are multiplicative by Proposition \ref{p:multi}. Hence by Proposition \ref{p:monoids} we have the following lemma: \msk \begin{Lemma} \label{l:predual} \quad {\it The category of weakly complete unital topological algebras is equivalent to the category of monoids in ${\cal V}^{\rm op}$.} \end{Lemma} \msk But what is a monoid in ${\cal V}^{\rm op}$? \msk \begin{Definition} \label{d:comonoids} \quad Let $\cal A$ be a commutative monoidal category. Then a {\it co\-monoid} in $\cal A$ is an object $C$ together with morphisms $c\colon C\to C\otimes C$ and $k\colon C\to \IdObject$ such that \def\mdvertspace{\vphantom{A_{A_A}}}\def\muvertspace{\vphantom{A^{A^A}}} $$\begin{matrix C\ot(C\ot C)&\mapright{\alpha_{CCC}}&(C\ot C)\ot C\\ \lmapup{\id_C\ot c}&&\mapup{c\otimes \id_C}\\ C\otimes C && C\otimes C\\ \lmapup{c}&&\mapup{c}\\ C&\lmapright{\id_c}&C.\\ \end{matrix} $$ \medskip \cen{and} \medskip $$ \begin{matrix \mdvertspace \IdObject\ot C&\mapleft{k\ot\id_C}&C\ot C \mapright{\id_C\ot k}&C\ot \IdObject\\ \lmapup{\iota_C}&&\lmapup{c}&&\mapup{\iota'_C}\\ \muvertspace C&\lmapleft{\id_C}&C&\lmapright{\id_C}&\hphantom{.}C.\\ \end{matrix} $$ \medskip \nin are commutative (for natural isomorphisms $\iota_C\colon C\to \IdObject\otimes C$ and $\iota_C'\colon C\to C\otimes \IdObject$.) 
\end{Definition} Almost trivial examples of comonoids are obtained in any category ${\cal C}$ with finite products and a terminal object $\IdObject$ (that is, an object such that $\Hom_{\cal C}(X,\IdObject)$ is a singleton for all objects $X$: see \cite{book3}, Definition A3.6). Indeed there is a unique ``diagonal'' morphism $c_X:X\to X\times X$, and a unique morphism $k_X\colon X\to \IdObject$ and these two morphisms make {\it every} object $X$ endowed with $c_X$ and $k_X$ a coassociative comonoid. This applies to the category of sets with the cartesian product and constant functions to the singleton set. (The verification is a straightforward exercise.) \msk From the definition it is clear that a comonoid in $\cal A$ is exactly a monoid in the opposite category ${\cal A}^{\rm op}$. If $\cal A$ is a category of vector spaces over some field, notably in the case of $\R$ or $\C$, another name is present in the literature (see \cite{mich}): \msk \begin{Definition} \label{d:coalgebra} \quad A comonoid in the commutative monoidal category $\cal V$ of $\K$-vector spaces is called a {\it coalgebra} over $\K$. \end{Definition} \msk Accordingly, Lemma \ref{l:predual} translates into the following statement: \msk \def\W{\cal W} \def\WA{\cal W\hskip-2pt A} \def\CA{\cal C\hskip-2pt A} \begin{Theorem} \label{th:predual} \quad{\it The category $\WA$ of weakly complete unital topological $\K$-algebras is dual to the category of $\K$-coalgebras $\CA$.} \end{Theorem} \msk These observations conclude our collection of preliminary concepts. Theorem \ref{th:predual} is all but profound. So let us pay attention to where it may lead us. \section{Weakly Complete Unital Algebras and their Group of Units} There is a dominant theorem in the theory of coalgebras, called the Fundamental Theorem of Coalgebras (see \cite{mich}, Theorem 4.12, p. 742), attributed to {\sc Cartier}. For us, the following version is relevant. 
It should be clear that a vector subspace $S$ of a coalgebra $C$ is a subcoalgebra if $c_C(S)\subseteq S\otimes S$ and $k_C(S)=\K$. \msk \begin{Theorem} {\rm (Fundamental Theorem of Coalgebras)} \quad \label{th:cartier} Every coalgebra $C$ is the directed union of the set of its finite dimensional subcoalgebras. \end{Theorem} \msk This is sometimes formulated as follows: {\it Every coalgebra is the injective limit of its finite dimensional subcoalgebras.} Now if we take Theorems \ref{th:predual} and \ref{th:cartier} together, we arrive at the following theorem \cite{dah}. Its consequences, as we shall see, are surprising. (For a discussion of limits in the sense of category theory see \cite{book3}, Definition A3.41 ff., and in the concrete case of limits and notably projective limits of topological groups see \cite{probook}, pp.~63ff., respectively, pp.~77ff.) A projective limit of topological groups is {\it strict} if all bonding morphisms and all limit morphisms are surjective (see \cite{book3}, 1.32 or \cite{probook}, Definition 1.24). We shall call a projective limit of topological groups {\it a strict projective limit of quotients} if all bonding maps and all limit morphisms are surjective {\it and open}; that is, are quotient morphisms. In the situation of vector spaces, Theorem A2.12 of \cite{probook} implies that for an injective morphism $f\colon E_1\to E_2$ in the category $\cal V$ of vector spaces, the dual morphism \cen{$f^*\colon E_2^*\to E_1^*$} \nin in the category $\cal W$ of weakly complete vector spaces is automatically surjective and open. \msk \begin{Theorem} \label{th:fundamental} {\rm (The Fundamental Theorem of Weakly Complete Topological Algebras)\quad} Every weakly complete unital topological $\K$-algebra is the strict projective limit of a projective system of quotient morphisms between its finite dimensional unital quotient-algebras. 
\end{Theorem} \msk \nin The literature on locally compact groups shows considerable attention to structural results derived from the information that a group, say, is a projective limit of Lie groups. It is therefore remarkable that a result concluding the presence of a projective limit of additive Lie groups emerges out of the vector space duality between $\cal V$ and $\cal W$ and the Fundamental Theorem on Coalgebras. \msk For a weakly complete topological unital algebra $A$ let $\I(A)$ denote the filter basis of closed two-sided ideals $I$ of $A$ such that $\dim A/I<\infty$. We apply Theorem 1.30 of \cite{probook} and formulate: \msk \begin{Corollary} \label{c:ideal-filter} In a weakly complete topological unital algebra $A$ each neighborhood of $0$ contains an ideal $J\in\I(A)$. That is, the filter basis $\I(A)$ converges to $0$. In short, $\lim \I(A)=0$ and $A\cong\lim_{J\in\I(A)} A/J$. \end{Corollary} \msk \nin If $A$ is an {\it arbitrary} unital $\K$-algebra and $\I(A)$ is the lattice of all of its two-sided ideals $J$ such that $\dim A/J<\infty$, then $\lim_{J\in\I(A)} A/J$ is a weakly complete unital algebra. It is instructive to apply this procedure to the polynomial algebra $A=\K[X]$ in one variable and obtain the weakly complete algebra $$\K\<X\>\defi \lim_{J\in\I(\K[X])} \K[X]/J,$$ which we claim to satisfy the following universal property: \msk \begin{Lemma} \label{l:universal} For any element $a$ in a weakly complete $\K$-algebra $A$ there is a unique morphism of weakly complete algebras $\phi\colon \K\<X\>\to A$ such that $\phi(X)=a$. \end{Lemma} \begin{Proof} As a first step, let us observe that the algebra generated by $X$ in $\K\<X\>=\lim_{J\in\I(\K[X])} \K[X]/J$ is a dense subset. This implies that the map $\phi$ in the statement of the Lemma is unique if it exists. It remains to show its existence. Recall that $A\cong\lim_{J\in\I(A)} A/J$. 
Note that for $J\in\I(A)$ there is a unique morphism $\phi_J\colon\K[X]\to A/J$ sending a polynomial $p$ to $p(a)+J$. Let us look at the following diagram: $$ \begin{matrix} \K[X]&\mapright{\inc}&\K\<X\>&\mapright{\lambda}&A\\ \lmapdown{\id}&&\mapdown{\rho_J}&&\mapdown{\quot}\\ \K[X]&\lmapright{\quot}&{\frac{\K[X]}{\ker\phi_J}}&\lmapright{\phi_J'}&A/J,\\ \end{matrix} $$ \bsk \noindent where $\lambda$ is the map we want to define, where $\phi'_J$ is induced by $\phi_J$, and where $\rho_J$ is the limit map from $\K\<X\>=\lim_{I\in\I(\K[X])}\K[X]/I$. Define $\lambda_J=\phi'_J\circ \rho_J$. The ``diagonal'' morphisms $\lambda_J\colon \K\<X\>\to A/J$, $J\in\I(A)$ are seen to be compatible with the morphisms $A/J_2\to A/J_1$ for $J_1\supseteq J_2$ in $\I(A)$. Accordingly, by the universal property of the limit (cf.~\cite{book3}, Definition A3.41), the unique fill-in morphism $\lambda$ exists as asserted. \end{Proof} \nin\quad Let $\P$ denote the set of all irreducible polynomials $p$ with leading coefficient $1$. Then we have the following: \msk \begin{Lemma} \label{l:3.12} There is an isomorphism of weakly complete $\K$-algebras $$ \K\<X\>\cong \prod_{p\in\P}\K_p\<X\>,\mbox{ where } \K_p\<X\>=\lim_{k\in\N}{\frac{\K[X]}{(p^k)}}.$$ \end{Lemma} \begin{Proof} We recall that every ideal $J\in\I(\K[X])$ is generated by a nonzero polynomial $f=f(X)$, that is, $J=(f)$, since $\K[X]$ is a principal ideal domain. Furthermore, each polynomial $f$ admits a unique decomposition into irreducible factors: $$ \I(\K[X])=\left\{\left(\prod_{p\in\P} p^{k_p}\right) : (k_p)_{p\in\P}\in(\N_0)^{(\P)}\right\}. $$ Here, $(\N_0)^{(\P)}$ denotes the set of all families of nonnegative integers where all but finitely many entries are zero. For each $f=\prod_{p\in\P} p^{k_p}$ we have $$ \K[X]/(f)\cong\prod_{p\in\P}\K[X]/(p^{k_p}) $$ by the Chinese Remainder Theorem. 
This enables us to rewrite the projective limit in the definition of $\K\<X\>$ as $$ \lim_{J\in\I( \K[X])} \K[X]/J\cong \prod_{p\in\P}\left(\lim_{k\in\N}{\frac{\K[X]}{(p^k)}}\right). $$ \vskip-32.5pt \end{Proof} \msk \nin We remark that if $p\in\P$ is of degree $1$, the algebra $\K_p\<X\>$ is isomorphic to $\K[[X]]$, the power series algebra in one variable. Since for $\K=\C$, all $p\in\P$ are of degree $1$, it follows that the algebra $\C\<X\>$ is isomorphic to $\C[[X]]^\C$. \msk In the case $\K=\R$, the polynomials $p\in\P$ are of degree one or two. For $p=X-r$ with a number $r\in\R$ the algebra $\R_p\<X\>$ is isomorphic to $\R[[X]]$. But for polynomials $p\in\P$ of degree two, the situation becomes more complicated. \bsk \subsection{The Group of Units: Density} An element $a$ in an algebra $A$ is called a {\it unit} if it has a multiplicative inverse, that is, there exists an element $a'\in A$ such that $aa'=a'a=1$. The set $A^{-1}$ of units of an algebra is a group with respect to multiplication. \msk \begin{Lemma} \label{l:inversion-continuous}The group of units $A^{-1}$ of a weakly complete unital algebra $A$ is a topological group. \end{Lemma} \begin{Proof} We must show that the function $a\mapsto a^{-1}: A^{-1}\to A^{-1}$ is continuous. In every finite dimensional real or complex unital algebra, the group of units is a topological group. This applies to each factor algebra $A/I$, for $I\in\I(A)$. Then $a\mapsto a^{-1}+I: A^{-1}\to (A/I)^{-1}$ is continuous for all $I\in \I(A)$. Since the isomorphism $A\cong \lim_{I\in\I(A)} A/I$ holds also in the category of topological spaces, the continuity of $a\mapsto a^{-1}$ follows by the universal property of the limit (see \cite{book3}, Definition A3.41). \end{Proof} \msk We remark that there exist topological algebras in which inversion in the group of units is discontinuous. (See e.g.\ \cite{dahii}, Example 3.12.) 
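By contrast, in the weakly complete algebra $\K[[X]]$ of Example \ref{e:3.2}(2) the continuity of inversion can be checked by hand; the following standard computation is recorded here only for illustration:

```latex
\[
  \Big(\sum_{n\ge 0} a_nX^n\Big)^{-1}=\sum_{n\ge 0} b_nX^n,
  \qquad
  b_0=a_0^{-1},\qquad
  b_n=-a_0^{-1}\sum_{k=1}^{n} a_k\, b_{n-k}\quad(n\ge 1),
\]
```

so each coefficient $b_n$ is a polynomial in $a_1,\dots,a_n$ and $a_0^{-1}$, hence a continuous function of finitely many coordinates on the set where $a_0\neq 0$; since the topology of $\K[[X]]\cong\K^{\N_0}$ is that of coordinatewise convergence, inversion is continuous on the group of units $\{a_0\neq 0\}$.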
\bsk The prescription which assigns to a weakly complete unital algebra $A$ its group of units $A^{-1}$ is a functor from the category of weakly complete unital algebras to the category of topological groups. This functor preserves products and intersections, hence arbitrary limits (see e.g.\ \cite{book3}, Proposition A3.53). Thus $A\cong\lim_{J\in\I(A)} A/J$ implies $A^{-1}\cong\lim_{J\in\I(A)} (A/J)^{-1}$. Since the group of units of a finite dimensional unital algebra is a (finite dimensional) linear Lie group (see \cite{book3}, Definition 5.32) we have \msk \begin{Lemma} \label{l:4.2} The group of units $A^{-1}$ of a weakly complete unital real or complex algebra $A$ is a projective limit of linear Lie groups. \end{Lemma} \msk \nin Due to a Theorem of Mostert and Shields \cite{most}, a topological monoid on a locally euclidean space has an open group of units. This applies, in particular, to the multiplicative semigroup $(A,\cdot)$ of any finite dimensional real or complex algebra $A$. However, in this case, one has more elementary linear algebra arguments establishing this fact. Indeed, let $(A,+)$ denote the vector space underlying $A$ and $a\mapsto L_a:A\to \Hom((A,+),(A,+))$ the representation of $A$ into the algebra of all vector space endomorphisms of $(A,+)$ given by $L_a(x)=ax$. If we set $\delta(a)=\det(L_a)$, then we obtain a morphism of multiplicative monoids $\delta\colon (A,\cdot)\to (\K,\cdot)$ in such a fashion that $A\setminus A^{-1}=\delta^{-1}(\{0\})$. This set is a closed nowhere dense algebraic hypersurface in $A$. Thus we have \msk \begin{Lemma} \label{l:4.3} If $A$ is a finite dimensional real or complex unital algebra, then the group $A^{-1}$ of units is a dense open subgroup of the monoid $(A,\cdot)$. 
\end{Lemma} \nin It may be helpful to consider some examples: \msk \begin{Example} (a) In the weakly complete algebra $A=\R^\N$ of Example \ref{e:3.2}.1a, the identity $1=(e_n)_{n\in\N}$, $e_n=1\in\R$, is the limit of the sequence $(a_m)_{m\in\N}$, $a_m=(a_{mn})_{n\in\N}$, of nonunits, where $$a_{mn}=\begin{cases}0, &\hbox{if $n=m$},\\ 1, &\hbox{otherwise}.\\ \end{cases} $$ Hence $A^{-1}$ fails to be open in $A$ while it is dense in $A$. \msk (b) In the Examples \ref{e:3.2}.2 and \ref{e:3.2}.3 of weakly complete unital algebras $A$, we have a maximal ideal $M$ (in case (2) consisting of all elements with $a_0=0$ and in case (3) equalling $V\times\{0\}$) such that $A^{-1}=A\setminus M$. Thus in these cases $A^{-1}$ is open (regardless of dimension). \end{Example} In order to establish a first important result on the group of units $A^{-1}$ of a weakly complete unital algebra $A$, namely, its density in $A$, we need to prove it in the special case $$A=\K_p\<X\>=\lim_{k\in \N}\K[X]/(p^k)\leqno(\#)$$ of Lemma \ref{l:3.12}. \msk \begin{Lemma} \label{l:density} {\rm (Density Lemma)}\quad For each irreducible polynomial $p$ over $\K$ with leading coefficient $1$, the weakly complete algebra $A\defi\K_p\<X\>$ is a local ring and its group $A^{-1}$ of units is open and dense in $A$. \end{Lemma} \begin{Proof} Let $\pi \colon A\to \K[X]/(p)$ denote the limit morphism for $k=1$ in $(\#)$ and let $J\defi\ker \pi$. For every $f\in J$, the series $\sum_{m=0}^\infty f^m$ converges in $A$ to $(1-f)^{-1}$. So $$1-J\subseteq A^{-1}.\leqno(1)$$ \nin Now let $f\in A\setminus J$. Since $F\defi \K[X]/(p)$ is a field, $\pi(f)$ has an inverse in $F$. Thus there is an element $g\in A$ with $h\defi fg\in 1-J$. By (1) $h^{-1}$ exists and $fgh^{-1}=1$. Hence $f$ is invertible. This shows that $$A\setminus J\subseteq A^{-1}.\leqno(2)$$ Trivially $A^{-1}\cap J=\emptyset$ and so equality holds in (2). This shows that the closed ideal $J$ is maximal and thus $A$ is a local ring.
Moreover, $A^{-1}=A\setminus J=\pi^{-1}(F\setminus\{0\})$ is open and dense as the inverse image of the dense open set $F\setminus\{0\}$ under a continuous open surjection. \end{Proof} After the Density Lemma \ref{l:density} we notice that for an irreducible polynomial $p$ the algebra $\R_p\<X\>$ is a local ring with maximal ideal $p\R_p\<X\>$ such that $$\frac{\R_p\<X\>}{p\R_p\<X\>}\cong \begin{cases}\R &\mbox{if }\deg p=1,\\ \C &\mbox{if }\deg p=2.\\ \end{cases}$$ For $p=p(X)=X$ we have $\R_p\<X\>=\R[[X]]$ as in Example 2.2.2. For $J=\R$, Example 2.2.1 can be obtained as a quotient of $\R\<X\>$, firstly, by taking for each $r\in\R$ the irreducible polynomial $p=X-r$, secondly, by noticing that $\R\cong\frac{\R[X]}{(X-r)}=\frac{\R[X]}{(p)}$ is a quotient of $\frac{\R[X]}{(p^k)}$ for each $k\in\N$ and thus of $\lim_{k\in\N}\frac{\R[X]}{(p^k)}$, and, finally, by determining a quotient morphism $$\R\<X\>\cong\prod_{p\in\P} \lim_{k\in\N}\frac{\R[X]}{(p^k)} \to \prod_{r\in\R} \frac{\R[X]}{(X-r)}\cong \R^\R.$$ \nin In passing, we note here again the considerable ``size'' of $\K\<X\>$. \bsk \begin{Theorem} \label{th:density}{\rm (The First Fundamental Theorem on the Group of Units)} For any weakly complete unital $\K$-algebra $A$, the group $A^{-1}$ of units is dense in $A$. \end{Theorem} \begin{Proof} Let $0\ne a\in A$ and let $V$ denote an open neighborhood of $a$ in $A$. According to Lemma \ref{l:universal} there is a morphism $\phi\colon \K\<X\>\to A$ with $\phi(X)=a$. Then $U\defi \phi^{-1}(V)$ is an open neighborhood of $X$ in $\K\<X\>$. If we find a unit $u\in \K\<X\>^{-1}$ in $U$, then $\phi(u)\in V\cap A^{-1}$ is a unit, and this will prove the density of $A^{-1}$ in $A$. By Lemma \ref{l:3.12} we have $\K\<X\>\cong \prod_{p\in\P}\K_p\<X\>$, and so the problem reduces to finding a unit near $X$ in $\K_p\<X\>$ for each $p\in\P$. The preceding Density Lemma \ref{l:density} says that this is possible.
\end{Proof} \bsk \subsection{The Exponential Function} Every finite dimensional unital $\K$-algebra is, in particular, a unital Banach algebra over $\K$ with respect to a suitable norm. By \cite{book3}, Proposition 1.4, in any unital Banach algebra $A$ over $\K$ the group $A^{-1}$ of units is an open subgroup of the monoid $(A,\cdot)$, and it is a (real) linear Lie group with Lie algebra $\L(A)=A_{\rm Lie}$, the real vector space underlying $A$ with the Lie bracket given by $[x,y]=xy-yx$, with the exponential function $\exp\colon \L(A^{-1})\to A^{-1}$ given by the everywhere absolutely convergent power series $\exp x=\sum_{n=0}^\infty\frac1{n!}\.x^n$. (For $\K=\R$ this is discussed extensively in \cite{book3}, Chapter 5, notably \cite{book3}, Definition 5.32.) \msk Now let $A$ be a weakly complete unital $\K$-algebra. Every closed (2-sided) ideal $J$ of $A$ is a closed Lie algebra ideal of $A_{\rm Lie}$. We apply the Theorem \ref{th:fundamental} and note that the Lie algebra $A_{\rm Lie}$ is (up to natural isomorphism of topological Lie algebras) the strict projective limit of quotients $$\lim_{J\in\I(A)} \left(\frac{A}{J}\right)_{\rm Lie}\subseteq \prod_{J\in\I(A)}\left(\frac{A}{J}\right)_{\rm Lie}$$ of its finite dimensional quotient algebras and therefore is a pro-Lie algebra. Each of these quotient Lie algebras is the domain of an exponential function $$\exp_{A/J}\colon A_{\rm Lie}/J\to (A/J)^{-1}\subseteq A/J, \quad (\forall a_J\in A/J)\, \exp_{A/J} a_J=\sum_{n=0}^\infty\frac1{n!}\.a_J^n.$$ This yields a componentwise exponential function on $\prod_{J\in\I(A)}A/J$ which respects the bonding morphisms of the subalgebra $\lim_{J\in\I(A)}A/J$. Thus we obtain the following basic result which one finds in \cite{dah, hofmor}. 
\msk \begin{Theorem} {\rm(The Second Fundamental Theorem on the Group of Units)} \label{th:second} If $A$ is a weakly complete unital $\K$-algebra, then the exponential series $1+a+\frac1{2!}a^2+\cdots$ converges on all of $A$ and defines the exponential function $$\exp_A \colon A_{\rm Lie}\to A^{-1},\quad \exp_A a=\sum_{n=0}^\infty \frac1{n!}a^n$$ of the pro-Lie group $A^{-1}$. The Lie algebra $\L(A^{-1})$ of the pro-Lie group $A^{-1}$ may be identified with the topological Lie algebra $A_{\rm Lie}$, whose underlying weakly complete vector space is the underlying weakly complete vector space of $A$. \end{Theorem} \msk It is instructive to observe that while we saw any weakly complete associative unital algebra to be a projective limit of finite dimensional quotient algebras, it would be incorrect to suspect that every weakly complete real Lie algebra was a projective limit of finite dimensional quotient algebras. Indeed consider the weakly complete vector space $L\defi\R^\N\times \R$ with the Lie bracket $[\((x_n)_{n\in\N},s\), \((y_n)_{n\in\N},t\)] =\(s\.(y_{n+1})_{n\in\N} -t\.(x_{n+1})_{n\in\N},0\)$. Then $L$ is a weakly complete Lie algebra which can be shown to have no arbitrarily small cofinite dimensional ideals and so cannot be the projective limit of finite dimensional quotients. \subsection{The Group of Units of Weakly Complete Algebras} For the investigation of the group of units of a weakly complete unital algebra, this opens up the entire Lie theory of pro-Lie groups $G$, for which in the case of $\K=\R$ we refer to \cite{probook}. For instance, Lemma 3.29 of \cite{probook} tells us that $A^{-1}$ is a pro-Lie group. \msk \noindent{\bf The units in finite dimensional real or complex algebras.} For a finite dimensional unital algebra $A$ let $G\defi A^{-1}$ denote the set of units, i.e., the multiplicatively invertible elements.
The set of pairs $(a,b)\in A\times A$ such that $ab=1$, where $1$ is the identity of $A$, is the set of solutions of a finite sequence of polynomial equations over $\K$; it is the graph of the function $$a\mapsto a^{-1}:G \to G$$ and therefore an algebraic variety over $\K$. It is homeomorphic to $G$ in the topology induced from the unique $\K$-vector space topology of $A$. For $\K=\R$ and $\K=\C$ this identifies $G$ as a real Lie group. This raises the question about information on $G/G_0$ where $G_0$ is the identity component of $G$. {\color{\red}Recall that a topological group $G$ is called {\it almost connected} if the factor group $G/G_0$ is compact.} \begin{Lemma} \label{l:4.6} For a finite dimensional real unital algebra $A$ the group of units $A^{-1}$ has finitely many components and thus is almost connected. \end{Lemma} \begin{Proof} See \cite{boch}, Theorem 2.1.5. \end{Proof} \msk \nin If $\K=\R$ is the field of real numbers and $A=\R^n$, then $A^{-1}/(A^{-1})_0\cong \Z(2)^n$ has $2^n$ components. \bsk \begin{Lemma} \label{l:canslemma} {\rm {\color{\red}(}Mahir Can's Lemma{\color{\red})}} For a finite dimensional complex unital algebra $A$ the group of units $A^{-1}$ is connected. \end{Lemma} \begin{Proof} We write $G=A^{-1}$ for the group of units of $A$ and let $N$ denote the nilradical of the algebra $A$. Then for some natural number $k\ge2$ we have $N^k=\{0\}$ and so for each $x\in N$ we have $(1-x)^{-1}=1+x+\cdots+x^{k-1}$. Thus $1+N$ is a subgroup of $G$ which is immediately seen to be connected. Since $N$ is an ideal, for any $g\in G$ we have $N=gg^{-1}N\subseteq gN\subseteq N$ and so $gN=N$ and $g(1+N)=g+N\in A/N$, that is, $G/(1+N)\subseteq A/N$ for the semisimple factor algebra $A/N$ of $A$ modulo its nilradical.
\msk By a classical theorem of {\color{\red}Wedderburn} there is an $m$-tuple $(n_1,\dots,n_m)$ of positive integers such that $$A/N\cong M_{n_1}(\C)\oplus\cdots\oplus M_{n_m}(\C),$$ where $M_n(\C)$ denotes the $n\times n$ complex matrix algebra ($\cong \Hom(\C^n,\C^n)$). (See e.g.~Corollary 2.66 in \cite{bresar}.) The group ${\rm GL}(n,\C)=M_n(\C)^{-1}$ of units in the full matrix algebra is connected, and so the group of units $H\defi (A/N)^{-1}$ of the algebra $A/N$ is connected. (See e.g.~\cite{good}, Theorem 2.2.5.) \msk Let $p\colon A\to A/N$ be the quotient map. We saw that $p$ induces the morphism $p|G\colon G\to H$ {\color{\red} of} complex Lie groups. By Lemma \ref{l:4.3}, $G$ is dense and open in $A$ and $H$ is open in $A/N$. The function $p$ is open, and thus the function $p|G$ is an open morphism of topological groups so that $p(G)$ is open, hence closed in $H$. The density of $G$ in $A$ implies the density of $p(G)$ in $p(A)=A/N$ and so in $H$. Therefore $p(G)=H$. Thus we have an exact sequence $$1\to 1{+}N \to G\mapright{p|G}H\to1.$$ So, since both $1+N$ and $H$ are connected, $G$ is connected, which is what we had to show. \end{Proof} \msk \noindent{\bf Quotient morphisms of algebras.} Let $f\colon A\to B$ be a surjective morphism of weakly complete unital algebras. The restriction and corestriction of this function, $f|A^{-1}:A^{-1}\to B^{-1}$, is a morphism of pro-Lie groups. Then $\L(f|A^{-1})\colon\L(A^{-1})\to\L(B^{-1})$ is $f$ itself considered as a surjective morphism of pro-Lie algebras. Then by \cite{probook}, Corollary 4.22.iii, we have $$f(\<\exp_A(A)\>)=(f|A^{-1})\<\exp_A(A)\>=\<\exp_B(B)\>. \leqno(1)$$ Assume now that $\dim B<\infty$.
Then $B^{-1}$ is a Lie group and so $$\<\exp_B(B)\>=(B^{-1})_0.\leqno(2)$$ Thus $ (B^{-1})_0\subseteq f(A^{-1})$ and so $$f(A^{-1})\hbox{ is open and closed in }B^{-1}.\leqno(3)$$ By Theorem \ref{th:density}, the group of units $A^{-1}$ is dense in $A$, whence $f(A^{-1})$ is dense in $f(A)=B$, and so, in particular, $f(A^{-1})\subseteq B^{-1}$ is dense in $B^{-1}$. But $f(A^{-1})$ is also closed in $B^{-1}$ by (3), and thus $f(A^{-1})=B^{-1}$. \msk Let $I=\ker f$. Then $A^{-1}\cap(1+I)=\ker f|A^{-1}$. Thus we have a unique bijective morphism $$f'\colon \frac{A^{-1}}{A^{-1}\cap(1+I)}\to B^{-1}$$ such that $$(\forall x\in A^{-1})\, f'(x(A^{-1}\cap(1+I)))=f(x).$$ Since $A/I\cong B$, as vector spaces, we know that $A^{-1}/(A^{-1}\cap(1+I)) \cong(A/I)^{-1}$ is a Lie group with finitely many components by Lemma \ref{l:4.6}, which is, therefore, $\sigma$-compact. Hence $f'$ is an isomorphism by the Open Mapping Theorem for Locally Compact Groups (see e.g.\ \cite{book3}, Exercise EA1.21 on p.704). If $q\colon A^{-1}\to A^{-1}/\ker(f|A^{-1})$ is the quotient morphism, then $f|A^{-1}=f'\circ q$. Hence $f|A^{-1}$ is an open morphism, and we have shown the following \begin{Lemma} \label{l:4.8} Let $f\colon A\to B$ be a surjective morphism of weakly complete unital algebras and assume that $\dim B<\infty$. Then $f$ induces a quotient morphism $f|A^{-1}:A^{-1}\to B^{-1}$ from the pro-Lie group $A^{-1}$ onto the Lie group $B^{-1}$. \end{Lemma} Keep in mind that a quotient morphism is an open map! We also note that Lemma \ref{l:4.8} remains true for $\K=\C$. \bsk \nin We apply this to any weakly complete unital algebra $A$ and any finite dimensional quotient algebras $B=A/J$ for $J\in\I(A)$ and thus sharpen Lemma \ref{l:4.2} in a significant way: \begin{Theorem} \label{th:strict-quotients} {\rm(The Third Fundamental Theorem on Units)}\quad For a weakly complete unital $\K$-algebra $A$ let $\I(A)$ be the set of two-sided closed ideals $J$ such that $\dim A/J<\infty$. 
Then $\I(A)$ is a filter basis converging to $0$ and the group of units $A^{-1}$ is a {\rm strict} projective limit of its quotient groups $A^{-1}/(A^{-1}\cap(1+J))$, which are almost connected linear Lie groups isomorphic to $(A/J)^{-1}$ via the map $g(A^{-1}\cap(1+J))\mapsto g+J$ {\color{\red} as $J$ ranges through $\I(A)$,} such that each limit morphism agrees with the natural quotient morphism. \end{Theorem} \nin With this theorem, at the latest, it becomes clear that we had to resort to the structure theory of pro-Lie groups with almost connected Lie group quotients. In \cite{almost} the extensive structure and Lie theory of {\it connected} pro-Lie groups of \cite{probook} was lifted to {\it almost connected} pro-Lie groups. Yet it was still unknown whether a projective limit of almost connected finite dimensional Lie groups is complete and is, therefore, an almost connected pro-Lie group. However, this is now clear with the result recorded in \cite{short}. \section{Applying pro-Lie Group Theory to the Group of Units of a Weakly Complete Algebra} At this point we continue the theory of weakly complete algebras and prove the following result: \begin{Theorem} {\rm (The Fourth Fundamental Theorem on the Group of Units)} \label{th:fourth} In any weakly complete unital algebra $A$, the multiplicative group $A^{-1}$ of invertible elements is almost connected if the ground field is real and is connected if the ground field is complex. \end{Theorem} \begin{Proof} (a) Let us abbreviate $G\defi {\color{\red}A}^{-1}$. Assume that the ground field $\K$ is $\R$. By Theorem \ref{th:strict-quotients}, $A=\lim_{J\in\I(A)} A/J$ where $\dim A/J<\infty$ and $G=\lim_{J\in\I(A)} (A/J)^{-1}$ with finite dimensional Lie groups $(A/J)^{-1}$. By Lemma \ref{l:4.6}, the group $(A/J)^{-1}$ is almost connected. Let $L=G/N$ be any Lie group quotient of $G$. Let $U$ be an identity neighborhood in $G$ such that $UN=U$ and that $U/N$ has no nonsingleton subgroup.
Since $\lim\I(A)=0$ we have $\lim_{J\in \I(A)} 1+J=1$. So there is a $J\in\I(A)$ such that $1+J\subseteq U$. Since $G\cap(1+J)$ is a multiplicative subgroup, $G\cap(1+J)\subseteq N$. Hence $L$ is a homomorphic image of $G/(G\cap(1+J))\cong(A/J)^{-1}$. This group has finitely many components, and so $L$ is almost connected. Now Theorem 1.1 of \cite{short} shows that $G$ is almost connected in the real case. This means that $G/G_0$ is a compact totally disconnected group and thus has arbitrarily small open-closed subgroups. (b) Assume now $\K=\C$. In particular, $A$ is a real weakly complete unital algebra and thus $G/G_0$ has arbitrarily small open-closed subgroups. Suppose that $G\ne G_0$. Then there is a proper open subgroup $H\subseteq G$. According to Theorem \ref{th:strict-quotients}, $G$ is the strict projective limit of the complex Lie group quotients $G/(G\cap(1+J))=(A/J)^{-1}$, $J\in\I(A)$. Thus we find a $J\in\I(A)$ so small that $G\cap(1+J)\subseteq H$. Then $H/(G\cap(1+J))$ is a proper open subgroup of $(A/J)^{-1}$. However this complex Lie group is connected by Lemma \ref{l:canslemma}. This contradiction shows that the assumption $G\ne G_0$ is false, and thus that $G$ is connected. This completes the proof of the theorem. \end{Proof} \msk After this information is secured we invoke basic results on pro-Lie groups combined from \cite{probook}, 12.81 on p.551, and \cite{almost}. We let $A$ denote any weakly complete unital algebra (see Definition \ref{d:basic-definition}) and denote by $G$ its group of units $A^{-1}$. \begin{Theorem} \label{th:vector-space-splitting} The group of units $G$ of a weakly complete unital algebra $A$ contains a maximal compact subgroup $C$, and $A$ contains up to four closed vector subspaces $V_1,\dots,V_m$ such that $$(c,X_1,\dots,X_m)\mapsto c\exp X_1\cdots\exp X_m\colon C\times V_1\times\cdots\times V_m\to G$$ is a homeomorphism. Every compact subgroup of $G$ has a conjugate in $C$.
The group $G$ is homeomorphic to a product space $\R^J\times C$ for a suitable set $J$. \end{Theorem} One has more detailed structural information on $G$ when needed: If $N(G)$ denotes the nilradical of $G_0$ (see \cite{probook}, Definition 10.40 on p.~447), then for each $1\le k\le m$ the product $N(G)\exp V_k$ is a prosolvable closed subgroup (see Definition 10.12, p. 424 of \cite{probook}). \msk Since $G$ is homeomorphic to a space of the form $\R^J\times C$ for a weakly complete vector space $\R^J$ and a compact group $C$ we know that all of the algebraic-topological properties of $G$ are those of a compact group, since $\R^J\times C$ and $C$ are homotopy equivalent. (Cf.\ \cite{hofmor}.) \msk From Lemma 1.1(iii) in \cite{short}, further Theorem \ref{th:vector-space-splitting}, and \cite{book3}, Theorem 9.41 on p.~485, and \cite{book3}, Corollary 10.38 on p.~572{\color{\red},} we also derive the following facts. \begin{Corollary} Let $G$ be the group of units of a weakly complete unital algebra. Then \begin{enumerate}[\rm(i)] \item $G$ contains a profinite subgroup $D$ such that $G=G_0D$ while $G_0\cap D$ is normal in $G$ and central in $G_0$, and \item $G$ contains a compact subspace $\Delta$ such that $(c,d)\mapsto cd:G_0\times \Delta\to G$ is a homeomorphism. \end{enumerate} \end{Corollary} \bsk \subsection{Limitations of these Results} While we know from \cite{book3}, Corollary 2.29 on p.~43 that every compact group is contained (up to isomorphy) in the group of units of a weakly complete unital algebra, there are even Lie group{\color{\red}s} of dimension as small as 3 which cannot have any isomorphic copies contained in the group of units of a weakly complete algebra.
\msk \begin{Example} \label{e:nonlinear} Among the connected noncompact Lie groups $G$ which are not linear, the following 3--dimensional examples are prominent in the literature: (a) (Garret Birkhoff) Let $N$ be the ``Heisenberg group'' of matrices $$[x,y,z]\defi\begin{pmatrix}1&x&z\\0&1&y\\0&0&1\end{pmatrix},\ x,y,z\in\R$$ and $Z=\{[0,0,n]:n\in\Z\}\cong\Z$ a central cyclic subgroup of $N$. Then $G=N/Z$ is a 3-dimensional class 2 nilpotent group which is not linear, as first observed by G. Birkhoff. (See e.g. \cite{book3}, Example 5.67ff.) The group $G$ is homeomorphic to $\R^2\times \Ss^1$. \ssk (b) Let $G$ be the universal covering group of the special linear group ${\rm Sl}(2,\R)$, the group of 2 by 2 real matrices of determinant 1. Since ${\rm Sl}(2,\R)$ is homeomorphic to $\R^2\times\Ss^1$, the three dimensional Lie group $G$ is homeomorphic to $\R^3$. The 3-dimensional Lie group $G$ is not linear. (See e.g.~\cite{hilnee}, Example 9.5.18.) For a practical parametrization of $G$ see e.g.~\cite{hilg}, Theorem V.4.37 on p.~425 and the surrounding discussion. \end{Example} \msk \nin We observe that the universal covering group $G$ of ${\rm Sl}(2,\R)$ cannot be a subgroup of the group $A^{-1}$ of units of a weakly complete real unital algebra $A$. Indeed {\sc Hilgert and Neeb} show in \cite{hilnee}, Example 9.5.18, that every linear representation $\pi\colon G\to{\rm Gl}(n,\R)$ factors through $p\colon G\to{\rm Sl}(2,\R)$ with a representation $f\colon {\rm Sl}(2,\R)\to {\rm Gl}(n,\R)$ as $\pi=f\circ p$. So if $z$ is one of the two generators of the center of $G$ which is isomorphic to $\Z$, then $p(z^2)=1$ and thus $\<z^2\>$ is contained in the kernel of any finite dimensional linear representation of $G$. From a subgroup $G$ of $A^{-1}$ we would obtain an injective morphism $\gamma\colon G\to A^{-1}$. Every $J\in\I(A)$ yields a linear representation $q_J\colon A^{-1}\to(A/J)^{-1}$ by Theorem 3.6.
Thus $q_J\circ \gamma:G\to (A/J)^{-1}$ is a linear representation which will annihilate $\<z^2\>$ for all $J\in\I(A)$. Thus the injective morphism $\gamma$ would annihilate $z^2$, which is a contradiction. We leave it as an exercise to show that the group $G$ of Birkhoff's Example 4.15(a) cannot be a subgroup of any $A^{-1}$ of a weakly complete unital algebra $A$. (Cf.~\cite{book3}, Example 5.67.) \bsk \nin Still, in the next section we shall show that for each topological group $G$ there is a ``best possible'' weakly complete unital algebra $\R[G]$ with a natural ``best possible'' morphism $G\to\R[G]^{-1}$, one that would be an embedding for every compact group but would fail to be injective for the universal covering group of ${\rm Sl}(2,\R)$. \msk \section{The Weakly Complete Group Algebra of a Topological Group} \msk Let us complement our discussion at this stage by describing a relevant pair of adjoint functors. So let $\cal W\hskip-2pt A$ be the category of weakly complete unital $\K$-algebras and $\mathcal{T\hskip-1pt G}$ the category of topological groups. Then $$\Omega=(A\mapsto A^{-1}):{\cal W\hskip-2pt A}\to{\cal TG}$$ is a well defined functor by Lemma \ref{l:inversion-continuous}. It is rather directly seen to preserve arbitrary products and intersections. Hence by \cite{book3}, Proposition A3.51 it is a continuous functor (see loc.cit.\ Definition A3.50), that is, it preserves arbitrary limits. \msk \subsection{The Solution Set condition} In order to conclude from this information that $\Omega$ has in fact a left adjoint, we need to verify the so-called {\it Solution Set Condition} (see \cite{book3}, Definition A3.58 on p.~786).
\nin For this purpose we claim that for any topological group $G$ there is a {\it set} $S(G)$ of pairs $(\phi, A)$ with a continuous group morphism $\phi\colon G\to A^{-1}$ for some object $A$ of $\cal W\hskip-2pt A$ such that for every pair $(f, B)$, $f\colon G\to B^{-1}$ with a weakly complete unital algebra $B$ there is a pair $(\phi, A)$ in $S(G)$ and a $\cal{W\hskip-2pt A}$-embedding $e\colon A\to B$ such that $$f=G\mapright{\phi} A^{-1}\, \mapright{e|A^{-1}} B^{-1},$$ where $e|A^{-1}$ denotes the bijective restriction and corestriction of $e$. Indeed, each $f\colon G\to B^{-1}$ determines a unique smallest abstract unital subalgebra $C$ of $B$ generated by $f(G)$, and there is only a set of these ``up to equivalence''. Then on each of these there is only a set of algebra topologies and, a fortiori, only a set of them for which the corestriction is continuous; for each of these, there is at most a set of algebra completions up to isomorphism. So, up to equivalence there is only a set of pairs $(\phi,A)$, $\phi\colon G\to A^{-1}$ such that the unital algebra generated by $\phi(G)$ is dense in $A$. Any such set $S(G)$ will satisfy the claim. \msk Now we are in a position to apply the Left Adjoint Functor Existence Theorem (see \cite{book3}, Theorem A3.60) to conclude that $\Omega\colon {\cal{W\hskip-2pt A}}\to {\cal TG}$ has a left adjoint $\Lambda\colon {\cal TG}\to{\cal{W\hskip-2pt A}}$. We write the weakly complete unital algebra $\Lambda(G)$ as $\K[G]$ and call it the {\it weakly complete group algebra of} $G$.
We summarize this result: \msk \begin{Theorem} \label{th:wcga-thm} {\rm(The Weakly Complete Group Algebra Theorem)} \quad To each topological group $G$ there is attached functorially a weakly complete group algebra $\K[G]$ with a natural morphism $\eta_G\colon G\to \K[G]^{-1}$ such that the following universal property holds: \noindent For each weakly complete unital algebra $A$ and each morphism of topological groups $f\colon G\to A^{-1}$ there exists a unique morphism of weakly complete unital algebras \break $f'\colon \K[G]\to A$ restricting to a morphism $f''\colon\K[G]^{-1}\to A^{-1}$ of topological groups such that $f= f'\circ\eta_G$. \end{Theorem} $$ \begin{matrix}&{\rm top\ groups}&&\hbox to 7mm{} &{\rm wc\ algebras}\\ \noalign{\vskip3pt} \noalign{\hrule}\\ \noalign{\vskip3pt} \mdvertspace G&\mapright{\eta_G}&\K[G]^{-1}&\hbox to 7mm{} &\K[G]\\ \lmapdown{\forall f}&&\mapdown{f''}&\hbox to 7mm{}&\mapdown{\exists! f'}\\ \muvertspace A^{-1}&\lmapright{\id}&A^{-1}&\hbox to 7mm{}&A\\ \end{matrix} $$ \msk \nin {\bf Fact.} {\it If $G$ is one of the two groups of {\rm Example \ref{e:nonlinear}}, then the natural morphism $\eta_G\colon G\to \K[G]^{-1}$ is not injective.} However, the adjunction of the functors $A\mapsto A^{-1}$ (on the right) and $G\mapsto \K[G]$ (on the left) also has a back adjunction $$\epsilon_A\colon \K[A^{-1}]\to A$$ such that for each topological group $G$ and each continuous algebra morphism $f\colon \K[G]\to A$ there is a unique morphism of topological groups $f'\colon G\to A^{-1}$ such that $f=\epsilon_A\circ \K[f']$. (Cf.\ \cite{book3}, Proposition A3.36, p.~777). The general theory of adjunctions (as e.g.
in \cite{book3}, Proposition A3.38, p.~777) now tells us that we may formulate \msk \begin{Corollary} \label{c:alternate} For any weakly complete unital algebra $A$ and any topological group $G$ we have $$ (\forall A)\, \Big(A^{-1}\quad\mapright{\eta_{A^{-1}}}\quad \K[A^{-1}]^{-1} \quad\mapright{(\epsilon_A)^{-1}} A^{-1}\Big) =\id_{A^{-1}},\quad\hbox{and}$$ \medskip $$(\forall G)\,\Big(\K[G]\quad\mapright{\K[\eta_G]}\quad \K\big[\K[G]^{-1}\big] \quad \mapright{\epsilon_{\K[G]}}\quad \K[G]\Big)=\id_{\K[G]}.$$ \end{Corollary} In other words: Many topological groups are semidirect factors of unit groups of weakly complete algebras, for instance if they are unit groups to begin with, and many weakly complete unital algebras are homomorphic retracts (semidirect summands) of weakly complete group algebras, for instance, if they are group algebras to begin with. \bsk However, more importantly, the universal property implies conclusions which are relevant for the concrete structure theory of $\K[G]$: \msk \begin{Proposition} \label{p:generation} Let $G$ be a topological group. Then the subalgebra linearly spanned by $\eta_G(G)$ in $\K[G]$ is dense in $\K[G]$. \end{Proposition} \begin{Proof} \ Let $S=\overline{\Span}\(\eta_G(G)\)\subseteq \K[G]$ be the closed subalgebra linearly spanned by $\eta_G(G)$. Let $f_S\colon G\to S^{-1}$ be a morphism of topological groups and $f\colon G\to \K[G]$ the coextension of $f_S$. Then by the universal property of $\K[G]$ there is a unique morphism $f'\colon \K[G]\to S$ of weakly complete unital algebras such that $f'\circ \eta_G=f$, implying that $(f'|S)\circ \eta_G^{\rm o} =f_S$ with the corestriction $\eta_G^{\rm o}\colon G\to S$ of $\eta_G$ to $S$. Thus $S$ has the universal property of $\K[G]$; then the uniqueness of $\K[G]$ implies $S=\K[G]$.
\end{Proof} \bsk We recall from \cite{book3}, Corollary 2.29{\color{\red}(ii)} that it can be proved in any theory of compact groups at a very early stage that \ssk \nin{\it every compact group has an isomorphic copy in the group of units of a weakly complete unital algebra.} \ssk \nin As a consequence we have \msk \begin{Theorem} \label{th:gr-alg-comp-gr} {\rm (The Group Algebra of a Compact Group)} If $G$ is a compact group, then $\eta_G\colon G\to\R[G]^{-1}$ induces an isomorphism of topological groups onto its image. \end{Theorem} \nin In other words, {\it \ssk \nin any compact group may be considered as a subgroup of the group of units of its weakly complete real group algebra.} \bsk \subsection{The Group Algebra Functor $\K[-]$ is Multiplicative} If $A$ and $B$ are weakly complete algebras, we have $(a_1\otimes b_1)(a_2\otimes b_2) =a_1a_2\otimes b_1b_2$ which implies $$A^{-1}\otimes B^{-1}\subseteq (A\otimes B)^{-1},$$ where we have used the natural inclusion function $j\colon A\times B\to A\otimes B$ and write $A^{-1}\otimes B^{-1}$ in place of $j(A^{-1}\times B^{-1})$. \msk Now let $G$ and $H$ be topological groups. Then $$\eta_G(G)\otimes \eta_H(H)\subseteq \K[G]^{-1}\otimes \K[H]^{-1} \subseteq \(\K[G]\otimes \K[H]\)^{-1},$$ and so we have the morphism $$G\times H\to \(\K[G]\otimes \K[H]\)^{-1},$$ $(g,h)\mapsto \eta_G(g)\otimes \eta_H(h)$ which, by the universal property of $\K[-]$ gives rise to a unique morphism $\alpha \colon \K[G\times H]\to \K[G]\otimes \K[H]$ such that $$ (\forall (g,h)\in G\times H)\, \alpha(\eta_{G\times H}(g,h))=\eta_G(g)\otimes \eta_H(h).\leqno(1) $$ On the other hand, the morphisms $j_G\colon G\to G\times H$, $j_G(g)=(g,1_H)$ and $p_G\colon G\times H\to G$, $p_G(g,h)=g$ yield $p_Gj_G=\id_G$. Therefore $\K[p_G]\colon \K[G\times H]\to \K[G]$ is an algebra retraction, and via $\K[j_G]$ we may identify $\K[G]$ with a subalgebra of $\K[G\times H]$; likewise $\K[H]$ is an algebra retract of the algebra $\K[G\times H]$.
Since $(g,1)(1,h)=(g,h)$ in $G\times H$, with the identifications of $\K[G], \K[H]\subseteq \K[G\times H]$ we have $$ (\forall (g,h)\in G\times H)\, \eta_G(g)\eta_H(h)= \eta_{G\times H}(g,h) \in \K[G\times H].\leqno(2)$$ The function $$\K[G]\times \K[H]\to \K[G\times H],\quad (a,b)\mapsto ab$$ is a continuous bilinear map of weakly complete vector spaces; therefore the universal property of the tensor product in $\cal W$ yields a unique $\cal W$-morphism \cen{$\beta\colon \K[G]\otimes \K[H]\to \K[G\times H]$} \noindent such that $$(\forall a\in \K[G],\,b\in \K[H]) \, \beta(a\otimes b)= ab\in \K[G\times H].\leqno(3)$$ Now if for an arbitrary element $(g,h)\in G\times H$ we set $a=\eta_G(g)$ and $b=\eta_H(h)$, then we have $$\beta\(\eta_G(g)\otimes\eta_H(h)\)=\beta(a\otimes b)= ab=\eta_G(g)\eta_H(h)=\eta_{G\times H}(g,h).\leqno(4)$$ By Proposition \ref{p:generation}, $\eta_G(G)$ generates $\K[G]$ as weakly complete unital algebra and likewise $\eta_H(H)$ generates $\K[H]$ in this fashion, and the algebraic tensor product of $\K[G]$ and $\K[H]$ is dense in $\K[G]\otimes \K[H]$. Therefore, (4) implies $\beta\circ \alpha=\id_{\K[G\times H]}$. In other words, the diagram $$ \begin{matrix}\K[G\times H]&\mapright{\id_{\K[G\times H]}}&\K[G\times H]\\ \lmapdown{\alpha}&&\mapup{\beta}\\ \K[G]\otimes \K[H]&\lmapright{\id_{\K[G]\otimes \K[H]}}&\K[G]\otimes \K[H]\\ \end{matrix} $$ \medskip \noindent commutes. Similarly, let us look at $\alpha\circ\beta:\K[G]\otimes \K[H]\to \K[G]\otimes \K[H]$: We recall (4) and (1) and verify $$\alpha\(\beta(\eta_G(g)\otimes\eta_H(h))\)=\alpha(\eta_{G\times H}(g,h)) =\eta_G(g)\otimes\eta_H(h).$$ By the same argument as above we conclude $\alpha\circ\beta= \id_{\K[G]\otimes \K[H]}$.
Taking everything together, we have proved the following important result: \msk \begin{Theorem} \label{th:multiplicative} {\rm (Multiplicativity of the Group Algebra Functor $\K[-]$)} For two arbitrary topological groups $G$ and $H$ the natural morphisms of weakly complete unital algebras $\alpha\colon\K[G\times H]\to\K[G]\otimes\K[H]$ and $\beta\colon\K[G]\otimes\K[H]\to\K[G\times H]$ are isomorphisms which are inverses of each other. \end{Theorem} \bsk \subsection{Multiplication and Comultiplication on the Group Algebra $\K[G]$} Let $G$ be a topological group and $\delta_G\colon G\to G\times G$ the diagonal morphism $\delta_G(g)=(g,g)$. Together with the constant morphism $k_G\colon G\to \IdObject=\{1\}$ we have a comonoid $(\delta_G,k_G)$ according to the example following Definition \ref{d:comonoids}. Since the group-algebra functor $\K[-]$ is multiplicative we have {\it morphisms of weakly complete unital algebras} $\K[\delta_G]\colon \K[G]\to \K[G\times G]$ and $\K[k_G]\colon \K[G]\to \K[\{1\}]=\K$. By Theorem \ref{th:multiplicative} above we have an isomorphism $\alpha_G\colon\K[G\times G]\to \K[G]\otimes\K[G]$ which gives us the following observation: \msk \begin{Lemma} For any topological group $G$, the weakly complete group algebra $\K[G]$ supports a cocommutative and coassociative comultiplication $$\gamma_G\colon \K[G]\to \K[G]\otimes \K[G],\quad \gamma_G=\alpha_G\circ \K[\delta_G]$$ which is a morphism of weakly complete unital algebras, and there is a co-identity $\kappa_G\colon\K[G]\to\K$ which is an algebra morphism. \end{Lemma} \msk The following fairly immediate remark will be relevant: \begin{Remark} \label{r:grlike} Let $G$ be a topological group and $x\in\K[G]$. If $x\in \eta_G(G)$, then the following statement holds: $$\gamma_G(x)=x\otimes x\mbox{ and }\kappa_G(x)=1.\leqno(\dag)$$ The set of elements satisfying $(\dag)$ is linearly independent.
\end{Remark} \msk \begin{Proof} We recall the definition of $$\gamma_G=\left(\K[G]\mapright{\K[\delta_G]}\K[G\times G]\mapright{\alpha_G}\K[G]\otimes\K[G]\right). \leqno(*)$$ If $a=\eta_G(g)$ for some $g\in G$, then $\gamma_G(a)=\alpha_G\(\eta_{G\times G}(g,g)\)=a\otimes a$ by $(*)$ and by (1) above, and $\kappa_G(a)=1$ by the definition of $\kappa_G=\K[k_G]$. The linear independence is an exercise in linear algebra which one finds in \cite{hofbig}, pp.~66, 67. \end{Proof} \msk For each topological group $G$ the {\it opposite group} $G^{\rm op}$ is the underlying topological space of $G$ together with the multiplication $(g,h)\mapsto g{*}h$ defined by $g{*}h=hg$. The groups $G$ and $G^{\rm op}$ are isomorphic under the function $\inv_G\colon G\to G^{\rm op}$, $\inv_G(g)=g^{-1}$. Analogously, every topological algebra $A$ gives rise to an opposite algebra $A^{\rm op}$ on the same underlying topological vector space but with the multiplication defined by $a{*}b=ba$, giving us $$(A^{-1})^{\rm op}=(A^{\rm op})^{-1}$$ by definition, although $A^{\rm op}$ is not necessarily isomorphic to $A$. Consequently, $$((\K[G])^{-1})^{\rm op}=(\K[G]^{\rm op})^{-1}$$ and there are morphisms of topological groups $\eta_G\colon G\to \K[G]^{-1}$ and $\eta_{G^{\rm op}}\colon G^{\rm op}\to \K[G^{\rm op}]^{-1}$. Accordingly, we have an isomorphism $\K[\inv_G]\colon\K[G]\to\K[G^{\rm op}]$ of weakly complete topological algebras and, consequently, an isomorphism of topological groups $\K[\inv_G]^{-1}\colon\K[G]^{-1}\to\K[G^{\rm op}]^{-1}$. This gives us a commutative diagram $$ \begin{matrix} G&\mapright{\eta_G}&\K[G]^{-1}\\ \lmapdown{\inv_G}&&\mapdown{\K[\inv_G]^{-1}}\\ G^{\rm op}&\lmapright{\eta_{{G^{\rm op}}}}&\K[G^{\rm op}]^{-1},\\ \end{matrix} $$ producing an isomorphism of weakly complete algebras $\K[G]\to \K[G^{\rm op}]$.
But we also have a commutative diagram $$ \begin{matrix}G&\mapright{\eta_G}&\K[G]^{-1}\hfill\\ \lmapdown{\inv_G}&&\mapdown{\inv_{\K[G]^{-1}}}\hfill\\ G^{\rm op}&\lmapright{\eta_{{G^{\rm op}}}}&(\K[G]^{-1})^{\rm op} =(\K[G]^{\rm op})^{-1}.\\ \end{matrix} $$ Let us abbreviate $f\defi \eta_{G^{\rm op}}\circ\inv_G\colon G\to(\K[G]^{\rm op})^{-1}$. So by the adjunction formalism, there is a unique involutive isomorphism $f'\colon \K[G]\to \K[G]^{\rm op}$ of weakly complete algebras such that $f=\(f'|\K[G]^{-1}\)\circ\eta_G$. \bsk We have a grounding functor $A\mapsto|A|$ from the category $\WA$ of weakly complete algebras to the category $\W$ of weakly complete vector spaces, where $|A|$ is simply the weakly complete vector space underlying the weakly complete algebra $A$. With this convention we formulate the following definition: \begin{Definition} \label{l:symmetry} For each topological group $G$ there is a morphism of weakly complete vector spaces $$\sigma_G\defi |f'|\colon |\K[G]| \to |\K[G]^{\rm op}|=|\K[G]|, $$ often called {\it symmetry} {\color{\red} or {\it antipode}} such that $$(\forall g\in G)\, \sigma_G(\eta_G(g))=\eta_G(g^{-1})=\eta_G(g)^{-1}.$$ \end{Definition} \nin Equivalently, $$(\forall x\in \eta_G(G))\, \sigma_G(x)\.x=x\.\sigma_G(x)=1,$$ and, using the bilinearity of multiplication $(x,y)\mapsto xy$ in $\K[G]$ and defining $\mu_G\colon \K[G]\otimes \K[G]\to \K[G]$ by $\mu_G(x\otimes y)=xy$ and remembering from Remark \ref{r:grlike} that $\gamma_G(x)=x\otimes x$ for all $x=\eta_G(g)$ {\color{\red} with} some $g\in G$, once more, equivalently, $$(\forall x \in\eta_G(G))\, (\mu_G\circ(\sigma_G\otimes \id_{\K[G]})\circ\gamma_G)(x)=1.\leqno(+)$$ Since, by Proposition \ref{p:generation}, the weakly complete algebra $\K[G]$ is the closed linear span of $\eta_G(G)$, the continuous linear map $\mu_G\circ(\sigma_G\otimes \id_{\K[G]})\circ\gamma_G$ agrees with $x\mapsto \kappa_G(x)1$ on $\eta_G(G)$ and hence on all of $\K[G]$.
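The passage from $\eta_G(G)$ to all of $\K[G]$ in the last step can be made explicit on finite linear combinations; the computation below uses only the linearity of the maps involved and Remark \ref{r:grlike}:

```latex
% For a finite linear combination x = \sum_j \lambda_j\,\eta_G(g_j),
% Remark \ref{r:grlike} and linearity give
\[
  \gamma_G(x)=\sum_j \lambda_j\, \eta_G(g_j)\otimes\eta_G(g_j),
\]
% and therefore
\[
  (\mu_G\circ(\sigma_G\otimes\id_{\K[G]})\circ\gamma_G)(x)
  =\sum_j \lambda_j\, \eta_G(g_j^{-1})\,\eta_G(g_j)
  =\Bigl(\sum_j \lambda_j\Bigr)\cdot 1
  =\kappa_G(x)\cdot 1,
\]
% which establishes the identity on the dense subspace
% \Span(\eta_G(G)); by continuity it extends to all of \K[G].
```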
Thus we have shown the following \msk \begin{Proposition} \label{p:prelim-hopf} For any topological group $G$, the following diagram involving natural morphisms of weakly complete vector spaces commutes, where $W_G=|\K[G]|$: $$ \begin{matrix} W_G\otimes W_G&\hfill\mapright{\sigma_G\otimes \id_{W_G}}\hfill&W_G\otimes W_G\\ \lmapup{\gamma_G}&&\mapdown{\mu_G}\\ W_G&\lmapright{\iota_G \circ |\kappa_G|}&W_G,\\ \end{matrix} $$ \msk \noindent where $\kappa_G\colon \K[G]\to \K=\K[\IdObject]$, $\IdObject=\{1\}$ is induced by the constant morphism and $\iota_G\colon \K=\K[\IdObject]\to\K[G]$ by the unique inclusion $\IdObject\to G$. In this diagram, the multiplication $\mu_G$ defines the algebra structure of the group algebra $\K[G]$ on $W_G$ and the comultiplication $\gamma_G$ is an algebra morphism. \end{Proposition} \msk \subsection{Weakly Complete Bialgebras and Hopf-Algebras} We need to view the basic result of Proposition \ref{p:prelim-hopf} in its more general abstract framework. \nin In an arbitrary monoidal category $({\cal A},\otimes)$ we have monoids $A\otimes A\mapright{m} A\mapleft{u} \IdObject$ and comonoids $\IdObject\mapleft{k} C\mapright{c} C\otimes C$. Every pair of monoids $A$ and $B$ gives rise to a monoid $A\otimes B$. In particular, for each monoid $A$ also $A\otimes A$ is a monoid. An $\cal A$-morphism $f\colon A\to B$ between monoids is a {\it monoid morphism} if it respects the monoid structure in an evident fashion. These matters are discussed in a comprehensive fashion in \cite{book3} in Appendix 3 in the section entitled ``Commutative Monoidal Categories and their Monoid Objects''; see pp.~787 ff., Example A3.61ff.
\msk \begin{Definition} \label{d:hopf} (a) A {\it bimonoid} in a commutative monoidal category is an object together with both a monoid structure $(m,u)$ and comonoid structure $(c,k)$, $$ A\mapright{c}A\otimes A\mapright{m}A\hbox{\quad and\quad} \IdObject\mapright{{\color{\red}u}}A\mapright{{\color{\red}k}}\IdObject,$$ such that $c$ is a monoid morphism. \msk \nin (b) A {\it group} (or often {\it group object}) in a commutative monoidal category is a bimonoid with cocommutative comultiplication and with an $\cal A$-morphism $\sigma\colon A\to A$, called {\it inversion} or {\it symmetry} (as the case may be) which makes the following diagram commutative $$ \begin{matrix} A\otimes A&\mapright{\sigma\otimes\id}&A\otimes A\\ \lmapup{c}&&\mapdown{m}\\ A&\lmapright{uk}&A,\\ \end{matrix}\leqno(\sigma) $$ plus a diagram showing its compatibility with the comultiplication (see \cite{book3}, Definition A3.64.ii). {\color{\red} \nin (c) In our commutative monoidal categories $({\mathcal V},\otimes)$ and $({\mathcal W},\otimes)$ of $\K$-vector spaces, respectively, weakly complete $\K$-vector spaces, a group object $(A,m,c,u,k,\sigma)$ is called a {\it Hopf algebra}, respectively, {\it a weakly complete Hopf algebra}.} \end{Definition} \msk In reality, the definition of a bimonoid is symmetric, and the equivalent conditions that $c$ be a monoid morphism, respectively, that $m$ be a comonoid morphism can be expressed in one commutative diagram (see \cite{book3}, Diagram following Definition A3.64, p.~793). Also it can be shown that in a group the diagram arising from the diagram $(\sigma)$ by replacing $\sigma\otimes \id$ by $\id\otimes\sigma$ commutes as well.
\msk {\color{\red} Since the group objects and group morphisms in a commutative monoidal category form a category (see e.g.\ \cite{book3}, paragraph following Exercise EA3.34, p.~793), the Hopf-algebras in $({\mathcal V},\otimes)$ and $({\mathcal W},\otimes)$ do constitute categories in their respective contexts.} \msk \begin{Example} (a) In the category of sets (or indeed in any category with finite products and terminal object) every monoid is automatically a bimonoid if one takes as comultiplication the diagonal morphism $X\to X\times X$. \msk (b) A bimonoid in the category of vector spaces with the tensor product as multiplication is called a {\it bialgebra}. Sometimes it is also called a {\it Hopf-algebra}, and sometimes this name is reserved for a group object in this category. The identity $u\colon \K\to A$ of an algebra identifies $\K$ with the subalgebra $\K\.1$ of $A$. The coidentity $k\colon A\to\K$ of a bialgebra {\color{\red} may be} considered as the endomorphism $uk\colon A\to A$ mapping $A$ onto the subalgebra $\K\.1$ and then is often called the {\it augmentation}. \end{Example} In the present context we restrict our attention to the commutative monoidal categories $({\cal V},\otimes)$ and $({\cal W},\otimes)$ of $\K$-vector spaces, respectively, weakly complete $\K$-vector spaces with their respective tensor products. In terms of the general terminology which is now available to us, the result in Proposition \ref{p:prelim-hopf} may be rephrased as follows: \bsk \begin{Theorem} \label{th:hopf} {\rm (The Hopf-Algebra Theorem for Weakly Complete Group Algebras)} For every topological group $G$, the group algebra $\K[G]$ is a weakly complete Hopf-algebra with a comultiplication $\gamma_G$ and symmetry $\sigma_G$. \end{Theorem} \bsk \bsk \section{Grouplike and Primitive Elements} \bsk In any theory of Hopf algebras it is common to single out two types of special elements, and we review them in the case of weakly complete Hopf algebras.
\msk \begin{Definition} \label{d:grouplike} Let $A$ be a weakly complete coassociative coalgebra with comultiplication $c$ and coidentity $k$. Then an element $a\in A$ is called {\it grouplike} if $k(a)=1$ and $c(a)=a\otimes a$. \msk If $A$ is a bialgebra, $a\in A$ is called {\it primitive}, if $c(a)= a\otimes 1 + 1\otimes a$. \end{Definition} For any $a\in A$ with $c(a)=a\otimes a$, the conditions $a\ne0$ and $k(a)=1$ are equivalent. \msk These definitions apply, in particular, to any weakly complete Hopf algebra and thus especially to each weakly complete group algebra $\K[G]$. By Remark \ref{r:grlike} the subset $\eta_G(G)$ is a linearly independent set of grouplike elements. \begin{Remark} \label{r:warning} In at least one source on bialgebras in earlier contexts, the terminology conflicts with the one introduced here which is now commonly accepted. In \cite{hofbig}, p.~66, Definition 10.17, the author calls a grouplike element in a coalgebra {\it primitive}. Thus some caution is in order concerning terminology. Primitive elements in the sense of Definition \ref{d:grouplike} do not occur in \cite{hofbig}. \end{Remark} \msk The following observations {\color{\red} are proved straightforwardly:} \begin{Lemma} \label{l:monoid} The {\color{\red} set $G$ of} grouplike elements of a weakly complete bialgebra $A$ is a closed submonoid of $(A,\.)$ and {\color{\red} the set $L$ of} primitive elements of $A$ is a closed Lie subalgebra of $A_{\rm Lie}$. {\color{\red} If $A$ is a Hopf algebra, then $G$ is a closed subgroup of $A^{-1}$.} \end{Lemma} \bsk For a morphism $f\colon W_1\to W_2$ of weakly complete vector spaces let $f'=\Hom(f,\K)\colon W_2'\to W_1'$ denote the dual morphism of vector spaces. For a weakly complete coalgebra $A$ let $A' = \Hom(A,\K)$ be the dual of $A$. Then $A'$ is an algebra: If $c\colon A\to A\otimes A$ is its comultiplication, then $c'\colon A'\otimes A'\to A'$ is the multiplication of $A'$.
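Before relating these notions to characters and derivatives by duality, a standard hedged computation may illustrate how the two types of elements interact: assuming that the exponential series $\exp p=\sum_{n\ge0}p^n/n!$ converges in $A$ (as it does in a weakly complete algebra over $\R$ or $\C$) and that $c$ and $k$ are continuous algebra morphisms, every primitive element exponentiates to a grouplike one:

```latex
% Assumption: exp converges in A, and c, k are continuous algebra
% morphisms.  Let p be primitive, i.e. c(p) = p\otimes 1 + 1\otimes p.
% Since p\otimes 1 and 1\otimes p commute in A\otimes A,
\[
  c(\exp p)=\exp\bigl(c(p)\bigr)
           =\exp(p\otimes 1)\,\exp(1\otimes p)
           =(\exp p\otimes 1)(1\otimes \exp p)
           =\exp p\otimes \exp p,
\]
% and k(\exp p) = \exp(k(p)) = \exp 0 = 1, since k(p) = 0 for every
% primitive p (apply k\otimes k to c(p) to get k(p) = 2k(p)).
```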
For a unital algebra $R$ and a weakly complete coalgebra $A$ in duality let $(a,g)\mapsto\<a,g\>:R\times A\to\K$ denote the pairing between $R$ and $A$, where for $f\in R=\Hom(A,\K)$ and $a\in A$ we write $\<f,a\>=f(a)$. {\color{\red} \begin{Definition} \label{d:sixfour} Let $R$ be a unital algebra over $\K$. Then a {\it character} of $R$ is a morphism of unital algebras $R\to \K$. The subset of $\K^R$ consisting of all algebra morphisms inherits the topology of pointwise convergence from $\K^R$ and as a topological space is called the {\it spectrum} of $R$ and is denoted $\Spec(R)$. If $k\colon R\to \K$ is a morphism of algebras, then an element $d\in{\cal V}(R,\K)$ is called a \emph{derivative} (sometimes also called a \emph{derivation} or \emph{infinitesimal character}) of $R$ (with respect to $k$) if it satisfies $$ (\forall x, y\in R)\, d(xy)= d(x)k(y)+k(x)d(y).$$ The set of all derivatives of $R$ is denoted $\Der(R)$. \end{Definition}} \msk Now let $R$ be a unital algebra and $A\defi R^*$ its dual weakly complete coalgebra with comultiplication $c$ such that $ab=c'(a\otimes b)$ for all $a,b\in R$. In these circumstances we have: \begin{Proposition} \label{p:grouplike} Let $g\in A$. Then the following statements are equivalent: \begin{enumerate}[\rm(i)] \item $g\in A$ is grouplike in the coalgebra $A$. \item $g\colon R\to \K$ is a character of $R$, that is, is an element of $\Spec(R)$. \end{enumerate} \end{Proposition} \begin{Proof} The dual of $A\otimes A$ is $R\otimes R$ in a canonical fashion such that for $r_1,r_2\in R$ and $h_1,h_2\in A$ we have $$\<r_1\otimes r_2,h_1\otimes h_2\>=\<r_1,h_1\>\<r_2,h_2\>.$$ The linear combinations $L=\sum_{j=1}^n r_j\otimes s_j$ with $r_j, s_j\in R$ exhaust the dual $(A\otimes A)'\cong R\otimes R$ and hence separate the points of $A\otimes A$.
So two elements $x,y\in A\otimes A$ agree if and only if for all such linear combinations $L$ we have $$\<L,x\>=\<L,y\>,$$ and this clearly holds if and only if for all $r,s\in R$ we have $$\<r\otimes s,x\>=\<r\otimes s,y\>.$$ We apply this to $x=c(g)$ and $y=g\otimes g$ and observe that (i) holds if and only if $$(\forall r,s\in R)\,\<r\otimes s,c(g)\>=\<r\otimes s, g\otimes g\>. \leqno(0)$$ Now $$g(rs)=\<rs,g\>=\<m(r\otimes s),g\>=\<r\otimes s,c(g)\>.\leqno(1)$$ $$g(r)g(s)=\<r,g\>\<s,g\>=\<r\otimes s,g\otimes g\>.\leqno(2)$$ So in view of (0),(1) and (2), assertion (i) holds if and only if $g(rs)=g(r)g(s)$ for all $r, s\in R$ is true. Since $g$ is a linear form on $R$, this means exactly that a nonzero $g$ is a morphism of unital algebras, i.e., $g\in\Spec(R)$. \end{Proof} \bsk A particularly relevant consequence of Proposition \ref{p:grouplike} will be the following con{\color{\red}clusion} for the group algebra $\R[G]$ of a {\it compact} group. Recall that after Theorem \ref{th:gr-alg-comp-gr} we always identify a compact group $G$ with a subgroup of $\R[G]$ in view of the embedding $\eta_G$. We shall see that the set $G\subseteq\R[G]$ contains already {\it all} grouplike elements (see Theorem \ref{th:comp-group-like} below). \msk Let us return to the primitive elements of a unital bialgebra. {\color{\red} For this purpose assume that $R$ is not only a unital algebra, but a bialgebra over $\K$, which implies that its dual $A\defi R^*$ is not only a coalgebra but a bialgebra as well.} \begin{Proposition} \label{l:primitive} Let $R$ be a unital bialgebra and $d\in A$. Then the following statements are equivalent: \begin{enumerate}[\rm(i)] \item $d$ is primitive in the bialgebra $A$. \item $d\colon R\to\K$ is a derivative of $R$ with respect to the coidentity $k$, that is, an element of $\Der(R)$.
\end{enumerate} \end{Proposition} The procedure of the proof of Proposition \ref{p:grouplike} allows us to leave the explicit proof of this proposition as an exercise. \msk \begin{Definition} \label{d:grlike} Let $A$ be a weakly complete Hopf algebra. Then we write $\Gamma(A)$ for the subset of all grouplike elements of $A$ and $\Pi(A)$ for the subset of all primitive elements. \end{Definition} In view of Lemma \ref{l:monoid}, the following proposition is now readily verified: \begin{Proposition} \label{l:substructures} In a weakly complete Hopf algebra $A$, the subset $\Gamma(A)$ is a closed subgroup of $A^{-1}$ and $\Pi(A)$ is a closed Lie subalgebra of $A_{\rm Lie}$. If $f\colon A\to B$ is a morphism of weakly complete Hopf algebras, then $f(\Gamma(A)) \subseteq \Gamma(B)$, and $f(\Pi(A))\subseteq \Pi(B)$. \end{Proposition} \msk {\color{\red} Accordingly, if $\mathcal{WH}$ denotes the category of weakly complete Hopf algebras, then $\Gamma$ defines a functor from $\mathcal{WH}$ to the category of topological groups $\mathcal{T\hskip-1pt G}$, also called {\it the grouplike grounding}, while $\Pi$ defines a functor from $\mathcal{WH}$ to the category of topological Lie algebras over $\K$. The functor $\HH\defi(G\mapsto \K[G]): \mathcal{T\hskip-1pt G}\to \mathcal{WH}$ from topological groups to weakly complete Hopf algebras was known to us since Theorem \ref{th:wcga-thm} as a functor into the bigger category of all weakly complete unital algebras. That theorem will now be sharpened as follows: \begin{Theorem} \label{p:grounding} {\rm (The Weakly Complete Group Hopf Algebra Adjunction Theorem)}\quad The functor $\HH\colon \mathcal{T\hskip-1pt G}\to \mathcal{WH}$ from topological groups to weakly complete Hopf algebras is left adjoint to the grouplike grounding $\Gamma\colon \mathcal{WH}\to \mathcal{T\hskip-1pt G}$.
\end{Theorem} In other words, for a topological group $G$ there is a natural morphism of topological groups $\eta_G\colon G\to \Gamma(\HH(G))= \Gamma(\K[G])$ such that for each morphism of topological groups $f\colon G\to \Gamma(A)$ for a weakly complete Hopf algebra $A$ there is a unique morphism of weakly complete Hopf algebras ${\color{\red}f'}\colon \HH(G)\to A$ such that $f(g)=f'(\eta_G(g))$ for all $g\in G$: $$ \begin{matrix}&\mathcal{T\hskip-1pt G}&&\hbox to 7mm{} &\mathcal{WH}\\ \noalign{\vskip3pt} \noalign{\hrule}\\ \noalign{\vskip3pt} \mdvertspace G&\mapright{\eta_G}&\Gamma(\HH(G))&\hbox to 7mm{} & \HH(G)=\K[G]\\ \lmapdown{\forall f}&&\mapdown{\Gamma(f')}&\hbox to 7mm{}& \mapdown{\exists! f'}\\ \muvertspace \Gamma(A)&\lmapright{\id}&\Gamma(A)&\hbox to 7mm{}&A\\ \end{matrix} $$ } \msk \begin{Proof} Let $A$ be a weakly complete Hopf algebra and $f\colon G\to \Gamma(A)$ a continuous group morphism. Since $A$ is in particular a weakly complete associative unital algebra and $\Gamma(A)\subseteq A^{-1}$, by the Weakly Complete Group Algebra Theorem \ref{th:wcga-thm} there is a unique morphism $f'\colon\K[G]\to A$ of weakly complete algebras such that $f(g)=f'(\eta_G(g))$ for all $g\in G$. Since each $\eta_G(g)$ is grouplike by Remark \ref{r:grlike} we have $\eta_G(G)\subseteq \Gamma(\HH(G))$. We shall see below that the morphism $f'$ of weakly complete algebras is indeed a morphism of weakly complete Hopf algebras and therefore maps grouplike elements to grouplike elements. Hence $f'$ maps $\Gamma\(\HH(G)\)$ into $\Gamma(A)$. We now have to show that $f'$ is a morphism of Hopf algebras, that is, that it respects (a) the comultiplication, (b) the coidentity, and (c) the symmetry. \nin For (a) we must show that the following diagram commutes: $$\begin{matrix} \K[G]&\mapright{c_{\K[G]}}&\K[G]\otimes\K[G]&\cong& \K[G\times G]\cr \lmapdown{f'}&&\mapdown{f'\otimes f'}&&\cr A&\mapright{c_A}&A\otimes A.
&&\end{matrix}$$ Since $\K[G]$ is generated as a topological algebra by $\eta_G(G)$ by Proposition \ref{p:generation}, it suffices to track all elements $x=\eta_G(g)\in \K[G]$ for $g\in G$. Every such element is grouplike in $\K[G]$ by Remark \ref{r:grlike}, and so $(f'\otimes f')c_{\K[G]}(x)= (f'\otimes f')(x\otimes x)=f'(x)\otimes f'(x)$ in $A\otimes A$, while on the other hand $f'(x)=f(g)\in \Gamma(A)$, whence $c_A(f'(x))=f'(x)\otimes f'(x)$ as well. This proves (a). For (b) we must show that the following diagram commutes: $$\begin{matrix} \K[G]&\mapright{k_{\K[G]}}& \K&\cong& \K[\{1_G\}]\cr \lmapdown{f'}&&\mapdown{\id_\K}\cr A&\mapright{k_A}&\K. &&\end{matrix}$$ Again it suffices to check the elements $x=\eta_G(g)$. Since the coidentities always map grouplike elements to $1$, this is a trivial exercise. Finally consider (c), where again we follow all elements $x=\eta_G(g)$. On the one hand we have $f'(\sigma_{\K[G]}(x)) =f'(x^{-1})=f'(x)^{-1}$ in $A^{-1}$. But again $f'(x)$ is grouplike, and thus $\sigma_A(f'(x))=f'(x)^{-1}$, which takes care of case (c), and this completes the proof of the theorem. \end{Proof} \msk As with any adjoint pair of functors there is an alternative way to express the adjunction in the preceding theorem: see e.g.\ \cite{book3}, Proposition A3.36, p.~777: \begin{Corollary} \label{c:grounding} For each weakly complete Hopf algebra $A$ there is a natural morphism {\color{\red} of Hopf algebras} $\epsilon_A\colon \HH(\Gamma(A))\to A$ such that for any topological group $G$ and any morphism of {\color{\red} Hopf algebras} $\phi\colon \HH(G)\to A$ there is a unique continuous group morphism $\phi'\colon G\to \Gamma(A)$ such that for each $x\in \HH(G)=\K[G]$ one has $\phi(x)=\epsilon_A(\K[\phi'](x))$, where $\K[\phi']=\HH(\phi')$.
\end{Corollary} The formalism described in Corollary \ref{c:alternate} can be formulated for the present adjunction as well: \begin{Corollary} \label{c:alternate2} For any weakly complete Hopf algebra $A$ and any topological group $G$ we have $$ (\forall A)\, \Big(\Gamma(A)\quad\mapright{\eta_{\Gamma(A)}}\quad \Gamma(\K[\Gamma(A)]) \quad \mapright{\Gamma(\epsilon_A)}\ \Gamma(A)\Big) =\id_{\Gamma(A)},\hbox{ and}\leqno(1)$$ \medskip $$(\forall G)\,\Big(\K[G]\quad\mapright{\K[\eta_G]}\quad \K\big[\Gamma(\K[G])\big] \quad \mapright{\epsilon_{\K[G]}}\quad \K[G]\Big)=\id_{\K[G]}.\leqno(2)$$ \end{Corollary} {\color{\red} \begin{Definition}\label{d:span-grl} For any weakly complete Hopf algebra $A$ we let $\SS(A)$ denote the closed linear span $\overline{\Span}\(\Gamma(A)\)$ in $A$. \end{Definition} Since $\Gamma(A)$ is a multiplicative subgroup of $(A,\cdot)$, clearly $\SS(A)$ is a closed subalgebra of $A$. Similarly, one easily verifies $c\(\Gamma(A)\)\subseteq \Gamma(A\otimes A) \subseteq \SS(A\otimes A)$ for the comultiplication of $A$, and so $c\(\SS(A)\) \subseteq \SS(A\otimes A)$. Therefore \ssk \nin {\it $\SS(A)$ is a Hopf subalgebra of $A$.} \msk \begin{Lemma} \label{l:generation2} If $A=\K[G]$ is the weakly complete group algebra of a topological group, then $A=\SS(A)$. \end{Lemma} \begin{Proof} By Remark \ref{r:grlike}, $$\eta_G(G)\subseteq \Gamma(A).\leqno(1)$$ By Proposition \ref{p:generation}, $$\overline{\Span}\(\eta_G(G)\)=A.\leqno(2)$$ Hence $$\SS(A)=\overline{\Span}\(\Gamma(A)\)=A.\leqno(3)$$ \vglue-17pt \end{Proof} \begin{Corollary} \label{c:epsilon-epic} In the circumstances of {\rm Corollary \ref{c:grounding}}, $$\im(\epsilon_A)=\epsilon_A\(\HH\(\Gamma(A)\)\)=\SS(A).$$ \end{Corollary} \begin{Proof} Set $B=\HH(\Gamma(A))$; then $\epsilon_A\colon B\to A$ is a morphism of Hopf algebras, and so, in particular, a morphism of weakly complete vector spaces. Hence $\im(\epsilon_A)= \epsilon_A(B)$ is a closed Hopf subalgebra of $A$.
Since $\epsilon_A$ is a morphism of Hopf algebras, $\epsilon_A(\Gamma(B))\subseteq \Gamma(A)$ and thus $\epsilon_A\(\SS(B)\)\subseteq \SS(A)$. By Lemma \ref{l:generation2} we have $B=\SS(B)$, whence $\im(\epsilon_A)=\epsilon_A\(\SS(B)\)\subseteq\SS(A)$. On the other hand, by Corollary \ref{c:alternate2}, we have $\Gamma(A)\subseteq \im(\epsilon_A)$ and since $\im(\epsilon_A)$ is closed, we conclude $\SS(A)\subseteq \im(\epsilon_A)$ which completes the proof. \end{Proof} In particular, {\it $\epsilon_A$ is a quotient homomorphism if and only if $A=\SS(A)$.} \msk After we have identified the image of the Hopf algebra morphism $\epsilon_A$, the question of its kernel arises. Thus let $J=\ker\epsilon_A$. As a kernel of a morphism of Hopf algebras, $J$ is, firstly, the kernel of a morphism of weakly complete algebras, and secondly, satisfies the condition $$ c(J)\subseteq A\otimes J + J\otimes A.$$ (See e.g.\ \cite{hofbig}, pp.\ 50ff., notably, Proposition 10.6 with appropriate modifications.) We leave the significance of this kernel open for the time being and return to it in the context of compact groups. } Here we record what is known on the pro-Lie group $\Gamma(A)\subseteq A^{-1}$ for a weakly complete Hopf algebra $A$ in general. \bsk It was first observed in \cite{dah} that any weakly complete Hopf algebra $A$ accommodates its grouplike elements (and so the spectrum $\Spec(R)$ of the dual Hopf algebra of $A$) and its primitive elements (and hence the derivatives $\Der(R)$ of the dual) in a particularly satisfactory set-up for whose proof we refer to \cite{dah}, \cite{dahii}, or \cite{hofmor}.
In conjunction with the present results, we can formulate the following theorem: \begin{Theorem} \label{th:hopf-prolie} Let $A$ be a weakly complete $\K$-Hopf-algebra and $A^{-1}$ its group of units which is an almost connected pro-Lie group {\rm (Theorem \ref{th:fourth})} {\color{\red} containing as a closed subgroup the group $\Gamma(A)$ of grouplike elements. In particular, $\Gamma(A)$ is a pro-Lie group. Correspondingly, the set $\Pi(A)$ of primitive elements of $A$ is a closed Lie subalgebra of the pro-Lie algebra $A_{\rm Lie}$ {\rm(Theorem \ref{th:second})} and may be identified with the Lie algebra $\L(\Gamma(A))$ of $\Gamma(A)$. The exponential function $\exp_A\colon A_{\rm Lie}\to A^{-1}$ restricts to the exponential function $\exp_{\Gamma(A)}:\L(\Gamma(A))\to \Gamma(A)$.} \end{Theorem} Again, we shall pursue this line {\color{\red} below} by applying it specifically to the real group Hopf algebras $A\defi \R[G]$ in the special case that $G$ is a {\it compact} group by linking it with particular information we have on compact groups. But first we have to explore duality more explicitly on the Hopf algebra level. {\color{\red} \section {The Dual of Weakly Complete Hopf Algebras and of Group Algebras} } In this section we will have a closer look at the dual space $A'$ of a weakly complete {\color{\red} Hopf} algebra $A$. We let $G$ denote the topological group of grouplike elements $g\in A$. The underlying weakly complete vector space of $A$ is a topological left and right $G$-module with the module operations $$ \begin{matrix} &(g,a)\mapsto g\.a: &G\times A\to A,\ &g\.a:= ga, &\mbox{and}\\ &(a,g)\mapsto a\.g: &G\times A\to A,\ &a\.g:= ag.\\ \end{matrix} $$ Recall that $\I(A)$ is the filter basis of closed two-sided ideals $J$ of $A$ such that $A/J$ is a finite dimensional algebra and that $A\cong\lim_{J\in\I(A)}A/J$.
We can clearly reformulate Corollary \ref{c:ideal-filter} in terms of $G$-modules as follows: \begin{Lemma} \label{l:cofinite} For the topological group $G=\Gamma(A)$, the $G$-module $A$ has a filter basis $\I(A)$ of closed two-sided submodules $J{\subseteq}A$ such that $\dim(A/J){<}\infty$ and that $A=\lim_{J\in\I(A)} A/J$ is a strict projective limit of finite dimensional $G$-modules. The filter basis $\I(A)$ in $A$ converges to $0\in A$. \end{Lemma} For a $J\in\I(A)$ let $J^\perp=\{f\in A': (\forall a\in J)\,\<f,a\>=0\}$ denote the annihilator of $J$ in the dual $A'$ of $A$. We compare the ``Annihilator Mechanism'' from \cite{book3}, Proposition 7.62 and observe the following configuration: $$ \begin{matrix}\mdddvertspace A&\hskip 2truecm&\{0\}&&\\ \Big|&&\Big|&\Bigg\}&\!\!\cong\ (A/J)' \\ \muuuvertspace J&&J^\perp&&\\ \Big|&&\Big|&\Bigg\}&\!\!\cong\ J'\\ \{0\}&&\hphantom{.}A'.&&\\ \end{matrix} $$ \medskip \noindent In particular we recall the fact that $J^\perp\cong (A/J)'$ showing that $J^\perp$ is a finite-dimensional $G$-module on either side. By simply dualizing Lemma \ref{l:cofinite} we obtain \begin{Lemma} \label{l:finite} For the topological group $G=\Gamma(A)$, the dual $G$-module $R\defi A'$ of the weakly complete $G$-module $A$ has an up-directed set $\D(R)$ of finite-dimensional two-sided $G$-submodules (and $\K$-coalgebras!) $F\subseteq R$ such that $R$ is the direct limit \[ R= \colim\limits_{F\in\D(R)}F=\bigcup_{F\in \D(R)}F. \] The colimit is taken in the category of (abstract) $G$-modules, i.e.~modules without any topology. \end{Lemma} \noindent This means that for the topological group $G=\Gamma(A)$, every element $\omega$ of the dual $R=A'$ of $A$ is contained in a finite dimensional left- and right-$G$-module (and $\K$-subcoalgebra). We record this in the following form: \begin{Lemma} \label{l:translates} Let $\omega\in A'$.
Then the linear spans $\Span(G\.\omega)$ and $\Span(\omega\.G)$ of the left orbit and the right orbit of $\omega$ are finite dimensional, and both are contained in a finite dimensional $\K$-subcoalgebra of $A'$. \end{Lemma} \noindent For any $\omega\in A'$ the restriction $f\defi \omega|G:G\to \K$ is a continuous function such that each of the sets of translates $f_g$, $f_g(h)=f(gh)$, respectively, ${}_gf$, ${}_gf(h)=f(hg)$, spans a finite dimensional vector subspace of the vector space $C(G,\K)$ of all continuous $\K$-valued functions on $G$. \msk \begin{Definition} \label{d:R(G,K)} For an arbitrary topological group $G$ we define $\RR(G,\K) \subseteq C(G,\K)$ to be that set of continuous functions $f\colon G\to \K$ for which the linear span of the set of translates ${}_gf$, ${}_gf(h)=f(hg)$, is a finite dimensional vector subspace of $C(G,\K)$. The functions in $\RR(G,\K)$ are called {\it representative functions.} \end{Definition} In Lemma \ref{l:translates} we saw that for a weakly complete Hopf algebra $A$ and its dual $A'$ (consisting of continuous linear forms) we have a natural linear map $$\tau_A\colon A'\to \RR(\Gamma(A),\K),\ \tau_A(\omega)(g)= (\omega|\Gamma(A))(g).$$ An element $\omega\in A'$ is in the kernel of $\tau_A$ if and only if $\omega(\Gamma(A))=\{0\}$ if and only if $\omega\(\SS(A)\)=\{0\}$ if and only if $\omega\in \SS(A)^\perp$. We therefore observe: \begin{Lemma} \label{l:tau-a} There is an exact sequence of $\K$-vector spaces $$0\to\SS(A)^\perp\mapright{\inc}A'\mapright{\tau_A}\RR(\Gamma(A),\K).$$ \end{Lemma} Let us complete this exact sequence by establishing the surjectivity of $\tau_A$ at least in the case that $A=\K[G]$ for a topological group $G$. Indeed, let us show that a function $f\in \RR(G,\K)$ for an arbitrary topological group $G$ is in the image of $\tau_A$. So let $f\in\RR(G,\K)$; then $U\defi \Span\{{}_gf:g\in G\}$ is an $n$-dimensional $\K$-vector subspace of $C(G,\K)$.
Accordingly, $M_U\defi\Hom(U,U)$ is an $n^2$-dimensional unital $\K$-algebra such that $\pi\colon G\to {\rm Gl}(U)$, $\pi(g)(u)={}_gu$, $u\in U$, is a continuous representation of $G$ on $U$. Now we employ the universal property of the group algebra $\K[G]$ expressed in Theorem \ref{th:wcga-thm} and find a unique morphism of weakly complete algebras $\pi'\colon \K[G]\to M_U$ inducing a morphism of topological groups $\pi'|\K[G]^{-1}:\K[G]^{-1}\to {\rm Gl}(U)$ such that $\pi=\(\pi'|\K[G]^{-1}\)\circ\eta_G$. If $\eta_G$ happens to be an embedding and $G$ is considered as a subgroup of $\K[G]^{-1}$, then the representation $\pi$ is just the restriction of $\pi'$ to $G$. Now we recall that $f\in U$ and note that for any endomorphism $\alpha\in M_U$ of $U$ we have $\alpha(f)\in U\subseteq C(G,\K)$. Then for the identity $1\in G$ the element $\alpha(f)(1)\in\K$ is well defined and so we obtain a linear map $L\colon M_U\to\K$ by defining $L(\alpha)=\alpha(f)(1)$. So we finally define the linear form $\omega_\pi=L\circ \pi'\colon \K[G]\to \K$ and calculate for any $g\in G$ the field element $$\omega_\pi(\eta_G(g))=L\(\pi'(\eta_G(g))\)= L(\pi(g))=\pi(g)(f)(1)={}_gf(1)=f(g).$$ Thus we have in fact shown the following \begin{Lemma} \label{l:rafael} For any topological group $G$ the following statements are equivalent for a continuous function $f\in C(G,\K)$: \begin{enumerate}[\rm(i)] \item $f\in\RR(G,\K)$. \item There is a continuous linear form $\omega\colon \K[G]\to \K$ such that $\omega\circ\eta_G=f$. \end{enumerate} \end{Lemma} Remember here that, in case $G$ is naturally embedded into $\K[G]$, condition (ii) just says \msk \hskip-30pt(ii$'$) $\omega|G=f$ {\it for some} $\omega\in \K[G]'$.
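An elementary illustration of Definition \ref{d:R(G,K)} and of the lemma may be helpful; the example below assumes nothing beyond the addition formula for the cosine:

```latex
% Example sketch: G = (\R,+) and f(h) = \cos h.  The translate
% {}_gf is {}_gf(h) = f(h+g) = \cos(h+g), and the addition formula
\[
  {}_gf(h)=\cos(h+g)=\cos g\,\cos h-\sin g\,\sin h
\]
% shows {}_gf \in \Span\{\cos,\sin\} for every g, so all translates
% span a 2-dimensional subspace U of C(\R,\R) and
% \cos \in \RR(\R,\R).  By contrast, a nonzero continuous function
% with compact support has infinitely many linearly independent
% translates and therefore is not a representative function.
```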
For special weakly complete Hopf algebras $A$ which are close enough to $\K[G]$, these insights allow us to formulate a concrete identification of the dual $A'$ of the group Hopf-algebra $\K[G]$ over the ground fields $\R$ or $\C$, where again we consider the pair $({\mathcal V}, {\mathcal W})$ of dual categories $\mathcal V$ of $\K$-vector spaces and $\mathcal W$ of weakly complete $\K$-vector spaces. For an arbitrary topological group $G$, let again $\K[G]'$ denote the topological dual of the weakly complete group Hopf-algebra $\K[G]$ over the field $\K\in\{\R,\C\}$. Then the following identification of the topological dual of $\K[G]$ in the category ${\mathcal V}$ provides the background for the topological dual of any weakly complete Hopf algebra generated by its grouplike elements: \begin{Theorem} \label{th:dual} {\rm(a)} For an arbitrary topological group $G$, the function $$\omega\mapsto \omega\circ \eta_G: \K[G]' \to \RR(G,\K)$$ is an isomorphism of $\K$-vector spaces. {\rm(b)} If $A$ is a weakly complete Hopf algebra satisfying $\SS(A)=A$, and if $G$ is the group $\Gamma(A)$ of grouplike elements of $A$, then $\tau_A\colon A' \to \RR(G,\K)$ implements an isomorphism onto a Hopf subalgebra of the Hopf algebra $\RR(G,\K)$ dual to the group Hopf algebra $\K[G]$. \end{Theorem} In the sense of (a) in the preceding theorem, $\K[G]'$ may be identified with the vector space $\RR(G,\K)$ of continuous functions $f\colon G\to \K$ for which the translates ${}_gf$ (where ${}_gf(h)=f(hg)$), equivalently the translates $f_g$ (where $f_g(h)=f(gh)$), span a finite dimensional $\K$-vector space. The vector space $\RR(G,\K)$ is familiar in the literature as the vector space of representative functions on $G$, where it is most frequently formulated for compact groups $G$ and where it is also considered as a Hopf-algebra. In that case, the isomorphism of Theorem \ref{th:dual} is also an isomorphism of Hopf algebras.
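For orientation, part (a) of the theorem may be tested against the circle group; the following is a sketch which assumes the classical Peter--Weyl description of the representative functions of the circle:

```latex
% Sketch (assumption: for the compact group G = \Ss^1 the space
% \RR(\Ss^1,\C) is the linear span of the characters
% \chi_n(z) = z^n, n \in {\mathbb Z}, by the Peter--Weyl theorem).
% Then
\[
  \RR(\Ss^1,\C)=\bigoplus_{n\in{\mathbb Z}}\C\.\chi_n
  \quad\mbox{and, dually,}\quad
  \C[\Ss^1]\cong \prod_{n\in{\mathbb Z}}\C
\]
% as weakly complete vector spaces: a point of the product records
% the value of a linear form on each character \chi_n.
```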
We are choosing here the covariant group algebra $\K[G]$ to be at the center of attention and obtain $\RR(G,\K)$ via vector space duality from $\K[G]$. Conversely, if one asks for a ``concrete'' description of $\K[G]$, then the answer may now be that, as a topological vector space, $\K[G]$ is the {\color{\red}algebraic} dual {\color{\red} (consisting of all linear forms)} of the (abstract) vector subspace $\RR(G,\K)$ of the vector space $C(G,\K)$ of continuous functions $G\to \K$. If $G$ is a compact group, $C(G,\K)$ is a familiar Banach space. \bsk At this point one realizes that in the case of a finite group $G$ the $\K$-vector spaces $\K[G]$ and $\RR(G,\K)$ are finite dimensional of the same dimension and are, therefore, isomorphic. In particular, we have a class of special examples of $\K$-Hopf algebras: \begin{Example} \label{e:fin-dim} Let $G$ be a finite group. Then $A=\RR(G,\K)=\K^G$ is a finite dimensional $\K$-Hopf algebra which may be considered as the dual of the group algebra $\K[G]$. A grouplike element $f\in\Gamma(A)$ then is a character $f\colon \K[G]\to \K$. The group $G$ may be considered as the subgroup $\Gamma(\K[G])$, and then $f|G\colon G\to \K^{-1}$ is a morphism of groups. Conversely, every group morphism $G\to \K^{-1}$ yields a grouplike element of $A$. In view of the finiteness of $G$ we have $$ \Gamma\(\RR(G,\K)\)\cong \Hom(G,\Ss^1),\quad \Ss^1=\{z\in\C:|z|=1\}.$$ If $G$ is a finite simple nonabelian group, then the $|G|$-dimensional Hopf algebra $A=\RR(G,\K)$ has no nontrivial grouplike elements, that is, $\Gamma(A)=\{1\}$ and $\SS(A)=\K\.1$. The smallest example of this kind arises from the group $G=A_5$, the group of 60 even permutations of a set of 5 elements.
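The last assertion rests only on the fact that $A_5$ is perfect, so that every morphism into the abelian group $\Ss^1$ is trivial; this can be confirmed by brute force (an illustrative Python sketch, not part of the formal text):

```python
from itertools import permutations

# A5 = even permutations of {0,...,4}; |A5| = 60.
def parity(p):
    return sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5)) % 2

A5 = [p for p in permutations(range(5)) if parity(p) == 0]

def compose(p, q):                      # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# All commutators [p, q] = p q p^{-1} q^{-1}, closed under composition,
# generate the commutator subgroup.
gen = {compose(compose(p, q), compose(inverse(p), inverse(q)))
       for p in A5 for q in A5}
while True:
    new = {compose(a, b) for a in gen for b in gen} - gen
    if not new:
        break
    gen |= new

# A5 is perfect: the commutator subgroup is all of A5, hence every
# morphism A5 -> S^1 is trivial and the grouplike elements reduce to {1}.
assert len(A5) == 60 and len(gen) == 60
```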
\end{Example} \section{$\C[G]$ as an Involutive Weakly Complete Algebra} A complex unital algebra $A$ is called {\it involutive} if {\color{\red}it is endowed with} a real vector space automorphism $a\mapsto a^*$ such that for all $a, b\in A$ and $c\in\C$ we have $a^{**}=a$, ${\color{\red}(c\.a)^*} =\overline c\.a^*$, and $(ab)^*=b^*a^*$. A complex weakly complete unital topological algebra whose underlying algebra is involutive with respect to an involution $^*$ is a weakly complete unital involutive algebra if $a\mapsto a^*$ is an isomorphism of the underlying weakly complete topological vector space. \hskip-10pt Every C$^*$-algebra is an involutive algebra; a simple example is $\Hom(\C^n,\C^n)$, i.e., an algebra isomorphic to the algebra of $n\times n$ complex matrices with the conjugate transpose as involution. An element $a\in A$ is {\it unitary} if $aa^*=a^*a=1$, that is, if $a^*=a^{-1}$, and it is called {\it hermitian} if $a=a^*$. If $A$ is an involutive topological unital algebra with an exponential function $\exp$, then for each hermitian element $h$ the element {\color{\red}$\exp(i\.h)$} is unitary. Note that in $A=\C$ the unitary elements are the elements on the unit circle $\SS^1$ and the hermitian elements are the ones on the real line $\R$. The theory of the function $h\mapsto{\color{\red}\exp(ih)}$ is called {\it trigonometry}. \msk The involutive algebras form a category with respect to morphisms $f\colon A\to B$ satisfying $f(a^*)=f(a)^*$. \msk \begin{Lemma} \label{l:generalnonsense} If $A$ is a complex weakly complete unital topological coalgebra, then the complex vector space dual $A'$ is an involutive unital algebra for the involution $f\mapsto f^*$ defined by $f^*(a)=\overline{f(a^*)}$. \end{Lemma} The proof is a straightforward exercise. \bsk Assume now that $G$ is a topological group. We shall introduce an involution on $\C[G]$ making it an involutive algebra.
\msk (a) For every complex vector space $V$ we introduce a complex vector space $\til V$ by endowing the real vector space underlying $V$ with a complex scalar multiplication $\bullet$ defined by $c{\bullet}v=\overline c\.v$. (b) The composition $$ |\C[G]|\mapright{\sigma_G}|\C[G]|\mapright{\id_{\C[G]}} |\C[G]|\til{\phantom m}$$ of the Hopf algebra symmetry $\sigma_G: |\C[G]|\to |\C[G]|$ and the identity map $\C[G]\to \C[G]\til{\phantom m}$ yields an involution $$^*\colon|\C[G]{\color{\red}|}\to|\C[G]|,\quad \mbox{ \it such that } (\forall c\in\C,\ a\in\C[G])\, a^*=\sigma_G(a)\mbox { \it and }(c\.a)^*= \overline c\.a^*.$$ Moreover, by definition of $\sigma_G$, we have $\eta_G(g)^*=\eta_G(g^{-1})$ for all $g\in G$. \msk The proof of the following remarks may again be safely left as an exercise. \begin{Lemma} \label{l:clear} For each topological group $G$, the complex algebra $\C[G]$ is an involutive algebra with respect to $^*$, and the comultiplication $c\colon \C[G]\to\C[G]\otimes\C[G]$ and the coidentity $k\colon\C[G]\to\C$ are morphisms of involutive algebras. All elements in $\eta_G(G)\subseteq \C[G]$ are unitary grouplike elements. \end{Lemma} From Lemma \ref{l:generalnonsense} we derive directly the following observation: \begin{Lemma} \label{l:alsoclear} The dual $\RR(G,\C)$ of the weakly complete involutive algebra $\C[G]$ is an involutive algebra. \end{Lemma} \begin{Definition} \label{d:sixfour2} Let $R$ be a unital algebra over $\C$. Then the set of hermitian characters of $R$ is denoted by $\Spec_h(R)$ and is called the {\it hermitian spectrum} of $R$. \end{Definition} For a {\it compact} group $G$, however, the commutative unital algebra $\RR(G,\K)$ is a {\it dense} subalgebra of the commutative unital Banach $\K$--algebra $C(G,\K)$. (See e.g.\ \cite{book3}, Theorem 3.7.) We note that for every $g\in G$, the {\it point evaluation} $f\mapsto f(g): \RR(G,\C)\to \C$ belongs to $\Spec_h(\RR(G,\C))$.
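The trigonometry $h\mapsto\exp(ih)$ of the matrix example $\Hom(\C^n,\C^n)$ from the beginning of this section can be checked numerically (an illustrative sketch in Python with numpy; the random matrix is an arbitrary choice):

```python
import numpy as np

# In A = Hom(C^3, C^3) with the conjugate transpose as involution, a
# hermitian h = h* yields a unitary u = exp(i h).  For hermitian h the
# exponential is computed from the spectral decomposition
# h = V diag(lam) V*, whence exp(i h) = V diag(exp(i lam)) V*.
rng = np.random.default_rng(0)
z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
h = (z + z.conj().T) / 2                        # hermitian part of z
lam, V = np.linalg.eigh(h)                      # real spectrum, unitary V
u = V @ np.diag(np.exp(1j * lam)) @ V.conj().T
assert np.allclose(u @ u.conj().T, np.eye(3))   # u u* = 1
assert np.allclose(u.conj().T @ u, np.eye(3))   # u* u = 1
```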
In this particular situation, the literature provides the following result in which we consider $\C[G]$ as the dual $\RR(G,\C)^*$ of the involutive algebra $\RR(G,\C)$. \msk \begin{Theorem} \label{th:classic} For a compact group $G$, the hermitian spectrum \centerline{$\Spec_h(\RR(G,\C))\subseteq \C[G]$} \nin is precisely the set of point evaluations. \end{Theorem} \begin{Proof} See e.g. \cite{hofbig}, Proposition 12.26 or \cite{hoc}, p.~28. Cf.\ also \cite{dix}, no. {\bf1.}3.7.\ on p.~7. \end{Proof} In the case $\K=\R$, any morphism $f\colon \RR(G,\R)\to \R$ of real algebras, that is, any (real) character of $\RR(G,\R)$ extends uniquely to a (complex) character $\til f\colon \C\otimes_\R \RR(G,\R)\to \C$, where $\C\otimes_{\color{\red}\R}\RR(G,\R)\cong\RR(G,\C)$, such that {\color{\red}$\tilbar f= \bartil f$}. Trivially, $f$ is a point evaluation of $\RR(G,\R)$ if and only if $\til f$ is a point evaluation of $\RR(G,\C)$, and $f$ is continuous if and only if $\til f$ is continuous. Hence Theorem \ref{th:classic} implies the analogous result over $\R$: \begin{Corollary} \label{c:classic} For a compact group $G$, the set $\Spec(\RR(G,\R))\subseteq \RR(G,\R)^*$ of characters of $\RR(G,\R)$ is precisely the set of point evaluations $f\mapsto f(g)$, $g\in G$. \end{Corollary} {\color{\red} Recall that $\RR(G,\R)^*$ and $\R[G]$ are naturally isomorphic by duality since $\R[G]'$ and $\RR(G,\R)$ are naturally isomorphic by Theorem \ref{th:dual}. After Theorem \ref{th:gr-alg-comp-gr} we may identify a compact group $G$ with its isomorphic image via $\eta_G$ in $\R[G]$. From Remark \ref{r:grlike} we know $G\subseteq \Gamma(\R[G])$ (see Definition \ref{d:grlike}).
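The mechanism behind Corollary \ref{c:classic} is completely transparent for finite $G$, where $\RR(G,\R)=\R^G$: the indicator functions $\delta_g$ are orthogonal idempotents summing to $1$, so any character sends exactly one of them to $1$ and is then evaluation at the corresponding point. A small illustrative Python sketch (the four-element set and the chosen character are hypothetical):

```python
# For finite G, R(G,R) = R^G with pointwise operations.  A character chi
# maps each idempotent delta_g to 0 or 1; since the delta_g are orthogonal
# and sum to 1, exactly one goes to 1, and chi is evaluation at that point.
G = [0, 1, 2, 3]                 # an illustrative finite index set

def ev(g):                       # the point evaluation character at g
    return lambda f: f[g]

chi = ev(2)                      # regard chi as an "unknown" character
deltas = {g: [1.0 if h == g else 0.0 for h in G] for g in G}
support = [g for g in G if chi(deltas[g]) == 1.0]
assert support == [2]            # chi is recovered as evaluation at g = 2
```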
Now finally we have the full information on one half of the adjunction of $\HH$ and $\Gamma$ of Theorem \ref{p:grounding} in the case of {\it compact} groups $G$: \begin{Theorem} \label{th:comp-group-like} {\rm(Main Theorem for Compact Groups A)} \quad For every compact group $G$ the natural morphism of topological groups $\eta_G\colon G\to \Gamma\(\HH(G)\) =\Gamma(\R[G])$ is an isomorphism of compact groups. \end{Theorem} In other words, if for a compact group $G$ we identify $G$ with a subgroup of $(\R[G],\.)$ for its weakly complete group algebra $\R[G]$, then $G=\Gamma(\R[G])$, i.e., every grouplike element of the Hopf algebra $\R[G]$ is a group element of $G$. \begin{Proof} We apply Proposition \ref{p:grouplike} with $R=\RR(G,\R)$ and $A=\R[G]$. Then the set of grouplike elements of $\R[G]$ is $\Spec(\RR(G,\R))$. Let $\ev\colon G\to\Spec(\RR(G,\R))$ denote the function given by $\ev(g)(f)=f(g)$ for all $g\in G$, $f\in\RR(G,\R)$. Then $\ev(G)\subseteq \Spec(\RR(G,\R))$ by Remark \ref{r:grlike} and Proposition \ref{p:grouplike}. However, Theorem \ref{th:classic} and Corollary \ref{c:classic} show that in fact equality holds, and that is the assertion of the theorem. \end{Proof} It appears that a direct proof of the assertion that every grouplike element is a group element in $\R[G]$ would be difficult to come by even in the case of an infinite compact group $G$. (If $G$ happens to be finite, then the proof is elementary linear algebra.) As it stands, after Theorem \ref{th:comp-group-like}, the category theoretical background on the adjunction of the functors $\Gamma$ and $\HH$ expressed in Corollary \ref{c:alternate2} (1) and (2) gives some additional insight for compact $G$, respectively, compact $\Gamma(A)$. Indeed Corollary \ref{c:alternate2}(1) yields at once: \begin{Corollary} \label{c:nochwas} Let $A$ be a weakly complete real Hopf algebra and assume that the group $G\defi\Gamma(A)$ of its grouplike elements is compact.
Then the natural morphism of topological groups $\Gamma(\epsilon_A)\colon \Gamma\(\HH(G)\)\to G$ is an isomorphism. \end{Corollary} } \bsk We recall that every compact group $G$ is a pro-Lie group with a Lie algebra $\L(G)$ and an exponential function $\exp_G\colon\L(G)\to G$ according to \cite{book3}. \msk \begin{Theorem} \label{th:comp-prolie} {\color{\red} {\rm (Main Theorem for Compact Groups B)}} \quad Consider the compact group $G$ as a subgroup of the multiplicative monoid $(\R[G],\cdot)$ of its weakly complete group Hopf algebra. Then {\color{\red} $G=\Gamma(\R[G])$, that is, $G$ is the set of grouplike elements of the Hopf algebra $\R[G]$, while $\L(G)\cong\Pi(\R[G])$, that is, the Lie algebra of $G$ is the set of primitive elements of $\R[G]$. Hence the Lie algebra of $G$ is isomorphic to the subalgebra $\Pi(\R[G])$ of the Lie algebra $\R[G]_{\rm Lie}$.} Moreover, the global exponential function $\exp_{{\color{\red}\R}[G]}:\R[G]_{\rm Lie} \to \R[G]^{-1}$ of $\R[G]$ restricts{\color{\red}, up to isomorphy,} to the exponential function $\exp_G\colon\L(G)\to G$ of $G$. \end{Theorem} \msk We have now seen that a compact group and its Lie theory are completely incorporated into its weakly complete group Hopf algebra. Under such circumstances it is natural to ask whether a similar assertion could be made for the ample theory of Radon measures on a compact group including its Haar measure. We shall address this question and give a largely affirmative answer in the next section. \msk What we have to address at this point is the question, whether for a weakly complete real Hopf algebra $A$ in which the group $\Gamma(A)$ is compact and algebraically and topologically generates $A$, the natural morphism $\epsilon_A\colon \HH\(\Gamma(A)\)=\R[\Gamma(A)]\to A$ is in fact an isomorphism. \begingroup \color{\red} For the investigation of this question we need some preparation. 
Assume that $G$ is a compact group and $\RR(G,\R)\subseteq C(G,\R)$ is the Hopf algebra of all functions $f\in C(G,\R)$ whose translates span a finite dimensional vector subspace. We now let $M$ be a Hopf subalgebra and a $G$-submodule of $\RR(G,\R)$. Recall that $\Spec M$ denotes the set of all algebra morphisms $M\to\R$. Trivially we have a morphism $\omega \mapsto \omega|M:\Spec \RR(G,\R) \to \Spec(M)$. From Corollary \ref{c:classic} we know that $G \cong \Spec \RR(G,\R)$ via point evaluation. In the case that $M=A'$ and $G=\Gamma(A)$ such that $\Gamma(\epsilon_A)\colon \Gamma(\R[G])\to G$ is an isomorphism as in Corollary \ref{c:nochwas}, we know that $$g\mapsto(f\mapsto f(g)):G\to \Spec(M)\quad\mbox{is an isomorphism.} \leqno(E)$$ From \cite{book3}, Definition 1.20, p.~13, we recall that $M\subseteq C(G,\R)$ is said to {\it separate points} if for two points $g_1\ne g_2$ in $G$ there is an $f\in M$ such that $f(g_1)\ne f(g_2)$. In other words, different points in $G$ can be distinguished by different point evaluations of functions from $M$. So condition (E) ensures that the functions of $M$ separate the points of $G$. This has the following consequence: \begin{Lemma} \label{l:stone-wei} If the unital subalgebra $M$ of $\RR(G,\R)$ satisfies $(E)$, then $M$ is dense in $C(G,\R)$ with respect to the sup norm and is dense in $L^2(G,\R)$ with respect to the $L^2$-norm. \end{Lemma} \begin{Proof} Since $M$ is a unital subalgebra of $\RR(G,\R)$, it contains the scalar multiples of the constant functions of value 1, that is, $M$ contains all the constant functions. Moreover, by Hypothesis (E), the algebra $M\subseteq C(G,\R)$ separates the points of $G$. Therefore the Stone-Weierstra{\ss} Theorem applies and shows that $M$ is dense in $C(G,\R)$ in the sup norm topology of $C(G,\R)$. Since $L^2(G,\R)$ is the $L^2$-norm completion of $C(G,\R)$ and $M$ is uniformly dense in $C(G,\R)$, it follows that $M$ is dense in $L^2(G,\R)$ in the $L^2$-norm.
(Cf.\ e.g.\ \cite{book3}, Theorem 3.7 and its proof.) \end{Proof} \begin{Lemma} \label{l:crucial} If a $G$-submodule $M$ of $\RR(G,\R)$ is $L^2$-dense in $L^2(G,\R)$, then it agrees with $\RR(G,\R)$. \end{Lemma} \begin{Proof} Let $\hat G$ denote the set of isomorphy classes of irreducible $G$-modules. By the Fine Structure Theorem of $\RR(G,\R)$ (see \cite{book3}, Theorem 3.28) $\RR(G,\R)=\sum_{\epsilon\in\hat G}\RR(G,\R)_\epsilon$ where $\sum$ denotes the algebraic direct sum of (finite dimensional) vector subspaces and where $\RR(G,\R)_\epsilon$ is a finite direct sum of simple modules for each $\epsilon\in \hat G$. In particular, each $\RR(G,\R)_\epsilon$ is finite dimensional. Further $L^2(G,\R)=\bigoplus_{\epsilon\in\hat G} \RR(G,\R)_\epsilon$ where $\bigoplus$ denotes the Hilbert space direct sum. The submodule $M$ of $\RR(G,\R)$ is adapted to the canonical decomposition of $\RR(G,\R)$, since $M_\epsilon\defi M\cap\RR(G,\R)_\epsilon$ is necessarily a submodule of $\RR(G,\R)_\epsilon$. Hence $M=\sum_{\epsilon\in\hat G}M_\epsilon$ and the $L^2$-closure of $M$ in $L^2(G,\R)$ is the Hilbert space sum $\bigoplus_{\epsilon\in\hat G} M_\epsilon$. \msk By way of contradiction suppose now that $M\ne \RR(G,\R)$. Then there is an $\epsilon'\in\hat G$ such that $M_{\epsilon'}\ne \RR(G,\R)_{\epsilon'}$. Since $\RR(G,\R)_{\epsilon'}$ is finite dimensional, $$\bigoplus_{\epsilon\in\hat G}M_\epsilon=M_{\epsilon'}\oplus \bigoplus_{\epsilon\ne\epsilon'}M_\epsilon$$ is properly smaller than the Hilbert space sum $$\bigoplus_{\epsilon\in\hat G}\RR(G,\R)_\epsilon =\RR(G,\R)_{\epsilon'}\oplus\bigoplus_{\epsilon\ne\epsilon'}\RR(G,\R)_\epsilon =L^2(G,\R),$$ contradicting the hypothesis that $M$ is $L^2$-dense in $L^2(G,\R)$. This contradiction proves the lemma. \end{Proof} Now we are ready for the third main result on compact groups in the present context: the statement which parallels Theorem \ref{th:comp-group-like}.
\begin{Theorem} \label{th:comp-prolie2} {\rm (Main Theorem on Compact Groups C)}\quad Let $A$ be a weakly complete real Hopf algebra satisfying the following two conditions: \begin{enumerate}[\rm(i)] \item The subgroup $\Gamma(A)$ of grouplike elements of $A$ is compact, \item $\Gamma(A)$ generates $A$ algebraically and topologically, that is, $\SS(A)=A$. \end{enumerate} Then $$\epsilon_A\colon \R[\Gamma(A)]=\HH\(\Gamma(A)\)\to A$$ is a natural isomorphism. \end{Theorem} \begin{Proof} We set $G=\Gamma(A)$. By Corollary \ref{c:epsilon-epic} the morphism $\epsilon_A\colon \R[G]\to A$ is a quotient homomorphism of weakly complete Hopf algebras which, by Corollary \ref{c:nochwas}, induces an isomorphism $\Gamma(\epsilon_A)\colon \Gamma(\R[G])\to G$. So $\Gamma(\R[G])$ is identified with $G$ if we consider $G$ as included in $\R[G]$ according to Theorem \ref{th:comp-group-like}. By the duality between real Hopf algebras in $\mathcal V$ and weakly complete real Hopf algebras in $\mathcal W$, the dual morphism $\epsilon_A'\colon A'\to \R[G]'$ is injective. Theorem \ref{th:dual} then gives us an inclusion $A'\subseteq\RR(G,\R)$ of real Hopf algebras as well as of $G$-modules such that the natural map $\Spec(A')\to\Spec\(\RR(G,\R)\)$ is the identity. That is, Condition (E) above (preceding Lemma \ref{l:stone-wei}) holds and Lemmas \ref{l:stone-wei} and \ref{l:crucial} apply. Therefore $A'=\RR(G,\R)$. This in turn shows that $\epsilon_A$ is an isomorphism. \end{Proof} For a concise formulation of the consequences let us use the following notation: \begin{Definition} A real weakly complete Hopf algebra $A$ will be called {\it compactlike} if the subgroup $\Gamma(A)$ of grouplike elements is compact and $\SS(A)=\overline{\Span}\(\Gamma(A)\)=A$.
\end{Definition} \begin{Theorem} {\rm (The Equivalence Theorem for Compact Groups and Compactlike Hopf Algebras)} The categories of compact groups and of weakly complete compactlike Hopf algebras are equivalent. \end{Theorem} \begin{Proof} This follows immediately from Theorems \ref{th:comp-group-like} and \ref{th:comp-prolie2}. \end{Proof} \begin{Corollary} \label{c:tannaka} {\rm(Tannaka Duality)}\quad The category of compact groups is dual to the full category of real abstract Hopf algebras of the form $\RR(G,\R)$ with a compact group $G$. \end{Corollary} \endgroup \section{The Radon Measures within the Group Algebra of a Compact Group} We shall invoke measure theory in the form pioneered for arbitrary locally compact groups in \cite{boui}. For a {\it compact} group $G$ it is less technical and adapts reasonably to the formalism of its real group algebra $\R[G]$. This discussion will help us to understand the power of the group algebras $\R[G]$ for a compact group. \ssk Indeed any compact Hausdorff topological group provides us with a real Banach algebra $C(G,\R)$ endowed with the sup-norm, which makes it into a Hopf-algebra in the category of Banach spaces. Accordingly, its topological dual $C(G,\R)'$ yields the Banach algebra and indeed Banach Hopf algebra $M(G,\R)$ (see e.g.\ \cite{hofbig}). Its elements $\mu$ are the so-called {\it Radon measures} on $G$; the general source book for this orientation of measure and probability theory is Bourbaki's book \cite{boui}, and for the foundations of harmonic analysis the book of Hewitt and Ross \cite{hewross}. For a measure theory in the context of compact groups see also \cite{book3}, Appendix 5: ``Measures on Compact Groups''. \msk So let $W$ be a weakly complete real vector space. Then $W$ may be identified with ${W'}^*$ (see Theorem \ref{th:vect}).
For $F\in C(G,W)$ and $\mu\in M(G,\R)$ we obtain a unique element $\int_G F\, d\mu\in W$ such that we have $$(\forall \omega\in W')\quad \left\<\omega,\int_GF\,d\mu\right\> =\int_{g\in G}\<\omega,F(g)\>d\mu(g).\leqno(*)$$ (See \cite{boui}, Chap. III, \S 3, {\bf n}$^{{\rm o} 1}$, D\'efinition 1.) Let $\supp(\mu)$ denote the support of $\mu$. (See \cite{boui}, Chap. III, \S2, {\bf n}$^{{\rm o} 2}$, D\'efinition 1.) \msk \begin{Lemma}\label{l:lin} Let $T\colon W_1\to W_2$ be a morphism of weakly complete vector spaces, $G$ a compact Hausdorff space and $\mu$ a measure on $G$. If $F\in C(G,W_1)$, then $T(\int_G F\,d\mu)=\int_G (T\circ F) d\mu$. \end{Lemma} \noindent(See e.g.\ \cite{boui}, Chap. III, \S 3, {\bf n}$^{{\rm o} 2}$, Proposition 2.) In \cite{boui} it is shown that the vector space $M(G,\R)$ is also a complete lattice w.r.t.\ a natural partial order (see \cite{boui}, Chap. III, \S 1, {\bf n}$^{\rm o}$ 6) so that each $\mu\in M(G)$ is uniquely of the form $\mu=\mu^+-\mu^-$ for the two positive measures $\mu^+=\mu\vee 0$ and $\mu^-=-\mu\vee 0$. One defines $|\mu|=\mu^++\mu^-$. If $M^+(G)$ denotes the cone of all positive measures, we have $M(G)=M^+(G)-M^+(G)$ (\cite{boui}, Chap. III, \S 1, {\bf n}$^{\rm o}$ 5, Th\'eor\`eme 2). Moreover, $\|\mu\|=|\mu|(1)=\int d|\mu|$. A measure is called a {\it probability measure} if it is positive and $\mu(1)=1$. We write $P(G)$ for the set of all probability measures on $G$ and we note $M^+(G)=\R_+\.P(G)$ where $\R_+=[0,\infty[\ \subseteq\R$. We denote by $M_p(G)$ the vector space $M(G,\R)$ with the topology of pointwise convergence and recall that $P(G)$ has the structure of a compact submonoid of $M_p(G)^\times$; some aspects are discussed in \cite{book3}, Appendix 5. On $M^+(G)$ the topology of $M_p(G)$ and the compact open topology of $M(G,\R)$ agree (\cite{boui}, Chap.
III, \S 1, {\bf n}$^{\rm o}$ 10, Proposition 18). Also $M^+_p(G)$ is a locally compact convex pointed {\it cone} with the closed convex hull $P(G)$ of the set of point measures as {\it basis}. We also recall that any positive linear form on $C(G,\R)$ is in $M^+(G)$ (i.e., is continuous) (see \cite{boui}, Chap. III, \S 1, {\bf n}$^{\rm o}$ 5, Th\'eor\`eme 1). \bsk \subsection{Measures and Group Algebras} Now we allow this machinery and Theorems \ref{th:hopf-prolie} and \ref{th:gr-alg-comp-gr} to come together to elucidate the structure of $\R[G]$ for compact groups $G$. \bsk We let $G$ be a compact group. By Theorem \ref{th:gr-alg-comp-gr} it is no loss of generality to assume that $G$ is a compact subgroup of $\R[G]^{-1}$, where $\R[G]$ is the weakly complete group Hopf algebra of $G$ and $\eta_G\colon G\to \R[G]^{-1}$ is the inclusion morphism. By Theorem \ref{th:dual} there is an isomorphism $\omega\mapsto f_\omega\colon \R[G]'\to \RR(G,\R)$ such that $$(\forall \omega\in\R[G]',\, g\in G)\, \<\omega,\eta_G(g)\>=f_\omega(g),$$ and, in the reverse direction, the function $\omega\mapsto \omega|G: \R[G]'\to C(G,\R)$ induces an isomorphism of vector spaces $\R[G]'\to\RR(G,\R)$. \msk Therefore, in the spirit of relation $(*)$, we are led to the following definition. \msk \begin{Definition} \label{d:maindef} Let $G$ be a compact group.
Then each $\mu\in M(G,\R)$ gives rise to an element $$\rho_G(\mu)\defi \int_G\eta_G\,d\mu\in \R[G]$$ such that for all $\omega\in\R[G]'$ we have $$\<\omega,\rho_G(\mu)\>=\int_{g\in G} \<\omega,\eta_G(g)\>\,d\mu(g) =\int_{g\in G}f_\omega(g)\,d\mu(g)=\mu(f_\omega).\leqno(**)$$ Therefore we have a morphism of vector spaces $$\rho_G\colon M(G,\R)\to\R[G].$$ We let $\tau_{\RR(G,\R)}$ denote the weakest topology making the functions $\mu\mapsto \mu(f):M(G,\R)\to\R$ {\color{\red}continuous} for all $f\in\RR(G,\R)$. \end{Definition} On any compact subspace of $M_p(G)$ such as $P(G)$ the topology $\tau_{\RR(G,\R)}$ agrees with the topology of $M_p(G)$. \msk \begin{Lemma} \label{l:injective} The morphism $\rho_G$ is injective and has dense image. \end{Lemma} \begin{Proof} We observe that $\mu\in\ker\rho_G$ if and only if $\int_{g\in G}f(g)\,d\mu(g)=0$ for all $f\in\RR(G,\R)$. Since $\mu$ is continuous on $C(G,\R)$ in the norm topology and $\RR(G,\R)$ is dense in $C(G,\R)$ by the Theorem of Peter and Weyl (see e.g.\ \cite{book3}, Theorem 3.7), it follows that $\mu=0$. So $\rho_G$ is injective. If $\mu=\delta_x$ is a measure with support $\{x\}$ for some $x\in G$, then $\rho_G(\mu)=\int_G\eta_G d\delta_x=x$. Thus $G\subseteq\rho_G(M(G))$. Since $\R[G]$ is the closed linear span of $G$ by Proposition \ref{p:generation}, it follows that $\rho_G$ has a dense image. \end{Proof} We note that in some sense $\rho_G$ is dual to the inclusion morphism of vector spaces $\sigma_G\colon\RR(G,\R)\to C(G,\R)$. Returning to (**) in Definition \ref{d:maindef}, for a compact group $G$, we observe \begin{Lemma} \label{l:topologies} The morphism $$\rho_G:(M(G,\R),\tau_{\RR(G,\R)}) \to \R[G]$$ is a topological embedding. \end{Lemma} \msk If $\mu$ is a probability measure, then the element $\rho_G(\mu)=\int_G\eta_G\,d\mu$ is contained in the compact closed convex hull $\overline{\conv}(G)\subseteq \R[G]$.
Intuitively, $\int_G \eta_G d\mu\in\overline{\conv}(G)$ is the center of gravity of the ``mass'' distribution $\mu$ contained in $G\subseteq\R[G]$. In particular, if $\gamma\in M(G,\R)$ denotes normalized Haar measure on $G$, then $$\rho_G(\gamma)=\int_G\eta_G\,d\gamma=\int_{g\in G} g\, dg$$ is the center of gravity of $G$ itself with respect to Haar measure. \msk We note that in the weakly complete vector space $\R[G]$ the closed convex hull $$ B(G)\defi \overline{\conv}(G) \subseteq \R[G]$$ is compact. (See e.g.\ \cite{book3}, Exercise E3.13.) \begin{Lemma}\label{l:conv} The restriction $\rho_G|P(G): P(G)\to B(G)$ is an affine homeomorphism. \end{Lemma} \begin{Proof} (i) Affinity is clear and injectivity we know from Lemma \ref{l:injective}{\color{\red}.} (ii) Since $P(G)$ is compact in the weak topology and $\rho_G$ is injective and continuous, $\rho_G|P(G)$ is a homeomorphism onto its image. But $G\subseteq \rho_G(P(G))$, the set $\rho_G(P(G))$ is compact and convex, and $B(G)$ is the closed convex hull of $G$ in $\R[G]$; it follows that $B(G)\subseteq\rho_G(P(G))$. \end{Proof} \msk If $k\colon \R[G] \to \R$ is the augmentation map (i.e., the coidentity morphism), then $k(G)=\{1\}$ and so $k(B(G))=\{1\}$ as well. From $GG\subseteq G$ we deduce that $\conv(G)\conv(G)\subseteq\conv(G)$ and from there, by the continuity of the multiplication in $\R[G]$ and $1\in G\subseteq B(G)$, it follows that $B(G)$ is a compact submonoid of $\R[G]^\times$ contained in the submonoid $k^{-1}(1)$. \msk Then the cone $\R_+[G]\defi\R_+\.B(G)$, due to the compactness of $B(G)$, is a locally compact submonoid as well. The set \centerline{$k^{-1}(1)\cap \R_+[G]=\{{\color{\red}x\in \R_+[G]:k(x)}=1\} =B(G)$} \nin is a compact basis of the cone $\R_+[G]$.
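The geometry of $B(G)$ can be visualized in the simplest infinite case (a numerical sketch, with the circle group realized inside $\C$ rather than inside $\R[G]$; this is only an analogy for the picture above): for $G=\Ss^1$ the closed convex hull of $G$ is the closed unit disc, and the center of gravity with respect to normalized Haar (arc length) measure is $0$, the absorbing zero of the multiplicative monoid formed by the disc.

```python
import numpy as np

# G = S^1 inside C; normalized Haar measure is normalized arc length.
# The center of gravity int_G g dg of the circle is the origin, the
# multiplicative zero of conv(G) = closed unit disc.
n = 360
theta = 2 * np.pi * np.arange(n) / n     # uniform (Haar) sample of S^1
center = np.exp(1j * theta).mean()
assert abs(center) < 1e-9                # the mean of the circle is 0
```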
\msk \begin{Corollary} \label{c:conv} The function $\rho_{{\color{\red}G}}|M^+(G):M^+(G)\to \R_+[G]$ is an isomorphism of convex cones and $\rho_G(M(G))=\R_+[G]-\R_+[G]$. \end{Corollary} \begin{Proof} Since $M^+(G)=\R_+\.P(G)$ and $\R_+[G]=\R_+\.B(G)$, Lemma \ref{l:conv} shows that $\rho_G|M^+(G)$ is an affine homeomorphism. Since $M(G)=M^+(G)-M^+(G)$, the corollary follows. \end{Proof} Among other things this means that every element of $\R_+[G]-\R_+[G]$ is an integral $\int_G \eta_G\,d\mu$ in $\R[G]$ for some Radon measure $\mu\in M(G)$ on $G$. \msk In order to summarize our findings we first list the required conventions: \nin Let $G$ be a compact group viewed as a subgroup of the group $\R[G]^{-1}$ of units of the weakly complete group algebra $\R[G]$. Let $B(G)=\overline{\conv}(G)$ denote the closed convex hull of $G$ in $\R[G]$ and define $\R_+[G]=\R_+\.B(G)$. Let $k\colon \R[G]\to \R$ denote the augmentation morphism and $I=\ker k$ the augmentation ideal. We let $\eta_G\colon G\to\R[G]$ denote the inclusion map and consider $\rho_G\colon M(G,\R)\to\R[G]$ with $\rho_G(\mu)=\int_G\eta_G\,d\mu$. \begin{Theorem} \label{th:convexity} For a compact group $G$ we have the following conclusions: \begin{enumerate}[\rm (a)] \item $B(G)\supseteq G$ is a compact submonoid of $1+I\subseteq {\color{\red}(\R[G],\cdot)}$ with Haar measure $\gamma$ of $G$ as zero element. \item $\R_+[G]$ is a locally compact pointed cone with basis $B(G)$, and is a submonoid of ${\color{\red}(\R[G],\cdot)}$. \item The function $\rho_G\colon (M(G),\tau_{\RR(G,\R)})\to\R[G]$ is an injective morphism of topological vector spaces {\color{\red} with dense image $\R_+[G]-\R_+[G]$. It induces a homeomorphism onto its image.} \item The function $\rho_G|M^+(G)\colon M_p^+(G)\to \R_+[G]$ is an affine homeomorphism from the locally compact convex cone of positive Radon measures on $G$ onto $\R_+[G]\supseteq B(G)\supseteq G$.
\end{enumerate} \end{Theorem} \begin{Remark} The Haar measure $\gamma$ is mapped by $\rho_G$ onto the center of gravity $\int_G \eta_G\,d\gamma$ of {\color{\red}$G$, $\gamma\in B(G)\subseteq 1+I$}. \end{Remark} \bsk It should be noted that $\rho_G\colon M(G,\R)\to\R[G]$ is far from surjective if $G$ is infinite: If we identify $\R[G]$ with $\RR(G,\R)^*$ according to Theorem \ref{th:dual}, then any element $u\in\R[G]$ representing a {\color{\red} linear form on $\RR(G,\R)$ which is discontinuous in the norm topology induced by $C(G,\R)$} fails to be an element of $\rho_G(M(G))$. \msk Theorem \ref{th:convexity} shows that for a compact group $G$, the weakly complete real group algebra $\R[G]$ contains not only $G$ and the entire pro-Lie group theory encapsulated in the exponential function $\exp_G\colon \L(G)\to G$ but also the measure theory, notably that of the monoid of probability measures $P(G)\cong B(G)$. \bsk Recall the hyperplane ideal $I=\ker k$ for the augmentation $k\colon \R[G]\to\R$. \begin{Corollary} \label{c:split} Let $G$ be a compact group, $\R[G]$ its real group algebra, and $\gamma\in\R[G]$ its normalized Haar measure. Then $J\defi \R\.\gamma$ is a one-dimensional ideal, and $$\R[G]=I\oplus J$$ is the ideal direct sum of $I$ and $J$. The vector subspace $J$ is a minimal nonzero ideal. In particular, $J\cong \R[G]/I\cong\R$ and $I\cong \R[G]/J$. \end{Corollary} \begin{Proof} In the multiplicative monoid $B(G)\subseteq \R[G]${\color{\red},} the idempotent element $\gamma$ is a {\color{\red} zero of the monoid $(B(G),\cdot)$}, that is, $$\gamma B(G)=B(G)\gamma =\{\gamma\}.$$ (See \cite{book3}, Corollary A5.12.) As a consequence, $$ JB(G) = B(G)J\subseteq J.$$ The vector space $\Span B(G)$ contains $\Span G$ which is dense in $\R[G]$ by Proposition \ref{p:generation}. Hence $J\R[G]=\R[G]J\subseteq J$ and so $J$ is a two-sided ideal.
Since $k(B(G))=\{1\}$ by Theorem \ref{th:convexity}(a) we know $J\not\subseteq I$, and since $I$ is a hyperplane, $\R[G]=I\oplus J$ follows. \end{Proof} \msk We note that $\R[G]/J$ is a weakly complete topological algebra containing a copy of $G$ and indeed of $P(G)$ with Haar measure in the copy of $P(G)$ being the zero of the algebra. In general it is neither a group algebra nor a Hopf algebra, as the example of $G=\Z(3)$ shows. {\color{\red} \msk While the group $\Gamma(\R[G])\cong G$ of grouplike elements of $\R[G]$ (and its closed convex hull $B(G)$) is contained in the affine hyperplane $1+I$, in the light of Theorem \ref{th:comp-prolie} it is appropriate to observe that in the circumstances of {\rm Corollary \ref{c:split}}, the Lie algebra of primitive elements $\Pi(\R[G])\cong \L(G)$ is contained in $I=\ker k$. Indeed the ground field $\R$ is itself a Hopf algebra with the natural isomorphism $c_\R\colon \R\to \R\otimes \R$ satisfying $c_\R(r)= r\.(1\otimes1)=r\otimes1=1\otimes r$. Now the coidentity $k$ of any coalgebra $A$ is a morphism of coalgebras so that we have a commutative diagram for $A\defi\R[G]$: \vglue-10pt $$\begin{matrix} A&\mapright{c_A}&A\otimes A\\ \lmapdown{k}&&\mapdown{k\otimes k}\\ \R&\mapright{c_\R}&\R\otimes\R,\end{matrix}\leqno(1) $$ If $a\in A$ is primitive, then $c_A(a)=a\otimes1 + 1\otimes a$. The commutativity of (1) provides $k(a)\otimes1=c_\R(k(a))=(k\otimes k)(c_A(a))=(k\otimes k)(a\otimes1 +1\otimes a)= k(a)\otimes1 + 1\otimes k(a)$, yielding $1\otimes k(a)=0$, that is, $k(a)=0$, which indeed means $a\in\ker k=I$. We note that these matters are also compatible with the Main Theorem for Compact Groups B \ref{th:comp-prolie} insofar as, trivially, $\exp(I)\subseteq 1+I$. } \bsk \vglue25pt \nin {\bf Acknowledgments.}\quad The authors thank {\sc G\'abor Luk\'acs} for extensive and deep discussions inspiring much of the present investigations, which could have hardly evolved without them.
\nin They also gratefully acknowledge the kind assistance of {\sc Mahir Can} of Tulane University, New Orleans, in the compilation of the references, notably with \cite{boch}, \cite{bresar}, and \cite{good}. Lemma \ref{l:canslemma} is rightfully called Can's Lemma. \nin We are also very grateful for the thorough and expeditious input of the referee, who has assisted us in eliminating numerous formal flaws in our manuscript and who directed us to relevant points that needed further exposition.
\section{Introduction} In this paper we study the binary quasiorder on semigroups. Every semigroup carries many important quasiorders (for example, those generated by the Green relations). One of them is the binary quasiorder $\lesssim$ defined as follows. Given two elements $x,y$ of a semigroup $X$ we write $x\lesssim y$ if $\chi(x)\le\chi(y)$ for any homomorphism $\chi:X\to\{0,1\}$. On every semigroup $X$ the binary quasiorder generates a congruence, which coincides with the least semilattice congruence, and decomposes the semigroup into a semilattice of semilattice-indecomposable semigroups. This fundamental decomposition result was proved by Tamura \cite{Tam56} (see also \cite{Petrich63}, \cite{Petrich64}, \cite{TS66}). Because of its fundamental importance, the least semilattice congruence has been deeply studied by many mathematicians; see the papers \cite{CB93}, \cite{CB}, \cite{Galbiati}, \cite{Gigon}, \cite{MRV}, \cite{PBC}, \cite{Putcha73}, \cite{Putcha74}, \cite{PW}, \cite{Petrich63}, \cite{Petrich64}, \cite{Sulka}, \cite{TK54}, \cite{Tamura73}, \cite{Tamura82}, the surveys \cite{Mitro2004}, \cite{MS}, and the monographs \cite{BCP}, \cite{Mitro2003}, \cite{Petrich73}. The aim of this paper is to provide a survey of known and new results on the binary quasiorder and the least semilattice congruence on semigroups. The obtained results will be applied in the theory of categorically closed semigroups developed by the first author in collaboration with Serhii Bardyla, see \cite{BB1,BB2,BB3,BB4,BB5}. \section{Preliminaries} In this section we collect some standard notions that will be used in the paper. We refer to \cite{Howie} for Fundamentals of Semigroup Theory. We denote by $\w$ the set of all finite ordinals and by $\IN\defeq\w\setminus\{0\}$ the set of all positive integers. A {\em semigroup} is a set endowed with an associative binary operation.
A semigroup $X$ is called a {\em semilattice} if $X$ is commutative and every element $x\in X$ is an {\em idempotent} which means $xx=x$. Each semilattice $X$ carries the {\em natural partial order} $\leq$ defined by $x\le y$ iff $xy=x$. For a semigroup $X$ we denote by $E(X)\defeq\{x\in X:xx=x\}$ the set of idempotents of $X$. Let $X$ be a semigroup. For an element $x\in X$ let $$x^\IN\defeq\{x^n:n\in\IN\}$$ be the monogenic subsemigroup of $X$, generated by the element $x$. For two subsets $A,B\subseteq X$, let $AB\defeq\{ab:a\in A,\;b\in B\}$ be the product of $A,B$ in $X$. For an element $a$ of a semigroup $X$, the set $$H_a=\{x\in X:(xX^1=aX^1)\;\wedge\;(X^1x=X^1a)\}$$ is called the {\em $\mathcal H$-class} of $a$. Here $X^1=X\cup\{1\}$ where $1$ is an element such that $1x=x=x1$ for all $x\in X^1$. By Corollary 2.2.6 \cite{Howie}, for every idempotent $e\in E(X)$ its $\mathcal H$-class $H_e$ coincides with the maximal subgroup of $X$, containing the idempotent $e$. \section{The binary quasiorder} In this section we discuss the binary quasiorder on a semigroup and its relation to the least semilattice congruence. Let $\two$ denote the set $\{0,1\}$ endowed with the operation of multiplication inherited from the ring $\IZ$. It is clear that $\two$ is a two-element semilattice, so it carries the natural partial order, which coincides with the linear order inherited from $\IZ$. For elements $x,y$ of a semigroup $X$ we write $x\lesssim y$ if $\chi(x)\le \chi(y)$ for every homomorphism $\chi:X\to\two$. It is clear that $\lesssim$ is a quasiorder on $X$. This quasiorder will be referred to as {\em the binary quasiorder} on $X$. The obvious order properties of the semilattice $\two$ imply the following (obvious) properties of the binary quasiorder on $X$. 
\begin{proposition}\label{p:quasi2} For any semigroup $X$ and any elements $x,y,a\in X$, the following statements hold: \begin{enumerate} \item if $x\lesssim y$, then $ax\lesssim ay$ and $xa\lesssim ya$; \item $xy\lesssim yx\lesssim xy$; \item $x\lesssim x^2\lesssim x$; \item $xy\lesssim x$ and $xy\lesssim y$. \end{enumerate} \end{proposition} For an element $a$ of a semigroup $X$, consider the following sets: $$ {\Uparrow}a\defeq\{x\in X:a\lesssim x\},\quad {\Downarrow}a\defeq\{x\in X:x\lesssim a\},\quad\mbox{and}\quad{\Updownarrow}a\defeq\{x\in X:a\lesssim x\lesssim a\} $$ called the {\em upper $\two$-class}, the {\em lower $\two$-class} and the {\em $\two$-class} of $a$, respectively. Proposition~\ref{p:quasi2} implies that those three classes are subsemigroups of $X$. For two elements $x,y\in X$ we write $x\Updownarrow y$ iff ${\Updownarrow}x={\Updownarrow}y$ iff $\chi(x)=\chi(y)$ for any homomorphism $\chi:X\to\two$. Proposition~\ref{p:quasi2} implies that $\Updownarrow$ is a congruence on $X$. We recall that a {\em congruence} on a semigroup $X$ is an equivalence relation $\approx$ on $X$ such that for any elements $x\approx y$ of $X$ and any $a\in X$ we have $ax\approx ay$ and $xa\approx ya$. For any congruence $\approx$ on a semigroup $X$, the quotient set $X/_\approx$ has a unique semigroup structure such that the quotient map $X\to X/_\approx$ is a semigroup homomorphism. The semigroup $X/_\approx$ is called the {\em quotient semigroup} of $X$ by the congruence $\approx$. A congruence $\approx$ on a semigroup $X$ is called a {\em semilattice congruence} if the quotient semigroup $X/_\approx$ is a semilattice. Proposition~\ref{p:quasi2} implies that $\Updownarrow$ is a semilattice congruence on $X$.
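Since $\two$ is finite, the binary quasiorder on a finite semigroup can be computed directly from its definition by enumerating all homomorphisms $\chi:X\to\two$. The following Python sketch is our own illustration, not part of the paper; the three-element chain under $\min$ is a toy example. On a semilattice the computation recovers the natural partial order, in accordance with the definitions above.

```python
from itertools import product

def binary_quasiorder(elems, mul):
    """Return {(x, y) : x is below y in the binary quasiorder}, found by
    brute force: enumerate every map chi : X -> {0, 1} and keep the
    homomorphisms, i.e. the maps with chi(ab) = chi(a) * chi(b)."""
    homs = []
    for vals in product((0, 1), repeat=len(elems)):
        chi = dict(zip(elems, vals))
        if all(chi[mul(a, b)] == chi[a] * chi[b] for a in elems for b in elems):
            homs.append(chi)
    # x <= y iff chi(x) <= chi(y) for every homomorphism chi
    return {(x, y) for x in elems for y in elems
            if all(chi[x] <= chi[y] for chi in homs)}

# Toy example: the three-element semilattice ({0, 1, 2}, min).
# Here the binary quasiorder coincides with the natural partial order
# x <= y iff xy = x.
le = binary_quasiorder([0, 1, 2], min)
```

The enumeration is exponential in $|X|$, so this is only practical for very small semigroups; it is meant as an executable restatement of the definition, not as an algorithm.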
The intersection of all semilattice congruences on a semigroup $X$ is a semilattice congruence called the {\em least semilattice congruence}, denoted by $\eta$ in \cite{Howie}, \cite{HL} (by $\xi$ in \cite{Tamura73}, \cite{Mitro2004}, and by $\rho_0$ in \cite{BCP}). The minimality of $\eta$ implies that $\eta\subseteq {\Updownarrow}$. The inverse inclusion ${\Updownarrow}\subseteq\eta$ will be deduced from the following (probably known) theorem on extensions of $\two$-valued homomorphisms. \begin{theorem}\label{t:extend} Let $\pi:X\to Y$ be a surjective homomorphism from a semigroup $X$ to a semilattice $Y$. For every subsemilattice $S\subseteq Y$ and homomorphism $f:\pi^{-1}[S]\to\two$ there exists a homomorphism $F:X\to\two$ such that $F{\restriction}_{\pi^{-1}[S]}=f$. \end{theorem} \begin{proof} We claim that the function $F:X\to\two$ defined by $$F(x)=\begin{cases}1&\mbox{if $\exists z\in\pi^{-1}[S]$ such that $\pi(xz)\in S$ and $f(xz)=1$};\\ 0&\mbox{otherwise}; \end{cases} $$is a required homomorphism extending $f$. To see that $F$ extends $f$, take any $x\in\pi^{-1}[S]$. If $f(x)=1$, then for $z=x$ we have $\pi(xz)=\pi(x)\pi(z)=\pi(x)\pi(x)=\pi(x)\in S$ and $f(xz)=f(x)f(z)=f(x)f(x)=1$ and hence $F(x)=1=f(x)$. If $F(x)=1$, then there exists $z\in \pi^{-1}[S]$ such that $\pi(xz)\in S$ and $f(x)f(z)=f(xz)=1$, which implies that $f(x)=1$. Therefore, $F(x)=1$ if and only if $f(x)=1$. Since $\two$ has only two elements, this implies that $f=F{\restriction}_{\pi^{-1}[S]}$. To show that $F$ is a homomorphism, fix any elements $x_1,x_2\in X$. We should prove that $F(x_1x_2)=F(x_1)F(x_2)$. First assume that $F(x_1x_2)=0$. If $F(x_1)$ or $F(x_2)$ equals $0$, then $F(x_1)F(x_2)=0$ and we are done. So, assume that $F(x_1)=1=F(x_2)$. Then the definition of $F$ yields elements $z_1,z_2\in \pi^{-1}[S]$ such that $\pi(x_iz_i)\in S$ and $f(x_iz_i)=1$ for every $i\in\{1,2\}$. 
Now consider the element $z=z_1z_2\in\pi^{-1}[S]$ and observe that $$\pi(x_1x_2z)=\pi(x_1x_2z_1z_2)=\pi(x_1)\pi(x_2)\pi(z_1)\pi(z_2)=\pi(x_1z_1)\pi(x_2z_2)\in S$$ and $f(z)=f(z_1z_2)=f(z_1)f(z_2)=1\cdot 1=1$ and hence $F(x_1x_2)=1$ by the definition of $F$. But this contradicts our assumption. Next, assume that $F(x_1x_2)=1$. Then there exists $z\in\pi^{-1}[S]$ such that $\pi(x_1x_2z)\in S$ and $f(x_1x_2z)=1$. Let $z'=x_1x_2z\in\pi^{-1}[S]$ and observe that for every $i\in\{1,2\}$ we have $\pi(x_iz')=\pi(x_i)\pi(x_1)\pi(x_2)\pi(z)=\pi(x_1)\pi(x_2)\pi(z)=\pi(x_1x_2z)\in S$. It follows from $1=f(x_1x_2z)=f(x_1)f(x_2)f(z)=f(x_i)f(x_1)f(x_2)f(z)$ that $f(x_i)=1=f(z')$ and hence $F(x_i)=1$. Then $F(x_1)F(x_2)=1=F(x_1x_2)$, which completes the proof. \end{proof} \begin{corollary}\label{c:extend} Any homomorphism $f:S\to \two$ defined on a subsemilattice $S$ of a semilattice $X$ can be extended to a homomorphism $F:X\to\two$. \end{corollary} \begin{proof} Apply Theorem~\ref{t:extend} to the identity homomorphism $\pi:X\to X$. \end{proof} Corollary~\ref{c:extend} implies the following important fact, first noticed by Petrich \cite{Petrich63}, \cite{Petrich64} and Tamura \cite{Tamura73}. \begin{theorem} The congruence $\Updownarrow$ on any semigroup $X$ coincides with the least semilattice congruence on $X$. \end{theorem} \begin{proof} Let $\eta$ be the least semilattice congruence on $X$ and $\eta(\cdot):X\to X/\eta$ be the quotient homomorphism assigning to each element $x\in X$ its equivalence class $\eta(x)\in X/\eta$. We need to prove that $\eta(x)={\Updownarrow}x$ for all $x\in X$. Taking into account that ${\Updownarrow}$ is a semilattice congruence and $\eta$ is the least semilattice congruence on $X$, we conclude that $\eta\subseteq{\Updownarrow}$ and hence $\eta(x)\subseteq {\Updownarrow}x$ for all $x\in X$. Assuming that $\eta\ne{\Updownarrow}$, we can find elements $x,y\in X$ such that $x\Updownarrow y$ but $\eta(x)\ne \eta(y)$.
Consider the subsemilattice $S=\{\eta(x),\eta(y),\eta(x)\eta(y)\}$ of the semilattice $X/\eta$. It follows from $\eta(x)\ne\eta(y)$ that $\eta(x)\eta(y)\ne\eta(x)$ or $\eta(x)\eta(y)\ne\eta(y)$. Replacing the pair $x,y$ by the pair $y,x$, we can assume that $\eta(x)\eta(y)\ne \eta(y)$. In this case the unique function $h:S\to\two$ with $h^{-1}(1)=\{\eta(y)\}$ is a homomorphism. By Corollary~\ref{c:extend}, the homomorphism $h$ can be extended to a homomorphism $H:X/\eta\to\two$. Then the composition $\chi\defeq H\circ\eta(\cdot):X\to\two$ is a homomorphism such that $\chi(x)=0\ne 1=\chi(y)$, which implies that ${\Updownarrow}x\ne{\Updownarrow}y$. But this contradicts the choice of the points $x,y$. This contradiction completes the proof of the equality ${\Updownarrow}=\eta$. \end{proof} A semigroup $X$ is called {\em $\two$-trivial} if every homomorphism $h:X\to\two$ is constant. Tamura \cite{Tamura73}, \cite{Tamura82} calls $\two$-trivial semigroups {\em semilattice-indecomposable} (or briefly {\em $s$-indecomposable}) semigroups. Theorem~\ref{t:extend} implies the following fundamental fact first proved by Tamura \cite{Tam56} and then reproved by another method in \cite{TS66}, see also \cite{Petrich63}, \cite{Petrich64}. \begin{theorem}[Tamura]\label{t:Tamura} For every element $x$ of a semigroup $X$ its $\two$-class ${\Updownarrow}x$ is a $\two$-trivial semigroup. \end{theorem} Now we provide an inner description of the binary quasiorder via prime (co)ideals, following the approach of Petrich \cite{Petrich64} and Tamura \cite{Tamura73}. A subset $I$ of a semigroup $X$ is called \begin{itemize} \item an {\em ideal} in $X$ if $(IX)\cup (XI)\subseteq I$; \item a {\em prime ideal} if $I$ is an ideal such that $X\setminus I$ is a subsemigroup of $X$; \item a ({\em prime}) {\em coideal} if the complement $X\setminus I$ is a (prime) ideal in $X$. \end{itemize} According to this definition, the sets $\emptyset$ and $X$ are prime (co)ideals in $X$.
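These definitions are easy to test mechanically on finite semigroups. The Python sketch below is our illustration (the chain $(\{0,1,2\},\min)$ is a toy example, not taken from the paper); it transcribes the definitions of ideal, prime ideal and prime coideal verbatim and confirms that $\emptyset$ and $X$ are always prime (co)ideals.

```python
def is_ideal(I, X, mul):
    # (IX) union (XI) is contained in I
    return all(mul(a, x) in I and mul(x, a) in I for a in I for x in X)

def is_subsemigroup(S, mul):
    return all(mul(a, b) in S for a in S for b in S)

def is_prime_ideal(I, X, mul):
    # an ideal whose complement is a subsemigroup
    return is_ideal(I, X, mul) and is_subsemigroup(X - I, mul)

def is_prime_coideal(A, X, mul):
    # the complement of A is a prime ideal
    return is_prime_ideal(X - A, X, mul)

# Toy example: the semilattice ({0, 1, 2}, min).  Its proper prime
# ideals are exactly the proper lower sets, and the empty set and X
# itself are prime (co)ideals, as stated above.
X = {0, 1, 2}
```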
Observe that a subset $A$ of a semigroup $X$ is a prime coideal in $X$ if and only if its {\em characteristic function} $$\chi_A:X\to\two,\quad \chi_A:x\mapsto\chi_A(x)\defeq\begin{cases}1&\mbox{if $x\in A$},\\ 0&\mbox{otherwise}, \end{cases} $$ is a homomorphism. This characterization of prime coideals via characteristic functions implies the following inner description of the $\two$-quasiorder, first noticed by Tamura in \cite{Tamura73}. \begin{proposition}\label{p:smallest-pi} For any element $x$ of a semigroup $X$, its upper $\two$-class ${\Uparrow}x$ coincides with the smallest prime coideal of $X$ that contains $x$. \end{proposition} The following inner description of the upper $\two$-classes is a modified version of Theorem 3.3 in \cite{Petrich64}. \begin{proposition}\label{p:Upclass} For any element $x$ of a semigroup $X$ its upper $\two$-class ${\Uparrow}x$ is equal to the union $\bigcup_{n\in\w}{\Uparrow}_{\!n}x$, where ${\Uparrow}_{\!0}x=\{x\}$ and $${\Uparrow}_{\!n{+}1}x\defeq\{y\in X:X^1yX^1\cap({\Uparrow}_{\!n}x)^2\ne \emptyset\}$$ for $n\in\w$. \end{proposition} \begin{proof} Observe that for every $n\in\w$ and $y\in {\Uparrow}_{\!n}x$ we have $yy\in X^1yX^1\cap({\Uparrow}_{\!n}x)^2\ne\emptyset$ and hence $y\in {\Uparrow}_{\!n{+}1}x$. Therefore, $({\Uparrow}_{\!n}x)_{n\in\w}$ is an increasing sequence of sets. Also, for every $y,z\in{\Uparrow}_{\!n}x$ we have $yz\in X^1yzX^1\cap({\Uparrow}_{\!n}x)^2$ and hence $yz\in {\Uparrow}_{\!n+1}x$, which implies that the union ${\Uparrow}_{\!\w}x\defeq\bigcup_{n\in\w}{\Uparrow}_{\!n}x$ is a subsemigroup of $X$. The definition of the sets ${\Uparrow}_{\!n}x$ implies that the complement $I=X\setminus{\Uparrow}_{\!\w}x$ is an ideal in $X$. Then ${\Uparrow}_{\!\w}x$ is a prime coideal in $X$. Taking into account that ${\Uparrow}x$ is the smallest prime coideal containing $x$, we conclude that ${\Uparrow}x\subseteq{\Uparrow}_{\!\w}x$.
To prove that ${\Uparrow}_{\!\w}x\subseteq{\Uparrow}x$, it suffices to check that ${\Uparrow}_{\!n}x\subseteq{\Uparrow}x$ for every $n\in\w$. It is trivially true for $n=0$. Assume that for some $n\in\w$ we have already proved that ${\Uparrow}_{\!n}x\subseteq {\Uparrow}x$. Since ${\Uparrow}x$ is a coideal in $X$, for any $y\in X\setminus{\Uparrow}x$ we have $\emptyset=X^1yX^1\cap {\Uparrow}x\supseteq X^1yX^1\cap{\Uparrow}_{\!n}x$, which implies that $y\notin{\Uparrow}_{\!n{+}1}x$ and hence ${\Uparrow}_{\!n{+}1}x\subseteq{\Uparrow}x$. By the Principle of Mathematical Induction, ${\Uparrow}_{\!n}x\subseteq {\Uparrow}x$ for all $n\in\w$ and hence ${\Uparrow}_{\!\w}x=\bigcup_{n\in\w}{\Uparrow}_{\!n}x\subseteq {\Uparrow}x$, and finally ${\Uparrow}_{\!\w}x={\Uparrow}x$. \end{proof} For a positive integer $n$, let $$2^{<n}\defeq\bigcup_{k<n}\{0,1\}^k\quad\mbox{and}\quad 2^{\le n}\defeq\bigcup_{k\le n}\{0,1\}^k.$$For a sequence $s=(s_0,\dots,s_{n-1})\in 2^n$ and a number $k\in\{0,1\}$, let $$s\hat{\;}k\defeq (s_0,\dots,s_{n-1},k)\quad\mbox{and}\quad k\hat{\;}s\defeq (k,s_0,\dots,s_{n-1}).$$ The following proposition provides a constructive description of elements of the sets ${\Uparrow}_{\!n}x$ appearing in Proposition~\ref{p:Upclass}. \begin{proposition}\label{p:Upclass-tree} For every $n\in\IN$ and every element $x$ of a semigroup $X$, the set ${\Uparrow}_{\!n}x$ coincides with the set ${\Uparrow}_{\!n}'x$ of all elements $y\in X$ for which there exist sequences $\{x_s\}_{s\in 2^{\le n}}$, $\{y_s\}_{s\in 2^{\le n}}\subseteq X$ and $\{a_{s}\}_{s\in 2^{\le n}},\{b_s\}_{s\in 2^{\le n}}\subseteq X^1$ satisfying the following conditions: \begin{enumerate} \item[$(1_n)$] $x_s=x$ for all $s\in 2^n$; \item[$(2_n)$] $y_s=a_sx_sb_s$ for every $s\in 2^{\le n}$; \item[$(3_n)$] $y_s=x_{s\hat{\;}0}x_{s\hat{\;}1}$ for every $s\in 2^{<n}$; \item[$(4_n)$] $x_{()}=y$ for the unique element $()$ of $2^0$.
\end{enumerate} \end{proposition} \begin{proof} This proposition will be proved by induction on $n$. For $n=1$, we have $$ \begin{aligned} {\Uparrow}_{\!1}x&\defeq\{y\in X:xx\in X^1yX^1\}=\{y\in X:\exists a,b\in X^1\;\;ayb=xx\}\\ &=\{y\in X:\exists \{x_s\}_{s\in 2^{\le 1}},\{y_s\}_{s\in 2^{\le 1}}\subseteq X,\; \{a_s\}_{s\in 2^{\le 1}},\{b_s\}_{s\in 2^{\le 1}}\subseteq X^1,\\ &\hskip60pt x_{(0)}=x_{(1)}=x,\;y_{()}=x_{(0)}x_{(1)},\;x_{()}=y,\;y_{()}=a_{()}x_{()}b_{()}\}={\Uparrow}'_{\!1}x. \end{aligned} $$ Assume that for some $n\in\IN$ the equality ${\Uparrow}_{\!n}x={\Uparrow}'_{\!n}x$ has been proved. To check that ${\Uparrow}_{\!n{+}1}x\subseteq{\Uparrow}'_{\!n{+}1}x$, take any $x_{()}\in{\Uparrow}_{\!n{+}1}x$. The definition of ${\Uparrow}_{\!n{+}1}x$ ensures that $X^1x_{()}X^1\cap ({\Uparrow}_{\!n}x)^2\ne\emptyset$ and hence $a_{()}x_{()}b_{()}=x_{(0)}x_{(1)}$ for some $a_{()},b_{()}\in X^1$ and $x_{(0)},x_{(1)}\in{\Uparrow}_{\!n}x={\Uparrow}_{\!n}'x$. By the definition of the set ${\Uparrow}'_{\!n}x$, for every $k\in\{0,1\}$, there exist sequences $\{x_{k\hat{\;}s}\}_{s\in 2^{\le n}},\{y_{k\hat{\;}s}\}_{s\in 2^{\le n}}\subseteq X$ and $\{a_{k\hat{\;}s}\}_{s\in 2^{\le n}},\{b_{k\hat{\;}s}\}_{s\in 2^{\le n}}\subseteq X^1$ such that \begin{itemize} \item $x_{k\hat{\;}s}=x$ for all $s\in 2^n$; \item $y_{k\hat{\;}s}=a_{k\hat{\;}s}x_{k\hat{\;}s}b_{k\hat{\;}s}$ for every $s\in 2^{\le n}$; \item $y_{k\hat{\;}s}=x_{k\hat{\;}s\hat{\;}0}x_{k\hat{\;}s\hat{\;}1}$ for every $s\in 2^{<n}$. \end{itemize} Then the sequences $\{x_s\}_{s\in 2^{\le n{+}1}},\{y_s\}_{s\in 2^{\le n{+}1}}\subseteq X$ and $\{a_s\}_{s\in 2^{\le n{+}1}},\{b_s\}_{s\in 2^{\le n{+}1}}\subseteq X^1$ witness that $x_{()}\in{\Uparrow}'_{\!n{+}1}x$, which completes the proof of the inclusion ${\Uparrow}_{\!n{+}1}x\subseteq {\Uparrow}_{\!n{+}1}'x$.
\smallskip To prove that ${\Uparrow}_{\!n{+}1}'x\subseteq {\Uparrow}_{\!n{+}1}x$, take any $x_{()}\in {\Uparrow}_{\!n{+}1}'x$ and by the definition of ${\Uparrow}_{\!n{+}1}'x$, find sequences $\{x_s\}_{s\in 2^{\le n{+}1}},\{y_s\}_{s\in 2^{\le n{+}1}}\subseteq X$ and $\{a_s\}_{s\in 2^{\le n{+}1}},\{b_s\}_{s\in 2^{\le n{+}1}}\subseteq X^1$ satisfying the conditions $(1_{n+1})$--$(3_{n+1})$. Then for every $k\in\{0,1\}$ the sequences $\{x_{k\hat{\;}s}\}_{s\in 2^{\le n}},\{y_{k\hat{\;}s}\}_{s\in 2^{\le n}}\subseteq X$ and $\{a_{k\hat{\;}s}\}_{s\in 2^{\le n}},\{b_{k\hat{\;}s}\}_{s\in 2^{\le n}}\subseteq X^1$ witness that $x_{(0)},x_{(1)}\in {\Uparrow}'_{\!n}x={\Uparrow}_{\!n}x$ and then the equalities $a_{()}x_{()}b_{()}=y_{()}=x_{(0)}x_{(1)}\in({\Uparrow}_{\!n}x)^2$ imply that $X^1x_{()}X^1\cap({\Uparrow}_{\!n}x)^2\ne\emptyset$ and hence $x_{()}\in {\Uparrow}_{\!n{+}1}x$, which completes the proof of the equality $ {\Uparrow}_{\!n{+}1}x={\Uparrow}_{\!n{+}1}'x$. \end{proof} A semigroup $X$ is called {\em duo} if $aX=Xa$ for every $a\in X$. Observe that each commutative semigroup is duo. The upper $\two$-classes in duo semigroups have the following simpler description. \begin{theorem}\label{t:duo} For any element $a$ of a duo semigroup $X$ we have $${\Uparrow}a=\{x\in X:a^\IN\cap XxX\ne\emptyset\}.$$ \end{theorem} \begin{proof} First we prove that the set $\frac{a^\IN}X\defeq \{x\in X:a^\IN\cap XxX\ne\emptyset\}$ is contained in ${\Uparrow}a$. In the opposite case, we can find a point $x\in\frac{a^\IN}{X}\setminus {\Uparrow}a$. Taking into account that ${\Uparrow}a$ is a coideal containing $a$, we conclude that $a^\IN\subseteq{\Uparrow}a$ and $\emptyset= XxX\cap{\Uparrow}a\supseteq XxX\cap a^\IN$, which contradicts the choice of the point $x\in\frac{a^\IN}X$. This contradiction shows that $\frac{a^\IN}X\subseteq {\Uparrow}a$. Next, we prove that $\frac{a^\IN}X$ is a prime coideal. Since $X$ is a duo semigroup, for every $x\in X$ we have $X^1x=xX^1=X^1xX^1$.
If $x,y\in \frac{a^\IN}X$, then $$X^1x\cap a^\IN=X^1xX^1\cap a^\IN\ne\emptyset\ne X^1yX^1\cap a^\IN=yX^1\cap a^\IN$$ and hence $X^1xyX^1\cap a^\IN\ne\emptyset$, which means that $xy\in\frac{a^\IN}X$. Therefore, $\frac{a^\IN}X$ is a subsemigroup of $X$. The definition of $\frac{a^\IN}X$ ensures that $X\setminus \frac{a^\IN}X$ is an ideal in $X$. Then $\frac{a^\IN}X\subseteq{\Uparrow}a$ is a prime coideal in $X$ and $\frac{a^\IN}X={\Uparrow}a$, by the minimality of ${\Uparrow}a$. \end{proof} For viable semigroups Putcha and Weissglass \cite{PW} proved the following simplification of Proposition~\ref{p:Upclass}. Following Putcha and Weissglass \cite{PW}, we define a semigroup $X$ to be {\em viable} if for any elements $x,y\in X$ with $\{xy,yx\}\subseteq E(X)$, we have $xy=yx$. For various equivalent conditions to the viability, see \cite{Ban}. \begin{proposition}[Putcha--Weissglass]\label{p:PW} If $X$ is a viable semigroup, then for every idempotent $e\in E(X)$ we have ${\Uparrow}e=\{x\in X:e\in X^1xX^1\}$. \end{proposition} \begin{proof} We present a short proof of this theorem, for convenience of the reader. Let ${\Uparrow}_{\!1}e\defeq \{x\in X:e\in X^1xX^1\}$. By Proposition~\ref{p:Upclass}, ${\Uparrow}_{\!1}e\subseteq {\Uparrow}e$. The reverse inclusion will follow from the minimality of the prime coideal ${\Uparrow}e$ as soon as we prove that ${\Uparrow}_{\!1}e$ is a prime coideal in $X$. It is clear from the definition that ${\Uparrow}_{\!1}e$ is a coideal. So, it remains to check that ${\Uparrow}_{\!1}e$ is a subsemigroup. Given any elements $x,y\in {\Uparrow}_{\!1}e$, find elements $a,b,c,d\in X^1$ such that $axb=e=cyd$. Then $axbe=ee=e$ and $(beax)(beax)=be(axbe)ax=beeax=beax$, which means that $beax$ is an idempotent. By the viability of $X$, $axbe=e=beax$. By analogy we can prove that $ecyd=e=ydec$. Then $(beax)(ydec)=ee=e$, so $e=bea(xy)dec\in X^1xyX^1$ and hence $xy\in{\Uparrow}_{\!1}e$. \end{proof} Proposition~\ref{p:PW} has an important corollary, proved in \cite{PW}.
\begin{corollary}[Putcha--Weissglass] If $X$ is a viable semigroup, then for every $x\in X$ its $\two$-class ${\Updownarrow}x$ contains at most one idempotent. \end{corollary} \begin{proof} To derive a contradiction, assume that the semigroup ${\Updownarrow}x$ contains two distinct idempotents $e,f$. By Proposition~\ref{p:PW}, there are elements $a,b,c,d\in X^1$ such that $e=afb$ and $f=ced$. Observe that $afbe=ee=e$ and $(beaf)(beaf)=be(afbe)af=beeaf=beaf$ and hence $afbe$ and $beaf$ are idempotents. The viability of $X$ ensures that $afbe=beaf$. By analogy we can prove that $eafb=e=fbea$, $cedf=f=dfce$ and $fced=f=edfc$. These equalities imply that $H_e=H_f$ and hence $e=f$ because the group $H_e=H_f$ contains a unique idempotent. But the equality $e=f$ contradicts the choice of the idempotents $e,f$. \end{proof} \section{The structure of $\two$-trivial semigroups} Tamura's Theorem~\ref{t:Tamura} motivates the problem of a deeper study of the structure of $\two$-trivial semigroups. This problem has been considered in the literature, see, e.g. \cite[\S3]{Petrich64}. Proposition~\ref{p:smallest-pi} implies the following simple characterization of $\two$-trivial semigroups. \begin{theorem}\label{t:primesimple} A semigroup $X$ is $\two$-trivial if and only if every nonempty prime ideal in $X$ coincides with $X$. \end{theorem} Observe that a semigroup $X$ is $\two$-trivial if and only if $X={\Uparrow}x$ for every $x\in X$. This observation and Propositions~\ref{p:Upclass} and \ref{p:Upclass-tree} imply the following characterization.
\begin{proposition} A semigroup $X$ is $\two$-trivial if and only if for every $x,y\in X$ there exists $n\in\IN$ and sequences $\{a_{s}\}_{s\in 2^{\le n}},\{b_s\}_{s\in 2^{\le n}}\subseteq X^1$ and $\{x_s\}_{s\in 2^{\le n}},\{y_s\}_{s\in 2^{\le n}}\subseteq X$ satisfying the following conditions: \begin{enumerate} \item $x_s=x$ for all $s\in 2^n$; \item $y_s=a_sx_sb_s$ for every $s\in 2^{\le n}$; \item $y_s=x_{s\hat{\;}0}x_{s\hat{\;}1}$ for every $s\in 2^{<n}$; \item $x_{()}=y$ for the unique element $()$ of $2^0$. \end{enumerate} \end{proposition} A semigroup $X$ is called {\em Archimedean} if for any elements $x,y\in X$ there exists $n\in\IN$ such that $x^n\in XyX$. A standard example of an Archimedean semigroup is the additive semigroup $\IN$ of positive integers. For commutative semigroups the following characterization was obtained by Tamura and Kimura in \cite{TK54}. \begin{theorem}\label{t:Archimed} A duo semigroup $X$ is $\two$-trivial if and only if $X$ is Archimedean. \end{theorem} \begin{proof} If $X$ is $\two$-trivial, then by Theorem~\ref{t:duo}, for every $x,y\in X$ there exists $n\in\IN$ such that $x^n\in XyX$, which means that $X$ is Archimedean. If $X$ is Archimedean, then for every $x\in X$, we have $${\Uparrow}x=\{y\in X:x^\IN\cap XyX\ne\emptyset\}=X,$$ see Theorem~\ref{t:duo}, which means that the semigroup $X$ is $\two$-trivial. \end{proof} Following Tamura \cite{Tamura82}, we define a semigroup $X$ to be {\em unipotent} if $X$ contains a unique idempotent. \begin{theorem}[Tamura, 1982]\label{t:max-ideal} For the unique idempotent $e$ of a unipotent $\two$-trivial semigroup $X$, the maximal group $H_e$ of $e$ in $X$ is an ideal in $X$. \end{theorem} \begin{proof} This theorem was proved by Tamura in \cite{Tamura82}. We present here an alternative (and direct) proof. To derive a contradiction, assume that $H_e$ is not an ideal in $X$. Then the set $I\defeq\{x\in X:\{ex,xe\}\not\subseteq H_e\}$ is not empty.
We claim that $I$ is an ideal in $X$. Assuming the opposite, we could find $x\in I$ and $y\in X$ such that $xy\notin I$ or $yx\notin I$. If $xy\notin I$, then $\{exy,xye\}\subseteq H_e$. Taking into account that $exy$ and $xye$ are elements of the group $H_e$, we conclude that $exy=exye=xye$. Let $g$ be the inverse element to $xye$ in the group $H_e$. Then $exyg=xyeg=xyg=e$. Replacing $y$ by $yg$, we can assume that $ye=y$ and $xy=e$. Observe that $yxyx=y(xy)x=yex=(ye)x=yx$, which means that $yx$ is an idempotent in $X$. Since $e$ is the unique idempotent of the semigroup $X$, $yx=e=xy$. It follows that $xe=x(yx)=(xy)x=ex$ and $ey=(yx)y=y(xy)=ye=y$. Using this information it is easy to show that $xe=ex\in H_e$. By analogy we can show that the assumption $yx\notin I$ implies $ex=xe\in H_e$. So, in both cases we obtain $ex=xe\in H_e$, which contradicts the choice of $x\in I$. This contradiction shows that $I$ is an ideal in $X$. Observe that for any $x,y\in X\setminus I$ we have $\{ex,xe,ey,ye\}\subseteq H_e$. Then also $xye=x(eye)=(xe)(ye)\in H_e$ and $exy=(exe)y=(ex)(ey)\in H_e$, which means that $xy\in X\setminus I$ and hence $I$ is a nontrivial prime ideal in $X$. But the existence of such an ideal contradicts the $\two$-triviality of $X$. \end{proof} An element $z$ of a semigroup $X$ is called {\em central} if $zx=xz$ for all $x\in X$. \begin{corollary}\label{c:EZK}The unique idempotent $e$ of a unipotent $\two$-trivial semigroup $X$ is central in $X$. \end{corollary} \begin{proof} Let $e$ be the unique idempotent of the unipotent semigroup $X$. By Tamura's Theorem~\ref{t:max-ideal}, the maximal subgroup $H_e$ of $e$ is an ideal in $X$. Then for every $x\in X$ we have $xe,ex\in H_e$. Taking into account that $xe$ and $ex$ are elements of the group $H_e$, we conclude that $ex=exe=xe$. This means that the idempotent $e$ is central in $X$. \end{proof} As we already know, a semigroup $X$ is $\two$-trivial if and only if each nonempty prime ideal in $X$ is equal to $X$.
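On a finite semigroup the above characterization of $\two$-triviality can again be checked by brute force: it suffices to verify that every homomorphism to $\two$ is constant. The following Python sketch is our illustration; the two toy semigroups below (the group $\mathbb{Z}/2$ and the two-element chain) are not taken from the paper.

```python
from itertools import product

def is_two_trivial(elems, mul):
    """True iff every homomorphism chi : X -> {0, 1} is constant,
    equivalently (Theorem t:primesimple) iff X has no nonempty proper
    prime ideal."""
    for vals in product((0, 1), repeat=len(elems)):
        chi = dict(zip(elems, vals))
        if all(chi[mul(a, b)] == chi[a] * chi[b] for a in elems for b in elems):
            if len(set(vals)) > 1:   # a non-constant homomorphism was found
                return False
    return True

# The group Z/2 is simple as a semigroup, hence 2-trivial;
# the two-element semilattice ({0, 1}, min) is not (the identity map
# is a non-constant homomorphism onto {0, 1}).
group_z2 = is_two_trivial([0, 1], lambda a, b: (a + b) % 2)
chain = is_two_trivial([0, 1], min)
```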
A semigroup $X$ is called \begin{itemize} \item {\em simple} if every nonempty ideal in $X$ is equal to $X$; \item {\em $0$-simple} if it contains a zero element $0$, $XX\ne\{0\}$, and every nonempty ideal in $X$ is equal to $X$ or $\{0\}$; \item {\em congruence-free} if every congruence on $X$ is equal to $X\times X$ or $\Delta_X\defeq\{(x,y)\in X\times X:x=y\}$. \end{itemize} It is clear that a semigroup $X$ is $\two$-trivial if $X$ is either simple or congruence-free. On the other hand, the additive semigroup $\mathbb N$ of positive integers is $\two$-trivial but not simple. \begin{remark}\label{r:Bard} By \cite{ACMU}, \cite{CM}, there exists an infinite $0$-simple congruence-free monoid $X$. Being congruence-free, the semigroup $X$ is $\two$-trivial. On the other hand, $X$ contains at least two central idempotents: $0$ and $1$. The $\two$-trivial monoid $X$ is not unipotent and its center $Z(X)=\{z\in X:\forall x\in X\;(xz=zx)\}$ is not $\two$-trivial. The polycyclic monoids (see \cite{BG1}, \cite{BG2}, \cite{Bard16}, \cite{Bard20}) have similar properties. By Theorem 2.4 in \cite{BG1}, for $\lambda\ge 2$ the polycyclic monoid $P_\lambda$ is congruence-free and hence $\two$-trivial, but its center $Z(P_\lambda)=\{0,1\}$ is not $\two$-trivial. \end{remark} \section{Acknowledgements} The authors express their sincere thanks to Oleg Gutik and Serhii Bardyla for valuable information on congruence-free monoids (see Remark~\ref{r:Bard}) and to all listeners of the Lviv Seminar in Topological Algebra (especially Alex Ravsky), whose active listening of the talk of the first-named author allowed us to notice and then correct a crucial gap in the initial version of this manuscript.
\section{Introduction} In 1984 Donaldson \cite{Don84} showed that the moduli space of $SU(r)$-instantons on $\mathbb{R}^4$ is isomorphic to the moduli space of rank $r$ holomorphic vector bundles on the complex projective plane $\mathbb{P}^2$ which are {\em framed} on a line at infinity, i.e.~which are trivial when restricted to that line and have a fixed trivialization there. His proof exploited, on the one hand, the ADHM construction \cite{ADHM}, and, on the other, the monadic description of vector bundles over $\mathbb{P}^2$ \cite{barth77, hulek79, ocs80}. According to the first correspondence the moduli space of instantons on $\mathbb{R}^4$ is interpreted as a hyper-K\"ahler quotient, while, according to the latter, the moduli space of framed holomorphic vector bundles on $\mathbb{P}^2$ is realized as a symplectic GIT quotient. The two constructions (as was remarked by Donaldson) turn out to be equivalent thanks to a result by Kirwan \cite{K84} based on previous work by Kempf and Ness \cite{KN78}. The generalization of the ADHM construction to the case of ALE spaces \cite{KNa90} prompted Nakajima to introduce the notion of {\em quiver variety} \cite{Na94}. Very broadly speaking, a Nakajima quiver variety associated with a quiver $\mathcal{Q}$ can be described as a coarse moduli space of semistable representations of an auxiliary quiver $\overline{\mathcal{Q}^{\mathrm{fr}}}$ concocted from $\mathcal{Q}$ (see \S \ref{quiversection} for precise statements). Under very mild assumptions (Theorem \ref{ginzburgtheorem} $=$ \cite[Theorem~5.2.2.(ii)]{Gin}), every Nakajima quiver variety is a smooth quasi-projective variety and carries a holomorphic symplectic structure obtained through a Hamiltonian reduction.
Besides the above motivations, the interest of such varieties lies in the fact that they have been used by Nakajima \cite{Na94, Na98} ``to give a geometric construction of universal enveloping algebras of Kac-Moody Lie algebras and of all irreducible integrable (e.g., finite dimensional) representations of those algebras'' \cite[p.~143]{Gin}. For more information on this thriving subject see \cite{Schiff} and references therein. After Donaldson's pioneering result, a great deal of work has been directed to the general study of moduli spaces of framed sheaves on (stacky) complex surfaces \cite{L93, HL95a, HL95b, N02a, N02b, BM, SAL, BS}. Given a complex variety $X$, an effective divisor $D$ on $X$ and a torsion-free sheaf $\mathcal{F}_D$ on $D$, a {\em framed sheaf} is a pair $(\mathcal{E}, \theta)$, where $\mathcal{E}$ is a torsion-free sheaf on $X$ and $\theta\colon \mathcal{E}\vert_D \stackrel{\sim}{\tol} \mathcal{F}_D$ is an isomorphism. Huybrechts and Lehn \cite{HL95a, HL95b}, working in a slightly more general setting, introduced a stability condition giving rise to fine moduli spaces of framed sheaves. On smooth projective surfaces, as shown by Bruzzo and Markushevich \cite{BM}, fineness can be ensured by replacing the stability condition by the one of ``good framing'' (see Definition \ref{goodnessdef}). More precisely, they strengthened previous results by Nevins \cite{N02a, N02b} proving that, if $D$ is a big and nef curve on $X$ and $\mathcal{F}_D$ is a good framing sheaf on $D$, then for any class $\gamma \in H^\bullet(X,\mathbb Q)$ there exists a (possibly empty) quasi-projective scheme ${\mathcal {M}}_X(\gamma)$ that is a fine moduli space of $(D, \mathcal{F}_D)$-framed sheaves on $X$ with Chern character $\gamma$ (\cite[Thm~3.1 ]{BM} $=$ Thm~\ref{thmBM} herein). 
However, the aforementioned theorem by Bruzzo and Markushevich does not provide any information either about the nonemptiness of the moduli space of framed sheaves in question, or about its local and global geometric structure. The investigation of specific examples, therefore, retains its importance. The original construction for framed bundles on $\mathbb{P}^2$ was extended by King \cite{Ki89} to framed holomorphic vector bundles on the blowup of $\mathbb{P}^2$ at a point and by Buchdahl \cite{Bu93} to multiple blowups; Henni \cite{He} generalised Buchdahl's results to non-locally-free sheaves. The case of framed torsion-free sheaves on $\mathbb{P}^2$ was first described by Nakajima in his influential lectures \cite{Na99}. This represents a key example, because the corresponding moduli spaces admit an ADHM description in terms of linear data arising as the moment map equation of a holomorphic symplectic quotient and are Nakajima's quiver varieties. Moreover, these spaces are desingularizations of the moduli spaces of ideal instantons over $\mathbb{R}^4$ \cite{Na99}, so that they can be exploited to compute Nekrasov's partition function \cite{Nek03, BFMT} and to perform the so-called instanton counting (i.e., the technique consisting in the use of localization for the equivariant cohomology of the moduli space acted upon by a suitable torus to compute invariants of the moduli space and of the surface) \cite{NY05}. It should be pointed out that, contrary to the case of $\mathbb{P}^2$, the moduli spaces constructed by King, Buchdahl and Henni in the above-cited papers have not been given a description as quiver varieties (even in a broader sense than Nakajima's original one). Whether or not this is possible seems to be a challenging problem. Moduli spaces of framed sheaves on Hirzebruch surfaces $\Sigma_n$ have been studied in the papers \cite{BBR, BBLR} (see also \cite{Ra11, La15}).
In particular, a fine moduli space $\mathcal{M}^{n}(r, a,c)$ parameterizing isomorphism classes of framed sheaves $\mathcal{E}$ on $\Sigma_n$, which have Chern character $\textrm{ch}(\mathcal{E}) = (r, aE, -c -\frac{1}{2} na^2)$ and are trivial, with a fixed trivialization, on a ``line at infinity'', is constructed by means of a monadic approach (Theorem \ref{thmEMon}). One can prove that the space $\mathcal{M}^{n}(r, a,c)$ is nonempty if and only if the inequality $$c \geq C_{\text{m}}(n,a)= \frac{1}{2} na(1-a)$$ is satisfied (Theorem \ref{mainthm}). We shall refer to the case when equality holds as the ``minimal case''. In the rank 1 case the moduli space of framed sheaves on $\Sigma_n$ can be naturally identified with the Hilbert scheme of points on the total space of the line bundle $\mathcal{O}_{\mathbb{P}^1}(-n)$ over $\mathbb{P}^1$, in an analogous way to that by which the moduli space of rank 1 framed sheaves on $\mathbb{P}^2$ is identified with $\operatorname{Hilb}^c(\mathbb{C}^2)$. The schemes $\operatorname{Hilb}^c (\operatorname{Tot}(\mathcal{O}_{\mathbb{P}^1}(-n)))$ admit a realization in terms of generalized ADHM data and turn out to be irreducible connected components of the moduli spaces of representations of suitable quivers. So, they are quiver varieties, although {\it not} in Nakajima's sense (except for $n=2$), but in a more general one (see \cite{Gin} and Section \ref{quiversection}). When $n=2$, $\operatorname{Tot}(\mathcal{O}_{\mathbb{P}^1}(-2))$ is the ALE space $A_1$, and indeed our description coincides with the one obtained by Kuznetsov in \cite{kuz}. While, in general, the spaces $\mathcal{M}^{n}(r, a,c)$ seem to resist a description as quiver varieties, such a description is achievable, in addition to the rank 1 case, also for the minimal case.
Actually, as proved in Proposition \ref{corMshMq}, the schemes $\mathcal{M}^{n}(r, a,C_{\text{m}}(n,a))$ are quiver varieties, which slightly generalize those used by Nakajima \cite{Na94, Na96} to represent partial flag varieties (see Proposition \ref{propCotFlag}). \bigskip The present paper is organized as follows. Sections \ref{sectionBM}, \ref{quiversection}, and \ref{MFSsection} are chiefly of an expository nature. In \S \ref{sectionBM} we outline Bruzzo and Markushevich's theorem about the existence of a fine moduli space for good framing sheaves, while in \S \ref{quiversection} we recall some basic facts about quivers, their representations, and quiver varieties. In \S \ref{MFSsection} we review the construction of the moduli spaces of framed sheaves on $\mathbb{P}^2$ and on $\mathbb{P}^2$ blown-up at $n$ distinct points: albeit both admit a description in terms of ADHM data, only in the first case is a characterization as quiver varieties always available (see Remark \ref{remarkADHM}). Section \ref{SecMon} is devoted to summarizing the basics of the construction of the moduli space of framed sheaves on Hirzebruch surfaces in terms of monads, as worked out in \cite{BBR, BBLR}. The last two sections contain our main results about the minimal case. In \S \ref{sectionminimalcase} we prove (Theorem \ref{thm:minimal}) that $\mathcal{M}^{1}(r, a,C_{\text{m}})$ is isomorphic to the Grassmannian $\operatorname{Gr}(a,r)$, while, for $n\geq 2$, $\mathcal{M}^{n}(r, a, C_{\text{m}})$ is isomorphic to the total space of the vector bundle $T^\vee\operatorname{Gr}(a,r)^{\oplus n-1}$. Moreover, we show (\S \ref{geometricremarks}) that, heuristically, the Grassmannian variety parameterizes the (isomorphism classes of) framings, while the fibres of the direct sum of (copies of) the cotangent bundles classify the sheaves away from the line at infinity.
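A simple dimension count, which follows directly from the isomorphisms just stated, may serve as a cross-check: since the rank of $T^\vee\operatorname{Gr}(a,r)$ equals $\dim \operatorname{Gr}(a,r) = a(r-a)$, for every $n \geq 1$ one gets
\begin{equation*}
\dim \mathcal{M}^{n}(r, a, C_{\text{m}}) \;=\; a(r-a) + (n-1)\,a(r-a) \;=\; n\,a(r-a)\,,
\end{equation*}
the case $n=1$ reducing to the dimension of the Grassmannian $\operatorname{Gr}(a,r)$.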
Finally, in \S \ref{Section-Nakajima-Flag} we provide a description of the moduli spaces $\mathcal{M}^{n}(r, a, C_{\text{m}})$ as quiver varieties. \bigskip {\bf Acknowledgements.} We warmly thank our friend Ugo Bruzzo: not only did he co-author the two papers whose contents are summarised in Section \ref{SecMon}, but he also gave substantial contributions to the results of Section \ref{sectionminimalcase}. We are grateful to the anonymous referee, whose suggestions and remarks helped us to substantially improve the original, very much shorter, version of this paper. This work was partially supported by the PRIN ``Geometria delle variet\`a algebriche'' and by the University of Genoa's research grant ``Aspetti matematici della teoria dei campi interagenti e quantizzazione per deformazione''. V.L. is supported by the FAPESP post-doctoral grant number 2015/07766-4. \bigskip{\bf Conventions.} A {\em scheme} is a separated scheme of finite type over $\mathbb{C}$; a {\em variety} is a reduced scheme. A {\em sheaf} is a coherent sheaf. \bigskip\section{Moduli spaces of framed sheaves}\label{sectionBM} We summarize the main definitions and results of Sections 2 and 3 of \cite{BM}. Let $X$ be a smooth projective variety of dimension $n\geq 2$, $D \subset X$ an effective divisor, and $\mathcal{F}_D$ a torsion-free sheaf on $D$. \begin{defin}\label{definframedsheaf} A torsion-free sheaf $\mathcal{E}$ on $X$ is \emph{$(D, \mathcal{F}_D)$-framable} if there is an $\mathcal{O}_X$-epimorphism $\mathcal{E} \to \mathcal{F}_D$ restricting to an isomorphism $\mathcal{E}\vert_{D} \stackrel{\sim}{\tol} \mathcal{F}_D$. A \emph{$(D, \mathcal{F}_D)$-framed sheaf $(\mathcal{E}, \theta)$ on $X$} is a pair consisting of a $(D, \mathcal{F}_D)$-framable sheaf $\mathcal{E}$ and a framing isomorphism $\theta\colon \mathcal{E}\vert_{D} \stackrel{\sim}{\tol} \mathcal{F}_D$.
Two framed sheaves $(\mathcal{E}, \theta)$ and $(\mathcal{E}', \theta')$ are isomorphic if there is an $\mathcal{O}_X$-isomorphism $\Lambda\colon \mathcal{E} \stackrel{\sim}{\tol} \mathcal{E}'$ such that $\theta' \circ \Lambda\vert_{D}= \theta$. \end{defin} Let $H$ be a polarization on $X$. For any sheaf $\mathcal{E}$ on $X$, we denote by $P^H(\mathcal{E})$ the Hilbert polynomial $P^H(\mathcal{E})(k) = \chi(\mathcal{E}\otimes \mathcal{O}_X(kH))$. If $\operatorname{rk} \mathcal{E} > 0$, for any nonnegative real number $\eta$, the $\eta$-slope of $\mathcal{E}$ is the quantity $$\mu^H_{\eta}(\mathcal{E}) = \frac{c_1(\mathcal{E})\cdot H^{n-1} - \eta}{\operatorname{rk} \mathcal{E}}\,.$$ For $\eta =0$ we recover, of course, the ordinary notion of slope, and so we write $\mu^H_{0}(\mathcal{E}) = \mu^H(\mathcal{E})$. A framed sheaf $(\mathcal{E}, \theta)$ is said to be {\em $(H, \delta)$-stable}, where $\delta$ is a positive real number, if any subsheaf $\mathcal{S} \subset \mathcal{E}$ with $0 < \operatorname{rk} \mathcal{S} \leq \operatorname{rk} \mathcal{E}$ satisfies one of the following inequalities: \begin{align*} &\mu^H (\mathcal{S}) < \mu^H_\delta(\mathcal{E}) \quad \text{if}\, \mathcal{S} \, \text{is contained in the kernel of the restriction} \ \mathcal{E} \to \mathcal{E}\vert_{D}\,; \\ &\mu^H_\delta (\mathcal{S}) < \mu^H_\delta(\mathcal{E}) \quad \text{otherwise.} \end{align*} By applying the results proved by Huybrechts and Lehn \cite{HL95a, HL95b} in the setting of stable pairs, one can show that the isomorphism classes of $(H, \delta)$-stable framed sheaves $(\mathcal{E}, \theta)$ with fixed Hilbert polynomial $P^{H}(\mathcal{E})$ form a fine moduli space which is a quasi-projective scheme. A fine moduli space for framed sheaves can also be constructed through a different approach.
Instead of imposing (as above) a stability condition on the sheaf that is to be framed, one requires the framing divisor and the framing sheaf to have the {\it goodness} property stated in the following definition. \begin{defin}\label{goodnessdef} \cite[Def.~2.4]{BM} A divisor $D$ on $X$ is a \emph{good framing divisor} if it can be written as the sum of prime divisors $D_i$ with positive integer coefficients $a_i$, i.e.~$D = \sum a_i D_i$, and there exists a big and nef divisor of the form $\sum b_i D_i$, with $b_i \geq 0$. A sheaf $\mathcal{F}_D$ on $D$ is a \emph{good framing sheaf} if it is locally free and there exists a nonnegative real number $C < (\operatorname{rk} \mathcal{F}_D)^{-1} D^2 \cdot H^{n -2}$ such that, for any locally free subsheaf $\mathcal{S}_D \subset \mathcal{F}_D$ of constant positive rank, one has \begin{equation*} \frac{\operatorname{deg} c_1(\mathcal{S}_D)}{\operatorname{rk} \mathcal{S}_D} \leq \frac{\operatorname{deg} c_1(\mathcal{F}_D)}{\operatorname{rk} \mathcal{F}_D} + C\,. \end{equation*} \end{defin} If $D$ is a good framing divisor, and if we fix any vector bundle $\mathcal{G}_{D}$ on $D$, any polarization $H$ and any polynomial $P \in {\mathbb Q}[t]$, then the set $\mathcal M$ of torsion-free sheaves $\mathcal{E}$ such that $\mathcal{E}|_{D}\simeq\mathcal{G}_{D}$ and $P^H(\mathcal{E})= P$ is {\em bounded}, that is to say, there exists a scheme $\mathfrak L$ along with a sheaf ${\mathbf E}$ over $X\times \mathfrak L$ such that, for any $\mathcal{E} \in \mathcal M$, there is a closed point $q\in {\mathfrak L}$ and an isomorphism $\mathcal{E} \simeq {\mathbf E}_{X\times \{q\}}$ \cite[Thm~2.5]{BM}. Let us restrict ourselves to the case $\dim X =2$. 
The fact that the set $\mathcal M$ is bounded implies that the Castelnuovo-Mumford regularity\footnote{Recall that, for a sheaf $\mathcal{E}$ on a polarized surface $(X,H)$, the Castelnuovo-Mumford regularity $\rho^H(\mathcal{E})$ is the minimal integer $m$ such that $h^i (X, \mathcal{E} \otimes {\mathcal O}_X((m-i)H)) = 0$ for all $i >0$.} $\rho^H(\mathcal{E})$ is uniformly bounded over all of $\mathcal M$; therefore, by \cite[Lemma 1.7.9]{HLbook}, there is a constant $\tilde C$ (depending solely on $H$, $\mathcal{F}_D$ and $P$) such that $\mu^H(\mathcal{S}) \leq \mu^H(\mathcal{E}) + \tilde C$ for all $\mathcal{E} \in {\mathcal M}$ and for all nonzero subsheaves $\mathcal{S} \subset \mathcal{E}$. If the good framing divisor $D$ is a big and nef curve, then the divisor $H_N= H + ND$ is ample for any $N >0$: a direct computation shows that there exists a positive integer $N^\ast$ such that the range of positive real numbers $\delta$, for which all the framed sheaves $\mathcal{E} \in \mathcal M$ are $(H_{N^\ast}, \delta)$-stable, is nonempty. Since, as said above, $(H_{N^\ast}, \delta)$-stable framed sheaves constitute fine moduli spaces which are quasi-projective schemes, the following result is proved. \begin{thm}\label{thmBM} \cite[Thm~3.1]{BM} Let $X$ be a smooth projective surface, $D$ a big and nef curve on $X$, and $\mathcal{F}_D$ a good framing sheaf on $D$. Then for any class $\gamma \in H^\bullet(X,\mathbb Q)$, there exists a (possibly empty) quasi-projective scheme ${\mathcal {M}}_X(\gamma)$ that is a fine moduli space of $(D, \mathcal{F}_D)$-framed sheaves on $X$ with Chern character $\gamma$. \end{thm} In the case where the framing divisor $D$ is a smooth and irreducible curve and $D^2 >0$, a semistable vector bundle on $D$ is a good framing sheaf with $C=0$. Hence, Theorem \ref{thmBM} entails the following result. 
\begin{cor} \cite[Cor.~3.3]{BM}\label{corBM} Let $X$ be a smooth projective surface, $D$ a smooth, irreducible, big and nef curve, and $\mathcal{F}_D$ a semistable vector bundle on $D$. For any class $\gamma\in H^\bullet(X, \mathbb {Q})$, there exists a (possibly empty) quasi-projective scheme ${\mathcal {M}}_X(\gamma)$ that is a fine moduli space of $(D, \mathcal{F}_D)$-framed sheaves on $X$ with Chern character $\gamma$. \end{cor} \bigskip\section{Generalities on quivers and quiver varieties}\label{quiversection} In the subsequent sections we shall provide a few explicit realizations of moduli spaces of framed sheaves as quiver varieties. With this end in mind, we recall here some basic facts about quiver representations and quiver varieties (see \cite{Gin} for details). A quiver $\mathcal{Q}$ is a finite oriented graph, given by a set of vertices $I$ and a set of arrows $E$. The path algebra $\mathbb{C}\mathcal{Q}$ is the $\mathbb{C}$-algebra with basis the paths in $\mathcal{Q}$ and with a product given by the composition of paths whenever possible, zero otherwise. Usually one includes among the generators of $\mathbb{C}\mathcal{Q}$ a complete set of orthogonal idempotents $\{e_{i}\}_{i\in I}$: this can be considered a subset of $E$ by regarding $e_{i}$ as a loop of ``length zero'' starting and ending at the $i$-th vertex. A (complex) representation of a quiver $\mathcal{Q}$ is a pair $(V,X)$, where $V= \bigoplus_{i\in I}V_{i}$ is an $I$-graded complex vector space and $X=(X_{a})_{a\in E}$ is a collection of linear maps such that $X_{a}\in\operatorname{Hom}_{\mathbb{C}}(V_i,V_j)$ whenever the arrow $a$ starts at the vertex $i$ and terminates at the vertex $j$. We say that a representation $(V,X)$ is supported by $V$, and denote by $\operatorname{Rep}(\mathcal{Q},V)$ the space of representations of $\mathcal{Q}$ supported by $V$. 
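To fix ideas with the simplest example: for the quiver consisting of a single vertex with a single loop $a$ (the \emph{Jordan quiver}, which will reappear in Section \ref{P2section}), the paths are $e_{1}, a, a^{2},\dots$, so that
\begin{equation*}
\mathbb{C}\mathcal{Q}\simeq\mathbb{C}[a]\,,\qquad \operatorname{Rep}(\mathcal{Q},V)=\operatorname{End}_{\mathbb{C}}(V)\,,
\end{equation*}
and a representation $(V,X_{a})$ amounts precisely to a $\mathbb{C}[a]$-module structure on the vector space $V$.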
Morphisms and direct sums of representations are defined in the obvious way; it can be shown that the abelian category of complex representations of $\mathcal{Q}$ is equivalent to the category of left $\mathbb{C}\mathcal{Q}$-modules. A subrepresentation of a given representation $(V,X)$ is a pair $(S,Y)$, where $S$ is an $I$-graded subspace of $V$ which is preserved by the linear maps $X$, and $Y$ is the restriction of $X$ to $S$. We consider only finite-dimensional representations. If $\dim_{\mathbb{C}} V_{i}=v_{i}$, a representation $(V,X)$ of $\mathcal{Q}$ is said to be $\vec{v}$-dimensional, where $\vec{v}=(v_{i})_{i\in I}\in\mathbb{N}^{I}$. More generally, one can define the representations of a quotient algebra $B=\mathbb{C}\mathcal{Q}/J$, for some ideal $J$ of the path algebra $\mathbb{C}\mathcal{Q}$. In this case it is customary to call the pair $(\mathcal{Q},J)$ a \emph{quiver with relations}. The representations of $B$ are the subset of the representations $(V,X)$ of $\mathcal{Q}$ whose linear maps $X=(X_{a})_{a\in E}$ satisfy the relations given by the elements of $J$. The abelian category of complex representations of $B$ is equivalent to the category of left $B$-modules. We denote by $\operatorname{Rep}(B,\vec{v})$ the space of representations of $B$ supported by a given $\vec{v}$-dimensional vector space $V$. There is a natural action of $G_{\vec{v}}=\prod_i\operatorname{GL}(v_i)$ on $\operatorname{Rep}(B,\vec{v})$ given by change of basis. One would like to consider the space of isomorphism classes of $\vec{v}$-dimensional representations of $B$, but unfortunately in most cases this space is ``badly behaved''. To overcome this drawback, adopting A.~King's strategy \cite{king}, one introduces a notion of (semi)stability depending on the choice of a parameter $\vartheta\in\mathbb{R}^{I}$.
Let us recall that the $\vartheta$-\emph{slope} $\mu_\vartheta(V,X)$ of a nontrivial representation $(V,X)$ of $B$ is the ratio $$\mu_\vartheta(V,X)= \frac{\sum_{i\in I} \vartheta_i v_i}{\sum_{i\in I} v_i}\,.$$ \begin{defin} \label{defstability} A representation $(V,X)$ of $B$ is said to be $\vartheta$-semistable (resp., $\vartheta$-stable) if, for any proper nontrivial subrepresentation $(S,Y)$, one has $\mu_\vartheta(S,Y) \leq \mu_\vartheta(V,X)$ (resp., strict inequality holds). \end{defin} \begin{rem} It may be worth emphasizing that we do not assume $\sum_{i\in I} \vartheta_i v_i = 0$ as in the original paper \cite{king}, following instead Rudakov's approach \cite{Rud97} (which is actually equivalent to King's one; see also \cite[Remark 2.3.3]{Gin}). \end{rem} Let $\operatorname{Rep}(B,\vec{v})^{ss}_\vartheta$ be the subset of $\operatorname{Rep}(B,\vec{v})$ consisting of semistable representations. According to Proposition 5.2 of \cite{king}, the coarse moduli space of $\vec{v}$-dimensional $\vartheta$-semistable representations of $B$ is the GIT quotient $$\mathcal{M}(B,\vec{v})_{\vartheta}=\operatorname{Rep}(B,\vec{v})^{ss}_\vartheta/\!/_{\!\vartheta}G_{\vec{v}}\,.$$ If $\vec{v}$ is a primitive vector, then the open subset $\mathcal{M}^{s}(B,\vec{v})_{\vartheta} \subset \mathcal{M}(B,\vec{v})_{\vartheta}$ consisting of stable representations forms a fine moduli space \cite[Proposition 5.3]{king}. A useful construction in studying representations of quivers is that of \emph{framed quiver}. Given a quiver $\mathcal{Q}$ with vertex set $I$ and arrow set $E$, its framed quiver $\mathcal{Q}^{\mathrm{fr}}$ is defined as the quiver whose vertex set is $I\sqcup I'$, where $I'$ is a copy of $I$ with a fixed bijection $i\to i'$, and whose arrow set $E^{\mathrm{fr}}$ is obtained by adding, for every $i\in I$, new arrows $i \stackrel{d_i}{\longrightarrow} i'$ to $E$.
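An elementary example (a standard toy case, not taken from the references) may help illustrate Definition \ref{defstability}. Take the quiver $1\longrightarrow 2$ with dimension vector $\vec{v}=(1,1)$ and $\vartheta=(1,-1)$, so that $\mu_\vartheta(V,X)=0$. The graded subspace $(V_1,0)$ is a subrepresentation only when $X=0$, while $(0,V_2)$ is always a subrepresentation, of slope $-1$; hence
\begin{equation*}
(V,X)\ \text{is $\vartheta$-stable}\quad\Longleftrightarrow\quad X\neq 0\,,
\end{equation*}
the representation with $X=0$ being destabilized by $(V_1,0)$, whose slope equals $1>0$.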
In view of Remark \ref{hilbertasquiver} and of the results of Section \ref{Section-Nakajima-Flag}, we find it convenient to introduce also the notion of \emph{generalized framed} (GF) quiver. Given a quiver $\mathcal{Q}$ as above, we denote by $\mathcal{Q}^{\mathrm{gfr}}$ \emph{an} associated quiver whose vertex set is $I\sqcup I'$, where $I'$ is a copy of $I$ with a fixed bijection $i\to i'$, and whose arrow set $E^{\mathrm{gfr}}$ is obtained by adding to $E$ new arrows \begin{equation} \xymatrix@R+3em{ i \ar@/_10pt/[d]_-{a_{1}} \ar@/_30pt/[d]_-{a_{2}} \ar@/_45pt/@{.}[d] \ar@/_55pt/@{.}[d] \ar@/_65pt/[d]_-{a_{p(i)}} \\ i' \ar@/_10pt/[u]_-{b_{1}} \ar@/_30pt/[u]_-{b_{2}} \ar@/_45pt/@{.}[u] \ar@/_55pt/@{.}[u] \ar@/_65pt/[u]_-{b_{q(i)}} } \end{equation} with $p(i) > 0$ and $q(i) \geq 0$, for all $i\in I$. Of course, when $p(i)=1$ and $q(i)=0$ for all $i\in I$, one recovers the standard definition of $\mathcal{Q}^{\mathrm{fr}}$. A representation of $\mathcal{Q}^{\mathrm{gfr}}$ is supported by $V\oplus W$, where $V$ and $W$ are finite-dimensional $I$-graded vector spaces. If $\dim_{\mathbb{C}} V_{i}=v_{i}$ and $\dim_{\mathbb{C}} W_{i}=w_{i}$, a representation $(V\oplus W,X)$ of $\mathcal{Q}^{\mathrm{gfr}}$ is said to be $(\vec{v},\vec{w})$-dimensional, where $\vec{v}=(v_{i})_{i\in I}\in\mathbb{N}^{I}$ and $\vec{w}=(w_{i})_{i\in I}\in\mathbb{N}^{I}$. We denote by $\operatorname{Rep}(\mathcal{Q}^{\mathrm{gfr}},\vec{v},\vec{w})$ the space of representations of $\mathcal{Q}^{\mathrm{gfr}}$ supported by a fixed $(\vec{v},\vec{w})$-dimensional vector space $V\oplus W$. In the sequel, we shall assume that $\vec{w} \neq 0$. Analogously to the unframed case, one can define the path algebra $\mathbb{C}\mathcal{K}$ of a GF quiver $\mathcal{K}$, and consider the representations of the quotient algebra $\Lambda=\mathbb{C}\mathcal{K}/L$, for any given ideal $L$ of $\mathbb{C} \mathcal{K}$. Also the notion of subrepresentation is completely analogous to the unframed case. 
However, the representation space $\operatorname{Rep}(\Lambda,\vec{v},\vec{w})$ is regarded just as a $G_{\vec{v}}$-variety, where the group $G_{\vec{v}}$ acts by change of basis on $(V_{i})_{i\in I}$, whilst the action of the group $G_{\vec{w}}$ is ignored. The previous notion of (semi)stability is extended to the representations of a GF quiver by slightly modifying a result due to Crawley-Boevey \cite[p.~261]{CrBo}. Let $\mathcal{Q}$ be a quiver with vertex set $I$, $\mathcal{Q}^{\mathrm{gfr}}$ an associated GF quiver, and $J$ an ideal of the algebra $\mathbb{C}\mathcal{Q}^{\mathrm{gfr}}$. \begin{lemma} \label{lmIsoCB} For all dimension vectors $(\vec{v},\vec{w})\in\mathbb{N}^{I\sqcup I'}$, there exist a quiver $\mathcal{Q}^{\vec{w}}$, with vertex set $I\sqcup\{\infty\}$, and an ideal $J^{\vec{w}}$ of the algebra $\mathbb{C}\mathcal{Q}^{\vec{w}}$ such that there is a $G_{\vec{v}}$-equivariant isomorphism \begin{equation} \operatorname{Rep}\left(\mathbb{C}\mathcal{Q}^{\mathrm{gfr}}/J ,\vec{v},\vec{w}\right)\simeq\operatorname{Rep}\left(\mathbb{C}\mathcal{Q}^{\vec{w}}/J^{\vec{w}}, \vec{v},1\right)\,. \label{eqIsoCB0} \end{equation} \end{lemma} The quiver $\mathcal{Q}^{\vec{w}}$ of Lemma \ref{lmIsoCB} is built by adding to $\mathcal{Q}$ a vertex at $\infty$, and, for any $i\in I$, a number of arrows from the vertex $i$ to the vertex $\infty$ and vice versa equal, respectively, to $w_i\,p(i)$ and $w_i\, q(i)$. Note that a representation of $\mathcal{Q}^{\vec{w}}$ is supported by a vector space $V\oplus V_{\infty}$, where $V$ is an $I$-graded vector space, whilst $V_{\infty}$ is a vector space associated with the vertex $\infty$. Such a representation is said to be $(\vec{v},v_{\infty})$-dimensional, where $v_{\infty}=\dim_{\mathbb{C}}V_{\infty}$. The group $G_{\vec{v}}$ acts on the right-hand side of \eqref{eqIsoCB0} by change of basis on $(V_{i})_{i\in I}$.
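To make the construction of $\mathcal{Q}^{\vec{w}}$ concrete, consider (as a toy case, anticipating the quiver $\overline{\mathcal{L}_1^{\mathrm{fr}}}$ of Section \ref{P2section}) the double of the Jordan quiver, with one vertex and two loops, regarded as a GF quiver with $p(1)=q(1)=1$, and take $\vec{w}=(r)$. Then $\mathcal{Q}^{\vec{w}}$ has vertex set $\{1,\infty\}$, the two loops at the vertex $1$, $r$ arrows $1\to\infty$ and $r$ arrows $\infty\to1$, so that, for $\vec{v}=(c)$,
\begin{equation*}
\operatorname{Rep}\left(\mathcal{Q}^{\vec{w}},c,1\right)\simeq \operatorname{End}(\mathbb{C}^{c})^{\oplus 2}\oplus\operatorname{Hom}(\mathbb{C}^{c},\mathbb{C}^{r})\oplus\operatorname{Hom}(\mathbb{C}^{r},\mathbb{C}^{c})\,,
\end{equation*}
which is precisely the space of ADHM quadruples appearing in Section \ref{P2section}.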
Let $\vec{v}\in\mathbb{N}^{I}$ and $\vartheta\in\mathbb{R}^{I}$ be, respectively, a dimension vector and a stability parameter for a representation of $\mathcal{Q}$. One defines a stability parameter $\widehat{\vartheta}=(\vartheta,\vartheta_{\infty})\in\mathbb{R}^{I\sqcup\{\infty\}}$ for $(\vec{v},1)$-dimensional representations of $\mathcal{Q}^{\vec{w}}$ by setting $\vartheta_{\infty}=-\vartheta\cdot\vec{v}$. \begin{defin} \label{defFrStab} A $(\vec{v},\vec{w})$-dimensional representation of $\mathbb{C}\mathcal{Q}^{\mathrm{gfr}}/J$ is said to be $\vartheta$-semistable (resp., stable) if and only if its image through the isomorphism \eqref{eqIsoCB0} is $\widehat{\vartheta}$-semistable (resp., stable). \end{defin} Given a quiver $\mathcal{Q}$ with vertex set $I$, for any $\vec{v},\vec{w}\in\mathbb{N}^{I}$, $\lambda\in\mathbb{C}^{I}$ and $\vartheta\in\mathbb R^{I}$, one can define the associated \emph{Nakajima quiver variety} $\mathcal{N}_{\lambda,\vartheta}(\mathcal{Q},\vec{v},\vec{w})$ \cite{Na94}. The main steps of the construction of $\mathcal{N}_{\lambda,\vartheta}(\mathcal{Q},\vec{v},\vec{w})$ can be summarized as follows (see \cite{Gin} for further details): \begin{enumerate} \item[(i)] One considers the framed quiver $\mathcal{Q}^{\mathrm{fr}}$ and the associated \emph{double} $\overline{\mathcal{Q}^{\mathrm{fr}}}$. The latter has the same vertex set as $\mathcal{Q}^{\mathrm{fr}}$, and for any arrow $i\stackrel{a}{\longrightarrow}j$ in $E^{\mathrm{fr}}$, with $i,j\in I\sqcup I'$, one adds an \emph{opposite} arrow $j\stackrel{a^{*}}{\longrightarrow}i$. It is easy to see that, for all dimension vectors $(\vec{v},\vec{w})$, there is an isomorphism \begin{equation} \operatorname{Rep}(\overline{\mathcal{Q}^{\mathrm{fr}}},\vec{v},\vec{w})\simeq T^{\vee} \operatorname{Rep}(\mathcal{Q}^{\mathrm{fr}},\vec{v},\vec{w})\,.
\end{equation} As a consequence of that, $\operatorname{Rep}(\overline{\mathcal{Q}^{\mathrm{fr}}},\vec{v},\vec{w})$ carries a canonical holomorphic symplectic form $\tilde{\omega}$: \begin{equation} \tilde{\omega}=\operatorname{tr}\left(\sum_{a\in E}\mbox{\rm d} X_{a}\wedge\mbox{\rm d} X_{a^{*}}+\sum_{i\in I}\mbox{\rm d} X_{d_{i}}\wedge \mbox{\rm d} X_{d_{i}^{*}}\right)\,. \label{eq-tildeomega} \end{equation} \item[(ii)] Let $\mathfrak{g}_{\vec{v}}=\bigoplus_{i\in I}\operatorname{End}_{\mathbb{C}}(\mathbb{C}^{v_{i}})$ be the Lie algebra associated with $G_{\vec{v}}$. The group $G_{\vec{v}}$ acts naturally on $\operatorname{Rep}(\overline{\mathcal{Q}^{\mathrm{fr}}},\vec{v},\vec{w})$ if one regards $\operatorname{GL}(v_i)$ as the group of automorphisms of the vector space associated with the $i$-th vertex (of the original quiver $\mathcal{Q}$). Since this action is symplectic, one can introduce a moment map $\mu\colon \operatorname{Rep}(\overline{\mathcal{Q}^{\mathrm{fr}}},\vec{v},\vec{w})\to\mathfrak{g}_{\vec{v}}^\ast\simeq\mathfrak{g}_{\vec{v}}$, given by \begin{equation} (V\oplus W, X)\mapsto \sum_{a\in E}\left(X_a\circ X_{a^\ast}-X_{a^\ast}\circ X_a\right)+\sum_{i \in I}X_{d_{i}^{*}}\circ X_{d_{i}}\,. \end{equation} This gives rise to a moment element, which we call again $\mu$: \begin{equation} \mu=\sum_{a\in E}[a,a^{*}]+\sum_{i\in I} d_{i}^{*}d_{i}\in\mathbb{C}\overline{\mathcal{Q}^{\mathrm{fr}}}\,. \end{equation} It is easy to see that $\mu$ admits a decomposition $\mu=(\mu_{i})_{i\in I}$, where $\mu_{i}\in e_{i}(\mathbb{C}\overline{\mathcal{Q}^{\mathrm{fr}}})e_{i}$. \item[(iii)] The \emph{framed preprojective algebra $\Pi_{\lambda}(\mathcal{Q})$ of $\mathcal{Q}$ with parameter $\lambda$} is defined as the quotient $\mathbb{C}\overline{\mathcal{Q}^{\mathrm{fr}}}/J$, where $J$ is the ideal of $\mathbb{C}\overline{\mathcal{Q}^{\mathrm{fr}}}$ generated by the elements $\{\mu_i-\lambda_i\}_{i\in I}$. 
The fibre $$\mu^{-1}\left(\sum_{i \in I}\lambda_i \mathbf{1}_{v_i}\right) \subset \operatorname{Rep}(\overline{\mathcal{Q}^{\mathrm{fr}}},\vec{v},\vec{w})$$ is the space of $(\vec{v},\vec{w})$-dimensional representations of $\Pi_{\lambda}(\mathcal{Q})$, which we shall denote by $\operatorname{Rep}(\Pi_{\lambda}(\mathcal{Q}),\vec{v},\vec{w})$. \item[(iv)] The quiver $\overline{\mathcal{Q}^{\mathrm{fr}}}$ can be regarded as a GF quiver associated with the double $\overline{\mathcal{Q}}$ of $\mathcal{Q}$. By applying to the representations of $\Pi_{\lambda}(\mathcal{Q})$ the notion of (semi)stability introduced in Definition \ref{defFrStab}, one introduces the space $\operatorname{Rep}(\Pi_{\lambda}(\mathcal{Q}),\vec{v},\vec{w})_{\vartheta}^{\mathrm{ss}}$ of $\vartheta$-semistable representations and defines the Nakajima quiver variety $\mathcal{N}_{\lambda,\vartheta}(\mathcal{Q},\vec{v},\vec{w})$ as the quotient $$\mathcal{N}_{\lambda,\vartheta}(\mathcal{Q},\vec{v},\vec{w})=\operatorname{Rep}(\Pi_{\lambda}(\mathcal{Q}),\vec{v},\vec{w})_{\vartheta}^{\mathrm{ss}}/\!/_{\!\vartheta}G_{\vec{v}}\,.$$ \end{enumerate} The symplectic form \eqref{eq-tildeomega} is $G_{\vec{v}}$-invariant, so that it induces a $G_{\vec{v}}$-invariant Poisson structure $\{-,-\}\sptilde$ on $\operatorname{Rep}(\Pi_{\lambda}(\mathcal{Q}),\vec{v},\vec{w})_{\vartheta}^{\mathrm{ss}}$. As the quotient $\mathcal{N}_{\lambda,\vartheta}(\mathcal{Q},\vec{v},\vec{w})$ is a Hamiltonian reduction, it inherits a Poisson structure $\{-,-\}$. Theorem~5.2.2.(ii) in \cite{Gin} provides a sufficient condition for the smoothness of $\mathcal{N}_{\lambda,\vartheta}(\mathcal{Q},\vec{v},\vec{w})$ and for the nondegeneracy of $\{-,-\}$. To state this result we need to introduce some notation and a definition.
Given a quiver $\mathcal{Q}$, with vertex set $I$ and arrow set $E$, we define its \emph{adjacency matrix} $A_{\mathcal{Q}}$ and its \emph{Cartan matrix} $C_{\mathcal{Q}}$ as follows: $A_{\mathcal{Q}}=(a_{ij})_{i,j\in I}$, where $a_{ij}=\sharp\{ \text{arrows in $E$ going from $j$ to $i$}\}$, and $C_{\mathcal{Q}}=2\bm{1}_{I}-A_{\overline{\mathcal{Q}}}$. For all dimension vectors $\vec{v}\in\mathbb{N}^{I}$ let $R_{\mathcal{Q}}(\vec{v})$ be the set of vectors \begin{equation} R_{\mathcal{Q}}(\vec{v})=\left\{\vec{u}\in\mathbb{Z}^{I}\setminus\{0\}\left| \begin{gathered} (C_{\mathcal{Q}}\vec{u})\cdot \vec{u}\leq 2\\ 0\leq u_{i}\leq v_{i}\qquad\text{for all $i\in I$} \end{gathered} \right.\right\}\,. \end{equation} Given a vector $\vec{s}\in\mathbb{R}^{I}$, we put $\vec{s}^{\bot}=\left\{\vec{t}\in\mathbb{R}^{I}\left|\vec{t}\cdot\vec{s}=0\right.\right\}$. \begin{defin}\label{defin-v-reg} Given a dimension vector $\vec{v}\in\mathbb{N}^{I}$, a pair of parameters $(\lambda,\vartheta)\in\mathbb{C}^{I}\oplus\mathbb{R}^{I}\simeq \mathbb{R}^{3}\otimes_{\mathbb{R}}\mathbb{R}^{I}$ is said to be \emph{$\vec{v}$-regular} if and only if \begin{equation} (\lambda,\vartheta)\in \left(\mathbb{R}^{3}\otimes_{\mathbb{R}}\mathbb{R}^{I}\right)\setminus \Bigl(\bigcup_{\vec{u}\in R_{\mathcal{Q}}(\vec{v})}\mathbb{R}^{3}\otimes_{\mathbb{R}}\vec{u}^{\bot}\Bigr)\,. \end{equation} \end{defin} \begin{thm}\cite[Theorem~5.2.2.(ii)]{Gin} \label{ginzburgtheorem} Let $\mathcal{Q}$ be a quiver with vertex set $I$, let $\vec{v},\vec{w}\in\mathbb{N}^{I}$ be dimension vectors such that $\vec{w}\neq0$, and let $(\lambda,\vartheta)\in\mathbb{C}^{I}\oplus\mathbb{R}^{I}$ be a $\vec{v}$-regular pair of parameters. Then all $\vartheta$-semistable representations of $\Pi_{\lambda}$ are $\vartheta$-stable, the variety $\mathcal{N}_{\lambda,\vartheta}(\mathcal{Q},\vec{v},\vec{w})$ is smooth and connected of dimension $2\vec{w}\cdot\vec{v}-(C_{\mathcal{Q}}\vec{v})\cdot\vec{v}$, and the Poisson structure $\{-,-\}$ is nondegenerate.
\end{thm} Thus, under the hypotheses of Theorem \ref{ginzburgtheorem}, the symplectic form \eqref{eq-tildeomega} descends to a symplectic form $\omega$ on $\mathcal{N}_{\lambda,\vartheta}(\mathcal{Q},\vec{v},\vec{w})$. It is easy to see that a pair of parameters $(0,\vartheta)$, with $\vartheta\neq0$ and $\vartheta_{i}\geq0$ for all $i\in I$, is $\vec{v}$-regular for all dimension vectors $\vec{v}\in\mathbb{N}^{I}$. \bigskip\bigskip \section{Two examples of moduli spaces of framed sheaves}\label{MFSsection} \subsection{Framed sheaves on $\mathbb{P}^2$} \label{P2section} In the terminology of Section \ref{sectionBM}, we take the line at infinity $\ell_\infty=\{[z_0:z_1:0]\in\mathbb{P}^2\}$ as framing divisor, and the trivial sheaf $\mathcal{O}_{\ell_\infty}^{\oplus r}$ as framing sheaf. So, an $(\ell_\infty, \mathcal{O}_{\ell_\infty}^{\oplus r})$-framed (hereafter, simply \emph{framed}) sheaf on $\mathbb{P}^2$ is a pair $(\mathcal{E},\theta)$, where $\mathcal{E}$ is a torsion-free sheaf of rank $r$ such that $\mathcal{E}|_{\ell_\infty}\simeq\mathcal{O}_{\ell_\infty}^{\oplus r}$, and $\theta\colon\mathcal{E}|_{\ell_\infty}\stackrel{\sim}{\longrightarrow}\mathcal{O}_{\ell_\infty}^{\oplus r}$ is an isomorphism. By Corollary \ref{corBM} we know that for any $\gamma\in H^\bullet(\mathbb{P}^2,\mathbb{Q})$ there exists a fine moduli space $\mathcal{M}_{\mathbb{P}^2}(\gamma)$ for framed sheaves on $\mathbb{P}^2$ with Chern character $\gamma=(\gamma_0,\gamma_1,\gamma_2)$. Note, however, that this space is empty whenever $\gamma_1\neq 0$, since the existence of the framing implies the vanishing of the first Chern class. We denote by $\mathcal{M}(r,c)$ the space $\mathcal{M}_{\mathbb{P}^2}(\gamma)$ with $\gamma=(r,0,c)$. The following result is due to Nakajima \cite{Na99} and provides an ADHM description for this moduli space (generalising a previous result by Donaldson for framed vector bundles on $\mathbb{P}^2$ \cite[Prop.~1]{Don84}). 
\begin{thm}\cite[Thm~2.1]{Na99}\label{thm:Naka} There is an isomorphism of algebraic varieties between $\mathcal{M}(r,c)$ and $N(r,c)/\operatorname{GL}(c,\mathbb{C})$, where $N(r,c)$ is the quasi-affine variety of quadruples \[ (B_1,B_2,i,j)\in\operatorname{End}(\mathbb{C}^c)^{\oplus 2}\oplus\operatorname{Hom}(\mathbb{C}^r,\mathbb{C}^c)\oplus\operatorname{Hom}(\mathbb{C}^c,\mathbb{C}^r) \] satisfying the conditions \begin{itemize} \item[(i)] $[B_1,B_2]+ij=0$; \item[(ii)] there exists no proper subspace $S\subsetneq \mathbb{C}^c$ such that $B_\alpha(S)\subseteq S$ ($\alpha=1,2$) and $\operatorname{Im} i\subseteq S$, \end{itemize} and the group $\operatorname{GL}(c,\mathbb{C})$ acts on $N(r,c)$ by means of the formula \[ g\cdot(B_1,B_2,i,j)=(gB_1g^{-1},gB_2g^{-1},gi,jg^{-1})\,. \] \end{thm} \begin{rem} Theorem \ref{thm:Naka} allows us to interpret the space $\mathcal{M}(r,c)$ as the Nakajima quiver variety $\mathcal{N}_{0,-1}(\mathcal{L}_{1},c,r)$, where $\mathcal{L}_{1}$ is the so-called \emph{Jordan quiver}, having one vertex and one loop at this vertex. In other words, $\mathcal{M}(r,c)$ can also be viewed as the moduli space of $(-1)$-semistable representations of the framed preprojective algebra $\Pi_{0}(\mathcal{L}_1)$; note that $\overline{\mathcal{L}_1^{\mathrm{fr}}}$ is the quiver \[ \xymatrix{ \mbox{\scriptsize$1$}\\ \bullet\ar@(ul,dl)_{B_1}\ar@(ur,dr)^{B_2}\ar@/^2ex/[dddd]^{j}\\ \\ \\ \\ \bullet\ar@/^2ex/[uuuu]^{i}\\ \mbox{\scriptsize\phantom{'}$1'$} } \] Indeed, it is easy to recognize the moment map equation in condition (i) of Theorem \ref{thm:Naka}, while condition (ii) ensures precisely that the representations we are considering are $(-1)$-semistable.
\end{rem} The proof of Theorem \ref{thm:Naka} relies on the fact that any torsion-free sheaf $\mathcal{E}$ on $\mathbb{P}^2$ which is trivial along $\ell_\infty$ and has Chern character $\operatorname{ch}(\mathcal{E}) = (r, 0, c)$ is isomorphic to the cohomology of the monad \begin{equation}\label{monadP2} M(a,b) :\qquad \xymatrix{ 0\ar[r]&V\otimes\mathcal{O}_{\mathbb{P}^2}(-1)\ar[r]^-a&\widetilde{W}\otimes\mathcal{O}_{\mathbb{P}^2}\ar[r]^-b&V'\otimes\mathcal{O}_{\mathbb{P}^2}(1)\ar[r]&0\,, } \end{equation} where $V$, $\widetilde{W}$, and $V'$ are complex vector spaces of dimension, respectively, $c$, $2c+r$, and $c$. So, one has $\mathcal{E}\simeq\ker b/\operatorname{Im} a$. Applying Theorem \ref{thm:Naka} to the rank $1$ case, one gets ADHM data for the Hilbert scheme of points of $\mathbb{C}^2$. Indeed, the double dual $\mathcal{E}^{\ast\ast}$ of $\mathcal{E}$ is locally free and has vanishing first Chern class, so that it is isomorphic to the structure sheaf $\mathcal{O}_{\mathbb{P}^2}$. As a consequence, since $\mathcal{E}$ is trivial along $\ell_{\infty}$, the mapping carrying $\mathcal{E}$ to the schematic support of $\mathcal{O}_{\mathbb{P}^2}/ \mathcal{E}$ yields an isomorphism \begin{equation} \mathcal{M}(1,c) \simeq \operatorname{Hilb}^c (\mathbb{P}^2 \setminus \ell_{\infty}) = \operatorname{Hilb}^c (\mathbb{C}^2)\,. \end{equation} In this particular case the stability condition implies that $j=0$ \cite[Prop.~2.8 (1)]{Na99}, so that the description of $\operatorname{Hilb}^c(\mathbb{C}^2)$ can be given in terms of triples $(B_1,B_2,i)$, and condition (i) of Theorem \ref{thm:Naka} can be rephrased by saying that $B_1$ and $B_2$ are commuting matrices. \begin{rem} Let $D$ be the divisor $\{\infty\}\times \mathbb{P}^1 \cup \mathbb{P}^1\times \{ \infty\}$ on the surface $\mathbb{P}^1 \times \mathbb{P}^1$. 
The moduli space of $(D, \mathcal{O}_D)$-framed sheaves on $\mathbb{P}^1 \times \mathbb{P}^1$ of rank $r$ and second Chern class $c$ is isomorphic to $\mathcal{M}(r,c)$. So, there is an action of the group $\Gamma = \mathbb{Z} / n\mathbb{Z}$ on $\mathcal{M}(r,c)$ which is induced by the action of $\Gamma$ on $\mathbb{P}^1 \times \mathbb{P}^1$ given by multiplying the second coordinate of the second factor $\mathbb{P}^1$ by the $n$-th roots of unity. It can be proved \cite{Bis, FR} that a connected component of $\mathcal{M}^\Gamma(r,c)$, the $\Gamma$-equivariant locus inside $\mathcal{M}(r,c)$, is isomorphic to the moduli space ${\mathcal P}_{\underline{c}}$ of parabolic bundles on $\mathbb{P}^1 \times \mathbb{P}^1$, where $\underline{c} = (c_0, \dots , c_{n-1})$ is a partition of $c$. Starting from the quiver construction of $\mathcal{M}(r,c)$, Finkelberg and Rybnikov \cite{FR} provided a description of ${\mathcal P}_{\underline{c}}$ as a quiver variety (the quiver in question is a chainsaw quiver). Takayama \cite{Tak} exploited this description to establish an isomorphism between moduli spaces of solutions to Nahm's equations over the circle and moduli spaces of locally free parabolic sheaves over $\mathbb{P}^1 \times \mathbb{P}^1$. \end{rem} \subsection{Framed sheaves on multi-blow-ups of the projective plane} We denote by $\widetilde{\mathbb{P}}^2$ the complex projective plane blown-up at $n$ distinct points $p_1,\dots,p_n\notin \ell_\infty$; let $\varpi\colon \widetilde{\mathbb{P}}^2 \longrightarrow \mathbb{P}^2$ be the canonical projection. The Picard group of $\widetilde{\mathbb{P}}^2$ is freely generated over $\mathbb{Z}$ by the class $H$ of the pullback of a generic line in $\mathbb{P}^2$ and by the classes $\{E_i\}_{i=1}^n$, $E_i$ being the exceptional divisor corresponding to the blow-up at $p_i$.
Analogously to the case of $\mathbb{P}^2$, we take $\tilde{\ell}_\infty =\varpi^{-1}(\ell_\infty)$ as framing divisor and the trivial sheaf $\mathcal{O}_{\tilde{\ell}_\infty}^{\oplus r}$ as framing sheaf. Corollary \ref{corBM} ensures that there exists a fine moduli space $\mathcal{M}_{\widetilde{\mathbb{P}}^2}(\gamma)$ for framed sheaves $(\mathcal{E}, \theta)$ on $\widetilde{\mathbb{P}}^2$ with Chern character $\gamma=(r ,\gamma_1, -c +\frac{1}{2}\gamma_1^2) \in H^\bullet(\widetilde{\mathbb{P}}^2,\mathbb{Q})$. Notice that the first Chern class of every torsion-free sheaf which is trivial on $\tilde{\ell}_\infty$ has no component along $H$; hence, $\mathcal{M}_{\widetilde{\mathbb{P}}^2}(\gamma)$ is empty whenever $\gamma_1\cdot H \neq 0$. When $\gamma_1 = \sum_{i=1}^na_iE_i$, the space $\mathcal{M}_{\widetilde{\mathbb{P}}^2}(\gamma)$ will be denoted by $\widetilde{\mathcal{M}}(r;a_1,\dots,a_n;c)$. An explicit description of such a space was given by Henni in \cite{He}. The particular result for locally free sheaves had been proved earlier, first by King \cite[Thm~3.3.2]{Ki89} in the case $n=1$, $c_1=0$, and then extended by Buchdahl \cite[Prop.~1.10]{Bu04} to general values of $n$ and $c_1$.
\begin{thm} \cite[Prop.~2.20]{He} A torsion-free sheaf $\mathcal{E}$ on $\widetilde{\mathbb{P}}^2$ which is trivial along $\tilde{\ell}_{\infty}$ and has Chern character $\operatorname{ch}(\mathcal{E}) = (r, \sum_{i=1}^na_iE_i, -c -\frac{1}{2} \sum_{i=1}^na_i^2)$ is isomorphic to the cohomology of a monad \begin{equation} \xymatrix{ 0 \ar[r] & \bigoplus_{s=0}^nK_s\otimes\mathcal{O}_{\widetilde{\mathbb{P}}^2}(-H+E_s) \ar[r]^-{\alpha} & W\otimes\mathcal{O}_{\widetilde{\mathbb{P}}^2} \ar[r]^-{\beta} & \bigoplus_{s=0}^nL_s\otimes\mathcal{O}_{\widetilde{\mathbb{P}}^2}(H-E_s) \ar[r] & 0 }\,, \end{equation} where $E_0:=0$ and $\{K_s\}_{s=0}^n$, $W$, $\{L_s\}_{s=0}^n$ are complex vector spaces with \begin{align*} \dim K_0&=c+\frac{1}{2}\sum_{i=1}^na_i(a_i+1)=:k\ ,\quad\dim L_0=c+\frac{1}{2}\sum_{i=1}^na_i(a_i-1)=:l\,,\\ \dim W&= 2(n+1)k-2\sum_{i=1}^na_i+r\ ,\quad\dim K_s=k-a_s \ ,\quad \dim L_s=k\quad \text{($s=1,\dots,n$)}\,. \end{align*} \end{thm} Before providing the ADHM description for $\widetilde{\mathcal{M}}(r;a_1,\dots,a_n;c)$, we notice that, since $p_1,\dots,p_n\notin\ell_\infty=\{[z_0:z_1:0]\in\mathbb{P}^2\}$, they all belong to the standard affine chart $U_2=\{z_2\neq 0\}$; we denote by $(p_i^0,p_i^1)$ the affine coordinates of $p_i$ inside $U_2$. 
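As a quick consistency check on these dimension formulas (our computation, not taken from \cite{He}): setting $n=0$ formally, i.e.~blowing up no points, recovers the dimensions of the monad \eqref{monadP2} on $\mathbb{P}^2$.

```latex
% With n = 0 all sums over the blow-up points are empty, so
\begin{align*}
k &= c + \tfrac{1}{2}\textstyle\sum_{i=1}^{0} a_i(a_i+1) = c\,,\qquad
l  = c + \tfrac{1}{2}\textstyle\sum_{i=1}^{0} a_i(a_i-1) = c\,,\\
\dim W &= 2(0+1)\,k - 2\textstyle\sum_{i=1}^{0} a_i + r = 2c + r\,,
\end{align*}
% matching the vector spaces V, \widetilde{W}, V' of dimensions c, 2c+r, c
% appearing in the monad M(a,b) on the projective plane.
```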
A space of ADHM data for $\widetilde{\mathcal{M}}(r;a_1,\dots,a_n;c)$ is given by the quasi-affine variety $H(r;a_1,\dots,a_n;c)$ in \[ \operatorname{Hom}(\mathbb{C}^k,\mathbb{C}^l)^{\oplus 3}\oplus\bigoplus_{s=1}^n\operatorname{Hom}(\mathbb{C}^{k-a_s},\mathbb{C}^k)\oplus\bigoplus_{s=1}^n\operatorname{Hom}(\mathbb{C}^{k-a_s},\mathbb{C}^l)\oplus\operatorname{Hom}(\mathbb{C}^k,\mathbb{C}^r)\oplus\operatorname{Hom}(\mathbb{C}^r,\mathbb{C}^l) \] characterized by the following conditions:\\ for any point $(A,C_0,C_1;B_1,\dots,B_n;B_1',\dots,B_n';e;f) \in H(r;a_1,\dots,a_n;c)$,\\ (i) the $(l+nk)\times(l+nk)$ matrix \[ M:= \begin{pmatrix} A & B_1'&B_2'&\cdots&B_n'\\ \mathbf{1}_{k}&B_1&0&\cdots&0\\ \mathbf{1}_{k}&0&B_2&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \mathbf{1}_{k}&0&0&\cdots&B_n \end{pmatrix} \] is invertible; \noindent (ii) the $(l+nk)\times(l+nk)$ matrices \[ Q_j:= \begin{pmatrix} -C_j & p_1^jB_1'&p_2^jB_2'&\cdots&p_n^jB_n'\\ p_1^j\mathbf{1}_{k}&p_1^jB_1&0&\cdots&0\\ p_2^j\mathbf{1}_{k}&0&p_2^jB_2&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ p^j_n\mathbf{1}_{k}&0&0&\cdots&p_n^jB_n \end{pmatrix}\qquad \text{for $j=0,1$}\,, \] satisfy the equation \[ [Q_0M^{-1}Q_1-Q_1M^{-1}Q_0]_l^k+fe=0\,, \] where the notation $[\,\star\,]_l^k$ denotes the block of the matrix $\star$ formed by the first $l$ rows and $k$ columns. 
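Conditions (i) and (ii) are purely matrix-theoretic, so they can be illustrated numerically. The sketch below (ours, not part of \cite{He}) assembles $M$, $Q_0$, $Q_1$ for the toy case $n=1$, $r=1$, $a_1=0$, $c=1$ with blow-up point $p_1=(0,0)$, so that $k=l=1$ and every block is a $1\times1$ matrix; the sample entries are arbitrary, and nothing about the geometry of the corresponding sheaf is checked.

```python
import numpy as np

# Toy data for n = 1, r = 1, a_1 = 0, c = 1, so k = l = 1; sample values only.
k = l = 1
A   = np.array([[2.0]])            # A    in Hom(C^k, C^l)
C0  = np.array([[3.0]])            # C_0  in Hom(C^k, C^l)
C1  = np.array([[5.0]])            # C_1  in Hom(C^k, C^l)
B1  = np.array([[1.0]])            # B_1  in Hom(C^{k-a_1}, C^k)
B1p = np.array([[4.0]])            # B_1' in Hom(C^{k-a_1}, C^l)
e   = np.array([[1.0]])            # e    in Hom(C^k, C^r)
f   = np.array([[0.0]])            # f    in Hom(C^r, C^l)
p1  = (0.0, 0.0)                   # affine coordinates (p_1^0, p_1^1)

# Condition (i): the matrix M must be invertible.
M = np.block([[A, B1p], [np.eye(k), B1]])
assert abs(np.linalg.det(M)) > 1e-12

# Condition (ii): the upper-left l x k block of
# Q_0 M^{-1} Q_1 - Q_1 M^{-1} Q_0 must cancel the product f e.
def Qmat(Cj, pj):
    return np.block([[-Cj, pj * B1p], [pj * np.eye(k), pj * B1]])

Q0, Q1 = Qmat(C0, p1[0]), Qmat(C1, p1[1])
Minv = np.linalg.inv(M)
residual = (Q0 @ Minv @ Q1 - Q1 @ Minv @ Q0)[:l, :k] + f @ e
assert np.allclose(residual, 0.0)
```

With $p_1=(0,0)$ both $Q_j$ are concentrated in the upper-left block, so the bracket vanishes identically and the equation forces $fe=0$, which the sample values satisfy.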
Let $G$ be the subgroup of $\operatorname{GL}(l+nk, \mathbb{C})\times\operatorname{GL}(l+nk,\mathbb{C})$ whose elements $(h,g)$ are of the following form \begin{equation}\label{group-form} h=\operatorname{diag}(h_{0},h_{1},\cdots,h_{n}),\quad\quad g=\left(\begin{array}{lllll}g_{0}&g_{1}&g_{2}&\cdots&g_{n}\\ 0&h^{-1}_{0}&0&\cdots&0\\ 0&0&h^{-1}_{0}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&h^{-1}_{0}\\ \end{array}\right)\,, \end{equation} where $h_{0}\in\operatorname{GL}(k,\mathbb{C})$, $g_{0}\in\operatorname{GL}(l,\mathbb{C})$, $h_{s}\in\operatorname{GL}(k-a_{s}, \mathbb{C})$ and $g_{s}\in\operatorname{Mat}_\mathbb{C}(l\times k)$ for $s=1,\dots,n$. We define a $G$-action on $H(r;a_1,\dots,a_n;c)$ indirectly, by means of the formulas \begin{equation} \left\{ \begin{array}{rcl} M & \to & gMh\\ Q_{j} & \to & gQ_{j}h\quad\text{for}\quad j=0,1\\ e & \to & eh_{0}\\ f & \to & g_{0}f \end{array} \right.\qquad\qquad (h,g)\in G\,. \end{equation} Finally, we get the following result (cf.~also \cite[Thm~3.4.1]{Ki89} and \cite[\S 3]{Bu04}). \begin{thm}\cite[Thm~6.1]{He} The variety $H(r;a_1,\dots,a_n;c)$ is a locally trivial principal $G$-bundle over $\widetilde{\mathcal{M}}(r;a_1,\dots,a_n;c)$; in particular, there is an isomorphism of smooth algebraic varieties $\widetilde{\mathcal{M}}(r;a_1,\dots,a_n;c) \simeq H(r;a_1,\dots,a_n;c)/G\,.$ \end{thm} \begin{rem}\label{remarkADHM} Contrary to the case of $\mathbb{P}^2$, these ADHM data cannot be exploited, as they stand, to interpret the space $\widetilde{\mathcal{M}}(r;a_1,\dots,a_n;c)$, for general values of the invariants, as a quiver variety. The main difficulty in this respect is that the group $G$ cannot be regarded, in general, as the group of isomorphisms of representations of any quiver (cf.~\cite[proof of Lemma 6.7]{He}). 
Nevertheless, in the case $n=1$ this difficulty can be overcome: indeed, as shown by Henni \cite[\S 2.2.1]{HePhD}, the $G$-principal bundle $H(r;a_{1};c)\longrightarrow \widetilde{\mathcal{M}}(r;a_1;c)$ admits a reduction to a $\operatorname{GL}(k,\mathbb{C})\times\operatorname{GL}(l,\mathbb{C})$-principal bundle. It is likely that the space $\widetilde{\mathcal{M}}(r;a_1;c)$ can be embedded into a quiver variety: a hint in this direction comes from the case of rank $1$ sheaves. In fact there is an isomorphism $\widetilde{\mathcal{M}}(r;a_1;c)\simeq \mathcal{M}^{1}(r,a_{1},c)$, where the latter is the moduli space of framed sheaves on the first Hirzebruch surface (see Section \ref{SecMon}). When $r=1$, we know that $\widetilde{\mathcal{M}}(1;a_1;c)\simeq \widetilde{\mathcal{M}}(1;0;c)$ is isomorphic to a connected component of a quiver variety \cite[Theorem 4.1]{BBLR}. \end{rem} \bigskip\section{Framed sheaves on Hirzebruch surfaces}\label{SecMon} The $n$-th Hirzebruch surface $\Sigma_n$ can be defined as the projective closure of the total space of the line bundle $\mathcal{O}_{\mathbb{P}^1}(-n)$. There is a natural ruling $\Sigma_n\longrightarrow\mathbb{P}^1$ whose fibre determines a class $F \in \operatorname{Pic}(\Sigma_n)$. Let $H$ and $E$ be the classes of sections squaring, respectively, to $n$ and $-n$: one can prove that $\operatorname{Pic}(\Sigma_n)$ is freely generated over $\mathbb{Z}$ by $H$ and $F$; we put $\Ol_{\Sigma_n}(p,q) = \Ol_{\Sigma_n}(pH+qF)$. It should also be recalled that $\Sigma_n$ is a Poisson surface \cite[Remark 2.5]{BaMa}. In what follows we assume $n>0$, in view of the fact that we wish to choose as framing divisor a curve in the class $H$. The class $H$ is big if and only if $n>0$ (for one has $H^{2}=n$), so that if $n=0$ a curve in $H$ does not satisfy the hypotheses of Corollary \ref{corBM}. Let $\ell_{\infty}\simeq\mathbb{P}^1$ be a ``line at infinity'' which belongs to the class $H$ and does not intersect $E$.
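The intersection numbers implicitly used here ($H^2=n$, $H\cdot F=1$, $F^2=0$, and $E=H-nF$, whence $E^2=-n$ and $H\cdot E=0$, consistently with $\ell_\infty$ being disjoint from $E$) are easy to verify; as a small illustration, one can encode classes as coordinate vectors in the basis $(H,F)$ and pair them with the intersection matrix. The encoding is our own convention, chosen only for this check.

```python
import numpy as np

# Intersection form of the Hirzebruch surface Sigma_n in the basis (H, F):
# H.H = n, H.F = 1, F.F = 0.  A class aH + bF is encoded as the vector (a, b).
def dot(x, y, n):
    Q = np.array([[n, 1], [1, 0]])
    return x @ Q @ y

n = 3                       # sample value
H = np.array([1, 0])
F = np.array([0, 1])
E = H - n * F               # the section of square -n

assert dot(E, E, n) == -n   # E^2 = -n
assert dot(H, E, n) == 0    # H and E are disjoint, as for l_infty
assert dot(H, H, n) == n    # H^2 = n, so H is big iff n > 0
```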
An $(\ell_{\infty}, \mathcal{O}_{\ell_\infty}^{\oplus r})$-framed sheaf (or, for brevity's sake, a {\em framed sheaf}) on $\Sigma_n$ is a pair $(\mathcal{E}, \theta)$, where $\mathcal{E}$ is a rank $r$ torsion-free sheaf trivial along $\ell_{\infty}$ and $\theta \colon \mathcal{E}\vert_{\ell_{\infty}}\stackrel{\sim}{\longrightarrow}\mathcal{O}_{\ell_\infty}^{\oplus r}$ is an isomorphism. Notice that the condition of being trivial at infinity implies $c_{1}(\mathcal{E})\propto E$. By Corollary \ref{corBM}, there exists a fine moduli space $\mathcal{M}^{n}(r,a,c)= \mathcal{M}_{\Sigma_n}(\gamma)$ parameterizing isomorphism classes of framed sheaves $(\mathcal{E}, \theta)$ on $\Sigma_n$ with Chern character $\textrm{ch}(\mathcal{E}) =\gamma = (r, aE, -c - \frac{1}{2} na^2)$. We assume that the framed sheaves are normalized in such a way that $0\leq a\leq r-1$. \begin{thm} \cite[Cor.~4.6]{BBR}\label{thmEMon} A torsion-free sheaf $\mathcal{E}$ on $\Sigma_n$ which is trivial along $\ell_{\infty}$ and such that $\textrm{ch}(\mathcal{E}) = (r, aE, -c -\frac{1}{2} na^2)$ is isomorphic to the cohomology of a monad \begin{equation} \xymatrix{ M(\alpha,\beta):&0 \ar[r] & \U_{\vec{k}} \ar[r]^-{\alpha} & \V_{\vec{k}} \ar[r]^-{\beta} & \W_{\vec{k}} \ar[r] & 0 }\,, \label{fundamentalmonad} \end{equation} where $\vec{k} = (n,r,a,c)$ and \begin{equation} \U_{\vec{k}}:=\Ol_{\Sigma_n}(0,-1)^{\oplus k_1},\quad \V_{\vec{k}}:=\Ol_{\Sigma_n}(1,-1)^{\oplus k_2} \oplus \Ol_{\Sigma_n}^{\oplus k_4},\quad \W_{\vec{k}}:=\Ol_{\Sigma_n}(1,0)^{\oplus k_3}\,, \end{equation} with \begin{equation} k_1=c+\dfrac{1}{2}na(a-1),\quad k_2=k_1+na,\quad k_3=k_1+(n-1)a,\quad k_4=k_1+r-a\,. 
\label{k_i} \end{equation} \end{thm} The set $L_{\vec{k}}$ of pairs in $\operatorname{Hom}(\U_{\vec{k}},\V_{\vec{k}})\oplus\operatorname{Hom}(\V_{\vec{k}},\W_{\vec{k}})$ fitting into the complex \eqref{fundamentalmonad} and such that the cohomology of the complex is torsion-free and trivial at infinity is a smooth algebraic variety. We now wish to parameterize isomorphism classes of framed sheaves $(\mathcal{E}, \theta)$, with $\mathcal{E}$ isomorphic to the cohomology of \eqref{fundamentalmonad}. To this aim, we first introduce a principal $\operatorname{GL}(r,\mathbb{C})$-bundle $P_{\vec{k}}$ over $L_{\vec{k}}$, whose fibre at a point $(\alpha,\beta)$ is identified with the space of framings for $\mathcal{E}$. Next we take the action on $P_{\vec{k}}$ of the algebraic group $G_{\vec{k}}=\operatorname{Aut}(\U_{\vec{k}})\times\operatorname{Aut}(\V_{\vec{k}})\times\operatorname{Aut}(\W_{\vec{k}})$. The action is free, and the quotient $P_{\vec{k}}/G_{\vec{k}}$ is a smooth algebraic variety \cite[Thm~5.1]{BBR}. This variety can be identified with the moduli space $\mathcal{M}^{n}(r,a,c)$ by constructing a universal family. One of the advantages of the monadic description is the possibility of obtaining a necessary and sufficient condition for the nonemptiness of $\mathcal{M}^{n}(r,a,c)$. \begin{thm}\cite[Thm~3.4]{BBR}\label{mainthm} The moduli space $\mathcal{M}^{n}(r,a,c)$ is nonempty if and only if \begin{equation}\label{NEC} c + \frac{1}{2} na(a-1) \geq 0\,. \end{equation} Whenever this condition is satisfied, it is a smooth, irreducible algebraic variety of dimension $2rc + (r-1) na^2$. \end{thm} \subsection{Hilbert schemes of points of $\operatorname{Tot}\mathcal{O}_{\mathbb{P}^1}(-n)$} The rank 1 case is especially important.
Indeed, from the assumption $r=1$ it follows that $a=0$ (in our normalization), so that one can argue as in the case of $\mathbb{P}^2$ (see Subsection \ref{P2section}), and conclude that \begin{equation} \mathcal{M}^{n}(1,0,c) \simeq \operatorname{Hilb}^c (\Sigma_n \setminus \ell_{\infty}) = \operatorname{Hilb}^c (\operatorname{Tot}(\mathcal{O}_{\mathbb{P}^1}(-n)))\,. \end{equation} Moreover, in this case, the monadic representation gives rise to a genuine ADHM description of the Hilbert scheme of points $\operatorname{Hilb}^c (\operatorname{Tot}(\mathcal{O}_{\mathbb{P}^1}(-n)))$, given as follows. We denote by $P^{n}(c)$ the subset of the vector space $\operatorname{End}(\mathbb{C}^{c})^{\oplus n+2}\oplus\operatorname{Hom}(\mathbb{C}^{c},\mathbb{C})$ whose points $\left(A_1,A_2;C_1,\dots,C_{n};e\right)$ satisfy the following conditions: \begin{enumerate} \item[(P1)] \begin{equation*} \begin{cases} A_1C_1A_{2}=A_2C_{1}A_{1}&\qquad\text{when $n=1$}\\[15pt] \begin{aligned} A_1C_q&=A_2C_{q+1}\\ C_qA_1&=C_{q+1}A_2 \end{aligned} \qquad\text{for}\quad q=1,\dots,n-1&\qquad\text{when $n>1$;} \end{cases} \end{equation*} \smallskip \item[(P2)] $A_1+\lambda A_2$ is a \emph{regular pencil} of matrices; equivalently, there exists $[\nu_{1},\nu_{2}]\in\mathbb{P}^1$ such that $\det(\nu_1A_1+\nu_2A_2)\neq0$;\smallskip \item[(P3)] for all values of the parameters $\left([\lambda_1,\lambda_2],(\mu_1,\mu_{2})\right)\in\mathbb{P}^1\times\mathbb{C}^{2}$ such that \begin{equation*} \lambda_{1}^{n}\mu_{1}+\lambda_{2}^{n}\mu_{2}=0 \end{equation*} there is no nonzero vector $v\in\mathbb{C}^c$ such that \begin{equation*} \left\{ \begin{array}{l} C_{1}A_{2}v=-\mu_1v\\ C_{n}A_{1}v=(-1)^n\mu_2v\\ v\in\ker e \end{array}\right. \qquad\text{and}\qquad\left(\lambda_2{A_1}+\lambda_1{A_2}\right)v=0\,.
\end{equation*}\smallskip \end{enumerate} The space $P^n(c)$ is a space of ADHM data for our Hilbert schemes; indeed, one has: \begin{thm} \cite[Thm~3.1]{BBLR}\label{3..1BBLR} There is an isomorphism of smooth algebraic varieties between $\mathcal{M}^n(1,0,c)$ and $P^n(c)/(\operatorname{GL}(c,\mathbb{C})\times\operatorname{GL}(c,\mathbb{C}))$, where the group $\operatorname{GL}(c,\mathbb{C})\times\operatorname{GL}(c,\mathbb{C})$ acts on $P^n(c)$ by means of the formula \[ (\phi_1,\phi_2)\cdot(A_i,C_j,e)=(\phi_2A_i\phi_1^{-1},\, \phi_1C_j\phi_2^{-1},\, e\phi_1^{-1})\,. \] \end{thm} Notice that, as shown in \cite{BBLR}, there is an open cover of $\operatorname{Hilb}^c(\operatorname{Tot}\mathcal{O}_{\mathbb{P}^1}(-n))$, whose elements are all isomorphic to $\operatorname{Hilb}^c(\mathbb{C}^2)$: as a matter of fact, the restriction of our ADHM data to these open sets coincides with Nakajima's ADHM data for $\operatorname{Hilb}^c(\mathbb{C}^2)$. \begin{rem}\label{hilbertasquiver} By relying on the ADHM description given in Theorem \ref{3..1BBLR}, it is possible to prove that the spaces $\operatorname{Hilb}^c(\operatorname{Tot}\mathcal{O}_{\mathbb{P}^1}(-n))$ can be embedded, as irreducible connected components, into moduli spaces of semistable representations of suitable quotients of the path algebras of the GF quivers \begin{equation} \label{eqQnfr} \xymatrix{ &\mbox{\scriptsize$0$}&&\mbox{\scriptsize$1$}\\ &\bullet\ar@/_1ex/[ldd]_{j}\ar@/^/[rr]^{a_{1}}\ar@/^4ex/[rr]^{a_{2}}&&\bullet\ar@/^10pt/[ll]^{c_{1}} \\ \\ \mbox{\scriptsize$0'$}\ \ \bullet\ \ &&&\\ &&\\ &&\mbox{\framebox[1cm]{\begin{minipage}{1cm}\centering $n=1$\end{minipage}}} } \qquad\qquad \xymatrix{ &\mbox{\scriptsize$0$}&&\mbox{\scriptsize$1$}\\ &\bullet\ar@/_3ex/[ldddd]_{j}\ar@/^/[rr]^{a_{1}}\ar@/^4ex/[rr]^{a_{2}}&&\bullet\ar@/^/[ll]^{c_{1}} \ar@/^4ex/[ll]^{c_2} \\ \\ \\&&&&& \\ \mbox{\scriptsize$0'$}\ \ \bullet\ \ar@/_12pt/[ruuuu]_{i_1}&&&&& \\ &&&
\mbox{\framebox[1cm]{\begin{minipage}{1cm}\centering $n=2$\end{minipage}}} } \end{equation} \begin{equation} \xymatrix{ &&\mbox{\scriptsize$0$}&&\mbox{\scriptsize$1$}\\ &&\bullet\ar@/_3ex/[llddddddd]_{j}\ar@/^/[rr]^{a_{1}}\ar@/^4ex/[rr]^{a_{2}}&&\bullet\ar@/^/[ll]^{c_{1}} \ar@/^4ex/[ll]^{c_2}\ar@/^7ex/[ll]^{\ell_1}\ar@{..}@/^10ex/[ll]\ar@{..}@/^11ex/[ll] \ar@/^12ex/[ll]^{\ell_{n-2}}& \\ \\ \\ \\ \\&&&&&\mbox{\framebox[1cm]{\begin{minipage}{1cm}\centering $n\geq3$\end{minipage}}} \\ \\ \bullet\ar@/_/[rruuuuuuu]^{i_1}\ar@/_3ex/[rruuuuuuu]_{i_2}\ar@{..}@/_6ex/[rruuuuuuu]\ar@{..}@/_8ex/[rruuuuuuu]\ar@/_10ex/[rruuuuuuu]_{i_{n-1}}&&&&&\\ \mbox{\scriptsize\phantom{'}$0'$}&&&& } \end{equation} For precise definitions and statements we refer to \cite[\S 4]{BBLR} (see in particular Theorem 4.4, {\em loc.~cit.}). Except for the case $n=2$, the Hilbert scheme $\operatorname{Hilb}^c(\operatorname{Tot}\mathcal{O}_{\mathbb{P}^1}(-n))$ is not a holomorphic symplectic variety. However, it carries a natural Poisson structure induced by the Poisson bivector defined on the Hirzebruch surface $\Sigma_n$, as follows from Bottacin's general results \cite{Bot}. It seems to be an interesting and challenging problem to characterise this Poisson structure in purely quiver-theoretic terms, possibly by resorting to the noncommutative notion of ``double Poisson bracket'' on the path algebra of a quiver (see \cite{VDB, CrBo, BIE}). \end{rem} \bigskip \section{The minimal case}\label{sectionminimalcase} In this section we give a complete description of the moduli spaces $\mathcal{M}^{n}(r,a,c)$ when the inequality of Theorem \ref{mainthm} is saturated, that is, when the minimality condition \begin{equation}\label{eq:min} c= C_{\text{m}}(n,a) =\frac{1}{2}na(1-a)\,, \end{equation} is satisfied. In the following we will simply write $C_{\text{m}}$ instead of $C_{\text{m}}(n,a)$, since no ambiguity is likely to arise.
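It is useful to record the dimension count in the minimal case (a consistency check of ours, obtained from the dimension formula of Theorem \ref{mainthm}):

```latex
\[
\dim \mathcal{M}^{n}\left(r,a,C_{\text{m}}\right)
   = 2rC_{\text{m}} + (r-1)na^{2}
   = nra(1-a) + (r-1)na^{2}
   = na(r-a)\,,
\]
% that is, n times the dimension a(r-a) of the Grassmannian Gr(a,r).
```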
\begin{rem} If $\mathcal{E}$ is a torsion-free sheaf on $\Sigma_{n}$ which is trivial at infinity and satisfies the ``minimality'' condition \eqref{eq:min}, then $\mathcal{E}$ is locally free. This can be deduced from the canonical injection $\mathcal{E}\rightarrowtail\mathcal{E}^{**}$ (but it is also a consequence of the monadic description \eqref{eqMinimalMonad} below). \end{rem} \begin{thm}\label{thm:minimal} There are isomorphisms \begin{equation} \mathcal{M}^n\left(r,a,C_{\mathrm{m}} \right)\simeq \begin{cases} \operatorname{Gr}(a,r)&\text{if $n=1$;}\\ T^\vee\operatorname{Gr}(a,r)^{\oplus n-1}&\text{if $n\geq2$,} \end{cases} \label{eqIsoM} \end{equation} where $\operatorname{Gr}(a,r)$ is the Grassmannian of $a$-planes in $\mathbb{C}^r$. \end{thm} The next three subsections are devoted to the proof of Theorem \ref{thm:minimal}. \subsection{The monad in the minimal case.} Condition \eqref{eq:min} implies $\U_{\vec{k}}=0$, so that the monad \eqref{fundamentalmonad} reduces to the complex \begin{equation} \xymatrix{ 0\ar[r]&\mathcal{O}_{\Sigma_n}(1,-1)^{\oplus na}\oplus\mathcal{O}_{\Sigma_n}^{\oplus r-a}\ar[r]^-{\beta}&\mathcal{O}_{\Sigma_n}(1,0)^{\oplus (n-1)a}\ar[r]&0\,, } \label{eqMinimalMonad} \end{equation} whose cohomology sheaf is just the kernel of $\beta$. To ensure that this sheaf is trivial at infinity, one has also to impose the invertibility of the linear map $\Phi = H^0(\beta|_{\ell_\infty}(-1))$ (see \cite[\S 3]{BBLR}). We denote by $\operatorname{GL}(a,r)$ the maximal parabolic subgroup of $\operatorname{GL}(r)$ consisting of upper block triangular matrices of the form \[ \begin{pmatrix} A&B\\ 0&C \end{pmatrix}\,,\qquad\text{where}\quad \begin{cases} A\in\operatorname{Mat}_\mathbb{C}(a\times a)&\\ B\in\operatorname{Mat}_\mathbb{C}(a\times(r-a))\\ C\in\operatorname{Mat}_\mathbb{C}((r-a)\times (r-a))\,.
\end{cases} \] First of all, notice that, if $n=1$ or $a=0$, then $\W_{\vec{k}}=0$, and $\Phi$ is the identity (i.e.~zero) morphism between null vector spaces: this implies that the variety $L_{\vec{k}}$ reduces to a point, and consequently $P_{\vec{k}}=\operatorname{GL}(r)$. To describe the corresponding moduli spaces we have to compute the automorphism group of $\V_{\vec{k}}$: \begin{itemize} \item for $n=1$, $\V_{\vec{k}}=\mathcal{O}_{\Sigma_1}(1,-1)^{\oplus a}\oplus\mathcal{O}_{\Sigma_1}^{\oplus r-a}$; therefore, $$ \operatorname{Aut}(\V_{\vec{k}})\simeq\operatorname{GL}(a,r)\ \ \text{and}\ \ \mathcal{M}^1\left(r,a,a(1-a)/2\right)\simeq \operatorname{GL}(r)/\operatorname{GL}(a,r)\simeq\operatorname{Gr}(a,r)\,; $$ \item for $a=0$, $\V_{\vec{k}}=\mathcal{O}_{\Sigma_n}^{\oplus r}$; therefore, $$ \operatorname{Aut}(\V_{\vec{k}})\simeq\operatorname{GL}(r) \ \ \text{and}\ \ \mathcal{M}^n\left(r,0,0\right)\simeq\operatorname{GL}(r)/\operatorname{GL}(r)=\{\ast\}=\operatorname{Gr}(0,r)\,. $$ \end{itemize} Let us now assume $n\geq2$ and $a\geq 1$. To prove Theorem \ref{thm:minimal} we make use of the well-known isomorphism \begin{equation}\label{eq:iso} T^\vee\operatorname{Gr}(a,r)^{\oplus n-1}\simeq \left (\operatorname{Hom}(\mathbb{C}^{r-a},\mathbb{C}^a)^{\oplus n-1}\times\operatorname{GL}(r)\right )/\operatorname{GL}(a,r)\,, \end{equation} where the $\operatorname{GL}(a,r)$-action is given by: \begin{align*} \begin{pmatrix} A&B\\ 0&C \end{pmatrix}\cdot b&=AbC^{-1}&\text{for $b\in\operatorname{Hom}(\mathbb{C}^{r-a},\mathbb{C}^a)$,}\\ \begin{pmatrix} A&B\\ 0&C \end{pmatrix}\cdot \theta&=\theta\begin{pmatrix} A&B\\ 0&C \end{pmatrix}^{-1}&\text{for $\theta\in\operatorname{GL}(r)$.} \end{align*} \smallskip A more explicit description of the bundle $P_{\vec{k}}$ can be obtained by the following procedure.\\ {\bf 1)} We represent the morphism $\beta$ by a matrix. 
To this aim, we have to choose a basis for the vector space \[ \operatorname{Hom}(\mathbb{C}^{na},\mathbb{C}^{(n-1)a})\otimes H^0(\mathcal{O}_{\Sigma_n}(0,1))\oplus\operatorname{Hom}(\mathbb{C}^{r-a},\mathbb{C}^{(n-1)a})\otimes H^0(\mathcal{O}_{\Sigma_n}(1,0))\,. \] We take homogeneous coordinates $[y_1:y_2]$ on $\mathbb{P}^1$ and pull them back to $\Sigma_n$ by means of the canonical projection $\pi\colon\Sigma_n\to\mathbb{P}^1$; the elements $\{y_2^qy_1^{h-q}\}_{q=0}^h$ form a basis for the vector space $H^0(\mathcal{O}_{\Sigma_n}(0,h))$, for all $h\geq1$. We denote by $s_E$ the unique (up to homotheties) global section of $\mathcal{O}_{\Sigma_n}(E)$ and by $s_\infty$ the section of $\mathcal{O}_{\Sigma_n}(1,0)$ whose vanishing locus is $\ell_\infty$. The multiplication by $s_E$ induces an injective morphism \[ \xymatrix{ \mathcal{O}_{\Sigma_n}(0,n)\ar@{>}[r]&\mathcal{O}_{\Sigma_n}(1,0)\,, } \] so that the set $\{y_2^qy_1^{n-q}s_E\}_{q=0}^{n}\cup\{s_\infty\}$ is a basis for the space $H^0(\mathcal{O}_{\Sigma_n}(1,0))$. According to these choices, the morphism $\beta$ is represented by a matrix of the form \begin{equation} \begin{pmatrix} \beta_{1} & \beta_{2} \end{pmatrix}= \left(\beta_{10}y_1+\beta_{11}y_2\quad\sum_{q=0}^n\beta_{2q}(y_2^qy_1^{n-q}s_E)+\beta_{2,n+1}s_\infty\right)\,, \label{eqBeta} \end{equation} whilst an automorphism $\psi\in\operatorname{Aut}(\V_{\vec{k}})$ is represented by a matrix of the form \begin{equation} \psi= \begin{pmatrix} \psi_{11} & \psi_{12}\\ 0 & \psi_{22} \end{pmatrix}= \begin{pmatrix} \psi_{11} & \sum_{q=0}^{n-1}\psi_{12,q}(y_2^qy_1^{n-1-q}s_E)\\ 0 & \psi_{22} \end{pmatrix}\,. \end{equation} {\bf 2)} The morphism $\Phi$ is represented by an $n(n-1)a\times n(n-1)a$ matrix, whose only nonvanishing terms are $\beta_{10},\beta_{11}\in\operatorname{Mat}_\mathbb{C}((n-1)a\times na)$: \begin{equation} \Phi= \begin{pmatrix} \beta_{10}&&\\ \beta_{11}&\beta_{10}&\\ &\beta_{11}&\ddots&\\ &&\ddots&\beta_{10}\\ &&&\beta_{11} \end{pmatrix}\,.
\label{eqPhi} \end{equation} {\bf 3)} A framing for the kernel of $\beta$ is provided by the choice of a basis for $H^0(\ker\beta|_{\ell_\infty})=\ker H^0(\beta|_{\ell_\infty})$. In other words, it is given by an injective linear map \[ \xi\colon\mathbb{C}^r\to H^0(\V_{\vec{k}}|_{\ell_{\infty}})\quad\text{such that}\quad H^0(\beta|_{\ell_\infty})\circ\xi=0\,. \] Summing up, {\em the bundle $P_{\vec{k}}$ can be described as the set of pairs $(\beta,\xi)$ as above such that $\det\Phi\neq 0$}. The group $G_{\vec{k}}=\operatorname{Aut}\V_{\vec{k}}\times\operatorname{Aut}\W_{\vec{k}}$ acts on $P_{\vec{k}}$ as follows: \[ (\psi,\chi)\cdot(\beta,\xi)=(\chi\circ\beta\circ\psi^{-1},H^0(\psi|_{\ell_\infty})\circ\xi)\,. \] \subsection{A technical Lemma} Let $X$ be a smooth algebraic variety over $\mathbb{C}$ and $G$ a complex affine algebraic group acting on $X$; let $\gamma\colon X\times G \to X\times X$ be the induced morphism given by $(x, g) \mapsto (x, g\cdot x)$. The set-theoretical quotient $X/G$ has a natural structure of ringed space, whose topology is the quotient topology induced by the canonical projection $q\colon X\longrightarrow X/G$ and whose structure sheaf is the sheaf of $G$-invariant functions. If the action is free and $\gamma$ is a closed immersion, then $X/G$ is a smooth algebraic variety, the pair $(X/G,q)$ is a geometric quotient of $X$ modulo $G$, and $X$ is a (locally isotrivial) principal $G$-bundle over $X/G$. This can be proved by arguing as in the proof of \cite[Theorem~5.1]{BBR}. Let $Y$ be a smooth closed subvariety of $X$ and let $H\stackrel{\iota}{\hookrightarrow}G$ be a closed subgroup of $G$. Assume that $H$ acts on $Y$ and that the inclusion $j\colon Y \hookrightarrow X$ is $H$-equivariant. We denote by $p\colon Y\longrightarrow Y/H$ the canonical projection.
\begin{lemma} \label{lemmaquot} If the intersection of $\operatorname{Im} j$ with every $G$-orbit in $X$ is nonempty and, for any $G$-orbit $O_{G}$ in $X$, one has $\operatorname{Stab}_{G}(O_{G}\cap\operatorname{Im} j)=\operatorname{Im}\iota$, then $j$ induces an isomorphism $\bar{\jmath}\colon Y/H \longrightarrow X/G$ of algebraic varieties. \end{lemma} \begin{proof} By \cite[Prop.~0.7]{Mum} the morphism $q$ is affine. Hence, if $U\subset X/G$ is an open affine subset, $V=q^{-1}(U)$ is affine as well; if we set $V=\operatorname{Spec} A$, then $U= \operatorname{Spec}(A^{G})$, and the restricted morphism $q|_{V}$ is induced by the canonical immersion $q^{\sharp}\colon A ^{G}\hookrightarrow A$. Since $j$ is an affine morphism \cite[Prop.~1.6.2.(i)]{EGA2}, the preimage $W=j^{-1}(V)$ is affine, $W=\operatorname{Spec} B$, and by the equivariance of $j$ it is $H$-invariant. It follows that its image $p(W)=\operatorname{Spec}(B^{H})$ is affine and the restricted morphism $p|_{W}$ is induced by the canonical immersion $p^{\sharp}\colon B^{H}\hookrightarrow B$. Let $j^{\sharp}\colon A \longrightarrow B $ be the ring homomorphism determined by $j$. It is not difficult to check that $j^{\sharp} \circ q^{\sharp}$ is an isomorphism onto $p^{\sharp}(B^H) \subset B$. So $j^{\sharp}$ induces a ring isomorphism $\bar{\jmath}\,^{\sharp} \colon A^G \longrightarrow B^H$. This shows that $\bar{\jmath}\colon Y/H \longrightarrow X/G$ is an isomorphism. \end{proof} \begin{cor} Under the previous hypotheses, the principal $H$-bundle $p\colon Y\longrightarrow Y/H$ is a reduction of the principal $G$-bundle $q\colon X\longrightarrow X/G$. In particular, if $X \longrightarrow X/G$ is locally trivial, then $Y\longrightarrow Y/H$ is locally trivial as well.
\end{cor} \subsection{Reduction of the structure group.}\label{reductionstructuregroup} The proof of Theorem \ref{thm:minimal} is based on the reduction of the structure group $G_{\vec{k}}$ of the principal bundle $P_{\vec{k}} \longrightarrow \mathcal{M}^n(r,a,C_{\mathrm{m}})$ to the group $\operatorname{GL}(a,r)$. To this aim let us introduce the closed immersion \begin{equation} j\colon \operatorname{Hom}(\mathbb{C}^{r-a},\mathbb{C}^a)^{\oplus n-1}\times\operatorname{GL}(r)\hookrightarrow P_{\vec{k}} \label{eqj} \end{equation} given by $j(b_1,\dots,b_{n-1},\theta)= (\beta,\xi)$, where the morphism $\beta$ is defined by the equations \begin{equation} \begin{cases} \beta_{10}= \begin{pmatrix} \mathbf{1}_{(n-1)a}&0 \end{pmatrix}\,;\\ \beta_{11}= \begin{pmatrix} 0&\mathbf{1}_{(n-1)a} \end{pmatrix}\,;\\ \beta_{2q}=0&\text{for $q=0,\dots,n$;}\\ \beta_{2,n+1}= \begin{pmatrix} b_1\\\vdots\\b_{n-1} \end{pmatrix}\,, \end{cases} \label{eq-j-Beta} \end{equation} \smallskip and the linear map $\xi$ is represented by the $(n^2a+(r-a))\times r$ matrix \begin{equation} \begin{pmatrix} \xi_1\\ \vdots\\ (-1)^{i-1}\xi_i\\ \vdots\\ (-1)^{n-1}\xi_n\\ \theta_{r-a} \end{pmatrix}\,, \quad \text{where} \ \ \begin{cases} \text{$\theta_{r-a}$ is the matrix consisting of the last $r-a$ rows of $\theta$,}\\ \text{$\xi_i$ is an $na\times r$ block of the form}\quad \begin{pmatrix} 0_{(i-1)a\times r}\\ \theta^a\\ 0_{(n-i)a\times r} \end{pmatrix}\,, \\ \text{$\theta^a$ is the matrix consisting of the first $a$ rows of $\theta$.} \end{cases} \label{eqxi} \end{equation} A direct computation shows that the pair $(\beta,\xi)$ belongs to $P_{\vec{k}}$.
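Part of the ``direct computation'' just mentioned can be sketched numerically: with $\beta_{10}$, $\beta_{11}$ as in \eqref{eq-j-Beta}, the matrix $\Phi$ of \eqref{eqPhi} is invertible. The block assembly below follows \eqref{eqPhi} (our own encoding, with sample values of $n$ and $a$ chosen arbitrarily).

```python
import numpy as np

# Assemble Phi for beta_10 = (1 | 0), beta_11 = (0 | 1), where both blocks
# have size (n-1)a x na, and check that det(Phi) != 0.
def Phi(n, a):
    h = (n - 1) * a                      # block height
    w = n * a                            # block width
    b10 = np.hstack([np.eye(h), np.zeros((h, a))])
    b11 = np.hstack([np.zeros((h, a)), np.eye(h)])
    P = np.zeros((n * h, (n - 1) * w))   # square matrix of size n(n-1)a
    for t in range(n):                   # block row t: beta_10 on the
        if t <= n - 2:                   # diagonal, beta_11 just below it
            P[t * h:(t + 1) * h, t * w:(t + 1) * w] = b10
        if t >= 1:
            P[t * h:(t + 1) * h, (t - 1) * w:t * w] = b11
    return P

for n, a in [(2, 1), (3, 1), (3, 2), (4, 2)]:
    assert abs(np.linalg.det(Phi(n, a))) > 1e-9   # Phi is invertible
```

The kernel argument behind this is elementary: the first block row forces the last $a$ components of the first summand to be the only free ones, and the chain of relations propagates them to the last block row, where they are killed.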
\smallskip We claim that \emph{the immersion $j$ determines a reduction of the principal $G_{\vec{k}}$-bundle $P_{\vec{k}} \longrightarrow \mathcal{M}^n(r,a,C_{\mathrm{m}})$ to the principal $\operatorname{GL}(a,r)$-bundle $\operatorname{Hom}(\mathbb{C}^{r-a},\mathbb{C}^a)^{\oplus n-1}\times\operatorname{GL}(r) \longrightarrow \mathcal{M}^n(r,a,C_{\mathrm{m}})$}. Indeed we can define an immersion $\imath\colon \operatorname{GL}(a,r) {\hookrightarrow} G_{\vec{k}}=\operatorname{Aut}(\V_{\vec{k}})\times\operatorname{Aut}(\W_{\vec{k}})$ by setting \begin{equation} \imath \begin{pmatrix} A&B\\ 0&C \end{pmatrix} = (\psi,\chi)\,, \label{eq-imath} \end{equation} where \[ \psi= \begin{pmatrix} A\otimes\mathbf{1}_n&B\otimes \mathbf{v}\\ 0&C \end{pmatrix}\quad\text{and}\quad\chi=A\otimes\mathbf{1}_{n-1}\,, \] with $\mathbf{v}=(v_0,\dots,v_{n-1})^T$ and $v_i=(-1)^{n-1-i}y_2^iy_1^{n-1-i}s_E$. It is easy to see that the immersion $j$ is $\operatorname{GL}(a,r)$-equivariant. \begin{lemma} The image of $j$ has nonempty intersection with every $G_{\vec{k}}$-orbit, and for any $G_{\vec{k}}$-orbit $O$, the stabilizer in $G_{\vec{k}}$ of $O\cap\operatorname{Im} j$ is precisely the image of $\imath$. 
\label{eqj-red} \end{lemma} \begin{proof} We consider the following block decomposition of the matrix $\Phi^{-1}$: \begin{equation} \begin{aligned} \Phi^{-1}&= \begin{pmatrix} \varphi_{11} & \cdots & \varphi_{1n}\\ \vdots & & \vdots\\ \varphi_{n-1,1} & \cdots & \varphi_{n-1,n}\\ \end{pmatrix} \begin{array}{l} \updownarrow na \phantom{\varphi_{11}}\\ \vdots \phantom{\varphi_{11}}\\ \updownarrow na \phantom{\varphi_{11}} \end{array}\\ &\phantom{=}\mspace{20mu} \begin{matrix} \overleftrightarrow{\mbox{\tiny$(n-1)a$}} & \cdots & \overleftrightarrow{\mbox{\tiny$(n-1)a$}} \end{matrix} \end{aligned} \label{eqPhi-1} \end{equation} Because of eq.~\eqref{eqPhi}, the conditions $\Phi\Phi^{-1}=\Phi^{-1}\Phi=\bm{1}_{n(n-1)a}$ imply that \begin{equation} \begin{aligned} \beta_{10}\varphi_{1q}&=\delta_{1,q}\bm{1}_{(n-1)a} & \text{for}\quad q&=1,\dots,n\\ \varphi_{1q}\beta_{10}+\varphi_{1,q+1}\beta_{11}&=\delta_{1,q}\bm{1}_{na} & \text{for}\quad q&=1,\dots,n-1 \end{aligned} \label{eqbeta-phi} \end{equation} Let $(\beta,\xi)\in P_{\vec{k}}$, and let $O$ be its $G_{\vec{k}}$-orbit. Eqs. \eqref{eqbeta-phi} imply that, by acting with $G_{\vec{k}}$, one can find a point inside $O$ such that \begin{equation} \beta_{10}= \begin{pmatrix} \bm{1}_{(n-1)a} & 0 \end{pmatrix}\,. \label{eqb10} \end{equation} We call $O_{0}$ the subvariety of $O$ cut by eq. \eqref{eqb10}. It is easy to see that the stabilizer $G_{0}$ of $O_{0}$ in $G_{\vec{k}}$ is the subgroup characterized by the condition $\psi_{11}= \left(\begin{smallmatrix} \chi & 0\\ 0 & A \end{smallmatrix}\right)$, for some $A\in \operatorname{GL}(a)$. Let $O_{q}$ be the subvariety of $O_{0}$ cut by the equation \begin{equation} \beta_{11}= \begin{pmatrix} * & 0\\ 0 & \bm{1}_{qa} \end{pmatrix} \end{equation} and let $G_{q}$ be the closed subgroup of $G_{0}$ characterized by the condition $\chi= \left(\begin{smallmatrix} * & 0\\ 0 & A\otimes\bm{1}_{q} \end{smallmatrix}\right)$, for $q=1,\dots,n-1$.
By using eqs.~\eqref{eqbeta-phi} and reasoning by induction, one can show that, for any $q=0,\dots,n-2$ and for any point $(\beta,\xi) \in O_q$, there exists an element $g \in G_q$ such that $g\cdot (\beta,\xi)$ lies inside $O_{q+1}$. Moreover, it is easy to check that the stabilizer of $O_{q+1}$ in $G_q$ is $G_{q+1}$. Summing up, if $(\beta,\xi)\in P_{\vec{k}}$ is any point and $O$ is its $G_{\vec{k}}$-orbit, by acting with $G_{\vec{k}}$ on $(\beta,\xi)$ one can find a point in $O_{n-1}$, which is the subvariety of $O$ cut by eq. \eqref{eqb10} and by the equation \begin{equation} \beta_{11}= \begin{pmatrix} 0 & \bm{1}_{(n-1)a} \end{pmatrix}\,. \label{eqb11} \end{equation} The stabilizer $G_{n-1}$ of $O_{n-1}$ in $G_{\vec{k}}$ is the subgroup characterized by the conditions $\psi_{11}=A\otimes\bm{1}_{n}$ and $\chi =A\otimes\bm{1}_{n-1}$, for some $A\in \operatorname{GL}(a)$. Given any point $(\beta,\xi)\in O_{n-1}$, by acting with $G_{n-1}$ one can find a point such that \begin{equation} \beta_{2q}=0\qquad\text{for}\quad q=0,\dots,n\,. \label{eqb2q} \end{equation} Let $O_{n}$ be the subvariety cut in $O_{n-1}$ by eq.~\eqref{eqb2q}. It is easy to see that the stabilizer $G_{n}$ of $O_{n}$ in $G_{n-1}$ is the closed subgroup determined by the condition $\psi_{12}=B\otimes \mathbf{v}$, where $\mathbf{v}=(v_0,\dots,v_{n-1})^T$ and $v_i=y_2^i(-y_1)^{n-1-i}s_E$, for some $B\in\operatorname{Mat}_\mathbb{C}(a\times(r-a))$. It is easy to see that $G_{n}=\operatorname{Im}\imath$. Let $Z$ be the subvariety cut in $P_{\vec{k}}$ by eqs.~\eqref{eqb10}, \eqref{eqb11} and \eqref{eqb2q}: we claim that $Z=\operatorname{Im} j$. Indeed the condition $H^0(\beta|_{\ell_\infty})\circ\xi=0$ implies that, for all points $(\beta,\xi)\in Z$, the matrix $\xi$ has the form described in eq.~\eqref{eqxi} and, since $\xi$ has maximal rank, it follows that the $r\times r$ matrix $\left(\begin{smallmatrix} \theta^{a} \\ \theta_{r-a}\end{smallmatrix}\right)$ is invertible.
\end{proof} Lemmas \ref{lemmaquot} and \ref{eqj-red} imply that the immersion $j$ determines a reduction of the structure group of the principal $G_{\vec{k}}$-bundle $P_{\vec{k}}$ to a principal $\operatorname{GL}(a,r)$-bundle, as we claimed. In particular, one has the isomorphisms \begin{equation} (\operatorname{Hom}(\mathbb{C}^{r-a},\mathbb{C}^a)^{\oplus n-1}\times\operatorname{GL}(r))/\operatorname{GL}(a,r)\simeq P_{\vec{k}}/G_{\vec{k}} \simeq \mathcal{M}^n(r,a,C_{\mathrm{m}}); \label{eqIsoRed} \end{equation} in view of \eqref{eq:iso}, this concludes the proof of Theorem \ref{thm:minimal}. We notice that the varieties $\operatorname{Hom}(\mathbb{C}^{r-a},\mathbb{C}^a)^{\oplus n-1}\times\operatorname{GL}(r)$ in the case $n>1$ and $\operatorname{GL}(r)$ in the case $n=1$ can be thought of as ADHM data spaces for $\mathcal{M}^n(r,a,C_{\mathrm{m}})$. In Section \ref{Section-Nakajima-Flag} the schemes $\mathcal{M}^n(r,a,C_{\mathrm{m}})$ will be constructed as quiver varieties, and so a different ADHM description will be provided. \begin{rem} Theorem \ref{thm:minimal} is consistent with instanton counting, which shows that the moduli spaces $\mathcal{M}^n\left(r,a,C_{\text{m}} \right)$ have the same Betti numbers as $\operatorname{Gr}(a,r)$ \cite{BPT}. \end{rem} \subsection{Some geometric remarks.} \label{geometricremarks} In this subsection, we give a more intrinsic interpretation of the isomorphism \eqref{eqIsoM}. \begin{prop} \label{lmEExt} Let $\mathcal{E}$ be a sheaf on $\Sigma_{n}$. \begin{enumerate} \item[(i)] The sheaf $\mathcal{E}$ is locally free, trivial at infinity, and satisfies condition \eqref{eq:min} if and only if it fits into an extension of the form \begin{equation} \xymatrix{ 0 \ar[r] & \Ol_{\Sigma_n}(E)^{\oplus a} \ar[r]^-{i} & \mathcal{E} \ar[r]^-{p} & \Ol_{\Sigma_n}^{\oplus r-a} \ar[r] & 0 } \label{eqEExt} \end{equation} for some integers $r>0$ and $0\leq a<r$.
\item[(ii)] Two vector bundles $\mathcal{E}$ and $\mathcal{E}'$ which are trivial at infinity and satisfy condition \eqref{eq:min} are isomorphic if and only if they fit into isomorphic extensions of the form \eqref{eqEExt}. \item[(iii)] The set of isomorphism classes of vector bundles $\mathcal{E}$ on $\Sigma_{n}$ which are trivial at infinity, satisfy condition \eqref{eq:min} and such that $\operatorname{rk}(\mathcal{E})=r$, $c_{1}(\mathcal{E})=aE$ can be identified with the vector space $\operatorname{Ext}^{1}_{\Ol_{\Sigma_n}}\left(\Ol_{\Sigma_n}^{\oplus r-a},\Ol_{\Sigma_n}(E)^{\oplus a}\right)$. \end{enumerate} \end{prop} \begin{proof} (i) Let $\mathcal{E}$ be a vector bundle on $\Sigma_{n}$ which is trivial at infinity and satisfies the minimality condition \eqref{eq:min}. Then $c_{1}(\mathcal{E})=aE$ for some integer $a$ such that $0\leq a<r=\operatorname{rk}(\mathcal{E})$, while the second Chern class $c_{2}(\mathcal{E})$ is fixed by eq.~\eqref{eq:min}. Theorem \ref{thmEMon} and eq.~\eqref{eqMinimalMonad} imply that $\mathcal{E}\simeq\ker\beta$. For $n=1$, one has $\ker\beta= \mathcal{O}_{\Sigma_{1}}(E)^{\oplus a}\oplus \mathcal{O}_{\Sigma_{1}}^{\oplus r-a}$, so that $\mathcal{E}$ fits into a split extension of the form \eqref{eqEExt}: this proves the first statement in this case. Let us assume $n\geq2$. We claim that \begin{equation} \ker\beta_{1}\simeq\Ol_{\Sigma_n}(E)^{\oplus a} \label{eqKerBeta1} \end{equation} where $$\beta_{1}\colon\Ol_{\Sigma_n}(1,-1)^{\oplus na}\longrightarrow\Ol_{\Sigma_n}(1,0)^{\oplus(n-1)a}$$ is the first block of the matrix $\beta$ (see eq.~\eqref{eqBeta}). 
Indeed $\Ol_{\Sigma_n}(1,-1)^{\oplus na}\simeq\mathbb{C}^{a}\otimes\Ol_{\Sigma_n}(1,-1)^{\oplus n}$, $\Ol_{\Sigma_n}(1,0)^{\oplus(n-1)a}\simeq\mathbb{C}^{a}\otimes \Ol_{\Sigma_n}(1,0)^{\oplus n-1}$, and up to the action of $G_{\vec{k}}$ one has $\beta_{1}=\bm{1}_{a}\otimes f$, where \begin{equation} f=\begin{pmatrix} \bm{1}_{n-1}y_{1} & 0 \end{pmatrix} + \begin{pmatrix} 0 & \bm{1}_{n-1}y_{2} \end{pmatrix}\,. \label{eqFinBeta} \end{equation} It follows that $\ker\beta_{1}\simeq(\ker f)^{\oplus a}$. Eq.~\eqref{eqFinBeta} implies that $f$ is surjective, so that $\ker f$ is locally free, of rank $1$ and $c_{1}(\ker f)=H-nF=E$. This proves the claim. By eq.~\eqref{eqMinimalMonad} $\mathcal{E}$ fits into the following commutative diagram \begin{equation} \xymatrix{ & 0 \ar[d] & 0 \ar[d] & \\ 0 \ar[r] & \Ol_{\Sigma_n}(E)^{\oplus a} \ar[r]^-{i} \ar[d]^-{\kappa} & \mathcal{E} \ar[r]^-{p} \ar[d]^-{h} & \Ol_{\Sigma_n}^{\oplus r-a} \ar[r] \ar@{=}[d] & 0\\ 0 \ar[r] & \Ol_{\Sigma_n}(1,-1)^{\oplus na} \ar[r]^-{j} \ar[d]^-{\beta_{1}} & \V_{\vec{k}} \ar[r]^-{\pi} \ar[d]^{\beta} & \Ol_{\Sigma_n}^{\oplus r-a} \ar[r] & 0\\ &\W_{\vec{k}} \ar@{=}[r] \ar[d] & \W_{\vec{k}} \ar[d] \\ & 0 & 0 } \label{eqDiaExt} \end{equation} where $j=\left(\begin{smallmatrix} \bm{1}_{na} \\ 0 \end{smallmatrix}\right)$, $\pi=\left(\begin{smallmatrix} 0 & \bm{1}_{r-a} \end{smallmatrix}\right)$ and $p=\pi\circ h$. In the left column we have used eq.~\eqref{eqKerBeta1} and the surjectivity of $f$. The morphism $i$ is induced by the other morphisms, and the injectivity of $i$ is a consequence of the injectivity of $j\circ \kappa= h\circ i$. The surjectivity of $p$ is a consequence of the Snake Lemma. This proves the first statement in one direction. Conversely, let $\mathcal{E}$ be a sheaf on $\Sigma_{n}$ that fits into eq.~\eqref{eqEExt}. The Chern character of $\mathcal{E}$ is easily computed, and in particular it turns out that $\mathcal{E}$ satisfies condition \eqref{eq:min}. 
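Explicitly, by the Whitney formula applied to \eqref{eqEExt} one has
\begin{equation*}
c(\mathcal{E})=c\left(\Ol_{\Sigma_n}(E)\right)^{a}=(1+E)^{a}\,,
\end{equation*}
so that $c_{1}(\mathcal{E})=aE$ and, since $E^{2}=-n$, $c_{2}(\mathcal{E})=\binom{a}{2}E^{2}=-\tfrac{1}{2}na(a-1)$, independently of the extension class.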
Since $\mathcal{E}$ is an extension of locally free sheaves, it is locally free, and its restriction $\mathcal{E}|_{\ell_\infty}$ is locally free of the same rank. By restricting \eqref{eqEExt} to $\ell_{\infty}$, by twisting the result by $\mathcal{O}_{\ell_\infty}(-1)$ and by taking cohomology one gets $H^{i}(\mathcal{E}|_{\ell_\infty}(-1))=0$, $i=0,1$, which implies that $\mathcal{E}|_{\ell_\infty}$ is trivial. (ii) In one direction, the second statement is straightforward. In the other direction, we have to distinguish between the case $n=1$ and the case $n\geq 2$. For $n=1$, \cite[Lemma~3.1]{BBR} implies that $$\operatorname{Ext}^{1}_{\mathcal{O}_{\Sigma_{1}}}\left(\mathcal{O}_{\Sigma_{1}}^{\oplus r-a},\mathcal{O}_{\Sigma_{1}}(E)^{\oplus a}\right)=0\,.$$ It follows that all extensions of the form \eqref{eqEExt} split, and this proves the second statement in this case. For $n\geq2$, if $\mathcal{E}$ and $\mathcal{E}'$ are two isomorphic vector bundles which are trivial at infinity and satisfy condition \eqref{eq:min}, then the first part of the proof entails that $\mathcal{E}$ and $\mathcal{E}'$ fit into extensions of the form \eqref{eqEExt}. By \cite[Lemma~4.7]{BBR} any isomorphism $\Lambda\colon\mathcal{E}\longrightarrow\mathcal{E}'$ lifts uniquely to a monad isomorphism $(\psi,\chi)$. The second statement is easily checked by means of the diagram \eqref{eqDiaExt}. (iii) The statement follows from (i) and (ii). \end{proof} Let $X_{n}=\Sigma_{n}\setminus\ell_\infty$. This open subset can be naturally regarded as the total space of the line bundle $\mathcal{O}_{\mathbb{P}^1}(-n)$. \begin{lemma} \label{lmE-iso-away-infty} Two vector bundles $\mathcal{E}$ and $\mathcal{E}'$ with the same Chern character, which are trivial at infinity and satisfy condition \eqref{eq:min}, are isomorphic if and only if their restrictions $\mathcal{E}|_{X_{n}}$ and $\mathcal{E}'|_{X_{n}}$ are isomorphic as $\mathcal{O}_{X_{n}}$-modules.
\end{lemma} \begin{proof} Let $\operatorname{rk}(\mathcal{E})=\operatorname{rk}(\mathcal{E}')=r$, $c_{1}(\mathcal{E})=c_{1}(\mathcal{E}')=aE$. By Proposition \ref{lmEExt}, it is enough to prove that the natural morphism \begin{equation} \operatorname{Ext}^{1}_{\Ol_{\Sigma_n}}\left(\Ol_{\Sigma_n}^{\oplus r-a},\Ol_{\Sigma_n}(E)^{\oplus a}\right)\longrightarrow\operatorname{Ext}^{1}_{\mathcal{O}_{X_{n}}}\left(\mathcal{O}_{X_{n}}^{\oplus r-a},\left(\Ol_{\Sigma_n}(E)|_{X_{n}}\right)^{\oplus a}\right) \end{equation} is an isomorphism. This is equivalent to proving that the natural morphism \begin{equation} H^{1}(\Sigma_{n},\Ol_{\Sigma_n}(E))\longrightarrow H^{1}\left(X_{n},\Ol_{\Sigma_n}(E)|_{X_{n}}\right) \label{eqRestrE} \end{equation} is an isomorphism. For any $\Ol_{\Sigma_n}$-module $\mathcal{F}$, we denote by $H^{i}_{\ell_\infty}(\Sigma_{n},\mathcal{F})$ its $i$-th cohomology group with supports in $\ell_\infty$. By \cite[Exercise~III.2.3.(e)]{Har} there is an exact sequence \begin{multline} \xymatrix{ H^{0}(X_{n},\mathcal{O}_{X_{n}}) \ar[r]^-{\partial_{\mathcal{O}}} & H^{1}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}) \ar[r] & H^{1}(\Sigma_{n},\Ol_{\Sigma_n}) \ar[r] & }\\ \xymatrix{ \ar[r] & H^{1}(X_{n},\mathcal{O}_{X_{n}}) \ar[r] & H^{2}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}) \ar[r] & H^{2}(\Sigma_{n},\Ol_{\Sigma_n}) }\,. \end{multline} Our first claim is that \begin{equation} \begin{aligned} &\text{the connecting morphism} \quad H^{0}(X_{n},\mathcal{O}_{X_{n}})\stackrel{\partial_{\mathcal{O}}}{\longrightarrow} H^{1}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}) \quad \text{is surjective} \\ &\text{and}\quad H^{2}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}) =0\,. \end{aligned} \label{eqCsuppO} \end{equation} We first notice that, by \cite[Lemma~3.1]{BBR}, $H^{1}(\Sigma_{n},\Ol_{\Sigma_n})=0$, so that $\partial_{\mathcal{O}}$ is surjective.
Next, by using \cite[Exercise~III.4.1]{Har}, we have the isomorphism \begin{equation} H^{1}(X_{n},\mathcal{O}_{X_{n}}) \simeq H^{1}(\mathbb{P}^1,\pi_{*}\mathcal{O}_{X_{n}})\,, \label{eq-van1} \end{equation} where $\pi\colon X_{n}\longrightarrow\mathbb{P}^1$ is the natural projection. Since $X_{n}$ is the total space of the line bundle $\mathcal{O}_{\mathbb{P}^1}(-n)$, a direct computation shows that $\pi_{*}\mathcal{O}_{X_{n}}\simeq\bigoplus_{k\geq0}\mathcal{O}_{\mathbb{P}^1}(kn)$, so that \begin{equation} H^{1}(\mathbb{P}^1,\pi_{*}\mathcal{O}_{X_{n}})\simeq \bigoplus_{k\geq0}H^{1}(\mathbb{P}^1,\mathcal{O}_{\mathbb{P}^1}(kn))=0\,. \label{eq-van2} \end{equation} Eqs. \eqref{eq-van1} and \eqref{eq-van2} imply that $H^{1}(X_{n},\mathcal{O}_{X_{n}})=0$. Finally, again by \cite[Lemma~3.1]{BBR}, we have that $H^{2}(\Sigma_{n},\Ol_{\Sigma_n})=0$. The vanishing of $H^{2}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n})$ follows. A claim analogous to \eqref{eqCsuppO} holds for the sheaf $\Ol_{\Sigma_n}(E)$, i.e., \begin{equation} \begin{aligned} &\text{the morphism}\quad H^{0}\left(X_{n},\Ol_{\Sigma_n}(E)|_{X_{n}}\right)\stackrel{\partial_{E}}{\longrightarrow} H^{1}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}(E))\quad \text{is surjective}\\ &\text{and}\quad H^{2}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}(E))=0\,.
\end{aligned} \label{eqCsuppE} \end{equation} Indeed, the short exact sequence $$\xymatrix{ 0 \ar[r] & \Ol_{\Sigma_n} \ar[r] & \Ol_{\Sigma_n}(E) \ar[r] & \mathcal{O}_{E}(-n) \ar[r] & 0}$$ induces a commutative diagram \begin{equation} \xymatrix{ 0 \ar[r] & H^{0}(X_{n},\mathcal{O}_{X_{n}}) \ar[r]^-{f} \ar[d]^{\partial_{\mathcal{O}}} & H^{0}\left(X_{n},\Ol_{\Sigma_n}(E)|_{X_{n}}\right) \ar[r] \ar[d]^{\partial_{E}} & H^{0}\left(X_{n},\mathcal{O}_{E}(-n)|_{X_{n}}\right) \ar[d]\\ H^{0}_{\ell_\infty}(\Sigma_{n},\mathcal{O}_{E}(-n)) \ar[r] & H^{1}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}) \ar[r]^-{g} & H^{1}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}(E)) \ar[r] & H^{1}_{\ell_\infty}(\Sigma_{n},\mathcal{O}_{E}(-n)) } \end{equation} whose rows are exact. Since $E\cap\ell_\infty=\varnothing$, there are inclusions \begin{equation} E \subset X_{n}\qquad\text{and}\qquad \ell_\infty\subset \Sigma_{n}\setminus E\,. \label{eqInclEli} \end{equation} From the first inclusion it follows that $H^{0}\left(X_{n},\mathcal{O}_{E}(-n)|_{X_{n}}\right) = H^{0}\left(\mathbb{P}^1,\mathcal{O}_{\mathbb{P}^1}(-n)\right)=0$, hence $f$ is an isomorphism. The second inclusion, in view of \cite[Exercise~III.2.3.(f)]{Har}, implies that $$H^{i}_{\ell_\infty}(\Sigma_{n},\mathcal{O}_{E}(-n))=0\qquad\text{for all}\quad i\geq0\,,$$ so that $g$ is an isomorphism. Thus, $\partial_{E}$ is surjective because this is the case for $\partial_{\mathcal{O}}$. The second inclusion of \eqref{eqInclEli} implies also that $$H^{i}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}(E))\simeq H^{i}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n})\qquad\text{for all}\quad i\geq0\,.$$ By the last statement in \eqref{eqCsuppO} one has $H^{2}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}(E))=0$. This proves the claim \eqref{eqCsuppE}. 
Finally, the fact that the morphism \eqref{eqRestrE} is an isomorphism follows from the exact sequence \begin{multline} \xymatrix{ H^{0}\left(X_{n},\Ol_{\Sigma_n}(E)|_{X_{n}}\right) \ar[r]^-{\partial_{E}} & H^{1}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}(E)) \ar[r] & H^{1}(\Sigma_{n},\Ol_{\Sigma_n}(E)) \ar[r] & }\\ \xymatrix{ \ar[r] & H^{1}\left(X_{n},\Ol_{\Sigma_n}(E)|_{X_{n}}\right) \ar[r] & H^{2}_{\ell_\infty}(\Sigma_{n},\Ol_{\Sigma_n}(E))\,. } \label{eqSeqCsuppE} \end{multline} \end{proof} \begin{cor} \label{proTipFib} For $n\geq2$, there is an isomorphism between the typical fibre of the vector bundle $T^\vee\operatorname{Gr}(a,r)^{\oplus n-1}$ and the vector space of isomorphism classes of $\mathcal{O}_{X_{n}}$-modules $\mathcal{E}|_{X_{n}}$ obtained by restricting vector bundles $\mathcal{E}$ on $\Sigma_{n}$ that are trivial at infinity, satisfy condition \eqref{eq:min}, and such that $\operatorname{rk}(\mathcal{E})=r$, $c_{1}(\mathcal{E})=aE$. \end{cor} \begin{proof} In view of Lemma \ref{lmE-iso-away-infty} it is enough to prove that there is an isomorphism between the typical fibre of the vector bundle $T^\vee\operatorname{Gr}(a,r)^{\oplus n-1}$ and the vector space of isomorphism classes of vector bundles $\mathcal{E}$ on $\Sigma_{n}$ which are trivial at infinity, satisfy condition \eqref{eq:min}, and such that $\operatorname{rk}(\mathcal{E})=r$, $c_{1}(\mathcal{E})=aE$. Notice first that there is an isomorphism of complex vector spaces \begin{equation} \operatorname{Ext}^{1}_{\Ol_{\Sigma_n}}\left(\Ol_{\Sigma_n}^{\oplus r-a},\Ol_{\Sigma_n}(E)^{\oplus a}\right)\simeq \operatorname{Hom}(\mathbb{C}^{r-a},\mathbb{C}^a)\otimes H^{1}(\Ol_{\Sigma_n}(E))\,. \label{eqTipFib} \end{equation} By the Riemann-Roch formula, since $h^{0}(\Ol_{\Sigma_n}(E))=1$ and $h^{2}(\Ol_{\Sigma_n}(E))=0$ (see \cite[Lemma~3.1]{BBR}), one has $h^{1}(\Ol_{\Sigma_n}(E))=n-1$. 
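Spelling the computation out: with the intersection numbers $E^{2}=-n$, $F^{2}=0$, $E\cdot F=1$ and the canonical class $K_{\Sigma_{n}}=-2E-(n+2)F$ in the standard presentation of $\operatorname{Pic}(\Sigma_{n})$, Riemann-Roch yields
\begin{equation*}
\chi(\Ol_{\Sigma_n}(E))=\chi(\Ol_{\Sigma_n})+\tfrac{1}{2}\,E\cdot\left(E-K_{\Sigma_{n}}\right)=1+\tfrac{1}{2}\left(3E^{2}+(n+2)\right)=2-n\,,
\end{equation*}
whence $h^{1}(\Ol_{\Sigma_n}(E))=h^{0}+h^{2}-\chi=1+0-(2-n)=n-1$.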
The right-hand side of \eqref{eqTipFib} can therefore be regarded as the typical fibre of the vector bundle $T^\vee\operatorname{Gr}(a,r)^{\oplus n-1}$ in view of eq. \eqref{eqIsoRed}. The claim then follows from Proposition \ref{lmEExt}(iii). \end{proof} The Grassmannian $\operatorname{Gr}(a,r)$, in its turn, parameterizes equivalence classes of framings. More explicitly, two framings $\theta\colon\mathcal{E}|_{\ell_\infty}\stackrel{\sim}{\longrightarrow}\mathcal{O}_{\ell_\infty}^{\oplus r}$ and $\theta'\colon\mathcal{E}'|_{\ell_\infty}\stackrel{\sim}{\longrightarrow}\mathcal{O}_{\ell_\infty}^{\oplus r}$ are equivalent if and only if there is an isomorphism $\Lambda\colon\mathcal{E}\longrightarrow\mathcal{E}'$ such that \begin{equation} \theta'\circ\Lambda\vert_{\ell_{\infty}} = \theta \label{eqEquivFr} \end{equation} (cf. Definition \ref{definframedsheaf}). Since any framing $\theta$ is an isomorphism between two trivial sheaves on $\ell_\infty\simeq\mathbb{P}^1$, there is a 1-1 correspondence between the set of framings and the set of induced isomorphisms $H^{0}(\theta)\colon H^{0}(\mathcal{E}|_{\ell_\infty})\longrightarrow\mathbb{C}^{r}$ between global sections. For this reason, eq.~\eqref{eqEquivFr} is equivalent to \begin{equation} H^{0}(\theta')\circ H^{0}(\Lambda\vert_{\ell_{\infty}}) = H^{0}(\theta)\,. \end{equation} In view of the reduction of the structure group of $P_{\vec{k}}$ carried out in Subsection \ref{reductionstructuregroup}, we see that, for every element $[(\mathcal{E},\theta)]\in\mathcal{M}^n\left(r,a,C_{\mathrm{m}} \right)$, the space $H^{0}(\mathcal{E}|_{\ell_\infty})$ can be identified with a fixed $r$-dimensional subspace $V$ of $H^{0}(\V_{\vec{k}}|_{\ell_\infty})$. In particular, if one computes the matrix $H^{0}(\beta|_{\ell_\infty})$ by means of eqs.~\eqref{eq-j-Beta}, one gets a fixed isomorphism $V\stackrel{\sim}{\longrightarrow}\mathbb{C}^{r}$: in this way, we can regard a framing as a matrix in $\operatorname{GL}(r)$.
By using the immersion \eqref{eq-imath}, one proves easily that two framings $\theta$ and $\theta'$ are equivalent if and only if $H^0(\theta')g=H^0(\theta)$ for some $g\in\operatorname{GL}(a,r)$. With these identifications in mind, it is then straightforward that the Grassmannian $\operatorname{Gr}(a,r)=\operatorname{GL}(r)/\operatorname{GL}(a,r)$ parameterizes the equivalence classes $[\theta]$ of framings. For $n\geq2$, the previous construction enables one to reinterpret intrinsically the canonical projection $T^\vee\operatorname{Gr}(a,r)^{\oplus n-1}\longrightarrow\operatorname{Gr}(a,r)$ as the map $[(\mathcal{E},\theta)]\longmapsto[\theta]$. \section{Nakajima's flag varieties and the spaces $\mathcal{M}^n\left(r,a,C_{\mathrm{m}} \right)$}\label{Section-Nakajima-Flag} In this section we show (Proposition \ref{corMshMq}) that there is an isomorphism between the moduli space $\mathcal{M}^n(r,a,C_{\mathrm{m}})$ and the moduli space of representations of a suitable GF quiver with relations. The proof is based on a straightforward generalization of a result due to Nakajima \cite{Na94, Na96}. For any positive integer $d$, let $\mathcal{A}_{d}$ be the Dynkin quiver \begin{equation} \xymatrix@R-2em@C+1ex{ \mbox{\footnotesize{$0$}} & \mbox{\footnotesize{$1$}} & \mbox{\footnotesize{$2$}} & & \mbox{\footnotesize{$d-2$}} & \mbox{\footnotesize{$d-1$}}\\ \bullet & \bullet \ar[l]_-{a_{1}} & \bullet \ar[l]_-{a_{2}} & \cdots \ar[l] & \bullet \ar[l] & \bullet \ar[l]_-{a_{d-1}} } \end{equation} So, $I=\{0,\dots,d-1\}$ and the dimension vector $\vec{v}$ of a representation $(V,X)$ of $\mathcal{A}_{d}$ is an element of $\mathbb{N}^{d}$. We want to associate with $\mathcal{A}_{d}$ a sequence of GF quivers $\mathcal{Q}_{d,n}$, $n\geq 1$. Since we are interested only in $(\vec{v},\vec{w})$-dimensional representations, where $\vec{w}=(u,0,\dots,0)$ for some integer $u >0$, it is enough to add to $I$ only the vertex $0'$.
We define the quivers $\mathcal{Q}_{d,n}$ as follows \begin{equation} \begin{gathered} \xymatrix@R-2em@C+1ex{ \mbox{\footnotesize{$0'$}}&\mbox{\footnotesize{$0$}} & \mbox{\footnotesize{$1$}} & \mbox{\footnotesize{$2$}} & & \mbox{\footnotesize{$d-2$}} & \mbox{\footnotesize{$d-1$}}\\ \bullet &\bullet \ar[l]_-{j} & \bullet \ar[l]_-{a_{1}} & \bullet \ar[l]_-{a_{2}} & \cdots \ar[l] & \bullet \ar[l] & \bullet \ar[l]_-{a_{d-1}} }\\ \xymatrix@R-2em@C+1.5em{ \mbox{\footnotesize{$0'$}}&\mbox{\footnotesize{$0$}} & \mbox{\footnotesize{$1$}} & \mbox{\footnotesize{$2$}} & & \mbox{\footnotesize{$d-2$}} & \mbox{\footnotesize{$d-1$}}\\ \bullet \ar@/_15pt/[r]^-{i_{1}} \ar@/_30pt/[r]^-{i_{2}} \ar@/_35pt/@{.>}[r] \ar@/_40pt/@{.>}[r] \ar@/_50pt/[r]_-{i_{n-1}} &\bullet \ar@/_/[l]_-{j} \ar@/_15pt/[r]^-{b_{11}} \ar@/_30pt/[r]^-{b_{12}} \ar@/_35pt/@{.>}[r] \ar@/_40pt/@{.>}[r] \ar@/_50pt/[r]_-{b_{1,n-1}} & \bullet \ar@/_/[l]_-{a_{1}} \ar@/_15pt/[r]^-{b_{21}} \ar@/_30pt/[r]^-{b_{22}} \ar@/_35pt/@{.>}[r] \ar@/_40pt/@{.>}[r] \ar@/_50pt/[r]_-{b_{2,n-1}} & \bullet \ar@/_/[l]_-{a_{2}} \ar@/_15pt/[r] \ar@/_30pt/[r] \ar@/_35pt/@{.>}[r] \ar@/_40pt/@{.>}[r] \ar@/_50pt/[r] &\mspace{10mu} \cdots \mspace{10mu} \ar@/_/[l] \ar@/_15pt/[r] \ar@/_30pt/[r] \ar@/_35pt/@{.>}[r] \ar@/_40pt/@{.>}[r] \ar@/_50pt/[r] & \bullet \ar@/_/[l] \ar@/_15pt/[r]^-{b_{d-1,1}} \ar@/_30pt/[r]^-{b_{d-1,2}} \ar@/_35pt/@{.>}[r] \ar@/_40pt/@{.>}[r] \ar@/_50pt/[r]_-{b_{d-1,n-1}} & \bullet \ar@/_/[l]_-{a_{d-1}} \\ } \end{gathered} \begin{gathered} \mbox{\framebox[1cm]{\begin{minipage}{1cm}\centering $n=1$\end{minipage}}} \vspace{3cm} \\ \mbox{\framebox[1cm]{\begin{minipage}{1cm}\centering $n\geq2$\end{minipage}}} \end{gathered} \end{equation} A representation of a quiver of this form is supported by a vector space $V\oplus U$, where $U$ is associated with the vertex $0'$; for the sake of brevity, we shall say that such a representation is $(\vec{v},u)$-dimensional. 
For $n\geq 2$ let $J_{d,n}$ be the ideal of the algebra $\mathbb{C}\mathcal{Q}_{d,n}$ generated by the following relations: \begin{equation} \begin{gathered} i_{q}j=0\qquad q=1,\dots,n-1 \qquad \qquad\text{when}\quad d=1\\[5pt] \left\{ \begin{aligned} a_{1}b_{1q}+i_{q}j&=0\\ b_{1q}a_{1}&=0 \end{aligned} \right.\qquad q=1,\dots,n-1\qquad\qquad\text{when}\quad d=2\\[5pt] \left\{ \begin{aligned} a_{1}b_{1q}+i_{q}j&=0\\ a_{p+1}b_{p+1,q}-b_{pq}a_{p}&=0\\ b_{d-1,q}a_{d-1}&=0 \end{aligned} \right.\qquad \begin{aligned} p&=1,\dots,d-2\\ q&=1,\dots,n-1 \end{aligned} \qquad \qquad\text{when}\quad d>2\,. \end{gathered} \label{eqGenJ} \end{equation} We define the algebra $\mathcal{F}_{d,n}$ as follows \begin{equation}\label{algebraFDN} \mathcal{F}_{d,n}= \begin{cases} \mathbb{C} \mathcal{Q}_{d,1} &\qquad\text{when}\quad n=1\\ \mathbb{C}\mathcal{Q}_{d,n}/J_{d,n} &\qquad\text{when}\quad n>1\,.\\ \end{cases} \end{equation} We choose an integer $u>d$ and a dimension vector $\vec{v}=(v_{0},\dots,v_{d-1})$ such that $u>v_{0}>v_{1}>\dots >v_{d-1}>0$. Let $U=\mathbb{C}^{u}$ and $V= \bigoplus_{i=0}^{d-1} V_i =\bigoplus_{i=0}^{d-1}\mathbb{C}^{v_{i}}$. We fix also the stability parameter $\vartheta^+=(1,1,\dots,1)\in\mathbb{R}^{d}$. We recall that the partial flag variety $\operatorname{Fl}(\vec{v},u)$ is the smooth projective variety whose points can be identified with filtrations $\mathbb{C}^{u}\supset E_{0} \supset E_{1} \supset \dots \supset E_{d-1}$ of complex vector spaces such that $\dim E_{i}=v_{i}$, $i=0,\dots,d-1$. The following result was proved by Nakajima for $n\leq 2$ \cite{Na94, Na96}. \begin{prop} \label{propCotFlag} Let $\mathcal{M}(\mathcal{F}_{d,n},\vec{v},u)_{\vartheta^+}$ be the moduli space of $\vartheta^+$\!-semistable $(\vec{v},u)$-dimensional representations of $\mathcal{F}_{d,n}$. 
There is an isomorphism \begin{equation} \mathcal{M}(\mathcal{F}_{d,n},\vec{v},u)_{\vartheta^+}\simeq \begin{cases} \operatorname{Fl}(\vec{v},u) &\qquad\text{if}\quad n=1;\\ T^\vee\operatorname{Fl}(\vec{v},u)^{\oplus n-1}&\qquad\text{if}\quad n\geq2. \end{cases} \label{eqIsoFlag} \end{equation} \end{prop} To prove Proposition \ref{propCotFlag} we simplify the notation we introduced in Section \ref{quiversection}. For a representation of $\mathcal{F}_{d,n}$ supported by $\mathbb{V}:=\left(\bigoplus_{i=0}^{d-1}\mathbb{C}^{v_{i}}\right)\oplus\mathbb{C}^{u}$ we set \begin{align} \label{eqABfe} e&=X_{j} & f_{q}&=X_{i_{q}} & A_{p}&=X_{a_{p}} & B_{pq}&=X_{b_{pq}} \end{align} with $p=1,\dots,d-1$ and $q=1,\dots,n-1$ (in the case $d=1$ there are no morphisms $A_{p}$ and $B_{pq}$, whilst in the case $n=1$ there are no morphisms $f_{q}$ and $B_{pq}$). According to Definition \ref{defFrStab}, a $\vartheta^+$\!-semistable $(\vec{v},u)$-dimensional representation of $\mathcal{F}_{d,n}$ is defined in terms of an auxiliary quiver $\mathcal{Q}_{d,n}^{u}$ and of an ideal $J^{u}_{d,n}\subset\mathbb{C}\mathcal{Q}_{d,n}^{u}$. The quiver $\mathcal{Q}_{d,n}^{u}$ is defined by renaming the vertex $0'$ as $\infty$, by replacing the arrow $j$ with $u$ arrows $\tilde{\jmath}_{1},\dots,\tilde{\jmath}_{u}$ and, if $n>1$, by replacing the arrow $i_{q}$ with $u$ arrows $\tilde{\imath}_{q1},\dots,\tilde{\imath}_{qu}$, for all $q=1,\dots,n-1$. For $n>1$ the ideal $J^{u}_{d,n}$ is generated by the relations obtained by replacing the product $i_{q}j$ with the sum of products $\sum_{l=1}^{u}\tilde{\imath}_{ql}\tilde{\jmath}_{l}$ in eqs. \eqref{eqGenJ}. The definition of the algebra $\mathcal{F}^{u}_{d,n}$ is given analogously to eq.~\eqref{algebraFDN}. 
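Before proceeding, it may help to spell out the case $d=1$, which is the one entering Proposition \ref{corMshMq} below. A $(\vec{v},u)$-dimensional representation of $\mathcal{F}_{1,n}$, with $\vec{v}=(v_{0})$, amounts to linear maps
\begin{equation*}
e\colon\mathbb{C}^{v_{0}}\longrightarrow\mathbb{C}^{u}\qquad\text{and}\qquad f_{q}\colon\mathbb{C}^{u}\longrightarrow\mathbb{C}^{v_{0}}\,,\quad q=1,\dots,n-1\,,
\end{equation*}
subject to the relations $f_{q}\circ e=0$ coming from $i_{q}j=0$ (with paths read as compositions of linear maps; for $n=1$ only the map $e$ survives, and there are no relations). Whenever $e$ is injective, each pair $(e,f_{q})$ determines the point $V=e(\mathbb{C}^{v_{0}})\in\operatorname{Gr}(v_{0},u)$ together with the covector $\bar{f}_{q}\in\operatorname{Hom}(\mathbb{C}^{u}/V,V)$ induced by $f_{q}$, consistently with the isomorphism \eqref{eqIsoFlag}.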
For a representation of $\mathcal{F}^{u}_{d,n}$ supported by $\mathbb{U}:=\left(\bigoplus_{i=0}^{d-1}\mathbb{C}^{v_{i}}\right)\oplus\mathbb{C}$ we set \begin{align} \tilde{e}_{l}&=X_{\tilde{\jmath}_{l}} & \tilde{f}_{ql}&=X_{\tilde{\imath}_{ql}} & A_{p}&=X_{a_{p}} & B_{pq}&=X_{b_{pq}} \end{align} with $l=1,\dots,u$, $p=1,\dots,d-1$, and $q=1,\dots,n-1$ (in the case $d=1$ there are no morphisms $A_{p}$ and $B_{pq}$, whilst in the case $n=1$ there are no morphisms $\tilde{f}_{ql}$ and $B_{pq}$). We now write down the isomorphism \eqref{eqIsoCB0} for the particular case in question. Having fixed a basis $\{\varepsilon_{1},\dots,\varepsilon_{u}\}$ for $\mathbb{C}^{u}$, we define the linear morphisms \begin{equation} \begin{array}{rccl} \varphi_{l}\colon&\mathbb{C}^{u}&\longrightarrow&\mathbb{C}\\ & z = \sum z_k \varepsilon_k &\longmapsto & z_{l} \end{array}\qquad,\qquad \begin{array}{rccl} \psi_{l}\colon&\mathbb{C}&\longrightarrow&\mathbb{C}^{u}\\ & \nu &\longmapsto & \nu \varepsilon_{l} \end{array} \end{equation} for $l=1,\dots, u$; of course, one has $\operatorname{id}_{\mathbb{C}^{u}}=\sum_{l=1}^{u}\psi_{l}\circ\varphi_{l}$. The isomorphism \eqref{eqIsoCB0} is given by the map \begin{equation} \label{eqIsoCB'} (e,f_{p},A_{p},B_{pq})_{ \begin{subarray}{l} p=1,\dots,d-1 \\ q=1,\dots,n-1 \end{subarray} } \longmapsto (\tilde{e}_{l},\tilde{f}_{pl},A_{p},B_{pq})_{ \begin{subarray}{l} p=1,\dots,d-1 \\ q=1,\dots,n-1\\ l=1,\dots,u \end{subarray} } \end{equation} where $\tilde{e}_{l}=\varphi_{l}\circ e$ and $\tilde{f}_{pl}=f_{p}\circ\psi_{l}$. \begin{lemma} \label{lmStabQdn} A representation $\left(\mathbb{V},X\right) \in \operatorname{Rep}(\mathcal{F}_{d,n};\vec{v},u)$ is $\vartheta^+$\!-semistable if and only if it is $\vartheta^+$\!-stable. It is $\vartheta^+$\!-semistable if and only if the morphisms $e$, $A_{1}$,\dots, $A_{d-1}$ are injective.
\end{lemma} \begin{proof} Given a subrepresentation $(S\oplus S_\infty,Y)$ of a representation $\left(\mathbb{U},X\right)$ of $\mathcal{F}^{u}_{d,n}$, we set $s_{i}=\dim S_{i}$ for $i=\infty,0,\dots,d-1$ and $\vec{s}=(s_{0},\dots,s_{d-1})$. According to Definition \ref{defFrStab}, a representation $\left(\mathbb{U},X\right)$ of $\mathcal{F}^{u}_{d,n}$ is $\widehat{\vartheta}^+$\!-semistable if, for all proper, nontrivial subrepresentations $(S\oplus S_\infty,Y)$, one has \begin{equation} \vartheta^+\cdot\vec{s}\leq(\vartheta^+\cdot\vec{v})s_{\infty}\,, \label{eqSemiStab1'} \end{equation} and it is $\widehat{\vartheta}^+$\!-stable if strict inequality holds. It is easy to see that a representation $\left(\mathbb{U},X\right)$ is $\widehat{\vartheta}^+$\!-semistable if and only if for all proper, nontrivial subrepresentations one has $s_{\infty}=1$, and that all $\widehat{\vartheta}^+$\!-semistable representations are stable. This establishes the first statement of the Lemma. As for the second statement, we start by showing that, given a representation $\left(\mathbb{U},X\right)$, the condition $s_{\infty}=1$ holds true for all proper, nontrivial subrepresentations if and only if \begin{equation} \bigcap_{l=1}^{u}\ker \tilde{e}_{l}=\{0\}\qquad\text{and}\qquad\ker A_{p}=\{0\}\qquad\text{for}\quad p=1,\dots,d-1\,. \label{eqTeA'} \end{equation} To prove this claim in one direction we argue by contradiction.
If we assume that eq.~\eqref{eqTeA'} is false, then:\\ $\bullet$ in the case $n=1$, one can find a subrepresentation supported by $S=(0,\dots,0,S_{m},0,\dots,0)$, where \begin{equation}\label{eqsubrep} S_{m} = \begin{cases} \bigcap_{l=1}^{u}\ker \tilde{e}_{l} & \text{for}\ m=0\,,\\ \ker A_{m} & \text{for}\ m\in\{1,\dots,d-1\}\ \text{with}\ d>1\,; \end{cases}\end{equation} $\bullet$ in the case $n>1$, one can find a subrepresentation supported by $S=(0,\dots,0,S_{m},S_{m+1},\dots,S_{d-1})$, where $S_{m}$ is defined as in eq.~\eqref{eqsubrep}, while $S_{m+1},\dots,S_{d-1}$ are defined inductively by the formula \begin{equation} S_{p}=\sum_{q=1}^{n-1}B_{pq}(S_{p-1}) \label{eqSp'} \end{equation} where $p=m+1,\dots,d-1$.\\ In both cases, one has $s_\infty = 0$, so that the given subrepresentations are destabilizing. In the other direction, we assume that eq.~\eqref{eqTeA'} holds true. Let $(S\oplus S_\infty,Y)$ be a proper, nontrivial subrepresentation. If $s_{\infty}=0$, then $s_{m}>0$ for some $m\in\{0,\dots,d-1\}$, so that one has the inclusion \begin{equation} \sum_{l=1}^{u}(\tilde{e}_{l}\circ A_{1} \circ\dots\circ A_{m})(S_{m}) \subseteq S_{\infty}\,. \end{equation} Eq. \eqref{eqTeA'} implies that $s_{\infty}>0$, and so we have a contradiction. Hence, $s_{\infty} = 1$. The second statement is then equivalent to eq.~\eqref{eqTeA'}, because one has $e(v)=\sum_{l=1}^{u}\tilde{e}_{l}(v)\varepsilon_{l}$ for all $v\in\mathbb{C}^{v_{0}}$. \end{proof} \begin{proof}[Proof of Proposition \ref{propCotFlag}.] The cases $n=1,2$ are, respectively, Example~3.7 in \cite{Na96} and Theorem~7.3 in \cite{Na94} (see also \cite[Example~3.2.7]{Gin}).
Note that there is a $G_{\vec{v}}$-equivariant morphism $\tilde{q}\colon\operatorname{Rep}(\mathcal{F}_{d,2},\vec{v},u)^{ss}_{\vartheta^+}\longrightarrow\operatorname{Rep}(\mathcal{F}_{d,1},\vec{v},u)^{ss}_{\vartheta^+}$ given by the maps \begin{equation} \left\{ \begin{array}{rcll} (e,f) & \longmapsto & e &\qquad\text{if $d=1$}\\[7pt] (e,f,A_{p},B_{p})_{p=1}^{d-1} & \longmapsto & (e,A_{p})_{p=1}^{d-1} &\qquad\text{if $d>1$} \end{array} \right. \end{equation} where, for simplicity, we have put $f=f_{1}$ and $B_{p}=B_{p1}$. The morphism $\tilde{q}$ descends to a morphism $q\colon \mathcal{M}(\mathcal{F}_{d,2},\vec{v},u)_{\vartheta^+}\longrightarrow \mathcal{M}(\mathcal{F}_{d,1},\vec{v},u)_{\vartheta^+}$. By composing $q$ with the isomorphisms \eqref{eqIsoFlag} for $n=1,2$, one gets the canonical projection $T^\vee\operatorname{Fl}(\vec{v},u)\longrightarrow\operatorname{Fl}(\vec{v},u)$. For $n\geq 3$, it is easy to prove that there is a $G_{\vec{v}}$-equivariant isomorphism \begin{equation} \operatorname{Rep}(\mathcal{F}_{d,n},\vec{v},u)^{ss}_{\vartheta^+}\simeq \underbrace{\mathcal{R}_{2}\times_{\mathcal{R}_{1}}\mathcal{R}_{2}\times_{\mathcal{R}_{1}}\cdots\times_{\mathcal{R}_{1}}\mathcal{R}_{2}}_{(n-1)-\text{times}} \end{equation} where $\mathcal{R}_{2}=\operatorname{Rep}(\mathcal{F}_{d,2},\vec{v},u)^{ss}_{\vartheta^+}$ and $\mathcal{R}_{1}=\operatorname{Rep}(\mathcal{F}_{d,1},\vec{v},u)^{ss}_{\vartheta^+}$.
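Concretely, this isomorphism simply regroups the data: a representation $(e,f_{q},A_{p},B_{pq})$ of $\mathcal{F}_{d,n}$ corresponds to the $(n-1)$-tuple of representations $(e,f_{q},A_{p},B_{pq})$, $q=1,\dots,n-1$, of $\mathcal{F}_{d,2}$, all sharing the same underlying representation $(e,A_{p})$ of $\mathcal{F}_{d,1}$; indeed, the relations \eqref{eqGenJ} decouple with respect to the index $q$. Note also that, by Lemma \ref{lmStabQdn}, $\vartheta^+$\!-semistability only depends on the maps $e$, $A_{p}$, i.e., on the underlying $\mathcal{F}_{d,1}$-representation, which is why the fibred products can be taken over $\mathcal{R}_{1}$.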
This isomorphism descends to the quotient: \begin{equation} \begin{aligned} \mathcal{M}(\mathcal{F}_{d,n},\vec{v},u)_{\vartheta^+} &\simeq \underbrace{\mathcal{M}_{2}\times_{\mathcal{M}_{1}}\mathcal{M}_{2}\times_{\mathcal{M}_{1}}\cdots\times_{\mathcal{M}_{1}}\mathcal{M}_{2}}_{(n-1)-\text{times}}\simeq\\ &\simeq\underbrace{T^\vee\operatorname{Fl}(\vec{v},u)\times_{\operatorname{Fl}(\vec{v},u)}T^\vee\operatorname{Fl}(\vec{v},u)\times_{\operatorname{Fl}(\vec{v},u)}\cdots\times_{\operatorname{Fl}(\vec{v},u)}T^\vee\operatorname{Fl}(\vec{v},u)}_{(n-1)-\text{times}}\simeq\\ &\simeq T^\vee\operatorname{Fl}(\vec{v},u)^{\oplus n-1} \end{aligned} \end{equation} where $\mathcal{M}_{2}=\mathcal{M}(\mathcal{F}_{d,2},\vec{v},u)_{\vartheta^+}$ and $\mathcal{M}_{1}=\mathcal{M}(\mathcal{F}_{d,1},\vec{v},u)_{\vartheta^+}$. \end{proof} As a direct consequence of Proposition \ref{propCotFlag}, we get a quiver description for the moduli space $\mathcal{M}^n(r,a,C_{\mathrm{m}})$. \begin{prop} \label{corMshMq} There is an isomorphism of complex algebraic varieties \begin{equation} \mathcal{M}^n(r,a,C_{\mathrm{m}})\simeq\mathcal{M}(\mathcal{F}_{1,n}, a,r)_{\vartheta^+} \end{equation} for any $n\geq1$. \end{prop} \begin{proof} It suffices to compose the isomorphisms \eqref{eqIsoM} and \eqref{eqIsoFlag}. \end{proof} In particular, the previous result allows one to regard the quasi-affine variety $\operatorname{Rep}(\mathcal{F}_{1,n}, a,r)^{ss}_{\vartheta^+}$ as a space of ADHM data for $\mathcal{M}^n(r,a,C_{\mathrm{m}})$. \begin{rem} The space $\mathcal{M}^2(r,a,C_{\mathrm{m}})$ is isomorphic to the Nakajima quiver variety $\mathcal{N}_{0,1}(\mathcal{A}_{1},a,r)$ (indeed, $\mathcal{Q}_{1,2}=\overline{\mathcal{A}_{1}^{\mathrm{fr}}}$). Since the pair $(0,1)$ is $a$-regular (see Definition \ref{defin-v-reg}), $\mathcal{M}^2(r,a,C_{\mathrm{m}})$ carries a symplectic structure which is induced by the one in eq.~\eqref{eq-tildeomega}.
With the notation introduced in eq.~\eqref{eqABfe}, this symplectic form can be written as \begin{equation} \omega=\operatorname{tr}(\mbox{\rm d} f_1\wedge\mbox{\rm d} e) \,. \label{eq-omega} \end{equation} It is easy to see that $\omega$ coincides, up to isomorphism, with the canonical symplectic structure of $T^\vee\operatorname{Gr}(a,r)$. \end{rem} \begin{rem} As proved by Sala \cite{SAL}, $\mathcal{M}^2(r,a,C_{\mathrm{m}})$ --- like all the spaces $\mathcal{M}^2(r,a,c)$ --- carries also a holomorphic symplectic structure defined in sheaf-theoretic terms. The question that naturally arises is the following: does this symplectic structure coincide with that defined by eq.~\eqref{eq-omega}? For $n\neq 2$, the spaces $\mathcal{M}^n(r,a,C_{\mathrm{m}})$ --- and, more generally, all the spaces $\mathcal{M}^n(r,a,c)$ --- are expected to carry a natural Poisson structure. This is suggested by the results proved by Bottacin \cite{Bot95, Bot, Bot00} and by the case $r=1$, but it is unclear how this structure could be constructed. Work is in progress to address these issues. \end{rem}
\section{Introduction} The notion of Galois extension associated to a Hopf algebra $H$ was introduced in 1981 by Kreimer and Takeuchi in the following way: let $A$ be a right $H$-comodule algebra with coaction $\rho_{A}(a)=a_{(0)}\otimes a_{(1)}$; then the extension $A^{coH}\hookrightarrow A$, where $A^{coH}=\{a\in A\;;\; \rho_{A}(a)=a\otimes 1_{H}\}$ is the subalgebra of coinvariant elements, is $H$-Galois if the canonical morphism $\gamma_{A}:A\otimes_{A^{coH}}A\rightarrow A\otimes H$, defined by $\gamma_{A}(a\otimes b)=ab_{(0)}\otimes b_{(1)}$, is an isomorphism. This definition has its origin in the approach to the Galois theory of groups acting on commutative rings developed by Chase, Harrison and Rosenberg, and in the extension of this theory to coactions of a Hopf algebra $H$ on a commutative $k$-algebra $A$ over a commutative ring $k$, developed in 1969 by Chase and Sweedler \cite{CSW}. An interesting class of $H$-Galois extensions is provided by those for which there exists a convolution invertible right $H$-comodule morphism $h:H\rightarrow A$, called the cleaving morphism. These extensions were called cleft and it is well known that, using the notion of normal basis introduced by Kreimer and Takeuchi in \cite{KT}, Doi and Takeuchi proved in \cite{doi3} that $A^{coH}\hookrightarrow A$ is a cleft extension if and only if it is $H$-Galois with normal basis, i.e., the extension $A^{coH}\hookrightarrow A$ is $H$-Galois and $A$ is isomorphic to the tensor product of $A^{coH}$ with $H$ as left $A^{coH}$-modules and right $H$-comodules.
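For later reference we recall the most basic example, sketched here in Sweedler notation under the usual Hopf algebra assumptions (it is not needed in the sequel): take $A=H$ itself, with coaction $\rho_{H}=\delta_{H}$. Then $H^{coH}=k$ and the extension $k\hookrightarrow H$ is $H$-Galois, since the canonical morphism admits an explicit inverse, $$\gamma_{H}(a\otimes h)=ah_{(1)}\otimes h_{(2)},\qquad \gamma_{H}^{-1}(a\otimes h)=a\lambda_{H}(h_{(1)})\otimes h_{(2)},$$ where $\lambda_{H}$ denotes the antipode of $H$. Indeed, $\gamma_{H}(\gamma_{H}^{-1}(a\otimes h))=a\lambda_{H}(h_{(1)})h_{(2)}\otimes h_{(3)}=a\otimes h$ by the antipode axiom, and the other composite is computed in the same way. This extension is moreover cleft, with cleaving morphism $h=id_{H}$, whose convolution inverse is $\lambda_{H}$.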
The result obtained by Doi and Takeuchi was generalized in \cite{ManelEmilio} to $H$-Galois extensions for Hopf algebras living in a symmetric monoidal closed category ${\mathcal C}$, and in \cite{BRZ} Brzezi\'{n}ski proved that, if $A$ is an algebra, $C$ is a coalgebra and $(A,C,\psi)$ is an entwining structure such that $A$ is an entwined module, the existence of a convolution invertible $C$-comodule morphism $h:C\rightarrow A$ is equivalent to $A$ being a Galois extension by the coalgebra $C$ (see \cite{BRZH} for the definition) and $A$ being isomorphic, as left $A^{coC}$-modules and right $C$-comodules, to the tensor product of the coinvariant subalgebra $A^{coC}$ with $C$. A more general result was proved in \cite{AFG2}, in a monoidal setting, for weak Galois extensions associated to the weak entwining structures introduced by Caenepeel and De Groot in \cite{caengroot}. In \cite{AFG2} the notion of weak cleft extension was defined, and Theorem 2.11 of \cite{AFG2} states that, for a weak entwining structure $(A,C,\psi)$ such that $A$ is an entwined module and the functor $A\otimes -$ preserves coequalizers, $A$ is a weak $C$-cleft extension of the coinvariants subalgebra if and only if it is a weak $C$-Galois extension and the normal basis property, defined in \cite{AFG2}, holds. Since Galois extensions associated to weak Hopf algebras (see \cite{bohm}) are examples of weak Galois extensions, the characterization of weak cleft extensions in terms of weak Galois extensions satisfying the normal basis condition can be applied to them. Moreover, this kind of result can be obtained for cleft extensions associated to lax entwining structures \cite{AFGS-1}, and for cleft extensions associated to co-extended weak entwining structures \cite{AFG-coexten}. The results cited in the previous paragraphs were proved in an associative setting because all the extensions are linked to Hopf algebras, to weak Hopf algebras, or to algebraic structures related to them, i.e.
entwining structures and weak entwining structures. The main motivation of this paper is to show that it is possible to obtain similar results in a non-associative context, that is, when we study extensions related to non-associative algebra structures like Hopf quasigroups or, more generally, weak Hopf quasigroups. Hopf quasigroups are a generalization of Hopf algebras in the context of non-associative algebra, where the lack of associativity is compensated by some axioms involving the antipode. The notion of Hopf quasigroup was introduced by Klim and Majid in \cite{Majidesfera}, in order to understand the structure and relevant properties of the algebraic $7$-sphere, and is a particular instance of unital coassociative $H$-bialgebra in the sense of P\'erez Izquierdo \cite{PI2}. It includes as an example the enveloping algebra of a Malcev algebra (see \cite{Majidesfera} and \cite{PIS}) when the base ring has characteristic different from $2$ and $3$. In this sense Hopf quasigroups extend the notion of Hopf algebra in the same way that Malcev algebras extend that of Lie algebra. On the other hand, they also contain as an example the quasigroup algebra of an I.P. loop. Therefore, Hopf quasigroups unify I.P. loops and Malcev algebras in the same way that Hopf algebras unify groups and Lie algebras. In turn, weak Hopf quasigroups are a new Hopf algebra generalization (see \cite{AFG-Weak-quasi}) that encompasses weak Hopf algebras and Hopf quasigroups. As was proved in \cite{AFG-Weak-quasi}, the main family of non-trivial examples of these algebraic structures can be obtained by working with bigroupoids, i.e., bicategories where every $1$-cell is an equivalence and every $2$-cell is an isomorphism. The first result linking Hopf Galois extensions with normal basis and cleft extensions in the Hopf quasigroup setting can be found in \cite{AFG-3}.
More specifically, in \cite{AFGS-2} we introduced the notion of cleft extension (cleft right $H$-comodule algebra) for a Hopf quasigroup $H$ in a strict monoidal category ${\mathcal C}$ with tensor product $\otimes$ and unit object $K$. The notion of Galois extension with normal basis for $H$ was introduced in \cite{AFG-3}, where we proved that, when the object of coinvariants is the unit object of the category, cleft extensions coincide with Galois extensions with normal basis for which the inverse of the canonical morphism is almost lineal. Therefore, in \cite{AFG-3}, we extended the result proved by Doi and Takeuchi in \cite{doi3} to the Hopf quasigroup setting, characterizing Galois extensions with normal basis in terms of cleft extensions when the object of coinvariants is $K$. The aim of this new paper is to show that all these results, that is, the one obtained for Hopf algebras in \cite{doi3}, the one obtained for weak Hopf algebras in \cite{AFG2}, and the one proved for Hopf quasigroups in \cite{AFG-3}, are particular instances of a more general result that we can prove for weak Hopf quasigroups. An outline of the paper is as follows. In Section 1 we set the general framework and review the basic properties of weak Hopf quasigroups, in a strict symmetric monoidal category with equalizers and coequalizers, focusing on the following fact: if $H$ is a weak Hopf quasigroup and $\Pi_{H}^{L}$ is the target morphism (this morphism is defined as in the weak Hopf algebra setting), the image of $\Pi_{H}^{L}$, denoted by $H_{L}$, is a monoid, that is, the restriction of the product of $H$ to $H_{L}$ is associative. In Section 2, we introduce the notions of right $H$-comodule magma, weak $H$-Galois extension, and weak $H$-Galois extension with normal basis, proving some technical results that we need in the following sections. Section 3 is devoted to the study of weak $H$-cleft extensions for weak Hopf quasigroups.
In particular we show that this kind of extension contains as examples the notion of weak $H$-cleft extension associated to a weak Hopf algebra \cite{nmra1}, as well as the notion of cleft right $H$-comodule algebra introduced in \cite{AFGS-2} for Hopf quasigroups. The last section contains the main result of this paper, which asserts that, for any right $H$-comodule magma $(A,\rho_{A})$ such that $A\otimes -$ preserves coequalizers, under suitable conditions (see Theorem \ref{caracterizacion}), the following assertions are equivalent: \begin{itemize} \item $A^{co H}\hookrightarrow A$ is a weak $H$-Galois extension with normal basis and the morphism $\gamma_{A}^{-1}$ is almost lineal. \item $A^{co H}\hookrightarrow A$ is a weak $H$-cleft extension. \end{itemize} In the associative setting the conditions assumed in Theorem \ref{caracterizacion} hold trivially, and then the theorem generalizes the result proved by Doi and Takeuchi for Hopf algebras in \cite{doi3}. Also, for a weak Hopf algebra $H$, we obtain an equivalence that is a particular instance of the one obtained in \cite{AFG2} for Galois extensions associated to weak entwining structures. Finally, as a corollary of Theorem \ref{caracterizacion}, we obtain a result for Hopf quasigroups which shows the close connection between the notion of cleft right $H$-comodule algebra and that of $H$-Galois extension with normal basis introduced in this paper, improving the equivalence obtained in \cite{AFG-3} because we remove the condition $A^{co H}=K$. \section{Weak Hopf quasigroups} Throughout this paper $\mathcal C$ denotes a strict symmetric monoidal category with tensor product $\otimes$, unit object $K$ and natural isomorphism of symmetry $c$.
For each object $M$ in ${\mathcal C}$, we denote the identity morphism by $id_{M}:M\rightarrow M$ and, for simplicity of notation, given objects $M$, $N$ and $P$ in ${\mathcal C}$ and a morphism $f:M\rightarrow N$, we write $P\otimes f$ for $id_{P}\otimes f$ and $f \otimes P$ for $f\otimes id_{P}$. We want to point out that there is no loss of generality in assuming that ${\mathcal C}$ is strict because, by Theorem 3.5 of \cite{Christian} (which implies Mac Lane's coherence theorem), every monoidal category is monoidally equivalent to a strict one. This allows us to treat monoidal categories as if they were strict and, as a consequence, the results proved in this paper hold for every non-strict symmetric monoidal category. From now on we also assume that ${\mathcal C}$ admits equalizers and coequalizers. Then every idempotent morphism splits, i.e., for every morphism $\nabla_{Y}:Y\rightarrow Y$ such that $\nabla_{Y}=\nabla_{Y}\circ\nabla_{Y}$, there exist an object $Z$ and morphisms $i_{Y}:Z\rightarrow Y$ and $p_{Y}:Y\rightarrow Z$ such that $\nabla_{Y}=i_{Y}\circ p_{Y}$ and $p_{Y}\circ i_{Y} =id_{Z}$. \begin{definition} {\rm By a unital magma in ${\mathcal C}$ we understand a triple $A=(A, \eta_{A}, \mu_{A})$ where $A$ is an object in ${\mathcal C}$ and $\eta_{A}:K\rightarrow A$ (unit), $\mu_{A}:A\otimes A \rightarrow A$ (product) are morphisms in ${\mathcal C}$ such that $\mu_{A}\circ (A\otimes \eta_{A})=id_{A}=\mu_{A}\circ (\eta_{A}\otimes A)$. If $\mu_{A}$ is associative, that is, $\mu_{A}\circ (A\otimes \mu_{A})=\mu_{A}\circ (\mu_{A}\otimes A)$, the unital magma will be called a monoid in ${\mathcal C}$. Given two unital magmas (monoids) $A= (A, \eta_{A}, \mu_{A})$ and $B=(B, \eta_{B}, \mu_{B})$, $f:A\rightarrow B$ is a morphism of unital magmas (monoids) if $\mu_{B}\circ (f\otimes f)=f\circ \mu_{A}$ and $f\circ \eta_{A}= \eta_{B}$.
By duality, a counital comagma in ${\mathcal C}$ is a triple ${D} = (D, \varepsilon_{D}, \delta_{D})$ where $D$ is an object in ${\mathcal C}$ and $\varepsilon_{D}: D\rightarrow K$ (counit), $\delta_{D}:D\rightarrow D\otimes D$ (coproduct) are morphisms in ${\mathcal C}$ such that $(\varepsilon_{D}\otimes D)\circ \delta_{D}= id_{D}=(D\otimes \varepsilon_{D})\circ \delta_{D}$. If $\delta_{D}$ is coassociative, that is, $(\delta_{D}\otimes D)\circ \delta_{D}= (D\otimes \delta_{D})\circ \delta_{D}$, the counital comagma will be called a comonoid. If ${D} = (D, \varepsilon_{D}, \delta_{D})$ and ${E} = (E, \varepsilon_{E}, \delta_{E})$ are counital comagmas (comonoids), $f:D\rightarrow E$ is a morphism of counital comagmas (comonoids) if $(f\otimes f)\circ \delta_{D} =\delta_{E}\circ f$ and $\varepsilon_{E}\circ f =\varepsilon_{D}.$ If $A$, $B$ are unital magmas (monoids) in ${\mathcal C}$, the object $A\otimes B$ is a unital magma (monoid) in ${\mathcal C}$ where $\eta_{A\otimes B}=\eta_{A}\otimes \eta_{B}$ and $\mu_{A\otimes B}=(\mu_{A}\otimes \mu_{B})\circ (A\otimes c_{B,A}\otimes B).$ In a dual way, if $D$, $E$ are counital comagmas (comonoids) in ${\mathcal C}$, $D\otimes E$ is a counital comagma (comonoid) in ${\mathcal C}$ where $\varepsilon_{D\otimes E}=\varepsilon_{D}\otimes \varepsilon_{E}$ and $\delta_{D\otimes E}=(D\otimes c_{D,E}\otimes E)\circ( \delta_{D}\otimes \delta_{E}).$ Finally, if $D$ is a comagma and $A$ a magma, given two morphisms $f,g:D\rightarrow A$ we will denote by $f\ast g$ their convolution product in ${\mathcal C}$, that is $$f\ast g=\mu_{A}\circ (f\otimes g)\circ \delta_{D}.$$ } \end{definition} The notion of weak Hopf quasigroup in a braided monoidal category was introduced in \cite{AFG-Weak-quasi}. Now we recall this definition in our symmetric setting.
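(For orientation, and as a remark that is not used later: in the particular case ${\mathcal C}=\mathrm{Vect}_{{\Bbb F}}$, writing $\delta_{D}(d)=d_{(1)}\otimes d_{(2)}$ in Sweedler notation, the convolution product just defined takes the familiar form $$(f\ast g)(d)=f(d_{(1)})g(d_{(2)}),$$ and, when $A$ is a unital magma and $D$ a counital comagma, the morphism $\eta_{A}\circ \varepsilon_{D}$ is a unit for $\ast$, since $(f\ast (\eta_{A}\circ\varepsilon_{D}))(d)=f(d_{(1)})\varepsilon_{D}(d_{(2)})=f(d)$ and symmetrically on the other side.)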
\begin{definition} \label{Weak-Hopf-quasigroup} {\rm A weak Hopf quasigroup $H$ in ${\mathcal C}$ is a unital magma $(H, \eta_H, \mu_H)$ and a comonoid $(H,\varepsilon_H, \delta_H)$ such that the following axioms hold: \begin{itemize} \item[(a1)] $\delta_{H}\circ \mu_{H}=(\mu_{H}\otimes \mu_{H})\circ \delta_{H\otimes H}.$ \item[(a2)] $\varepsilon_{H}\circ \mu_{H}\circ (\mu_{H}\otimes H)=\varepsilon_{H}\circ \mu_{H}\circ (H\otimes \mu_{H})$ \item[ ]$= ((\varepsilon_{H}\circ \mu_{H})\otimes (\varepsilon_{H}\circ \mu_{H}))\circ (H\otimes \delta_{H}\otimes H)$ \item[ ]$=((\varepsilon_{H}\circ \mu_{H})\otimes (\varepsilon_{H}\circ \mu_{H}))\circ (H\otimes (c_{H,H}\circ\delta_{H})\otimes H).$ \item[(a3)]$(\delta_{H}\otimes H)\circ \delta_{H}\circ \eta_{H}=(H\otimes \mu_{H}\otimes H)\circ ((\delta_{H}\circ \eta_{H}) \otimes (\delta_{H}\circ \eta_{H}))$ \item[ ]$=(H\otimes (\mu_{H}\circ c_{H,H})\otimes H)\circ ((\delta_{H}\circ \eta_{H}) \otimes (\delta_{H}\circ \eta_{H})).$ \item[(a4)] There exists $\lambda_{H}:H\rightarrow H$ in ${\mathcal C}$ (called the antipode of $H$) such that, if we denote the morphisms $id_{H}\ast \lambda_{H}$ by $\Pi_{H}^{L}$ (target morphism) and $\lambda_{H}\ast id_{H}$ by $\Pi_{H}^{R}$ (source morphism), \begin{itemize} \item[(a4-1)] $\Pi_{H}^{L}=((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes c_{H,H})\circ ((\delta_{H}\circ \eta_{H})\otimes H).$ \item[(a4-2)] $\Pi_{H}^{R}=(H\otimes(\varepsilon_{H}\circ \mu_{H}))\circ (c_{H,H}\otimes H)\circ (H\otimes (\delta_{H}\circ \eta_{H})).$ \item[(a4-3)]$\lambda_{H}\ast \Pi_{H}^{L}=\Pi_{H}^{R}\ast \lambda_{H}= \lambda_{H}.$ \item[(a4-4)] $\mu_H\circ (\lambda_H\otimes \mu_H)\circ (\delta_H\otimes H)=\mu_{H}\circ (\Pi_{H}^{R}\otimes H).$ \item[(a4-5)] $\mu_H\circ (H\otimes \mu_H)\circ (H\otimes \lambda_H\otimes H)\circ (\delta_H\otimes H)=\mu_{H}\circ (\Pi_{H}^{L}\otimes H).$ \item[(a4-6)] $\mu_H\circ(\mu_H\otimes \lambda_H)\circ (H\otimes \delta_H)=\mu_{H}\circ (H\otimes \Pi_{H}^{L}).$ 
\item[(a4-7)] $\mu_H\circ (\mu_H\otimes H)\circ (H\otimes \lambda_H\otimes H)\circ (H\otimes \delta_H)=\mu_{H}\circ (H\otimes \Pi_{H}^{R}).$ \end{itemize} \end{itemize} Note that, if in the previous definition the triple $(H, \eta_H, \mu_H)$ is a monoid, we obtain the notion of weak Hopf algebra in a symmetric monoidal category. Then, if ${\mathcal C}$ is the category of vector spaces over a field ${\Bbb F}$, we have the monoidal version of the original definition of weak Hopf algebra introduced by B\"{o}hm, Nill and Szlach\'anyi in \cite{bohm}. On the other hand, under these conditions, if $\varepsilon_H$ and $\delta_H$ are morphisms of unital magmas (equivalently, $\eta_{H}$, $\mu_{H}$ are morphisms of counital comagmas), $\Pi_{H}^{L}=\Pi_{H}^{R}=\eta_{H}\otimes \varepsilon_{H}$. As a consequence, conditions (a2), (a3), (a4-1)-(a4-3) trivialize, and we get the notion of Hopf quasigroup defined by Klim and Majid in \cite{Majidesfera} in the category of vector spaces over a field ${\Bbb F}$. } \end{definition} \begin{example} \label{main-example} {\rm It is possible to obtain non-trivial examples of weak Hopf quasigroups by working with bicategories in the sense of B\'enabou \cite{BEN}. We give a brief summary of this construction. The interested reader can see the complete details in \cite{AFG-Weak-quasi}. A bicategory ${\mathcal B}$ consists of: \begin{itemize} \item A set ${\mathcal B}_{0}$, whose elements $x$ are called $0$-cells. \item For each $x$, $y\in {\mathcal B}_{0}$, a category ${\mathcal B}(x,y)$ whose objects $f:x\rightarrow y$ are called $1$-cells and whose morphisms $\alpha:f \Rightarrow g$ are called $2$-cells. The composition of $2$-cells is called the vertical composition of $2$-cells and if $f$ is a $1$-cell in ${\mathcal B}(x,y)$, $x$ is called the source of $f$, represented by $s(f)$, and $y$ is called the target of $f$, denoted by $t(f)$. 
\item For each $x\in {\mathcal B}_{0}$, an object $1_{x}\in {\mathcal B}(x,x)$, called the identity of $x$; and for each $x,y,z\in {\mathcal B}_{0}$, a functor $${\mathcal B}(y,z)\times {\mathcal B}(x,y)\rightarrow {\mathcal B}(x,z)$$ which on objects is the $1$-cell composition $(g,f)\mapsto g\circ f$, and on morphisms is the horizontal composition of $2$-cells: $$f,f^{\prime}\in {\mathcal B}(x,y), \;\; g,g^{\prime}\in {\mathcal B}(y,z), \; \alpha:f \Rightarrow f^{\prime}, \; \beta:g \Rightarrow g^{\prime}$$ $$(\beta, \alpha)\mapsto \beta\bullet \alpha:g\circ f \Rightarrow g^{\prime}\circ f^{\prime}$$ \item For each $f\in {\mathcal B}(x,y)$, $g\in {\mathcal B}(y,z)$ and $h\in {\mathcal B}(z,w)$, an associativity isomorphism $\xi_{h,g,f}: (h\circ g)\circ f\Rightarrow h\circ (g\circ f)$; and for each $1$-cell $f$, unit isomorphisms $l_{f}:1_{t(f)}\circ f\Rightarrow f$, $r_{f}:f\circ 1_{s(f)}\Rightarrow f$, satisfying the following coherence axioms: \begin{itemize} \item The morphism $\xi_{h,g,f}$ is natural in $h$, $g$ and $f$, and $l_{f}$, $r_{f}$ are natural in $f$. \item Pentagon axiom: $ \xi_{k,h,g\circ f}\circ \xi_{k\circ h,g, f}=(id_{k}\bullet \xi_{ h,g, f})\circ \xi_{k, h\circ g, f}\circ (\xi_{k,h,g}\bullet id_{f}).$ \item Triangle axiom: $r_{g}\bullet id_{f}=(id_{g}\bullet l_{f})\circ \xi_{g,1_{t(f)},f}.$ \end{itemize} \end{itemize} A bicategory is normal if the unit isomorphisms are identities. Every bicategory is biequivalent to a normal one. A $1$-cell $f$ is called an equivalence if there exist a $1$-cell $g:t(f)\rightarrow s(f)$ and two isomorphisms $g\circ f\Rightarrow 1_{s(f)}$, $f\circ g\Rightarrow 1_{t(f)}$. In this case we will say that $g\in Inv(f)$ and, equivalently, $f\in Inv(g)$. A bigroupoid is a bicategory where every $1$-cell is an equivalence and every $2$-cell is an isomorphism. We will say that a bigroupoid ${\mathcal B}$ is finite if ${\mathcal B}_{0}$ is finite and ${\mathcal B}(x,y)$ is small for all $x,y$.
Note that if ${\mathcal B}$ is a bigroupoid where ${\mathcal B}(x,y)$ is small for all $x,y$, and we pick a finite number of $0$-cells, then the full sub-bicategory generated by these $0$-cells provides an example of a finite bigroupoid. Let ${\mathcal B}$ be a finite normal bigroupoid and denote by ${\mathcal B}_{1}$ the set of $1$-cells. Let ${\Bbb F}$ be a field and ${\Bbb F}{\mathcal B}$ the direct sum $${\Bbb F}{\mathcal B}=\bigoplus_{f\in {\mathcal B}_{1}}{\Bbb F}f.$$ The vector space ${\Bbb F}{\mathcal B}$ is a unital nonassociative algebra where the product of two $1$-cells is equal to their $1$-cell composition if the latter is defined and $0$ otherwise, i.e., $g.f=g\circ f$ if $s(g)=t(f)$ and $g.f=0$ if $s(g)\neq t(f)$. The unit element is $$1_{{\Bbb F}{\mathcal B}}=\sum_{x\in {\mathcal B}_{0}}1_{x}.$$ Let $H={\Bbb F}{\mathcal B}/I({\mathcal B})$ be the quotient algebra, where $I({\mathcal B})$ is the ideal of ${\Bbb F}{\mathcal B}$ generated by $$ h-g\circ (f\circ h),\; p-(p\circ f)\circ g,$$ with $f\in {\mathcal B}_{1},$ $g\in Inv(f)$, and $h,p \in {\mathcal B}_{1}$ such that $t(h)=s(f)$, $t(f)=s(p)$. In what follows, for any $1$-cell $f$ we denote its class in $H$ by $[f]$. If we assume that $I({\mathcal B})$ is a proper ideal and define $[f]^{-1}$ as the class of any $g\in Inv(f)$, then $[f]^{-1}$ is well defined. Therefore the vector space $H$ with the product $\mu_{H}([g]\otimes [f])=[g.f]$ and the unit $$\eta_{ H}(1_{{\Bbb F}})=[1_{{\Bbb F}{\mathcal B}}]=\sum_{x\in {\mathcal B}_{0}}[1_{x}]$$ is a unital magma. Also, it is easy to show that $H$ is a comonoid with coproduct $\delta_{H}([f])=[f]\otimes [f]$ and counit $\varepsilon_{H}([f])=1_{{\Bbb F}}$. Moreover, the antipode is defined by $\lambda_{H}:H\rightarrow H$, $\lambda_{H}([f])=[f]^{-1}$, and $H=(H,\eta_{H}, \mu_{H}, \varepsilon_{H}, \delta_{H}, \lambda_{H})$ is a weak Hopf quasigroup.
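As a quick check in this example (a direct computation, not needed in what follows), the target morphism $\Pi_{H}^{L}=id_{H}\ast \lambda_{H}$ can be evaluated explicitly: for a $1$-cell $f$ and $g\in Inv(f)$, $$\Pi_{H}^{L}([f])=\mu_{H}\circ (H\otimes \lambda_{H})\circ \delta_{H}([f])=[f].[f]^{-1}=[f\circ g]=[1_{t(f)}],$$ where the last equality holds because, by normality, $1_{t(f)}-(1_{t(f)}\circ f)\circ g=1_{t(f)}-f\circ g$ belongs to $I({\mathcal B})$. In particular, $\Pi_{H}^{L}$ projects $H$ onto the linear span of the classes $[1_{x}]$, $x\in {\mathcal B}_{0}$.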
Note that, in this example, if ${\mathcal B}_{0}=\{x\}$ we obtain that $H$ is a Hopf quasigroup. Moreover, if $\vert {\mathcal B}_{0}\vert >1$ and the product defined in $H$ is associative, we have an example of a weak Hopf algebra. } \end{example} At the end of this section we recall some properties of weak Hopf quasigroups that we will need in the sequel. The interested reader can find the proofs in \cite{AFG-Weak-quasi}. First note that, by Propositions 3.1 and 3.2 of \cite{AFG-Weak-quasi}, the equalities \begin{equation} \label{pi-l} \Pi_{H}^{L}\ast id_{H}=id_{H}\ast \Pi_{H}^{R}=id_{H}, \end{equation} \begin{equation} \label{pi-eta} \Pi_{H}^{L}\circ\eta_{H}=\eta_{H}=\Pi_{H}^{R}\circ\eta_{H}, \end{equation} \begin{equation} \label{pi-varep} \varepsilon_{H}\circ \Pi_{H}^{L}=\varepsilon_{H}=\varepsilon_{H}\circ \Pi_{H}^{R} \end{equation} hold, the antipode of a weak Hopf quasigroup $H$ is unique, and $\lambda_{H}\circ \eta_{H}=\eta_{H}$, $\varepsilon_{H}\circ\lambda_{H}=\varepsilon_{H}$. Moreover, if we define the morphisms $\overline{\Pi}_{H}^{L}$ and $\overline{\Pi}_{H}^{R}$ by $$\overline{\Pi}_{H}^{L}=(H\otimes (\varepsilon_{H}\circ \mu_{H}))\circ ((\delta_{H}\circ \eta_{H})\otimes H)$$ and $$\overline{\Pi}_{H}^{R}=((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes (\delta_{H}\circ \eta_{H})),$$ in Proposition 3.4 of \cite{AFG-Weak-quasi} we proved that $\Pi_{H}^{L}$, $\Pi_{H}^{R}$, $\overline{\Pi}_{H}^{L}$ and $\overline{\Pi}_{H}^{R}$ are idempotent.
On the other hand, Propositions 3.5, 3.7 and 3.9 of \cite{AFG-Weak-quasi} assert that \begin{equation} \label{mu-pi-l} \mu_{H}\circ (H\otimes \Pi_{H}^{L})=((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes H), \end{equation} \begin{equation} \label{mu-pi-r} \mu_{H}\circ (\Pi_{H}^{R}\otimes H)=(H\otimes(\varepsilon_{H}\circ \mu_{H}))\circ (c_{H,H}\otimes H)\circ (H\otimes \delta_{H}), \end{equation} \begin{equation} \label{mu-pi-l-var} \mu_{H}\circ (H\otimes \overline{\Pi}_{H}^{L})=(H\otimes (\varepsilon_{H}\circ \mu_{H}))\circ (\delta_{H}\otimes H), \end{equation} \begin{equation} \label{mu-pi-r-var} \mu_{H}\circ (\overline{\Pi}_{H}^{R}\otimes H)=((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes \delta_{H}), \end{equation} \begin{equation} \label{delta-pi-l} (H\otimes \Pi_{H}^{L})\circ \delta_{H}=(\mu_{H}\otimes H)\circ (H\otimes c_{H,H})\circ ((\delta_{H}\circ \eta_{H})\otimes H), \end{equation} \begin{equation} \label{delta-pi-r} (\Pi_{H}^{R}\otimes H)\circ \delta_{H}=(H\otimes \mu_{H})\circ (c_{H,H}\otimes H)\circ (H\otimes (\delta_{H}\circ \eta_{H})), \end{equation} \begin{equation} \label{delta-pi-l-var} (\overline{\Pi}_{H}^{L}\otimes H)\circ \delta_{H}=(H\otimes \mu_{H})\circ ((\delta_{H}\circ \eta_{H})\otimes H), \end{equation} \begin{equation} \label{delta-pi-r-var} (H\otimes \overline{\Pi}_{H}^{R})\circ \delta_{H}=( \mu_{H}\otimes H)\circ (H\otimes (\delta_{H}\circ \eta_{H})), \end{equation} \begin{equation} \label{pi-delta-mu-pi-1} \Pi^{L}_{H}\circ \mu_{H}\circ (H\otimes \Pi^{L}_{H})=\Pi^{L}_{H}\circ \mu_{H}, \end{equation} \begin{equation} \label{pi-delta-mu-pi-2} \Pi^{R}_{H}\circ \mu_{H}\circ (\Pi^{R}_{H}\otimes H)=\Pi^{R}_{H}\circ \mu_{H}, \end{equation} \begin{equation} \label{pi-delta-mu-pi-3} (H\otimes \Pi^{L}_{H})\circ \delta_{H}\circ \Pi^{L}_{H}=\delta_{H}\circ \Pi^{L}_{H}, \end{equation} \begin{equation} \label{pi-delta-mu-pi-4} ( \Pi^{R}_{H}\otimes H)\circ \delta_{H}\circ \Pi^{R}_{H}=\delta_{H}\circ 
\Pi^{R}_{H}, \end{equation} hold. Also, it is possible to show the following identities involving the idempotent morphisms $\Pi_{H}^{L}$, $\Pi_{H}^{R}$, $\overline{\Pi}_{H}^{L}$, $\overline{\Pi}_{H}^{R}$ and the antipode $\lambda_{H}$ (see Propositions 3.11 and 3.12 of \cite{AFG-Weak-quasi}): \begin{equation} \label{pi-composition-1} \Pi_{H}^{L}\circ \overline{\Pi}_{H}^{L}=\Pi_{H}^{L},\;\;\;\Pi_{H}^{L}\circ \overline{\Pi}_{H}^{R}=\overline{\Pi}_{H}^{R}, \end{equation} \begin{equation} \label{pi-composition-2} \overline{\Pi}_{H}^{L}\circ \Pi_{H}^{L}=\overline{\Pi}_{H}^{L},\;\;\;\overline{\Pi}_{H}^{R}\circ \Pi_{H}^{L}=\Pi_{H}^{L}, \end{equation} \begin{equation} \label{pi-composition-3} \Pi_{H}^{R}\circ \overline{\Pi}_{H}^{L}=\overline{\Pi}_{H}^{L},\;\;\; \Pi_{H}^{R}\circ \overline{\Pi}_{H}^{R}=\Pi_{H}^{R}, \end{equation} \begin{equation} \label{pi-composition-4} \overline{\Pi}_{H}^{L}\circ \Pi_{H}^{R}=\Pi_{H}^{R},\;\;\; \overline{\Pi}_{H}^{R}\circ \Pi_{H}^{R}=\overline{\Pi}_{H}^{R}, \end{equation} \begin{equation} \label{pi-antipode-composition-1} \Pi_{H}^{L}\circ \lambda_{H}=\Pi_{H}^{L}\circ \Pi_{H}^{R}= \lambda_{H}\circ \Pi_{H}^{R}, \end{equation} \begin{equation} \label{pi-antipode-composition-2} \Pi_{H}^{R}\circ \lambda_{H}=\Pi_{H}^{R}\circ \Pi_{H}^{L}= \lambda_{H}\circ \Pi_{H}^{L}, \end{equation} \begin{equation} \label{pi-antipode-composition-3} \Pi_{H}^{L}=\overline{\Pi}_{H}^{R}\circ \lambda_{H}=\lambda_{H} \circ\overline{\Pi}_{H}^{L}, \end{equation} \begin{equation} \label{pi-antipode-composition-4} \Pi_{H}^{R}= \overline{\Pi}_{H}^{L}\circ \lambda_{H}=\lambda_{H} \circ \overline{\Pi}_{H}^{R}. 
\end{equation} Moreover, by Proposition 3.16 of \cite{AFG-Weak-quasi}, the equalities \begin{equation} \label{mu-assoc-1} \mu_{H}\circ (\mu_{H}\otimes H)\circ (H\otimes ((\Pi_{H}^{L}\otimes H)\circ \delta_{H}))=\mu_{H}= \mu_{H}\circ (\mu_{H}\otimes \Pi_{H}^{R})\circ (H\otimes \delta_{H}), \end{equation} \begin{equation} \label{mu-assoc-2} \mu_{H}\circ (\Pi_{H}^{L}\otimes \mu_{H})\circ (\delta_{H}\otimes H)=\mu_{H}= \mu_{H}\circ (H\otimes (\mu_{H}\circ ( \Pi_{H}^{R}\otimes H)))\circ (\delta_{H}\otimes H), \end{equation} \begin{equation} \label{mu-assoc-3} \mu_{H}\circ (\lambda_{H}\otimes (\mu_{H}\circ ( \Pi_{H}^{L}\otimes H)))\circ (\delta_{H}\otimes H)=\mu_{H}\circ (\lambda_{H}\otimes H)\end{equation} $$= \mu_{H}\circ (\Pi_{H}^{R}\otimes (\mu_{H}\circ ( \lambda_{H}\otimes H)))\circ (\delta_{H}\otimes H), $$ \begin{equation} \label{mu-assoc-4} \mu_{H}\circ (\mu_{H}\otimes H)\circ (H\otimes ((\lambda_{H}\otimes\Pi_{H}^{L})\circ \delta_{H}))=\mu_{H}\circ (H\otimes \lambda_{H})\end{equation} $$= \mu_{H}\circ (\mu_{H}\otimes H)\circ (H\otimes ((\Pi_{H}^{R}\otimes \lambda_{H})\circ \delta_{H})),$$ hold and we have that \begin{equation} \label{2-mu-delta-pi-l} (\mu_{H}\otimes (\mu_{H}\circ (H\otimes \Pi_{H}^{L})))\circ \delta_{H\otimes H}=(\mu_{H}\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes H), \end{equation} \begin{equation} \label{2-mu-delta-pi-r} ((\mu_{H}\circ (\Pi_{H}^{R}\otimes H))\otimes \mu_{H})\circ \delta_{H\otimes H}=(H\otimes \mu_{H})\circ (c_{H,H}\otimes H)\circ (H\otimes \delta_{H}). \end{equation} Therefore (see Theorem 3.19 of \cite{AFG-Weak-quasi}), for any weak Hopf quasigroup $H$ the antipode of $H$ is antimultiplicative and anticomultiplicative, i.e., \begin{equation} \label{anti-antipode-1} \lambda_{H}\circ \mu_{H}=\mu_{H}\circ c_{H,H}\circ (\lambda_{H}\otimes \lambda_{H}), \end{equation} \begin{equation} \label{anti-antipode-2} \delta_{H}\circ \lambda_{H}=(\lambda_{H}\otimes \lambda_{H})\circ c_{H,H}\circ \delta_{H}. 
\end{equation} Finally, if $H_{L}=Im(\Pi_{H}^{L})$ and $p_{L}:H\rightarrow H_{L}$ and $i_{L}:H_{L}\rightarrow H$ are the morphisms such that $\Pi_{H}^{L}=i_{L}\circ p_{L}$ and $p_{L}\circ i_{L}=id_{H_{L}}$, $$ \setlength{\unitlength}{3mm} \begin{picture}(30,4) \put(3,2){\vector(1,0){4}} \put(11,2.5){\vector(1,0){10}} \put(11,1.5){\vector(1,0){10}} \put(1,2){\makebox(0,0){$H_{L}$}} \put(9,2){\makebox(0,0){$H$}} \put(24,2){\makebox(0,0){$H\otimes H$}} \put(5.5,3){\makebox(0,0){$i_{L}$}} \put(16,3.5){\makebox(0,0){$ \delta_{H}$}} \put(16,0.5){\makebox(0,0){$(H\otimes \Pi_{H}^{L}) \circ \delta_{H}$}} \end{picture} $$ is an equalizer diagram and $$ \setlength{\unitlength}{1mm} \begin{picture}(101.00,10.00) \put(20.00,8.00){\vector(1,0){25.00}} \put(20.00,4.00){\vector(1,0){25.00}} \put(55.00,6.00){\vector(1,0){21.00}} \put(32.00,11.00){\makebox(0,0)[cc]{$\mu_{H}$ }} \put(33.00,0.00){\makebox(0,0)[cc]{$\mu_{H}\circ (H\otimes \Pi_{H}^{L}) $ }} \put(65.00,9.00){\makebox(0,0)[cc]{$p_{L}$ }} \put(13.00,6.00){\makebox(0,0)[cc]{$ H\otimes H$ }} \put(50.00,6.00){\makebox(0,0)[cc]{$ H$ }} \put(83.00,6.00){\makebox(0,0)[cc]{$H_{L} $ }} \end{picture} $$ is a coequalizer diagram. As a consequence, $(H_{L}, \eta_{H_{L}}=p_{L}\circ \eta_{H}, \mu_{H_{L}}=p_{L}\circ \mu_{H}\circ (i_{L}\otimes i_{L}))$ is a unital magma in ${\mathcal C}$ and $(H_{L}, \varepsilon_{H_{L}}=\varepsilon_{H}\circ i_{L}, \delta_{H_{L}}=(p_{L}\otimes p_{L})\circ \delta_{H}\circ i_{L})$ is a comonoid in ${\mathcal C}$ (see Proposition 3.13 of \cite{AFG-Weak-quasi}). If $H$ is the weak Hopf quasigroup defined in Example \ref{main-example}, note that $H_{L}=\langle [1_{x}], \; x\in {\mathcal B}_{0}\rangle$. Then, in this case we have that the induced product $\mu_{H_{L}}$ is associative because $[1_{x}].([1_{y}].[1_{z}])$ and $([1_{x}].[1_{y}]).[1_{z}]$ are equal to $[1_{x}]$ if $x=y=z$ and $0$ otherwise.
Surprisingly, the associativity of the product $\mu_{H_{L}}$ is a general property: \begin{proposition} \label{monoid-hl} Let $H$ be a weak Hopf quasigroup. The following identities hold: \begin{equation} \label{monoid-hl-1} \mu_{H}\circ ((\mu_{H}\circ (i_{L}\otimes H))\otimes H)=\mu_{H}\circ (i_{L}\otimes \mu_{H}), \end{equation} \begin{equation} \label{monoid-hl-2} \mu_{H}\circ (H\otimes (\mu_{H}\circ (i_{L}\otimes H)))=\mu_{H}\circ ((\mu_{H}\circ (H\otimes i_{L}))\otimes H), \end{equation} \begin{equation} \label{monoid-hl-3} \mu_{H}\circ (H\otimes (\mu_{H}\circ (H\otimes i_{L})))=\mu_{H}\circ (\mu_{H}\otimes i_{L}). \end{equation} As a consequence, the unital magma $H_{L}$ is a monoid in ${\mathcal C}$. \end{proposition} \begin{proof} First we will prove that \begin{equation} \label{aux-1-monoid-hl} \delta_{H}\circ \mu_{H}\circ (i_{L}\otimes H)=(\mu_{H}\otimes H)\circ (i_{L}\otimes \delta_{H}), \end{equation} \begin{equation} \label{aux-2-monoid-hl} \delta_{H}\circ \mu_{H}\circ (H\otimes i_{L})=(\mu_{H}\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes i_{L}). \end{equation} Indeed: \begin{itemize} \item[ ]$\hspace{0.38cm}\delta_{H}\circ \mu_{H}\circ (i_{L}\otimes H)$ \item[ ]$=(\mu_{H}\otimes \mu_{H})\circ \delta_{H\otimes H}\circ (i_{L}\otimes H) $ \item[ ]$=(\mu_{H}\otimes (\mu_{H}\circ (\overline{\Pi}_{H}^{R}\otimes H)))\circ \delta_{H\otimes H}\circ (i_{L}\otimes H) $ \item[ ]$=(\mu_{H}\otimes (((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes \delta_{H})))\circ \delta_{H\otimes H}\circ (i_{L}\otimes H)$ \item[ ]$=(\mu_{H}\otimes H)\circ (i_{L}\otimes \delta_{H}) .$ \end{itemize} The first equality follows by (a1) of Definition \ref{Weak-Hopf-quasigroup}. The second one follows by Remark 3.15 of \cite{AFG-Weak-quasi} and the third one by (\ref{mu-pi-r-var}). Finally, the fourth one is a consequence of the coassociativity of $\delta_{H}$ and (a1) of Definition \ref{Weak-Hopf-quasigroup}.
On the other hand, by (a1) of Definition \ref{Weak-Hopf-quasigroup}, (\ref{pi-delta-mu-pi-3}), (\ref{mu-pi-l}) and the coassociativity of $\delta_{H}$, we obtain (\ref{aux-2-monoid-hl}) because \begin{itemize} \item[ ]$\hspace{0.38cm} \delta_{H}\circ \mu_{H}\circ (H\otimes i_{L})$ \item[ ]$=(\mu_{H}\otimes \mu_{H})\circ \delta_{H\otimes H}\circ (H\otimes i_{L}) $ \item[ ]$=(\mu_{H}\otimes (\mu_{H}\circ (H\otimes \Pi_{H}^{L})))\circ \delta_{H\otimes H}\circ (H\otimes i_{L}) $ \item[ ]$=(\mu_{H}\otimes (((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes H)))\circ \delta_{H\otimes H}\circ (H\otimes i_{L}) $ \item[ ]$=(\mu_{H}\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes i_{L}) .$ \end{itemize} Then, (\ref{monoid-hl-1}) holds because \begin{itemize} \item[ ]$\hspace{0.38cm} \mu_{H}\circ ((\mu_{H}\circ (i_{L}\otimes H))\otimes H)$ \item[ ]$= \mu_{H}\circ ((\mu_{H}\circ (H\otimes \Pi_{H}^{L}))\otimes H)\circ ( (\mu_{H}\circ (i_{L}\otimes H)) \otimes \delta_{H})$ \item[ ]$= (\varepsilon_{H}\otimes H)\circ \mu_{H\otimes H}\circ ((\delta_{H}\circ \mu_{H}\circ (i_{L}\otimes H))\otimes \delta_{H})$ \item[ ]$= (\varepsilon_{H}\otimes H)\circ \mu_{H\otimes H}\circ (((\mu_{H}\otimes H)\circ (i_{L}\otimes \delta_{H}))\otimes \delta_{H}) $ \item[ ]$=(\varepsilon_{H}\otimes H)\circ (\mu_{H}\otimes H)\circ (i_{L}\otimes ((\mu_{H}\otimes \mu_{H})\circ \delta_{H\otimes H})) $ \item[ ]$= (\varepsilon_{H}\otimes H)\circ (\mu_{H}\otimes H)\circ (i_{L}\otimes (\delta_{H}\circ \mu_{H})) $ \item[ ]$=(\varepsilon_{H}\otimes H) \circ \delta_{H}\circ \mu_{H}\circ (i_{L}\otimes \mu_{H}) $ \item[ ]$= \mu_{H}\circ (i_{L}\otimes \mu_{H}).$ \end{itemize} The first equality follows by (\ref{mu-assoc-1}), the second one by (\ref{mu-pi-l}) and the third and sixth ones by (\ref{aux-1-monoid-hl}). The fourth one is a consequence of (a2) of Definition \ref{Weak-Hopf-quasigroup}. 
In the fifth one we used (a1) of Definition \ref{Weak-Hopf-quasigroup} and the last one relies on the properties of the counit. The proof for (\ref{monoid-hl-2}) is the following: \begin{itemize} \item[ ]$\hspace{0.38cm} \mu_{H}\circ (H\otimes (\mu_{H}\circ (i_{L}\otimes H)))$ \item[ ]$= \mu_{H}\circ ((\mu_{H}\circ (H\otimes \Pi_{H}^{L}))\otimes H)\circ (H\otimes (\delta_{H} \circ \mu_{H}\circ (i_{L}\otimes H)))$ \item[ ]$= \mu_{H}\circ ((\mu_{H}\circ (H\otimes \Pi_{H}^{L}))\otimes H)\circ (H\otimes \mu_{H}\otimes H)\circ (H\otimes i_{L}\otimes \delta_{H})$ \item[ ]$=(\varepsilon_{H}\otimes H)\circ\mu_{H\otimes H}\circ (\delta_{H}\otimes ((\mu_{H}\otimes H)\circ (i_{L}\otimes \delta_{H}))) $ \item[ ]$=((\varepsilon_{H}\circ \mu_{H})\otimes \mu_{H})\circ (H\otimes c_{H,H}\otimes H)\circ (((\mu_{H}\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes i_{L}))\otimes \delta_{H})$ \item[ ]$=((\varepsilon_{H}\circ \mu_{H})\otimes \mu_{H})\circ ( H\otimes c_{H,H}\otimes H)\circ ((\delta_{H}\circ \mu_{H}\circ (H\otimes i_{L}))\otimes \delta_{H}) $ \item[ ]$= (\varepsilon_{H}\otimes H)\circ\delta_{H}\circ \mu_{H}\circ ((\mu_{H}\circ (H\otimes i_{L}))\otimes H)$ \item[ ]$= \mu_{H}\circ ((\mu_{H}\circ (H\otimes i_{L}))\otimes H).$ \end{itemize} The first equality is a consequence of (\ref{mu-assoc-1}), the second one follows by (\ref{aux-1-monoid-hl}) and in the third one we used (\ref{mu-pi-l}). The fourth equality relies on the naturalness of $c$ and (a2) of Definition \ref{Weak-Hopf-quasigroup}. The fifth one follows from (\ref{aux-2-monoid-hl}), in the sixth equality we applied (a1) of Definition \ref{Weak-Hopf-quasigroup} and the last one follows by the properties of the counit. Similarly, we will prove (\ref{monoid-hl-3}).
Indeed: \begin{itemize} \item[ ]$\hspace{0.38cm}\mu_{H}\circ (H\otimes (\mu_{H}\circ (H\otimes i_{L}))) $ \item[ ]$=\mu_{H}\circ ((\mu_{H}\circ (H\otimes \Pi_{H}^{L}))\otimes H)\circ (H\otimes (\delta_{H} \circ \mu_{H}\circ (H\otimes i_{L}))) $ \item[ ]$=\mu_{H}\circ ((\mu_{H}\circ (H\otimes \Pi_{H}^{L}))\otimes H)\circ (H\otimes ((\mu_{H}\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes i_{L}))) $ \item[ ]$= (\varepsilon_{H}\otimes H)\circ\mu_{H\otimes H}\circ (\delta_{H}\otimes ((\mu_{H}\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes i_{L}))) $ \item[ ]$=(\varepsilon_{H}\otimes H)\circ (\mu_{H}\otimes \mu_{H})\circ ( \mu_{H}\otimes c_{H,H}\otimes H)\circ (H\otimes c_{H,H}\otimes c_{H,H})\circ (\delta_{H}\otimes \delta_{H}\otimes i_{L}) $ \item[ ]$=((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes c_{H,H})\circ ((\delta_{H}\circ \mu_{H})\otimes i_{L}) $ \item[ ]$= \mu_{H}\circ (\mu_{H}\otimes (\Pi_{H}^{L}\circ i_{L}))$ \item[ ]$=\mu_{H}\circ (\mu_{H}\otimes i_{L}) .$ \end{itemize} The first equality follows by (\ref{mu-assoc-1}), the second one by (\ref{aux-2-monoid-hl}) and the third one by (\ref{mu-pi-l}). The fourth one is a consequence of the naturalness of $c$ and (a2) of Definition \ref{Weak-Hopf-quasigroup}. In the fifth one we used (a1) of Definition \ref{Weak-Hopf-quasigroup}, the sixth one follows by (\ref{aux-2-monoid-hl}) and the last one relies on the properties of $\Pi_{H}^{L}$. Finally, by Proposition 3.9 of \cite{AFG-Weak-quasi}, (\ref{monoid-hl-2}) and the equality \begin{equation} \label{pi-mu-pi-pi} \Pi_{H}^{L}\circ \mu_{H}\circ (\Pi_{H}^{L}\otimes \Pi_{H}^{L})=\mu_{H}\circ (\Pi_{H}^{L}\otimes \Pi_{H}^{L}), \end{equation} it is easy to show that $\mu_{H_{L}}\circ (H_{L}\otimes \mu_{H_{L}})=\mu_{H_{L}}\circ (\mu_{H_{L}}\otimes H_{L})$ and therefore the unital magma $H_{L}$ is a monoid in ${\mathcal C}$.
Note that (\ref{pi-mu-pi-pi}) holds because, by (\ref{mu-pi-l}), (\ref{pi-delta-mu-pi-3}) and the naturalness of $c$, we have $$\mu_{H}\circ (\Pi_{H}^{L}\otimes \Pi_{H}^{L})=((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes c_{H,H})\circ ((\delta_{H}\circ \Pi_{H}^{L})\otimes H)$$ $$=((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes c_{H,H})\circ (((H\otimes \Pi_{H}^{L})\circ \delta_{H}\circ \Pi_{H}^{L})\otimes H)=\Pi_{H}^{L}\circ \mu_{H}\circ (\Pi_{H}^{L}\otimes \Pi_{H}^{L}).$$ \end{proof} \section{Galois extensions associated to weak Hopf quasigroups} In this section we introduce the notion of Galois extension (with normal basis) associated to a weak Hopf quasigroup that generalizes the one defined for Hopf algebras in \cite{KT} and for weak Hopf algebras in \cite{AFG2}. Moreover, if $\varepsilon_H$ and $\delta_H$ are assumed to be morphisms of unital magmas, then $H$ is a Hopf quasigroup and we obtain a definition of Galois extension (with normal basis) associated to a Hopf quasigroup. \begin{definition} \label{H-comodulomagma} {\rm Let $H$ be a weak Hopf quasigroup and let $(A, \rho_{A})$ be a unital magma (monoid), which is also a right $H$-comodule (i.e., $(A\otimes \varepsilon_{H})\circ \rho_{A}=id_{A}$, $(\rho_{A}\otimes H)\circ \rho_{A}=(A\otimes \delta_{H})\circ \rho_{A}$), such that \begin{equation} \label{chmagma} \mu_{A\otimes H}\circ (\rho_{A}\otimes \rho_{A})=\rho_{A}\circ \mu_{A}. \end{equation} We will say that $A$ is a right $H$-comodule magma (monoid) if any of the following equivalent conditions hold: \begin{itemize} \item[(b1)]$(\rho_{A}\otimes H)\circ \rho_{A}\circ \eta_{A}=(A\otimes (\mu_{H}\circ c_{H,H})\otimes H)\circ ((\rho_{A}\circ \eta_{A})\otimes (\delta_{H}\circ \eta_{H})). $ \item[(b2)]$(\rho_{A}\otimes H)\circ \rho_{A}\circ \eta_{A}=(A\otimes \mu_{H}\otimes H)\circ ((\rho_{A}\circ \eta_{A})\otimes (\delta_{H}\circ \eta_{H})).
$ \item[(b3)]$(A\otimes \overline{\Pi}_{H}^{R})\circ \rho_{A}=(\mu_{A}\otimes H)\circ (A\otimes (\rho_{A}\circ \eta_{A})).$ \item[(b4)]$(A\otimes \Pi_{H}^{L})\circ \rho_{A}= ((\mu_{A}\circ c_{A,A})\otimes H)\circ (A\otimes (\rho_{A}\circ \eta_{A})).$ \item[(b5)]$(A\otimes \overline{\Pi}_{H}^{R})\circ \rho_{A}\circ \eta_{A}=\rho_{A}\circ\eta_{A}.$ \item[(b6)]$ (A\otimes \Pi_{H}^{L})\circ \rho_{A}\circ \eta_{A}=\rho_{A}\circ\eta_{A}.$ \end{itemize} This definition is similar to the notion of right $H$-comodule monoid in the weak Hopf algebra setting and the proof for the equivalence of (b1)-(b6) is the same. Note that, if $H$ is a Hopf quasigroup and $(A, \rho_{A})$ is a unital magma, which is also a right $H$-comodule, we will say that $A$ is a right $H$-comodule magma if it satisfies (\ref{chmagma}) and $\eta_{A}\otimes \eta_{H}=\rho_{A}\circ \eta_{A}$. In this case (b1)-(b6) trivialize. } \end{definition} \begin{example} \label{Hescomodulomagma} {\rm Let $H$ be a weak Hopf quasigroup. Then $(H, \delta_H)$ is a right $H$-comodule magma. } \end{example} \begin{definition} \label{coinvariantesA} {\rm Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma. We denote by $A^{co H}$ the equalizer of the morphisms $\rho_A$ and $(A\otimes \Pi_{H}^{L})\circ \rho_A$ (equivalently, $\rho_A$ and $(A\otimes \overline{\Pi}_{H}^{R})\circ \rho_A$) and by $i_A$ the injection of $A^{co H}$ in $A$. The triple $(A^{co H}, \eta_{A^{co H}}, \mu_{A^{co H}})$ is a unital magma (the submagma of coinvariants of $A$), where $\eta_{A^{co H}}:K\rightarrow A^{co H}$, $\mu_{A^{co H}}:A^{co H}\otimes A^{co H}\rightarrow A^{co H}$ are the factorizations of the morphisms $\eta_A$ and $\mu_A\circ (i_A\otimes i_A)$ through $i_{A}$, respectively.
Indeed, by (b6) of Definition \ref{H-comodulomagma} we have that $ (A\otimes \Pi_{H}^{L})\circ \rho_{A}\circ \eta_{A}=\rho_{A}\circ\eta_{A}.$ As a consequence, there exists a unique morphism $\eta_{A^{co H}}:K\rightarrow A^{co H}$ such that \begin{equation} \label{eta-coinv} \eta_{A}=i_{A}\circ \eta_{A^{co H}}. \end{equation} On the other hand, using (\ref{chmagma}), (b6) of Definition \ref{H-comodulomagma} and (\ref{pi-mu-pi-pi}) we obtain \begin{itemize} \item[ ]$\hspace{0.38cm} \rho_A\circ \mu_A\circ (i_A\otimes i_A)$ \item[ ]$=\mu_{A\otimes H}\circ ((\rho_A\circ i_A)\otimes (\rho_A\circ i_A))$ \item[ ]$=(\mu_{A}\otimes (\mu_{H}\circ (\Pi_{H}^{L}\otimes \Pi_{H}^{L})))\circ (A\otimes c_{H,A}\otimes H)\circ ((\rho_A\circ i_A)\otimes (\rho_A\circ i_A))$ \item[ ]$=(\mu_{A}\otimes (\Pi_{H}^{L}\circ \mu_{H}))\circ (A\otimes c_{H,A}\otimes H)\circ ((\rho_A\circ i_A)\otimes (\rho_A\circ i_A))$ \item[ ]$=(A\otimes \Pi_{H}^{L})\circ \rho_A\circ \mu_A\circ (i_A\otimes i_A).$ \end{itemize} Therefore, there exists a unique morphism $\mu_{A^{co H}}:A^{co H}\otimes A^{co H}\rightarrow A^{co H}$ satisfying \begin{equation} \label{mu-coinv} \mu_A\circ (i_A\otimes i_A)=i_{A}\circ \mu_{A^{co H}}. \end{equation} } \end{definition} \begin{lemma} \label{igualdadesmurho} Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma. The following equalities hold: \begin{equation} \label{muArhoA-1} \rho_A\circ \mu_A\circ (i_A\otimes A)=(\mu_A\otimes H)\circ (i_A\otimes \rho_A), \end{equation} \begin{equation} \label{muArhoA-2} \rho_A\circ \mu_A\circ (A\otimes i_A)=(\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes i_A), \end{equation} \begin{equation} \label{muArhoA-22} (\mu_{A}\otimes (\mu_{H}\circ (H\otimes \Pi_{H}^{L})))\circ (A\otimes c_{H,A}\otimes H)\circ (\rho_{A}\otimes \rho_{A})=(\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes A). 
\end{equation} \end{lemma} \begin{proof} The first equality follows because $A$ is a right $H$-comodule magma, the properties of the equalizer $i_A$, (\ref{mu-pi-r-var}) and the naturalness of $c$. Indeed, \begin{itemize} \item[ ]$\hspace{0.38cm} \rho_A\circ \mu_A\circ (i_A\otimes A)$ \item[ ]$=\mu_{A\otimes H}\circ ((\rho_A\circ i_A)\otimes \rho_A)$ \item[ ]$=\mu_{A\otimes H}\circ (((A\otimes \overline{\Pi}_{H}^{R})\circ\rho_A\circ i_A)\otimes \rho_A)$ \item[ ]$=(\mu_A\otimes (\varepsilon_H\circ \mu_H)\otimes H)\circ (A\otimes c_{H,A}\otimes \delta_H)\circ ((\rho_A\circ i_A)\otimes \rho_A)$ \item[ ]$=(((A\otimes \varepsilon_H)\circ \rho_A)\otimes H)\circ (\mu_A\otimes H)\circ (i_A\otimes \rho_A)$ \item[ ]$=(\mu_A\otimes H)\circ (i_A\otimes \rho_A).$ \end{itemize} In a similar way, but using (\ref{mu-pi-l}), we get (\ref{muArhoA-2}): \begin{itemize} \item[ ]$\hspace{0.38cm} \rho_A\circ \mu_A\circ (A\otimes i_A)$ \item[ ]$=\mu_{A\otimes H}\circ (\rho_A\otimes (\rho_A\circ i_A))$ \item[ ]$=\mu_{A\otimes H}\circ (\rho_A\otimes ((A\otimes \Pi_{H}^{L})\circ\rho_A\circ i_A))$ \item[ ]$=(\mu_A\otimes (((\varepsilon_H\circ \mu_H)\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_H\otimes H)))\circ (A\otimes c_{H,A}\otimes H)\circ (\rho_A\otimes (\rho_A\circ i_A))$ \item[ ]$=(((A\otimes \varepsilon_H)\circ \rho_A)\otimes H)\circ (\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes i_A)$ \item[ ]$=(\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes i_A).$ \end{itemize} Finally, \begin{itemize} \item[ ]$\hspace{0.38cm} (\mu_{A}\otimes (\mu_{H}\circ (H\otimes \Pi_{H}^{L})))\circ (A\otimes c_{H,A}\otimes H)\circ (\rho_{A}\otimes \rho_{A})$ \item[ ]$= (\mu_{A}\otimes (((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes H)))\circ (A\otimes c_{H,A}\otimes H)\circ (\rho_{A}\otimes \rho_{A}) $ \item[ ]$= (((A\otimes \varepsilon_{H})\circ \mu_{A\otimes H}\circ (\rho_{A}\otimes \rho_{A}))\otimes H)\circ (A\otimes c_{H,A})\circ 
(\rho_{A}\otimes A) $ \item[ ]$= (((A\otimes \varepsilon_{H})\circ \rho_A\circ \mu_{A})\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_{A}\otimes A) $ \item[ ]$= (\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes A), $ \end{itemize} where the first equality follows by (\ref{mu-pi-l}), the second one follows by the comodule condition of $A$ and the naturalness of $c$, the third one is a consequence of (\ref{chmagma}) and the last one relies on the counit properties. Therefore, (\ref{muArhoA-22}) holds and the proof is complete. \end{proof} \begin{remark} {\rm It is not difficult to see that the coinvariant submagma $H^{co H}$ of the right $H$-comodule magma $(H, \delta_H)$ is $H_L$. Moreover, in this case the equations (\ref{muArhoA-1}) and (\ref{muArhoA-2}) are (\ref{aux-1-monoid-hl}) and (\ref{aux-2-monoid-hl}), respectively. } \end{remark} \begin{proposition} \label{idempotentenabla} Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma. The morphism $\nabla_A:A\otimes H\rightarrow A\otimes H$, defined as $$\nabla_A=\mu_{A\otimes H}\circ (A\otimes H\otimes (\rho_A\circ \eta_A)),$$ is idempotent and it is a right $H$-comodule morphism for $\rho_{A\otimes H}=A\otimes \delta_H$. Moreover, it satisfies \begin{equation} \label{nablaAmodulo} \nabla_A\circ (\mu_A\otimes H)=(\mu_A\otimes H)\circ (A\otimes \nabla_A). \end{equation} As a consequence, there exist an object $A\square H$ and morphisms $i_{A\otimes H}$ and $p_{A\otimes H}$ such that $\nabla_A=i_{A\otimes H}\circ p_{A\otimes H}$ and $id_{A\square H}=p_{A\otimes H}\circ i_{A\otimes H}$. \end{proposition} \begin{proof} Note that, by (b3) of Definition \ref{H-comodulomagma}, we obtain that \begin{equation} \label{new-exp-nabla} \nabla_A=(A\otimes (\mu_H\circ c_{H,H}))\circ (((A\otimes \overline{\Pi}_{H}^{R})\circ \rho_A)\otimes H). \end{equation} Then $\nabla_A$ is an idempotent morphism.
Indeed: \begin{itemize} \item[ ]$\hspace{0.38cm} \nabla_A\circ \nabla_A$ \item[ ]$=(A\otimes (\mu_H\circ (\mu_H\otimes H)\circ (H\otimes (c_{H,H}\circ (\overline{\Pi}_{H}^{R}\otimes \overline{\Pi}_{H}^{R})\circ \delta_H))))\circ (A\otimes c_{H,H})\circ (\rho_A\otimes H)$ \item[ ]$=(A\otimes (\mu_H\circ (\mu_H\otimes \overline{\Pi}_{H}^{R})\circ (H\otimes c_{H,H})))\circ (A\otimes H\otimes ((\mu_H\otimes H)\circ (H\otimes (\delta_H\circ \eta_H))))$ \item[ ]$\hspace{0.38cm} \circ (A\otimes c_{H,H})\circ (\rho_A\otimes H)$ \item[ ]$=(A\otimes (\mu_H\circ c_{H,H}))\circ (A\otimes (\varepsilon_H\circ \mu_H\circ (\mu_H\otimes H))\otimes H\otimes H)\circ (A\otimes H\otimes H\otimes (\delta_{H}\circ \eta_{H})\otimes H)$ \item[ ]$\hspace{0.38cm}\circ (\rho_A\otimes ((\Pi_{H}^{R}\otimes H)\circ \delta_H))$ \item[ ]$=(A\otimes (\mu_H\circ c_{H,H}))\circ (A\otimes (\varepsilon_H\circ \mu_H\circ (H\otimes \mu_H))\otimes H\otimes H)\circ (A\otimes H\otimes H\otimes (\delta_{H}\circ \eta_{H})\otimes H)$ \item[ ]$\hspace{0.38cm}\circ (\rho_A\otimes ((\Pi_{H}^{R}\otimes H)\circ \delta_H))$ \item[ ]$=(A\otimes (\varepsilon_H\circ \mu_H)\otimes H)\circ (\rho_A\otimes (\mu_{H\otimes H}\circ (((\Pi_{H}^{R}\otimes H)\circ \delta_H)\otimes (\delta_H\circ \eta_H))))$ \item[ ]$=(A\otimes (\varepsilon_H\circ \mu_H)\otimes H)\circ (\rho_A\otimes (\Pi_{H}^{R}*\Pi_{H}^{R})\otimes H)\circ (A\otimes \delta_H)$ \item[ ]$=(A\otimes (\varepsilon_H\circ \mu_H)\otimes H)\circ (\rho_A\otimes \Pi_{H}^{R}\otimes H)\circ (A\otimes \delta_H)$ \item[ ]$=(A\otimes \varepsilon_H\otimes H)\circ (A\otimes \mu_{H\otimes H})\circ (\rho_A\otimes H\otimes (\delta_{H}\circ \eta_H))$ \item[ ]$=\nabla_A.$ \end{itemize} In the preceding computations, the first equality follows by (\ref{new-exp-nabla}), the naturalness of $c$ and because $A$ is a right $H$-comodule; the second one by (\ref{delta-pi-r-var}) and by the naturalness of $c$. 
In the third one we use (\ref{delta-pi-r}), the naturalness of $c$ and the definition of $\overline{\Pi}_{H}^{R}$; the fourth one relies on (a2) of Definition \ref{Weak-Hopf-quasigroup}; the fifth one on the naturalness of $c$; the sixth one on the coassociativity of the coproduct and on (\ref{delta-pi-r}). The seventh equality is a consequence of (a4-7) and (a4-3) of Definition \ref{Weak-Hopf-quasigroup}, the eighth one follows by (\ref{delta-pi-r}) and finally, the last one follows by the naturalness of $c$, the definition of $\overline{\Pi}_{H}^{R}$ and (\ref{new-exp-nabla}). Now, using (a1) of Definition \ref{Weak-Hopf-quasigroup}, the condition of right $H$-comodule for $A$ and (b6) of Definition \ref{H-comodulomagma}, and the naturalness of $c$ and (\ref{2-mu-delta-pi-l}), we get that $\nabla_A$ is a right $H$-comodule morphism, i.e. \begin{equation} \label{nabla-comod} (A\otimes \delta_H)\circ \nabla_A=(\nabla_A\otimes H)\circ (A\otimes \delta_H). \end{equation} Indeed, \begin{itemize} \item[ ]$\hspace{0.38cm} (A\otimes \delta_H)\circ \nabla_A$ \item[ ]$=(\mu_A\otimes \mu_{H\otimes H})\circ (A\otimes A\otimes \delta_H\otimes \delta_H)\circ (A\otimes c_{H,A}\otimes H)\circ (A\otimes H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=(\mu_A\otimes \mu_{H\otimes H})\circ (A\otimes A\otimes \delta_H\otimes ((H\otimes \Pi_{H}^{L})\circ \delta_H))\circ (A\otimes c_{H,A}\otimes H)\circ (A\otimes H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=(\nabla_A\otimes H)\circ (A\otimes \delta_H).$ \end{itemize} Finally, \begin{itemize} \item[ ]$\hspace{0.38cm} \nabla_A\circ (\mu_A\otimes H)$ \item[ ]$=(\mu_A\otimes (\varepsilon_H\circ \mu_H\circ (\mu_H\otimes H))\otimes (\mu_H\circ c_{H,H}))\circ (((A\otimes c_{H,A}\otimes H)\circ (\rho_A\otimes \rho_A))\otimes (\delta_H\circ \eta_H)\otimes H)$ \item[ ]$=(\mu_A\otimes (\varepsilon_H\circ \mu_H\circ (H\otimes \mu_H))\otimes (\mu_H\circ c_{H,H}))\circ (((A\otimes c_{H,A}\otimes H)\circ (\rho_A\otimes \rho_A))\otimes (\delta_H\circ
\eta_H)\otimes H)$ \item[ ]$=(\mu_A\otimes (\varepsilon_H\circ \mu_H)\otimes (\mu_H\circ c_{H,H}))\circ (A\otimes c_{H,A}\otimes ((H\otimes \overline{\Pi}_{H}^{R})\circ \delta_H)\otimes H)\circ (\rho_A\otimes \rho_A\otimes H)$ \item[ ]$= (A\otimes \varepsilon_{H}\otimes (\mu_H\circ c_{H,H}\circ (\overline{\Pi}_{H}^{R}\otimes H)))\circ ((\mu_{A\otimes H}\circ (\rho_{A}\otimes \rho_{A}))\otimes H\otimes H)\circ (A\otimes \rho_{A}\otimes H)$ \item[ ]$=(((A\otimes \varepsilon_H)\circ \rho_A\circ \mu_A)\otimes H)\circ (A\otimes \nabla_A)$ \item[ ]$=(\mu_A\otimes H)\circ (A\otimes \nabla_A),$ \end{itemize} where the first and fifth equalities follow by (\ref{chmagma}) and (\ref{new-exp-nabla}), the second one by (a2) of Definition \ref{Weak-Hopf-quasigroup} and the third one by (\ref{delta-pi-r-var}). In the fourth equality we used that $A$ is a right $H$-comodule, and the last one follows by the counit properties. Therefore, (\ref{nablaAmodulo}) holds and the proof is complete. \end{proof} Note that, due to the lack of associativity, for $M=A\otimes H$ the morphism $\varphi_{M}=\mu_A\otimes H$ is not a left $A$-module structure (i.e. $\varphi_{M}\circ (\eta_{A}\otimes M)=id_{M}$, $\varphi_{M}\circ (A\otimes \varphi_{M})=\varphi_{M}\circ (\mu_{A}\otimes M)$). Moreover, if $A=H$, by (\ref{delta-pi-r}), we have \begin{equation} \label{nabladeH} \nabla_H=(\mu_H \otimes H)\circ (H\otimes \Pi_{H}^{R}\otimes H)\circ (H\otimes \delta_H). \end{equation} \begin{lemma} \label{igualdadesnabla} Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma.
The following equalities hold: \begin{equation} \label{nabla-1} p_{A\otimes H}\circ (A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))=p_{A\otimes H}\circ(\eta_A\otimes H), \end{equation} \begin{equation} \label{nabla-2} (A\otimes (\delta_H\circ \mu_H))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))= (((A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A)))\otimes H)\circ \delta_H, \end{equation} \begin{equation} \label{nabla-3} \nabla_A\circ (\mu_A\otimes H)\circ (A\otimes \rho_A)=(\mu_A\otimes H)\circ (A\otimes \rho_A). \end{equation} \end{lemma} \begin{proof} The equality (\ref{nabla-1}) holds because, composing with $i_{A\otimes H}$, we have \begin{itemize} \item[ ]$\hspace{0.38cm} \nabla_A\circ (A\otimes \mu_{H})\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= (A\otimes (\mu_{H}\circ (\mu_{H}\otimes H)))\circ (c_{H,A}\otimes H\otimes H)\circ (H\otimes \mu_{A}\otimes H\otimes H)\circ (H\otimes A\otimes c_{H,A}\otimes H)$ \item[ ]$\hspace{0.38cm} \circ (H\otimes (\rho_{A}\circ \eta_{A})\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= (A\otimes (\mu_{H}\circ ((\mu_{H}\circ (H\otimes \Pi_{H}^{L}))\otimes H)))\circ (c_{H,A}\otimes H\otimes H)\circ (H\otimes \mu_{A}\otimes H\otimes H)\circ (H\otimes A\otimes c_{H,A}\otimes H)$ \item[ ]$\hspace{0.38cm} \circ (H\otimes (\rho_{A}\circ \eta_{A})\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= (A\otimes (\mu_{H}\circ (H\otimes (\mu_{H}\circ (\Pi_{H}^{L}\otimes H)))))\circ (c_{H,A}\otimes H\otimes H)\circ (H\otimes \mu_{A}\otimes H\otimes H)\circ (H\otimes A\otimes c_{H,A}\otimes H)$ \item[ ]$\hspace{0.38cm} \circ (H\otimes (\rho_{A}\circ \eta_{A})\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= (A\otimes \mu_{H})\circ (c_{H,A}\otimes H)\circ (H\otimes (\mu_{A\otimes H}\circ ((\rho_{A}\circ \eta_{A})\otimes (\rho_{A}\circ \eta_{A})))) $ \item[ ]$=(A\otimes \mu_{H})\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \mu_{A}\circ 
(\eta_{A}\otimes \eta_{A})))$ \item[ ]$=(A\otimes \mu_{H})\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$=\nabla_A\circ (\eta_{A}\otimes H), $ \end{itemize} where the first equality follows by the naturalness of $c$, the second one follows by (b6) of Definition \ref{H-comodulomagma}, and the third one follows by (\ref{monoid-hl-2}) and by the naturalness of $c$. In the fourth equality we used the naturalness of $c$ and (b6) of Definition \ref{H-comodulomagma}. The fifth equality is a consequence of (\ref{chmagma}) and the sixth and seventh ones rely on the properties of the unit of $A$. On the other hand, the proof for (\ref{nabla-2}) is the following: \begin{itemize} \item[ ]$\hspace{0.38cm} (A\otimes (\delta_H\circ \mu_H))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= (A\otimes ((\mu_{H}\otimes \mu_{H})\circ \delta_{H\otimes H}))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A})) $ \item[ ]$= (A\otimes (\mu_{H\otimes H}\circ (\delta_{H}\otimes H\otimes \Pi_{H}^{L})))\circ (c_{H,A}\otimes H\otimes H)\circ (H\otimes ((\rho_{A}\otimes H)\circ \rho_{A}\circ \eta_{A}))$ \item[ ]$= (A\otimes ((\mu_{H}\otimes (\mu_{H}\circ (H\otimes \Pi_{H}^{L})))\circ \delta_{H\otimes H}))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A})) $ \item[ ]$= (A\otimes ((\mu_{H}\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes H)))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= (((A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A)))\otimes H)\circ \delta_H.$ \end{itemize} In these equalities the first one is a consequence of (a1) of Definition \ref{Weak-Hopf-quasigroup}, the second one holds because $A$ is a right $H$-comodule and by (b6) of Definition \ref{H-comodulomagma}. In the third one we applied again that $A$ is a right $H$-comodule, the fourth one follows by (\ref{2-mu-delta-pi-l}) and the last one relies on the naturalness of $c$.
Finally, (\ref{nabla-3}) is a direct consequence of the equalities (\ref{nablaAmodulo}) and \begin{equation} \label{rho-nabla} \nabla_A\circ \rho_A=\rho_A. \end{equation} Note that (\ref{rho-nabla}) holds because, by (\ref{chmagma}) and the unit properties, we have $$\nabla_A\circ \rho_A =\mu_{A\otimes H}\circ (\rho_A\otimes (\rho_A\circ \eta_A))=\rho_A\circ \mu_{A}\circ (A\otimes \eta_{A})=\rho_{A}.$$ \end{proof} \begin{proposition} \label{monoidecoinvariantes} Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma such that \begin{equation} \label{AsubH-2} \mu_{A}\circ (A\otimes (\mu_A\circ (i_A\otimes A)))=\mu_A\circ ((\mu_A\circ (A\otimes i_A))\otimes A). \end{equation} Then $(A^{co H}, \eta_{A^{co H}}, \mu_{A^{co H}})$ is a monoid. Moreover the morphism $$\overline{\gamma}_A=p_{A\otimes H}\circ (\mu_A\otimes H)\circ (A\otimes \rho_A):A\otimes A\rightarrow A\square H$$ factorizes through the coequalizer diagram $$ \setlength{\unitlength}{1mm} \begin{picture}(101.00,10.00) \put(22.00,8.00){\vector(1,0){40.00}} \put(22.00,4.00){\vector(1,0){40.00}} \put(75.00,6.00){\vector(1,0){21.00}} \put(43.00,11.00){\makebox(0,0)[cc]{$(\mu_{A}\circ (A\otimes i_A))\otimes A$ }} \put(43.00,0.00){\makebox(0,0)[cc]{$A\otimes (\mu_{A}\circ (i_A\otimes A))$ }} \put(85.00,9.00){\makebox(0,0)[cc]{$n_{A}$ }} \put(10.00,6.00){\makebox(0,0)[cc]{$ A\otimes A^{co H}\otimes A$ }} \put(70.00,6.00){\makebox(0,0)[cc]{$A\otimes A$ }} \put(105.00,6.00){\makebox(0,0)[cc]{$A\otimes_{A^{co H}}A$ }} \end{picture} $$ and, if we denote by $\gamma_A$ this factorization, the following equalities: \begin{equation} \label{gammarho-1} (\gamma_A\otimes H)\circ \rho^{1}_{A\otimes_{A^{co H}}A}=(p_{A\otimes H}\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_{H}\circ (H\otimes \lambda_{H}))\otimes H)\circ (\rho_A\otimes \delta_{H})\circ i_{A\otimes H}\circ \gamma_A, \end{equation} \begin{equation} \label{gammarho-2} (\gamma_A\otimes H)\circ \rho^{2}_{A\otimes_{A^{co 
H}}A}=(p_{A\otimes H}\otimes H)\circ(A\otimes \delta_H)\circ i_{A\otimes H}\circ \gamma_A, \end{equation} hold, where $\rho^{1}_{A\otimes_{A^{co H}}A}$ and $\rho^{2}_{A\otimes_{A^{co H}}A}$ are the factorizations, through the coequalizer $n_{A}$, of the morphisms $(n_{A}\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes A)$ and $(n_{A}\otimes H)\circ (A\otimes \rho_A)$, respectively. \end{proposition} \begin{proof} Trivially, if (\ref{AsubH-2}) holds, the triple $(A^{co H}, \eta_{A^{co H}}, \mu_{A^{co H}})$ is a monoid. On the other hand, consider the coequalizer diagram $$ \setlength{\unitlength}{1mm} \begin{picture}(101.00,10.00) \put(22.00,8.00){\vector(1,0){40.00}} \put(22.00,4.00){\vector(1,0){40.00}} \put(75.00,6.00){\vector(1,0){21.00}} \put(43.00,11.00){\makebox(0,0)[cc]{$(\mu_{A}\circ (A\otimes i_A))\otimes A$ }} \put(43.00,0.00){\makebox(0,0)[cc]{$A\otimes (\mu_{A}\circ (i_A\otimes A))$ }} \put(85.00,9.00){\makebox(0,0)[cc]{$n_{A}$ }} \put(10.00,6.00){\makebox(0,0)[cc]{$ A\otimes A^{co H}\otimes A$ }} \put(70.00,6.00){\makebox(0,0)[cc]{$A\otimes A$ }} \put(105.00,6.00){\makebox(0,0)[cc]{$A\otimes_{A^{co H}}A$ }} \end{picture} $$ By (\ref{muArhoA-1}) and (\ref{AsubH-2}) we have $$(\mu_{A}\otimes H)\circ (A\otimes \rho_{A})\circ (A\otimes (\mu_{A}\circ (i_A\otimes A)))=((\mu_{A}\circ (A\otimes \mu_{A}))\otimes H)\circ (A\otimes i_{A}\otimes \rho_{A})=(\mu_{A}\otimes H)\circ ((\mu_{A}\circ (A\otimes i_{A}))\otimes \rho_{A})$$ and, therefore, there exists a unique morphism such that \begin{equation} \label{can-fact} \gamma_{A}\circ n_{A}=\overline{\gamma}_A. 
\end{equation} Also, by (\ref{muArhoA-1}), (\ref{muArhoA-2}), the naturalness of $c$, and the definition of $n_{A}$, we have $$(n_{A}\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes A) \circ ((\mu_{A}\circ (A\otimes i_A))\otimes A)=(n_{A}\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes A) \circ (A\otimes (\mu_{A}\circ (i_A\otimes A)))$$ and $$(n_{A}\otimes H)\circ (A\otimes \rho_A)\circ ((\mu_{A}\circ (A\otimes i_A))\otimes A)=(n_{A}\otimes H)\circ (A\otimes \rho_A) \circ (A\otimes (\mu_{A}\circ (i_A\otimes A))).$$ Then, there exist unique morphisms $\rho^{1}_{A\otimes_{A^{co H}}A}, \rho^{2}_{A\otimes_{A^{co H}}A}: A\otimes_{A^{co H}}A\rightarrow A\otimes_{A^{co H}}A\otimes H$ such that \begin{equation} \label{rho-fact-1} \rho^{1}_{A\otimes_{A^{co H}}A}\circ n_{A}=(n_{A}\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes A), \end{equation} \begin{equation} \label{rho-fact-2} \rho^{2}_{A\otimes_{A^{co H}}A}\circ n_{A}=(n_{A}\otimes H)\circ (A\otimes \rho_A), \end{equation} respectively.
For $\rho^{1}_{A\otimes_{A^{co H}}A}$ the equality (\ref{gammarho-1}) holds because by composing with the coequalizer $n_{A}$, \begin{itemize} \item[ ]$\hspace{0.38cm} (p_{A\otimes H}\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_{H}\circ (H\otimes \lambda_{H}))\otimes H)\circ (\rho_A\otimes \delta_{H})\circ i_{A\otimes H}\circ \gamma_A\circ n_{A}$ \item[ ]$= (p_{A\otimes H}\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_{H}\circ (H\otimes \lambda_{H}))\otimes H)\circ (\rho_A\otimes \delta_{H})\circ \nabla_{A}\circ (\mu_{A}\otimes H)\circ (A\otimes \rho_{A}) $ \item[ ]$= (p_{A\otimes H}\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_{H}\circ (H\otimes \lambda_{H}))\otimes H)\circ ((\rho_A\circ \mu_{A})\otimes \delta_{H}) \circ (A\otimes \rho_{A}) $ \item[ ]$= (p_{A\otimes H}\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_{H}\circ (H\otimes \lambda_{H}))\otimes H)\circ (( \mu_{A\otimes H}\circ (\rho_{A}\otimes \rho_{A}))\otimes \delta_{H})\circ (A\otimes \rho_{A}) $ \item[ ]$= (p_{A\otimes H}\otimes H)\circ (A\otimes c_{H,H})\circ (\mu_{A}\otimes (\mu_{H}\circ (\mu_{H}\otimes \lambda_{H})\circ (H\otimes \delta_{H}))\otimes H)\circ (A\otimes c_{H,A}\otimes \delta_{H})\circ (\rho_{A}\otimes \rho_{A}) $ \item[ ]$= (p_{A\otimes H}\otimes H)\circ (A\otimes c_{H,H})\circ (\mu_{A}\otimes (\mu_{H}\circ (H\otimes \Pi_{H}^{L}))\otimes H)\circ (A\otimes c_{H,A}\otimes \delta_{H})\circ (\rho_{A}\otimes \rho_{A}) $ \item[ ]$= (p_{A\otimes H}\otimes H)\circ (A\otimes c_{H,H})\circ (\mu_{A}\otimes (((\varepsilon_{H}\circ \mu_{H})\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes H))\otimes H)\circ (A\otimes c_{H,A}\otimes \delta_{H})$ \item[ ]$\hspace{0.38cm}\circ (\rho_{A}\otimes \rho_{A}) $ \item[ ]$= (p_{A\otimes H}\otimes H)\circ (((A\otimes \varepsilon_{H})\circ \mu_{A\otimes H}\circ (\rho_{A}\otimes \rho_{A}))\otimes c_{H,H})\circ (A\otimes c_{H,A}\otimes H)\circ (\rho_{A}\otimes \rho_{A})$ \item[ ]$= (p_{A\otimes H}\otimes H)\circ 
(((A\otimes \varepsilon_{H})\circ \rho_A\circ \mu_{A})\otimes c_{H,H})\circ (A\otimes c_{H,A}\otimes H)\circ (\rho_{A}\otimes \rho_{A}) $ \item[ ]$=(\overline{\gamma}_A\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_{A}\otimes A) $ \item[ ]$=((\gamma_A\circ n_{A})\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_{A}\otimes A) $ \item[ ]$=(\gamma_A\otimes H)\circ \rho^{1}_{A\otimes_{A^{co H}}A}\circ n_{A},$ \end{itemize} where the first and the tenth equalities follow by (\ref{can-fact}), the second one follows by (\ref{nabla-3}) and the third and eighth ones follow by (\ref{chmagma}). In the fourth identity we used that $A$ is a right $H$-comodule and the coassociativity of $\delta_{H}$. The fifth equality relies on (a4-6) of Definition \ref{Weak-Hopf-quasigroup} and the sixth one is a consequence of (\ref{mu-pi-l}). In the seventh equality we applied the naturalness of $c$ and the comodule structure of $A$, the ninth one follows by the counit properties and the naturalness of $c$ and the last one follows by (\ref{rho-fact-1}). Finally, by (\ref{rho-fact-2}), the comodule structure of $A$ and (\ref{nabla-comod}) we have $$(\gamma_A\otimes H)\circ \rho^{2}_{A\otimes_{A^{co H}}A}\circ n_{A}=(p_{A\otimes H}\otimes H)\circ(A\otimes \delta_H)\circ i_{A\otimes H}\circ \gamma_A\circ n_{A},$$ and then (\ref{gammarho-2}) holds. \end{proof} \begin{lemma} \label{morfismosauxiliares} Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma such that the functor $A\otimes -$ preserves coequalizers. Assume that \begin{equation} \label{AsubH-3} \mu_{A}\circ (A\otimes (\mu_A\circ (A\otimes i_A)))=\mu_A\circ (\mu_A\otimes i_A). \end{equation} Then the morphism $n_{A}\circ (\mu_A\otimes A)$ factorizes through the coequalizer $A\otimes n_{A}$. We will denote by $\varphi_{A\otimes_{A^{co H}}A}$ this factorization, i.e., the unique morphism such that \begin{equation} \label{varphi} \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes n_{A})=n_{A}\circ (\mu_A\otimes A).
\end{equation} \end{lemma} \begin{proof} If the functor $A\otimes -$ preserves coequalizers, we have that $$ \setlength{\unitlength}{1mm} \begin{picture}(120.00,10.00) \put(19.00,8.00){\vector(1,0){40.00}} \put(19.00,4.00){\vector(1,0){40.00}} \put(80.00,6.00){\vector(1,0){21.00}} \put(41.00,11.00){\makebox(0,0)[cc]{$A\otimes (\mu_{A}\circ (A\otimes i_A))\otimes A$ }} \put(41.00,0.00){\makebox(0,0)[cc]{$A\otimes A\otimes (\mu_{A}\circ (i_A\otimes A))$ }} \put(88.00,9.00){\makebox(0,0)[cc]{$A\otimes n_{A}$ }} \put(3.00,6.00){\makebox(0,0)[cc]{$ A\otimes A\otimes A^{co H}\otimes A$ }} \put(71.00,6.00){\makebox(0,0)[cc]{$A\otimes A\otimes A$ }} \put(118.00,6.00){\makebox(0,0)[cc]{$A\otimes A\otimes_{A^{co H}}A$ }} \end{picture} $$ is a coequalizer diagram, and then the result follows easily by (\ref{AsubH-3}) and by the properties of $n_{A}$. \end{proof} Now we introduce the definition of Galois extension associated to a weak Hopf quasigroup. \begin{definition} \label{Galois} {\rm Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma satisfying (\ref{AsubH-2}). We say that $A^{co H}\hookrightarrow A$ is a weak $H$-Galois extension if the morphism $\gamma_A$ is an isomorphism. Let $\rho^{2}_{A\otimes_{A^{co H}}A}$ be the morphism introduced in Proposition \ref{monoidecoinvariantes}. 
The pair $(A\otimes_{A^{co H}}A, \rho^{2}_{A\otimes_{A^{co H}}A})$ is a right $H$-comodule and so is $(A\square H, \rho_{A\square H})$ with $$\rho_{A\square H}=(p_{A\otimes H}\otimes H)\circ (A\otimes\delta_{H})\circ i_{A\otimes H}.$$ Then, $\gamma_A$ is a morphism of right $H$-comodules, because composing with $n_{A}$ and using (\ref{can-fact}), (\ref{nabla-comod}) and (\ref{gammarho-2}), the equality $$\rho_{A\square H}\circ \gamma_A\circ n_{A}= (\gamma_A\otimes H)\circ \rho^{2}_{A\otimes_{A^{co H}}A}\circ n_{A}$$ holds and therefore \begin{equation} \label{gamma-comod} \rho_{A\square H}\circ \gamma_A=(\gamma_A\otimes H)\circ \rho^{2}_{A\otimes_{A^{co H}}A}. \end{equation} On the other hand, if $\varphi_{A\square H}= p_{A\otimes H}\circ (\mu_{A}\otimes H)\circ (A\otimes i_{A\otimes H})$, by (\ref{can-fact}) and (\ref{nablaAmodulo}), we obtain that $\gamma_{A}$ is almost lineal, i.e., \begin{equation} \label{gamma-almost-lineal} \varphi_{A\square H} \circ (A\otimes (\gamma_A\circ n_{A}\circ (\eta_{A}\otimes A)))=\gamma_A\circ n_{A}. \end{equation} If $A^{co H}\hookrightarrow A$ is a weak $H$-Galois extension such that the functor $A\otimes -$ preserves coequalizers, and the equality (\ref{AsubH-3}) holds, we will say that $\gamma_A^{-1}$ is almost lineal if it satisfies that \begin{equation} \label{almostlineal} \gamma_A^{-1}\circ p_{A\otimes H}=\varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_A^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H))). \end{equation} } \end{definition} \begin{definition} \label{basenormal} {\rm Let $A^{co H}\hookrightarrow A$ be a weak $H$-Galois extension. 
We will say that $A^{co H}\hookrightarrow A$ is a weak $H$-Galois extension with normal basis if there exists an idempotent morphism of left $A^{co H}$-modules ($\varphi_{A^{co H}\otimes H}=\mu_{A^{co H}}\otimes H$) and right $H$-comodules ($\rho_{A^{co H}\otimes H}=A^{co H}\otimes \delta_H$), $$\Omega_A:A^{co H}\otimes H\rightarrow A^{co H}\otimes H,$$ and an isomorphism of left $A^{co H}$-modules and right $H$-comodules $$b_A:A\rightarrow A^{co H}\times H,$$ where $A^{co H}\times H$ is the image of $\Omega_A$ and $\varphi_{A^{co H}\times H}=r_A\circ (\mu_{A^{co H}}\otimes H)\circ (A^{co H}\otimes s_A)$, $\rho_{A^{co H}\times H}=(r_A\otimes H)\circ (A^{co H}\otimes \delta_H)\circ s_A$, and $s_A:A^{co H}\times H\rightarrow A^{co H}\otimes H$, $r_A:A^{co H}\otimes H\rightarrow A^{co H}\times H$ are the morphisms such that $s_A\circ r_A=\Omega_A$ and $r_A\circ s_A=id_{A^{co H}\times H}$. Note that by Proposition \ref{monoidecoinvariantes}, $A^{co H}$ is a monoid and then $\varphi_{A^{co H}\otimes H}$ is a left $A^{co H}$-module structure for $A^{co H}\otimes H$. } \end{definition} \begin{remark} \label{Galoiswhaandhq} {\rm In the weak Hopf algebra setting, Definition \ref{Galois} is a generalization of the notion of weak $H$-Galois extension (with normal basis) given in \cite{AFG2}. Recall that if $H$ is a weak Hopf algebra and $A$ a right $H$-comodule monoid, the equality (\ref{almostlineal}) is always true.
Indeed, by the definitions of $\varphi_{A\otimes_{A^{co H}}A}$ and $\gamma_A$ and taking into account that $A$ is a monoid and (\ref{nabla-3}), \begin{itemize} \item[ ]$\hspace{0.38cm} \gamma_A\circ \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes n_{A})$ \item[ ]$=\gamma_A\circ n_{A}\circ (\mu_A\otimes A)$ \item[ ]$=p_{A\otimes H}\circ (\mu_A\otimes H)\circ (A\otimes \rho_A)\circ (\mu_A\otimes A)$ \item[ ]$=p_{A\otimes H}\circ (\mu_A\otimes H)\circ (A\otimes (\nabla_A\circ (\mu_A\otimes H)\circ (A\otimes \rho_A)))$ \item[ ]$=p_{A\otimes H}\circ (\mu_A\otimes H)\circ (A\otimes (i_{A\otimes H}\circ \gamma_A\circ n_{A})),$ \end{itemize} and then $\gamma_A\circ \varphi_{A\otimes_{A^{co H}}A}=p_{A\otimes H}\circ (\mu_A\otimes H)\circ (A\otimes (i_{A\otimes H}\circ \gamma_A))$. Therefore \begin{itemize} \item[ ]$\hspace{0.38cm} \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_A^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)))$ \item[ ]$=\gamma_A^{-1}\circ \gamma_A\circ \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_A^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)))$ \item[ ]$=\gamma_A^{-1}\circ p_{A\otimes H}\circ (\mu_A\otimes H)\circ (A\otimes (i_{A\otimes H}\circ \gamma_A\circ \gamma_A^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)))$ \item[ ]$=\gamma_A^{-1}\circ p_{A\otimes H}\circ (\mu_{A}\otimes H)\circ (A\otimes (\nabla_A\circ (\eta_A\otimes H)))$ \item[ ]$=\gamma_A^{-1}\circ p_{A\otimes H},$ \end{itemize} and $\gamma_A^{-1}$ is almost lineal. On the other hand, if $H$ is a Hopf quasigroup, $\nabla_A=id_{A\otimes H}$ and then $\gamma_A$ is the factorization through the coequalizer of the morphism $(\mu_A\otimes H)\circ (A\otimes \rho_A)$. Then, for this algebraic structure, Definition \ref{Galois} is the notion of $H$-Galois extension for Hopf quasigroups (see \cite{AFG-3}). 
Also, $\varphi_{A\square H}=\mu_{A}\otimes H$, and, as a consequence, the almost lineal condition for $\gamma_{A}$ is \begin{equation} \label{gamma-almostlinealquasigroup} (\mu_{A}\otimes H)\circ (A\otimes (\gamma_A\circ n_{A}\circ (\eta_{A}\otimes A)))=\gamma_A\circ n_{A}. \end{equation} Now the almost lineal condition for $\gamma_A^{-1}$ says that the equality \begin{equation} \label{almostlinealquasigroup} \gamma_A^{-1}=\varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_A^{-1}\circ (\eta_A\otimes H))) \end{equation} holds. } \end{remark} \begin{example} \label{HesGalois} {\rm Let $H$ be a weak Hopf quasigroup. Then $H_L\hookrightarrow H$ is a weak $H$-Galois extension with normal basis. Also, $\gamma_H^{-1}$ is almost lineal. First of all, note that by Proposition \ref{monoid-hl}, equalities (\ref{AsubH-2}) and (\ref{AsubH-3}) hold for the right $H$-comodule magma $(H, \delta_H)$. Moreover, let $\gamma_H^{-1}=n_{H}\circ (\mu_H\otimes H)\circ (H\otimes \lambda_H\otimes H)\circ (H\otimes \delta_H)\circ i_{H\otimes H}:H\square H\rightarrow H\otimes_{H_L}H$. Then \begin{itemize} \item[ ]$\hspace{0.38cm} \gamma_H\circ \gamma_H^{-1}$ \item[ ]$=p_{H\otimes H}\circ (\mu_H\otimes H)\circ (H\otimes \delta_H)\circ (\mu_H\otimes H)\circ (H\otimes \lambda_H\otimes H)\circ (H\otimes \delta_H)\circ i_{H\otimes H}$ \item[ ]$=p_{H\otimes H}\circ (\mu_H\otimes H)\circ (H\otimes \Pi_{H}^{R}\otimes H)\circ (H\otimes \delta_H)\circ i_{H\otimes H}$ \item[ ]$=p_{H\otimes H}\circ \nabla_H\circ i_{H\otimes H}$ \item[ ]$=id_{H\square H}.$ \end{itemize} In the preceding calculations, the first equality follows by the definition of $\gamma_H$; the second one relies on the coassociativity of $\delta_{H}$ and on (a4-7) of Definition \ref{Weak-Hopf-quasigroup}; in the third one we use (\ref{nabladeH}); finally, the last one is a direct consequence of the factorization of $\nabla_{H}$.
On the other hand, \begin{itemize} \item[ ]$\hspace{0.38cm} \gamma_H^{-1}\circ \gamma_H\circ n_H$ \item[ ]$=n_H\circ (\mu_H\otimes H)\circ (H\otimes \lambda_H\otimes H)\circ (H\otimes \delta_H)\circ \nabla_H\circ (\mu_H\otimes H)\circ (H\otimes \delta_H)$ \item[ ]$=n_H\circ (\mu_H\otimes H)\circ (H\otimes \lambda_H\otimes H)\circ (H\otimes \delta_H)\circ (\mu_H\otimes H)\circ (H\otimes \delta_H)$ \item[ ]$=n_H\circ (\mu_H\otimes H)\circ (H\otimes \Pi_{H}^{L}\otimes H)\circ (H\otimes \delta_H)$ \item[ ]$=n_H\circ (\mu_H\otimes H)\circ (H\otimes (i_L\circ p_L)\otimes H)\circ (H\otimes \delta_H)$ \item[ ]$=n_H\circ (H\otimes (\Pi_{H}^{L}*id_H))$ \item[ ]$=n_H,$ \end{itemize} where the first equality follows by the definition of $\gamma_H$; the second one by applying (\ref{nabla-3}) to the right $H$-comodule magma $H$. The third equality is a consequence of the coassociativity of $\delta_{H}$ and (a4-6) of Definition \ref{Weak-Hopf-quasigroup}; the fourth one follows because $\Pi_{H}^{L}=i_L\circ p_L$; the fifth equality uses the properties of $n_{H}$ and the last one follows by (\ref{pi-l}). As a consequence, $\gamma_H^{-1}\circ \gamma_H=id_{H\otimes_{H_L}H}$ and $H_L\hookrightarrow H$ is a weak $H$-Galois extension. Now we must show that the extension has a normal basis. Let $\Omega_H:H_L\otimes H\rightarrow H_L\otimes H$ be the morphism defined as $\Omega_H=(p_L\otimes H)\circ \delta_H\circ \mu_H\circ (i_L\otimes H)$. By (\ref{pi-l}), $\Omega_H$ is idempotent. Moreover, using that $i_{L}$ is an equalizer, (a1) of Definition \ref{Weak-Hopf-quasigroup}, and (\ref{mu-pi-r-var}) we obtain that $\Omega_H=((p_L\circ \mu_H)\otimes H)\circ (i_{L}\otimes \delta_H)$ and then $\Omega_H$ is a right $H$-comodule morphism. 
Moreover, using (\ref{pi-delta-mu-pi-1}) and the equality (\ref{monoid-hl-2}), \begin{itemize} \item[ ]$\hspace{0.38cm} (\mu_{H_L}\otimes H)\circ (H_L\otimes \Omega_H)$ \item[ ]$=((p_L\circ \mu_H\circ (i_{H_L}\otimes \Pi_{H}^{L}))\otimes H)\circ (H_L\otimes i_{H_L}\otimes \delta_H)$ \item[ ]$=((p_L\circ \mu_H\circ (i_{H_L}\otimes H))\otimes H)\circ (H_L\otimes i_{H_L}\otimes \delta_H)$ \item[ ]$=((p_L\circ \mu_H\circ (\mu_H\circ (i_{H_L}\otimes i_{H_L})\otimes H))\otimes H)\circ (H_L\otimes H_L\otimes \delta_H)$ \item[ ]$=\Omega_H\circ (\mu_{H_L}\otimes H),$ \end{itemize} and $\Omega_H$ is a morphism of left $H_L$-modules. On the other hand, let $s_H:H_L\times H\rightarrow H_L\otimes H$ and $r_H:H_L\otimes H\rightarrow H_L\times H$ be the morphisms such that $s_H\circ r_H=\Omega_H$ and $r_H\circ s_H=id_{H_L\times H}$ and define $b_H=r_H\circ (p_L\otimes H)\circ \delta_H$. It is not difficult to see that $b_H$ is a right $H$-comodule isomorphism with inverse $b_H^{-1}=\mu_H\circ (i_{H_L}\otimes H)\circ s_H$. Moreover, \begin{itemize} \item[ ]$\hspace{0.38cm} \varphi_{H_{L}\times H}\circ (H_L\otimes b_H)$ \item[ ]$=r_H\circ (\mu_{H_L}\otimes H)\circ (H_L\otimes \Omega_H)\circ (H_L\otimes ((p_L\otimes H)\circ \delta_H))$ \item[ ]$=r_H\circ (\mu_{H_L}\otimes H)\circ (H_L\otimes \Omega_H)\circ (H_L\otimes \eta_{H_L}\otimes H)$ \item[ ]$=r_H$ \item[ ]$=r_H\circ \Omega_H$ \item[ ]$=b_H\circ \mu_H\circ (i_{L}\otimes H),$ \end{itemize} and $H_L\hookrightarrow H$ is a weak $H$-Galois extension with normal basis. Finally, in this case, if $H\otimes -$ preserves coequalizers, the morphism $\gamma_H^{-1}$ is almost lineal. Indeed: Let $\varphi_{H\otimes_{H_L}H}:H\otimes H\otimes_{H_L}H\rightarrow H\otimes_{H_L}H$ be the factorization through the coequalizer $H\otimes n_{H}$ of the morphism $n_{H}\circ (\mu_H\otimes H)$, i.e., the morphism such that \begin{equation} \label{varphiparaH} \varphi_{H\otimes_{H_L}H}\circ (H\otimes n_{H})=n_{H}\circ (\mu_H\otimes H).
\end{equation} Then, by (a4-3) of Definition \ref{Weak-Hopf-quasigroup}, (\ref{nabladeH}) and (\ref{varphiparaH}), \begin{itemize} \item[ ]$\hspace{0.38cm} \varphi_{H\otimes_{H_L}H}\circ (H\otimes (\gamma_H^{-1}\circ p_{H\otimes H}\circ (\eta_H\otimes H)))$ \item[ ]$=\varphi_{H\otimes_{H_L}H}\circ (H\otimes n_H)\circ (H\otimes ((\mu_H\otimes H)\circ (\Pi_{H}^{R}\otimes \lambda_H\otimes H)\circ (H\otimes \delta_H)\circ \delta_H))$ \item[ ]$=n_H\circ (\mu_H\otimes H)\circ (H\otimes \lambda_H\otimes H)\circ (H\otimes \delta_H)$ \item[ ]$=n_H\circ (\mu_H\otimes H)\circ (H\otimes \lambda_H\otimes H)\circ (\mu_H\otimes \delta_H)\circ (H\otimes \Pi_{H}^{R}\otimes H)\circ (H\otimes \delta_H)$ \item[ ]$=\gamma_H^{-1}\circ p_{H\otimes H},$ \end{itemize} and $\gamma_H^{-1}$ is almost lineal. } \end{example} To finish this section we show two technical lemmas that will be useful in order to obtain the main result of this paper, which gives a characterization of weak $H$-Galois extensions with normal basis. \begin{lemma} \label{igualdadesgalois} Let $H$ be a weak Hopf quasigroup and let $A^{co H}\hookrightarrow A$ be a weak $H$-Galois extension. Then the following equalities hold: \begin{equation} \label{igualdadesgalois-1} \rho^{1}_{A\otimes_{A^{co H}}A}\circ \gamma_{A}^{-1}=((\gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes \mu_H\otimes H)\circ (\rho_A\otimes ((\lambda_H\otimes H)\circ \delta_H))\circ i_{A\otimes H}, \end{equation} \begin{equation} \label{igualdadesgalois-2} ((\gamma_A^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes \delta_H)=\rho^{2}_{A\otimes_{A^{co H}}A}\circ \gamma_A^{-1} \circ p_{A\otimes H}. \end{equation} \end{lemma} \begin{proof} The first equality follows easily from (\ref{gammarho-1}) by composing with $\gamma_{A}^{-1}\otimes H$ on the left and with $\gamma_{A}^{-1}$ on the right.
On the other hand, if we compose (\ref{gammarho-2}) with $\gamma_{A}^{-1}\otimes H$ on the left and with $\gamma_{A}^{-1}\circ p_{A\otimes H}$ on the right, we obtain (\ref{igualdadesgalois-2}). \end{proof} \begin{lemma} \label{morfismomA} Let $H$ be a weak Hopf quasigroup and let $A^{co H}\hookrightarrow A$ be a weak $H$-Galois extension with normal basis. Then there is a unique morphism $m_A:A\otimes_{A^{co H}}A\rightarrow A$ such that \begin{equation} \label{condicionmA} m_A\circ n_{A}=\mu_A\circ (A\otimes ((i_{A}\otimes \varepsilon_H)\circ s_A\circ b_A)). \end{equation} Moreover, the equalities \begin{equation} \label{SegundacondicionmA} m_A\circ \gamma_A^{-1}\circ p_{A\otimes H}\circ \rho_A=(i_A\otimes \varepsilon_H)\circ s_A\circ b_A \end{equation} and \begin{equation} \label{terceracondicionmA} \rho_A\circ m_A=(m_A\otimes H)\circ \rho^{1}_{A\otimes_{A^{co H}}A} \end{equation} hold. \end{lemma} \begin{proof} The proof for (\ref{condicionmA}) is similar to the one given in Lemma 1.9 of \cite{AFG2} but using (\ref{AsubH-2}) instead of the associativity. On the other hand, \begin{itemize} \item[ ]$\hspace{0.38cm} m_A\circ \gamma_A^{-1}\circ p_{A\otimes H}\circ \rho_A$ \item[ ]$=m_A\circ \gamma_A^{-1}\circ \gamma_A\circ n_{A}\circ (\eta_A\otimes A)$ \item[ ]$=m_A\circ n_{A}\circ (\eta_A\otimes A)$ \item[ ]$=(i_A\otimes \varepsilon_H)\circ s_A\circ b_A,$ \end{itemize} and we have (\ref{SegundacondicionmA}).
As for (\ref{terceracondicionmA}), composing with the coequalizer $n_{A}$ and using (\ref{SegundacondicionmA}), (\ref{muArhoA-2}), the naturalness of $c$, (\ref{condicionmA}) and (\ref{rho-fact-1}), \begin{itemize} \item[ ]$\hspace{0.38cm} \rho_A\circ m_A\circ n_{A}$ \item[ ]$=((\rho_A\circ \mu_A\circ (A\otimes i_A))\otimes \varepsilon_H)\circ (A\otimes (s_A\circ b_A))$ \item[ ]$=(((\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes i_A))\otimes \varepsilon_H)\circ (A\otimes (s_A\circ b_A))$ \item[ ]$=((m_A\circ n_{A})\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_A\otimes A)$ \item[ ]$=(m_A\otimes H)\circ \rho^{1}_{A\otimes_{A^{co H}}A}\circ n_{A},$ \end{itemize} and the equality (\ref{terceracondicionmA}) holds. \end{proof} Note that in the previous proof, due to the lack of associativity, we cannot say that $m_A$ is a left $A$-module morphism. Nevertheless, if the functor $A\otimes -$ preserves coequalizers, by (\ref{AsubH-3}) the equality \begin{equation} \label{msubAdemodulos} \mu_A\circ (A\otimes m_A)=m_A\circ \varphi_{A\otimes_{A^{co H}}A} \end{equation} holds. \section{Cleft extensions associated to a weak Hopf quasigroup} In this section we introduce the notion of weak $H$-cleft extension associated to a weak Hopf quasigroup $H$. As particular instances we recover the theory of cleft extensions associated to a weak Hopf algebra \cite{nmra1, AFG2} and to a Hopf quasigroup \cite{AFGS-2, AFG-3}. \begin{definition} \label{Cleft} {\rm Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma.
We will say that $A^{co H}\hookrightarrow A$ is a weak $H$-cleft extension if there exists a right $H$-comodule morphism $h:H\rightarrow A$ (called the cleaving morphism) and a morphism $h^{-1}:H\rightarrow A$ such that \begin{itemize} \item[(c1)] $h^{-1}*h=(A\otimes (\varepsilon_H\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A)).$ \item[(c2)] $(A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ h^{-1}))\circ \delta_H=(A\otimes \overline{\Pi}_{H}^{R})\circ \rho_A\circ h^{-1}.$ \item[(c3)] $\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h^{-1}\otimes h)\circ (A\otimes \delta_H)=\mu_{A}\circ (A\otimes (h^{-1}*h)).$ \item[(c4)] $\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h\otimes h^{-1})\circ (A\otimes \delta_H)=\mu_{A}\circ (A\otimes (h*h^{-1})).$ \end{itemize} } \end{definition} \begin{example} \label{Hescleft} {\rm Let $H$ be a weak Hopf quasigroup. Then $H_L\hookrightarrow H$ is a weak $H$-cleft extension with cleaving map $h=id_H$ and $h^{-1}=\lambda_H$. } \end{example} Note that if $H$ is a weak Hopf algebra and $(A,\rho_{A})$ is a right $H$-comodule monoid, conditions (c3) and (c4) trivialize. Then, in this case, we get the definition of weak $H$-cleft extension given in \cite{AFG2}. On the other hand, as a particular case, if $H$ is a Hopf quasigroup we obtain the following definition of weak $H$-cleft extension: \begin{definition} \label{CleftparaHq} {\rm Let $H$ be a Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma. 
We will say that $A^{co H}\hookrightarrow A$ is a weak $H$-cleft extension if there exists a right $H$-comodule morphism $h:H\rightarrow A$ and a morphism $h^{-1}:H\rightarrow A$ such that \begin{itemize} \item[(d1)] $h^{-1}*h=\varepsilon_{H}\otimes \eta_{A}.$ \item[(d2)] $(A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ h^{-1}))\circ \delta_H=h^{-1}\otimes \eta_H.$ \item[(d3)] $\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h^{-1}\otimes h)\circ (A\otimes \delta_H)=A\otimes \varepsilon_H.$ \item[(d4)] $\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h\otimes h^{-1})\circ (A\otimes \delta_H)=\mu_{A}\circ (A\otimes (h*h^{-1})).$ \end{itemize} } \end{definition} \begin{remark} \label{cleft-previo} {\rm Let $H$ be a Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma. Let $h:H\rightarrow A$ be a comodule morphism and let $h^{-1}:H\rightarrow A$ be a morphism. Note that, in general, the convolution product $h*h^{-1}$ is not $\varepsilon_H\otimes \eta_A$. If it is, condition (d4) turns into \begin{equation} \label{d4-new} \mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h\otimes h^{-1})\circ (A\otimes \delta_H)=A\otimes \varepsilon_H. \end{equation} On the other hand, if we assume (\ref{d4-new}), we have that $h*h^{-1}=\varepsilon_H\otimes \eta_A$ and then \begin{equation} \label{primeraequiv} \rho_{A}\circ h^{-1}=(h^{-1}\otimes \lambda_{H})\circ c_{H,H}\circ \delta_{H} \end{equation} holds.
Indeed: \begin{itemize} \item[ ]$\hspace{0.38cm}(h^{-1}\otimes \lambda_{H})\circ c_{H,H}\circ \delta_{H}$ \item[ ]$= (\rho_{A}\circ (h\ast h^{-1}))\ast ((h^{-1}\otimes \lambda_{H})\circ c_{H,H}\circ \delta_{H})$ \item[ ]$= \mu_{A\otimes H}\circ (( \mu_{A\otimes H}\circ ((\rho_{A}\circ h^{-1})\otimes (\rho_{A}\circ h))\circ \delta_{H})\otimes ((h^{-1}\otimes \lambda_{H})\circ c_{H,H}\circ \delta_{H}) )\circ \delta_{H}$ \item[ ]$= (\mu_{A}\otimes H)\circ (A\otimes c_{H,A})\circ (\mu_{A}\otimes (\mu_{H}\circ (\mu_{H}\otimes \lambda_{H})\circ (H\otimes \delta_{H}))\otimes A )\circ (A\otimes c_{H,A}\otimes H\otimes A)$ \item[ ]$\hspace{0.38cm}\circ ((\rho_{A}\circ h^{-1})\otimes ((h\otimes H)\circ \delta_{H})\otimes h^{-1}) \circ (H\otimes \delta_{H})\circ \delta_{H} $ \item[ ]$= (\mu_{A}\otimes H)\circ (\mu_{A}\otimes c_{H,A})\circ (A\otimes c_{H,A}\otimes A)\circ ((\rho_{A}\circ h^{-1})\otimes ((h\otimes h^{-1})\circ \delta_{H}))\circ \delta_{H}$ \item[ ]$=\rho_{A}\circ h^{-1}.$ \end{itemize} In the preceding equalities, the first one follows by $h*h^{-1}=\varepsilon_H\otimes \eta_A$ and the second one by (\ref{chmagma}). In the third one we used that $h$ is a comodule morphism, the coassociativity of $\delta_{H}$ and the naturalness of $c$. The fourth one is a consequence of the quasigroup structure of $H$ and, finally, the last one follows by the naturalness of $c$ and (\ref{d4-new}).
If (\ref{primeraequiv}) holds, we obtain (d2) because, using the coassociativity of $\delta_{H}$ and the naturalness of $c$: \begin{itemize} \item[ ]$\hspace{0.38cm} (A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ h^{-1}))\circ \delta_H $ \item[ ]$=(A\otimes \mu_{H})\circ (c_{H,A}\otimes H)\circ (H\otimes ( (h^{-1}\otimes \lambda_{H})\circ c_{H,H}\circ \delta_{H}))\circ \delta_{H} $ \item[ ]$= (h^{-1}\otimes H)\circ c_{H,H}\circ ((id_{H}\ast \lambda_{H})\otimes H)\circ \delta_{H} $ \item[ ]$= h^{-1}\otimes \eta_{H}.$ \end{itemize} Therefore, if $h*h^{-1}=\varepsilon_H\otimes \eta_A$ and $h$ is total ($h\circ\eta_{H}=\eta_{A}$), we recover the notion of cleft comodule algebra (or $H$-cleft extension for Hopf quasigroups) introduced in \cite{AFGS-2}. } \end{remark} In the following Proposition we collect the main properties of weak $H$-cleft extensions. \begin{proposition} \label{propiedades basicas} Let $H$ be a weak Hopf quasigroup and let $A^{co H}\hookrightarrow A$ be a weak $H$-cleft extension with cleaving morphism $h$. Then we have that \begin{itemize} \item[(i)] The morphisms $h*h^{-1}$ and $q_A=\mu_{A}\circ (A\otimes h^{-1})\circ \rho_A$ factorize through the equalizer $i_{A}.$ \item[(ii)] $\mu_A\circ ((h^{-1}*h)\otimes A)=(A\otimes (\varepsilon_H \circ \mu_H))\circ (c_{H,A}\otimes H)\circ (H\otimes \rho_A).$ \item[(iii)] $(h^{-1}*h)*h^{-1}=h^{-1}=h^{-1}*(h*h^{-1}).$ \item[(iv)] $h*(h^{-1}*h)=h=(h*h^{-1})*h.$ \item[(v)] $\mu_{A}\circ (A\otimes (h^{-1}*h))\circ \rho_A=id_A.$ \item[(vi)] If $A^{co H}\hookrightarrow A$ satisfies (\ref{AsubH-2}), the equality $\mu_A\circ (\mu_A\otimes A)\circ (A\otimes q_A\otimes h)\circ (A\otimes \rho_A)=\mu_A$ holds. \end{itemize} \end{proposition} \begin{proof} (i) Taking into account that $h$ is a morphism of right $H$-comodules, $h*h^{-1}=q_A\circ h$ and then it suffices to prove the result for the morphism $q_A$.
\begin{itemize} \item[ ]$\hspace{0.38cm} \rho_A\circ q_A$ \item[ ]$=\mu_{A\otimes H}\circ (\rho_A\otimes (\rho_A\circ h^{-1}))\circ \rho_A$ \item[ ]$=(\mu_A\otimes H)\circ (A\otimes ((A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ h^{-1}))\circ \delta_{H}))\circ \rho_A$ \item[ ]$=(\mu_A\otimes H)\circ (A\otimes ((A\otimes \overline{\Pi}_{H}^{R})\circ \rho_A\circ h^{-1}))\circ \rho_A$ \item[ ]$=(\mu_A\otimes H)\circ (A\otimes ((A\otimes (\overline{\Pi}_{H}^{R}\circ \overline{\Pi}_{H}^{R}))\circ \rho_A\circ h^{-1}))\circ \rho_A$ \item[ ]$=(A\otimes \overline{\Pi}_{H}^{R})\circ \rho_A\circ q_A.$ \end{itemize} In these computations, the first and the second equalities follow because $A$ is a right $H$-comodule magma; the third one by (c2) of Definition \ref{Cleft}; the fourth one relies on the idempotent character of $\overline{\Pi}_{H}^{R}$; finally, the last equality uses the arguments of the preceding identities but in the inverse order. As a consequence, there is a morphism $p_A:A\rightarrow A^{co H}$ such that $q_A=i_A\circ p_A$. Assertion (ii) is a direct consequence of (c1) of Definition \ref{Cleft}, (b4) of Definition \ref{H-comodulomagma}, (\ref{mu-pi-l}) and the naturalness of $c$. Indeed: \begin{itemize} \item[ ]$\hspace{0.38cm} \mu_A\circ ((h^{-1}*h)\otimes A)$ \item[ ]$= \mu_A\circ (((A\otimes (\varepsilon_H\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A)))\otimes A) $ \item[ ]$= (A\otimes (\varepsilon_{H}\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (H\otimes (((\mu_{A}\circ c_{A,A})\otimes H)\circ (A\otimes (\rho_{A}\circ \eta_{A})))) $ \item[ ]$=(A\otimes (\varepsilon_{H}\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (H \otimes ((A\otimes \Pi_{H}^{L})\circ \rho_{A})) $ \item[ ]$= (A\otimes (\varepsilon_{H}\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (H \otimes \rho_{A}). 
$ \end{itemize} As for (iii), we get $(h^{-1}*h)*h^{-1}=h^{-1}*(h*h^{-1})$ by (c4) of Definition \ref{Cleft} and by the coassociativity of $\delta_{H}$. The equality $(h^{-1}*h)*h^{-1}=h^{-1}$ follows by (ii) and (c2) of Definition \ref{Cleft}. In a similar way, $h*(h^{-1}*h)=(h*h^{-1})*h$ is a consequence of the coassociativity of $\delta_{H}$ and (c3) of Definition \ref{Cleft}. The equality $h*(h^{-1}*h)=h$ follows using that $h$ is a comodule morphism, (c1) of Definition \ref{Cleft} and (\ref{chmagma}). It is easy to prove (v) taking into account (c1) of Definition \ref{Cleft} and (\ref{chmagma}). Finally, by (\ref{AsubH-2}), the condition of right $H$-comodule for $A$, (c3) of Definition \ref{Cleft} and (v), we have \begin{itemize} \item[ ]$\hspace{0.38cm} \mu_A\circ (\mu_A\otimes A)\circ (A\otimes q_A\otimes h)\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ (\mu_A\otimes A)\circ (A\otimes (i_A\circ p_A)\otimes h)\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ (A\otimes \mu_A)\circ (A\otimes (i_A\circ p_A)\otimes h)\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ (A\otimes (\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h^{-1}\otimes h)\circ (A\otimes \delta_H)))\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ (A\otimes (\mu_A\circ (A\otimes (h^{-1}*h))\circ \rho_A))$ \item[ ]$=\mu_A,$ \end{itemize} and the proof is complete. \end{proof} \begin{remark} {\rm Note that, in the previous result, we did not use (c4) of Definition \ref{Cleft}. } \end{remark} \begin{proposition} \label{equivalencia1} Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma satisfying (\ref{AsubH-2}). Assume that there exist $h:H\rightarrow A$ and $h^{-1}:H\rightarrow A$ such that $h$ is a right $H$-comodule morphism and conditions (c1), (c3) and (c4) of Definition \ref{Cleft} hold. Then condition (c2) is equivalent to (\ref{primeraequiv}).
\end{proposition} \begin{proof} First we will prove (c2)$\Rightarrow$ (\ref{primeraequiv}): Let $f$, $g$ and $l$ be the morphisms $f=(h^{-1}\otimes \lambda_H)\circ c_{H,H}\circ \delta_H$, $g=\rho_A\circ h$ and $l=\rho_A\circ h^{-1}$. We will show that $f=l$. First of all, note that \begin{itemize} \item[ ]$\hspace{0.38cm} f*g$ \item[ ]$=(A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (\lambda_H\otimes (h^{-1}*h)\otimes H)\circ (H\otimes \delta_H)\circ \delta_H$ \item[ ]$=(A\otimes \mu_H)\circ (c_{H,A}\otimes (((\varepsilon_H\circ \mu_H)\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_H\otimes H)))\circ (\lambda_H\otimes c_{H,A}\otimes H)$ \item[ ]$\hspace{0.38cm} \circ (\delta_H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=(A\otimes \mu_H)\circ (c_{H,A}\otimes \mu_H)\circ (\lambda_H\otimes c_{H,A}\otimes H)\circ (\delta_H\otimes ((A\otimes \Pi_{H}^{L})\circ \rho_A\circ \eta_A))$ \item[ ]$=(A\otimes (\mu_H\circ (\lambda_H\otimes \mu_H)\circ (\delta_H\otimes H)))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=(A\otimes (\mu_H\circ (\Pi_{H}^{R}\otimes H)))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=(A\otimes H\otimes (\varepsilon_H\circ \mu_H\circ c_{H,H}))\circ (A\otimes (\mu_H\circ c_{H,H})\otimes c_{H,H}) \circ (((\rho_A\circ \eta_A)\otimes (\delta_H\circ \eta_H))\otimes H)$ \item[ ]$=\rho_A\circ (A\otimes (\varepsilon_H\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=\rho_A\circ (h^{-1}*h)$ \item[ ]$=l*g,$ \end{itemize} where the first equality follows because $h$ is a comodule morphism as well as by the coassociativity of $\delta_{H}$ and the naturalness of $c$; the second one follows by (c1) of Definition \ref{Cleft}, the coassociativity of $\delta_{H}$ and the naturalness of $c$; in the third one we use (\ref{mu-pi-l}), and the fourth one is a consequence of (b6) of Definition \ref{H-comodulomagma} and the naturalness of $c$. 
The fifth equality relies on (a4-4) of Definition \ref{Weak-Hopf-quasigroup}, the sixth one on (\ref{mu-pi-r}) and the naturalness of $c$ and the seventh one follows from the fact that $A$ is a right $H$-comodule and the naturalness of $c$. Finally, the eighth equality is a consequence of (c1) of Definition \ref{Cleft} and the last one follows by (\ref{chmagma}). On the other hand, the following identity holds \begin{equation} \label{aux-10} (h^{-1}*h)\circ \mu_H=((\varepsilon_H\circ \mu_H)\otimes (h^{-1}*h))\circ (H\otimes \delta_H). \end{equation} Indeed: using (c1) of Definition \ref{Cleft}, the naturalness of $c$ and (a2) of Definition \ref{Weak-Hopf-quasigroup}, \begin{itemize} \item[ ]$\hspace{0.38cm} (h^{-1}*h)\circ \mu_H$ \item[ ]$= (A\otimes (\varepsilon_{H}\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (\mu_{H}\otimes (\rho_{A}\circ \eta_{A})) $ \item[ ]$= (A\otimes (\varepsilon_{H}\circ \mu_{H}\circ (\mu_{H}\otimes H)))\circ (c_{H,A}\otimes H\otimes H)\circ (H\otimes c_{H,A}\otimes H)\circ (H\otimes H\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$=(A\otimes (((\varepsilon_{H}\circ \mu_{H})\otimes (\varepsilon_{H}\circ \mu_{H}))\circ (H\otimes \delta_{H}\otimes H)))\circ (c_{H,A}\otimes H\otimes H)\circ (H\otimes c_{H,A}\otimes H)\circ (H\otimes H\otimes (\rho_{A}\circ \eta_{A})) $ \item[ ]$= (A\otimes (\varepsilon_{H}\circ \mu_{H}))\circ ((\varepsilon_{H}\circ \mu_{H})\otimes c_{H,A}\otimes H)\circ (H\otimes \delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$=((\varepsilon_H\circ \mu_H)\otimes (h^{-1}*h))\circ (H\otimes \delta_H).
$ \end{itemize} Then, $(f*g)*f=f$ because \begin{itemize} \item[ ]$\hspace{0.38cm} (f*g)*f $ \item[ ]$= (\mu_{A}\otimes H)\circ (A\otimes c_{H,A})\circ (A\otimes (\mu_H\circ(\mu_H\otimes \lambda_H)\circ (H\otimes \delta_H))\otimes A)\circ ((c_{H,A}\circ (\lambda_{H}\otimes (h^{-1}\ast h)))\otimes H\otimes h^{-1})$ \item[ ]$\hspace{0.38cm}\circ (H\otimes ((\delta_{H}\otimes H)\circ \delta_{H}))\circ \delta_{H} $ \item[ ]$= (\mu_{A}\otimes H)\circ (A\otimes c_{H,A})\circ (A\otimes \mu_H\otimes A)\circ ((c_{H,A}\circ (\lambda_{H}\otimes (h^{-1}\ast h)))\otimes H\otimes h^{-1})$ \item[ ]$\hspace{0.38cm}\circ (H\otimes ((((H\otimes \Pi_{H}^{L})\circ \delta_{H}) \otimes H)\circ \delta_{H}))\circ \delta_{H}$ \item[ ]$= (\mu_{A}\otimes H)\circ (A\otimes c_{H,A})\circ (A\otimes \mu_H\otimes A)\circ ((c_{H,A}\circ (\lambda_{H}\otimes ((h^{-1}\ast h)\circ \mu_{H})))\otimes H\otimes h^{-1}) $ \item[ ]$\hspace{0.38cm}\circ (H\otimes ((H\otimes c_{H,H})\circ ((\delta_{H}\circ \eta_{H})\otimes H))\otimes H)\circ (\delta_{H}\otimes H)\circ \delta_{H}$ \item[ ]$= (\mu_{A}\otimes H)\circ (A\otimes c_{H,A})\circ (A\otimes \mu_H\otimes A)\circ ((c_{H,A}\circ (\lambda_{H}\otimes (((\varepsilon_H\circ \mu_H)\otimes (h^{-1}*h))\circ (H\otimes \delta_H))))\otimes H\otimes h^{-1}) $ \item[ ]$\hspace{0.38cm}\circ (H\otimes ((H\otimes c_{H,H})\circ ((\delta_{H}\circ \eta_{H})\otimes H))\otimes H)\circ (\delta_{H}\otimes H)\circ \delta_{H}$ \item[ ]$=(A\otimes \mu_{H})\circ (c_{H,A}\otimes H)\circ (H\otimes c_{H,A})\circ (\lambda_{H}\otimes \Pi_{H}^{L}\otimes ((h^{-1}\ast h)\ast h^{-1})) \circ (\delta_{H}\otimes H)\circ \delta_{H}$ \item[ ]$=c_{H,A}\circ ((\lambda_{H}\ast \Pi_{H}^{L})\otimes h^{-1})\circ \delta_{H}$ \item[ ]$=c_{H,A}\circ (\lambda_{H}\otimes h^{-1})\circ \delta_{H} $ \item[ ]$= f,$ \end{itemize} where the first equality is a consequence of the coassociativity of $\delta_{H}$, the naturalness of $c$ and the condition of comodule morphism for $h$. 
The second one follows by (a4-6) of Definition \ref{Weak-Hopf-quasigroup}, the third one follows by (\ref{delta-pi-l}) and the fourth one relies on (\ref{aux-10}). In the fifth one we used the coassociativity of $\delta_{H}$ and the naturalness of $c$. The sixth one can be obtained using (iii) of Proposition \ref{propiedades basicas} and the naturalness of $c$, the seventh one follows by (a4-3) of Definition \ref{Weak-Hopf-quasigroup} and the last one follows by the naturalness of $c$. As a consequence, $f=l$. Indeed: \begin{itemize} \item[ ]$\hspace{0.38cm} f$ \item[ ]$=(f*g)*f$ \item[ ]$=(l*g)*f$ \item[ ]$=(\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ (\mu_A\otimes(\mu_H\circ (\mu_H\otimes \lambda_H)\circ (H\otimes \delta_H))\otimes A)$ \item[ ]$\hspace{0.38cm} \circ (A\otimes c_{H,A}\otimes H\otimes A)\circ ((\rho_A\circ h^{-1})\otimes ((h\otimes H)\circ \delta_H)\otimes h^{-1})\circ (\delta_H\otimes H)\circ \delta_H$ \item[ ]$=(\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ ((\mu_{A\otimes H}\circ (\rho_A\otimes ((A\otimes \Pi_{H}^{L})\circ \rho_A)))\otimes A)\circ(( (h^{-1}\otimes h)\circ \delta_{H}) \otimes h^{-1})\circ \delta_H$ \item[ ]$=(\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ (((\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ (\rho_{A}\otimes A))\otimes A)\circ (( (h^{-1}\otimes h)\circ \delta_{H}) \otimes h^{-1})\circ \delta_H$ \item[ ]$=((\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h\otimes h^{-1})\circ (A\otimes \delta_H))\otimes H)\circ (A\otimes c_{H,H})\circ ((\rho_A\circ h^{-1})\otimes H)\circ \delta_H$ \item[ ]$=(\mu_A\circ (A\otimes (h\ast h^{-1}))\otimes H)\circ (A\otimes c_{H,H})\circ ((\rho_A\circ h^{-1})\otimes H)\circ \delta_H$ \item[ ]$=(\mu_A\otimes H)\circ (A\otimes c_{H,A})\circ ((\rho_A\circ h^{-1})\otimes (h\ast h^{-1}))\circ \delta_H$ \item[ ]$=\mu_{A\otimes H}\circ ((\rho_A\circ h^{-1})\otimes ((A\otimes \Pi_{H}^{L})\circ \rho_A\circ (h\ast h^{-1})))\circ \delta_H$ \item[ ]$=\mu_{A\otimes H}\circ ((\rho_A\circ h^{-1})\otimes 
(\rho_A\circ (h\ast h^{-1})))\circ \delta_H$ \item[ ]$=\rho_A\circ (h^{-1}\ast (h\ast h^{-1}))$ \item[ ]$=l,$ \end{itemize} where the first and the second equalities follow by the identities previously proved, and the third one is a consequence of the coassociativity of $\delta_{H}$, the naturalness of $c$ and the condition of comodule morphism for $h$. In the fourth equality we used that $h$ is a morphism of comodules and (a4-6) of Definition \ref{Weak-Hopf-quasigroup}, while the fifth and the ninth ones follow by (\ref{muArhoA-22}). The sixth one relies on the coassociativity of $\delta_{H}$ and the naturalness of $c$, the seventh one on (c4) of Definition \ref{Cleft} and the eighth one follows by the naturalness of $c$. In the tenth one we applied (i) of Proposition \ref{propiedades basicas} and the eleventh one relies on (\ref{chmagma}). Finally, the last one follows by (iii) of Proposition \ref{propiedades basicas}. Conversely, (\ref{primeraequiv}) $\Rightarrow$ (c2). Indeed: \begin{itemize} \item[ ]$\hspace{0.38cm}(A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ h^{-1}))\circ \delta_H$ \item[ ]$= (A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes ((h^{-1}\otimes \lambda_H)\circ c_{H,H} \circ \delta_H))\circ \delta_H $ \item[ ]$=(h^{-1}\otimes \Pi_{H}^{L})\circ c_{H,H}\circ \delta_H $ \item[ ]$=(h^{-1}\otimes (\overline{\Pi}_{H}^{R}\circ \lambda_{H}))\circ c_{H,H}\circ \delta_H $ \item[ ]$=(A\otimes \overline{\Pi}_{H}^{R})\circ \rho_A\circ h^{-1}, $ \end{itemize} where the first and the fourth equalities follow by (\ref{primeraequiv}), the second one by the coassociativity of $\delta_{H}$ and the naturalness of $c$, and the third one by (\ref{pi-antipode-composition-3}). \end{proof} \begin{proposition} \label{equivalencia2} Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma satisfying (\ref{AsubH-2}).
Assume that there exist $h:H\rightarrow A$ and $h^{-1}:H\rightarrow A$ such that $h$ is a right $H$-comodule morphism and conditions (c1), (c2) and (c3) of Definition \ref{Cleft} hold. Then condition (c4) is equivalent to \begin{equation} \label{segundaequiv} \mu_A\circ (\mu_A\otimes h^{-1})\circ (A\otimes \rho_A)=\mu_A\circ (A\otimes q_A). \end{equation} \end{proposition} \begin{proof} We get (c4) of Definition \ref{Cleft} by composing with $A\otimes h$ in (\ref{segundaequiv}) and using that $h$ is a morphism of $H$-comodules. As for the ``if'' part, \begin{itemize} \item[ ]$\hspace{0.38cm}\mu_A\circ (\mu_A\otimes h^{-1})\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ ((\mu_{A}\circ ((\mu_{A}\circ (A\otimes q_{A}))\otimes h)\circ (A\otimes \rho_{A}))\otimes h^{-1})\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ (\mu_{A}\otimes A)\circ ((\mu_{A}\circ (A\otimes q_{A}))\otimes ((h\otimes h^{-1})\circ \delta_{H}))\circ (A\otimes \rho_A) $ \item[ ]$=\mu_A\circ (\mu_A\otimes A)\circ (A\otimes q_A\otimes (h\ast h^{-1}))\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ (A\otimes \mu_A)\circ (A\otimes q_A\otimes (h\ast h^{-1}))\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ (A\otimes (\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h\otimes h^{-1})\circ (q_{A}\otimes \delta_H)\circ \rho_{A}))$ \item[ ]$=\mu_A\circ (A\otimes \mu_A)\circ (A\otimes (\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes ((h^{-1}\otimes h)\circ\delta_{H})))\otimes h^{-1})\circ (A\otimes A\otimes \delta_{H})\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ (A\otimes \mu_A)\circ (A\otimes (\mu_A\circ (A\otimes (h^{-1}\ast h))\circ \rho_A)\otimes h^{-1})\circ (A\otimes \rho_A)$ \item[ ]$=\mu_A\circ (A\otimes q_A).$ \end{itemize} In the preceding computations, the first equality follows by (vi) of Proposition \ref{propiedades basicas}; the second one by the comodule condition for $A$, and the third and fifth ones by (c4) of Definition \ref{Cleft}; in the fourth one we use (\ref{AsubH-2}) and $q_A=i_A\circ p_A$.
The sixth equality follows because $A$ is a right $H$-comodule and by the coassociativity of $\delta_{H}$; the seventh one relies on (c3) of Definition \ref{Cleft}; finally, in the last one we use (v) of Proposition \ref{propiedades basicas}. \end{proof} \section{The main theorem} We now prove the main result of this paper, which gives a characterization of Galois extensions with normal basis in terms of cleft extensions. \begin{theorem} \label{caracterizacion} Let $H$ be a weak Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma satisfying (\ref{AsubH-2}), (\ref{AsubH-3}) and such that the functor $A\otimes -$ preserves coequalizers. The following assertions are equivalent. \begin{itemize} \item[(i)] $A^{co H}\hookrightarrow A$ is a weak $H$-Galois extension with normal basis and the morphism $\gamma_{A}^{-1}$ is almost lineal. \item[(ii)] $A^{co H}\hookrightarrow A$ is a weak $H$-cleft extension. \end{itemize} \end{theorem} \begin{proof} (i) $\Rightarrow$ (ii). Let $A^{co H}\hookrightarrow A$ be a weak $H$-Galois extension with normal basis. Using that $\Omega_A$ is a morphism of left $A^{co H}$-modules and right $H$-comodules, it is not difficult to see that so are the morphisms $\omega_A=b_A^{-1}\circ r_A:A^{co H}\otimes H\rightarrow A$ and $\omega_A^{\prime}=s_A\circ b_A:A\rightarrow A^{co H}\otimes H$. Now define $$h=\omega_A\circ (\eta_{A^{co H}}\otimes H).$$ Taking into account that $\omega_A$ is a morphism of $H$-comodules, so is $h$. Let $h^{-1}$ be the morphism defined as $$h^{-1}=m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H),$$ where $m_A$ is the morphism obtained in Lemma \ref{morfismomA}.
By Proposition \ref{monoidecoinvariantes}, (\ref{condicionmA}), and taking into account that $\omega_A^{\prime}$ is a morphism of $H$-comodules, we obtain that $$(m_A\otimes H)\circ \rho^{2}_{A\otimes_{A^{co H}}A}\circ n_{A}=(\mu_A\otimes H)\circ (A\otimes ((i_A\otimes H)\circ \omega_A^{\prime}))$$ and then, by (\ref{AsubH-2}) and using that $\omega_{A}$ is a morphism of left $A^{co H}$-modules, we get that \begin{itemize} \item[ ]$\hspace{0.38cm} \mu_A\circ (m_A\otimes (\omega_A\circ (\eta_{A^{co H}}\otimes H)))\circ \rho^{2}_{A\otimes_{A^{co H}}A}\circ n_{A}$ \item[ ]$=\mu_A\circ ((\mu_A\circ (A\otimes i_A))\otimes (\omega_A\circ (\eta_{A^{co H}}\otimes H)))\circ (A\otimes \omega_A^{\prime})$ \item[ ]$=\mu_A\circ (A\otimes (\omega_A\circ \omega_A^{\prime}))$ \item[ ]$=\mu_A.$ \end{itemize} As a consequence, \begin{equation} \label{igualdadmsubAomega} \overline{\mu}_A=\mu_A\circ (m_A\otimes h)\circ \rho^{2}_{A\otimes_{A^{co H}}A}, \end{equation} where $\overline{\mu}_A$ denotes the factorization of the morphism $\mu_A$ through the coequalizer $n_{A}$, i.e., $\overline{\mu}_A\circ n_{A}=\mu_A$. Note that \begin{equation} \label{igualdadmsubAomega-2} \overline{\mu}_A=(A\otimes \varepsilon_{H})\circ i_{A\otimes H}\circ \gamma_{A} \end{equation} also holds. Now we show conditions (c1)-(c4) of Definition \ref{Cleft}. Using (\ref{igualdadesgalois-2}), (\ref{igualdadmsubAomega}) and the equality (\ref{igualdadmsubAomega-2}), we get (c1).
Indeed, \begin{itemize} \item[ ]$\hspace{0.38cm} h^{-1}*h$ \item[ ]$=\mu_A\circ (m_A\otimes h)\circ \rho^{2}_{A\otimes_{A^{co H}}A}\circ \gamma_{A}^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)$ \item[ ]$=\overline{\mu}_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)$ \item[ ]$=(A\otimes \varepsilon_H)\circ i_{A\otimes H}\circ \gamma_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)$ \item[ ]$=(A\otimes \varepsilon_H)\circ \nabla_A\circ (\eta_A\otimes H)$ \item[ ]$=(A\otimes (\varepsilon_H\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A)).$ \end{itemize} The proof for (c2) is the following. On the one hand we have \begin{itemize} \item[ ]$\hspace{0.38cm} (A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ h^{-1}))\circ \delta_H$ \item[ ]$=(A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes ((m_A\otimes H)\circ \rho^{1}_{A\otimes_{A^{co H}}A}\circ \gamma_{A}^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)))\circ \delta_H$ \item[ ]$=(A\otimes \mu_H)\circ (c_{H,A}\otimes H) \circ (H\otimes (((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_{H}\circ (H\otimes \lambda_{H}))\otimes H)$ \item[ ]$\hspace{0.38cm}\circ (\rho_{A}\otimes \delta_{H}) \circ \nabla_{A}\circ (\eta_A\otimes H))) \circ \delta_{H}$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes \mu_{H})\circ (A\otimes c_{H,H}\otimes H)\circ (c_{H,A}\otimes (c_{H,H}\circ ((\mu_{H}\otimes H)\circ (H\otimes ( (\lambda_{H}\otimes H)\circ\delta_{H}\circ \mu_{H})))))$ \item[ ]$\hspace{0.38cm}\circ (H\otimes \rho_{A}\otimes H\otimes H)\circ (H\otimes c_{H,A}\otimes H)\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes \mu_{H})\circ (A\otimes c_{H,H}\otimes H)\circ (c_{H,A}\otimes (c_{H,H}\circ ((\mu_{H}\otimes H)$ \item[ ]$\hspace{0.38cm}\circ (H\otimes (( (\mu_{H}\circ c_{H,H}\circ (\lambda_{H}\otimes
\lambda_{H}))\otimes \mu_{H})\circ \delta_{H\otimes H})))))\circ (H\otimes \rho_{A}\otimes H\otimes H)\circ (H\otimes c_{H,A}\otimes H)$ \item[ ]$\hspace{0.38cm}\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$=((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes \mu_{H}\otimes H)\circ (c_{H,A}\otimes ((H\otimes (\mu_{H}\circ (\lambda_{H}\otimes H)))\circ (\delta_{H}\otimes H)) \otimes \mu_{H})$ \item[ ]$\hspace{0.38cm}\circ (H\otimes \rho_{A}\otimes \lambda_{H}\otimes H\otimes H)\circ (H\otimes c_{H,A}\otimes H\otimes H)\circ (\delta_{H}\otimes c_{H,A}\otimes H)\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_{H}\circ (H\otimes (\mu_{H}\circ (\Pi_{H}^{L}\otimes H))))\otimes H)\circ (c_{H,A}\otimes H\otimes H\otimes \mu_{H})$ \item[ ]$\hspace{0.38cm}\circ (H\otimes \rho_{A}\otimes \lambda_{H}\otimes H\otimes H)\circ (H\otimes c_{H,A}\otimes H\otimes H)\circ (\delta_{H}\otimes c_{H,A}\otimes H)\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_{H}\circ ((\mu_{H}\circ (H\otimes \Pi_{H}^{L}))\otimes H))\otimes H)\circ (c_{H,A}\otimes H\otimes H\otimes \mu_{H})$ \item[ ]$\hspace{0.38cm}\circ (H\otimes \rho_{A}\otimes \lambda_{H}\otimes H\otimes H)\circ (H\otimes c_{H,A}\otimes H\otimes H)\circ (\delta_{H}\otimes c_{H,A}\otimes H)\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})$ \item[ ]$\hspace{0.38cm}\circ (A\otimes (\mu_{H}\circ (((\varepsilon_{H}\circ\mu_{H})\otimes H)\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes H))\otimes H)\otimes H)\circ (c_{H,A}\otimes H\otimes H\otimes \mu_{H})$ \item[ ]$\hspace{0.38cm}\circ (H\otimes \rho_{A}\otimes \lambda_{H}\otimes H\otimes H)\circ (H\otimes
c_{H,A}\otimes H\otimes H)\circ (\delta_{H}\otimes c_{H,A}\otimes H)\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (((((A\otimes (\varepsilon_{H}\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (H\otimes \rho_{A}))\otimes \Pi_{H}^{L})$ \item[ ]$\hspace{0.38cm}\circ (H\otimes c_{H,A})\circ (\delta_{H}\otimes A))\otimes \mu_{H})\circ (H\otimes c_{H,A}\otimes H)\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (((A\otimes (((\varepsilon_{H}\circ \mu_{H})\otimes \Pi_{H}^{L})\circ (H\otimes c_{H,H})\circ (\delta_{H}\otimes H)))$ \item[ ]$\hspace{0.38cm}\circ (c_{H,A}\otimes H)\circ (H\otimes \rho_{A}))\otimes \mu_{H})\circ (H\otimes c_{H,A}\otimes H)\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (((A\otimes (\Pi_{H}^{L}\circ\mu_{H}\circ (H\otimes \Pi_{H}^{L})))\circ (c_{H,A}\otimes H)\circ (H\otimes \rho_{A}))\otimes \mu_{H})$ \item[ ]$\hspace{0.38cm}\circ (H\otimes c_{H,A}\otimes H)\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (((A\otimes (\Pi_{H}^{L}\circ\mu_{H}))\circ (c_{H,A}\otimes H)\circ (H\otimes \rho_{A}))\otimes \mu_{H})$ \item[ ]$\hspace{0.38cm}\circ (H\otimes c_{H,A}\otimes H)\circ (\delta_{H}\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes \Pi_{H}^{L}\otimes H)\circ (A\otimes (\mu_{H\otimes H}\circ (\delta_{H}\otimes \delta_{H})))$ \item[ ]$\hspace{0.38cm}\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes \Pi_{H}^{L}\otimes H)\circ (A\otimes 
(\delta_{H}\circ \mu_{H})) \circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A})),$ \end{itemize} where the first equality follows by (\ref{terceracondicionmA}), the second one follows by (\ref{igualdadesgalois-1}) and the naturalness of $c$, the third one follows by the naturalness of $c$ and the unit properties and the fourth one is a consequence of (a1) of Definition \ref{Weak-Hopf-quasigroup} and (\ref{anti-antipode-1}). The fifth and the thirteenth equalities rely on the comodule condition for $A$ and on the naturalness of $c$. In the sixth one we used (a4-5) of Definition \ref{Weak-Hopf-quasigroup} and the seventh one follows by (\ref{monoid-hl-1}). The eighth and the eleventh ones are a consequence of (\ref{mu-pi-l}) and the ninth one was obtained using the naturalness of $c$ and the coassociativity of $\delta_{H}$. The tenth one follows by the naturalness of $c$ and the twelfth one relies on (\ref{pi-delta-mu-pi-1}). Finally, the last one follows by (a1) of Definition \ref{Weak-Hopf-quasigroup}. 
On the other hand, \begin{itemize} \item[ ]$\hspace{0.38cm} (A\otimes \overline{\Pi}_{H}^{R})\circ \rho_A\circ h^{-1}$ \item[ ]$=(m_A\otimes \overline{\Pi}_{H}^{R})\circ \rho^{1}_{A\otimes_{A^{co H}}A}\circ \gamma_{A}^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)$ \item[ ]$=((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes \overline{\Pi}_{H}^{R})\circ (A\otimes c_{H,H})\circ (A\otimes \mu_H\otimes H)\circ (\rho_A\otimes ((\lambda_H\otimes H)\circ \delta_H))\circ \nabla_A\circ (\eta_A\otimes H)$ \item[ ]$=((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes \overline{\Pi}_{H}^{R})\circ (A\otimes c_{H,H})\circ (A\otimes \mu_H\otimes H)\circ (\rho_A\otimes ((\lambda_H\otimes H)\circ \delta_H\circ \mu_H))$ \item[ ]$\hspace{0.38cm} \circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes \overline{\Pi}_{H}^{R})\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_H\circ (H\otimes \mu_{H})\circ (H\otimes \lambda_{H}\otimes H)\circ (\delta_{H}\otimes H))\otimes H)$ \item[ ]$\hspace{0.38cm} \circ (\rho_A\otimes \lambda_H\otimes \mu_{H})\circ (A\otimes \delta_{H}\otimes H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes \overline{\Pi}_{H}^{R})\circ (A\otimes c_{H,H})\circ (A\otimes (\mu_H\circ (\Pi_{H}^{L}\otimes H))\otimes H) \circ (\rho_A\otimes \lambda_H\otimes \mu_{H})$ \item[ ]$\hspace{0.38cm}\circ (A\otimes \delta_{H}\otimes H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\overline{\Pi}_{H}^{R}\circ \mu_H\circ ((\overline{\Pi}_{H}^{R}\circ \lambda_{H})\otimes H))\otimes H)\circ (\rho_A\otimes \lambda_H\otimes \mu_{H})$ \item[ ]$\hspace{0.38cm} \circ (A\otimes \delta_{H}\otimes H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes 
H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\overline{\Pi}_{H}^{R}\circ \lambda_{H}\circ \mu_H\circ c_{H,H})\otimes H)\circ (\rho_A\otimes H\otimes \mu_{H})$ \item[ ]$\hspace{0.38cm} \circ (A\otimes \delta_{H}\otimes H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes (\Pi_{H}^{L}\circ \mu_H\circ c_{H,H})\otimes H)\circ (\rho_A\otimes H\otimes \mu_{H})$ \item[ ]$\hspace{0.38cm} \circ (A\otimes \delta_{H}\otimes H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes \Pi_{H}^{L}\otimes H)\circ (A\otimes (\mu_{H\otimes H}\circ (\delta_{H}\otimes \delta_{H})))$ \item[ ]$\hspace{0.38cm}\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A}))$ \item[ ]$= ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes \Pi_{H}^{L}\otimes H)\circ (A\otimes (\delta_{H}\circ \mu_{H})) \circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A})).$ \end{itemize} In the preceding computations, the first equality follows by (\ref{terceracondicionmA}), the second one by (\ref{igualdadesgalois-1}); in the third we use the unit properties and the fourth one follows by (a1) of Definition \ref{Weak-Hopf-quasigroup}, the comodule condition for $A$, and the naturalness of $c$. The fifth one is a consequence of (a4-5); the sixth one follows by (\ref{pi-antipode-composition-3}) and the naturalness of $c$. In the seventh one we applied (\ref{anti-antipode-1}) and the equality \begin{equation} \label{pil-mu-pirvar-h} \overline{\Pi}_{H}^{R}\circ \mu_{H}\circ (\overline{\Pi}_{H}^{R}\otimes H)=\overline{\Pi}_{H}^{R}\circ \mu_{H}, \end{equation} which is a consequence of (\ref{pi-delta-mu-pi-2}), (\ref{pi-composition-3}) and (\ref{pi-composition-4}). 
The eighth one relies on (\ref{pi-antipode-composition-3}), and the ninth one follows by the comodule condition for $A$ and the naturalness of $c$. Finally, the last one follows by (a1) of Definition \ref{Weak-Hopf-quasigroup}. Therefore, (c2) holds, because $$(A\otimes \mu_H)\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ h^{-1}))\circ \delta_H$$ $$=((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes H)\circ (A\otimes c_{H,H})\circ (A\otimes \Pi_{H}^{L}\otimes H)\circ (A\otimes (\delta_{H}\circ \mu_{H})) \circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_{A}\circ \eta_{A}))$$ $$=(A\otimes \overline{\Pi}_{H}^{R})\circ \rho_A\circ h^{-1}.$$ To see (c3), \begin{itemize} \item[ ]$\hspace{0.38cm} \mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h^{-1}\otimes h)\circ (A\otimes \delta_H)$ \item[ ]$=\mu_A\circ ((m_A\circ \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_{A}^{-1}\circ p_{A\otimes H}\circ ( \eta_A\otimes H))))\otimes h)\circ (A\otimes \delta_{H})$ \item[ ]$=\mu_A\circ ((m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H})\otimes h)\circ (A\otimes \delta_H)$ \item[ ]$=\mu_A\circ (m_A\otimes h)\circ \rho^{2}_{A\otimes_{A^{co H}}A}\circ \gamma_{A}^{-1}\circ p_{A\otimes H}$ \item[ ]$=\overline{\mu}_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H}$ \item[ ]$=(A\otimes \varepsilon_H)\circ \nabla_A$ \item[ ]$=(\mu_A\otimes (\varepsilon_H\circ \mu_{H}))\circ (A\otimes c_{H,A}\otimes H)\circ (A\otimes H\otimes (\rho_A\circ \eta_A))$ \item[ ]$=\mu_{A}\circ (A\otimes (h^{-1}*h)),$ \end{itemize} where the first equality follows by (\ref{msubAdemodulos}); the second one because $\gamma_{A}^{-1}$ is almost lineal (see (\ref{almostlineal})); in the third one we use (\ref{igualdadesgalois-2}); in the fourth one (\ref{igualdadmsubAomega}). The fifth one is a consequence of the equality (\ref{igualdadmsubAomega-2}); the sixth one relies on the definition of $\nabla_A$; and the last equality follows by (c1).
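For orientation only, the identity just verified for (c3) admits a familiar elementwise reading when the ambient category is that of vector spaces over a field and $c$ is the usual flip: writing $\delta_H(g)=g_{(1)}\otimes g_{(2)}$, it states that $$(a\,h^{-1}(g_{(1)}))\,h(g_{(2)})=a\,(h^{-1}\ast h)(g)$$ for all $a\in A$ and $g\in H$. This is the weak analogue of the classical cleft condition, in which $h^{-1}\ast h$ would equal $\eta_{A}\otimes \varepsilon_{H}$; here, by (c1), $h^{-1}\ast h$ equals $(A\otimes (\varepsilon_H\circ \mu_{H}))\circ (c_{H,A}\otimes H)\circ (H\otimes (\rho_A\circ \eta_A))$ instead. Of course, in the general setting this elementwise notation is only heuristic.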
Finally, by (\ref{msubAdemodulos}), the condition of almost lineal for $\gamma_{A}^{-1}$ and (\ref{can-fact}), we have \begin{itemize} \item[ ]$\hspace{0.38cm} \mu_{A}\circ (\mu_{A}\otimes h^{-1})\circ (A\otimes \rho_A)$ \item[ ]$=m_A\circ \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_{A}^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)))\circ (\mu_{A}\otimes H)\circ (A\otimes \rho_A)$ \item[ ]$=m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H}\circ (\mu_A\otimes H)\circ (A\otimes \rho_A)$ \item[ ]$=m_A\circ n_{A}.$ \end{itemize} Moreover, by (\ref{msubAdemodulos}), the condition of almost lineal for $\gamma_{A}^{-1}$, (\ref{condicionmA}), and (\ref{SegundacondicionmA}) we obtain \begin{itemize} \item[ ]$\hspace{0.38cm}\mu_{A}\circ (A\otimes q_A)$ \item[ ]$= m_{A}\circ \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_{A}^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)))))\circ (A\otimes \rho_{A})$ \item[ ]$= m_{A}\circ \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_{A}^{-1}\circ p_{A\otimes H}\circ \rho_A))$ \item[ ]$=\mu_A\circ (A\otimes (m_A\circ \gamma_{A}^{-1}\circ p_{A\otimes H}\circ \rho_A))$ \item[ ]$=m_A\circ n_{A}.$ \end{itemize} Therefore, by Proposition \ref{equivalencia2}, (c4) holds. Now we will prove (ii) $\Rightarrow$ (i). Let $A^{co H}\hookrightarrow A$ be a weak $H$-cleft extension with cleaving morphism $h$. Then the morphism $$\gamma_{A}^{-1}=n_{A}\circ (\mu_A\otimes A)\circ (A\otimes ((h^{-1}\otimes h)\circ \delta_H))\circ i_{A\otimes H}$$ is the inverse of $\gamma_A$. Indeed, first note that by (c1) of Definition \ref{Cleft} we have \begin{equation} \label{aux-fin-1} \mu_A\circ (A\otimes (h^{-1}\ast h))=(A\otimes \varepsilon_{H})\circ \nabla_{A}, \end{equation} and, as a consequence, using that $\nabla_{A}$ is a right $H$-comodule morphism, we obtain \begin{equation} \label{aux-fin-2} ((\mu_A\circ (A\otimes (h^{-1}\ast h)))\otimes H)\circ (A\otimes \delta_{H})= \nabla_{A}.
\end{equation} Then, $\gamma_{A}\circ \gamma_{A}^{-1}=id_{A\square H}$ because \begin{itemize} \item[ ]$\hspace{0.38cm} i_{A\otimes H}\circ \gamma_{A}\circ \gamma_{A}^{-1}$ \item[ ]$=\nabla_{A}\circ (\mu_A\otimes H)\circ (A\otimes (\rho_A\circ h))\circ ((\mu_A\circ (A\otimes h^{-1}))\otimes H)\circ (A\otimes \delta_H)\circ i_{A\otimes H}$ \item[ ]$=\nabla_{A}\circ ((\mu_A\circ (\mu_A\otimes A)\circ (A\otimes ((h^{-1}\otimes h)\circ \delta_H)))\otimes H)\circ (A\otimes \delta_H)\circ i_{A\otimes H}$ \item[ ]$=\nabla_{A}\circ ((\mu_A\circ (A\otimes (h^{-1}\ast h)))\otimes H)\circ (A\otimes \delta_{H})\circ i_{A\otimes H}$ \item[ ]$=\nabla_{A}\circ \nabla_A\circ i_{A\otimes H}$ \item[ ]$=\nabla_A\circ i_{A\otimes H}$ \item[ ]$=i_{A\otimes H},$ \end{itemize} where the first equality follows by (\ref{can-fact}), the second one taking into account that $h$ is a morphism of $H$-comodules and the coassociativity of $\delta_{H}$, the third one relies on (c3) of Definition \ref{Cleft} and the fourth one follows by (\ref{aux-fin-2}). Finally, the last equalities follow by the properties of $\nabla_{A}$.
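For orientation, when the ambient category is that of vector spaces over a field and $c$ is the usual flip (writing $\rho_A(y)=y_{(0)}\otimes y_{(1)}$ and $\delta_H(g)=g_{(1)}\otimes g_{(2)}$), the morphisms composed above take the familiar Galois-theoretic form $$\gamma_A(x\otimes_{A^{co H}} y)=x\,y_{(0)}\otimes y_{(1)},\qquad \gamma_{A}^{-1}(x\otimes g)=x\,h^{-1}(g_{(1)})\otimes_{A^{co H}} h(g_{(2)}),$$ up to the idempotent $\nabla_{A}$ on $A\otimes H$ and the coequalizer morphism $n_{A}:A\otimes A\rightarrow A\otimes_{A^{co H}}A$; again, this notation is only heuristic outside the vector-space setting.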
The equality $\gamma_{A}^{-1}\circ \gamma_{A}=id_{A\otimes_{A^{co H}}A}$ holds because \begin{itemize} \item[ ]$\hspace{0.38cm} \gamma_{A}^{-1}\circ \gamma_{A}\circ n_{A}$ \item[ ]$=n_{A}\circ (\mu_A\otimes A)\circ (A\otimes (((h^{-1}\otimes h)\circ \delta_H)))\circ \nabla_A\circ (\mu_A\otimes H)\circ (A\otimes \rho_A)$ \item[ ]$=n_{A}\circ (\mu_A\otimes A)\circ (A\otimes (((h^{-1}\otimes h)\circ \delta_H)))\circ (\mu_A\otimes H)\circ (A\otimes \rho_A)$ \item[ ]$=n_{A}\circ ((\mu_A\circ (\mu_A\otimes h^{-1})\circ (A\otimes \rho_A))\otimes h)\circ (A\otimes \rho_A)$ \item[ ]$=n_{A}\circ ((\mu_A\circ (A\otimes q_A))\otimes h)\circ (A\otimes \rho_A)$ \item[ ]$=n_{A}\circ (A\otimes (\mu_A\circ (q_A\otimes h)\circ \rho_A))$ \item[ ]$=n_{A}\circ (A\otimes (\mu_A\circ (\mu_{A}\otimes A)\circ (A\otimes ((h^{-1}\otimes h)\circ \delta_{H}))\circ \rho_{A}))$ \item[ ]$=n_{A}\circ (A\otimes (\mu_A\circ (A\otimes (h^{-1}*h))\circ \rho_{A}))$ \item[ ]$=n_{A},$ \end{itemize} where the first equality follows by (\ref{can-fact}); the second one by (\ref{nabla-3}); in the third and the sixth ones we use that $A$ is a right $H$-comodule; the fourth one relies on Proposition \ref{equivalencia2}. The fifth equality follows because $q_A=i_A\circ p_A$; the seventh one uses (c3) of Definition \ref{Cleft}; finally, the last one follows by (v) of Proposition \ref{propiedades basicas}. Now we show that $\gamma_{A}^{-1}$ is almost lineal.
Indeed, firstly note that \begin{itemize} \item[ ]$\hspace{0.38cm} \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_A^{-1}\circ p_{A\otimes H}\circ (\eta_A\otimes H)))$ \item[ ]$=n_{A}\circ (\mu_A\otimes A)\circ (A\otimes \mu_A\otimes A)\circ (A\otimes A\otimes ((h^{-1}\otimes h)\circ \delta_{H}))\circ (A\otimes (\nabla_{A}\circ (\eta_{A}\otimes H)))$ \item[ ]$=n_{A}\circ ((\mu_A\circ (A\otimes (\mu_{A}\circ (A\otimes h^{-1})\circ \nabla_{A}\circ (\eta_{A}\otimes H))))\otimes h)\circ (A\otimes \delta_{H})$ \item[ ]$=n_{A}\circ ((\mu_A\circ (A\otimes ((h^{-1}\ast h)\ast h^{-1})))\otimes h)\circ (A\otimes \delta_{H}) $ \item[ ]$=n_{A}\circ ((\mu_A\circ (A\otimes h^{-1}))\otimes h)\circ (A\otimes \delta_{H}), $ \end{itemize} where the first equality follows by the definition of $\gamma_{A}^{-1}$ and (\ref{varphi}), the second one because $\nabla_{A}$ is a right $H$-comodule morphism, the third one relies on (\ref{aux-fin-2}) and the last one follows by (iii) of Proposition \ref{propiedades basicas}. Secondly, by similar arguments, and using (i) of Proposition \ref{propiedades basicas} and (\ref{AsubH-2}), we obtain \begin{itemize} \item[ ]$\hspace{0.38cm} \gamma_A^{-1}\circ p_{A\otimes H}$ \item[ ]$=n_{A}\circ (\mu_A\otimes A)\circ (A\otimes ((h^{-1}\otimes h)\circ \delta_H))\circ \nabla_A$ \item[ ]$=n_{A}\circ (\mu_A\otimes A)\circ ( (\mu_{A}\circ (A\otimes (h^{-1}\ast h)))\otimes ((h^{-1}\otimes h)\circ \delta_H))\circ (A\otimes \delta_{H})$ \item[ ]$=n_{A}\circ ((\mu_A\circ (A\otimes ((h^{-1}\ast h)\ast h^{-1})))\otimes h)\circ (A\otimes \delta_{H}) $ \item[ ]$=n_{A}\circ ((\mu_A\circ (A\otimes h^{-1}))\otimes h)\circ (A\otimes \delta_{H}). $ \end{itemize} Therefore, $\gamma_{A}^{-1}$ is almost lineal. To finish the proof we must show that the extension has a normal basis. Let $\omega_A$ and $\omega_{A}^{\prime}$ be the morphisms $\omega_A=\mu_A\circ (i_A\otimes h)$ and $\omega_{A}^{\prime}=(p_A\otimes H)\circ \rho_A$. 
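For orientation, in the vector-space reading the comparison morphisms just introduced are $\omega_A(x\otimes g)=i_A(x)\,h(g)$ and $\omega_{A}^{\prime}(a)=p_A(a_{(0)})\otimes a_{(1)}$, the usual normal-basis maps of classical Hopf--Galois theory; the computations that follow verify in diagrammatic form that $\omega_A\circ \omega_{A}^{\prime}=id_{A}$, so that $\Omega_A=\omega_{A}^{\prime}\circ \omega_A$ is an idempotent whose splitting provides the normal basis.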
By (c3) of Definition \ref{Cleft}, the comodule condition for $\rho_{A}$ and (v) of Proposition \ref{propiedades basicas}, $\omega_A\circ \omega_{A}^{\prime}=id_A$ and then the morphism $\Omega_A=\omega_{A}^{\prime}\circ \omega_A:A^{co H}\otimes H\rightarrow A^{co H}\otimes H$ is idempotent. Let $s_A:A^{co H}\times H\rightarrow A^{co H}\otimes H$ and $r_A:A^{co H}\otimes H\rightarrow A^{co H}\times H$ be the morphisms such that $s_A\circ r_A=\Omega_A$ and $r_A\circ s_A=id_{A^{co H}\times H}$. Taking into account that $h$ is a comodule morphism and (\ref{muArhoA-1}), it is not difficult to see that $\Omega_A$ is a morphism of right $H$-comodules. Also, \begin{equation} \label{omega-ia} (i_{A}\otimes H)\circ \Omega_{A}=(\mu_{A}\otimes H)\circ (i_{A}\otimes (((h\ast h^{-1})\otimes H)\circ \delta_{H})), \end{equation} because \begin{itemize} \item[ ]$\hspace{0.38cm} (i_{A}\otimes H)\circ \Omega_{A}$ \item[ ]$= ((q_{A}\circ \mu_{A})\otimes H)\circ (i_{A}\otimes ((h\otimes H)\circ \delta_{H}))$ \item[ ]$= ((\mu_{A}\circ (\mu_{A}\otimes h^{-1})\circ (A\otimes (\rho_{A}\circ h)))\otimes H)\circ (i_{A}\otimes \delta_{H}) $ \item[ ]$=((\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes ((h\otimes h^{-1})\circ \delta_{H})))\otimes H)\circ (i_{A}\otimes \delta_{H}) $ \item[ ]$= (\mu_{A}\otimes H)\circ (i_{A}\otimes (((h\ast h^{-1})\otimes H)\circ \delta_{H})),$ \end{itemize} where the first equality follows by the definition of $\Omega_{A}$, the second one follows by (\ref{muArhoA-1}), in the third one we use the condition of comodule morphism for $h$, and the last one relies on (c4) of Definition \ref{Cleft}.
To prove that $\Omega_{A}$ is a morphism of left $A^{co H}$-modules, first note that, by (\ref{muArhoA-1}), (\ref{segundaequiv}) and (\ref{mu-coinv}), we have \begin{itemize} \item[ ]$\hspace{0.38cm} q_{A}\circ \mu_{A}\circ (i_{A}\otimes A)$ \item[ ]$= \mu_{A}\circ (A\otimes h^{-1})\circ \rho_{A}\circ \mu_{A}\circ (i_{A}\otimes A)$ \item[ ]$= \mu_{A}\circ (\mu_{A}\otimes h^{-1})\circ (i_{A}\otimes \rho_{A})$ \item[ ]$=\mu_{A}\circ (i_{A}\otimes q_{A})$ \item[ ]$=\mu_{A}\circ (i_{A}\otimes (i_{A}\circ p_{A})) $ \item[ ]$=i_{A}\circ \mu_{A^{co H}}\circ ( A^{co H}\otimes p_{A}),$ \end{itemize} and then \begin{equation} \label{old-equa} p_{A}\circ \mu_{A}\circ (i_{A}\otimes A)=\mu_{A^{co H}}\circ ( A^{co H}\otimes p_{A}). \end{equation} Therefore the equality \begin{equation} \label{omega-fin} \Omega_{A}=((\mu_{A^{co H}}\circ (A^{co H}\otimes p_{A}))\otimes H)\circ ( A^{co H}\otimes (\rho_{A}\circ h)) \end{equation} holds because, by (\ref{muArhoA-1}) and (\ref{old-equa}), $$\Omega_{A} =((p_{A}\circ\mu_{A} \circ (i_{A}\otimes A))\otimes H)\circ ( A^{co H}\otimes (\rho_{A}\circ h)) =((\mu_{A^{co H}}\circ (A^{co H}\otimes p_{A}))\otimes H)\circ ( A^{co H}\otimes (\rho_{A}\circ h)).$$ Then, $\Omega_{A}$ is a morphism of left $A^{co H}$-modules. Indeed, by (\ref{omega-fin}) \begin{itemize} \item[ ]$\hspace{0.38cm}(\mu_{A^{co H}}\otimes H)\circ (A^{co H}\otimes \Omega_{A})$ \item[ ]$=(\mu_{A^{co H}}\otimes H)\circ (A^{co H} \otimes (((\mu_{A^{co H}}\circ (A^{co H}\otimes p_{A}))\otimes H)\circ ( A^{co H}\otimes (\rho_{A}\circ h))))$ \item[ ]$= ((\mu_{A^{co H}}\circ (A^{co H}\otimes p_{A}))\otimes H)\circ ( A^{co H}\otimes (\rho_{A}\circ h))\circ (\mu_{A^{co H}}\otimes H)$ \item[ ]$= \Omega_{A}\circ (\mu_{A^{co H}}\otimes H).$ \end{itemize} Finally, let $b_A=r_A\circ \omega_{A}^{\prime}$. Using that $\Omega_{A}$ is a right $H$-comodule morphism, we obtain that $b_A$ is a right $H$-comodule morphism.
Also, it is easy to show that $b_A$ is an isomorphism with inverse $b_{A}^{-1}=\omega_A\circ s_A$. Finally, the morphism $b_A$ is a morphism of left $A^{co H}$-modules because its inverse is a morphism of left $A^{co H}$-modules. Indeed, using that $\Omega_{A}$ is a morphism of left $A^{co H}$-modules, (\ref{mu-coinv}) and (\ref{AsubH-2}), we have \begin{itemize} \item[ ]$\hspace{0.38cm} b_{A}^{-1}\circ \varphi_{A^{co H}\times H}$ \item[ ]$= \mu_{A}\circ (i_{A}\otimes h)\circ \Omega_{A}\circ (\mu_{A^{co H}}\otimes H)\circ (A^{co H}\otimes s_{A})$ \item[ ]$= \mu_{A}\circ ((i_{A}\circ \mu_{A^{co H}})\otimes h)\circ (A^{co H}\otimes (\Omega_{A}\circ s_{A})) $ \item[ ]$=\mu_{A}\circ (\mu_{A}\otimes A)\circ (i_{A}\otimes i_{A}\otimes h)\circ (A^{co H}\otimes s_{A}) $ \item[ ]$= \mu_{A}\circ (i_{A}\otimes (\mu_{A}\circ (i_{A}\otimes h)))\circ (A^{co H}\otimes s_{A})$ \item[ ]$= \mu_{A}\circ (i_{A}\otimes b_{A}^{-1}).$ \end{itemize} \end{proof} \begin{remark} {\rm In the associative setting conditions (\ref{AsubH-2}) and (\ref{AsubH-3}) hold and, for example, the previous result generalizes the one proved by Doi and Takeuchi for Hopf algebras in \cite{doi3}. Also, for a weak Hopf algebra $H$, by Remark \ref{Galoiswhaandhq}, we obtain that the assertions \begin{itemize} \item[(i)] $A^{co H}\hookrightarrow A$ is a weak $H$-Galois extension with normal basis, \item[(ii)] $A^{co H}\hookrightarrow A$ is a weak $H$-cleft extension, \end{itemize} are equivalent for a right $H$-comodule monoid $A$. This equivalence is a particular instance of the one obtained in \cite{AFG2} for Galois extensions associated to weak entwining structures. } \end{remark} As a corollary of Theorem \ref{caracterizacion}, for Hopf quasigroups we have a result which shows the close connection between the notion of cleft right $H$-comodule algebra ($H$-cleft extension for Hopf quasigroups), introduced in \cite{AFGS-2}, and the one of $H$-Galois extension with normal basis introduced in this paper.
Also, when $A^{co H}=K$ we have the equivalence proved in \cite{AFG-3} because, in this case, $i_{A}=\eta_{A}$. \begin{corollary} \label{corolarioHq} Let $H$ be a Hopf quasigroup and let $(A, \rho_A)$ be a right $H$-comodule magma satisfying (\ref{AsubH-2}), (\ref{AsubH-3}) and such that the functor $A\otimes -$ preserves coequalizers. The following assertions are equivalent. \begin{itemize} \item[(i)] $A^{co H}\hookrightarrow A$ is an $H$-Galois extension with normal basis, the morphism $\gamma_{A}^{-1}$ is almost lineal, $\Omega_{A}=id_{A^{co H}\otimes H}$ and $b_{A}\circ \eta_{A}=\eta_{A^{co H}}\otimes \eta_{H}$. \item[(ii)] $A^{co H}\hookrightarrow A$ is an $H$-cleft extension. \end{itemize} \end{corollary} \begin{proof} First, note that in this setting $\rho_{A}\circ \eta_{A}=\eta_{A}\otimes \eta_{H}$ and then $\nabla_{A}=id_{A\otimes H}$. Also, the submonoid of coinvariants $A^{co H}$ is defined by the equalizer of $\rho_{A}$ and $A\otimes \eta_{H}$. Therefore, \begin{equation} \label{ia-hquasi} \rho_{A}\circ i_{A}=i_{A}\otimes \eta_{H}. \end{equation} The proof for (i) $\Rightarrow$ (ii) is the following. Let $A^{co H}\hookrightarrow A$ be a weak $H$-Galois extension with normal basis. Assume that $\Omega_A=id_{A^{co H}\otimes H}$. Then $r_A=id_{A^{co H}\otimes H}=s_A$ and by Theorem \ref{caracterizacion}, $A^{co H}\hookrightarrow A$ is a weak $H$-cleft extension with cleaving morphism $h=b_A^{-1}\circ (\eta_A\otimes H)$, and whose convolution inverse is $h^{-1}=m_A\circ \gamma_A^{-1}\circ (\eta_A\otimes H)$. 
Moreover, \begin{itemize} \item[ ]$\hspace{0.38cm} h*h^{-1}$ \item[ ]$=m_A\circ \varphi_{A\otimes_{A^{co H}}A}\circ (A\otimes (\gamma_A^{-1}\circ (\eta_A\otimes H)))\circ \rho_A\circ h$ \item[ ]$=m_A\circ \gamma_A^{-1}\circ \rho_A\circ b_A^{-1}\circ (\eta_A\otimes H)$ \item[ ]$=(i_A\otimes \varepsilon_H)\circ b_A\circ b_A^{-1}\circ (\eta_A\otimes H)$ \item[ ]$=\eta_A\otimes \varepsilon_H,$ \end{itemize} where the first equality follows because $h$ is a morphism of $H$-comodules and by (\ref{msubAdemodulos}); the second one uses that $\gamma_A^{-1}$ is almost lineal, and the third one relies on (\ref{SegundacondicionmA}). Also, $h\circ \eta_{H}=\eta_{A}$ because $b_{A}\circ \eta_{A}=\eta_{A^{co H}}\otimes \eta_{H}$ holds. Therefore, by Remark \ref{cleft-previo}, $A^{co H}\hookrightarrow A$ is an $H$-cleft extension. On the other hand, let $A^{co H}\hookrightarrow A$ be an $H$-cleft extension with cleaving morphism $h$. Then, $$h^{-1}\ast h=h\ast h^{-1}=\eta_{A}\otimes \varepsilon_{H}$$ because $$\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h^{-1}\otimes h)\circ (A\otimes \delta_H)=A\otimes \varepsilon_H=\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h\otimes h^{-1})\circ (A\otimes \delta_H).$$ Put $\Omega_A=id_{A^{co H}\otimes H}$. Obviously it is an idempotent morphism of left $A^{co H}$-modules and right $H$-comodules. Consider the morphisms $b_A=(p_A\otimes H)\circ \rho_A$ and $b_A^{-1}=\mu_A\circ (i_A\otimes h)$. 
Using that $A$ is a right $H$-comodule, we obtain $$ b_{A}^{-1}\circ b_A=\mu_{A}\circ (\mu_{A}\otimes A)\circ (A\otimes h^{-1}\otimes h)\circ (A\otimes \delta_H)\circ \rho_A=id_A.$$ On the other hand, applying that $h$ is a comodule morphism, (\ref{muArhoA-1}) and (\ref{old-equa}), we have \begin{itemize} \item[ ]$\hspace{0.38cm} b_A\circ b_{A}^{-1}$ \item[ ]$=((p_A\circ \mu_A\circ (i_A\otimes A))\otimes H)\circ (A^{co H}\otimes (\rho_{A}\circ h))$ \item[ ]$=((p_A\circ \mu_A\circ (i_A\otimes h))\otimes H)\circ (A^{co H}\otimes \delta_H)$ \item[ ]$=((\mu_{A^{co H}}\circ (A^{co H}\otimes (p_{A}\circ h)))\otimes H)\circ (A^{co H}\otimes \delta_H).$ \end{itemize} Therefore, $b_A\circ b_{A}^{-1} = id_{A^{co H}\otimes H}$ because \begin{equation} \label{fin-fin} \mu_{A^{co H}}\circ (A^{co H}\otimes (p_{A}\circ h))=A^{co H}\otimes \varepsilon_{H}. \end{equation} Indeed, composing with $i_{A}$ we obtain $$i_{A}\circ \mu_{A^{co H}}\circ (A^{co H}\otimes (p_{A}\circ h))=\mu_{A}\circ (i_{A}\otimes (h\ast h^{-1}))=i_{A}\otimes \varepsilon_{H}$$ and then (\ref{fin-fin}) is proved. Trivially, $b_A$ is a morphism of right $H$-comodules, and by (\ref{AsubH-2}), $b_{A}^{-1}$ is a morphism of left $A^{co H}$-modules. Then, $b_A$ is a morphism of left $A^{co H}$-modules. \end{proof} \section*{Acknowledgements} The authors were supported by Ministerio de Econom\'{\i}a y Competitividad of Spain (European Feder support included). Grant MTM2013-43687-P: Homolog\'{\i}a, homotop\'{\i}a e invariantes categ\'oricos en grupos y \'algebras no asociativas.
\section{Introduction} The space $\mathcal Q(G)$ of real-valued quasimorphisms on a group $G$ and its subspace $\mathcal H(G)$ of real-valued homogeneous quasimorphisms are classical objects of study in analytic and geometric group theory (see e.g. \cite{BargeGhys,FujiwaraEtAl, BestvinaFujiwara, Brooks,BuMo1,BuMo2, EpsteinFujiwara,Grigorchuk,OsinHull} and the references therein). Here a function $\alpha: G \rightarrow \mathbb{R}$ is called a quasimorphism if $|\alpha(gh) -\alpha(g) - \alpha(h)|$ is bounded uniformly over all $g,h \in G$ and homogeneous if $\alpha(g^n) = n\cdot \alpha(g)$ for all $g \in G$, $n \in \mathbb N$. Of particular relevance for applications in rigidity theory is the quotient space \[{\mathcal H}_0(G) := \mathcal H(G)/{\rm Hom}(G,\mathbb{R}),\] which admits a cohomological interpretation in terms of bounded cohomology \cite{Gromov, Ivanov, Grigorchuk, scl}. The simplest numerical invariant derived from the study of quasimorphisms is the dimension \[d_0(G) := \dim{\mathcal H}_0(G) \in \mathbb N_0 \cup \{\infty\}.\] Using central extensions of lattices in products of Hermitian Lie groups, one can show that the range of the invariant $d_0$ is all of $\mathbb N_0 \cup \{\infty\}$ \cite{RemyMonod, BuMo1}. However, in almost all other examples studied in the literature the invariant $d_0$ takes values in $\{0, \infty\}$ only. For example, $d_0(G) = 0$ for all amenable groups \cite{Johnson, Gromov} but also for certain non-amenable groups such as lattices in higher rank Lie groups \cite{BuMo1}, while $d_0(G) = \infty$ for all acylindrically hyperbolic groups \cite{FujiwaraEtAl, OsinHull, Osin}. This class of groups includes in particular all (relatively) Gromov-hyperbolic groups, all non-abelian mapping class groups and all groups of outer automorphisms of free groups of finite rank $\geq 3$. For some of these classes, $d_0(G) = \infty$ was established earlier (see in particular \cite{EpsteinFujiwara, BestvinaFujiwara}). 
These examples indicate that if ${\mathcal H}_0(G)$ is non-trivial then it tends to be infinite-dimensional (and in this case it is automatically of uncountable dimension). \medskip This article aims at providing a conceptual explanation for the largeness of ${\mathcal H}(G)$ in the non-trivial case. To this end we point out that there is a hidden symmetry group ${\rm QOut}(G)$ acting on $\mathcal H(G)$, extending the natural ${\rm Out}(G)$-action. In order to get a better understanding of the structure of $\mathcal H(G)$ one should take these hidden symmetries into account. Thus we will consider not only the quotient ${\mathcal H}_0(G)$, but also \[\widehat{\mathcal H}_0(G) := \mathcal H(G)/({\rm span}({\rm QOut}(G).{\rm Hom}(G,\mathbb{R}))),\] and its reduced version \[\widehat{\mathcal H}_0^{\rm red}(G) := {\mathcal H}(G)/\overline{{\rm span}({\rm QOut}(G).{\rm Hom}(G,\mathbb{R}))},\] where the closure is taken with respect to the (typically non-complete) topology of pointwise convergence on ${\mathcal H}(G)$. While $d_0(G)$ counts the number of linearly independent non-trivial homogeneous quasimorphisms, the dimensions $d(G) := \dim \widehat{\mathcal H}_0(G)$ and $d_{\rm red}(G) := \dim \widehat{\mathcal H}_0^{\rm red}(G)$ count the number of such quasimorphisms modulo hidden symmetries (not taking or taking the topology into account respectively). Note that by definition \[0 \leq d_{\rm red}(G) \leq d(G) \leq d_0(G) \leq \infty.\] The following main result of this article, which we will establish in Theorem \ref{ThmMain} below, shows that at least one of the two inner inequalities is strict in general. \begin{theorem}\label{IntroMain} Let $G$ be a finitely generated non-abelian free group. Then $d_0(G) = \infty$, but \[d_{\rm red}(G)=0.\] \end{theorem} This indicates that taking hidden symmetries into account leads to a substantially different count of quasimorphisms. Let us now explain how these hidden symmetries arise.
We follow the very general idea that categorification of a functor exhibits its hidden symmetries. More precisely, let $\mathfrak C$ be a category and $F: \mathfrak C \rightarrow \mathfrak{Set}$ be a set-valued functor which is representable in the sense that there exists an object $C_0 \in \mathfrak C$ and a natural equivalence $F \cong {\rm Hom}_{\mathfrak C}(-, C_0)$. Then ${\rm Aut}_{\mathfrak C}(C_0)$ acts on $F(C)$ for every object $C$, and thereby any such representation of a functor exhibits symmetries. While the functor of (homogeneous) quasimorphisms is not representable over the category $\mathfrak{Grp}$ of groups, it turns out that it is representable over a certain modification of this category, which we discuss next. Let us first discuss the contravariant functor ${\mathcal Q}$ which to each group $G$ associates the space ${\mathcal Q}(G)$ of real-valued quasimorphisms on $G$ and to every group homomorphism $f: G \rightarrow H$ the pullback $f^*: {\mathcal Q}(H) \rightarrow {\mathcal Q}(G)$ given by $f^*\alpha := \alpha \circ f$. We would like to define a category ${\mathfrak{QGrp}}$, whose objects are groups and whose morphisms are certain maps between groups such that ${\rm Hom}_{{\mathfrak{QGrp}}}(G, \mathbb{R}) \cong {\mathcal Q}(G)$. In any such category we need to ensure at least that for all $f \in {\rm Hom}_{{\mathfrak{QGrp}}}(G,H)$ and $\alpha \in {\mathcal Q}(H)$ the composition $f^*\alpha := \alpha \circ f$ is contained in ${\mathcal Q}(G)$. Following a suggestion by Uri Bader, our idea now is to take the universal category satisfying this property: \begin{definition}\label{DefQM} A map $f: G \rightarrow H$ is called a \emph{quasimorphism} if for all $\alpha \in {\mathcal Q}(H)$ we have $f^*\alpha \in {\mathcal Q}(G)$. The category ${\mathfrak{QGrp}}$ of \emph{quasigroups} is defined as the category whose objects are groups and whose morphisms are quasimorphisms.
\end{definition} Note that, by the very definition, quasimorphisms compose, whence $\mathfrak{QGrp}$ is indeed a category. It is easy to see\footnote{All claims made in the introduction will be carefully established in Subsection \ref{SecBasic} below.} that indeed $ {\rm Hom}_{{\mathfrak{QGrp}}}(G,\mathbb{R}) =\mathcal Q(G)$. We deduce that ${\rm Aut}_{{\mathfrak{QGrp}}}(G)$ acts on $\mathcal Q(G)$, extending the action of ${\rm Aut}_{\mathfrak{Grp}}(G)$. Our definition of quasimorphism is substantially more general than previous definitions studied in the literature. It is instructive to compare our definition to the more restricted classical definition of Ulam \cite[Chapter 6]{Ulam}, which was recently (and independently of the present work) studied by Fujiwara and Kapovich \cite{FujiwaraKapovich}. They show (among many other things) that between hyperbolic groups there are no non-trivial bijective quasimorphisms in the sense of Ulam. By contrast, it follows from Theorem \ref{IntroMain} that with our more general definition there are plenty of bijective quasimorphisms between free groups (see also the second part of Theorem \ref{SummaryQOut} below). We thus obtain a rich theory even for hyperbolic groups. Interestingly enough, it suffices to consider a slight generalization of Ulam's condition to obtain many interesting examples. We will discuss this and various related notions of quasimorphisms in Section \ref{SecComparison} below. In the sequel we will always use the term quasimorphism in the sense of Definition \ref{DefQM}; for distinction we will refer to classical quasimorphisms $G \rightarrow \mathbb{R}$ as \emph{real-valued quasimorphisms}. Similarly to the functor $\mathcal Q(G)$ we can also represent the functor $\mathcal H(G)$. For this it is convenient to think of $\mathcal H(G)$ not as a subspace, but rather as a quotient of $\mathcal Q(G)$.
Namely, let us call $\alpha, \beta \in \mathcal Q(G)$ \emph{equivalent} (denoted $\alpha \sim \beta$) if $\alpha-\beta$ is a bounded function. Then the inclusion $\mathcal H(G) \hookrightarrow \mathcal Q(G)$ induces a linear isomorphism \[ \mathcal H(G) \xrightarrow{\cong}\mathcal Q(G)/\mathord\sim. \] In the sequel we will always tacitly identify $\mathcal H(G)$ with this quotient. \begin{definition} Two quasimorphisms $f, g \in {\rm Hom}_{{\mathfrak{QGrp}}}(G,H)$ are \emph{equivalent} (denoted $f \sim g$) if for every $\alpha \in \mathcal Q(H)$ we have $f^*\alpha \sim g^*\alpha$. We write $[f]$ for the equivalence class of the quasimorphism $f$. The category ${\mathfrak{HQGrp}}$ of \emph{homogeneous quasigroups} is defined as the category whose objects are groups and whose morphisms are equivalence classes of quasimorphisms.\end{definition} It is easy to see that composition of equivalence classes of quasimorphisms is well-defined by choosing representatives; thus ${\mathfrak{HQGrp}}$ is indeed a category. It is also easy to see that $ {\rm Hom}_{{\mathfrak{HQGrp}}}(G,\mathbb{R}) =\mathcal H(G)$. The category ${\mathfrak{HQGrp}}$ has quite a bit more structure than ${\mathfrak{QGrp}}$. Namely, it is an additive category with addition on ${\rm Hom}_{{\mathfrak{HQGrp}}}(G, H)$ given by $[f] \oplus [g] = [fg]$, where $fg(x) := f(x)g(x)$. The natural equivalence $ {\rm Hom}_{{\mathfrak{HQGrp}}}(G,\mathbb{R}) \cong \mathcal H(G)$ respects this abelian group structure.
\begin{definition} The \emph{quasioutomorphism group} of a group $G$ is \begin{equation}{\rm QOut}(G) := {\rm Aut}_{{\mathfrak{HQGrp}}}(G).\end{equation} \end{definition} We will see below that for every group $G$ the composition of the canonical inclusion ${\rm Aut}_{\mathfrak{Grp}}(G) \rightarrow {\rm Aut}_{{\mathfrak{QGrp}}}(G)$ with the canonical projection ${\rm Aut}_{{\mathfrak{QGrp}}}(G) \rightarrow {\rm Aut}_{{\mathfrak{HQGrp}}}(G) = {\rm QOut}(G)$ factors through the outer automorphism group ${\rm Out}(G)$ of $G$. We thus obtain a natural map (in general neither injective nor surjective) \[ {\rm Out}(G) \rightarrow {\rm QOut}(G), \] with respect to which the corresponding actions on $\mathcal H(G)$ are equivariant. In the case where $G = F$ is a finitely-generated free group this map is actually an inclusion. Theorem \ref{IntroMain} implies that, unlike ${\rm Out}(F)$, the group ${\rm QOut}(F)$ has a dense orbit in $\mathcal H(F)$. At present we have a complete understanding of the group ${\rm QOut}(G)$ only for amenable groups $G$. For finitely-generated non-abelian free groups $F$ we understand certain large subgroups, but not the full structure of ${\rm QOut}(F)$. The following result summarizes some of our main results. \begin{theorem}\label{SummaryQOut} \begin{itemize} \item[(i)] If $G$ is amenable and its abelianization has finite rank $r$, then \[{\rm QOut}(G) \cong {\rm GL}_r(\mathbb{R}).\] \item[(ii)] If $G$ is a finitely-generated non-abelian free group, then ${\rm QOut}(G)$ is an uncountable non-amenable group, which contains ${\rm Out}(G)$ and torsion of arbitrary order. \end{itemize} \end{theorem} We will establish the various claims of the theorem in Theorem \ref{AmenableMain1}, Corollary \ref{CorNonAmenability} and Theorem \ref{thm:wobbling:embedding:and:consequences} below. This article is organized as follows: In Section \ref{SecRQM} we first recall some background concerning real-valued quasimorphisms.
We then discuss in some detail the case of free groups, and in particular the ${\rm Out}(F_n)$-action on $\mathcal H(F_n)$. The main result of that section, which is an extension of a classical theorem of Grigorchuk and might be of independent interest, constructs a dense, countable-dimensional ${\rm Out}(F_n)$-invariant subspace of $\mathcal H(F_n)$. We then start discussing the categories of quasigroups and homogeneous quasigroups in Section \ref{SecHQGrp}. In particular, we show that the category of homogeneous quasigroups is equivalent to a much smaller subcategory of quasi-separated homogeneous quasigroups. In Section \ref{SecAmenable} we apply these general techniques to compute the quasioutomorphism group of an amenable group with abelianization of finite rank. In Section \ref{SecFreeQM} we study quasioutomorphism groups of free groups and establish our main theorem. Finally, in Section \ref{SecComparison} we compare our results to those of Fujiwara and Kapovich and discuss various related classes of quasimorphisms. \section{Preliminaries on real-valued quasimorphisms}\label{SecRQM} \subsection{General properties of $\mathcal H(G)$} Since the definition of the category of (homogeneous) quasigroups involves real-valued quasimorphisms, we need to recall some basic properties of such quasimorphisms first. Recall from the introduction that a map $\alpha: G \rightarrow \mathbb{R}$ is called a \emph{quasimorphism} if \[D(\alpha) := \sup_{g,h \in G} |\alpha(gh) - \alpha(g) -\alpha(h)| < \infty.\] In this case the real number $D(\alpha)$ is referred to as the \emph{defect} of $\alpha$. Recall also that a quasimorphism is \emph{homogeneous} if $\alpha(g^n) = n\cdot\alpha(g)$ for all~$g \in G$ and $n \in \mathbb N$, and that two quasimorphisms are called \emph{equivalent} if their difference is a bounded function.
Finally we remind the reader of our notations $\mathcal Q(G)$ and $\mathcal H(G)$ for the spaces of all, respectively all homogeneous real-valued quasimorphisms on $G$. We will usually denote real-valued quasimorphisms by small Greek letters. The following facts are elementary, see e.g. \cite[Sec. 2.2]{scl} for proofs. \begin{lemma}\label{sclStuff} \begin{itemize} \item[(i)] If $p: G \rightarrow H$ is a group homomorphism and $\alpha: H \rightarrow \mathbb{R}$ a (homogeneous) quasimorphism, then $p^*\alpha := \alpha \circ p$ is a (homogeneous) quasimorphism. \item[(ii)] Every quasimorphism $\alpha$ is equivalent to a unique homogeneous quasimorphism $\widehat{\alpha}$, called its homogenization. In particular, every bounded homogeneous quasimorphism is trivial. \item[(iii)] Every homogeneous quasimorphism is conjugation-invariant and satisfies $\alpha(g^{-1}) = -\alpha(g)$. \item[(iv)] Every homogeneous quasimorphism on an amenable group is a homomorphism. \end{itemize} \end{lemma} This shows in particular that $\mathcal H(G) \cong \mathcal Q(G)/\sim$. Moreover, by (i) we have an action of ${\rm Aut}_{\mathfrak{Grp}}(G)$ on $\mathcal H(G)$, which by (iii) factors through an action of ${\rm Out}(G)$. The ${\rm Out}(G)$-module $\mathcal H(G)$ admits the following cohomological interpretation: Denote by $H^2(G; \mathbb{R})$ and $H^2_b(G; \mathbb{R})$ the second group cohomology, respectively second bounded group cohomology of $G$ with trivial coefficient module $\mathbb{R}$ \cite{Ivanov, MonodDiss}. Then there is a natural comparison map $H^2_b(G; \mathbb{R}) \rightarrow H^2(G; \mathbb{R})$ whose kernel $EH^2_b(G; \mathbb{R})$ satisfies \begin{equation}\label{CohomologicalInterpretation} EH^2_b(G; \mathbb{R}) \cong {\mathcal H}_0(G) := \mathcal H(G)/{\rm Hom}(G,\mathbb{R}).\end{equation} Moreover, this isomorphism is compatible with the respective ${\rm Out}(G)$-actions \cite{Grigorchuk, MonodDiss}. 
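Part (ii) of Lemma \ref{sclStuff} is obtained in the standard way by setting $\widehat{\alpha}(g) := \lim_{n\to\infty} \alpha(g^n)/n$ (the limit formula is not displayed above, but it underlies all computations with homogenizations below, cf. \cite{scl}). A minimal numerical sketch, our own illustration rather than part of the text, for $G = \mathbb{Z}$ written additively, so that $g^n = ng$:

```python
import math

def homogenize(alpha, g, n=10**6):
    # homogenization via the limit formula: alpha(g^n)/n -> alpha_hat(g);
    # in the additive group Z the power g^n is the integer n*g
    return alpha(n * g) / n

# a real-valued quasimorphism on Z: a homomorphism perturbed by a bounded term
alpha = lambda k: 1.5 * k + 2 * math.sin(k)

# the bounded perturbation is divided by n, so the homogenization
# recovers the linear part k -> 1.5*k
est = homogenize(alpha, 3)
```

Here $\widehat{\alpha}$ is the homomorphism $k \mapsto 1.5k$, which also illustrates part (iv) of the lemma: on the amenable group $\mathbb{Z}$ every homogeneous quasimorphism is a homomorphism.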
The space $H^2_b(G; \mathbb{R})$ can be equipped with the structure of a Banach space~\cite{MonodDiss}. It follows that for finitely generated groups $G$ or more generally for all groups $G$ with countable-dimensional $H^2(G; \mathbb{R})$ and countable-dimensional space ${\rm Hom}(G,\mathbb{R})$, the space $\mathcal H(G)$ is either finite-dimensional or uncountable-dimensional. To deal with the latter case it is convenient to introduce a topology on $\mathcal H(G)$. On $\mathcal Q(G)$ we can consider the (locally convex) topology of pointwise convergence, i.e., $\alpha_n \rightarrow \alpha$ provided $\alpha_n(g) \rightarrow \alpha(g)$ for all $g \in G$. As discussed above we can consider $\mathcal H(G)$ either as a subspace or as a quotient of $\mathcal Q(G)$ and equip it with either the quotient topology or the subspace topology. These two topologies are \emph{not} the same. Consider e.g. the case $G = \mathbb{Z}$ and define functions \[ b_n(k) := \left\{\begin{array}{ll} k & |k|< n,\\ n & |k| \geq n.\end{array} \right. \] Then $b_n \in \mathcal Q(\mathbb{Z})$ and $b_n \rightarrow {\rm Id}_{\mathbb{Z}}$. In the quotient ${\mathcal Q}(\mathbb{Z})/\mathord\sim$ we have $[b_n] = 0$, since $b_n$ is bounded. Thus with respect to the quotient topology we have $[{\rm Id}_\mathbb{Z}] \in \overline{\{0\}}$. In particular, ${\mathcal Q}(\mathbb{Z})/\mathord\sim$ is not Hausdorff, whereas $\mathcal H(G) \subseteq \mathcal Q(G)$ is Hausdorff for any group $G$ when equipped with the subspace topology. For this reason, we will always equip $\mathcal H(G)$ with the subspace topology in the sequel. There are two main advantages of this topology. Firstly, the action of ${\rm Out}(G)$ is by homeomorphisms. Indeed, if $\alpha_n \rightarrow \alpha$ pointwise, then for all $f \in {\rm Out}(G)$ and $g \in G$ we have $f^*\alpha_n(g) = \alpha_n(f(g)) \rightarrow \alpha(f(g)) = f^*\alpha(g)$. Secondly, the topology has good separability properties. 
For example, assume that $G$ is a finitely generated non-abelian free group. Then $\mathcal H(G)$ admits a countable-dimensional dense subspace with respect to the topology of pointwise convergence \cite{Grigorchuk}, whereas the norm topology on $H^2_b(G; \mathbb{R})$ is not separable (see \cite[Cor. 18]{Rolli}). The main disadvantage of our topology is that it is not complete since the pointwise limit of homogeneous quasimorphisms is not necessarily a homogeneous quasimorphism. It is unclear to us whether an ${\rm Out}(G)$-invariant complete separable locally convex topology on $\mathcal H(G)$ exists. \subsection{The case of free groups I: Counting quasimorphisms} In the remainder of this section we are going to discuss the case of a finitely generated non-abelian free group $F$ in some detail. This serves the double purpose of illustrating the notions introduced in the previous subsection and of preparing our study of ${\rm QOut}(F)$ below. In view of the latter goal, we will introduce some concepts in greater generality than needed here. Our main goal is to exhibit an explicit subspace $\mathcal H^*(F)<\mathcal H(F)$ which is at the same time countable-dimensional, ${\rm Out}(F)$-invariant and dense (with respect to the topology introduced in the previous subsection). Since ${\rm Out}(F)$ acts continuously on $\mathcal H(F)$ and $\mathcal H^*(F)$ is dense, we can deduce that the ${\rm Out}(F)$-action on $\mathcal H(F)$ is uniquely determined by its restriction to $\mathcal H^*(F)$, so that we can think of elements of ${\rm Out}(F)$ as countably infinite matrices. A countable-dimensional dense subspace of $\mathcal H(F)$ was first constructed by Grigorchuk in \cite{Grigorchuk}. This subspace is however not ${\rm Out}(F)$-invariant, so we will have to enlarge it to an ${\rm Out}(F)$-invariant space. Both Grigorchuk's construction and ours are based on the notion of a counting quasimorphism.
These quasimorphisms and their generalizations play a major role in the modern theory of quasimorphisms, cf. \cite{FujiwaraEtAl, OsinHull, EpsteinFujiwara, BestvinaFujiwara}. In the case of a finitely-generated free group $F$ they can be defined as follows: Let $S$ be a free generating set of $F$ and identify $F$ with the set of reduced words over $S \cup S^{-1}$, including the empty word $\varepsilon$. Given two words $w_1$, $w_2$ over $S\cup S^{-1}$ we write $w_1 = w_2$ if they coincide and $w_1 \equiv w_2$ if they define the same element of $F$. Of course, for reduced words these notions coincide. Given two reduced words $w = y_1\cdots y_l$ and $w_0 = x_1\cdots x_n$ over $S\cup S^{-1}$, a \emph{$w_0$-subword} of $w$ is a sequence $y_j\cdots y_{j+n-1}$ with $y_i = x_{i-j+1}$ for all $i \in \{j, \dots, j+n-1\}$. Two $w_0$-subwords $y_j\cdots y_{j+n-1}$ and $y_k\cdots y_{k+n-1}$ are said to \emph{overlap} if $\{j, \dots, j+n-1\} \cap \{k, \dots, k+n-1\}\neq \emptyset$. A family of $w_0$-subwords of $w$ is called \emph{non-overlapping} if no two of its members overlap. We denote by $\#_{w_0}(w)$ the maximal number of distinct (but potentially overlapping) $w_0$-subwords of $w$ and by $\#^*_{w_0}(w) \leq \#_{w_0}(w)$ the maximal number of non-overlapping $w_0$-subwords of $w$. Thus for instance $\#_{ss}(sssss) = 4$ and $\#^*_{ss}(sssss) =2$. \begin{definition} For a reduced word~$w_0$ over $S \cup S^{-1}$ we define maps $\phi_{w_0}: F \rightarrow \mathbb{Z}$ and $\phi^*_{w_0}: F \rightarrow \mathbb{Z}$ as follows. For every element~$g \in F$ let $w_g$ be the unique reduced word over $S \cup S^{-1}$ with $g \equiv w_g$.
Then we set \[\phi_{w_0}(g) := \#_{w_0}(w_g) - \#_{w_0^{-1}}(w_g) \quad \text{ and } \quad \phi^*_{w_0}(g) := \#^*_{w_0}(w_g) - \#^*_{w_0^{-1}}(w_g).\] \end{definition} It is easy to check that both $\phi_{w_0}$ and $\phi^*_{w_0}$ are quasimorphisms, called the \emph{overlapping counting quasimorphism} and the \emph{non-overlapping counting quasimorphism} associated with $w_0$ respectively. In \cite{scl} these are called the \emph{big counting quasimorphism} and the \emph{little counting quasimorphism} respectively. We will almost exclusively work with the $\phi_{w_0}$ and thus simply call them counting quasimorphisms for short. Note that all notions discussed so far depend crucially on the choice of the free generating set $S$. In general, counting quasimorphisms are not homogeneous. We denote by $\widehat{\phi_{w_0}}$ the homogenization of $\phi_{w_0}$ and refer to such quasimorphisms as \emph{homogenized overlapping counting quasimorphisms}, or \emph{hoc-quasimorphisms} for short. We are going to prove the following equivariant version of Grigorchuk's theorem: \begin{theorem}\label{HStar} Let $\mathcal H^*(F,S)$ denote the subspace of $\mathcal H(F)$ spanned by the hoc-quasimorphisms with respect to the free generating set $S$. Then \begin{itemize} \item[(i)] $\mathcal H^*(F,S)$ is invariant under the action of ${\rm Out}(F)$, hence independent of the generating set $S$. \item[(ii)] $\mathcal H^*(F) := \mathcal H^*(F,S)$ is dense in $\mathcal H(F)$ with respect to the topology of pointwise convergence. \item[(iii)] $\mathcal H^*(F)$ is of countable dimension. \end{itemize} \end{theorem} The remainder of this section is devoted to the proof of Theorem \ref{HStar}. We will complete the proof in Subsection \ref{SecProofHStar} below.
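The counts $\#_{w_0}$, $\#^*_{w_0}$ and the counting quasimorphism $\phi_{w_0}$ can be sketched over plain strings, with uppercase letters encoding formal inverses (an encoding convention of ours, not of the text). The greedy left-to-right scan below realizes a maximal non-overlapping family because all occurrences of $w_0$ have the same length:

```python
def inv(w):
    # formal inverse of a reduced word; encoding assumption (ours):
    # lowercase letter <-> uppercase inverse, e.g. inv("ab") == "BA"
    return w.swapcase()[::-1]

def count(w0, w):
    # \#_{w0}(w): number of distinct, possibly overlapping, w0-subwords of w
    return sum(w[i:i + len(w0)] == w0 for i in range(len(w) - len(w0) + 1))

def count_star(w0, w):
    # \#^*_{w0}(w): greedy left-to-right scan; with occurrences of equal
    # length, greedy selection gives a maximal non-overlapping family
    total, i = 0, 0
    while i + len(w0) <= len(w):
        if w[i:i + len(w0)] == w0:
            total, i = total + 1, i + len(w0)
        else:
            i += 1
    return total

def phi(w0, w):
    # overlapping counting quasimorphism, evaluated on a reduced word w
    return count(w0, w) - count(inv(w0), w)
```

This reproduces the example from the text: `count("ss", "sssss")` is $4$ and `count_star("ss", "sssss")` is $2$.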
\subsection{The case of free groups II: Grigorchuk's theorem} In this section we recall Grigorchuk's construction of a dense, countable-dimensional subspace of $\mathcal H(F)$, which immediately implies Parts (ii) and (iii) of Theorem \ref{HStar}. Throughout we fix a free generating set $S$ of $F$. Since we will use a variation of the ideas behind Grigorchuk's proof in our study of word-exchange quasimorphisms in Section \ref{SecWordExchange} below, we introduce the relevant concepts in slightly greater generality than necessary for the goal at hand. Let $x_1, \dots, x_n, y_1, \dots, y_m \in S \cup S^{-1}$ and consider two reduced words $w_1 = x_1\cdots x_n$ and~$w_2 = y_1\cdots y_m$. We say that some proper postfix of~$w_1$ is a proper prefix of~$w_2$ if there is an integer~$i\in\{1,\ldots,n-1\}$ such that~$n-i < m$ and for all~$t\in \{1,\ldots, n-i\}$ we have~$x_{t+i} = y_t$. We say~$w_2$ is a subword of~$w_1$ if there is an integer~$i\in\{0,\ldots,n-1\}$ such that~$n-i \geq m$ and for all~$t\in \{1,\ldots, m\}$ we have~$x_{t+i} = y_t$; the subword is \emph{proper} if $m < n$. We say~$w_1$ and~$w_2$ \emph{overlap} if a proper prefix of one of the words is a proper postfix of the other word or one of the words is a proper subword of the other. We write $w_1 \pitchfork w_2$ if $w_1$ and $w_2$ do not overlap and call $w$ \emph{non-self-overlapping} if $w \pitchfork w$. \begin{definition}\label{DefIndependent} A set $\{w_1,\ldots,w_k\}$ of $k$ distinct, reduced, non-empty words is \emph{independent} if the following hold: \begin{itemize} \item[(i)] The set $\{w_1,\ldots,w_k,{w_1}^{-1},\ldots,{w_k}^{-1}\}$ has cardinality $2k$. \item[(ii)] For any $u, u' \in \{w_1,\ldots,w_k,{w_1}^{-1},\ldots,{w_k}^{-1}\}$ (potentially equal) we have $u \pitchfork u'$. \end{itemize} In this case we say that the words $w_1, \dots, w_k$ are \emph{mutually independent}. \end{definition} General sets of independent words will be studied in Section \ref{SecWordExchange} below.
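The overlap relation and Definition \ref{DefIndependent} translate directly into code. The following sketch (our illustration; uppercase letters again encode formal inverses, which is our convention, not the text's) implements $\pitchfork$ and the independence test:

```python
def inv(w):
    # encoding assumption (ours): lowercase letter <-> uppercase formal inverse
    return w.swapcase()[::-1]

def postfix_meets_prefix(w1, w2):
    # some proper postfix of w1 is a proper prefix of w2
    n, m = len(w1), len(w2)
    return any(n - i < m and w1[i:] == w2[:n - i] for i in range(1, n))

def proper_subword(w2, w1):
    # w2 occurs inside w1 and is strictly shorter
    return len(w2) < len(w1) and w2 in w1

def pitchfork(w1, w2):
    # w1 ⋔ w2: the two words do not overlap
    return not (postfix_meets_prefix(w1, w2) or postfix_meets_prefix(w2, w1)
                or proper_subword(w1, w2) or proper_subword(w2, w1))

def independent(words):
    # Definition above: the 2k words w_i, w_i^{-1} are pairwise distinct
    # and pairwise non-overlapping (each word also with itself)
    closure = list(words) + [inv(w) for w in words]
    return (len(set(closure)) == 2 * len(words)
            and all(pitchfork(u, v) for u in closure for v in closure))
```

For instance, `"ab"` is self-independent (it is cyclically reduced and non-self-overlapping), while `"aba"` overlaps itself via the letter `a` that is both a proper prefix and a proper postfix.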
In the context of Grigorchuk's theorem and its equivariant version we only need to consider \emph{self-independent words}, i.e., reduced words $w$ such that the singleton $\{w\}$ is independent. Note that, by definition, $w$ is self-independent if and only if $w^{-1}$ is self-independent. \begin{lemma}\label{SelfIndependenceLemma} Assume that a word $w$ over $S\cup S^{-1}$ is cyclically reduced and non-empty. Then $w^{-1}$ is also cyclically reduced and non-empty. In this case, $w$ and~$w^{-1}$ are distinct and do not overlap. \end{lemma} \begin{proof} The first statement is obvious. Now assume that $w$ (and hence also $w^{-1}$) is cyclically reduced and non-empty. Assume for contradiction that $w$ and $w^{-1}$ overlap or coincide. Then after possibly exchanging $w$ and $w^{-1}$ we may assume that there exists $t \in \{1, \dots, n\}$ such that $w = x_1 \cdots x_n$, $w^{-1} = y_1 \cdots y_n$ and \[y_j = x_{n-j+1}^{-1} \quad (j=1, \dots, n), \quad y_l = x_{n-t+l} \quad (l=1, \dots, t). \] We distinguish two cases: If $t=2r$ is even then we get $x_{n-r+1}^{-1} = y_r = x_{n-r}$ contradicting the fact that $x_1 \cdots x_n$ is reduced. If $t=2r+1$, then $x_{n-r}^{-1}=y_{r+1} = x_{n-r}$, which is impossible. \end{proof} \begin{corollary}\label{cor:SelfIndependence} Let $w$ be a non-empty reduced word over $S\cup S^{-1}$. \begin{itemize} \item[(i)] If $w$ (or equivalently $w^{-1}$) is cyclically reduced and non-self-overlapping, then $w$ (and hence $w^{-1}$) is self-independent. \item[(ii)] If $w$ is self-independent then $\phi_w = \phi_{w}^*$. \end{itemize} \end{corollary} \begin{proof} (i) is immediate from Lemma \ref{SelfIndependenceLemma} and (ii) is obvious. \end{proof} We also record the following consequence for later reference: \begin{corollary}\label{phistarww} If $w$ is cyclically reduced and non-trivial, then $\widehat{\phi^*_w}(w) = 1$. \end{corollary} \begin{proof} Since $w$ is cyclically reduced we have $\#^*_w(w^n) = n$.
Since, by Lemma \ref{SelfIndependenceLemma}, $w$ does not overlap with $w^{-1}$, we have $\#^*_{w^{-1}}(w^n) = 0$. We deduce that $\phi^*_w(w^n) = n$, from which the corollary follows. \end{proof} From now on we denote by $\mathcal G(S)$ the set of cyclically reduced, non-self-overlapping words over $S \cup S^{-1}$. We pick a total order $\leq$ on $S \cup S^{-1}$ and denote by $\preceq$ the induced (total) lexicographic order on reduced words. Note that if two words in $\mathcal G(S)$ are conjugate, then they are cyclic permutations of each other. Among these conjugates there is a unique minimal element with respect to $\preceq$. We refer to such an element as \emph{conjugacy-minimal}. Given $w \in \mathcal G(S)$ let $w^*$ denote the conjugacy-minimal element of the conjugacy class $[w]$ and let $w^{-*}$ denote the conjugacy-minimal element of $[w^{-1}]$. Then we define \[w^\dagger := \min\{w^*, w^{-*}\}\] and denote by $\mathcal G(S, \leq)^+$ the collection of all the $w^\dagger$ for $w \in \mathcal G(S)$. By construction we have $\mathcal G(S, \leq)^+ \cap (\mathcal G(S, \leq)^{+})^{-1} = \emptyset$, and moreover $\mathcal G(S, \leq)^+ \cup (\mathcal G(S, \leq)^{+})^{-1}$ meets every conjugacy class in $\mathcal G(S)$ exactly once. \begin{theorem}[Grigorchuk \cite{Grigorchuk}]\label{Grig} For any choice of order $\leq$ on $S \cup S^{-1}$ the family $(\widehat{\phi_g}\,|\,g \in \mathcal G(S, \leq)^+)$ is linearly independent and its span is dense in $\mathcal {H}(F)$ with respect to the topology of pointwise convergence. \end{theorem} In his original proof Grigorchuk actually works with homogenizations $\widehat{\phi^*_g}$ of non-overlapping counting quasimorphisms, but because of Corollary \ref{cor:SelfIndependence} this does not make any difference. \subsection{The case of free groups III: The ${\rm Out}(F)$-action}\label{SecNielsen}\label{SecProofHStar} We are now going to study the effect of the ${\rm Out}(F)$-action on hoc-quasimorphisms.
Our first observation is that the ${\rm Aut}(F)$-action on $\mathcal Q(F)$ preserves bounded functions, hence it descends to an action on the quotient $\mathcal Q(F)/\mathord\sim$, and under the identification $\mathcal Q(F)/\mathord\sim \cong \mathcal H(F)$ this action factors through the usual ${\rm Out}(F)$-action on $\mathcal H(F)$. Thus in order to understand the effect of the ${\rm Out}(F)$-action on hoc-quasimorphisms it suffices to understand the effect of the action on (non-homogeneous, overlapping) counting quasimorphisms, which are more convenient for computations. Assume $F= F_n$ is free on $n$ generators and enumerate the generating system, writing $S = \{a_1, \dots, a_n\}$. Then ${\rm Out}(F_n)$ is generated as a group by the \emph{Nielsen transformations} $P_1, P_2, I, T$ which are respectively determined by having the following effect on the standard basis: \begin{eqnarray*} P_1(a_1, \dots, a_n) &=& (a_2, a_1, \dots, a_n),\\ P_2(a_1, \dots, a_n) &=& (a_2, \dots, a_n, a_1),\\ I(a_1, \dots, a_n) &=& (a_1^{-1}, a_2, \dots, a_n),\\ T(a_1, \dots, a_n) &=& (a_1a_2, a_2, \dots, a_n). \end{eqnarray*} The first three transformations are of finite order, and $T$ is conjugate to its inverse by a product of these. It follows that the Nielsen transformations generate ${\rm Out}(F_n)$ as a \emph{semigroup}. The first three transformations preserve reduced words. It follows immediately that for any counting quasimorphism $\phi_{w_0}$ we have \[P_j^*\phi_{w_0}(w) = \phi_{P_j^{-1}w_0}(w), \quad I^*\phi_{w_0}(w) = \phi_{I^{-1}w_0}(w),\] and similarly for $\phi^*_{w_0}$ instead of $\phi_{w_0}$. In particular, $\mathcal H^*(F_n, S)$ is invariant under these transformations, and it remains only to understand the action of $T$. For~$T$, the letters~$a_1$ and~$a_2$ play a distinguished role. To save indices we define~$a:= a_1$ and~$b := a_2$. By definition of~$T$ we then have~$T(s) = T^{-1}(s)= s$ for~$s \in S\setminus \{ a\}$.
We also have \[T(a) = ab, \; T(a^{-1}) = b^{-1}a^{-1}, \; T^{-1}(a) = ab^{-1},\;\text{and } T^{-1}(a^{-1}) = ba^{-1}.\] For computations involving $T$ and its inverse it is therefore convenient to use the following normal form of elements of $F_n$. Every $g \in F_n$ is uniquely represented by a reduced word of the form \begin{equation}\label{eq-normalform-NielsenT} g \equiv w = b^{n_0}s_1b^{n_1}\cdots s_lb^{n_l} \end{equation} subject to the conditions \begin{equation}\label{eq-normalform-NielsenT-conditions} l \geq 0, \quad n_j \in \mathbb{Z}, \quad s_j \in (S\cup S^{-1})\setminus\{b,b^{-1}\}, \quad \forall 1 \leq j\leq l-1: \;n_j = 0 \Rightarrow s_j \neq s_{j+1}^{-1}.\end{equation} We will express Equation~\eqref{eq-normalform-NielsenT} by writing \[g \equiv_S (n_0, s_1, n_1, \dots, s_l, n_l).\] Then we have \[Tg \equiv_S (\widetilde{n_0}, s_1, \widetilde{n_1}, \dots, s_l, \widetilde{n_l}),\] where (with the convention $s_0= s_{l+1} = \varepsilon$) \[\widetilde{n_j} = n_j + \#_a(s_j) - \#_{a^{-1}}(s_{j+1}).\] Note in particular that Conditions \eqref{eq-normalform-NielsenT-conditions} are preserved. Similarly we have \[T^{-1}g \equiv_S (\widetilde{n_0}^*, s_1, \widetilde{n_1}^*, \dots, s_l, \widetilde{n_l}^*),\] where \[\widetilde{n_j}^* = n_j - \#_a(s_j) + \#_{a^{-1}}(s_{j+1}).\] To describe the action of $T$ on (certain) counting quasimorphisms, we define a \emph{truncation operator} $\tau_b$ on reduced words as follows: If $w$ is given as in Equation~\eqref{eq-normalform-NielsenT}, then \[\tau_b(w) := s_1b^{n_1}\cdots b^{n_{l-1}} s_l,\] i.e., $\tau_b$ forces reduced words to start and end in letters different from $b$ by truncating leading and final powers of $b$. We say a word~$w$ is~\emph{$b$-truncated} if it neither ends nor starts in a~$b$ power, i.e., if~$\tau_b(w) = w$.
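The exponent formula for $Tg$ can be tested mechanically. The following Python sketch is our own illustration (restricted to $S = \{a, b\}$, with uppercase letters denoting inverses): it applies $T$ by letter substitution followed by free reduction, parses the normal form, and checks $\widetilde{n_j} = n_j + \#_a(s_j) - \#_{a^{-1}}(s_{j+1})$ on a few words.

```python
# Illustration (not from the paper): the Nielsen transformation T on F_2 = <a, b>,
# with words encoded as strings and uppercase letters denoting inverses.

def reduce_word(w):
    """Freely reduce a word by cancelling adjacent x x^{-1} pairs."""
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()
        else:
            out.append(x)
    return "".join(out)

def T(w):
    """T(a) = ab, T(a^{-1}) = b^{-1} a^{-1}; the other generators are fixed."""
    return reduce_word(w.replace("a", "ab").replace("A", "BA"))

def normal_form(w):
    """Parse a reduced word as [n_0, s_1, n_1, ..., s_l, n_l] w.r.t. b-powers."""
    parts = [0]
    for x in w:
        if x in "bB":
            parts[-1] += 1 if x == "b" else -1
        else:
            parts += [x, 0]
    return parts

def tau_b(w):
    """Truncation operator: strip leading and trailing powers of b."""
    return w.strip("bB")

# Check the formula n~_j = n_j + #_a(s_j) - #_{a^{-1}}(s_{j+1}) on examples.
for g in ["aB", "Ab", "abaB", "AbbaB"]:
    nf, nfT = normal_form(g), normal_form(T(g))
    s = nf[1::2]                    # the letters s_1, ..., s_l
    assert nfT[1::2] == s           # T fixes the letters of the normal form
    for j in range(len(s) + 1):
        left = 1 if j >= 1 and s[j - 1] == "a" else 0    # #_a(s_j)
        right = 1 if j < len(s) and s[j] == "A" else 0   # #_{a^{-1}}(s_{j+1})
        assert nfT[2 * j] == nf[2 * j] + left - right

assert T("a") == "ab" and T("A") == "BA" and tau_b("bbAbb") == "A"
```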
Then $b$-truncated subwords of~$g$ bijectively correspond to~$b$-truncated subwords of~$Tg$, and this is the key to the following lemma: \begin{lemma}\label{lem:non:b:power} Let $w$ be a reduced word and assume that $w$ is not a power of $b$. Then $T^*\phi_w$ is at bounded distance from a finite sum of counting quasimorphisms. In particular, $T^*\widehat{\phi_w} \in \mathcal H^*(F_n, S)$. \end{lemma} \begin{proof} Using our normal form we can write $w \equiv_S (m_0, r_1, m_1, \dots, r_k, m_k)$. Let~$g\equiv_S (n_0, s_1, n_1, \dots, s_l, n_l)$ be an arbitrary word in~$F$. Our goal is to determine a formula for~$\#_w(Tg)$. As we noted above there is a one-to-one correspondence between~$b$-truncated subwords of~$T(g)$ and~$b$-truncated subwords of~$g$. If~$w' = (0, s_{t+1}, n_{t+1}, \dots, s_{t+k}, 0)$ is a~$b$-truncated subword of~$Tg$ then its corresponding~$b$-truncated subword in~$g$ is~$\widetilde{w'}^{*} = (0,s_{t+1}, \widetilde{n_{t+1}}^*, \dots, s_{t+k},0)$ and vice versa. In particular, the number of occurrences of $w'$ in $Tg$ is the same as the number of occurrences of $\widetilde{w'}^{*}$ in $g$. In order to count the occurrences of $w$ in $Tg$ we proceed as follows: Every $w$-subword of $Tg$ leads to a subword $w'$ as above satisfying \begin{equation}\label{w'} \forall i\in\{1,\ldots,k\}: r_i = s_{i+t}, \quad \forall i\in\{1,\ldots,k-1\}: m_i = n_{i+t}. \end{equation} The corresponding subword $\widetilde{w'}^{*}$ in $g$ then satisfies $\widetilde{m_{i}}^{*} = \widetilde{n_{i+t}}^{*}$. Let us call $w'$ satisfying \eqref{w'} \emph{extendible} if it arises from an occurrence of $w$. This means that $w'$ is preceded by at least $|m_0|$ copies of $b$ or $b^{-1}$ (according to whether $m_0 \geq 0$ or $m_0 < 0$) and followed by at least $|m_k|$ copies of $b$ or $b^{-1}$ (according to whether $m_k \geq 0$ or $m_k < 0$). We are now going to reformulate these conditions in terms of $g$. To exclude border cases we suppose~$t>0$ and~$t+k< l$.
The remaining cases will only result in an additive error of at most 2 in our count of $\#_w(T(g))$, which is irrelevant. Our assumption ensures in particular that~$s_{t}$ and~$s_{t+k+1}$ are well defined. For $w'$ to be extendible the following conditions must hold: For the left end of the word, it is necessary that~$m_0 \cdot n_t \geq 0$ and~$|n_t|\geq |m_0|$. Similarly on the right side of the word, it is necessary that~$m_{k} \cdot n_{t+k} \geq 0$ and~$|n_{t+k}|\geq |m_k|$. We translate these conditions on~$w'$ into conditions on~$\widetilde{w'}^{*}$. For the left end of the word, it requires that one of the following holds: \begin{center}$\begin{array}{lllcccl} m_0 = 0,\\ m_0 > 0 &\wedge &\widetilde{n_{t}}^{*} \geq m_0+1,\\ m_0 > 0 &\wedge &\widetilde{n_{t}}^{*} = m_0 &\wedge &(s_{t} = a \vee r_{1} \neq a^{-1}),\\ m_0 > 0 &\wedge &\widetilde{n_{t}}^{*} = m_0-1 &\wedge & s_{t} = a \wedge r_{1} \neq a^{-1},\\ m_0 < 0 &\wedge &\widetilde{n_{t}}^{*} \leq m_0-1,\\ m_0 < 0 &\wedge &\widetilde{n_{t}}^{*} = m_0 &\wedge & (s_{t} \neq a \vee r_{1} = a^{-1}), &\text{ or}\\ m_0 < 0 &\wedge &\widetilde{n_{t}}^{*} = m_0+1 &\wedge & s_{t} \neq a \wedge r_{1} = a^{-1}. \end{array}$\end{center} Note that these cases are disjoint. Since~$m_0$ and~$r_1$ are determined by the fixed word~$w$, the conditions only depend on~$s_t$ and~$\widetilde{n_{t}}^{*}$. Moreover, for the cases involving an inequality for~$\widetilde{n_{t}}^{*}$ the value of~$s_{t}$ is irrelevant. Thus, whether the conditions are fulfilled depends only on at most~$|m_0|+1$ letters to the left of~$\widetilde{w'}^{*}$. The requirements on the right side are similar by the symmetry of taking inverses and the same conclusions hold. This implies that the conditions can be expressed by a finite set of words~$\mathcal{W}$ in which~$\widetilde{w'}^{*}$ must be contained. 
More precisely: Let~$\mathcal{W}_{\text{left}} = $ \[\begin{cases} \{\varepsilon\}& \text{ if~$m_0 = 0$},\\ \{b^{m_0}\} \cup \{ab^{m_0-1}\} & \text{ if~$m_0 > 0$ and~$r_{1} \neq a^{-1}$},\\ \{b^{m_0+1}\}\cup \{ab^{m_0}\} & \text{ if~$m_0 > 0$ and~$r_{1} = a^{-1}$},\\ \{sb^{m_0} \mid s \in (S\cup S^{-1})\setminus \{a,b,b^{-1}\}\} \cup \{b^{m_0-1}\} & \text{ if~$m_0 < 0$ and~$r_{1} \neq a^{-1}$},\\ \{sb^{m_0+1} \mid s \in (S\cup S^{-1})\setminus \{a,b,b^{-1}\}\} \cup \{b^{m_0}\} & \text{ if~$m_0 < 0$ and~$r_{1} = a^{-1}$}.\\ \end{cases}\] Similarly we can define~$\mathcal{W}_{\text{right}}$. Consider the set of words~$\mathcal{W} = \mathcal{W}_{\text{left}} \cdot \widetilde{w'}^{*} \cdot \mathcal{W}_{\text{right}}$. If~$w'$ is a~$b$-truncated word of~$T(g)$ that is not a border case and~$\widetilde{w'}^{*}$ is the corresponding~$b$-truncated word of~$g$ then~$w'$ is extendible if and only if the occurrence of~$\widetilde{w'}^{*}$ in~$g$ extends to an occurrence of a word in~$\mathcal{W}$. By definition, the number of occurrences of $w$ in $T(g)$ is precisely the number of extendible $w'$s or, equivalently, of the corresponding $\widetilde{w'}^*$s. We claim that this number is precisely given by the number of subwords that are equal to some word in $\mathcal W$; in other words, every $\widetilde{w'}^*$ can be extended in at most one way to an element of $\mathcal W$. Let us call a word in~$\mathcal{W}$ an extension of a word equal to~$\widetilde{w'}^{*}$ if it is obtained by attaching a word from~$\mathcal{W}_{\text{left}}$ to the left and a word from~$\mathcal{W}_{\text{right}}$ to the right. We need to argue that for every word equal to~$\widetilde{w'}^{*}$ there is at most one extension to a word in~$\mathcal{W}$ and every word in~$\mathcal{W}$ is the extension of at most one word equal to~$\widetilde{w'}^{*}$.
Another way of saying this is the following claim: If~$w_l \widetilde{w'}^{*} w_r = w'_l \widetilde{w'}^{*} w'_r$ with~$w_l,w'_l \in \mathcal{W}_{\text{left}}$ and~$w_r,w'_r \in \mathcal{W}_{\text{right}}$ then~$w_l = w_l'$ and~$w_r = w'_r$. This claim follows from the fact that no word in~$\mathcal{W}_{\text{left}}$ can be extended to another word in~$\mathcal{W}_{\text{left}}$ by adding letters on the right and the symmetric fact for~$\mathcal{W}_{\text{right}}$. We have thus shown that~$| \#_w(T(g)) - \sum_{u \in \mathcal{W}} \#_u(g) | \leq 2$. Together with the analogous estimate for $w^{-1}$, this implies that~$T^*\phi_w$ is equivalent to~$\sum_{u\in \mathcal{W}} \phi_u$. \end{proof} In order to obtain $T$-invariance of $\mathcal H^*(F_n, S)$ it remains to deal with counting quasimorphisms of the form $\phi_{b^k}$. Here we use the following reduction step: \begin{lemma}\label{ReductionTrickGrig} Let $s \in S \cup S^{-1}$ and let $w$ be a reduced word with last letter $s$. Then the quasimorphism \[\phi_w - \sum_{s' \in (S \cup S^{-1})\setminus\{s^{-1}\}} \phi_{ws'}\] is bounded, hence has trivial homogenization. \end{lemma} \begin{proof} Assume $w$ occurs as a subword in some reduced word $w_0$. Then either $w$ contains the last letter, or there exists a letter $s'$ after the occurrence, which is different from $s^{-1}$. Thus the above difference counts the difference between the number of occurrences of $w$ containing the last letter and the number of occurrences of $w^{-1}$ containing the first letter, which is bounded in absolute value by $1$. \end{proof} A variant of this lemma for certain non-overlapping counting quasimorphisms appears already in \cite{Grigorchuk}. However, for the lemma to hold in general, we need to work with overlapping rather than non-overlapping counting quasimorphisms.
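The boundedness asserted in the reduction step can also be checked numerically. The following Python sketch is our own illustration (with $S = \{a, b\}$, uppercase letters denoting inverses, and the sum taken over all letters of $S \cup S^{-1}$ other than the inverse of the last letter of $w$): it samples random reduced words and verifies that the difference never exceeds $1$ in absolute value.

```python
import random

# Illustration (not from the paper): numerical check of the reduction trick,
# with S = {a, b} and uppercase letters denoting formal inverses.
LETTERS = "abAB"

def inv(w):
    return w[::-1].swapcase()

def count(u, g):
    return sum(1 for i in range(len(g) - len(u) + 1) if g[i:i + len(u)] == u)

def phi(u, g):
    """Overlapping counting quasimorphism phi_u = #_u - #_{u^{-1}}."""
    return count(u, g) - count(inv(u), g)

def reduction_defect(w, g):
    """phi_w(g) minus the sum of phi_{ws'}(g) over letters s' != s^{-1}."""
    s = w[-1]
    return phi(w, g) - sum(phi(w + x, g) for x in LETTERS if x != s.swapcase())

def random_reduced(n, rng):
    """A random reduced word of length n."""
    out = []
    while len(out) < n:
        x = rng.choice(LETTERS)
        if not out or x != out[-1].swapcase():
            out.append(x)
    return "".join(out)

rng = random.Random(0)
for _ in range(200):
    g = random_reduced(30, rng)
    # The difference counts (occurrence of w at the very end of g) minus
    # (occurrence of w^{-1} at the very beginning of g), so it lies in {-1, 0, 1}.
    assert abs(reduction_defect("ab", g)) <= 1
    assert abs(reduction_defect("aba", g)) <= 1
```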
While the lemma is useful for the understanding of the ${\rm Out}(F_n)$-module structure of $\mathcal H(F_n)$ in general, here we are only interested in the following simple consequence: \begin{lemma}\label{LemmabPowers} For~$k\in \mathbb{Z}$, $T^*\phi_{b^k} \in \mathcal H^*(F_n, S)$. \end{lemma} \begin{proof} It suffices to show the statement for~$k> 0$. For~$k=1$ we have~$T^*(\phi_b) = \phi_a+\phi_b$. For~$k>1$, by Lemma~\ref{ReductionTrickGrig}, $\phi_{b^k}$ is equivalent to~$\phi_{b^{k-1}} - \sum_{s\in (S\cup S^{-1})\setminus \{b,b^{-1}\} }\phi_{b^{k-1}s}$ and the lemma follows by induction on~$k$ using Lemma~\ref{lem:non:b:power}. \end{proof} We deduce: \begin{corollary}\label{InvarianceHStar} The space $\mathcal H^*(F_n, S)$ is invariant under ${\rm Out}(F_n)$. \end{corollary} \begin{proof} Combining Lemma \ref{lem:non:b:power} and Lemma \ref{LemmabPowers} we see that $\mathcal H^*(F_n, S)$ is invariant under $T$. Since we have seen above that $\mathcal H^*(F_n, S)$ is also invariant under the other Nielsen transformations and since these generate ${\rm Out}(F_n)$ as a semigroup, we deduce that $\mathcal H^*(F_n, S)$ is invariant under ${\rm Out}(F_n)$. \end{proof} We can now finish the proof of Theorem \ref{HStar}. \begin{proof}[Proof of Theorem \ref{HStar}] We have seen in Corollary \ref{InvarianceHStar} that $\mathcal H^*(F, S)$ is invariant under ${\rm Out}(F)$. Since ${\rm Out}(F)$ acts transitively on free generating sets, it follows that $\mathcal H^*(F, S)$ is independent of the choice of $S$. This shows (i). Moreover, $\mathcal H^*(F, S)$ contains the span of the family $(\widehat{\phi_g}\,|\,g \in \mathcal G(S, \leq)^+)$. By Theorem \ref{Grig}, this span is countable-dimensional and dense in $\mathcal H(F)$, hence $\mathcal H^*(F, S)$ is also dense and of dimension at least countably infinite. On the other hand, it is spanned by countably many elements, so it is indeed countable-dimensional.
This establishes (ii) and (iii) and finishes the proof of Theorem \ref{HStar}. \end{proof} \begin{remark} The proof of Theorem \ref{HStar} does not exhibit any countable basis of $\mathcal H^*(F)$. It shows, however, that for any choice of ordered free generating set $(S, \leq)$ there exists a basis of $\mathcal H^*(F)$ containing the countable linearly independent subset $\{\widehat{\phi_g}\,| g\in \mathcal G(S, \leq)^+\}$. Finding such a basis would be of great interest for the understanding of the ${\rm Out}(F)$-action on $\mathcal H^*(F)$. \end{remark} \section{The category of homogeneous quasigroups}\label{SecHQGrp} \subsection{Basic properties}\label{SecBasic} We now return to the categories $\mathfrak{QGrp}$ and $\mathfrak{HQGrp}$ defined in the introduction. Our first goal is to establish some of their basic properties and in particular to verify the various claims made in the introduction. By definition, a map $f: G \rightarrow H$ is called a quasimorphism if $f^*\alpha$ is a quasimorphism for every real-valued quasimorphism $\alpha: H \rightarrow \mathbb{R}$. We claim that it suffices to check this property for all homogeneous quasimorphisms $\alpha \in \mathcal H(H)$. Indeed, by Lemma \ref{sclStuff} every real-valued quasimorphism $\alpha$ can be written uniquely as $\alpha = \widehat{\alpha} + b$ where $\widehat{\alpha}$ is a homogeneous quasimorphism and $b: H \rightarrow \mathbb{R}$ a bounded function. Then $f^*\alpha = f^*\widehat{\alpha} + f^*b$ and the claim follows from the fact that $f^*b$ is bounded. The same argument also shows that quasimorphisms $f_1, f_2: G \rightarrow H$ are equivalent if and only if $f_1^*\alpha-f_2^*\alpha$ is bounded for all $\alpha \in \mathcal H(H)$.
This in turn is equivalent to the vanishing of the homogenization of $f_1^*\alpha-f_2^*\alpha$, or equivalently to the condition \begin{equation}\label{InjectivityHomMap} \forall \alpha \in \mathcal H(H): \widehat{f_1^*\alpha} = \widehat{f_2^*\alpha}.\end{equation} The following criterion is often useful in constructing quasimorphisms: \begin{lemma}\label{lem:qmorphism:criterion} Let~$f \colon G \rightarrow H$ be a map. If there is a finite set~$E\subseteq H$ such that $\forall g_1,g_2\in G \, \exists h \in H \colon \, f(g_1g_2) \in E f(g_1) Eh Eh^{-1}E f(g_2) E$, then~$f$ is a quasimorphism. \end{lemma} \begin{proof} Let $\alpha \in \mathcal H(H)$ and $g_1, g_2 \in G$. By assumption there is an $h \in H$ such that $\alpha(f(g_1 g_2)) \in \alpha( E f(g_1) Eh Eh^{-1}E f(g_2) E)$. Thus, \begin{eqnarray*} D(f^*\alpha)&=&\sup_{g_1, g_2 \in G} \left| f^*\alpha(g_1g_2) - f^*\alpha(g_1) - f^*\alpha(g_2)\right|\\ &\leq& \sup_{g_1, g_2 \in G} \sup_{e_j \in E} \left|\alpha( e_1 f(g_1) e_2h e_3h^{-1}e_4 f(g_2) e_5) - \alpha(f(g_1)) - \alpha(f(g_2))\right|\\ &\leq&\sup_{g_1, g_2 \in G}\left|\alpha(f(g_1)f(g_2)) - \alpha(f(g_1)) - \alpha(f(g_2))\right| + 9D(\alpha) + 5\max_{e \in E} |\alpha(e)|\\ &\leq& 10 D(\alpha) + 5\max_{e \in E} |\alpha(e)| < \infty. \end{eqnarray*} \end{proof} We will apply a version of this lemma to provide plenty of explicit examples of quasimorphisms between free groups in Section \ref{SecFreeQM} below. We also record the following special case: \begin{corollary} Every homomorphism is a quasimorphism. In particular, the pullback of $\alpha \in \mathcal Q(H)$ by a homomorphism $f : G\rightarrow H$ satisfies $f^*\alpha \in \mathcal Q(G)$, whence the functor $\mathcal Q$ is well-defined. \end{corollary} Since the pullback of a homogeneous function by a homomorphism is obviously again homogeneous we also deduce that the functor $\mathcal H: \mathfrak{Grp} \rightarrow \mathfrak{Vect}$ is well-defined.
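As a toy instance of the criterion (our own illustration, not an example from the paper), take $G = F_2$ and $H = \mathbb{Z}$, written additively, and let $f = \phi_{ab}$ be the overlapping counting quasimorphism viewed as a map into $\mathbb{Z}$. Only occurrences of $ab$ or $b^{-1}a^{-1}$ crossing the junctions created by cancellation contribute to $f(g_1g_2) - f(g_1) - f(g_2)$, so this defect lies in the finite set $E = \{-3, \dots, 3\}$ and the criterion applies with $h = 0$:

```python
import random

# Illustration (not from the paper): the map f = phi_{ab} : F_2 -> Z, viewed as
# a candidate morphism in QGrp. Words over S = {a, b}; uppercase = inverse.
LETTERS = "abAB"

def reduce_word(w):
    """Freely reduce a word by cancelling adjacent x x^{-1} pairs."""
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()
        else:
            out.append(x)
    return "".join(out)

def count(u, g):
    return sum(1 for i in range(len(g) - len(u) + 1) if g[i:i + len(u)] == u)

def f(g):
    """f(g) = phi_{ab}(g) = #_{ab}(g) - #_{b^{-1}a^{-1}}(g), an integer."""
    return count("ab", g) - count("BA", g)

def random_reduced(n, rng):
    out = []
    while len(out) < n:
        x = rng.choice(LETTERS)
        if not out or x != out[-1].swapcase():
            out.append(x)
    return "".join(out)

# f(g1 g2) - f(g1) - f(g2) only depends on length-2 occurrences crossing the
# three junctions created by cancellation, so it lies in {-3, ..., 3}; the
# criterion of the lemma then holds with E = {-3, ..., 3} and h = 0.
rng = random.Random(0)
for _ in range(300):
    g1, g2 = random_reduced(25, rng), random_reduced(25, rng)
    defect = f(reduce_word(g1 + g2)) - f(g1) - f(g2)
    assert abs(defect) <= 3
```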
Next let us check that the categories $\mathfrak{QGrp}$ and $\mathfrak{HQGrp}$ are well-defined. For $\mathfrak{QGrp}$ this follows immediately from the fact that composition of maps is associative. Concerning $\mathfrak{HQGrp}$ we establish: \begin{lemma} Let $f_1, f_2 \in {\rm Hom}_{\mathfrak{QGrp}}(G, H)$ and $g_1, g_2 \in {\rm Hom}_{\mathfrak{QGrp}}(H, K)$. If $f_1 \sim f_2$ and $g_1 \sim g_2$ then $g_1 \circ f_1 \sim g_2 \circ f_2$. \end{lemma} \begin{proof} Let $\alpha \in \mathcal H(K)$. By assumption the functions $b_H := g_1^*\alpha - g_2^*\alpha$ and $b_G:= f_1^*(g_2^*\alpha) -f_2^*(g_2^*\alpha)$ are bounded. Thus, \begin{eqnarray*} (g_1 \circ f_1)^*\alpha - (g_2 \circ f_2)^*\alpha &=& f_1^*(g_2^*\alpha + b_H) - f_2^*(g_2^*\alpha)\\ &=& b_H \circ f_1 + b_G, \end{eqnarray*} which is bounded. \end{proof} At this point we have established that all the functors and categories introduced in the introduction are well-defined. Note that for maps $f: G \rightarrow \mathbb{R}$ we currently have two different notions of quasimorphism: Let us temporarily call $f$ a quasimorphism of the first kind if its defect is bounded and a quasimorphism of the second kind if it pulls back all quasimorphisms of the first kind to such quasimorphisms. We also have corresponding notions of equivalence of the first and second kind. \begin{lemma} The two notions of quasimorphisms for maps $f: G \rightarrow \mathbb{R}$ coincide. Similarly, the two notions of equivalence coincide. \end{lemma} \begin{proof} Let $f$ be a quasimorphism of the first kind and $\alpha \in \mathcal H(\mathbb{R})$. By Lemma \ref{sclStuff}.(iv) we have $\alpha = \lambda \cdot {\rm Id}$ for some $\lambda \in \mathbb{R}$, whence $f^*\alpha = \lambda \cdot f \in \mathcal Q(G)$, showing that $f$ is also of the second kind. The converse is obvious, since $f^*{\rm Id} = f$. The proof concerning equivalence is similar.
\end{proof} \begin{corollary} ${\rm Hom}_{\mathfrak{QGrp}}(G; \mathbb{R}) = \mathcal Q(G)$ and ${\rm Hom}_{\mathfrak{HQGrp}}(G; \mathbb{R}) = \mathcal H(G)$. \end{corollary} For our further study of homsets we introduce the abbreviations \[ \widetilde{QQ}(G,H) := {\rm Hom}_{\mathfrak{QGrp}}(G; H), \quad QQ(G, H) := {\rm Hom}_{\mathfrak{HQGrp}}(G; H). \] We also keep the notation $\widehat{\alpha}$ to denote the homogenization of a quasimorphism $\alpha$. We can then reformulate Criterion \eqref{InjectivityHomMap} as follows: \begin{lemma}\label{QoutAction} The map \[ \iota: QQ(G, H) \rightarrow {\rm Hom}_{\mathfrak Vect}(\mathcal H(G), \mathcal H(H)), \quad \iota(f)(\alpha) = \widehat{f^*\alpha}, \] and hence also the map ${\rm QOut}(G) = QQ(G, G)^\times \rightarrow {\rm Aut}_{\mathfrak{Vect}}(\mathcal H(G))$ is injective. \end{lemma} We draw two immediate consequences. Firstly, we deduce the following claim from the introduction: \begin{corollary}\label{OutQout} The natural map ${\rm Aut}_{\mathfrak Grp}(G) \rightarrow {\rm QOut}(G) $ factors through ${\rm Out}(G)$. \end{corollary} Secondly we observe that the inclusion $QQ(G, H) \hookrightarrow {\rm Hom}(\mathcal H(G), \mathcal H(H))$ equips the homset $QQ(G,H)$ with the structure of a vector space. The abelian group structure actually has an intrinsic representation: The sum $[f_1]\oplus[f_2]$ is represented by the pointwise product $g\mapsto f_1(g)f_2(g)$ (or, alternatively, $g \mapsto f_2(g)f_1(g)$), the neutral element is represented by the constant map $g \mapsto e_H$ and the inverse of $f$ is represented by $g\mapsto f(g)^{-1}$. \begin{lemma} The composition map $QQ(G, H) \times QQ(H,K) \rightarrow QQ(G,K)$ is bilinear. \end{lemma} \begin{proof} Let $g_1, g_2 \in \widetilde{QQ}(G, H)$, $h_1, h_2 \in \widetilde{QQ}(H,K)$ and $\alpha \in \mathcal H(K)$.
Then there exist bounded functions $b_j(x)$, $j=1,2$ such that \begin{eqnarray*} (h_1\circ (g_1g_2))^*\alpha(x) &=&(h_1^*\alpha)(g_1(x)g_2(x)) \\ &=& (h_1^*\alpha)(g_1(x))+ (h_1^*\alpha)(g_2(x)) + b_1(x)\\ &=& (h_1 \circ g_1)^*\alpha(x) + (h_1 \circ g_2)^*\alpha(x) + b_1(x)\\ &=& [(h_1\circ g_1)(h_1 \circ g_2)]^*\alpha(x) + b_2(x), \end{eqnarray*} hence \[[h_1]\circ ([g_1]\oplus[g_2]) =[h_1 \circ (g_1g_2)]= [(h_1\circ g_1)(h_1 \circ g_2)] = ([h_1]\circ [g_1])\oplus([h_1] \circ [g_2]).\] Similarly, \begin{eqnarray*} (h_1h_2\circ g_1)^*\alpha(x) &=& (h_1 \circ g_1)^*\alpha(x) + (h_2 \circ g_1)^*\alpha(x) + b_1(x)\\ &=& [(h_1\circ g_1)(h_2 \circ g_1)]^*\alpha(x) + b_2(x), \end{eqnarray*} hence $([h_1]\oplus[h_2])\circ [g_1] = ([h_1]\circ [g_1])\oplus([h_2] \circ [g_1])$. \end{proof} Since finite biproducts clearly exist in $\mathfrak{QGrp}$ we deduce: \begin{corollary}\label{CorAdditiveCategory} The category $\mathfrak{HQGrp}$ is an additive category. \end{corollary} \subsection{The group ${\rm QOut}(G)$} In the sequel we will always consider ${\rm QOut}(G)$ as a subgroup of ${\rm Aut}_{\mathfrak{Vect}}(\mathcal H(G))$ by means of Lemma \ref{QoutAction}. We will also denote by \[{\rm qout}_G: {\rm Out}(G) \rightarrow {\rm QOut}(G)\] the canonical map given by Corollary \ref{OutQout}. In general, this map is neither injective nor surjective. We will deal with the cases of amenable, respectively free groups in Section \ref{SecAmenable} and Section \ref{SecFreeQM} below. Before we turn to these computations we collect some properties of ${\rm QOut}(G)$ which can be derived purely formally. \begin{proposition} ${\rm QOut}(G)$ acts continuously on $\mathcal H(G)$ with respect to the topology of pointwise convergence. \end{proposition} \begin{proof} The proof is just as for ${\rm Out}(G)$: If $\alpha_n \rightarrow \alpha$ pointwise in $\mathcal H(G)$, i.e., $\alpha_n(g) \rightarrow \alpha(g)$ for all $g \in G$, and $[\phi]\in {\rm QOut}(G)$, then $\phi^* \alpha_n(g) =\alpha_n(\phi(g)) \rightarrow \alpha(\phi(g)) = \phi^*\alpha(g)$ for all $g \in G$.
\end{proof} \begin{corollary} If $V < \mathcal H(G)$ is a subspace which is dense for the topology of pointwise convergence, then every $g \in {\rm QOut}(G)$ is uniquely determined by the restriction $g|_V: V \rightarrow \mathcal H(G)$. \end{corollary} The corollary is particularly useful for computations in free groups, where we can use the dense subspace $\mathcal H^*(F)$ constructed in Theorem \ref{HStar}. Recall that given a group $G$, a subgroup $H< G$ is called a \emph{retract} of $G$ if there exists a \emph{retraction} $r: G \rightarrow H$, i.e., a group homomorphism which is left-inverse to the inclusion $\iota_H: H \rightarrow G$. A left-inverse to the class $[\iota_H]$ in the category $\mathfrak{HQGrp}$ will be called a \emph{quasi-retraction}, and if such a quasi-retraction exists, then $H$ will be called a \emph{quasi-retract} of $G$. We observe: \begin{lemma} If $H<G$ is a quasi-retract then the restriction map ${\rm res}_H^G: \mathcal H(G) \rightarrow \mathcal H(H)$, $f \mapsto f|_H$ is onto. \end{lemma} \begin{proof} If $r: G \rightarrow H$ is a quasimorphism whose class is a quasi-retraction, then for every $\alpha \in \mathcal H(H)$ we have $\alpha = (\widehat{r^*\alpha})|_H$, since both sides are homogeneous quasimorphisms at bounded distance from each other, hence equal. \end{proof} \begin{lemma} If $r: G \rightarrow H$ is a quasimorphism such that $[r]$ is a quasi-retraction, then \[r^*: QQ(H,H) \rightarrow QQ(G,G), [f] \mapsto [\iota_H \circ f \circ r]\] is an injective algebra homomorphism. \end{lemma} \begin{proof} The map is well-defined and provides a homomorphism since $[r\circ\iota_H] = [{\rm Id}_H]$. Now assume that $f \in {\ker(r^*)}$ and let $\psi \in \mathcal H(H)$. Let $\phi := r^*\psi \in \mathcal H(G)$ so that $\psi = \phi|_{H}$. Then we have \[\phi = (\iota_H \circ f \circ r)^*\phi = r^*f^*\iota_H^*\phi\Rightarrow \iota_H^*\phi = f^*\iota_H^*\phi \Rightarrow \psi = f^*\psi.\] This shows that $[f]$ is trivial in $QQ(H,H)$, so $r^*$ is injective.
\end{proof} \begin{corollary} For groups $H_1, H_2$ there is a canonical injection \[{\rm QOut}(H_1) \times {\rm QOut}(H_2) \hookrightarrow {\rm QOut}(H_1 \times H_2).\] \end{corollary} \begin{proof} The projections from $G = H_1 \times H_2$ to the factors are retractions, hence induce injective maps $QQ(H_j) \rightarrow QQ(G)$. Since the images act on different factors, the product map \[QQ(H_1) \times QQ(H_2) \rightarrow QQ(G)\] is still injective. Moreover, it maps $({\rm id}_{H_1}, {\rm id}_{H_2})$ to ${\rm id}_G$. It follows that it maps inverses to inverses and thus preserves the subgroups of invertible elements. \end{proof} In general, the canonical map will not be surjective, e.g. the flip $(h_1, h_2) \mapsto (h_2, h_1)$ in ${\rm QOut}(H\times H)$ is not contained in its image. \subsection{The quasification functor} Our next goal is to construct a more efficient model of the category $\mathfrak{HQGrp}$. To this end we construct a projection functor $Q: \mathfrak{HQGrp} \rightarrow \mathfrak{HQGrp}$, which is an equivalence of categories. This will reduce computations in $\mathfrak{HQGrp}$ to computations in $Q(\mathfrak{HQGrp})$. We need some basic results concerning kernels of quasimorphisms: \begin{definition} Let $\alpha: G \rightarrow \mathbb{R}$ be a quasimorphism. A subgroup $N$ of $G$ is called a \emph{period subgroup} if $\alpha|_N$ is bounded. A quasimorphism is called \emph{aperiodic} if every period subgroup is trivial. If $H$ is a quotient of $G$ with canonical projection $p: G \rightarrow H$, then $\alpha$ is said to \emph{factor through} $H$ if $\alpha = p^*\beta$ for some quasimorphism $\beta: H \rightarrow \mathbb{R}$. \end{definition} We recall the following result from \cite{BSHLie}: \begin{lemma} Let $\{e\} \rightarrow N \rightarrow G \rightarrow H \rightarrow \{e\}$ be a short exact sequence of groups and $\alpha: G \rightarrow \mathbb{R}$ be a homogeneous quasimorphism.
Then the following are equivalent: \begin{itemize} \item[(i)] $\alpha|_N$ is bounded. \item[(ii)] $\alpha|_N \equiv 0$. \item[(iii)] $\alpha$ factors through $H$. \end{itemize} \end{lemma} Given $g \in G$ we denote by $N(g)$ the smallest normal subgroup of $G$ containing $g$. Then the last lemma implies (cf. \cite{BSHLie}): \begin{proposition} Let $G$ be a group and $\alpha: G \rightarrow \mathbb{R}$ a quasimorphism. Then there is a unique maximal period subgroup for $\alpha$, which is given by \[\ker(\alpha) = \{g \in G \mid \alpha|_{N(g)} {\rm \ bounded}\}.\] Moreover, the homogenization of $\alpha$ factors through an aperiodic quasimorphism on $G/{\ker(\alpha)}$. \end{proposition} We refer to $\ker(\alpha)$ as the \emph{kernel} of $\alpha$. \begin{definition} Let $G$ be a group. The normal subgroup \[R_q(G) := \bigcap_{\alpha \in \mathcal H(G)} \ker(\alpha)\] is called the \emph{quasi-radical} of $G$ and the quotient $Q(G) := G/R_q(G)$ is called the \emph{quasification} of $G$. The group $G$ is \emph{quasi-separated} if $Q(G) = G$. \end{definition} Quasification has the following universal property: \begin{proposition}\noindent \label{FunctorQ} \begin{itemize} \item[(i)] Every homogeneous quasimorphism of $G$ factors through $Q(G)$ and $Q(G)$ is maximal with this property. \item[(ii)] If $f: G \rightarrow H$ is any homomorphism, then there is a unique homomorphism $Q(f): Q(G) \rightarrow Q(H)$ such that the diagram \[\begin{xy}\xymatrix{ G \ar[d]\ar[r]^f& H\ar[d]\\ Q(G)\ar[r]^{Q(f)}&Q(H) }\end{xy}\] commutes. \item[(iii)] For all groups $G$ and homomorphisms $f$ we have $Q(Q(G)) = Q(G)$ and $Q(Q(f)) = Q(f)$. \end{itemize} \end{proposition} \begin{proof} (i) holds by construction of $Q(G)$ and (ii) follows from Lemma \ref{sclStuff}.(i). For (iii) assume $x \in R_q(Q(G))$. Then $f(x) = 0$ for all homogeneous quasimorphisms $f: Q(G) \rightarrow \mathbb{R}$.
Let $p: G \rightarrow Q(G)$ denote the projection and choose $y \in G$ with $p(y) = x$; then $p^*f(y) = 0$ for all $f$ as above. By (i) this implies $F(y) = 0$ for every homogeneous quasimorphism $F$ on $G$, hence $y \in R_q(G)$. This implies that $x= p(y) = e$, hence $R_q(Q(G))$ is trivial. \end{proof} The proposition can be restated as saying that $Q$ is a projection functor from the category of groups to the full subcategory of quasi-separated groups. The main result of this subsection is the following theorem: \begin{theorem}\label{QuasificationEquivalence} Quasification extends to a functor $Q: \mathfrak{HQGrp} \rightarrow \mathfrak{HQGrp}$, which is a self-equivalence of categories. In particular, every isomorphism class in $\mathfrak{HQGrp}$ is represented by a quasi-separated group. \end{theorem} Let us start by extending $Q$ to a functor on $\mathfrak{HQGrp}$. \begin{proposition} Let $G, H$ be groups, $p_G : G \rightarrow Q(G)$, $p_H: H \rightarrow Q(H)$ the canonical projections and $f: G \rightarrow H$ a quasimorphism. Then there exists a quasimorphism $Q(f): Q(G) \rightarrow Q(H)$, unique up to equivalence, such that \begin{eqnarray}\label{QuasiFunctoriality} Q(f)(x) \in p_H (f(p_G^{-1}(x))).\end{eqnarray} If $f$ is equivalent to a homomorphism, then so is $Q(f)$. \end{proposition} \begin{proof} Let $p_G(g_1) = p_G(g_2)$. Then $g_2 = g_1k$ for some $k \in G$ which is contained in the kernel of every homogeneous quasimorphism on $G$. Now assume that $\phi: H \rightarrow \mathbb{R}$ is a homogeneous quasimorphism; then \[|f^*\phi(g_2) -f^*\phi(g_1)| = |f^*\phi(g_1k)- f^*\phi(g_1)| \leq |f^*\phi(k)| +D(f^*\phi),\] and since $k \in \ker f^*\phi$ we obtain \begin{eqnarray}\label{QuasiFunc1} |f^*\phi(g_2) -f^*\phi(g_1)| &\leq& D(f^*\phi). \end{eqnarray} Now choose $Q(f): Q(G) \rightarrow Q(H)$ to be an arbitrary map subject to \eqref{QuasiFunctoriality} and let $g, h \in Q(G)$.
By \eqref{QuasiFunctoriality} there exist elements $\hat{g}, \hat{h}, \widehat{gh} \in G$ in the respective $p_G$-fibers of $g, h, gh$ such that \[Q(f)(g) = p_H(f(\hat g)), \quad Q(f)(h) = p_H(f(\hat h)), \quad Q(f)(gh) = p_H(f(\widehat{gh})).\] Let $\psi: Q(H) \rightarrow \mathbb{R}$ be a homogeneous quasimorphism and $\phi := p_H^*\psi$. Then \begin{eqnarray*} &&|Q(f)^*\psi(gh)-Q(f)^*\psi(g)-Q(f)^*\psi(h)|\\ &=& |\psi(p_H(f(\widehat{gh})))-\psi(p_H(f(\hat g)))-\psi(p_H(f(\hat h)))|\\ &=& |f^*\phi(\widehat{gh})-f^*\phi(\hat{g})-f^*\phi(\hat{h})|\\ &\leq& |f^*\phi(\widehat{gh})-f^*\phi(\widehat{g}\widehat{h})|+ |f^*\phi(\widehat{g}\widehat{h})-f^*\phi(\hat{g})-f^*\phi(\hat h)|\\ &\overset{\eqref{QuasiFunc1}}\leq& 2D(f^*\phi), \end{eqnarray*} hence $Q(f)^*\psi$ is a quasimorphism. This shows that $Q(f)$ is a quasimorphism. It follows from \eqref{QuasiFunc1} that if $Q_1(f), Q_2(f)$ are two quasimorphisms satisfying \eqref{QuasiFunctoriality}, then for every homogeneous quasimorphism $\phi: Q(H) \rightarrow \mathbb{R}$ the difference $Q_1(f)^*\phi- Q_2(f)^*\phi$ is uniformly bounded, whence $Q_1(f)$ and $Q_2(f)$ are equivalent. The last statement of the proposition follows from the fact that every homomorphism $G \rightarrow H$ descends to $Q(G) \rightarrow Q(H)$ by Proposition~\ref{FunctorQ} and the uniqueness part. \end{proof} We emphasize that given a quasimorphism $f$ the quasimorphism $Q(f)$ is defined only up to equivalence. Nevertheless we will abuse notation and write $Q(f)$ to denote any fixed choice of representative. With this abuse of notation understood, we have a well-defined map \begin{equation}\iota_{G,H}: QQ(G, H) \rightarrow QQ(Q(G), Q(H)), \quad [f] \mapsto [Q(f)].\end{equation} Now the key step in the proof of Theorem \ref{QuasificationEquivalence} is the following observation: \begin{proposition} The map $\iota_{G,H}$ is an isomorphism of abelian groups.
\end{proposition} \begin{proof} We first show that $\iota_{G,H}$ is a morphism of abelian groups. Let us fix a (set-theoretic) section $\sigma_G: Q(G) \rightarrow G$ of $p_G$. Then for any quasimorphism $f: G \rightarrow H$ a representative of the class $[Q(f)]$ is given by $Q(f)(x) = p_H(f(\sigma_G(x)))$. With this choice of representatives we see that for any pair $f_1, f_2: G \rightarrow H$ of quasimorphisms we have \begin{eqnarray*} (Q(f_1) \cdot Q(f_2))(x) &=& p_H (f_1(\sigma_G(x)))\cdot p_H (f_2(\sigma_G(x)))\\ &=& p_H(f_1(\sigma_G(x))f_2(\sigma_G(x)))\\ &=& p_H(f_1f_2(\sigma_G(x)))\\ &=& Q(f_1f_2)(x), \end{eqnarray*} showing that $[Q(f_1f_2)] = [Q(f_1)][Q(f_2)]$. Next let us compute the kernel of $\iota_{G,H}$. Suppose $[f] \in \ker(\iota_{G,H})$, i.e., $[Q(f)]$ is the class of the constant map $e$. Let $\beta \in \mathcal H(H)$ and observe that $\beta = p_H^*\alpha$ for some $\alpha \in \mathcal H(Q(H))$. We then have $[Q(f)^*\alpha] = [e^*\alpha] = [0]$ and hence \[ [f^*\beta] = [p_G^*Q(f)^*\alpha] = [0]. \] Since $\beta$ was arbitrary, we deduce that $[f]$ represents the trivial class, whence $\iota_{G,H}$ is injective. To see surjectivity fix a quasimorphism $g: Q(G) \rightarrow Q(H)$ and choose any function $f: G \rightarrow H$ satisfying \begin{equation}\label{LiftingQ} p_H \circ f = g \circ p_G. \end{equation} We claim that $f$ is a quasimorphism. Indeed let $\beta \in \mathcal H(H)$ and choose $\alpha \in \mathcal H(Q(H))$ with $\beta = p_H^*\alpha$. Then \[f^*\beta = (p_H \circ f)^*\alpha =p_G^*(g^*\alpha)\] is a quasimorphism, establishing the claim. It then follows from \eqref{LiftingQ} that $[Q(f)] = [g]$. \end{proof} \begin{corollary}\label{Qs} Let $G, H$ be groups. Then \[QQ(G, H) \cong QQ(G, Q(H)) \cong QQ(Q(G), H) \cong QQ(Q(G), Q(H)).\] \end{corollary} \begin{proof} Since $Q^2 = Q$ we have $QQ(G, Q(H)) \cong QQ(Q(G), Q(H))$. A similar argument yields $QQ(Q(G), H) \cong QQ(Q(G), Q(H))$. 
\end{proof} This finishes the proof of Theorem \ref{QuasificationEquivalence}. \subsection{Quasi-separated groups} In view of Theorem \ref{QuasificationEquivalence} the category of homogeneous quasi-groups is equivalent to its full subcategory of quasi-separated groups. We would thus like to understand the structure of quasi-separated groups. As far as amenable quasi-separated groups are concerned, it is easy to obtain a complete understanding: \begin{proposition}\label{AmenableQ} If $G$ is amenable, then $Q(G)$ is the quotient of the abelianization $G_{ab}$ of $G$ by its torsion subgroup. In particular, an amenable group is quasi-separated if and only if it is torsion-free abelian. \end{proposition} \begin{proof} If $G$ is amenable, then every homogeneous quasimorphism $f: G \rightarrow \mathbb{R}$ is a homomorphism by Lemma \ref{sclStuff}, hence factors through $G_{ab}$. Moreover, every homomorphism $G_{ab} \rightarrow \mathbb{R}$ vanishes on the torsion subgroup of $G_{ab}$, hence factors through $G_{ab}/{\rm Tor}(G_{ab})$. Thus $Q(G)$ is a quotient of $G_{ab}/{\rm Tor}(G_{ab})$. The proposition then follows from the fact that homomorphisms into $\mathbb{R}$ separate points in a torsion-free abelian group. \end{proof} On the other hand, there exist many non-abelian quasi-separated groups. For example, one can deduce from \cite{EpsteinFujiwara} that every torsion-free hyperbolic group is quasi-separated. More generally, one can consider the class of \emph{acylindrically hyperbolic groups}, which was introduced in \cite{Osin}, where it is also shown to coincide with various other previously studied classes of groups with weak hyperbolicity properties. It comprises all non-elementary hyperbolic and relatively hyperbolic groups, all but finitely many mapping class groups and all outer automorphism groups of finitely generated non-abelian free groups.
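Before turning to these hyperbolicity phenomena, let us illustrate Proposition~\ref{AmenableQ} by a concrete and elementary example: let $G$ be the discrete Heisenberg group of upper triangular integer $3 \times 3$ matrices with unit diagonal. Then $G$ is nilpotent, hence amenable, its commutator subgroup coincides with its center $Z(G) \cong \mathbb{Z}$, and $G_{ab} \cong \mathbb{Z}^2$ is torsion-free. Proposition~\ref{AmenableQ} thus yields \[Q(G) \cong \mathbb{Z}^2, \qquad R_q(G) = Z(G);\] in particular, $G$ is amenable but not quasi-separated.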
We recall from \cite{Osin} that a group $G$ is called acylindrically hyperbolic if it admits an acylindrical isometric action on a Gromov-hyperbolic metric space $X$, which is non-elementary in the sense that the limit set of $G$ in $\partial X$ contains at least three points. Here, acylindricity of the action means that $\forall \varepsilon > 0$ $\exists R, N >0$ $\forall x,y \in X$: \[d(x,y) >R \Rightarrow \left|\left\{g \in G\mid \max\{d(x,gx), d(y, gy)\} < \varepsilon \right\}\right|<N. \] Various equivalent characterizations of acylindrically hyperbolic groups are provided in \cite[Thm. 1.2]{Osin}. In particular, such groups contain proper infinite hyperbolically embedded subgroups. It then follows from \cite[Thm. 6.14]{DGO} that every acylindrically hyperbolic group $G$ contains a unique maximal finite normal subgroup $K(G)$. \begin{proposition}\label{Osin} Let $G$ be an acylindrically hyperbolic group with maximal finite normal subgroup $K(G)$. Then the quasimorphic radical of $G$ is given by $R_q(G) = K(G)$. In particular, $G$ is quasi-separated if and only if it does not contain any non-trivial finite normal subgroup. \end{proposition} \begin{proof}[Proof (D. Osin)] Since $K(G)$ is amenable, every real-valued homogeneous quasimorphism on $G$ restricts to a homomorphism on $K(G)$, but since $K(G)$ is torsion every such homomorphism vanishes. Thus $K(G) \subseteq R_q(G)$ and it remains only to show that if $K(G) = \{e\}$ then for every non-trivial normal subgroup $N \lhd G$ there exists a real-valued quasimorphism of $G$ which is unbounded on $N$. Thus let us fix a group $G$ acting acylindrically and non-elementarily on a hyperbolic space $X$ with $K(G) = \{e\}$ and let $N$ be a non-trivial normal subgroup. Then for every $g \in G$ we have $|gNg^{-1}\cap N| = |N| = \infty$, since there are no non-trivial finite normal subgroups. According to \cite[Lemma 7.2]{Osin} this property (called \emph{s-normality} in \cite{Osin}) implies that $N$ acts non-elementarily on $X$.
Combining this with \cite[Theorem 1.1]{Osin} we see that $N$ contains two (in fact, infinitely many) independent loxodromic elements $x,y$. The construction given in the proof of \cite[Theorem 1]{BestvinaFujiwara} then yields a non-trivial homogeneous quasimorphism on $G$ which does not vanish on the subgroup generated by $x$ and $y$, hence is unbounded on $N$.\end{proof} Proposition \ref{Osin} provides a large supply of quasi-separated groups and shows thereby that our theory of quasimorphisms has some non-trivial content. Among the examples of quasi-separated groups covered by the proposition are not only torsion-free (relatively) hyperbolic groups, but also mapping class groups of closed surfaces of genus $\geq 3$ and outer automorphism groups of free groups of finite rank $\geq 3$. \section{Quasioutomorphism groups of amenable groups}\label{SecAmenable} The goal of this section is to establish the following result: \begin{theorem}\label{AmenableMain1} Let $G, H$ be amenable groups and assume that the abelianization $H_{ab}$ of $H$ has finite rank. Then \[QQ(G, H) \cong {\rm Hom}(G, H_{ab} \otimes_{\mathbb{Z}} \mathbb{R}).\] In particular, if $G$ is amenable and $r := {\rm rk}(G_{ab}) < \infty$, then \[{\rm QOut}(G) \cong {\rm GL}_r(\mathbb{R}).\] \end{theorem} We start from the following observation: \begin{lemma}\label{TorsionFreeTarget} Let $G$ be any group and $H$ be a torsion-free abelian group. Then there is an embedding \[{\rm Hom}(G, H) \hookrightarrow QQ(G, H), \quad f \mapsto [f].\] \end{lemma} \begin{proof} Every $f \in {\rm Hom}(G, H)$ is a quasimorphism by Lemma \ref{sclStuff}. If $f_1, f_2$ are distinct homomorphisms then their classes in $QQ(G,H)$ are also distinct, since $f_1 \neq f_2$ implies that there exists $h \in {\rm Hom}(H, \mathbb{R})$ such that $f_1^*h \neq f_2^*h$. (Here we use that homomorphisms into $\mathbb{R}$ separate points in a torsion-free abelian group.)
Now distinct homomorphisms cannot be at bounded distance, so $f_1^*h$ and $f_2^*h$ (and consequently $f_1$ and $f_2$) are not equivalent.\end{proof} \begin{lemma}\label{TensorTrick} Let $H$ be an abelian group and denote by $\iota: H \rightarrow H \otimes_{\mathbb{Z}} \mathbb{R}$ the natural map. Then $[\iota]$ is invertible in $QQ(H, H \otimes_{\mathbb{Z}} \mathbb{R})$. In particular, $H$ and $H \otimes_{\mathbb{Z}} \mathbb{R}$ are isomorphic as homogeneous quasigroups. \end{lemma} \begin{proof} Write $H$ additively and denote by $\lfloor\cdot\rfloor: \mathbb{R} \rightarrow \mathbb{Z}$ the floor function. Define a map \[\{\cdot\}: H \otimes_{\mathbb{Z}} \mathbb{R} \rightarrow H, \quad \{h \otimes \lambda\} := \lfloor \lambda \rfloor \cdot h.\] Then it is easy to check that $\{\cdot\}$ is a quasimorphism representing $[\iota]^{-1}$. \end{proof} \begin{corollary}\label{AmenablePre} Let $G$ be any group and $H$ amenable. Then ${\rm Hom}(G, H_{ab} \otimes_{\mathbb{Z}} \mathbb{R})$ embeds into $QQ(G, H)$. \end{corollary} \begin{proof} By Corollary \ref{Qs}, Lemma \ref{TensorTrick} and Proposition \ref{AmenableQ} we have \begin{eqnarray*}QQ(G, H) &\cong& QQ(G, Q(H)) \cong QQ(G, Q(H) \otimes_\mathbb{Z} \mathbb{R}) \cong QQ(G, H_{ab} \otimes_{\mathbb{Z}} \mathbb{R}).\end{eqnarray*} Now apply Lemma \ref{TorsionFreeTarget}. \end{proof} Under additional assumptions on $G$ and $H$ we can in fact obtain an isomorphism, based on the following observation: \begin{lemma}\label{HomAmenable} Let $G$ be a group and $V$ be a finite-dimensional $\mathbb{R}$-vector space. Assume that every homogeneous quasimorphism on $G$ is a homomorphism. Then the map \[{\rm Hom}(G, V) \rightarrow QQ(G, V), \quad f \mapsto [f]\] is onto. \end{lemma} \begin{proof} Let $f_0: G \rightarrow V$ be a quasimorphism. We aim to construct a homomorphism $f: G \rightarrow V$ which is equivalent to $f_0$. If $V = \mathbb{R}$ then we can define $f$ to be the homogenization of $f_0$. 
If $V = \mathbb{R}^n$ we observe that the coordinates of $f_0$ are real-valued quasimorphisms $(f_0)_j: G \rightarrow \mathbb{R}$. It follows that the limit \[ f(g) := \lim_{m \rightarrow \infty}\frac{f_0(g^m)}{m} \] exists and has coordinate functions given by $f_j = \widehat{(f_0)_j}$, the homogenization of $(f_0)_j$. Each $f_j$ is a homogeneous real-valued quasimorphism, hence a homomorphism by assumption. It follows that $f$ is a homomorphism, clearly equivalent to $f_0$. \end{proof} Combining this with Corollary \ref{AmenablePre} we get: \begin{corollary}\label{AmenableMain} Let $G$ be a group and $H$ an amenable group. Assume that every homogeneous quasimorphism on $G$ is a homomorphism and that $H_{ab}$ has finite rank. Then \[QQ(G, H) \cong {\rm Hom}(G, H_{ab} \otimes_{\mathbb{Z}} \mathbb{R}).\] \end{corollary} Note that Theorem \ref{AmenableMain1} is a special case of Corollary \ref{AmenableMain}. \section{Quasioutomorphism groups of free groups}\label{SecFreeQM} \subsection{Embedding ${\rm Out}(F_n)$} We have seen in the last section that quasimorphisms between amenable groups are essentially homomorphisms. Our next goal is to show that for non-amenable groups there may exist many proper quasimorphisms. We will focus on the case where $F$ is a finitely generated non-abelian free group. Throughout we denote by $n$ the rank of $F$ and fix a free generating set $S := \{a_1, \dots, a_n\}$. As before we identify elements of $F$ with reduced words over $S \cup S^{-1}$. If we want to emphasize the rank we also write $F_n$ for $F$. Our first observation concerns the injectivity of the canonical map ${\rm qout}_{F_n}: {\rm Out}(F_n) \rightarrow {\rm QOut}(F_n)$: \begin{proposition}\label{prop-out-embedding}The map ${\rm qout}_{F_n}: {\rm Out}(F_n) \hookrightarrow {\rm QOut}(F_n)$ is injective.
\end{proposition} \begin{proof} Assume that $f \in {\rm Aut}_{\mathfrak{Grp}}(F_n)$ represents a class $[f]$ in the kernel of the map ${\rm Out}(F_n) \rightarrow {\rm QOut}(F_n)$. Write $f(a_j)$ as a reduced expression $f(a_j) \equiv x_jw_jx_j^{-1}$ with $w_j$ cyclically reduced. Consider the non-overlapping(!) $w_j$-counting quasimorphism $\phi^*_{w_j}$ and its homogenization $\widehat{\phi^*_{w_j}}$. Since $f^*\widehat{\phi^*_{w_j}} = \widehat{\phi^*_{w_j}}$ we deduce from Corollary \ref{phistarww} that \begin{eqnarray*} \widehat{\phi^*_{w_j}}(a_j) &=& \widehat{\phi^*_{w_j}}(f(a_j)) = \widehat{\phi^*_{w_j}}(x_jw_jx_j^{-1})\\ &=& \widehat{\phi^*_{w_j}}(w_j) = 1. \end{eqnarray*} This implies that for large $N$ we have $\phi^*_{w_j}(a_j^N) >0$, i.e., $w_j$ is a subword of $a_j^N$. Since $\widehat{\phi^*_{a_j^k}}(a_j) = \frac 1 k$ we deduce that $w_j=a_j$ is the only possibility.\footnote{Here it is important that we work with non-overlapping counting quasimorphisms, since $\widehat{\phi_{a_j^k}}(a_j) =1$.} It follows that for every $j=1, \dots, n$ there exists $x_j \in F_n$ such that \begin{equation} f(a_j) = x_ja_jx_j^{-1}, \end{equation} and it remains only to show that $x_j$ is independent of $j$. Otherwise we may assume (after relabeling the generators) that $x_1 \neq x_2$. We write $x_1$ and $x_2$ as reduced expressions $x_1 = tu = t_1\cdots t_l u_1\cdots u_m$, $x_2=tv = t_1\cdots t_lv_1\cdots v_{m'}$ with $t_j, u_j, v_j \in S\cup S^{-1}$, $m+m' \geq 1$ and $u_1 \neq v_1$. The latter implies that the expressions $u^{-1}v = u_m^{-1}\cdots u_1^{-1}v_1\cdots v_{m'}$ and $v^{-1}u$ are reduced. In particular, if we expand $f(a_1a_2)$ as \[ f(a_1a_2) = f(a_1)f(a_2) = x_1a_1x_1^{-1} x_2a_2x_2^{-1} = tua_1u^{-1}va_2v^{-1}t^{-1}, \] then the latter is a reduced expression. Let us abbreviate by $r := ua_1u^{-1}va_2v^{-1}$ the middle part of this expression. Then $r$ is reduced, and in fact cyclically reduced (since $v^{-1}u$ is reduced).
We conclude in particular that $|r| \geq 4$. Moreover, Corollary \ref{phistarww} yields \begin{eqnarray*} \widehat{\phi^*_{r}}(a_1a_2) &=& \widehat{\phi^*_{r}}(f(a_1a_2)) = \widehat{\phi^*_{r}}(trt^{-1})\\ &=& \widehat{\phi^*_{r}}(r) = 1. \end{eqnarray*} By the same argument as above this implies that $r$ is a subword of $(a_1a_2)^N$ for some large $N$. Since we also require $\widehat{\phi^*_{r}}(a_1a_2) = 1$ the only possible choices are $r \in \{a_1, a_2, a_1a_2, a_2a_1\}$. This implies $|r| \leq 2$, which is a contradiction. \end{proof} Since ${\rm Out}(F_n)$ is non-amenable we conclude: \begin{corollary}\label{CorNonAmenability} The group ${\rm QOut}(F_n)$ is non-amenable. \end{corollary} \subsection{A criterion for quasiendomorphisms of free groups} Our further investigations of ${\rm QOut}(F_n)$ will be based on explicit constructions. The following criterion is a useful tool for proving that a map is a quasimorphism: \begin{proposition}\label{thm:qmorphism:criterion:for:reduced:words} Let $G$ be an arbitrary group and~$h \colon F_n \rightarrow G$ an arbitrary map. If there is a finite set~$E\subseteq G$ such that \begin{enumerate} \item \label{lem:prop:product} for all words $w_1$ and~$w_2$, for which~$w_1w_2$ is a reduced word, we have $h(w_1 w_2) \in E h(w_1) E h(w_2) E$ and \item \label{lem:prop:inverses} $\forall g\in F_n\colon \, h(g^{-1}) \in E h(g)^{-1} E$, \end{enumerate} then~$h$ is a quasimorphism. \end{proposition} \begin{proof} Without loss of generality we assume that~$E$ is closed under taking inverses. Given $g, g' \in F_n$ we can always find reduced words $w, w', x$ so that $g \equiv wx$, $g' \equiv x^{-1}w'$ and the words $wx$, $x^{-1}w'$ and $ww'$ are all reduced. Now assume that $h$ satisfies the assumptions of the proposition. Then~$h(g g')= h(w w') \in E h(w) E h(w') E$. Furthermore~$h(g) \in E h(w)E h(x)E$ which implies that~$h(w) \in E h(g) Eh(x)^{-1}E $. Similarly we conclude~$h(w') \in E h(x)E h(g') E $.
Assembling the three statements we obtain~$h(g g') \in E^2 h(g)E h(x)^{-1} E^3 h(x) E h(g') E^2$. If we define $\widetilde{E} := (E\cup\{e\})^3$ then this implies $h(gg') \in \widetilde{E}h(g) \widetilde{E} h(x)^{-1} \widetilde{E} h(x) \widetilde{E} h(g') \widetilde{E}$, so the proposition follows from Lemma \ref{lem:qmorphism:criterion}. \end{proof} As a first illustration of our criterion, we describe a class of quasiendomorphisms of free groups which change a word by local manipulation. Given $k >0$, let us refer to a map $f \colon (S\cup S^{-1})^k_\text{red} \rightarrow (S\cup S^{-1})^{\ast}$ from the reduced words over~$S\cup S^{-1}$ of length~$k$ to words of arbitrary length as a \emph{local transformation of length $k$} provided $f(u^{-1}) = f(u)^{-1}$ for every reduced word~$u$ of length~$k$. Given such a map we define~$\phi_f \colon F \rightarrow F$ to be the map that maps a reduced word~$w= w_1\dots w_n$ in~$F$ to the word~$\phi_f(w) = f(w_1\dots w_k) f(w_2\dots w_{k+1}) \dots f(w_{n+1-k}\dots w_n)$ if~$n\geq k$ and to the empty word otherwise. \begin{lemma} For every local transformation $f$ the map~$\phi_f$ is a quasimorphism. \end{lemma} \begin{proof} We are going to show that the map~$\phi_f$ satisfies the assumptions of Proposition~\ref{thm:qmorphism:criterion:for:reduced:words}. Concerning the first condition, let~$E = (\{e\} \cup f((S\cup S^{-1})^k_\text{red}))^{k}$. Let~$w = w_1\dots w_t$ and~$w' = w_{t+1} \dots w_n$ be words in the free group~$F$ such that~$ww'$ is reduced.
Then \[\phi_f(ww') = f(w_1\dots w_k) f(w_2\dots w_{k+1}) \dots f(w_{n+1-k}\dots w_n),\] which, by definition of~$E$, is a word that is contained in~$\phi_f(w)E\phi_f(w') =$ \[f(w_1\dots w_k) \dots f(w_{t+1-k}\dots w_t) \, E \, f(w_{t+1}\dots w_{t+k}) \dots f(w_{n+1-k}\dots w_n).\] Concerning the second property, observe that for a reduced word~$w$ we have~\[\phi_f(w) = f(w_1\dots w_k) f(w_2\dots w_{k+1}) \dots f(w_{n+1-k}\dots w_n),\] while~\begin{eqnarray*} \phi_f(w^{-1}) &=& f(w_n^{-1}\dots w_{n+1-k}^{-1}) f(w_{n-1}^{-1}\dots w_{n-k}^{-1}) \dots f(w_{k}^{-1}\dots w_1^{-1})\\ &=& f(w_{n+1-k}\dots w_n)^{-1} \dots f(w_1\dots w_k)^{-1}.\end{eqnarray*} This finishes the proof of the lemma. \end{proof} \begin{definition} Given a local transformation $f$ the quasimorphism $\phi_f$ is called the \emph{local quasimorphism} modeled according to $f$. \end{definition} Informally, a local quasimorphism maps a reduced word by manipulating every letter in a way that only takes into account the~$k-1$ letters that follow. One could try to construct a larger class of quasimorphisms by also allowing for a finite look-back, effectively taking into account the previous~$k'$ letters for some fixed~$k'$. However, up to equivalence this will not produce any new examples. Indeed, considering a look-back and a look-ahead of~$k$ we obtain a map given by~$\phi_f(w) = f(w_1\dots w_{k+1} \dots w_{2k+1}) \dots f(w_{i-k}\dots w_{i}\dots w_{i+k}) \dots f(w_{n-2k}\dots w_{n-k}\dots w_n)$, and this yields a function that can also be interpreted as a function with a look-ahead of~$2k$. \subsection{Torsion in ${\rm QOut}(F_n)$} Our next goal is to study the torsion of ${\rm QOut}(F_n)$. Recall that the wobbling group $W(\mathbb{Z})$ is the group of permutations of~$\mathbb{Z}$ for which the distance of every integer to its image is bounded. Clearly every finite group is a subgroup of the wobbling group.
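For instance, the permutation~$\omega$ of~$\mathbb{Z}$ defined by \[\omega(2i) = 2i+1, \quad \omega(2i+1) = 2i \qquad (i \in \mathbb{Z})\] displaces every integer by exactly~$1$ and is therefore an involution in~$W(\mathbb{Z})$. More generally, every permutation of~$\mathbb{Z}$ with finite support is wobbling, and letting a finite group act faithfully by permutations of a finite subset of~$\mathbb{Z}$ realizes the embedding of finite groups just mentioned.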
We are going to establish the following result: \begin{theorem}\label{thm:wobbling:embedding:and:consequences} For every $n \geq 2$ the wobbling group $W(\mathbb{Z})$ embeds into ${\rm QOut}(F_n)$. In particular, for all such $n$ the group ${\rm QOut}(F_n)$ is uncountable and contains torsion elements of any given order. \end{theorem} Note that, in stark contrast to the theorem, the maximum order of torsion elements in~${\rm Out}(F)$ is bounded~\cite{MR1655470}. In order to prove the theorem we are going to describe another class of quasi-endomorphisms of $F_n$, which do not just modify words locally. For the purposes of the proof we will use the following embedding of $W(\mathbb Z)$. We can embed $\mathbb{Z}$ into~$\mathbb{N}$ by mapping~$i$ to~$2i+2$ for~$i\geq 0$ and to~$-2i-1$ for~$i<0$. This induces an embedding of $W(\mathbb{Z})$ into the monoid \[ B(\mathbb N_0) := \{\sigma: \mathbb N_0 \rightarrow \mathbb N_0\,|\, \sigma(0) = 0, \max\{|\sigma(k)-k|\mid k\in \mathbb{N}\}<\infty\}. \] We deduce that it suffices to construct an injective monoid homomorphism from $B(\mathbb N_0)$ to~$QQ(F_n, F_n)$. Thus let $\sigma \in B(\mathbb N_0)$ and define a map $\pi_\sigma: F_n \rightarrow F_n$ as follows: Given any $g \in F_n$ we can represent $g$ uniquely as $w_1a_n^{i_1}w_2 \cdots w_{l-1}a_n^{i_{l-1}}w_l$ with $w_j \in F_{n-1}$, $w_2, \dots, w_{l-1} \neq \varepsilon$ and $i_j \in \mathbb{Z} \setminus\{0\}$. We then define \[ \pi_\sigma(g) = w_1a_n^{i_1'}w_2 \cdots w_{l-1}a_n^{i_{l-1}'}w_l, \] where \[i'_k = \begin{cases} \sigma(i_k) &\text{if~$i_k >0$,} \\ -\sigma(-i_k) &\text{if~$i_k < 0$} .\end{cases}\] Informally, we replace every occurrence of $a_n^i$ or $a_n^{-i}$ between two powers of other letters by $a_n^{\sigma(i)}$, respectively~$a_n^{-\sigma(i)}$.
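To illustrate the construction, let~$\sigma \in B(\mathbb N_0)$ be the map which exchanges~$1$ and~$2$ and fixes all other values. Then for~$n = 2$ we have, for instance, \[\pi_\sigma(a_1a_2a_1a_2^2a_1a_2^3) = a_1a_2^2a_1a_2a_1a_2^3.\] Since~$\sigma \circ \sigma = {\rm id}$, the quasimorphism~$\pi_\sigma$ represents an element of order at most~$2$ in~${\rm QOut}(F_2)$; that its class is in fact non-trivial follows from the injectivity argument below.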
Applying Proposition~\ref{thm:qmorphism:criterion:for:reduced:words} with \[ E := \big\{a_n^i\mid |i| \leq \max\{|\sigma(k)-k|\mid k\in \mathbb{N}\}\big\} \] shows that~$\pi_\sigma$ is a quasimorphism for every $\sigma \in B(\mathbb N_0)$, and evidently the map $\sigma\mapsto \pi_\sigma$ is a monoid homomorphism. Note that this quasimorphism is equivalent to a local quasimorphism only if $\sigma$ has finite support. In order to establish the embedding $W(\mathbb{Z}) \hookrightarrow {\rm QOut}(F_n)$ it remains only to show that the map $B(\mathbb N_0) \rightarrow {\rm QOut}(F_n)$ given by $\sigma \mapsto \pi_\sigma$ is injective. \begin{proof}[Proof of Theorem~\ref{thm:wobbling:embedding:and:consequences}] Let $\sigma, \sigma' \in B(\mathbb N_0)$ be distinct. We claim that the functions~$\pi_\sigma$ and $\pi_{\sigma'}$ are not equivalent. Without loss of generality suppose that~$\sigma(i) = j> \sigma'(i)$ for some~$i$. Let~$\varphi_{a_n^j}$ be the counting quasimorphism that counts the number of occurrences of~$a_n^j$ and denote $w := a_1a_n^ia_1$. Then \[ \pi_\sigma^*\varphi_{a_n^j}(w^k) = \varphi_{a_n^j}((a_1a_n^ja_1)^k) = k, \] whereas $\pi_{\sigma'}^*\varphi_{a_n^j}(w^k) = 0$. Thus $\pi_\sigma^*\varphi_{a_n^j} \not \sim \pi_{\sigma'}^*\varphi_{a_n^j}$ and hence $\pi_\sigma \not \sim \pi_{\sigma'}$, finishing the proof. \end{proof} \begin{remark} Denote by ${\rm QOut}_{(k)}(F_n)$ the characteristic subgroup of ${\rm QOut}(F_n)$ generated by all elements of order at most $k$ and by ${\rm QOut}_{({\rm fin})}(F_n)$ the characteristic subgroup generated by all elements of finite order. By Theorem \ref{thm:wobbling:embedding:and:consequences} these groups are non-trivial and we have a chain \[{\rm QOut}_{(2)}(F_n) \lhd {\rm QOut}_{(3)}(F_n) \lhd \dots\lhd {\rm QOut}_{({\rm fin})}(F_n)\lhd {\rm QOut}(F_n)\] of characteristic subgroups. It would be interesting to know whether any of these inclusions are proper.
\end{remark} \subsection{Free groups are not quasi-Hopfian} We now provide a first application of the two classes of quasioutomorphisms of free groups constructed so far. Recall that every finitely generated free group $G = F_n$ and every finitely generated free abelian group $G = \mathbb{Z}^n$ is \emph{Hopfian} in the sense that every epimorphism $G \rightarrow G$ is an isomorphism. \begin{definition} A quasi-separated group $G$ is called \emph{quasi-Hopfian} if there does not exist a non-trivial normal subgroup $N \lhd G$ and a surjective quasimorphism $f: G \rightarrow G$ with $f(N) = \{e\}$. \end{definition} By definition, every quasi-separated quasi-Hopfian group is Hopfian. Recall that by Proposition~\ref{AmenableQ} an amenable group $G$ is quasi-separated if and only if it is torsion-free abelian. If such a group has finite rank, then it is not only Hopfian, but even quasi-Hopfian: \begin{proposition}\label{prop:torsion:free:abelian:qhopf} If $G$ is a torsion-free abelian group of finite rank, then $G$ is quasi-Hopfian. \end{proposition} \begin{proof} Assume that for some $n\geq 1$ we could find a surjective quasimorphism $f: \mathbb{Z}^n \rightarrow \mathbb{Z}^n$ which vanishes on a non-trivial normal subgroup $N \lhd \mathbb{Z}^n$. Then we obtain a surjective quasimorphism $g: \mathbb{Z}^k \oplus T \rightarrow \mathbb{Z}^n$ for some $k < n$ and a finite group $T$. If we denote by $\iota: \mathbb{Z}^n \rightarrow \mathbb{R}^n$ the canonical inclusion, then $\iota \circ g: \mathbb{Z}^k \oplus T \rightarrow \mathbb{R}^n$ is a quasimorphism, and by the proof of Lemma \ref{HomAmenable} it is at bounded distance from a homomorphism $g': \mathbb{Z}^k \oplus T \rightarrow \mathbb{R}^n$. It follows that there exists a $k$-dimensional subspace $V \subseteq \mathbb{R}^n$ such that~$g'(\mathbb{Z}^k \oplus T) \subseteq V$ and hence every $x\in \iota(f(\mathbb{Z}^n))= \iota(g(\mathbb{Z}^k \oplus T))$ is at bounded distance from $V$. But this contradicts the surjectivity of~$f$.
\end{proof} By contrast, we can show that free groups of rank at least~$4$ are not quasi-Hopfian, which answers a question asked to us by Misha Kapovich. To show this, we first construct a surjective quasimorphism from~$F_{n-1}$ to~$F_{n}$ for~$n\geq 4$. \begin{lemma} For~$n\geq 4$ there exists a surjective quasimorphism from~$F_{n-1}$ to~$F_n$. \end{lemma} \begin{proof} Given~$n\geq 4$ we denote by~$S = \{a_1, \dots, a_n\}$ a basis of~$F_n$. Let~$S' = \{a_2,\dots,a_{n}\}$. We start from a map $f \colon (S'\cup S'^{-1})^2_\text{red} \rightarrow (S\cup S^{-1})^{\ast}$ defined as follows: We set $f(a_2a_3) = a_1a_3^{-1}$ and $f(a_3^{-1}a_2^{-1}) = a_1^{-1}a_2$. For all other reduced words $s_1s_2$ of length $2$ we define $f(s_1s_2) = s_1$. The map $f$ is not quite a local transformation as defined above since the condition $f(w^{-1}) = f(w)^{-1}$ is violated. Nevertheless, we can consider the map $\phi_f: F_{n-1} \rightarrow F_n$ given by \[\phi_f(s_1 \cdots s_l) = f(s_1s_2)f(s_2s_3) \cdots f(s_{l-1}s_l).\] It turns out to be convenient to modify this slightly and to define $\phi: F_{n-1} \rightarrow F_n$ by $\phi(s_1\cdots s_l) = \phi_f(s_1\cdots s_l)s_l$. Then $\phi$ has the effect of replacing all occurrences of $a_2a_3$ (respectively $(a_2a_3)^{-1}$) by $a_1$ (respectively $a_1^{-1}$). It is immediate from this description that $\phi$ satisfies the conditions of Proposition \ref{thm:qmorphism:criterion:for:reduced:words}, hence defines a quasimorphism. Let~$\sigma$ be the map in~$B(\mathbb N_0)$ given by~$\sigma(0) = 0$ and~$\sigma(k)= k-1$ for~$k>0$. Let~$\pi_\sigma$ be the quasimorphism induced by this map, as defined in the previous section (i.e.,~$\pi_\sigma$ replaces each power~$a_n^{\pm i}$ appearing in the canonical form of a word by~$a_n^{\pm(i-1)}$). Then~$\pi_\sigma \circ \phi$ is a quasimorphism which we claim to be surjective.
Indeed, we can write every reduced word~$w$ over~$S\cup S^{-1}$ in the form~$a_{n}^{t_0}s_1a_{n}^{t_1}\cdots s_la_{n}^{t_l}$ with~$t_j \in \mathbb{Z}$ for all~$j\in \{0,\ldots, l\}$ and~$s_j \in (S\cup S^{-1})\setminus\{a_n,a_n^{-1}\}$ for all~$1 \leq j\leq l$. We consider a word~$w'$ of the form~$a_{n}^{t'_0}s_1a_{n}^{t'_1}\cdots s_la_{n}^{t'_l}$ with~$t_j' = t_j+1$ if~$t_j \geq 0$ and~$t_j' = t_j-1$ if~$t_j <0$ for all~$j\in \{0,\ldots, l\}$. Note that~$w'$ is a preimage of~$w$ under~$\pi_\sigma$. Now we replace in~$w'$ all occurrences of~$a_1$ by~$a_2a_3$ and all occurrences of~$a_1^{-1}$ by~$(a_2a_3)^{-1}$ obtaining a word~$w''$, which is reduced since~$n\geq 4$. Note that all exponents~$t'_j$ are non-zero, so no two letters from~$S'\cup S'^{-1}$ are adjacent in~$w'$, and hence the only occurrences of~$(a_2a_3)^{\pm 1}$ in~$w''$ are those created by the replacement. Then~$w''$ is a preimage of~$w'$ under~$\phi$ and thus~$w''$ is a preimage of~$w$ under~$\pi_\sigma \circ \phi$. \end{proof} We remark that no surjective quasimorphism exists from~$F_1$ to~$F_2$ since any such quasimorphism would yield a surjective quasimorphism from~$F_1= \mathbb{Z}$ to~$\mathbb{Z}^2$, the abelianization of~$F_2$. Such a map cannot exist by the same arguments as those used in the proof of Proposition~\ref{prop:torsion:free:abelian:qhopf}. \begin{theorem} For $n \geq 4$ the group $F_n$ is not quasi-Hopfian. \end{theorem} \begin{proof} Given~$n\geq 4$ we denote by~$S = \{a_1, \dots, a_n\}$ a basis of~$F_n$. The homomorphism~$\psi: F_n \rightarrow F_{n-1}$ that maps~$a_1$ to~$e$ and fixes all other generators maps the normal subgroup generated by~$a_1$ to~$\{e\}$. Let~$\tau$ be a quasimorphism that maps~$F_{n-1}$ with generators~$\{a_2, \dots, a_{n}\}$ surjectively to~$F_n$ mapping~$e$ to~$e$. Such a quasimorphism exists by the previous lemma. Consider the quasimorphism~$\tau \circ \psi$. This quasimorphism is surjective and maps the normal subgroup generated by~$a_1$ to~$\{e\}$, showing that~$F_n$ is not quasi-Hopfian.
\end{proof} \subsection{Transitivity properties of ${\rm QOut}(F_n)$}\label{SecWordExchange} In order to study transitivity properties of ${\rm QOut}(F_n)$ we are going to introduce a third class of quasioutomorphisms of $F_n$, whose effect on reduced words is given by replacing certain subwords. \begin{definition} Given a reduced word~$w$ over $S \cup S^{-1}$, a \emph{decomposition of~$w$} is a sequence of reduced words~$(u_1,\dots, u_t)$ such that the concatenation $u_1 \dots u_t$ is reduced and coincides with $w$. Given a set~$W = \{w_1,\dots,w_k\}$ of independent reduced words, the \emph{number of occurrences of words from~$W$} in the decomposition~$(u_1,\dots,u_t)$ is the number of integers~$i\in \{1,\dots,t\}$ for which~$u_i\in W \cup W^{-1}$. A decomposition $(u_1,\dots,u_t)$ is called \emph{$W$-maximal} if this number is maximal. A decomposition~$(u_1,\dots, u_{t+1})$ is a \emph{simple refinement} of~$(u'_1,\dots,u'_t)$ if there is an index~$s$ such that~$u_i = u'_i$ for~$i< s$,~$u_su_{s+1} = u'_s$ and~$u_{i+1} = u'_{i}$ for~$i>s$. A decomposition is a \emph{refinement} of another decomposition if it is obtained by repeated simple refinement steps. We remind the reader that the notion of independent words was defined in Definition \ref{DefIndependent}. \end{definition} \begin{lemma} Given a set of independent words~$W = \{w_1,\dots,w_k\}$ and a word~$w$, there is a unique $W$-maximal decomposition $(u_1, \dots, u_t)$ of~$w$ that is minimal with respect to refinement among all $W$-maximal decompositions. \end{lemma} \begin{proof} The existence of a minimal $W$-maximal decomposition is obvious. Let~$u = (u_1,\dots,u_t)$ and~$u' = (u'_1,\dots,u'_{t'})$ be two $W$-maximal decompositions and assume that both are minimal with respect to refinement among such decompositions. By induction it suffices to show that~$u_1 = u'_1$. Suppose otherwise. Since the words in~$W$ are independent, this implies that~$u_1$ is not in~$W \cup W^{-1}$ or~$u_2$ is not in~$W \cup W^{-1}$.
Without loss of generality we assume the former. Since~$u$ is minimal with respect to refinement,~$u_2$ is in~$W \cup W^{-1}$. If~$u'_1 \in W \cup W^{-1}$ then~$u'_1$ is shorter than~$u_1$ and thus~$u_1$ can be refined into two words, one of which is in~$W \cup W^{-1}$, increasing the number of occurrences of words from~$W$ in~$u$ and yielding a contradiction. If~$u'_1$ is not in~$W \cup W^{-1}$ we can without loss of generality assume that~$u'_1$ is shorter than~$u_1$. This implies that~$u'_2$ is in~$W \cup W^{-1}$ and, due to independence, the length of~$u'_1u'_2$ is at most the length of~$u_1$. Again, we can split~$u_1$ into at most $3$ words, increasing the number of occurrences of words from~$W$ in~$u$ and yielding a contradiction. \end{proof} In the sequel we refer to this unique decomposition $(u_1, \dots, u_t)$ as the \emph{$W$-decomposition} of $w$. \begin{definition} Let $W := \{w_1, w_2\}$ be a set consisting of two independent words which share the same initial letter and also share the same final letter. Given a reduced word $w \in F_n$ with $W$-decomposition $(u_1, \dots, u_t)$, define a new word $w'$ as $w' := u_1'\cdots u_t'$, where \[u'_i := \begin{cases} w_2 &\mbox{if } u_i = w_1 \\ w_1 &\mbox{if } u_i = w_2 \\ {w_2}^{-1} &\mbox{if } u_i = {w_1}^{-1} \\ {w_1}^{-1} &\mbox{if } u_i = {w_2}^{-1} \\ u_i &\mbox{otherwise}.\end{cases}\] Then the map $f_{w_1, w_2}: F_n \rightarrow F_n$ given by $f_{w_1, w_2}(w) = w'$ is called the \emph{word replacement quasimorphism} associated with the pair $\{w_1, w_2\}$. \end{definition} Note that it follows from Proposition \ref{thm:qmorphism:criterion:for:reduced:words} that $f_{w_1, w_2}$ is indeed a quasimorphism, since we can choose $E$ to be the set of all words of length at most the maximum of the lengths of $w_1$ and $w_2$. Informally, $f_{w_1, w_2}$ switches occurrences of $w_1^{\pm 1}$ with occurrences of $w_2^{\pm 1}$.
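For instance, for~$n \geq 3$ the words~$w_1 = a_1a_2$ and~$w_2 = a_1a_3a_2$ share the same initial and final letters, and one checks easily that the set~$\{w_1, w_2\}$ is independent. The associated word replacement quasimorphism then satisfies, for example, \[f_{w_1, w_2}(a_1a_2\,a_3\,a_1a_3a_2) = a_1a_3a_2\,a_3\,a_1a_2,\] since the~$W$-decomposition of the word on the left-hand side is~$(a_1a_2,\, a_3,\, a_1a_3a_2)$.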
Since the words that are replaced start and end with the same letters, word replacement quasimorphisms map reduced words to reduced words. Moreover $f_{w_1, w_2}$ maps $\{w_1, w_2\}$-decompositions to $\{w_1, w_2\}$-decompositions. It follows that $f_{w_1, w_2}^2 = {\rm Id}$, whence \begin{equation}\label{WRQ} f_{w_1, w_2} \in {\rm QOut}_{(2)}(F_n). \end{equation} Note that in order for $\{w_1, w_2\}$ to be independent it is necessary that $w_1$ and $w_2$ are self-independent. If their length is at least $2$, then this implies that their respective initial letters are different from their final letters. We now study the action of word replacement quasimorphisms on counting quasimorphisms. By construction of $f_{w_1, w_2}$ the number of occurrences of the word~$w_2$ in a word~$w$ is no smaller than the number of occurrences of~$w_1$ in $f_{w_1, w_2}(w)$. Since $f_{w_1, w_2}$ is an involution, the same argument shows that the number of occurrences of~$w_1$ in $f_{w_1, w_2}(w)$ is no smaller than the number of occurrences of the word~$w_2$ in~$w$. This shows that \begin{equation}\label{SwapWords} f^*_{w_1, w_2}(\phi_{w_1}) = \phi_{w_2}, \quad f^*_{w_1, w_2}(\phi_{w_2}) = \phi_{w_1}.\end{equation} We record the following consequence of \eqref{WRQ} and \eqref{SwapWords} for later use: \begin{lemma}\label{ExchangeCounting} Let $w_1$ and $w_2$ be self-independent words which share the same initial letter and also share the same final letter. If the set $\{w_1, w_2\}$ is independent, then $\widehat{\phi_{w_1}}$ and $\widehat{\phi_{w_2}}$ are contained in the same ${\rm QOut}_{(2)}(F_n)$-orbit in $\mathcal H(F_n)$. \end{lemma} The lemma motivates the study of pairs of independent words in $F_n$ with respect to our fixed generating set $S$. \begin{lemma}\label{lem:arbitrary:to:aibl} Let $w$ be a self-independent word of length $\geq 2$ with initial letter $a \in S\cup S^{-1}$ and final letter~$b \in S\cup S^{-1}$.
There exist positive integers~$i,j$ such that~$\{w,a^i b^j\}$ is a set of independent words. \end{lemma} \begin{proof} If~$w$ is in~$a^{\ast} b^{\ast}$, there is nothing to show. Now suppose otherwise. Let~$\ell$ be the length of~$w$. We claim that~$\{w,w'\}$, with~$w'= a^{\ell} b^{\ell}$, is a set of independent words. If~$w$ contains a letter that is not in~$\{a,b,a^{-1},b^{-1}\}$ this is obvious, so we suppose otherwise. Since~$w$ is not in~$a^{\ast} b^{\ast}$ there are reduced words~$u,v,x$ such that~$w = aub^{\varepsilon} va^{\varepsilon'}x b$ with~$\varepsilon,\varepsilon' \in \{-1,1\}$. Since~$w'$ does not contain inverses of~$a$ or~$b$ and is longer than~$w$, the words~${w'}^{-1}$ and~$w$ do not overlap. We argue that no prefix of~$w$ is a postfix of~$w'$. The fact that no postfix of~$w$ is a prefix of~$w'$ follows by symmetry. If a postfix of~$w'$ were a prefix of~$w$ it would have length at most~$\ell$. Thus it would be of the form~$b^{i'}$ for some~$i'\leq \ell$. However, since~$w$ starts with~$a$, no such word~$b^{i'}$ can be a prefix of~$w$. \end{proof} \begin{lemma}\label{lem:abibia:to:aibl} For every pair of distinct elements $a, b \in S$ and all positive integers~$i,j>0$ the set~$\{ab^{-1}a^{-1}b,a^i b^j\}$ is a set of independent words. \end{lemma} \begin{proof} By arguments similar to those in the previous lemma, it suffices to argue that no prefix of~$ab^{-1}a^{-1}b$ is a postfix of~$a^i b^j$. Suppose~$p$ is a prefix of~$ab^{-1}a^{-1}b$. If~$p$ has length 1 then~$p = a$ and~$p$ is not a postfix of~$a^i b^j$. If~$p$ has length greater than $1$ then~$p$ contains~$b^{-1}$ and is thus not a postfix of~$a^i b^j$ either.
\end{proof} \begin{corollary}\label{Cor2orbits} Let $a,b \in S$ be two letters with $a \not \in \{b, b^{-1}\}$ and let \[\Phi(F_n, S) := \{\widehat{\phi_{w}}\,|\, w \text{ self-independent, reduced word over}\,S\} \subseteq \mathcal H^*(F_n) \subseteq \mathcal H(F_n).\] Then every ${\rm QOut}_{(2)}(F_n)$-orbit that intersects $\Phi(F_n, S)$ contains either $\widehat{\phi_{a}}$ or $\widehat{\phi_{ab}}$. In particular, the number of such orbits is at most $2$. \end{corollary} \begin{proof} Let $w$ be a self-independent reduced word over $S$ of length $\geq 2$ with initial letter $a$ and final letter $b\neq a$. Then by combining Lemma \ref{ExchangeCounting}, Lemma \ref{lem:arbitrary:to:aibl} and Lemma \ref{lem:abibia:to:aibl} the quasimorphism $\widehat{\phi_w}$ is in the same ${\rm QOut}_{(2)}(F_n)$-orbit as $\widehat{\phi_{ab}}$. We deduce that every orbit admits either a representative of the form $\widehat{\phi_{ab}}$ for letters $a,b \in S$ with $a \not \in \{b, b^{-1}\}$ or a representative of the form $\widehat{\phi_a}$ for some $a \in S$. In order to show that all these quasimorphisms are in the same ${\rm QOut}_{(2)}(F_n)$-orbit as either $\widehat{\phi_{ab}}$ or $\widehat{\phi_a}$ for some fixed pair $\{a,b\} \subseteq S$ we consider the subgroup $\Gamma < {\rm Out}(F_n)$ generated by those Nielsen transformations which we denoted $P_1, P_2, I$ in Section \ref{SecNielsen}. Since $\Gamma$ is generated by involutions we have $\Gamma < {\rm QOut}_{(2)}(F_n)$. Now the corollary follows from the fact that $\Gamma$ acts transitively on $S$ and also on the set \[(S \cup S^{-1})^2 \setminus \{(s, r)\,|\,s \in \{r, r^{-1}\}\},\] and that $A^*\widehat{\phi_w} = \widehat{\phi_{A^{-1}w}}$ for $A \in \{P_1, P_2, I\}$ and hence for all $A \in \Gamma$. \end{proof} We do not know whether there is an element of ${\rm QOut}_{(2)}(F_n)$ or even ${\rm QOut}(F_n)$ which maps the orbit of $\phi_a$ to the orbit of $\phi_{ab}$ for $a, b$ as in the corollary.
However, we can show that the $\phi_{ab}$-orbit is contained in the orbit \emph{closure} of the span of the $\phi_{a}$-orbit. This is based on the following observation. \begin{lemma}\label{lem:behviour:of:a:under:replacement} If~$\{w_1, w_2\}$ is a set of independent words that start with the same letter and end with the same letter, then \[f_{w_1, w_2}^*\phi_{a} = \phi_{a} + (\phi_a(w_2) -\phi_a(w_1)) (\phi_{w_1}- \phi_{w_2}).\] \end{lemma} \begin{proof} Comparing the number of occurrences of $a$ in $f_{w_1, w_2}(w)$ with occurrences in $w$ we see that for every occurrence of $w_1$ in $w$, $\#_a(w_1)$ copies of $a$ get removed, whereas $\#_a(w_2)$ copies of $a$ get added. Similarly, for every occurrence of $w_2$ in $w$, $\#_a(w_2)$ copies of $a$ get removed, whereas $\#_a(w_1)$ copies of $a$ get added. Combining this with a similar count for $a^{-1}$ the lemma follows. \end{proof} \begin{corollary}\label{OrbitClosure} Let $a,b \in S$ with $a \not \in \{b, b^{-1}\}$. Then \[ \widehat{\phi_{ab}}\in \overline{{\rm span}({\rm QOut}_{(2)}(F_n).\widehat{\phi_a})}.\] \end{corollary} \begin{proof} We consider the words $w = ab^{-1}a^{-1}b$ and $w_k = ab^{-k}aba^{-1}b$ with~$k\geq 1$. Then for every $k\geq 1$ the set $W_k := \{w, w_k\}$ is a set of independent words with the same initial/final letters, which hence gives rise to a word replacement quasimorphism $f_{w_k, w}$. Applying Lemma \ref{lem:behviour:of:a:under:replacement} and passing to homogenizations yields \[ f_{w_k, w}^*\widehat{\phi_{a}} = \widehat{\phi_{a}} + \widehat{\phi_{ab^{-1}a^{-1}b}} - \widehat{\phi_{ab^{-k}aba^{-1}b}}. \] Note that by Lemma \ref{lem:abibia:to:aibl} the words $ab^{-1}a^{-1}b$ and $ab$ are independent, hence by Lemma \ref{ExchangeCounting} there exists $g \in {\rm QOut}_{(2)}(F_n)$ such that $g^*\widehat{\phi_{ab^{-1}a^{-1}b}} = \widehat{\phi_{ab}}$. Let us define $g_k := f_{w_k, w}g \in {\rm QOut}_{(2)}(F_n)$.
Then \[ \widehat{{\phi}_{ab}} = g^*\widehat{\phi_{ab^{-1}a^{-1}b}} = g^*(f_{w_k, w}^*\widehat{\phi_{a}} - \widehat{\phi_{a}}+ \widehat{\phi_{ab^{-k}aba^{-1}b}}) = g_k^*\widehat{\phi_a} - g^*\widehat{\phi_a} + g^*\widehat{\phi_{ab^{-k}aba^{-1}b}}. \] Now clearly $\widehat{\phi_{ab^{-k}aba^{-1}b}} \rightarrow 0$ as $k \rightarrow \infty$ and hence also $g^*\widehat{\phi_{ab^{-k}aba^{-1}b}} \rightarrow g^*0 = 0$ by continuity of the action. We deduce that \[ \widehat{\phi_{ab}} = \lim_{k \rightarrow \infty}(g_k^*\widehat{\phi_{a}} -g^*\widehat{\phi_a}) \in \overline{{\rm span}({\rm QOut}_{(2)}(F_n).\widehat{\phi_a})}. \] \end{proof} Combining this with Grigorchuk's theorem we finally obtain the following result, which contains Theorem \ref{IntroMain} as a special case. \begin{theorem}\label{ThmMain} For every $n \geq 1$ we have \[\mathcal H(F_n) = \overline{{\rm span}({\rm QOut}_{(2)}(F_n).{\rm Hom}(F_n,\mathbb{R}))}.\] \end{theorem} \begin{proof} From Corollary \ref{Cor2orbits} and Corollary \ref{OrbitClosure} we deduce that \[ \Phi(F_n,S) \subseteq \overline{{\rm span}({\rm QOut}_{(2)}(F_n).{\rm Hom}(F_n,\mathbb{R}))}. \] However, by Theorem \ref{Grig} we have $\overline{{\rm span}(\Phi(F_n, S))} = \mathcal H(F_n)$. \end{proof} \section{Comparison to other definitions of quasimorphisms}\label{SecComparison} The results in this article demonstrate that the definition of a quasimorphism between non-commutative groups provided by Definition \ref{DefQM} leads to a rich and substantial theory. If we take the (very classical) definition of a real-valued quasimorphism for granted, then it is the most general possible categorical definition of a quasimorphism. Here the word \emph{categorical} refers to the fact that we want the composition of two quasimorphisms to be again a quasimorphism. This seems to be a reasonable demand, and it is the only demand we make.
One may criticize that our notion of quasimorphism is too general and try to define a narrower notion of quasimorphism which is more closely modeled on the definition of a real-valued quasimorphism. In this section we discuss various such more restrictive notions of quasimorphisms and their properties. One of the most classical sources concerning quasimorphisms is Chapter 6 of Ulam's book \cite{Ulam}. Among other things, Ulam defines a map $f: G \rightarrow H$ between a group $G$ and a metric group $(H, d_H)$ to be a $\delta$-homomorphism if \[ \forall g_1, g_2 \in G:\quad d_H(f(g_1g_2), f(g_1)f(g_2))<\delta. \] If $H$ is the additive group of real numbers with the Euclidean distance, then this is precisely the definition of a quasimorphism (of defect at most $\delta$). However, Ulam's definition makes sense for arbitrary metric groups $H$. The best studied case besides the case of $\mathbb{R}$ is the one where $H$ is the unitary group of a (typically infinite-dimensional) Hilbert space, which leads to the study of Ulam stability. We refer the reader to \cite{BurgerOzawaThom} and the references therein for a recent account. If $H$ is a discrete metric group (i.e., the topology induced by $d_H$ on $H$ is the discrete topology) then $f: G\rightarrow H$ is a $\delta$-homomorphism in the sense of Ulam for some $\delta$ if and only if there exists a finite subset $E \subseteq H$ such that \begin{equation}\label{Ulam} \forall w_1,w_2 \in G: f(w_1w_2) \in f(w_1)f(w_2)E. \end{equation} Let us call a map $f: G \rightarrow H$ satisfying this condition an \emph{Ulam quasimorphism}. Ulam quasimorphisms have recently been classified in the work of Fujiwara and Kapovich \cite{FujiwaraKapovich}. They are categorical in the sense defined above, thus form a subclass of quasimorphisms in the sense of Definition \ref{DefQM}.
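A classical toy example (ours, not taken from \cite{Ulam}) may help: for a fixed real number $\alpha$, the nearest-integer map $n \mapsto \mathrm{round}(\alpha n)$ is an Ulam quasimorphism from $\mathbb{Z}$ to $\mathbb{Z}$ with $E = \{-1,0,1\}$, since the defect is a sum of three rounding errors, each of absolute value at most $1/2$.

```python
# Toy example (ours): n -> round(alpha * n) is an Ulam quasimorphism
# Z -> Z.  The defect f(m + n) - f(m) - f(n) is a sum of three rounding
# errors, each in [-1/2, 1/2], hence an integer in E = {-1, 0, 1}.

ALPHA = (1 + 5 ** 0.5) / 2      # golden ratio; any fixed real would do

def f(n):
    return round(ALPHA * n)

defects = {f(m + n) - f(m) - f(n)
           for m in range(-50, 51) for n in range(-50, 51)}
print(defects <= {-1, 0, 1})    # True: a finite defect set, as in the display above
```

The map is far from any homomorphism when $\alpha$ is irrational, yet its defect set stays finite.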
A slightly more general class of quasimorphisms is obtained by demanding only that there exists a finite subset $E \subseteq H$ such that \begin{equation}\label{weakUlam} f(w_1 w_2) \in E f(w_1) E f(w_2) E. \end{equation} Such generalized Ulam quasimorphisms were introduced in \cite[Sec. 2.6]{FujiwaraKapovich} under the name of \emph{algebraic quasihomomorphisms} ``inspired by a correspondence from Narutaka Ozawa''. Let us call two quasimorphisms $f_1, f_2: G \rightarrow H$ \emph{algebraically equivalent} if there exists a finite subset $E \subseteq H$ such that \[ \forall g \in G:\quad f_2(g) \in Ef_1(g)E. \] Then algebraic quasihomomorphisms, unlike Ulam quasimorphisms, are closed under algebraic equivalence. It seems to be an open problem whether every algebraic quasihomomorphism is algebraically equivalent to an Ulam quasimorphism. Note that if $f: G \rightarrow H$ is an algebraic quasihomomorphism and $g \in G$, then \[ f(e) = f(gg^{-1}) = f(g^{-1}g) \in Ef(g)Ef(g^{-1})E \cap Ef(g^{-1})Ef(g)E. \] It follows that there exists a finite set $F$ such that $f(g^{-1}) \in Ff(g)^{-1}F$. Enlarging $E$ if necessary we may assume that in fact \begin{equation}\label{InverseCondition} f(g^{-1}) \in Ef(g)^{-1}E. \end{equation} We are now going to present a generalization of algebraic quasihomomorphisms in the context of free groups which preserves this property, but otherwise demands \eqref{weakUlam} only for reduced words: \begin{definition} Let $G$ be a group, $F$ a free group and $f: F \rightarrow G$ a map. Then $f$ is called a \emph{quasi-Ulam quasimorphism} if there exists a finite set $E \subseteq G$ such that \begin{enumerate} \item for all words $w_1$ and~$w_2$, for which~$w_1w_2$ is a reduced word, we have $f(w_1 w_2) \in E f(w_1) E f(w_2) E$ and \item $\forall g\in F\colon \, f(g^{-1}) \in E f(g)^{-1} E$. \end{enumerate} \end{definition} By the previous remark every algebraic quasihomomorphism (and thus every Ulam quasimorphism) is quasi-Ulam.
On the other hand, Proposition~\ref{thm:qmorphism:criterion:for:reduced:words} says precisely that every quasi-Ulam quasimorphism is indeed a quasimorphism in the sense of Definition \ref{DefQM}. Finally, it is easy to see that quasi-Ulam quasimorphisms are closed under composition. Requiring \eqref{weakUlam} only for reduced words looks like a minor technical modification of the definition at first sight. However, as we will show next, the consequences of this modification are quite dramatic. Let us denote by \[ {\rm QOut}_{U}(F_n) < {\rm QOut}_{qU}(F_n) < {\rm QOut}(F_n) \] the subgroups given by all equivalence classes of bijective Ulam, respectively quasi-Ulam quasimorphisms. We observe that all examples of quasioutomorphisms of free groups constructed in the present article are quasi-Ulam. (The reader can check that we used Proposition \ref{thm:qmorphism:criterion:for:reduced:words} in each of the proofs.) We thus obtain the following generalization of Theorem \ref{ThmMain} with the same proof: \begin{theorem}\label{qU} For every $n \geq 1$ denote by ${\rm QOut}_{qU}(F_n) < {\rm QOut}(F_n)$ the subgroup generated by all equivalence classes of bijective quasi-Ulam quasimorphisms. Then \[ \overline{{\rm span}({\rm QOut}_{qU}(F_n).{\rm Hom}(F_n,\mathbb{R}))} = \mathcal H(F_n).\] \end{theorem} For actual Ulam quasimorphisms the picture is entirely different. It is an immediate consequence of \cite[Theorem 3]{FujiwaraKapovich} that every bijective Ulam quasimorphism from a non-abelian free group $F$ to itself is at bounded distance from an automorphism. This implies: \begin{theorem}[Fujiwara-Kapovich] For every $n \geq 1$ denote by ${\rm QOut}_{U}(F_n) < {\rm QOut}(F_n)$ the subgroup generated by all equivalence classes of bijective Ulam quasimorphisms.
Then \[\overline{{\rm span}({\rm QOut}_{U}(F_n).{\rm Hom}(F_n,\mathbb{R}))} = {\rm Hom}(F_n, \mathbb{R}).\] \end{theorem} We do not know whether the theorem remains true if we replace Ulam quasimorphisms by algebraic quasihomomorphisms. To prove this, it would suffice to show that every algebraic quasihomomorphism is algebraically equivalent to an Ulam quasimorphism, but as mentioned before this is an open problem. In any case, we see that the class of quasi-Ulam quasimorphisms is sufficiently general to provide many interesting examples of quasioutomorphisms on free groups (which the class of Ulam quasimorphisms is not) and at the same time still admits a concrete definition in the spirit of Ulam. It thus forms a very interesting class of quasimorphisms on free groups, which deserves further study. For general hyperbolic groups it is not obvious which class of algebraically defined quasimorphisms is natural to study. The results of Fujiwara and Kapovich show also in this case that the class of Ulam quasimorphisms is too restrictive. One could define quasi-Ulam quasimorphisms on a finitely generated group $\Gamma$ by demanding that they are quasimorphisms which pull back to quasi-Ulam quasimorphisms for some (or a fixed or every) surjective homomorphism $F_n \rightarrow \Gamma$, but whether this yields an interesting theory remains to be seen. Generally speaking, it would be desirable to find more examples of quasioutomorphisms of finitely generated groups. However, this goal is beyond the scope of the present article. \bigskip \textbf{Acknowledgments.} The core of this work was carried out during a fruitful stay of the authors at the Mittag-Leffler-Institut, Djursholm in Spring 2012. The authors would like to thank the organizers of the program on Geometric and Analytic Aspects of Group Theory and the staff of the institute for the excellent working conditions.
They are also indebted to Koji Fujiwara, Misha Kapovich, Anders Karlsson, Denis Osin and Alexey Talambutsa for comments and remarks. Finally, they would like to thank Uri Bader for suggesting Definition \ref{DefQM} which is at the heart of the present work.
\section{Introduction and statement of main results}\label{Sec:Intro} Multiple zeta values are natural generalizations of the Riemann zeta values. Let $\mathbb{N}$ be the set of positive integers. For any $d\in\mathbb{N}$ and any multi-index $\mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{N}^d$ with $k_1\geqslant 2$, the multiple zeta value $\zeta(\mathbf{k})$ is defined by the following infinite series: $$\zeta(\mathbf{k})=\zeta(k_1,\ldots,k_d)=\sum\limits_{m_1>\cdots>m_d>0}\frac{1}{m_1^{k_1}\cdots m_d^{k_d}}.$$ The condition $k_1\geqslant 2$ ensures the convergence of the above infinite series, and such a multi-index $\mathbf{k}$ is called admissible. The quantities $k_1+\cdots+k_d$ and $d$ are called the weight and the depth of $\mathbf{k}$, respectively. In contrast to most work on multiple zeta values, Kumar studied the order structure and the topological properties of the set $\mathcal{Z}$ of all multiple zeta values in \cite{Senthil Kumar}. Taking the usual order and the usual topology of the set $\mathbb{R}$ of real numbers, Kumar computed the derived sets of the topological subspace $\mathcal{Z}$ of $\mathbb{R}$, and showed that the set $\mathcal{Z}$, ordered by $\geqslant$, is well-ordered with the order type $\omega^3$, where $\omega$ is the smallest infinite ordinal. In this paper, we study the topological properties of some $q$-analogues of multiple zeta values. Let $q\in\mathbb{R}$ with $0<q<1$. For any $m\in\mathbb{N}$, let $[m:q]$ denote the $q$-integer $$[m:q]=\frac{1-q^m}{1-q}=1+q+\cdots+q^{m-1}.$$ Then for any admissible multi-index $\mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{N}^d$, we define the multiple $q$-zeta value $\zeta[\mathbf{k}:q]$ by \begin{align} &\zeta[\mathbf{k}:q]=\zeta[k_1,\ldots,k_d:q]=\sum\limits_{m_1>\cdots>m_d>0}\frac{q^{m_{1}(k_{1}-1)+\cdots+m_d(k_d-1)}}{[m_{1}:q]^{k_{1}}\cdots [m_d:q]^{k_d}}. \label{Eq:qMZV} \end{align} This $q$-analogue was first studied by Bradley \cite{Bradley} and independently by Zhao \cite{Zhao1}.
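Since the series \eqref{Eq:qMZV} converges geometrically for $0<q<1$, multiple $q$-zeta values are easy to approximate numerically. The following Python sketch (our own illustration; the truncation cutoff $M$ is an ad hoc choice) sums \eqref{Eq:qMZV} over $M \geqslant m_1 > \cdots > m_d > 0$; the printed checks are consistent with the monotonicity and norm results proved below.

```python
# Sketch (ours): truncated numerical evaluation of the multiple q-zeta
# value zeta[k_1, ..., k_d : q]; M is the truncation cutoff.
from itertools import combinations
from math import pi, prod

def qint(m, q):
    """The q-integer [m : q] = (1 - q^m) / (1 - q)."""
    return (1 - q ** m) / (1 - q)

def zeta_q(k, q, M=100):
    """Sum over M >= m_1 > ... > m_d > 0; the omitted tail is O(q^M)."""
    return sum(
        q ** sum(m * (ki - 1) for m, ki in zip(ms, k))
        / prod(qint(m, q) ** ki for m, ki in zip(ms, k))
        for ms in combinations(range(M, 0, -1), len(k))
    )

vals = [zeta_q((2,), q) for q in (0.2, 0.5, 0.8)]
print(vals == sorted(vals))                # True: zeta[2:q] increases with q
print(all(v < pi ** 2 / 6 for v in vals))  # True: bounded by zeta(2)
print(zeta_q((2, 1), 0.5) < zeta_q((2,), 0.5))   # True: zeta[2:q] dominates
```

Each summand of $\zeta[2:q]$ is bounded by $1/m^2$, which explains the bound $\zeta(2)=\pi^2/6$ seen above.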
Here we introduce another $q$-analogue of multiple zeta values. Let $r\in\mathbb{N}$; then we define \begin{align} &\zeta[\mathbf{k};r:q)=\zeta[k_1,\ldots,k_d;r:q)=\sum\limits_{m_1>\cdots>m_d>m_{d+1}>0}\frac{q^{m_{1}(k_{1}-1)+\cdots+m_d(k_d-1)}}{[m_{1}:q]^{k_{1}}\cdots [m_d:q]^{k_d}m_{d+1}^{r}}. \label{Eq:qMZV-r} \end{align} Unlike multiple zeta values, multiple $q$-zeta values depend on the parameter $q$. Hence we work in the function space $\mathbf{B}(0,1)$, which is the set of bounded real-valued functions on the open interval $(0,1)$. Since the multiple $q$-zeta values we consider here belong to $\mathbf{B}(0,1)$ (see Corollary \ref{Cor:Belong-B}), we study the following two subspaces of $\mathbf{B}(0,1)$: \begin{align*} \mathcal{QZ}=&\{\zeta[\mathbf{k}:q]\mid \mathbf{k}\text{\;is admissible}\},\\ \mathcal{QZZ}=&\{\zeta[\mathbf{k};r:q)\mid \mathbf{k}\text{\;is admissible}, r\in\mathbb{N}\}. \end{align*} Let $F$ be the set of functions from $(0,1)$ to $\mathbb{R}$. We define a partial order relation $\preccurlyeq$ on $F$ as follows. Let $f,g\in F$. The function $f$ is smaller than $g$ if $f(q)<g(q)$ for all $q\in (0,1)$. We denote this by $f\prec g$. Then $f\preccurlyeq g$ if $f\prec g$ or $f=g$. We can find the maximum element of $\mathcal{QZ}$. \begin{thm}\label{Thm:Maximum-Element} For any admissible multi-index $\mathbf{k}$, we have $\zeta[\mathbf{k}:q]\preccurlyeq \zeta[2:q]$. In other words, $\zeta[2:q]$ is the maximum element of $\mathcal{QZ}$. \end{thm} For the subspace $\mathcal{QZZ}$, we only obtain an upper bound. \begin{thm}\label{Thm:Upper-Bound} For any admissible multi-index $\mathbf{k}$ and any $r\in\mathbb{N}$, we have $\zeta[\mathbf{k};r:q)\prec\zeta[2:q]$. In other words, $\zeta[2:q]$ is an upper bound of $\mathcal{QZZ}$. \end{thm} We prove Theorem \ref{Thm:Maximum-Element} and Theorem \ref{Thm:Upper-Bound} in Section \ref{Sec:Proof-Thm1-2}.
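The second analogue \eqref{Eq:qMZV-r} can be explored numerically as well. The sketch below (ours; the cutoff $M$ is an ad hoc choice) specializes to $\mathbf{k}=(2)$: it illustrates the strict bound of Theorem \ref{Thm:Upper-Bound}, the monotonicity in $r$, and the fact that as $r\to\infty$ only the terms with $m_{d+1}=1$ survive, so that $\zeta[2;r:q)$ approaches the tail $\sum_{m>1}q^m/[m:q]^2$; this limiting behaviour is what lies behind Theorem \ref{Thm:Derived-Set}.

```python
# Sketch (ours): truncated evaluation of zeta[2 ; r : q), with a numerical
# look at the bound zeta[2 ; r : q) < zeta[2 : q] and the limit r -> infinity.

def qint(m, q):
    return (1 - q ** m) / (1 - q)

def zeta_2(q, M=200):                     # zeta[2 : q], truncated
    return sum(q ** m / qint(m, q) ** 2 for m in range(1, M))

def tail_2(q, M=200):                     # same sum over m > 1 only
    return sum(q ** m / qint(m, q) ** 2 for m in range(2, M))

def zeta_2r(q, r, M=200):                 # zeta[2 ; r : q), truncated
    return sum(q ** m1 / (qint(m1, q) ** 2 * m2 ** r)
               for m1 in range(2, M) for m2 in range(1, m1))

q = 0.5
print(zeta_2r(q, 1) < zeta_2(q))                    # True: strict upper bound
print(zeta_2r(q, 5) > zeta_2r(q, 10) > tail_2(q))   # True: decreasing in r
print(abs(zeta_2r(q, 25) - tail_2(q)) < 1e-6)       # True: limit as r grows
```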
As in \cite{Senthil Kumar}, we want to compute the derived sets of the subspace $\mathcal{QZZ}$. Hence a topology on $\mathbf{B}(0,1)$ is needed. In fact, $\mathbf{B}(0,1)$ is a complete normed space with the norm given by $$\|f\|=\sup\limits_{q\in(0,1)}|f(q)|,\quad \text{for each\;} f\in\mathbf{B}(0,1).$$ In the following we consider $\mathbf{B}(0,1)$ with respect to the topology induced by the above norm. We are interested in finding the derived sequence $(\mathcal{QZZ}^{(n)})_{n\geqslant 0}$ of the subspace $\mathcal{QZZ}$ of $\mathbf{B}(0,1)$, where $\mathcal{QZZ}^{(0)}=\mathcal{QZZ}$ and for $n\geqslant 1$, $\mathcal{QZZ}^{(n)}$ is the set of accumulation points of $\mathcal{QZZ}^{(n-1)}$ in $\mathbf{B}(0,1)$. To state our results, we further need to introduce some more notation. For an admissible multi-index $\mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{N}^d$ and a nonnegative integer $n$, we set \begin{align} &\zeta[\mathbf{k}:q]_n=\zeta[k_1,\ldots,k_d:q]_{n}=\sum\limits_{m_1>\cdots>m_d>n}\frac{q^{m_{1}(k_{1}-1)+\cdots+m_d(k_d-1)}}{[m_{1}:q]^{k_{1}}\cdots [m_d:q]^{k_d}}. \label{Eq:qMZV-Tails} \end{align} Obviously, we have $\zeta[\mathbf{k}:q]_0=\zeta[\mathbf{k}:q]$. We have the following theorem, whose proof will be given in Section \ref{Sec:Proof-Thm3}. \begin{thm}\label{Thm:Derived-Set} We have $$\mathcal{QZZ}^{(1)}=\{\zeta[\mathbf{k}:q]_1\mid \mathbf{k}\text{\;is admissible}\}\cup \{0\}$$ and $\mathcal{QZZ}^{(2)}=\{0\}$. \end{thm} Throughout the paper, the notation $\{k\}^n$ stands for $\underbrace{k,\ldots,k}_{n\text{\;terms}}$. \section{Proofs of Theorem \ref{Thm:Maximum-Element} and Theorem \ref{Thm:Upper-Bound}} \label{Sec:Proof-Thm1-2} In this section, we give proofs of Theorem \ref{Thm:Maximum-Element} and Theorem \ref{Thm:Upper-Bound}. We need the following lemmas. \begin{lem}\label{Lem:OneTerm-Increasing} For any $m\in\mathbb{N}$, the function $f(q)=\frac{q^m}{[m:q]}$ is monotonically increasing on the interval $(0,1)$.
In particular, we have $$0<\frac{q^m}{[m:q]}<\frac{1}{m},\quad \text{for each\;} q\in (0,1).$$ \end{lem} \noindent {\bf Proof.} We have $$(1-q^m)^2f'(q)=q^{m-1}\left[m-(m+1)q+q^{m+1}\right].$$ Set $g(q)=m-(m+1)q+q^{m+1}$, then one gets $$g'(q)=(m+1)(q^m-1)<0.$$ Hence for any $q\in (0,1)$, we have $$g(q)>g(1)=0,$$ which implies that $f'(q)>0$. \qed \begin{lem}\label{Lem:Del-One-Big} For positive integers $d,j,r,k_1,\ldots,k_d$ with $k_1\geqslant 2$ and $j\leqslant d$, and any nonnegative integer $n$, we have \begin{align*} &\zeta[k_1,\ldots,k_{j-1},k_{j}+1,k_{j+1},\ldots,k_d:q]\prec\zeta[k_1,\ldots,k_{j},\ldots,k_d:q],\\ &\zeta[k_1,\ldots,k_{j-1},k_{j}+1,k_{j+1},\ldots,k_d:q]_n\prec\zeta[k_1,\ldots,k_{j},\ldots,k_d:q]_n \end{align*} and $$\zeta[k_1,\ldots,k_{j-1},k_{j}+1,k_{j+1},\ldots,k_d;r:q)\prec\zeta[k_1,\ldots,k_{j},\ldots,k_d;r:q).$$ We also have $$\zeta[k_1,\ldots,k_d;r+1:q)\prec\zeta[k_1,\ldots,k_d;r:q)$$ and $$\zeta[k_1,\ldots,k_d;r:q)\prec\zeta[k_1,\ldots,k_d,1:q].$$ \end{lem} \noindent {\bf Proof.} From Lemma \ref{Lem:OneTerm-Increasing}, for any $m\in\mathbb{N}$ and any $q\in (0,1)$, we have $\frac{q^m}{[m:q]}<1$. Multiplying by $\frac{q^{(k_j-1)m}}{[m:q]^{k_j}}$ on both sides, we obtain $$\frac{q^{k_jm}}{[m:q]^{k_j+1}}<\frac{q^{(k_j-1)m}}{[m:q]^{k_j}},$$ which implies the first three inequalities stated in the lemma. For $m\in\mathbb{N}$, we have $$\frac{1}{m^{r+1}}\leqslant \frac{1}{m^r},\quad \frac{1}{m^r}\leqslant \frac{1}{[m:q]^r}\leqslant \frac{1}{[m:q]},$$ (where equality holds only when $m=1$), from which the last two inequalities of the lemma follow. \qed \begin{lem}\label{Lem:Del-Depth-Big} For any nonnegative integer $d$, we have $\zeta[2,\{1\}^{d+1}:q]\prec\zeta[2,\{1\}^{d}:q]$. \end{lem} \noindent {\bf Proof.} We use the duality formula for multiple $q$-zeta values proved by Bradley in \cite{Bradley}: for any nonnegative integers $n$ and $m$, one has \begin{equation} \zeta[2+n,\{1\}^{m}:q]=\zeta[2+m,\{1\}^{n}:q].
\label{Eq:Dual-Formula} \end{equation} By \eqref{Eq:Dual-Formula}, to prove the lemma it is enough to show $\zeta[d+3:q]\prec\zeta[d+2:q]$, but this follows from Lemma \ref{Lem:Del-One-Big}. \qed Now we prove Theorem \ref{Thm:Maximum-Element} and Theorem \ref{Thm:Upper-Bound}. \noindent {\bf Proof of Theorem \ref{Thm:Maximum-Element}.} Let $\mathbf{k}=(k_1,\ldots,k_d)$. By Lemmas \ref{Lem:Del-One-Big} and \ref{Lem:Del-Depth-Big}, we have $$\zeta[\mathbf{k}:q]=\zeta[k_1,\ldots,k_d:q]\preccurlyeq \zeta[2,\{1\}^{d-1}:q]\preccurlyeq \zeta[2:q],$$ as desired. \qed \noindent {\bf Proof of Theorem \ref{Thm:Upper-Bound}.} Let $\mathbf{k}=(k_1,\ldots,k_d)$. By Lemmas \ref{Lem:Del-One-Big} and \ref{Lem:Del-Depth-Big}, we have $$\zeta[\mathbf{k};r:q)=\zeta[k_1,\ldots,k_d;r:q)\prec\zeta[2,\{1\}^{d}:q]\prec\zeta[2:q],$$ as desired. \qed We end this section with a corollary. \begin{cor}\label{Cor:Belong-B} The function spaces $\mathcal{QZ}$ and $\mathcal{QZZ}$ are subsets of $\mathbf{B}(0,1)$. \end{cor} \noindent {\bf Proof.} Since $$\lim\limits_{q\rightarrow 0}\zeta[2:q]=0,\qquad \lim\limits_{q\rightarrow 1}\zeta[2:q]=\zeta(2),$$ the function $\zeta[2:q]$ extends continuously to the closed interval $[0,1]$ and is in particular bounded. Hence $\zeta[2:q]\in \mathbf{B}(0,1)$. Now for any admissible multi-index $\mathbf{k}$, from Theorem \ref{Thm:Maximum-Element}, we have $\zeta[\mathbf{k}:q]\in \mathbf{B}(0,1)$. Then $\mathcal{QZ}$ is a subset of $\mathbf{B}(0,1)$. Similarly, from Theorem \ref{Thm:Upper-Bound}, we find $\mathcal{QZZ}$ is a subset of $\mathbf{B}(0,1)$. \qed \section{Proof of Theorem \ref{Thm:Derived-Set}}\label{Sec:Proof-Thm3} In this section, we give a proof of Theorem \ref{Thm:Derived-Set}. \subsection{Some preliminary results} We first compute the norms of some multiple $q$-zeta values. For this purpose, we require the following lemmas.
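Before turning to the lemmas, we note that the duality formula \eqref{Eq:Dual-Formula} is easy to test numerically. The following sketch (ours; truncation cutoffs are ad hoc) checks the first nontrivial instance $\zeta[3:q]=\zeta[2,1:q]$, the $q$-analogue of Euler's identity $\zeta(3)=\zeta(2,1)$, by truncated summation.

```python
# Numerical check (ours) of Bradley's duality at n = 1, m = 0:
# zeta[3 : q] = zeta[2, 1 : q].

def qint(m, q):
    return (1 - q ** m) / (1 - q)

def zeta_3(q, M=300):
    return sum(q ** (2 * m) / qint(m, q) ** 3 for m in range(1, M))

def zeta_21(q, M=300):
    return sum(q ** m1 / (qint(m1, q) ** 2 * qint(m2, q))
               for m1 in range(2, M) for m2 in range(1, m1))

print(all(abs(zeta_3(q) - zeta_21(q)) < 1e-9 for q in (0.3, 0.5, 0.7)))  # True
```

Both series converge geometrically, so the truncation error is far below the tolerance used here.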
\begin{lem}\label{Lem:Mono-Decreasing} For a fixed $q\in (0,1)$, the function $f(x)=\frac{xq^{x-1}}{1-q^x}$ is monotonically decreasing on the interval $[1,+\infty)$. \end{lem} \noindent {\bf Proof.} We have $$(1-q^x)^2f'(x)=q^{x-1}(1-q^x+x\log q).$$ Set $g(x)=1-q^x+x\log q$, then we have $$g'(x)=(1-q^x)\log q<0.$$ Hence we find $$g(x)\leqslant g(1)=1-q+\log q.$$ Set $h(x)=1-x+\log x$, where $x\in (0,1)$, then we have $$h'(x)=-1+\frac{1}{x}=\frac{1-x}{x}>0.$$ Finally, we get $$g(1)=h(q)<h(1)=0,$$ which implies that $f'(x)<0$. \qed \begin{lem}\label{Lem:Monotone} Let $d,m_1,\ldots,m_d,k$ be positive integers such that $m_{1}\geqslant \cdots \geqslant m_{d}$ and $k \geqslant d+1$. Then the function $f(q)=\frac{q^{m_{1}(k-1)}}{{[m_{1}:q]}^{k}[m_{2}:q]\cdots [m_{d}:q]}$ is monotonically increasing on the interval $(0,1)$. In particular, for $m,k\in\mathbb{N}$ with $k\geqslant 2$, the function $\frac{q^{m(k-1)}}{{[m:q]}^{k}}$ is monotonically increasing on the interval $(0,1)$. \end{lem} \noindent {\bf Proof.} Since $$f(q)=\frac{{(1-q)}^{k+d-1}q^{m_{1}(k-1)}}{(1-q^{m_{1}})^{k}(1-q^{m_{2}})\ldots(1-q^{m_{d}})},$$ taking the logarithmic derivative of $f(q)$, we get $$\frac{f'(q)}{f(q)}=\frac{m_{1}(k-1)}{q}-\frac{k+d-1}{1-q}+\frac{k{m_{1}}{q^{m_{1}-1}}}{1-q^{m_{1}}}+\sum_{i=2}^{d}\frac{{m_{i}}{q^{m_{i}-1}}}{1-q^{m_{i}}}.$$ Using Lemma \ref{Lem:Mono-Decreasing}, we get $$\frac{f'(q)}{f(q)}\geqslant \frac{m_{1}(k-1)}{q}-\frac{k+d-1}{1-q}+\frac{(k+d-1){m_{1}}{q^{m_{1}-1}}}{1-q^{m_{1}}},$$ which is equivalent to $$q(1-q)(1-q^{m_1})\frac{f'(q)}{f(q)}\geqslant g(q)$$ with $$g(q)=\left[m_{1}(k-1)(1-q)-(k+d-1)q\right](1-q^{m_1})+(k+d-1)m_{1}q^{m_{1}}(1-q).$$ Then it is enough to show that $$g(q)>0,\quad \text{for each\;} q\in (0,1).$$ In fact, we have \begin{align*} &g'(q)=m_{1}-m_{1}k-k-d+1+m_{1}^{2}dq^{m_{1}-1}+(k+d-1-m_{1}d)(m_{1}+1)q^{m_{1}},\\ &g''(q)=q^{m_{1}-2}m_{1}[m_{1}(m_{1}-1)d+(m_{1}+1)(k+d-1-m_{1}d)q]. 
\end{align*} Set $$h(q)=m_{1}(m_{1}-1)d+(m_{1}+1)(k+d-1-m_{1}d)q,$$ then, since $k\geqslant d+1$, we get $$h(0)=m_{1}(m_{1}-1)d\geqslant 0,\quad h(1)=m_{1}(k-d-1)+k+d-1>0.$$ Therefore for any $q\in (0,1)$, we have $h(q)>0$ and then $g''(q)>0$. This implies that $$g'(q)<g'(1)=0,\quad \text{for each\;} q\in (0,1),$$ and then $$g(q)>g(1)=0,\quad \text{for each\;} q\in (0,1).$$ This completes the proof.\qed From Lemma \ref{Lem:Monotone}, we get the norms of height one multiple $q$-zeta values. \begin{cor}\label{Cor:Norm-qMZV} For any nonnegative integers $n$ and $m$, we have $$\|\zeta[2+n,\{1\}^{m}:q]\|=\zeta(2+n,\{1\}^{m}).$$ \end{cor} \noindent {\bf Proof}. If $n\geqslant m$, we get the result from Lemma \ref{Lem:Monotone}. If $n\leqslant m$, applying the duality formula \eqref{Eq:Dual-Formula} and its multiple zeta values' version, we get the result from Lemma \ref{Lem:Monotone}. \qed Now we provide upper and lower bounds for tails of multiple $q$-zeta values. We set $$\Omega_1(\mathbf{k},n:q)={\left({\frac{q-1}{\log q}}\right)}^{d}{\left(\frac{q^{n+d}}{[n+d:q]}\right)}^{{k_{1}+\cdots+k_{d}-d}}\prod_{i=1}^{d}\frac{1}{k_{1}+\cdots+k_{i}-i}$$ and $$\Omega_2(\mathbf{k},n:q)={\left({\frac{q-1}{\log q}}\right)}^{d}{\left(\frac{q^{n}}{[n:q]}\right)}^{{k_{1}+\cdots+k_{d}-d}}\prod_{i=1}^{d}\frac{1}{k_{1}+\cdots+k_{i}-i},$$ where $\mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{N}^d$ and $n\in\mathbb{N}$. \begin{lem}\label{Lem:Equivalence} For any admissible multi-index $\mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{N}^d$ and any $n\in\mathbb{N}$, we have \begin{align} \Omega_1(\mathbf{k},n:q)\prec\zeta[\mathbf{k}:q]_{n}\prec\Omega_2(\mathbf{k},n:q). \label{Eq:Ineq-qMZV-Tail} \end{align} \end{lem} \noindent {\bf Proof.} We prove by induction on $d$. Let $q\in (0,1)$ be fixed. For $d=1$, we have to show that \begin{align} \frac{q-1}{\log q}\left(\frac{q^{n+1}}{[n+1:q]}\right)^{k_1-1}\frac{1}{k_1-1}<\zeta[k_1:q]_n<\frac{q-1}{\log q}\left(\frac{q^{n}}{[n:q]}\right)^{k_1-1}\frac{1}{k_1-1}. 
\label{Eq:Ineq-qMZV-Tail-D1} \end{align} In fact, set $f_{k_1}(x)=\frac{q^{x(k_{1}-1)}}{{(1-q^{x})}^{k_{1}}}$, then we have $$(1-q^x)^{k_1+1}f_{k_1}'(x)=q^{x(k_1-1)} (k_1-1+q^x)\log q<0,\quad (x\geqslant 1).$$ Hence $f_{k_1}(x)$ is monotonically decreasing on the interval $[1,+\infty)$ for any $k_{1}\in\mathbb{N}$. Then we obtain $${(1-q)}^{k_{1}}\int _{n+1}^{\infty}\frac{q^{x(k_{1}-1)}}{{(1-q^{x})}^{k_{1}}}dx < \zeta[k_1:q]_n<{(1-q)}^{k_{1}}\int _{n}^{\infty}\frac{q^{x(k_{1}-1)}}{{(1-q^{x})}^{k_{1}}}dx.$$ For $k_1\geqslant 2$, the substitution $y=q^x$ gives the following identity $${(1-q)}^{k_{1}}\int_{a}^{b}\frac{q^{x(k_{1}-1)}}{{(1-q^{x})}^{k_{1}}}dx =\left[\frac{{(1-q)}^{k_{1}}}{(k_{1}-1)\log q}{\left(\frac{y}{1-y}\right)}^{k_{1}-1}\right]_{y=q^{a}}^{q^{b}}, $$ from which we get \eqref{Eq:Ineq-qMZV-Tail-D1}. If $d>1$, we have $$\zeta[\mathbf{k}:q]_n=\zeta[k_1,\ldots,k_d:q]_{n}=\sum_{m_{d}>n}\frac{q^{m_{d}(k_{d}-1)}}{{[m_{d}:q]}^{k_{d}}}\zeta[k_1,\ldots,k_{d-1}:q]_{m_{d}}.$$ Using the induction hypothesis, we get $$\Lambda_l(q)<\zeta[\mathbf{k}:q]_n<\Lambda_r(q),$$ where \begin{align*} \Lambda_l(q)=&\sum_{m_{d}>n}\frac{q^{m_{d}(k_{d}-1)}}{{[m_{d}:q]}^{k_{d}}}{\left({\frac{q-1}{\log q}}\right)}^{d-1}\left(\frac{q^{m_{d}+d-1}}{{[m_{d}+d-1:q]}}\right)^{{k_{1}+\cdots+k_{d-1}-d+1}}\\ &\quad\times\prod_{i=1}^{d-1}\frac{1}{k_{1}+\cdots+k_{i}-i}, \end{align*} and \begin{align*} \Lambda_r(q)=&\sum_{m_{d}>n}\frac{q^{m_{d}(k_{d}-1)}}{{[m_{d}:q]}^{k_{d}}}{\left({\frac{q-1} {\log q}}\right)}^{d-1}\left(\frac{q^{m_{d}}}{{[m_{d}:q]}}\right)^{{k_{1}+\cdots+k_{d-1}-d+1}}\\ &\quad\times\prod_{i=1}^{d-1}\frac{1}{k_{1}+\cdots+k_{i}-i}. 
\end{align*} Since $$\frac{q^{m_d(k_d-1)}}{[m_d:q]^{k_d}}=(1-q)^{k_d}f_{k_d}(m_d)>(1-q)^{k_d}f_{k_d}(m_d+d-1)=\frac{q^{(m_d+d-1)(k_d-1)}}{[m_d+d-1:q]^{k_d}},$$ we find $$\Lambda_l(q)>\left(\frac{q-1}{\log q}\right)^{d-1}\prod_{i=1}^{d-1}\frac{1}{k_{1}+\cdots+k_{i}-i}\zeta[k_1+\cdots+k_d-d+1:q]_{n+d-1}.$$ Using the lower bound in the case of $d=1$, we have $$\Lambda_l(q)>{\left({\frac{q-1}{\log q}}\right)}^{d}{\left(\frac{q^{n+d}}{[n+d:q]}\right)}^{{k_{1}+\cdots+k_{d}-d}}\prod_{i=1}^{d}\frac{1}{k_{1}+\cdots+k_{i}-i},$$ as desired. Similarly, since $$\Lambda_r(q)=\left(\frac{q-1}{\log q}\right)^{d-1}\prod_{i=1}^{d-1}\frac{1}{k_{1}+\cdots+k_{i}-i}\zeta[k_1+\cdots+k_d-d+1:q]_{n},$$ using the upper bound in the case of $d=1$, we have $$\Lambda_r(q)<{\left({\frac{q-1}{\log q}}\right)}^{d}{\left(\frac{q^{n}}{[n:q]}\right)}^{{k_{1}+\cdots+k_{d}-d}}\prod_{i=1}^{d}\frac{1}{k_{1}+\cdots+k_{i}-i},$$ as desired. \qed \subsection{Convergent sequences in $\mathcal{QZZ}$} To prove Theorem \ref{Thm:Derived-Set}, we have to know the behaviour of the convergent sequences in the space $\mathcal{QZZ}$. We first introduce some notation. Let $(\mathbf{k}(n))_{n\in\mathbb{N}}=((k_1(n),\ldots,k_{d(n)}(n)))_{n\in\mathbb{N}}$ be a fixed sequence of admissible multi-indices. Set $$N_2=\{n\in\mathbb{N}\mid k_1(n)=2\}\subset\mathbb{N}.$$ For any $n\in N_2$, we define $$l(n)=\begin{cases} i & \text{if\;} d(n)\geqslant 2 \text{\;and\;}k_2(n)=\cdots=k_{i-1}(n)=1,k_i(n)\geqslant 2,\\ 1 & \text{otherwise}, \end{cases}$$ and $v(n)=d(n)-l(n)+1$. Then for $n\in N_2$ and $l(n)\geqslant 2$, we set $$\mathbf{k}(n)=(2,\{1\}^{l(n)-2},k_{l(n)}(n),\ldots,k_{d(n)}(n))=(2,\{1\}^{l(n)-2},s_1(n),\ldots,s_{v(n)}(n)).$$ Finally, we define some subsets of $\mathbb{N}$ as follows. Let \begin{align*} D=&\{d(n)\mid n\in N_2\},\\ V=&\{v(n)\mid n\in N_2\},\\ W=&\{k_1(n)+\cdots+k_{d(n)}(n)\mid n\in N_2\},\\ W'=&\{s_1(n)+\cdots+s_{v(n)}(n)\mid n\in N_2,l(n)\geqslant 2\}. \end{align*} Then we have the following theorem. 
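Before stating the theorem, we note that the two-sided estimate \eqref{Eq:Ineq-qMZV-Tail} is easy to sanity-check numerically. The sketch below (ours; the truncation cutoff $M$ is an ad hoc choice) treats the case $d=1$, $\mathbf{k}=(2)$, where $\Omega_2(\mathbf{k},n:q)=\frac{q-1}{\log q}\cdot\frac{q^{n}}{[n:q]}$ and $\Omega_1$ is the same expression with $n$ replaced by $n+1$.

```python
# Numerical sanity check (ours) of Omega_1 < zeta[k : q]_n < Omega_2
# in the case d = 1, k = (2).
from math import log

def qint(m, q):
    return (1 - q ** m) / (1 - q)

def tail(q, n, M=400):                    # zeta[2 : q]_n, truncated
    return sum(q ** m / qint(m, q) ** 2 for m in range(n + 1, M))

def omega(q, n):                          # (q - 1)/log(q) * q^n / [n : q]
    return (q - 1) / log(q) * q ** n / qint(n, q)

ok = all(omega(q, n + 1) < tail(q, n) < omega(q, n)
         for q in (0.3, 0.6) for n in (1, 3, 8))
print(ok)                                 # True: Omega_1 and Omega_2 sandwich the tail
```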
\begin{thm}\label{Thm:Classification} Let $(\mathbf{k}(n))_{n\in\mathbb{N}}=((k_1(n),\ldots,k_{d(n)}(n)))_{n\in\mathbb{N}}$ be a sequence of admissible multi-indices and $(r(n))_{n\in\mathbb{N}}$ be a sequence of positive integers. Assume that $0$ is not an accumulation point of the sequence $(\zeta[\mathbf{k}(n);r(n):q))_{n\in\mathbb{N}}$ in $\mathbf{B}(0,1)$. The following holds. \begin{enumerate} \item [(i)] If $N_2$ is a finite set, then both the sets $\{d(n)\mid n\in\mathbb{N}\}$ and $\{k_1(n)+\cdots+k_{d(n)}(n)\mid n\in\mathbb{N}\}$ are bounded. \item [(ii)] Assume that $N_2$ is infinite. Then one of the following subcases holds. \begin{enumerate} \item [(ii-1)] If $D$ is bounded, then $W$ is bounded; \item [(ii-2)] If both $D$ and $V$ are unbounded, then there are only finitely many $n\in N_2$, such that $l(n)\geqslant 2$; \item [(ii-3)] If $D$ is unbounded while $V$ is bounded, then $W'$ is bounded. \end{enumerate} \end{enumerate} \end{thm} \noindent {\bf Proof.} (i) Assume that $N_2$ is a finite set. Without loss of generality, we may assume that $N_2=\emptyset$. Then for any $n\in\mathbb{N}$, we have $k_1(n)\geqslant 3$. Using Lemma \ref{Lem:Del-One-Big} and the duality formula \eqref{Eq:Dual-Formula}, we have \begin{align*} \zeta[k_{1}(n),\cdots,k_{d(n)}(n);r(n):q)\prec&\zeta[k_{1}(n),\cdots,k_{d(n)}(n),1:q]\\ \preccurlyeq&\zeta[3,\{1\}^{d(n)}:q]=\zeta[2+d(n),1:q]. \end{align*} Taking norms, we have $$\|\zeta[k_{1}(n),\cdots,k_{d(n)}(n);r(n):q)\|\leqslant \|\zeta[2+d(n),1:q]\|=\zeta(2+d(n),1),$$ where the last equality is from Corollary \ref{Cor:Norm-qMZV}. 
If $d(n)$ is unbounded, then there exists an infinite sequence $(n_l)$ of elements of $\mathbb{N}$, such that $$\lim\limits_{l\rightarrow \infty}d(n_l)=\infty.$$ Since $$\lim\limits_{d(n)\rightarrow \infty}\zeta(2+d(n),1)=0,$$ we get $$\lim\limits_{l\rightarrow \infty}\|\zeta[k_{1}(n_l),\cdots,k_{d(n_l)}(n_l);r(n_l):q)\|=0,$$ which implies that $0$ is an accumulation point of the sequence $(\zeta[\mathbf{k}(n);r(n):q))_{n\in\mathbb{N}}$, a contradiction. Hence $d(n)$ is bounded. Let $d$ be the maximal element of $\{d(n)\mid n\in\mathbb{N}\}$. For any $1\leqslant p\leqslant d$, set $$M_p=\{n\in\mathbb{N}\mid d(n)=p\}.$$ For a fixed $p$, we show that for any $1\leqslant j\leqslant p$, the sets $$\{k_j(n)\mid n\in M_p\}$$ are all bounded. If $M_p$ is a finite set, then the result follows easily. Now assume that $M_p$ is an infinite set. For $j=1$, as above we have \begin{align*} \|\zeta[k_{1}(n),\cdots,k_p(n);r(n):q)\|\leqslant&\|\zeta[k_{1}(n),\ldots,k_p(n),1:q]\|\\ \leqslant&\|\zeta[k_{1}(n),\{1\}^{p}:q]\|=\zeta(k_1(n),\{1\}^p). \end{align*} If $k_{1}(n)$ is unbounded for $n\in M_p$, then there exists an infinite sequence $(n_l)$ of elements of $M_p$ such that $$\lim\limits_{l\rightarrow \infty}k_1(n_l)=\infty.$$ Therefore we have $$\lim\limits_{l\rightarrow\infty}\|\zeta[k_{1}(n_l),\cdots,k_p(n_l);r(n_l):q)\|=0,$$ a contradiction. Hence $k_1(n)$ is bounded for $n\in M_p$. Assume $2\leqslant j\leqslant p$. We may assume that for any $n\in M_p$, $k_j(n)\geqslant 2$. We have \begin{align*} &\zeta[k_{1}(n),\ldots,k_p(n);r(n):q)\\ =&\sum\limits_{m_1>\cdots>m_{j-1}>m_{j}}\prod_{i=1}^{j-1}\frac{q^{m_{i}(k_{i}(n)-1)}}{[m_{i}:q]^{k_{i}(n)}} \sum\limits_{m_{j}>\cdots>m_p>m_{p+1}>0}\prod_{i=j}^{p}\frac{q^{m_{i}(k_{i}(n)-1)}}{[m_{i}:q]^{k_{i}(n)}}\frac{1}{m_{p+1}^{r(n)}}\\ \prec&\zeta[k_{1}(n),\ldots,k_{j-1}(n):q]\zeta[k_{j}(n),\ldots,k_p(n);r(n):q).
\end{align*} Assume that we have shown that $k_1(n),\ldots,k_{j-1}(n)$ are all bounded for $n\in M_p$; then $\{\|\zeta[k_{1}(n),\ldots,k_{j-1}(n):q]\|\mid n\in M_p\}$ is bounded. If $k_j(n)$ is unbounded for $n\in M_p$, then as in the case of $j=1$, there exists an infinite sequence $(n_l)$ of elements of $M_p$, such that $$\lim\limits_{l\rightarrow\infty}\|\zeta[k_{j}(n_l),\ldots,k_p(n_l);r(n_l):q)\|=0.$$ Therefore, we again get a contradiction. Hence $k_j(n)$ is bounded for $n\in M_p$. Finally, we find that the weights of the $\mathbf{k}(n)$ are bounded, and hence the proof of (i) follows. \noindent Proof of (ii). Assume that $N_2$ is an infinite set. \noindent Proof of (ii-1). If $D$ is bounded, set $d=\max D$. We only need to prove that for any $2\leqslant j\leqslant d$, $k_j(n)$ is bounded for $n\in N_2$. Then one may use an argument similar to that in (i) to get the result. \noindent Proof of (ii-2). Assume that both $D$ and $V$ are unbounded and that there are infinitely many $n\in N_2$ such that $l(n)\geqslant 2$. Without loss of generality, we may assume that for any $n\in N_2$, $l(n)\geqslant 2$. Then for $n\in N_2$, we have \begin{align*} &\zeta[\mathbf{k}(n);r(n):q)=\zeta[k_{1}(n),\ldots,k_{l(n)+v(n)-1}(n);r(n):q)\\ =&\sum\limits_{m_1>\cdots>m_{l(n)-1}>m_{l(n)}}\prod_{i=1}^{l(n)-1}\frac{q^{m_{i}(k_{i}(n)-1)}}{[m_{i}:q]^{k_{i}(n)}}\\ &\times\sum\limits_{m_{l(n)}>\cdots>m_{l(n)+v(n)-1}>m_{l(n)+v(n)}>0}\prod_{i=l(n)}^{l(n)+v(n)-1}\frac{q^{m_{i}(k_{i}(n)-1)}}{[m_{i}:q]^{k_{i}(n)}}\frac{1}{m_{l(n)+v(n)}^{r(n)}}. \end{align*} For $m_{l(n)}>\cdots>m_{l(n)+v(n)-1}>m_{l(n)+v(n)}>0$, we have $m_{l(n)}\geqslant v(n)+1>v(n)$.
Hence \begin{align*} &\zeta[\mathbf{k}(n);r(n):q)\prec\sum\limits_{m_1>\cdots>m_{l(n)-1}>v(n)}\prod_{i=1}^{l(n)-1}\frac{q^{m_{i}(k_{i}(n)-1)}}{[m_{i}:q]^{k_{i}(n)}}\\ &\times\sum\limits_{m_{l(n)}>\cdots>m_{l(n)+v(n)-1}>m_{l(n)+v(n)}>0}\prod_{i=l(n)}^{l(n)+v(n)-1}\frac{q^{m_{i}(k_{i}(n)-1)}}{[m_{i}:q]^{k_{i}(n)}}\frac{1}{m_{l(n)+v(n)}^{r(n)}}\\ =&\zeta[k_1(n),\ldots,k_{l(n)-1}(n):q]_{v(n)}\zeta[k_{l(n)}(n),\ldots,k_{l(n)+v(n)-1}(n);r(n):q). \end{align*} Using Lemma \ref{Lem:Del-One-Big} and the duality formula \eqref{Eq:Dual-Formula}, we have \begin{align*} \zeta[\mathbf{k}(n);r(n):q)\prec&\zeta[2,\{1\}^{l(n)-2}:q]_{v(n)}\zeta[2,\{1\}^{v(n)}:q]\\ =&\zeta[2,\{1\}^{l(n)-2}:q]_{v(n)}\zeta[v(n)+2:q]. \end{align*} By Lemma \ref{Lem:Equivalence}, we get $$\zeta[\mathbf{k}(n);r(n):q)\prec\left(\frac{q-1}{\log q}\right)^{l(n)-1}\frac{q^{v(n)}}{[v(n):q]}\zeta[v(n)+2:q].$$ Using Lemma \ref{Lem:OneTerm-Increasing}, Corollary \ref{Cor:Norm-qMZV} and the fact that the function $\frac{q-1}{\log q}$ is monotonically increasing on $(0,1)$, we have $$\|\zeta[\mathbf{k}(n);r(n):q)\|\leqslant \frac{1}{v(n)}\zeta(v(n)+2).$$ Then from the unboundedness of $V$, we get a contradiction. \noindent Proof of (ii-3). Assume that $D$ is unbounded and $V$ is bounded. Then $l(n)$ is unbounded for $n\in N_2$. Hence $$\widetilde{N_2}=\{n\in N_2\mid l(n)\geqslant 2\}$$ is an infinite subset of $N_2$. Set $v=\max V$, and for $1\leqslant p\leqslant v$, set $$\widetilde{M_p}=\{n\in \widetilde{N_2}\mid v(n)=p\}.$$ For a fixed $p$, we need to show that $s_1(n),\ldots,s_p(n)$ are all bounded for $n\in \widetilde{M_p}$. Now \begin{align*} \zeta[\mathbf{k}(n);r(n):q)\prec&\zeta[2,\{1\}^{l(n)-2}:q]\zeta[s_1(n),\ldots,s_p(n);r(n):q)\\ =&\zeta[l(n):q]\zeta[s_1(n),\ldots,s_p(n);r(n):q). \end{align*} Then similarly as in the proof of (i), we get the result. \qed As a consequence, we get the following result, which is used to compute $\mathcal{QZZ}^{(1)}$. 
\begin{cor}\label{Cor:Classification} Let $(\mathbf{k}(n))_{n\in\mathbb{N}}=((k_{1}(n),\ldots,k_{d(n)}(n)))_{n\in\mathbb{N}}$ be a sequence of admissible multi-indices and $(r(n))_{n\in\mathbb{N}}$ be a sequence of positive integers. Assume that for any $n_1\neq n_2$, $\zeta[\mathbf{k}(n_1);r(n_1):q)\neq \zeta[\mathbf{k}(n_2);r(n_2):q)$. If $0$ is not an accumulation point of the sequence $(\zeta[\mathbf{k}(n);r(n):q))_{n\in\mathbb{N}}$ in $\mathbf{B}(0,1)$, then $((\mathbf{k}(n),r(n)))_{n\in \mathbb{N}}$ has a subsequence of one of the following types: \begin{align} &((k_{1},\ldots,k_{d},\varphi(n)+2))_{n\in \mathbb{N}}, \label{Eq:Type-1}\\ &((2,\{1\}^{\psi(n)},r))_{n\in \mathbb{N}}, \label{Eq:Type-2}\\ &((2,\{1\}^{\psi(n)},\varphi(n)+2))_{n\in \mathbb{N}}, \label{Eq:Type-3}\\ &((2,\{1\}^{\psi(n)},k_{1},\ldots,k_{d},r))_{n\in \mathbb{N}}, \label{Eq:Type-4}\\ &((2,\{1\}^{\psi(n)},k_{1},\ldots,k_{d},\varphi(n)+2))_{n\in \mathbb{N}}, \label{Eq:Type-5} \end{align} where $(k_{1},\ldots,k_{d})$ is a fixed admissible multi-index, $r$ is a fixed positive integer and ${(\psi(n))}_{n\in \mathbb{N}}$, ${(\varphi(n))}_{n\in \mathbb{N}}$ are strictly increasing sequences in $\mathbb{N}$. \end{cor} \noindent {\bf Proof.} We use the same notation as in Theorem \ref{Thm:Classification}. If $N_2$ is finite, then by Theorem \ref{Thm:Classification}, $d(n)$ and $k_1(n)+\cdots+k_{d(n)}(n)$ are bounded for $n\in\mathbb{N}$. Hence there exists an infinite subset $A$ of $\mathbb{N}$, such that $d(n)=d$ is a constant for any $n\in A$. Since $k_1(n)$ is bounded for $n\in A$, there exists an infinite subset $A_1$ of $A$, such that $k_1(n)=k_1$ is a constant for any $n\in A_1$. Similarly, there exists an infinite subset $A_2$ of $A_1$, such that $k_2(n)=k_2$ is a constant for any $n\in A_2$. And finally, there exists an infinite subset $B$ of $A$, such that $$k_1(n)=k_1,\ldots,k_d(n)=k_d$$ are all constants for any $n\in B$. 
Now $(r(n))_{n\in B}$ must be unbounded, hence $((\mathbf{k}(n),r(n)))_{n\in \mathbb{N}}$ has a subsequence of the form \eqref{Eq:Type-1}. Now assume that $N_2$ is an infinite set. If $D$ is bounded, then by Theorem \ref{Thm:Classification}, $W$ is bounded. A similar argument as above implies that $((\mathbf{k}(n),r(n)))_{n\in \mathbb{N}}$ has a subsequence of the form \eqref{Eq:Type-1}. If both $D$ and $V$ are unbounded, then by Theorem \ref{Thm:Classification}, there is an infinite subset $A$ of $N_2$, such that $l(n)=1$ for all $n\in A$. Then $((\mathbf{k}(n),r(n)))_{n\in \mathbb{N}}$ has a subsequence of the form \eqref{Eq:Type-2} or of the form \eqref{Eq:Type-3}, according to whether the sequence $(r(n))_{n\in A}$ is bounded or unbounded. Finally, if $D$ is unbounded while $V$ is bounded, then by Theorem \ref{Thm:Classification}, $((\mathbf{k}(n),r(n)))_{n\in \mathbb{N}}$ has a subsequence of the form \eqref{Eq:Type-4} or of the form \eqref{Eq:Type-5}. \qed To determine $\mathcal{QZZ}^{(2)}$, we need the following. \begin{thm}\label{Thm:qMZVTail-Classification} Let $(\mathbf{k}(n))_{n\in\mathbb{N}}=((k_{1}(n),\ldots,k_{d(n)}(n)))_{n\in\mathbb{N}}$ be a sequence of admissible multi-indices. Assume that for any $n_1\neq n_2$, $\zeta[\mathbf{k}(n_1):q]_1\neq \zeta[\mathbf{k}(n_2):q]_1$. If $0$ is not an accumulation point of the sequence $(\zeta[\mathbf{k}(n):q]_1)_{n\in\mathbb{N}}$ in $\mathbf{B}(0,1)$, then $(\mathbf{k}(n))_{n\in \mathbb{N}}$ has a subsequence of one of the following types: \begin{align} &((2,\{1\}^{\psi(n)}))_{n\in \mathbb{N}}, \label{Eq:qType-1}\\ &((2,\{1\}^{\psi(n)},k_{1},\ldots,k_{d}))_{n\in \mathbb{N}}, \label{Eq:qType-2} \end{align} where $(k_{1},\ldots,k_{d})$ is a fixed admissible multi-index and ${(\psi(n))}_{n\in \mathbb{N}}$ is a strictly increasing sequence in $\mathbb{N}$. \end{thm} \noindent {\bf Proof.} The proof is similar to those of Theorem \ref{Thm:Classification} and Corollary \ref{Cor:Classification}.
The only difference is that, since for $k_{d(n)}(n)\geqslant 2$ we have $$\zeta[k_{1}(n),\ldots,k_{d(n)}(n):q]_{1}\prec\zeta[k_{1}(n),\ldots,k_{d(n)-1}(n):q]\zeta[k_{d(n)}(n):q]_{1},$$ once $k_1(n),\ldots,k_{d(n)-1}(n)$ are shown to be bounded, $k_{d(n)}(n)$ is also bounded. \qed \subsection{Some lemmas} To prove Theorem \ref{Thm:Derived-Set}, we recall the notion of double tails of multiple zeta values, due to Akhilesh \cite{Akhilesh}. Let $\mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{N}^d$ be an admissible multi-index and $n,p$ be two nonnegative integers. Then we define $$\zeta(\mathbf{k})_{p,n}=\zeta(k_1,\ldots,k_d)_{p,n}=\sum\limits_{m_1>\cdots>m_d>n}\binom{m_{1}+p}{p}^{-1}\frac{1}{m_1^{k_1}\cdots m_d^{k_d}}.$$ We need the duality formula for double tails of multiple zeta values. Any admissible multi-index has the form $$\mathbf{k}=(a_1+1,\{1\}^{b_1-1},\ldots,a_s+1,\{1\}^{b_s-1}),$$ where $s,a_1,b_1,\ldots,a_s,b_s\in\mathbb{N}$. Then the dual index of $\mathbf{k}$ is defined as $$\overline{\mathbf{k}}=(b_s+1,\{1\}^{a_s-1},\ldots,b_1+1,\{1\}^{a_1-1}).$$ \begin{lem}[\cite{Akhilesh}]\label{Lem:DoubleTail} Let $\mathbf{k}$ be an admissible multi-index and $\overline{\mathbf{k}}$ be its dual. Then for any nonnegative integers $p$ and $n$, we have \begin{align} \zeta(\mathbf{k})_{p,n}=\zeta(\overline{\mathbf{k}})_{n,p}. \label{Eq:DoubleTail-Duality} \end{align} \end{lem} Setting $p=n=0$ in \eqref{Eq:DoubleTail-Duality}, we get the well-known duality formula for multiple zeta values $$\zeta(\mathbf{k})=\zeta(\overline{\mathbf{k}}).$$ We need to compare the values of the double tails of multiple zeta values.
\begin{lem}\label{Lem:Compare-DoubleTail} For any $d,j,k_1,\ldots,k_d\in\mathbb{N}$ with $k_1\geqslant 2$ and $j\leqslant d$, we have $$\zeta(k_1,\ldots,k_{j-1},k_{j}+1,k_{j+1},\ldots,k_d)_{0,1}<\zeta(k_1,\ldots,k_{j},\ldots,k_d)_{0,1}$$ and $$\zeta(2,\{1\}^{d})_{0,1}<\zeta(2,\{1\}^{d-1})_{0,1}.$$ \end{lem} \noindent {\bf Proof.} The first inequality follows from $$\frac{1}{m_j^{k_j+1}}<\frac{1}{m_j^{k_j}}$$ for positive integer $m_j>1$. For the second inequality, using the duality formula \eqref{Eq:DoubleTail-Duality}, we have \begin{align*} \zeta(2,\{1\}^{d})_{0,1}=&\zeta(d+2)_{1,0}=\sum\limits_{m=1}^\infty\binom{m+1}{1}^{-1}\frac{1}{m^{d+2}}\\ <&\sum\limits_{m=1}^\infty\binom{m+1}{1}^{-1}\frac{1}{m^{d+1}}=\zeta(d+1)_{1,0}=\zeta(2,\{1\}^{d-1})_{0,1}, \end{align*} as desired. \qed Finally, to show that certain sequences in $\mathbf{B}(0,1)$ do not converge, we need the following simple result. \begin{lem}\label{lem:Uniform-Convergence} Let the sequence ${{(f_n)}_{n\in \mathbb{N}}}$ converge to $f$ in $\mathbf{B}(0,1)$ as $n$ tends to infinity. Then $\|f_n\|$ converges to $\|f\|$, and for any $q\in (0,1)$, $f_n(q)$ converges to $f(q)$ in $\mathbb{R}$. \end{lem} \noindent {\bf Proof.} We have $$\lim_{n\rightarrow\infty}\|f_n-f\|=0.$$ Then the facts $$|\|f_n\|-\|f\|| \leqslant \|f_n-f \|$$ and $$|f_n(q)-f(q)|\leqslant \|f_n-f \|,\quad (\text{for each\;} q\in (0,1))$$ imply the results. \qed \subsection{Proof of Theorem \ref{Thm:Derived-Set}} Now we prove Theorem \ref{Thm:Derived-Set}. We first compute $\mathcal{QZZ}^{(1)}$. Let $n\in\mathbb{N}$.
Using Lemma \ref{Lem:Del-One-Big} and the duality formula \eqref{Eq:Dual-Formula}, we have $$\zeta[3,\{1\}^{n-1};1:q)\prec\zeta[3,\{1\}^n:q]=\zeta[n+2,1:q].$$ Then by Corollary \ref{Cor:Norm-qMZV}, we have $$\|\zeta[3,\{1\}^{n-1};1:q)\|\leqslant \|\zeta[n+2,1:q]\|=\zeta(n+2,1).$$ Since $\lim\limits_{n\rightarrow \infty}\zeta(n+2,1)=0$, we get $$\lim\limits_{n\rightarrow\infty}\|\zeta[3,\{1\}^{n-1};1:q)\|=0,$$ which implies that $0\in \mathcal{QZZ}^{(1)}$. Similarly, let $\mathbf{k}=(k_1,\ldots,k_d)$ be an admissible multi-index and $n\in\mathbb{N}$. We have \begin{align*} &\zeta[k_{1},\ldots,k_{d};n+2:q)-\zeta[k_1,\ldots,k_d:q]_{1}\\ =&\sum_{m_{d+1}=2}^{\infty}\sum\limits_{m_1>\cdots>m_{d}>m_{d+1}}\prod_{i=1}^{d}\frac{q^{m_{i}(k_{i}-1)}}{[m_{i}:q]^{k_{i}}}\cdot \frac{1}{m_{d+1}^{n+2}}\\ \prec&\zeta[k_1,\ldots,k_d:q]\sum\limits_{m=2}^\infty\frac{1}{m^{n+2}}, \end{align*} which implies that $$\|\zeta[k_{1},\ldots,k_{d};n+2:q)-\zeta[k_1,\ldots,k_d:q]_{1}\|\leqslant \|\zeta[k_1,\ldots,k_d:q]\|\sum\limits_{m=2}^\infty\frac{1}{m^{n+2}}.$$ Since $$\lim\limits_{n\rightarrow\infty}\sum\limits_{m=2}^\infty\frac{1}{m^{n+2}}=0,$$ we get $$\lim\limits_{n\rightarrow\infty}\zeta[k_{1},\ldots,k_{d};n+2:q)=\zeta[k_1,\ldots,k_d:q]_{1}.$$ Hence $\zeta[k_1,\ldots,k_d:q]_{1}\in\mathcal{QZZ}^{(1)}$. Conversely, for any $f\in\mathcal{QZZ}^{(1)}$, there exists a sequence $(\zeta[\mathbf{k}(n);r(n):q))_{n\in\mathbb{N}}$ such that $\mathbf{k}(n)$ is admissible, $r(n)\in\mathbb{N}$ and $$\lim\limits_{n\rightarrow \infty}\zeta[\mathbf{k}(n);r(n):q)=f.$$ We may assume that $f\neq 0$ and for any $n_1\neq n_2$, $\zeta[\mathbf{k}(n_1);r(n_1):q)\neq \zeta[\mathbf{k}(n_2);r(n_2):q)$. By Corollary \ref{Cor:Classification}, the sequence $((\mathbf{k}(n),r(n)))_{n\in\mathbb{N}}$ has a subsequence of one of the types \eqref{Eq:Type-1}-\eqref{Eq:Type-5}.
Without loss of generality, we may assume that the sequence $((\mathbf{k}(n),r(n)))_{n\in\mathbb{N}}$ itself is one of the types \eqref{Eq:Type-1}-\eqref{Eq:Type-5}. Let $((\mathbf{k}(n),r(n)))_{n\in\mathbb{N}}$ be of the type \eqref{Eq:Type-1}. Similarly as above, we have $$\lim\limits_{n\rightarrow\infty}\|\zeta [k_{1},\ldots,k_{d};\varphi(n)+2:q)-\zeta[k_1,\ldots,k_d:q]_{1}\|=0.$$ Therefore, $f=\zeta[k_1,\ldots,k_d:q]_{1}$. Let $((\mathbf{k}(n),r(n)))_{n\in\mathbb{N}}$ be one of the types \eqref{Eq:Type-2}-\eqref{Eq:Type-5}. On the one hand, using Lemma \ref{lem:Uniform-Convergence}, we have $$\|f\|=\lim\limits_{n\rightarrow\infty}\|\zeta[\mathbf{k}(n);r(n):q)\|.$$ As the function $\zeta[\mathbf{k}(n);r(n):q)$ is continuous in the interval $[0,1]$, by the definition of norms, we have $$\|\zeta[\mathbf{k}(n);r(n):q)\|\geqslant \zeta(\mathbf{k}(n),r(n)).$$ Using \cite[Theorem 3]{Senthil Kumar}, for $n$ large enough, we have $$\|\zeta[\mathbf{k}(n);r(n):q)\|\geqslant \zeta(2,\{1\}^{\psi(n)},k_1,\ldots,k_d,\varphi(n)+2).$$ Here if $((\mathbf{k}(n),r(n)))_{n\in\mathbb{N}}$ is of the type \eqref{Eq:Type-2} or \eqref{Eq:Type-3}, we take $(k_1,\ldots,k_d)$ to be any fixed admissible multi-index. And if $((\mathbf{k}(n),r(n)))_{n\in\mathbb{N}}$ is of the type \eqref{Eq:Type-2} or \eqref{Eq:Type-4}, we take $(\varphi(n))_{n\in\mathbb{N}}$ to be any fixed strictly increasing sequence in $\mathbb{N}$. 
Now since \begin{align*} &\zeta(2,\{1\}^{\psi(n)},k_1,\ldots,k_d,\varphi(n)+2)\\ =&\zeta(2,\{1\}^{\psi(n)},k_1,\ldots,k_d)_{0,1}+\zeta(2,\{1\}^{\psi(n)},k_1,\ldots,k_d,\varphi(n)+2)_{0,1} \end{align*} and \begin{align*} &\zeta(2,\{1\}^{\psi(n)},k_1,\ldots,k_d,\varphi(n)+2)_{0,1}\\ <&\zeta(2,\{1\}^{\psi(n)},k_1,\ldots,k_d)\zeta(\varphi(n)+2)_{0,1}\\ <&\zeta(2,\{1\}^{\psi(n)})\zeta(\varphi(n)+2)_{0,1}\\ =&\zeta(\psi(n)+2)\zeta(\varphi(n)+2)_{0,1}\longrightarrow 0\quad (n\rightarrow \infty), \end{align*} we obtain $$\lim_{n\rightarrow \infty}\zeta(2,\{1\}^{\psi(n)},k_1,\ldots,k_d,\varphi(n)+2)=\lim\limits_{n\rightarrow\infty}\zeta(2,\{1\}^{\psi(n)},k_1,\ldots,k_d)_{0,1}.$$ Using the duality formula \eqref{Eq:DoubleTail-Duality}, we get $$\|f\|\geqslant \lim\limits_{n\rightarrow\infty} \zeta(\overline{\mathbf{k}},\psi(n)+2)_{1,0}=\zeta(\overline{\mathbf{k}})_{1,1}>0,$$ where $\overline{\mathbf{k}}$ is the dual index of $(k_1,\ldots,k_d)$. On the other hand, by Lemmas \ref{Lem:Del-One-Big} and \ref{Lem:Del-Depth-Big}, we have $$0\prec\zeta[\mathbf{k}(n);r(n):q)\prec\zeta[2,\{1\}^{\psi(n)+1}:q]=\zeta[\psi(n)+3:q].$$ Since for a fixed $q\in (0,1)$, it holds \begin{align*} \zeta[\psi(n)+3:q]=&q^{\psi(n)+2}+\sum\limits_{m=2}^\infty\frac{q^{(\psi(n)+2)m}}{[m:q]^{\psi(n)+3}}\\ <&q^{\psi(n)+2}+\sum\limits_{m=2}^\infty\frac{1}{m^{\psi(n)+3}}\rightarrow 0,\quad (n\rightarrow \infty), \end{align*} we find $f(q)=0$ for any $q\in (0,1)$ by Lemma \ref{lem:Uniform-Convergence}. Hence the sequence $(\zeta[\mathbf{k}(n);r(n):q))_{n\in\mathbb{N}}$ does not converge in $\mathbf{B}(0,1)$ provided that $((\mathbf{k}(n),r(n)))_{n\in\mathbb{N}}$ is one of the types \eqref{Eq:Type-2}-\eqref{Eq:Type-5}. In summary, we have $$\mathcal{QZZ}^{(1)}=\{\zeta[\mathbf{k}:q]_1\mid \mathbf{k}\text{\;is admissible}\}\cup \{0\}.$$ Now we compute $\mathcal{QZZ}^{(2)}$.
Since $$\zeta[3,\{1\}^n:q]_1\prec\zeta[3,\{1\}^n:q]=\zeta[n+2,1:q],$$ we have $$\lim\limits_{n\rightarrow\infty}\|\zeta[3,\{1\}^n:q]_1\|=0.$$ Hence $0\in \mathcal{QZZ}^{(2)}$. Conversely, for any $f\in\mathcal{QZZ}^{(2)}$, there exists a sequence $(\zeta[\mathbf{k}(n):q]_1)_{n\in\mathbb{N}}$ such that $\mathbf{k}(n)$ is admissible and $$\lim\limits_{n\rightarrow \infty}\zeta[\mathbf{k}(n):q]_1=f.$$ We may assume that $f\neq 0$ and for any $n_1\neq n_2$, $\zeta[\mathbf{k}(n_1):q]_1\neq \zeta[\mathbf{k}(n_2):q]_1$. By Theorem \ref{Thm:qMZVTail-Classification}, the sequence $(\mathbf{k}(n))_{n\in\mathbb{N}}$ has a subsequence of the type \eqref{Eq:qType-1} or of the type \eqref{Eq:qType-2}. Without loss of generality, we may assume that the sequence $(\mathbf{k}(n))_{n\in\mathbb{N}}$ itself is of the type \eqref{Eq:qType-1} or of the type \eqref{Eq:qType-2}. On the one hand, using Lemma \ref{Lem:Compare-DoubleTail} and the duality formula \eqref{Eq:DoubleTail-Duality}, we have $$\|\zeta[\mathbf{k}(n):q]_1\|\geqslant \zeta(\mathbf{k}(n))_{0,1}\geqslant \zeta(2,\{1\}^{\psi(n)},k_1,\ldots,k_d)_{0,1}=\zeta(\overline{\mathbf{k}},\psi(n)+2)_{1,0},$$ where $\overline{\mathbf{k}}$ is the dual index of $(k_1,\ldots,k_d)$. Here if $(\mathbf{k}(n))_{n\in\mathbb{N}}$ is of the type \eqref{Eq:qType-1}, we take $(k_1,\ldots,k_d)$ to be any fixed admissible multi-index. Therefore, we find $$\|f\|=\lim\limits_{n\rightarrow\infty}\|\zeta[\mathbf{k}(n):q]_1\|\geqslant \lim\limits_{n\rightarrow\infty}\zeta(\overline{\mathbf{k}},\psi(n)+2)_{1,0}=\zeta(\overline{\mathbf{k}})_{1,1}>0.$$ On the other hand, using Lemmas \ref{Lem:Del-One-Big} and \ref{Lem:Del-Depth-Big}, we have $$0\prec \zeta[\mathbf{k}(n):q]_1\prec \zeta[\mathbf{k}(n):q]\preccurlyeq \zeta[2,\{1\}^{\psi(n)}:q]=\zeta[\psi(n)+2:q].$$ As shown above, for any fixed $q\in (0,1)$, it holds
$$\lim\limits_{n\rightarrow \infty}\zeta[\psi(n)+2:q]=0.$$ Therefore $f(q)=0$ for any $q\in (0,1)$ by Lemma \ref{lem:Uniform-Convergence}. Hence $(\zeta[\mathbf{k}(n):q]_1)_{n\in\mathbb{N}}$ does not converge in $\mathbf{B}(0,1)$. Finally, we get $\mathcal{QZZ}^{(2)}=\{0\}$, and Theorem \ref{Thm:Derived-Set} is proved. \qed
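As a closing illustration, the double-tail duality \eqref{Eq:DoubleTail-Duality}, which was used repeatedly in the proof above, can also be checked numerically. The sketch below is illustrative only; the truncation level and the choice $\mathbf{k}=(2,1)$, whose dual is $\overline{\mathbf{k}}=(3)$, are ours.

```python
from math import comb

def tail_depth1(k1, p, n, M=4000):
    """Truncated zeta(k1)_{p,n} = sum_{m>n} binom(m+p, p)^{-1} m^{-k1}."""
    return sum(1.0 / (comb(m + p, p) * m**k1) for m in range(n + 1, M + 1))

def tail_depth2(k1, k2, p, n, M=4000):
    """Truncated zeta(k1,k2)_{p,n} = sum over m1 > m2 > n of
    binom(m1+p, p)^{-1} m1^{-k1} m2^{-k2}, computed with a running inner sum."""
    total, inner = 0.0, 0.0   # inner = sum of m2^{-k2} over n < m2 < m1
    for m1 in range(n + 1, M + 1):
        total += inner / (comb(m1 + p, p) * m1**k1)
        inner += 1.0 / m1**k2
    return total

# Duality: zeta(2,1)_{p,n} = zeta(3)_{n,p}; here p = n = 1, so both sides match.
lhs = tail_depth2(2, 1, 1, 1)
rhs = tail_depth1(3, 1, 1)
print(lhs, rhs)
```

The two truncated sums agree to well within the truncation error, as the duality formula predicts.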
\section{Introduction} \begin{definition} A prime number is a natural number greater than 1 that is divisible only by 1 and itself. Hence a prime number has exactly two factors. \end{definition} In this paper we present a multivariate polynomial function, termed the factor elimination function, which is intended to generate all the prime numbers occurring on the number line. We call it factor elimination because it generates a number whose divisibility by most prime factors is ruled out; small or large numbers can be generated from the function depending upon the factors and values chosen. For the generated number, some absolute conditions for primality are given. Probabilistic conditions explain when the image of the function cannot be prime, and with what probability it can be. The reason behind the various categories of prime numbers is also explained by this function. There are two cases: one is assured prime number generation, where no primality test is needed; the second requires a primality test to be passed and hence has a definite probability associated with it. \section{Generalized Proof of Euclid's Theorem} \begin{theorem}For any finite set of prime numbers, there exists a prime number not in that set. \end{theorem} \begin{coro} There are infinitely many prime numbers. \end{coro} \begin{coro} There is no largest prime number. \end{coro} \begin{proof} Let us consider a set S of prime numbers, partitioned into two disjoint sets of prime numbers A and B: \begin{align} A& = \left\{ {P_1, P_2, P_3,...,P_{i-1},P_i,P_{i+1},...,P_{n-1},P_n}\right\} \\ B& = \left\{ {O_1, O_2, O_3,...,O_{j-1},O_j,O_{j+1},...,O_{m-1},O_m}\right\} \end{align} where $S=A\cup B$, $A\cap B = \emptyset$, and $n$ and $m$ are positive integers. Consider the following mathematical operation, defined as R: \begin{align} R& = (P_1\times P_2\times ...\times P_i\times ...
\times P_n) \pm (O_1\times O_2\times ...\times O_j\times ...\times O_m) \end{align} Let us now assume that $P_i$ is a prime factor of R, and let W = $R\div P_i$. Then W should clearly be an integer, \begin{align} W&= [(P_1\times ...\times P_i\times ... \times P_n) \pm (O_1\times ...\times O_j\times ...\times O_m)]\div P_i \\ W&= (P_1\times ...\times P_{i-1}\times P_{i+1}\times ...\times P_n) \pm (O_1\times ...\times O_j\times ...\times O_m)\div P_i \end{align} But the term $(O_1\times ...\times O_j\times ...\times O_m)\div P_i$ can never be a whole number, because $P_i$ does not belong to set B and a prime number cannot be a factor of any other prime number. Hence, by contradiction, it is proved that $P_i$ can never be a prime factor of R. Similarly, let us consider each prime number $P_i$ raised to the power $a_i$ and each $O_j$ raised to the power $b_j$. Then the resultant is: \begin{align} R& = (P^{a_1}_1\times ...\times P^{a_i}_i\times ... \times P^{a_n}_n) \pm (O^{b_1}_1\times ...\times O^{b_j}_j\times ...\times O^{b_m}_m) \end{align} Again, let us assume that $P_i$ is a prime factor of R and let W = $R\div P_i$. Then W should be an integer, which means \begin{align} W&= [(P^{a_1}_1\times ...\times P^{a_i}_i\times ... \times P^{a_n}_n) \pm (O^{b_1}_1\times ...\times O^{b_j}_j\times ...\times O^{b_m}_m)]\div P_i \\ W&= (P^{a_1}_1\times ...\times P^{a_i-1}_i\times ...\times P^{a_n}_n) \pm (O^{b_1}_1\times ...\times O^{b_j}_j\times ...\times O^{b_m}_m)\div P_i \end{align} The term $(O^{b_1}_1\times ...\times O^{b_j}_j\times ...\times O^{b_m}_m)\div P_i$ can never be a whole number, because $P_i$ does not belong to set B. Thus we have similarly proved, by contradiction, that $P_i$ can never be a prime factor of R. This proof by contradiction shows that at least one additional prime number exists that does not belong to set S.
\end{proof} An important point to consider here is that if either A or B is an empty set, then 1 must be taken as the only element of that set, e.g.\ $A=\left\{2,3\right\}$ and $B=\left\{1\right\}$. \section{Theory of Factor Elimination} For two distinct sets A and B consisting of prime numbers, let P be the largest prime number in either of the sets, with \begin{align} A\cap B &= \emptyset \\ A\cup B &= S \end{align} Let $x_i\in A$ and $y_j\in B$; then consider \begin{align} R&= \mid\Pi x^{a_i}_i \pm \Pi y^{b_j}_j\mid \end{align} We name this function the factor elimination function. We generate a pair of resultants: $R^+$ by addition and $R^-$ by subtraction. The probability that R is prime is very high, and it must be prime if \begin{align} \sqrt{R}&\le P \end{align} In practice, however, it is very difficult to verify $\sqrt{R}\le P$, and highly efficient algorithms are required. In that case, we can rely on the probability that R is prime, which can be assessed by a primality test such as the Rabin-Miller probabilistic primality test [2]. It is well known that if R has a prime factor other than R itself, then at least one of its prime factors must be less than or equal to $\sqrt{R}$ [3]. \section{Probability for being a Prime Number} Any prime number that is less than or equal to P cannot be a factor of R. If $\sqrt{R}>P$ and $C = \sqrt{R}$, let N denote the number of primes lying between P and C, which are the possible prime factors of R. Using the prime counting function [4], the number of primes capable of dividing R is given by \begin{align} N=\pi(C)- \pi(P) \end{align} where N represents the exact number of primes that exist between P and C. Alternatively, we can use a rough approximation of N, denoted $N^{\circ}$, obtained from the Prime Number Theorem [5].
\begin{align} N^{\circ} = \frac{C}{\ln C}-\frac{P}{\ln P} \end{align} As the value of R increases, the value of C increases correspondingly, and the gap between P and C on the number line widens, as a result of which the value of N also increases. This implies that the probability that some prime divides R increases. Conversely, the closer R is to $P^2$, the higher the probability that R is a prime number. The total number of primes not greater than $C=\sqrt{R}$ is $\pi(C)$, and N represents the primes which can be possible factors of R. Define the event X as the divisibility of R by one of these N primes. Then \begin{align} P(X)=\frac{N}{\pi(C)}= \frac{\pi(C)- \pi(P)}{\pi(C)}= 1- \frac{\pi(P)}{\pi(C)} \end{align} \begin{align} 1-P(X)&= \frac{\pi(P)}{\pi(C)} \end{align} where $1-P(X)$ represents the probability that R is prime; in the case $P\ll C$ this probability tends to zero. Now suppose we do not choose some of the primes between 1 and P, and let T be the set of such prime numbers. Let R(P), the residual prime function, count the number of primes in T. The above equation (16) can then be written as \begin{align} 1-P(X)&= \frac{\pi(P)-R(P)}{\pi(C)} \end{align} For example, \begin{align} A&=\left\{2,3\right\}, B=\left\{1\right\} \\ R&= 2^2 \times 3^2-1 = 36 -1 = 35 \\ C&\approx 5.916, \pi(C)=3, \pi(P)=2, R(P)=1 \end{align} Let 5 be an element of the set T. We must then exclude those values of R for which the sum or difference of the last digits of the two products is divisible by 5, with an exception to accommodate R=5 itself. For instance, in the example above the last digits of 36 and 1 are 6 and 1, and $6-1=5$, so R=35 is divisible by 5. \\The value of R(P) is very important for prime number generation algorithms and should be kept to a minimum. Additionally, the set T should consist of larger prime numbers only.
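The worked example above can be reproduced in code. The following sketch is illustrative only: the sieve-based $\pi$ is a simple stand-in for the prime counting function, and the helper names are ours.

```python
from math import isqrt

def primes_upto(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, flag in enumerate(sieve) if flag]

def pi(x):
    """Prime counting function pi(x)."""
    return len(primes_upto(int(x)))

# Example from the text: A = {2, 3} with a_1 = a_2 = 2, B = {1}, P = 3
R = 2**2 * 3**2 - 1        # = 35
C = R**0.5                 # ~ 5.916
print(R, pi(C), pi(3))     # pi(C) = 3 and pi(P) = 2, so N = 1
# No prime <= P can divide R; the only candidate divisor in (P, C] is 5:
candidates = [p for p in primes_upto(int(C)) if p > 3]
print(candidates, R % 5)   # [5], and indeed 35 = 5 * 7
```

This matches the computation in the text: $N=\pi(C)-\pi(P)=1$, and the single candidate prime 5 does divide $R=35$.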
\section{Twin Prime Conjecture} \begin{definition} A twin prime is a prime number that differs from another prime number by two. \end{definition} \begin{conjecture} There Are Infinitely Many Prime Twins. \end{conjecture} Considering large pair of twin primes, we can say that the probability of getting a twin prime is square of the probability that we get a prime at $P\ll$C. \begin{align} R&= \mid\Pi x^{a_i}_i \pm 1\mid \end{align} The important thing to notice here is that the probability of getting a pair of resultant as prime is equal in this case. \section{Classification of Prime Numbers} This factor elimination function is also useful for giving a general mathematical form for most of the various classifications of prime numbers given till date. This is presented in a tabular form given below. \begin{center} \begin{longtable}{|p{2cm}|p{2.6cm}|p{2.3cm}|p{4cm}|} \caption[Categorization of Prime Numbers]{Categorization of Prime Numbers} \label{grid_mlmmh} \\ \hline \multicolumn{1}{|c|}{\textbf{\parbox{2cm}{Prime\\Category}}} & \multicolumn{1}{c|}{\textbf{Initial Form}} & \multicolumn{1}{c|}{\textbf{Simplified Form}} & \multicolumn{1}{c|}{\textbf{\parbox{4cm}{Factor Elimination\\Criteria}}} \\ \hline \endfirsthead \multicolumn{3}{c}% {{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\ \hline\multicolumn{1}{|c|}{\textbf{\parbox{2cm}{Prime\\Category}}} & \multicolumn{1}{c|}{\textbf{Initial Form}} & \multicolumn{1}{c|}{\textbf{Simplified Form}} & \multicolumn{1}{c|}{\textbf{\parbox{4cm}{Factor Elimination\\Criteria}}} \\ \hline \endhead \hline \multicolumn{4}{|r|}{{Continued on next page}} \\ \hline \endfoot \hline \hline \endlastfoot Carol primes & $(2^n$-$ 1)^2$ -– 2 & $(2^n$-$ 1)^2$ -– 2 & $R^+$, A = $\left\{2^n-1\right\}$, B=$\left\{2\right\}$, $a_1$= 2 \\ \hline Centered decagonal primes & $5(n^2$-$ n)$ + 1 & 5n(n - 1) + 1 & $R^+$, A = $\left\{5,n,n-1\right\}$, B=$\left\{1\right\}$ \\ \hline Centered heptagonal primes & $\frac{(7n^2 - 7n + 
2)}{2}$ & 7n(n-1)$\div$ 2 + 1 &$R^+$, A=\{7,n/2,n-1\} if n is even, A=\{7,n,(n-1)/2\} if n is odd, B=\{1\} \\ \hline Centered square primes & $n^2$ + $(n+1)^2$ & 2n(n+1)+1 & $R^+$, A=\{2,n,n+1\}, B=\{1\} \\ \hline Centered triangular primes & $\frac{(3n^2 + 3n + 2)}{2}$ & 3n(n+1)/2 + 1 & $R^+$, A=\{3,n/2,n+1\} if n is even else A=\{3,n,(n+1)/2\} if n is odd \\ \hline \parbox{2cm}{Cuban primes\\(Case I)} & $\frac{m^3-n^3}{m-n}$, m = n+1 & 3n(n+1)+1 & $R^+$, A=\{3,n,n+1\}, B=\{1\} \\ \hline \parbox{2cm}{Cuban primes\\(Case II)} & $\frac{m^3-n^3}{m-n}$, m = n+2 & 3n(n+2)+$2^2$ & $R^+$, A=\{3,n,n+2\}, B=\{2\}, $b_1$=2 \\ \hline Cullen primes & $n\times 2^n$ + 1 & $n\times 2^n$ + 1 & $R^+$, A=\{2,n\}, B=\{1\}, $a_1$=n \\ \hline Double factorial primes & $ n!! \pm 1 $ & $\Pi x^{a_i}_i \pm 1$ & R, A=S, B=\{1\}, $a_i\geq 2$ $\forall$ $P_i$ $\in$ S \\ \hline Double Mersenne primes & $2^{2^{p -− 1}}$ -– 1 & $2^{2^{p -− 1}}$ – 1 & $R^-$, A=\{2\}, B=\{1\}, $a_1= 2^{p-1}$ where p is some Prime \\ \hline Eisenstein primes without imaginary part & 3n-1 & 3n-1 & $R^-$, A=\{3,n\}, B=\{1\} \\ \hline Euclid primes & $p_n \texttt{\#} $ + 1 & $\Pi p_i $ + 1 & $R^+$, A=S, B=\{1\} \\ \hline Factorial primes & $ n! 
\pm 1 $ & $\Pi x^{a_i}_i \pm 1$ & R, A=S, B=\{1\}, $a_i\geq 2$ $\forall$ $P_i$ $\in$ S \\ \hline Fermat primes & $2^{2^n} + 1$ & $2^{2^n} + 1$ & $R^+$, A=\{2\}, B=\{1\}, $a_1= 2^n$, where n is a non-negative integer \\ \hline Fibonacci primes & $p_n \texttt{\#} $ + $P_m$ & $\Pi p_i $ + $P_m$ & $R^+$, A=S-\{$P_m$\}, B=\{$P_m$\}, where $P_m$ is the maximum prime \\ \hline Gaussian primes & $4n+3$ & $2^2 n+3$ & $R^+$, A=\{2,n\}, B=\{3\}, $a_1$=2 \\ \hline Generalized Fermat primes base 10 & $10^n + 1$ & $2^n 5^n+1$ & $R^+$, A=\{2,5\}, B=\{1\}, $a_1=a_2=n$ \\ \hline Kynea primes & $(2^n + 1)^2 - 2$ & $(2^n + 1)^2 - 2$ & $R^-$, A = \{$2^n$+1\}, B=\{2\}, $a_1$ = 2 \\ \hline Leyland primes & $m^n + n^m$ & $m^n + n^m$ & $R^+$, A=\{n\}, B=\{m\}, $a_1=m$, $b_1=n$ \\ \hline Mersenne primes & $2^p - 1$ & $2^p - 1$ & $R^-$, A=\{2\}, B=\{1\}, $a_1=p$, where p is some prime \\ \hline Odd primes & $2n - 1$ & $2n - 1$ & $R^-$, A=\{2,n\}, B=\{1\} \\ \hline Palindromic wing primes & $\frac{a(10^m-1)}{9}\pm b\cdot 10^{m/2}$ & $\frac{a(10^m-1)}{9}\pm b\cdot 10^{m/2}$ & R, A=\{$\frac{a(10^m-1)}{9}$\}, B=\{b,$10^{m/2}$\} \\ \hline Pierpont primes & $2^u 3^v + 1$ & $2^u 3^v + 1$ & $R^+$, A=\{2,3\}, B=\{1\}, $a_1$=u, $a_2$=v \\ \hline Primes of the form $n^4 + 1$ & $n^4 + 1$ & $n^4 + 1$ & $R^+$, A=\{n\}, B=\{1\}, $a_1$=4 \\ \hline Primorial primes & $p_n \texttt{\#} \pm 1 $ & $ \Pi p_i \pm 1$ & R, A=S, B=\{1\} \\ \hline Proth primes & $k \times 2^n + 1$, with odd k and $k < 2^n$ & $k \times 2^n + 1$ & $R^+$, A=\{2,k\}, B=\{1\}, $a_1$=n, with odd k and $k < 2^n$ \\ \hline Pythagorean primes & $4n + 1$ & $2^2 n+1$ & $R^+$, A=\{2,n\}, B=\{1\}, $a_1$=2 \\ \hline Quartan primes & $x^4 + y^4$ & $x^4 + y^4$ & $R^+$, A=\{x\}, B=\{y\}, $a_1= b_1=4$ \\ \hline Solinas primes & $2^a \pm 2^b \pm 1$ & $2^a \pm 2^b \pm 1$ & R, A=\{$2^a$\}, B=\{$2^b \pm 1$\} \\ \hline Star primes & $6n(n-1) + 1$ & $2 \times 3 \times n(n-1) + 1$ & $R^+$, A=\{2,3,n,n-1\}, B=\{1\} \\ \hline Thabit number primes & $3\times 2^n - 1$ & $3\times 2^n - 1$ & $R^-$, 
A=\{2,3\}, B=\{1\}, $a_1=n$ \\ \hline Woodall primes & $n\times 2^n - 1$ & $n\times 2^n - 1$ & $R^-$, A=\{2,n\}, B=\{1\}, $a_1=n$ \\ \hline \end{longtable} \end{center} \begin{table}[h] \caption{Categorizations which depend on the occurrence of more than one prime.} \centering \begin{tabular}{|p{3cm}|p{4cm}|p{4cm}|} \hline Property & Meaning & Factor Elimination Criteria\\ \hline Twin primes & (p, p+2) are both prime. & $R= \left|\Pi x^{a_i}_i \pm 1\right|$ \\ \hline Sexy primes & (p, p+6) are both prime & $R= \left|\Pi x^{a_i}_i \pm 3\right|$\\ \hline Sophie Germain prime & p and 2p+1 are both prime & R, A=\{2,p\}, B=\{1\} \\ \hline Safe prime & \parbox{4cm}{p and $(p-1)/2$ are both prime\\ Consider p = 2k+1} & R, A=\{2,k\}, B=\{1\} \\ \hline Prime triplets & \parbox{4cm}{(p, p+2, p+6) or (p, p+4, p+6) are all prime} & \parbox{4cm}{$R= \left|\Pi x^{a_i}_i \pm y\right|$, y = 1, 3, 5,\\where at least one of the resultants R is divisible by 3}\\ \hline Prime quadruplets & (p, p+2, p+6, p+8) are all prime & \parbox{4cm}{$R= \left|\Pi x^{a_i}_i \pm y\right|$, y = 1, 3, 5,\\where at least one of the resultants R is divisible by 3} \\ \hline Primes in residue classes & an + d for fixed a and d & $R^+$, A=\{a,n\}, B=\{d\} \\ \hline \end{tabular} \end{table} \section{Probability Comparison of a Simple Algorithm} Let us now compare the usefulness of this function with the simplest algorithms used for generating prime numbers for public-key encryption. Considering the factor elimination function \begin{align} R&= \left|\Pi x_i \pm \Pi y_j\right| \end{align} with $a_i$ and $b_j$ set to unity, an algorithm was created that can be briefly described as follows: \\ \\ 1. Two random lists were created from all possible combinations of the elements of the sets A and B. \\ 2. All the elements of A were multiplied together; similarly, all the elements of B were multiplied together. These were saved as two separate results. \\ 3. The two results were either subtracted or added, and the absolute value was taken. 
\\ 4. The final resultants were verified by a primality test. \\ \\ The probability of generating an n-bit prime with this method was high compared to earlier methods. In the previous implementation the probability that a number is prime is $\frac{1}{\ln(2^n)}$ [7]. All possible combinations for a consecutive list of prime numbers of size L were analyzed. As the value of L was increased, the total number of combinations and the total number of prime numbers thus generated were calculated. The data obtained are tabulated below: \begin{table}[h] \centering \caption{Primes Distribution} \begin{tabular}{|p{3cm}|p{3cm}|p{2.5cm}|p{2.5cm}|} \hline Total Primes (P) & Combinations (C) & $\log_2(C)$ & Ratio ($\frac{P}{C}$) \\ \hline 1 & 1 & 0 & 1 \\ \hline 7 & 4 & 2 & 1.75 \\ \hline 25 & 16 & 4 & 1.563 \\ \hline 79 & 64 & 6 & 1.234 \\ \hline 256 & 256 & 8 & 1 \\ \hline 887 & 1024 & 10 & 0.866 \\ \hline 2808 & 4096 & 12 & 0.686 \\ \hline 10405 & 16384 & 14 & 0.635 \\ \hline 34450 & 65536 & 16 & 0.526 \\ \hline 120504 & 262144 & 18 & 0.46 \\ \hline 418223 & 1048576 & 20 & 0.399 \\ \hline 1597836 & 4194304 & 22 & 0.381 \\ \hline 5926266 & $1.70\times 10^7 $ & 24 & 0.353 \\ \hline $2.10\times10^7$ & $6.70\times 10^7$ & 26 & 0.318 \\ \hline $7.70 \times 10^7$ & $2.70 \times 10^8$ & 28 & 0.288 \\ \hline \end{tabular}% \label{tab:addlabel}% \end{table}% For instance, consider a prime list of length L = 25. It can then generate prime numbers in the range of 42 to 119 bits in length. The probability of generating a prime of even the minimum bit length would be expected to be $\frac{1}{\ln(2^{42})}$, which is approximately equal to 0.0343, whereas the obtained probability was 0.3532. 
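The four steps above can be sketched in code. The helper names below are hypothetical; trial division stands in for a production-grade primality test in step 4, exhaustive enumeration of partitions replaces the random list generation of step 1, and all exponents $a_i$, $b_j$ are taken as unity, matching the text:

```python
from itertools import combinations
from math import prod

def is_prime(n):
    """Trial-division primality test (a simple stand-in for Miller-Rabin)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def factor_elimination_candidates(primes):
    """Enumerate R = |prod(A) - prod(B)| and prod(A) + prod(B) over every
    partition of `primes` into two non-empty sets A and B (steps 1-3),
    keeping only the candidates that pass the primality test (step 4)."""
    found = set()
    idx = range(len(primes))
    for r in range(1, len(primes)):
        for a_idx in combinations(idx, r):
            prod_a = prod(primes[i] for i in a_idx)
            prod_b = prod(primes[i] for i in idx if i not in a_idx)
            for cand in (abs(prod_a - prod_b), prod_a + prod_b):
                if is_prime(cand):
                    found.add(cand)
    return sorted(found)

print(factor_elimination_candidates([2, 3, 5, 7]))
```

Since A and B partition the prime list, no prime in the list divides R; this is the factor elimination that raises the hit rate above the $\frac{1}{\ln(2^n)}$ baseline.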
\begin{table}[htbp] \centering \caption{Probability Comparison} \begin{tabular}{|r|r|r|r|r|r|} \hline List Length (L) & Max Bit (g) & Min Bit (h) & $\frac{1}{\ln(2^g)}$ & $\frac{1}{\ln(2^h)}$ & Probability ($\frac{P}{C}$) \\ \hline 11 & 38 & 11 & 0.038 & 0.1312 & 0.8662 \\ \hline 13 & 49 & 17 & 0.0294 & 0.0849 & 0.6855 \\ \hline 15 & 59 & 21 & 0.0245 & 0.0687 & 0.6351 \\ \hline 17 & 70 & 22 & 0.0206 & 0.0656 & 0.5257 \\ \hline 19 & 81 & 31 & 0.0178 & 0.0465 & 0.4597 \\ \hline 21 & 94 & 32 & 0.0153 & 0.0451 & 0.3988 \\ \hline 23 & 104 & 40 & 0.0139 & 0.0361 & 0.381 \\ \hline 25 & 119 & 42 & 0.0121 & 0.0343 & 0.3532 \\ \hline 27 & 126 & 52 & 0.0114 & 0.0277 & 0.3178 \\ \hline 29 & 143 & 55 & 0.0101 & 0.0262 & 0.2877 \\ \hline \end{tabular}% \label{tab:problabel}% \end{table}% \begin{figure} \centering \includegraphics[width=0.7\linewidth]{graph.png} \caption{Prime Probability Distribution} \label{fig:graph} \end{figure} \section{Conclusion} The factor elimination function is capable of generating all prime numbers and can be used as a powerful tool for developing highly efficient prime-number-generating algorithms. It is a multivariate polynomial function, because every integer on the number line can be represented as the sum or difference of two integers, and these integers can be written as products of their factors. The generated numbers do not follow a regular pattern or sequence under the given probabilistic condition for being prime. With the help of the prime counting function, we can explain this finite probability. We have also shown that most of the categorizations of prime numbers discovered to date follow the factor elimination function in some form. \section{Future Scope} We have demonstrated the application of the factor elimination function in generating large prime numbers for encryption. Much further research in number theory and prime numbers can be carried out with the help of this method. 
\section{References} [1] James Williamson (translator and commentator), The Elements of Euclid, With Dissertations, Clarendon Press, Oxford, 1782, page 63. \newline[2] Rabin, Michael O. (1980), ``Probabilistic algorithm for testing primality'', Journal of Number Theory 12 (1): 128--138, doi:10.1016/0022-314X(80)90084-0. \newline[3] Crandall, Richard; Pomerance, Carl (2005), Prime Numbers: A Computational Perspective (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-25282-7. \newline[4] Bach, Eric; Shallit, Jeffrey (1996), Algorithmic Number Theory, MIT Press, volume 1, page 234, section 8.8, ISBN 0-262-02405-5. \newline[5] N. Costa Pereira (August--September 1985), ``A Short Proof of Chebyshev's Theorem'', American Mathematical Monthly 92 (7): 494--495, doi:10.2307/2322510, JSTOR 2322510. \newline[6] Arenstorf, R. F., ``There Are Infinitely Many Prime Twins'', 26 May 2004, http://arxiv.org/abs/math.NT/0405509. \newline [7] Hoffman, Paul (1998), The Man Who Loved Only Numbers, Hyperion, p. 227, ISBN 0-7868-8406-1. \end{document}
\section{Introduction} Despite significant progress in recent years, autonomy in traffic still remains a challenging problem. Within this area, single-lane car following is arguably a fundamental task tackled by automotive manufacturers. It is widely known that human car-following is inherently sub-optimal. Historically, Sugiyama and Tadaki have demonstrated in~\cite{sugiyama2008traffic,tadaki2013phase} that stop-and-go waves could emerge in a circular platoon of human-driven vehicles without the presence of physical bottlenecks or lane changes. First coined in~\cite{swaroop1996string}, this unstable oscillatory behavior is referred to as \textit{string instability}. To improve on human car-following, various \textit{local} controllers have been proposed~\cite{martinez2007safe,kesting2008adaptive,luo2010model,kerner2018physics,stern2018dissipation}. For example, Luo has described a model predictive control (MPC) formulation for fuel-efficient, fixed-headway adaptive cruise control~\cite{luo2010model}. Kerner et al. have presented a variable-headway controller that can attenuate the propagation of stop-and-go waves~\cite{kerner2018physics}. In addition, Stern et al. have demonstrated that a single vehicle controlled by a proportional-integral controller is able to dampen stop-and-go waves induced by human drivers in a circular platoon~\cite{stern2018dissipation}. A common feature of the above controllers is that they achieve smoother, more efficient driving using only local observations. Beyond local methods, various \textit{non-local} controllers have been proposed in an effort to attenuate a wider spectrum of oscillations, such as~\cite{asadi2010predictive,bu2010design,schmied2015nonlinear,lelouvier2017eco,jia2018energy}. The key advantage of non-local controllers lies in the fact that, with non-local knowledge of downstream traffic, they can react in advance to dampen non-local oscillations. 
For instance, California PATH has described a cooperative adaptive cruise controller and demonstrated its string stability~\cite{bu2010design}. Schmied proposed a variable-headway nonlinear MPC method that can achieve smoother car-following by using information from downstream traffic lights~\cite{schmied2015nonlinear}. Furthermore, Jia has introduced three variations of Schmied's method with different performance trade-offs~\cite{jia2018energy}. While non-local methods hold more promise in performance, they are usually more expensive to implement and more vulnerable to estimation noise. In the present article, we design a novel non-local controller based on a hierarchical MPC scheme. In particular, we cast the control problem as a linearly constrained quadratic program (LCQP). Similar to~\cite{jia2018energy}, our method reformulates nonlinear spatiotemporal constraints as two linear space-headway constraints, but our method is more extensible and has a more accurate kinematic description. Unlike~\cite{schmied2015nonlinear,hyeon2019short}, which require extensive sensing and communication infrastructure, the proposed method only requires an estimated time of arrival (ETA) estimator for prediction, which is widely available from many mainstream map providers. We summarize the main contributions of the paper below. \begin{itemize} \item We propose a novel MPC controller for longitudinal car-following based on a linearly constrained quadratic program. \item We analyze properties of the proposed controller and demonstrate how it can be extended to handle advanced driving scenarios such as cut-ins and cut-outs. \item We test the performance and running time of the proposed controller with several emulated ETA estimators through numerical simulations. \end{itemize} We formally describe the two-vehicle car-following problem next. 
\section{Problem Statement} Consider a two-vehicle setup, where a considered vehicle follows a preceding vehicle that travels along a path of length $L$. Denote the initial and final times of the trip as $t_{0}$ and $t_{f}$, respectively. Let the state of a vehicle be $\vec{x} \coloneqq [s \quad v]^{\top}$, where $s$ and $v$ represent the position and speed of a vehicle, respectively. Define $a$ as the acceleration of a vehicle. Let $\ell$ be the length of a vehicle. We use a superscript to identify which vehicle a variable belongs to, where the considered vehicle is indexed as one and the preceding vehicle as zero. Thus, $\vec{x}^{1}$ refers to the state of the considered vehicle. Let the initial position of the considered vehicle be $w_{0}$, that is, $s^{1}(t_{0}) \coloneqq w_{0}$. Let the initial position of the preceding vehicle be $w_{1}$, that is, $s^{0}(t_{0}) \coloneqq w_{1}$. Let $w_{f} \coloneqq w_{1} + L$. Then, $s^{0}(t_{f}) = w_{f}$. The two-vehicle scenario is illustrated in Figure~\ref{fig:problem_setup}. \begin{figure}[h] \centering \begin{tikzpicture} \node[xscale=-1] (considered_vehicle) at (0.75, 0.5) {\includegraphics[width=0.075\textwidth]{figures/car_pictogram.eps}}; \node[xscale=-1] (preceding_vehicle) at (3.5, 0.5) {\includegraphics[width=0.075\textwidth]{figures/car_pictogram.eps}}; \draw[->, >=latex, line width=0.35mm] (0, 0) to (8, 0) node[right] {$s$}; \draw (1.5, 0.1) to (1.5, 0) node[below] {$w_{0}$}; \draw (4.25, 0.1) to (4.25, 0) node[below] {$w_{1}$}; \draw (5, 0.1) to (5, 0) node[below] {$w_{2}$}; \draw (5.75, 0) node[below] {$\dots$}; \draw (6.5, 0.1) to (6.5, 0) node[below] {$w_{f-1}$}; \draw (7.25, 0.1) to (7.25, 0) node[below] {$w_{f}$}; \draw (0.75, 0.9) node[above] {$\vec{x}^{1}, a^{1}, \ell^{1}$}; \draw (0.75, 1.5) node[above] {Consid. Veh.}; \draw (3.5, 0.9) node[above] {$\vec{x}^{0}, a^{0}, \ell^{0}$}; \draw (3.5, 1.5) node[above] {Preced. 
Veh.}; \draw[decoration={brace}, decorate] (4.25, 0.2) -- node[above=3pt] {$\Delta{s}$} (5, 0.2); \draw[decoration={brace,mirror,raise=5pt}, decorate] (4.25, -0.25) -- node[below=6pt] {$L$} (7.25, -0.25); \end{tikzpicture} \caption{Scenario of the two-vehicle car-following problem. At $t_{0}$, the considered vehicle is at $w_{0}$ and the preceding vehicle at $w_{1}$. The preceding vehicle travels through a set of way points $(w_{1}, w_{2}, \dots, w_{f})$, where each pair of consecutive way points is spaced $\Delta{s}$ apart.} \label{fig:problem_setup} \end{figure} For the considered vehicle to properly follow the preceding vehicle, we impose the following five requirements. First and foremost, to prevent collision, we constrain the space headway between the two vehicles to be greater than some minimal headway $h_{\min}(t): \mathbb{R} \rightarrow \mathbb{R}_{\geq 0}$. Next, to \textit{follow} the preceding vehicle, we constrain the space headway to be smaller than some maximum headway $h_{\max}(t): \mathbb{R} \rightarrow \mathbb{R}_{\geq 0}$. The third constraint requires the speed of the considered vehicle to fall within some speed limits $[v_{\min}, v_{\max}]$. Next, we constrain the acceleration of the considered vehicle to be bounded within some range $[a_{\min}, a_{\max}]$. Last but not least, among all trajectories that satisfy the constraints above, we select one that is the smoothest, as measured by the $\ell^{2}$-norm of the acceleration. We summarize the above specifications in Problem~\ref{prob:car_following}. 
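On sampled trajectories, the four constraints above can be checked pointwise; the helper below is a minimal illustrative sketch (its name and list-based signature are assumptions, not part of the formulation):

```python
def satisfies_constraints(s0, s1, v1, a1, h_min, h_max, v_lim, a_lim):
    """Pointwise check of the four car-following constraints.

    s0, s1: sampled positions of the preceding / considered vehicle;
    v1, a1: sampled speed and acceleration of the considered vehicle;
    h_min, h_max: headway bounds per time step;
    v_lim, a_lim: (min, max) pairs for speed and acceleration.
    """
    for k in range(len(s1)):
        headway = s0[k] - s1[k]
        if not (h_min[k] <= headway <= h_max[k]):   # requirements 1) and 2)
            return False
        if not (v_lim[0] <= v1[k] <= v_lim[1]):     # requirement 3)
            return False
    return all(a_lim[0] <= a <= a_lim[1] for a in a1)  # requirement 4)
```

Among all trajectories passing this check, the fifth requirement selects the one of least acceleration in the $\ell^{2}$-norm.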
\begin{problem} Provided $s^{0}(t)$ for $t \in [t_{0}, t_{f}]$ with some finite $t_{f}$, determine a trajectory of least acceleration in $\ell^{2}$-norm for the considered vehicle $s^{1}(t)$ for $t \in [t_{0}, t_{f}]$ such that \textsl{1}) $s^{0}(t) - s^{1}(t) \geq h_{\min}(t)$ for all $t$, \textsl{2}) $s^{0}(t) - s^{1}(t) \leq h_{\max}(t)$ for all $t$, \textsl{3}) $v^{1}(t) \in [v_{\min}, v_{\max}]$ for all $t$, and \textsl{4}) $a^{1}(t) \in [a_{\min}, a_{\max}]$ for all $t$. \label{prob:car_following} \end{problem} With the problem stated above, we present the proposed controller in the next section. \section{Optimal Car-Following Controller} In this section, we present a hierarchical MPC scheme consisting of \textsl{1}) a prediction layer, \textsl{2}) a planning layer, and \textsl{3}) a tracking layer. The layered controller design is illustrated in Figure~\ref{fig:controller_design}. We present the details of the three layers below. \begin{figure}[h] \centering \begin{tikzpicture} \node[draw, minimum width=4cm, minimum height=0.5cm, anchor=south west, text width=4cm, align=center] (predictor) at (0, 0) {Prediction Layer}; \node[draw, minimum width=4cm, minimum height=0.5cm, anchor=south west, text width=4cm, align=center] (planner) at (0, -1.1) {Planning Layer}; \node[draw, minimum width=4cm, minimum height=0.5cm, anchor=south west, text width=4cm, align=center] (controller) at (0, -2.2) {Tracking Layer}; \node[draw, minimum width=4cm, minimum height=0.5cm, anchor=south west, text width=4cm, align=center] (plant) at (0, -3.3) {Plant}; \draw[->, >=latex] (2, 0) to node[left] {Predicted traj. $\hat{s}^{0}(t)$} (2, -0.55); \draw[->, >=latex] (2, -1.1) to node[left] {Planned accel. $\check{a}^{1}(t)$} (2, -1.65); \draw[->, >=latex] (2, -2.2) to node[left] {Actual accel. 
$a^{1}(t)$} (2, -2.75); \draw[->, >=latex] (2, -3.3) to (2, -4) to (5, -4) to node[right, text width=0.5cm] {$\vec{x}^{0}(t)$ $\vec{x}^{1}(t)$ $a^{0}(t)$ (10~Hz)} (5, -1.9) to (4.25, -1.9); \draw[->, >=latex] (5, -1.95) to node[right, text width=1.5cm] {Updated predictor state (1~Hz)} (5, 0.275) to (4.25, 0.275); \end{tikzpicture} \caption{Diagram of the controller design. The layered controller scheme is designed to perform in receding horizon: the prediction and planning layers are invoked every 1~second; the tracking layer is triggered every 0.1~second.} \label{fig:controller_design} \end{figure} \subsection{Prediction layer} The prediction layer predicts the trajectory of the preceding vehicle based on an ETA estimator. Define a way point every $\Delta{s}$ from $s^{0}(t)$ to $s^{0}(t) + l \cdot \Delta{s}$ at some time $t \in [t_{0}, t_{f}]$ and for some spatial receding horizon $l$. Collecting all way points into a sequence, we have $\mathcal{W}(t) \coloneqq (w_{1}(t), w_{2}(t), \dots, w_{l+1}(t)) = ( s^{0}(t) + i \cdot \Delta{s} \mid i = 0, 1, \dots, l )$. For convenience, we assume $w_{l+1} \leq w_{f}$. Define $\widehat{\mathcal{T}}^{0}(t)$ as the corresponding ETA set for the preceding vehicle, that is, $\widehat{\mathcal{T}}^{0}(t) \coloneqq (t_{1}^{0}(t), \hat{t}_{2}^{0}(t), \dots, \hat{t}_{l+1}^{0}(t))$, where $t_{i}^{0}, \hat{t}_{i}^{0}$ are the \textit{true} and \textit{estimated} arrival times of the preceding vehicle at the way point $w_{i} \in \mathcal{W}(t)$. For the problem to be well-posed, we require both true and estimated arrival time sequences to be strictly increasing. We assume there exists an ETA estimator\footnote{Common ETA service providers include Google Maps, Waze, INRIX, and Garmin.} $f: \mathcal{W}(t) \rightarrow \widehat{\mathcal{T}}^{0}(t)$, which estimates the \textit{arrival} time of the preceding vehicle at each way point in $\mathcal{W}(t)$. 
To model prediction errors, we assume that the relative estimation error is \textit{uniformly} distributed with mean one and radius $\sigma$, that is, \begin{equation*} \frac{\hat{t}_{i+1}^{0} - \hat{t}_{i}^{0}}{t_{i+1}^{0} - t_{i}^{0}} \sim \mathcal{U}(1-\sigma, 1+\sigma), \quad i = 1, 2, \dots, l, \end{equation*} where $\hat{t}_{1}^{0} = t_{1}^{0} = t$. We use $f_{\sigma}$ to denote that an ETA estimator $f$ has an error radius of $\sigma$. With $\mathcal{W}(t)$ and $\widehat{\mathcal{T}}^{0}(t) = f_{\sigma}(\mathcal{W}(t))$, we can then generate a \textit{predicted} trajectory for the preceding vehicle $\hat{s}^{0} : [t_{1}^{0}, \hat{t}_{l+1}^{0}] \rightarrow [s^{0}(t), s^{0}(t) + l \cdot \Delta{s}]$ using a standard interpolation method. \subsection{Planning layer} The planning layer plans a reference trajectory for the considered vehicle based on the predicted trajectory of the preceding vehicle. With $\hat{s}^{0}(t)$ predicted by $f_{\sigma}$, we can derive the following four trajectories: \textsl{1}) $\hat{s}_{t}^{-}(t) \coloneqq \hat{s}^{0}(t - \Delta{t}^{-})$: predicted envelope of some minimal time headway $\Delta{t}^{-}$, \textsl{2}) $\hat{s}_{t}^{+}(t) \coloneqq \hat{s}^{0}(t - \Delta{t}^{+})$: predicted envelope of some maximal time headway $\Delta{t}^{+}$, \textsl{3}) $\hat{s}_{s}^{-}(t) \coloneqq \hat{s}^{0}(t) - \Delta{s}^{-}$: predicted envelope of some minimal space headway $\Delta{s}^{-}$, and \textsl{4}) $\hat{s}_{s}^{+}(t) \coloneqq \hat{s}^{0}(t) - \Delta{s}^{+}$: predicted envelope of some maximal space headway $\Delta{s}^{+}$. The above four trajectories are illustrated in Figure~\ref{fig:envelope_design}. 
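The prediction layer above (way-point ETAs with uniform relative noise, followed by interpolation) can be emulated in a few lines. The constant-speed ground-truth trajectory inside the emulator is purely an assumption for illustration; a real $f_{\sigma}$ would query a map service, and the helper names are hypothetical:

```python
import random

def emulate_eta(waypoints, v0, sigma, rng):
    """Hypothetical ETA emulator f_sigma: ground-truth ETAs come from a
    constant-speed (v0) preceding vehicle, and each segment travel time is
    scaled by a factor drawn uniformly from [1 - sigma, 1 + sigma]."""
    true_eta = [(w - waypoints[0]) / v0 for w in waypoints]
    eta_hat = [true_eta[0]]
    for i in range(len(waypoints) - 1):
        seg = true_eta[i + 1] - true_eta[i]
        eta_hat.append(eta_hat[-1] + seg * rng.uniform(1 - sigma, 1 + sigma))
    return eta_hat

def predict_position(eta_hat, waypoints, t):
    """Piecewise-linear interpolation of the predicted trajectory s_hat^0
    from (estimated ETA, way point) pairs."""
    for i in range(len(eta_hat) - 1):
        if eta_hat[i] <= t <= eta_hat[i + 1]:
            frac = (t - eta_hat[i]) / (eta_hat[i + 1] - eta_hat[i])
            return waypoints[i] + frac * (waypoints[i + 1] - waypoints[i])
    raise ValueError("t outside the prediction horizon")
```

For $0 \leq \sigma < 1$ the estimated ETA sequence stays strictly increasing, as required for well-posedness.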
While many designs are possible, in this study we select the minimal and maximal headway envelopes as follows \begin{equation}\begin{aligned} \hat{s}_{\min}\left( t \mid \mathcal{W}, \widehat{\mathcal{T}}^{0} \right) &\coloneqq \max\left( \min\left( \hat{s}_{s}^{-}(t), \hat{s}_{t}^{-}(t) \right), \hat{s}_{s}^{+}(t) \right),\\ \hat{s}_{\max}\left( t \mid \mathcal{W}, \widehat{\mathcal{T}}^{0} \right) &\coloneqq \min\left( \max\left( \hat{s}_{s}^{+}(t), \hat{s}_{t}^{+}(t) \right), \hat{s}_{s}^{-}(t) \right). \end{aligned} \label{eq:headway_constraints} \end{equation} The area between $\hat{s}_{\min}(t)$ and $\hat{s}_{\max}(t)$ is shaded in green in Figure~\ref{fig:envelope_design}. \begin{figure}[h] \begin{tikzpicture} \begin{axis}[ axis lines=middle, axis line style={-Latex}, xmin=0, xmax=125, ymin=0, ymax=100, xlabel=$t$, ylabel=$s$, xtick={5,15,35}, xticklabels={}, ytick={62.5,75,80}, yticklabels={}, legend style={at={(1,0.125)},anchor=south east}, width=0.45\textwidth, height=0.325\textwidth, ] \addplot+[no marks,domain=5:65,samples=100,black,thick] {x + 15}; \addplot+[mark=square*,domain=5:65,samples=3,red,thick] {x + 10}; \addplot+[mark=triangle*,domain=15:75,samples=3,pink,thick,name path=lower] {(x-10) + 15}; \addplot+[mark=diamond*,domain=5:65,samples=3,cyan,thick,name path=upper] {x - 2.5}; \addplot+[mark=*,domain=35:95,samples=3,blue,thick] {(x-30) + 15}; \addplot[mark=none,black,dotted] coordinates {(5,0) (5,20)}; \addplot[mark=none,black,dotted] coordinates {(15,0) (15,20)}; \addplot[mark=none,black,dotted] coordinates {(35,0) (35,20)}; \addplot[mark=none,black,dotted] coordinates {(0,80) (65,80)}; \addplot[mark=none,black,dotted] coordinates {(0,75) (65,75)}; \addplot[mark=none,black,dotted] coordinates {(0,62.5) (65,62.5)}; \draw [<->] (5,3) -- node[above] {$\Delta{t}^{-}$} (15,3); \draw [<->] (5,15) -- node[above] {$\Delta{t}^{+}$} (35,15); \draw [<->] (3,75) -- node[right] {$\Delta{s}^{-}$} (3,80); \draw [<->] (20,62.5) -- node[right] 
{$\Delta{s}^{+}$} (20,80); \legend{$\hat{s}^{0}(t)$, $\hat{s}^{-}_{s}(t)$, $\hat{s}^{-}_{t}(t)$, $\hat{s}^{+}_{s}(t)$, $\hat{s}^{+}_{t}(t)$}; \addplot[olive,opacity=0.25] fill between[of=lower and upper, soft clip={domain=15:65}]; \end{axis} \end{tikzpicture} \caption{Headway envelope design. $\hat{s}^{0}(\cdot)$ is the predicted trajectory of the preceding vehicle. $\hat{s}^{-}_{s}(\cdot), \hat{s}^{+}_{s}(\cdot)$ are estimated minimum and maximum \textit{space} headway envelopes, respectively. $\hat{s}^{-}_{t}(\cdot), \hat{s}^{+}_{t}(\cdot)$ are estimated minimum and maximum \textit{time} headway envelopes, respectively. The shaded region represents the feasible set in space and time.} \label{fig:envelope_design} \end{figure} Let $m$ be the temporal planning horizon and $\Delta{t}_{p}$ be the temporal planning resolution. For all $i_{p} = 0, \dots, m$, denote $t_{i_{p}}^{p} \coloneqq t^{0}_{1} + i_{p}\Delta{t}_{p}$. We define $t_{m}^{p} \coloneqq \hat{t}_{l+1}^{0}$ and assume that $\hat{t}_{l+1}^{0} - t^{0}_{1}$ can be evenly divided by $\Delta{t}_{p}$. Because $\hat{t}_{l+1}^{0}$ changes with time, $m$ will be time-varying too, as $\Delta{t}_{p}$ is fixed. Define the planned states $\widecheck{\mathcal{X}}^{1p}$ and the planned accelerations $\widecheck{\mathcal{U}}^{1p}$ of the considered vehicle as follows: \begin{equation} \begin{aligned} \widecheck{\mathcal{X}}_{m}^{1p} &\coloneqq \{\check{\vec{x}}^{1}(t_{0}^{p}), \check{\vec{x}}^{1}(t_{1}^{p}), \dots, \check{\vec{x}}^{1}(t_{m}^{p})\},\\ \widecheck{\mathcal{U}}_{m-1}^{1p} &\coloneqq \{\check{a}^{1}(t_{0}^{p}), \check{a}^{1}(t_{1}^{p}), \dots, \check{a}^{1}(t_{m-1}^{p})\}. \end{aligned} \label{eq:decision_variables_planning} \end{equation} Here, we introduce the check accent $\check{x}$ to denote that a variable $x$ is a \textit{planned} decision variable. 
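The envelope selection in~\eqref{eq:headway_constraints} is a pointwise min/max combination and can be evaluated directly; a minimal sketch with the predicted trajectory passed as a callable (the helper name is hypothetical):

```python
def headway_envelopes(s_hat, t, dt_minus, dt_plus, ds_minus, ds_plus):
    """Evaluate the planning-layer envelopes at time t; s_hat is the
    interpolated predicted trajectory of the preceding vehicle. The
    follower is (softly) kept below s_min and above s_max."""
    s_t_minus = s_hat(t - dt_minus)   # minimal time-headway envelope
    s_t_plus = s_hat(t - dt_plus)     # maximal time-headway envelope
    s_s_minus = s_hat(t) - ds_minus   # minimal space-headway envelope
    s_s_plus = s_hat(t) - ds_plus     # maximal space-headway envelope
    s_min = max(min(s_s_minus, s_t_minus), s_s_plus)
    s_max = min(max(s_s_plus, s_t_plus), s_s_minus)
    return s_min, s_max

# Preceding vehicle at a constant 20 m/s; evaluate the band at t = 10 s
s_min, s_max = headway_envelopes(lambda t: 20.0 * t, 10.0, 1.0, 3.0, 5.0, 30.0)
# Here s_max = 170 <= s_min = 180: a nonempty band of follower positions
```

The resulting band $[\hat{s}_{\max}, \hat{s}_{\min}]$ of admissible follower positions is the shaded region of Figure~\ref{fig:envelope_design}.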
The planned states and accelerations may be solved via the following LCQP: \begin{mini!}|s|[2] {X} {\resizebox{.75\hsize}{!}{${\displaystyle \alpha \sum_{i_{p}=0}^{m-1} \left(\check{a}^{1}(t_{i_{p}}^{p})\right)^{2} + \beta \sum_{j_{p}=1}^{m} \left( \xi_{j_{p}}^{p} \right)^{2} + \gamma \sum_{j_{p}=1}^{m} \left( \zeta_{j_{p}}^{p} \right)^{2}}$} \label{eq:planning-lcqp-obj}} {\label{eq:planning-lcqp}} {} \addConstraint{\check{\vec{x}}^{1}(t_{0}^{p})}{ = \vec{x}^{1}_{0} \label{eq:planning-lcqp-con1}} \addConstraint{\check{\vec{x}}^{1}(t_{i_{p}+1}^{p})}{= \mat{A}_{i_{p}}^{p} \cdot \check{\vec{x}}^{1}(t_{i_{p}}^{p}) + \mat{B}_{i_{p}}^{p} \cdot \check{a}^{1}(t_{i_{p}}^{p}) \label{eq:planning-lcqp-con2}} \addConstraint{0 \leq \xi_{j_{p}}^{p}}{, \; \check{s}^{1}(t_{j_{p}}^{p}) - \hat{s}_{\min}\left( t_{j_{p}}^{p} \mid \mathcal{W}, \widehat{\mathcal{T}}^{0} \right) \leq \xi_{j_{p}}^{p} \label{eq:planning-lcqp-con3}} \addConstraint{0 \leq \zeta_{j_{p}}^{p}}{, \; -\check{s}^{1}(t_{j_{p}}^{p}) + \hat{s}_{\max}\left( t_{j_{p}}^{p} \mid \mathcal{W}, \widehat{\mathcal{T}}^{0} \right) \leq \zeta_{j_{p}}^{p} \label{eq:planning-lcqp-con4}} \addConstraint{v_{\min} \leq \check{v}^{1}(t_{j_{p}}^{p})}{\leq v_{\max} \label{eq:planning-lcqp-con5}} \addConstraint{a_{\min} \leq \check{a}^{1}(t_{i_{p}}^{p})}{\leq a_{\max}, \label{eq:planning-lcqp-con6}} \end{mini!} where \begin{equation*}\begin{aligned} X &\coloneqq \left\{ \widecheck{\mathcal{X}}_{m}^{1p},\widecheck{\mathcal{U}}_{m-1}^{1p}, \{ \xi_{j_{p}}^{p} \}_{1}^{m}, \{ \zeta_{j_{p}}^{p} \}_{1}^{m} \right\},\\ \mat{A}_{i_{p}}^{p} &\coloneqq e^{\mat{A}(t_{i_{p}+1}^{p} - t_{i_{p}}^{p})}, \quad \mat{A} = \left[[0 \; 0]^{\top} \: [1 \; 0]^{\top}\right],\\ \mat{B}_{i_{p}}^{p} &\coloneqq \int^{t_{i_{p}+1}^{p}}_{t_{i_{p}}^{p}} e^{\mat{A}(t_{i_{p}+1}^{p} - \tau)} d\tau \cdot \mat{B}, \quad \mat{B} = [0 \; 1]^{\top}, \end{aligned}\end{equation*} for $i_{p} = 0, \dots, m-1$ and $j_{p} = 1, \dots, m$. 
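For the double-integrator dynamics in constraint~\eqref{eq:planning-lcqp-con2}, $e^{\mat{A}\Delta t}$ and the input integral evaluate in closed form, so one exact zero-order-hold step is just constant-acceleration kinematics; a quick numerical sanity check (the helper name is illustrative):

```python
def zoh_step(s, v, a, dt):
    """One exact zero-order-hold step of the double integrator:
    [s, v] <- Ad [s, v]^T + Bd a, with Ad = [[1, dt], [0, 1]] = e^{A dt}
    and Bd = [dt^2/2, dt]^T, matching the planning-layer kinematic
    constraint x(t_{i+1}) = A_i x(t_i) + B_i a(t_i)."""
    return s + v * dt + 0.5 * a * dt * dt, v + a * dt

# Ten 0.1 s steps at a = 2 m/s^2 from (s, v) = (0 m, 10 m/s)
s, v = 0.0, 10.0
for _ in range(10):
    s, v = zoh_step(s, v, 2.0, 0.1)
# Matches s = v0*t + a*t^2/2 = 11 m and v = v0 + a*t = 12 m/s at t = 1 s
```

Because the discretization is exact, the planned trajectory incurs no integration error regardless of the planning resolution $\Delta{t}_{p}$.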
The receding horizon design is illustrated in the $t^{p}$ axis in Figure~\ref{fig:receding_horizon_spec}. We have $\mathcal{W}, \widehat{\mathcal{T}}^{0}$, $\alpha, \beta, \gamma$, and $\vec{x}_{0}^{1}$ as the inputs to the optimization program, where $0 < \alpha, \beta, \gamma \leq 1, \alpha + \beta + \gamma = 1$, and $\vec{x}_{0}^{1}$ is the initial state of the considered vehicle. In~\eqref{eq:planning-lcqp-obj}, the first term penalizes large accelerations, while the second and third terms, coupled with~\eqref{eq:planning-lcqp-con3} and~\eqref{eq:planning-lcqp-con4}, regulate the vehicle inside the admissible space headway ranges. Constraints~\eqref{eq:planning-lcqp-con1} and~\eqref{eq:planning-lcqp-con2} impose kinematic constraints. Lastly, constraints~\eqref{eq:planning-lcqp-con5} and~\eqref{eq:planning-lcqp-con6} impose speed limits and actuation limits. \begin{proposition} If the feasible set is nonempty, problem~\eqref{eq:planning-lcqp} has a unique global optimum. \label{prop:opt} \end{proposition} \begin{proof} Because $\alpha, \beta, \gamma > 0$, the objective function~\eqref{eq:planning-lcqp-obj} is strongly convex. By construction, the feasible set is a polyhedron, which is also convex. Therefore, a direct application of Lemma 8.2 and Theorem 8.6 in \cite{calafiore2014optimization} reveals that if the feasible set is nonempty, problem~\eqref{eq:planning-lcqp} has a unique global optimum. \end{proof} \begin{proposition} For some initial time $t_{0}$, if the feasible set of problem~\eqref{eq:planning-lcqp} is nonempty, then its corresponding receding-horizon policy has a solution for all $t > t_{0}$. \label{prop:feasibility} \end{proposition} \begin{proof} To show that problem~\eqref{eq:planning-lcqp} is persistently feasible, it suffices to show that it has a solution at every future time step. Because the feasible set is initially nonempty, the problem has a solution at $t_{0}$. 
For every other time step $t > t_{0}$, it is easy to verify that zero acceleration is a feasible point due to the fact that constraints~\eqref{eq:planning-lcqp-con3} and~\eqref{eq:planning-lcqp-con4} are soft. Therefore, it follows that problem~\eqref{eq:planning-lcqp}, when repeatedly applied in receding horizon, always stays feasible. \end{proof} \begin{remark} When the initial velocity is too high or too low, the feasible set of~\eqref{eq:planning-lcqp} will become empty. For example, when the initial velocity is greater than $v_{\max}-a_{\min}\Delta{t}_{p}$, there will be no acceleration within the actuation limits that can steer the velocity below $v_{\max}$ in the next time step. To prevent any chance of infeasibility, we can also replace the hard constraints on velocity with suitable soft constraints, as we did in constraints~\eqref{eq:planning-lcqp-con3} and~\eqref{eq:planning-lcqp-con4}. \label{rem:inf} \end{remark} \begin{remark} Because the constraint on minimum headway is soft, it is possible for the considered vehicle to violate constraint~\eqref{eq:planning-lcqp-con3}, which includes the extreme event of colliding into the preceding vehicle. Note that this is chosen by design, because the hard-constraint version of~\eqref{eq:planning-lcqp-con3} can never be guaranteed in practice due to the irregularity of human driving and inevitable prediction errors. Therefore, the presented soft constraint in~\eqref{eq:planning-lcqp-con3} is to be viewed as a ``best-effort'' attempt to respect the minimum headway requirement. \label{rem:col} \end{remark} Furthermore, note that constraints~\eqref{eq:planning-lcqp-con3} and~\eqref{eq:planning-lcqp-con4} can be extended to account for additional driving behaviors. For example, they can be modified to handle cut-ins and cut-outs, as illustrated in Figure~\ref{fig:envelope_design_cutinandout}. 
Under this design, the considered vehicle will not overreact when a discontinuous change occurs in the position of the preceding vehicle. Rather, it gradually recovers to a comfortable headway over time. With the help of a switching condition, this policy can be combined with that of~\eqref{eq:headway_constraints} to form a hybrid MPC controller that is capable of longitudinal car-following with cut-ins and cut-outs. \begin{figure}[ht] \centering \begin{tikzpicture} \begin{axis}[ axis lines=middle, axis line style={-Latex}, xmin=0, xmax=125, ymin=0, ymax=150, xlabel=$t$, ylabel=$s$, ticks=none, xtick={}, xticklabels={}, ytick={}, yticklabels={}, legend style={at={(1,0.125)},anchor=south east}, width=0.25\textwidth, height=0.225\textwidth, ] \addplot+[no marks,domain=5:25,samples=100,black,thick] {x + 40}; \addplot+[no marks,domain=25:85,samples=100,black,thick] {x + 20}; \addplot[mark=none,black,thick] coordinates {(25,65) (25,45)}; \addplot[mark=*,red,thick,name path=lower] coordinates {(5,20) (25,40) (85,80)}; \addplot[mark=triangle*,blue,thick,name path=upper] coordinates {(5,5) (25,25) (85,65)}; \addplot[olive,opacity=0.25] fill between[of=lower and upper, soft clip={domain=5:85}]; \draw [<->] (5,20) -- node[right] {\small $\Delta{s}^{*}$} (5,45); \draw [<->] (85,80) -- node[right] {\small $\Delta{s}^{*}$} (85,105); \draw [-latex] (25,110) node[above]{\small Cut-In} to (25,65); \end{axis} \end{tikzpicture} \begin{tikzpicture} \begin{axis}[ axis lines=middle, axis line style={-Latex}, xmin=0, xmax=125, ymin=0, ymax=150, xlabel=$t$, ylabel=$s$, ticks=none, xtick={}, xticklabels={}, ytick={}, yticklabels={}, legend style={at={(1.65,0.5)},anchor=east}, legend style={draw=none,font=\small}, width=0.25\textwidth, height=0.225\textwidth, ] \addplot+[no marks,domain=5:25,samples=100,black,thick] {x + 40}; \addplot[mark=*,red,thick,name path=lower] coordinates {(5,20) (25,40) (85,120)}; \addplot[mark=triangle*,blue,thick,name path=upper] coordinates {(5,5) (25,25) 
(85,105)};
\addplot[olive,opacity=0.25] fill between[of=lower and upper, soft clip={domain=5:85}];
\addplot+[no marks,domain=25:85,samples=100,black,thick] {x + 60};
\addplot[mark=none,black,thick] coordinates {(25,65) (25,85)};
\draw [<->] (5,20) -- node[right] {\small $\Delta{s}^{*}$} (5,45);
\draw [<->] (85,120) -- node[right] {\small $\Delta{s}^{*}$} (85,145);
\draw [-latex] (25,110) node[above]{\small Cut-Out} to (25,85);
\legend{$\hat{s}^{0}(t)$, $\hat{s}_{\min}(t)$, $\hat{s}_{\max}(t)$};
\end{axis} \end{tikzpicture} \caption{Headway envelope design to handle cut-ins and cut-outs. (left) Design for cut-ins. (right) Design for cut-outs. The proposed designs aim to keep a comfortable headway $\Delta{s}^{*}$. } \label{fig:envelope_design_cutinandout} \end{figure} \subsection{Tracking layer} To handle prediction errors, an additional tracking layer is introduced to track the planned accelerations, while guarding the considered vehicle from imminent collisions. Concretely, the layer takes the following form, which is a modified version of~\eqref{eq:planning-lcqp}.
\begin{mini!}|s|[2] {Z} {\lambda \sum_{i_{c}=0}^{n-1} \left(\check{a}^{1}(t_{i_{c}}^{c}) - \bar{a}^{1}(t_{i_{c}}^{c})\right)^{2} + \mu \sum_{j_{c}=1}^{n} \left( \xi_{j_{c}}^{c} \right)^{2} \label{eq:tracking-lcqp-obj}} {\label{eq:tracking-lcqp}} {} \addConstraint{\bar{\vec{x}}^{1}(t_{0}^{c})}{ = \vec{x}^{1}_{0} \label{eq:tracking-lcqp-con1}} \addConstraint{\bar{\vec{x}}^{1}(t_{i_{c}+1}^{c})}{= \mat{A}_{i_{c}}^{c} \cdot \bar{\vec{x}}^{1}(t_{i_{c}}^{c}) + \mat{B}_{i_{c}}^{c} \cdot \bar{a}^{1}(t_{i_{c}}^{c}) \label{eq:tracking-lcqp-con2}} \addConstraint{0 \leq \xi_{j_{c}}^{c}}{, \quad s^{1}(t_{j_{c}}^{c}) - \tilde{s}_{\min}\left( t_{j_{c}}^{c} \mid a^{0}(t_{0}^{c}) \right) \leq \xi_{j_{c}}^{c} \label{eq:tracking-lcqp-con3}} \addConstraint{v_{\min} \leq v^{1}(t_{j_{c}}^{c})}{\leq v_{\max} \label{eq:tracking-lcqp-con4}} \addConstraint{a_{\min} \leq a^{1}(t_{i_{c}}^{c})}{\leq a_{\max}, \label{eq:tracking-lcqp-con5}} \end{mini!} where \begin{equation*}\begin{aligned} t_{i_{c}}^{c} &\coloneqq t + i_{c}\Delta{t}^{c},\\ Z &\coloneqq \left\{ \overline{\mathcal{X}}_{n}^{1c},\overline{\mathcal{U}}_{n-1}^{1c}, \{ \xi_{j_{c}}^{c} \}_{1}^{n} \right\},\\ \mat{A}_{i_{c}}^{c} &\coloneqq e^{\mat{A}(t_{i_{c}+1}^{c} - t_{i_{c}}^{c})}, \quad \mat{A} = \left[[0 \; 0]^{\top} \: [1 \; 0]^{\top}\right],\\ \mat{B}_{i_{c}}^{c} &\coloneqq \int^{t_{i_{c}+1}^{c}}_{t_{i_{c}}^{c}} e^{\mat{A}(t_{i_{c}+1}^{c} - \tau)} d\tau \cdot \mat{B}, \quad \mat{B} = [0 \; 1]^{\top}, \end{aligned}\end{equation*} for some horizon $n$, resolution $\Delta{t}^{c}$, $i_{c} = 0, \dots, n-1$, and $j_{c} = 1, \dots, n$. $\overline{\mathcal{X}}_{n}^{1c},\overline{\mathcal{U}}_{n-1}^{1c}$ are defined similar to those in~\eqref{eq:decision_variables_planning}. Note that here we use accent $\bar{x}$ to indicate that a variable $x$ is an internal variable of the tracking controller. The receding horizon design is illustrated in the $t^{c}$ axis in Figure~\ref{fig:receding_horizon_spec}. 
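Since $\mat{A}$ is nilpotent ($\mat{A}^{2} = \mat{0}$), the matrix exponential in the definitions above truncates after the linear term, so $\mat{A}_{i_{c}}^{c}$ and $\mat{B}_{i_{c}}^{c}$ have simple closed forms. The following numpy sketch (our own illustration; state ordering $[s, v]^{\top}$ and a 0.1 s step are assumptions, not values from the text) checks the discretization by propagating one step:

```python
import numpy as np

def discretize_double_integrator(dt):
    """Exact zero-order-hold discretization of xdot = A x + B a with
    A = [[0, 1], [0, 0]] and B = [0, 1]^T.  Because A^2 = 0, the
    series e^{A dt} = I + A dt terminates exactly."""
    Ad = np.array([[1.0, dt],
                   [0.0, 1.0]])
    # Bd = integral_0^dt e^{A (dt - tau)} B dtau = [dt^2 / 2, dt]^T
    Bd = np.array([[0.5 * dt ** 2],
                   [dt]])
    return Ad, Bd

# One tracking step: position s = 0 m, velocity v = 10 m/s, accel a = 2 m/s^2.
dt = 0.1
Ad, Bd = discretize_double_integrator(dt)
x = np.array([[0.0], [10.0]])
x_next = Ad @ x + Bd * 2.0
print(x_next.ravel())  # s = 1.01 m, v = 10.2 m/s
```

The same closed form serves both layers; only the step size ($\Delta{t}^{p}$ versus $\Delta{t}^{c}$) differs.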
Unlike in the planning layer, the minimum headway envelope $\tilde{s}_{\min}(\cdot \mid a^{0}(\cdot))$ is generated by assuming that the preceding vehicle maintains a constant acceleration from $t_{0}^{c}$ to $t_{n}^{c}$. As a result, we have $a^{0}(t_{0}^{c})$, $\lambda, \mu$, and $\vec{x}_{0}^{1}$ as inputs to the optimization program, where $0 < \lambda, \mu \leq 1, \lambda + \mu = 1$. The modified LCQP in the tracking layer in~\eqref{eq:tracking-lcqp} is structurally identical to~\eqref{eq:planning-lcqp} except that: \textsl{1}) we remove the soft constraint on maximum headway; \textsl{2}) we adopt a shorter planning horizon $n$ and a smaller temporal resolution $\Delta{t}^{c}$; and \textsl{3}) we predict the trajectory of the preceding vehicle by assuming that it keeps its current acceleration. \begin{remark} Because the optimization problem~\eqref{eq:tracking-lcqp} may be viewed as a special case of problem~\eqref{eq:planning-lcqp} where $\gamma = 0$, it has the same properties as those highlighted in Propositions~\ref{prop:opt} and~\ref{prop:feasibility} and Remarks~\ref{rem:inf} and~\ref{rem:col}.
\end{remark} \begin{figure}[h] \centering \begin{tikzpicture} \draw[->, >=latex, line width=0.35mm] (0, 1.0) to (7, 1.0) node[right] {$s$}; \draw (0, 1.1) to (0, 1.0) node[below] {$w_{1}$}; \draw (1.25, 1.1) to (1.25, 1.0) node[below] {$w_{2}$}; \draw (2.5, 1.1) to (2.5, 1.0) node[below] {$w_{3}$}; \draw (3.75, 1.1) to (3.75, 1.0) node[below] {$w_{4}$}; \draw (5.125, 1.0) node[below] {$\dots$}; \draw (6.5, 1.1) to (6.5, 1.0) node[below] {$w_{l+1}$}; \draw[decoration={brace}, decorate] (0, 1.2) -- node[above=3pt] {$\Delta{s}$} (1.25, 1.2); \draw[->, >=latex, line width=0.35mm] (0, 0) to (7, 0) node[right] {$\hat{t}^{0}$}; \draw (0, 0.1) to (0, 0) node[below] {$t_{1}^{0}$}; \draw (1.75, 0.1) to (1.75, 0) node[below] {$\hat{t}_{2}^{0}$}; \draw (3.00, 0.1) to (3.00, 0) node[below] {$\hat{t}_{3}^{0}$}; \draw (4.00, 0.1) to (4.00, 0) node[below] {$\hat{t}_{4}^{0}$}; \draw (5.25, 0) node[below] {$\dots$}; \draw (6.5, 0.1) to (6.5, 0) node[below] {$\hat{t}_{l+1}^{0}$}; \draw[->, >=latex, line width=0.35mm] (0, -1.1) to (7, -1.1) node[right] {$t^{p}$}; \draw (0, -1.0) to (0, -1.1) node[below] {$t^{p}_{0}$}; \draw (1, -1.0) to (1, -1.1) node[below] {$t^{p}_{1}$}; \draw (2, -1.0) to (2, -1.1) node[below] {$t^{p}_{2}$}; \draw (3, -1.0) to (3, -1.1) node[below] {$t^{p}_{3}$}; \draw (4, -1.0) to (4, -1.1) node[below] {$t^{p}_{4}$}; \draw (5.25, -1.1) node[below] {$\dots$}; \draw (6.5, -1.0) to (6.5, -1.1) node[below] {$t^{p}_{m}$}; \draw[decoration={brace}, decorate] (0, -0.9) -- node[above=3pt] {$\Delta{t}^{p}$} (1.0, -0.9); \draw[->, >=latex, line width=0.35mm] (0, -2.35) to (3.7, -2.35) node[right] {$t^{c}$}; \draw (0, -2.25) to (0, -2.35) node[below] {$t^{c}_{0}$}; \draw (0.2, -2.25) to (0.2, -2.35) node[below] {}; \draw (0.4, -2.25) to (0.4, -2.35) node[below] {$t^{c}_{1}$}; \draw (0.6, -2.25) to (0.6, -2.35) node[below] {}; \draw (0.8, -2.25) to (0.8, -2.35) node[below] {$t^{c}_{3}$}; \draw (1.0, -2.25) to (1.0, -2.35) node[below] {}; \draw (1.2, -2.25) to (1.2, -2.35) 
node[below] {$t^{c}_{5}$}; \draw (1.4, -2.25) to (1.4, -2.35) node[below] {}; \draw (1.6, -2.25) to (1.6, -2.35) node[below] {$t^{c}_{7}$}; \draw (1.8, -2.25) to (1.8, -2.35) node[below] {}; \draw (2.0, -2.25) to (2.0, -2.35) node[below] {$t^{c}_{9}$}; \draw (2.6, -2.35) node[below] {$\dots$}; \draw (3.2, -2.25) to (3.2, -2.35) node[below] {$t^{c}_{n}$}; \draw[decoration={brace}, decorate] (0, -2.15) -- node[above=3pt] {$\Delta{t}^{c}$} (0.2, -2.15); \end{tikzpicture} \caption{Specification of the receding-horizon control scheme. The prediction layer has a \textit{fixed} spatial horizon of $l$ with a spatial resolution of $\Delta{s}$ and a \textit{variable} temporal horizon. The planning layer has a temporal horizon of $m$ with a temporal resolution of $\Delta{t}^{p}$. Because $\hat{t}^{0}_{l+1} = t^{p}_{m}$ and $\hat{t}^{0}_{l+1}$ is variable, $m$ is not fixed and depends on the traffic conditions. The tracking layer has a temporal horizon of $n$ with a temporal resolution of $\Delta{t}^{c}$. 
} \label{fig:receding_horizon_spec} \end{figure} \section{Numerical Experiments} \begin{figure*}[h] \centering \begin{tabular}{ll} \begin{subfigure}{0.39\textwidth} \begin{tikzpicture} \node[inner sep=0pt] (i24) at (0,3){\includegraphics[width=0.6\textwidth]{figures/i24.png}}; \node[inner sep=0pt] (rav4) at (0,0){\includegraphics[width=0.6\textwidth]{figures/rav4_sideview.jpeg}}; \draw[draw] (-0.5,1.5) rectangle ++(1.6,1.75); \node[text width=3cm] at (1,3.5) {\footnotesize Trajectory Range}; \node[text width=4.5cm,black] at (0.25,1) {\footnotesize Toyota RAV4}; \end{tikzpicture} \end{subfigure}& \begin{subfigure}{0.59\textwidth} \hspace{-7em} \begin{tikzpicture} \begin{axis}[ trim axis right, name=position_plot, axis line style={-Latex}, xlabel={}, xticklabels={}, xmax=700, ylabel=$s - \hat{s}^{0}$ (m), width=\textwidth, height=0.275\textwidth, legend pos=outer north east, legend style={draw=none,font=\footnotesize}, enlarge x limits=false, mark repeat=25, ] \addplot[no marks,thick] table[x=t, y expr=\thisrowno{1}-\thisrowno{1},col sep=comma] {data/7050_solution.csv}; \addlegendentry{$\hat{s}^{0}(t)$}; \addplot[mark=truestar,olive,thick] table[x=t, y expr=\thisrowno{4}-\thisrowno{1},col sep=comma] {data/7050_solution.csv}; \addlegendentry{$\check{s}^{1}(t)$}; \addplot[mark=square*,red,name path=lower] table[x=t, y expr=\thisrowno{7}-\thisrowno{1},col sep=comma] {data/7050_solution.csv}; \addlegendentry{$\hat{s}_{\min}(t)$}; \addplot[mark=triangle*,blue,name path=upper] table[x=t, y expr=\thisrowno{8}-\thisrowno{1},col sep=comma] {data/7050_solution.csv}; \addlegendentry{$\hat{s}_{\max}(t)$}; \addplot[olive,opacity=0.25] fill between[of=upper and lower]; \end{axis} \end{tikzpicture} \hspace{-6.85em} \begin{tikzpicture} \begin{axis}[ name=position_plot, axis line style={-Latex}, xlabel={}, xticklabels={}, xmax=700, ylabel=$v$ (m/s), width=\textwidth, height=0.275\textwidth, legend pos=outer north east, legend style={draw=none,font=\footnotesize}, enlarge x 
limits=false, mark repeat=25, ] \addplot[no marks,thick] table[x=t, y expr=\thisrowno{2},col sep=comma] {data/7050_solution.csv}; \addlegendentry{$\hat{v}^{0}(t)$}; \addplot[mark=truestar,olive,thick] table[x=t, y expr=\thisrowno{5},col sep=comma] {data/7050_solution.csv}; \addlegendentry{$\check{v}^{1}(t)$}; \addplot[mark=square*,red,name path=lower,samples=100] table[x=t, y=v_min,col sep=comma] {data/7050_solution.csv}; \addlegendentry{$v_{\min}$}; \addplot[mark=triangle*,blue,name path=upper,samples=100] table[x=t, y=v_max,col sep=comma] {data/7050_solution.csv}; \addlegendentry{$v_{\max}$}; \addplot[olive,opacity=0.25] fill between[of=upper and lower]; \end{axis} \end{tikzpicture} \hspace{-6.95em} \begin{tikzpicture} \begin{axis}[ name=position_plot, axis line style={-Latex}, xlabel=$t$ (s), xmax=700, ylabel=$a$ (m/s$^2$), width=\textwidth, height=0.275\textwidth, legend pos=outer north east, legend style={draw=none,font=\footnotesize}, enlarge x limits=false, mark repeat=25, ] \addplot[no marks,thick] table[x=t, y expr=\thisrowno{3},col sep=comma] {data/7050_solution.csv}; \addlegendentry{$\hat{a}^{0}(t)$}; \addplot[mark=truestar,olive,thick] table[x=t, y expr=\thisrowno{6},col sep=comma] {data/7050_solution.csv}; \addlegendentry{$\check{a}^{1}(t)$}; \addplot[mark=square*,red,name path=lower,samples=100] table[x=t, y=a_min,col sep=comma] {data/7050_solution.csv}; \addlegendentry{$a_{\min}$}; \addplot[mark=triangle*,blue,name path=upper,samples=100] table[x=t, y=a_max,col sep=comma] {data/7050_solution.csv}; \addlegendentry{$a_{\max}$}; \addplot[olive,opacity=0.25] fill between[of=upper and lower]; \end{axis} \end{tikzpicture} \end{subfigure} \end{tabular} \caption{Numerical demonstration of the planning layer. We apply the planning layer to a recorded trajectory of human-manned Toyota RAV4 on a stretch of highway along Interstate 24. To evaluate the standalone performance of the planner, we assume perfect prediction and full planning horizon. 
The planned trajectory has significantly attenuated acceleration compared to that of the preceding vehicle.} \label{fig:planning} \end{figure*} To evaluate the proposed controller, we test its performance through three numerical experiments, each equipped with a distinct ETA estimator, and check whether its average running time is suitable for real-time applications. For all performance evaluations, a preceding vehicle is emulated by replaying a recorded drive from~\cite{matthew_nice_2021_6366762}. The velocity and acceleration time series of the recorded drive are shown in the second and third rows of Figure~\ref{fig:planning}. Specifications of the three ETA estimators are $(l, \Delta{s}, \sigma) \in \{ (350, 10^{1}, 0.05), (35, 10^{2}, 0.15), (3.5, 10^{3}, 0.25) \}$. Parameters of the planning layer are: $\alpha = 0.2, \beta = 0.7, \gamma = 0.1, v_{\min} = 0, v_{\max} = \SI{35}{\meter\per\second}, a_{\min} = \SI{-1.5}{\meter\per\second\squared}, a_{\max} = \SI{3}{\meter\per\second\squared}$, and $\Delta{t}^{p} = \SI{1}{\second}$. Additional parameters in the tracking layer are: $\lambda = 0.1, \mu = 0.9, \Delta{t}^{c} = \SI{0.1}{\second}$, and $n = 30$. The results of the three MPC controllers are shown in the first three rows of Table~\ref{tab:numerical_results}. To quantify the performance of each controller, we use the three terms in the objective function~\eqref{eq:planning-lcqp-obj}, as well as two measures that account for tracking errors and fuel efficiency. Note that the fuel reduction is computed using the fitted polynomial energy consumption model for a 2019 Toyota RAV4 described in~\cite{10.1145/3459609}. To establish a performance upper bound, we run the experiment with a controller that can perfectly foresee the trajectory of the preceding vehicle. We call such a controller an oracle controller. The results of the numerical simulations are shown in the fourth row of Table~\ref{tab:numerical_results}.
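The fuel-reduction measure reported in Table~\ref{tab:numerical_results} is one minus the ratio of the average fuel rates of the considered and the preceding vehicle. A minimal sketch with hypothetical fuel-rate values (the actual experiments instead evaluate the fitted polynomial RAV4 model from the cited reference):

```python
def fuel_reduction(fuel_rate_ego, fuel_rate_lead):
    """One minus the ratio of average fuel rates (considered / preceding)."""
    mean = lambda xs: sum(xs) / len(xs)
    return 1.0 - mean(fuel_rate_ego) / mean(fuel_rate_lead)

# Hypothetical fuel rates (g/s) sampled over a drive, for illustration only.
lead_rates = [2.0, 3.5, 1.5, 4.0]
ego_rates = [1.8, 2.4, 1.6, 2.6]
print(f"{fuel_reduction(ego_rates, lead_rates):.1%}")  # 23.6%
```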
Position, velocity, and acceleration of the oracle controller are shown in Figure~\ref{fig:planning}. Similarly, to acquire a performance baseline, we run the experiment with a common human car-following model, i.e., intelligent driver model (IDM)~\cite{treiber2000congested}. Parameters of the IDM are chosen to match that of the MPC controller: $a = 1.5, b = 3, \delta = 4, s_{0} = 3.5, \ell = 4.65$. The results of the numerical simulations are shown in the last row of Table~\ref{tab:numerical_results}. \begin{table*} \centering \caption{Summary of the numerical results.} \label{tab:numerical_results} \begin{tabular}{rccccc} \toprule & $\alpha\sum a^{2}$ (m/s$^2$) & $\beta\sum \xi^{2}$ (m)$^{*}$ & $\gamma\sum \zeta^{2}$ (m)$^{*}$ & Tracking Error (m/s)$^{\dagger}$ & Fuel Reduction$^{\ddagger}$\\ \hline $\Delta{s} = 10^{1}, \sigma = 0.05$ & $1.17 \times 10^{1}$ & $8.28 \times 10^{-1}$ & $1.21 \times 10^{0}$ & $-0.0122 \pm 0.571$ & 23.8\%\\ $\Delta{s} = 10^{2}, \sigma = 0.15$ & $1.70 \times 10^{1}$ & $1.74 \times 10^{0}$ & $3.19 \times 10^{0}$ & $0.0268 \pm 0.752$ & 19.6\%\\ $\Delta{s} = 10^{3}, \sigma = 0.25$ & $5.01 \times 10^{1}$ & $3.04 \times 10^{0}$ & $2.61 \times 10^{2}$ & $-0.0342 \pm 2.11$ & -14.6\%\\ Oracle & $1.01 \times 10^{1}$ & $2.48 \times 10^{-2}$ & $2.48 \times 10^{-1}$ & n/a & 24.3\%\\ IDM & $1.65 \times 10^{1}$ & $1.81 \times 10^{-1}$ & $4.52 \times 10^{2}$ & n/a & 20.7\%\\ \bottomrule \multicolumn{6}{l}{\footnotesize $*$ Slack variables $\xi$ and $\zeta$ are computed against the perfect prediction assumed by the oracle vehicle.}\\ \multicolumn{6}{l}{\footnotesize $\dagger$ Tracking error is defined as the difference between the planned velocity $\check{v}$ and actual velocity $\bar{v}$.}\\ \multicolumn{6}{l}{\footnotesize $\ddagger$ Fuel reduction is computed as one minus the ratio of the fuel rates between the considered vehicle and the preceding vehicle.}\\ \end{tabular} \end{table*} As the oracle vehicle has demonstrated in 
Figure~\ref{fig:planning}, the optimal behavior is to avoid accelerating or decelerating unless a violation of one of the two headway constraints is imminent. Table~\ref{tab:numerical_results} suggests that with a good ETA estimator the optimal trajectory can perform tight car-following with 20\% less fuel consumption compared to that of the preceding vehicle. Lastly, we measure the running time of the planning and tracking layers on an Intel i7-6700K Linux machine. The optimization solver is obtained from the Python package CVXOPT~\cite{andersen2020cvxopt}. The average running time of the planning layer is about 0.12 s, and that of the tracking layer about 0.04 s. As illustrated in Figure~\ref{fig:controller_design}, both times fit comfortably into their allocated running time budgets, i.e., 1 s and 0.1 s, respectively. \section{Discussions} In this section, we discuss two of the most consequential features that one needs to pay attention to when designing the controller, namely, \textsl{1}) the space between the maximum and minimum headway constraints and \textsl{2}) the resolution and accuracy of the ETA estimators. It is clear that the larger the space between the maximum and minimum headway constraints, the smoother the car following can be. For example, if the considered car is allowed to be arbitrarily far behind the preceding vehicle without incurring any penalties, it can simply wait for the preceding vehicle to exit the road and then accelerate to a very small constant velocity to complete the trip. Nevertheless, we know from common sense that such behavior is not acceptable in most real-world applications. In contrast, when the space between the two headway constraints is small, the MPC controller becomes sensitive to prediction errors. Consider a situation where the allowable headway interval is thin and the prediction errors are significant.
Because the considered vehicle is already close to the boundaries of the headway constraints, an error in prediction could easily mislead the planning layer to falsely expect that it will soon violate one of the headway constraints. In an effort to steer away from violating such a constraint, the planning layer overreacts, leading to undesirably large acceleration or deceleration. To resolve the above problem, one could modify the headway constraints proposed in~\eqref{eq:headway_constraints}. One possible solution is to enforce a minimum gap between the maximum and minimum headway envelopes and to add a small penalty to encourage the considered vehicle to drive at the center of the allowable headway interval. The performance of the controller also changes significantly with respect to the resolution and accuracy of the prediction. At a spatial resolution of $\Delta{s} = 10$ and noise level $\sigma = 0.05$, the MPC controller is on par with the oracle controller and clearly outperforms the IDM. At a spatial resolution of $\Delta{s} = 100$ and noise level $\sigma = 0.15$, the MPC controller is worse than the oracle controller but is mostly on par with the IDM, except that it follows the preceding vehicle more tightly. At a spatial resolution of $\Delta{s} = 1000$ and noise level $\sigma = 0.25$, the MPC controller is worse than both the oracle controller and the IDM. Therefore, a reasonable requirement on the ETA estimator could be that it should have a spatial resolution of $\Delta{s} \leq 100$ and an estimation noise of $\sigma \leq 0.15$. \section{Conclusions} In this article, we propose a hierarchical MPC control scheme based on an LCQP. With a good ETA estimator, we show that the controller can achieve significantly smoother and tighter car following compared to a baseline IDM. Additional constructions can be added to the formulation to handle cut-ins and cut-outs and to enhance its robustness to prediction errors.
Interesting future work includes studying the string stability of the controller and modifying its formulation for inter-vehicle collaboration. \bibliographystyle{unsrt}
\section{Introduction} To gather feedback about computer systems' running state, it is a common practice for developers to insert logging statements inside the source code to have running programs' internal state and variables written to log files. This logging process enables developers and system administrators to analyze log files for a variety of purposes~\cite{bertero2017experience}, such as anomaly and problem detection~\cite{xu2009detecting,fu2009execution}, log message clustering~\cite{makanju2009clustering,vaarandi2015logcluster}, system profile building~\cite{hassan2008industrial}, code quality assessment~\cite{shang2015studying}, and compression of log files~\cite{tang2011log,makanju2009clustering}. Additionally, the wealth of information in the logs has also generated significant industrial interest and thus has initiated the development of commercialized log processing platforms such as Splunk~\cite{urlsplunk} and Elastic Stack~\cite{urlelastic}. Due to the free-form text format of log statements and lack of a general guideline, adding proper logging statements to the source code remains a manual, inconsistent, and error-prone task~\cite{chen2017characterizing}. As such, methods to automate logging \textit{\textbf{location}} and predict the \textit{\textbf{details}}, \textit{i.e.}, the \textit{`static text'} and verbosity level of the logging statement, are well sought after. For example, the log print statement (LPS): {\small \hlcyan{log.warn(``Cannot find BPService for bpid=" + id)}}, contains a textual part indicating the context of the log, \textit{i.e.}, \textit{description}, \textit{``Cannot find BPService for bpid=''}, a \textit{variable} part, \textit{`id'}, and a log \textit{verbosity level},\textit{`warn'}, indicating the importance of the logging statement and how the level represents the state of the program~\cite{log4x}. 
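For illustration, the three parts of such an LPS can be separated mechanically. The following sketch is our own illustrative example, not tooling from this work; the regex only covers the simple `static text + variable` shape of the statement quoted above:

```python
import re

# Illustrative pattern for statements such as:
#   log.warn("Cannot find BPService for bpid=" + id)
LPS_RE = re.compile(
    r'log\.(?P<level>trace|debug|info|warn|error|fatal)'  # verbosity level
    r'\(\s*"(?P<description>[^"]*)"'                      # static description
    r'(?:\s*\+\s*(?P<variables>[^)]*))?\)'                # optional variables
)

def parse_lps(statement):
    """Split an LPS into (level, description, variable part), or None."""
    m = LPS_RE.search(statement)
    return m and (m.group("level"), m.group("description"), m.group("variables"))

print(parse_lps('log.warn("Cannot find BPService for bpid=" + id)'))
# ('warn', 'Cannot find BPService for bpid=', 'id')
```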
For practical concerns such as I/O and development costs, the \textit{quantity}, \textit{location}, and \textit{description} of logging statements should be decided efficiently~\cite{jia2018smartlog}. Logging too little may result in missing important runtime information that can negatively impact the postmortem dependability analysis~\cite{yuan2012conservative}, and excessive logging can consume extra system resources at runtime and impair the system's performance as logging is an I/O intensive task~\cite{zhao2017log20,ding2015log2}. In addition, due to the current \textit{ad hoc} logging practices, developers often make mistakes in log statements or even forget to insert a log statement at all~\cite{hassani2018studying,li2021studying}. Therefore, prior studies have aimed to automate the logging process and predict whether a code snippet requires a logging statement by utilizing machine learning methods to \textit{train} a model on a set of logged code snippets, and then \textit{test} it on a new set of unlogged code snippets~\cite{fu2014developers,zhu2015learning} (supervised learning). A recent work~\cite{he2018characterizing} has shown similar code snippets are useful for log statement description (LSD) suggestions by evaluating their \textit{BLEU}~\cite{papineni2002bleu} and \textit{ROUGE}~\cite{lin2004rouge} scores, similar to \textit{Precision} and \textit{Recall}, respectively. Thus, in our research, we specifically seek to utilize source code clones for log statement prediction and suggestion. Our goal in this research is to utilize code clones as a paradigm to improve the log statement automation task. This will ensure consistency and a higher quality of logging compared to the current developers' \textit{ad hoc} logging efforts. 
To summarize, the objectives of this research are to first investigate the suitability of source code clones for log statement prediction, uncover their shortcomings, and then leverage them for automated log location and description prediction based on selecting appropriate source code features~\cite{gholamian2020logging}. In addition, we utilize deep learning NLP approaches along with code clones to also predict the log statement's description. Through an empirical study of seven open-source software projects, we demonstrate the applicability of similar code snippets for log prediction, and further analysis suggests that log-aware clone detection can achieve high BLEU and ROUGE scores in predicting log statements' descriptions.\looseness=-1 \section{Motivating Example}\label{casestudy} Source code clones are exact or similar snippets of code that exist within one or across multiple source code projects~\cite{sajnani2016sourcerercc}. There are four main classes of code clones~\cite{rattan2013software}: Type-1, which is simply a copy-pasted code snippet; Type-2 and Type-3, which are clones that show syntax differences to some extent; and finally Type-4, which represents two code snippets that are syntactically very different but semantically equal, \textit{e.g.}, the iterative versus recursive implementations of the \textit{Fibonacci} series in Figure~\ref{code_sample}. In this research, we focus on \textbf{method-level code clones} and call the tuple ($MD_{i}$, $MD_{j}$) a \textit{`clone pair'}. Figure~\ref{code_sample} shows that the logging pattern in the original code, $MD_i$ (Line 3), can be learned to suggest logging statements for its clone, $MD_j$, which is missing a logging statement.
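As a rough intuition for how token-based detectors such as SourcererCC score such pairs, the sketch below computes a Jaccard similarity over identifier/keyword tokens; the tokenizer, the measure, and the snippets are simplified illustrations, not the detector used in this work:

```python
import re

def tokens(method_src):
    """Crude lexer: identifiers, keywords, and integer literals."""
    return set(re.findall(r"[A-Za-z_]\w*|\d+", method_src))

def jaccard(md_i, md_j):
    """Shared tokens over all tokens; clone detectors report a pair when
    a similarity score of this kind exceeds a tuned threshold."""
    ti, tj = tokens(md_i), tokens(md_j)
    return len(ti & tj) / max(len(ti | tj), 1)

md_i = "int fibonacci(int n){ if(n==0||n==1) return n; }"
md_j = "int getFibonacci(int n){ int nth=1; for(int i=2;i<=n;i++){} return nth; }"
print(round(jaccard(md_i, md_j), 2))  # 0.33
```

Despite the low lexical overlap of this Type-4-style pair, both snippets implement the same computation, which is why log-aware clone detection must go beyond shallow token matching.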
\begin{figure}[h] \vspace*{-4mm} \centering \hspace*{-3mm}
\begin{minipage}{9cm} \hspace*{-3mm}
\begin{minipage}{0.47\linewidth} \scriptsize \hspace*{-5mm}
\begin{lstlisting}[xleftmargin=-2cm,numbersep=2pt,linebackgroundcolor={%
\ifnum\value{lstnumber}=1 \color{blue!10} \fi
\ifnum\value{lstnumber}=11 \color{blue!10} \fi
},label={w_o_log_lvl_2},style=base]
//Original code - MD$_i$
int fibonacci(int n){
 log.info("Calculating Fibo sequence for "+n);
 if(n==0||n==1) return n;
 else return fibonacci(n-1)+fibonacci(n-2);
}
\end{lstlisting}
\end{minipage}
\begin{minipage}{0.5\linewidth} \scriptsize
\begin{lstlisting}[xleftmargin=-2cm,numbersep=2pt,linebackgroundcolor={%
\ifnum\value{lstnumber}=1 \color{blue!10} \fi
\ifnum\value{lstnumber}=11 \color{blue!10} \fi
},label={w_o_log_lvl_2},style=base]
//Clone Type 4 - MD$_j$
int getFibonacci(int n){
 if(n==0){return 0;}
 if(n==1){return 1;}
 int n_2th=0,n_1th=1,nth=1;
 for(int i=2;i<=n;i++){
  nth=n_2th+n_1th;
  n_2th=n_1th;
  n_1th=nth;}
 return nth;
}
\end{lstlisting}
\end{minipage}
\end{minipage}
\caption{Example for log prediction with code clones.} \label{code_sample} \vspace*{-2mm} \end{figure}
\textbf{Practical Scenario.} To illustrate how our approach will be useful for developers during the development cycle of the software, we provide the following practical scenario. We consider a possible employment of our research as a recommender tool, which can be integrated as a plugin into code development environments, \textit{i.e.}, IDE software. Alex is a developer working on a large-scale software system and has previously developed method $MD_i$ in the code base. At a later time, Dave, Alex's colleague, is implementing $MD_{j}$. Our automated log suggestion\footnote{We use \textit{`suggestion'} and \textit{`prediction'} interchangeably.} approach can predict whether this new code snippet, $MD_{j}$, requires a logging statement by finding its clone, $MD_i$, in the code base.
Then, the tool can prompt Dave, just in time, to add a log statement based on the prediction outcome.\looseness=-1 \begin{figure*}[h] \centering \includegraphics[scale=.85]{./graphics/steps_doctoral.pdf} \vspace*{-2mm} \caption{Research steps, including objectives, intermediate data, and findings.} \label{research_steps} \vspace*{-5mm} \end{figure*} \section{Related Work}\label{rwork} Prior work has tackled the automation of log statements with various approaches. Yuan \textit{et al.}~\cite{yuan2012conservative} proposed \textit{ErrorLog}, a tool that reports error handling code, \textit{i.e.}, \textit{error logging}, such as \textit{catch clauses}, that is not logged, and that improves code quality and helps with failure diagnosis by adding log statements. Zhao \textit{et al.}~\cite{zhao2017log20} introduced \textit{Log20}, a performance-aware tool to inject new logging statements into the source code to disambiguate execution paths. \textit{Log20} introduces a logging mechanism that does not consider developers' logging habits or concerns. Moreover, it does not provide suggestions for \textit{logging descriptions}. Zhu \textit{et al.}~\cite{zhu2015learning} proposed \textit{LogAdvisor}, a learning-based framework for automated logging prediction, which aims to learn the frequently occurring logging practices automatically. Their method learns logging practices from existing code repositories for \textit{exception} and \textit{function return-value check} blocks by looking for textual and structural features within these code blocks with logging statements. Jia \textit{et al.}~\cite{jia2018smartlog} proposed an intention-aware log automation tool called \textit{SmartLog}, which uses an \textit{Intention Description Model} to explore the intention of existing logs, and Li \textit{et al.}~\cite{li2020shall} categorized six block-level logging locations.
Recently, Li \textit{et al}.~\cite{li2021studying} showed that duplicate logging statements, which are the outcome of shallow copy-pasting, result in log-related anti-patterns (\textit{i.e.}, issues). Although their research has a negative connotation towards copy-pasted logging statements from code clones, it simultaneously shows the potential of code clones as a starting point for automated log suggestion and improvement. In other words, by automating and enhancing the log statements in the clone pairs, we can expedite the development process and avoid the shallow copy-pasting that developers tend to do. Additionally, through automation, we reduce the risk of developers' irregular and \textit{ad hoc} logging practices, \textit{e.g.}, forgetting to log in the first place. \section{Research Approach} Building on the findings of He \textit{et al.}~\cite{he2018characterizing} in logging description prediction based on \textit{edit distance}~\cite{ristad1998learning}, \ul{we hypothesize that similar code snippets, \textit{i.e.}, code clones, follow similar logging patterns which can be utilized for log statement location and description prediction}. Formally speaking, assuming set $CC_{MD_i}$ is the set of all code clones of Method Definition $MD_i$, if $MD_i$ has a log print statement (LPS), then its clones also have LPSs:\vspace{-1mm} \begin{equation*} \label{eq1} \vspace{-2mm} \resizebox{0.91\hsize}{!}{ \hspace{-2mm} $\exists LPS_i\in MD_i \implies \forall MD_{j}\in CC_{MD_i}, \exists LPS_j \in MD_{j}$ } \vspace*{1mm} \end{equation*} To evaluate the hypothesis, we guide our research with the following research objectives (\textbf{RO}s): \begin{itemize} \item \textbf{RO1:} Demonstrate whether code clones are consistent in their logging statements. \item \textbf{RO2:} Propose an approach to utilize code clones for log statement location prediction. \item \textbf{RO3:} Provide logging description suggestions based on code clones and deep learning NLP models.
\item \textbf{RO4:} Utilize clones for predicting other details of log statements such as log verbosity level and variables. \end{itemize} Our research design comprises a preliminary data collection phase, Stage 0, followed by four stages, Stages I-IV, to address RO1-RO4, as illustrated in Figure~\ref{research_steps}. In the following, we provide the details of our methodology and current results for each RO.\looseness=-1 \subsection{\textbf{RO1:} Demonstrate whether code clones are consistent in their logging statements and their log verbosity level.} \textbf{\textit{Motivation}.} To enable code clones for log suggestion, we first need to compare their characteristics and show whether clone pairs follow similar logging patterns. \textbf{\textit{Approach}.} For this purpose, we select seven large-scale open-source Java projects, \textit{i.e.}, \textit{Apache Hadoop, Zookeeper, CloudStack, HBase, Hive, Camel, and ActiveMQ}, based on prior logging research~\cite{chen2017characterizing,he2018characterizing}. These projects are well-logged, stable, and well-used in the software engineering community, and they also enable us to compare our results with prior work. We extract methods with logging statements and then find their clones. \textbf{\textit{Evaluation}.} We evaluate the existence of log statements, their verbosity levels, and clone types. \textbf{\textit{Results}.} The results show that the majority of method clone pairs are consistent in their logging statements and that their log verbosity levels also match to a high degree. Additionally, we find that the majority (in the range of 78\% to 90\%) of code clones are of Types 3 and 4, while the clone pairs match in the existence of a logging statement. This observation signifies the effectiveness of code clones in suggesting the location of log statements in methods.
In other words, although two snippets of clone pairs are syntactically different to a high degree, they still follow similar logging patterns.\looseness=-1 \vspace*{-1mm} \subsection{\textbf{RO2:} Propose an approach to utilize code clones for log statement location prediction.} \textbf{\textit{Motivation}.} The findings from RO1 show matching logging statements between clone pairs and motivate enabling logging suggestions with code clones. An automated suggestion approach can help developers make logging decisions and improve logging practices. \textbf{\textit{Approach}.} We initially observe and resolve two shortcomings of general-purpose clone detectors to make them more suitable for log prediction and to reduce false positive and false negative cases~\cite{gholamian2021underreview}. We then utilize the clone pairs to suggest logging statements for methods that are missing an LPS by finding their clone pairs with a logging statement (Stage II in Figure~\ref{research_steps}). \textbf{\textit{Evaluation}.} We evaluate the performance of our approach by measuring \textit{Precision}, \textit{Recall}, \textit{F-Measure}, and \textit{Balanced Accuracy} (BA) on the set of the seven selected projects. \textbf{\textit{Results}.} Considering the average of BA values, our log-aware clone detection approach, LACCP, achieves a 15.60\% improvement over Oreo~\cite{saini2018oreo} across the experimented projects. The higher accuracy that LACCP brings enables us to provide more accurate clone-based log statement suggestions.
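As an illustration of the evaluation metrics above, the following minimal Python sketch computes Precision, Recall, F-Measure, and Balanced Accuracy from the confusion-matrix counts of a binary ``method needs a log statement'' prediction. The function names and the counts in the example are hypothetical, not taken from our tool:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard metrics for a binary 'method needs a log statement' predictor."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def balanced_accuracy(tp, fp, tn, fn):
    """BA = mean of the true-positive and true-negative rates; it stays
    informative when logged and unlogged methods are imbalanced."""
    tpr = tp / (tp + fn)   # sensitivity on methods that do have an LPS
    tnr = tn / (tn + fp)   # specificity on methods that do not
    return (tpr + tnr) / 2

# Hypothetical counts over a test set of methods:
print(balanced_accuracy(tp=80, fp=10, tn=90, fn=20))  # ~0.85
```

BA is preferable to plain accuracy here because, in real projects, methods without logging statements usually far outnumber methods with them.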
\subsection{\textbf{RO3:} Provide logging description suggestions based on code clones and NLP models.} \textbf{\textit{Motivation}.} Based on the experimental results for predicting the location of logging statements in RO2 and the additional context available from the clone pairs, \textit{i.e.}, the logging statement description available from the original method, $MD_i$, we consider it a valuable research effort to explore whether it is also possible to predict the logging statements' \textit{description} automatically. With satisfactory performance, an automated tool that predicts the description of logging statements will be a great aid, as it can expedite the software development process and improve the quality of logging descriptions. \textbf{\textit{Approach}.} We base our method on the assumption that clone pairs tend to have similar logging statement descriptions. This assumption comes from our observations in predicting log statements for clone pairs. As logging descriptions explain the source code surrounding them, it is intuitive for similar code snippets to have comparable logging descriptions. Based on this assumption, we propose a deep learning-based method that combines code clones with NLP learning approaches (NLP CC'd). In particular, to generate the LSD for a logging statement in $MD_j$, we extract its corresponding code snippet and leverage LACCP to locate its clone pairs. In parallel, the NLP model provides next-word suggestions for the LSDs from the knowledge base available in the training set for each project. \textbf{\textit{Evaluation}.} To measure the accuracy of our method in suggesting the log description, we utilize BLEU~\cite{papineni2002bleu} and ROUGE~\cite{lin2004rouge} scores.
These scores are well-established for validating the usefulness of an auto-generated text in prior software engineering and machine learning research, such as comment and code suggestion~\cite{allamanis2018survey} and description prediction~\cite{he2018characterizing}. \textbf{\textit{Results}.} We experiment on seven open-source Java systems, and our analysis shows that by utilizing log-aware clone detection and NLP, our hybrid model (\textit{NLP CC'd}) achieves 40.86\% higher performance on BLEU and ROUGE scores for predicting LSDs when compared to the prior research~\cite{he2018characterizing}, and achieves a 6.41\% improvement over the No-NLP version~\cite{gholamian2021underreview}. \subsection{\textbf{RO4:} Utilize code clones for predicting other details of log statements such as log verbosity level and variables.} \textbf{\textit{Motivation}.} Besides the log statement location and its LSD, predicting other details of log statements, such as the \textit{log verbosity level (LVL)} and its \textit{variables (VAR)}, is a useful research effort and the focus of prior research~\cite{li2017log,anu2019approach,li2021deeplv}, as it further helps developers log more systematically and resolves suboptimal choices of log levels and variables~\cite{li2021deeplv}. \textbf{\textit{Approach}.} Our log-aware clone detection approach, LACCP, is readily extendable to predict the LVL and VAR alongside the LSD suggestion. Since we have access to the source code of the method that we are predicting the logging statement for and its clone pair code snippet, a reasonable starting point is to suggest the same LVL as that of its clone pair, and then augment it with additional learning approaches such as~\cite{li2017log,anu2019approach} for more sophisticated LVL prediction.
For VAR prediction, our approach can be augmented with deep learning~\cite{liu2019variables} and static analysis of the code snippet under consideration~\cite{yuan2012improving} to include log variable suggestions along with the predicted LSD. \textbf{\textit{Preliminary evaluation and results}.} Our preliminary analysis of the evaluated projects shows that code clones match in their verbosity levels in the range of 92\% to 97\%, which confirms that using the verbosity level of the clone pair, $MD_i$, is a good starting point for log verbosity level suggestions for $MD_j$. We are pursuing RO4 as our future work and will provide additional results and findings subsequently. \section{Discussion} In this section, we compare and discuss the significance of our approach in relation to other existing log prediction and suggestion techniques. \textbf{Method-level log prediction rationale.} Although clone detection (and subsequently, log statement prediction) can be performed at different granularity levels, such as files, classes, methods, and code blocks, method-level clones appear to be the most favorable points of refactoring for all clone types~\cite{kodhai2014method}. We emphasize that our approach also includes all of the logging statements nested inside code blocks within method definitions, \textit{viz.}, logging statements inside blocks such as \textit{if-else} and \textit{try-catch}.\looseness=-1 \textbf{Comparison.} Orthogonal to our research, prior efforts such as \cite{zhu2015learning}, \cite{jia2018smartlog}, and~\cite{li2020shall} have proposed learning approaches for logging statements' \textit{location} prediction, \textit{i.e.}, \textit{where to log}. The approaches in \cite{zhu2015learning} and \cite{jia2018smartlog} are focused on error logging statements (ELS), \textit{e.g.}, log statements in \textit{catch clauses}, and are implemented and evaluated on C\# projects.
Li \textit{et al.}~\cite{li2020shall} provide log location suggestions by classifying the logged locations into six code-block categories. Different from these works, our approach does not distinguish between error and normal logging statements, is evaluated on open-source Java projects, and provides logging statement suggestions at the method level by observing logging patterns in similar code snippets, \textit{i.e.}, clone pairs. \textbf{Significance.} Prior approaches~\cite{zhu2015learning,li2020shall} rely on extracting features and training a learning model on logged and unlogged code snippets. Thus, they can predict whether a new unlogged code snippet needs a logging statement by mapping its features to the learned ones. Although these methods initially appear similar to our approach in extracting log-aware features from code snippets~\cite{gholamian2020logging}, we believe our approach has an edge over the prior work: because we also have access to the clone pair of the code under development, \textit{i.e.}, $MD_i$ in $(MD_i, MD_{j})$, we can obtain and leverage the additional data from $MD_i$ to predict other aspects of log statements, \textit{e.g.}, the LSD, which the prior work is unable to do. The significance of our approach becomes apparent in LSD prediction, as we utilize the LSD of the clone pair as a starting point for suggesting the LSD of the new code snippet. Thus, our approach not only complements the prior work in providing logging suggestions for developers as they develop new code snippets, but it also has an edge over them by providing additional context for further prediction of LPS details, such as the LSD and the log's verbosity level.
\section{Summary of Contributions} The contributions of our research are as follows: \begin{enumerate*}[label=\protect\circled{\arabic*}] \item In RO1, we perform an experimental study on the logging characteristics of code clones and show the potential of utilizing clone pairs for logging suggestions. \item In RO2, we introduce a log-aware clone detection tool (\textit{LACCP})~\cite{gholamian2020logging} for log statements' \textit{`location'} prediction, resolve two clone detection shortcomings for log prediction, and provide experimentation on seven projects comparing LACCP with the general-purpose state-of-the-art clone detector Oreo~\cite{saini2018oreo}. \item In RO3, we first show the natural characteristics of software logs, which enables us to utilize our findings to apply NLP to LSD prediction~\cite{gholamian2021naturalness}. We then propose a deep-learning NLP approach, \textit{NLP CC'd}, that works in collaboration with \textit{LACCP} to automatically suggest log statements' descriptions. We calculate the BLEU and ROUGE scores for our auto-generated log statements' \textit{descriptions} by considering different sequences of LSD tokens, and compare our performance with the prior work~\cite{he2018characterizing}. \item Finally, as future work in RO4, we investigate log verbosity level and variable prediction based on the information available through code clone pairs. \end{enumerate*} Thus far, our research findings have been published for RO1 and RO2 in the \textit{ACM Symposium on Applied Computing (ACM SAC)}~\cite{gholamian2020logging} and the \textit{IEEE/ACM Conference on Mining Software Repositories (MSR)}~\cite{gholamian2021naturalness}, respectively. We have also evaluated the trade-offs associated with the cost of logging statements in our paper accepted at the \textit{International Symposium on Reliable Distributed Systems (SRDS)}~\cite{gholamian2021distributed}.
Lastly, the research paper summarizing our contributions for RO3 is currently under review~\cite{gholamian2021underreview}. \section{Conclusions and Future Work} The process of software logging is currently manual and lacks a unified guideline for choosing the location and content of log statements. In this research, with the goal of enhancing log statement automation, we present a study on the location and description of logging statements in open-source Java projects by applying code clones and deep-learning NLP models. We compare the performance of our proposed approaches, LACCP and NLP CC'd, for log location and description prediction, and show their superior performance compared to prior work. As our future work in RO4, we will provide automated suggestions for other details of the LPS, such as its verbosity level and variables. \bibliographystyle{IEEEtran}
\section{Introduction} The idea of applying physics methods to social phenomena goes back centuries, e.g. to the first (unsuccessful) attempt to establish mortality tables, involving the astronomer Halley, to the ``sociology'' of Auguste Comte, who taught analysis and mechanics around 1840, and to the 1869 book by Quetelet, ``Physique Sociale''. Majorana suggested applying quantum physics in 1942 \cite{majorana}. Some contemporary physicists \cite{weidlich,galam} have worked in the field for some decades. But it became a physics fashion about a dozen years ago, with opinion dynamics, applications of complex networks, etc. \cite{schweitzer}. Presumably the best review is still the one from Italy during Berlusconi's rule \cite{fortunato}, which killed the present author's chances to get a Nobel prize (in literature: science fiction) for his four articles in \cite{granada} on languages (p. 49), opinions (p. 56), retirement demography (p. 69) and Bonabeau hierarchies (p. 75). The field is now far too wide to be covered in a short review, and thus only a biased selection is presented here. Lecture notes of Fortunato \cite{forthist} start with a nice introduction to the more ancient history of sociophysics, and Galam wrote a recent book \cite{galambook}, with pages 75-77 on: The Soviet-Style Rewriting of the History of Sociophysics. We start with a discussion of why it may be useful to apply the physics research style to human beings, then we present the Schelling model as an example where sociophysics was lacking for decades. The following three sections review opinion dynamics, combat, and citations, followed (for readers outside statistical physics) by a critique of mean field theory. Econophysics is regarded here as outside of sociophysics and is ignored because of recent reviews; so are languages \cite{ssw,acs,newbook}, Penna ageing models \cite{newbook,penna}, networks \cite{cohen} and traffic jams \cite{schad}. \section{Does Sociophysics Make Any Sense?} People are not atoms.
We may be able to understand quite accurately the structure of the hydrogen atom, but who really understands their own marriage partner? Nevertheless, Empedokles in Sicily found more than two millennia ago that people are like fluids: some people mix easily, like water and wine (an ancient Greek crime against humanity), and some, like water and oil, do not mix. And a few months ago the German historian Imanuel Geiss died, who had described the decay of empires with Newton's law of gravitational forces (but disliked simulations to explain diplomatic actions during the few weeks before World War I). It is the law of large numbers which allows the application of statistical physics methods. If we throw one coin we cannot predict how it will fall, and if we look at one person we cannot predict how this person will vote, when this person will die, etc. But if we throw a thousand coins, we can predict that about 500 will fall on one side and the rest on the opposite side (except when we cheat, or the coin sticks in the mud of a sports arena). And when we ask a thousand randomly selected people we may get a reasonable impression of an upcoming election. Half a millennium ago, insurance against loss of ships in the Mediterranean trade became possible, and life insurance relies on mortality and the Gompertz law of 1825, according to which the adult probability of dying within the next year (better: next month \cite{gavr}) increases exponentially with age. Such insurance is possible because it relies on the large number of insured people: some get money from the insurance and most don't. My insurance got years ago most of my savings and now pays me a monthly pension until I die; the more trouble the journal referees make for my articles, the sooner I die and the less loss my insurance company will make with me. Only when all the banks and governments are coupled together by their debts does the law of large numbers no longer hold, since they all become one single unit \cite{aleks}.
(``Maastricht'' rules on sovereign debts had been broken by governments in Euroland since 1998, a decade before the Lehman crash.) Outside physics the method of agent-based computer simulations is fashionable \cite{billari}. It has nothing to do with 007 James Bond, but refers to methods simulating single persons etc. instead of averaging over all of them. Physicists have done that at least since 1953 with the Metropolis algorithm of statistical physics, also in most of the simulations listed here. The book \cite{billari} by non-physicists, written about simultaneously with and independently from one by physicists \cite{newbook}, covers similar fields and similar methods but barely overlaps in the references. Recent work from cognitive science and related disciplines is listed in \cite{cog}. Has sociophysics had practical applications? When I got the work of Galam, Gefen and Shapir \cite{ggs}, I told a younger colleague that I liked it. But after reading it he disliked it and remarked to me that the paper helps management to control its workers better if a strike is possible. In the three decades since then I have read about many strikes but not about any being prevented by this paper. Two decades later Galam \cite{chachacha} was criticised by other physicists for having helped terrorists with his percolation application to terror support. I am not aware that such percolation theory was applied in practice. However, a century ago physicists did not believe that nuclear energy could be used. Our own subsection ``Retirement Demography'' in ch. 6 of \cite{newbook} recommends immigration and higher retirement ages to balance the ageing of Europe; both aspects are highly controversial but I was not yet murdered. Helbing's \cite{galambook} simulation showing that a column before a door improves the speed of evacuation during a panic seems to me very practical. \section{Schelling Model for Urban Ghettos} The formation of urban ghettos is well known in the USA and elsewhere.
Harlem in Manhattan (New York) has been the most famous ``black'' district for nearly a century, extending over dozens of blocks in the north-south direction. Was it formed by conscious discrimination, e.g. from real-estate agencies, or was it the self-organised result of the preferences of residents to have neighbours of the same group? Of course, the Warsaw Ghetto, famous for its 1943 uprising, was formed by Nazi Germany. Four decades ago, Schelling \cite{schelling} showed by a simple Monte Carlo simulation (by hand, not by computer) of two groups A and B that a slight preference of A people for A neighbours, and of B people for B neighbours, suffices to form clusters of predominantly A and predominantly B on a square lattice with some empty sites, out of an initially random distribution. Statistical physicists would of course think of the standard Ising model on a square lattice to understand such a question, with A people corresponding to up spins and B people to down spins. Their ferromagnetic interaction gives a preference of A for A neighbours, and the same for B. The temperature introduces randomness. Simulations with Kawasaki dynamics (conserved magnetisation) have been made for decades. However, it took three decades before physicists took up the Schelling model; see \cite{ortmanns,sumour9} for an early and some recent physics publications. It seems quite trivial that the equilibrium distribution is no longer random if people select their residence with A/B preferences; but does it lead to ``infinite'' ghettos, i.e. to domains which grow to infinity if the studied lattice tends to infinity? This phase separation is well studied in the two-dimensional Ising model: for temperatures $T$ above a critical temperature $T_c$, only finite clusters are formed.
For temperatures below this critical temperature, one large domain consisting mainly of group A, and another large domain consisting mainly of group B, are formed after a simulation time proportional to a power of the lattice size. Schelling could not see that his model does not give large ghettos, only small clusters \cite{schelling}, as for $T > T_c$ in Ising models; but Jones \cite{jones} (from a sociology institute, publishing in a sociology journal) corrected that by introducing more randomness into the Schelling model, and Vinkovic (an astrophysicist) and Kirman (an economist) did it two decades after Jones. Then large ghettos are formed, as for Ising models at $T < T_c$. Nevertheless, the Jones paper today is cited much more rarely than Schelling, and mostly by physicists, not by his sociology colleagues (see www.newisiknowledge.com for the Science Citation Index). Only now do the physics and sociology communities show some cross-citations for the Schelling model. Of course, instead of merely two groups A and B one could look at several. Empirical data for the preferred neighbours among the four groups listed as White, Black, Hispanic and Asian in Los Angeles are given by \cite{clark}. Corresponding Potts generalisations of the Schelling-Ising version of \cite{ortmanns} were published earlier \cite{schulze}. For Schelling models on networks, one may break the links between nodes occupied by neighbours from different groups \cite{henry}, as done before for other networks \cite{hohnisch}. In summary, cooperation of physicists with sociologists could have advanced research progress by many years. \section{Opinion Dynamics} Much of the opinion simulation work, \cite{lorenz} and ch. 6 in \cite{newbook}, is based on the voter or majority-vote models \cite{liggett}, the negotiators of Deffuant et al \cite{deffuant}, the opportunists of Hegselmann and Krause \cite{hegs}, and the Sznajd missionaries \cite{sznajd}, the latter three all originating near the year 2000.
They check whether originally randomly distributed opinions converge towards one (``consensus''), two (``polarisation'') or more (``fragmentation'') shared opinions. (Warning: in some fields ``polarisation'' means a non-centric consensus, as in ferroelectrics.) See also \cite{galam}. The {\it voter} or majority-vote models \cite{liggett} are Ising-like: opinions are $+1$ or $-1$, and at each iteration everybody follows one randomly selected neighbour or the majority of the neighbourhood, respectively, except that with a probability $q$ (which corresponds to thermal noise) they refuse to do so. Ref.~\cite{liggett} gave a recent application. The {\it negotiators} of Deffuant et al \cite{deffuant} each have an opinion which can be represented by a real number or by an integer. (Opinions on more than one subject are possible \cite{jacob} but we first deal with one opinion only. Integer opinions are simplifications and often used in opinion polls when people are asked if they agree fully, partly, or not at all with an assertion.) Each agent interacts during one time step with a randomly selected other negotiator. If their two opinions differ, each opinion shifts partly towards that of the other negotiator, by a fraction of the difference. If that fraction is 1/2, they agree completely, which is less realistic than if they merely come closer (fraction $<$ 1/2). But if the two opinions are too far apart, they do not even start to negotiate and their opinions remain unchanged. Thus there are no periodic boundary conditions applied to opinions; in contrast to real politics, the extreme Left and the extreme Right do not cooperate. (Axelrod had studied some aspects already earlier \cite{axel}.) The {\it opportunists} also talk only with people who are not too far away from their own opinion (a real number). Each person at each time step takes the average opinion of all the people in the system who do not differ too much from their own past opinion.
Thus, in contrast to the binary interactions of negotiators, we have multi-agent interactions of opportunists. (Instead of opportunism one can also talk of compromise here, but that word applies better to the negotiators of Deffuant et al.) Finally, the missionaries of the Sznajd model \cite{sznajd} try to convince the neighbourhood of their own opinion. They succeed if and only if two neighbouring missionaries agree among themselves; then they force this opinion onto their neighbourhood (i.e. onto six neighbours for a square lattice). For only two opinions on the square lattice, one has a phase transition depending on the initial fraction of randomly distributed opinions: the opinion which initially had a (small) majority attracts the whole population to its side. In one dimension no such transition takes place \cite{sznajd}, just as in the Ising model. For a review of Sznajd models see \cite{sznajd2}. A review of both negotiators and opportunists was given by Lorenz \cite{lorenz}. More information on missionaries, negotiators and opportunists, also for more than one subject one has an opinion on \cite{jacob}, is given in \cite{newbook}. \bigskip A connection between the above opinion dynamics and econophysics is provided by modifications of the usual Ising model to simulate tax evasion. Spin up corresponds to honest tax payers, and spin down to people who cheat on their income tax declaration. (I am not an experimental physicist.) For $T > T_c$ without any modification, half of these Ising tax payers cheat. However, if every year with probability $p$ the declarations are audited and fraud is detected, and if discovered tax cheaters then become honest for $k$ consecutive years, the fraction of tax cheaters not surprisingly goes down towards zero for increasing $p$ and/or $k$ \cite{zaklan,limahohn}, also on various networks. The Journal of Economic Psychology even plans a special issue on tax evasion.
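A minimal Monte Carlo sketch of this tax-evasion dynamics may look as follows; it is an illustration in the spirit of \cite{zaklan,limahohn}, not their exact implementation, and all parameter values are hypothetical:

```python
import math
import random

def tax_evasion_fraction(L=32, T=3.0, p=0.05, k=10, sweeps=100, seed=1):
    """Square-lattice Ising sketch of tax evasion: spin +1 = honest,
    spin -1 = cheater.  Each 'year' (sweep) every cheater is audited
    with probability p; a caught cheater stays honest for k years."""
    rng = random.Random(seed)
    spin = [[rng.choice([1, -1]) for _ in range(L)] for _ in range(L)]
    frozen = [[0] * L for _ in range(L)]       # years of enforced honesty left
    J = 1.0
    for _ in range(sweeps):
        for _ in range(L * L):                 # one Monte Carlo sweep
            i, j = rng.randrange(L), rng.randrange(L)
            if frozen[i][j] > 0:
                continue                       # audited recently: stays honest
            h = J * (spin[(i + 1) % L][j] + spin[(i - 1) % L][j] +
                     spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            # heat-bath rule: probability of being honest (+1)
            if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * h / T)):
                spin[i][j] = 1
            else:
                spin[i][j] = -1
        for i in range(L):                     # yearly audit
            for j in range(L):
                frozen[i][j] = max(0, frozen[i][j] - 1)
                if spin[i][j] == -1 and rng.random() < p:
                    spin[i][j] = 1
                    frozen[i][j] = k
    return sum(row.count(-1) for row in spin) / (L * L)
```

For $T$ above the square-lattice $T_c \simeq 2.27$ and $p = 0$, about half the population cheats, while increasing $p$ and/or $k$ drives the cheater fraction towards zero, in line with the behaviour described above.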
According to www.taxjustice.net, July 2012, hundreds of Giga-Dollars of taxes due are not paid world-wide each year. \section{Wars and Lesser Evils} Reality is not as peaceful as a computer simulation, and World War II was presumably the most deadly of all wars, with World War I far behind. The ``guilt'' for World War I was hotly debated for decades; in contrast to many books and articles, the Versailles peace treaty of 1919 did not blame only Germany for this war. Richardson \cite{rich} used simple differential equations to explain this and other wars as coming from the other side's armaments and one's own dissatisfaction with the status quo, while the cost of armaments and war pushes for peace. Later work \cite{hermann} used human beings to simulate ten leaders during the weeks before World War I; these simulators made more peaceful decisions than the politicians of July 1914. Another work simulated only the German emperor and the Russian tsar \cite{pool}, while \cite{holsti} for a more complex study used a supercomputer of that time. Historians criticised that work because it relied on outdated history books which explained the war more by accidents than by intentions. Except for Richardson \cite{rich}, this early work is barely cited in physics journals. The creation of the anti-Hitler coalition of World War II was simulated by Axelrod and Bennett \cite{axel}. Their model contains numerous parameters describing the properties of European countries and their interrelations and thus is difficult to reproduce. Galam \cite{galamab} has more fundamental criticism of that work. \begin{figure}[hbt] \begin{center} \includegraphics[angle=-90,scale=0.48]{limamem153nn.ps} \end{center} \caption{Corner of a $300 \times 300$ square lattice with 1 \% Watts-Strogatz rewiring at intermediate times, with egoists (*), ethnocentrics (+), altruists (x) and cosmopolitans (square) \cite{hadz2}, showing domain formation within a large population.
The last three groups help others by an amount proportional to the amount of help they got from them during the five preceding time steps.} \end{figure} \begin{figure}[hbt] \begin{center} \includegraphics[angle=-90,scale=0.48]{solomon6.ps} \end{center} \caption{Time until revolution versus $T_c/T$. A straight line here means an Arrhenius law; the line in the figure corresponds to $L = 1001$, the plus signs to $L = 301$; also $L = 3001$ (for higher temperatures only) gave about the same flip times. From \cite{kindler}. } \end{figure} The decay of Yugoslavia led to the most murderous wars in Europe after World War II, particularly in Bosnia-Hercegovina 1992-1995, including genocide. The emergence of local fighting between three groups was simulated by Lim et al. \cite{lim} using a Potts model. Intergroup fighting is possible if the regions where one group dominates are neither too small nor too large. Lim et al. apply their model to the Bosnia-Hercegovina war but neglect the outside initiation and influence from Belgrade (Serbia) and Zagreb (Croatia) in that war \cite{courts}. Such influence was partially taken into account in a linguistic simulation later \cite{hadz}. Fig.~1 shows computer simulations of egoist, ethnocentric, altruistic and cosmopolitan behaviour in a population \cite{hadz2}. Often the Yugoslavia wars are described as ethnic. ``Ethnic'', defined e.g. through language, religion \cite{ausloos}, history, biology (race, ``blood'', DNA), is now part of international law through UN resolutions since 1992 against ``ethnic cleansing''. Ethnicity is often constructed or even imposed on people \cite{ethnic}, but the deaths and losses of homeland are real. How to win a war is another question. Kress \cite{kress} reviewed modern simulation methods for war and other armed conflicts. Mongin \cite{mongin} applied game theory to a military decision made by Napol\'eon before the battle of Waterloo, two centuries ago.
Mongin concludes that the decision was correct; nevertheless Napol\'eon lost and ABBA won. Revolutions may lead to war; the ones of 2011 in Tunisia and Egypt did not, and they inspired an Ising model for revolutions \cite{kindler}. Ising spins point up for people wanting change, and down for staying with the government. They are influenced by an up-field proportional to the number of up spins \cite{bornholdt}, and by a random local down field measuring the conservative tendency of each individual ``spin''. Initially all spins are down, and they flip irreversibly up by heat bath kinetics. After some time, spontaneously through thermal fluctuations and without the help of initial revolutionaries like Lenin and Trotsky, enough revolutionary opinions have developed to flip the magnetisation from negative to positive values. This time obeys an Arrhenius law proportional to $\exp({\rm const}/T)$, Fig.~2. The US War of Independence is the reason that the British game of football is called soccer in the USA. Ref.~\cite{football} confirmed that single stars in the team are not enough; one needs multi-player team coordination. And Ref.~\cite{janke} found that goals are not scored randomly in time; scoring one goal increases the chances for the scoring team to score another goal soon thereafter. (See \cite{sire} for lesser games.) The sad state of football in the author's home town, mentioned by the New York Times, prohibits further discussion. \section{Citations} The Science Citation Index (www.newisiknowledge.com) is expensive but useful. One can find which later journal articles cited a given paper or book, provided the company of the Institute of Scientific Information subscribes to that journal. Since the end of the 1960s I have checked my citations on it. But one should be aware of the fact that cited books are listed not at all or only under the name of the first author, while cited journal articles are also listed under the names of the further authors. And citing books are ignored completely.
For example, the most cited work of the late B.B. Mandelbrot is his 1982 book: The Fractal Geometry of Nature. The nearly 8000 citations can be found on the Web of Science under ``Cited Reference Search'', but if after ``Search'' and ``Create Citation Record'' one gets his whole list of publications, ranked by the number of citations, the book is missing there and a journal article with far fewer citations heads the ranked list. Thus it is dangerous to determine scientific quality by the number of citations or by the Hirsch index (h-index with ``Create Citation Record'') \cite{hirsch} as long as books are ignored. An author who knows their own books and their first authors can include the cited books on ``Cited Reference Search'', but automated citation counts like the h-index ignore them. If scholars get jobs or grants according to their h-index or other book-ignoring citation counts, this quality criterion will discourage them from writing books, and push them to publish in Science or Nature. This is less dangerous for physics than for historiography, but is seldom mentioned in the literature. (To determine the h-index, the ranked list of journal articles produced by ``Create Citation Record'' starts with the most-cited paper with $n_1$ citations, then comes the second-most cited one with $n_2$ citations, and in general the $r$-ranked article with $n_r$ citations. Thus $n_r$ decreases with increasing rank $r$. The h-index is that value of $r$ for which $n_r=r$.) Physicists have systematically analysed citation counts (instead of merely counting their own and those of their main enemies) at least since Redner 1998 \cite{redner}. His work was cited more than 500 times, about half as much as the later h-index \cite{hirsch}.
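For readers who prefer an algorithmic statement of this definition, here is a small Python sketch; since real citation counts are integers and an exact solution of $n_r = r$ need not exist, the h-index is implemented as the largest rank $r$ with $n_r \ge r$:

```python
def h_index(citations):
    """h-index: largest rank r such that the r-th most cited paper
    still has at least r citations (n_r >= r, with n_1 >= n_2 >= ...)."""
    ranked = sorted(citations, reverse=True)   # n_1, n_2, ... in decreasing order
    h = 0
    for r, n_r in enumerate(ranked, start=1):
        if n_r >= r:
            h = r          # condition still satisfied at rank r
        else:
            break          # n_r < r: no larger h is possible
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: the 4th-ranked paper has 4 citations
print(h_index([25, 8, 5, 3, 3]))   # 3
```

Note that, as discussed above, feeding this function the citation list of journal articles only, with all books omitted, can grossly understate an author's impact.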
Recent papers by physicists deal with universality in citation statistics \cite{cast}, the tails of the citation distribution \cite{sol}, co-author ranking \cite{ausloos2}, allocation among coauthors \cite{galamsci}, and clustering within citation networks \cite{ren}. Other criticisms of quality measures via citations are better known \cite{sil}. One should not forget, however, that experienced scientists often have to grade works of their students; are these evaluations more fair than citation counts? And what about peer review? My own referee reports are infallible, but those for my papers are nearly always utterly unfair. Measuring quality is difficult, but the one who evaluates others has to accept that (s)he is also evaluated by others. As US president and peace Nobel laureate Jimmy Carter said three decades ago: ``Life is unfair.'' And as the citation lists for Redner \cite{redner} and Hirsch \cite{hirsch} show, this problem is truly interdisciplinary. \section{Critique of Mean Field Theories} This section explains mean field theory for readers from outside statistical physics, as well as its dangers. If you want to get answers by paper and pencil, you can use the mean field approximation (also called molecular field approximation), which in economics corresponds to the approximation by a representative agent. Let us take the Ising model on an $L \times L$ square lattice, with spins (magnetic moments, binary variables, Republicans or Democrats) $S_i = \pm 1$ and an energy $$E = -J \sum_{<ij>} S_i S_j - H \sum_i S_i $$ where the first sum goes over all pairs of neighbor sites $i$ and $j$, each pair counted once. Thus the ``bond'' between sites $A$ and $B$ appears only once in this sum, and not twice. The second sum, proportional to the magnetic up field $H$, runs over all sites of the system. We approximate $S_j$ in the first sum by its average value, which is just the normalised magnetisation $m = M/L^2 = \sum_i S_i/L^2$.
Then the energy is $$E = -J \sum_{<ij>} S_i m - H \sum_i S_i = -H_{eff} \sum_i S_i$$ with the effective field $$H_{eff} = H + J\sum_j m = H + Jqm $$ where the sum runs over the $q$ neighbours only and is proportional to the magnetisation $m$. Thus the energy of spin $i$ is no longer coupled to other spins $j$ and equals $\pm H_{eff}$. The probabilities $p$ for up and down orientations are now $$p(S_i=+1) = \frac{1}{Z} \exp(H_{eff}/T) ; \quad p(S_i=-1) = \frac{1}{Z} \exp(-H_{eff}/T) $$ with $$Z = \exp(H_{eff}/T) + \exp(-H_{eff}/T) $$ and thus $$ m = p(S_i=+1) - p(S_i=-1) = \tanh(H_{eff}/T) = \tanh[(H + Jqm)/T]$$ with the function $\tanh(x) = (e^x - e^{-x})/(e^x+ e^{-x})$. This implicit equation can be solved graphically; for small $m$ and $H/T$, the expansion $\tanh(x) = x-x^3/3 + \dots$ gives $$H/T = (1-T_c/T)m + \frac{1}{3} m^3 + \dots ; \quad T_c = qJ$$ related to Lev Davidovich Landau's theory of 1937 for critical phenomena ($T$ near $T_c$, $m$ and $H/T$ small) near phase transitions. All this looks very nice except that it is wrong: in the one-dimensional Ising model, $T_c$ is zero instead of the mean field value $T_c=qJ$. The larger the number of neighbours and the dimensionality of the lattice, the more accurate the mean field approximation becomes. Basically, the approximation of replacing $S_iS_j$ by an average $S_im$ takes into account the influence of $S_j$ on $S_i$, but not the fact that this $S_i$ again influences $S_j$, creating a feedback. Thus, instead of using mean field approximations, one should treat each spin (each human being, ...) individually and not as an average. Outside of physics such simulations of many individuals are often called ``agent based'' \cite{billari}; presumably the first one was the Metropolis algorithm published in 1953 by the group of Edward Teller, who is historically known for the US hydrogen bomb and the Strategic Defense Initiative (Star Wars, SDI).
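Besides the graphical solution, the self-consistency equation $m = \tanh[(H + Jqm)/T]$ can be solved numerically by fixed-point iteration; a minimal sketch (the function name and parameters are ours, with $J=1$ and $q=4$ for the square lattice, so the mean-field $T_c = 4$):

```python
import math

def mean_field_m(T, H=0.0, J=1.0, q=4):
    """Iterate m -> tanh((H + J*q*m)/T) to a fixed point.

    Starting from m = 1 selects the up-magnetised branch below T_c = q*J.
    """
    m = 1.0
    for _ in range(100000):
        m_new = math.tanh((H + J * q * m) / T)
        if abs(m_new - m) < 1e-12:
            return m_new
        m = m_new
    return m

# Spontaneous magnetisation survives below the mean-field T_c = 4 ...
print(mean_field_m(T=2.0))   # close to 0.96
# ... and vanishes above it.
print(mean_field_m(T=6.0))   # essentially 0
```

Of course, as stressed above, this nicely converging calculation is still wrong in one dimension; it only tells us what the mean field approximation predicts.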
Of course, physicists are not the only ones who noticed the pitfalls of mean field approximations. For example, a historian \cite{siegel} years ago criticised political psychology and social sciences: ``There is no collective individual'' or ``generalised individual''. And common sense tells us that no German woman gave birth to 1.4 babies, even though this has been the average for about four decades. A medical application is screening for prostate cancer. Committees in the USA, Germany and France in recent months recommended against routine screening for PSA (prostate-specific antigen) in male blood, since this simple test is neither a sufficient nor a necessary indication for cancer. However, I am not average, and when the PSA concentration doubles every half year while tissue tests called biopsies fail to see cancer, then relying on PSA warnings is better than relying on averages. \section{Conclusion} In spite of decades of talk about the need for interdisciplinary research, the bibliographies on the same subject by authors from different disciplines do not show as much overlap as they should (and as they do in citation analysis). Perhaps the present (literature) review helps to improve this situation. \bigskip T. Hadzibeganovic, M. Ausloos, S. Galam, K. Kulakowski, S. Solomon helped with this manuscript.
\section{Introduction} Consider a gamble on a binary event, say, that Obama will win the 2012 US Presidential election, where every $x$ dollars risked earns $x b$ dollars in net profit if the gamble pays off. How many dollars $x$ of your wealth should you risk if you believe the probability is $p$? The gamble is favorable if $b p - (1-p) > 0$, in which case betting your entire wealth $w$ will maximize your expected profit. However, that's extraordinarily risky: a single stroke of bad luck loses everything. Over the course of many such gambles, the probability of bankruptcy approaches 1. On the other hand, betting a small fixed amount avoids bankruptcy but cannot take advantage of compounding growth. The \emph{Kelly criterion} prescribes choosing $x$ to maximize the expected compounding growth rate of wealth, or equivalently to maximize the expected logarithm of wealth. Kelly betting is asymptotically optimal, meaning that in the limit over many gambles, a Kelly bettor will grow wealthier than an otherwise identical non-Kelly bettor with probability 1 \cite{Breiman1961,Cover2006,Kelly1956,Thorp1969,Thorp1997}. Assume all agents in a market optimize according to the Kelly principle, where $b$ is selected to clear the market. We consider the implications for the market as a whole and properties of the market odds $b$ or, equivalently, the market probability $p_m = 1/(1+b)$. We show that the market prediction $p_m$ is a wealth-weighted average of the agents' predictions $p_i$. Over time, the market itself---by reallocating wealth among participants---adapts at the optimal rate with bounded log regret to the best individual agent. When a true objective probability exists, the market converges to it as if properly updating a Beta distribution according to Bayes' rule. These results illustrate that there is no ``price of anarchy'' associated with well-run prediction markets.
We also consider fractional Kelly betting, a lower-risk variant of Kelly betting that is popular in practice but has less theoretical grounding. We provide a new justification for fractional Kelly based on an agent's confidence. In this case, the market prediction is a confidence-and-wealth-weighted average that empirically converges to a time-discounted version of objective frequency. Finally, we propose a method for agents to learn their optimal fraction over time. \section{Kelly betting}\label{sec:frac-kelly} When offered $b$-to-1 odds on an event with probability $p$, the Kelly-optimal amount to bet is $f^{*} w$, where \[ f^{*}=\frac{bp-(1-p)}{b}\] is the optimal fixed fraction of total wealth $w$ to commit to the gamble. If $f^{*}$ is negative, Kelly says to avoid betting: expected profit is negative. If $f^{*}$ is positive, you have an information edge; Kelly says to invest a fraction of your wealth proportional to how advantageous the bet is. In addition to maximizing the growth rate of wealth, Kelly betting maximizes the geometric mean of wealth and asymptotically minimizes the mean time to reach a given aspiration level of wealth \cite{Thorp1997}. Suppose fair odds of $1/b$ are simultaneously offered on the opposite outcome (e.g., Obama will \emph{not} win the election). If $bp-(1-p)<0$, then betting on this opposite outcome is favorable; substituting $1/b$ for $b$ and $1-p$ for $p$, the optimal fraction of wealth to bet becomes $1-p-bp$. An equivalent way to think of a gamble with odds $b$ is as a prediction market with price $p_m=1/(1+b)$. The volume of the bet is specified by choosing a quantity $q$ of \emph{shares}, where each share is worth \$1 if the outcome occurs and nothing otherwise. The price represents the cost of one share: the amount needed to pay for a chance to win back \$1.
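The odds-form Kelly rule, including the switch to the opposite outcome when the edge is negative, can be sketched as follows (the helper name and example numbers are ours, not from the paper):

```python
def kelly_bet(p, b):
    """Kelly bet at b-to-1 odds given believed probability p.

    Returns (side, fraction): side +1 backs the event, -1 backs its
    complement (at odds 1/b), and 0 means neither side is favorable.
    """
    if b * p - (1 - p) > 0:
        return +1, (b * p - (1 - p)) / b   # f* = (bp - (1-p)) / b
    if 1 - p - b * p > 0:                  # from substituting 1/b and 1-p
        return -1, 1 - p - b * p
    return 0, 0.0

# Arbitrary numbers: p = 0.6 at even odds (b = 1) backs the event with
# roughly 20% of wealth; p = 0.2 backs the complement instead.
print(kelly_bet(0.6, 1.0))
print(kelly_bet(0.2, 1.0))
```

At $p = 1/(1+b)$ exactly, both edges vanish and the sketch returns no bet, consistent with the fair-odds boundary between the two cases.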
In this interpretation, the Kelly formula becomes \[ f^{*}=\frac{p-p_m}{1-p_m}.\] The optimal action for the agent is to trade $q^{*}=f^{*} w / p_m$ shares, where $q^{*}>0$ is a buy order and $q^{*}<0$ is a sell order, or a bet against the outcome. Note that $q^{*}$ is the optimum of expected log utility \[ p\ln((1-p_m)q+w)+(1-p)\ln(-p_m q+w).\] This is not a coincidence: Kelly betting is identical to maximizing expected log utility. \section{Market model} Suppose that we have a prediction market, where participant $i$ has a starting wealth $w_i$ with $\sum_i w_i = 1$. Each participant $i$ uses Kelly betting to determine the fraction $f^*_i$ of their wealth to bet, depending on their predicted probability $p_i$. We model the market as an auctioneer matching supply and demand, taking no profit and absorbing no loss. We adopt a competitive equilibrium concept, meaning that agents are ``price takers'', i.e., they do not consider their own effect on prices, if any. Agents optimize according to the current price and do not reason further about what the price might reveal about the other agents' information. An exception of sorts is the fractional Kelly setting, where agents do consider the market price as information and weigh it along with their own. A market is in competitive equilibrium at price $p_m$ if all agents are optimizing and $\sum_i q_i^{*}=0$, or every buy order and sell order are matched. We discuss next what the value of $p_m$ is. \section{Market prediction} In order to define the prediction market's performance, we must define its prediction $b$, or the equilibrium payoff odds reached when all agents are optimizing, and supply and demand are precisely balanced. Recall that the market's probability implied by the odds of $b$ is $p_m = 1/(1+b)$. We will show that $p_m$ is $\sum_{i}w_{i}p_{i}$. \subsection{Payout balance} The first approach we'll use is payout balance: the amount of money at risk must be the same as the amount paid out.
\begin{theorem}\label{thm:pricing}(Market Pricing) For all normalized agent wealths $w_i$ and agent beliefs $p_i$, $$ p_m = \sum_i p_i w_i $$ \end{theorem} \begin{proof} To see this, recall that $f_{i}^{*}=(p_i-p_m)/(1-p_m)$ for $p_{i}>p_m$. For $p_{i}<p_m$, Kelly betting prescribes taking the other side of the bet, with fraction \[ \frac{(1-p_{i})-(1-p_m)}{1-(1-p_m)}=\frac{p_m-p_{i}}{p_m}.\] So the market equilibrium occurs at the point $p_m$ where the payout is equal to the payin. If the event occurs, the payout to the winners is \[ (1 + b)\sum_{i:p_{i}>p_m}\frac{p_i-p_m}{1-p_m}w_{i} = \frac{1}{p_m}\sum_{i:p_{i}>p_m}\frac{p_i-p_m}{1-p_m}w_{i}. \] Thus we want \begin{align*} \frac{1}{p_m}\sum_{i:p_{i}>p_m}\frac{p_i-p_{m}}{1-p_m}w_{i} & = \sum_{i:p_{i}>p_m}\frac{p_{i}-p_m}{1-p_m}w_{i} \ + \\ & \qquad \: \sum_{i:p_{i}<p_m}\frac{p_m-p_{i}}{p_m}w_{i}, \quad \text{or}\\ \frac{1-p_m}{p_m}\sum_{i:p_{i}>p_m}\frac{p_i-p_{m}}{1-p_m}w_{i} & =\sum_{i:p_{i}< p_m}\frac{p_{m}-p_i}{p_m}w_{i}, \quad \text{or}\\ \sum_{i:p_{i}>p_m}(p_i-p_{m})w_{i} & =\sum_{i:p_{i}<p_m}(p_{m}-p_i)w_{i}, \quad \text{or} \\ \sum_{i}p_i w_{i} & = \sum_{i}p_{m}w_{i}. \end{align*} \noindent Using $\sum_{i}w_{i}=1$, we get the theorem. \end{proof} \subsection{Log utility maximization} An alternate derivation of the market prediction utilizes the fact that Kelly betting is equivalent to maximizing expected log utility. Let $q=x(b+1)$ be the gross profit of an agent who risks $x$ dollars, or in prediction market language the number of shares purchased. Then expected log utility is \begin{displaymath} E[U(q)] = p\ln((1-p_m)q+w) +(1-p)\ln(-p_m q+w). \end{displaymath} The optimal $q$ that maximizes $E[U(q)]$ is \begin{equation} q(p_m) = \frac{w}{p_m} \cdot \frac{p-p_m}{1-p_m}.
\label{eq:linop-1-dem} \end{equation} \begin{proposition} In a market of agents each with log utility and initial wealth $w$, the competitive equilibrium price is \begin{equation} p_m = \sum_i w_i p_i \label{eq:linop-price} \end{equation} where we assume $\sum_{i}w_{i}=1$, i.e., $w_i$ is normalized, not absolute, wealth. \label{thm:market-linop} \end{proposition} \textbf{Proof.} These prices satisfy $\sum_i q_i = 0$, the condition for competitive equilibrium (supply equals demand), by substitution. $\Box$ \smallskip This result can be seen as a simplified derivation of that by Rubinstein \cite{Rubinstein74,Rubinstein75,Rubinstein76} and is also discussed by Pennock and Wellman \cite{Pennock01-mfpo-tr,Pennock99-thesis} and Wolfers and Zitzewitz \cite{Wolfers2006}. \section{Learning Prediction Markets} Individual participants may have varying prediction qualities and individual markets may have varying odds of payoff. What happens to the wealth distribution and hence the quality of the market prediction over time? We show next that the market \emph{learns} optimally in two well-understood senses of optimal. \subsection{Wealth redistributed according to Bayes' Law} In an individual round, if an agent's belief is $p_{i}>p_m$, then they bet $\frac{p_{i}-p_m}{1-p_m}w_{i}$ and have a total wealth afterward dependent on $y$ according to: \begin{align*} \text{If}\quad y=1, & \quad \left(\frac{1}{p_m}-1\right)\frac{p_{i}-p_m}{1-p_m}w_{i}+w_{i}=\frac{p_{i}}{p_m}w_{i} \\ \text{If}\quad y=0, & \quad (-1)\frac{p_{i}-p_m}{1-p_m}w_{i}+w_{i}=\frac{1-p_{i}}{1-p_m}w_{i} \\ \end{align*} Similarly if $p_{i}<p_m$, we get: \begin{align*} \text{If}\quad y=1, &\quad (-1)\frac{p_m-p_{i}}{p_m}w_{i}+w_{i}=\frac{p_{i}}{p_m}w_{i} \\ \text{If}\quad y=0, &\quad \left(\frac{1}{1-p_m}-1\right)\frac{p_m-p_{i}}{p_m}w_{i}+w_{i}=\frac{1-p_{i}}{1-p_m}w_{i}, \end{align*} which is identical.
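These per-round updates are easy to verify numerically at the equilibrium price $p_m = \sum_i w_i p_i$: every agent ends the round with $(p_i/p_m)w_i$ on a success and $((1-p_i)/(1-p_m))w_i$ on a failure, whichever side it bet, and total wealth is conserved. A sketch with made-up wealths and beliefs:

```python
def settle_round(w, p, y):
    """Settle one Kelly-betting round at the equilibrium price p_m = sum(w_i * p_i).

    w: normalized wealths (summing to 1), p: beliefs, y: outcome (0 or 1).
    Returns post-round wealths: (p_i/p_m) w_i on success,
    ((1-p_i)/(1-p_m)) w_i on failure, regardless of which side agent i bet.
    """
    pm = sum(wi * pi for wi, pi in zip(w, p))
    if y == 1:
        return [wi * pi / pm for wi, pi in zip(w, p)]
    return [wi * (1 - pi) / (1 - pm) for wi, pi in zip(w, p)]

w = [0.5, 0.3, 0.2]      # made-up wealths ...
p = [0.8, 0.5, 0.1]      # ... and beliefs; here p_m = 0.57
after = settle_round(w, p, 1)
print(after, sum(after))  # wealth is only redistributed: the sum stays 1
```

In the example, the agent with the highest belief gains on a success while the others lose, but the auctioneer neither makes nor absorbs money.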
If we treat the prior probability that agent $i$ is correct as $w_i$, Bayes' law states that the posterior probability of choosing agent $i$ is $$ P(i\mid y=1) = \frac{P(y=1\mid i)P(i)}{P(y=1)} = \frac{p_i w_i}{p_m} = \frac{p_i w_i}{\sum_j p_j w_j}, $$ which is precisely the wealth computed above for the $y=1$ outcome. The same holds when $y=0$, and so Kelly bettors redistribute wealth according to Bayes' law. \subsection{Market Sequences} \label{sec:sequence} It is well known that Bayes' law is the correct approach for integrating evidence into a belief distribution, which shows that Kelly betting agents optimally summarize all past information if the true behavior of the world was drawn from the prior distribution of wealth. Often these assumptions are too strong---the world does not behave according to the prior on wealth, and it may act in a manner completely different from any one single expert. In that case, a standard analysis from learning theory shows that the market has low \emph{regret}, performing almost as well as the best market participant. For any particular sequence of markets we have a sequence $p_{t}$ of market predictions and $y_{t}\in\{0,1\}$ of market outcomes. We measure the accuracy of a market according to log loss as \[ L\equiv\sum_{t=1}^{T}I(y_{t}=1)\log\frac{1}{p_{t}}+I(y_{t}=0)\log\frac{1}{1-p_{t}}.\] Similarly, we measure the quality of a market participant $i$ making predictions $p_{it}$ as \[ L_{i}\equiv\sum_{t=1}^{T}I(y_{t}=1)\log\frac{1}{p_{it}}+I(y_{t}=0)\log\frac{1}{1-p_{it}}.\] So after $T$ rounds, the total wealth of player $i$ is \[ w_{i}\prod_{t=1}^{T}\left(\frac{p_{it}}{p_{t}}\right)^{y_{t}}\left(\frac{1-p_{it}}{1-p_{t}}\right)^{1-y_{t}},\] where $w_i$ is the starting wealth. We next prove a well-known theorem for learning in the present context (see for example~\cite{FSSW1997}).
\begin{theorem}For all sequences of participant predictions $p_{it}$ and all sequences of revealed outcomes $y_{t}$, \[ L\leq\min_{i}\left(L_{i}+\ln\frac{1}{w_{i}}\right).\] \end{theorem} This theorem is extraordinarily general, as it applies to \emph{all} market participants and \emph{all} outcome sequences, even when these are chosen adversarially. It states that even in this worst-case situation, the market performs only $\ln 1/w_i$ worse than the best market participant $i$. \begin{proof} Initially, we have that $\sum_{i}w_{i}=1$. After $T$ rounds, the total wealth of any participant $i$ is given by \[ w_{i}\prod_{t=1}^{T}\left(\frac{p_{it}}{p_{t}}\right)^{y_{t}}\left(\frac{1-p_{it}}{1-p_{t}}\right)^{1-y_{t}}=w_{i}e^{L-L_{i}}\leq1,\] where the last inequality follows from wealth being conserved. Thus $ \ln w_{i}+L-L_{i}\leq0$, yielding \[ L\leq L_{i}+\ln\frac{1}{w_{i}}.\] \end{proof} \section{Fractional Kelly Betting} \emph{Fractional Kelly betting} says to invest a smaller fraction $\lambda f^{*}$ of wealth for $\lambda < 1$. Fractional Kelly is usually justified on an ad-hoc basis as either (1) a risk-reduction strategy, since practitioners often view full Kelly as too volatile, or (2) a way to protect against an inaccurate belief $p$, or both \cite{Thorp1997}. Here we derive an alternate interpretation of fractional Kelly. In prediction market terms, the fractional Kelly formula is \[ \lambda\frac{p-p_m}{1-p_m}.\] With some algebra, fractional Kelly can be rewritten as \[ \frac{p'-p_m}{1-p_m}\] where \begin{equation} p' = \lambda p + (1-\lambda) p_m . \label{eq:update-glu} \end{equation} In other words, $\lambda$-fractional Kelly is precisely equivalent to full Kelly with revised belief $\lambda p + (1-\lambda)p_m$, or a weighted average of the agent's original belief and the market's belief.
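The identity $\lambda(p-p_m)/(1-p_m) = (p'-p_m)/(1-p_m)$ with $p' = \lambda p + (1-\lambda)p_m$ can be confirmed in a couple of lines (the numbers below are arbitrary):

```python
# lambda-fractional Kelly equals full Kelly run with the revised belief
# p' = lam * p + (1 - lam) * p_m.
def kelly_fraction(p, pm):
    return (p - pm) / (1 - pm)          # full-Kelly fraction at price pm

p, pm, lam = 0.7, 0.4, 0.25             # arbitrary belief, price, fraction
fractional = lam * kelly_fraction(p, pm)
p_revised = lam * p + (1 - lam) * pm
full_on_revised = kelly_fraction(p_revised, pm)
print(fractional, full_on_revised)      # the two coincide (up to rounding)
assert abs(fractional - full_on_revised) < 1e-12
```

Since $p' - p_m = \lambda(p - p_m)$ by construction, the equivalence holds for any $p$, $p_m$, and $\lambda$, not just this example.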
In this light, fractional Kelly is a form of confidence weighting where the agent mixes between remaining steadfast with its own belief ($\lambda=1$) and acceding to the crowd and taking the market price as the true probability ($\lambda=0$). The weighted average form has a Bayesian justification if the agent has a Beta prior over $p$ and has seen $t$ independent Bernoulli trials to arrive at its current belief. If the agent envisions that the market has seen $t'$ trials, then she will update her belief to $\lambda p + (1-\lambda)p_m$, where $\lambda = t/(t+t')$ \cite{Morris83,Pennock99-thesis,Rosenblueth92}. The agent's posterior probability given the price is a weighted average of its prior and the price, where the weighting term captures her perception of her own confidence, expressed in terms of the independent observation count seen as compared to the market. \section{Market prediction with fractional Kelly} When agents play fractional Kelly, the competitive equilibrium price naturally changes. The resulting market price is easily computed, as for full-Kelly agents. \begin{theorem}(Fractional Kelly Market Pricing) For all agent beliefs $p_i$, normalized wealths $w_i$, and fractions $\lambda_i$, \begin{equation} p_m = \frac{\sum_i \lambda_i w_i p_i}{\sum_l \lambda_l w_l} . \label{eq:linop-learn-price} \end{equation} \end{theorem} Prices retain the form of a weighted average, but with weights proportional to the product of wealth and self-assessed confidence. \begin{proof} The proof is a straightforward corollary of Theorem~\ref{thm:pricing}. In particular, we note that a $\lambda$-fractional Kelly agent of wealth $w$ bets precisely as a full-Kelly agent of wealth $\lambda w$. Consequently, we can apply Theorem~\ref{thm:pricing} with $w_i' = \frac{\lambda_i w_i}{\sum_l \lambda_l w_l}$ and $p_i' = p_i$ unchanged.
\end{proof} \section{Market dynamics with stationary objective frequency} The worst-case bounds above hold even if event outcomes are chosen by a malicious adversary. In this section, we examine how the market performs when the objective frequency of outcomes is unknown though stationary. The market consists of a single bet repeated over the course of $T$ periods. Unbeknown to the agents, each event unfolds as an independent Bernoulli trial with probability of success $\pi$. At the beginning of time period $t$, the realization of event $E_t$ is unknown and agents trade until equilibrium. Then the outcome is revealed, and the agents' holdings pay off accordingly. As time period $t+1$ begins, the outcome of $E_{t+1}$ is uncertain. Agents bet on the $t+1$ period event until equilibrium, the outcome is revealed, payoffs are collected, and the process repeats. In an economy of Kelly bettors, the equilibrium price is a wealth-weighted average (\ref{eq:linop-price}). Thus, as an agent accrues relatively more earnings than the others, its influence on price increases. In the next two subsections, we examine how this adaptive process unfolds; first, with full-Kelly agents and second, with fractional Kelly agents. In the former case, prices react exactly as if the market were a single agent updating a Beta distribution according to Bayes' rule. \subsection{Market dynamics with full-Kelly agents} \begin{figure} (a)\includegraphics[scale=0.9]{adapt-price-glu-2011} (b)\includegraphics[scale=0.9]{adapt-wealth-glu-2011} \caption {(a) Price (black line) versus the observed frequency (gray line) of the event over 150 time periods. The market consists of 100 full-Kelly agents with initial wealth $w_i=1/100$. (b) Wealth after 15 time periods versus belief for 100 Kelly agents. The event has occurred in 10 of the 15 trials. 
The solid line is the posterior Beta distribution consistent with observing 10 successes in 15 independent Bernoulli trials.} \label{fig:adapt-price-glu} \end{figure} Figure~\ref{fig:adapt-price-glu}.a plots the price over 150 time periods, in a market composed of 100 Kelly agents with initial wealth $w_i=1/100$, and $p_i$ generated randomly and uniformly on $(0,1)$. In this simulation the true probability of success $\pi$ is $0.5$. For comparison, the figure also shows the \emph{observed frequency}, or the number of times that $E$ has occurred divided by the number of periods. The market price tracks the observed frequency extremely closely. Note that price changes are due entirely to a transfer of wealth from inaccurate agents to accurate agents, who then wield more power in the market; individual beliefs remain fixed. Figure~\ref{fig:adapt-price-glu}.b illustrates the nature of this wealth transfer. The graph provides a snapshot of agents' wealth versus their belief $p_i$ after period 15. In this run, $E$ has occurred in 10 out of the 15 trials. The maximum in wealth is near 10/15 or 2/3. The solid line in the figure is a Beta distribution with parameters $10+1$ and $5+1$. This distribution is precisely the posterior probability of success that results from the observation of 10 successes out of 15 independent Bernoulli trials, when the prior probability of success is uniform on (0,1). The fit is essentially perfect, and can be proved in the limit since the Beta distribution is conjugate to the Binomial distribution under Bayes' Law. Although individual agents are not adaptive, the market's composite agent computes a proper Bayesian update. 
Specifically, wealth is reallocated proportionally to a Beta distribution corresponding to the observed number of successes and trials, and price is approximately the expected value of this Beta distribution.\footnote{As $t$ grows, this expected value rapidly approaches the observed frequency plotted in Figure~\ref{fig:adapt-price-glu}.} Moreover, this correspondence holds regardless of the number of successes or failures, or the temporal order of their occurrence. A kind of collective Bayesianity \emph{emerges} from the interactions of the group. We also find empirically that, even if not all agents are Kelly bettors, among those that are, wealth is still redistributed according to Bayes' rule. \subsection{Market dynamics with fractional Kelly agents} \begin{figure} (a)\includegraphics[scale=0.9]{adapt-price-glu-learn-2011}\\ (b) \includegraphics[scale=0.9]{adapt-price-glu-learn-disc-2011} \caption{(a) Price (black line) versus observed frequency (gray line) over 150 time periods for 100 agents with Kelly fraction $\lambda=0.2$. As the frequency converges to $\pi=0.5$, the price remains volatile. (b) Price (black line) versus discounted frequency (gray line), with discount factor $\gamma=0.96$, for the same experiment as (a).} \label{fig:adapt-price-glu-learn} \end{figure} In this section, we consider fractional Kelly agents who, as we saw in Section~\ref{sec:frac-kelly}, behave like full Kelly agents with belief $\lambda p + (1-\lambda)p_m$. Figure~\ref{fig:adapt-price-glu-learn}.a graphs the dynamics of price in an economy of 100 such agents, along with the observed frequency. Over time, the price remains significantly more volatile than the frequency, which converges toward $\pi=0.5$. Below, we characterize the transfer of wealth that precipitates this added volatility; for now concentrate on the price signal itself. 
Inspecting Figure~\ref{fig:adapt-price-glu-learn}.a, price changes still exhibit a marked dependence on event outcomes, though at any given period the effect of recent history appears magnified, and the past discounted, as compared with the observed frequency. Working from this intuition, we attempt to fit the data to an appropriately modified measure of frequency. Define the \emph{discounted frequency} at period $n$ as \begin{equation} d_n = \frac{\sum_{t=1}^n \gamma^{n-t} (1_{E(t)})} {\sum_{t=1}^n \gamma^{n-t} (1_{E(t)}) + \sum_{t=1}^n \gamma^{n-t} (1_{\overline{E(t)}}) }, \label{eq:discounted-frequency} \end{equation} where $1_{E(t)}$ is the indicator function for the event at period $t$, and $\gamma$ is the \emph{discount factor}. Note that $\gamma=1$ recovers the standard observed frequency. Figure~\ref{fig:adapt-price-glu-learn}.b illustrates a very close correlation between discounted frequency, with $\gamma=0.96$ (hand tuned), and the same price curve of Figure~\ref{fig:adapt-price-glu-learn}.a. While standard frequency provides a provably good model of price dynamics in an economy of full-Kelly agents, discounted frequency (\ref{eq:discounted-frequency}) appears a better model for fractional Kelly agents. To explain the close fit to discounted frequency, one might expect that wealth remains dispersed---as if the market's composite agent witnesses fewer trials than actually occur. That's true to an extent. Figure~\ref{fig:adapt-wealth-glu-learn-150} shows the distribution of wealth after 69 successes have occurred in 150 trials. Wealth is significantly more evenly distributed than a Beta distribution with parameters 69+1 and 81+1, also shown. However, the stretched distribution can't be modeled precisely as another, less-informed Beta distribution. 
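The discounted frequency of Equation~(\ref{eq:discounted-frequency}) reduces to a pair of exponentially weighted running sums; a minimal sketch (the function name and example sequence are ours):

```python
def discounted_frequency(outcomes, gamma=0.96):
    """Discounted frequency d_n with discount factor gamma.

    outcomes: 0/1 event indicators, oldest first; gamma = 1 recovers the
    plain observed frequency.  The recurrence builds up the weights
    gamma^(n-t) incrementally, so no explicit powers are needed.
    """
    num = den = 0.0
    for y in outcomes:
        num = gamma * num + y
        den = gamma * den + 1.0
    return num / den

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]                # invented outcome sequence
print(discounted_frequency(outcomes, gamma=1.0))   # plain frequency: 0.5
print(discounted_frequency(outcomes, gamma=0.5))   # recent zeros pull it below 0.5
```

With $\gamma$ well below 1 the most recent outcomes dominate, mimicking the magnified effect of recent history visible in the fractional-Kelly price curve.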
\begin{figure} \includegraphics[scale=0.9]{adapt-wealth-glu-learn-150-2011} \caption{Wealth $w_i$ versus belief $p_i$ at period 150 of the same experiment as Figure~\ref{fig:adapt-price-glu-learn} with 100 agents with Kelly fraction $\lambda=0.2$. The observed frequency is 69/150 and the solid line is $\mathtt{Beta}(69+1,81+1)$. The wealth distribution is significantly more evenly dispersed than the corresponding Beta distribution.} \label{fig:adapt-wealth-glu-learn-150} \end{figure} \section{Learning the Kelly fraction} In theory, a rational agent playing against rational opponents should set their Kelly fraction to $\lambda=0$, since, in a rational expectations equilibrium \cite{Grossman81}, the market price is by definition at least as informative as any agent's belief. This is the crux of the no-trade theorems \cite{Mil:82}. Despite the theory \cite{Gea:82}, people do agree to disagree in practice and, simply put, trade happens. Still, placing substantial weight on the market price is often prudent. For example, in an online prediction contest called ProbabilitySports, 99.7\% of participants were outperformed by the unweighted average predictor, a typical result.\footnote{\tt\small http://www.overcomingbias.com/2007/02/how\_and\_when\_to.html} In this light, fractional Kelly can be seen as an experts algorithm \cite{Cesa-Bianchi1997} with two experts: yourself and the market. We propose dynamically updating $\lambda$ according to standard experts algorithm logic: when you're right, you increase $\lambda$ appropriately; when you're wrong, you decrease $\lambda$.
This gives a long-term procedure for updating $\lambda$ that guarantees: \begin{itemize} \item You won't do too much worse than the market (which by definition earns 0) \item You won't do too much worse than Kelly betting using your original prior $p$ \end{itemize} For example, if you allocate an initial weight of $0.5$ to your predictions and $0.5$ to the market's prediction, then the regret guarantee of Section~\ref{sec:sequence} implies that at most half of all wealth is lost. \section{Discussion} We've shown something intuitively appealing here: self-interested agents with log wealth utility create markets which learn to have small regret according to log loss. There are two distinct ``log''s in this statement, and it's appealing to consider what happens when we vary these. When agents have some utility other than log wealth utility, can we alter the structure of a market so that the market dynamics make the market price have low log loss regret? And similarly, if we care about some other loss---such as squared loss, 0/1 loss, or a quantile loss---can we craft a marketplace such that log wealth utility agents achieve small regret with respect to these other losses? What happens in a market without Kelly bettors? This can't be described in general, although a couple of special cases are relevant. When all agents have constant absolute risk aversion, the market computes a weighted geometric average of beliefs~\cite{Pennock99-thesis,Pennock01-mfpo-tr,Rubinstein74}. Another case is when one of the bettors acts according to Kelly and the others act in some more irrational fashion: here, the basic Kelly guarantee implies that the Kelly bettor will come to dominate non-Kelly bettors with equivalent or worse log loss. If non-Kelly agents have a better log loss, the behavior can vary, possibly imposing greater regret on the marketplace if the Kelly bettor accrues the wealth despite a worse prediction record.
For this reason, it may be desirable to make Kelly betting an explicit option in prediction markets. \bibliographystyle{abbrv}
\section{Introduction} A series of recent achievements \cite{Delft,Vienna,Boulder} have convincingly demonstrated that Bell's Inequalities (BI) are violated, and that all previous ``loopholes'' can be closed, provided that they are experimentally testable \cite{Bell-EPR,aaview}. One can thus conclude that Bell's Hypotheses (BH), i.e. the physical and mathematical assumptions leading to BI, do not correspond to an acceptable description of nature. The precise implications of this statement remain open, especially if one asks about the resulting description of physical reality, as offered by Quantum Mechanics (QM). Quoting for instance Scott Aaronson \cite{Scott}, one would have to choose between describing the ``reality'' behind quantum processes via the Many-Worlds Interpretation or Bohmian mechanics, or, following Bohr's Copenhagen Interpretation, refusing to discuss the ``reality'' at all. Here we want to move away from this apparent dilemma, by considering that there is little to change in Bohr's Copenhagen Interpretation to obtain a fully consistent ``quantum realism'', compatible with QM and with the above experiments, but also with physical realism, defined as the statement that {\it the goal of physics is to study entities of the natural world, existing independently from any particular observer's perception, and obeying universal and intelligible rules.}\footnote{There is a very old but still alive philosophical debate about what comes first: Are universal and intelligible rules a meaningful idealization of empirical reality, which ought to be considered as the only reality? This approach can be called the Aristotelian point of view. Or are universal and intelligible rules the ultimately real, existing on their own in the realm of Plato's world of Forms? This is usually viewed as the Platonistic approach.
We do not have to take a position in this debate, since in our definition of physical reality, the ``entities of the natural world'' (Aristotelian reality) and the ``universal and intelligible rules'' (Platonistic reality) are both needed. As physicists we take for granted that the material world has an objective existence independent of observers, and that mathematical concepts are crucial to describe its properties. } This {\bf quantum realism} has been presented in ref. \cite{FooP}, under the acronym CSM, standing for Contexts, Systems and Modalities. Here we will briefly summarize its main features, and discuss in more detail how to use it to better understand the failure of BH. As is well known, this failure of BH corresponds to a rejection of local realism, but not - as we will show - of physical realism. It can rather be considered as evidence for a quantum realism, which is clearly different from classical realism, and which has some specific non-local features - however, these features have nothing to do with any ``spooky action at a distance''. Generally speaking, the compatibility of QM with physical realism has been much debated in the literature \cite{EPR,EPR-wave,Bohr,Heisenberg,Bell,Moon,Contextuality,pg2,Mermin,PBR}, giving rise to many different interpretations of QM \cite{Frank}. In our approach the quantum formalism and physical realism can perfectly coexist, at the price of a subtle but deep change in what is meant by physical properties: they are no longer considered as properties of the system itself, but are jointly attributed to the system and to the context in which it is embedded (definitions will be given below). We will also show that this ontological change has strong links with quantization as a basic physical phenomenon, and that this can explain why QM must be a probabilistic theory.
This article is closely related to \cite{FooP}, with some parts condensed and others expanded, in order to spell out how the CSM approach explains quantum non-locality. \section{System, context, and modalities} To define an ontology within the physical framework we are interested in, we will start with the question: which phenomena can we predict with certainty, and obtain repeatedly? Here certainty and repeatability of phenomena will be used to provide necessary conditions for defining a ``state''. Such an approach, supported by quantum experiments, has a clear relationship with the criteria for physical reality given in 1935 by Einstein, Podolsky and Rosen (EPR) \cite{EPR} -- but the ``object'' to which it applies will be quite different \cite{Bohr}. Our quantum ontology involves three different entities. First comes the \textbf{system}, that is, a subpart of the world that is isolated well enough to be studied. The system is in contact with other systems, which can be a measuring device or an environment - there is no need to be more specific at this point. The ensemble of these other systems will be called a \textbf{context}. A given context corresponds to a given set of questions, which can be asked jointly of the system about its physical properties. A set of answers that can be predicted with certainty and obtained repeatedly within such a context will be called a \textbf{modality}. Given these definitions, let us bind them together by the following rule: {\bf In QM, modalities are attributed jointly to the system and the context. } This principle will be called ``CSM'', referring to the combination of Context, System, and Modality. As a set of certain and repeatable phenomena, a modality fulfills the above conditions for the objective definition of a quantum state, and within the usual QM formalism (which is not here yet), a modality corresponds to a pure state.
On the other hand, the context is classical, in the sense that no other context has to be specified to define its state; within the usual QM formalism, it corresponds to the parameters defining the observables as operators. We note that neither size nor any other quantitative criterion has been used to draw the quantum-classical boundary: the quantum vs classical behavior is only related to the CSM principle itself, i.e., to the very definition of a modality. Taking a single polarized photon as an example\footnote{Interferences with polarized light (or, in the quantum domain, with photon polarization) provide the simplest illustration of our approach. For interferences in the spatial domain, it is more convenient to use also two-mode systems, like the Mach-Zehnder interferometer \cite{wheeler}, where the contexts are either (i) the which-path detection within the interferometer (with two mutually exclusive modalities: upper path or lower path), or (ii) the interferometer outcome detection. In quantum information, such a system is also known as a ``dual rail qubit''. Other interferometers, like Young's slits or Fresnel's biprism \cite{cachan}, are more complicated to analyse because they are multimode systems, with ``fringes'' where the interference phase depends on the position at the detection screen. Then the outcome can be analyzed in a larger Hilbert space; in any case, the structure of mutually exclusive modalities in a given context, and incompatible modalities between different contexts, is a very general framework for all quantum interference experiments.}, the system is the photon, the $\theta$-oriented polarizer is the context, and the two mutually exclusive modalities in this context are either ``transmitted'' or ``reflected''. In the CSM perspective, a photon does not ``own'' a polarization, but the ensemble photon-polarizer does. If the context is known, and if the system is available, a modality defined in this same context can be recovered without error.
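As a small numerical sketch of this error-free recovery (assuming the standard Malus law $p=\cos^2(\theta_{\rm context}-\theta_{\rm modality})$ for photon polarization, which is not derived in the text above but is reproduced by the usual formalism):

```python
import math

def p_transmit(theta_modality, theta_context):
    # Malus law: probability that a photon carrying the modality
    # "transmitted by a polarizer at theta_modality" is transmitted
    # by a polarizer oriented at theta_context
    return math.cos(theta_context - theta_modality) ** 2

# same context: the modality is recovered with certainty
same = p_transmit(0.0, 0.0)            # 1.0
# incompatible context (polarizer rotated by 45 degrees): the outcome
# is irreducibly probabilistic
other = p_transmit(0.0, math.pi / 4)   # approximately 0.5
```

When the contexts coincide the outcome is certain; when they differ, only a probability can be given, as discussed in the next section.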
This property has been exploited for years in quantum communications, and provides the core of quantum cryptography protocols \cite{BB84}. Here, we draw the consequences of this behavior in ontological terms. The resulting ontology is clearly different from the classical one, where it is expected that a state should ``exist'' independently of any context. But even if CSM is fundamentally non-classical, physical realism is not lost: it still pertains to the ensemble made of context, system, and modality. Objectivity, defined as the independence from any particular observer's perception, is still guaranteed, but {\it the ``object'' comprises both the system and the context, and its ``properties'' are modalities \cite{Contextuality,pg2}}. \section{Quantization and probabilities} Now, a basic feature is that in a given context, the modalities are ``mutually exclusive'', meaning that if one modality is true, the others are wrong. On the other hand, modalities obtained in different contexts are generally not mutually exclusive: they are said to be ``incompatible'', meaning that if one modality is true, one cannot tell whether the others are true or wrong. This terminology applies to modalities, not to contexts, which are classically defined: changing the context results from changing the measurement apparatus at the macroscopic level, that is, ``turning the knobs''. These definitions allow us to state the following quantization principle: {\bf (i) For each well-defined system and context, there is a discrete number $N$ of mutually exclusive modalities; the value of $N$ is a property of the system within the set of all relevant contexts, but does not depend on any particular context.
(ii) Modalities defined in different contexts are generally not mutually exclusive, and they are said to be ``incompatible''.} Otherwise stated, whereas infinitely many questions can be asked, corresponding to all possible contexts, only a finite number $N$ of mutually exclusive modalities can be obtained in any of them\footnote{This principle is reminiscent of other approaches which bound the information extractable from a quantum system \cite{Rovelli,AZ}. However, in the realist perspective we chose, quantization does not have a purely informational character, but characterizes reality itself.}. An essential consequence is that it is impossible to get more details on a given system by combining several contexts, because this would create a new context with more than $N$ mutually exclusive modalities, contradicting the above quantization principle. As shown in \cite{FooP}, this implies that quantum mechanics must be a probabilistic theory, not due to any ``hidden variables'', but due to the ontology of the theory. Looking for instance at photon polarization, the number $N=2$ makes it impossible to define a (certain and repeatable) modality corresponding to the photon being transmitted through a polarizer oriented at $0^{\circ}$ {\bf and} through a polarizer oriented at $45^{\circ}$, because then there would be 4 such modalities, in contradiction with $N=2$. Therefore the only relevant question to be answered by the theory is: given an initial modality in context $C_1$, what is the {\it conditional probability} for obtaining another modality when the context is changed from $C_1$ to $C_2$? This probabilistic description is the unavoidable consequence of the impossibility of defining a unique context making all modalities mutually exclusive, as could be done in classical physics. It is therefore a joint consequence of the quantization and CSM principles, i.e.
that modalities are quantized, and require a context to be defined\footnote{With different (non-ontological) approaches, many authors have emphasized the importance of contexts in QM, see e.g. \cite{c1,c2}.}. \section{About the EPR-Bell argument} \subsection{The EPR-Bohm argument} We can now discuss in more detail the EPR argument \cite{EPR}\footnote{Here we consider the EPR-Bohm argument with spin 1/2 particles, rather than the original EPR argument with wave functions, which has some specific features discussed elsewhere \cite{EPR-wave}.} and Bell's theorem \cite{aaview,Bell-EPR}. To do so, let us consider two spin 1/2 particles in the singlet state, shared between Alice and Bob. The singlet state is a modality among four mutually exclusive modalities defined in a context relevant for the two spins, where measurements of the total spin (and any component of this spin) will certainly and repeatedly give a zero value. On the other hand, the singlet state is incompatible with any modality attributing definite values to the spin components of the two separate particles in their own (spatially separated) contexts. According to the previous section, the singlet modality is thus certain and repeatable in its own context (e.g., measurement of the total spin), but can only provide probabilities for the values of the spin components of the two separate particles. \subsection{What happens on Bob's side?} Now, let us assume that Alice performs a measurement on her particle, far from Bob's particle. Alice's result is random as expected, but what happens on Bob's side? Since Bob's particle is far away, the answer is simply that nothing happens. How, then, to explain the strong correlation between measurements on the two particles? By the fact that after her measurement, Alice can predict with certainty the state of Bob's particle; however, this certainty applies jointly to the new context (owned by Alice) and to the new system (owned by Bob).
The so-called ``quantum non-locality'' arises from this separation, and the hidden variables from the impossible attempt to attribute properties to Bob's particle only, whereas properties must be attributed jointly to Alice's context and Bob's system. Getting them together is required for any further step, hence the irrelevance of any influence on Bob's system following Alice's measurement. Here the separation between context and system is particularly obvious and crucial, since they are in different places. \subsection{What and where is the ``reality''?} According to the above reasoning, after Alice's measurement on one particle from a pair of particles in a singlet state, the ``reality'' is a modality for Bob's particle, within Alice's context. But Bob may also do a measurement, independently from Alice, and then the ``reality'' will be a modality for Alice's particle, within Bob's context. Does that mean that we have two ``contradictory'' realities? Actually no, because these realities are contextual \cite{Contextuality,pg2}: for instance, Alice's modality tells that if Bob uses the same context as Alice, he will find with certainty a result opposite to Alice's (given the initial singlet state). This statement is obviously true, as is the one obtained by exchanging Alice and Bob. But if Bob does a measurement in another context (different from Alice's), then one gets a probabilistic change of context for an $N=2$ system, as described before. If Alice and Bob both do measurements with different orientations of their analyzers, the simplest reasoning is to consider the complete context for both particles, which is initially a joint context (with a modality being the singlet state) and finally two separated contexts, again with 4 possible modalities due to the quantization postulate. Then this is a probabilistic change of context for an $N=4$ system, again with the same result.
\subsection{Where does CSM differ from Bell's hypothesis?} It is interesting to write a few equations about these initial, ``intermediate'' and final modalities, because this allows us to see more explicitly where CSM differs from Bell's hypothesis, even before the quantum formalism is introduced. So let us denote $a_i$, $b_j$ the modalities with results $i, \; j = \pm 1$ for some orientation (context) $a$ for Alice, and $b$ for Bob. Given some ``hidden variables'' $\lambda$, and using the vertical bar ``$|$'' as the usual notation for conditional probabilities $p(X|Y)$, the core of Bell's hypothesis is to assume the factorisability condition: \begin{equation} p(a_i, b_j | \lambda) = p(a_i | \lambda) \; p(b_j | \lambda) \label{bellhyp} \end{equation} The equivalent CSM equations, given the initial joint modality $\mu$, are, for Alice, who knows $\mu$ and $a_i$, \begin{equation} p(a_i, b_j | \mu) = p(a_i | \mu) \; p(b_j | \mu, a_i) \label{csmalice} \end{equation} whereas for Bob, who knows $\mu$ and $b_j$, they are \begin{equation} p(a_i, b_j | \mu) = p(a_i | \mu, b_j) \; p(b_j | \mu). \label{csmbob} \end{equation} It is clear that Eqs. (\ref{csmalice}, \ref{csmbob}) differ from Bell's hypothesis Eq. (\ref{bellhyp}), and therefore Bell's inequalities can be violated in the CSM framework, without requiring any action at a distance or faster-than-light signalling. However, there is some non-locality, in the sense that the result on one side depends on the result on the other side; but this arises only through a (local) redefinition of the context, not through any influence at a distance on the remote particle.
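These statements can be checked numerically. The sketch below is illustrative only: it assumes the standard singlet joint probabilities $p(a_i,b_j)=(1-ij\cos(a-b))/4$, which the quantum formalism supplies once introduced, and verifies both that they violate the CHSH bound of 2 (so no factorisable model of the form \eqref{bellhyp} can reproduce them) and that each one-sided marginal stays uniform, so no signalling is possible.

```python
import math
from itertools import product

def joint(a, b):
    # singlet joint probabilities p(i, j | a, b) = (1 - i*j*cos(a - b)) / 4
    return {(i, j): (1 - i * j * math.cos(a - b)) / 4
            for i, j in product((+1, -1), repeat=2)}

def E(a, b):
    # correlation <ij>; for the singlet this equals -cos(a - b)
    return sum(i * j * p for (i, j), p in joint(a, b).items())

# analyzer settings giving the maximal quantum violation
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
# |S| = 2*sqrt(2) > 2: no factorisable p(a_i|lam) p(b_j|lam) can do this

# no-signalling: Alice's marginal p(a_i = +1) is 1/2 for any setting of Bob
alice_marginal = sum(p for (i, j), p in joint(a1, b1).items() if i == +1)
```

The marginal is independent of Bob's setting, matching the claim that the non-locality involves no influence at a distance on the remote particle.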
Again, it is essential to consider that the modality belongs jointly to the particle(s) {\bf and} to the context, and not to the particle(s) only; otherwise one would be led to Bell's hypothesis.\footnote{This argument sheds light on the recent animated exchange between Tim Maudlin and Reinhard Werner, see for instance arXiv:1408.1826, arXiv:1408.1828, arXiv:1411.2120, https://tjoresearchnotes.wordpress.com/2013/05/13/guest-post-on-bohmian-mechanics-by-reinhard-f-werner/. \\ In this rich discussion it is correctly pointed out that an essential hypothesis of ``classicality'' (or classical realism) is embedded in Bell's hypothesis. But according to CSM, it is not correct to claim that removing this hypothesis, and moving to some form of quantum realism, should eliminate all problems with locality. Everything will be fine as far as relativistic causality is concerned, but a fully non-classical form of non-locality will remain, due to the fact that the context and the system can be in different places. This would make no sense classically, but it is essential in a quantum framework, and explains why Bell's hypotheses do not hold. The CSM approach is the best way we know to spell out this specifically quantum non-locality, which does not imply any ``spooky action at a distance''.} \subsection{Does CSM agree with QM, and why?} Another important consequence is that if Alice and Bob both do measurements, their realities must ultimately agree, since there will be a unique final modality $(a_i, b_j)$. Therefore their predictions must also agree, and one must have $$p(a_i, b_j | \mu) = p(a_i | \mu) \; p(b_j | \mu, a_i) = p(a_i | \mu, b_j) \; p(b_j | \mu). $$ These equations are just the same as the ones we would obtain by the usual ``instantaneous reduction of the wave packet'', though in our reasoning there is no wave packet and no reduction, but only a measurement performed by either Alice or Bob on the known initial modality $\mu$.
Moreover, if we admit that $(\mu, a_i)$ is a new modality for Bob, and $(\mu, b_j)$ is a new modality for Alice, then $p(b_j | \mu, a_i)$ or $p(a_i | \mu, b_j)$ cannot be anything other than the one-particle conditional probabilities; for polarized photons, for instance, this is the usual Malus law. Finally, it is worth emphasizing that from a physical point of view, the modality $(\mu, a_i)$ obtained after Alice's measurement on the entangled state is exactly the same as the one that would be obtained by transmitting a single particle in this same modality from Alice to Bob. This equivalence between an entanglement scheme and a ``prepare and measure'' scheme has been extensively used in security proofs of quantum cryptography. So we get a simple explanation of the famous ``peaceful coexistence'' between QM and relativity, i.e. why quantum correlations are non-local but also ``no-signalling'' (they do not allow one to transmit any faster-than-light signal): this is because when Alice makes a measurement, the change from $\mu$ to $(\mu, a_i)$ corresponds to a change of context, and not to any influence at a distance. This change of context (from joint to separate) defines a new modality, which always involves both a system and a context. Such a situation, though strongly non-classical, does not conflict with physical realism or causality: in the CSM perspective, quantum non-locality is a direct consequence of the bipartite nature of quantum reality. \section{Conclusion} Beyond the previous discussion of the EPR-Bell argument, CSM can answer several ontological questions posed by QM. In what follows, we conclude this paper with a short review of some other topics that this new perspective can help clarify. Throughout, realism is again asserted, and objectivity is maintained, when applied to contexts, systems and modalities.
\\ \noindent {\it Quantum realism and ontology.} Contextual objectivity \cite{Contextuality,pg2} allows for a quantum ontology, as the joint reality of the context, system, and modalities (CSM). This allows us to interpret quantum non-locality as the situation where the context and the system are separated in space. Such a situation has no conflict with physical realism, but is irrelevant in classical physics, where the physical properties are carried by the system alone. \\ \noindent {\it Accepting the shifty split.} For many physicists, putting the context at the very heart of the theory implies an unacceptable ``shifty split'' \cite{Bell,Mermin} between the quantum world (of the system) and the classical world (of the context). Much effort has been devoted to getting rid of it, and to making the classical world emerge from the quantum world, through attempts to describe contexts within the quantum formalism. Such attempts may exploit the fact that there is a lot of flexibility in defining the boundaries of the system, especially when considering that (weak or strong) measurements can be done by entangling the initial system with more and more ``ancillas'', leading to the so-called ``von Neumann regression'' \cite{JvN}. But in our approach, extending measurements to include the context is self-contradictory: even by adding many ancillas, the system can never grow to the point of including the context, simply because without the context, modalities cannot be defined. In other words, looking at the system as a fuzzy object including everything is not consistent with our physically realist ontology. The quantum-classical boundary therefore has a fundamental character, both from a physical and from a philosophical point of view \cite{FooP}. Without restricting the generality or the applicability of QM, the CSM approach acknowledges that, as a scientific discipline, QM ``can explain anything, but not everything'' \cite{Peres-Zurek}.
\\ \noindent {\it The Copenhagen point of view.} In its practical consequences CSM is close to the usual Copenhagen point of view (CPV), so it may be interesting to also discuss the differences. A crucial one is that quantum reality as defined in CSM deviates from CPV, where reality is rather a word to be avoided \cite{Scott}. Whereas CPV may be accused of dogmatism (hidden behind mathematical formulas), the ontological claims of CSM have some flavor of empiricism, or phenomenology: their goal is to provide a physically realist view of QM ``as it is done'', including in all the recent BI tests. \\ \noindent {\it Decoherence theory.} The practical side of CSM vs CPV can also be illustrated by considering ``decoherence theory'' \cite{Zurek,PhT}. Considering that in an actual measurement the system interacts with ancillas, entanglement is created, and observations are made, decoherence theory provides criteria to decide when and why a ``big'' ancilla no longer behaves as a quantum system. But this is done by using QM, and thus - in the CSM view - it only makes sense with respect to an external context, always required for defining modalities and using the quantum formalism \cite{PhT}. Said otherwise, starting from a vector $|\psi \rangle$ in a Hilbert space, and then trying to ``deduce'' the classical world, appears circular by construction, because (from the beginning) $|\psi \rangle$ is a mathematical object associated with a modality, i.e. with a phenomenon involving both the ``classical'' and ``quantum'' worlds. Therefore decoherence theory fits perfectly within CSM, provided one admits that the goal is not to reconstruct the classical world (it is already there) but to show that QM is a consistent theory.
In other words, QM is extraordinarily efficient at managing the ``split'', but cannot get rid of it, because the split is built into the quantum ontology and expressed in the quantum formalism - loosely speaking, as the difference between observables (contexts) and states (modalities). \vskip 2mm \noindent {\it The Bohr-Einstein debate.} As a final remark, Bohr's arguments in \cite{Bohr} were quite right, but perhaps failed to answer a major question asked in essence by EPR in \cite{EPR}: can a physical theory be ``complete'' if it does not provide an ontology clearly compatible with physical realism? Unveiling such a realistic quantum ontology is what our approach proposes. \vskip 2mm \noindent{\bf Acknowledgements:} The authors thank Nayla Farouki for essential contributions, and Franck Lalo\"e, Francois Dubois, Anthony Leverrier, Maxime Richard, Augustin Baas, Cyril Branciard for many useful discussions.
\section{Introduction} Given an integer $n \in \Zp$ and some numbers $\alpha,\beta \in \R$ such that $\alpha<\beta$, a sequence of real numbers $(a_i)_{i = 1}^k$ is said to \textbf{fluctuate at least $n$ times} across the interval $(\alpha,\beta)$ if there are indexes $1 \leq i_0 < i_1 < \dots < i_n \leq k$ such that \begin{aufziii} \item if $j$ is odd, then $a_{i_j} < \alpha$; \item if $j$ is even, then $a_{i_j} > \beta$. \end{aufziii} In this case it is clear that for every even $j$ with $j < n$ we have \[ a_{i_j} > \beta \quad \text{and} \quad a_{i_{j+1}} < \alpha, \] i.e., $(a_i)_{i=1}^k$ has at least $\lceil \frac n 2 \rceil$ \textbf{downcrossings} from $\beta$ to $\alpha$ and at least $\lfloor \frac n 2 \rfloor$ \textbf{upcrossings} from $\alpha$ to $\beta$. If $(a_i)_{i \geq 1}$ is an infinite sequence of real numbers, we use the same terminology and say that $(a_i)_{i \geq 1}$ fluctuates at least $n$ times across the interval $(\alpha,\beta)$ if some initial segment $(a_i)_{i=1}^k$ of the sequence fluctuates at least $n$ times across $(\alpha,\beta)$. We denote the set of all real-valued sequences having at least $n$ fluctuations across an interval $(\alpha, \beta)$ by $\mathcal{F}_{(\alpha,\beta)}^n$; it will be clear from the context whether we are talking about finite or infinite sequences. The main result of this article is the following theorem, which generalizes the results in \cite{kw1999} about fluctuations of averages of nonnegative functions. \begin{thmn} Let $\Gamma$ be a group of polynomial growth and let $(\alpha, \beta) \subset \Rps$ be some nonempty interval. Then there are constants $c_1,c_2 \in \Rps$ with $c_2<1$, which depend only on $\Gamma$, $\alpha$ and $\beta$, such that the following assertion holds.
For any probability space $\prX=(X, \mathcal{B}, \mu)$, any measure-preserving action of $\Gamma$ on $\prX$ and any measurable $f \geq 0$ on $X$ we have \[ \mu(\{ x: (\avg{g \in \bl(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < c_1 c_2^N \] for all $N \geq 1$. \end{thmn} The paper is structured as follows. We provide some background on groups of polynomial growth in Section \ref{ss.grouppolgr}, discuss some special properties of averages on groups of polynomial growth and a transference principle in Section \ref{ss.avgongrp}, and prove an effective Vitali covering theorem in Section \ref{ss.vitcov}. The main theorem of this paper is Theorem \ref{t.expdec}, which is proved in Section \ref{s.upcrineq}. This research was done during the author's PhD studies under the supervision of Markus Haase. I would like to thank him for his support and advice. \section{Preliminaries} \subsection{Groups of Polynomial Growth} \label{ss.grouppolgr} Let $\Gamma$ be a finitely generated group and $\{ \gamma_1,\dots,\gamma_k\}$ be a fixed generating set. Each element $\gamma \in \Gamma$ can be represented as a product $\gamma_{i_1}^{p_1} \gamma_{i_2}^{p_2} \dots \gamma_{i_l}^{p_l}$ for some indexes $i_1,i_2,\dots,i_l \in \{1,\dots,k\}$ and some integers $p_1,p_2,\dots,p_l \in \mathbb{Z}$. We define the \textbf{norm} of an element $\gamma \in \Gamma$ by \[ \| \gamma \|:=\inf\{ \sum\limits_{i=1}^l |p_i|: \gamma = \gamma_{i_1}^{p_1} \gamma_{i_2}^{p_2} \dots \gamma_{i_l}^{p_l} \}, \] where the infimum is taken over all representations of $\gamma$ as a product of the generating elements. The norm $\| \cdot \|$ on $\Gamma$ does, in general, depend on the generating set. However, it is easy to show \cite[Corollary 6.4.2]{ceccherini2010} that two different generating sets produce equivalent norms. We will always say which generating set is used in the definition of a norm, but we will omit an explicit reference to the generating set later on.
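To get a concrete feel for word norms, the sketch below (illustrative, not part of the paper) computes the sizes of word-metric balls in $\mathbb{Z}^2$ by breadth-first search over the Cayley graph, using the standard generating set $\{\gamma_1,\gamma_2\}$; with these generators the ball of radius $n$ is the $\ell^1$-ball, of cardinality $2n^2+2n+1$, exhibiting polynomial growth of degree $2$:

```python
def ball_sizes(generators, radius):
    """Sizes |B(0)|, ..., |B(radius)| of word-metric balls in Z^d,
    computed by breadth-first search over the Cayley graph.
    Generators are tuples; their inverses are included automatically."""
    gens = list(generators) + [tuple(-x for x in g) for g in generators]
    identity = tuple(0 for _ in generators[0])
    ball, frontier = {identity}, {identity}
    sizes = [1]
    for _ in range(radius):
        # one BFS step: multiply the frontier by every generator
        frontier = {tuple(xi + si for xi, si in zip(x, s))
                    for x in frontier for s in gens} - ball
        ball |= frontier
        sizes.append(len(ball))
    return sizes

# Z^2 with the standard generators gamma_1, gamma_2:
# |B(n)| = 2n^2 + 2n + 1, i.e. degree-2 polynomial growth
sizes = ball_sizes([(1, 0), (0, 1)], 6)
```

The same routine works for any choice of generators, and changing the generating set changes the ball sizes but, as noted above, only up to equivalence of norms.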
For every $n \in \Rp$ let \[ \bl(n):= \{ \gamma \in \Gamma: \| \gamma \| \leq n\} \] be the closed ball of radius $n$. The norm $\| \cdot \|$ yields a right invariant metric on $\Gamma$ defined by \[ d_R(x,y):=\| x y^{-1}\| \quad (x,y \in \Gamma), \] and a left invariant metric on $\Gamma$ defined by \[ d_L(x,y):=\| x^{-1} y\| \quad (x,y \in \Gamma), \] which we call the \textbf{word metrics}. The right invariance of $d_R$ means that the right multiplication \[ R_g: \Gamma \to \Gamma, \quad x \mapsto x g \quad ( x \in \Gamma) \] is an isometry for every $g \in \Gamma$ with respect to $d_R$. Similarly, the left invariance of $d_L$ means that the left multiplications are isometries with respect to $d_L$. We let $d:=d_R$ and view $\Gamma$ as a metric space with the metric $d$. For $x\in \Gamma$, $r \in \Rp$ let \[ \bl(x,r):=\{ y \in \Gamma: d(x,y) \leq r\} \] be the closed ball of radius $r$ with center $x$. Using the right invariance of the metric $d$, it is easy to see that \[ \cntm{\bl(x,r)} = \cntm{\bl(y,r)} \quad \text{ for all } x,y \in \Gamma. \] Let $\mathrm{e} \in \Gamma$ be the neutral element. It is clear that \[ \bl(n) = \{ \gamma: d_R(\mathrm{e},\gamma) \leq n\} = \{ \gamma: d_L(\mathrm{e},\gamma) \leq n\}, \] i.e., the ball $\bl(n)$ is precisely the ball $\bl(\mathrm{e}, n)$ with respect to the left and the right word metric. It is important to understand how fast the balls $\bl(n)$ in the group $\Gamma$ grow as $n \to \infty$. The \textbf{growth function} $\gamma: \mathbb{N} \to \mathbb{N}$ is defined by \[ \gamma(n):=\cntm{\bl(n)} \quad (n \in \mathbb{N}). \] We say that the group $\Gamma$ is of \textbf{polynomial growth} if there are constants $C,d>0$ such that for all $n \geq 1$ we have \[ \gamma(n) \leq C(n^d+1). \] \begin{exa} \label{ex.zdex} Consider the group $\mathbb{Z}^d$ for $d \in \mathbb{N}$ and let $\gamma_1,\dots,\gamma_d \in \mathbb{Z}^d$ be the standard basis elements of $\mathbb{Z}^d$. 
That is, $\gamma_i$ is defined by \[ \gamma_i(j):=\delta_i^j \quad (j=1,\dots, d) \] for all $i=1,\dots,d$. We consider the generating set given by elements $\sum\limits_{k \in I} (-1)^{\varepsilon_k}\gamma_k$ for all subsets $I \subseteq [1,d]$ and all functions $\varepsilon_{\cdot} \in \{ 0,1\}^I$. Then it is easy to see by induction on dimension that $\bl(n) = [-n,\dots,n]^d$, hence \[ \cntm{\bl(n)} = (2n+1)^d \quad \text{ for all } n \in \mathbb{N} \] with respect to this generating set, i.e., $\mathbb{Z}^d$ is a group of polynomial growth. \end{exa} Let $d \in \Zp$. We say that the group $\Gamma$ has \textbf{polynomial growth of degree $d$} if there is a constant $C>0$ such that \[ \frac 1 C n^d \leq \gamma(n) \leq C n^d \quad \text{ for all } n \in \mathbb{N}. \] It was shown in \cite{bass1972} that, if $\Gamma$ is a finitely generated nilpotent group, then $\Gamma$ has polynomial growth of some degree $d \in \Zp$. Furthermore, one can show \cite[Proposition 6.6.6]{ceccherini2010} that if $\Gamma$ is a group and $\Gamma' \leq \Gamma$ is a finite index, finitely generated nilpotent subgroup, having polynomial growth of degree $d \in \Zp$, then the group $\Gamma$ has polynomial growth of degree $d$ as well. A surprising fact is that the converse is true as well. Namely, it was proved in \cite{gromov1981} that, if $\Gamma$ is a group of polynomial growth, then there is a finite index, finitely generated nilpotent subgroup $\Gamma' \leq \Gamma$. It follows that if $\Gamma$ is a group of polynomial growth with the growth function $\gamma$, then there is a constant $C>0$ and an integer $d\in \Zp$, called the \textbf{degree of polynomial growth}, such that \[ \frac 1 C n^d \leq \gamma(n) \leq C n^d \quad \text{ for all } n \in \mathbb{N}. 
\] An even stronger result was obtained in \cite{pansu1983}, where it is shown that, if $\Gamma$ is a group of polynomial growth of degree $d \in \Zp$, then the limit \begin{equation} \label{eq.pansu} c_{\Gamma}:=\lim\limits_{n \to \infty} \frac{\gamma(n)}{n^d} \end{equation} exists. As a consequence, one can show that groups of polynomial growth are amenable. \begin{prop} Let $\Gamma$ be a group of polynomial growth. Then $(\bl(n))_{n \geq 1}$ is a F{\o}lner sequence in $\Gamma$. \end{prop} \begin{proof} We want to show that for every $g \in \Gamma$ \[ \lim\limits_{n \to \infty} \frac{\cntm{g \bl(n) \sdif \bl(n)}}{\cntm{\bl(n)}} = 0. \] Let $m:=d(g,e) \in \Zp$. Then $g \bl(n) \subseteq \bl(n+m)$, hence \[ \frac{\cntm{g \bl(n) \sdif \bl(n)}}{\cntm{\bl(n)}} \leq \frac{\cntm{\bl(n+m)} - \cntm{ \bl(n)}} {\cntm{\bl(n)}} \to 0, \] where we use the existence of the limit in Equation \eqref{eq.pansu}. \end{proof} It will be useful later to have a special notion for the points which are `close enough' to the boundary of a ball in $\Gamma$. Let $W:=\bl(y,s)$ be some ball in $\Gamma$. For a given $r \in \Rps$ the \textbf{$r$-interior} of $W$ is defined as \[ \intr{r}(W):=\bl(y,(1-5/r)s). \] The \textbf{$r$-boundary} of $W$ is defined as \[ \bdr{r}(W):=W \setminus \intr{r}(W). \] If a set $\mathcal{C}$ is a disjoint collection of balls in $\Gamma$, we define the $r$-interior and the $r$-boundary of $\mathcal{C}$ as \[ \intr{r}(\mathcal{C}):=\bigsqcup\limits_{W \in \mathcal{C}} \intr{r}(W) \] and \[ \bdr{r}(\mathcal{C}):=\bigsqcup\limits_{W \in \mathcal{C}} \bdr{r}(W) \] respectively. It will be essential to know that the $r$-boundary becomes small (respectively, the $r$-interior becomes large) for large enough balls and large enough $r$. More precisely, we state the following lemma, whose proof follows from the result of Pansu (see Equation \eqref{eq.pansu}). \begin{lemma} \label{l.smallbdr} Let $\Gamma$ be a group of polynomial growth and $\delta \in (0,1)$ be some constant. 
Then there exist constants $n_0, r_0 \in \mathbb{N}$, depending only on $\Gamma$ and $\delta$, such that the following holds. If $\mathcal{C}$ is a finite collection of disjoint balls with radii greater than $n_0$, then for all $r > r_0$ \[ \cntm{\intr{r} ( \mathcal{C} )} > (1-\delta) \cntm{\bigsqcup\limits_{W \in \mathcal{C}} W} \] and \[ \cntm{\bdr{r} ( \mathcal{C} )} < \delta \cntm{\bigsqcup\limits_{W \in \mathcal{C}} W}. \] \end{lemma} \subsection{Averages on Groups of Polynomial Growth and a Transference Principle} \label{ss.avgongrp} We collect some useful results about averages on groups of polynomial growth in this subsection. At the end of the subsection we will discuss a transference principle, which will become essential later in Section \ref{s.upcrineq}. We start with a preliminary lemma, whose proof is straightforward. \begin{lemma} \label{l.ballgrowth} Let $f$ be a nonnegative function on a group of polynomial growth $\Gamma$. Let $\{ B_1, \dots, B_k\}$ be some disjoint balls in $\Gamma$ such that \[ \avg{g \in B_i} f(g) > \beta \quad \text{for each } i=1,\dots,k. \] Let $B$ be a ball in $\Gamma$, containing all $B_i$'s, such that \begin{equation*} \avg{g \in B} f(g) < \alpha. \end{equation*} Then \[ \frac{\sum\limits_{i=1}^k \cntm{B_i}}{\cntm{B}} < \frac{\alpha}{\beta}. \] \end{lemma} \noindent We refine this result as follows. \begin{lemma} \label{l.uskip} Let $\varepsilon \in (0,1)$. There is $n_0 \in \mathbb{N}$, depending only on the group of polynomial growth $\Gamma$ and $\varepsilon$, such that the following assertion holds. Given a nonnegative function $f$ on $\Gamma$, the condition \begin{equation} \label{eq.fluctcond} \avg{g \in \bl(n)} f( g ) > \beta \quad \text{and} \quad \avg{g \in \bl(m)} f( g ) < \alpha \end{equation} for some $n_0 \leq n < m$ and an interval $(\alpha,\beta) \subset \Rps$ implies that \[ \frac m n > (1-\varepsilon) \left(\frac {\beta}{\alpha} \right)^{1/d}. 
\] \end{lemma} \begin{proof} First of all, note that condition \eqref{eq.fluctcond} implies that \[ \frac{\cntm{\bl(m)}}{\cntm{\bl(n)}} > \frac{\beta}{\alpha} \] (apply the previous lemma with the single ball $B_1:=\bl(n)$ contained in $B:=\bl(m)$). Using the result of Pansu (Equation \eqref{eq.pansu}), we deduce that there is $n_0$ depending only on $\Gamma$ and $\varepsilon$ such that for all $n_0 \leq n<m$ we have \[ \frac{m^d}{n^d}>(1-\varepsilon)^d \frac{\cntm{\bl(m)}}{\cntm{\bl(n)}}. \] This implies that \[ \frac m n > (1-\varepsilon) \left( \frac{\beta}{\alpha}\right)^{1/d}, \] and the proof of the lemma is complete. \end{proof} \noindent Lemma \ref{l.uskip} has the following straightforward corollary. \begin{cor} \label{c.skip} For a constant $\varepsilon \in (0,1)$ and a group of polynomial growth $\Gamma$ let $n_0:=n_0(\varepsilon)$ be given by Lemma \ref{l.uskip}. Given a measure-preserving action of $\Gamma$ on a probability space $\prX$, a nonnegative function $f$ on $X$ and $x \in X$, the condition that the sequence \[ \left( \avg{g \in \bl(i)} f(g \cdot x) \right)_{i=n}^m \] fluctuates at least $k$ times across an interval $(\alpha, \beta) \subset \Rps$ with $n>n_0$ implies that \[ \frac m n > (1-\varepsilon)^{\lceil \frac k 2 \rceil} \left( \frac{\beta}{\alpha} \right)^{{\lceil \frac k 2 \rceil} \cdot \frac 1 d}. \] \end{cor} Finally, we will need an adapted version of the `easy direction' in Calder\'{o}n's transference principle for groups of polynomial growth. Suppose that a group $\Gamma$ of polynomial growth acts on a probability space $\prX=(X,\mathcal{B},\mu)$ by measure-preserving transformations and that we want to estimate the size of a measurable set $E$. Fix an integer $m \in \Zp$. For an integer $L \in \mathbb{N}$ and a point $x \in X$ we define the set \[ B_{L,m,x}:= \{ g: \ g \cdot x \in E \text{ and } \| g \| \leq L-m \} \subseteq \bl(L).
\] The lemma below tells us that any uniform upper bound on the density of $B_{L,m,x}$ in $\bl(L)$ bounds the measure of $E$ from above as well. \begin{lemma}[Transference principle] \label{l.caldtrans} Suppose that for a given constant $t \in \Rp$ the following holds: there is some $L_0 \in \mathbb{N}$ such that for all $L \geq L_0$ and for $\mu$-almost all $x \in X$ we have \[ \frac 1 {\cntm{\bl(L)}} \cntm{B_{L,m,x}} \leq t. \] Then \[ \mu(E) \leq t. \] \end{lemma} \begin{proof} Indeed, since $\Gamma$ acts on $\prX$ by measure-preserving transformations, we have \[ \sum\limits_{g \in \bl(L)} \int\limits_{\prX} \indi{E}(g \cdot x) d \mu = \cntm{\bl(L)} \mu(E). \] Then \begin{align*} \mu(E) &= \int\limits_{\prX} \left( \frac 1 {\cntm{\bl(L)}} \sum\limits_{g \in \bl(L)} \indi{E} (g \cdot x) \right) d \mu \leq \\ &\leq \int\limits_{\prX} \left( \frac{\cntm{B_{L,m,x}} + \cntm{\bl(L) \setminus \bl(L-m)}}{\cntm{\bl(L)}} \right) d \mu, \end{align*} where we use that every $g \in \bl(L)$ with $g \cdot x \in E$ lies either in $B_{L,m,x}$ or in $\bl(L) \setminus \bl(L-m)$. By assumption the first summand in the last integrand is at most $t$ for $\mu$-almost all $x$, while $\frac{\cntm{\bl(L) \setminus \bl(L-m)}}{\cntm{\bl(L)}} \to 0$ as $L \to \infty$ by the result of Pansu (Equation \eqref{eq.pansu}). Letting $L \to \infty$ completes the proof. \end{proof} \subsection{Vitali Covering Lemma} \label{ss.vitcov} In this subsection we discuss a generalization of the effective Vitali covering lemma from \cite{kw1999} to groups of polynomial growth. We fix some notation first. Given a number $t \in \Rp$ and a ball $B=\bl(x,r) \subseteq \prX$ in a metric space $\prX$, we denote by $t \cdot B$ the $t$-enlargement of $B$, i.e., the ball $\bl(x,rt)$. We state the basic finitary Vitali covering lemma first, whose proof is well-known. \begin{lemma} \label{l.fvc} Let $\mathcal{B}:=\{ B_1,\dots,B_n \}$ be a finite collection of balls in a metric space $\prX$. Then there is a finite subset $\{ B_{j_1},\dots, B_{j_m}\} \subseteq \mathcal{B}$ consisting of pairwise disjoint balls such that \[ \bigcup\limits_{i=1}^n B_i \subseteq \bigcup\limits_{l=1}^m 3 \cdot B_{j_l}.
\] \end{lemma} An infinite version of this lemma is used, for example, in the proof of the standard Vitali covering theorem, which can be generalized to arbitrary doubling measure spaces. However, the standard Vitali covering theorem is not sufficient for our purposes. It was shown in \cite{kw1999} that the groups $\mathbb{Z}^d$ for $d \in \mathbb{N}$, which are of course doubling measure spaces when endowed with the counting measure and the word metric, enjoy a particularly useful `effective' version of the theorem. We prove a generalization of this result to groups of polynomial growth below. \begin{thm}[Effective Vitali covering] \label{t.evc} Let $\Gamma$ be a group of polynomial growth of degree $d$. Let $C \geq 1$ be a constant such that \[ \frac 1 C m^d \leq \gamma(m) \leq C m^d \quad \text{ for all } m \in \mathbb{N} \] and let $c:=3^d C^2$. Let $R,n,r>2$ be some fixed natural numbers and $X \subseteq \bl(R)$ be a subset of the ball $\bl(R) \subset \Gamma$. Suppose that to each $p \in X$ there are associated balls $A_1(p),\dots,A_n(p)$ such that the following assertions hold: \begin{aufzi} \item $p \in A_i(p) \subseteq \bl(R)$ for $i=1,\dots,n$; \item For all $i=1,\dots,n-1$ the $r$-enlargement of $A_i(p)$ is contained in $A_{i+1}(p)$. \end{aufzi} Let \[ S_i:=\bigcup\limits_{p \in X} A_i(p) \quad (i=1,\dots,n). \] There is a disjoint subcollection $\mathcal{C}$ of $\{ A_i(p) \}_{p \in X, i=1,\dots,n}$ such that the following conclusions hold: \begin{aufzi} \item The union of $\left( 1+ \frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ together with the set $S_n \setminus S_1$ covers all but at most a $\left( \frac {c-1} c \right)^n$-fraction of $S_n$; \item The measure of the union of $\left( 1+ \frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ is at least $(1 - \left( \frac {c-1} c \right)^n)$ times the measure of $S_1$.
\end{aufzi} \end{thm} \begin{rem} \label{r.maxball} Prior to proceeding to the proof of the theorem we make the following remarks. Firstly, we do not require the balls $A_i(p)$ from the theorem to be centered around $p$. Secondly, the balls of the form $A_i(p)$ for $i=1,\dots,n$ and $p \in X$ will be called \textbf{$i$-th level balls}. An $i$-th level ball $A_i(p)$ is called \textbf{maximal} if it is not contained in any other $i$-th level ball. It is clear that each $S_i$ is the union of maximal $i$-th level balls as well. It will follow from the proof below that the balls in $\mathcal{C}$ can be chosen to be maximal. \end{rem} \begin{proof} To simplify the notation, let \[ s:=1+\frac 4 {r-2} \] be the scaling factor that is used in the theorem. The main idea of the proof is to cover a positive fraction of $S_n$ by a disjoint union of $n$-level balls via Lemma \ref{l.fvc}, then cover a positive fraction of what remains in $S_{n-1}$ by a disjoint union of $(n-1)$-level balls and so on. Thus we begin by covering a fraction of $S_n$ by $n$-level balls. Let $\mathcal{C}_n \subseteq \{ A_n(p) \}_{p \in X}$ be the collection of disjoint balls, obtained by applying Lemma \ref{l.fvc} to the collection of all $n$-th level \emph{maximal} balls. For every ball $B=\bl(p,m) \in \mathcal{C}_n$ we have \[ \cntm{3 \cdot B} \leq C (3m)^d \leq C^2 3^d \cntm{B}, \] hence \[ \cntm{S_n} \leq \cntm{\bigcup\limits_{B \in \mathcal{C}_n} 3 \cdot B} \leq \sum\limits_{B \in \mathcal{C}_n} c \cntm{B} \] and so \[ \cntm{ \bigsqcup\limits_{B \in \mathcal{C}_n} B} \geq \frac 1 c \cntm{S_n}. \] Let $U_n:=\bigsqcup\limits_{B \in \mathcal{C}_n} B$. The computation above shows that \begin{equation} \label{eq.stepa} U_n \text{ covers at least a } \frac 1 c \text{-fraction of } S_n \end{equation} and \begin{equation} \label{eq.stepb} \cntm{S_1} - \cntm{U_n} \leq \cntm{S_1} - \frac 1 c \cntm{S_1} = \frac{c-1} c \cntm{S_1}. \end{equation} We proceed by restricting to $(n-1)$-level balls.
Assume for the moment that the following claim is true. \begin{claimn} If a ball $A_{n-1}(p)$ has a nonempty intersection with $U_n$, then $A_{n-1}(p)$ is contained in the $s$-enlargement of the ball in $\mathcal{C}_n$ that it intersects. \end{claimn} \noindent Let \begin{align*} \widetilde \mathcal{C}_{n-1}:=\{ A_{n-1}(p): \ &A_{n-1}(p) \text{ is a maximal } (n-1)-\text{level ball} \\ &\text{ such that } A_{n-1}(p) \cap U_n = \varnothing\} \end{align*} be the collection of all maximal $(n-1)$-level balls disjoint from $U_n$ and let $\widetilde U_{n-1}$ be its union. We apply Lemma \ref{l.fvc} once again to obtain a collection $\mathcal{C}_{n-1} \subseteq \widetilde \mathcal{C}_{n-1}$ of pairwise disjoint maximal balls such that \[ \cntm{ \bigsqcup\limits_{B \in \mathcal{C}_{n-1}} B} \geq \frac 1 c \cntm{\widetilde U_{n-1}}. \] Let $U_{n-1}:=\bigsqcup\limits_{B \in \mathcal{C}_{n-1}} B$. In order to show that \begin{equation} \label{eq.s1est} \cntm{S_1} - \cntm{ \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup U_{n-1}} \leq \left( \frac{c-1} c \right)^2 \cntm{S_1} \end{equation} it suffices to prove that \begin{equation} \label{eq.snm1} \cntm{ \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup U_{n-1}} \geq \cntm{U_n} + \frac 1 c \cntm{S_{n-1} \setminus U_n}, \end{equation} due to the obvious inequalities \begin{align*} \cntm{S_{n-1} \setminus U_n} \geq &\cntm{S_{n-1}} - \cntm{U_n} \geq \cntm{S_1} - \cntm{U_n},\\ &\cntm{U_n} \geq \frac 1 c \cntm{S_1}. \end{align*} We decompose the set $S_{n-1} \setminus U_n$ as follows \[ S_{n-1} \setminus U_n = \widetilde U_{n-1} \sqcup \left( S_{n-1} \setminus (U_n \cup \widetilde U_{n-1})\right). \] The part $S_{n-1} \setminus (U_n \cup \widetilde U_{n-1})$ is covered by the $(n-1)$-level balls intersecting $U_n$. Hence, if Claim 1 above is true, the set $S_{n-1} \setminus (U_n \cup \widetilde U_{n-1})$ is covered by the $s$-enlargements of balls in $\mathcal{C}_n$. 
Next, $U_{n-1}$ covers at least a $\frac 1 c$-fraction of $\widetilde U_{n-1}$. It follows that the set $\bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup U_{n-1}$ covers the set $U_n$ and at least a $\frac 1 c$-fraction of the set $S_{n-1} \setminus U_n$. Thus we have proved inequalities \eqref{eq.snm1} and \eqref{eq.s1est}. A similar argument shows that \begin{align} \label{eq.2ndstepa} \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup & \bigcup\limits_{B \in \mathcal{C}_{n-1}} \left(s \cdot B \right) \cup (S_n \setminus S_{n-1}) \text{ covers all but} \\ &\text{ at most } \left( 1 - \frac 1 c\right)^2 \text{ of } S_n. \nonumber \end{align} Comparing Equations \eqref{eq.2ndstepa} and \eqref{eq.s1est} to the statements $\mathrm{(a)}$ and $\mathrm{(b)}$ of the theorem, we see that, apart from proving Claim 1, the proof would be complete if $n$ were equal to $2$. So we proceed further to $(n-2)$-level balls and use the following claim. \begin{claimn} If a ball $A_{n-2}(p)$ has a nonempty intersection with $U_n \cup U_{n-1}$, then $A_{n-2}(p)$ is contained in the $s$-enlargement of the ball in $\mathcal{C}_n \cup \mathcal{C}_{n-1}$ that it intersects. \end{claimn} We let $\widetilde \mathcal{C}_{n-2}$ be the collection of all maximal $(n-2)$-level balls disjoint from $U_n \cup U_{n-1}$ and let $\widetilde U_{n-2}$ be its union. We apply Lemma \ref{l.fvc} once again to obtain a collection $\mathcal{C}_{n-2} \subseteq \widetilde \mathcal{C}_{n-2}$ of pairwise disjoint balls such that \[ \cntm{ \bigsqcup\limits_{B \in \mathcal{C}_{n-2}} B} \geq \frac 1 c \cntm{\widetilde U_{n-2}} \] and let $U_{n-2}:=\bigsqcup\limits_{B \in \mathcal{C}_{n-2}} B$.
Similar arguments show that \begin{equation*} \cntm{S_1} - \cntm{ \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup \bigcup\limits_{B \in \mathcal{C}_{n-1}} \left(s \cdot B \right) \cup U_{n-2}} \leq \left( \frac{c-1} c \right)^3 \cntm{S_1} \end{equation*} and that the union of $s$-enlargements of balls in $\mathcal{C}_n$, $\mathcal{C}_{n-1}$ and $\mathcal{C}_{n-2}$, together with $S_n \setminus S_{n-2}$, covers all but at most $\left( 1 - \frac 1 c\right)^3$ of $S_n$. It is obvious that one can continue in this way down to the $1$-st level balls, using the obvious generalization of Claim 2. This would yield a collection of maximal balls \[ \mathcal{C}:=\bigcup\limits_{i=1}^n \mathcal{C}_i \] so that the union of $s$-enlargements of balls in $\mathcal{C}$ together with $S_n \setminus S_1$ covers all but at most $\left( 1 -\frac 1 c\right)^n$ of $S_n$ and that the measure of the union of these $s$-enlargements is at least $\left( 1- \left( 1 - \frac 1 c\right)^n\right)$ times the measure of $S_1$. We conclude that the proof is complete once we prove the claims above and their generalizations. For this it suffices to prove the following statement: \begin{claimn} If $1 \leq i < j \leq n$ and $A_j(q)$ is a maximal ball, then for all $p \in X$ \[ A_i(p) \cap A_j(q) \neq \varnothing \Rightarrow A_{i}(p) \subseteq s \cdot A_j(q). \] \end{claimn} Suppose this is not the case. Let $x,y$ be the centers and $r_1,r_2$ be the radii of $A_{i}(p)$ and $A_j(q)$ respectively. Recall that $s = 1+\frac 4 {r-2}$. Since the $s$-enlargement of $A_j(q)$ does not contain $A_i(p)$, it follows that $\frac {4 r_2} {r-2} \leq 2 r_1$, hence \[ r r_1 \geq 2 r_1 + 2 r_2. \] The intersection of $A_i(p)$ and $A_j(q)$ is nonempty, hence $d(x,y) \leq r_1 + r_2$. This implies that \[ r r_1 \geq d(x,y)+r_1+r_2, \] so the $r$-enlargement of the ball $A_i(p)$ contains $A_j(q)$. Since $r \cdot A_i(p) \subseteq A_{i+1}(p)$, we conclude that the ball $A_j(q)$ is not maximal, a contradiction.
\end{proof} \begin{cor} \label{c.evccor} Suppose that in addition to all the assumptions of Theorem \ref{t.evc} we have \[ \cntm{S_n} \leq (c+1) \cntm{S_1}, \] where $c$ is the constant defined in Theorem \ref{t.evc}. Then there is a disjoint subcollection $\mathcal{C}$ of maximal balls such that the union of $\left( 1+ \frac 4 {r-2}\right)$-enlargements of balls in $\mathcal{C}$ covers at least a $\left( 1-(c+1) \left( \frac{c-1}{c}\right)^n \right)$-fraction of $S_1$. \end{cor} \begin{proof} From the proof of Theorem \ref{t.evc} it follows that one can find a disjoint collection $\mathcal{C}$ of maximal balls satisfying assertions \textrm{(a)} and \textrm{(b)} of the theorem. The statement of the corollary is an easy consequence of \textrm{(a)}. \end{proof} As the main application we will use the corollary above in the proof of Theorem \ref{t.expdec}. It will be essential to know that one can ensure that the extra $\left( 1+\frac 4 {r-2}\right)$-enlargement does not change the size of the union of the balls too much. \begin{lemma} \label{l.smallenl} Let $\Gamma$ be a group of polynomial growth and $\delta \in (0,1)$ be some constant. Then there exist integers $n_0, r_0 > 2$, depending only on $\Gamma$ and $\delta$, such that the following assertion holds. If $\mathcal{C}$ is a finite collection of disjoint balls with radii greater than $n_0$, then for all $r \geq r_0$ we have \[ \cntm{\bigsqcup\limits_{W \in \mathcal{C}} W} \geq (1-\delta) \cntm{\bigcup\limits_{W\in \mathcal{C}} \left( 1+\frac 4 {r-2} \right) \cdot W}. \] \end{lemma} \noindent The proof of the lemma follows from the result of Pansu (see Equation \eqref{eq.pansu}). \section{Fluctuations of Averages of Nonnegative Functions} \label{s.upcrineq} The purpose of this section is to prove the following theorem. \begin{thm} \label{t.expdec} Let $\Gamma$ be a group of polynomial growth of degree $d \in \Zp$ and let $(\alpha, \beta) \subset \Rps$ be some nonempty interval.
Then there are some constants $c_1,c_2 \in \Rps$ with $c_2<1$, which depend only on $\Gamma$, $\alpha$ and $\beta$, such that the following assertion holds. For any probability space $\prX=(X, \mathcal{B}, \mu)$, any measure-preserving action of $\Gamma$ on $\prX$ and any measurable $f \geq 0$ on $X$ we have \[ \mu(\{ x: (\avg{g \in \bl(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < c_1 c_2^N \] for all $N \geq 1$. \end{thm} To simplify the presentation we use the adjective \textbf{universal} to talk about constants determined by $\Gamma$ and $(\alpha,\beta)$. When a constant $c$ is determined by $\Gamma, (\alpha, \beta)$ and a parameter $\delta$, we say that $c$ is \textbf{$\delta$-universal}. Prior to proceeding to the proof of Theorem \ref{t.expdec}, we make some straightforward observations. \begin{rem} \label{r.fluctbd} It is easy to see how one can generalize the theorem above to arbitrary functions bounded from below. If a measurable function $f$ on $X$ is greater than $-m$ for some constant $m \in \Rp$, then \[ \mu(\{ x: (\avg{ g \in \bl(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < \widetilde{c}_1 \widetilde{c}_2^N, \] where the constants $\widetilde{c}_1, \widetilde{c}_2$ are given by applying Theorem \ref{t.expdec} to the function $f+m$ and the interval $(\alpha+m,\beta+m)$. \end{rem} \begin{rem} \label{r.abclose} Recall that $\gamma: \Zp \to \Zp$ is the growth function of the group $\Gamma$. Let $C \geq 1$ be a constant such that \[ \frac 1 C r^d \leq \gamma(r) \leq C r^d \quad \text{ for all } r\in \mathbb{N} \] and let $c:=3^d C^2$. Then it suffices to prove Theorem \ref{t.expdec} only for intervals $(\alpha,\beta)$ such that \[ \frac{\beta}{\alpha} \leq \frac{c+1}{c}. \] If the interval does not satisfy this condition, we replace it with a sufficiently small subinterval and apply Theorem \ref{t.expdec} to the latter: every fluctuation across $(\alpha,\beta)$ is also a fluctuation across the subinterval. The importance of this observation will be apparent later.
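To illustrate the reduction on a concrete example (the numerical values depend on the choice of the constant $C$ and are by no means optimal), take $\Gamma = \mathbb{Z}$ with the standard generating set $\{ \pm 1 \}$, so that
\[
\gamma(r) = 2r+1, \qquad d = 1, \qquad C = 3, \qquad c = 3^d C^2 = 27, \qquad \frac{c+1}{c} = \frac{28}{27}.
\]
In this case it thus suffices to prove the theorem for intervals with $\frac{\beta}{\alpha} \leq \frac{28}{27}$.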
\end{rem} \begin{rem} \label{r.largen} Instead of proving the original assertion of Theorem \ref{t.expdec}, we will prove the following weaker assertion, which is clearly sufficient to deduce Theorem \ref{t.expdec}. \emph{There is a universal integer $\widetilde N_0 \in \mathbb{N}$ such that for any probability space $\prX=(X, \mathcal{B}, \mu)$, any measure-preserving action of $\Gamma$ on $\prX$ and any measurable $f \geq 0$ on $\prX$ we have \[ \mu(\{ x: (\avg{g \in \bl(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < c_1 c_2^N \] for all $N \geq \widetilde N_0$. }\end{rem} The upcrossing inequalities given by Theorem \ref{t.expdec} and Remark \ref{r.fluctbd} allow for a short proof of the pointwise ergodic theorem on $\Ell{\infty}$ for actions of groups of polynomial growth. \begin{thm} \label{t.polergthm} Let $\Gamma$ be a group of polynomial growth acting on a probability space $\prX=(X, \mathcal{B}, \mu)$ by measure-preserving transformations. Then for every $f \in \Ell{\infty}(\prX)$ the limit \[ \lim\limits_{n \to \infty} \avg{g \in \bl(n)} f(g \cdot x) \] exists almost everywhere. \end{thm} \begin{proof} Let \[ X_0:= \{x \in X: \lim\limits_{n \to \infty} \avg{g \in \bl(n)} f(g \cdot x) \text{ does not exist} \} \] be the set of points in $\prX$ where the ergodic averages do not converge. Let $((a_i,b_i))_{i \geq 1}$ be a sequence of nonempty intervals such that every nonempty interval in $\R$ contains some interval $(a_i,b_i)$ (for instance, an enumeration of all intervals with rational endpoints). Then it is clear that if $x \in X_0$, then there is some interval $(a_i,b_i)$ such that the sequence of averages $\left( \avg{g \in \bl(n)} f(g \cdot x) \right)_{n \geq 1}$ fluctuates across $(a_i,b_i)$ infinitely often, i.e., \[ X_0 \subseteq \{ x \in X: \left(\avg{g \in \bl(n)} f(g \cdot x)\right)_{n \geq 1} \in \bigcup\limits_{i \geq 1} \bigcap\limits_{k \geq 1} \mathcal{F}_{(a_i,b_i)}^k\}.
\] By Theorem \ref{t.expdec} and Remark \ref{r.fluctbd} we have for every interval $(a_i,b_i)$ that \[ \mu(\{ x \in X: \left(\avg{g \in \bl(n)} f(g \cdot x)\right)_{n \geq 1} \in \bigcap\limits_{k \geq 1} \mathcal{F}_{(a_i,b_i)}^k\}) = 0, \] hence $\mu(X_0) = 0$ and the proof is complete. \end{proof} We now begin the proof of Theorem \ref{t.expdec}; namely, we will prove the assertion in Remark \ref{r.largen}. Assume from now on that the group $\Gamma$ of polynomial growth of degree $d \in \Zp$ and the interval $(\alpha, \beta) \subset \Rps$ are \emph{fixed}. Given a measure-preserving action of $\Gamma$ on a probability space $\prX =(X,\mathcal{B},\mu)$, let \[ E_N:=\{ x: \left( \avg{g \in \bl(k)} f(g \cdot x) \right)_{k \geq 1} \in \mathcal{F}_{(\alpha, \beta)}^N\} \] be the set of all points $x \in X$ where the ergodic averages fluctuate at least $N \geq \widetilde N_0$ times across the interval $(\alpha,\beta)$. Here $\widetilde N_0$ is a universal constant, which will be determined later. For $m \geq 1$ define, furthermore, the set \[ E_{N,m}:=\{ x: \left( \avg{g \in \bl(k)} f(g \cdot x) \right)_{k=1}^m \in \mathcal{F}_{(\alpha, \beta)}^N\} \] of all points such that the finite sequence $\left(\avg{g \in \bl(k)} f(g \cdot x) \right)_{k=1}^m$ fluctuates at least $N$ times across $(\alpha,\beta)$. Then, clearly, $(E_{N,m})_{m \geq 1}$ is a monotone increasing sequence of sets and \[ E_N = \bigcup\limits_{m \geq N} E_{N,m}. \] We will complete the proof by giving a universal estimate for $\mu(E_{N,m})$ for all $m \geq N$. For that we use the transference principle (Lemma \ref{l.caldtrans}), i.e., for an integer $L > m$ and a point $x \in X$ we let \[ B_{L,m,x}:=\{g: \ g \cdot x \in E_{N,m} \text{ and } \| g \| \leq L-m \}. \] The goal is to show that the density of the set \[ B_0:=B_{L,m,x} \subset \bl(L) \] can be bounded by $c_1 c_2^N$ for some universal constants $c_1,c_2$. The main idea is as follows.
For every point $z \in B_0$ the sequence of averages \[ k \mapsto \avg{g \in \bl(k)} f( (gz) \cdot x), \quad k = 1,\dots,m \] fluctuates at least $N$ times. Since the word metric $d=d_R$ on $\Gamma$ is right-invariant, the set $\bl(k)z$ is in fact a ball of radius $k$ centered at $z$ for each $k=1,\dots,m$. Given a parameter $\delta \in (0, 1-\sqrt{{\alpha}/{\beta}})$, we will pick some of these balls and apply the effective Vitali covering theorem (Theorem \ref{t.evc}) multiple times to replace $B_0$ by a sequence \[ B_1,B_2,\dots, B_{\lfloor (N-N_0)/T \rfloor} \] of subsets of $\bl(L)$ for some $\delta$-universal integers $T, N_0 \in \mathbb{N}$ which satisfies the condition \begin{equation} \label{eq.oddstep} B_{2i+1} \text{ covers at least a } \left( 1-\delta \right)\text{-fraction of } B_{2i} \quad \text{ for all indices } i \geq 0 \end{equation} at `odd' steps and the condition \begin{equation} \label{eq.evenstep} \cntm{B_{2i}} \geq \frac{\beta}{\alpha}(1-\delta) \cntm{B_{2i-1}} \quad \text{ for all indices } i \geq 1 \end{equation} at `even' steps. Each $B_i$ is, furthermore, a union \[ \bigsqcup\limits_{B \in \mathcal{C}_i} B \] of some family $\mathcal{C}_i$ of disjoint balls with centers in $B_0$. If such a sequence of sets $B_1,\dots,B_{\lfloor (N-N_0)/T \rfloor}$ exists, then \begin{align*} \cntm{\bl(L)} \geq \cntm{B_{\lfloor (N-N_0)/T \rfloor}} \geq \left( \frac{\beta}{\alpha}(1-\delta)^2 \right)^{\lfloor \frac {N-N_0}{2 T} \rfloor } \cntm{B_0}, \end{align*} which gives the required exponential bound on the density of $B_0$ with \[ c_2:=\left( \frac{\alpha}{\beta}(1-\delta)^{-2} \right)^{ 1 / 2T} \] and a suitable $\delta$-universal $c_1$. To ensure that conditions \eqref{eq.oddstep} and \eqref{eq.evenstep} hold, one has to pick sufficiently large $\delta$-universal parameters $r$ and $n$ for the effective Vitali covering theorem. We will make this precise at the end of the proof; for now we assume that $r$ and $n$ are `large enough'.
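For completeness, let us spell out how conditions \eqref{eq.oddstep} and \eqref{eq.evenstep} combine to give the exponential bound above. Each pair of consecutive steps satisfies
\[
\cntm{B_{2i}} \geq \frac{\beta}{\alpha}(1-\delta) \cntm{B_{2i-1}} \geq \frac{\beta}{\alpha}(1-\delta)^2 \cntm{B_{2i-2}},
\]
and since $\delta < 1-\sqrt{\alpha/\beta}$, the factor $\frac{\beta}{\alpha}(1-\delta)^2$ is strictly greater than $1$. Iterating this estimate over the $\lfloor \frac{N-N_0}{2T} \rfloor$ pairs of consecutive steps yields the stated lower bound on $\cntm{B_{\lfloor (N-N_0)/T \rfloor}}$.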
In order to force the sufficient growth rate of the balls (condition \textrm{(b)} of Theorem \ref{t.evc}), we employ the following argument. Let $K>0$ be the smallest integer such that \[ \left(1-\frac{1-(\alpha/\beta)^{1/d}}{2} \right)^{\lceil \frac K 2 \rceil} \left( \frac{\beta}{\alpha} \right)^{{\lceil \frac K 2 \rceil} \cdot \frac 1 d} \geq r. \] Then, applying Corollary \ref{c.skip}, we obtain a universal integer $n_0 \in \mathbb{N}$ such that if a sequence \[ (\avg{g \in \bl(i)} f((gz) \cdot x))_{i=n}^m \quad \text{ for some } n>n_0, z \in B_0 \] fluctuates at least $K$ times across the interval $(\alpha,\beta)$, then \begin{equation} \label{eq.evccondb} \frac m n > \left(1-\frac{1-(\alpha/\beta)^{1/d}}{2}\right)^{\lceil \frac K 2 \rceil} \left( \frac{\beta}{\alpha} \right)^{{\lceil \frac K 2 \rceil} \cdot \frac 1 d} \geq r. \end{equation} Let $n$ be large enough for use in the effective Vitali covering theorem. We define $T:=2nK$ and let $N_0 \geq n_0$ be sufficiently large (this will be made precise later). The first $N_0$ fluctuations are skipped to ensure that the balls have large enough radius, and the rest are divided into $\lfloor (N-N_0) / T \rfloor$ groups of $T$ consecutive fluctuations. The $i$-th group of consecutive fluctuations is used to construct the set $B_i$ for $i=1,\dots,\lfloor (N-N_0)/T \rfloor$ as follows. We distinguish between the `odd' and the `even' steps. \noindent \textbf{Odd step:} First, let us describe the procedure for odd $i$'s. For each point $z \in B_{i-1}$ we do the following. By induction we assume that $z \in B_{i-1}$ belongs to some unique ball $\bl(u,s)$ from the $(i-1)$-th step with $u \in B_0$. If $i=1$, we simply take $u:=z \in B_0$. Let $A_1(z)$ be the $(K+1)$-th ball $\bl(u,s_1)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_1(z)} f(g \cdot x) > \beta, \] $A_2(z)$ be the $(2K+1)$-th ball $\bl(u,s_2)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_2(z)} f(g \cdot x) > \beta \] and so on up to $A_n(z)$.
Since between two consecutive chosen balls the averages fluctuate at least $K$ times, inequality \eqref{eq.evccondb} shows that the $r$-enlargement of $A_j(z)$ is contained in $A_{j+1}(z)$ for all indices $j<n$; moreover, the balls defined in this manner are contained in $\bl(L)$. Thus the assumptions of Theorem \ref{t.evc} are satisfied. There are two further possibilities: either this collection satisfies the additional assumption in Corollary \ref{c.evccor}, i.e., \begin{equation} \label{eq.corcond} \cntm{S_n} \leq (c+1) \cntm{S_1} \end{equation} or not. If \eqref{eq.corcond} holds, then by virtue of Corollary \ref{c.evccor} we obtain a disjoint collection $\mathcal{C}$ of maximal balls such that the union of $\left( 1+\frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ covers at least a $\left( 1 - (c+1) \left(\frac{c-1}{c} \right)^n \right)$-fraction of $S_1$. We let \[ B_{i}:=\bigsqcup\limits_{B \in \mathcal{C}} B \] and $\mathcal{C}_i:=\mathcal{C}$. Condition \eqref{eq.oddstep} is satisfied if $r$ and $n$ are large enough, and we proceed to the following `even' step. If, on the contrary, \[ \cntm{S_n} > (c+1) \cntm{S_1}, \] then we apply the standard Vitali covering lemma to the collection of maximal $n$-th level balls and obtain a disjoint subcollection $\mathcal{C}$ such that \begin{equation} \label{eq.10cincr} \cntm{\bigsqcup\limits_{B \in \mathcal{C}} B} \geq \frac{1}{c}\cntm{S_n} > \frac{c+1}{c}\cntm{S_1}. \end{equation} We assume without loss of generality that $\frac{\beta}{\alpha} \leq \frac{c+1}{c}$ (see Remark \ref{r.abclose}). We let \begin{align*} B_i&:=B_{i-1}, \\ B_{i+1}&:=\bigsqcup\limits_{B \in \mathcal{C}} B \end{align*} and \begin{align*} \mathcal{C}_i&:=\mathcal{C}_{i-1}, \\ \mathcal{C}_{i+1}&:=\mathcal{C}. \end{align*} The conditions \eqref{eq.oddstep}, \eqref{eq.evenstep} are satisfied and we proceed to the next `odd' step.
By induction we assume that $z \in B_{i-1}$ belongs to some unique ball $\bl(u,s)$ from the $(i-1)$-th step with $u \in B_0$. Let $A_1(z)$ be the $(K+1)$-th ball $\bl(u,s_1)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_1(z)} f(g \cdot x) < \alpha, \] $A_2(z)$ be the $(2K+1)$-th ball $\bl(u,s_2)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_2(z)} f(g \cdot x) < \alpha \] and so on up to $A_n(z)$. As before, the $r$-enlargement of $A_j(z)$ is contained in $A_{j+1}(z)$ for all indices $j<n$, and the balls defined in this manner are contained in $\bl(L)$. Thus the assumptions of Theorem \ref{t.evc} are satisfied. There are two further possibilities: either this collection satisfies the additional assumption in Corollary \ref{c.evccor}, i.e., \begin{equation} \label{eq.corcond1} \cntm{S_n} \leq (c+1) \cntm{S_1} \end{equation} or not. If \[ \cntm{S_n} > (c+1) \cntm{S_1}, \] then we apply the standard Vitali covering lemma to the collection of maximal $n$-th level balls and obtain a disjoint subcollection $\mathcal{C}$ such that \begin{equation} \label{eq.10cincr1} \cntm{\bigsqcup\limits_{B \in \mathcal{C}} B} \geq \frac{1}{c}\cntm{S_n} > \frac{c+1}{c}\cntm{S_1}. \end{equation} We assume without loss of generality that $\frac{\beta}{\alpha} \leq \frac{c+1}{c}$ (see Remark \ref{r.abclose}). We let \[ B_i:=\bigsqcup\limits_{B \in \mathcal{C}} B, \quad \mathcal{C}_i:=\mathcal{C} \] and note that condition \eqref{eq.evenstep} is satisfied since $B_{i-1} \subseteq S_1$; we then proceed to the following `odd' step. If \eqref{eq.corcond1} holds, then by virtue of Corollary \ref{c.evccor} we obtain a disjoint collection $\mathcal{C}$ of maximal balls such that the union of $\left( 1+\frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ covers at least a $\left( 1 - (c+1) \left(\frac{c-1}{c} \right)^n \right)$-fraction of $S_1$. We let \[ B_{i}:=\bigsqcup\limits_{B \in \mathcal{C}} B \] and $\mathcal{C}_i:=\mathcal{C}$. The goal is to prove that condition \eqref{eq.evenstep} is satisfied.
If the balls from $\mathcal{C}_{i-1}$ were completely contained in the balls from $\mathcal{C}_i$, the proof would be complete by applying Lemma \ref{l.ballgrowth}. This, in general, might not be the case, so we argue as follows. First, we prove the following lemma. \begin{lemma} \label{l.bdrint} If a ball $W_1$ from $\mathcal{C}_{i-1}$ intersects $\intr{r}(W_2)$ for some ball $W_2 \in \mathcal{C}_i$, then $W_1 \subseteq W_2$. \end{lemma} \begin{proof} Let $W_1 = \bl(y_1,s_1)$ and $W_2=\bl(y_2,s_2)$ for some $y_1,y_2 \in B_0$. Since $W_1$ intersects $\intr{r}(W_2)$, we have \[ d(y_1,y_2) \leq s_2(1-5/r)+s_1. \] If $W_1$ is not contained in $W_2$, then $d(y_1,y_2) > s_2-s_1$. From these inequalities it follows that \[ s_1 \geq d(y_1,y_2)-s_2(1-5/r) >s_2-s_1-s_2+\frac{5s_2}{r}, \] hence $s_2<\frac{2rs_1}{5}$. We deduce that the $r$-enlargement of $W_1$ contains $W_2$. This is a contradiction since $W_2$ is maximal and the $r$-enlargement of $W_1$ is contained in the $n$-th level ball $A_n(y_1)$. \end{proof} From the lemma above it follows that the set $B_{i-1}$ is covered as \begin{align*} B_{i-1} \subseteq \left( \bigsqcup\limits_{W \in \mathcal{C}_{i-1}'} W \right) \cup (\bdr{r}(\mathcal{C}_i) \cap B_{i-1}) \cup (B_{i-1} \setminus B_{i}), \end{align*} where \[ \mathcal{C}_{i-1}':=\{ W \in \mathcal{C}_{i-1}: \ W \cap \intr{r}(V) \neq \varnothing \text{ for some } V \in \mathcal{C}_i \}. \] The rest of the argument depends on how much of $B_{i-1}$ is contained in $\bdr{r}(\mathcal{C}_i)$, so let \[ \Delta:=\frac{\cntm{\bdr{r}(\mathcal{C}_i) \cap B_{i-1}}}{\cntm{B_{i-1}}}. \] There are two possibilities. First, suppose that $\Delta>\frac{\delta} 3$. Then $\cntm{B_{i-1}} \leq \frac{\cntm{\bdr{r}(\mathcal{C}_i)}}{\delta/3}$. Let $r$ and the radii of the balls in $\mathcal{C}_i$ be large enough (see Lemma \ref{l.smallbdr}) so that \[ \frac{\cntm{\bdr{r}(\mathcal{C}_i)}}{\cntm{B_i}} < \frac{\alpha}{\beta} \frac{\delta} 3 (1-\delta)^{-1}.
\] It is then easy to see that condition \eqref{eq.evenstep} is satisfied. Suppose, on the other hand, that $\Delta \leq \frac{\delta} 3$. Then, if $n$ and $r$ are large enough so that $\cntm{B_{i-1} \setminus B_i}$ is small compared to $\cntm{B_{i-1}}$, we obtain \begin{align*} \cntm{B_{i-1}} &\leq \frac{\alpha}{\beta}\cntm{B_i}+\cntm{\bdr{r}(\mathcal{C}_i) \cap B_{i-1}}+\cntm{B_{i-1} \setminus B_i} \leq \\ &\leq \frac{\alpha}{\beta}\cntm{B_i}+\frac{\delta}{3} \cntm{B_{i-1}}+\frac{\delta} 3 \cntm{B_{i-1}}, \end{align*} which implies that \[ \cntm{B_i} \geq \frac{\beta}{\alpha}(1-\frac{2 \delta} 3) \cntm{B_{i-1}}, \] i.e., condition \eqref{eq.evenstep} is satisfied as well. We proceed to the following `odd' step. The proof of the theorem is essentially complete. To finish it we only need to say how one can choose the constants $N_0, r, n$ and $\widetilde N_0$. Recall that $\delta \in (0, 1- \left( \alpha / \beta \right)^{1/2})$ is an arbitrary parameter. First, the integer $n \in \mathbb{N}$ is chosen so that \[ (c+1) \left( 1- \frac 1 c\right) ^n \leq 1-\sqrt{1-\delta / 4}. \] Next, we choose $r$ as the maximum of \begin{aufziii} \item the integer $r_0$ given by Lemma \ref{l.smallbdr} with the parameter $\frac{\alpha}{\beta}\frac{\delta} 3 (1-\delta)^{-1}$; \item the integer $r_0$ given by Lemma \ref{l.smallenl} with the parameter $1-\sqrt{1-\delta/4}$. \end{aufziii} The integer $K>0$ is picked so that condition \eqref{eq.evccondb} is satisfied. We choose $N_0$ as the maximum of \begin{aufziii} \item the integer $n_0$ given by Lemma \ref{l.smallbdr} with the parameter $\frac{\alpha}{\beta}\frac{\delta} 3 (1-\delta)^{-1}$; \item the integer $n_0$ given by Lemma \ref{l.smallenl} with the parameter $1-\sqrt{1-\delta/4}$; \item the integer $n_0$ given by Corollary \ref{c.skip} with the parameter $\frac{1-(\alpha/\beta)^{1/d}}{2}$; \end{aufziii} Finally, we define $\widetilde N_0$ as $\widetilde N_0:=N_0+4nK+1$. 
A straightforward computation shows that this choice of constants satisfies all requirements. We do not assert, however, that this choice yields \emph{optimal} constants $c_1$ and $c_2$. \qed \printbibliography[] \end{document}
\section{Introduction\label{intro}} It is now 27 years since Bethe et al. 1979 (denoted as BBAL) was published. At that time we thought the supernova problem to be solved. Although a tremendous amount of work, much of it numerical, has gone on and even more is going on now, the better the physics input, the further the explosion is from success. However, see the paper by Adam Burrows in this volume. This sorry situation has led to all sorts of opinions about the nature and evolution of compact objects, the authors feeling free to extrapolate in all directions, since the basic mechanism for producing them does not work. It is not generally known, or if it is known, not generally believed, that a considerable phenomenology has been developed which connects different phenomena. This was developed chiefly by Woosley and his students in the study of numerical NS formation. On the evolutionary side and the connection of various types of compact stars, much of it was developed by van den Heuvel and his students. We used their techniques to make many connections between compact stars in the ``Formation and Evolution of Black Holes in the Galaxy'' (Bethe, Brown, \& Lee 2003). In this paper we wish to expose in a simple way and to summarize some of the connections. At this stage our connections mostly have to be considered in a pragmatic sense; they are as good as they work. We hope that we show that they work well, in that one can understand a lot through them. We adopt the method of population synthesis in evolving binaries with compact companions (Bethe and Brown 1998). We update this work in terms of the much more extensive and more accurate work of Belczynski et al. (2002) (denoted as BKB). These authors performed detailed parameter studies. Many more investigators have tried to estimate the number of binaries and their mergers from those we observe, extrapolating first to the entire Galaxy and then from our Galaxy to many other galaxies. 
There are two main problems with this latter method. (i) It is not easy to see systems that emit radio emission only weakly; large corrections have to be made for those we don't see. This difficulty will become clear with our discussion in Sec.~\ref{sec3} of the newly discovered double pulsar, which increases the number of observable gravitational mergers estimated from observed systems by at least an order of magnitude (Kalogera et al. 2004). (ii) Mergers of BH-NS binaries are much more probable than mergers of binary NS's. Indeed, the signal from the former will tend to be greater because of the larger chirp mass implied by the BH mass being greater than the NS mass. Yet there is as yet little probability of seeing the BH-NS binaries, the number of which we estimate to be 5 times greater than that of binary NS's, because the latter are observable for $\sim 100$ times longer than the former. The reason is that the magnetic field of the first born NS in a double NS binary (the star which turns into a BH in the BH-NS binaries) gets recycled by mass accreted from its companion; this brings its magnetic field down by a factor $\sim 100$ and, as we outline later, increases the time it can be observed by about the same factor. In population synthesis one estimates the frequency of supernova explosions in our Galaxy from those in similar spiral galaxies. (Many in our Galaxy are thought to be obscured by the Milky Way.) Then from an estimate of binarity, say 50\%, one has the number of binaries in which both stars are massive enough to go supernova. In fact, the calculations of Bethe and Brown (1998) proceeded in parallel with those of Portegies Zwart and Yungelson (1998). The latter assumed a Galactic supernova rate of $0.015$ yr$^{-1}$; Bethe and Brown (1998) assumed $0.0225$ yr$^{-1}$. 
\section{Connection of Fe core and compact star masses} \label{sec2} Most troublesome in the study of compact stars is the lack of connection between the Fe core, which can be and has been calculated, most recently by Alex Heger (Brown et al. 2001a) with Woosley's Kepler, and the mass of the compact object. Woosley chooses the outer edge of the Fe core to be at the location of a large discontinuous change in $Y_e$ which marked the outer extent of the last stage of convective silicon shell burning. Bethe and Pizzochero (1990) used for SN 1987A a schematic but realistic treatment of the radiative transfer problem, which allowed them to follow the position in mass of the photosphere as a function of time. They showed that the observations determine uniquely the kinetic energy of the envelope once its mass is known. They obtained a kinetic energy of the ejecta $\ge$ 1 foe ($=10^{51}$ erg), the energy scaling linearly with $M_{\rm env}$. The main point is that this was done without any input from the numerical models used to describe 1987A. From the envelope masses considered, the range of energies was $\sim 1-1.4$ foe. Using the fact that following the supernova explosion, as the shock moves outwards, the pressure is mainly in radiation, Bethe and Brown (1995) showed from the known value of $\sim 0.075\mbox{$M_\odot$}$ of $^{56}$Ni production in 1987A that an upper limit of $\sim 1.56 \mbox{$M_\odot$}$ could be obtained for the gravitational mass of the compact object. The main point was that the matter is very dilute in the bifurcation region, so that the amount of fallback depends only weakly on the precise separation distance chosen, and that the amount of fallback is roughly equal in magnitude to the binding energy of the compact object. Thus, the Fe core mass is a good estimate of the mass of the compact object. 
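The argument that fallback roughly cancels the binding-energy correction can be illustrated with a rough numerical sketch. The quadratic binding-energy relation used below is the standard Lattimer--Yahil approximation, an assumption of ours for illustration, not a formula taken from Bethe and Brown (1995):

```python
# Sketch of the claim M_Fe ~ M_grav(compact object): the baryonic mass
# equals the gravitational mass plus the binding energy, and the fallback
# mass is of the same order as the binding energy.
# Assumption: Lattimer-Yahil fit BE ~ 0.084 (M_grav/Msun)^2 Msun.

def binding_energy(m_grav):
    """Binding energy in solar masses (Lattimer-Yahil approximation)."""
    return 0.084 * m_grav**2

def gravitational_mass(m_baryon, tol=1e-10):
    """Invert M_baryon = M_grav + BE(M_grav) by fixed-point iteration."""
    m = m_baryon
    while True:
        m_new = m_baryon - binding_energy(m)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new

# An Fe core of ~1.7 Msun in baryons: the ~0.2 Msun binding energy is
# comparable to the fallback mass quoted in the text.
m_grav = gravitational_mass(1.7)
print(f"M_grav for 1.7 Msun of baryons: {m_grav:.2f} Msun, "
      f"BE: {binding_energy(m_grav):.2f} Msun")
```

With these assumed numbers the binding-energy correction is about $0.2\mbox{$M_\odot$}$, the same order as the fallback from 3500--4500 km quoted from Brown, Weingartner \& Wijers (1996), which is why the two roughly cancel.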
(See also Table~3 of Brown, Weingartner \& Wijers (1996) where the amounts of fallback material from distances of 3500 and 4500 km determined by Woosley are plotted.) Our conclusion is that we can use calculated Fe core masses as an estimate for the masses of the compact cores which will result. Woosley outlined in several publications the important role of the magnitude of the $^{12}$C($\alpha,\gamma$)$^{16}$O\ reaction in determining the ZAMS (zero age main sequence) mass at which stars would go into BHs. Whereas $^{12}$C is formed essentially by the triple $\alpha$-reaction, which goes as the square of the density, the reaction $^{12}$C($\alpha,\gamma$)$^{16}$O\ goes with the density, being a binary reaction. With increasing ZAMS mass, the central density of stars decreases, the mass going roughly as the square root of the radius. Thus, there comes a ZAMS mass at which the carbon is removed by the $^{12}$C($\alpha,\gamma$)$^{16}$O\ reaction as fast as it is formed. At that mass, there is essentially no $^{12}$C left to be burned. Now as long as $^{12}$C has to be burned, it does so at a temperature of $\sim 80$ keV, $\sim 4$ times greater than that at which the $^{12}$C($\alpha,\gamma$)$^{16}$O\ reaction removes carbon. In burning at such a high temperature a lot of entropy is carried away, decreasing the entropy of the core substantially. BBAL (Bethe et al. 1979) showed that the final entropy per nucleon is $\sim 1$ (in units where $k_B=1$), so that when convective carbon burning is skipped, the higher entropy manifests itself as a substantial increase in the mass of the Fe core. This is just the mass at which stars begin evolving into BHs. A more complete version of this argument is given in Brown et al. (2001a). \begin{figure} \centerline{\epsfig{file=f2.eps,height=14cm}} \caption{Reproduction of Fig.~2 of Brown et al. (2001a). Comparison of the iron core masses resulting from the evolution of ``clothed'' and ``naked'' He cores. 
Filled circles and crosses correspond to the core masses of ``clothed'' stars at the time of iron core implosion for a finely spaced grid of stellar masses (Heger, Woosley, Martinez-Pinedo, \& Langanke 2001). The circular black dots were calculated with the Woosley \& Weaver 1995 code, whereas the crosses employ the vastly improved Langanke, Martinez-Pinedo (2000) rates for electron capture and beta decay. Open circles (square) correspond to the naked He stars in case A$+$AB (B) mass transfer of Fryer et al. (2002), with reduced WR mass loss rate. If the assembled core mass is greater than $M_{\rm PC}= 1.8\mbox{$M_\odot$}$, where $M_{\rm PC}$ is the proto-compact star mass as defined by Brown \& Bethe (1994), there is no stability and no bounce; the core collapses into a high mass BH. $M_{\rm NS}=1.5\mbox{$M_\odot$}$ denotes the maximum mass of a NS (Brown \& Bethe 1994). The mass of the heaviest known well-measured pulsar, PSR B1913$+$16, is also indicated with a dashed horizontal line (Thorsett \& Chakrabarty 1999).}\label{fig1} \end{figure} Alexander Heger, using Woosley and Weaver's carbon burning rate of 170 keV barns, and the best current physics, reevolved the main sequence stars with the results shown in Fig.~\ref{fig1}. The input physics and the results given here were summarized in Heger et al. (2001). The experimental determination of the carbon burning rate is the Stuttgart one (Kunz et al. 2001): \begin{eqnarray} S_{\rm tot}^{300} = (165\pm 50)\ {\rm keV\ barns}. \end{eqnarray} A recent paper summarizing all of the Stuttgart work to date (Hammer et al. 2005), which also considers the data from elastic scattering and from the decay of $^{16}$N, arrives at: \begin{eqnarray} S_{\rm tot}^{300} = (162\pm 39)\ {\rm keV\ barns}. \end{eqnarray} The $^{12}$C($\alpha,\gamma$) experiment was so time-consuming that it required far more time than the usual 3-5 years generally devoted to an experiment. 
For the project, the Stuttgart team spent a total of 262 days of beam time, not counting all the days of preparation. In the Stuttgart experiment all up-to-date technical achievements were combined into a single experiment. One of the authors of the present paper (G.E.B.) tried a number of times to make a shell-model calculation of this rate. But the unperturbed one-particle, one-hole state and (deformed) three-particle, three-hole state mix destructively, giving a small net contribution to the 7.12 MeV 1$^{-}$ state in $^{16}$O, so the matrix element to this state, which then decays to the $^{16}$O ground state by emitting the $\gamma$-ray, could not be accurately calculated. Willy Fowler once said that no nuclear reaction that can be measured in the laboratory should be determined by astronomical observations. However, Woosley's 170 keV barns, near the central value of the Stuttgart measurements, fits our phenomenology very well, especially the LMBH in SN 1987A, whose progenitor had a ZAMS mass of $18-20\mbox{$M_\odot$}$; it thus combines the laboratory measurement with astronomical phenomenology. To within the accuracy of the former, we believe the question to be settled. The much-used compilation by Schaller et al. (1992) uses an S-factor of $\sim 100$ keV barns. They bring their central abundance of $^{12}$C down to 0.16 at the end of core He burning only for a $25\mbox{$M_\odot$}$ star, below the point for convective carbon burning, so we believe that they would start evolving high-mass BHs at this mass, $\sim 5\mbox{$M_\odot$}$ higher than with the Woosley rate of 170 keV barns. {}From Fig.~\ref{fig1} we see that the Fe cores, which we identify with the final compact objects, as outlined in the last section, increase rapidly in mass in the region around $20\mbox{$M_\odot$}$, going above our $1.5\mbox{$M_\odot$}$ maximum NS mass at $\sim 18\mbox{$M_\odot$}$, just the ZAMS mass of 1987A, which we believe to have gone into a LMBH. 
We reiterate here that we have estimated the compact object mass to be the same as that of the Fe core, the fallback in the latter case compensating for the additional gravitational attraction in the former case. Then at $\sim 20\mbox{$M_\odot$}$ the Fe cores climb above $1.7-1.8\mbox{$M_\odot$}$, which is our limit for high-mass BHs; i.e., those in which the He envelope is not exploded outwards as in 1987A, but collapses inwards. In Lee, Brown, and Wijers (2002) we find that the high-mass BHs in the Galaxy can be made from $20-30\mbox{$M_\odot$}$ stars. Even though the Fe core masses come down somewhat above $23\mbox{$M_\odot$}$, the envelopes are so massive that they will collapse inwards. We believe that the key to the different regions in which NS's, LMBHs (in the very narrow region of $18-20\mbox{$M_\odot$}$) and high-mass BHs can be evolved is the $^{12}$C($\alpha,\gamma$)$^{16}$O\ rate of $\sim 170$ keV barns introduced by Woosley. In any case, taking the Fe core masses to be those of the compact objects, we do well on the phenomenology of the different regions of ZAMS masses which give NS's and LMBHs. Whereas a large number of high-mass BHs have been found by now, 17 in the transient sources (Lee, Brown, \& Wijers 2002), no LMBH in a binary has been identified. But the possible range of $\sim 18-20 \mbox{$M_\odot$}$ is very narrow, with only the progenitor of SN 1987A from that range. \section{Evolution of Binary Neutron Stars} \label{sec3} In Table~\ref{tab1} is the compilation by Lattimer and Prakash (2004) of the compact objects in binaries. \begin{table} \caption{Compilation of the compact objects in binaries by Lattimer and Prakash (2004). References are given in their paper. $^\star$Brown et al. (1996) argue that 4U~1700$-$37, which does not pulse, is a LMBH. $^\dagger$We have added, following the comma, the recent measurement of Van der Meer et al. (2004). $^{\dagger\dagger}$Results for J0751$+$1807 are from Nice et al. (2005). 
} \label{tab1} \begin{center} \begin{tabular}{llll} \hline Object & Mass ($\mbox{$M_\odot$}$) \phantom{xxxxxx} & Object & Mass ($\mbox{$M_\odot$}$) \\ \hline \multicolumn{4}{l}{\it X-ray Binaries} \\ 4U1700$-$37$^\star$ & 2.44$^{+0.27}_{-0.27}$ & Vela X-1 & 1.86$^{+0.16}_{-0.16}$\\ Cyg X-1 & 1.78$^{+0.23}_{-0.23}$ & 4U1538$-$52 & 0.96$^{+0.19}_{-0.16}$ \\ SMC X-1$^\dagger$ & 1.17$^{+0.16}_{-0.16}$, 1.05$\pm$0.09 & XTE J2123$-$058 & 1.53$^{+0.30}_{-0.42}$ \\ LMC X-4$^\dagger$ & 1.47$^{+0.22}_{-0.19}$, 1.31$\pm$0.14 & Her X-1 & 1.47$^{+0.12}_{-0.18}$ \\ Cen X-3$^\dagger$ & 1.09$^{+0.30}_{-0.26}$, 1.24$\pm$0.24 & 2A 1822$-$371 & $> 0.73$ \\ \multicolumn{4}{l}{\it Neutron Star - Neutron Star Binaries} \\ 1518$+$49 & 1.56$^{+0.13}_{-0.44}$ & 1518$+$49 companion & 1.05$^{+0.45}_{-0.11}$\\ 1534$+$12 & 1.3332$^{+0.0010}_{-0.0010}$ & 1534$+$12 companion & 1.3452$^{+0.0010}_{-0.0010}$ \\ 1913$+$16 & 1.4408$^{+0.0003}_{-0.0003}$ & 1913$+$16 companion & 1.3873$^{+0.0003}_{-0.0003}$ \\ 2127$+$11C & 1.349$^{+0.040}_{-0.040}$ & 2127$+$11C companion & 1.363$^{+0.040}_{-0.040}$ \\ J0737$-$3039A & 1.337$^{+0.005}_{-0.005}$ & J0737$-$3039B & 1.250$^{+0.005}_{-0.005}$ \\ J1756$-$2251 & 1.40$^{+0.02}_{-0.03}$ & J1756$-$2251 companion & 1.18$^{+0.03}_{-0.02}$ \\ \multicolumn{4}{l}{\it Neutron Star - White Dwarf Binaries} \\ B2303$+$46 & 1.38$^{+0.06}_{-0.10}$ & J1012$+$5307 & 1.68$^{+0.22}_{-0.22}$ \\ J1713$+$0747 & 1.54$^{+0.007}_{-0.008}$ & B1802$-$07 & 1.26$^{+0.08}_{-0.17}$ \\ B1855$+$09 & 1.57$^{+0.12}_{-0.11}$ & J0621$+$1002 & 1.70$^{+0.32}_{-0.29}$ \\ J0751$+$1807$^{\dagger\dagger}$ & 2.10$^{+0.20}_{-0.20}$ & J0437$-$4715 & 1.58$^{+0.18}_{-0.18}$ \\ J1141$-$6545 & 1.30$^{+0.02}_{-0.02}$ & J1045$-$4509 & $<$ 1.48 \\ J1804$-$2718 & $<$ 1.70 & J2019$+$2425 & $<$ 1.51 \\ \multicolumn{4}{l}{\it Neutron Star - Main Sequence Binaries} \\ J0045$-$7319 & 1.58$^{+0.34}_{-0.34}$ \\ \hline \end{tabular} \end{center} \end{table} When we say ``evolution'' we do not mean a complete one with 
calculation of kick velocities, etc. in NS formation. Rather, we chiefly discuss the difference in masses of the pulsar and its companion. Such a situation occurs in the scenario for making binary pulsars, in which the spiral-in of the NS through the companion supergiant expels the envelope, which is coupled hydrodynamically to the NS through drag, leaving a He star as companion. Hypercritical accretion rates are encountered, and Chevalier worked out that for $\dot M \gtrsim 10^4\ \dot M_{\rm Edd}$ radiation pressure is unable to limit the accretion: the photons are simply carried in by the adiabatic inflow.\footnote{In fact, this happens at a much lower rate, but the $10^4$ Eddington limit is needed to build up sufficient density in the accretion shock so that the energy can be carried off by neutrinos.} Chevalier estimated that sufficient mass would be accreted by the NS during the common envelope evolution to send it into a BH. Said more simply, for such a high $\dot M$ the drift (random walk) velocity is inwards onto the NS. Although not generally accepted by astronomers, partly because it destroyed the usual scenario for binary pulsar evolution, the trapping of neutrinos and their being carried in by the adiabatic inflow in the collapse of large stars, as in BBAL (Bethe et al. 1979), involved the same mechanism, but of course with different parameters. Without the pressure from trapped neutrinos, supernova explosions would not have any chance of succeeding. The Brown scenario was worked out in detail by Bethe and Brown (1998), who showed that the first born NS in the conventional scenario would accrete $\sim 1\mbox{$M_\odot$}$ in the roughly one year of common envelope evolution, taking it into a BH. A simple analytical treatment was given of the common envelope evolution. Belczynski et al. 
(2002) removed the approximation of neglecting the compact object mass made by Bethe and Brown, so that the common envelope problem could be accurately handled by solving a series of partial differential equations. This lowered the Bethe and Brown accretion by about 25\%. In the rest of this paper we shall use the Belczynski et al. approach. The Hulse-Taylor pulsar 1913$+$16 of mass $1.442 \mbox{$M_\odot$}$ with companion mass $1.387 \mbox{$M_\odot$}$ is the most massive of the NS's in double NS binaries. Burrows and Woosley (1986) evolved it from ZAMS $\sim 20\mbox{$M_\odot$}$ giants. It can be seen from Fig.~\ref{fig1} that this is just where the Fe core masses increase rapidly with change in mass, possibly giving an explanation of why the difference in pulsar and companion masses is nearly 4\%, close to our upper limit for overlapping He burning. Going down in NS mass we reach 1534$+$12 and 2127$+$11C and its companion, although the latter is in a globular cluster and often said to have been formed only later by exchange of other stars in binaries. In both of these binaries the two masses are within $\sim 1\%$ of each other. Note that in 1534 the companion mass is greater than that of the pulsar. It can be seen from Fig.~\ref{fig1} that right around ZAMS $15\mbox{$M_\odot$}$, there are fluctuations where the Fe core mass from the more massive ZAMS mass is lighter than that from the less massive one. In any case, we believe that 1534 and 2127 come from ZAMS masses more or less in the middle of our range from $10-20\mbox{$M_\odot$}$. A ZAMS mass of $15\mbox{$M_\odot$}$ corresponds to a He core of $\sim 4\mbox{$M_\odot$}$, so this is approximately the limit below which the He stars undergo red giant evolution. At the bottom of our interval come the double pulsar J0737$-$3039A,B and 1756$-$2251, which we take to have evolved from giants of ZAMS mass $10-12\mbox{$M_\odot$}$. 
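The scale of the hypercritical accretion invoked above can be made concrete with a back-of-the-envelope estimate. The constants and the 10 km NS radius below are standard values that we assume for illustration; none of the numbers are taken from Chevalier's or Bethe and Brown's actual calculations:

```python
import math

# Rough estimate of the Eddington accretion rate for a 1.4 Msun NS, and of
# how far above it the ~1 Msun/yr common envelope accretion lies.
G = 6.674e-8          # cgs units throughout
c = 2.998e10
m_p = 1.673e-24       # proton mass
sigma_T = 6.652e-25   # Thomson cross section
M_sun = 1.989e33
yr = 3.156e7          # seconds per year

M_ns = 1.4 * M_sun
R_ns = 1.0e6          # 10 km radius, assumed

L_edd = 4 * math.pi * G * M_ns * m_p * c / sigma_T   # Eddington luminosity
mdot_edd = L_edd / (G * M_ns / R_ns)                 # g/s, efficiency GM/R
mdot_edd_msun_yr = mdot_edd * yr / M_sun

ratio = 1.0 / mdot_edd_msun_yr   # 1 Msun/yr in Eddington units
print(f"Mdot_Edd ~ {mdot_edd_msun_yr:.1e} Msun/yr")
print(f"1 Msun/yr corresponds to ~{ratio:.1e} Mdot_Edd")
```

With these assumptions the common envelope rate comes out around $10^7$--$10^8$ Eddington, far above the $10^4\,\dot M_{\rm Edd}$ threshold quoted above for the photons to be advected inwards.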
In both of these the pulsars will have accreted some mass in the He-star red giant evolution, so-called Case BB mass transfer. As we discuss later, this mass transfer is difficult to calculate quantitatively, because He-star winds are large and unpredictable. We assign an uncertainty of $\lesssim 0.2\mbox{$M_\odot$}$ to the amount of He accreted in Case BB mass transfer, which we return to later in our discussion. We believe that we have well-measured binaries at the top and bottom ends of our interval ZAMS $10-20\mbox{$M_\odot$}$ and two well-measured binaries 1534 and 2127 in the middle. Using evolutionary calculations of S.E. Woosley of He stars of mass $<4\mbox{$M_\odot$}$ (roughly corresponding to $15\mbox{$M_\odot$}$ ZAMS) in which wind loss is included, Fryer and Kalogera (1997) show that only special conditions in terms of kick velocities allow the pulsar in close NS-He binaries to avoid the envelope of these low-mass He stars. In detail, Fryer and Kalogera (1997) found that if they worked backwards from the system of 1913$+$16 and assumed there is no kick, then the preexplosion separation is less than the Roche lobe radius of the helium-star, NS system, leading to a pulsar, helium-star common envelope. Because in the standard (not double helium-star) scenario of binary pulsar evolution the first mass exchange of the two giant progenitors is usually during the red giant phase (Case B mass transfer) of the more massive progenitor, the later red giant evolution of the helium star in the helium-star, NS phase, which implies a second common envelope, is called in the literature Case BB mass transfer. In any case, from Fig.~\ref{fig1} with our approximation of the Fe core mass plus fallback being equal to the NS mass, we see that the NS's in 1534$+$12 probably came from the middle of the range $\sim 10-20 \mbox{$M_\odot$}$. 
The small $\sim 1\%$ difference in masses in 1534$+$12 looks like a striking confirmation of the double He-star scenario. The companion is $\sim 1\%$ more massive than the pulsar. To bring its magnetic field down to $10^{10}$ G we estimate that, with the Eddington rate and a He burning time possibly as large as double that for the companion in the Hulse-Taylor pulsar, the pulsar would accrete $\sim 0.03 f\mbox{$M_\odot$}$. We have motivated (Francischelli et al. 2002) $f\sim 0.1$ from the propeller effect, so this need not be much. We put off a discussion of the evolution of the double pulsar J0737$-$3039 and of J1756$-$2251, where the pulsar must have accreted matter during the helium star red giant phase of the companion (Case BB mass transfer), until after we have discussed the 4 more massive binaries. We now discuss the standard scenario for binary NS formation (van den Heuvel and van Paradijs 1993) and why it does not work. The Bethe and Brown (1998) work was analytical, and the approximation was made of neglecting the mass of the compact object in comparison with that of the companion while the latter was in transition from main sequence to He star. As noted above, BKB (2002) removed this approximation in their numerical calculations and we now adopt their method. \begin{table} \begin{center} \caption{Flat Distribution: Proximity probability to evolve NS binaries. 
$M_1$ is the giant mass, $\Delta M$ is the 4\% mass difference within which the binaries evolve into NS-NS binaries, and $P$ is the probability of having mass difference within 4\%.} \label{tabX1} \vskip 4mm \begin{tabular}{ccc} \hline $M_1$ [$\mbox{$M_\odot$}$] & $\Delta M=0.04\ M_1$ [$\mbox{$M_\odot$}$] & $P=\Delta M /(M_1 -10\mbox{$M_\odot$})$ \\ \hline 20 & 0.80 & 0.08 \\ 19 & 0.76 & 0.08 \\ 18 & 0.72 & 0.09\\ 17 & 0.68 & 0.10 \\ 16 & 0.64 & 0.11 \\ 15 & 0.60 & 0.12 \\ 14 & 0.56 & 0.14 \\ 13 & 0.52 & 0.17 \\ 12 & 0.48 & 0.24 \\ 11 & 0.44 & 0.44 \\ 10 & 0.40 & $-$\\ \hline \end{tabular} \end{center} \end{table} We follow Pinsonneault and Stanek (2006) in evolving NS binaries, but require the two massive progenitors to be within $4\%$ in mass in order to burn He at the same time, rather than the 5\% they use. Using a flat distribution and the fact that the IMF for the second star is not independent of the first star, because it must be of lower mass to evolve later than the first star, one finds that NS binaries should be formed 16\% of the time, but 44\% of the time if $M_1=11\mbox{$M_\odot$}$. That is where the twin is most likely formed. We show this in Table~\ref{tabX1}. Pinsonneault and Stanek (2006) assembled evidence that ``Binaries like to be Twins''. They showed that a recently published sample of 21 detached eclipsing binaries in the Small Magellanic Cloud can be evolved in terms of a flat mass function containing 55\% of the systems and a ``twins'' population with $q> 0.95$ containing the remainder. All of the binaries had orbital period $P< 5$ days, with primary masses $6.9 \mbox{$M_\odot$} <M_1 <27.3 \mbox{$M_\odot$}$. Historically, large selection effects have been identified (Goldberg et al. 2003; Hogeveen 1992). These will lower the number of twins found by Pinsonneault and Stanek. 
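The $P$ column of Table~\ref{tabX1} follows directly from the flat-distribution assumption; the short sketch below reproduces it from $\Delta M = 0.04\,M_1$ and $P = \Delta M/(M_1 - 10\mbox{$M_\odot$})$:

```python
# Reproduce the proximity probabilities of the table above: with a flat
# distribution for the secondary mass between 10 Msun and M1, the chance
# that it lies within 4% of M1 is P = 0.04*M1 / (M1 - 10).

def proximity_probability(m1, frac=0.04, m_min=10.0):
    dm = frac * m1
    return dm / (m1 - m_min)

for m1 in range(20, 10, -1):
    print(f"M1 = {m1} Msun: dM = {0.04 * m1:.2f}, "
          f"P = {proximity_probability(m1):.2f}")
```

The values agree with the table, and make plain why $P$ peaks at the bottom of the interval: for $M_1 = 11\mbox{$M_\odot$}$ the allowed secondary range has shrunk to $1\mbox{$M_\odot$}$, so the 4\% window covers 44\% of it.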
The important role of twins is that the two giants are close enough in mass \footnote{Pinsonneault and Stanek used 5\% whereas we prefer 4\% as will be discussed.} that in Brown's (1995) scenario they can evolve into NS-NS binaries, whereas if they are further apart in mass they will evolve into a LMBH-NS binary (Chevalier 1993; Bethe and Brown 1998). Thus the twins may increase the number of NS-NS binaries. We suggest that the resulting number of short hard gamma-ray bursts, which result from the merging of the binaries (observations of which to date are unable to differentiate between the two species), may not be changed much, some of the predicted large excess of LMBH-NS binaries appearing rather as NS-NS binaries. However, because the latter are so much easier to observe, the relation between what we see and what is present will be tightened. We point out that Belczynski et al. (2002), in their simulation D2 in which the maximum NS mass is $1.5\mbox{$M_\odot$}$ and the mass proximity in the progenitor binaries (to evolve NS's) is taken, like Pinsonneault \& Stanek, to be 5\%, obtain a ratio of 4 for (BH$+$NS)/(NS$+$NS) and would obtain a ratio of 5 had they used our 4\% proximity in masses. In their Case D2, BKB (2002) find a total gravitational merging rate of $0.45 \times 10^{-4}$ yr$^{-1}$ for the sum of their double NS and BH-NS mergings, to compare with the Bethe \& Brown $0.70 \times 10^{-4}$ yr$^{-1}$ once the Bethe \& Brown supernova rate of $0.0225$ yr$^{-1}$ is lowered to the BKB $0.0172$ yr$^{-1}$. In short, there is general agreement amongst the authors quoted above, except that it is not clear how many twins will be left once selection effects are taken into account. For simplicity we shall use a total gravitational merging rate in our Galaxy of $10^{-4}$ yr$^{-1}$. Whereas we call the standard scenario of binary NS evolution that of van den Heuvel and van Paradijs (1993), BKB include hypercritical accretion in what they call their standard scenario. 
We believe that their case D2 with $M_{\rm NS}^{\rm max}=1.5\mbox{$M_\odot$}$ is strongly favored by the closeness in mass of the double NS binaries. The Bethe \& Brown (1998) work did not cover the Case BB mass transfers in the binaries from the less massive ZAMS masses $\lesssim 15\mbox{$M_\odot$}$, which we discuss in Sec.~\ref{sechyper}. \section{Observability Premium} The behavior of the pulsar magnetic field is crucial. Van den Heuvel (1994b) has pointed out that NS's formed with strong magnetic fields $10^{12} - 5\times 10^{12}$ G spin down in a time \begin{eqnarray} \tau_{\rm sd} \sim 5\times 10^6\ {\rm yrs} \label{eq4} \end{eqnarray} and then disappear into the graveyard of NS's. (The pulsation mechanism requires a minimum voltage from the polar cap, which can be obtained from $B_{12}/P^2 \gtrsim 0.2$ with $B_{12}=B/10^{12}$ G and $P$ in seconds.) The relativistic binary pulsar 1913$+$16 has a weaker field $B\simeq 2.5\times 10^{10}$ G, and therefore emits less energy in magnetic dipole radiation. Van den Heuvel estimates its spin-down time as $10^8$ yrs. There is thus a premium in observational time for lower magnetic fields, in that the pulsars can be seen for longer times. Wettig and Brown (1996) used van den Heuvel's idea to invent the Observability Premium \begin{eqnarray} \Pi = \frac{10^{12}\ {\rm G}}{B} \label{eq5} \end{eqnarray} where $B$ is the magnetic field of the pulsar. $\Pi$ gives the time, relative to that of a $10^{12}$ G pulsar, that the pulsar can be observed. Taam and van den Heuvel (1986) found empirically that the magnetic field of a pulsar dropped roughly linearly with accreted mass. Thus, the Observability Premium is high, given a large amount of such mass. 
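The premium of Eq.~(\ref{eq5}) and the polar-cap condition quoted above can be combined in a small numerical illustration; the sample field and period values below are our own choices, not measurements from the text:

```python
# Observability premium Pi = 1e12 G / B, and the polar-cap "death line"
# condition B_12 / P^2 >~ 0.2 required for a pulsar to keep pulsing.

def observability_premium(b_gauss):
    """Observable lifetime relative to a 1e12 G pulsar."""
    return 1e12 / b_gauss

def can_pulse(b_gauss, period_s):
    """Death-line criterion: B_12 / P^2 >= 0.2 (P in seconds)."""
    b12 = b_gauss / 1e12
    return b12 / period_s**2 >= 0.2

# A freshly born ~1e12 G pulsar vs. the recycled 1913+16 field:
for name, b in [("fresh pulsar", 1e12), ("1913+16 (recycled)", 2.5e10)]:
    print(f"{name}: Pi = {observability_premium(b):.0f}")
```

For the $2.5\times 10^{10}$ G field of 1913$+$16 this gives $\Pi = 40$, of the same order as the factor $\sim 100$ quoted below for fields $B\sim 10^{10}$ G; the death-line check shows why a $10^{12}$ G pulsar stops pulsing once it has spun down to periods of a few seconds.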
Wettig and Brown (1996) brought the Observability Premium $\Pi$ into the weighting in their evolution of binary pulsars, assuming, because of the high winds during He burning, that accretion occurred only in the NS, He-star stage. Since the maximal accretion of $\dot M = 3\times 10^{-8} \mbox{$M_\odot$}$ yr$^{-1}$ adds up to make the pulsar observable for a longer time, this gave an explanation of the relatively large number of narrow, short period binary pulsars. As noted earlier, the actual accretion rate may be an order of magnitude smaller because of the propeller effect (Francischelli et al. 2002). Whereas we have considered LMBH-NS binaries above, the formation of NS binaries at redshift $z\sim 1$, due to the higher star formation rate there, means that they should also play a role in increasing Kalogera et al.'s rate. The Hulse-Taylor pulsar and 1534$+$12 have magnetic fields $B\sim 10^{10}$ G, so their observability premium, i.e., the time that they can be observed, is $\sim 100$ times that of the Crab pulsar, or $\sim 500$ megayears. In fact, the Hulse-Taylor pulsar is estimated to merge in $\sim 300$ Myr. One would expect the NS binaries that follow from the $\lesssim 15\mbox{$M_\odot$}$ giant progenitors to be similarly recycled and to have similar lifetimes, because Wettig and Brown (1996) have pointed out that their recycling occurs mostly in the He star, pulsar stage of their formation, and the observability premium favors the formation of binaries sufficiently close that the accretion from the He-star winds is at or near the Eddington limit. Thus, by a Gyr the pulsar will have run down, and the binary will be invisible. Because of the larger rate of star formation at redshift $z\sim 1$, there should be $\sim 8$ times more of these than of the binaries such as Hulse-Taylor where one still sees the pulsar in operation. We shall develop this point in more detail in the next section. 
It is not clear that the matter accreted in the Case BB mass transfer, that in the He red giant stage, plays a similar role in bringing down the magnetic field of the first born pulsar, as in the double pulsar; but there $B=6.3 \times 10^9$ G, so that the pulsar would also run down and stop pulsing within Gyrs. Thus, we would expect a factor $\sim 8$ increase in Kalogera et al.'s rate from the double NS binaries born some Gyrs ago because of the higher star formation rate. \section{Short Hard Gamma-Ray Bursts} The exciting new development which occurred while writing this review is the observation of the short-hard gamma-ray bursts (SHBs), made possible by the satellites Swift and HETE-2, which are able to detect the $\sim 1$ second bursts and radio their positions to the telescopes which can observe their afterglows. The four afterglows of short-hard gamma-ray bursts and the progenitors inferred for these bursts $-$ as well as for another four SHBs with known or constrained redshift $-$ form the basis for the analysis of Nakar et al. (2005), which we follow here. (See the Physics Reports by Nakar in this volume.) We will discuss these SHBs as chiefly resulting from the LMBH-NS binaries of Bethe and Brown (1998), since we believe that they predominantly result from mergers of these binaries. There is some evidence of beaming in the SHBs, as in the longer GRBs. Two SHBs (050709 and 050724) have shown a steepening that can be interpreted as a hint of a jet. This interpretation would indicate a beaming factor of $\sim 50$ (Fox et al. 2005). Such beaming would reduce substantially the average isotropic energy and, therefore, make it more difficult for LIGO to observe. The Nakar et al. (2005) ``best guess" at the SHB rate is $R_{SHB}\approx 10^5$ Gpc$^{-3}$ yr$^{-1}$ with the assumption of a beaming factor of 50. The rate of Bethe-Brown mergers in our Galaxy was estimated at $\sim 10^{-4}$ yr$^{-1}$.
Given $10^5$ galaxies within 200 Mpc, this amounts to $1.25\times 10^3$ Gpc$^{-3}$ yr$^{-1}$, to be increased by a factor of 8 to $10^4$ Gpc$^{-3}$ yr$^{-1}$ by the greater rate of star formation at the time of binary formation (at $z\sim 1$). Note that the Nakar et al. ``best guess" includes a factor of 50 for beaming, so there is a factor of 500 between the ``best guess" and the Bethe/Brown mergers for LIGO, which would not have beaming. Without the beaming the factor would be only 10. As we shall come back to later, the final Nakar et al. estimate for initial LIGO is 0.3 mergers per year if the merging binaries are double NS's, just 10 times the central Kalogera et al. (2004) value, or 3 mergers per year in the case that the binaries are NS-BH in nature, the 10 times larger value coming from the large chirp mass (from $10\mbox{$M_\odot$}$ BHs). We have somewhat different predictions, as developed earlier in this paper. Going back to our factor of $\sim 5$ times more LMBH-NS mergers than those of binary NS's, we would begin with the Kalogera et al. 0.003 binary NS mergers in our Galaxy and multiply this number by a factor of 8 for the increased rate of star formation at redshift $z=1$. Our prediction is about one merger per year for LIGO I, mostly from BH-NS binaries. In Nakar et al. (2005) the SHBs are found to result from binaries with long lifetimes, as we discussed in our Sec.~\ref{sec3}. They suggest that these binaries are either old, invisible double NS or NS-BH binaries. For the latter case, they chose a BH mass of $\sim 10\mbox{$M_\odot$}$, which would be wonderful for LIGO because it would imply a large chirp mass, and the gravitational waves from the merger of such a BH-NS binary should be observed rather soon in the LIGO observations. In the Bethe and Brown (1998) scenario the BHs in these binaries would be more like $\sim 2\mbox{$M_\odot$}$. A very important new point of the Nakar et al.
work is that the invisible binaries come from a very old population $\sim$ 6 Gyr old; in other words, they were formed when the Universe was only $\sim \frac 12 \tau_{Hubble}$. The NS in a BH-NS binary will not be recycled, so that it will be observable for $\sim 5\times 10^6$ yrs (see eq.~(\ref{eq4})), or $\lesssim 1$ part in a million of the binary lifetime. Another important point of Nakar et al. is that if the binaries were born so long ago (we choose redshift $\sim 1$), then the star formation rate was substantially higher then. Nakar et al. (2005) base their redshift distribution model of star formation history on the Porciani and Madau (2001) form \begin{eqnarray} SFR_2(z) \propto \frac{\exp(3.4 z)}{\exp(3.4 z) +22} \frac{ [\Omega_m (1+z)^3 + \Omega_\Lambda]^{1/2}}{(1+z)^{3/2}} \end{eqnarray} with $\Omega_m=0.3, \Omega_\Lambda =0.7$ and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ in standard cosmology. We obtain a factor of 8 higher star formation at redshift 1 as compared with $z=0$. Whereas such a large factor should not be applied to the double NS merging rate, because these binaries were not formed so long ago, it should be applied to the Bethe and Brown (1998) LMBH-NS binaries. We, of course, believe that we predicted the short-hard bursts. Our theory would explain the approximate uniformity in them, as arising from compact objects with essentially NS masses, increased by $\sim 0.7\mbox{$M_\odot$}$ through accretion in the case of the LMBHs. Nonetheless we believe the Nakar et al. analysis of the SHBs to be very useful, in that they establish without model restrictions that LIGO should observe an order of magnitude more mergers of binary compact objects. \section{Maximum Neutron Star Mass} Brown and Bethe (1994) claimed, based on the numerical calculations of Thorsson et al. (1994), that because of kaon condensation, which sets in at densities $\rho\sim 3 n_0$, where $n_0$ is nuclear matter density, the maximum NS mass is $\sim 1.5\mbox{$M_\odot$}$.
We discussed our determination of the gravitational mass of SN1987A, which we believe went into a BH, in Sec.~\ref{sec2}. We found it to be $\lesssim 1.56\mbox{$M_\odot$}$. The maximum NS mass is decreased by kaon condensation because the electrons, which have high Fermi energies, are replaced with kaon condensation by a Bose condensate of zero-momentum kaons. This substantially decreases the pressure. There have been many theoretical calculations of kaon condensation in the past 10 years, none of them getting kaon condensation at as low a density as $3 n_0$. However, the driving force in kaon condensation is the strangeness chiral symmetry breaking parameter $a_3 m_s$, for which we took the Thorsson et al. (1994) value of $a_3 m_s=-222$ MeV. Lattice gauge calculations have now determined this parameter to be $-231$ MeV to within a quoted accuracy of 3 to 4\% (Dong, Lagae, and Liu 1996). None of the other calculations used an $a_3 m_s$ this large in magnitude. Recently Brown, Lee, Park and Rho (2006) have greatly simplified the calculation of kaon condensation by calculating about the fixed point, the density for chiral restoration, in the Harada and Yamawaki (2003) Vector Manifestation of the hidden local symmetry theory. The kaon condensation is run completely by the vector meson degrees of freedom as their masses and coupling constants approach zero at the fixed point. We move back to the somewhat lower density for kaon condensation by increasing the Harada and Yamawaki parameter $a$, which is unity at the fixed point, to $a \simeq 1.3$ obtained by renormalization group analyses. The value of $\Sigma_{KN}$, and the behavior of strange hyperons, are irrelevant in the new analysis. The Thorsson et al. (1994) result is again arrived at.
Since Table~\ref{tab1} contains three masses, those of 4U 1700$-$37, Vela X-1 and J0751$+$1807, which exceed our $1.5\mbox{$M_\odot$}$ maximum NS mass, we should comment briefly. {\it 4U 1700$-$37} : Although this compact object has the same accretion history as the other high-mass X-ray binaries, it does not pulse like the others. Brown, Weingartner and Wijers \cite{BWW} evolve the compact object as a LMBH. {\it Vela X-1} : J. van Paradijs et al.~ \cite{paradijs2} pointed out that in this binary with a floppy B-star companion, the apparent velocity can in some cases increase by up to 30\% (from the surface elements of the companion swinging around faster than the center of mass), ``thereby increasing the apparent mass of the compact object by approximately the same amount". In any case, Barziv et al.\cite{barziv}, from which the Vela X-1 NS mass in our table comes, say ``The best value of the mass of Vela X-1 is $1.86\mbox{$M_\odot$}$. Unfortunately, no firm constraints on the equation of state are possible, since systematic deviations in the radial-velocity curve do not allow us to exclude a mass around $1.4\mbox{$M_\odot$}$ as found for the other NS's." {\it J0751$+$1807} : We consider the measurement of a $2.1\mbox{$M_\odot$}$ NS mass in this NS, white dwarf binary a serious challenge to our maximum NS mass (Nice et al. 2005). It will be clear in our Section~\ref{sec8} that in the evolution of NS, white dwarf binaries, sufficient mass is furnished during the red giant evolution of the white dwarf progenitor, often in conservative mass transfer, so that if it were accepted by the NS, most of the NS's would have masses in the vicinity of the quoted mass of J0751$+$1807 or higher, as found by Tauris and Savonije (1999). These authors did not introduce the propeller effect, whereas Francischelli et al. (2002) found that in the evolution of double NS binaries this effect often cuts the accretion down by an order of magnitude.
We have given our reasons earlier \footnote{Bethe and Brown (1995) estimated the Ni production in 1987A to have come from a NS of maximum mass $1.56\mbox{$M_\odot$}$, whereas these authors believed that the NS had later evolved into a BH.} that the maximum NS mass cannot be far above $1.5\mbox{$M_\odot$}$. J0751$+$1807 has a short orbital period of $P_b=6.3$ hours. The short orbital period allows the detection of the effect of gravitational radiation emission. According to general relativity the time rate of change of the orbital period is \begin{eqnarray} \dot P_b &=& -\frac{192}{5}\left(\frac{P_b}{2\pi}\right)^{-5/3} \left(1+\frac{73}{24}e^2+\frac{37}{96} e^4\right) \nonumber\\ && \times (1-e^2)^{-7/2} T_\odot^{5/3}\, m_{\rm WD} M_{\rm NS} (m_{\rm WD}+M_{\rm NS})^{-1/3}. \end{eqnarray} Since $m_{\rm WD} \ll M_{\rm NS}$, the dependence of $\dot P_b$ on the masses is proportional to $m_{\rm WD} M_{\rm NS}^{2/3}$, or, for a given $\dot P_b$, $M_{\rm NS}\propto m_{\rm WD}^{-3/2}$. Thus, for $m_{\rm WD} \sim 0.24\mbox{$M_\odot$}$, $M_{\rm NS}\sim 1.5\mbox{$M_\odot$}$. The NS of Nice et al. (2005) has mass $2.1^{+0.4}_{-0.5}\mbox{$M_\odot$}$ at the 95\% confidence level; i.e., at this level $M_{NS}$ could be as low as $1.6 \mbox{$M_\odot$}$. Brown et al. (2006b) have indicated that there is a small correction to Brown et al. (2006a), who calculated the maximum stable NS mass to be $1.5\mbox{$M_\odot$}$ by fluctuating about the fixed point in the Harada-Yamawaki (2003) renormalization group formalism. This arises because the $K^-$-mesons in the kaon condensate which causes the NS to collapse into a BH have a fermion substructure; i.e., $K^- =|\bar u s\rangle$ and the $\bar u$ and $s$ quarks are fermions. Thus, there is a repulsion between $K^-$-mesons brought close together, from the Pauli exclusion principle.
The quarks are, however, not very far from being current quarks, which they become as $n\rightarrow n_{\chi SR}$, the latter being the chiral symmetry restoration density, since the critical density for kaon condensation is \begin{eqnarray} n_c \sim \frac 34 n_{\chi SR}. \end{eqnarray} Current quarks are thought to be very small in extent. \begin{table} \begin{center} \caption{Calculated accretion onto the pulsar during the H and possibly He red giant stages. $M_i$ is the initial pulsar mass, taken to be that of the companion, and therefore a lower limit; $\Delta M$ is the calculated mass accretion onto the first born NS; $M_f$ is the final pulsar mass following accretion; and $\hat P$ is the probability of unequal masses of compact objects in NS binaries, i.e., $1-P$ of Table~\ref{tabX1}. The He core mass of the giant star is assumed to be $M_{\rm He} = 0.08 (M_{\rm Giant}/\mbox{$M_\odot$})^{1.45} \mbox{$M_\odot$}$. The errors in the companion masses below $14\mbox{$M_\odot$}$ come from our $\sim 0.2\mbox{$M_\odot$}$ uncertainty in the accretion during the He red giant evolution if a NS is to remain.} \label{tabX} \vskip 4mm \begin{tabular}{ccccc} \hline Giant Mass [$\mbox{$M_\odot$}$] & $M_i$ [$\mbox{$M_\odot$}$] & $\Delta M$ [$\mbox{$M_\odot$}$] & $M_f$ [$\mbox{$M_\odot$}$] & $\hat P$ \\ \hline 20 & 1.39 (B1913$+$16) & 0.87 & 2.26 & 0.92 \\ 19 & 1.38 & 0.84 & 2.22 & 0.92 \\ 18 & 1.37 & 0.82 & 2.19 & 0.91 \\ 17 & 1.36 & 0.79 & 2.15 & 0.90 \\ 16 & 1.35 & 0.76 & 2.11 & 0.89 \\ 15 & 1.34 (B1534$+$12) & 0.74 & 2.08 & 0.88 \\ 14 & 1.33 & $0.71-0.91$ & $2.04-2.24$ & 0.86 \\ 13 & 1.31 & $0.67-0.87$ & $1.98-2.18$ & 0.83 \\ 12 & 1.28 & $0.63-0.83$ & $1.91-2.11$ & 0.76 \\ 11 & 1.25 (J0737$-$3039B) & $0.60-0.80$ & $1.85-2.05$ & 0.56 \\ 10 & 1.18 (J1756$-$2251) & $0.55-0.75$ & $1.73-1.93$ \\ \hline \end{tabular} \end{center} \end{table} Astronomers are not easily convinced by formal arguments, so we take the approach of Lee et al. (2006).
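The He core masses assumed in the caption of Table~\ref{tabX} follow from the quoted scaling; the sketch below evaluates it for a few of the tabulated ZAMS masses (the printed core masses are our computed illustrations, not values given in the text).

```python
def he_core_mass(m_giant):
    """He core mass in M_sun for a giant of ZAMS mass m_giant (in M_sun),
    using M_He = 0.08 (M_Giant/M_sun)^1.45 from the caption of Table tabX."""
    return 0.08 * m_giant ** 1.45

# For the lowest and highest ZAMS masses in the table this gives
# roughly 2.3 M_sun (10 M_sun giant) and 6.2 M_sun (20 M_sun giant).
for m in (10, 12, 15, 20):
    print(m, round(he_core_mass(m), 2))
```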
Using the extensive and accurate calculation of hypercritical accretion by BKB (2002), which removed the Bethe and Brown (1998) approximation of neglecting the compact object mass, Lee et al. (2006) calculated a table of masses for the relativistic NS binaries, which we show in Table~\ref{tabX}. $M_f$ is the final pulsar mass after it has accreted matter from the giant companion, while in common envelope during the latter's red giant stage. The $0.2\mbox{$M_\odot$}$ uncertainty for the pulsars from giants of masses $<15\mbox{$M_\odot$}$ comes from accretion in the He star red giant stage, when additional mass can be transferred. The higher probability $\hat P$ for the more massive giants, such as ZAMS $20\mbox{$M_\odot$}$, results because their companion must come from a giant of lower ZAMS mass, as in Table~\ref{tabX1}. We have given reasons earlier for our placement in Table~\ref{tabX} of the known pulsars B1913$+$16, etc. From Table~\ref{tabX} we see that the overwhelming probability would be for the pulsars in B1913$+$16 and B1534$+$12 (and 2127$+$11C, if not evolved from NS exchanges in the globular cluster), J0737$-$3039B and J1756$-$2251 to be much more massive than the companion. However, if the maximum NS mass is $1.7\mbox{$M_\odot$}$ or less, all of the binaries calculated in Table~\ref{tabX} with hypercritical accretion will be BH-NS binaries. At the 95\% confidence level, the errors of $^{+0.4}_{-0.5}\mbox{$M_\odot$}$ are large, although they will be made smaller with longer observing time, and the central value of $2.1\mbox{$M_\odot$}$ for J0751$+$1807 is arrived at through a Bayesian analysis.
We feel that it is fair to use the 95\% confidence limits, since this mass sticks up well above the other masses in Table~\ref{tab1}. \section{Hypercritical Accretion in Case BB mass transfer} \label{sechyper} We now consider in more detail the two lowest mass binary NS systems: the double pulsar, with masses $1.337 \mbox{$M_\odot$}$ for J0737$-$3039A and $1.250\mbox{$M_\odot$}$ for J0737$-$3039B, and J1756$-$2251, with pulsar mass $1.40\mbox{$M_\odot$}$ and companion mass $1.18\mbox{$M_\odot$}$, which would have come from $10\mbox{$M_\odot$}$ or $11\mbox{$M_\odot$}$ ZAMS giants. The He stars in the He-star, NS binary which precedes the double NS binary will have been the least massive in binary NS evolution, so it is reasonable that they evolve into He red giants (Dewi and van den Heuvel 2004). Dewi and van den Heuvel (2004) did not, however, consider that mass can be accreted onto the NS during the He-star common envelope evolution. Dewi and van den Heuvel also inadvertently restricted the directions of the supernova kick imparted to PSR J0737$-$3039B to the presupernova plane. In the papers of Willems and Kalogera (2004) and Willems et al. (2005) the investigation was significantly extended by incorporating proper motion constraints and the kinematic history of the system in the Galaxy into the analysis. Hypercritical accretion requires $\dot M > 10^4 \dot M_{\rm Eddington}$ (Chevalier 1993). In NS common envelope evolution in a hydrogen red giant, $\dot M$ is typically $10^8 \dot M_{\rm Eddington}$. The typical hydrogen envelope has a mass of $\sim 10\mbox{$M_\odot$}$. The He star red giant evolution involves only an $\sim 1\mbox{$M_\odot$}$ evolved envelope, which is nevertheless sufficient for hypercritical accretion.
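As a rough numerical sketch of the thresholds just quoted: taking a NS Eddington rate of $\sim 1.5\times 10^{-8}\mbox{$M_\odot$}$ yr$^{-1}$ (our assumed round number, not a value from the text), the Chevalier (1993) condition and the typical common envelope rate become:

```python
# The Eddington rate below is an assumed round number for a NS, used only
# to put the dimensionless factors of the text into physical units.
MDOT_EDD = 1.5e-8               # assumed NS Eddington rate [M_sun / yr]

mdot_hyper = 1e4 * MDOT_EDD     # Chevalier (1993) hypercritical threshold
mdot_ce = 1e8 * MDOT_EDD        # typical hydrogen common-envelope rate (per the text)

# At ~1e8 Mdot_Edd even the ~1 M_sun evolved He-star envelope is processed
# on a timescale of well under a year, so the threshold is easily exceeded.
t_envelope_yr = 1.0 / mdot_ce
print(mdot_hyper, mdot_ce, t_envelope_yr)
```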
Using the BKB (2002) equations, which are now absolutely necessary because the Bethe and Brown approximation of neglecting the mass of the compact object would here mean neglecting the pulsar mass as compared with an $\sim 2\mbox{$M_\odot$}$ He star mass, we find the following results: For an initial $M_{{\rm He},i} = 2.5\mbox{$M_\odot$}$, final $M_{{\rm He},f} = 1.5\mbox{$M_\odot$}$ and $M_{\rm NS} = 1.25 \mbox{$M_\odot$}$, the latter being the mass of the first-born pulsar in the double pulsar system, we find that the pulsar accretes $0.2\mbox{$M_\odot$}$ and the orbit tightens by a factor of 6.7. For $M_{{\rm He},i} = 2.25\mbox{$M_\odot$}$, $M_{{\rm He},f}=1.25\mbox{$M_\odot$}$, the pulsar accretes $0.19\mbox{$M_\odot$}$, with tightening by a factor of 7.4. Since we want a first-born pulsar mass of $1.25\mbox{$M_\odot$}$, $2.25\mbox{$M_\odot$} < M_{{\rm He},i} < 2.5 \mbox{$M_\odot$}$ seems appropriate.\footnote{ $M_{{\rm He},f}$ is corrected downward to account for the greater gravitational binding energy when it evolves into a NS.} We are unable to lower the accreted mass much below $0.20 \mbox{$M_\odot$}$, which is twice the mass difference observed in the double pulsar, but would fit the difference in mass between pulsar and companion in J1756$-$2251. It does not seem easy to lower the accretion by kick velocities, as Fryer \& Kalogera did for 1913$+$16 and 1534$+$12, because the double pulsar eccentricity is only 0.09, the smallest among all double NS's. It is possible that there has been substantial wind loss, which would diminish the envelope mass and lower the accreted mass. We would have expected the accreted mass to be larger than it apparently is. The point we wish to make is that the amount of mass accreted by the low-mass pulsars in the He envelope evolution of their companions is likely to be $\lesssim 0.2\mbox{$M_\odot$}$. This is not enough to send the pulsar into a BH.
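The $\sim 0.2\mbox{$M_\odot$}$ of calculated Case BB accretion can be compared directly with the measured mass differences in the two systems (masses as quoted at the start of this section):

```python
# Mass differences in the two low-mass double NS systems, using the
# masses quoted in the text, compared with the ~0.2 M_sun of calculated
# Case BB accretion.
m_A, m_B = 1.337, 1.250          # J0737-3039A, J0737-3039B
m_psr, m_comp = 1.40, 1.18       # J1756-2251 pulsar, companion

dm_double_pulsar = m_A - m_B     # ~0.09 M_sun: about half the calculated 0.2
dm_j1756 = m_psr - m_comp        # ~0.22 M_sun: consistent with the 0.2
print(round(dm_double_pulsar, 3), round(dm_j1756, 2))
```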
We believe that we can explain why the first born NS's in the double pulsar and in J1756$-$2251 are substantially ($\lesssim 0.2 \mbox{$M_\odot$}$) more massive than their companion stars. Certainly they have not gone through the van den Heuvel and van Paradijs (1993) scenario, or they would have accreted $\sim 0.5\mbox{$M_\odot$}$ from their hydrogen giant companion. But of course there must have been a double He star formation scenario, because otherwise the first-born NS would have accreted $\sim 0.5\mbox{$M_\odot$}$ from the hydrogen envelope of its companion and then later another $\lesssim 0.2\mbox{$M_\odot$}$ from its He companion as the latter evolved. Thus, put together, the amount of matter accreted by the first-born NS would have been comparable to that accreted from the hydrogen red giant stage of the companion by the more massive ($\gtrsim 1.5\mbox{$M_\odot$}$) first born NS's. These would turn into BHs. However, if the BH is formed in the lower mass stars (ZAMS mass $\lesssim 15\mbox{$M_\odot$}$), then when the companion evolves into a He red giant, the BH will be in a common envelope and the binary will tighten just as effectively as if it were a NS. Thus, the double pulsar and J1756$-$2251, which must have formed, at least in our scenario, through the double He star scenario, should have an order of magnitude more BH-NS counterparts, with similar tight orbits. Thus, the double helium star scenario, necessitated to produce binary NS systems if the maximum NS mass is as small as we say ($1.7\mbox{$M_\odot$}$ or less), results in an order of magnitude more BH-NS binaries, which will merge at about the same rate as binary NS's. We are, however, not finished yet in increasing the prediction for compact object mergers.
To date these estimates are mostly based on the NS binaries observed in the Galaxy, with extrapolation to include the number of similar systems that should exist in the Galaxy. The observation of the double pulsar increased the estimate by an order of magnitude. The radio signal of the double pulsar is much weaker than that of the Hulse-Taylor 1913$+$16, and it must generally be true that the radio signals from the pulsars from the lower mass ZAMS stars are much weaker than those from the pulsars already discovered in binaries. So we may be missing many of these. We end this section by referring to yet another channel, suggested by Belczynski and Kalogera (2001). They develop the scenario following the double He star evolution. For the lower mass stars there is a second stage of common envelope evolution, now in He red giant envelopes, similar to that for the nearly equal mass evolving hydrogen stars. The survival probability through the two (Type Ic) supernovae is larger, given the tight orbit before the explosions. The end product is a close NS-NS binary with a very short merger time. Delgado and Thomas (1981) show that Case BB mass transfer is similar to the double He star scenario for He stars of mass greater than the Chandrasekhar mass of white dwarfs, in that Case BB mass transfer can only start after the ignition of carbon; in the same sense, helium core burning has already begun when the hydrogen red giant phase takes place in Case B mass transfer in the stars massive enough to evolve into NS's. However, the chance that the two He stars burn carbon at the same time is rather small, the ratio of carbon burning time to He burning time being only $\sim 0.03$ in ZAMS $12\mbox{$M_\odot$}$ stars (Schaller et al. 1992). We thus believe that only $\sim 1\%$ of the binaries will go through the double common envelope, the others involving Case BB mass transfer.
BKB (2002) and Belczynski \& Kalogera (2001) have motivated a large increase in gravitational mergings by taking the effects of He star red giants, i.e., Case BB mass transfers, into account. Our more schematic estimates suggest that overall they were unduly conservative in their estimates. Be that as it may, we wish to add the order of magnitude more BH-NS binaries which can be inferred from the presently measured binary NS masses. In these the BHs will be more massive than the NS's because of accretion, and their large chirp masses will make them observable to larger distances. \section{High Mass X-ray Binaries} \label{secHMXB} High mass X-ray binaries give the least reliable information about NS masses. In general the giant is quite ``floppy" with various pulsational excitations, the center of mass of the system is inside the giant, and the NS heats up the giant by radiation. Over the years the determinations of the NS masses have changed substantially, but with two exceptions, 4U 1700$-$37 and Vela X-1, the NS masses are at an average of $\sim 1.4\mbox{$M_\odot$}$ and are consistent with a limit of $<1.5\mbox{$M_\odot$}$. Brown, Weingartner, and Wijers (1996) considered 4U 1700$-$37, which does not pulse in X-rays and has a harder spectrum than other X-ray binaries. The accretion history is, however, similar to that of the others. The indication is that the compact object is a BH. Wellstein and Langer (1999) argue that 4U 1700$-$37, composed of a $30\mbox{$M_\odot$}$ O star with a $2.6^{+2.3}_{-1.4}\mbox{$M_\odot$}$ compact companion (Heap and Corcoran 1993, Rubin et al. 1996), must have gone through a nonconservative mass transfer in late Case B or Case C. In this way the system loses substantial amounts of mass and angular momentum and thus becomes a short period binary. (4U 1700$-$37 has a period of 3.412 days.) In fact, in Brown et al.
(2001a) it was shown that, had the first mass transfer from the primary to the secondary been Case A, AB or B, the primary of $\sim 30\mbox{$M_\odot$}$ would have gone into a NS.\footnote{ The rate of wind loss during He burning is not sufficiently well known to say definitely that the primary would have gone into a NS. It might very well have immediately gone into a LMBH.} However, with mass transfer from the evolving secondary, it would have gone into a LMBH. {}From an evolutionary point of view 4U 1700$-$37 is very interesting because it is the only binary with a low-mass compact object, albeit a BH, for which one can make the argument that it comes from such a massive region of $> 30\mbox{$M_\odot$}$. (Brown et al. 1996 estimate $40\pm 10\mbox{$M_\odot$}$.) Vela X-1 may well be the worst system in which to measure the mass of a NS. ``Another systematic effect, due to the distortion of the primary may be quite important in the case of X-ray binaries with a small mass ratio\footnote{ Authors: Vela X-1 is made up of an $18\mbox{$M_\odot$}$ B-star and a NS. The center of mass is well within the giant.} as the Vela X-1 system. In such a system the radial velocities of certain individual surface elements of the primary are much greater than the orbital velocity of the center of mass of this star, in the case of synchronous rotation. When the primary is tidally distorted and has a variation of effective temperature and gravity across its surface, it is by no means clear that the observed radial velocity, which is given by some spectrophotometric average over the surface, can be identified with the motion of the center of mass of the object." (J. van Paradijs et al. 1977a) ``In a previous paper we presented a numerical study of the effect of the deformation of the primary on its apparent radial velocity (van Paradijs et al. 1977a). The apparent velocity amplitude can in some cases increase by up to 30 \%, thereby increasing the apparent mass of the compact object by approximately the same amount." (J.
van Paradijs et al. 1977b) It is known that the light curve varies substantially from night to night. Indeed, in Barziv et al. (2001), from which the large NS mass is taken, the authors say ``The best estimate of the mass of Vela X-1 is $1.86\mbox{$M_\odot$}$. Unfortunately, no firm constraints on the equation of state are possible, since systematic deviations in the radial-velocity curve do not allow us to exclude a mass around $1.4\mbox{$M_\odot$}$ as found for other neutron stars." \section{Carbon-Oxygen White-Dwarf, Neutron Star Binaries} \label{sec6} The high-field eccentric binaries B2303$+$46, long thought to be a wide NS binary, and J1141$-$6545 have not gone through common envelope evolution, which would have circularized them and brought their magnetic fields down. The magnetic field of B2303$+$46 is $7.9\times 10^{11}$ G, that of J1141$-$6545, $1.3\times 10^{12}$ G. The relative time that such a binary can be seen, before it goes into the ``graveyard", goes inversely with $B$. Thus we have an ``observability premium", Eq.~(\ref{eq5}), equal to essentially unity as an average for the two unrecycled binaries above. Tauris et al. (2000) proposed five binaries which they evolved through common envelope. Two of these, the recycled pulsars in relativistic orbits PSR J1157$-$5112 and J1757$-$5322, are discussed by Edwards and Bailes (2001). Two others, J1435$-$6100 and J1454$-$5846, are discussed by Camilo et al. (2001). The fifth of the systems favored for common envelope evolution by Tauris, van den Heuvel and Savonije (2000) is J1022$+$1001. This closely resembles PSR J2145$-$0750, aside from a more massive white dwarf companion, as remarked by van den Heuvel (1994a), who evolved the latter through common envelope when the white dwarf progenitor was on the AGB (Case C mass transfer). Van den Heuvel suggested for J2145$-$0750 that there was considerable mass loss because of possible instabilities on the AGB caused by the presence of the NS.
This is one possibility of saving our general theme, i.e., that most of the NS, carbon-oxygen white dwarf binaries would end up as LMBH, carbon-oxygen white dwarf binaries, although some might be saved with NS's because of the possible instabilities caused by the NS while the white dwarf progenitor is on the AGB. Van den Heuvel chose $\lambda=1/2$ for the parameter that characterizes the structure of the hydrogen envelope of the massive star that is removed in common envelope evolution. Dewi \& Tauris (2001) have since carried out detailed calculations showing that in some cases, ``particularly on the asymptotic giant branch of lower-mass stars, it is possible that $\lambda>4$." This lowers the binding energy of the envelope by a large factor, so that it can be removed in common envelope evolution and still leave a reasonably wide orbit, as remarked by Dewi \& Tauris. We believe that this may be the reason that some binaries have survived common envelope evolution. \begin{table} \caption{Inferred magnetic fields $B$ and the observability premium $\Pi$ for recycled pulsars} \label{tabPS} \begin{center} \begin{tabular}{ccc} \hline Pulsars & $B$ & $\Pi$ \\ \hline J2145$-$0750 & $6\times 10^8$ G & 1667 \\ J1022$+$1001 & $8.4\times 10^8$ G & 1190 \\ J1157$-$5112 & $<6.3 \times 10^9$ G & $>159$ \\ J1453$-$58 & $6.1\times 10^9$ G & 164 \\ J1435$-$60 & $4.7\times 10^8$ G & 2127 \\ \hline \end{tabular} \end{center} \end{table} In four of the six recycled pulsars (assumed to have been evolved through common envelope evolution) the magnetic fields have been inferred, as in Table~\ref{tabPS}. The observability premium $\Pi$ is high for most of these pulsars. Since we see two unrecycled pulsars with high magnetic fields $B\sim 10^{12}$ G, and therefore $\Pi\sim 1$, we should see $\sim 20,000$ recycled pulsars.
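The $\Pi$ column of Table~\ref{tabPS} is just Eq.~(\ref{eq5}) applied to the inferred fields; a minimal consistency check (fields as listed in the table, omitting the entry that is only an upper limit):

```python
# Pi = 10^12 G / B for the inferred fields of the recycled pulsars in
# Table tabPS; the computed values agree with the tabulated Pi to rounding.
fields_gauss = {
    "J2145-0750": 6.0e8,
    "J1022+1001": 8.4e8,
    "J1453-58": 6.1e9,
    "J1435-60": 4.7e8,
}

for psr, B in fields_gauss.items():
    print(psr, 1e12 / B)
```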
The above argument does not take into account the greater difficulty of observing pulsars with low magnetic fields $\sim 10^8 - 10^9$ G, which may remove some of the large predicted number of recycled pulsars, in case they did not go into BHs during common envelope evolution. On the other hand, the NS in these cases goes through common envelope with a star of main sequence mass less than $\sim 10\mbox{$M_\odot$}$, since the star must end up as a white dwarf, albeit a relatively massive carbon-oxygen one. As discussed by van den Heuvel (1994a), only part of the energy to remove the envelope will be connected with the accretion, the rest coming from wind losses, etc. Thus, our observation that most of these binaries must end up as white-dwarf, low-mass X-ray binaries gives credence to essentially all of the NS's which go through common envelope evolution in the evolution of binary NS's ending up as LMBHs. As noted earlier, there are hopes that during our lifetime -- at least that of one or two of the authors -- this can be tested, because the Bethe-Brown prediction of the factor 20 greater contribution of NS-LMBH mergers relative to binary NS mergers should be robust, essentially independent of the number of the latter. \section{White Dwarf-Neutron Star Binaries} \label{sec8} This class of 12 (see Table~\ref{tab1}) is the most numerous class. They mostly would be expected to come from a NS with a main sequence star of mass between $1\mbox{$M_\odot$}$ and $2\mbox{$M_\odot$}$, the lower limit of $1\mbox{$M_\odot$}$ because the star must evolve in a Hubble time. The main sequence star evolves, either transferring matter to the NS or losing it by wind, because it ends up as a typically quite low-mass white dwarf. \def\phantom{xxx}{\phantom{xxx}} \begin{table} \caption{White Dwarf Companion masses ($m_2$) and orbital period ($P$) in NS, He-White-Dwarf Binaries.
Refs; 1) Thorsett \& Chakrabarty 1999, 2) Hansen \& Phinney 1997, 3) Tauris, van den Heuvel, \& Savonije 2000, 4) Lundgren, Zepka, \& Cordes 1995, 5) Navarro et al. 1995, 6) Thorsett, Arzoumanian, \& Taylor 1993, 7) van Kerkwijk et al. 2000, 8) Lyne et al. 1990, 9) Phinney \& Kulkarni 1994. } \label{tab2} $$ \begin{array}{lccl} \hline \rm Pulsar &\rm m_2\ (\mbox{$M_\odot$}) &\rm\phantom{xxx} P\ (days) \phantom{xxx} &\rm References\\ \hline J0034-0534 & 0.15 - 0.32 & 1.589 & 1,2\\ J0218+4232 & 0.2 & 2.029 & 1,5 \\ J0751+1807 & 0.15 & 0.263 & 1,2,4 \\ J1012+5307 & 0.165 - 0.215 & 0.605 & 1,2\\ J1045-4509 & < 0.168 & 4.084 & 1\\ J1232-6501 & 0.175 & 1.863 & 3\\ J1713+0747 & 0.15 - 0.31 & 67.825 & 1,2\\ B1744-24A \ (J1748-2446A) & 0.15 & 0.076 & 1,8 \\ B1800-27 \ (J1803-2712) & 0.17 & 406.781 & 1,9 \\ J1804-2718 & 0.185 - 0.253 & 11.129 & 1\\ B1855+09 \ (J1857+0943) & 0.19 - 0.26 & 12.327 & 1,2\\ J2129-5721 & 0.176 & 6.625 & 1\\ J2317+1439 & 0.21 & 2.459 & 1,9 \\ \hline J0437-4715 & 0.15 - 0.375 & 5.7 & 1,2\\ J1455-3330 & 0.305 & 76.174 & 1\\ B1620-26 \ (J1623-2631) & 0.3 & 191.443 & 1,6 \\ J1640+2224 & 0.25 - 0.45 & 175.460 & 1,2\\ J1643-1224 & 0.341 & 147.017 & 1\\ B1718-19 \ (J1721-1936) & 0.3 & 0.258 & 1,7 \\ B1802-07 \ (J1804-0735) & > 0.29 & 2.617 & 1\\ J1904+04 & 0.27 & 15.75 & 3\\ B1953+29 \ (J1955+2908) & 0.328 & 117.349 & 1\\ J2019+2425 & 0.264-0.354 & 76.512 & 1,2\\ J2033+17 & 0.290 & 56.2 & 1\\ J2229+2643 & 0.315 & 93.015 & 1\\ \hline \end{array} $$ \end{table} In Tab.~\ref{tab2} we have collected the helium white dwarf masses we could find. Note that in particular the binaries B2303$+$46 and J1141$-$6545, which we discussed in Sec.~\ref{sec6}, do not appear in our table. They have quite massive carbon-oxygen white dwarf companions, and were discussed in the last section. \begin{table} \caption{Statistics of the mass distribution of white dwarfs. Logarithmic distribution of the initial NS, main sequence star is assumed. 
Last column is the number of observed systems summarized in Table~\ref{tab2}.} \label{tab3} $$ \begin{array}{cccc} \hline \phantom{xxx} m_2 \ (\mbox{$M_\odot$}) \phantom{xxx} &\phantom{xxx} R_g \ (\mbox{$R_\odot$})\phantom{xxx} &\phantom{xxx} \ln (R_u/R_l)\phantom{xxx} &\phantom{xxx} \rm Observations\\ \hline 0.15-0.25 & 1.47-10.0 & 1.92 & 10 \\ 0.25-0.35 & 10.0-42.0 & 1.44 & 12 \\ 0.35-0.46 & 42.0-128 & 1.11 & 0 \\ \hline \end{array} $$ \end{table} In Tab.~\ref{tab3} we show the statistics of the mass distribution of white dwarfs. We note that all of our tabulated white dwarfs have masses $\lesssim 0.35\mbox{$M_\odot$}$, whereas single white dwarfs tend to peak at $\sim 0.6\mbox{$M_\odot$}$. This indicates to us immediately that the companion NS is strongly influential in increasing the wind loss. (See the suggestion of van den Heuvel about J1245$-$0750 in the last section that there was considerable mass loss because of possible instabilities on the AGB (asymptotic giant branch) caused by the presence of the NS.) The white dwarf, NS binaries have been evolved by Tauris and Savonije (1999). For evolution with stable mass transfer, i.e., for main sequence masses less than $M_{\rm NS}$, the evolution was basically conservative, matter accreting at or below the Eddington limit onto the accretion disc of the NS. In the case of the main sequence mass $M_{\rm MS} > M_{\rm NS}$, for $M_{\rm MS}$ up to $2\mbox{$M_\odot$}$, the evolution was still conservative from the standpoint of the accretion disk, but the amount above the Eddington limit was expelled with the angular velocity of the NS. The mass distribution of NS's of Tauris and Savonije does not give NS masses that look anything like those shown in Table~\ref{tab1}. Even though Tauris and Savonije began with the somewhat small NS mass of $1.3\mbox{$M_\odot$}$, they have copious numbers up to $2\mbox{$M_\odot$}$.
Only J0751$+$1807 in Tab.~\ref{tab1} really comes this high. We believe the reason for their high NS masses is explained in the last sentence of the caption to their Fig.~4: ``The post-accretion $M_{\rm NS}$ curves (bottom) assume no mass loss from accretion disk instabilities or propeller effects." As we discussed in Sec.~\ref{sec3}, Francischelli et al. (2002) found that the propeller effect decreased the mass accretion from He star wind onto the pulsar by an order of magnitude. In other words, the accretion may be scaled down from the $\sim 1\mbox{$M_\odot$}$ difference between progenitor main sequence and white dwarf masses, to $\sim 0.1\mbox{$M_\odot$}$ actually accepted, the remainder being lost through the agency of the propeller effect. Of course, because of the different angular momenta in the different binaries we can only give order of magnitude estimates. Since Brown and Bethe (1994), ``A scenario for a large number of low-mass black holes in the Galaxy", at which time NS masses were spread rather widely, we have become used to seeing NS masses fall below our projected maximum of $1.5\mbox{$M_\odot$}$. Thus, at that time, the NS in J2019$+$2425 had an upper limit on its mass of $\sim 1.64\mbox{$M_\odot$}$ from the white dwarf mass-period relation. Nice et al. (2001) could constrain the inclination angle of the binary to $i<72^\circ$ from the proper motion of the binary, with a median likelihood value of $63^\circ$. A similar limit on the inclination angle arose from the lack of a detectable Shapiro delay signal. The NS mass was determined to be at most $1.51\mbox{$M_\odot$}$. \section{Summary and Conclusion} Our Selected Papers with Commentary is entitled ``Formation and Evolution of Black Holes in the Galaxy". We were shifted from NSs to BHs by SN 1987A, which showed that a relatively low-mass compact core could evolve into a LMBH. Most important is our double helium star scenario for the evolution of binary NS's.
The motivation for this began with Chevalier (1993), who estimated that a NS would accrete $\sim 1\mbox{$M_\odot$}$ in common envelope evolution. We confirmed and made more quantitative his estimate. With the addition of $0.75\mbox{$M_\odot$}$, the NS will certainly go into a BH. The double helium star scenario avoids the NS having to go through common envelope evolution. The near equality of the masses of pulsar and companion in the binary NS's is observational support for our double helium star scenario. Where the masses of pulsar and companion are substantially different, as in the Hulse-Taylor binary, the difference tells us the range of ZAMS masses that the binary came from. In this case, because the companion mass is almost 4\% lower than that of the pulsar, the binary must have come from the most massive possible range, $\sim 20\mbox{$M_\odot$}$, where the Fe core masses change most rapidly with main sequence mass. The importance of our double He star scenario is that one must then look for the fate of the NS's which do go through common envelope evolution in the envelope of the evolving giant. Bethe and Brown (1998) showed that these do, indeed, produce LMBH, fresh-NS binaries. There are a factor of 5 more of these than double NS binaries and, because of the higher mass of the BH, they give a larger factor in expected mergings. While the recently discovered double pulsar is very interesting (and was very improbable to be observed) it simply brings the binary NS contribution to LIGO up to snuff, and does not change our additional factor of 5, although it does greatly increase the merging rate of the binaries one sees. We believe that the many recently observed short hard gamma-ray bursts (SHBs) give credence to the large number of LMBH-NS binaries we predicted, and even our factor 40, which includes the higher star formation rate in the early Universe, is easily subsumed in the number of SHBs.
However, the gravitational waves from these mergings may still not be sufficient for LIGO observations in the next few years. What LIGO needs is a merger with large chirp mass, as predicted by Portegies Zwart and McMillan (2000), which overwhelms the background. Of course in time, possibly a few years, the LIGO sensitivities should allow observation of the mergers of the lower chirp mass binaries discussed here. \section{Discussion} Kip Thorne asked Hans Bethe and Gerry Brown to work out the mergings of NS-BH binaries while they were at Caltech in 1996. This was a new activity for Hans, who was 90 at the time, and he attacked it with gusto. From the paper of Brown (1995) it was clear that there were an order of magnitude more of these than of binary NS's, because the standard scenario for making the latter always ended up with the first NS going into a BH during its common envelope evolution with the companion while the latter was in the red giant stage. The authors returned to this problem in 2003, after publishing their joint works (Bethe, Brown and Lee 2003), and Hans was engaged with this problem right until his death. In fact, he had a discussion of it on the telephone with Gerry Brown the morning of the day of his death. Crucial to our work on evolving the binary objects was Hans' analytic common envelope evolution. This is reproduced in his obituary by Brown (2005). It was carried through with Kepler's and Newton's Laws using elementary calculus. \section*{Acknowledgments} This is written in celebration of Hans Bethe's 100th birthday. We would like to thank Marten van Kerkwijk and Ralph Wijers for many helpful discussions. We wish especially to thank Ed van den Heuvel, who has always stimulated us and who, of our scenarios which sometimes contradict his earlier ones, says ``New ideas are always welcome". GEB was supported in part by the US Department of Energy under Grant No. DE-FG02-88ER40388.
CHL was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD, Basic Research Promotion Fund) (KRF-2005-070-C00034). \renewcommand{\thesection}{Appendix~\Alph{section}}
\section{Introduction} Many attempts to quantize gravity lead to the inclusion of higher-curvature corrections to the Einstein-Hilbert action. An understanding of the effects of these higher-curvature corrections on different types of solutions is therefore of significant interest. However, it is generally difficult to construct black hole solutions to higher-curvature modifications of general relativity, which often involve complicated sets of fourth-order differential equations. There are exceptions to this, though. Lovelock gravity \cite{lovelock1970,lovelock1971} is one such well-studied class of theories, ghost-free on any background. However, curvature terms of order $k$ are topological invariants in $D = 2k$ dimensions, and vanish identically for $D < 2k$. Quasi-topological gravity \cite{oliva2010c,myers2010c}, a generalization of this class, presents more examples of higher-curvature theories, yet unfortunately is also trivial in four dimensions. Recently, a new class of theories has been discovered that are neither topological nor trivial in four dimensions and that have the same graviton spectrum as general relativity on constant curvature backgrounds. These properties make this class of theories, known as \textit{Generalized Quasi-topological Gravity} (GQTG), of considerable interest phenomenologically. The first representative of this class of theories to be found was Einsteinian Cubic Gravity (ECG) \cite{Bueno:2016xff,hennigar2017b,hennigar2017e,Bueno:2016lrh}, which introduces to the action a unique set of terms cubic in the curvature tensor. This theory was found to admit non-hairy, single-function generalizations of the Schwarzschild black hole \cite{hennigar2017e,bueno2017}.
A rather thorough analysis of the basic phenomenology of ECG was performed not long ago \cite{Hennigar:2018hza,Poshteh:2018wqy}, and more recently a similar analysis was carried out for Generalized Quasi-topological gravities of order four, or Einstein Quartic Gravity (EQG) \cite{Khodabakhshi:2020hny,Khodabakhshi:2020ddv}, which introduces terms quartic in the curvature tensor. Both studies were in the context of spherically symmetric black holes. The inclusion of rotation involves significant additional complications. In fact, no analytic generalization of the Kerr solution has yet been constructed, even for the case of Lovelock gravity. However, slowly rotating solutions can often be extracted. This is done using a metric of the form \begin{equation} ds^2 = -f(r)dt^2 + \frac{dr^2}{f(r)} +2ar^2p(r)\sin^2(\theta)dtd\phi +r^2[d\theta^2+\sin^2(\theta)d\phi^2]\label{metric} \end{equation} and working to linear order in the rotation parameter $a$. The Kerr solution in Einstein gravity is well-approximated by this form for slow rotations, and an analysis of slowly rotating black hole solutions has already been performed for ECG \cite{adair2020}. In the cubic case, the GQTG Lagrangian density is unique. By contrast, in the quartic case, there are four linearly independent such Lagrangian densities with analogous properties \cite{ahmed2017a}. In addition, in four dimensions (which we specialize to in this paper) an additional two Lagrangian densities appear, so that it is sensible to speak of six separate theories of Einstein Quartic Gravity (EQG). For static, spherically symmetric, single-function solutions the same field equation was found to hold for all six theories \cite{ahmed2017a,Sajadi:2022pcz}; hence such black holes cannot be used to phenomenologically distinguish between them.
The predictions for both cubic and quartic GQTGs that have been worked out \cite{Hennigar:2018hza,Khodabakhshi:2020hny,Khodabakhshi:2020ddv} thus leave open the general question of how to empirically distinguish the six quartic theories from one another, as well as from the cubic theory. This is particularly pertinent in the context of gravitational-wave astronomy as a possible means of comparing astrophysical rotating black holes in Einstein gravity to those predicted by theories containing various higher-curvature corrections. Motivated by the above, in this paper we study solutions of the form \eqref{metric} in the six linearly independent theories of EQG that appear in four dimensions. We find that the third and fourth theories predict equivalent solutions but, unlike in the case of static, spherically symmetric black holes, we obtain distinct results for each of the other theories. In Section 2, we show how to reduce the field equations to a set of second-order differential equations, one of which has already been studied in the context of ECG. We give near-horizon and asymptotic approximate solutions to these equations, as well as construct numerical solutions. We also obtain continued fraction approximations that reproduce the numerical solutions very well everywhere outside the event horizon. In Section 3, we study how various physical properties of the solution are modified in each theory, working perturbatively in the coupling constants. In particular, we study the angular velocity of the event horizon and properties of causal geodesics including the photon sphere, photon ring, and innermost stable circular orbit, as well as the modification to the black hole shadow.
\section{Slowly Rotating Solutions}\label{cavity} The action for a general EQG in four dimensions is given by \begin{equation} S=\frac{1}{16\pi}\int d^4x\sqrt{-g}\left[R-\sum_{i=1}^{6}\hat{\lambda}_{(i)}\mathcal{S}_{(4)}^{(i)}\right] \end{equation} where $R$ is the Ricci scalar and the $\mathcal{S}_{(4)}^{(i)}$ are the generalized quasi-topological Lagrangian densities whose forms are presented in \cite{ahmed2017a}. We write the ansatz for the slowly rotating black hole in the following form by letting $x=\cos\theta$: \begin{equation} ds^2 = -N(r)^2f(r)dt^2 + \frac{dr^2}{f(r)} +2ar^2p(r)(1-x^2)dtd\phi +r^2\left[\frac{dx^2}{1-x^2}+(1-x^2)d\phi^2\right] \end{equation} Note that this form includes three independent functions of the radial coordinate. In Einstein gravity $r^2p(r) = f(r) -1$ and $N(r)=1$ \cite{Gray:2021roq}; we will find that the latter condition holds without loss of generality, but that $p(r)$ and $f(r)$ are two independent functions. Our goal is to solve the field equations for this ansatz, working to linear order in $a$. We consider each of the six theories separately. The field equation for the $i$th theory is given by \begin{equation} P_{acde}^{(i)}R_b^{cde}-\frac{1}{2}g_{ab}\mathcal{S}_{(4)}^{(i)}-2\nabla^c\nabla^d P_{acdb}^{(i)} = 0 \end{equation} where \begin{equation} P_{abcd}^{(i)} = \frac{\partial \mathcal{S}_{(4)}^{(i)}}{\partial R^{abcd}} \, . \end{equation} The exact expressions for the tensors $P_{acde}^{(i)}$ are lengthy and of little interest by themselves. At linear order in $a$, the field equations $\mathcal{E}_{tt}$ and $\mathcal{E}_{rr}$ are equivalent to one another and to those of the static case; these admit the solution $N(r)=$ constant, and so we may set $N(r)=1$ without loss of generality. Unlike the static case, the $\mathcal{E}_{t\phi}$ equation is no longer identically zero, but rather is a complicated fourth-order differential equation in $p(r)$.
However, for all six theories, the following combination of the field equations yields a differential equation that is third order in $p(r)$: \begin{equation}\label{p-eq} \frac{r^4}{f(r)}\left[\mathcal{E}_t^\phi-\frac{arp(r)}{2}\frac{d\mathcal{E}_r^r}{dr} \right] = 0 \end{equation} Furthermore, the resulting equations depend only on the first three derivatives of $p(r)$ and not on the function itself, so that by making the substitution $g(r) = p'(r)$ we end up with a differential equation for each theory that is second-order in $g(r)$. As in the spherically symmetric case, the field equation determining $f(r)$ is equivalent for each of the six theories. We may write it, setting $G=1$, as \cite{ Bueno:2016xff, hennigar2017e,Bueno:2016lrh} \begin{align} 2M&=r \left( 1-f(r)\right) +\frac {24K}{5\,{r}^{3}} \left[ f'(r) r \left( \frac{rf'(r)}{2}+1-f(r) \right) f(r) f''(r) +3\, f'(r) ^{4}{r}^{2} \right. \nonumber\\ &\left.+8\,r \left( 1+\frac{f(r)}{2} \right) f'(r) ^{3}+24\,f(r) \left( 1-f(r)\right) f'(r) ^{2} \right] \end{align} This is valid for a general EQG theory— $K$ is a linear combination of the six coupling constants, given by \begin{equation} K = \frac{1}{2}\lambda_{(1)}+\frac{5}{4}\lambda_{(2)}+\frac{1}{2}\lambda_{(3)}+\lambda_{(4)}+2\lambda_{(5)}-\lambda_{(6)} \end{equation} By contrast, the equations for $g(r)$ are different for each theory— with the exception of those corresponding to $\mathcal{S}_{(4)}^{(3)}$ and $\mathcal{S}_{(4)}^{(4)}$, which are equivalent. For example, \eqref{p-eq} for the $\mathcal{S}_{(4)}^{(6)}$ theory yields \begin{align} C&= {r}^{4}g(r)+\lambda_{(6)}\left[ -\frac {104\,g(r) }{15\,{r}^{2}} \left( \frac {11\,f(r) {r}^{3}{}f'''(r) }{26}\right) \left( {\frac { f'(r) r}{2}}+1-f(r) \right) +\frac {{r}^{4} f''(r) ^{2}}{4} \left( \frac {22\,f(r) }{13}+1 \right) \right. 
\nonumber\\ &\left.+\frac {34\,{r}^{2}f''(r) }{13} \left( {\frac {29\,{r}^{2} f'(r) ^{2}}{136}}+{\frac {5\, f'(r) r}{68} \left( 1-{\frac {144\,f(r) }{5}} \right) }+ \left( 1-{\frac {93\,f(r) }{34}} \right) \left( 1-f(r) \right) \right) \right. \nonumber\\ &\left.-\frac {32\,{r}^{3} f'(r) ^{3}}{13}-{\frac {40\,{r}^{2} f'(r) ^{2}}{13} \left( 1-{\frac {373\,f(r) }{80}} \right) }-\frac {9\,r \left( 1-f(r) \right) f'(r) }{13} \left( 1-{\frac {104\,f(r) }{3}} \right) \right. \nonumber\\ &\left.+\frac {106\,f(r) }{13}-\frac {251 \left( f(r) \right) ^{2}}{13}+{\frac {132\, \left( f(r) \right) ^{3}}{13}} \right) +\frac {88\,g'(r) }{15} \left( \frac {20\,rf(r) f''(r)}{11} \left( {\frac {29\, f'(r) r}{40}}+1-f(r) \right) \right. \nonumber\\ &\left.\left.+ f'(r) \left( {\frac {29\,{r}^{2} f'(r) ^{2}}{44}}+{\frac {20\, f'(r) r}{11} \left( 1-{\frac {51\,f(r) }{40}} \right) }+ \left( 1-f(r) \right) \left( 1-{\frac {13\,f(r) }{11}} \right) \right) \right) \right. \nonumber\\ &\left.+{\frac {88\,f(r) g''(r) }{15} \left( {\frac {29\, f'(r) r}{22}}+1-f(r) \right) \left( {\frac { f'(r) r}{2}}+1-f(r) \right) } \right] \end{align} where $C$ is a constant of integration. The expressions for the remaining theories are in Appendix A. In order to agree with the Kerr solution in the weak coupling limit we will find that we must have $C = 6M$; the same turns out to be true in the other five theories as well. In the case of small coupling, we may approximately solve the above equations by assuming a perturbative expansion in the coupling constant. 
This results in the following expansions to second order (the expansion of $r^2p(r)$ is shown for the $\mathcal{S}_{(4)}^{(6)}$ theory): \begin{align}f(r) &=1-{\frac {2M}{r}}+K \left( {\frac {864{M}^{3}}{5{r}^{9}}}-{\frac {1552{M}^{4}}{5{r}^{10}}} \right) +K^{2} \left( -{\frac {69181952{M}^{7}}{25{r}^{19}}}+{\frac {68746752{M}^{6}}{25{r}^{18}}}-{\frac {17044992{M}^{5}}{25{r}^{17}}} \right) \\ {r}^{2}p \left( r \right) &=-{\frac {2M}{r}}+\lambda_{(6)}\left({\frac {1728{M}^{3}}{11{r}^{9}}}-{\frac {1552{M}^{4}}{5{r}^{10}}}\right)+\lambda_{(6)}^2\left(-{\frac {58558464{M}^{5}}{95{r}^{17}}}+{\frac {65788416{M}^{6}}{25{r}^{18}}}-{\frac {69181952{M}^{7}}{25{r}^{19}}}\right) \end{align} \subsection{Asymptotic solution} We begin with an asymptotic solution in the large-$r$ region. This can be done by taking power series ans\"atze for $f(r)$ and $g(r)$: \begin{align}\label{fperturb} f_{1/r}(r)&= 1- \frac{2M}{r}+\sum_{n=0}\frac{a_n}{r^n} \\g_{1/r}(r)&= \frac{6M}{r^4}+\sum_{n=0}\frac{b_n}{r^n} \end{align} The coefficients $a_n$ and $b_n$ determine the behaviour of the solution in the large-$r$ region. There is also a homogeneous part to the solution, discussed for $f(r)$ in \cite{Khodabakhshi:2020hny}, but it decays super-exponentially and can therefore be neglected for our purposes.
Inserting these ans\"atze into the field equations, for $f(r)$ we find \begin{equation} f_{1/r}(r)= 1- \frac{2M}{r}+\frac{864}{5}\frac{KM^3}{r^9}-\frac{1552}{5}\frac{KM^4}{r^{10}} + O\left(r^{-17}\right) \end{equation} For $g(r)$, the six theories produce different asymptotic expansions, though all have their first correction appear at order $r^{-12}$: \begin{align} g_{1/r}^{(1)}(r)&= \frac{6M}{r^4}+1728\frac{\lambda_{(1)}M^3}{r^{12}}-\frac{9312}{5}\frac{\lambda_{(1)}M^4}{r^{13}}+O\left(r^{-20}\right) \\ g_{1/r}^{(2)}(r)&= \frac{6M}{r^4}+2592\frac{\lambda_{(2)}M^3}{r^{12}}-4656\frac{\lambda_{(2)}M^4}{r^{13}}+O\left(r^{-20}\right) \\ g_{1/r}^{(3)}(r)&= \frac{6M}{r^4}+1728\frac{\lambda_{(3)}M^3}{r^{12}}-\frac{9312}{5}\frac{\lambda_{(3)}M^4}{r^{13}}+O\left(r^{-20}\right) \\ g_{1/r}^{(4)}(r)&= \frac{6M}{r^4}+3456\frac{\lambda_{(4)}M^3}{r^{12}}-\frac{18624}{5}\frac{\lambda_{(4)}M^4}{r^{13}}+O\left(r^{-20}\right) \\ g_{1/r}^{(5)}(r)&= \frac{6M}{r^4}+3456\frac{\lambda_{(5)}M^3}{r^{12}}-\frac{37428}{5}\frac{\lambda_{(5)}M^4}{r^{13}}+O\left(r^{-20}\right) \\ g_{1/r}^{(6)}(r)&= \frac{6M}{r^4}+1728\frac{\lambda_{(6)}M^3}{r^{12}}-\frac{18624}{5}\frac{\lambda_{(6)}M^4}{r^{13}}+O\left(r^{-20}\right) \end{align} where the $\mathcal{S}_{(4)}^{(3)}$ and $\mathcal{S}_{(4)}^{(4)}$ expansions are equivalent upon making the substitution $\lambda_{(3)} = 2\lambda_{(4)}$. Some of the remaining inequivalent theories coincidentally give equivalent asymptotic expansions, as is the case with $\mathcal{S}_{(4)}^{(1)}$ and $\mathcal{S}_{(4)}^{(3)}$. Before moving on, let us say a few words regarding the homogeneous part of the solution for $g(r)$ in the asymptotic region. To do this we insert $g^{(i)}(r) = g_{1/r}^{(i)}(r) + g_h^{(i)}(r)$ into the field equation and keep only the terms that are most significant at large $r$, with $g_h^{(i)}(r)$ the homogeneous part of the solution.
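The claimed equivalence of the $\mathcal{S}_{(4)}^{(3)}$ and $\mathcal{S}_{(4)}^{(4)}$ results can be checked directly at the level of the leading coefficients of these expansions; a minimal sketch in Python (the coefficients are simply transcribed from the expansions above, using exact rational arithmetic):

```python
from fractions import Fraction as F

# Coefficients of lambda*M^3/r^12 and lambda*M^4/r^13 in the asymptotic
# expansions g_{1/r}^{(i)} listed above, one entry per theory.
coeff = {1: (F(1728), F(-9312, 5)), 2: (F(2592), F(-4656)),
         3: (F(1728), F(-9312, 5)), 4: (F(3456), F(-18624, 5)),
         5: (F(3456), F(-37428, 5)), 6: (F(1728), F(-18624, 5))}

# The substitution lambda_3 = 2*lambda_4 maps theory 3 onto theory 4 ...
print(tuple(2 * c for c in coeff[3]) == coeff[4])   # True
# ... while theories 1 and 3 merely share identical coefficients.
print(coeff[1] == coeff[3])                          # True
```

The second check makes precise the sense in which theories 1 and 3 "coincidentally" agree: the expansions match without any rescaling of the coupling.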
The resulting equation for $g_h^{(i)}(r)$ takes the following form for all six theories: \begin{equation} M^2\alpha_{(i)}\lambda_{(i)}g_h''(r)-2\frac{M^2\alpha_{(i)}\lambda_{(i)}g_h'(r)}{r}-r^6g_h(r)=0 \end{equation} where there is no summation over the index $(i)$. Here each $\alpha_{(i)}$ is a positive constant whose value depends on the theory in question. The general solution to this equation is \begin{equation} g_h^{(i)}(r) = r^{3/2}\left[\tilde{A}I_{3/8}\left(\frac{r^4}{4M\sqrt{\alpha_{(i)}\lambda_{(i)}}}\right)+\tilde{B}K_{3/8}\left(\frac{r^4}{4M\sqrt{\alpha_{(i)}\lambda_{(i)}}}\right) \right] \end{equation} where $I_\nu(x)$ and $K_\nu(x)$ are the modified Bessel functions of the first and second kinds, respectively, and $\tilde{A}$ and $\tilde{B}$ are constants of integration. Absorbing various constants into the new constants $A$ and $B$, we may approximate this to first order at large $r$ as the sum of a super-exponentially growing and a super-exponentially decaying mode: \begin{equation} g_h^{(i)}(r)= \left[A\exp\left(\frac{r^4}{4M\sqrt{\alpha_{(i)}\lambda_{(i)}}}\right) + B\exp\left(-\frac{r^4}{4M\sqrt{\alpha_{(i)}\lambda_{(i)}}}\right)\right]\left(\sqrt{\frac{1}{r}} + O\left(r^{-9/2}\right)\right) \end{equation} Asymptotic flatness demands that we set $A = 0$, while the super-exponentially decaying mode clearly falls off far more rapidly than our particular solution in powers of $1/r$, and can therefore be neglected. As a final note, when we integrate each of the $g_{1/r}^{(i)}(r)$ to obtain the asymptotic form of $p(r)$ in each theory, a constant of integration will appear, which we may call $\frac{\Omega_\infty}{a}$. This is related to the asymptotic angular velocity of the spacetime, as \begin{equation} \Omega = -\frac{g_{t \phi}}{g_{\phi \phi}} = -ap(r) \rightarrow \Omega_\infty \end{equation} By a suitable choice of Killing coordinates $t$ and $\phi$ we may always set $\Omega_\infty = 0$, and from now on we will make that choice.
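The neglect of the decaying homogeneous mode can be made quantitative: the $K_{3/8}$ branch reaches its super-exponentially decaying large-$r$ form very quickly. A small numerical sketch (abbreviating $c \equiv M\sqrt{\alpha_{(i)}\lambda_{(i)}}$ and setting $c = 1$ for the check; the prefactor $\sqrt{2\pi c}$ follows from the standard large-argument expansion $K_\nu(x) \sim \sqrt{\pi/2x}\,e^{-x}$):

```python
import math
from scipy.special import kv  # modified Bessel function of the second kind

c = 1.0   # c = M*sqrt(alpha_i * lambda_i), set to 1 for this check

def g_h(r):
    # exact decaying homogeneous mode: r^{3/2} K_{3/8}(r^4 / 4c)
    return r**1.5 * kv(3.0 / 8.0, r**4 / (4.0 * c))

def g_asym(r):
    # quoted large-r form: sqrt(1/r) prefactor times exp(-r^4/4c); the
    # overall constant sqrt(2*pi*c) comes from K_nu(x) ~ sqrt(pi/2x) e^{-x}
    return math.sqrt(2.0 * math.pi * c / r) * math.exp(-r**4 / (4.0 * c))

print(g_h(3.0) / g_asym(3.0))   # close to 1, up to O(r^{-4}) corrections
```

Already at $r = 3$ the exact mode and its asymptotic form agree to better than a percent, confirming that only the decaying ($B$) mode is compatible with asymptotic flatness and that it is negligible against the power-law particular solution.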
Setting $\Omega_\infty = 0$ amounts to starting the integration for $p(r)$ at $r = \infty$. \subsection{Near horizon solution} We now wish to solve the equations of motion near the event horizon of the black hole. We begin using an ansatz for $f(r)$ given by \begin{equation} f_{nh}(r) = 4\pi T(r-r_+) +\sum_{n=2}^\infty a_n(r-r_+)^n \end{equation} chosen to ensure $f$ has a zero of first order at the horizon $r=r_+$, with $T = f'(r_+)/4\pi$ the usual Hawking temperature. Substituting this into the field equation we obtain the following two equations, determining $T$ and $M$ from the linear- and quadratic-order parts: \begin{equation} 2M={\frac {768K{\pi}^{4}{T}^{4}}{5r_+}}+{\frac {512K{\pi}^{3}{T}^{3}}{5{r_+}^{2}}}+r_+ \end{equation} \begin{equation} {\frac {256K{\pi}^{4}{T}^{4}}{5{r_+}^{2}}}+{\frac {512K{\pi}^{3}{T}^{3}}{5{{r_+}}^{3}}}-4\pi Tr_+ +1=0 \end{equation} These equations can be solved for $M$ and $T$ in terms of $r_+$; the resulting expressions are lengthy but are presented in \cite{Khodabakhshi:2020hny}. Higher-order terms in the field equation do not determine the parameter $a_2$ in the above. However, the remaining coefficients $a_n$ for $n > 2$ can be determined by unwieldy expressions involving $K$, $T$, $M$, and $a_2$. We now use a similar power series ansatz for $g(r)$, with the only significant difference being that $g(r)$ is not required to vanish at the event horizon. Using the ansatz \begin{equation} g_{nh}(r) = \sum_{n=0}^\infty g_n(r-r_+)^n \end{equation} we obtain (rather large) equations giving $g_n$ for $n > 0$ in terms of $K$, $T$, $M$, $g_0$, and near horizon coefficients up to $a_{n+1}$ at most. Like $a_2$, $g_0$ is an apparently free parameter. The values of both will be numerically determined in the following section, on the basis of producing a smooth fit with the asymptotic approximation already discussed.
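At fixed $r_+$ and $K$, the two relations above are easily solved numerically. The following is a sketch (not the closed-form expressions of \cite{Khodabakhshi:2020hny}), using a simple bracketing root-finder; the Einstein limit $K = 0$ recovers $T = 1/4\pi r_+$ and $M = r_+/2$:

```python
import numpy as np
from scipy.optimize import brentq

def temperature(rp, K):
    # Root of the second near-horizon relation at fixed r_+ and coupling K.
    F = lambda T: (256*K*np.pi**4*T**4/(5*rp**2)
                   + 512*K*np.pi**3*T**3/(5*rp**3) - 4*np.pi*T*rp + 1)
    # F(0) = 1 > 0 while F < 0 at T = 1/(2*pi*r_+) for moderate K >= 0,
    # so a root is bracketed on this interval.
    return brentq(F, 0.0, 1.0/(2*np.pi*rp))

def mass(rp, K):
    # First near-horizon relation, solved for M once T is known.
    T = temperature(rp, K)
    return 0.5*(768*K*np.pi**4*T**4/(5*rp)
                + 512*K*np.pi**3*T**3/(5*rp**2) + rp)

print(temperature(2.0, 0.0) * 8*np.pi)   # -> 1.0  (Schwarzschild T = 1/4 pi r_+)
print(mass(2.0, 0.0))                    # -> 1.0  (Schwarzschild M = r_+/2)
```

For large couplings the bracketing interval would need to be widened, but in the regime studied here this simple routine suffices.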
\subsection{Numerical solution and continued fraction approximation} Neither the asymptotic nor the near horizon solution is valid for all $r$, and so it is desirable to have a means of interpolating between them. Let us begin by constructing a numerical solution for $f(r)$; the procedure for $g(r)$ will be essentially the same. The idea is to use the initial data from the near horizon expansion in a numerical solver that works both outwards and inwards from the horizon radius. Specifically, for the initial data we write \begin{align} f(r_+ + \epsilon)&= 4\pi T \epsilon+a_2\epsilon^2 \nonumber\\ f'(r_+ + \epsilon)&= 4\pi T +2a_2\epsilon \end{align} where $\epsilon$ is a small quantity, chosen to be positive for the outer solution and negative for the inner one. A generic choice of $a_2$ will unphysically excite the super-exponentially growing mode. To avoid this and obtain the asymptotically flat solution, $a_2$ must be chosen with high precision. There is generally a unique value of $a_2$ at which the numerically generated solution agrees with the asymptotic solution to a high degree of accuracy. To find this $a_2$, we must first choose a value $r_\text{max}$ such that the asymptotic solution is a good approximation for $r > r_\text{max}$. The equation for $f(r)$ is then repeatedly solved numerically, using different values of $a_2$, until one is found for which the numerical solution agrees with the asymptotic solution at $r_\text{max}$ to the desired precision. Inevitably even this solution will diverge super-exponentially at some point, though that point can be pushed out to larger values of $r$ by increasingly precise choices of $a_2$. A full solution valid for all $r$ is obtained by switching to the asymptotic approximation at $r_\text{max}$. We plot the numerical solution for $f(r)$ in Figure \ref{fig:Fig1} for different values of the coupling $K$.
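The tuning of $a_2$ is a standard shooting problem. Its structure can be illustrated schematically with a toy equation that shares the growing/decaying-mode pathology (this illustrates the method only; it is not the EQG field equation):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy shooting problem: y'' = y has a growing mode e^{x} and a decaying
# mode e^{-x}, mimicking the super-exponential modes of the EQG equations.
# With y(0) = 1 fixed, the free slope s = y'(0) plays the role of a_2: a
# generic s excites the growing mode, and only s = -1 (pure e^{-x}) gives
# the "asymptotically flat" solution with y -> 0.

def endpoint(s, L=15.0):
    sol = solve_ivp(lambda x, y: [y[1], y[0]], (0.0, L), [1.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

lo, hi = -2.0, 0.0                  # endpoint(lo) < 0 < endpoint(hi)
for _ in range(60):                 # bisect on the sign of y(L)
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if endpoint(mid) > 0 else (mid, hi)
s_star = 0.5 * (lo + hi)
print(s_star)                       # -> -1.0 to high accuracy
```

Exactly as in the text, pushing the matching point $L$ (the analogue of $r_\text{max}$) further out forces the tuned parameter to be determined to correspondingly higher precision.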
We note that the numerical solution for $f(r)$ has two interesting properties: first, $f(r)$ remains bounded rather than diverging to $-\infty$ as $r\to 0$, and second, increasing $K$ increases the event horizon radius $r_+$. \begin{figure}[h] \centering \includegraphics[width=0.55\textwidth]{fPlots.png} \caption{$f(r)$ against $r$ (in units of $M$) for varying values of the coupling constant. The green curve is Einstein gravity, while from blue to red we have $K = 0.1, 1, 5, 10, 50$.} \label{fig:Fig1} \end{figure} The idea for constructing the numerical solution for $g(r)$ is the same as for $f(r)$. We construct initial data via \begin{align} g(r_+ + \epsilon)&= g_0 + g_1\epsilon +g_2\epsilon^2 \nonumber\\ g'(r_+ + \epsilon)&= g_1 +2g_2\epsilon \end{align} using our near horizon expansion to rewrite $g_1$ and $g_2$ in terms of $g_0$ and the already-determined $a_2$. Just as for $a_2$, there is a unique choice of $g_0$ that avoids exciting the super-exponentially growing mode. We find it in precisely an analogous manner, and construct a full solution for $g(r)$ by switching from the numerical solution evaluated for this choice of $g_0$ to the asymptotic solution at a large enough $r$ that the latter becomes a good approximation. With the solution for $g(r)$ in hand, we may numerically integrate it to obtain $p(r)$ and hence $h(r) = r^2p(r)$. We start the integration at $r= \infty$ so that $p(r) \rightarrow 0$ as $r \rightarrow \infty$. As discussed, this is equivalent to choosing a frame that does not rotate at infinity. We show these numerical results in Figure \ref{fig:Fig2} for each of the theories. A few interesting observations emerge. First, in each of the theories except that corresponding to $\mathcal{S}_{(4)}^{(5)}$, $h(r)$ approaches zero as $r \rightarrow 0$. This behaviour is also observed in ECG and differs from Einstein gravity.
$h(r)$ in the $\mathcal{S}_{(4)}^{(5)}$ theory is unique in that it approaches $-\infty$ in the limit $r \rightarrow 0$, as in Einstein gravity, but far more sharply and with manifestly different behaviour. \begin{figure}[h] \;\;\includegraphics[width=0.49\textwidth]{hPlots2.png}\quad\;\;\;\;\includegraphics[width=0.49\textwidth]{hPlots4.png} \newline \includegraphics[width=0.505\textwidth]{hPlots5.png}\quad\;\includegraphics[width=0.505\textwidth]{hPlots6.png} \caption {$h(r)$ against $r$ (in units of $M$) for varying values of the coupling constant. From left to right, top to bottom, we show theories $\mathcal{S}_{(4)}^{(2)}$, $\mathcal{S}_{(4)}^{(4)}$ (which is equivalent to $\mathcal{S}_{(4)}^{(3)}$), $\mathcal{S}_{(4)}^{(5)}$, and $\mathcal{S}_{(4)}^{(6)}$. In each case the green curve represents Einstein gravity, while the remaining curves correspond to $\lambda_{(i)} = 0.5, 1, 5, 10,$ and $50$, increasing from blue to red. The exception is theory 2, at the top left, for which only a narrow range of $\lambda_{(2)}$ produces a reasonable result: in this case $\lambda_{(2)} = 0.5, 0.6, 0.75, 1, 1.25$ is used.} \label{fig:Fig2} \end{figure} The near horizon and asymptotic solutions are only valid in their respective regions, and the numerical solution is computationally intensive to generate and inevitably breaks down at sufficiently large distance. Fortunately, there is another approach that yields an excellent analytic approximation to the numerical solution everywhere outside the horizon: the continued fraction approximation. This approximation has been previously employed in the static case in both ECG \cite{Hennigar:2018hza} and EQG \cite{Khodabakhshi:2020hny}, and recently in the slowly rotating case in quadratic gravity \cite{Sajadi:2023smm}. Since the equation determining $f(r)$ in EQG is unchanged in the slowly rotating case we study, we can employ the continued fraction approximation for $f(r)$ constructed there.
We first perform the change of coordinates \begin{equation} x = 1 - \frac{r_+}{r} \end{equation} so that the region outside the horizon corresponds to $x \in [0,1)$. We then take an ansatz \begin{equation} \label{contfracf} f(x) = x\left[1 - \varepsilon (1-x) + (b_0-\varepsilon)(1-x)^2 +B(x)(1-x)^3 \right] \end{equation} where $B(x)$ is given by the continued fraction \begin{equation} B(x) = \frac{b_1}{1+\frac{b_2x}{1+\frac{b_3x}{1+...}}} \end{equation} Inserting this ansatz into the relevant field equation yields \begin{equation} \varepsilon = \frac{2M}{r_+}-1, \quad b_0 = 0 \end{equation} Furthermore, by expanding near the horizon, the coefficients $b_i$ are straightforwardly obtained. Keeping terms up to $b_5$ produces an excellent analytic approximation for $f(r)$, valid for all $r$. In particular, the expressions for the first two coefficients are reasonably simple \cite{Khodabakhshi:2020hny}: \begin{equation} b_1=4\pi r_+ T+\frac{4M}{r_+}-3, \quad b_2 = -\frac{r_+^3a_2+16r_+^2T+6(M-r_+)}{4\pi r_+^2T+4M-3r_+} \end{equation} All higher-order terms in the continued fraction can be written in terms of $a_2$, $T$, and $r_+$, though the exact expressions are cumbersome, and are presented in Appendix C. To produce an analogous approximation for $g(r)$, a natural first attempt might be to choose an ansatz of the form of \eqref{contfracf} for $h(r) = r^2p(r)$, as it is a similarly dimensionless function. Unfortunately, such an approach leads inevitably to the problem that the continued fraction coefficients are underdetermined -- equating terms at order $r^0$ involves two continued fraction coefficients, equating terms at order $r^1$ involves these and another continued fraction coefficient, and so on.
We can, however, construct a continued fraction approximation for $g(r)$ using the following ansatz: \begin{equation} g(x) = \frac{1}{r^3}\left[-\gamma(1-x) + (d_0 - \gamma)(1-x)^2 + D(x)(1-x)^A\right] \end{equation} where \begin{equation} D(x) =\frac{d_1}{1+\frac{d_2x}{1+\frac{d_3x}{1+...}}} \end{equation} We can then numerically integrate this to obtain $p(r)$. While the approximation for $p(r)$ is not analytic, it is nonetheless far less computationally intensive to obtain than a full numerical solution. Note that the exponent $A$ has been left arbitrary. Continued fraction coefficients can be computed with this ansatz for any integer $A > 2$, but we found that $A = 9$ produced the best agreement with the numerical solution everywhere outside the event horizon. Using this ansatz in an asymptotic expansion, along with the known continued fraction expansion for $f(r)$, yields \begin{equation} \gamma = d_0 = -\frac{6M}{r_+} \end{equation} To get the continued fraction coefficients $d_i$, we once again expand about the event horizon and compare to the near horizon coefficients. We find that generally $d_n$ can be expressed in terms of $M$, $r_+$, and the near horizon coefficients for $g(r)$ up to $g_{n-1}$, which themselves depend on $g_0$, $T$, and the coupling constant. We must therefore use the value of $g_0$ numerically determined by the shooting method described above. We find that the first two continued fraction coefficients are given by \begin{equation} d_1 = \frac{6M}{r_+} - g_0r_+^3, \quad d_2 = -\frac{24M+(A+3)r_+d_1+g_1r_+^5}{d_1r_+} \end{equation} The remaining coefficients are given by similar, longer expressions presented in Appendix C. As with $f(r)$, we keep the first five coefficients in the continued fraction. A comparison of the continued fraction approximation to the numerical solution is shown in Figure~\ref{fig:Fig3}.
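The integration step that recovers $p(r)$ from $g$ can be sketched as a simple quadrature. The sketch below assumes $g = p'$ with $p \to 0$ at infinity (our reading of the procedure above; with $\gamma = -6M/r_+$ and $1-x = r_+/r$, the leading asymptotic $g \to 6M/r^4$ then reproduces the Kerr slow-rotation value $h = r^2 p = -2M/r$). It is an illustrative routine, not the scheme actually used:

```python
def p_from_g(r, g, n=20000):
    """Recover p(r) = -\int_r^\infty g(r') dr' with the trapezoidal rule,
    using the substitution u = 1/r' to map the infinite range onto (0, 1/r].
    Assumes g = p' and p -> 0 at infinity (hypothetical normalization)."""
    du = (1.0 / r) / n
    total, prev = 0.0, 0.0   # integrand vanishes at u = 0 since g falls off like r'^-4
    for i in range(1, n + 1):
        u = i * du
        cur = g(1.0 / u) / u**2   # g(r') dr' = g(1/u) du / u^2
        total += 0.5 * (prev + cur) * du
        prev = cur
    return -total

# Check against the leading asymptotic form g = 6M/r^4 (here M = 1),
# for which p(r) = -2M/r^3 exactly.
p2 = p_from_g(2.0, lambda rr: 6.0 / rr**4)
```

With the exact asymptotic integrand the quadrature reproduces $p(2) = -1/4$, i.e. $h(2) = r^2 p = -1 = -2M/r$.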
\begin{figure}[h] \includegraphics[width=0.505\textwidth]{compPlot1.png}\quad\includegraphics[width=0.505\textwidth]{compPlot2.png} \newline \includegraphics[width=0.505\textwidth]{compPlot3.png}\quad\;\includegraphics[width=0.505\textwidth]{compPlot4.png} \caption {Comparison of the numerical solution (red) to the continued fraction approximation (blue) for $h(r)$ against $r$ (in units of $M$) in the $\mathcal{S}_{(4)}^{(6)}$ theory. From left to right, top to bottom, we take $\lambda_{(6)} = 0.5, 1, 5, 10$. Notice that, unlike the continued fraction for $f(r)$, the continued fraction approximation for $h(r)$ is generally only valid outside the event horizon, indicated on each plot by a green line. The approximation is similarly accurate for the other theories. } \label{fig:Fig3} \end{figure} \section{Properties of the solution} In this section we study various properties of slowly rotating black hole solutions in the six EQG theories. First we determine the angular velocity of the horizon. We then move to the study of geodesics in spacetimes described by these solutions. Much of the theory concerning geodesics for a metric of the form used in this paper is developed in \cite{adair2020}. Throughout, we will present perturbative results for the sixth theory only; similar expressions for the other five theories are given in Appendix~\ref{AppD}. \subsection{Angular velocity of the event horizon} The angular velocity of the event horizon is given by \begin{equation} \Omega = -\frac{g_{t\phi}}{g_{\phi \phi}}\bigg\rvert_{r=r_+} \end{equation} With a metric of the form given by \eqref{metric} this becomes \begin{equation} \Omega = -ap(r_+) \end{equation} We construct an expression for the angular velocity in the perturbative regime as follows. Setting $f(r_+) = 0$ in \eqref{fperturb}, we obtain an expression for the horizon radius up to second order in $K$.
The result is \begin{equation} r_+ = 2M - \frac{11}{160}\frac{K}{M^5}-\frac{929}{51200}\frac{K^2}{M^{11}} + O\left({K^3}\right) \end{equation} Using the perturbative expression for $p(r)$, then, we obtain the following for each of the six theories: \begin{align} \Omega_{(1)} &=\frac{\chi}{M}\left[\frac{1}{4}+\frac{73}{2816}\frac{\lambda_{(1)}}{M^6}+\frac{14659}{31129600}\frac{\lambda_{(1)}^2}{M^{12}} \right] \\ \Omega_{(2)} &=\frac{\chi}{M}\left[\frac{1}{4}-\frac{67}{5632}\frac{\lambda_{(2)}}{M^6}-\frac{162541}{24903680}\frac{\lambda_{(2)}^2}{M^{12}} \right] \\ \Omega_{(3)} &=\frac{\chi}{M}\left[\frac{1}{4}-\frac{73}{2816}\frac{\lambda_{(3)}}{M^6}+\frac{14659}{31129600}\frac{\lambda_{(3)}^2}{M^{12}} \right] \\ \Omega_{(4)} &=\frac{\chi}{M}\left[\frac{1}{4}+\frac{73}{1408}\frac{\lambda_{(4)}}{M^6}+\frac{14659}{7782400}\frac{\lambda_{(4)}^2}{M^{12}} \right] \\ \Omega_{(5)} &=\frac{\chi}{M}\left[\frac{1}{4}-\frac{35}{704}\frac{\lambda_{(5)}}{M^6}-\frac{27677}{1945600}\frac{\lambda_{(5)}^2}{M^{12}} \right] \\ \Omega_{(6)} &=\frac{\chi}{M}\left[\frac{1}{4}+\frac{35}{1408}\frac{\lambda_{(6)}}{M^6}-\frac{27677}{7782400}\frac{\lambda_{(6)}^2}{M^{12}} \right] \end{align} where we have defined $\chi = a/M$, and as before $\mathcal{S}_{(4)}^{(3)}$ and $\mathcal{S}_{(4)}^{(4)}$ are equivalent upon making the substitution $\lambda_{(3)} = 2\lambda_{(4)}$. These expressions work well in the regime $M \gg \lambda_{(i)}^{1/6}$. For smaller $M$, we must resort to the numerical solutions described in the previous sections. \subsection{Geodesics} We now study the geodesics of the slowly rotating solution.
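The horizon-radius and angular-velocity series above admit a quick numerical sanity check: in the limit $\lambda_{(i)} \to 0$ they must reduce to the slowly rotating Kerr values $r_+ = 2M$ and $\Omega = a/(4M^2)$. A minimal sketch for the $\mathcal{S}_{(4)}^{(6)}$ series:

```python
def r_plus(M, K):
    """Horizon radius to second order in the coupling K, from the series above."""
    return 2.0 * M - (11.0 / 160.0) * K / M**5 - (929.0 / 51200.0) * K**2 / M**11

def omega6(M, a, lam):
    """Horizon angular velocity for the S_(4)^(6) theory, transcribed from
    the perturbative series above (valid only for M >> lam^(1/6))."""
    chi = a / M
    return (chi / M) * (0.25 + (35.0 / 1408.0) * lam / M**6
                        - (27677.0 / 7782400.0) * lam**2 / M**12)
```

For small positive $\lambda_{(6)}$ the linear term dominates the quadratic one, so the correction to $\Omega$ is positive, as the series indicates.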
The theory to linear order in $a$ is developed in \cite{adair2020} for a metric of the form \begin{equation} ds^2 = -f(r)dt^2 + \frac{dr^2}{f(r)} +2ah(r)\sin^2(\theta)dtd\phi +r^2[d\theta^2+\sin^2(\theta)d\phi^2] \end{equation} This analysis results in a linear system of equations for the coordinate functions: \begin{align} r^2 \dot{t} &= \frac{Er^2 + ah(r)\ell_z}{f(r)} \\ r^2 \dot{\phi} &= \frac{\ell_z}{\sin ^2\theta} - a\frac{Eh(r)}{f(r)} \\ r^2 \dot{\theta} &= \pm \sqrt{j^2-\frac{\ell_z^2}{\sin ^2\theta}} \\ \dot{r}^2&=-f(r)\left(\xi^2 + \frac{j^2}{r^2}\right) + E^2 +\frac{2ah(r)E\ell_z}{r^2} \end{align} Asymptotically, $j^2$ represents the square of the total angular momentum, while $\ell_z$ is its component along the $z$-axis (i.e. $\theta = 0, \pi$). $\xi$ is the norm of the tangent vector, where \begin{equation} \xi^2 = -g_{a b}\dot{x}^a\dot{x}^b \end{equation} \subsubsection{The Photon Sphere} For null geodesics we have $\xi^2 = 0$ and we may always choose $E = 1$. The equation for the radial coordinate can then be written \begin{equation} \dot{r}^2+V_{ph}(r) = 0 \quad \text{where}\quad V_{ph}(r)=\frac{j^2f(r) - 2a\ell_zh(r)}{r^2} - 1 \end{equation} The photon sphere is formed by constant-$r$ photon orbits, which appear when \begin{equation} V_{ph}(r_{ps})=V'_{ph}(r_{ps})=0 \end{equation} To first order in $a$ we expand $r_{ps}$ and $j^2_{ps}$ according to \begin{equation} r_{ps} = r_{ps}^{(0)} + ar_{ps}^{(1)}, \quad j^2_{ps} = (j_{ps}^{(0)})^2 + a\,j_{ps}^{(1)} \end{equation} With this expansion $r_{ps}^{(0)}$ is determined by \begin{equation} \label{rpsEqn} r_{ps}^{(0)}f'\left(r_{ps}^{(0)}\right) = 2f\left(r_{ps}^{(0)}\right) \end{equation} while the other photon sphere parameters can be evaluated in terms of this quantity: \begin{equation} r_{ps}=r_{ps}^{(0)}+ \frac{2a\ell_zf(rh'-2h)}{r(r^2f''-2f)}\bigg\rvert_{r=r_{ps}^{(0)}} \end{equation} \begin{equation} j_{ps}^2 = \frac{(r_{ps}^{(0)})^2}{f(r_{ps}^{(0)})}+\frac{2a\ell_zh(r_{ps}^{(0)})}{f(r_{ps}^{(0)})}
\end{equation} In the perturbative regime we may use the solutions \eqref{fperturb} for $f(r)$ and $h(r)$ to evaluate these to second order in the coupling constant. We do this via a method analogous to that used to obtain the angular velocity, by first constructing a perturbative approximation for $r_{ps}^{(0)}$ using \eqref{rpsEqn} and using this in the equations for $r_{ps}$ and $j_{ps}^2$. The results for the $\mathcal{S}_{(4)}^{(6)}$ theory are as follows: \begin{align} r_{ps}&= 3M-{\frac {1648\lambda_{(6)}}{32805{M}^{5}}}+{\frac {23839744{\lambda_{(6)}}^{2}}{3228504075{M}^{11}}}+a\left( -{\frac {2\ell_{z}}{9M}}-{\frac {19808\ell_{z}\lambda_{(6)}}{885735{M}^{7}}}+{\frac {158446592\ell_{z}{\lambda_{(6)}}^{2}}{29056536675{M}^{13}}} \right) \\ j_{ps}^2&=27{M}^{2}-{\frac {208\lambda_{(6)}}{729{M}^{4}}}+{\frac {1031168{\lambda_{(6)}}^{2}}{39858075{M}^{10}}}+a\left( -4\ell_{z}-{\frac {3424\ell_{z}\lambda_{(6)}}{24057{M}^{6}}}+{\frac {4776791552\ell_{z}{\lambda_{(6)}}^{2}}{224919117225{M}^{12}}} \right) \end{align} with expressions for the other theories given in Appendix~\ref{AppD}. Alternatively, when $\lambda_{(i)}/M^6$ becomes of order 1 or larger we must again resort to the numerical solution for an accurate evaluation of these quantities. \subsection{Geodesics in the equatorial plane} Here we consider geodesics confined to the equatorial plane, which amounts to specializing the previous case by setting $\theta = \pi/2$ and $\dot{\theta} = 0$. Note that these constraints combine to enforce $j^2 = \ell_z^2$, so that there will be no need in this section to distinguish between these angular momenta. The $r$ equation can be interpreted as being analogous to that of a particle moving in a potential, \begin{equation}\dot{r}^2 + V_{\text{eff}}(r) = 0 \quad \text{where} \quad V_{\text{eff}}(r) = f(r)\left(\xi^2 + \frac{j^2}{r^2}\right) - \frac{2ah(r)Ej}{r^2}-E^2\end{equation} \subsubsection{Timelike geodesics: ISCO} First let us consider the case of circular, timelike geodesics, i.e.
with $\xi^2 = 1$ and $\dot{r} = 0$. The conditions for these geodesics to exist are \begin{equation} V_{\text{eff}}(r) = V'_{\text{eff}}(r) = 0 \end{equation} The stability of the circular orbit is deduced from the sign of $V''_{\text{eff}}(r)$, with a positive sign indicating stability and a negative one instability. We determine the location of the innermost stable circular orbits (ISCO) by searching for orbits which are inflection points, i.e. for which $V''_{\text{eff}}(r) = 0$. To find these to linear order in $a$, we write the following expansions for $r$, $j$ and $E$: \begin{equation} r_{\text{ISCO}} = r_{\text{ISCO}}^{(0)} + ar_{\text{ISCO}}^{(1)}, \quad j_{\text{ISCO}} = j_{\text{ISCO}}^{(0)} + aj_{\text{ISCO}}^{(1)}, \quad E_{\text{ISCO}} = E_{\text{ISCO}}^{(0)} + aE_{\text{ISCO}}^{(1)} \end{equation} Substituting these into the equations $V_{\text{eff}}(r) = 0$, $V'_{\text{eff}}(r) = 0$, $V''_{\text{eff}}(r)=0$ and collecting in powers of $a$ results in a system of six equations that must be solved. These are very large and yield little insight on their own, but the solution procedure, at least in the perturbative regime, is essentially the same for all six theories.
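As a check on these conditions, in the Einstein-gravity limit ($f = 1 - 2M/r$, $a = 0$) the known Schwarzschild ISCO values $r = 6M$, $j^2 = 12M^2$, $E^2 = 8/9$ should satisfy $V_{\text{eff}} = V'_{\text{eff}} = V''_{\text{eff}} = 0$. A minimal numerical sketch using finite differences:

```python
def V_eff(r, M=1.0, j2=12.0, E2=8.0 / 9.0):
    """Effective potential V = f(r)(1 + j^2/r^2) - E^2 for timelike equatorial
    geodesics in the Einstein limit (f = 1 - 2M/r, a = 0)."""
    f = 1.0 - 2.0 * M / r
    return f * (1.0 + j2 / r**2) - E2

# Schwarzschild ISCO: r = 6M, j^2 = 12 M^2, E^2 = 8/9.
r0, h = 6.0, 1e-4
V   = V_eff(r0)
Vp  = (V_eff(r0 + h) - V_eff(r0 - h)) / (2.0 * h)              # central difference
Vpp = (V_eff(r0 + h) - 2.0 * V_eff(r0) + V_eff(r0 - h)) / h**2
```

All three quantities vanish to within finite-difference accuracy, confirming that $r=6M$ is a degenerate (inflection) critical point of the effective potential.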
We present a perturbative solution for the $\mathcal{S}_{(4)}^{(6)}$ theory: \begin{align} r_{\text{ISCO}} &= 6M-{\frac {2221\lambda_{(6)}}{209952{M}^{5}}}-{\frac {82796797{\lambda_{(6)}}^{2}}{1322395269120{M}^{11}}} \\ \nonumber &\mp a\left(-{\frac {4\sqrt {6}}{3}}-{\frac {9715\lambda_{(6)}\sqrt {6}}{629856{M}^{6}}}-{\frac {15486310769{\lambda_{(6)}}^{2}\sqrt {6}}{79343716147200{M}^{12}}}\right) \\ j_{\text{ISCO}}&= \pm\left(2\sqrt {3}M-\frac{1373\sqrt {3}\lambda_{(6)}}{3149280{M}^{5}}-\frac {38293721\sqrt {3}\lambda_{(6)}^{2}}{19835929036800{M}^{11}}\right) \\ \nonumber &+ a\left(-{\frac {2\sqrt {2}}{3}}-{\frac {13621\sqrt {2}\lambda_{(6)}}{6298560{M}^{6}}}-{\frac {3080166109\sqrt {2}{\lambda_{(6)}}^{2}}{158687432294400{M}^{12}}}\right) \\ E_{\text{ISCO}}&={\frac {2\sqrt {2}}{3}}-{\frac {191\sqrt {2}\lambda_{(6)}}{6298560{M}^{6}}}-{\frac {26522027\sqrt {2}{\lambda_{(6)}}^{2}}{158687432294400{M}^{12}}} \\ \nonumber &\mp a\left(-{\frac {\sqrt {3}}{54M}}-{\frac {445363\sqrt {3}\lambda_{(6)}}{3741344640\,{M}^{7}}}-{\frac {91154747183\sqrt {3}{\lambda_{(6)}}^{2}}{74622765036441600{M}^{13}}} \right) \end{align} Here taking the positive sign corresponds to prograde orbits ($j_\text{ISCO}^{(0)} > 0$) while the negative sign corresponds to retrograde orbits ($j_\text{ISCO}^{(0)} < 0$). Once again, when $\lambda_{(i)}/M^6$ becomes of order 1 or larger we must resort to the numerical solutions for an accurate reporting of these corrections. \subsubsection{Null geodesics: photon rings} We now consider how rotation deforms the photon rings of the black hole. These are constant-$r$ orbits describing null geodesics in the equatorial plane $\theta = \pi/2$. We therefore seek the simultaneous zeroes of the effective potential and its first derivative. Instead of $E$ and $j$, we work with the angular velocity $\omega = d\phi/dt$, which is conserved along the photon trajectory.
The resultant equations determining the location of the photon rings read \begin{align} 0 &= \omega^2r^2 +2a \omega h(r) - f(r) \\0 &= 2\omega^2r +2a \omega h'(r) - f'(r) \end{align} The solution to these equations to first order in $a$ is given by \begin{align} r_{\text{pr}\pm} &= r_{\text{ps}} \pm a \frac{2\sqrt{f( r_{\text{ps}})}[ r_{\text{ps}}h'( r_{\text{ps}})-2h( r_{\text{ps}})]}{2f( r_{\text{ps}})- r_{\text{ps}}^2f''( r_{\text{ps}})} \\ \omega_{\text{pr}\pm} &= \mp\frac{\sqrt{f(r_{\text{ps}})}}{r_{\text{ps}}} - a\frac{h(r_{\text{ps}})}{r_{\text{ps}}^2} \end{align} In these expressions $r_{\text{ps}}$ is the radius of the photon sphere in the static, spherically symmetric solution, obtained by solving the equation \begin{equation} \frac{r_{\text{ps}}f'(r_{\text{ps}})}{2}-f(r_{\text{ps}})=0 \end{equation} Also, the positive sign again corresponds to the prograde photon ring, and the minus sign to the retrograde photon ring. Similar to what we have done previously, we can construct a perturbative solution for the photon ring parameters by first solving this equation to second order in the coupling constant. The resulting perturbative solution takes the following form for the $\mathcal{S}_{(4)}^{(6)}$ theory: \begin{align} r_{\text{pr}\pm} &= 3M-{\frac {1648\lambda_{(6)}}{32805{M}^{5}}}+{\frac {23839744{\lambda_{(6)}}^{2}}{3228504075{M}^{11}}}\\ \nonumber &\pm a\left( {\frac {2\sqrt {3}}{3}}+{\frac {6256\sqrt {3}\lambda_{(6)}}{98415{M}^{6}}}-{\frac {158876608\sqrt {3}{\lambda_{(6)}}^{2}}{9685512225{M}^{12}}} \right) \\ \omega_{\text{pr}\pm}&=\pm\left(-{\frac {\sqrt {3}}{9\,M}}-{\frac {104\,\sqrt {3}\lambda_{(6)}}{177147\,{M}^{7}}}+{\frac {1411552\,\sqrt {3}{\lambda_{(6)}}^{2}}{29056536675\,{M}^{13}}}\right)\\ \nonumber & + a\left( {\frac {2}{27\,{M}^{2}}}+{\frac {19984\,\lambda_{(6)}}{5845851\,{M}^{8}}}-{\frac {85092352\,{\lambda_{(6)}}^{2}}{198746710857\,{M}^{14}}} \right) \end{align} with expressions for the other theories given in Appendix~\ref{AppD}. 
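These expressions can be sanity-checked in the Einstein-gravity limit, where solving $r f'/2 - f = 0$ for $f = 1 - 2M/r$ must give $r_{\text{ps}} = 3M$ and $|\omega_{\text{pr}}| = \sqrt{f(r_{\text{ps}})}/r_{\text{ps}} = \sqrt{3}/(9M)$, matching the $\lambda_{(6)} \to 0$ limit of the series above. A minimal bisection sketch:

```python
import math

def photon_ring_static(f, fp, r_lo=2.1, r_hi=10.0):
    """Solve r f'(r)/2 - f(r) = 0 by bisection for the static photon sphere."""
    g = lambda r: r * fp(r) / 2.0 - f(r)
    lo, hi = r_lo, r_hi
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Einstein-gravity limit with M = 1: f = 1 - 2/r gives r_ps = 3 and
# |omega_pr| = sqrt(f(r_ps)) / r_ps = 1/(3*sqrt(3)) = sqrt(3)/9.
f  = lambda r: 1.0 - 2.0 / r
fp = lambda r: 2.0 / r**2
r_ps  = photon_ring_static(f, fp)
omega = math.sqrt(f(r_ps)) / r_ps
```

The same routine applies to the perturbative or continued-fraction $f(r)$ by swapping in the corresponding function and its derivative.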
\subsection{Black Hole Shadow} We now turn to a discussion of the black hole shadow. Similar to our study of geodesics, the general form of the black hole shadow is derived for theories of this type in \cite{adair2020}. Consider, without loss of generality, an observer at a distance $r_0$ at polar angle $\theta_0$ and azimuthal angle $\phi_0 = 0$. The observer then receives a photon moving along a trajectory with increasing $r$. Taking $\alpha$ as the angle between the photon trajectory and the azimuthal direction, and $\pi/2 - \delta$ as the angle of incidence of the photon on the plane $r = r_0$, the contour $\delta(\alpha)$ of the black hole shadow is approximately a circle of radius $R_{\text{sh}}$ centered at $\alpha = 0$, $r_0\delta = D $, where \cite{adair2020} \begin{equation} R_{\text{sh}} = \frac{r_\text{ps}^{(0)}}{\sqrt{f\left(r_\text{ps}^{(0)}\right)}}, \quad D = -\frac{a \sin \theta_0 h\left(r_\text{ps}^{(0)}\right)}{f\left(r_\text{ps}^{(0)}\right)}. \end{equation} The effect of the rotation is hence to shift the black hole shadow a distance $D$ from the radial direction. A perturbative solution for $R_{\text{sh}}$ and $D$ in $\mathcal{S}_{(4)}^{(6)}$ is given by the following: \begin{align} R_{\text{sh}}&=3\sqrt {3}M-{\frac {104\sqrt {3}}{6561{M}^{5}}}\lambda_{(6)}+{\frac {4505056\sqrt {3}}{3228504075{M}^{11}}}{\lambda_{(6)}}^{2} \\ D &= a\sin(\theta_0)\left(-2-{\frac {1712}{24057{M}^{6}}}\lambda_{(6)}+{\frac {2388395776}{224919117225{M}^{12}}}{\lambda_{(6)}}^{2}\right) \end{align} Similar perturbative expressions for the other five theories are given in Appendix~\ref{AppD}. \section{Conclusion} We have constructed slowly rotating black hole solutions for all six Einstein Quartic Gravity theories $\mathcal{S}_{(4)}^{(J)}$, $J=1,\dots,6$, in 4 spacetime dimensions. In the spherically symmetric case, all six theories yield the same metric, but in the slowly rotating case this degeneracy is broken.
Five of the six theories yield distinct solutions, with $\mathcal{S}_{(4)}^{(3)}$ and $\mathcal{S}_{(4)}^{(4)}$ having degenerate solutions. Our results indicate that investigations of photon orbits and shadows of rotating black holes could be used to distinguish EQG theories from ECG, and even distinct EQG theories from each other (apart from the degeneracy noted above). Of course the general GQT action could be a linear combination of ECG, EQG, and other higher curvature theories, and so actual observations will in practice have to measure (or set bounds on) the various coefficients appearing in the ISCO, ring, and shadow parameters in Appendix~\ref{AppD}. The observed dependence of these quantities on mass can be used to distinguish the order of the curvature. Obtaining distinctions between parameters having the same mass dependence will be more challenging, and will have to rely on detailed statistical analysis of a broad range of black holes, as well as perhaps consideration of higher-order rotational corrections. As with the cubic theory \cite{adair2020}, we find that the order-reduction phenomenon holds in the slowly rotating case as well as in the spherically symmetric case. This suggests that slowly rotating solutions in a general GQT theory will exhibit this feature. It would be of interest to consider this problem in general. Unlike Lovelock gravity, in which slowly rotating solutions are completely characterized by the metric for the static solution, slowly rotating solutions in EQG (and ECG \cite{adair2020}) do not have this feature. Understanding the conditions under which this is manifest is an interesting problem for further investigation. Finally, an investigation of the thermodynamic properties of slowly rotating black holes would be of interest.
However the entropy and temperature exhibit dependence on the rotation parameter only at order $a^2$, and so a proper study of thermodynamic behaviour would necessitate computing all relevant quantities to (at least) this order. A full solution for arbitrary values of the rotation parameter would be of greatest interest, but of course of considerably greater difficulty. \section*{Acknowledgments} This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. We are grateful to R.A. Hennigar for helpful discussions.
\newtheorem{lemma}{Lemma}[section] \newtheorem{proposizione}{Proposition}[section] \newtheorem{corollario}{Corollary}[section] \newtheorem{osservazione}{Observation}[section] \newtheorem{risultato}{Result} \title {{A Phenomenological model of Myosin~\mbox{II}\ dynamics in the presence of external loads}\thanks{Work performed within a joint cooperation agreement between Japan Science and Technology Corporation (JST) and Universit\`{a} di Napoli Federico II, under partial support by MIUR (PRIN 2003) and by Campania Region.}} \author { \\[0.8cm] A. Buonocore$^1$, L. Caputo$^1$, Y. Ishii$^{2,3}$, E. Pirozzi$^1$, T. Yanagida$^{2,3}$ and L. M. Ricciardi$^{1,\dag}$\\[0.8cm] $^1$Dipartimento di Matematica e Applicazioni\\ Universit\`{a} di Napoli Federico II\\ {\{aniello.buonocore, enrica.pirozzi, luigi.ricciardi\}@unina.it}\\ [email protected]\\ $^\dag$Corresponding author\\[0.2cm] $^2$Department of Physiology and Biosignaling\\ Graduate School of Medicine, Osaka University\\ $^3$Single Molecule Processes Project, ICORP, JST\\ {\{ishii, yanagida\}}{@phys1.med.osaka-u.ac.jp} } \date{} \begin{document} \maketitle \begin{abstract} We address the controversial hot question concerning the validity of the loose coupling versus the lever-arm theories in the actomyosin dynamics by re-interpreting and extending the phenomenological washboard potential model proposed by some of us in a previous paper. In this new model a Brownian motion harnessing thermal energy is assumed to co-exist with the deterministic swing of the lever-arm, to yield an excellent fit of the set of data obtained by some of us on the sliding of Myosin~\mbox{II}\ heads on immobilized actin filaments under various load conditions. Our theoretical arguments are complemented by accurate numerical simulations, and the robustness of the model is tested via different choices of parameters and potential profiles.
\end{abstract} \vspace*{1cm} {\bfseries Keywords:} acto-myosin dynamics, loose coupling, lever-arm \section{Introduction} In 1985 an experimental set-up to measure the distance traveled by an actin filament interacting with a Myosin~\mbox{II}\ filament\footnote{Hereafter we shall use the word \lq\lq myosin\rq\rq\ as an abbreviation of \lq\lq Myosin~\mbox{II}\rq\rq.} during a complete ATP cycle was described~\cite{yan85}. Under low load conditions on the actin filament, traveled distances of up to \numdim{60}{nm} were observed. This value could not be understood within the most popular idea of the sliding mechanism, namely rotation or tilting of the myosin head bound to the actin molecule, conceived as directly coupled with the ATP hydrolysis (see~\cite{oos00}). Independent, subsequent experiments (see~\cite{fin94},~\cite{meh97} and references therein) then showed distances ranging between 4 and 10 nanometers. Such a smaller range of traveled distances is coherent with the lever-arm theory (see~\cite{spu94} and~\cite{coo97}) that is still widely believed to account for the generation of the force responsible for the actin filament sliding. The lever-arm theory can be essentially schematized as follows (more details can be found in~\cite{myohp}, where a graphical animation is also present): \begin{itemize} \item[--]{The binding of an ATP molecule on the catalytic site of the myosin head produces the detachment of the head from the actin filament.} \item[--]{The ATP hydrolysis, which follows within about the next \numdim{10}{ms}, changes ATP to ADP.Pi, determining a free energy variation in one reaction cycle of about $20$~$k_{\mbox{\tiny B}}T$ (see~\cite{alb94}). ATP hydrolysis also generates an appreciable rotation of the neck of the myosin head.} \item[--]{The transition M.\,ATP$\longrightarrow$ M.\,ADP.\,Pi is accompanied by an increase of affinity of this complex for actin (see~\cite{myohp}), which enhances the chance of myosin becoming bound to an actin site.
Since such binding is reversible, the myosin head can visit more than a single actin site.} \item[--]{While the myosin head is bound to an actin site, its neck may suddenly switch back to its original position (the so-called \lq\lq power stroke\rq\rq) with the successive release of the phosphoric radical Pi and the occurrence of conformational changes around the myosin coiled coil\footnote{The nature of such conformational changes has not yet been fully understood. With reference to kinesin, a possible mechanism is described in~\cite{kik01}, where its validity also for the myosin family is conjectured.}. This process is reversible, i.e. the phosphoric radical Pi can re-establish the complex M.\,ADP.\,Pi. In the absence of such recombination, the above-mentioned conformational changes are very likely to generate the sliding of the actin filament, with the release of the ADP molecule after about \numdim{2}{ms}.} \item[--]{The myosin head then stays bound to the actin site until the arrival of a new ATP molecule, which starts the whole process afresh.} \end{itemize} \par Summing up, as far as reference is made exclusively to the mechanical phenomenon of the generation of the force responsible for the movement, the lever-arm theory is strictly deterministic: Each ATP cycle generates one single power stroke that causes a sliding of constant length and preassigned direction of the actin filament. Within such a framework, a tight coupling between ATP cycles and protein movements is envisaged. \par Even though the apparent disagreement of the above-mentioned difference of traveled distances could be understood on the basis of a low duty ratio characterizing Myosin~\mbox{II}\ heads (see~\cite{how01}, pages 221--226), subsequent experiments by Kitamura \emph{et al.} (\cite{kit99}) raised anew the question of such a disagreement.
Indeed, in a continuing effort toward increasingly accurate measurements of the traveled distances during an ATP cycle, it was possible to achieve great improvements of the technological set-up, including the design and construction of highly sophisticated \lq\lq home made\rq\rq\ devices. By exploiting such technology, the experiment proceeded as follows: A single myosin head was attached to the tip of a glass microneedle and placed near an actin filament that had been previously immobilized on a microscopy slide by means of optical tweezers. The deflections of the needle with respect to its resting position were then measured and recorded. From the obtained traces the following three features emerged that are in evident disagreement with the tight coupling theory: \begin{enumerate} \item{The total traveled distance (i.e. the total displacement) of the myosin head is not constant, and it can be as large as \numdim{30}{nm}.}\label{Oss1} \item{This displacement is the sum of a random number of single \lq\lq steps\rq\rq, the amplitude of each of which equals the distance (\numdim{\simeq~5.3}{nm}) between two successive actin monomers. During the time elapsing between two successive steps, the myosin head randomly jitters around an equilibrium position.}\label{Oss2} \item{Steps mainly occur in a fixed \lq\lq forward\rq\rq\ direction, although some of them occasionally take place in the opposite \lq\lq backward\rq\rq\ direction. Hereafter, forward steps will be taken as positive and backward steps as negative. The total displacement is thus the algebraic sum of the number of performed forward and backward steps.}\label{Oss3} \end{enumerate} Such evidence contrasts with the one-to-one relation hypothesized between ATP hydrolysis and the occurrence of the mechanical event consisting of the power stroke in the myosin head.
Furthermore, the observation of the existence of random elements leads one to conjecture that a significant role could be played in this context by the thermal agitation of the environmental molecules of the watery solution in which the involved proteins are embedded. This is the motivation for the assumption of the existence of a loose coupling between the ATP cycle and actomyosin dynamics (see, for instance, \cite{oos00}). \par While referring to~\cite{cyr00} for a lucid outline of the origin of the controversy between the supporters of the loose coupling approach and the community of those faithful to the lever-arm theory, and therefore also to the tight coupling vision, in the present paper we propose to extend the phenomenological model earlier proposed by some of us (\cite{buo03}) by including in it the effect of the swing of the lever-arm. In so doing we show that the experimental data on myosin sliding obtained in~\cite{kit99} under various load conditions can be very well accounted for. \par Specifically, via the comparison of our theoretical results with those yielded by our experiments, we shall test the assumption that during the time interval elapsing between the attachment of the myosin head to the actin filament and the final release of the phosphoric radical (the rising phase in the sequel), the position of the myosin head is determined both by the lever-arm swing and by the action of Brownian motion, which includes a macroscopic deterministic force responsible for a non-zero average net displacement. Such a direction-orienting force can be viewed as originated by changes of chemical state that occur within the myosin head and that are fuelled by energy supplied by the myosin head itself. Such a view is coherent with the notion of \lq\lq effective driving potential\rq\rq\ in the sense, for instance, of Wang and Oster~\cite{wan02}.
As a consequence, the coupling between the ATP cycle and the mechanical effects should appear to be less rigid, and the actomyosin dynamics should definitely exhibit variability features of the kind mentioned in items~\ref{Oss1}--\ref{Oss3} above. \par The conjectured random dynamics outlined above will be described in detail in Section 2, with specific reference to an earlier paper~\cite{buo03}. Here we limit ourselves to showing that, within such a framework, it is possible to handle a key point of the mentioned controversy. Indeed, in~\cite{cyr00} the essential relevance of the role played by the length of the myosin neck is stressed, since the lever-arm theory makes the sliding distance less than, and somewhat proportional to, such length. This would for instance imply that reducing the length of the myosin neck to a half should reduce the sliding distance to a half, which appears to be confirmed by the experiments in~\cite{war00}. On the contrary, certain experiments show only slight changes of the overall displacement of the myosin head even after complete removal of its neck~\cite{cyr00}. \par In order to reconcile these evident discrepancies, we shall assume that the displacement $X$ of the myosin head during the ATP cycle can in general be envisaged as a linear combination as follows: \begin{equation}\label{Spostamento} X=rX_R + dX_D \end{equation} where $X_R$ denotes the displacement induced by the biased thermal effects and $X_D$ the displacement generated by the power stroke, and where $r$ and $d$ are constants, each of which hereafter will be taken as equal to 0 or 1. Note that the case $r=0$ and $d=1$ yields the lever-arm theory; instead, setting $r=1$ and $d=0$ depicts a purely random situation, in the absence of any sliding due to the power stroke. Finally, the case $r=d=1$ leads to an integration of the two theories.
With the choice $r=d=1$, the controversial results related to the length of the neck of the myosin head can be overcome by assuming that the random displacement is a few times larger than the deterministic one. This assumption implies that only slight changes of the total distance traveled by the myosin head would be observable when the contribution $X_D$ due to the power stroke is made somewhat smaller by shortening the length of the myosin neck. \section{The model}\label{model} Let $L$ denote the distance between each pair of neighboring actin monomers. As suggested in recent literature (see~\cite{ish00} and~\cite{kit01}), in our computations we shall take \numdim{L=5.5}{nm}. In addition, we shall assume that the magnitude of the sliding induced by the swing of the lever-arm equals that of a step, and thus take $X_D\simeq L$. \par Our conjectured relation~(\ref{Spostamento}) with $r=d=1$ is preliminarily supported by the bar chart in Fig. 4c) of~\cite{kit99}, showing that at least one step is performed by each and every myosin head during the rising phase. The second column of Table~\ref{DistribuzioneNumeroNettoSalti} shows the heights of the columns of the mentioned bar chart, whereas the first column indicates net step numbers, i.e. the integral part of the ratios $X/L$. The quoted experiment was performed under low load conditions by means of microneedles having stiffness less than \numdim{0.1}{pN$/$nm}. \par \begin{tabMacPc} {\begin{tabular}{|c|r||c|r|}\hline $\lfloor X/L \rfloor$ & Observed frequency & $\lfloor X_R/L \rfloor$ & Theoretical frequency\\ \hline\hline 1& 14 & 0 & 15 \\ \hline 2& 21 & 1 & 22 \\ \hline 3& 18 & 2 & 17\\ \hline 4& 10 & 3 & 8\\ \hline 5& 3 & 4 & 3\\ \hline 6& 0 & 5 & 1\\ \hline \hline total&66 & total&66\\ \hline \end{tabular}} {12 cm} {Observed distribution of the net number of steps performed by myosin heads as indicated in~\cite{kit99}.
Here $\lfloor X/L \rfloor$ is the total step number during the entire rising phase.} {DistribuzioneNumeroNettoSalti} \end{tabMacPc} As shown by the first two columns of Table~\ref{DistribuzioneNumeroNettoSalti}, out of 66 observed myosin heads (all attached to a glass microneedle of stiffness less than \numdim{0.1}{pN$/$nm}) 14 performed a net step number equal to 1, 21 a net step number equal to 2, and so on, while 3 of them performed a net step number equal to 5, implying a total displacement of about \numdim{30}{nm}. Columns 3 and 4 of Table~\ref{DistribuzioneNumeroNettoSalti} show that $\lfloor X_R/L \rfloor=\lfloor X/L \rfloor-1$ is well fitted by a Poisson distribution with parameter $\hat{\eta}=1.5$, given by the ratio of the total number of performed net steps ($99$) to the number of considered myosin heads ($66$). \par The agreement between the experimentally observed frequencies and those predicted via the Poisson distribution, jointly with the rarity of backward steps, leads one to conclude that the \lq\lq dwell time\rq\rq\ (namely, the time interval elapsing between two successive steps) should be, to a good approximation, exponentially distributed. Such a conclusion is experimentally supported by the data summarized in Fig.~4b) of~\cite{kit99}, leading to an estimate of about \numdim{5}{ms} for the mean dwell time. \par That we are facing sequences of rare events with exponentially distributed interarrival times is strongly suggestive of a first-exit problem out of an interval for continuous Markov processes possessing an equilibrium point sufficiently far from at least one of the end points of the diffusion interval (see, for instance,~\cite{nob85}).
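The quoted Poisson fit is immediately reproducible; a minimal check (the rounding of the theoretical frequencies to integers is ours):

```python
from math import exp, factorial

counts = [14, 21, 18, 10, 3, 0]       # observed frequencies of floor(X_R/L) = 0..5
n = sum(counts)                        # 66 myosin heads
eta = sum(k * c for k, c in enumerate(counts)) / n    # 99 net steps / 66 heads
theo = [round(n * exp(-eta) * eta ** k / factorial(k)) for k in range(6)]
print(eta, theo)                       # 1.5 [15, 22, 17, 8, 3, 1]
```

in agreement with the last column of Table~\ref{DistribuzioneNumeroNettoSalti}.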
\par On the grounds of all the foregoing considerations, with reference to the random part of the rising phase\footnote{By \lq\lq random part of the rising phase\rq\rq\ we denote the time interval during which the myosin head is subject exclusively to Brownian motion.} we are led to construct a model for the actomyosin dynamics based on the following assumptions: \begin{itemize} \item [(i)] The complex M.ADP.Pi + Energy is viewed as a point-size particle moving along an axis $X$, on which the abscissa $x$ denotes the displacement of the particle from the starting position. The positive direction of $X$ is that of the forward steps of the myosin head. \item [(ii)] The particle is embedded in a fluid. Hence, it is not only subject to a dissipative viscous force characterized by a drag coefficient $\beta$, but also to microscopic forces originating from the thermal motion of the fluid molecules. On account of the fluctuation-dissipation theorem, such microscopic forces can be macroscopically described by means of a Gaussian white noise having intensity $2\beta k_{\mbox{\tiny B}}T$, where $k_{\mbox{\tiny B}}$\, is the Boltzmann constant and $T$ the absolute temperature. \item [(iii)] The global interaction of the particle with the actin filament is synthesized into a single conservative force deriving from a potential $U(x)$. The structure of the actin filament suggests that $U(x)$ be a periodic function with period equal to the distance $L$ between pairs of consecutive actin monomers: \begin{equation}\label{Potenziale} U(x)=U(x+rL), \qquad \forall r\in \mathbb{Z}. \end{equation} Henceforth we shall denote by $L_A$ ($0<L_A<L$) the position of the minimum of $U(x)$, assume $U(L_A)=0$ and denote by $U_0:=U(0)=U(L)$ the depth of the potential well, namely the maximum of $U(x)$.
\item [(iv)] The particle's dynamics is described by Newton's equation, in which the total acting force is the sum of two terms: the first is a deterministic force generated by the potential $U(x)$ and by the viscous force, while the second is the random force due to the presence of the Gaussian white noise. \item [(v)] Two more (constant) forces, denoted by $F_i$ and $F_e$, act on the particle. We assume that $F_i$ is generated by a process that finds its origin in a part of the energy possessed by the particle, and take $F_iL\ll U_0$. Instead, $F_e$ is an external force, conceivably applied from the outside by the experimenter. \end{itemize} \par Summing up, we are assuming that the complex M.ADP.Pi + Energy can be looked at as a Brownian particle subject to a tilted potential $V(x)$: \begin{equation}\label{Potenzialeinclinato} V(x)=U(x)-Fx \end{equation} where $U(x)$ has been indicated in (\ref{Potenziale}) and $F=F_i-F_e$. In the experimental conditions the height of the potential wells is \numdim{U_0\le 100}{pN$\cdot$nm}, the period of the potential is \numdim{L=5.5}{nm}, the particle mass is \numdim{m=2.2\cdot10^{-22}}{kg}, the drag coefficient is \numdim{\beta=90}{pN$\cdot$ns/nm} and the environmental temperature is \numdim{T=293}{K}. Therefore, the Reynolds number is much less than 1 (see also~\cite{shi03}), so that the inertial term of the equation of motion can be disregarded. In conclusion, the overdamped equation describing the movement of the particle is the following Langevin equation: \begin{equation}\label{Eq.Langevin} \dot{x}=-\frac{1}{\beta}\,{V^\prime(x)}+\sqrt{\frac{2\mbox{$k_{\mbox{\tiny B}}$} T}{\beta}}\,\,\Lambda (t) \end{equation} where $\Lambda(t)$ is a zero-mean white Gaussian noise with unit intensity, ( $\dot{ }$ ) denotes the time derivative and ( $^\prime$ ) the space derivative. \par Within this framework, the idealized Brownian particle randomly moves around an equilibrium point located at a minimum $L_A$ of $U(x)$.
Whenever it exits the current potential well, we conventionally say that the corresponding myosin head has made a step, in the forward or in the backward direction according to where the exit has taken place. Hence, we are facing a first-exit problem for the Brownian particle through the endpoints of the current potential well. Taking into account the above assumptions, we conclude that the distribution of the first-exit time of the Brownian particle from the potential well is exponential, since \begin{itemize} \item [--] the process described by equations~(\ref{Potenziale}), (\ref{Potenzialeinclinato}) and~(\ref{Eq.Langevin}) possesses an equilibrium point at the minimum $L_A$ of $U(x)$; \item [--] the time for the particle to travel the distance $L_A$ under the sole force due to $U(x)$ is (see, for instance,~\cite{luc99}) $\tau=\beta L_A^2/U_0$; \item [--] the standard deviation of the Gaussian steady-state distribution of the process modeling the particle's motion is $\sigma=\sqrt{(\mbox{$k_{\mbox{\tiny B}}$}T/\beta)\cdot\tau/2}\equiv L_A/\sqrt{2u_0}$, where we have set $u_0=U_0 /\mbox{$k_{\mbox{\tiny B}}$}T$; \item [--] the ratio $l_A:=L_A/\sigma=\sqrt{2u_0}$ falls well within the interval $(2,4\sqrt{2})$ for all choices of $u_0\in[2,16]$ to which we shall refer in what follows, so that the exponential approximation for the first-exit time is valid~\cite{nob85}. \end{itemize} \par In the sequel, we shall exploit the known formulas~\cite{lin01} for the conditional probability $p$ that the particle moves a step forward after leaving the current potential well, \begin{eqnarray} p&=&\frac{1}{1+\exp{\left(-FL/\mbox{$k_{\mbox{\tiny B}}$}T\right)}}\label{P} \end{eqnarray} and for the mean first-exit time $\mu$ from a potential well: \begin{eqnarray} \mu&=&\beta \frac{p}{\mbox{$k_{\mbox{\tiny B}}$}T}\int_0^Ldx\,\exp\left\{\frac{V(x)}{\mbox{$k_{\mbox{\tiny B}}$}T}\right\}\int_{x-L}^xdy\,\exp\left\{-\frac{V(y)}{\mbox{$k_{\mbox{\tiny B}}$}T}\right\}\label{Mu}.
\end{eqnarray} \section{Determination of $F_i$ and $U_0$} Hereafter, we shall view as constants the parameters $\beta$ and $T$ that characterize the thermal bath, as well as the period $L$ of the potential $U(x)$, their values being given in Section~\ref{model}. Hence, the quantitative specification of the model described by Eqs.~(\ref{Potenziale}), (\ref{Potenzialeinclinato}) and~(\ref{Eq.Langevin}) requires that numerical values be attributed to three more parameters: the depth $U_0$ of the potential well, the position $L_A$ of the minimum of $U(x)$ in $(0,L)$ and the internal force $F_i$. The numerical specification of these parameters can be performed after the function $U(x)$ has been chosen. We shall preliminarily take $U(x)$ as a symmetric $(L_A=L/2)$ saw-tooth potential (see Table~\ref{Potenziali}). Subsequently, we shall test the robustness of our model by assuming alternative potential functions, henceforth called \lq\lq potential profiles\rq\rq. It should be noted that $F_i$ is somewhat related to the largest force that myosin is able to generate endogenously. Here, we shall not attempt to provide any biological justification of its genesis. We limit ourselves to pointing out that elsewhere~\cite{nis02}, where similar experiments were performed on Myosin~VI, it is conjectured that the weak binding between actin and myosin is a source of distortion of the geometry of the two helices in the actin filament. Such a distortion exposes the hydrophobic region of actin to the myosin head, thus generating a tilt of the potential, and hence a constant force $F_i$. The existence of a tilt of the potential, conjectured in~\cite{buo03}, has been subsequently supported by means of simulations in~\cite{esa03}, where it is shown that in the absence of such a tilt the available experimental evidence on the myosin motion in the presence of contrasting applied loads cannot be accounted for by any of the other models therein considered.
Within our strictly phenomenological framework, the existence of this force $F_i$ is supported by Eq.~(\ref{P}), showing that in the absence of external applied forces (i.e. $F_e=0$) steps in either direction would be equally likely unless $F_i\ne0$, in contrast with the experimental evidence on the high degree of directionality exhibited by the motion of the myosin head. \par To proceed with the quantitative specification of our model so as to attempt the fitting of the available experimental data, the values of $F_i$ and of $U_0$ must be specified. This will be done by making use of the available experimental data~\cite{kitpr} shown in Table~\ref{Steps} and in Table~\ref{DwellTime}\footnote{Note that loads and dwell times in Tables~\ref{Steps}~and~\ref{DwellTime} must be viewed as averages to which confidence intervals are associated. For instance, the load \numdim{0.046}{pN} is the result of all measurements for which the product of the stiffness of the glass microneedle times the distance traveled by the myosin head falls around \numdim{0.046}{pN}. The corresponding dwell time \numdim{5.3}{ms} is to be viewed as the arithmetic average of the dwell times recorded during these measurements.}.
\begin{table}[htb] \begin{minipage}[htb]{9 cm} \renewcommand{\arraystretch}{2.5} \caption{For three conditions of the applied load $C$, measured at the end of the rising phases, the table lists the recorded numbers $\hat{n}_f$ of forward steps and $\hat{n}_b$ of backward steps, the total number of steps, and the fraction $\hat{p}$ of forward steps.} \label{Steps} \vspace{0.2 cm} \begin{center} \begin{tabular}{|c|c|c|c|c|}\hline $C$ (pN) & $\hat{n}_f$ & $\hat{n}_b$ & $\hat{n}_f+\hat{n}_b$ & $\displaystyle{\hat{p} = \frac{\hat{n}_f}{\hat{n}_f+\hat{n}_b}}$\\ \hline $\big]0.0,0.5\big]$ & 54 & 9 & 63 & 0.8571 \\\hline $\big]0.5,1.0\big]$ & 40 & 9 & 49 & 0.8163 \\ \hline $\big]1.0,2.0\big]$ & 29 & 19 & 48 & 0.6042 \\ \hline \end{tabular} \end{center} \end{minipage} \hfill \begin{minipage}[htb]{5 cm} \caption {Recorded dwell times $\hat{\mu}$ for different values of the applied load $C$.} \label{DwellTime} \vspace{0.2 cm} \begin{center} \begin{tabular}{|r|r|}\hline $C$ (pN) & $\hat{\mu}$ (ms) \\ \hline 0.046 & 5.3 \\ \hline 0.190 & 5.7 \\ \hline 0.300 & 6.0 \\ \hline 0.470 & 7.1 \\ \hline 0.690 & 8.9 \\ \hline 0.830 & 6.2 \\ \hline 1.240 & 11.1 \\ \hline 1.890 & 11.0 \\ \hline \end{tabular} \end{center} \end{minipage} \end{table} From these data it is evident that the frequency $\hat{p}$ of forward steps decreases as the applied load increases, while the dwell times increase with the load. \par For some fixed values of the internal force $F_i$, use of Eq.~(\ref{P}) has been made to calculate the theoretical probabilities of the particle's exit from the current potential well to the next one (i.e. the analogue of the forward step probabilities) as a function of $F_e$. The results are shown in Figure~\ref{PversusFE}, where eight realistic values of $F_i$ have been chosen in the interval \numdim{\big[1.00}{pN}\numdim{,1.90}{pN}$\big]$. Vertical lines indicate the $3$ load intervals of Table~\ref{Steps}, whereas horizontal lines indicate the $3$ corresponding recorded frequencies.
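Eq.~(\ref{P}) can be inverted in closed form, $F=(k_{\mbox{\tiny B}}T/L)\ln[p/(1-p)]$, so the admissible window for $F_i$ can also be obtained directly from the frequencies of Table~\ref{Steps}, without scanning the curves; a sketch (this inversion is our shortcut, with $k_{\mbox{\tiny B}}T$ evaluated at \numdim{T=293}{K}):

```python
import math

kB_T = 1.380649e-2 * 293        # pN*nm (k_B in pN*nm/K at T = 293 K)
L = 5.5                         # nm, potential period

# (F_e band in pN, observed fraction of forward steps), from Table 2
bands = [((0.0, 0.5), 54 / 63), ((0.5, 1.0), 40 / 49), ((1.0, 2.0), 29 / 48)]

def force_for(p):
    """Invert Eq. (P): the tilt F giving forward-step probability p."""
    return kB_T / L * math.log(p / (1.0 - p))

# p depends on F = F_i - F_e only, so each band constrains F_i to an interval
lo = max(force_for(p) + fe[0] for fe, p in bands)
hi = min(force_for(p) + fe[1] for fe, p in bands)
p0 = 1.0 / (1.0 + math.exp(-1.75 * L / kB_T))    # p at F_i = 1.75 pN, F_e = 0
print(round(lo, 2), round(hi, 2), round(p0, 3))  # -> 1.6 1.82 0.915
```

The resulting window is in agreement with the interval \numdim{\big[1.60}{pN}\numdim{,1.80}{pN}$\big]$ read off Figure~\ref{PversusFE}.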
We see that for internal forces \numdim{F_i=1.00}{pN} and \numdim{F_i=1.90}{pN} the plotted curves do not meet the requirement of leading to the experimentally recorded frequency $\hat{p}=0.8571$, whatever value of $F_e$ is chosen within the interval \numdim{\big[0}{pN}\numdim{,0.50}{pN}$\big]$. Similarly, for \numdim{F_i=1.55}{pN} no value of the computed probability equals the frequency $\hat{p}=0.8163$ for $F_e$ ranging in \numdim{\big[0.50}{pN}\numdim{,1.00}{pN}$\big]$. Instead, all remaining $5$ curves, referring to values of $F_i$ ranging from \numdim{1.60}{pN} to \numdim{1.80}{pN} in steps of magnitude \numdim{0.05}{pN}, from below upward, are in agreement with the experimental values of Table~\ref{Steps}. Hence, the interval of values of $F_i$ to be selected in order to secure the fitting of the experimental data is \numdim{\big[1.60}{pN}\numdim{,1.80}{pN}$\big]$. \par \begin{figMacPc} {Figure/Fig1.eps} {13cm} {The conditional probability $p$ that the particle moves a step forward after leaving the current potential well is plotted as a function of the external applied force $F_e$ for various values of the internal force $F_i$~(pN). Temperature and potential period are taken as \numdim{T=293}{K} and \numdim{L=5.5}{nm}, respectively. The three horizontal lines whose ordinates are $0.8571$, $0.8163$ and $0.6042$, respectively, denote the sample percentages $\hat{p}$ of forward steps under the three load conditions indicated in Table~\ref{Steps}.} {PversusFE} \end{figMacPc} We now come to the estimation of the last parameter, $U_0$, i.e. the depth of the potential well. This is done by exploiting Eq.~(\ref{Mu}), in which the left-hand side is viewed as the function $\mu=\mu(F_i,F_e,U_0;L,\beta,T)$ and set equal to \numdim{5.3}{ms}, which (see Table~\ref{DwellTime}) corresponds to the smallest applied load \numdim{0.046}{pN}.
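This implicit determination of $U_0$ lends itself to a compact numerical sketch: Eq.~(\ref{Mu}) is evaluated by trapezoidal quadrature and $U_0$ is found by bisection (the quadrature grid, the bisection bracket and the choice \numdim{F_i=1.75}{pN} are illustrative; the profile is the symmetric saw-tooth):

```python
import numpy as np

kB_T = 1.380649e-2 * 293          # pN*nm (k_B in pN*nm/K at T = 293 K)
L, beta = 5.5, 90.0               # nm; pN*ns/nm
Fi, Fe = 1.75, 0.046              # pN: internal force and smallest recorded load
F = Fi - Fe

def trap(f, h):
    """Trapezoidal rule along the last axis, uniform spacing h."""
    return h * (f.sum(axis=-1) - 0.5 * (f[..., 0] + f[..., -1]))

def mean_exit_time(U0):
    """Eq. (Mu) for the symmetric saw-tooth profile, in ns."""
    def V(z):
        y = np.mod(z, L)
        U = np.where(y < L / 2, U0 * (1 - 2 * y / L), U0 * (2 * y / L - 1))
        return U - F * z
    p = 1.0 / (1.0 + np.exp(-F * L / kB_T))
    n = 1201
    h = L / (n - 1)
    x = np.linspace(0.0, L, n)
    s = np.linspace(-L, 0.0, n)                    # inner variable y = x + s
    inner = trap(np.exp(-V(x[:, None] + s[None, :]) / kB_T), h)
    outer = trap(np.exp(V(x) / kB_T) * inner, h)
    return beta * p / kB_T * outer

# mu grows monotonically with U0: solve mu(U0) = 5.3 ms by bisection
lo, hi = 10.0, 20.0                                # bracket for U0 / kB_T
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_exit_time(mid * kB_T) < 5.3e6 else (lo, mid)
u0 = 0.5 * (lo + hi)
print(round(u0, 2))                                # well depth in units of kB*T
```

For \numdim{F_i=1.75}{pN} this returns a depth of about \numdim{15.7}{$k_{\mbox{\tiny B}}T$}, consistent with the values quoted below and with Table~\ref{Altezze}.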
Since the temperature $T$, the period $L$ of the potential $V(x)\equiv U(x)-(F_i-F_e)x$ and the drag coefficient $\beta$ are specified, if we take \numdim{F_e=0.046}{pN} Eq.~(\ref{Mu}) makes $U_0$ an implicit function of $F_i$. \par Note that as $F_i$ increases (i.e. as the potential tilt increases) the mean first-exit time $\mu$ decreases. Hence, in order to keep $\mu$ constantly equal to \numdim{5.3}{ms} while $F_i$ ranges in the interval \numdim{\big[1.60}{pN}\numdim{,1.80}{pN}$\big]$, the depth $U_0$ must be taken as a monotonically increasing function of $F_i$. From $\mu(1.60,0.046,U_0)=5.3$ and $\mu(1.80,0.046,U_0)=5.3$, Eq.~(\ref{Mu}) yields \numdim{U_0\approx15.632}{$k_{\mbox{\tiny B}}T$} and \numdim{U_0\approx 15.755}{$k_{\mbox{\tiny B}}T$}, respectively. \par In order to test the agreement of our model with the experimental values of the mean dwell times for the various loads (see Table~\ref{DwellTime}), we make use of Eq.~(\ref{Mu}) to determine $\mu$ as a function of $F_e$ for the pairs (\numdim{1.60}{pN},\numdim{15.632}{$k_{\mbox{\tiny B}}T$}) and (\numdim{1.80}{pN},\numdim{15.755}{$k_{\mbox{\tiny B}}T$}), involving the extrema of the determined values of $F_i$ and $U_0$. The obtained values are shown in Figure~\ref{MuversusFE}, where the corresponding experimental mean dwell times are also indicated. Obviously, all other pairs of admissible values of $F_i$ and $U_0$ lead to curves lying between the two shown. \begin{figMacPc} {Figure/Fig2.eps} {13cm} {The mean first-exit time $\mu$ is plotted as a function of the external applied force $F_e$. The chosen values of $F_i$~(pN) and $U_0$~($k_{\mbox{\tiny B}}T$) are indicated for each curve. Dots represent the experimental dwell times of Table~\ref{DwellTime}.
The drag coefficient $\beta$ has been taken as \numdim{90}{pN$\,$ns$/$nm}, whereas temperature and period of the potential are the same as in Figure~\ref{PversusFE}.} {MuversusFE} \end{figMacPc} Inspection of Figure~\ref{MuversusFE}, jointly with the magnitudes of the confidence intervals associated with each experimental value~\cite{kit01}, shows that for small values of $F_e$ the agreement between experimental data and theoretical predictions is good. Indeed, for the first $3$ values of Table~\ref{DwellTime} the agreement is excellent, and it remains more than acceptable for larger values of $F_e$ up to \numdim{0.83}{pN}. The discrepancies shown by the remaining two experimental data are a consequence of the crude assumption, implicit in the foregoing, that the applied load is strictly parallel to the direction of motion. This is acceptable for reasonably small loads, but certainly unrealistic for large loads: in this case a significant orthogonal component should be expected, whose effect is akin to an increase of the depth of the potential well $U_0$. Nevertheless, in the interval between the last two load values of Table~\ref{DwellTime}, a qualitatively similar behavior of the theoretical curves and of the experimental values is present. \section{Net step number distribution} In the present Section we shall implement the model described by Eqs.~(\ref{Potenziale})--(\ref{Eq.Langevin}) in order to obtain, via a suitable simulation procedure, the distribution of the net number of steps performed by the myosin head during the time interval elapsing between the instant when the ATP molecule is hydrolyzed and the final release of the phosphoric radical, i.e. during the random part of the rising phase. The results of our simulations will then be discussed with reference to the experimental data of Table~\ref{DistribuzioneNumeroNettoSalti}. We recall that such data refer to 66 myosin heads that have globally performed 99 net steps.
\par Our simulation procedure is based on a discretized version of Eq.~(\ref{Eq.Langevin}) and on a routine for generating Gaussian pseudorandom numbers, in a way to determine step by step the positions achieved by 66 Brownian particles all originating at $x_0$ at time 0. The traveled distances at the end of the individual random rising phases are recorded, which finally leads to the determination of the 66 net step numbers. Such a procedure has been implemented on an IBM SP4 parallel supercomputer and repeated 1600 times to obtain reliable statistics. \par In order to solve Eq.~(\ref{Eq.Langevin}) numerically, one has to preliminarily specify the initial position $x_0$ of the Brownian particle and the duration $\Theta$ of each sample path. We shall safely take $x_0=L/2$ since, whatever the actual initial position of the particle inside the potential well, its relaxation time is much smaller than the mean first-exit time. Incidentally, we note that such a choice is consistent with the procedure adopted in Kitamura et al.~\cite{kit99} (see the legend to Figure~$2d$), where, to determine the average recorded trajectory, the starting positions of the rising phases have been synchronized. The specification of the sample path duration is somewhat more involved, because $\Theta$ is a random variable of unknown distribution. Nevertheless, we can appeal to the approximate exponential distribution of the first-exit time, motivated by the depth of the potential well, to assume that $\Theta$ is approximately gamma-distributed, with probability density $\Gamma(\mu,\nu)$, where $\mu$ is the mean first-exit time from a potential well and $\nu$ is the mean total number of exits of the Brownian particle. While $\mu$ is obtained via Eq.~(\ref{Mu}), our estimate $\hat{\nu}$ of $\nu$ is obtained by using the data of Table~\ref{DistribuzioneNumeroNettoSalti} and Eq.~(\ref{P}): \begin{displaymath} \hat{\nu} = \frac{99}{66(2p-1)}.
\end{displaymath} \par Finally, for each specified potential depth $U_0$, the size $\Delta t$ of the time discretization in Eq.~(\ref{Eq.Langevin}) is determined by progressively reducing it until the obtained distribution becomes appreciably invariant. For an immediate comparison, Table~\ref{pot.cinque} shows the distribution obtained via simulation for a potential depth \numdim{U_0=5}{$k_{\mbox{\tiny B}}T$} and a discretization step \numdim{\Delta t=0.25}{ns}, jointly with the experimental distribution. The agreement between observed and numerical frequencies is more than satisfactory. The only apparently appreciable discrepancy is that 2 (out of 66) simulated Brownian particles are seen to perform a net step number equal to $-1$, implying a zero total displacement once the effect of the final power stroke is superimposed. However, at the present stage, such a discrepancy should not be taken as particularly significant. Indeed, the presently available experimental setup does not allow one to monitor simultaneously the trajectory of the myosin head and the occurrence of hydrolysis of an ATP molecule. Therefore, observing at least one positive net step number of the myosin head indicates that one ATP hydrolysis has occurred, and consequently such a case has been included among the recorded data. Instead, in the case when a zero net step number is observed, it is impossible to claim that an ATP molecule has been hydrolyzed, and hence such cases have been disregarded. Future endeavours could aim at the design of an experimental setup able to prove that a zero net step number can actually occur as a consequence of the hydrolysis of an ATP molecule.
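A minimal vectorized sketch of such a simulation procedure follows (our illustrative choices: the symmetric saw-tooth profile with \numdim{U_0=5}{$k_{\mbox{\tiny B}}T$}, $F_e=0$, the mean exit time \numdim{\mu\approx 1206}{ns} of Table~\ref{pot.cinque}, a discretization step of \numdim{0.25}{ns} and a fixed seed; the original computations were performed on an IBM SP4):

```python
import numpy as np

rng = np.random.default_rng(1)

kB_T = 1.380649e-2 * 293        # pN*nm (k_B in pN*nm/K at T = 293 K)
L, beta = 5.5, 90.0             # nm; pN*ns/nm
U0 = 5.0 * kB_T                 # well depth, 5 kB*T
F = 1.75                        # pN, F = F_i - F_e with F_e = 0
dt = 0.25                       # ns, discretization step
n_heads = 66

def dU(x):
    """Slope of the symmetric saw-tooth U(x): maxima at multiples of L."""
    return np.where(np.mod(x, L) < L / 2, -2 * U0 / L, 2 * U0 / L)

p = 1.0 / (1.0 + np.exp(-F * L / kB_T))       # Eq. (P)
nu = 1.5 / (2 * p - 1)                        # estimated mean number of exits
mu = 1206.0                                   # ns, Eq. (Mu) for these parameters
steps = (rng.gamma(nu, mu, n_heads) / dt).astype(int)   # gamma-distributed durations

x = np.full(n_heads, L / 2)                   # all particles start at the minimum
sigma = np.sqrt(2 * kB_T * dt / beta)         # thermal increment per time step
for t in range(steps.max()):
    dx = -(dU(x) - F) / beta * dt + sigma * rng.standard_normal(n_heads)
    x = np.where(t < steps, x + dx, x)

net = np.floor(x / L).astype(int)             # net step number of each head
print(net.mean())                             # close to 99/66 = 1.5 on average
```

With these choices the sample mean of the net step numbers fluctuates around the experimental value $99/66=1.5$.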
\par \begin{tabMacPc} {\begin{tabular}{|c|r||r||r||r|}\hline $\lfloor X_R/L \rfloor$ &Observed frequency & \multicolumn{3}{c|}{Numerical frequency} \\ \hline & &\numdim{F_i=1.70}{pN} &\numdim{F_i=1.75}{pN} &\numdim{F_i=1.80}{pN}\\ \hline\hline -1& & 2.05 & 1.90 & 1.76 \\ \hline 0& 14 &13.79 &13.63 &13.88 \\ \hline 1& 21 &20.72 &20.83 &20.93 \\ \hline 2& 18 &16.41 &16.43 &16.47 \\ \hline 3& 10 & 8.53 & 8.59 & 8.54 \\ \hline 4& 3 & 3.14 & 3.26 & 3.16 \\ \hline 5& 0 & 0.93 & 0.93 & 0.88 \\ \hline \hline \multicolumn{2}{|r||}{$\overline{CV}$} & 0.966 & 0.964 & 0.963 \\ \hline \multicolumn{2}{|r||}{$\overline{p}$} & 0.909 & 0.914 & 0.920\\ \hline \multicolumn{2}{|r||}{$p$} & 0.910 & 0.915 & 0.920\\ \hline \multicolumn{2}{|r||}{$\overline{\mu}$ (ns)} & 1235.057 & 1207.288 & 1180.852\\ \hline \multicolumn{2}{|r||}{$\mu$ (ns)} & 1233.784 & 1206.048 & 1178.695\\ \hline \end{tabular}} {14 cm} {Net step frequency distributions for \numdim{U_0=5}{$k_{\mbox{\tiny B}}T$} and for three values of $F_i$ chosen in the admissible interval. Here $F_e=0$, $x_0=L/2$ and \numdim{\Delta t=0.05}{ns}. Other parameters have been chosen as in Figure~\ref{MuversusFE}. The last 5 rows show the coefficient of variation, the probability of a forward step and the mean exit time: the barred quantities have been obtained via simulations, while the theoretical values of $p$ and $\mu$ have been obtained via Eqs.~(\ref{P}) and (\ref{Mu}).} {pot.cinque} \end{tabMacPc} The chosen value of $U_0$ is motivated by a twofold consideration. First, it is large enough to imply that the first-exit time distribution is approximately exponential. This is indeed supported by the numerically evaluated coefficient of variation, which has been found to be about 0.96. Second, and more relevant, it is reasonable to conceive that, under the assumed overdamped regime, the net step number is insensitive to the depth of the potential well.
Indeed, the forward exit probability $p$ given by~(\ref{P}), at the fixed environmental temperature, depends only on the energy $FL$, namely on the difference of the potential $V(x)$ over one period, thus being independent of the depth $U_0$. Table~\ref{pot.quattro&otto} clearly supports such a conclusion. Indeed, it indicates that, for instance, doubling the depth $U_0$ of the potential well does not significantly affect the net step frequencies, even for different choices of the internal force\footnote{The presence of non-integer numbers in Tables~\ref{pot.cinque}, ~\ref{pot.quattro&otto} and ~\ref{pot.cinquebis} is a consequence of our adopted estimation procedure. The half-width of the related 95\%-confidence intervals has been seen never to exceed $0.2$. For comparison purposes we have not rounded off the raw numbers to the nearest integers.}. \begin{tabMacPc} {\begin{tabular}{|c||r|r||r|r||r|r|}\hline & \multicolumn{2}{c||}{\numdim{F_i=1.70}{pN}}&\multicolumn{2}{c||}{\numdim{F_i=1.75}{pN}} & \multicolumn{2}{c|} {\numdim{F_i=1.80}{pN}} \\ \hline $\lfloor X_R/L \rfloor$ & \numdim{U_0=4}{$k_{\mbox{\tiny B}}T$} & \numdim{U_0=8}{$k_{\mbox{\tiny B}}T$} & \numdim{U_0=4}{$k_{\mbox{\tiny B}}T$} & \numdim{U_0=8}{$k_{\mbox{\tiny B}}T$} & \numdim{U_0=4}{$k_{\mbox{\tiny B}}T$} & \numdim{U_0=8}{$k_{\mbox{\tiny B}}T$} \\ \hline \hline -1& 2.11 & 2.05 & 1.94 & 1.90 & 1.85 & 1.81 \\ \hline 0& 13.56 & 13.96 & 13.54 & 13.76 & 13.48 & 14.00 \\ \hline 1& 21.13 & 20.26 & 20.92 & 20.26 & 21.30 & 20.46 \\ \hline 2& 16.70 & 16.11 & 16.93 & 16.29 & 16.98 & 16.09 \\ \hline 3& 8.43 & 8.58 & 8.53 & 8.61 & 8.40 & 8.56 \\ \hline 4& 2.95 & 3.40 & 3.01 & 3.53 & 2.91 & 3.49 \\ \hline 5& 0.75 & 1.12 & 0.78 & 1.15 & 0.76 & 1.11 \\ \hline \end{tabular}} {16 cm} {Numerically obtained net step frequency distributions. Here $F_e=0$ and $x_0=L/2$. Other parameters are chosen as in Figure~\ref{MuversusFE}.
Discretization steps are chosen as follows: \numdim{\Delta t=0.1}{ns} for \numdim{U_0=4}{$k_{\mbox{\tiny B}}T$}, and \numdim{\Delta t=0.025}{ns} for \numdim{U_0=8}{$k_{\mbox{\tiny B}}T$}. The indicated values of $F_i$ belong to the admissible interval.} {pot.quattro&otto} \end{tabMacPc} \par Note that the discretization parameter $\Delta t$ must be determined with specific reference to the magnitude of $U_0$. Indeed, $\Delta t$ must be much smaller than the relaxation time of the force $-U^\prime(x)\propto \beta L^2/U_0$. Furthermore, it must be such as to cope with the random forces due to the thermal bath. Therefore, the mean square displacement per unit time should remain constant as the depth $U_0$ of $U(x)$ is made to change, which implies an inverse dependence of the magnitude of $\Delta t$ on the square of the magnitude of the potential well. This is the motivation for the choices of $\Delta t$ indicated in Table~\ref{pot.quattro&otto}. \section{The role of potential forms and asymmetries} In~\cite{buo03} the idea of a washboard potential and its role in myosin dynamics were introduced and exploited with reference only to a parabolic potential (parabolic \lq\lq profile\rq\rq, in the currently adopted terminology), though without any consideration of model robustness. This task is now accomplished in the more general framework of the present model, by considering not only parabolic, but also other types of profiles: saw-tooth, cosine-like and Lindner-type (see Table~\ref{Potenziali}). The effects of the natural asymmetry present in the actomyosin system are investigated within our model by means of an asymmetric saw-tooth potential. Plots of the considered potentials over one period are shown in Figure~\ref{GraficiPotenziali}, whereas Figure~\ref{GraficiForze} refers to the corresponding generated forces. The Lindner-type potential is shown for two values of the parameter $\delta$.
It is not difficult to see that $\delta\to 0$ yields the cosine profile, whereas the potential flattens down in the middle as $\delta$ increases. \par \begin{table}[htb] \begin{minipage}[htb]{9.7 cm} \renewcommand{\arraystretch}{2.0} \caption{Potential profiles.} \label{Potenziali} \vspace{0.2 cm} \begin{center} \begin{tabular}{|l|c|}\hline Saw-tooth & $U_S(x)$ $=\left\{\begin{array}{l} \displaystyle{\frac{-U_0}{L_A}\left(x-L_A\right)},\quad 0 \le x\le L_A\\ \displaystyle{\frac{U_0}{L-L_A}\left(x-L_A\right)},\quad L_A \le x \le L\\ \end{array}\right.$ \\ \hline Parabolic & $U_P(x)$ $\displaystyle{=\frac{U_0}{L^2/4}\left(x-\frac{L}{2}\right)^2}$ \\ \hline Cosine & $U_C(x)$ $\displaystyle{=\frac{U_0}{2}\left[\cos\left(\frac{2\pi}{L} x\right)+1\right]}$ \\ \hline Lindner & $U_L(x)$ \makebox{$\displaystyle{=\frac{U_0}{e^{2\delta}-1}}\left\{e^{\displaystyle{\delta\left[\cos\left(\frac{2\pi}{L} x\right)+1\right]}}-1\right\}$\vspace{0.2 cm}} \\ \hline \end{tabular} \end{center} \end{minipage} \hfill \begin{minipage}[htb]{5 cm} \renewcommand{\arraystretch}{1.0} \caption{For each potential profile the depth $U_0$ of the potential well is indicated.
Here \numdim{F_i=1.75}{pN}, \numdim{F_e=0.046}{pN} and the other parameters are the same as in Figure~\ref{MuversusFE}.} \label{Altezze} \vspace{0.2 cm} \begin{center} \begin{tabular}{|l|c|}\hline Type & $U_0$ ($k_{\mbox{\tiny B}}T$) \\ \hline Saw-tooth & 15.723 \\ \hline Parabolic & 15.043 \\ \hline Cosine & 13.944 \\ \hline Lindner ($\delta=0.1$) & 13.942 \\ \hline Lindner ($\delta=0.5$) & 13.918 \\ \hline Lindner ($\delta=1$) & 13.851 \\ \hline Lindner ($\delta=2$) & 13.697 \\ \hline Lindner ($\delta=3$) & 13.611 \\ \hline Lindner ($\delta=4$) & 13.591 \\ \hline Lindner ($\delta=5$) & 13.604 \\ \hline Lindner ($\delta=10$) & 13.749 \\ \hline \end{tabular} \end{center} \end{minipage} \end{table} \begin{figure}[htb] \begin{minipage}[htb]{7.5 cm} \epsfxsize=7 cm \centerline{\epsfbox{Figure/Fig3.eps}} \caption{Plots of the potentials as functions of the position within each pair of consecutive monomers.} \label{GraficiPotenziali} \end{minipage} \hfill \begin{minipage}[htb]{7.5 cm} \epsfxsize=7 cm \centerline{\epsfbox{Figure/Fig4.eps}} \caption{Conservative forces generated by the potentials of Figure~\ref{GraficiPotenziali}.} \label{GraficiForze} \end{minipage} \end{figure} The independence of the exit probability $p$ from the potential profile implies that the net step distribution is profile-independent as well. This is also evident from Table~\ref{pot.cinquebis}, where the net step distributions are reported for each of the four potential profiles considered above.
\begin{tabMacPc} {\begin{tabular}{|c|r||r||r||r||r|}\hline $\lfloor X_R/L \rfloor$ &Observed frequency & \multicolumn{4}{c|}{Potential profiles} \\ \hline & &Saw-tooth &Parabolic &Cosine &Lindner ($\delta=2$)\\ \hline\hline -1 & & 1.89 & 1.92 & 1.90 &1.99 \\ \hline 0 &14 &13.61 &13.69 &13.73 &13.81 \\ \hline 1 &21 &20.67 &20.69 &20.59 &20.46 \\ \hline 2 &18 &16.47 &16.42 &16.17 &16.20 \\ \hline 3 &10 & 8.63 & 8.60 & 8.70 &8.64 \\ \hline 4 & 3 & 3.33 & 3.29 & 3.38 &3.34 \\ \hline 5 & 0 & 0.98 & 0.95 & 1.05 &1.07 \\ \hline \hline \multicolumn{2}{|r||}{$\overline{CV}$ } &0.964 &0.970 &0.984 &0.982 \\ \hline \multicolumn{2}{|r||}{$\mu$ (ns) }& 1206.048 & 1403.504 & 2194.007 & 2270.677\\ \hline \end{tabular}} {14 cm} {The first two columns list the net number of steps performed by myosin heads and the corresponding observed frequencies. The remaining four columns show the net step distributions, obtained via numerical simulations using the indicated potential profiles, all possessing \numdim{U_0=5}{$k_{\mbox{\tiny B}}T$}. In all cases \numdim{F_i=1.75}{pN}, $F_e=0$, $x_0=L/2$, and \numdim{\Delta t=0.05}{ns}. Other parameters have been chosen as in Figure~\ref{MuversusFE}. Variation coefficients and mean-exit times $\mu$, obtained via Eq.~(\ref{Mu}), are listed as well. } {pot.cinquebis} \end{tabMacPc} \par The next task is to pinpoint the effects of the potential profiles on the mean first-exit time. To this purpose, we refer to Table~\ref{DwellTime}, showing that the mean dwell time in the lowest load condition is \numdim{\mu=5.3}{ms}. After choosing \numdim{F_i=1.75}{pN}, we then make use of Eq.~(\ref{Mu}) for each and every one of the four considered potentials, imposing that the left-hand side equal such a value of $\mu$. By iterated numerical integrations, for each potential profile the corresponding value of $U_0$ is finally determined. The results are listed in Table~\ref{Altezze}, where the effect of the parameter $\delta$ in the Lindner-type profile is detailed.
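The four profiles of Table~\ref{Potenziali} are easily coded and checked; a sketch with the depth normalized to $U_0=1$, verifying in particular that the $\delta\to0$ limit of the Lindner-type profile recovers the cosine one:

```python
import numpy as np

L, U0 = 5.5, 1.0                   # nm; depth normalized to 1
x = np.linspace(0.0, L, 1001)

def U_S(z):                        # symmetric saw-tooth (L_A = L/2)
    y = np.mod(z, L)
    return np.where(y < L / 2, U0 * (1 - 2 * y / L), U0 * (2 * y / L - 1))

def U_P(z):                        # parabolic
    return U0 / (L ** 2 / 4) * (z - L / 2) ** 2

def U_C(z):                        # cosine
    return U0 / 2 * (np.cos(2 * np.pi * z / L) + 1)

def U_L(z, delta):                 # Lindner-type
    return U0 / np.expm1(2 * delta) * np.expm1(delta * (np.cos(2 * np.pi * z / L) + 1))

# all profiles share the depth U0 at z = 0 and vanish at the minimum z = L/2
for U in (U_S, U_P, U_C, lambda z: U_L(z, 2.0)):
    assert abs(U(np.array([0.0]))[0] - U0) < 1e-9
    assert abs(U(np.array([L / 2]))[0]) < 1e-9

# the delta -> 0 limit of the Lindner-type profile is the cosine profile
assert np.max(np.abs(U_L(x, 1e-6) - U_C(x))) < 1e-5
```

Increasing $\delta$ instead flattens the well around its minimum, as noted above.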
\par The behavior of the mean first-exit time $\mu$ as a function of the magnitude of the external force is shown in Figure~\ref{MuversusFE+Profili}. All the cases considered in Table~\ref{Altezze} lead to graphs falling in the region bounded by the lowest and the highest curves. We point out that changing the potential profile never yields mean first-exit time changes exceeding $10\%$. Hence, we are led to conclude that the depth $U_0$ of the potential well can be tuned to acceptable biological values by a suitable selection of the potential profile, without appreciably affecting the value of the mean first-exit time. For instance, switching from the saw-tooth to the Lindner-type potential with $\delta=2$ lowers $U_0$ by more than \numdim{2}{$k_{\mbox{\tiny B}}T$} (see Table~\ref{Altezze}). It is thus conceivable that a variety of potential profiles exists such that the height of the potential well can be further lowered, so as to switch from about \numdim{15}{$k_{\mbox{\tiny B}}T$} as indicated in~\cite{esa03} to about \numdim{5}{$k_{\mbox{\tiny B}}T$} as suggested in~\cite{luc99}. \begin{figMacPc} {Figure/Fig5.eps} {13cm} {For each potential profile the mean first-exit time $\mu$ is plotted as a function of the external force $F_e$. The corresponding values of $U_0$, for each potential, are listed in Table~\ref{Altezze}. We have taken \numdim{F_i=1.75}{pN}, while the other parameters have been chosen as in Figure~\ref{MuversusFE}.} {MuversusFE+Profili} \end{figMacPc} The quantitative analysis and comparison with the available data have been performed under the assumption of rigorous symmetries exhibited by the profiles of the potentials generating the periodic force acting on the Brownian particles. It is, however, presumable that the complex biological reality underlying the observed motion of the myosin heads may require relaxing the rigorous symmetry assumption. 
To test the effect of symmetry breaking, we take into consideration the saw-tooth potential $U_S(x)$ of Table~\ref{Potenziali} and make it asymmetric by taking $L_A\ne L/2$, namely by shifting in either direction the point of minimum of the potential. The asymmetry thus introduced should affect the motion of the Brownian particles. While the probabilities of exit from the current potential well are insensitive to the potential's profile, and hence also to its asymmetry, the mean first-exit time $\mu$ is clearly affected by it, as shown by Eq.~(\ref{Mu}). The quantitative dependence of $\mu$ on the potential's asymmetry is indicated in Figure~\ref{MuversusFE+Asimmetria}, where the external force $F_e$ is reported on the abscissa. Each curve is characterized by two parameters: the point $L_A$ of minimum and the depth $U_0$ of the potential. \begin{figMacPc} {Figure/Fig6.eps} {13cm} {For the saw-tooth potential profile with the indicated value of the asymmetry $L_A$, the mean first-exit time $\mu$ is plotted as a function of the external force $F_e$. The corresponding values of $U_0$, such that one obtains \numdim{\mu=5.3}{ms} for \numdim{F_e=0.046}{pN}, are also indicated on each curve. We have taken \numdim{F_i=1.75}{pN}, while the other parameters have been chosen as in Figure~\ref{MuversusFE}.} {MuversusFE+Asimmetria} \end{figMacPc} From top to bottom, the first curve is characterized by asymmetry \numdim{L_A=2.0}{nm}, implying a somewhat steeper rise of the potential in the backward direction. Such asymmetry is mitigated in the next curve (\numdim{L_A=2.5}{nm}). The third curve, indicated in bold, is used as a comparison tool, as it refers to the symmetric profile ($L_A=L/2$). The remaining two curves are labeled by \numdim{L_A=3.0}{nm} and \numdim{L_A=3.5}{nm}, thus representing mirror-image situations with respect to the zero-asymmetry case. Now the forward edges of the potentials are steeper. 
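The asymmetric saw-tooth admits a minimal piecewise-linear parametrization. The sketch below is our own illustration (the exact $U_S(x)$ of Table~\ref{Potenziali} is defined elsewhere in the paper, so the form assumed here is an educated guess): barrier height $U_0$ at the cell edges, minimum shifted to $x=L_A$.

```python
# Assumed piecewise-linear realization of the saw-tooth potential (period L,
# maxima U0 at the cell edges, minimum 0 at x = L_A; L_A = L/2 recovers the
# symmetric case).  Units: nm for lengths, k_B T for energies.
import numpy as np

def sawtooth_U(x, U0=15.7, L=5.5, L_A=2.75):
    xr = np.mod(x, L)
    return np.where(xr < L_A,
                    U0 * (1.0 - xr / L_A),        # backward (rising) branch
                    U0 * (xr - L_A) / (L - L_A))  # forward branch

def sawtooth_F(x, U0=15.7, L=5.5, L_A=2.75):
    """Conservative force F = -dU/dx, constant on each linear branch."""
    xr = np.mod(x, L)
    return np.where(xr < L_A, U0 / L_A, -U0 / (L - L_A))
```

With this parametrization, taking $L_A$ smaller than $L/2$ steepens the backward branch, as for the \numdim{L_A=2.0}{nm} curve of Figure~\ref{MuversusFE+Asimmetria}.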
For each curve, the indicated value of $U_0$ has been determined by imposing that \numdim{\mu=5.3}{ms} when \numdim{F_e=0.046}{pN} (lowest load), so that all curves originate at the same point. As Figure~\ref{MuversusFE+Asimmetria} shows, changes of the potential's asymmetry of 30$\%$ in either direction do not greatly affect the magnitude of the mean first-exit time. Finally, we remark that suitable choices of backward asymmetry (such as \numdim{L_A=2.0}{nm} in Figure~\ref{MuversusFE+Asimmetria}) improve the fitting of the experimental data even for larger load values. \section{Concluding remarks} The model of actomyosin dynamics discussed in the foregoing rests on the assumption that the total energy made available to the myosin head by the hydrolysis of the ATP molecule and by the thermal bath has a two-fold overall role: to produce the power stroke predicted by the lever-arm model and also to generate the kind of sliding of the myosin head along the actin filament in the \lq\lq loose coupling\rq\rq\ mechanism originally hypothesized in~\cite{oos86} and then experimentally demonstrated in~\cite{kit99}. This is expressed mathematically by representing the displacement of a particle as a combination of a deterministic part and a random component. The latter is generated by the simultaneous presence of a washboard-type potential and a random force arising from thermal fluctuations. Our model has then been tested by making use of a set of data on the dwell times and step frequencies of myosin heads under various load conditions. We have shown that, by a suitable tuning of the internal force and of the depth of the potential well, the theoretically calculated probability $p$ and mean first-exit time $\mu$ of the representative Brownian particle are in good agreement with their biological counterparts. 
A second set of experimental data, concerning the net step number distribution of myosin heads under low load conditions, has then been exploited to show that the washboard potential used by us is able to reproduce such a distribution within the mathematical analogy. The 3$\%$ discrepancy, represented by the two net backward displacements predicted by our model, should be ascribed to the rather small size of the experimental sample. \par Next, the robustness of our model has been tested by inserting four different potential profiles in the Langevin equation of motion. The ensuing calculations have shown that the mean exit-time depends on the external force in a way that is essentially insensitive to the chosen potential profile. The chosen profile, instead, has been seen to play an essential role in that it critically relates the depth of the potential well to the magnitude of the mean exit-time. A finer tuning of the mean exit-time is finally achieved by regulating the level of asymmetry of the potential's profile. \acknowledgment{We thank Consorzio Universitario CINECA for providing computation time on the IBM-SP4 supercomputer.}
\section{Introduction} The Operator Product Expansion for scattering amplitudes \cite{Basso:2013vsa,Alday:2010ku} of planar maximally supersymmetric Yang-Mills theory in the dual language of the Wilson loop stretched on a null polygonal contour in superspace \cite{Alday:2007hr,Drummond:2007cf,Brandhuber:2007yx,CaronHuot:2010ek,Mason:2010yk,Belitsky:2011zm} paved the way for their weak and strong coupling analysis in a multi-collinear limit with a naturally built-in consistent scheme for the inclusion of subleading corrections \cite{Basso:2013aha,Belitsky:2014rba,Basso:2014koa,Belitsky:2014sla,Basso:2014nra,Belitsky:2014lta,Belitsky:2015efa,Basso:2014hfa,Basso:2014jfa,Basso:2015rta,Fioravanti:2015dma,Belitsky:2015qla,Bonini:2015lfr,Belitsky:2015lzw}. It is based on a geometrization of the contour in terms of a sequence of null squares, with adjacent ones sharing a side, merged into pentagons, see Fig.\ \ref{pentagonFig}. The bottom of the loop can be decomposed into an infinite series of excitations, with the strength of their contributions being exponentially suppressed with their number (or, more precisely, their cumulative twist) in the collinear limit. These propagate upwards from the bottom through a series of pentagons and are absorbed at the top. Every pentagonal Wilson loop in the chain of transitions contains insertions of elementary fields of the theory, with their total quantum numbers fixed by the choice of the component of the amplitude under study. These pentagons play a pivotal role in the entire construction. They obey a set of natural axioms \cite{Basso:2013vsa} that are inherited from the integrable dynamics of the $\mathcal{N}=4$ supersymmetric Yang-Mills theory. However, the question of their operatorial origin remains obscure. 
\begin{figure}[t] \begin{center} \mbox{ \begin{picture}(0,280)(80,0) \put(0,-170){\insertfig{20}{genericpentagon}} \end{picture} } \end{center} \caption{\label{pentagonFig} Tessellation of a polygon into null squares merged into pentagons (shown in different dashed contours). We picked an intermediate pentagon transition with flux-tube excitations inserted in the bottom and top portions of the contour.} \end{figure} \subsection{Embedding of different multiplets} Some time ago \cite{Belitsky:2014rba}, we studied the system of excitations of a single type interacting on the flux-tube. It was shown to be equivalent to solving the spectral problem for a noncompact open spin chain whose sl(2) invariance is broken by boundary Hamiltonians. Presently we will provide its generalization to the minimally supersymmetric sector of the $\mathcal{N} = 4$ super Yang-Mills theory in the planar limit. In the absence of a covariant superspace formulation of the theory, the light-cone formalism becomes advantageous. In this framework, all propagating fields in the maximally supersymmetric Yang-Mills theory can be accommodated into a single light-cone chiral superfield \cite{Brink:1982pd,Mandelstam:1982cb,Belitsky:2004sc}, \begin{align} \label{N4SYMsupefield} \Phi_{\mathcal{N} = 4} (x^\mu, \theta^A) &= \partial_+^{-1} A (x^\mu) + \theta^A \partial_+^{- 1} \bar\psi_A (x^\mu) + \frac{i}{2!} \theta^A \theta^B \phi_{AB} (x^\mu) \\ & + \frac{1}{3!} \varepsilon_{ABCD} \theta^A \theta^B \theta^C \psi^D (x^\mu) - \frac{1}{4!} \varepsilon_{ABCD} \theta^A \theta^B \theta^C \theta^D \partial_+ \bar{A} (x^\mu) \, , \nonumber \end{align} where $A$ and $\bar{A}$ are the holomorphic and antiholomorphic components of the gauge field, respectively, $\psi$ and $\bar\psi$ are the dynamical ``good'' components of the fermion fields transforming in the ${\bf 4}$ and ${\bf \bar{4}}$ of the internal $SU(4)$ symmetry group and, finally, $\phi_{AB}$ is a sextet of scalars. 
There are two possible subsectors we can analyze. One of them is of Wess-Zumino type. It is composed of a scalar and a fermion \begin{align} \label{WZsuperfield} \Phi_{s = 1/2} (x^\mu, \theta) = \phi (x^\mu) + \theta \psi (x^\mu) \end{align} and is obtained from \re{N4SYMsupefield} via the projection \cite{Belitsky:2007zp} \begin{align} \label{projectionWZmultiplet} \Phi_{\mathcal{N} = 4} (x^\mu, \theta^A) |_{\theta^2 = \theta, \theta^3 = 0} = \dots + \theta^1 \theta^4 \Phi_{s = 1/2} (x^\mu, \theta) \, . \end{align} The other one is the antiholomorphic part of the $\mathcal{N} = 1$ super Yang-Mills multiplet, \begin{align} \label{N1SYMmultiplet} \Phi_{s = 1} (x^\mu, \theta) = \psi (x^\mu) - \theta \bar{F} (x^\mu) \, , \end{align} built from a fermion and the antiholomorphic field strength $\bar{F} = \partial_+ \bar{A}$, found in the top two components of the $\mathcal{N} = 4$ superfield, \begin{align} \Phi_{\mathcal{N} = 4} (x^\mu, \theta^A) |_{\theta^4 = \theta} = \dots + \theta^1 \theta^2 \theta^3 \Phi_{s = 1} (x^\mu, \theta) \, . \end{align} In both cases, we displayed the conformal spin of the minisuperfield, which is determined by that of its lowest field component, as a subscript. 
\subsection{Superlight-cone operators and Hamiltonians} \label{MEhamiltonianSection} As can be seen from the representation of the pentagon transition in Fig.\ \ref{pentagonFig}, it is related to the correlation function of two $\Pi$-shaped Wilson loops \cite{Belitsky:2011nn,Sever:2012qp} with insertions of elementary fields into their bottom and top contours, schematically \begin{align} P ({\rm bottom} | {\rm top}) \sim \vev{O_{\Pi_{\rm top}} \mathcal{O}_{\Pi_{\rm bottom}} } \, , \end{align} where \begin{align} \label{PiLCOperator} \mathcal{O}_{\Pi} (\bit{Z}) = W^\dagger (0) \Phi_s (Z_1) \Phi_s (Z_2) \dots \Phi_s (Z_N) W (\infty) \, , \end{align} is built from superfields $\Phi_s$ inserted at positions $z_n = x^-_n$ along the light-cone direction, each depending on the respective Grassmann variable $\theta_n$; together these can be encoded in the superspace coordinates $Z_n = (z_n, \theta_n)$. The gauge links between the supercoordinates $\bit{Z} = (Z_1, Z_2, \dots, Z_N)$ can be ignored due to the choice of the light-cone gauge condition $A^+ = 0$. The two light-like Wilson lines $W$ in the direction of particles propagating along the vertical segments of the pentagon are attached at its ends. At leading order of perturbation theory (and in the multicolor limit), the renormalization group evolution of these operators can be cast in the form of a Schr{\"o}dinger equation with a Hamiltonian given by the sum of pairwise Hamiltonians between adjacent superfields, supplemented with the interaction of the first and last one with the boundary Wilson lines. The latter read \begin{align} \label{HamiltonianII} \mathcal{H}_{01} W^\dagger (0) \Phi_s (Z_1) & = W^\dagger (0) \int_0^1 \frac{d \beta}{1 - \beta} \left[ \beta^{2 s - 1} \Phi_s (\beta Z_1) - \Phi_s (Z_1) \right] \, , \\ \mathcal{H}_{N \infty} \Phi_s (Z_N ) W (\infty) & = \int_1^\infty \frac{d \beta}{\beta - 1} \left[ \Phi_s (\beta z_N, \theta_N) - \beta^{-1} \Phi_s (Z_N) \right] W (\infty) \, . 
\end{align} We can use the light-cone superspace formulation of the $\mathcal{N} = 4$ dilatation operator \cite{Belitsky:2004sc} to project out, following Ref.\ \cite{Belitsky:2007zp}, the scalar-fermion sector in question, or to directly get the $\mathcal{N}=1$ super Yang-Mills one \cite{Belitsky:2004sc} for the multiplet \re{N1SYMmultiplet}. The $\mathcal{N} = 4$ pair-wise Hamiltonian for superfields $\Phi_{\mathcal{N} = 4}$ of conformal spin $s = - \ft12$ sitting away from the boundary Wilson lines is \begin{align} \mathcal{H}_{12} \Phi_{\mathcal{N} = 4} (Z_1) \Phi_{\mathcal{N} = 4} (Z_2) & = \int_0^1 \frac{d \alpha}{\alpha} \bigg[ (1 - \alpha)^{-2} \Phi_{\mathcal{N} = 4} ((1 - \alpha)Z_1 + \alpha Z_2) \Phi_{\mathcal{N} = 4} (Z_2) \nonumber\\ &\qquad\qquad + (1 - \alpha)^{-2} \Phi_{\mathcal{N} = 4} (Z_1) \Phi_{\mathcal{N} = 4} ((1 - \alpha)Z_2 + \alpha Z_1) \nonumber\\ &\qquad\qquad - 2 \Phi_{\mathcal{N} = 4} (Z_1) \Phi_{\mathcal{N} = 4} (Z_2) \bigg] \, . \end{align} Projecting out the Wess-Zumino multiplet via \re{projectionWZmultiplet} changes the power of the $\alpha$-dependent prefactor from $-2$ to $0$. For the antiholomorphic Yang-Mills multiplet \re{N1SYMmultiplet}, the same power changes from $-2$ to $1$. We can combine the two options by encoding them in the exponent $2s-1$. Let us change the integration variables in the integrand of $\mathcal{H}_{12}$, as well as modify the subtraction term, i.e., \begin{align} \mathcal{H}^\prime_{12} = \mathcal{H}_{12} + \delta \mathcal{H}_{12} \, , \qquad \mbox{with} \qquad \delta \mathcal{H}_{12} = \ln z_2/z_1 \, , \end{align} such that in the limit $z_2 \gg z_1$, we get the sum of the two boundary Hamiltonians \re{HamiltonianII}. 
Here, the pair-wise Hamiltonian is split into two parts \begin{align} \label{IntermediateH12} \mathcal{H}^\prime_{12} \Phi_s (Z_1) \Phi_s (Z_2) = \mathcal{H}^+_{12} \Phi_s (Z_1) \Phi_s (Z_2) + \mathcal{H}^-_{12} \Phi_s (Z_1) \Phi_s (Z_2) \, , \end{align} that act in the following fashion on the nearest-neighbor fields \begin{align} \mathcal{H}^-_{12} \Phi_s (Z_1) \Phi_s (Z_2) & =\! \int_1^{z_2/z_1} \frac{d \beta}{\beta - 1} \\ & \times\! \left[ \left( \frac{z_2 - \beta z_1}{z_2 - z_1} \right)^{2 s - 1} \!\Phi_s\! \left( \beta z_1, \frac{z_2 - \beta z_1}{z_2 - z_1} \theta_1 + \frac{z_1 (\beta - 1)}{z_2 - z_1} \theta_2 \right) - \beta^{-1} \Phi_s (Z_1) \right] \Phi_s (Z_2) \, , \nonumber\\ \mathcal{H}^+_{12} \Phi_s (Z_1) \Phi_s (Z_2) & = \Phi_s (Z_1) \int_{z_1/z_2}^1 \frac{d \beta}{1 - \beta } \\ & \times\! \left[ \left( \frac{z_1 - \beta z_2}{z_1 - z_2} \right)^{2 s - 1} \Phi_s \left( \beta z_2, \frac{\beta z_2 - z_1}{z_2 - z_1} \theta_2 + \frac{z_2 (1 - \beta)}{z_2 - z_1} \theta_1 \right) - \Phi_s (Z_2) \right] \, . \nonumber \end{align} Thus, the Hamiltonian whose eigensystem we have to solve is \begin{align} \label{TotalHamiltonian} \mathcal{H}_N = \mathcal{H}_{01} + \mathcal{H}'_{12} + \dots + \mathcal{H}'_{N-1,N} + \mathcal{H}_{N\infty} \, . \end{align} Depending on the conformal spin of the superfields, it encodes both the Wess-Zumino and Yang-Mills multiplets. Since the two differ only by the value of the spin, the following discussion will be done for arbitrary $s$. Note also that, while the scalars and fermions carry R-charge in the $\mathcal{N}=4$ theory, this spin model will not be able to accommodate the nontrivial rational prefactors that arise in the pentagon approach; otherwise, these would arise in the fermion-gluon sectors as well. The latter, however, are free from these `complications' since the gluon is a singlet with respect to SU(4). 
Thus, the leading order description within the supersymmetric lattice model will provide information on the dynamical portion of the pentagons only. The rational factors, as well as the helicity form factors stemming from crossing conditions, will not be accounted for within the current formalism. \section{Supersymmetric open spin chain} The light-cone chiral superfield $\Phi_s (Z)$ defines an infinite-dimensional chiral representation $\mathcal{V}_s$ of the superconformal sl$(2|1)$ algebra labeled by the conformal spin $s$. The generators of the algebra are realized as first order differential operators in the bosonic $z$ and fermionic $\theta$ variables \begin{align} S^- & = - \partial_z \, , & S^+ & = z^2 \partial_z + 2 z s + z \theta \partial_\theta \, , & S^0 & = z \partial_z + s + \ft12 \theta \partial_\theta \, , & B & = \ft12 \theta \partial_\theta - s \, , \nonumber\\ V^- &= \partial_\theta \, , & W^- & = \theta \partial_z \, , & V^+ & = z \partial_\theta \, , & W^+ & = \theta (z \partial_z + 2 s) \, . \label{GeneratorsSL21diffrep} \end{align} Thus, the Hamiltonian \re{TotalHamiltonian} defines a non-periodic homogeneous open superspin chain. We will demonstrate below that it is in fact integrable. \subsection{Scalar product and involution properties of generators} As will become clear from our discussion, it is indispensable to introduce an inner product on the space of functions depending on the superspace variable $Z$. While the bosonic variable lives on the real axis, it is instructive to address the spectral problem by promoting it to the upper half of the complex plane. This formulation is of paramount importance for the construction of eigenfunctions (holomorphic functions in the upper semiplane) in the representation of Separated Variables \cite{Sklyanin:1995bm,Derkachov:2002tf,Belitsky:2014rba} and for the computation of various inner products \cite{Belitsky:2014rba}. 
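As an aside, the differential representation \re{GeneratorsSL21diffrep} can be checked mechanically by storing a chiral superfunction $f_0(z) + \theta f_1(z)$ as a component pair, so that $\theta^2 = 0$ is automatic. The sketch below is our own illustration (not the paper's code); it verifies $[S^0, S^-] = -S^-$, $\{V^-, W^-\} = \partial_z = -S^-$ and $[B, V^-] = -\ft12 V^-$.

```python
# Component cross-check of the sl(2|1) generators: a superfunction
# f0(z) + theta*f1(z) is the pair (f0, f1); each generator maps pairs to
# pairs, and (anti)commutators are evaluated on a generic test pair.
import sympy as sp

z, s = sp.symbols('z s')
half = sp.Rational(1, 2)

Sm = lambda f: (-sp.diff(f[0], z), -sp.diff(f[1], z))            # S^-
S0 = lambda f: (z*sp.diff(f[0], z) + s*f[0],
                z*sp.diff(f[1], z) + (s + half)*f[1])            # S^0
B  = lambda f: (-s*f[0], (half - s)*f[1])                        # B
Vm = lambda f: (f[1], sp.Integer(0))                             # V^- = d/dtheta
Wm = lambda f: (sp.Integer(0), sp.diff(f[0], z))                 # W^- = theta d/dz

def comm(A, C, f):   # [A, C] acting on the pair f
    return tuple(sp.simplify(x - y) for x, y in zip(A(C(f)), C(A(f))))

def acomm(A, C, f):  # {A, C} acting on the pair f
    return tuple(sp.simplify(x + y) for x, y in zip(A(C(f)), C(A(f))))

f = (sp.Function('f0')(z), sp.Function('f1')(z))

check1 = comm(S0, Sm, f)                       # should equal -S^- f
check2 = acomm(Vm, Wm, f)                      # should equal -S^- f
check3 = comm(B, Vm, f)                        # should equal -(1/2) V^- f
target12 = tuple(-x for x in Sm(f))
target3 = tuple(-half*x for x in Vm(f))
```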
The flux-tube matrix elements entering the Operator Product Expansion can be regarded as their boundary values. The chiral scalar product on the space of superfunctions \begin{align} \bit{\Phi}_s (Z) = \Phi_s (z) + \theta \Phi_{s + 1/2} (z) \, , \end{align} holomorphic in the upper semiplane of the complex plane is defined as \begin{align} \label{sl21innerproduct} \vev{\bit{\Phi}'_s|\bit{\Phi}_s} = \int [D Z]_s \, \big( \bit{\Phi}'_s (Z) \big)^\ast \bit{\Phi}_s (Z) \, , \end{align} where the sl$(2|1)$ invariant measure reads \begin{align} \label{sl21measure} \int [D Z]_s = \frac{{\rm e}^{- i \pi (s - 1)}}{\pi} \int d \theta^\ast d \theta \int_{\Im{\rm m}[z] > 0} d z^\ast dz \, \left( z - z^\ast + \theta \theta^\ast \right)^{2 s - 1} \, . \end{align} Notice that the phases chosen in this inner product are correlated with the integration and involution rules adopted for Grassmann variables. Throughout the paper they obey the following rules \begin{align} \label{GrassmannInvolution} \int d \theta \, \theta = 1 \, , \qquad \big( \theta' \theta \big)^\ast = \theta'^\ast \theta^\ast \, . \end{align} In the component form, we find \begin{align} \label{sl21InnerInComponents} \vev{\bit{\Phi}'_s | \bit{\Phi}_s} = \int [Dz]_{s} \big( \Phi'_s (z) \big)^\ast \Phi_s (z) + \frac{1}{2 i s} \int [D z]_{s + 1/2} \big( \Phi'_{s+1/2} (z) \big)^\ast \Phi_{s+1/2} (z) \, , \end{align} where, e.g., $\Phi_s = \phi$ and $\Phi_{s + 1/2} = \psi$ for the field content of the $s=1/2$ multiplet \re{WZsuperfield}. Here we recognize in the first term the well-known expression for the bosonic sl$(2)$-invariant inner product with the measure \begin{align} \int [Dz]_{s} \equiv (2 s - 1) \frac{{\rm e}^{- i \pi (s - 1)}}{\pi} \int_{\Im{\rm m}[z] > 0} d z^\ast dz \, (z - z^\ast)^{2s - 2} \, . \end{align} Notice an extra phase in front of the second term in Eq.\ \re{sl21InnerInComponents} to make it real by virtue of Eq.\ \re{GrassmannInvolution} for fermionic fields. 
For $s = 1$, one has to change $\phi \to \psi$ and $\psi \to - \bar{F}$. Since the resulting superfield \re{N1SYMmultiplet} is fermionic, one has to multiply the inner product $\vev{\Phi'_s | \Phi_s}$ by an $i$, such that this phase will migrate from the second term to the first. We will imply this convention from now on, so as to avoid repetitive formulas corresponding to each case. This nuisance will not affect any of our considerations which follow. We conventionally define the adjoint operator with respect to the inner product \re{sl21innerproduct} as \begin{align} \vev{\bit{\Phi}' | G \bit{\Phi}} = \vev{G^\dagger \bit{\Phi}' | \bit{\Phi}} \, . \end{align} Then we can easily verify the following conjugation properties of the sl$(2|1)$ generators \re{GeneratorsSL21diffrep} using integration by parts \begin{align} \label{ConjugationGenerators} \left( S^{\pm,0} \right)^\dagger = - S^{\pm,0} \, , \qquad B^\dagger = B \, , \qquad \left( V^\pm \right)^\dagger & = - W^\pm \, . \end{align} Notice that the chirality generator is hermitian, in contrast to the antihermitian generators of the sl(2) subalgebra. From the involution rules for Grassmann variables, it follows that \begin{align} \left( G G' \right)^\dagger = (-1)^{{\rm grad} G \, {\rm grad} G'} G'^\dagger G^\dagger \, . \end{align} The Hilbert space of the $N$-site model spanned by the light-cone operators \re{PiLCOperator} is formed by the tensor product of the Hilbert spaces at the position of each superfield, $\otimes_{k=1}^N \mathcal{V}_{s, k}$. Then, one can immediately prove the self-adjointness of the Hamiltonian \re{TotalHamiltonian}, \begin{align} \mathcal{H}^\dagger_N = \mathcal{H}_N \end{align} with respect to the inner product for functions $\bit{\Phi}_s = \bit{\Phi}_s (\bit{Z})$ of the multivariable $\bit{Z} = (Z_1, Z_2, \dots, Z_N)$, \begin{align*} \vev{\bit{\Phi}'_s|\bit{\Phi}_s} = \int \prod_{n = 1}^N [D Z_n]_s \, \big( \bit{\Phi}'_s (\bit{Z}) \big)^\ast \bit{\Phi}_s (\bit{Z}) \, . 
\end{align*} To see this more efficiently, it is convenient to recast the individual pair-wise Hamiltonians in the non-local form, \begin{align} \label{BoundaryDiffH} \mathcal{H}_{01} & = \psi (1) - \psi (z_1 \partial_{z_1} + \theta_1 \partial_{\theta_1} + 2s) \, , \\ \mathcal{H}_{N\infty} & = \psi (1) - \psi (- z_N \partial_{z_N}) \, , \end{align} for the boundary and \begin{align} \label{BulkDiffH} \mathcal{H}_{12} & = 2 \psi (1) - \psi (z_{12} \partial_{z_1} + \theta_{12} \partial_{\theta_1} + 2s) - \psi (z_{21} \partial_{z_2} + \theta_{21} \partial_{\theta_2} + 2s) \, , \\ \delta \mathcal{H}_{12} & = \ln z_2/z_1 \, , \end{align} for the bulk ones, respectively. In fact, we can rearrange different contributions entering the bulk into the boundary Hamiltonians to better match them to the ones emerging from R-matrices. Namely, splitting the logarithmic terms in $\delta \mathcal{H}_{12}$, we can identify the bulk Hamiltonian with the sl(2$|$1) invariant one $h_{12} = \mathcal{H}_{12}$, \begin{align} h_{12}^- = \psi (1) - \psi (z_{12} \partial_{z_1} + \theta_{12} \partial_{\theta_1} + 2s) \, , \qquad h_{12}^+ = \psi (1) - \psi (z_{21} \partial_{z_2} + \theta_{21} \partial_{\theta_2} + 2s) \, , \end{align} while the boundary ones now read \begin{align} h_{01} = \mathcal{H}_{01} - \ln z_1 = - \ln \left(z_1^2 \partial_{z_1} + z_1 \theta_1 \partial_{\theta_1} + 2 s z_1 \right) \, , \qquad h_{N \infty} = \mathcal{H}_{N\infty} + \ln z_N = - \ln \partial_{z_N} \, . \end{align} \subsection{Integrals of motion and hermiticity issues} Let us construct the integrals of motion of the $N$-site Hamiltonian following the standard procedure of the R-matrix approach \cite{TakFad79}. 
The sl$(2|1)$ Lax operator acting on the direct product of a graded three-dimensional space and the chiral space $\mathcal{V}_{s,k}$ of the $k$-th site reads \begin{align} \mathbb{L}_k (u) = \left( \begin{array}{ccc} u + S^0_k - B_k & - W^-_k & S^-_k \\ - V^+_k & u - 2 B_k & V^-_k \\ S^+_k & - W^+_k & u - S^0_k - B_k \end{array} \right) \, . \end{align} It depends on the complex spectral parameter $u$. Notice that, in the generic case, the representation of the algebra is parametrized by the conformal spin $s$ and the chirality $|b| \neq s$. Thus the Lax operator can be viewed as a function of three linear combinations of $u$, $s$ and $b$, namely, \begin{align} \label{GenericLaxParameters} u_1 = u + s - b \, , \qquad u_2 = u - 2 b \, , \qquad u_3 = u - s - b \, , \end{align} such that $\mathbb{L} (u) = \mathbb{L} (u_1, u_2, u_3)$ is a function of the $u_\alpha$. For the chiral case at hand, $b = - s$. However, we will use three distinct $u_\alpha$ parameters below to our advantage. The product of $N$ of these (with the site number increasing from left to right) defines the monodromy matrix \begin{align} \label{MonodromyTN} \mathbb{T}_N (u) = \mathbb{L}_1 (u) \mathbb{L}_2 (u) \dots \mathbb{L}_N (u) = \left( \begin{array}{cc} A_N^{[2]\times[2]} (u) & B_N^{[2]} (u) \\ C_N^{[2]} (u) & D_N (u) \end{array} \right) \, , \end{align} where we displayed the dimensions of the corresponding blocks as superscripts, e.g., $B_N^{[2]} = (B_N^1, B_N^2)$ etc. Our focus will be on the element $D_N (u)$. As can easily be found from the Yang-Baxter equation, $D_N (u)$ commutes with itself for arbitrary values of the spectral parameter, $[D_N (u'), D_N (u)] = 0$. As will be established in the next section, it commutes with the Hamiltonian as well, \begin{align} \label{DHcommutativity} [D_N (u), \mathcal{H}_N] = 0 \, . \end{align} $D_N (u)$ thus generates a family of commuting charges, which arise as the coefficients of a degree-$N$ polynomial in $u$. 
However, we immediately find ourselves in a predicament, since the operator $i^N D_N (i w)$ is not self-adjoint! This is obvious already for a single site, where the only charge ${d}_1$ reads \begin{align} \label{D1} D_1 (u) = u + {d}_1 \, , \qquad {d}_1 = - S^0_1 - B_1 \, , \end{align} with $S^0_1$ and $B_1$ having opposite conjugation properties in light of Eqs.\ \re{ConjugationGenerators}. This implies that the eigenvalues of the operator $D_N$ are not real. This is not a problem by itself; however, it implies that the Hamiltonian will share only a subset of the eigenfunctions of the latter, i.e., the ones that yield its real eigenvalues. In fact, the complex nature of the $D_N$ eigenvalues will be a virtue rather than a bug, explaining the incremental shift in energy eigenvalues for excitations propagating on the flux tube. One can always define a new self-adjoint operator \begin{align} \label{OmegaN} \Omega_N (w) = i^N D_N ( i w) + (- i)^N D^\dagger_N (- i w) \, , \end{align} that will possess real eigenvalues. However, since we will be devising a procedure to calculate the eigenstates of the Hamiltonian $\mathcal{H}_N$ based on a recursion for $D_N$, using $\Omega_N$ for this purpose would be a significant obstacle on this route. \subsection{Commutativity} \label{CommutativitySection} For the one- and two-site cases, the proof of commutativity can be done by brute force. Namely, for $N=1$, the Hamiltonian \re{TotalHamiltonian} in the representation \re{BoundaryDiffH} can be rewritten in terms of generators as \begin{align} \mathcal{H}_1 = 2 \psi (1) - \psi \left( S^0_1 + B_1 + 2s \right) - \psi \left( - S^0_1 + B_1 + 2s \right) \, . \end{align} It is obviously self-adjoint and commutes with \re{D1} by virtue of the sl$(2|1)$ commutator algebra. 
For $N=2$, the operator $D_2 (u)$ is a second order polynomial in the spectral parameter \begin{align} D_2 (u) = u^2 + u \, {d}_1+ {d}_2 \, , \end{align} with the operator coefficients \begin{align} {d}_1 = - S^0_1 - S^0_2 - B_1 - B_2 \, , \qquad {d}_2 = S_1^+ S_2^- + (S_1^0 + B_1) (S_2^0 + B_2) - W_1^+ V_2^- \, . \end{align} While the commutativity of $\mathcal{H}_2$ with ${d}_1$ is almost obvious, the same property for the second-order differential operator ${d}_2$ is far from trivial. In fact, the direct calculation results in the following relations for the individual components of the two-site Hamiltonian, \begin{align} [{d}_2, \mathcal{H}_{12}] &= - z_1 \partial_{z_1} - \theta_1 \partial_{\theta_1} + z_2 \partial_{z_2} + \theta_2 \partial_{\theta_2} \, , \\ [{d}_2, \mathcal{H}_{01}] &= - z_1 \partial_{z_2} - \theta_1 \partial_{\theta_2} \, , \\ [{d}_2, \mathcal{H}_{2\infty}] &= \frac{z_1}{z_2} \left( z_1 \partial_{z_1} + 2 s + \theta_1 \partial_{\theta_1} \right) \, , \\ [{d}_2, \delta \mathcal{H}_{12}] &= - \frac{z_1}{z_2} \left( z_1 \partial_{z_1} + 2 s + \theta_1 \partial_{\theta_1} \right) + z_1 \partial_{z_2} + \theta_1 \partial_{\theta_2} \\ &\ \ \ \ + z_1 \partial_{z_1} + \theta_1 \partial_{\theta_1} - z_2 \partial_{z_2} - \theta_2 \partial_{\theta_2} \, . \nonumber \end{align} Adding these together, one recovers the anticipated result \re{DHcommutativity} for $N=2$. Beyond $N=2$, the direct proof becomes tedious and it is instructive to rely on the power of the R-matrix approach. 
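Since $\delta \mathcal{H}_{12} = \ln z_2/z_1$ is a pure multiplication operator, its commutator with $d_2$ can be verified symbolically by tracking the Grassmann structure through component functions. The sketch below is our own cross-check (not the paper's code): a two-site superfunction is encoded as the quadruple $(f_{00}, f_{10}, f_{01}, f_{11})$ of coefficients of $1$, $\theta_1$, $\theta_2$, $\theta_1\theta_2$, on which $\theta_1\partial_{\theta_1}$ and $\theta_2\partial_{\theta_2}$ act diagonally and $W_1^+ V_2^-$ maps the $\theta_2$ slot into the $\theta_1$ slot.

```python
# Symbolic check of [d_2, ln(z2/z1)] in components, with
# d_2 = S1^+ S2^- + (S1^0 + B1)(S2^0 + B2) - W1^+ V2^- .
import sympy as sp

z1, z2, s = sp.symbols('z1 z2 s')
F = tuple(sp.Function('f%d%d' % (a, b))(z1, z2)
          for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)])

def d2(G):
    g00, g10, g01, g11 = G
    out = []
    for g, e1, e2 in [(g00, 0, 0), (g10, 1, 0), (g01, 0, 1), (g11, 1, 1)]:
        # S1^+ S2^-; theta1*dtheta1 acts as the number e1 on this component
        t1 = -(z1**2*sp.diff(g, z1, z2) + (2*s + e1)*z1*sp.diff(g, z2))
        # (S1^0 + B1)(S2^0 + B2) = (z1 d1 + theta1 dth1)(z2 d2 + theta2 dth2)
        t2 = z1*sp.diff(z2*sp.diff(g, z2) + e2*g, z1) \
             + e1*(z2*sp.diff(g, z2) + e2*g)
        out.append(t1 + t2)
    # -W1^+ V2^- feeds only the theta1 slot, from the theta2 component
    out[1] -= z1*sp.diff(g01, z1) + 2*s*g01
    return tuple(out)

glog = sp.log(z2/z1)
lhs = tuple(sp.expand(a - glog*b)
            for a, b in zip(d2(tuple(glog*f for f in F)), d2(F)))

def rhs_component(f, e1, e2):
    return (-(z1/z2)*(z1*sp.diff(f, z1) + (2*s + e1)*f)
            + z1*sp.diff(f, z2) + z1*sp.diff(f, z1) + e1*f
            - z2*sp.diff(f, z2) - e2*f)

rhs = [rhs_component(f, e1, e2)
       for f, e1, e2 in [(F[0], 0, 0), (F[1], 1, 0), (F[2], 0, 1), (F[3], 1, 1)]]
rhs[1] += F[2]     # the theta1*dtheta2 term maps f01 into the theta1 slot
```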
In fact, as was demonstrated in the seminal paper \cite{Derkachov:2005hw}, the $\mathcal{R}$-operator obeying the conventional Yang-Baxter relation \begin{align} \check{\mathcal{R}}_{12} (v - u) \mathbb{L}_1 (u) \mathbb{L}_2 (v) = \mathbb{L}_1 (v) \mathbb{L}_2 (u) \check{\mathcal{R}}_{12} (v - u) \end{align} with $\check{\mathcal{R}}_{12} = \Pi_{12} \mathcal{R}_{12}$ having the two quantum spaces interchanged with the permutation $\Pi_{12}$, can be factorized in terms of three intertwiners \begin{align} \check{\mathcal{R}}_{12} (v - u) = \mathcal{R}^{(1)}_{12} (v_1 - u_1) \mathcal{R}^{(2)}_{12} (v_2 - u_2) \mathcal{R}^{(3)}_{12} (v_3 - u_3) \, , \end{align} each exchanging only a pair of combinations of the spectral parameters introduced in Eq.\ \re{GenericLaxParameters}, e.g., \begin{align} \mathcal{R}^{(1)}_{12} (v_1 - u_1) \mathbb{L}_1 (v_1, u_2, u_3) \mathbb{L}_2 (u_1, u_2, u_3) &= \mathbb{L}_1 (u_1, u_2, u_3) \mathbb{L}_2 (v_1, u_2, u_3) \mathcal{R}^{(1)}_{12} (v_1 - u_1) \, , \\ \mathcal{R}^{(3)}_{12} (v_3 - u_3) \mathbb{L}_1 (v_1, v_2, v_3) \mathbb{L}_2 (v_1, v_2, u_3) &= \mathbb{L}_1 (v_1, v_2, u_3) \mathbb{L}_2 (v_1, v_2, v_3) \mathcal{R}^{(3)}_{12} (v_3 - u_3) \, , \end{align} where\footnote{The consideration of $\mathcal{R}^{(2)}$ was also performed in \cite{Belitsky:2006cp}; however, it will not play any role in our construction and is thus completely disregarded.} \begin{align} \mathcal{R}^{(1)}_{12} (v_1 - u_1) \equiv \mathcal{R}^{(1)}_{12} (v_1| u_1, u_2, u_3) \, , \qquad \mathcal{R}^{(3)}_{12} (v_3 - u_3) \equiv \mathcal{R}^{(3)}_{12} (v_1, v_2, v_3| u_3) \end{align} depend in a translation invariant manner on the displayed spectral parameters and are actually independent of the ones not shown. They thus solve the simplified $RLL$ relations displayed above. For now, we will focus on $\mathcal{R}^{(3)}_{12}$, which is the generator of the bulk Hamiltonians. 
Namely, making use of its chiral limit $\mathbb{R}^{(3)}_{12}$ from the generic form derived in Ref.\ \cite{Belitsky:2006cp}, we find the following integral over the upper half of the complex plane, which can be easily converted into a line integral representation for the function $\bit{\Phi}(Y_1, Y_2)$ of $Y_n = (y_n, \vartheta_n)$, \begin{align} \mathbb{R}^{(3)}_{12} (u) \bit{\Phi} (Y_1, Y_2) & = \int [DZ]_s (y_1 - z^\ast + \vartheta_1 \theta^\ast)^{- u - 2 s} (y_2 - z^\ast + \vartheta_2 \theta^\ast)^{u} \bit{\Phi} (Z, Y_2) \nonumber\\ & = \frac{\Gamma (2 s)}{\Gamma (- u) \Gamma (u+ 2s)} \int_0^1 d \tau \, \tau^{- u - 1} \bar\tau^{u + 2 s - 1} \bit{\Phi} (\bar\tau Y_1 + \tau Y_2 , Y_2) \, . \end{align} As can be easily verified by expanding $\mathbb{R}^{(3)}_{12} (u)$ in the vicinity of $u = 0$, setting $u = \varepsilon \to 0$, we find the Hamiltonian $h^-_{12}$, \begin{align} \mathbb{R}^{(3)}_{12} (\varepsilon) = 1 - \varepsilon h^-_{12} + O (\varepsilon^2) \, , \end{align} such that the $RLL$ relation to this order yields the commutation relation \begin{align} [h^-_{12}, \mathbb{L}_1 (v) \mathbb{L}_2 (v)] = \mathbb{M}^-_1 \mathbb{L}_2 (v) - \mathbb{L}_1 (v) \mathbb{M}^-_2 \, , \end{align} where \begin{align} \mathbb{M}^-_n = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ - z_n & \theta_n & 1 \end{array} \right) \, . \end{align} Similar relations can be found for $h^+_{12}$ by expanding in the vicinity of $u = - 2s + \varepsilon$ as $\varepsilon \to 0$. As one can see, the bulk Hamiltonians commute with the monodromy matrix $\mathbb{T}_N$ \re{MonodromyTN} up to boundary terms. The latter are cancelled by the boundary Hamiltonians $h_{01}$ and $h_{N\infty}$ in the same fashion as in the sl(2) case analyzed in \cite{Belitsky:2014rba}. \section{Brute force diagonalization} In this introductory section, we will perform the diagonalization of the Hamiltonian by solving the emerging differential equations for the eigenfunctions of the generating function $D_N$ of conserved charges. 
We define an energy eigenstate of the flux tube $\ket{E (\bit{\lambda})}$ with $N$ excitations possessing rapidities $\bit{\lambda} = (\lambda_1, \dots, \lambda_N)$. Then, the matrix element of the light-cone operator between the vacuum $\ket{0}$ and $\ket{E (\bit{\lambda})}$ \begin{align} \label{GenericMatrixElement} \bit{\Phi}_s (\bit{Z}; \bit{\lambda}) = \vev{ 0 | \mathcal{O}_{\Pi} (\bit{Z}) | E (\bit{\lambda}) } \end{align} will be an eigenfunction of $\mathcal{H}_N$. \subsection{One-particle matrix element} The solution of the one-particle problem is trivial as it arises from the first-order differential equation determining the eigensystem for $D_1$ \begin{align} D_1 (i w) \bit{\Phi}_s (Z_1; \lambda_1) = (i w - i \lambda_1 + s) \Phi_s (z_1; \lambda_1) + (i w - i \lambda_1 + s - \ft{1}{2}) \theta_1 \Phi_{s + 1/2} (z_1; \lambda_1) \, . \end{align} It yields for the individual eigenfunctions \begin{align} \Phi_s (z_1; \lambda_1) = z_1^{i \lambda_1 - s} \, , \qquad \Phi_{s + 1/2} (z_1; \lambda_1) = z_1^{i \lambda_1 - s - 1/2} \, , \end{align} that define the one-superparticle matrix element \begin{align} \label{1PmatrixElement} \bit{\Phi}_s (Z_1; \lambda_1) = \Phi_s (z_1; \lambda_1) + \theta_1 \Phi_{s + 1/2} (z_1; \lambda_1) \, . \end{align} These are plane waves with complex wave numbers. As we alluded to above, the eigenvalues of $D$ are complex. However, its eigenfunctions generate eigenvalues of the flux-tube Hamiltonian \begin{align} \mathcal{H}_1 \bit{\Phi}_s (Z_1; \lambda_1) = E_s (\lambda_1) \Phi_s (z_1; \lambda_1) + E_{s + 1/2} (\lambda_1) \theta_1 \Phi_{s + 1/2} (z_1; \lambda_1) \, , \end{align} with the well-known (one-loop) energy \begin{align} E_s (\lambda_1) = 2 \psi (1) - \psi (s - i \lambda_1) - \psi (s + i \lambda_1) \, . \end{align} \subsection{Two-particle matrix element} Now, we move on to the two-particle case.
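As a quick numerical aside before tackling the two-particle problem (a sketch with sample values of our choosing), the one-loop energy above is manifestly real for real rapidity, since the two digamma terms are complex conjugates of each other, and at $s = 1/2$, $\lambda_1 = 0$ it reduces to $2[\psi(1) - \psi(1/2)] = 4\ln 2$:

```python
from mpmath import mp, psi, log

mp.dps = 30

def energy(s, lam):
    # one-loop energy: E_s(lambda) = 2 psi(1) - psi(s - i lambda) - psi(s + i lambda)
    return 2*psi(0, 1) - psi(0, s - 1j*lam) - psi(0, s + 1j*lam)

# real for real rapidity: the two digamma arguments are complex conjugates
assert abs(mp.im(energy(1, mp.mpf('0.75')))) < mp.mpf('1e-25')

# at s = 1/2 and lambda = 0: 2*[psi(1) - psi(1/2)] = 4 ln 2
assert abs(energy(mp.mpf('0.5'), 0) - 4*log(2)) < mp.mpf('1e-25')
```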
We decompose the eigenfunction of $D_2$ in double Grassmann series over the two fermionic variables \begin{align} \bit{\Phi}_s (\bit{Z}) = \Phi_{ss} (\bit{z}) + \theta_1 \Phi_{s+1/2, s} (\bit{z}) + \theta_2 \Phi_{s, s+1/2} (\bit{z}) + \theta_1 \theta_2 \Phi_{s+1/2, s+1/2} (\bit{z}) \, , \end{align} with individual components depending on the bosonic variables $\bit{z} = (z_1, z_2)$. We have to solve the following equation in the component form \begin{align} D_2 (i w) \bit{\Phi}_s (\bit{Z}) & = (i w - i \lambda_1 + s) (i w - i \lambda_2 + s) \Phi_{ss} (\bit{z}) \nonumber\\ & + (i w - i \lambda_1 + s - \ft12) (i w - i \lambda_2 + s) \left[ \theta_1 \Phi_{s+1/2,s} (\bit{z}) + \theta_2 \Phi_{s,s+1/2} (\bit{z}) \right] \nonumber\\ & + (i w - i \lambda_1 + s - \ft12) (i w - i \lambda_2 + s - \ft12) \theta_1 \theta_2 \Phi_{s+1/2,s+1/2} (\bit{z}) \, . \end{align} The first-order differential equations arising from it fix the overall plane-wave factors of various contributions. The second order differential equations determine the remaining function of the ratio $z_1/z_2$ accompanying the waves and read \begin{align} & \left[ - z_1 (z_1 - z_2) \partial_{z_1} \partial_{z_2} - 2s \, z_1 \partial_{z_2} - (i \lambda_1 - s) (i \lambda_2 - s) \right] \Phi_{ss} (\bit{z}) = 0 \, , \\ & \left[ - z_1 (z_1 - z_2) \partial_{z_1} \partial_{z_2} - 2 s \, z_1 \partial_{z_2} + z_1 \partial_{z_1} - (i \lambda_1 - s + \ft12) (i \lambda_2 - s) \right] \Phi_{s,s+1/2} (\bit{z}) = 0 \, , \\ \label{PsiOne} & \left[ - z_1 (z_1 - z_2) \partial_{z_1} \partial_{z_2} - ( (2s+1) z_1 - z_2) \partial_{z_2} - (i \lambda_1 - s + \ft12) (i \lambda_2 - s) \right] \Phi_{s+1/2,s} (\bit{z}) \nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad - (z_1 \partial_{z_1} + 2s) \Phi_{s, s+1/2} (\bit{z}) = 0 \, , \\ & [ - z_1 (z_1 - z_2) \partial_{z_1} \partial_{z_2} - ( (2s+1) z_1 - z_2 ) \partial_{z_2} + z_1 \partial_{z_1} + 1 \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad - (i \lambda_1 - s + \ft12) (i 
\lambda_2 - s + \ft12) ] \Phi_{s+1/2,s+1/2} (\bit{z}) = 0 \, . \nonumber \end{align} The solutions to these equations can be found in a straightforward fashion \begin{align} \label{PhiSS} \Phi_{ss} (\bit{z}) &= z_1^{i \lambda_1 - s} z_2^{i \lambda_2 - s} {_2F_1} \left. \left( {s + i \lambda_1, s - i \lambda_2 \atop 2 s} \right| 1 - \frac{z_1}{z_2} \right) \, , \\ \label{Eigenfunctionpsi1} \Phi_{s+1/2,s} (\bit{z}) &= (s + i \lambda_2) z_1^{i \lambda_1 - s - 1/2} z_2^{i \lambda_2 - s} {_2F_1} \left.\left( {s + \ft12 + i \lambda_1, s - i \lambda_2 \atop 2 s + 1} \right| 1 - \frac{z_1}{z_2} \right) \, , \\ \label{Eigenfunctionpsi2} \Phi_{s, s+1/2} (\bit{z}) &= (s - i \lambda_2) z_1^{i \lambda_2 - s} z_2^{i \lambda_1 - s - 1/2} {_2F_1} \left.\left( {s + i \lambda_2, s + \ft12 - i \lambda_1 \atop 2 s + 1} \right| 1 - \frac{z_1}{z_2} \right) \, , \\ \label{PhiS/2S/2} \Phi_{s + 1/2, s + 1/2} (\bit{z}) &= z_1^{i \lambda_1 - s - 1/2} z_2^{i \lambda_2 - s - 1/2} {_2F_1} \left. \left( {s + \ft12 + i \lambda_1, s + \ft12 - i \lambda_2 \atop 2 s + 1} \right| 1 - \frac{z_1}{z_2} \right) \, . \end{align} Notice that the solution to \re{PsiOne} is not unique since one can always add to it a solution of the homogeneous equation with an arbitrary coefficient! Particularly noteworthy is the following one, which solves Eq.\ \re{PsiOne}, \begin{align} \Phi'_{s+1/2,s} (\bit{z}) &= - (s - i \lambda_2) z_1^{i \lambda_1 - s - 1/2} z_2^{i \lambda_2 - s} {_2F_1} \left. \left( {s + 1 - i \lambda_2, s + 1/2 + i \lambda_1 \atop 2s + 1} \right| 1 - \frac{z_1}{z_2} \right) \, , \end{align} since it is given by a single hypergeometric function and thus can be cast in a concise ``pyramid'' representation to be introduced later. The difference between the two solutions $\Phi'_{s+1/2,s} - \Phi_{s+1/2,s}$ is indeed a solution to the homogeneous equation.
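As an independent cross-check of the lowest-component solution \re{PhiSS}, one can verify numerically that it is annihilated by the second-order operator quoted above; the following sketch uses mpmath with arbitrarily chosen sample values of the spin, rapidities, and coordinates (our choices, purely illustrative):

```python
from mpmath import mp, hyp2f1, diff

mp.dps = 25

# sample (illustrative) values: spin s = 1, generic rapidities, real 0 < z1 < z2
s, l1, l2 = 1, mp.mpf('0.3'), mp.mpf('-0.7')
z1, z2 = mp.mpf('0.6'), mp.mpf('1.1')

def Phi(a, b):
    # Phi_ss of Eq. (PhiSS): a^{i l1 - s} b^{i l2 - s} 2F1(s + i l1, s - i l2; 2s; 1 - a/b)
    return a**(1j*l1 - s) * b**(1j*l2 - s) * hyp2f1(s + 1j*l1, s - 1j*l2, 2*s, 1 - a/b)

# residual of [- z1 (z1 - z2) d_{z1} d_{z2} - 2 s z1 d_{z2} - (i l1 - s)(i l2 - s)] Phi_ss
mixed = diff(Phi, (z1, z2), (1, 1))
dz2 = diff(Phi, (z1, z2), (0, 1))
residual = -z1*(z1 - z2)*mixed - 2*s*z1*dz2 - (1j*l1 - s)*(1j*l2 - s)*Phi(z1, z2)

assert abs(residual) < mp.mpf('1e-10')
```

The residual vanishes to the accuracy of the high-precision finite differences, confirming that the hypergeometric Ansatz solves the first of the four equations.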
Finally, for the mixed wave functions there is yet another (trivial) solution to the eigenvalue equation for the $D_2$-operator, i.e., $\Phi_{s,s+1/2} = 0$, $\Phi_{s+1/2,s} \neq 0$; however, like the previous one, it does not lead to a consistent eigenvalue equation for the Hamiltonian. With the above results in our hands, we can immediately verify that they yield the correct eigenvalues of the Hamiltonian $\mathcal{H}_2$, namely, we find \begin{align} \label{2PHeigenvalue} \mathcal{H}_2 \bit{\Phi}_s (\bit{Z} ; \bit{\lambda}) & = [E_s (\lambda_1) + E_s (\lambda_2)] \Phi_{ss} (\bit{z}) \nonumber\\ & + [E_{s+1/2} (\lambda_1) + E_{s+1/2} (\lambda_2)] \theta_1 \theta_2 \Phi_{s+1/2,s+1/2} (\bit{z}) \nonumber\\ & + [E_{s+1/2} (\lambda_1) + E_s (\lambda_2)] \left[ \theta_1 \Phi_{s+1/2,s} (\bit{z}) + \theta_2 \Phi_{s,s+1/2} (\bit{z}) \right] \, . \end{align} Notice that the two eigenfunctions $ \Phi_{s+1/2,s} (\bit{z}) $ and $ \Phi_{s,s+1/2} (\bit{z}) $ possess the same eigenvalue! \section{Algebraic construction of eigenfunctions} \label{AlgebraicEigenfunctionsSection} Beyond $N=2$, i.e., for three sites and more, the brute-force solution of higher-order differential equations is hopeless. Therefore, we will devise a recursive algebraic procedure to find the eigenfunctions of the operator $D_N$. It will turn out that the formalism produces only one representative solution at a given Grassmann degree. The rest, however, will be generated by means of supersymmetry. The procedure will be based on the intertwiner $\mathcal{R}^{(1)}_{12}$ introduced earlier in Section \ref{CommutativitySection}, which will yield a closed recursion for the matrix element $D$ of the monodromy operator. However, we first have to find its representation on the space of chiral matrix elements.
\subsection{Lowest component} \label{LowestComponentSection} To start with, let us recall the solution for the lowest component $\Phi_{s \dots s} (\bit{z})$ of the $N$-particle supermatrix element \re{GenericMatrixElement}. It is determined by the sl(2) open spin chain that was addressed in Ref.\ \cite{Belitsky:2014rba}. The intertwiner that is used in the recursive procedure to solve the eigenvalue equation for the bosonic counterpart of $D_N$ reads \cite{Belitsky:2014rba} \begin{align} {\rm r}_{12}^{(1)} (u) {\Phi} (y_1, y_2) & = \frac{\Gamma (2s) \Gamma (y_{21} \partial_{y_2} + u + 2s)}{\Gamma (u + 2s) \Gamma (y_{21} \partial_{y_2} + 2s)} {\Phi} (y_1, y_2) \nonumber\\ & = \int [D z]_s (y_1 - z^*)^{u} (y_2 - z^*)^{- u - 2s} {\Phi} (y_1, z) \, . \end{align} For instance, the two-particle eigenstate is \begin{align} \label{2Particle11} \Phi_{ss} (\bit{z}; \bit{\lambda}) = z_1^{i \lambda_1 - s} {\rm r}_{12}^{(1)} ( - i \lambda_1 - s ) z_2^{i \lambda_2 - s} = z_1^{i \lambda_1 - s} z_2^{i \lambda_2 - s} {_2 F_1} \left. \left( {s + i \lambda_1, s - i \lambda_2 \atop 2 s} \right| 1 - \frac{z_1}{z_2} \right) \, , \end{align} and agrees with Eq.\ \re{PhiSS} found earlier. For the generic $N$-particle case $\bit{z} = (z_1, z_2, \dots, z_N)$, we found \begin{align} \Phi_{s\dots s} (\bit{z}; \bit{\lambda}) = z_1^{i \lambda_1 - s} {\rm r}^{(1)}_{1 \dots N} (- i \lambda_1 - s) z_2^{i \lambda_2 - s} {\rm r}^{(1)}_{2 \dots N} (- i \lambda_2 - s) z_3^{i \lambda_3 - s} \dots {\rm r}^{(1)}_{N-1N} (- i \lambda_N - s) z_N^{i \lambda_N - s} \, , \end{align} where \begin{align} {\rm r}^{(1)}_{n \dots N} (u) = {\rm r}^{(1)}_{N-1,N} (u) {\rm r}^{(1)}_{N-2,N-1} (u) \dots {\rm r}^{(1)}_{n, n+1} (u) \, . \end{align} Let us now turn to further components in the Grassmann expansion. To this end, we have to deduce the intertwiner that will simplify the solution for $D_N$ in the supersymmetric case.
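The two-particle formula \re{2Particle11} admits a direct numerical check: in the Beta-integral (line-integral) form of the intertwiner, to be derived in the next subsection and reduced here to spin $s$, the action of ${\rm r}^{(1)}_{12}(-i\lambda_1 - s)$ on the plane wave collapses to Euler's integral representation of the hypergeometric function. A minimal mpmath sketch (all sample values are our arbitrary choices):

```python
from mpmath import mp, gamma, hyp2f1, quad

mp.dps = 25

# sample (illustrative) values: spin s = 3/2, generic rapidities, 0 < z1 < z2
s, l1, l2 = mp.mpf('1.5'), mp.mpf('0.4'), mp.mpf('-0.9')
z1, z2 = mp.mpf('0.7'), mp.mpf('1.3')

u = -1j*l1 - s   # argument of r^{(1)}_{12} in Eq. (2Particle11)

# Beta-integral form of r^{(1)}_{12}(u) acting on the plane wave z_2^{i l2 - s}
integrand = lambda t: t**(-u - 1) * (1 - t)**(u + 2*s - 1) * (t*z1 + (1 - t)*z2)**(1j*l2 - s)
lhs = z1**(1j*l1 - s) * gamma(2*s)/(gamma(-u)*gamma(u + 2*s)) * quad(integrand, [0, 1])

# closed hypergeometric form of Eq. (2Particle11)
rhs = z1**(1j*l1 - s) * z2**(1j*l2 - s) * hyp2f1(s + 1j*l1, s - 1j*l2, 2*s, 1 - z1/z2)

assert abs(lhs - rhs) < mp.mpf('1e-15')
```

The normalization $\Gamma(2s)/[\Gamma(-u)\Gamma(u+2s)]$ used here is the spin-$s$ reduction of the Beta-function prefactor quoted below for $\mathbb{R}^{(1)}_{12}$; it is precisely the factor that makes the hypergeometric function enter with unit coefficient.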
\subsection{Chiral limit of factorized matrices} The discussion in Ref.\ \cite{Belitsky:2006cp} was done for generic representations, i.e., involving both chiral and antichiral Grassmann variables $\theta$ and $\bar\theta$. We therefore start with the integral (zig-zag) representation for the intertwiner $\mathcal{R}^{(1)}$ derived there and take its chiral limit. Ignoring a convention-dependent normalization factor, we define \begin{align} \mathcal{R}_{12}^{(1)} (u) \bit{\Phi} (\mathcal{Y}_1, \mathcal{Y}_2) = \int [\mathcal{D} \mathcal{Z}]_{j \bar{j}} \mathcal{K}_{0, -u} (\mathcal{Y}_1, \mathcal{Z}^*) \mathcal{K}_{j, \bar{j} + u} (\mathcal{Y}_2, \mathcal{Z}^*) \bit{\Phi} (\mathcal{Y}_1, \mathcal{Z}) \, , \end{align} where $\mathcal{Y} = (y, \vartheta, \bar\vartheta)$, $\mathcal{Z} = (z, \theta, \bar\theta)$ and the measure reads \begin{align} \int [\mathcal{D} \mathcal{Z}]_{j \bar{j}} = \frac{j + \bar{j}}{j \bar{j}} \int_{\Im{\rm m}[z] > 0} \frac{d^2 z}{\pi} \int d \theta d \theta^* \int d \bar\theta d \bar\theta^* \, (z_+ - z_+^* - \theta \theta^* )^{j} (z_- - z_-^* - \bar\theta \bar\theta^* )^{\bar{j}} \end{align} along with the reproducing kernel \begin{align} \mathcal{K}_{j\bar{j}} (\mathcal{Y}, \mathcal{Z}^*) = (y_+ - z_+^* - \vartheta \theta^* )^{- j} (y_- - z_-^* - \bar\vartheta \bar\theta^* )^{- \bar{j}} \, . \end{align} Here, we introduced a notation for (anti)chiral bosonic coordinates $z_\pm = z \pm \ft12 \bar\theta \theta$, $y_\pm = y \pm \ft12 \bar\vartheta \vartheta$, with the conjugate ones found according to the rule \re{GrassmannInvolution}. To reach the chiral limit in the above expressions, we take into account that $\bit{\Phi}$ depends on $\bar\theta$ only through the chiral bosonic coordinates, \begin{align} \bit{\Phi} (\mathcal{Y}_1, \mathcal{Y}_2) = \bit{\Phi} (Y_1, Y_2) \, , \end{align} with $Y = (y_+, \vartheta)$.
Then, shifting the bosonic integration variables $z_+ \to z$, we can perform the integration with respect to $\bar\theta$ and $\bar\theta^*$ and send $\bar{j} \to 0$ and $j \to 2 s$ afterwards. We obtain \begin{align} \mathbb{R}^{(1)}_{12} (u) \bit{\Phi} (Y_1, Y_2) = \int [D Z]_s (y_1 - z^* - \theta \theta^*)^u (y_2 - z^* - \theta \theta^*)^{- u} (y_2 - z^* - \vartheta_2 \theta^*)^{- 2 s} \bit{\Phi} (Y_1, Z) \, , \end{align} where the chiral integration measure was introduced earlier in Eq.\ \re{sl21measure}. One can actually rewrite this operator in terms of a nonlocal differential operator using the properties of bosonic reproducing kernels, such that it reads explicitly \begin{align} \mathbb{R}^{(1)}_{12} (u) = \frac{\Gamma (2s + 1) \Gamma (y_{21} \partial_{y_2} + u + 2s + 1)}{\Gamma (u + 2s + 1) \Gamma (y_{21} \partial_{y_2} + 2s + 1)} \, . \end{align} In turn, it can be cast as an integral on the real line, adopting the well-known integral representation for the Euler Beta function, or as a bosonic integral in the upper half of the complex plane, \begin{align} \mathbb{R}_{12}^{(1)} (u) \bit{\Phi} (Y_1, Y_2) & = \frac{\Gamma (2s + 1)}{\Gamma (- u)\Gamma (u + 2s + 1)} \int_0^1 d\tau \, \tau^{- u - 1} \bar{\tau}^{u + 2s} \bit{\Phi} (Y_1, \tau y_1 + \bar{\tau} y_2, \vartheta_2) \nonumber\\ & = \int [D z]_{s + 1/2} (y_1 - z^*)^{u} (y_2 - z^*)^{- u - 2s - 1} \bit{\Phi} (Y_1, z, \vartheta_2) \, . \end{align} Here, the conformal spin $s$ may be understood to admit two different values depending on the component field the operator acts on; for instance, for $s \to s - 1/2$ we fall back on the bosonic case discussed in the earlier section, $\mathbb{R}_{12}^{(1)} (u)|_{s \to s - 1/2} = {\rm r}^{(1)}_{12} (u)$. \subsection{Two-particle case} Let us start applying the above results to the derivation of the two-site eigenfunctions.
Making use of the known right factorization property of the Lax operator, \begin{align} \label{RightFactorizationL} \mathbb{L}_n (v_1, u_2, u_3) = \mathbb{L}_n (u_1, u_2, u_3) \mathbb{M}_n (u_1| v_1) \, , \quad\mbox{with}\quad \mathbb{M}_n (u_1| v_1) = \left( \begin{array}{ccc} v_1/u_1 & 0 & 0 \\ 0 & 1 & 0 \\ (v_1/u_1-1) z_n & 0 & 1 \end{array} \right) \, , \end{align} which allows us to restore the same spectral parameter in $\mathbb{L}_n$, we can write a relation between the two- and one-site monodromy matrices \re{MonodromyTN} \begin{align} \mathbb{R}^{(1)}_{12} (v_1 - u_1) \mathbb{L}_1 (v_1, u_2, u_3) \mathbb{T}_{1} (u) = \mathbb{T}_2 (u) \mathbb{M}_2 (u_1 | v_1) \mathbb{R}^{(1)}_{12} (v_1 - u_1) \, . \end{align} Projecting out the $33$-entry of the monodromy matrix $D_2 (u) = [\mathbb{T}_2 (u)]_{33}$ in the right-hand side, we end up with the relation \begin{align} \label{2partRecursion} D_2 (u) \mathbb{R}_{12}^{(1)} (v_1 - u_1) = \mathbb{R}_{12}^{(1)} (v_1 - u_1) \big[ & z_1 (z_1 \partial_{z_1} + \theta_1 \partial_{\theta_1} + v_1 - u_3) B^1_1 (u) \\ - & \theta_1 (z_1 \partial_{z_1} + u_3 - u_2) B^2_1 (u) + (u_3 - z_1 \partial_{z_1} - \theta_1 \partial_{\theta_1}) D_1 (u) \big] \, , \nonumber \end{align} where the elements of the one-particle monodromy matrix (i.e., the Lax operator itself) in the right-hand side of this equation act only on the variables of the second site, i.e., \begin{align} B_1^1 (u) = - \partial_{z_2} \, , \qquad B_1^2 (u) = \partial_{\theta_2} \, , \qquad D_1 (u) = u_3 - z_2 \partial_{z_2} - \theta_2 \partial_{\theta_2} \, . \end{align} In order to construct a recursion, the first two terms in the right-hand side of Eq.\ \re{2partRecursion} have to vanish when acting on a state of our choice.
There are two\footnote{In fact, since $B_1^2 (u) = \partial_{\theta_2}$ is a derivative that annihilates a Grassmann constant, we can encode the lowest component into this bare function by adding a $\theta$-independent term $\Phi^{(0)}_{s,s} (\bit{z})$. This eigenfunction was discussed in Section \ref{LowestComponentSection} already.} such choices, cumulatively denoted by $\bit{\Phi}^{(0)} (\bit{Z})$, \begin{align} \bit{\Phi}^{(0)} (\bit{Z}) = \theta_1 \Phi^{(0)}_{s+1/2,s} (\bit{z}) + \theta_1 \theta_2 \Phi^{(0)}_{s+1/2,s+1/2} (\bit{z}) \, , \end{align} where $\bit{Z} = (Z_1, Z_2)$ and $\bit{z} = (z_1, z_2)$. Notice that the Grassmann structure $\theta_2$ will necessarily involve $B$-operators and will not be closed under recursion. However, as will be demonstrated below, it can be found by virtue of supersymmetry. Since Grassmann components of different degree do not talk to each other, we can analyze them separately. Let us start with the $\theta_1$ component and cast it in the factorized form $\theta_1 \Phi^{(0)}_{s+1/2,s} (\bit{z}) = \theta_1 z_1^\alpha \Phi^{(0)}_{s} (z_2)$. The value of $\alpha$ is fixed by the vanishing of the action of the first term in the brackets, $\alpha = u_3 - v_1 - 1$, which provides the eigenvalue of the first level of the recursion, $v_1 = i w - i \lambda_1 + s - \ft12$, \begin{align} D_2 (u) \mathbb{R}_{12}^{(1)} (v_1 - u_1) \theta_1 \Phi^{(0)}_{s+1/2,s} (\bit{z}) = (i w - i \lambda_1 + s - \ft12) \theta_1 z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{12}^{(1)} (v_1 - u_1) D_1 (u) z_2^\beta \, , \end{align} such that $\alpha = i \lambda_1 - s - 1/2$ and $u_3 = i w$, $u_1 = u_2 = i w + s + 1/2$. Here we took into account that $\mathbb{R}_{12}^{(1)}$ acts on the $z_2$ coordinate only, such that we can move the $z_1$-dependent factor to its left.
Next, substituting $\beta = i \lambda_2 - s$ in $\Phi^{(0)}_{s} (z_2) = z_2^\beta$, we immediately obtain \begin{align} & D_2 (i w) \mathbb{R}_{12}^{(1)} ( - i \lambda_1 - s - \ft12) \Phi^{(0)}_{s+1/2,s} (\bit{z}) \\ &\qquad\quad = ( i w - i \lambda_1 + s - \ft12) ( i w - i \lambda_2 + s ) z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{12}^{(1)} ( - i \lambda_1 - s - \ft12) z_2^{i \lambda_2 - s} \, , \nonumber \end{align} with the resulting eigenfunction being \begin{align} \label{2Particle21} \Phi_{s+1/2,s} (\bit{z}; \bit{\lambda}) & = z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{12}^{(1)} (- i \lambda_1 - s - \ft12) z_2^{i \lambda_2 - s} \\ & = z_1^{i \lambda_1 - s - 1/2} z_2^{i \lambda_2 - s} {_2 F_1} \left. \left( {s + 1/2 + i \lambda_1, s - i \lambda_2 \atop 2 s + 1} \right| 1 - \frac{z_1}{z_2} \right) \, . \nonumber \end{align} This, up to an overall normalization coefficient, is the result $\Phi_{s+1/2,s} (\bit{z})$ of the previous section. The missing prefactor, which plays a crucial role in the proper diagonalization of the Hamiltonian, will be fixed making use of supersymmetry later in this section. For the highest component $\theta_1 \theta_2 \Phi^{(0)}_{s+1/2,s+1/2} (\bit{z})$, adopting an analogous factorizable Ansatz $\Phi^{(0)}_{s+1/2,s+1/2} (\bit{z}) = z_1^\alpha \Phi^{(0)}_{s+1/2}(z_2)$, with the one-particle wave function $\Phi^{(0)}_{s+1/2}(z_2) = z_2^\beta$, we deduce in the same fashion \begin{align} & D_2 (i w) \mathbb{R}_{12}^{(1)} (- i \lambda_1 - s - 1/2) \theta_1 \theta_2 \Phi^{(0)}_{s+1/2,s+1/2} (\bit{z}) \\ &\qquad\quad = (i w - i \lambda_1 + s - \ft12) (i w - i \lambda_2 + s - \ft12) \theta_1\theta_2 \Phi_{s+1/2,s+1/2} (\bit{z}) \, , \nonumber \end{align} with the explicit eigenfunction being \begin{align} \label{2Particle22} \Phi_{s+1/2,s+1/2} (\bit{z}; \bit{\lambda}) & = z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{12}^{(1)} (- i \lambda_1 - s - 1/2) z_2^{i \lambda_2 - s - 1/2} \\ & = z_1^{i \lambda_1 - s - 1/2} z_2^{i \lambda_2 - s - 1/2} {_2 F_1} \left.
\left( {s + 1/2 + i \lambda_1, s + 1/2 - i \lambda_2 \atop 2 s + 1} \right| 1 - \frac{z_1}{z_2} \right) \, . \nonumber \end{align} To summarize, the two-particle eigenfunctions constructed via the advocated algebraic procedure are (we added here the lowest component as well) \begin{align} \label{2ParticleEigenFunction} \bit{\Phi}_s (\bit{Z}; \bit{\lambda}) = \Phi_{ss} (\bit{z}; \bit{\lambda}) + \theta_1 \Phi_{s+1/2,s} (\bit{z}; \bit{\lambda}) + \theta_1 \theta_2 \Phi_{s+1/2,s + 1/2} (\bit{z}; \bit{\lambda}) \, , \end{align} where the individual components are given by Eqs.\ \re{2Particle11}, \re{2Particle21} and \re{2Particle22}, respectively. Though this construction does not allow one to find all eigenfunctions in the Grassmann expansion, e.g., the one in front of $\theta_2$ for the case at hand, nor to endow them with the correct coefficients with which they enter the supereigenfunction, one can use a recipe to restore all of them, as suggested below. \begin{figure}[t] \begin{center} \mbox{ \begin{picture}(0,170)(240,0) \put(0,-650){\insertfig{40}{pyramid}} \end{picture} } \end{center} \caption{\label{FigPyramid} Pyramid representation of the eigenfunction $\Phi_{s+1/2,s}$ (left panel) and its inverse (right panel).} \end{figure} Before we outline it, let us introduce another representation for the eigenfunctions, which will be indispensable in the proof of their orthogonality as well as in the analytic verification of the factorizability of multiparticle pentagon transitions. It is the so-called pyramid representation, which gives a diagrammatic interpretation of eigenfunctions in two-dimensional space.
Making use of the results in Appendix B of Ref.\ \cite{Belitsky:2014rba}, we can cast the above matrix elements in the form \begin{align} \label{2PlowestPyramid} \Phi_{ss} (\bit{z}; \bit{\lambda}) &= z_1^{i \lambda_1 - s} \int [D z]_{s} (z_1 - z^\ast)^{- i \lambda_1 - s} (z_2 - z^\ast)^{i \lambda_1 - s} z^{i \lambda_2 - s} \, , \\ \label{2PhighestPyramid} \Phi_{s+1/2,s+1/2} (\bit{z}; \bit{\lambda}) &= z_1^{i \lambda_2 - s - 1/2} \int [D z]_{s + 1/2} (z_1 - z^\ast)^{- i \lambda_2 - s - 1/2} (z_2 - z^\ast)^{i \lambda_2 - s - 1/2} z^{i \lambda_1 - s - 1/2} \, , \end{align} for the same-flavor components and \begin{align} \label{psi1One} \Phi_{s+1/2,s} (\bit{z}; \bit{\lambda}) &= z_1^{i \lambda_1 - s - 1/2} \int [D z]_{s + 1/2} (z_1 - z^\ast)^{- i \lambda_1 - s - 1/2} (z_2 - z^\ast)^{i \lambda_1 - s - 1/2} z^{i \lambda_2 - s} \\ &= z_1^{i \lambda_2 - s} \int [D z]_{s + 1/2} (z_1 - z^\ast)^{- i \lambda_2 - s - 1} (z_2 - z^\ast)^{i \lambda_2 - s} z^{i \lambda_1 - s - 1/2} \, , \end{align} for the $\theta_1$ component. The graphical representation of (the second form of) this eigenfunction in terms of a ``pyramid'' is shown in Fig.\ \ref{FigPyramid}. Now the missing eigenfunction can be simply found by promoting the internal bosonic propagators in the second representation to their supersymmetric extension \begin{align} \label{SuperPropagator} (z' - z^\ast)^{- \alpha} \to [Z' - Z^\ast]^{- \alpha} \equiv (z' - z^\ast + \theta' \theta^\ast)^{- \alpha} \, . \end{align} The Grassmann degree-one two-particle pyramid \begin{align} \Phi^{[1]}_2 (\bit{Z}; \bit{\lambda}) \equiv \theta_1 \Phi_{s+1/2, s} (\bit{z}; \bit{\lambda}) + \theta_2 \Phi_{s, s+1/2} (\bit{z}; \bit{\lambda}) \, , \end{align} reads \begin{align} \label{2PmixedPyramid} \Phi^{[1]}_2 (\bit{Z}; \bit{\lambda}) = \int d \theta^\ast z_1^{i \lambda_2 - s} \int [D z]_{s + 1/2} [Z_1 - Z^\ast]^{- i \lambda_2 - s} [Z_2 - Z^\ast]^{i \lambda_2 - s} z^{i \lambda_1 - s - 1/2} \, .
\nonumber \end{align} Expanding the integrand in the fermionic variables, we uncover the missing solution $\Phi_{s, s+1/2}$ as well as automatically produce the correct relative coefficients as functions of the rapidity variables. \subsection{Three-particle case and beyond} One third of the Yang-Baxter equation for the three-site case reads \begin{align} \mathbb{R}_{123}^{(1)} (v_1 - u_1) & \mathbb{L}_1 (v_1, u_2, u_3) \mathbb{L}_2 (u_1, u_2, u_3) \mathbb{L}_3 (u_1, u_2, u_3) \nonumber\\ = & \mathbb{L}_1 (u_1, u_2, u_3) \mathbb{L}_2 (u_1, u_2, u_3) \mathbb{L}_3 (v_1, u_2, u_3) \mathbb{R}_{123}^{(1)} (v_1 - u_1) \, , \end{align} where \begin{align} \mathbb{R}_{123}^{(1)} (v_1 - u_1) \equiv \mathbb{R}_{23}^{(1)} (v_1 - u_1) \mathbb{R}_{12}^{(1)} (v_1 - u_1) \, . \end{align} Making use of Eq.\ \re{RightFactorizationL}, this relation can be rewritten for the monodromy matrices with a decreasing number of sites \begin{align} \mathbb{R}_{123}^{(1)} (v_1 - u_1) \mathbb{L}_1 (v_1, u_2, u_3) \mathbb{T}_{2} (u) = \mathbb{T}_{3} (u) \mathbb{M}_3 (u_1 | v_1) \mathbb{R}_{123}^{(1)} (v_1 - u_1) \, .
\end{align} Extracting the $33$-component from both sides and acting with the result on a test function $\Phi^{(0)} (\bit{Z})$ of three variables $\bit{Z} = (Z_1, Z_2, Z_3)$, we find \begin{align} \label{Diff3Particles} \mathbb{R}_{123}^{(1)} (v_1 - u_1) \big[ & z_1 (z_1 \partial_{z_1} + \theta_1 \partial_{\theta_1} + v_1 - u_3) B^1_{2} (u) - \theta_1 (z_1 \partial_{z_1} + u_3 - u_2) B^2_{2} (u) \nonumber\\ & + (u_3 - z_1 \partial_{z_1} - \theta_1 \partial_{\theta_1}) D_{2} (u) \big] \Phi^{(0)} (\bit{Z}) = D_{3} (u) \mathbb{R}_{123}^{(1)} (v_1 - u_1) \Phi^{(0)} (\bit{Z}) \, , \end{align} where \begin{align} B_{2}^1 (u) &= - (u_1 + z_2 \partial_{z_2}) \partial_{z_3} - \theta_2 \partial_{z_2} \partial_{\theta_3} - \partial_{z_2} (u_3 - z_3 \partial_{z_3} - \theta_3 \partial_{\theta_3}) \, , \\ B_{2}^2 (u) &= (z_2 \partial_{\theta_2} - (u_2 - u_1) \theta_2) \partial_{z_3} + (u_2 - \theta_2 \partial_{\theta_2} ) \partial_{\theta_3} + \partial_{\theta_2} (u_3 - z_3 \partial_{z_3} - \theta_3 \partial_{\theta_3}) \, . \end{align} To construct a self-contained recursion, we have to choose the bare three-particle wave function $\bit{\Phi}^{(0)} (\bit{Z})$ in a way that eliminates the first two terms in Eq.\ \re{Diff3Particles}. This is achieved by the factorized Ansatz \begin{align} \bit{\Phi}^{(0)} (\bit{Z}) = \theta_1 z_1^{i \lambda_1 - s - 1/2} \bit{\Phi} (Z_2, Z_3) \, , \end{align} where we set $v_1 = i w - i \lambda_1$ and $\bit{\Phi} (Z_2, Z_3)$ is the two-particle eigenfunction whose three components were computed in the previous subsection. Thus, three of the three-particle eigenfunctions are \begin{align} \bit{\Phi}_s (\bit{Z}) = \theta_1 z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{123}^{(1)} (- i \lambda_1 - s - \ft12) \bit{\Phi}_s (Z_2, Z_3) \, , \end{align} with $\bit{\Phi}_s (Z_2, Z_3)$ given in Eq.\ \re{2ParticleEigenFunction} with shifted labels of supercoordinates $k \to k+1$.
Finally, the lowest, i.e., $\theta$-independent component of the eigenfunction can be found by eliminating any reference to Grassmann variables in the above equations, $\theta \to 0$, $\partial_\theta \to 0$, and was quoted in Section \ref{LowestComponentSection}, \begin{align} \Phi_{sss} (\bit{z}) = z_1^{i \lambda_1 - s} \mathbb{R}_{123}^{(1)} (- i \lambda_1 - s) \Phi_{ss} (z_2, z_3) \, , \end{align} where the operator $\mathbb{R}_{123}^{(1)}$ is understood as the one with the shift in the spin $s \to s - 1/2$. Thus, the vector of eigenfunctions that can be obtained by means of the above algebraic construction is \begin{align} \label{3ParticleEigenFunction} \bit{\Phi}_s (\bit{Z}; \bit{\lambda}) &= z_1^{i \lambda_1 - s} \mathbb{R}_{123}^{(1)} z_2^{i \lambda_2 - s} \mathbb{R}_{23}^{(1)} z_3^{i \lambda_3 - s} + \theta_1 z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{123}^{(1)} z_2^{i \lambda_2 - s} \mathbb{R}_{23}^{(1)} z_3^{i \lambda_3 - s} \\ & + \theta_1 \theta_2 z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{123}^{(1)} z_2^{i \lambda_2 - s - 1/2} \mathbb{R}_{23}^{(1)} z_3^{i \lambda_3 - s} + \theta_1 \theta_2 \theta_3 z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{123}^{(1)} z_2^{i \lambda_2 - s - 1/2} \mathbb{R}_{23}^{(1)} z_3^{i \lambda_3 - s - 1/2} \, .
\nonumber \end{align} The generalization to the $N$-particle case is now straightforward, \begin{align} \label{NParticleEigenFunction} \bit{\Phi}_s (\bit{Z}) &= z_1^{i \lambda_1 - s} \mathbb{R}_{1 2 \dots N}^{(1)} z_2^{i \lambda_2 - s} \mathbb{R}_{2 \dots N}^{(1)} z_3^{i \lambda_3 - s} \dots \mathbb{R}_{N-1, N}^{(1)} z_N^{i \lambda_N - s} \\ & + \theta_1 z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{1 2 \dots N}^{(1)} z_2^{i \lambda_2 - s} \mathbb{R}_{2 \dots N}^{(1)} z_3^{i \lambda_3 - s} \dots \mathbb{R}_{N-1, N}^{(1)} z_N^{i \lambda_N - s} \, , \dots \, , \ \nonumber\\ & + \theta_1 \theta_2 \dots \theta_N z_1^{i \lambda_1 - s - 1/2} \mathbb{R}_{1 2 \dots N}^{(1)} z_2^{i \lambda_2 - s - 1/2} \mathbb{R}_{2 \dots N}^{(1)} z_3^{i \lambda_3 - s - 1/2} \dots \mathbb{R}_{N-1, N}^{(1)} z_N^{i \lambda_N - s - 1/2} \, . \nonumber \end{align} Along the same route as in the two-particle case, one can recover all eigenfunctions by employing supersymmetry at each level of odd variables in the pyramid representation of the eigenfunctions and, thus, restore the relative coefficients accompanying them. \subsection{Orthogonality} \label{OrthogonalitySection} Before we move on to using the above eigenfunctions for the calculation of pentagon transitions, let us prove their orthogonality first. In fact, the technique that will be used for it here is readily adaptable to the calculation of the latter as well. \subsubsection{One site} To keep track of different components in the Grassmann expansion, it is convenient to introduce a marker variable $\varepsilon$ via $\theta \to \varepsilon \theta$ for the in-state eigenfunction and, correspondingly, $\varepsilon'$ for the out state.
Then, using \re{sl21InnerInComponents}, we find \begin{align} \vev{\bit{\Phi}_s (\lambda'_1) | \bit{\Phi}_s (\lambda_1)} = \vev{\Phi_s (\lambda'_1) | \Phi_s (\lambda_1)} + \frac{\varepsilon' \varepsilon}{2 i s} \vev{\Phi_{s + 1/2} (\lambda'_1) | \Phi_{s + 1/2} (\lambda_1)} \, , \end{align} where the component inner products \begin{align} \label{Component1PinnerProduct} \vev{\Phi_s (\lambda'_1) | \Phi_s (\lambda_1)} = 2 \pi {\rm e}^{- \pi \lambda_1} \mu^{- 1}_s (\lambda_1) \delta (\lambda'_1 - \lambda_1) \, , \end{align} are expressed in terms of the measure \begin{align} \label{FTmeasureMUs} \mu_s (\lambda) = \frac{\Gamma (s + i \lambda) \Gamma (s - i \lambda)}{\Gamma (2 s)} \, , \end{align} for the spin-$s$ flux-tube excitation. For $s = 1/2$, these reduce to the hole and fermion excitations for $(\varepsilon \varepsilon')^0$ and $(\varepsilon \varepsilon')^1$, respectively, while for $s = 1$ they accommodate the fermion as the lowest and the gauge field as the highest component of the $\mathcal{N} = 1$ gauge supermultiplet. \subsubsection{Permutation identity in superspace} \begin{figure}[t] \begin{center} \mbox{ \begin{picture}(0,150)(240,0) \put(0,-590){\insertfig{35}{superpermutation}} \end{picture} } \end{center} \caption{\label{FigSuperpermutation} Superpermutation identity \re{superPermutationIdentity}.} \end{figure} To work out the two-particle case and beyond, we have to introduce an identity that will be instrumental in the concise proof of orthogonality. Namely, it is indispensable to use the permutation identity in the language of Feynman graphs lifted to superspace.
Introducing the superpropagator \re{SuperPropagator} from the superpoint $Z = (z, \theta)$ to $Z' = (z', \theta')$, one can show that \begin{align} \label{superPermutationIdentity} [Z'_1 - Z_1^\ast]^{i \lambda' - i \lambda} \bit{X} \left( \bit{Z}; \lambda | \bit{Z}'; \lambda' \right) = \bit{X} \left( \bit{Z}; \lambda' | \bit{Z}'; \lambda \right) [Z'_2 - Z_2^\ast]^{i \lambda - i \lambda'} \, , \end{align} where the supercross is given by \begin{align} \bit{X} (\bit{Z}, \lambda | \bit{Z}', \lambda') \equiv \int [D Y]_s [Y - Z_1^\ast]^{i \lambda - s} [Y - Z_2^\ast]^{- i \lambda - s} [Z'_1 - Y^\ast]^{- i \lambda' - s} [Z'_2 - Y^\ast]^{i \lambda' - s} \, . \end{align} It depends on four superpoints through $\bit{Z} = (Z_1, Z_2)$ and $\bit{Z}' = (Z'_1, Z'_2)$. Its form in terms of Feynman graphs is demonstrated in Fig.\ \ref{FigSuperpermutation}. The identity reduces to its known bosonic counterpart when all external Grassmann variables are set to zero, see Appendix \ref{FeynmanGraphsAppendix}. \subsubsection{Two sites and more} \label{SectionOrthogonalityPhi} \begin{figure}[t] \begin{center} \mbox{ \begin{picture}(0,220)(270,0) \put(0,-430){\insertfig{30}{bosonicproduct}} \end{picture} } \end{center} \caption{\label{BosonicInnerReduction} Steps in the evaluation of the bosonic inner product \re{bosonicproduct}.} \end{figure} For two excitations, the eigenfunctions of the matrix elements are given in Eqs.\ \re{2PlowestPyramid}, \re{2PhighestPyramid} for the same-flavor case and \re{2PmixedPyramid} for the mixed one. For the $\Phi_{ss}$ and $\Phi_{s+1/2,s+1/2}$ eigenfunctions, the proof of the orthogonality condition repeats the steps of the bosonic consideration \cite{Belitsky:2014rba}.
Namely, using a chain of transformations exhibited in Fig.\ \ref{BosonicInnerReduction}, which consists of applying (i) the chain rule \re{ChainRules}, (ii) the permutation identity \re{bosonicPermutationIdentity}, and (iii) the chain rule (twice again), one reduces the inner product to the one-particle case analyzed above, such that we immediately find \begin{align} \label{bosonicproduct} \vev{\Phi_{ss} (\bit{\lambda}') | \Phi_{ss} (\bit{\lambda})} = a_{s} (s - i \lambda_1, s + i \lambda'_2) a_{s} (s + i \lambda'_1, s - i \lambda_2) \vev{\Phi_{s} (\lambda'_2) | \Phi_{s} (\lambda_2)} \vev{\Phi_{s} (\lambda'_1) | \Phi_{s} (\lambda_1)} \, . \end{align} Here, the inner product involves the spin-$s$ component in the Grassmann expansion of the one-particle eigenfunction \re{1PmatrixElement}. To understand what to anticipate for the mixed $\Phi_{s,s+1/2}$ and $\Phi_{s+1/2,s}$ eigenfunctions, let us point out that we are dealing with a degenerate case. Namely, the two-particle mixed sector can be cast in the following matrix form \begin{align*} \left( \begin{array}{cc} H_{11} & H_{12} \\ H_{21} & H_{22} \end{array} \right) \left( \begin{array}{c} \ket{\psi_1} \\ \ket{\psi_2} \end{array} \right) = E \left( \begin{array}{c} \ket{\psi_1} \\ \ket{\psi_2} \end{array} \right) \, , \end{align*} where the two eigenstates $\ket{\psi_1} \to \Phi_{21}$ and $\ket{\psi_2} \to \Phi_{12}$ share the same eigenvalue $E$, see Eq.\ \re{2PHeigenvalue}. Then, multiplying this equation from the left by the conjugate two-vector of eigenfunctions, we find \begin{align*} (E^\prime - E) \left[ \vev{\psi^\prime_1|\psi_1} + \vev{\psi^\prime_2|\psi_2} \right] = 0 \, , \end{align*} so that only the sum obeys the orthogonality condition \begin{align*} \vev{\psi^\prime_1|\psi_1} + \vev{\psi^\prime_2|\psi_2} = \delta (E^\prime - E) \, , \end{align*} and not each eigenstate separately.
Thus, the orthogonality has to emerge from the sum of integrals \begin{align} \label{MixedInnerProductStart} \vev{\Phi_{s,s+1/2} (\bit{\lambda}')|\Phi_{s,s+1/2} (\bit{\lambda})} & + \vev{\Phi_{s+1/2,s} (\bit{\lambda}')|\Phi_{s+1/2,s} (\bit{\lambda})} \\ &= \int [D z_1]_{s +1/2} \int [D z_2]_{s} \Big( \Phi_{s+1/2,s} (\bit{z}; \bit{\lambda}') \Big)^\ast \Phi_{s+1/2,s} (\bit{z}; \bit{\lambda}) \nonumber\\ &+ \int [D z_1]_{s} \int [D z_2]_{s +1/2} \Big( \Phi_{s,s+1/2} (\bit{z}; \bit{\lambda}') \Big)^\ast \Phi_{s,s+1/2} (\bit{z}; \bit{\lambda}) \, . \nonumber \end{align} Making use of the pyramid representation for each eigenfunction \begin{align} \label{psi1Two} \Phi_{s+1/2,s} (\bit{z}) & = (s + i \lambda_2) z_1^{i \lambda_2 - s} \int [D z]_{s + 1/2} (z_1 - z^\ast)^{- i \lambda_2 - s - 1} (z_2 - z^\ast)^{i \lambda_2 - s} z^{i \lambda_1 - s - 1/2} \, , \\ \label{psi2Two} \Phi_{s,s+1/2} (\bit{z}) & = (s - i \lambda_2) z_1^{i \lambda_2 - s} \int [D z]_{s + 1/2} (z_1 - z^\ast)^{- i \lambda_2 - s} (z_2 - z^\ast)^{i \lambda_2 - s - 1} z^{i \lambda_1 - s - 1/2} \, , \end{align} a simple-minded application of the rules used in the bosonic subsector fails at the second step. Although one can circumvent this predicament by using inversion\footnote{We would like to thank Sasha Manashov for this suggestion.} as demonstrated in Appendix \ref{2PorthogonalityAlternative}, we will follow a different route that can be applied to the pentagon transitions studied later in the paper. \begin{figure}[p] \begin{center} \mbox{ \begin{picture}(0,490)(250,0) \put(0,-50){\insertfig{30}{orthogonality}} \end{picture} } \end{center} \caption{\label{FigMixedInnerProduct} Reduction steps in the evaluation of the inner product \re{MixedInnerProductStart}.} \end{figure} First, we introduce a conjugate pyramid: it is defined by the same graph as the original one but with all lines reversed and the signs of all rapidities flipped.
It is proportional to the complex conjugate wave function up to a phase factor that can be easily established from the involution rules \begin{align} \big( (z' - z^\ast)^{-\alpha} \big)^\ast = {\rm e}^{i \pi \alpha^\ast} (z - z'^\ast)^{-\alpha^\ast} \, . \end{align} This way the wave function $\left( \Phi_{s+1/2,s} (\bit{z}) \right)^\ast$ is determined by the reversed graph with overall phase factor \begin{align} \label{2Ptphase} {\rm e}^{i \pi (s + i \lambda_2)} {\rm e}^{i \pi (s + 1/2 + i \lambda_1)} {\rm e}^{i \pi (2s + 1)} \, . \end{align} Here, the first two factors stem from lines connecting vertices with $w = 0$ and the rest arise from the internal lines. The same prefactor accompanies the definition of $\left( \Phi_{s,s+1/2} (\bit{z}) \right)^\ast$. Now we proceed with the verification of orthogonality. The sum in Eq.\ \re{MixedInnerProductStart} is shown by the top row in Fig.\ \ref{FigMixedInnerProduct}, up to an overall phase \re{2Ptphase}, where all rapidities have to be dressed with primes since they emerge from the wave function in the out state. Starting the reduction from right to left, we integrate first with respect to the vertex $z_2$ by means of the chain rules \re{ChainRules}. This will yield different $a$-factors that accompany the reduced graphs, due to the different spins of the corresponding integration measures. Pulling out the overall factor $$ {\rm e}^{- i \pi s} a_s (s - i \lambda_2, s + i \lambda'_2) $$ the two contributions with corresponding rapidity-dependent coefficients are shown in the middle row in Fig.\ \ref{FigMixedInnerProduct}. The subsequent reduction is based on the use of the permutation identity in superspace, which allows us to move the right vertical propagator through the entire graph to the left. 
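The involution rule quoted above can be checked numerically on the principal branch of the power function, with the bosonic parts of both points taken in the upper half-plane (sample points and exponents below are of our choosing):

```python
import cmath
import math

def involution_mismatch(z, zp, a):
    """Compare ((z' - z*)^(-a))* against e^{i pi a*} (z - z'*)^(-a*)
    for Im z > 0 and Im z' > 0, using the principal branch of the power."""
    lhs = ((zp - z.conjugate()) ** (-a)).conjugate()
    rhs = cmath.exp(1j * math.pi * a.conjugate()) * (z - zp.conjugate()) ** (-a.conjugate())
    return abs(lhs - rhs)

# A few sample points in the upper half-plane with complex exponents
for z, zp, a in [
    (0.3 + 0.7j, -1.2 + 0.4j, 0.8 + 0.3j),
    (-0.5 + 1.1j, 0.9 + 0.2j, 1.5 - 0.6j),
]:
    assert involution_mismatch(z, zp, a) < 1e-12
```

The identity is exact on the principal branch because $z - z'^\ast = -(z' - z^\ast)^\ast$ sits in the upper half-plane whenever both $z$ and $z'$ do.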
To move the right vertical propagator through the graph, we choose the coordinates as \begin{align} Z_1 = (w, 0) \, , \qquad Z_2 = (z, \theta) \, , \qquad Z'_1 = (w', 0) \, , \qquad Z'_2 = (z', \theta') \, , \end{align} in Eq.\ \re{superPermutationIdentity}, where eventually we set $w = w' = 0$. Collecting terms accompanying the Grassmann structure $\theta' \theta^\ast$, we find the relation \begin{align} \label{PermutationIdentityBosonFermion} & (z' - z^\ast)^{i \lambda' - i \lambda} (s - i \lambda') (s + i \lambda) X_{s + 1/2} (w, z; \lambda | w', z'; \lambda') \nonumber\\ &\qquad - 2 i s (i \lambda' - i \lambda) (z' - z^\ast)^{i \lambda' - i \lambda - 1} X_{s} (w, z; \lambda | w', z'; \lambda') \nonumber\\ &\qquad\qquad = (w' - w^\ast)^{i \lambda - i \lambda'} (s - i \lambda) (s + i \lambda') X_{s + 1/2} (w, z; \lambda' | w', z'; \lambda) \, , \end{align} between the crosses with the spin-$s$ and spin-$(s+\ft12)$ measures, \begin{align} & X_{s} (w, z; \lambda | w', z'; \lambda') \\ &\qquad \equiv \int [D y]_s (y - w^\ast)^{i \lambda - s} (w' - y^\ast)^{- i \lambda' - s} (y - z^\ast)^{- i \lambda - s} (z' - y^\ast)^{i \lambda' - s} \, , \nonumber\\ & X_{s + 1/2} (w, z; \lambda | w', z'; \lambda') \\ &\qquad \equiv \int [D y]_{s + 1/2} (y - w^\ast)^{i \lambda - s} (w' - y^\ast)^{- i \lambda' - s} (y - z^\ast)^{- i \lambda - s - 1} (z' - y^\ast)^{i \lambda' - s - 1} \, . \nonumber \end{align} These arise from the Grassmann expansion of the supercross. We can recognize right away, in the left-hand side of Eq.\ \re{PermutationIdentityBosonFermion}, the sum of contributions with correct accompanying coefficients in Fig.\ \ref{FigMixedInnerProduct} (middle row).
This allows us to use this permutation identity and pass to the leftmost graph in the bottom row of diagrams in Fig.\ \ref{FigMixedInnerProduct}, where we relied on the identity \re{OrthogIdentity} for $w' = w = 0$ which yielded the inner product of one-particle spin-$s$ component eigenfunctions \re{1PmatrixElement} along with the corresponding phase, \begin{align} (s - i \lambda_2) (s + i \lambda'_2) {\rm e}^{- i \pi (s + i \lambda'_2)} \vev{\Phi_s (\lambda'_2)| \Phi_s (\lambda_2)} \, . \end{align} We also included the overall factor of rapidities that stems from the right-hand side coefficient in the permutation identity \re{PermutationIdentityBosonFermion}. This completes the first level in recursive reduction. At the next step, we use the chain rule twice, at the vertices $z$ and $z'$. This procedure generates the multiplicative factors \begin{align} {\rm e}^{- i \pi (2s + 1)} a_{s+1/2} (\ft12 + s - i \lambda_1, 1 + s + i \lambda'_2) a_{s+1/2} (\ft12 + s + i \lambda'_1, 1 + s - i \lambda_2) \, , \end{align} accompanying the integral that can be computed by means of the chain rule (rightmost graph in the last row of Fig.\ \ref{FigMixedInnerProduct}), or rather the orthogonality identity, giving \begin{align} {\rm e}^{- i \pi (s + 1/2 + i \lambda'_1)} \vev{\Phi_{s + 1/2} (\lambda'_1) | \Phi_{s + 1/2} (\lambda_1)} \, . 
\end{align} Combining everything together, we realize that all phases cancel out and we end up with the anticipated orthogonality relation \begin{align} \vev{\Phi_{s,s+1/2} (\bit{\lambda}')|\Phi_{s,s+1/2} (\bit{\lambda})} & + \vev{\Phi_{s+1/2,s} (\bit{\lambda}')|\Phi_{s+1/2,s} (\bit{\lambda})} \\ &= (s - i \lambda_2) (s + i \lambda'_2) a_{s+1/2} (\ft12 + s - i \lambda_1, 1 + s + i \lambda'_2) \nonumber\\ &\times a_{s+1/2} (\ft12 + s + i \lambda'_1, 1 + s - i \lambda_2) \vev{\Phi_{s} (\lambda'_2) | \Phi_{s} (\lambda_2)} \vev{\Phi_{s + 1/2} (\lambda'_1) | \Phi_{s + 1/2} (\lambda_1)} , \nonumber \end{align} in terms of the individual one-particle component \re{1PmatrixElement} inner products defined in Eq.\ \re{Component1PinnerProduct}. Since the procedure is inductive, the above reduction suffices for the proof of the generic $N$-site case. \section{From matrix elements to wave functions} In the previous sections, we were dealing with the matrix elements \re{GenericMatrixElement} of the flux-tube operators that diagonalize the light-cone Hamiltonian \re{TotalHamiltonian}. Let us pass to the flux-tube wave function $\bit{\Psi}_s (\bit{X}; \bit{\lambda})$ of $N$ excitations, localized at supercoordinates $\bit{X} = (X_1, X_2, \dots, X_N)$, where $X_n = (x_n, \vartheta_n)$ with the bosonic component belonging to the real axis, i.e., $\Im{\rm m} [x_n] = 0$, and carrying rapidities $\bit{\lambda} = (\lambda_1, \lambda_2, \dots, \lambda_N)$, which underlies the physics of the flux tube for scattering amplitudes.
A flux-tube state $\ket{E (\bit{\lambda}) }$ can be represented in terms of the latter as \begin{align} \ket{E (\bit{\lambda}) } = \int_{\mathcal{S}} d^N \bit{x} \int d^N \bit{\vartheta} \, \bit{\Psi}_s (\bit{X}; \bit{\lambda}) \mathcal{O}_\Pi (\bit{X}) \ket{0} \, , \end{align} where the differential measures are \begin{align} d^N \bit{x} = d x_1 d x_2 \dots dx_N \, , \qquad d^N \bit{\vartheta} = d \vartheta_1 d \vartheta_2 \dots d \vartheta_N \end{align} and the integration with respect to the bosonic variables is performed over the simplex $\mathcal{S} = \{ \infty > x_N \geq x_{N-1} \geq \dots \geq x_1 \geq 0 \}$. Notice that the bosonic and fermionic content of the corresponding components trade places in the wave function compared to the superfield operator, e.g., for one particle $\bit{\Psi}_s (X_1; \lambda_1) = \Psi_{s + 1/2} (x_1; \lambda_1) + \vartheta_1 \Psi_s (x_1; \lambda_1)$, where, for instance, for the $s = 1/2$ case, the lowest and highest components are fermion and boson, respectively, i.e., opposite to the matrix element \re{1PmatrixElement}. One can easily deduce the representation of the sl$(2|1)$ generators on the space of the wave functions.
Making use of \begin{align} \mathcal{G} \ket{E (\bit{\lambda})} = \int_{\mathcal{S}} d^N \bit{x} \int d^N \bit{\vartheta} \, \bit{\Psi} (\bit{X}; \bit{\lambda}) \sum_{n = 1}^N G_n \mathcal{O}_\Pi (\bit{X}) \ket{0} \, , \end{align} where $\mathcal{G}$ acts on the Hilbert space of the flux-tube states, with $G$ its representation on flux-tube superfields $[\mathcal{G} , \mathcal{O}_\Pi (\bit{X})] = \sum_{n = 1}^N G_n \mathcal{O}_\Pi (\bit{X})$, and integrating by parts, we find the representation $\widehat{G}$ on wave functions \begin{align} \int_{\mathcal{S}} d^N \bit{x} \int d^N \bit{\vartheta} \, \bit{\Psi}_s (\bit{X}; \bit{\lambda}) \sum_{n = 1}^N G_n \mathcal{O}_\Pi (\bit{X}) \ket{0} = \int_{\mathcal{S}} d^N \bit{x} \int d^N \bit{\vartheta} \, \left( \sum_{n = 1}^N \widehat{G}_n \bit{\Psi}_s (\bit{X}; \bit{\lambda}) \right) \mathcal{O}_\Pi (\bit{X}) \ket{0} \, . \end{align} Their explicit expressions read \begin{align} & \widehat{S}^- = \partial_x \, , \qquad\qquad \widehat{S}^+ = - x^2 \partial_x + x (2 s - 1) - x \vartheta \partial_\vartheta \, , \qquad\qquad \widehat{S}^0 = - x \partial_x + s - \ft12 - \ft12 \vartheta \partial_\vartheta \, , \nonumber\\ & \widehat{B} = - \ft12 \vartheta \partial_\vartheta - s + \ft12 \, , \quad \widehat{V}^- = \partial_\vartheta \, , \quad \widehat{W}^- = \vartheta \partial_x \, , \quad \widehat{V}^+ = x \partial_\vartheta \, , \quad \widehat{W}^+ = \vartheta \left(x \partial_x + 1 - 2 s \right) \, . \end{align} \subsection{Wave function Hamiltonians} In this section, we will derive Hamiltonians acting on the space of wave functions. To start with, it is instructive to recall the bosonic case, but we defer this discussion to Appendix \ref{SL2WFhamiltonianAppendix}, which the reader should consult first. Below, we proceed directly to the sl$(2|1)$ case and address the problem in two ways: first, by integration by parts and, then, using an intertwiner.
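As a quick consistency check of the representation just written, note that on monomials $x^n \vartheta^g$ (with $g = 0, 1$) the generators act homogeneously: $\widehat{S}^-$ lowers the degree with coefficient $n$, $\widehat{S}^+$ raises it with coefficient $2s - 1 - g - n$, and $\widehat{S}^0$ is diagonal with eigenvalue $s - \ft12 - \ft12 g - n$, so the sl$(2)$ subalgebra closes as $[\widehat{S}^+, \widehat{S}^-] = -2 \widehat{S}^0$ and $[\widehat{S}^0, \widehat{S}^-] = \widehat{S}^-$. A short script with this (our) monomial encoding confirms the closure:

```python
# Monomials c * x^n * theta^g encoded as (c, n, g); the actions are read off
# from the generators above (theta d/dtheta on theta^g multiplies by g)
def S_minus(m):
    c, n, g = m
    return (c * n, n - 1, g)

def S_plus(m, s):
    c, n, g = m
    return (c * (2 * s - 1 - g - n), n + 1, g)

def S_zero(m, s):
    c, n, g = m
    return (c * (s - 0.5 - 0.5 * g - n), n, g)

s = 0.75
for n in range(6):
    for g in (0, 1):
        m = (1.0, n, g)
        # [S^+, S^-] = -2 S^0 on every monomial
        comm_pm = S_plus(S_minus(m), s)[0] - S_minus(S_plus(m, s), s)[0] if False else \
                  S_plus(S_minus(m), s)[0] - S_minus(S_plus(m, s))[0]
        assert abs(comm_pm - (-2) * S_zero(m, s)[0]) < 1e-12
        # [S^0, S^-] = S^- on every monomial
        comm_0m = S_zero(S_minus(m), s)[0] - S_minus(S_zero(m, s))[0]
        assert abs(comm_0m - S_minus(m)[0]) < 1e-12
```

The same bookkeeping extends to the odd generators, at the price of tracking the shift of $g$.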
\subsubsection{Integration by parts} Since the two-particle case contains all required elements, i.e., the bulk and boundary Hamiltonians, \begin{align} \mathcal{H}_2 = \mathcal{H}_{01} + \mathcal{H}_{12}^+ + \mathcal{H}_{12}^- + \mathcal{H}_{2\infty} \, , \end{align} as alluded to in Section \ref{MEhamiltonianSection}, we will use it as a representative example. Following the same steps as above, we can calculate the Hamiltonian for the sl$(2|1)$ wave function. The latter enters the definition of the two-particle state \begin{align*} \ket{E (\bit{\lambda})} = \int_{\mathcal{S}} d^2 \bit{x} \int d^2 \bit{\vartheta} \bit{\Psi}_s (\bit{X}; \bit{\lambda}) \mathcal{O}_\Pi (\bit{X}) \ket{0} \, . \end{align*} Starting with the action of the Hamiltonian $\mathbb{H}$ on the Hilbert space of flux-tube excitations, \begin{align} \label{IntegrationByPartsSL21} \mathbb{H} \ket{E (\bit{\lambda})} = \int_{\mathcal{S}} d^2 \bit{x} \int d^2 \bit{\vartheta}\, \bit{\Psi}_s (\bit{X}; \bit{\lambda}) \, \mathcal{H} \mathcal{O}_\Pi (\bit{X}) \ket{0} = \int_{\mathcal{S}} d^2 \bit{x} \int d^2 \bit{\vartheta} \, \left( \widehat{\mathcal{H}} \bit{\Psi}_s (\bit{X}; \bit{\lambda}) \right) \mathcal{O}_\Pi (\bit{X}) \ket{0} \, , \end{align} we immediately obtain its integral representation on the space of wave functions \begin{align} \label{TwoParticleWaveHamiltonian} \widehat{\mathcal{H}}_2 = \widehat{\mathcal{H}}_{01} + \widehat{\mathcal{H}}_{12}^+ + \widehat{\mathcal{H}}_{12}^- + \widehat{\mathcal{H}}_{2\infty} \end{align} with individual components \begin{align} \widehat{\mathcal{H}}_{01} \bit{\Psi}_s (\bit{X}) & = \int_1^{x_2/x_1} \frac{d \alpha}{\alpha - 1} \left[ \alpha^{1 - 2 s} \bit{\Psi}_s (\alpha X_1, X_2) - \frac{1}{\alpha} \bit{\Psi}_s (X_1, X_2) \right] \, , \\ \widehat{\mathcal{H}}_{2\infty} \bit{\Psi}_s (\bit{X}) & = \int_{x_1/x_2}^1 \frac{d \alpha}{1 - \alpha} \left[ \bit{\Psi}_s (X_1, \alpha x_2, \vartheta_2) - \bit{\Psi}_s (X_1, X_2) \right] \, , \\
\widehat{\mathcal{H}}_{12}^+ \bit{\Psi}_s (\bit{X}) & = \int_1^\infty \frac{d \alpha}{\alpha - 1} \\ &\times \left[ \left( \frac{\alpha x_2 - x_1}{x_2 - x_1} \right)^{1 - 2s} \bit{\Psi}_s \left( X_1, \alpha x_2, \frac{\alpha x_2 - x_1}{x_2 - x_1} \vartheta_2 - \frac{(\alpha - 1) x_2}{x_2 - x_1} \vartheta_1 \right) - \frac{1}{\alpha} \bit{\Psi}_s ( X_1, X_2) \right] \, , \nonumber\\ \widehat{\mathcal{H}}_{12}^- \bit{\Psi}_s (\bit{X}) & = \int_0^1 \frac{d \alpha}{1 - \alpha} \\ &\times \left[ \left( \frac{x_2 - \alpha x_1}{x_2 - x_1} \right)^{1 - 2s} \bit{\Psi}_s \left( \alpha x_1, \frac{x_2 - \alpha x_1}{x_2 - x_1} \vartheta_1 - \frac{(1 - \alpha) x_1}{x_2 - x_1} \vartheta_2, X_2 \right) - \bit{\Psi}_s (X_1, X_2) \right] \, , \nonumber \end{align} where, for brevity, we did not display the dependence of $\bit{\Psi}_s$ on $\bit{\lambda}$. \subsubsection{Intertwiner} A generalization of the intertwiner to the sl$(2|1)$ case is relatively straightforward. Rather than being multiplication by a function as in the bosonic case, see \cite{Belitsky:2014rba} and Appendix \ref{SL2WFhamiltonianAppendix}, it becomes an operator in Grassmann variables. Namely, it admits the following form \begin{align} \label{SL21intertwiner} \mathcal{W}_N = (2s)^{- N} \int d \vartheta'_1 d \vartheta'_2 \dots d \vartheta'_N \Big( (x_1 + \vartheta'_1 \vartheta_1) (x_{21} + \vartheta'_{21} \vartheta_{21}) \dots (x_{N,N-1} + \vartheta'_{N,N-1} \vartheta_{N,N-1}) \Big)^{2 s} \, , \end{align} and induces the change from the matrix element to the wave function representations \begin{align} \label{SL21Intertwiner} \widehat{\mathcal{H}}_N \, \mathcal{W}_N = \mathcal{W}_N \, \mathcal{H}_N \, . 
\end{align} More specifically, we deduce the following relations between the two-particle bulk and boundary Hamiltonians \begin{align} & \widehat{\mathcal{H}}^-_{12} \, \mathcal{W}_2 = \mathcal{W}_2 \, \mathcal{H}_{01} \, , \qquad \widehat{\mathcal{H}}^+_{12} \, \mathcal{W}_2 = \mathcal{W}_2 \, \mathcal{H}_{2\infty} \, , \nonumber\\ & \widehat{\mathcal{H}}_{01} \, \mathcal{W}_2 = \mathcal{W}_2 \, \mathcal{H}_{12}^- \, , \qquad \widehat{\mathcal{H}}_{2\infty} \, \mathcal{W}_2 = \mathcal{W}_2 \, \mathcal{H}_{12}^+ \, . \end{align} Though the individual components of the Hamiltonians transform differently under the operation of integration by parts and by means of the intertwiner, the total sums are obviously the same. \subsection{Wave functions} We can employ the above intertwiner to find the form of the wave functions from the eigenfunctions of matrix elements via the relation \begin{align} \label{PsiPhiIntertwining} \bit{\Psi}_s (\bit{X}; \bit{\lambda})= \mathcal{W}_N \bit{\Phi}_s (\bit{X}; \bit{\lambda}) \, . \end{align} However, as a cross check of the formalism that we developed here, it is instructive to solve for them explicitly by diagonalizing the generator of conserved charges $\widehat{D}_N$. Below, we will limit ourselves to the case of one- and two-particle excitations. For a single excitation, as a solution to the eigenvalue equation \begin{align} \widehat{D}_1 (i w) \bit{\Psi}_s (X_1; \lambda_1) = (i w + i \lambda_1 + s - \ft12) \Psi_{s+1/2} (x_1; \lambda_1) + (i w + i \lambda_1 + s) \vartheta_1 \Psi_{s} (x_1; \lambda_1) \, , \end{align} we find for the component wave functions \begin{align} \Psi_s (x_1; \lambda_1) & = x_1^{2s - 1} \Phi_s (x_1, \lambda) = x_1^{i \lambda_1 + s - 1/2} \, , \\ \Psi_{s + 1/2} (x_1; \lambda_1) & = (2s)^{-1} x_1^{2s} \Phi_{s + 1/2} (x_1, \lambda) = (2s)^{-1} x_1^{i \lambda_1 + s - 1} \, .
\end{align} These expressions are in agreement with the intertwining relation \re{PsiPhiIntertwining} with the one-particle matrix element from Eq.\ \re{1PmatrixElement}. Moving on to two excitations, the eigenfunction is decomposed in the component form as \begin{align} \bit{\Psi}_s (\bit{X}; \bit{\lambda}) = \Psi_{s+1/2, s+1/2} (\bit{x}) + \vartheta_1 \Psi_{s, s + 1/2} (\bit{x}) + \vartheta_2 \Psi_{s + 1/2, s} (\bit{x}) + \vartheta_1 \vartheta_2 \Psi_{ss} (\bit{x}) \, , \end{align} and the solution to the eigenvalue equation for $\widehat{D}_2$ \begin{align} \widehat{D}_2 (i w) \bit{\Psi}_s (\bit{X}; \bit{\lambda}) & = (i w + i \lambda_1 + s - \ft12) (i w + i \lambda_2 + s - \ft12) \Psi_{s+1/2,s+1/2} (\bit{x}; \bit{\lambda}) \\ & + (i w + i \lambda_1 + s ) (i w + i \lambda_2 + s ) \vartheta_1 \vartheta_2 \Psi_{s,s} (\bit{x}; \bit{\lambda}) \nonumber\\ & + (i w + i \lambda_1 + s - \ft12) (i w + i \lambda_2 + s) \left[ \vartheta_1 \Psi_{s,s+1/2} (\bit{x}; \bit{\lambda}) + \vartheta_2 \Psi_{s+1/2,s} (\bit{x}; \bit{\lambda}) \right] \, , \nonumber \end{align} generates the solutions \begin{align} \Psi_{s + 1/2, s + 1/2} (\bit{x}) & = - (2s)^{-2} x_1^{i \lambda_1 + s - 1/2} x_2^{i \lambda_2 + s - 1/2} \left( 1 - \frac{x_1}{x_2} \right)^{2s} {_2F_1} \left.\left( { s + \ft12 + i \lambda_1, s + \ft12 - i \lambda_2 \atop 2s + 1} \right| 1 - \frac{x_1}{x_2} \right) \, , \\ \Psi_{ss} (\bit{x}) & = x_1^{i \lambda_1 + s - 1} x_2^{i \lambda_2 + s - 1} \left( 1 - \frac{x_1}{x_2} \right)^{2s - 1} {_2F_1} \left.\left( { s + i \lambda_1, s - i \lambda_2 \atop 2s} \right| 1 - \frac{x_1}{x_2} \right) \, , \\ \label{Solpsi1} \Psi_{s, s + 1/2} (\bit{x}) & = x_1^{i \lambda_1 + s - 1/2} x_2^{i \lambda_2 + s - 1} \left( 1 - \frac{x_1}{x_2} \right)^{2s - 1} {_2F_1} \left.\left( { s + \ft12 + i \lambda_1, s - i \lambda_2 \atop 2s} \right| 1 - \frac{x_1}{x_2} \right) \, , \\ \label{Solpsi2} \Psi_{s + 1/2, s} (\bit{x}) & = - x_1^{i \lambda_2 + s} x_2^{i \lambda_1 + s - 3/2} \left( 1 -
\frac{x_1}{x_2} \right)^{2s - 1} {_2F_1} \left.\left( { s + \ft12 - i \lambda_1, s + i \lambda_2 \atop 2s} \right| 1 - \frac{x_1}{x_2} \right) \, . \end{align} A simple use of well-known connection formulas for hypergeometric functions allows one to rewrite these expressions as a sum of incoming and outgoing waves with modulated profiles. For $s = 1/2$, we reproduce the results of Ref.\ \cite{BasSchVie15}, obtained there by diagonalizing $\Omega_2$ defined in Eq.\ \re{OmegaN}. Acting with \re{TwoParticleWaveHamiltonian} on the wave function derived above, we find the expected eigenvalues \begin{align} \widehat{\mathcal{H}} \bit{\Psi}_s (\bit{X}; \bit{\lambda}) & = (E_{s + 1/2} (\lambda_1) + E_{s + 1/2} (\lambda_2)) \Psi_{s + 1/2, s + 1/2} (\bit{x}; \bit{\lambda}) + \vartheta_1 \vartheta_2 (E_{s} (\lambda_1) + E_{s} (\lambda_2)) \Psi_{ss} (\bit{x}; \bit{\lambda}) \nonumber\\ & + (E_{s + 1/2} (\lambda_1) + E_{s} (\lambda_2)) \left[ \vartheta_1 \Psi_{s, s + 1/2} (\bit{x}; \bit{\lambda}) + \vartheta_2 \Psi_{s + 1/2, s} (\bit{x}; \bit{\lambda}) \right] \, . \end{align} The intertwiner involves all components at a given order in the Grassmann decomposition, \begin{align} \label{MixedIntertwiner} \mathcal{W}_2 \Phi^{[g]}_{2} = \Psi^{[g]}_{2} \, , \end{align} where $\Psi^{[g]}$ with $N \geq g \geq 0$ is defined in the same fashion as for the matrix element.
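The connection formulas alluded to above are the standard relations between ${}_2F_1$ expansions around $z = 0$ and $z = 1$; applied at argument $1 - x_1/x_2$, they trade each hypergeometric function for two power behaviors in $x_1/x_2$, i.e., incoming and outgoing waves. A numerical sketch of the formula itself, with real sample parameters of our choosing (the physical case merely has complex ones):

```python
import math

def hyp2f1(a, b, c, z, terms=400):
    """Truncated Gauss series for 2F1(a,b;c;z), valid for |z| < 1."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

a, b, c, z = 0.3, 0.45, 1.2, 0.4
# Re-expansion of 2F1(a,b;c;z) around z = 1: two power behaviors in 1 - z
A = math.gamma(c) * math.gamma(c - a - b) / (math.gamma(c - a) * math.gamma(c - b))
B = math.gamma(c) * math.gamma(a + b - c) / (math.gamma(a) * math.gamma(b))
lhs = hyp2f1(a, b, c, z)
rhs = (A * hyp2f1(a, b, a + b - c + 1, 1 - z)
       + B * (1 - z) ** (c - a - b) * hyp2f1(c - a, c - b, c - a - b + 1, 1 - z))
assert abs(lhs - rhs) < 1e-12
```

With the parameters of Eqs.\ \re{Solpsi1}, \re{Solpsi2}, the exponent $c - a - b$ becomes purely oscillatory in the rapidities, which is what produces the plane-wave asymptotics.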
Using the two-particle $\mathcal{W}_2$, we can verify the above formulas by means of well-known relations between hypergeometric functions \begin{align} \label{2PbosonicWF} \Psi_{ss} (\bit{x}; \bit{\lambda}) & = (x_1 x_{21})^{2s - 1} \Phi_{ss} (\bit{x}; \bit{\lambda}) \, , \\ \Psi_{s+1/2, s+1/2} (\bit{x}; \bit{\lambda}) & = - (2s)^{-2} (x_1 x_{21})^{2s} \Phi_{s+1/2, s+1/2} (\bit{x}; \bit{\lambda}) \, , \\ \label{2PfermionicWF12} \Psi_{s, s+1/2} (\bit{x}; \bit{\lambda}) &= (2s)^{-1} (x_1 x_{21})^{2s - 1} \left[ x_1 \Phi_{s,s+1/2} + x_2 \Phi_{s+1/2, s} \right] (\bit{x}; \bit{\lambda}) \, , \\ \label{2PfermionicWF21} \Psi_{s+1/2,s} (\bit{x}; \bit{\lambda}) &= - (2s)^{-1} (x_1 x_{21})^{2s - 1} \left[ x_1 \Phi_{s,s+1/2} + x_1 \Phi_{s+1/2, s} \right] (\bit{x}; \bit{\lambda}) \, . \end{align} These indeed coincide with Eqs.\ \re{PhiSS} -- \re{PhiS/2S/2}. \section{Inner product on the line} Making use of the above properties of the intertwining operator, we can introduce the following inner product for the boundary value of the matrix element eigenfunctions on the real line \begin{align} ( \bit{\Phi}' | \bit{\Phi} ) \equiv \int_{\mathcal{S}} d^N \bit{x} \int d^N \bit{\vartheta} \, \big( \bit{\Phi}' (\bit{X}) \big)^\ast \mathcal{W}_N \bit{\Phi} (\bit{X}) \, , \end{align} where $\bit{X} = (X_1, \dots, X_N)$ with $X_n = (x_n, \vartheta_n)$. Employing Eqs.\ \re{SL21Intertwiner} and \re{IntegrationByPartsSL21}, it is straightforward to verify that the Hamiltonian is hermitian with respect to this inner product, \begin{align} ( \bit{\Phi}' | \mathcal{H}_N \bit{\Phi} ) = ( \mathcal{H}_N \bit{\Phi}' | \bit{\Phi} ) \, . \end{align} We can relate the above inner product to the one in the upper half-plane of the complex plane. 
We substitute the representation based on the defining property of the reproducing kernel \begin{align} \bit{\Phi} (\bit{X}) = \int [D^N \bit{Z}]_s \mathbb{K}_s (\bit{X}, \bit{Z}^\ast) \bit{\Phi} (\bit{Z}) \, , \end{align} where $\bit{X} = (X_1, \dots, X_N)$ belongs to the real axis while $\bit{Z} = (Z_1, \dots, Z_N)$ with $Z_n = (z_n, \theta_n)$ is complex. The measure and reproducing kernels are \begin{align} [D^N \bit{Z}]_s = \prod_{n = 1}^N [D Z_n]_s \, , \qquad \mathbb{K}_s (\bit{X}, \bit{Z}^\ast) = \prod_{n = 1}^N \mathbb{K}_s (X_n, Z_n^\ast) \, , \end{align} with \begin{align} \mathbb{K}_s (X, Z^\ast) = (x - z^\ast + \vartheta \theta^\ast)^{-2s} \, . \end{align} Then we deduce the relation \begin{align} \label{RelationBetweenInnerProducts} ( \bit{\Phi}' | \bit{\Phi} ) = \vev{ \bit{\Phi}' | \mathcal{X}_N | \bit{\Phi}} \, , \end{align} where the operator $\mathcal{X}_N$ is determined by its integral kernel \begin{align} \mathcal{X}_N \bit{\Phi} (\bit{Z}) & = \int [D^N \bit{W}]_s \big( \mathbb{K}_s (\bit{Z}) | \mathbb{K}_s (\bit{W}^\ast) \big) \bit{\Phi} (\bit{W}) \\ & = \int_{\mathcal{S}} d^N \bit{x} \int d^N \bit{\vartheta} \, \mathbb{K}_s (\bit{Z}, \bit{X}) \mathcal{W}_N \bit{\Phi} (\bit{X}) \, . \end{align} Finally, using the properties \begin{align} & (S^0_Z + B_Z) \mathbb{K}_s (Z, X) = - (S^0_X - B_X) \mathbb{K}_s (Z, X) \, , \qquad S^\pm_Z \mathbb{K}_s (Z, X) = - S^\pm_X \mathbb{K}_s (Z, X) \, , \\ & W^+_Z \mathbb{K}_s (Z, X) = - V^+_X \mathbb{K}_s (Z, X) \, , \qquad\qquad\qquad\qquad V^-_Z \mathbb{K}_s (Z, X) = - W^-_X \mathbb{K}_s (Z, X) \, , \end{align} one can prove commutativity with $D_N$ \begin{align} [\mathcal{X}_N, D_N] = 0 \, . \end{align} \subsection{Eigenvalues of $\mathcal{X}$} Let us turn to the evaluation of the eigenvalues of the operator $\mathcal{X}_N$ on the eigenfunctions $\Phi_s$.
\subsubsection{One excitation} For the eigenfunction of the one-particle matrix element, we find \begin{align} \mathcal{X}_1 \bit{\Phi}_s (Z_1; \lambda_1) = \int d x_1 \int d \vartheta_1 \mathbb{K}_s (Z_1, X_1) \mathcal{W}_1 \bit{\Phi}_s (X_1; \lambda_1) \, , \end{align} where, see Eq. \re{SL21intertwiner}, \begin{align} \mathcal{W}_1 \bit{\Phi}_s (X_1) = (2s)^{-1} \int d \vartheta'_1 (x_1 + \vartheta'_1 \vartheta_1)^{2s} \bit{\Phi}_s (x_1, \vartheta'_1) \, . \end{align} Substituting Eq.\ \re{1PmatrixElement}, we find \begin{align} \mathcal{X}_1 \bit{\Phi}_s (Z_1; \lambda_1) = \mathcal{X}_s (\lambda_1) \Phi_{s} (z_1; \lambda_1) + \mathcal{X}_{s + 1/2} (\lambda_1) \theta_1 \Phi_{s + 1/2} (z_1; \lambda_1) \, , \end{align} with the eigenvalues arising from the evaluation of the integral \begin{align} \label{1PXeigenvalue} \mathcal{X}_s (\lambda_1) = \int_0^\infty d y (y - 1)^{- 2 s} y^{i \lambda_1 + s - 1} = {\rm e}^{- \pi (\lambda_1 + s)} \mu_s (\lambda_1) \, , \end{align} where the spin-$s$ flux-tube measure was introduced in Eq.\ \re{FTmeasureMUs}. \subsubsection{Two excitations and more} \label{TwoParticleIntegralsSection} In the two-particle case, the action of $\mathcal{X}_2$ reads explicitly \begin{align} \mathcal{X}_2 \bit{\Phi}_s (\bit{Z}; \bit{\lambda}) & = \int_{\mathcal{S}} d^2 \bit{x} \int d^2 \bit{\vartheta} (z_1 - x_1 + \theta_1 \vartheta_1)^{-2s} (z_2 - x_2 + \theta_2 \vartheta_2)^{-2s} \nonumber\\ & \times \int d^2 \bit{\vartheta}' (x_1 + \vartheta'_1 \vartheta_1)^{2s} (x_{21} + \vartheta'_{21} \vartheta_{21})^{2s} \bit{\Phi}_s (x_1, \vartheta'_1, x_2, \vartheta'_2; \bit{\lambda}) \, .
\end{align} For the lowest and highest Grassmann components, i.e., \begin{align} \Phi^{[0]}_2 (\bit{X}; \bit{\lambda}) = \Phi_{ss} (\bit{X}; \bit{\lambda}) \, , \qquad \Phi^{[2]}_2 (\bit{X}; \bit{\lambda}) = \vartheta_1 \vartheta_2 \Phi_{s+1/2, s+1/2} (\bit{X}; \bit{\lambda}) \, , \end{align} according to the terminology of Section \ref{AlgebraicEigenfunctionsSection}, we get the anticipated result as in the purely bosonic model \cite{Belitsky:2014rba} for different values of the conformal spin, \begin{align} \mathcal{X}_2 \Phi^{[0]}_2 (\bit{Z}; \bit{\lambda}) = \mathcal{X}_s (\lambda_1) \mathcal{X}_s (\lambda_2) \Phi^{[0]}_2 (\bit{Z}; \bit{\lambda}) \, , \qquad \mathcal{X}_2 \Phi^{[2]}_2 (\bit{Z}; \bit{\lambda}) = \mathcal{X}_{s + 1/2} (\lambda_1) \mathcal{X}_{s + 1/2} (\lambda_2) \Phi^{[2]}_2 (\bit{Z}; \bit{\lambda}) \, . \end{align} These results can be easily found going to the asymptotic region $x_2 \gg x_1$ and making use of the asymptotic form of the eigenfunctions $\Phi_{ss} (\bit{x}; \bit{\lambda}) \simeq x_1^{i \lambda_1 - s} x_2^{i \lambda_2 - s}$ and the same for $\Phi_{s+1/2, s+1/2}$ with an obvious shift of the spin. For the mixed components, \begin{align} \Phi^{[1]}_2 (\bit{X}; \bit{\lambda}) = \vartheta_1 \Phi_{s+1/2, s} (\bit{x}; \bit{\lambda}) + \vartheta_2 \Phi_{s, s+1/2} (\bit{x}; \bit{\lambda}) \, , \end{align} the situation is trickier and we want to perform the diagonalization exactly. Namely, after performing the Grassmann integration we obtain \begin{align} \mathcal{X}_2 \Phi^{[1]}_2 (\bit{Z}; \bit{\lambda}) &= \theta_1 \int_{\mathcal{S}} d^2 \bit{x} (z_1 - x_1)^{- 2 s - 1}(z_2 - x_2)^{- 2s} \Psi_{s+1/2, s} (\bit{x}; \bit{\lambda}) \\ & + \theta_2 \int_{\mathcal{S}} d^2 \bit{x} (z_1 - x_1)^{- 2s}(z_2 - x_2)^{- 2s - 1} \Psi_{s, s+1/2} (\bit{x}; \bit{\lambda}) \, , \end{align} where the integrand is given in terms of two-particle wave functions \re{2PfermionicWF12} and \re{2PfermionicWF21}. 
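As a cross-check of the one-particle input \re{1PXeigenvalue}, its companion Euler beta integral with the regular kernel, $\int_0^\infty d y \, (1 + y)^{-2s} y^{i \lambda + s - 1} = \Gamma(s + i\lambda)\Gamma(s - i\lambda)/\Gamma(2s)$, carries the same rapidity dependence as the flux-tube measure without the phase prefactor. For $s = 1/2$ the substitution $y = {\rm e}^t$ turns it into the Fourier transform of $1/(2\cosh(t/2))$, with the closed form $\pi/\cosh(\pi\lambda)$; this is a standard identity quoted here as a sanity check, not a formula from the text:

```python
import cmath
import math

def beta_integral_half(lam, h=0.05, cutoff=60.0):
    """Trapezoidal evaluation of int dt e^{i lam t} / (2 cosh(t/2)),
    i.e., int_0^inf dy (1+y)^(-1) y^(i lam - 1/2) after y = e^t."""
    n = int(cutoff / h)
    total = 0j
    for k in range(-n, n + 1):
        t = k * h
        total += cmath.exp(1j * lam * t) / (2.0 * math.cosh(t / 2.0))
    return h * total

for lam in (0.3, 0.8, 1.5):
    val = beta_integral_half(lam)
    # Closed form: Gamma(1/2 + i lam) Gamma(1/2 - i lam) = pi / cosh(pi lam)
    assert abs(val - math.pi / math.cosh(math.pi * lam)) < 1e-10
```

The exponential fall-off in $\lambda$ visible in the closed form is the familiar suppression of large-rapidity flux-tube excitations.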
A calculation, following the steps outlined in Appendix C.2 of Ref.\ \cite{Belitsky:2014rba}, demonstrates that \begin{align} \label{IntPsi2Topsi1} \int_{\mathcal{S}} d^2 \bit{x} (z_1 - x_1)^{- 2s - 1}(z_2 - x_2)^{- 2s} \Psi_{s+1/2,s} (x_1, x_2) = \mathcal{X}_{s+1/2, s} (\bit{\lambda}) \Phi_{s+1/2,s} (z_1, z_2) \, , \end{align} and \begin{align} \label{IntPsi1Topsi2} \int_{\mathcal{S}} d^2 \bit{x} (z_1 - x_1)^{- 2s}(z_2 - x_2)^{- 2s-1} \Psi_{s,s+1/2} (x_1, x_2) = \mathcal{X}_{s+1/2, s} (\bit{\lambda}) \Phi_{s,s+1/2} (z_1, z_2) \, , \end{align} so that $\Phi^{[1]}_2 (\bit{Z}; \bit{\lambda})$ is an eigenfunction of $\mathcal{X}_2$ \begin{align} \label{MixedXeigenvalue} \mathcal{X}_2 \Phi^{[1]}_2 (\bit{Z}; \bit{\lambda}) = \mathcal{X}_{s+1/2, s} (\bit{\lambda}) \Phi^{[1]}_2 (\bit{Z}; \bit{\lambda}) \, , \end{align} with the same eigenvalue for both of its components \begin{align} \mathcal{X}_{s+1/2, s} (\bit{\lambda}) = \mathcal{X}_{s + 1/2} (\lambda_1) \mathcal{X}_s (\lambda_2) \, . \end{align} It is expressed in terms of the one-particle eigenvalues \re{1PXeigenvalue}. This result immediately generalizes to any $N$. For the Grassmann degree-$g$ $N$-particle case, we have \begin{align} \mathcal{X}_N \Phi^{[g]}_N (\bit{Z}; \bit{\lambda}) = \left( \prod_{n = 1}^g \mathcal{X}_{s + 1/2} (\lambda_n) \right) \left( \prod_{n = g + 1}^N \mathcal{X}_{s} (\lambda_n) \right) \Phi^{[g]}_N (\bit{Z}; \bit{\lambda}) \, , \end{align} again observing factorization of multiparticle eigenvalues. \section{Square transitions} \label{SectionBoxTranstitions} The $N$-particle wave functions have to be orthogonal with respect to the so-called square transitions, i.e., when both the incoming and outgoing states are in the same conformal frame.
Namely, we define \begin{align} \bit{B} (\bit{\lambda}| \bit{\lambda}' ) & \equiv \vev{E (\bit{\lambda}') | E (\bit{\lambda}) } \nonumber\\ & = \int d^N \bit{X}' \int d^N \bit{X} \, \big( \bit{\Psi}_s (\bit{X}'; \bit{\lambda}') \big)^\ast \bit{G} (\bit{X}', \bit{X}) \bit{\Psi}_s (\bit{X}; \bit{\lambda}) \, , \end{align} where $\bit{G}$ is a product \begin{align} \bit{G} (\bit{X}', \bit{X}) = \prod_{n=1}^N G (X'_n, X_n) \end{align} of supersymmetric propagators on the real axis \begin{align} G (X'_n, X_n) = \left( x_n + x'_n + \vartheta_n \vartheta'_n \right)^{- 2 s} \, . \end{align} We will demonstrate its relation to the inner product of the matrix element eigenfunctions in the upper half of the complex plane for the two-to-two transition. As before, since components of different Grassmann degree do not talk to each other, we would like to keep track of these by using a marker variable as in Section \ref{OrthogonalitySection}. So the decomposition of the supersquare transition into three independent Grassmann components is \begin{align} \bit{B} (\bit{\lambda} |\bit{\lambda}') = B_{s+1/2,s+1/2|s+1/2,s+1/2} (\bit{\lambda} |\bit{\lambda}' ) + \varepsilon \varepsilon' B_{s,s+1/2|s,s+1/2} (\bit{\lambda} | \bit{\lambda}' ) + (\varepsilon \varepsilon')^2 B_{ss|ss} (\bit{\lambda} | \bit{\lambda}' ) \, . \end{align} These are related via the equations \begin{align} B_{s+1/2,s+1/2|s+1/2,s+1/2} = B_{s+1/2,s+1/2} \, , \quad B_{s,s+1/2|s,s+1/2} = B_{s,s+1/2} + B_{s+1/2,s} \, , \quad B_{ss|ss} = B_{ss} \, , \end{align} to individual integrals $B_{s_1 s_2}$ involving spin-$(s_1, s_2)$ wave functions connected with propagators from top to bottom of the square \begin{align} B_{s_1s_2} (\bit{\lambda} | \bit{\lambda}') = \int_{\mathcal{S}'} d^2 \bit{x}' \int_{\mathcal{S}} d^2 \bit{x} \, \frac{\big( \Psi_{s_1 s_2} (\bit{x}'; \bit{\lambda}') \big)^\ast \Psi_{s_1 s_2} (\bit{x}; \bit{\lambda}) }{(x'_1 + x_1)^{2 s_1} (x'_2 + x_2)^{2 s_2}} \, .
\end{align} These integrals were computed in Section \ref{TwoParticleIntegralsSection}, with the result \begin{align} \int_{\mathcal{S}} d^2 \bit{x}\, \Psi_{s_1 s_2} (\bit{x}; \bit{\lambda}) (x'_1 + x_1)^{- 2s_1} (x'_2 + x_2)^{-2s_2} = \mathcal{X}_{s_1 s_2} (\bit{\lambda}) \Phi_{s_1 s_2} (\bit{x}'; \bit{\lambda}) \, , \end{align} where $\mathcal{X}_{s_1 s_2} (\bit{\lambda})$ is the eigenvalue of the operator $\mathcal{X}_2$, such that \begin{align} B_{s_1 s_2} (\bit{\lambda} | \bit{\lambda}') \equiv (\Phi_{s_1s_2} (\bit{\lambda}') | \Phi_{s_1s_2} (\bit{\lambda})) = \mathcal{X}_{s_1 s_2} (\bit{\lambda}) \int_{\mathcal{S}'} d^2 \bit{x}' \big( \Psi_{s_1 s_2} (\bit{x}'; \bit{\lambda}') \big)^\ast \Phi_{s_1s_2} (\bit{x}'; \bit{\lambda}) \, . \end{align} Since the wave functions $\Psi$ of the top and bottom components are related to the matrix elements $\Phi$ by means of a multiplicative factor of bosonic coordinates, we recognize in the right-hand side of the above relation the inner product in the upper half plane for $\Phi$. Their orthogonality was demonstrated in Ref.\ \cite{Belitsky:2014rba}, and recapitulated above in Section \ref{SectionOrthogonalityPhi}. Only the mixed components require special attention.
Starting from the relation between the inner products \re{RelationBetweenInnerProducts}, we can extract the mixed components and find \begin{align} & \int_{\mathcal{S}} d^2 \bit{x} \big[ \left( \Phi_{s+1/2, s} (\bit{x}; \bit{\lambda}') \right)^\ast \Psi_{s+1/2, s} (\bit{x}; \bit{\lambda}) + \left( \Phi_{s, s+1/2} (\bit{x}; \bit{\lambda}') \right)^\ast \Psi_{s, s+1/2} (\bit{x}; \bit{\lambda}) \big] \nonumber\\ &\qquad = \int_{\mathcal{S}} d^2 \bit{x} \big[ \left( \Psi_{s+1/2, s} (\bit{x}; \bit{\lambda}') \right)^\ast \Phi_{s+1/2, s} (\bit{x}; \bit{\lambda}) + \left( \Psi_{s, s+1/2} (\bit{x}; \bit{\lambda}') \right)^\ast \Phi_{s, s+1/2} (\bit{x}; \bit{\lambda}) \big] \nonumber\\ &\qquad\qquad = \mathcal{X}_{s+1/2, s} (\bit{\lambda}) \left\{ \vev{ \Phi_{s+1/2, s} (\bit{\lambda}') | \Phi_{s+1/2, s} (\bit{\lambda})} + \vev{ \Phi_{s,s+1/2} (\bit{\lambda}') | \Phi_{s,s+1/2} (\bit{\lambda})} \right\} \, . \end{align} On the right-hand side of the above equation we used the eigenvalue equation \re{MixedXeigenvalue}, while on the left-hand side we employed the relations between the matrix elements and wave functions \re{2PfermionicWF12} and \re{2PfermionicWF21}. The right-hand side of the above equation was calculated in Section \ref{SectionOrthogonalityPhi} and demonstrates orthogonality of the wave functions with respect to the square transitions. Generalization to $N$-particle square transitions goes along the same lines, making use of the results of the previous section. \section{Pentagon transitions} \label{SectionPentagonTranstitions} The $N$-particle super-wave functions define the pentagon transitions, i.e., the building blocks of the Operator Product Expansion for scattering amplitudes, as was reviewed in the Introduction.
Namely \begin{align} \bit{P} (\bit{\lambda} | \bit{\lambda}' ) & \equiv \vev{E (\bit{\lambda}') | \mathcal{P} | E (\bit{\lambda}) } \nonumber\\ & = \int_{\mathcal{S}'} d^N \bit{X}' \int_{\mathcal{S}} d^N \bit{X} \, \big( \bit{\Psi}'_s (\bit{X}'; \bit{\lambda}') \big)^{\ast} \bit{G} (\bit{X}', \bit{X}) \bit{\Psi}_s (\bit{X}; \bit{\lambda}) \, , \end{align} where, compared to the just discussed box transitions, the wave function in the final state is in a different conformal frame (to be specified later on) with respect to the initial one. The reduction of the $N$-particle pentagon to the $(N-1)$-particle pentagon goes through the same chain of transformations as the inductive proof of the orthogonality condition. Thus we will demonstrate it for the first non-trivial case, i.e., two-site wave functions. In complete analogy with the above consideration, one finds the following component expansion for the two-to-two pentagon transition \begin{align} \bit{P} (\bit{\lambda} | \bit{\lambda}' ) = P_{s+1/2,s+1/2|s+1/2,s+1/2} (\bit{\lambda} | \bit{\lambda}' ) + \varepsilon \varepsilon' P_{s,s+1/2|s,s+1/2} (\bit{\lambda} | \bit{\lambda}' ) + (\varepsilon \varepsilon')^2 P_{ss|ss} (\bit{\lambda} | \bit{\lambda}' ) \, , \end{align} where we adopted the notation $P_{s_1s_2|s'_1,s'_2}$ used in the pentagon approach \cite{Basso:2013aha,Belitsky:2014rba,Basso:2014koa,Belitsky:2014sla,Basso:2014nra,Belitsky:2014lta,Belitsky:2015efa,Basso:2014hfa,Basso:2014jfa,Basso:2015rta,Fioravanti:2015dma,Belitsky:2015qla,Bonini:2015lfr,Belitsky:2015lzw} for particles with spins $(s_1,s_2)$ undergoing a transition to particles with spins $(s'_1,s'_2)$.
These are related via the equations \begin{align} P_{s+1/2,s+1/2|s+1/2,s+1/2} = P_{s+1/2,s+1/2} \, , \quad P_{s+1/2,s|s+1/2,s} = P_{s,s+1/2} + P_{s+1/2,s} \, , \quad P_{ss|ss} = P_{ss} \, , \end{align} to integrals involving an overlap of wave functions in different conformal frames \begin{align} P_{s_1,s_2} (\bit{\lambda}|\bit{\lambda}') & = \int_{\mathcal{S}'} d^2 \bit{x}' \int_{\mathcal{S}} d^2 \bit{x} \frac{\left( \Psi_{s_1s_2} (\bit{x}'; \bit{\lambda}') \right)^\ast \Psi_{s_1s_2} (\bit{x}; \bit{\lambda})}{(x_1 + x'_1)^{2 s_1} (x_2 + x'_2)^{2s_2}} \, . \end{align} Before we move on to their calculation, we will take a detour by calculating the inverse wave functions first. \subsection{Inversion of wave functions} As we will show in the next subsection, the pentagon transitions can be reduced to the inner product of matrix elements inverted with respect to the origin, with one of them shifted away from it. Thus, we will introduce the operation of inversion \begin{align} z \to z^I = 1/z \, , \end{align} and construct the resulting wave functions. To start with, the spin-$s$ measure changes according to the rule \begin{align} [D z]_{s} \to [D z]^I_{s} = (z z^\ast)^{- 2 s} [D z]_{s} \, . \end{align} Since the same-flavor wave functions and corresponding pentagons were already discussed in Ref.\ \cite{Belitsky:2014rba}, we will not repeat them here. Thus, we address only the mixed-flavor case.
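As a guide to the derivations below, the elementary algebraic step behind such inversions is, for any propagator-type factor,
\begin{align}
\frac{1}{z} - w = - \frac{w}{z} \left( z - \frac{1}{w} \right) ,
\end{align}
so that a wave function evaluated at inverted points $z^I_n = 1/z_n$ reproduces factors of the original form at the expense of powers of the coordinates, which assemble into overall scaling prefactors with exponents set by the conformal weights.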
For $\Phi_{s+1/2, s}$ in Eq.\ \re{psi1Two}, we have \begin{align} \Phi^I_{s + 1/2, s} (\bit{z}; \bit{\lambda}) & \equiv {\rm e}^{i \pi (2s + 1)} z_1^{- 2 s - 1} z_2^{- 2s} \, \Phi_{s + 1/2, s} (\bit{z}^I; \bit{\lambda}) \nonumber\\ &= (s + i \lambda_2) \int [D z]_{s + 1/2} \, z^{- i \lambda_1 - s - 1/2} (z_1 - z^\ast)^{- i \lambda_2 - s - 1} (z_2 - z^\ast)^{i \lambda_2 - s} z_2^{- i \lambda_2 - s} \, , \end{align} where $z_1^{- 2 s - 1} z_2^{- 2s}$ is the scaling factor with exponents proportional to the conformal weights of the points $z_1$ and $z_2$, and the overall phase factor was introduced to get rid of the one emerging from the inversion. Similarly we find for \re{psi2Two}, \begin{align} \Phi^I_{s,s+1/2} (\bit{z}; \bit{\lambda}) & \equiv {\rm e}^{i \pi (2s + 1)} z_1^{- 2 s} z_2^{- 2s - 1} \, \Phi_{s,s + 1/2} (\bit{z}^I; \bit{\lambda}) \nonumber\\ &= (s - i \lambda_2) \int [D z]_{s + 1/2} z^{- i \lambda_1 - s - 1/2} (z_1 - z^\ast)^{- i \lambda_2 - s} (z_2 - z^\ast)^{i \lambda_2 - s - 1} z_2^{- i \lambda_2 - s} \, . \end{align} The graphical representation for $\Phi^I_{s+1/2,s}$ is shown in Fig.\ \ref{FigPyramid} (on the right panel). \subsection{Mixed pentagons} Let us calculate the pentagon transitions corresponding to the wave functions $\Psi_{s+1/2,s}$ and $\Psi_{s,s+1/2}$. They are \begin{align} P_{s,s+1/2} & = \int_{\mathcal{S}'} d^2 \bit{x}' \int_{\mathcal{S}} d^2 \bit{x} \frac{\left( \Psi_{s,s+1/2} (\bit{x}'; \bit{\lambda}') \right)^\ast \Psi_{s,s+1/2} (\bit{x}; \bit{\lambda})}{(x_1 + x'_1)^{2 s} (x_2 + x'_2)^{2s+1}} \, , \\ P_{s+1/2,s} & = \int_{\mathcal{S}'} d^2 \bit{x}' \int_{\mathcal{S}} d^2 \bit{x} \frac{\left( \Psi_{s+1/2,s} (\bit{x}'; \bit{\lambda}') \right)^\ast \Psi_{s+1/2,s} (\bit{x}; \bit{\lambda})}{(x_1 + x'_1)^{2 s + 1} (x_2 + x'_2)^{2s}} \, , \end{align} where the wave functions are connected point-by-point with spin-$s$ propagators. They are related to the matrix elements via Eqs.\ \re{2PfermionicWF12} and \re{2PfermionicWF21}.
Notice that the wave function in the out-state is in a different conformal frame with respect to the incoming ones, i.e., \begin{align} \Psi'_{s_1s_2} (\bit{x}'; \bit{\lambda}) = \left( \frac{\partial x''_1}{\partial x'_1} \right)^{1 - s_1} \left( \frac{\partial x''_2}{\partial x'_2} \right)^{1 -s_2} \Psi_{s_1 s_2} (\bit{x}''; \bit{\lambda}) \, , \qquad x'' = \frac{x'}{1 - x'} \, . \end{align} Making use of the integrals displayed in Eqs.\ \re{IntPsi2Topsi1} and \re{IntPsi1Topsi2}, we can rewrite the above pentagons as \begin{align} P_{s,s+1/2} (\bit{\lambda}'|\bit{\lambda}) & = \mathcal{X}_{s+1/2,s} (\bit{\lambda}) \int_{\mathcal{S}'} d^2 \bit{x}' \left( \Psi'_{s,s+1/2} (\bit{x}';\bit{\lambda}') \right)^\ast \Phi_{s,s+1/2} (\bit{x}';\bit{\lambda}) \, , \\ P_{s+1/2,s} (\bit{\lambda}'|\bit{\lambda}) & = \mathcal{X}_{s+1/2,s} (\bit{\lambda}) \int_{\mathcal{S}'} d^2 \bit{x}' \left( \Psi'_{s+1/2,s} (\bit{x}';\bit{\lambda}') \right)^\ast \Phi_{s+1/2,s} (\bit{x}';\bit{\lambda}) \, . \end{align} It is important to realize that individually we cannot relate these integrals to the inner product on the line, since the intertwiner $\mathcal{W}$ (see Eq.\ \re{SL21intertwiner}) acts on the superwave function $\Phi_2^{[2]} = \theta_1 \Phi_{s+1/2,s} + \theta_2 \Phi_{s, s+1/2}$, not its separate components $\Phi_{s_1 s_2}$, as shown in Eq.\ \re{MixedIntertwiner}. Further, to relate the product on the line to the one in the upper half-plane, one has to take into account that the operator $\mathcal{X}$ has a well-defined eigenvalue again only on the total mixed superfunction \re{MixedXeigenvalue}, not on its components in the Grassmann decomposition.
This immediately implies that (as shown in Section \ref{SectionBoxTranstitions}) \begin{align} & \int_{\mathcal{S}} d^2 \bit{x} \Big[ \left( \Psi'_{s+1/2,s} (\bit{x}; \bit{\lambda}') \right)^\ast \Phi_{s+1/2,s} (\bit{x}; \bit{\lambda}) + \left( \Psi'_{s,s+1/2} (\bit{x}; \bit{\lambda}') \right)^\ast \Phi_{s,s+1/2} (\bit{x}; \bit{\lambda}) \Big] \nonumber\\ &\qquad = \mathcal{X}_{s+1/2,s} (\bit{\lambda}) \Big[ \vev{ \Phi'_{s+1/2,s} (\bit{\lambda}') | \Phi_{s+1/2,s} (\bit{\lambda}) } + \vev{ \Phi'_{s,s+1/2} (\bit{\lambda}') | \Phi_{s,s+1/2} (\bit{\lambda}) } \Big] \, . \end{align} Therefore, we can relate the mixed pentagon to the sum of the inner products of the matrix element eigenfunctions in the upper half of the complex plane, i.e., \begin{align} P_{s+1/2,s} (\bit{\lambda}' |\bit{\lambda}) + P_{s,s+1/2} (\bit{\lambda}' |\bit{\lambda}) = \vev{ \Phi'_{s+1/2,s} (\bit{\lambda}') | \Phi_{s+1/2,s} (\bit{\lambda})} + \vev{ \Phi'_{s,s+1/2} (\bit{\lambda}') | \Phi_{s,s+1/2} (\bit{\lambda})} \, .
\end{align} The latter can be rewritten as \begin{align} \label{PentagonsInnerProducts} \vev{ \Phi'_{s+1/2,s} (\bit{\lambda}') | \Phi_{s+1/2,s} (\bit{\lambda})} & + \vev{ \Phi'_{s,s+1/2} (\bit{\lambda}') | \Phi_{s,s+1/2} (\bit{\lambda})} \\ & = \vev{ \Phi^I_{s+1/2,s} (\bit{\lambda}'; 0) | \Phi^I_{s+1/2,s} (\bit{\lambda}; 1)} + \vev{ \Phi^I_{s,s+1/2} (\bit{\lambda}'; 0) | \Phi^I_{s,s+1/2} (\bit{\lambda}; 1)} \, , \nonumber \end{align} in terms of the inverted eigenfunctions at position $- \gamma$, \begin{align} \Phi^I_{s+1/2,s} (\bit{\lambda}; \gamma) & = (s + i \lambda_2) \nonumber\\ &\times \int [D z]_{s + 1/2} \, (z + \gamma)^{- i \lambda_1 - s - 1/2} (z_1 - z^\ast)^{- i \lambda_2 - s - 1} (z_2 - z^\ast)^{i \lambda_2 - s} (z_2 + \gamma)^{- i \lambda_2 - s} \, , \\ \Phi^I_{s,s+1/2} (\bit{\lambda}; \gamma) &= (s - i \lambda_2) \nonumber\\ &\times \int [D z]_{s + 1/2} (z + \gamma)^{- i \lambda_1 - s - 1/2} (z_1 - z^\ast)^{- i \lambda_2 - s} (z_2 - z^\ast)^{i \lambda_2 - s - 1} (z_2 + \gamma)^{- i \lambda_2 - s} \, . \end{align} Let us calculate the transition \begin{align} T^\gamma_{s+1/2,s} (\bit{\lambda}'|\bit{\lambda}) = \vev{ \Phi^I_{s+1/2,s} (\bit{\lambda}'; 0) | \Phi^I_{s+1/2,s} (\bit{\lambda}; \gamma)} + \vev{ \Phi^I_{s,s+1/2} (\bit{\lambda}'; 0) | \Phi^I_{s,s+1/2} (\bit{\lambda}; \gamma)} \, , \end{align} for generic values of $\gamma$. The calculation of the inner product on the right-hand side of this equation goes along the same lines as the evaluation of the inner product discussed in Section \ref{SectionOrthogonalityPhi}, this time only from left to right.
The first step, shown in the first line of Fig.\ \ref{FigPentagon}, consists in the integration over the vertex at $z_1$, making use of the chain rule \re{ChainRules} and pulling out the overall factor $$ {\rm e}^{- i \pi s} a_s (s + i \lambda_2, s - i \lambda'_2) $$ from both contributions, which yields the relative coefficient for the first graph \begin{align} {\rm e}^{- i \pi/2} \frac{(s + i \lambda_2)(s - i \lambda'_2) a_{s+1/2} (1 + s + i \lambda_2, 1 + s - i \lambda'_2)}{a_s (s + i \lambda_2, s - i \lambda'_2)} = 2 s (\lambda_2 - \lambda'_2) \, , \end{align} as shown in Fig.\ \ref{FigPentagon} (middle row diagrams). \begin{figure}[p] \begin{center} \mbox{ \begin{picture}(0,490)(260,0) \put(0,-50){\insertfig{30}{pentagonreduction}} \end{picture} } \end{center} \caption{\label{FigPentagon} Procedure for the calculation of the mixed pentagons \re{PentagonsInnerProducts}.} \end{figure} The subsequent step requires the application of the superpermutation identity. For the case at hand, we introduce the following supercoordinates in Eq.\ \re{superPermutationIdentity} to do the job \begin{align} Z_1 = (z, \theta) \, , \qquad Z_2 = (- \gamma, 0) \, , \qquad Z'_1 = (z', \theta') \, , \qquad Z'_2 = (0, 0) \, , \end{align} and rapidities $\lambda' = \lambda'_2$, $\lambda = \lambda_2$.
Keeping the $\theta' \theta^\ast$ term in the Grassmann expansion on both sides of the permutation identity, we find the relation \begin{align} \label{SuperCrossGrassmannComponent} & (z' - z^\ast)^{i \lambda'_2 - i \lambda_2} (s - i \lambda_2) (s + i \lambda'_2) X_{s + 1/2} (z, - \gamma; \lambda_2 | z', 0; \lambda'_2) \nonumber\\ &\qquad - 2 i s (i \lambda'_2 - i \lambda_2) (z' - z^\ast)^{i \lambda'_2 - i \lambda_2 - 1} X_{s} (z, - \gamma ; \lambda_2 | z', 0 ; \lambda'_2) \nonumber\\ &\qquad\qquad = \gamma^{i \lambda_2 - i \lambda'_2} (s - i \lambda'_2) (s + i \lambda_2) X_{s + 1/2} (z, - \gamma; \lambda'_2 | z', 0; \lambda_2) \, , \end{align} between crosses with spin-$s$ and spin-$(s+\ft12)$ measures, \begin{align} X_{s} (z, \gamma; \lambda_2 | z', 0; \lambda'_2 ) & = \int [D y]_{s} (y - z^\ast)^{i \lambda_2 - s} (y + \gamma)^{-i \lambda_2 - s} (z' - y^\ast)^{- i \lambda'_2 - s} (- y^\ast)^{i \lambda'_2 - s} \, , \\ X_{s + 1/2} (z, \gamma; \lambda_2 | z', 0; \lambda'_2) & = \int [D y]_{s + 1/2} (y - z^\ast)^{i \lambda_2 - s - 1} (y + \gamma)^{-i \lambda_2 - s - 1} (z' - y^\ast)^{- i \lambda'_2 - s} (- y^\ast)^{i \lambda'_2 - s} \, . \end{align} Finally, using the chain rule three times, we acquire factors \begin{align} {\rm e}^{- i \pi (2 s + 1)} a_{s+1/2} (\ft12 + s + i \lambda_1, 1 + s - i \lambda'_2) a_{s+1/2} (\ft12 + s - i \lambda'_1, 1 + s + i \lambda_2) \end{align} and \begin{align} {\rm e}^{- i \pi (s + 1/2)} a_{s+1/2} (\ft12 + s + i \lambda_1, \ft12 + s - i \lambda'_1) \, , \end{align} from the steps shown in Fig.\ \ref{FigPentagon} in the left and right panels of the last row graphs, respectively. 
Assembling everything together (along with phases that emerge from the conjugated pyramid), we find \begin{align} T_{s+1/2,s}^\gamma (\bit{\lambda} | \bit{\lambda}') & = {\rm e}^{- \pi \sum_{n = 1}^2 \lambda_n} \gamma^{i \sum_{n = 1}^2 (\lambda_n - \lambda'_n) } (s + i \lambda_2) (s - i \lambda'_2) a_{s+1/2} (\ft12 + s + i \lambda_1, \ft12 + s - i \lambda'_1) \nonumber\\ &\times a_s (s + i \lambda_2, s - i \lambda'_2) a_{s+1/2} (\ft12 + s + i \lambda_1, 1 + s - i \lambda'_2) a_{s+1/2} (\ft12 + s - i \lambda'_1, 1 + s + i \lambda_2) \, . \end{align} This relation implies a factorizable structure of multiparticle pentagons, i.e., two-to-two in the current case, \begin{align} P_{s+1/2,s| s+1/2,s} (\bit{\lambda} | \bit{\lambda}') & = {\rm e}^{- \pi \sum_{n = 1}^2 \lambda_n} \\ &\times P_{s+1/2|s+1/2} (\lambda_1 | \lambda'_1) P_{s+1/2|s} (\lambda_1 | \lambda'_2) P_{s|s+1/2} (\lambda_2 | \lambda'_1) P_{s|s} (\lambda_2 | \lambda'_2) \, , \nonumber \end{align} in terms of one-particle pentagon transitions \begin{align} P_{s|s} (\lambda | \lambda') & = \frac{\Gamma (i \lambda - i \lambda') \Gamma (2s)}{\Gamma (s + i \lambda) \Gamma (s - i \lambda')} \, , \\ P_{s+1/2|s+1/2} (\lambda | \lambda') & = \frac{\Gamma (i \lambda - i \lambda') \Gamma (2s + 1)}{\Gamma (s + \ft12 + i \lambda) \Gamma (s + \ft12 - i \lambda')} \, , \\ P_{s|s + 1/2} (\lambda | \lambda') & = \frac{\Gamma (\ft12 + i \lambda - i \lambda') \Gamma (2 s+ 1)}{\Gamma (s + i \lambda) \Gamma (s + \ft12 - i \lambda')} \, . \end{align} Finally $P_{s+1/2|s} (\lambda | \lambda') = P_{s|s + 1/2} (- \lambda' | - \lambda)$.
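For completeness, substituting $\lambda \to - \lambda'$, $\lambda' \to - \lambda$ in the expression for $P_{s|s+1/2}$ spells out the last one-particle transition explicitly,
\begin{align}
P_{s+1/2|s} (\lambda | \lambda') = \frac{\Gamma (\ft12 + i \lambda - i \lambda') \Gamma (2 s + 1)}{\Gamma (s + \ft12 + i \lambda) \Gamma (s - i \lambda')} \, ,
\end{align}
which exhibits the same pole structure in the rapidity difference $\lambda - \lambda'$ as its mirror partner.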
Had we chosen the normalization of the wave functions $\Psi_{s,s+1/2}$ and $\Psi_{s+1/2,s}$ according to the coordinate Bethe Ansatz, such that the asymptotic incoming wave comes with a unit amplitude, we would cancel the prefactor ${\rm e}^{- \pi \sum_{n = 1}^2 \lambda_n}$ in the right-hand side of the above relation as well as recover the pentagons $P_{s+1/2|s} (\lambda'_1|\lambda'_2) P_{s|s+1/2} (\lambda_2|\lambda_1)$ entering the denominator of the transitional factorized form of the pentagons, as was shown for the bosonic case in Ref.\ \cite{Belitsky:2014rba}. We will not do it here, though, since the above form already proves the factorized form of multiparticle pentagons \cite{Basso:2013vsa,Belitsky:2015efa}. Generalization to arbitrary $N$ is straightforward since the procedure is inductive. \section{Conclusions} In this paper we solved an open superspin chain model that describes minimally supersymmetric sectors of the $\mathcal{N}=4$ flux tube. Depending on the conformal spin assignment for the lowest component of the supermultiplet of flux-tube fields, it encodes either hole-fermion or fermion-gluon excitations. The bulk interactions between the adjacent superfields building up the light-cone operators inherit the sl$(2|1)$ invariance of the four-dimensional theory; however, the presence of the boundary breaks it down to the diagonal subgroup. Using the factorized R-matrix structure of the sl$(2|1)$ symmetric spin chain, we constructed the eigenfunctions of the model, analytically continued to the upper half of the complex plane, in the form of multiple integrals which admit an intuitive Feynman graph representation. The latter was indispensable for the analytical proof of their orthogonality. The same framework was applied to calculate the so-called pentagon transitions between the states of the flux tube in different conformal planes. The latter serve as building blocks in the framework of the Operator Product Expansion for null polygonal Wilson superloops.
The outcome of this analysis revealed the factorizable structure of the dynamical part of multiparticle pentagons in terms of single-particle ones, as was already extensively used in the past. Our consideration can be extended to include all propagating modes of the maximally supersymmetric Yang-Mills theory and encode them in the noncompact sl$(2|4)$ superchain. From the technical point of view the changes would appear to be minimal: one would have to replace single Grassmann variables $\theta$ by an SU(4) vector $\theta^A$ and any product of conjugate ones by the sum, $\theta \theta^\ast \to \theta^A \theta^\ast_A$. The question remains open, however, whether this construction will be able to encode and unravel the matrix part of the pentagon transitions. \section*{Acknowledgments} I would like to thank Benjamin Basso, Fidel Schaposhnik and Pedro Vieira for rekindling my interest in the problem and participating in the initial stage of the work. I am deeply indebted to Sasha Manashov for multiple insightful and clarifying discussions at later stages of the project and to Didina Serban for useful conversations. This research was supported by the U.S. National Science Foundation under the grants PHY-1068286 and PHY-1403891.
\section{{\bf Introduction:}} Black holes have been one of the most fascinating parts of theoretical, astrophysical and cosmological physics ever since Einstein's discovery of the theory of general relativity of gravitation. They are very important members of the universe. Because of their huge gravitational force no object, not even light, can escape from them. There exists a region called the `event horizon' beyond which all objects are strongly attracted towards the centre of a black hole, leaving absolutely no chance for them to cross over the event horizon to the outer region. So they are completely isolated from the rest of the universe and have absolute zero temperature. This, however, is the part of black hole physics where everything is treated classically; one has to check what happens when quantum effects are taken into account. The inspiration for incorporating quantum theory into black hole physics is present within classical gravity itself. The four laws of ``{\it black hole mechanics}'' derived by Bardeen, Carter and Hawking \cite{Bardeen} closely resemble the ``laws of thermodynamics'' if black holes are allowed to have some temperature. Around the same time as the above work, Bekenstein argued for black hole entropy based on simple aspects of thermodynamics \cite{Beken}, which require that the entropy of the universe cannot decrease due to the capture of any object by black holes. To keep the total entropy of the universe at least unchanged, a black hole should gain the same amount of entropy that is lost from the rest of the universe. Bekenstein then gave some heuristic arguments to show that black hole entropy must be proportional to its horizon area. He also fixed the proportionality constant as $\frac{\ln 2}{8\pi}$.
The idea of Bekenstein was put on solid mathematical ground when Hawking incorporated quantum fields moving in a background of classical gravity and showed that black holes do emit particles having a black body spectrum with physical temperature $\frac{\hbar\kappa}{2\pi}$, where $\kappa$ is the surface gravity of a black hole \cite{Hawk}. Knowing this expression of black hole temperature (``{\it Hawking temperature}'') one can make an analogy between the `first law of black hole mechanics' and the `first law of thermodynamics' to identify entropy as $S=\frac{A}{4}$, where $A$ is the horizon area of the black hole. Thus it was proven that Bekenstein's constant of proportionality was incorrect and the new proportionality constant is $\frac{1}{4}$. The work of Bekenstein and Hawking thereby leads to the semiclassical result for black hole entropy encapsulated by the Bekenstein-Hawking area law, given by \begin{eqnarray} S_{\textrm {BH}}=\frac{A}{4}. \label{1.1} \end{eqnarray} Thereafter a lot of effort has been made in studying thermodynamic aspects of black holes. Indeed there are several approaches to calculate the Hawking temperature and entropy of a black hole. Among these a simple and physically intuitive picture is provided by the tunneling mechanism \cite{Wilczek, Cai, Paddy, Kern}. It has two variants, namely the null geodesic method \cite{Wilczek, Cai} and the Hamilton-Jacobi method \cite{Paddy, Kern}. Recently, in \cite{Majhiflux}, the Hawking flux has been derived from the tunneling mechanism, showing that black holes have a perfect black body spectrum with the correct Hawking temperature. In the tunneling method pair creation occurs just inside the event horizon, where one mode moves towards the centre of the black hole while the other mode tunnels through the event horizon to the outer region and reaches infinity. Besides temperature, there have been various studies related to the determination of entropy.
Although till now there is no microscopic description of black hole entropy, several approaches have shown that the semiclassical Bekenstein-Hawking entropy (\ref{1.1}) undergoes corrections. These approaches are mainly based on field theory \cite{Fursaev}, quantum geometry \cite{Partha}, statistical mechanics \cite{Das}, the Cardy formula \cite{Carlip}, the brick wall method \cite{Hooft} and the tunneling method \cite{R. Banerjee, Majhi1, Modak, Majhitrace}. But none of these succeeds in covering all black hole spacetimes. In this paper we construct a different framework for studying entropy using a basic property of ordinary thermodynamics which ensures that entropy ($S$) must be a {\it state function}. This naturally yields the `first law of black hole thermodynamics', where one does not need the first law of black hole mechanics. The fact that $dS$ is an exact differential gives three integrability conditions which are analogous to {\it Maxwell's relations} in ordinary thermodynamics. Unlike the usual approach, where the Bekenstein-Hawking entropy is read off by a comparison of the first law of thermodynamics with the first law of black hole mechanics, we are able to directly calculate the entropy by taking all work terms into consideration. It is revealed that although the work terms have some role to play in between, they do not contribute to the final result for the semiclassical Bekenstein-Hawking entropy. Our analysis is performed for a general black hole defined by the Kerr-Newman metric. The main strength of our approach, however, lies in finding the corrections to the semiclassical value of entropy. For this we first calculate corrections to the semiclassical Hawking temperature using both scalar particle and fermion tunneling in the Kerr-Newman spacetime. Equivalent results are obtained. We also find that the corrected Hawking temperature, as calculated by the tunneling mechanism, has several arbitrary coefficients.
We determine these coefficients by demanding that the corrected entropy ($S_{\textrm {bh}}$) of a stationary black hole also has to be a {\it state function}. The integrability conditions (analogous to Maxwell's relations) on $dS_{\textrm {bh}}$ fix most of the coefficients. Then, following the usual technique to solve an exact differential equation, we calculate the corrected entropy for the Kerr-Newman black hole. In the limiting cases the whole analysis remains valid and gives the corrected entropy for other black holes, as for example the (i) Kerr, (ii) Reissner-Nordstrom and (iii) Schwarzschild black holes. The general form of the corrected entropy includes logarithmic terms and inverse area terms as leading and next to leading order corrections. However, in the expression of the corrected entropy there is one arbitrary coefficient present with each correction term. The remainder of this paper then deals with fixing the coefficient ($\tilde\beta_1$) of the logarithmic term. We successfully fix this coefficient for all spacetimes. It is related to the trace anomaly of the stress tensor. The concept of the Komar conserved quantity corresponding to a Killing vector plays a crucial role in the explicit calculation of $\tilde\beta_1$. We consider various stationary black hole spacetimes in (3+1) dimensions and perform an integration over the trace anomaly to give the final result for $\tilde\beta_1$. From our analysis it is revealed that $\tilde\beta_1$ is a pure number for both the Schwarzschild and Kerr spacetimes and, more importantly, the values are exactly equal. This is consistent since there is no difference in the dynamics of these two black holes as they only differ in their geometrical behaviour. For the other two charged black holes (Reissner-Nordstrom and Kerr-Newman) $\tilde\beta_1$ is not a pure number, but in the limit $Q=0$ they reproduce the results for the Schwarzschild and Kerr black holes respectively. The paper is organised as follows.
In Section 2 we deduce the `first law of black hole thermodynamics' from a different viewpoint by considering entropy as a state function and calculate the semiclassical Bekenstein-Hawking entropy for stationary black holes. In Section 3 both scalar particle and fermion tunneling are used to calculate the corrected Hawking temperature. Section 4 is devoted to finding the general form of a corrected area law which is valid for all stationary spacetimes in (3+1) dimensions. In Section 5 the coefficient of the leading (logarithmic) correction to the area law is fixed. Section 6 is left for our conclusions and discussions. We give our notations and definitions in an appendix, which also includes a very brief review of Komar conserved quantities. \section{{\bf Exact differential and semiclassical Area Law }} Long ago (1973), within the realm of classical general relativity, Bardeen, Carter and Hawking gave the ``first law of black hole mechanics'', which states that for two nearby black hole solutions the difference in mass ($M$), area ($A$) and angular momentum ($J$) must be related by \cite{Bardeen} \begin{eqnarray} \delta M= \frac{1}{8\pi}{\kappa\delta A}+ \Omega_{\textrm H} \delta J. \label{bhmech} \end{eqnarray} In addition, some more terms can appear on the right hand side due to the presence of other matter fields. They found this analogous to the ``first law of thermodynamics'', which states that the difference in energy ($E$), entropy ($S$) and other state parameters of two nearby thermal equilibrium states of a system is given by \begin{eqnarray} dE= T dS+ {\textrm {``work terms''}}. \label{thermo} \end{eqnarray} Therefore even in classical general relativity the result (\ref{bhmech}) is appealing due to the fact that both $E$ and $M$ represent the same physical quantity, namely the total energy of the system. At that time, however, this result was quite surprising since, classically, the temperature of black holes is absolute zero.
So the identification of temperature with surface gravity, as suggested by (\ref{bhmech}) and (\ref{thermo}), was meaningless. Consequently, the identification of entropy with horizon area was inconsistent. However, the picture changed dramatically when Hawking (1975), incorporating quantum effects, discovered \cite{Hawk} that black holes do radiate all kinds of particles with a perfect black body spectrum with temperature $T_{\textrm H}=\frac{\kappa}{2\pi}$. From this mathematical identification of the Hawking temperature ($T_{\textrm H}$) with the surface gravity ($\kappa$) in (\ref{bhmech}), one is left with some analogy between entropy ($S$) and the area of the event horizon ($A$), suggested by (\ref{bhmech}) and (\ref{thermo}). The result $S=\frac{A}{4}$ follows from this analogy. For such an identification, the horizon area of a black hole plays the ``mathematical role'' of entropy and does not have a solid physical ground. Also, this naive identification remains completely silent about the role of ``work terms''. But if one does not use this mathematical analogy, but rather tries to {\it calculate} entropy, it may appear that these work terms might have some role to play. Therefore the role of these work terms is not transparent in the process of identifying entropy. Moreover, in this analysis one can obtain the ``first law of black hole thermodynamics'' only by deriving the ``first law of black hole mechanics'' and then identifying this with the ordinary ``first law of thermodynamics''. Now we want to obtain the ``first law of black hole thermodynamics'' by directly starting from the {\it thermodynamical} viewpoint, where one does not require the ``first law of black hole mechanics''. From such a law the entropy will be explicitly calculated and not identified, as usually done, by an analogy between (\ref{bhmech}) and (\ref{thermo}).
For this derivation we interpret Hawking's result of black hole radiation \cite{Hawk} as \begin{itemize} \item {{\it black holes are thermodynamical objects having mass ($M$) as total energy ($E$) and they are immersed in a thermal bath in equilibrium with physical temperature ($T_{\textrm H}$)}.} \end{itemize} Therefore, following the ordinary ``first law of thermodynamics'', we are allowed to write the ``first law of black hole thermodynamics'' as \begin{eqnarray} dM= T_{\textrm H}dS + {\textrm {``work terms on black hole''}}, \label{newlaw} \end{eqnarray} where $M$ is the mass of the black hole and $T_{\textrm H}$ is the Hawking temperature. Usually, without deriving the ``first law of black hole mechanics'', one is not able to find the ``work terms on the black hole'' exactly. But we can always perform a dimensional analysis to construct these two terms as proportional to $\Omega_{\textrm H} dJ$ and $\Phi_{\textrm H} dQ$, where $J$ and $Q$ are the angular momentum and charge of the black hole. This is possible since the forms of the ``angular velocity ($\Omega_{\textrm H}$)'' and the ``potential ($\Phi_{\textrm H}$)'' at the event horizon are known individually from classical gravity. These terms can be brought on the right hand side of (\ref{newlaw}) with some prefactors given by dimensionless constants `$a$' and `$b$', such that (\ref{newlaw}) becomes \begin{eqnarray} dM=T_{\textrm H} dS+a\Omega_{\textrm H} dJ+ b\Phi_{\textrm H} dQ. \label{4.11} \end{eqnarray} To fix the arbitrary constants `$a$' and `$b$' let us first rewrite (\ref{4.11}) in the form \begin{eqnarray} dS=\frac{dM}{T_{\textrm H}}+(-\frac{a\Omega_{\textrm H}}{T_{\textrm H}})dJ+(-\frac{b\Phi_{\textrm H}}{T_{\textrm H}})dQ. \label{4.41} \end{eqnarray} From the principles of ordinary thermodynamics one must interpret entropy as a {\it state function}.
For the evolution of a system from one equilibrium state to another, the change in entropy does not depend on the details of the evolution process, but only on the two end points representing the equilibrium states. This universal property of entropy must be satisfied for black holes as well. In fact the entropy of any stationary black hole should not depend on the precise details of its collapse geometry but only on the final equilibrium state. Hence we conclude that the entropy of a stationary black hole is a state function and consequently $dS$ has to be an {\it exact differential}. As a result the coefficients on the right hand side of (\ref{4.41}) must satisfy the three integrability conditions \begin{align} \frac{\partial}{\partial J}(\frac{1}{T_{\textrm H}})\big|_{M,Q}& =\frac{\partial}{\partial M}(-\frac{a\Omega_{\textrm H}}{T_{\textrm H}})\big|_{J,Q}\notag\\ \frac{\partial}{\partial Q}(-\frac{a\Omega_{\textrm H}}{T_{\textrm H}})\big|_{M,J}& =\frac{\partial}{\partial J}(-\frac{b\Phi_{\textrm H}}{T_{\textrm H}})\big|_{M,Q}\notag\\ \frac{\partial}{\partial M}(-\frac{b\Phi_{\textrm H}}{T_{\textrm H}})\big|_{J,Q}& =\frac{\partial}{\partial Q}(\frac{1}{T_{\textrm H}})\big|_{J,M}. \label{4.71} \end{align} As one can see, these relations play a role similar to {\it Maxwell's relations} of ordinary thermodynamics. Like Maxwell's relations, these three equations do not refer to a process but provide relationships between physical quantities that must hold at equilibrium. The only known stationary solution of the Einstein-Maxwell equations with all three parameters, namely mass ($M$), charge ($Q$) and angular momentum ($J$), is the Kerr-Newman spacetime. All the necessary information for that metric is provided in Appendix 7.1 and one can readily check that the first, second and third conditions are satisfied only for $a=1$, $a=b$ and $b=1$ respectively, leading to the unique solution $a=b=1$.
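The integrability conditions can also be checked numerically. The sketch below uses the standard Kerr-Newman horizon quantities of Appendix 7.1 (with $G=c=\hbar=1$, $a=J/M$, $r_+=M+\sqrt{M^2-a^2-Q^2}$, $T_{\textrm H}=\frac{r_+-M}{2\pi(r_+^2+a^2)}$, $\Omega_{\textrm H}=\frac{a}{r_+^2+a^2}$, $\Phi_{\textrm H}=\frac{Qr_+}{r_+^2+a^2}$) and finite differences at an arbitrary sample point; with $a=b=1$ the first and third conditions balance to numerical accuracy:

```python
import math

def horizon(M, J, Q):
    """Outer horizon r_+ = M + sqrt(M^2 - a^2 - Q^2), with a = J/M."""
    a = J / M
    return M + math.sqrt(M**2 - a**2 - Q**2)

def T_H(M, J, Q):
    """Semiclassical Hawking temperature (hbar = 1)."""
    a, rp = J / M, horizon(M, J, Q)
    return (rp - M) / (2 * math.pi * (rp**2 + a**2))

def Omega_H(M, J, Q):
    """Angular velocity of the event horizon."""
    a, rp = J / M, horizon(M, J, Q)
    return a / (rp**2 + a**2)

def Phi_H(M, J, Q):
    """Electrostatic potential at the event horizon."""
    a, rp = J / M, horizon(M, J, Q)
    return Q * rp / (rp**2 + a**2)

def d(f, x, h=1e-6):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

M, J, Q = 2.0, 1.0, 0.5   # arbitrary point with M^2 > J^2/M^2 + Q^2

# First condition: d/dJ (1/T_H) = d/dM (-a*Omega_H/T_H), which holds for a = 1
lhs1 = d(lambda J_: 1 / T_H(M, J_, Q), J)
rhs1 = d(lambda M_: -Omega_H(M_, J, Q) / T_H(M_, J, Q), M)

# Third condition: d/dM (-b*Phi_H/T_H) = d/dQ (1/T_H), which holds for b = 1
lhs3 = d(lambda M_: -Phi_H(M_, J, Q) / T_H(M_, J, Q), M)
rhs3 = d(lambda Q_: 1 / T_H(M, J, Q_), Q)

assert abs(lhs1 - rhs1) < 1e-4 * abs(lhs1)
assert abs(lhs3 - rhs3) < 1e-4 * abs(lhs3)
```

The second condition can be verified in the same way; with any $a\neq1$ or $b\neq1$ the corresponding assertion fails.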
As a result, (\ref{4.11}) immediately reduces to the standard form \begin{eqnarray} dM=T_{\textrm H} dS+\Omega_{\textrm H} dJ+\Phi_{\textrm H} dQ. \label{sthermo} \end{eqnarray} This completes the derivation of the ``first law of black hole thermodynamics'', for a rotating and charged black hole, without using the ``first law of black hole mechanics''. One can compare (\ref{sthermo}) with the standard first law of thermodynamics, $dE= TdS- pdV +\mu dN$. Since $E=M$ (both represent the total energy of the system) one can infer the correspondence $-\Omega_{\textrm H}\rightarrow p,~J\rightarrow V,~\Phi_{\textrm H}\rightarrow \mu,~Q\rightarrow N$ between the two cases. Indeed $\Omega_{\textrm H} dJ$ is the work done on the black hole due to rotation and is the exact analogue of the $-pdV$ term. Likewise the electrostatic potential $\Phi_{\textrm {H}}$ plays the role of the chemical potential $\mu$. It is now feasible to calculate the entropy by using properties of exact differentials. The first step is to rewrite (\ref{sthermo}) as \begin{eqnarray} dS=\frac{dM}{T_{\textrm H}}+(\frac{-\Omega_{\textrm H}}{T_{\textrm H}})dJ+(\frac{-\Phi_{\textrm H}}{T_{\textrm H}})dQ, \label{sthermo1} \end{eqnarray} where $dS$ is now an exact differential. A first order differential of the form \begin{eqnarray} df(x,y,z)=U(x,y,z)dx+V(x,y,z)dy+W(x,y,z)dz \label{4.8} \end{eqnarray} is exact if it fulfills the integrability conditions \begin{eqnarray} \frac{\partial U}{\partial y}\big|_{x,z}=\frac{\partial V}{\partial x}\big|_{y,z};~~~\frac{\partial V}{\partial z}\big|_{x,y}=\frac{\partial W}{\partial y}\big|_{x,z};~~~\frac{\partial W}{\partial x}\big|_{y,z}=\frac{\partial U}{\partial z}\big|_{x,y}.
\label{4.9} \end{eqnarray} If these three conditions hold then the solution of (\ref{4.8}) is given by \begin{eqnarray} f(x,y,z)=\int{ Udx} +\int{Xdy}+\int{Ydz}, \label{4.12} \end{eqnarray} where \begin{eqnarray} X=V-\frac{\partial}{\partial y}{\int{Udx}} \label{4.13} \end{eqnarray} and \begin{eqnarray} Y=W-\frac{\partial}{\partial z}[\int{Udx}+{\int Xdy}]. \label{4.14} \end{eqnarray} Now comparing (\ref{sthermo1}) and (\ref{4.8}) we find the following dictionary \begin{align} (f\rightarrow S,~~x\rightarrow M,~~y\rightarrow J,~~z\rightarrow Q)\nonumber\\ (U\rightarrow\frac{1}{T_{\textrm H}},~~V\rightarrow\frac{-\Omega_{\textrm H}}{T_{\textrm H}},~~W\rightarrow\frac{-\Phi_{\textrm H}}{T_{\textrm H}}). \label{4.15} \end{align} Using this dictionary and (\ref{4.12}), (\ref{4.13}) and (\ref{4.14}) one finds, \begin{eqnarray} S=\int{\frac{dM}{T_{\textrm H}}}+\int{XdJ}+\int{YdQ}, \label{4.19} \end{eqnarray} where \begin{eqnarray} X=(-\frac{\Omega_{\textrm H}}{T_{\textrm H}})-\frac{\partial}{\partial J}{\int\frac{dM}{T_{\textrm H}}} \label{4.20} \end{eqnarray} and \begin{eqnarray} Y=(-\frac{\Phi_{\textrm H}}{T_{\textrm H}})-\frac{\partial}{\partial Q}[{\int\frac{dM}{T_{\textrm H}}}+{\int XdJ}]. \label{4.21} \end{eqnarray} In order to calculate the semiclassical entropy we need to solve (\ref{4.19}), (\ref{4.20}) and (\ref{4.21}). Note that all the ``work terms'' are appearing in the general expression of the semiclassical entropy of a black hole (\ref{4.19}). Let us first perform the mass integral to get \begin{eqnarray} \int\frac{dM}{T_{\textrm H}}=\frac{\pi}{\hbar}\left(2M{[M+(M^2-\frac{J^2}{M^2}-Q^2)^{1/2}}]-Q^2\right), \label{4.22} \end{eqnarray} where the expression (\ref{4.5}) has been substituted for $T_{\textrm H}^{-1}$. 
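The reconstruction recipe (\ref{4.12})-(\ref{4.14}) can be illustrated on a toy differential unrelated to the black hole problem. The sketch below (a hypothetical example with $f=x^2y+z$, using sympy) checks the integrability conditions (\ref{4.9}) and then recovers $f$ from $U$, $V$, $W$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Toy exact differential: df with f = x**2*y + z
U = 2*x*y          # = df/dx
V = x**2           # = df/dy
W = sp.Integer(1)  # = df/dz

# Integrability conditions (4.9)
assert sp.diff(U, y) == sp.diff(V, x)
assert sp.diff(V, z) == sp.diff(W, y)
assert sp.diff(W, x) == sp.diff(U, z)

# Reconstruction following (4.12)-(4.14)
F1 = sp.integrate(U, x)             # int U dx
X = V - sp.diff(F1, y)              # (4.13): here X = 0
F2 = sp.integrate(X, y)
Y = W - sp.diff(F1 + F2, z)         # (4.14): here Y = 1
f = F1 + F2 + sp.integrate(Y, z)    # (4.12)

assert sp.simplify(f - (x**2*y + z)) == 0
```

In this toy case $X$ vanishes and $Y$ is constant; the black hole computation below shows the analogous (even simpler) pattern $X=Y=0$.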
Using (\ref{4.22}) one can check that the equality \begin{eqnarray} \frac{\partial}{\partial J}\int\frac{dM}{T_{\textrm H}}=-\frac{\Omega_{\textrm H}}{T_{\textrm H}} \label{4.23} \end{eqnarray} holds, where $\Omega_{\textrm H}$ is defined in (\ref{angv}). Putting this in (\ref{4.20}) it follows that $X=0$. Using (\ref{4.22}) one can next calculate \begin{eqnarray} \frac{\partial}{\partial Q}\int\frac{dM}{T_{\textrm H}}=-\frac{\Phi_{\textrm H}}{T_{\textrm H}}, \label{4.24} \end{eqnarray} where $-\frac{\Phi_{\textrm H}}{T_{\textrm H}}$ is given in (\ref{4.7}). With this equality and the fact that $X=0$, we find, using (\ref{4.21}), $Y=0$. Exploiting all of the above results, the semiclassical entropy of the Kerr-Newman black hole is found to be \begin{eqnarray} S=\int{\frac{dM}{T_{\textrm H}}}=\frac{\pi}{\hbar}\left(2M{[M+(M^2-\frac{J^2}{M^2}-Q^2)^{1/2}}]-Q^2\right)=\frac{A}{4\hbar}=S_{{\textrm {BH}}}, \label{4.25} \end{eqnarray} which is the standard semiclassical Bekenstein-Hawking area law for the Kerr-Newman black hole. The expression for the area ($A$) of the event horizon follows from (\ref{area}). One can check that all other stationary spacetime solutions, for example Kerr or Reissner-Nordstrom, also fit into this general framework and reproduce the semiclassical Bekenstein-Hawking area law, which establishes the universality of the approach.
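The area law (\ref{4.25}) can be confirmed numerically. The sketch below (units $G=c=\hbar=1$; the sample point is arbitrary) checks both the algebraic identity $\frac{\pi}{\hbar}(2Mr_+-Q^2)=\frac{A}{4\hbar}$, with $A=4\pi(r_+^2+a^2)$ the standard Kerr-Newman horizon area, and the requirement $\partial S/\partial M=1/T_{\textrm H}$ implicit in (\ref{4.19}):

```python
import math

def horizon(M, J, Q):
    """Outer horizon r_+ = M + sqrt(M^2 - a^2 - Q^2), a = J/M."""
    a = J / M
    return M + math.sqrt(M**2 - a**2 - Q**2)

def entropy(M, J, Q):
    """S = (pi/hbar)(2M r_+ - Q^2), eq. (4.25) with hbar = 1."""
    return math.pi * (2 * M * horizon(M, J, Q) - Q**2)

def area(M, J, Q):
    """Kerr-Newman horizon area A = 4*pi*(r_+^2 + a^2)."""
    a, rp = J / M, horizon(M, J, Q)
    return 4 * math.pi * (rp**2 + a**2)

def T_H(M, J, Q):
    """Semiclassical Hawking temperature (hbar = 1)."""
    a, rp = J / M, horizon(M, J, Q)
    return (rp - M) / (2 * math.pi * (rp**2 + a**2))

M, J, Q = 2.0, 1.0, 0.5

# S = A/4: the Bekenstein-Hawking area law of eq. (4.25)
assert abs(entropy(M, J, Q) - area(M, J, Q) / 4) < 1e-9

# dS/dM at fixed J, Q reproduces 1/T_H
h = 1e-6
dSdM = (entropy(M + h, J, Q) - entropy(M - h, J, Q)) / (2 * h)
assert abs(dSdM * T_H(M, J, Q) - 1) < 1e-4
```

The same check with $Q=0$ (Kerr) or $J=0$ (Reissner-Nordstrom) goes through unchanged, in line with the universality claim above.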
\section{{\bf Correction to semiclassical Hawking temperature}} For convenience of our analysis let us first rewrite the original Kerr-Newman metric (given in Appendix 7.1) in the following form, \begin{align} ds^{2} & =-F(r,\theta)dt^{2}+\frac{dr^{2}}{\tilde g(r,\theta)}+K(r,\theta )(d\phi-\frac{H(r,\theta)}{K(r,\theta)}dt)^{2}+\Sigma(r)d\theta^{2}, \label{2.3}\\ F(r,\theta) & =\tilde f(r,\theta)+\frac{H^{2}(r,\theta)}{K(r,\theta)}=\frac{\Delta(r)\Sigma(r,\theta)}{(r^{2}+a^{2})^{2}-\Delta(r)a^{2}\sin^{2}\theta} \nonumber \end{align} In the course of finding the correction to the semiclassical Hawking temperature we follow the method developed in \cite{Majhi1}. The first aim is therefore to isolate the `$r-t$' sector of the metric (\ref{2.3}) from the angular part. A similar analysis was carried out earlier for the rotating BTZ black hole \cite{Modak}. The idea is to take the near horizon form of the metric and redefine the angular part in such a way that the $r-t$ sector becomes isolated. This redefinition only changes the total energy of the tunneling particle \cite{Modak,Kerner1,Ang1} and does not affect the thermodynamical entities. In the case of the Kerr-Newman black hole this issue is a little more subtle since the metric coefficients also depend on $\theta$. However, because of the presence of an ergosphere, $\tilde f(r,\theta)$ in (\ref{2.1}) is positive on the horizon for two specific values of $\theta$, say, $\theta_0=0$ or $\pi$. For these two values of $\theta$ the ergosphere and the event horizon coincide. For the tunneling of any particle through the horizon of the Kerr-Newman black hole only these two specific values of $\theta$ are allowed. When we take the near horizon limit of the metric (\ref{2.3}) the value of $\theta$ is first fixed to $\theta_0$.
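The statement that the ergosphere touches the horizon at the poles is easy to verify. The outer ergosurface of the Kerr-Newman metric lies at $r_E(\theta)=M+\sqrt{M^2-a^2\cos^2\theta-Q^2}$ (a standard result, with $a=J/M$), which reduces to $r_+$ exactly at $\theta=0,\pi$. A small numerical sketch (sample values arbitrary, $G=c=1$):

```python
import math

def horizon(M, J, Q):
    """Outer event horizon r_+ = M + sqrt(M^2 - a^2 - Q^2), a = J/M."""
    a = J / M
    return M + math.sqrt(M**2 - a**2 - Q**2)

def ergosurface(M, J, Q, theta):
    """Outer ergosurface r_E(theta) = M + sqrt(M^2 - a^2 cos^2(theta) - Q^2)."""
    a = J / M
    return M + math.sqrt(M**2 - a**2 * math.cos(theta)**2 - Q**2)

M, J, Q = 2.0, 1.0, 0.5
rp = horizon(M, J, Q)

# At theta_0 = 0 and pi the ergosurface coincides with the horizon ...
assert abs(ergosurface(M, J, Q, 0.0) - rp) < 1e-12
assert abs(ergosurface(M, J, Q, math.pi) - rp) < 1e-12

# ... while at any intermediate angle it lies strictly outside it
assert ergosurface(M, J, Q, math.pi / 2) > rp
```

This is why only $\theta_0=0$ or $\pi$ are admissible angles for the radial tunneling trajectory.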
The form of the metric near the horizon for fixed $\theta=\theta_{0}$\ is given by \cite{Kerner1}, \begin{equation} ds^{2}=-F'(r_{+},\theta_{0})(r-r_{+})dt^{2}+\frac{dr^{2}}{\tilde g'(r_{+},\theta_{0})(r-r_{+})}+K(r_{+},\theta_{0})(d\phi-\frac{H(r_{+},\theta_{0})}{K(r_{+},\theta_{0})}dt)^{2} \label{2.4} \end{equation} where, \begin{eqnarray} \frac{H(r_+,\theta)}{K(r_+,\theta)}=\frac{a}{r_{+}^{2}+a^{2}}=\Omega_{\textrm H} \label{2.5} \end{eqnarray} is the angular velocity of the event horizon. A coordinate transformation \begin{eqnarray} d\chi=d\phi-\Omega_{\textrm H}dt\implies\chi=\phi-\Omega_{\textrm H} t \label{2.51} \end{eqnarray} will take the metric (\ref{2.4}) into the desired form, \begin{equation} ds^{2}=-F'(r_{+},\theta_{0})(r-r_{+})dt^{2}+\frac{dr^{2}}{\tilde g'(r_{+},\theta_{0})(r-r_{+})}+K(r_{+},\theta_{0})d\chi^{2}, \label{2.6} \end{equation} where the `$r-t$' sector is isolated from the angular part $d\chi^2$. Note that the `$r-t$' sector of the metric (\ref{2.6}) has the form, \begin{eqnarray} ds^2=-f(r)dt^2+{\frac{1}{g(r)}}{dr^2}, \label{2.7} \end{eqnarray} where \begin{eqnarray} f(r)=F'(r_{+},\theta_{0})(r-r_{+}) \label{2.71}\\ g(r)=\tilde g'(r_{+},\theta_{0})(r-r_{+}). \nonumber \end{eqnarray} \subsection{{\bf Scalar Particle tunneling}} A massless particle in the spacetime (\ref{2.6}) is governed by the Klein-Gordon equation \begin{equation} -\frac{\hbar^2}{\sqrt{-g}}{\partial_\mu[g^{\mu\nu}\sqrt{-g}\partial_{\nu}]\Phi}=0. \label{2.8} \end{equation} In the tunneling approach we are concerned with the radial trajectory, so that only the $r-t$ sector (\ref{2.7}) of the metric (\ref{2.6}) is relevant. Note that in the analysis of \cite{Majhi1}, for a Schwarzschild black hole, the structure of the `$r-t$' sector was similar to (\ref{2.7}). But it should be remembered that we are now dealing with a black hole having three parameters ({$M, Q, J$}). As a consequence a major difference will appear later on.
Equation (\ref{2.8}), with the background metric (\ref{2.7}), cannot be solved exactly. Therefore we start with the standard WKB ansatz for $\Phi$, \begin{eqnarray} \Phi(r,t)=exp[\frac{i}{\hbar}{{\cal S}(r,t)}], \label{2.9} \end{eqnarray} and substitute it in (\ref{2.8}) to yield, \begin{eqnarray} &&\frac{i}{\sqrt{f(r)g(r)}}\Big(\frac{\partial S}{\partial t}\Big)^2 - i\sqrt{f(r)g(r)}\Big(\frac{\partial S}{\partial r}\Big)^2 - \frac{\hbar}{\sqrt{f(r)g(r)}}\frac{\partial^2 S}{\partial t^2} + \hbar \sqrt{f(r)g(r)}\frac{\partial^2 S}{\partial r^2} \nonumber \\ &&+ \frac{\hbar}{2}\Big(\frac{\partial f(r)}{\partial r}\sqrt{\frac{g(r)}{f(r)}}+\frac{\partial g(r)}{\partial r}\sqrt{\frac{f(r)}{g(r)}}\Big)\frac{\partial S}{\partial r}=0. \label{2.9a} \end{eqnarray} Then expanding the action ${\cal S}(r,t)$ in powers of $\hbar$, \begin{eqnarray} {\cal S}(r,t)={\cal S}{_0}(r,t)+ \sum_i{\hbar^i {\cal S}_i(r,t)}, \label{2.10} \end{eqnarray} and putting this in (\ref{2.9a}), one gets a set of differential equations at each order of $\hbar$, which can be simplified to obtain, \begin{eqnarray} \hbar^0~:~\frac{\partial S_0}{\partial t}=\pm \sqrt{f(r)g(r)}\frac{\partial S_0}{\partial r}, \label{2.11} \end{eqnarray} \begin{eqnarray} \hbar^1~:~&&\frac{\partial S_1}{\partial t}=\pm \sqrt{f(r)g(r)}\frac{\partial S_1}{\partial r}, \nonumber \\ \hbar^2~:~&&\frac{\partial S_2}{\partial t}=\pm \sqrt{f(r)g(r)}\frac{\partial S_2}{\partial r}, \nonumber \\ . \nonumber \\ . \nonumber \\ . \nonumber \end{eqnarray} and so on. Note that the $n$-th order equation can be expressed as \begin{eqnarray} \frac{\partial S_n}{\partial t}=\pm \sqrt{f(r)g(r)}\frac{\partial S_n}{\partial r}, \label{2.11a} \end{eqnarray} where $n=0,1,2,\ldots$.
The most general form of the semiclassical action in the original Kerr-Newman spacetime is given by \begin{eqnarray} {\cal S}_0(r,t,\theta,\phi)=-Et+ P_{\phi}\phi+ \tilde {\cal S}_0(r,\theta), \label{2.12} \end{eqnarray} where $E$ and $P_{\phi}$ are the Komar conserved quantities \cite{Komar} ({\it see} Appendix 7.2) corresponding to the two Killing vectors $\partial_t$ and $\partial_{\phi}$. In the near horizon approximation, for fixed $\theta=\theta_0$ and using (\ref{2.51}), one can isolate the semiclassical action for the `$r-t$' sector as, \begin{eqnarray} {\cal S}_0(r,t)=-\omega t+ \tilde {\cal S}_0(r), \label{2.13} \end{eqnarray} where \begin{eqnarray} \omega=(E-P_{\phi}\Omega_{H}) \label{2.13a} \end{eqnarray} is identified as the total energy of the tunneling particle. The solutions for the other ${\cal S}_i(r,t)$'s, subjected to a choice similar to (\ref{2.13}), can at best differ by a proportionality factor, since they satisfy equations generically identical to (\ref{2.11a}). The most general form of the action, including the contribution from all orders of $\hbar$, is then given by \cite{Majhi1, Modak, Majhitrace} \begin{eqnarray} {\cal S}(r,t)=(1+\sum\gamma_i \hbar^i) {\cal S}_0(r,t). \label{2.14} \end{eqnarray} It is clear that the dimension of $\gamma_i$ is that of $\hbar^{-i}$. Let us now perform a dimensional analysis to express these $\gamma_i$'s in terms of dimensionless constants. In (3+1) dimensions, in units where $G= c= \kappa_B= \frac{1}{4\pi\epsilon_0}= 1$, $\sqrt\hbar$ is proportional to the Planck length ($l_p$), Planck mass ($m_p$) and Planck charge ($q_p$) {\footnote {$l_p=\sqrt{{\frac{\hbar G}{c^3}}} , m_p=\sqrt{\frac{\hbar c}{G}} , q_p= \sqrt{c\hbar 4\pi\epsilon_0 }.$ }}. Therefore the most general term with the dimension of $\hbar$ can be expressed in terms of the black hole parameters as \begin{equation} H_{\textrm {KN}}(M,J,Q)= {a_1r^2_{+}}+a_2 {Mr_+}+a_3{M^2}+a_4{r_+ Q}+a_5{MQ}+a_6{Q^2}.
\label{2.15} \end{equation} Using this, the action (\ref{2.14}) takes the form \begin{eqnarray} {\cal S}(r,t)=(1+\sum\frac{\beta_i\hbar^i}{H_{\textrm {KN}}^{i}}) {\cal S}_0(r,t), \label{2.16} \end{eqnarray} where the $\beta_i$'s are dimensionless constants. To find the solution for ${\cal S}_0(r,t)$, let us put (\ref{2.13}) in the first partial differential equation in (\ref{2.11}) and integrate to obtain \begin{eqnarray} \tilde{{\cal S}_0}(r) = \pm \omega\int_C\frac{dr}{\sqrt{f(r)g(r)}}. \label{2.17} \end{eqnarray} The + (-) sign indicates that the particle is outgoing (ingoing). Using the expression for ${\cal S}_0(r,t)$ from (\ref{2.13}) and (\ref{2.17}) one can write (\ref{2.16}) as \begin{eqnarray} {\cal S}(r,t)=(1+\sum\frac{\beta_i\hbar^i}{H_{\textrm {KN}}^{i}})(-\omega t \pm \omega\int_C\frac{dr}{\sqrt{f(r)g(r)}}) . \label{2.18} \end{eqnarray} The solutions for the ingoing and outgoing particle of the Klein-Gordon equation in the background metric (\ref{2.7}) follow from (\ref{2.9}), \begin{eqnarray} \Phi_{{\textrm {in}}}= {\textrm{exp}}\Big[\frac{i}{\hbar}(1+\sum_i\beta_i\frac{\hbar^i}{H_{\textrm {KN}}^i})\Big(-\omega t -\omega\int_C\frac{dr}{\sqrt{f(r)g(r)}}\Big)\Big] \label{2.19} \end{eqnarray} and \begin{eqnarray} \Phi_{{\textrm {out}}}= {\textrm{exp}}\Big[\frac{i}{\hbar}(1+\sum_i\beta_i\frac{\hbar^i}{H_{\textrm {KN}}^i})\Big(-\omega t +\omega\int_C\frac{dr}{\sqrt{f(r)g(r)}}\Big)\Big]. \label{2.20} \end{eqnarray} The paths of the ingoing and outgoing particles crossing the event horizon are not the same. The ingoing particle can cross the event horizon classically, whereas the outgoing particle trajectory is classically forbidden. The metric coefficients of the `$r-t$' sector change sign on the two sides of the event horizon. Therefore, the path along which tunneling takes place acquires an imaginary time coordinate (${\textrm {Im}}~t$).
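For the Schwarzschild case, with $f(r)=g(r)=1-\frac{2M}{r}$, the imaginary part of $\int_C\frac{dr}{\sqrt{f(r)g(r)}}$ comes entirely from deforming the radial contour around the simple pole at $r=2M$, and has magnitude $2\pi M$. The sketch below (a numerical illustration, not from the text) evaluates only the small semicircular detour around the pole, since the straight real segments contribute nothing to the imaginary part:

```python
import cmath
import math

M = 1.0
rs = 2 * M        # Schwarzschild horizon; f(r) = g(r) = 1 - 2M/r
eps = 1e-6        # radius of the semicircle around the pole
N = 100000        # quadrature points

# Semicircular detour r = rs + eps*exp(i*theta), theta running from pi to 0,
# applied to the integrand 1/sqrt(f g) = r/(r - rs).
total = 0j
for k in range(N):
    theta = math.pi * (1 - (k + 0.5) / N)              # midpoint rule, pi -> 0
    r = rs + eps * cmath.exp(1j * theta)
    dr = -1j * eps * cmath.exp(1j * theta) * (math.pi / N)
    total += (r / (r - rs)) * dr

# The half-residue contribution has |Im| = 2*pi*M
assert abs(abs(total.imag) - 2 * math.pi * M) < 1e-4
```

This magnitude matches the value of the imaginary time contribution quoted below for the Schwarzschild black hole.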
The ingoing and outgoing probabilities are now given by, \begin{eqnarray} P_{{\textrm{in}}}=|\Phi_{{\textrm {in}}}|^2= {\textrm{exp}}\Big[-\frac{2}{\hbar}(1+\sum_i\beta_i\frac{\hbar^i}{H_{\textrm {KN}}^i})\Big(-\omega{\textrm{Im}}~t -\omega{\textrm{Im}}\int_C\frac{dr}{\sqrt{f(r)g(r)}}\Big)\Big] \label{2.21} \end{eqnarray} and \begin{eqnarray} P_{{\textrm{out}}}=|\Phi_{{\textrm {out}}}|^2= {\textrm{exp}}\Big[-\frac{2}{\hbar}(1+\sum_i\beta_i\frac{\hbar^i}{H_{\textrm {KN}}^i})\Big(-\omega{\textrm{Im}}~t +\omega{\textrm{Im}}\int_C\frac{dr}{\sqrt{f(r)g(r)}}\Big)\Big]. \label{2.22} \end{eqnarray} Since in the classical limit ($\hbar\rightarrow 0$) $P_{{\textrm {in}}}$ is unity, one has, \begin{eqnarray} {\textrm{Im}}~t = -{\textrm{Im}}\int_C\frac{dr}{\sqrt{f(r)g(r)}}. \label{2.23} \end{eqnarray} The presence of this imaginary time component is in agreement with \cite{Pilling, Majhiconnect}, where it is shown that for the Schwarzschild black hole if one connects the two patches (in Kruskal-Szekeres coordinates) exterior and interior to the event horizon, there is a contribution coming from the imaginary time coordinate. The value of this contribution is $2\pi i M$ which exactly coincides with (\ref{2.23}) evaluated for the Schwarzschild case with $f(r)= g(r)= (1- \frac{2M}{r})$ \cite{Majhiconnect}. As a result the outgoing probability for the tunneling particle becomes, \begin{eqnarray} P_{{\textrm{out}}}={\textrm{exp}}\Big[-\frac{4}{\hbar}\omega\Big(1+\sum_i\beta_i\frac{\hbar^i}{H_{\textrm {KN}}^i}\Big){\textrm{Im}}\int_C\frac{dr}{\sqrt{f(r)g(r)}}\Big]. 
\label{2.24} \end{eqnarray} The principle of ``detailed balance'' \cite{Paddy} for the ingoing and outgoing probabilities states that \begin{eqnarray} P_{{\textrm{out}}}= {\textrm {exp}}\Big(-\frac{\omega}{T_{\textrm {bh}}}\Big)P_{\textrm{in}}={\textrm{exp}} \Big(-\frac{\omega}{T_{\textrm bh}}\Big). \label{2.25} \end{eqnarray} Comparing (\ref{2.24}) and (\ref{2.25}), the corrected Hawking temperature for the Kerr-Newman black hole is given by \begin{eqnarray} T_{\textrm {bh}}=T_{\textrm H}\Big(1+\sum_i\beta_i\frac{\hbar^i}{H_{\textrm {KN}}^i}\Big)^{-1}, \label{2.26} \end{eqnarray} where \begin{eqnarray} T_{\textrm H} = \frac{\hbar}{4}\Big({\textrm{Im}}\int_C\frac{dr}{\sqrt{f(r)g(r)}}\Big)^{-1} \label{2.27} \end{eqnarray} is the semiclassical Hawking temperature. Using the expressions for $f(r)$ and $g(r)$ from (\ref{2.71}) it follows that \begin{eqnarray} T_{\textrm H} = \frac{\hbar\sqrt{F'(r_+,\theta_0)g'(r_+,\theta_0)}}{4\pi}=\frac{\hbar}{2\pi}\frac{(r_+ -M)}{(r^2_+ +a^2)}, \label{2.28} \end{eqnarray} which is the familiar result for the semiclassical Hawking temperature of the Kerr-Newman black hole. \subsection{Fermion tunneling} In this section we discuss the Hawking effect through the tunneling of fermions. Although a sizeable literature exists on the computation of the semiclassical Hawking temperature \cite{fermion, fermoth, Majhifermion}, there is no analysis of possible corrections, for a general metric, within this framework. The paper \cite{Majhifermion} discusses such corrections, but only for the Schwarzschild metric. Here we shall carry out the analysis for the tunneling of massless fermions from the Kerr-Newman spacetime and reproduce the expressions (\ref{2.26}) and (\ref{2.28}) obtained for scalar particle tunneling.
The Dirac equation for massless fermions is given by \begin{eqnarray} i\gamma ^{\mu }D_{\mu }\psi =0, \label{3.1} \end{eqnarray} where the covariant derivative is defined as \begin{eqnarray} D_{\mu } =\partial _{\mu }+\frac{1}{2}i\Gamma _{\text{ \ }\mu }^{\alpha \text{ \ }\beta }\Sigma _{\alpha \beta }\nonumber\\ \Gamma^{\alpha~\beta}_{~\mu}= g^{\beta\nu}\Gamma^{\alpha}_{\mu\nu} \label{3.2} \end{eqnarray} and \begin{eqnarray} \Sigma _{\alpha \beta } =\frac{1}{4}i[\gamma _{\alpha },\gamma _{\beta }]. \label{3.3} \end{eqnarray} The $\gamma^{\mu }$ matrices satisfy the anticommutation relation $\{\gamma ^{\mu },\gamma ^{\nu}\}=2g^{\mu \nu }\times {\bf 1}$. We are concerned only with the radial trajectory, and for this it is useful to work with the metric (\ref{2.7}). Using this one can write (\ref{3.1}) as \begin{eqnarray} i\gamma^{\mu}\partial_{\mu}\psi- \frac{1}{2}\left(g^{tt}\gamma^{\mu}\Gamma_{\mu t}^{r}-g^{rr}\gamma^{\mu}\Gamma_{\mu r}^{t}\right)\Sigma_{rt}\psi=0. \label{3.4} \end{eqnarray} The nonvanishing connections which contribute to the resulting equation are \begin{eqnarray} \Gamma^{r}_{tt}= \frac{f'g}{2};~~~\Gamma^{t}_{tr}= \frac{f'}{2f}. \label{3.5} \end{eqnarray} Let us define the $\gamma$ matrices for the `$r-t$' sector as \begin{eqnarray} \gamma ^{t} &=&\frac{1}{\sqrt{f(r)}}\gamma ^{0},~~~~~\gamma^{r}=\sqrt{g(r)}\gamma ^{3}, \label{3.6} \end{eqnarray} where $\gamma^0$ and $\gamma^3$ are members of the standard Weyl or chiral representation of the $\gamma$ matrices \cite{fermion} in Minkowski spacetime, expressed as \begin{eqnarray} \gamma ^{0} &=&\left( \begin{array}{cc} 0 & I \\ -I & 0 \end{array} \right) \ \text{ \ \ \ }\gamma ^{3}=\left( \begin{array}{cc} 0 & \sigma ^{3} \\ \sigma ^{3} & 0 \end{array} \right).
\label{3.7} \end{eqnarray} Using (\ref{3.3}), (\ref{3.5}) and (\ref{3.6}) the equation of motion (\ref{3.4}) simplifies to \begin{eqnarray} i\gamma^t\partial_t\psi+i\gamma^r\partial_r\psi+\frac{f'(r)g(r)}{2f(r)}\gamma^{t}\Sigma_{rt}\psi=0, \label{3.8} \end{eqnarray} where \begin{eqnarray} \Sigma_{rt}=\frac{i}{2}\sqrt\frac{f(r)}{g(r)}{\left(\begin{array}{c c c c} 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 1 \end{array}\right).} \label{3.9} \end{eqnarray} The spin up (positive `$r$' direction) and spin down (negative `$r$' direction) ans{\"a}tze for the Dirac field have the following forms, respectively: \begin{eqnarray} \psi_\uparrow(t,r)= \left(\begin{array}{c} A(t,r) \\ 0 \\ B(t,r) \\ 0 \end{array}\right){\textrm{exp}}\Big[\frac{i}{\hbar}I_\uparrow (t,r)\Big] \label{3.91} \end{eqnarray} and \begin{eqnarray} \psi_\downarrow(t,r)= \left(\begin{array}{c} 0 \\ C(t,r) \\ 0 \\ D(t,r) \\ \end{array}\right){\textrm{exp}}\Big[\frac{i}{\hbar}I_\downarrow (t,r)\Big]. \label{3.10} \end{eqnarray} Here $I_{\uparrow}(r,t)$ is the action for the spin up case and will be expanded in powers of $\hbar$. We shall perform our analysis only for the spin up case since the spin down case is fully analogous. On substituting the ansatz (\ref{3.91}) in (\ref{3.8}) and simplifying, we get the following two nonzero equations: \begin{eqnarray} B(t,r)[\partial_t{I_{\uparrow}(r,t)}+\sqrt{fg}\partial_rI_{\uparrow}(r,t)]=0 \label{3.11} \end{eqnarray} and \begin{eqnarray} A(t,r)[\partial_t{I_{\uparrow}(r,t)}-\sqrt{fg}\partial_rI_{\uparrow}(r,t)]=0.
\label{3.12} \end{eqnarray} Now let us expand all the variables in the `$r-t$' sector in powers of $\hbar$, as \begin{eqnarray} I_{\uparrow}(r,t)=I(r,t)=I_0(r,t)+\displaystyle\sum_i \hbar^i I_i(r,t)\nonumber\\ A(r,t)=A_0(r,t)+\displaystyle\sum_i \hbar^i A_i(r,t)\label{3.13}\\ B(r,t)=B_0(r,t)+\displaystyle\sum_i \hbar^i B_i(r,t).\nonumber \end{eqnarray} Substituting all the terms from (\ref{3.13}) into (\ref{3.11}) and (\ref{3.12}) yields (for $a=0, 1, 2,\ldots$) \begin{eqnarray} B_a(r,t)\left(\partial_t I_a(r,t)+\sqrt{fg}~\partial_rI_a(r,t)\right)=0\nonumber\\ A_a(r,t)\left(\partial_t I_a(r,t)-\sqrt{fg}~\partial_rI_a(r,t)\right)=0. \label{3.14} \end{eqnarray} Thus we have the following sets of solutions, respectively, for $B_a{\textrm {'s}}\neq0$ and $A_a{\textrm {'s}}\neq0$, \begin{eqnarray} {\textrm{Set-I}:}~~~ \partial_t I_a(r,t)+\sqrt{fg}~\partial_rI_a(r,t)=0 \label{3.15} \end{eqnarray} \begin{eqnarray} {\textrm{Set-II}:}~~~ \partial_t I_a(r,t)-\sqrt{fg}~\partial_rI_a(r,t)=0. \label{3.16} \end{eqnarray} As in the case of scalar particle tunneling, here also one can separate the semiclassical action for the `$r-t$' sector as \begin{eqnarray} I_0(r,t)=-\omega t +W_0(r), \label{3.17} \end{eqnarray} where $\omega=(E-P_{\phi}\Omega_{\textrm H})$. Substituting (\ref{3.17}) in (\ref{3.15}) and (\ref{3.16}) for ($a=0$) and integrating we get \begin{eqnarray} W_0^{\pm}(r)=\pm\omega\int_C\frac{dr}{\sqrt{f(r)g(r)}} \label{3.18} \end{eqnarray} and subsequently \begin{eqnarray} I_0(r,t)=\left(-\omega t \pm\omega\int_C\frac{dr}{\sqrt{f(r)g(r)}}\right), \label{3.19} \end{eqnarray} where the + (-) sign implies that the particle is outgoing (ingoing). Because of the similar structure of (\ref{3.15}) and (\ref{3.16}), for all ($a=0, 1, 2...
$), the solutions for the $I_i(r,t)$'s can at most differ by a proportionality factor from $I_0(r,t)$, and the most general solution for $I(r,t)$ is given by \begin{eqnarray} I(r,t)=(1+{\displaystyle\sum_i{\gamma_i\hbar^i}})\left(-\omega t \pm\omega\int_C\frac{dr}{\sqrt{f(r)g(r)}}\right). \label{3.191} \end{eqnarray} This is an exact analogue of the scalar particle tunneling case (\ref{2.18}), and one can check that, by mimicking the steps discussed there, it leads to expressions for the corrected Hawking temperature identical to (\ref{2.26}) and (\ref{2.28}). \section{Exact differential and Corrected Area Law} With the corrected Hawking temperature (\ref{2.26}) at hand, we now proceed to the calculation of the corrected entropy and area law. The modified form of the first law of thermodynamics for the Kerr-Newman black hole, in the presence of corrections to the Hawking temperature, is \begin{eqnarray} dS_{\textrm {bh}}=\frac{dM}{T_{\textrm {bh}}}+(-\frac{\Omega_{\textrm H}}{T_{\textrm {bh}}})dJ+(-\frac{\Phi_{\textrm H}}{T_{\textrm {bh}}})dQ. \label{5.1} \end{eqnarray} In this context we want to stress that \begin{itemize} \item{{\it entropy must be a state function for all stationary spacetimes even in the presence of quantum corrections to the semiclassical value.}} \end{itemize} This implies that $dS_{\textrm {bh}}$ has to be an exact differential. In the expression for $T_{\textrm {bh}}$ in (\ref{2.26}) there are six undetermined coefficients ($a_1$ to $a_6$) present in $H_{{\textrm {KN}}}$ (\ref{2.15}). The first step in the analysis is to fix these coefficients in such a way that $dS_{\textrm {bh}}$ in (\ref{5.1}) remains an exact differential. By this restriction we make the corrected black hole entropy independent of any collapse process.
For (\ref{5.1}) to be an exact differential the following relations must hold: \begin{eqnarray} \frac{\partial}{\partial J}(\frac{1}{T_{\textrm {bh}}})\big|_{M,Q}=\frac{\partial}{\partial M}(-\frac{\Omega_{\textrm H}}{T_{\textrm {bh}}})\big|_{J,Q} \label{5.2} \end{eqnarray} \begin{eqnarray} \frac{\partial}{\partial Q}(-\frac{\Omega_{\textrm H}}{T_{\textrm {bh}}})\big|_{M,J}=\frac{\partial}{\partial J}(-\frac{\Phi_{\textrm H}}{T_{\textrm {bh}}})\big|_{M,Q} \label{5.3} \end{eqnarray} \begin{eqnarray} \frac{\partial}{\partial M}(-\frac{\Phi_{\textrm H}}{T_{\textrm {bh}}})\big|_{J,Q}=\frac{\partial}{\partial Q}(\frac{1}{T_{\textrm {bh}}})\big|_{J,M}. \label{5.4} \end{eqnarray} Using the expression of $T_{\textrm {bh}}$ from (\ref{2.26}) and the semiclassical result from (\ref{4.71}), the first condition (\ref{5.2}) reduces to \begin{eqnarray} \frac{\partial}{\partial J}\displaystyle\sum_i \frac{\beta_i\hbar^i}{H^i_{\textrm {KN}}}\big|_{M,Q}=-\Omega_{\textrm H}\frac{\partial}{\partial M}\displaystyle\sum_i\frac{\beta_i\hbar^i}{H^i_{\textrm {KN}}}\big|_{J,Q} \label{5.41} \end{eqnarray} Expanding this equation in powers of $\hbar$, one has the following equality \begin{eqnarray} \frac{\partial H_{{\textrm {KN}}}}{\partial J}\big|_{M,Q}=-\Omega_{\textrm H}\frac{\partial H_{{\textrm {KN}}}}{\partial M}\big|_{J,Q}. \label{5.5} \end{eqnarray} Similarly the other two integrability conditions (\ref{5.3}) and (\ref{5.4}) lead to other conditions on $H_{\textrm {KN}}$, \begin{eqnarray} \frac{\partial H_{{\textrm {KN}}}}{\partial Q}\big|_{M,J}=\left(\frac{\Phi_{\textrm H}}{\Omega_{\textrm H}}\right)\frac{\partial{H_{\textrm {KN}}}}{\partial J}\big|_{M,Q} \label{5.6} \end{eqnarray} \begin{eqnarray} \frac{\partial H_{{\textrm {KN}}}}{\partial M}\big|_{J,Q}=-\frac{1}{\Phi_{\textrm H}}\frac{\partial H_{\textrm {KN}}}{\partial Q}\big|_{J,M} \label{5.61} \end{eqnarray} respectively. 
The number of unknown coefficients present in $H_{{\textrm {KN}}}$ is six and we have only three equations involving them, so the problem is underdetermined. As a remedy, let us first carry out the dimensional analysis for the Kerr spacetime and then use the result to reduce the arbitrariness in $H_{\textrm {KN}}$. For $Q=0$ the Kerr-Newman metric reduces to the rotating Kerr spacetime, and one can carry out the same analysis to find the corrections to the Hawking temperature for both scalar particle and fermion tunneling from the Kerr spacetime. An identical calculation is repeated with $Q=0$; the only difference appears in the dimensional analysis (\ref{2.15}). Since the Kerr metric is chargeless, the most general expression for the corrected Hawking temperature comes out as \begin{eqnarray} T_{\textrm {bh}}=T_{\textrm H}\Big(1+\sum_i\beta_i\frac{\hbar^i}{H_{\textrm K}^i}\Big)^{-1}, \label{5.7} \end{eqnarray} where $H_{\textrm K}$ is now given by \begin{eqnarray} H_{{\textrm K}}=H_{\textrm {KN}}(Q=0)= a_1r^2_+ + a_2Mr_++a_3M^2. \label{5.8} \end{eqnarray} The first law of thermodynamics for the Kerr black hole is \begin{eqnarray} dS=\frac{dM}{T_{\textrm H}}+(-\frac{\Omega_{\textrm H}}{T_{\textrm H}})dJ, \label{5.9} \end{eqnarray} where $T_{\textrm H}$ and $\Omega_{\textrm H}$ for the Kerr black hole are obtained from the corresponding Kerr-Newman expressions with $Q=0$, as given in Appendix 7.1. With these expressions one can easily check that $dS$ is an exact differential for the Kerr black hole as well, since the only integrability condition \begin{eqnarray} \frac{\partial}{\partial J}(\frac{1}{T_{\textrm H}})\big|_{M}=\frac{\partial}{\partial M}(-\frac{\Omega_{\textrm H}}{T_{\textrm H}})\big|_{J} \label{5.10} \end{eqnarray} is satisfied.
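The integrability condition (\ref{5.10}) for the Kerr case can indeed be checked directly. The sketch below does so numerically with finite differences, using the standard Kerr horizon quantities ($G=c=\hbar=1$, $a=J/M$, $r_+=M+\sqrt{M^2-a^2}$; the sample point is arbitrary):

```python
import math

def horizon(M, J):
    """Kerr outer horizon r_+ = M + sqrt(M^2 - a^2), a = J/M."""
    a = J / M
    return M + math.sqrt(M**2 - a**2)

def T_H(M, J):
    """Semiclassical Kerr Hawking temperature (hbar = 1)."""
    a, rp = J / M, horizon(M, J)
    return (rp - M) / (2 * math.pi * (rp**2 + a**2))

def Omega_H(M, J):
    """Angular velocity of the Kerr event horizon."""
    a, rp = J / M, horizon(M, J)
    return a / (rp**2 + a**2)

M, J, h = 2.0, 1.0, 1e-6

# d/dJ (1/T_H)|_M = d/dM (-Omega_H/T_H)|_J  -- the condition (5.10)
lhs = (1 / T_H(M, J + h) - 1 / T_H(M, J - h)) / (2 * h)
rhs = (-Omega_H(M + h, J) / T_H(M + h, J)
       + Omega_H(M - h, J) / T_H(M - h, J)) / (2 * h)

assert abs(lhs - rhs) < 1e-4 * abs(lhs)
```

Replacing `1/T_H` by the corrected `1/T_bh` of (5.7) reduces the analogous check to the condition on `H_K` discussed next.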
As stated earlier, the idea behind introducing the Kerr spacetime is to carry out the dimensional analysis for the Kerr spacetime first, and then demand that for $Q=0$ the dimensional parameter $H_{{\textrm {KN}}}$ reduces to $H_{{\textrm K}}$. The first law for the Kerr black hole, in the presence of corrections to the Hawking temperature, is given by \begin{eqnarray} dS_{\textrm {bh}}=\frac{dM}{T_{\textrm {bh}}}+(-\frac{\Omega_{\textrm H}}{T_{\textrm {bh}}})dJ, \label{5.11} \end{eqnarray} where the general form of $T_{\textrm {bh}}$ is given in (\ref{5.7}). Now demanding that the corrected entropy of the Kerr black hole must be a state function, the integrability condition \begin{eqnarray} \frac{\partial}{\partial J}(\frac{1}{T_{\textrm {bh}}})\big|_{M}=\frac{\partial}{\partial M}(-\frac{\Omega_{\textrm H}}{T_{\textrm {bh}}})\big|_{J} \label{5.12} \end{eqnarray} must hold. Using the semiclassical result (\ref{5.10}) and considering corrections to all orders in $\hbar$ to the Hawking temperature in (\ref{5.7}), it follows that the above integrability condition is satisfied provided \begin{eqnarray} \frac{\partial H_{{\textrm K}}}{\partial J}\big|_{M}=-\Omega_{\textrm H}\frac{\partial H_{{\textrm K}}}{\partial M}\big|_{J}. \label{5.13} \end{eqnarray} From (\ref{5.8}) it follows that this equality holds only for \begin{eqnarray} a_1= 0= a_3 \label{5.131} \end{eqnarray} and the form of $H_{\textrm K}$ is given by \begin{eqnarray} H_{{\textrm K}}=a_2Mr_+. \label{5.14} \end{eqnarray} Therefore, the corrected form of the Hawking temperature obeying the integrability condition (\ref{5.12}) for the Kerr black hole is given by \begin{eqnarray} T_{\textrm {bh}}=T_{\textrm H}\left(1+\displaystyle\sum_i\frac{\beta_i\hbar^i}{(a_2Mr_+)^i}\right)^{-1}=T_{\textrm H}\left(1+\displaystyle\sum_i\frac{\tilde\beta_i\hbar^i}{(Mr_+)^i}\right)^{-1}.
\label{5.141} \end{eqnarray} The natural expectation for the dimensional term ($H_{\textrm {KN}}$) in (\ref{2.15}) is that for $Q=0$ it reproduces the dimensional term ($H_{\textrm K}$) in (\ref{5.14}). To fulfil this criterion we must have $a_1= 0= a_3$ in (\ref{2.15}), and this leads to \begin{align} H_{{\textrm {KN}}}=a_2Mr_+ + a_4{r_+ Q}+a_5{MQ}+a_6{Q^2}\notag\\ =a_2(Mr_+ + \tilde a_4{r_+ Q}+\tilde a_5{MQ}+\tilde a_6{Q^2}), \label{5.15} \end{align} where $\tilde a_j=\frac{a_j}{a_2}$. Now we are in a position to find the precise form of the dimensional term ($H_{\textrm {KN}}$) satisfying the integrability conditions given in (\ref{5.5}), (\ref{5.6}) and (\ref{5.61}). Note that with the modified expression (\ref{5.15}) the problem of the underdetermination of six coefficients by only three integrability conditions for the Kerr-Newman spacetime has been removed. With this expression of $H_{\textrm {KN}}$ one effectively has three undetermined coefficients with three equations, and it is straightforward to calculate those coefficients. Putting the new expression of $H_{\text {KN}}$ in (\ref{5.5}), (\ref{5.6}) and (\ref{5.61}) one obtains \begin{eqnarray} \tilde a_5 - \tilde a_4\frac{r_+}{M}=0 \label{5.16} \end{eqnarray} \begin{eqnarray} 2\tilde a_6 Q + \tilde a_4 \left(\frac{Mr_+ +Q^2}{M}\right)+ \tilde a_5M=-Q \label{5.17} \end{eqnarray} \begin{eqnarray} 2\tilde a_6 Q + \tilde a_4 \left(r_+ + \frac{J^2Q^2}{M^3(r^2_+ + J^2/M^2)}\right)+ \tilde a_5\left(M+\frac{Q^2r_+}{r^2_+ +J^2/M^2}\right)=-Q. \label{5.171} \end{eqnarray} The simultaneous solution of these three equations yields \begin{eqnarray} \tilde a_4= 0= \tilde a_5\notag\\ \tilde a_6= -\frac{1}{2}.
\label{5.172} \end{eqnarray} As a result the final form of $H_{\text {KN}}$, derived from the requirements:\\ (i) $H_{\text {KN}}$ must satisfy the integrability conditions (\ref{5.5}, \ref{5.6}, \ref{5.61}), \\ (ii) $H_{\text {KN}}= H_{\text K}$ for $Q=0$,\\ is given by \begin{eqnarray} H_{{\textrm {KN}}}=a_2(Mr_+-\frac{1}{2}Q^2). \label{5.18} \end{eqnarray} Hence the corrected Hawking temperature for the Kerr-Newman black hole is found to be \begin{eqnarray} T_{\textrm {bh}}=T_{\textrm H}\left(1+\displaystyle\sum_i\frac{\beta_i\hbar^i}{a^i_2(Mr_+-\frac{Q^2}{2})^i}\right)^{-1}=T_{\textrm H}\left(1+\displaystyle\sum_i\frac{\tilde\beta_i\hbar^i}{(Mr_+-\frac{Q^2}{2})^i}\right)^{-1}, \label{5.19} \end{eqnarray} where $\tilde\beta_i=\frac{\beta_i}{a^i_2}$. We are now in a position to compute the corrected entropy and find the deviations from the semiclassical area law. Comparing (\ref{4.8}) and (\ref{5.1}) with $T_{\textrm {bh}}$ given above, we find a dictionary similar to (\ref{4.15}), with the semiclassical terms replaced by their corrected versions where necessary, \begin{align} (f\rightarrow S_{\textrm {bh}},~~x\rightarrow M,~~y\rightarrow J,~~z\rightarrow Q)\nonumber\\ (U\rightarrow\frac{1}{T_{\textrm {bh}}},~~V\rightarrow\frac{-\Omega_{\textrm H}}{T_{\textrm {bh}}},~~W\rightarrow\frac{-\Phi_{\textrm H}}{T_{\textrm {bh}}}). \label{5.20} \end{align} Following this dictionary and (\ref{4.12}), (\ref{4.13}) and (\ref{4.14}), the corrected entropy for the Kerr-Newman black hole has the form \begin{eqnarray} S_{\textrm {bh}}=\int{\frac{dM}{T_{\textrm {bh}}}}+\int{XdJ}+\int{YdQ}, \label{5.21} \end{eqnarray} where \begin{eqnarray} X=(-\frac{\Omega_{\textrm H}}{T_{\textrm {bh}}})-\frac{\partial}{\partial J}{\int\frac{dM}{T_{\textrm {bh}}}} \label{5.22} \end{eqnarray} and \begin{eqnarray} Y=(-\frac{\Phi_{\textrm H}}{T_{\textrm {bh}}})-\frac{\partial}{\partial Q}[{\int\frac{dM}{T_{\textrm {bh}}}}+{\int XdJ}].
\label{5.23} \end{eqnarray} It is possible to calculate $S_{\textrm {bh}}$ analytically to all orders in $\hbar$. However, we shall restrict ourselves to the second order correction to the Hawking temperature. Integration over $M$ yields, \begin{eqnarray} \int\frac{dM}{T_{\textrm {bh}}}=\frac{\pi}{\hbar}(2Mr_+-Q^2)+ 2\pi\tilde\beta_1\log(2Mr_+-Q^2)-\frac{4\pi\tilde\beta_2\hbar}{(2Mr_+-Q^2)}+{\textrm {const.}}+{\textrm {higher order terms}}. \label{5.24} \end{eqnarray} With this result one can check the following relation, \begin{eqnarray} \frac{\partial}{\partial J}\int\frac{dM}{T_{\textrm {bh}}}=-\frac{\Omega_{\textrm H}}{T_{\textrm {bh}}}. \label{5.25} \end{eqnarray} Therefore $X=0$. Furthermore we get \begin{eqnarray} \frac{\partial}{\partial Q}\int\frac{dM}{T_{\textrm {bh}}}=-\frac{\Phi_{\textrm H}}{T_{\textrm {bh}}}, \label{5.26} \end{eqnarray} and using this equality together with $X=0$ we find $Y=0$. It is quite remarkable that both $X$ and $Y$ vanish in the black hole context, with or without quantum corrections. The final result for the entropy of the Kerr-Newman black hole in the presence of quantum corrections is now given by \begin{eqnarray} S_{\textrm {bh}}=\frac{\pi}{\hbar}(2Mr_+-Q^2)+ 2\pi\tilde\beta_1\log(2Mr_+-Q^2)-\frac{4\pi\tilde\beta_2\hbar}{(2Mr_+-Q^2)}+{\textrm {const.}}+{\textrm {higher order terms}}. \label{5.27} \end{eqnarray} In terms of the semiclassical black hole entropy and horizon area this can be expressed, respectively, as \begin{eqnarray} S_{\textrm {bh}}=S_{\textrm {BH}}+ 2\pi\tilde\beta_1\log S_{\textrm {BH}}-\frac{4\pi^2\tilde\beta_2}{S_{\textrm {BH}}}+{\textrm {const.}}+{\textrm {higher order terms}}. \label{5.28} \end{eqnarray} and \begin{eqnarray} S_{\textrm {bh}}=\frac{A}{4}+ 2\pi\tilde\beta_1\log A-\frac{16\pi^2\tilde\beta_2}{A}+{\textrm {const.}}+{\textrm {higher order terms}}.
\label{5.29} \end{eqnarray} The first term in the expression (\ref{5.28}) is the usual semiclassical Bekenstein-Hawking entropy and the other terms are due to quantum corrections. The logarithmic and inverse area terms appear as the leading and subleading corrections to the entropy and the area law. \section{Determination of the leading correction to entropy by trace anomaly} In the expression for the entropy in (\ref{5.29}) the leading order correction includes an arbitrary coefficient $\tilde\beta_1$. In this section we shall determine this coefficient by using the trace anomaly. Consider the scalar particle tunneling case. The expression for the action for the Kerr-Newman spacetime is given by (\ref{2.16}) \begin{eqnarray} {\cal S}(r,t)=\left({\cal S}_0(r,t)+\sum_i\hbar^i{\cal S}_i(r,t)\right)=\left({\cal S}_0(r,t)+\sum_i\frac{\tilde\beta_i\hbar^i}{(Mr_+ -\frac{Q^2}{2})^i}{\cal S}_0(r,t)\right), \label{6.1} \end{eqnarray} where the appropriate form of $H_{\textrm {KN}}$ from (\ref{5.18}) has been used. Keeping the first order ($\hbar$) correction in this equation we can write the following relation for the imaginary part of the outgoing particle action \begin{eqnarray} {\textrm {Im}{\cal S}_1^{\textrm {out}}}(r,t)=\frac{\tilde\beta_1}{(Mr_+ -\frac{Q^2}{2})}{\textrm {Im}}{\cal S}_0^{\textrm {out}}(r,t). \label{6.2} \end{eqnarray} The imaginary part of the semiclassical action for an outgoing particle can be found from (\ref{2.13}), (\ref{2.17}) and (\ref{2.23}) as \begin{eqnarray} {\textrm {Im}}{\cal S}_0^{\textrm {out}}(r,t)=-2\omega {\textrm {Im}}\int_C{\frac{dr}{\sqrt{f(r)g(r)}}}. \label{6.3} \end{eqnarray} Let us make an infinitesimal scale transformation of the metric coefficients in (\ref{2.7}), parametrized by the constant factor `$k$' \cite{Hawkzeta, Majhitrace}, such that $\bar f(r)=k f(r)\simeq (1+\delta k)f(r)$ and $\bar g(r)=k^{-1} g(r)\simeq (1+\delta k)^{-1}g(r)$.
From the scale invariance of the Klein-Gordon equation in (\ref{2.8}) it follows that the Klein-Gordon field ($\Phi$) should transform as $\bar\Phi=k^{-1}\Phi$. Since $\Phi$ has the dimension of mass, one infers that the black hole mass ($M$) should transform as $\bar M=k^{-1}M\simeq (1+\delta k)^{-1}M$ under the infinitesimal scale transformation. Therefore the other two black hole parameters ($Q$, $a$) and the particle energy $\omega$ should also transform as $M$ does. Using these it is straightforward to calculate the transformed versions of (\ref{6.2}) and (\ref{6.3}) to get \begin{eqnarray} {\textrm {Im}\overline{\cal S}_1^{\textrm {out}}}(r,t)=\frac{\tilde\beta_1}{(\overline M\overline r_+ -\frac{\overline Q^2}{2})}{\textrm {Im}}\overline{\cal S}_0^{\textrm {out}}(r,t)=\frac{\tilde\beta_1}{(Mr_+ -\frac{Q^2}{2})}(1+\delta k){\textrm {Im}}{\cal S}_0^{\textrm {out}}(r,t), \label{6.4} \end{eqnarray} and \begin{eqnarray} \frac{\delta {\textrm {Im}}{{\cal S}}^{{\textrm {out}}}_1(r,t)}{\delta k}=\frac{\tilde\beta_1}{(Mr_+ -\frac{Q^2}{2})}{\textrm {Im}}{\cal S}_0^{\textrm {out}}(r,t). \label{6.5} \end{eqnarray} Now, considering the scalar field Lagrangian, it can be shown that under a constant scale transformation of the metric coefficients the action is not invariant in the presence of a trace anomaly. This lack of conformal invariance is expressed by the following relation \begin{eqnarray} \frac{\delta {{\cal S}}(r,t)}{\delta k}=\frac{1}{2}\int{d^4x\sqrt{-g}(<T^{\mu}_{~\mu}>^{(1)}+<T^{\mu}_{~\mu}>^{(2)}+...)}, \label{6.6} \end{eqnarray} where $<T^{\mu}_{~\mu}>^{(i)}$ is the trace of the regularised stress-energy tensor calculated at the $i$-th loop order.
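The factor of $(1+\delta k)$ appearing in (\ref{6.4}) can be traced explicitly by a short bookkeeping step (a sketch; it only combines the scalings just introduced). Since $M$, $Q$, $a$ and $\omega$ all carry the dimension of mass, $\bar r_+=\bar M+\sqrt{\bar M^2-\bar a^2-\bar Q^2}=(1+\delta k)^{-1}r_+$, and therefore

```latex
\begin{eqnarray}
\bar M\bar r_+ -\frac{\bar Q^2}{2}=(1+\delta k)^{-2}\left(Mr_+ -\frac{Q^2}{2}\right),
\qquad
{\textrm {Im}}\,\overline{\cal S}_0^{\textrm {out}}=-2\bar\omega\,
{\textrm {Im}}\int_C{\frac{dr}{\sqrt{\bar f(r)\bar g(r)}}}
=(1+\delta k)^{-1}\,{\textrm {Im}}\,{\cal S}_0^{\textrm {out}},
\nonumber
\end{eqnarray}
```

where the second relation uses $\bar f\bar g=fg$, which leaves the contour integral untouched while $\bar\omega=(1+\delta k)^{-1}\omega$. The ratio of these two scalings is precisely the overall factor of $(1+\delta k)$ in (\ref{6.4}).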
However, in the literature \cite{Hawkzeta, Dewitt}, only the one-loop calculation has been carried out and this gives \begin{eqnarray} \frac{\delta {\textrm {Im}}{{\cal S}}^{{\textrm {out}}}_1(r,t)}{\delta k}=\frac{1}{2}{\textrm {Im}}\int{d^4x\sqrt{-g}(<T^{\mu}_{~\mu}>^{(1)})}, \label{6.7} \end{eqnarray} where, for a scalar background, the form of the trace anomaly is given by \begin{eqnarray} <T^{\mu}_{~\mu}>^{(1)}=\frac{1}{2880\pi^2}\left(R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-R_{\mu\nu}R^{\mu\nu}+\nabla_{\mu}\nabla^{\mu}R\right). \label{6.9} \end{eqnarray} Integrating (\ref{6.3}) around the pole at $r=r_+$ we get \begin{eqnarray} {\textrm {Im}}{\cal S}_0^{\textrm {out}}(r,t)=-2\pi\omega\frac{(r_+M-\frac{Q^2}{2})}{(r_+-M)}. \label{6.9a} \end{eqnarray} Putting this in (\ref{6.5}) and comparing with (\ref{6.7}) we find \begin{eqnarray} \tilde\beta_1=-\frac{(M^2-Q^2-\frac{J^2}{M^2})^{1/2}}{4\pi\omega}{\textrm {Im}}\int{d^4x\sqrt{-g}<T^{\mu}_{~\mu}>^{(1)}}. \label{6.8} \end{eqnarray} Equation (\ref{6.8}) gives the general form of the coefficient associated with the leading correction to the semiclassical entropy for any stationary black hole. To get $\tilde\beta_1$ for a particular black hole in (3+1) dimensions one needs to evaluate both (\ref{6.8}) and (\ref{6.9}) for that black hole. We shall now take different spacetime metrics and explicitly calculate $\tilde\beta_1$ for each of them. \subsection{Schwarzschild Black Hole} For $Q=0=J$ the Kerr-Newman spacetime metric reduces to the Schwarzschild spacetime and from (\ref{6.8}) it follows that \begin{eqnarray} \tilde\beta_1=-\frac{M}{4\pi\omega}{\textrm {Im}}\int{d^4x\sqrt{-g}<T^{\mu}_{~\mu}>^{(1)}}. \label{6.10} \end{eqnarray} The identification of the particle energy by (\ref{2.13a}) is now given by $\omega=E$, where `$E$' is the Komar conserved quantity corresponding to the timelike Killing vector $\frac{\partial}{\partial t}$ of the spherically symmetric Schwarzschild spacetime.
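This Komar evaluation can be sketched schematically (assuming the normalisation of the definition (\ref{komenergy}) given in Appendix 7.2, with $G=1$). For the Schwarzschild metric function $f(r)=1-\frac{2M}{r}$, with $K^{\nu}=(\partial_t)^{\nu}$ and unit normals $n_{\mu}=(-\sqrt{f},0,0,0)$, $\sigma_{\nu}=(0,\frac{1}{\sqrt{f}},0,0)$, one finds

```latex
\begin{eqnarray}
n_{\mu}\sigma_{\nu}\nabla^{\mu}K^{\nu}=\frac{f'(r)}{2}=\frac{M}{r^2},
\qquad
E=\frac{1}{4\pi}\int{d\theta\, d\phi\, r^2\sin\theta\,\frac{M}{r^2}}=M,
\nonumber
\end{eqnarray}
```

independently of the radius of the two-sphere on which the integral is evaluated.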
An exact calculation \cite{Carrol} of the Komar integral gives $E=M$ ({\it see} Appendix 7.2), where $M$ is the mass of the Schwarzschild black hole. Therefore we get \begin{eqnarray} \tilde\beta_1=-\frac{1}{4\pi}{\textrm {Im}}\int{d^4x\sqrt{-g}<T^{\mu}_{~\mu}>^{(1)}}. \label{6.11} \end{eqnarray} A similar result was found by Hawking \cite{Hawkzeta}, where a path integral approach based on zeta function regularization was adopted. There, the path integral for standard Einstein-Hilbert gravity was modified by the fluctuations coming from a scalar field in the black hole spacetime. To find the trace anomaly of the stress tensor (\ref{6.9}) we calculate the following invariant scalars for the Schwarzschild black hole, \begin{align} R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} &=\frac{48M^2}{r^6},\notag\\ R_{\mu\nu}R^{\mu\nu} &=0,\label{6.11a}\\ R &=0.\notag \end{align} Using these we can find $<T^{\mu}_{~\mu}>^{(1)}$ from (\ref{6.9}) and inserting it in (\ref{6.11}) yields, \begin{align} \tilde\beta_1^{({\textrm {Sch}})} &=-\frac{1}{4\pi}\frac{1}{2880\pi^2}{\textrm {Im}}\int_{r=2M}^{\infty}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi}\int_{t=0}^{-8\pi i M}{\frac{48M^2}{r^6}r^2 \sin\theta dr d\theta d\phi dt}\notag\\ &=\frac{1}{180\pi}. \label{6.12} \end{align} The corrected entropy/area law (\ref{5.28}, \ref{5.29}) is now given by, \begin{align} S_{\textrm {bh}}^{\textrm {(Sch)}}=S_{\textrm {BH}}+ \frac{1}{90}\log S_{\textrm {BH}}+{\textrm {higher order terms}}, \nonumber\\ =\frac{A}{4}+\frac{1}{90}\log A + {\textrm {higher order terms.}} \label{6.13} \end{align} This reproduces the result existing in the literature \cite{Hawkzeta, Fursaev, Majhitrace}.
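The multiple integral in (\ref{6.12}) factorises into radial, angular and Euclidean-time pieces, so the value $\frac{1}{180\pi}$ can be checked by direct numerical quadrature. The short script below is purely an illustrative sanity check (the function name and numerical parameters are ours, not part of the derivation):

```python
import math

def beta1_schwarzschild(M=1.0, n=100000, r_cut=2000.0):
    """Numerical check of Eq. (6.12): tilde-beta_1 for Schwarzschild.

    beta_1 = -(1/(4*pi)) * Im \\int d^4x sqrt(-g) <T^mu_mu>^(1),
    with <T^mu_mu>^(1) = (48 M^2 / r^6) / (2880 pi^2), sqrt(-g) = r^2 sin(theta).
    """
    # Radial integral of (48 M^2 / r^6) * r^2 from the horizon r = 2M outward,
    # by the midpoint rule; the integrand falls off as r^{-4}, so a finite
    # cutoff r_cut*M is adequate.
    r_lo, r_hi = 2.0 * M, r_cut * M
    dr = (r_hi - r_lo) / n
    radial = sum(48.0 * M**2 / (r_lo + (i + 0.5) * dr)**4 for i in range(n)) * dr
    angular = 4.0 * math.pi       # int sin(theta) dtheta dphi over the sphere
    im_time = -8.0 * math.pi * M  # imaginary part of the range (0, -8*pi*i*M)
    prefactor = -1.0 / (4.0 * math.pi * 2880.0 * math.pi**2)
    return prefactor * radial * angular * im_time

print(beta1_schwarzschild())  # close to 1/(180*pi) ~ 0.0017684
```

The radial integral converges as $r^{-3}$, so the finite upper cutoff introduces a negligible error.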
\subsection{Reissner-Nordstrom Black Hole} For the Reissner-Nordstrom black hole, putting $J=0$ in (\ref{6.8}), we get \begin{eqnarray} \tilde\beta_1=-\frac{(M^2-Q^2)^{1/2}}{4\pi\omega}{\textrm {Im}}\int{d^4x\sqrt{-g}<T^{\mu}_{~\mu}>^{(1)}}, \label{6.14} \end{eqnarray} where the particle energy is again given by the Komar energy integral corresponding to the timelike Killing field $\frac{\partial}{\partial t}$. Unlike the Schwarzschild case, however, the effective energy for the Reissner-Nordstrom black hole observed at a distance $r$ is given by ({\it see} Appendix 7.2), \begin{eqnarray} \omega=E=(M-\frac{Q^2}{r}). \label{6.15} \end{eqnarray} For a particle undergoing tunneling, $r=r_+=(M+\sqrt{M^2-Q^2})$, so that $\omega=(M^2-Q^2)^{1/2}$ and therefore (\ref{6.14}) gives \begin{eqnarray} \tilde\beta_1=-\frac{1}{4\pi}{\textrm {Im}}\int{d^4x\sqrt{-g}<T^{\mu}_{~\mu}>^{(1)}}. \label{6.16} \end{eqnarray} This has exactly the same functional form as (\ref{6.11}). To calculate this integral, we first evaluate the invariant scalars appearing in (\ref{6.9}) for the Reissner-Nordstrom black hole, \begin{align} R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} &=\frac{8(7Q^4-12MQ^2r+6M^2r^2)}{r^8},\notag\\ R_{\mu\nu}R^{\mu\nu} &=\frac{4Q^4}{r^8},\label{6.16a}\\ R &=0.\notag \end{align} With these results $<T^{\mu}_{~\mu}>^{(1)}$ is obtained and, finally, \begin{align} \tilde\beta_1^{(\textrm {RN})} &=-\frac{1}{4\pi}{\textrm {Im}}\int_{r=r_+}^{\infty}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi}\int_{t=0}^{-i\beta}{<T^{\mu}_{~\mu}>^{(1)}r^2 \sin\theta dr d\theta d\phi dt}\notag\\ &=\frac{1}{180\pi}(1+\frac{3}{5}\frac{r_-^2}{r_+^2-r_+r_-}). \label{6.16b} \end{align} Therefore the corrected entropy/area law for a Reissner-Nordstrom black hole is now given by ({\it see} (\ref{5.28}, \ref{5.29})) \begin{eqnarray} S_{\textrm {bh}}^{\textrm {(RN)}}=S_{\textrm {BH}}+ \frac{1}{90}(1+\frac{3}{5}\frac{r_-^2}{r_+^2-r_+r_-})\log S_{\textrm {BH}}+{\textrm {higher order terms}}.
\nonumber\\ =\frac{A}{4}+ \frac{1}{90}(1+\frac{3}{5}\frac{r_-^2}{r_+^2-r_+r_-})\log{A}+{\textrm {higher order terms.}} \label{6.16c} \end{eqnarray} Unlike for the Schwarzschild black hole, here the prefactor of the logarithmic term is not a pure number. This is because the charge present in the region outside the event horizon contributes to the matter sector. Therefore the charge ($Q$) directly affects the dynamics of the system, which in turn is related to the entropy. It is interesting to see that in the extremal limit the prefactor of the logarithmic term blows up, suggesting that there cannot be a smooth limit from the non-extremal to the extremal case. This is in agreement with a recent paper \cite{Carrolpaper}, where it is argued that the {\it extremal limit} of the Reissner-Nordstrom black hole is different from the extremal case itself. In the extremal case the region between the inner and outer horizons disappears, but in the {\it extremal limit} this region does not disappear; rather, it approaches a patch of $AdS_{2}\times S^{2}$. As a result the non-extremal to extremal limit is not continuous. \subsection{Kerr Black Hole} The Kerr black hole is the chargeless limit of the Kerr-Newman black hole. It is an axially symmetric solution of Einstein's equations and has two Killing vectors, $\frac{\partial}{\partial t}$ and $\frac{\partial}{\partial \phi}$, and therefore two conserved quantities corresponding to those Killing directions. Neither Killing vector is individually time-like everywhere, but the combination $(\frac{\partial}{\partial t}+\Omega \frac{\partial}{\partial \phi})$ is time-like throughout the spacetime (outside the event horizon). This combination, however, cannot be treated as a Killing vector because in general $\Omega$ is not constant. At the horizon $\Omega=\Omega_{\textrm H}$ is identified as the angular velocity of the horizon and the above time-like vector becomes null.
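Anticipating the Komar values $E=M$ and $P_{\phi}=2J$ quoted below, the identity $\omega=E-\Omega_{\textrm H}P_{\phi}=(M^2-J^2/M^2)^{1/2}$ used in the following can be verified numerically; the helper below is a small illustrative sketch (the function name is ours):

```python
import math

def kerr_tunneling_energy(M, J):
    """omega = E - Omega_H * P_phi for a Kerr black hole, with the Komar
    values E = M and P_phi = 2J (illustrative helper)."""
    a = J / M                              # rotation parameter a = J/M
    r_plus = M + math.sqrt(M**2 - a**2)    # outer horizon (Q = 0)
    omega_h = a / (r_plus**2 + a**2)       # angular velocity of the horizon
    return M - omega_h * 2.0 * J           # E - Omega_H * P_phi

# Agrees with (M^2 - J^2/M^2)^(1/2):
print(kerr_tunneling_energy(1.0, 0.6), math.sqrt(1.0 - 0.36))  # both ~0.8
```

The agreement is exact here because $r_+^2+a^2=2Mr_+$ for $Q=0$, so $\Omega_{\textrm H}\,2J=a^2/r_+$ and $\omega=(Mr_+-a^2)/r_+=\sqrt{M^2-a^2}$.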
This time-like vector plays a crucial role in the evaluation of the Komar integrals. For a Kerr black hole (\ref{6.8}) reduces to \begin{eqnarray} \tilde\beta_1=-\frac{(M^2-\frac{J^2}{M^2})^{1/2}}{4\pi\omega}{\textrm {Im}}\int{d^4x\sqrt{-g}<T^{\mu}_{~\mu}>^{(1)}} \label{6.17} \end{eqnarray} where $\omega=(E-\Omega_H P_{\phi})$. In Boyer-Lindquist coordinates, the Komar integrals corresponding to the Killing vectors $\frac{\partial}{\partial t}$ and $\frac{\partial}{\partial \phi}$ are given by $E=M$ and $P_{\phi}=2J$ respectively \cite{Katz}. Here $M$ and $J$ are respectively the mass and angular momentum of the Kerr black hole. Using these expressions together with the angular velocity ($\Omega_H$) ({\it see} Appendix 7.2) we get $\omega=(M^2-\frac{J^2}{M^2})^{1/2}$ and therefore (\ref{6.17}) becomes \begin{eqnarray} \tilde\beta_1=-\frac{1}{4\pi}{\textrm {Im}}\int{d^4x\sqrt{-g}<T^{\mu}_{~\mu}>^{(1)}} \label{6.18} \end{eqnarray} which is exactly the same as in the two previous cases. The invariant scalars for the Kerr spacetime are given by \begin{align} R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} &=-\frac{96M^2\left(\alpha_1+15\alpha_2\cos{2\theta}+6a^4(a^2-10r^2)\cos{4\theta}+a^6\cos{6\theta}\right)}{(a^2+2r^2+a^2\cos{2\theta})^6},\notag\\ \alpha_1 &=(10a^6-180a^4r^2+240a^2r^4-32r^6), \notag\\ \alpha_2 &=(a^4-16a^2r^2+16r^4),\notag\\ R_{\mu\nu}R^{\mu\nu} &=0,\label{6.18a}\\ R &=0,\notag \end{align} from which the trace $<T^{\mu}_{~\mu}>^{(1)}$ in (\ref{6.9}) is obtained. Performing the integration we get \begin{align} \tilde\beta_1^{(\textrm {K})} &=-\frac{1}{4\pi}{\textrm {Im}}\int_{r=r_+}^{\infty}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi}\int_{t=0}^{-i\beta}{<T^{\mu}_{~\mu}>^{(1)}r^2 \sin\theta dr d\theta d\phi dt}\notag\\ &=\frac{1}{180\pi}.
\label{6.18b} \end{align} Therefore the corrected entropy/area law for a Kerr black hole that follows from (\ref{5.28}, \ref{5.29}) is given by, \begin{eqnarray} S_{\textrm {bh}}^{\textrm {(K)}}=S_{\textrm {BH}}+ \frac{1}{90}\log S_{\textrm {BH}}+{\textrm {higher order terms}}, \nonumber\\ =\frac{A}{4}+\frac{1}{90}\log A + {\textrm {higher order terms.}} \label{6.18c} \end{eqnarray} This result is identical to that for the Schwarzschild black hole, which can be explained physically by the following argument. The difference between the Schwarzschild and Kerr spacetimes is due to the spin ($J$). Unlike the charge ($Q$), which contributes to the matter part, the spin arises in the Kerr spacetime because of one extra Killing direction corresponding to the Killing field ($\partial_\phi$). This difference is purely geometrical and has nothing to do with the dynamics of the system; as a result, the structure of the corrected entropy is the same in the two cases. \subsection{Kerr-Newman Black Hole} The general expression for $\tilde\beta_1$ in (\ref{6.8}) involves the total energy of the tunneling particle, given by (\ref{2.13a}). Unlike for the Kerr black hole, in this case the effective energy faced by a particle at a finite distance from the horizon is not the same as that felt at infinity; it is modified because of the presence of the electric charge ($Q$). This was also the case for the Reissner-Nordstrom black hole, where one extra term ($-\frac{Q^2}{r}$) arose in (\ref{6.15}) due to the charge of the black hole. However, for the Kerr-Newman black hole, because of its geometric structure the calculation of the extra contributions due to the charge is technically more involved. Similar conclusions hold for the other conserved quantity $P_{\phi}$. To get the exact form of the total energy of the tunneling particle one needs to calculate both $E$ and $P_{\phi}$ in closed form.
In an earlier work \cite{Cohen} the closed form of $E$ was derived from the Komar integral, but an explicit closed form calculation of $P_{\phi}$ from the Komar integral is still missing. For our analysis we have calculated the Komar integrals up to the leading correction to both $E$ and $P_{\phi}$ ({\it see} Appendix 7.2) \begin{align} E &=M-\frac{Q^2}{r}+ {\cal O}(\frac{1}{r^2})\notag\\ P_{\phi} &=2(J-\frac{2Q^2a}{3r})+ {\cal O}(\frac{1}{r^2}) \label{6.19} \end{align} Putting this (with $r=r_+$) in (\ref{2.13a}) and keeping terms up to ${\cal O}(\frac{1}{r_+})$ we get $\omega=(M^2-Q^2-\frac{J^2}{M^2})^{1/2}$. Therefore at leading order we find \begin{eqnarray} \tilde\beta_1=-\frac{1}{4\pi}{\textrm {Im}}\int{d^4x\sqrt{-g}<T^{\mu}_{~\mu}>^{(1)}}, \label{6.20} \end{eqnarray} which is identical to the previous expressions. For now we shall take (\ref{6.20}) and perform the integral to find $\tilde\beta_1$ for the Kerr-Newman black hole. The invariant scalars for the Kerr-Newman black hole are given by \begin{align} R_{\mu \nu \rho \sigma } R^{\mu \nu \rho \sigma } &= \frac{128 }{(a^2+2 r^2+a^2 \cos{2 \theta })^6} \nonumber\\ &[192 r^4 (Q^2-2M r)^2-96 r^2 (Q^2-3 M r) (Q^2-2 M r)(a^2+2 r^2+a^2 \cos{2 \theta}) \nonumber \\ &+ (7 Q^2-18 M r)(Q^2-6 M r) (a^2+2 r^2+a^2 \cos{2 \theta})^2-3 M^2 (a^2+2 r^2+a^2 \cos{2 \theta})^3],\notag\\ R_{\mu\nu}R^{\mu\nu} &= \frac{64 Q^4}{\left(a^2+2 r^2+a^2 \cos{2 \theta}\right)^4},\label{6.20a}\\ R &=0.\notag \end{align} Simplifying $<T^{\mu}_{~\mu}>^{(1)}$ in (\ref{6.9}) and performing the integration in (\ref{6.20}) one finds \begin{align} \tilde\beta _1^{\textrm {KN}} &= \frac{r_+^2 + r_+ r_- -Q^2}{5760 \pi r_+^4 (r_+ -r_-) (r_+ r_- -Q^2)^{5/2}}\left(\alpha_1 + \frac{r_+\sqrt{r_+r_- -Q^2}}{r_+^2 + r_+r_- - Q^2} (9Q^8-\alpha_2r_+ + \alpha_3r_+^2r_-^2)\right)\label{6.20b}\\ &{\textrm {with,}}\notag\\ \alpha_1 &=9 Q^4 [ r_+^4\tan^{-1}{(\frac{r_+}{\sqrt{-Q^2+r_- r_+}})} + (Q^2-r_-r_+)^2\cot^{-1}{\frac{r_+}{\sqrt{-Q^2+r_- r_+}}}]\notag\\ \alpha_2 &=6Q^6r_+-41Q^4r_+^3+32r_+^4r_-^3+2Q^2r_-(9Q^4+13Q^2r_+^2+32r_+^4)\notag\\ \alpha_3 &= 9Q^4+64Q^2r_+^2+32r_+^4.\notag \end{align} The corrected entropy/area law now follows from (\ref{5.28}) and (\ref{5.29}), \begin{align} S_{\textrm {bh}}^{\textrm {(KN)}}=S_{\textrm {BH}}+ \frac{r_+^2 + r_+ r_- -Q^2}{2880 r_+^4 (r_+ -r_-) (r_+ r_- -Q^2)^{5/2}}\left(\alpha_1 + \frac{r_+\sqrt{r_+r_- -Q^2}}{r_+^2 + r_+r_- - Q^2} (9Q^8-\alpha_2r_+ + \alpha_3r_+^2r_-^2)\right)\log S_{\textrm {BH}}~\nonumber\\ +{\textrm {higher order terms}}.\nonumber\\ =\frac{A}{4}+\frac{r_+^2 + r_+ r_- -Q^2}{2880 r_+^4 (r_+ -r_-) (r_+ r_- -Q^2)^{5/2}}\left(\alpha_1 + \frac{r_+\sqrt{r_+r_- -Q^2}}{r_+^2 + r_+r_- - Q^2} (9Q^8-\alpha_2r_+ + \alpha_3r_+^2r_-^2)\right)\log A\nonumber\\ +{\textrm {higher order terms}}. \label{6.20c} \end{align} For $Q=0$ the above prefactor of the logarithmic term reduces to $\frac{1}{90}$, the coefficient for the Kerr spacetime. \section{Conclusions} Let us now summarise the findings of the present paper. We have given a new and simple approach to derive the ``first law of black hole thermodynamics'' from a thermodynamical perspective, where one does not require the ``first law of black hole mechanics''. The key point of this derivation was the observation that the ``black hole entropy'' is a {\it {state function}}. In the process we obtained some relations involving black hole entities, playing a role analogous to {\it {Maxwell's relations}}, which must hold for any stationary black hole. Based on these relations, we presented a systematic calculation of the semiclassical Bekenstein-Hawking entropy, taking into account all the ``work terms on a black hole''. This approach is applicable to any stationary black hole solution, and the standard semiclassical area law was reproduced. An interesting observation that came out of the calculation was that the work terms did not contribute to the final result of the semiclassical entropy.
To extend our method for calculating the entropy in the presence of quantum corrections, we first computed the corrected Hawking temperature in the tunneling mechanism. Both the tunneling of scalar particles and of fermions were considered, and they gave the same result for the corrected Hawking temperature. However, this result involved a number of arbitrary constants. Demanding that the corrected entropy be a state function, it was possible to find the appropriate form of the corrected Hawking temperature. Using this result we explicitly calculated the entropy with quantum corrections. In the process we again found that the work terms on the black hole did not contribute to the final result of the corrected entropy. This analysis was done for the Kerr-Newman spacetime, and it was trivial to find the results for the other stationary spacetimes, namely (i) Kerr, (ii) Reissner-Nordstrom and (iii) Schwarzschild, by taking appropriate limits. It is important to note that the functional form of the corrected entropy is the same for all the stationary black holes. The logarithmic and inverse area terms, as leading and next-to-leading corrections, were quite generic up to a dimensionless prefactor. It was shown that the coefficient of the logarithmic correction is related to the trace anomaly of the stress tensor, and an explicit calculation of this coefficient was also done. It is a pure number ($\frac{1}{90}$) for both the Schwarzschild and Kerr black holes. The fact that both the Kerr and Schwarzschild black holes have identical corrections was explained on physical grounds (the difference between the metrics being purely geometrical and not dynamical), thereby serving as a nontrivial consistency check on our scheme. It may be noted that the factor ($\frac{1}{90}$) was also obtained (for the Schwarzschild case) in other approaches \cite{Hawkzeta, Fursaev} based on the direct evaluation of path integrals in a scalar background.
For the charged spacetimes (Reissner-Nordstrom and Kerr-Newman) the coefficients were not pure numbers; however, in the $Q=0$ limit they reproduced the expressions for the corresponding chargeless versions. \section{Appendix} \subsection{Glossary of formulae for the Kerr-Newman black hole} The spacetime metric of the Kerr-Newman black hole in Boyer-Lindquist coordinates ($t,~r,~\theta,~\phi$) is given by, \begin{equation} ds^{2} =-\tilde f(r,\theta )dt^{2}+\frac{dr^{2}}{\tilde g(r,\theta )}-2H(r,\theta )dtd\phi +K(r,\theta )d\phi ^{2}+\Sigma (r,\theta )d\theta ^{2} \label{2.1} \end{equation} with the electromagnetic vector potential, $$A_{a} =-\frac{Qr}{\Sigma (r,\theta)}[(dt)_{a}-a\sin ^{2}\theta (d\phi )_{a}]$$ and, \begin{align} \tilde f(r,\theta )& =\frac{\Delta (r)-a^{2}\sin ^{2}\theta }{\Sigma (r,\theta )} \\ \tilde g(r,\theta )& =\frac{\Delta (r)}{\Sigma (r,\theta )}, \notag \\ H(r,\theta )& =\frac{a\sin ^{2}\theta (r^{2}+a^{2}-\Delta (r))}{\Sigma (r,\theta )} \notag \\ K(r,\theta )& =\frac{(r^{2}+a^{2})^{2}-\Delta (r)a^{2}\sin ^{2}\theta }{\Sigma (r,\theta )}\sin ^{2}\theta \notag \\ \Sigma (r,\theta )& =r^{2}+a^{2}\cos ^{2}\theta \notag \\ \Delta (r)& =r^{2}+a^{2}+Q^{2}-2Mr \notag\\ a& =\frac{J}{M} \notag \end{align} The Kerr-Newman metric represents the most general stationary black hole solution of the Einstein-Maxwell equations, characterised by the three parameters mass $(M)$, angular momentum $(J)$ and charge $(Q)$. All other known stationary black hole solutions are encompassed by this three-parameter solution.\\ (i) For $ Q=0$ it gives the rotating Kerr solution, (ii) $ J= 0$ leads to the Reissner-Nordstrom black hole, and (iii) for both $ Q=0$ and $J=0$ the standard Schwarzschild solution is recovered.\\ For the non-extremal Kerr-Newman black hole the locations of the outer ($r_+$, event) and inner ($r_-$) horizons are obtained by setting $g^{rr}=0$, or equivalently $\Delta =0$, which gives \begin{equation} r_{\pm}=M\pm \sqrt{M^{2}-a^{2}-Q^{2}}.
\label{2.2} \end{equation} The angular velocity of the event horizon, which follows from the general expression of angular velocity for any rotating black hole, is given by \begin{eqnarray} \Omega_{\textrm H}= \Big[-\frac{g_{\phi t}}{g_{\phi \phi}}-\sqrt{{(\frac{g_{t\phi}}{g_{\phi\phi}})^2}-\frac{g_{tt}}{g_{\phi \phi}}}\Big]_{r=r_+}= \frac{a}{r^2_+ + a^2}. \label{angv} \end{eqnarray} The electric potential at the event horizon is given by, \begin{eqnarray} \Phi_{\textrm H}= \frac{r_+ Q}{r^2_+ + a^2}. \label{epot} \end{eqnarray} The area of the event horizon is given by, \begin{eqnarray} A= \int_{r_+} {{\sqrt{g_{\theta\theta}g_{\phi\phi}}}d\theta d\phi}= 4\pi (r^2_+ + a^2) \label{area} \end{eqnarray} The semiclassical Hawking temperature in terms of surface gravity ($\kappa$) of the Kerr-Newman black hole is given by \begin{eqnarray} T_{\textrm H}=\frac{\hbar\kappa}{2\pi}=\frac{\hbar}{2\pi}\frac{(r_+ -M)}{(r^2_+ +a^2)}. \label{hawktemp} \end{eqnarray} Using (\ref{2.2}), (\ref{angv}), (\ref{epot}) and (\ref{hawktemp}) one can find the following quantities, \begin{eqnarray} \frac{1}{T_{\textrm H}}=\frac{2\pi}{\hbar}\left(\frac{2M{[M+(M^2-\frac{J^2}{M^2}-Q^2)^{1/2}}]-Q^2}{{(M^2-\frac{J^2}{M^2}-Q^2)^{1/2}}}\right), \label{4.5} \end{eqnarray} \begin{eqnarray} -\frac{\Omega_{\textrm H}}{T_{\textrm H}}=-\frac{2\pi J}{\hbar M}\left(\frac{1}{(M^2-\frac{J^2}{M^2}-Q^2)^{1/2}}\right), \label{4.6} \end{eqnarray} \begin{eqnarray} -\frac{\Phi_{\textrm H}}{T_{\textrm H}}=-\frac{2\pi Q[M+(M^2-\frac{J^2}{M^2}-Q^2)^{1/2}]}{\hbar(M^2-\frac{J^2}{M^2}-Q^2)^{1/2}}. \label{4.7} \end{eqnarray} \subsection{Komar conserved quantities} The Komar integral gives the conserved quantity corresponding to a Killing vector field. 
We take the following definitions for the conserved quantities corresponding to the Killing fields $\partial_t$ and $\partial_{\phi}$ in the Kerr-Newman spacetime, respectively, as {\footnote{ Our normalisation for $P_{\phi}$ is consistent with \cite{Katz}.}} \begin{eqnarray} E=\frac{1}{4\pi}\int_{\partial\Sigma}{d^{2}x\sqrt{\gamma^{(2)}}n_{\mu}\sigma_{\nu}\nabla^{\mu}K^{\nu}} \label{komenergy} \end{eqnarray} and \begin{eqnarray} P_{\phi}=-\frac{1}{4\pi}\int_{\partial\Sigma}{d^{2}x\sqrt{\gamma^{(2)}}n_{\mu}\sigma_{\nu}\nabla^{\mu}R^{\nu}}. \label{komangm} \end{eqnarray} The above two integrals are defined on the boundary ($\partial{\Sigma}$) of a spacelike hypersurface $\Sigma$, and $\gamma_{ij}$ is the induced metric on $\partial{\Sigma}$. Also, $n^{\mu}$ and $\sigma^{\nu}$ are the unit normal vectors associated with $\Sigma$ and $\partial{\Sigma}$ respectively, whereas $K^{\mu}$ and $R^{\nu}$ are the timelike and rotational Killing vectors. For the spherically symmetric spacetimes (Schwarzschild and Reissner-Nordstrom) there is only one Killing vector ($\partial_t$) and correspondingly only one conserved quantity, given by (\ref{komenergy}) as $E_{\textrm {Sch}}= M$ and $E_{\textrm {RN}}= (M-\frac{Q^2}{r})$ respectively. For the Kerr spacetime, in Boyer-Lindquist coordinates ($t,~r,~\theta,~\phi$), $E_{\textrm K}= M$ and $P_{\phi}^{\textrm K}= 2J$. For the Kerr-Newman black hole, in the evaluation of (\ref{komenergy}) and (\ref{komangm}), there are extra contributions due to the charge ($Q$) \cite{Cohen}. A closed form expression for ${P_{\phi}^{\textrm {KN}}}$ is not available. Calculating up to the leading ${\cal O}(\frac{1}{r})$ we obtain $E_{\textrm {KN}}= (M-\frac{Q^2}{r})$ and $P_{\phi}^{\textrm {KN}}= 2(J-\frac{2Q^2a}{3r})$.\\\\ {\it {\bf {Acknowledgements:}}} The authors wish to thank Debraj Roy for many useful discussions and technical help. One of the authors (S.K.M) thanks the Council of Scientific and Industrial Research (C.S.I.R), Government of India, for financial support.
\section{Introduction} \label{intro} Buildings are responsible for approximately 32\% of global energy use and 19\% of CO$_2$ emissions, and their longevity, together with the potential to reduce these values, has made the building sector a significant priority for emissions reduction \citep{IPCC2018}. In this context, the ability to manage buildings most efficiently is critical. Significant research and best practice development have formed a body of commissioning practices implemented throughout the building life-cycle that can reduce energy consumption by 15-30\% \citep{rothetal2008}. Recent developments in machine learning and cloud computing provide new opportunities to implement Smart and Ongoing Commissioning (SOCx); preliminary field studies have demonstrated that this approach can provide savings as high as 70\% \citep{stocketal2021}. However, there is limited uptake due to the lack of a supporting computational architecture to integrate machine learning and artificial intelligence with building systems. This paper seeks to address this research gap, building upon the state-of-the-art to present the first reference architecture for autonomic smart buildings to enable SOCx throughout the building life-cycle. The Building Systems Management Autonomic Reference Template (B-SMART) can be used to dramatically accelerate the design of information systems, reduce the building energy footprint, and facilitate the application of Artificial Intelligence (AI) in smart buildings. \subsection{Autonomic Systems} \label{auto-systems} In the key technical white-paper outlining their vision of autonomic computing \citep{ibm2005architectural}, IBM introduced the following definition of autonomic computing: "\textit{A computing environment with the ability to manage itself and dynamically adapt to change in accordance with business policies and objectives.
Self-managing environments can perform such activities based on situations they observe or sense in the IT environment rather than requiring IT professionals to initiate the task. These environments are self-configuring, self-healing, self-optimizing, and self-protecting.}". This initial definition was fairly narrow, and since then significant research has been done to expand the set of autonomic properties that need to be supported (discussed further in Sub-section ~\ref{backgr-early}). Also, autonomic computing environments and systems do not exist in a void. They must interact with other actors in their environment, and so in this work we define an autonomic cybernetic system or computing environment as follows: "\textit{An autonomic cybernetic system is a computing environment that is capable of recognizing and responding to change in its internal operating characteristics or external operating environment with minimal human intervention, and capable of interfacing with other humans or autonomic systems in accordance with business objectives, rules or policies.}" \subsection{Smart Buildings and SOCx} \label{intro-smt-bld-socx} The term \textit{Smart Buildings} was initially defined as those buildings whose "design and construction require the integration of complex new technologies into the fabric of the building" \citep{dreweretal1994}. In the intervening decades, several new developments have expanded the definition of Smart Buildings, but at its core remains this need to integrate a diversity of new technologies, including those yet to be developed. This anticipation of future learning has led to significant applications of artificial intelligence and machine learning within this domain. Driven by the urgency of climate change, energy management has been a particular focus for this research, with sub-focuses on monitoring, fault detection and diagnosis, and scheduling problems \citep{aguilar2021}.
Advances in Building Information Modeling (BIM) have been developed to integrate the information required to support such Smart Building applications throughout the building life-cycle \citep{panteli2020}. Complementing this, significant research has developed solutions for the streaming of Building Automation System (BAS) and other sensor network data \citep{misicetal2020}. However, legacy BAS systems pose a significant challenge to this integration. Further complicating this challenge is the ever-evolving diversity of Internet of Things (IoT) devices and applications enabling Smart Building operations \citep{jia2019, sharmaetal2018}. Within this context of complex, heterogeneous systems integrated into Smart Buildings, the ongoing optimization of building performance is a significant challenge. SOCx is the integration of traditional commissioning processes with online monitoring and data analysis drawn from IoT devices and traditional building systems such as BAS to maintain optimum building performance \citep{gilani2020, noyeetal2016, minolietal2017}. These SOCx systems will benefit building operators and facility managers in three key ways: (1) a reduction in nuisance alarms and their automatic correction; (2) new insights into fault detection, including their resolution when human intervention is not required; and (3) improved energy performance of the facility. Recognizing the diversity of data required to support these practices, several SOCx ontologies have been developed, as have data streaming approaches to collect these data \citep{misicetal2020} and algorithms to support fault detection and energy optimization \citep{MARIANOHERNANDEZ2021101692}. Despite this innovation, however, there remains a paucity of literature regarding autonomic smart building approaches.
Few autonomic architectures for smart buildings exist, and those currently representing the state-of-the-art fail to fully address the needs of SOCx processes as they lack key autonomic system properties. In addition, there remains a heavy reliance on manual data collection, transformation, and analysis to populate them, which are slow and expensive processes. This is a significant factor slowing smart building adoption. Ideally, a smart building, like advanced Industrial IoT systems \citep{koziolek2018}, should be able to commission itself, re-commission itself if the situation calls for it, and maintain optimal systems performance on an ongoing basis. The future of smart buildings is autonomic. To overcome the barriers to implementation discussed above, we propose the first reference architecture developed specifically to address the needs of smart buildings. Our architecture supports all the key properties of autonomic systems and is tailored specifically to support SOCx implementation in Smart Buildings. Through this contribution, we seek not only to enable their implementation, but also to provide a guide for continuing research and development activities in this very important area. This paper is laid out as follows. First, we present a review of existing autonomic architectures for smart buildings, contextualized within the broader development of autonomic computing. Next, we outline our methodology, which draws from this literature review to develop the requirements for reference architecture development. In Sections ~\ref{comp-arch}, ~\ref{layering}, and ~\ref{ctrl-loop}, we present B-SMART – the first reference architecture for autonomic smart buildings. In Section ~\ref{basintegration} we discuss how legacy BAS can be incorporated into B-SMART. In Section ~\ref{supporting-socx} we discuss how B-SMART supports the SOCx processes.
In Section ~\ref{case-study} we present an example that illustrates how B-SMART can be applied to an existing smart building to help plan the development of additional smart building features. Finally, we discuss the applications and implications for this reference architecture and conclude with a summary of our findings, the limitations of this work, and recommendations for future research. \section{Background} \label{background} Recent advances in AI and Machine Learning (ML) have triggered evolutionary changes across many industries, collectively known as Industry 4.0. Examples include self-driving cars, autonomous drones, robotic production lines, AI-assisted medical diagnosis, social network algorithms, facial-recognition features in smart phones and cameras, and many others. Such modern autonomic systems pervasively apply ML algorithms to identify objects that collectively comprise their environment and classify them into useful categories. Once detected and classified, the autonomic system needs to decide what actions need to be performed on, or in response to, those objects. The actions to be performed could be either learned using ML algorithms or selected using a rules-based system. Autonomous systems typically need to co-exist and interact with humans. This interaction can be either \textit{human-in-the-loop} or \textit{human-on-the-loop}. Human-in-the-loop systems typically identify and classify the objects in their environment, present a human operator with a proposed set of actions, and wait for the human to confirm which action should be taken. Human-on-the-loop systems provide the human operator with the same level of information but do not wait for the human operator to confirm; instead, the action will be performed automatically. The human operator will be able to observe and, if necessary, intervene in the autonomic cycle.
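The difference between the two interaction modes can be sketched in a few lines of Python. This is a minimal illustration, not part of any cited architecture; the action descriptions and operator callbacks are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ProposedAction:
    """An action the autonomic system proposes in response to a detected condition."""
    description: str


def human_in_the_loop(actions: List[ProposedAction],
                      confirm: Callable[[ProposedAction], bool]) -> List[str]:
    """Execute only the actions the human operator explicitly confirms."""
    return [a.description for a in actions if confirm(a)]


def human_on_the_loop(actions: List[ProposedAction],
                      veto: Callable[[ProposedAction], bool]) -> List[str]:
    """Execute every action automatically; the operator can only veto."""
    return [a.description for a in actions if not veto(a)]
```

The contrast is in the default: in the first mode nothing happens until the operator confirms, while in the second everything happens unless the operator intervenes.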
\subsection{Early Autonomic Computing Architectures} \label{backgr-early} The field of autonomic computing was introduced by IBM in 2001 with the goal of creating computer systems capable of self-management. Since then, there has been considerable research focusing on developing autonomic systems. Originally, IBM defined autonomic systems to exhibit the following characteristics: \begin{enumerate} \item \textbf{\textit{Self-configuration}}. The ability to configure its own components without human intervention. \item \textbf{\textit{Self-healing}}. The ability to recognize and correct faults without human intervention. \item \textbf{\textit{Self-optimization}}. The ability to monitor and optimize its own performance. \item \textbf{\textit{Self-protection}}. The ability of the system to detect intrusion and defend itself from its unwanted effects. \end{enumerate} In 2005, IBM published its reference architecture for autonomic systems \citep{ibm2005architectural}. This reference architecture, referred to as MAPE-K, is shown in Figure ~\ref{mape-k}. The acronym MAPE-K stands for Monitor (M), Analyze (A), Plan (P), Execute (E), and Knowledge (K). \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/mapek.jpg} \end{center} \caption{\label{mape-k} The IBM MAPE-K reference architecture for autonomic managers \citep{ibm2005architectural}.} \end{figure} The IBM MAPE-K reference architecture remains the most widely used autonomic reference architecture today. For example, it is referenced in all works cited in Table ~\ref{review-table} focusing on big data, networking, and cloud computing; note that the cited works focusing on the smart building space are not really autonomic architectures. Key to the definition of autonomic systems is the notion of supporting certain autonomic system properties.
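A single iteration of the MAPE-K loop can be sketched as follows. This is a hedged illustration of the concept rather than IBM's specification: the sensor names, limits, and remedies are invented for the example, and a real autonomic manager would run the cycle continuously against live data.

```python
def monitor(sensors):
    """M: collect raw readings from the managed resource."""
    return dict(sensors)


def analyze(readings, knowledge):
    """A: compare readings against the knowledge base to find symptoms."""
    return [name for name, value in readings.items()
            if value > knowledge["limits"].get(name, float("inf"))]


def plan(symptoms, knowledge):
    """P: map each symptom to a corrective action from the knowledge base."""
    return [knowledge["remedies"][s] for s in symptoms if s in knowledge["remedies"]]


def execute(actions, effectors):
    """E: apply the planned actions through the managed resource's effectors."""
    for action in actions:
        effectors.append(action)
    return effectors


def mape_k_iteration(sensors, knowledge, effectors):
    """One pass through Monitor, Analyze, Plan, Execute, sharing Knowledge (K)."""
    readings = monitor(sensors)
    symptoms = analyze(readings, knowledge)
    actions = plan(symptoms, knowledge)
    return execute(actions, effectors)
```

Note that the shared knowledge base is consulted by both the Analyze and Plan steps, which is the defining feature of MAPE-K relative to a plain sense-act loop.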
MAPE-K is a conceptual architecture that does not provide guidance on how to effectively leverage the algorithms, techniques and technologies from the rapidly evolving AI domain when designing smart buildings. Smart building developers would need to acquire deep AI skills and define a more domain-relevant architecture, complete with a technology mapping. This is expensive and time-consuming. To address this issue, our autonomic reference architecture for smart buildings uses the MAPE-K reference architecture as a starting point and enhances it with domain-specific insight. Other researchers \citep{nami2007, proslad2009} have expanded these desirable characteristics to add: \begin{enumerate} \item \textbf{\textit{Self-regulation}}. The ability to maintain a key parameter, such as the quality of service. \item \textbf{\textit{Self-learning}}. The ability to learn about its environment without external intervention. \item \textbf{\textit{Self-awareness}}. The ability to be aware of its constituent components and key external dependencies. \item \textbf{\textit{Self-organization}}. The ability of the system to organize its own structure. \item \textbf{\textit{Self-creation/Self-assembly/Self-replication}}. The ability to organize itself in response to changing strategic goals. \item \textbf{\textit{Self-Management/Self-Governance}}. The ability to manage all its constituent components. \item \textbf{\textit{Self-description/Self-representation}}. Humans should be able to understand an autonomous system. \end{enumerate} The MAPE-K architecture does not explicitly explain how these autonomic properties should be implemented. Most autonomic architectures listed in Table ~\ref{review-table} focus on implementing the self-optimization property. The other properties are not nearly as widely discussed in currently available scientific literature.
\subsection{Autonomic Architecture for Smart Buildings} \label{backgr-smt} There have been very few published studies focusing on how to make smart buildings autonomic. None of them describe an autonomic reference architecture. \cite{chevallier2020} present a reference architecture for a smart building digital twin, which is used to model physical building performance. Their reference architecture focuses on how to collect, store, and expose the static and dynamic information produced by smart buildings for query by human operators. Their reference architecture does not specifically address the autonomic aspect of smart building operation, nor does it discuss the implementation of autonomic properties, encouraging instead human interaction with the digital twin. \cite{bashir2020} describe a conceptual framework called Integrated Big Data Management and Analytics (IBDMA), a reference architecture, and a meta-model for smart buildings. The meta-model describes the different conceptual entities and processes that comprise the smart building, and how they interact with each other. The reference architecture describes how the big data stack technologies can be used to implement these processes, but it does not discuss the autonomic properties, nor how these can be implemented in smart buildings. \cite{mazzara2019} also propose a reference architecture for smart and software-defined buildings. Their reference architecture does not specifically focus on the autonomic aspects of managing smart buildings either. It does include a layered view of smart building technologies. The key layers include: Hardware; Network; Management; and Application and Service. The study also discusses automating some of the smart building features but does not describe or discuss the implementation of autonomic properties by their architecture layers.
Their architecture targets the more conventional smart building that relies on human interaction rather than autonomic operation and is supplemented by a rules engine to manage and optimize building operation. \cite{aguilar2019} propose a self-managing architecture for multi-HVAC systems in buildings based on an “Autonomous cycle of Data Analysis Tasks” (ACODAT) concept. While their work is a significant step towards defining an autonomic architecture for smart buildings, it has several shortcomings. Their architecture does not explain layering or the separation of responsibilities among the different technologies. It also does not support several key autonomic system properties, such as self-organization and self-creation. Further, this study focuses on the optimization aspect of a smart building already in operation, but does not engage with questions of new implementations, where there will be no historical data to work with, and thus there must be a lengthy period when it will not be possible to optimize the building's power consumption.
\FloatBarrier \begin{table}[h] \caption{\label{review-table} Review of Autonomic Architectures Applicable to Smart Buildings.} \begin{center} \begin{tabularx}{\textwidth}{X|X|X} \hline Architecture&Domain&Supported Autonomic Properties\\ \hline ACODAT \citep{aguilar2019, aguilar2021}&Smart Buildings&Self-optimization\\ \cite{chevallier2020}&Smart Buildings&None\\ SSDB \citep{mazzara2019}&Smart Buildings&None\\ IBDMA \citep{bashir2020}&Smart Buildings&None\\ \cite{qin2014}&Smart Grid/Cloud&Self-optimization\\ PSP \citep{elnaffar2009}&Database&Self-optimization \newline Self-configuration\\ KERMIT \citep{genkin2019, genkin2020}&Big Data&Self-optimization \newline Self-configuration\\ ANA \citep{bouabene2009}&Networking&Self-configuration\\ CAN \citep{elsawy2015}&Networking&Self-configuration \newline Self-optimization\\ VANET \citep{tomar2018}&Cloud Computing&Self-optimization\\ \cite{gergin2014}&Cloud Computing&Self-optimization \newline Self-healing\\ \hline \end{tabularx} \end{center} \end{table} \FloatBarrier In summary, most prior works focus on implementing the self-configuration and self-optimization autonomic properties. The MAPE-K autonomic reference architecture, and especially the autonomic control loop described in it, is almost exclusively used. All the reviewed architectures had a notion of technology layering. Autonomic architectures can be either centralized or distributed. Many autonomic architectures leverage ML algorithms and AI techniques that are often grouped into a separate cognitive layer which, among other things, is responsible for detecting change and updating the models. These observations have been reflected in the design of the B-SMART reference architecture, as discussed in the following sections. \section{Methodology} \label{methodology} A reference architecture is typically created as a digest of existing architectures.
It encapsulates the best practices learned by implementing architectures in the field and serves as a template for new architectures. The objective of the reference architecture is to facilitate creation of new architectures, reduce costs associated with architectural design, and promote interoperability and standardization. The design process involved the following steps: \begin{enumerate} \item Literature review focusing specifically on autonomic architectures for smart buildings. \item Literature review focusing on autonomic architectures developed in related computer science fields: 1 – big data analytics; 2 – cloud computing; 3 – networking. \item A computational architecture concept based on MAPE-K was used as the starting point. \item Conceptual layers were defined to ensure they were fully decoupled to avoid circular dependencies. Each layer in the architecture was defined to support one or more of the key autonomic system properties. \item A mapping of AI sub-domains to each layer was developed. \item A mapping of currently available technologies that could be used to implement each layer was provided to serve as an example of how to apply the reference architecture. \item A control loop was added to support autonomic functionality. \end{enumerate} The B-SMART reference architecture was organized into three parts, or architectural views: \begin{enumerate} \item \textbf{The B-SMART High-Level Component Diagram}: This view describes the high-level logical components that comprise the reference architecture. \item \textbf{The Functionality and Technology Layering}: This explains the static relationships between conceptual and technological layers and sub-systems, and the responsibilities of each layer. \item \textbf{The B-SMART Autonomic Control Loop}: This explains the dynamic relationships among the B-SMART technology layers and the relationship between the control loop and the SOCx processes.
\end{enumerate} Sections ~\ref{comp-arch}, ~\ref{layering}, and ~\ref{ctrl-loop}, discuss these in greater depth. \section{The B-SMART High-Level Component Architecture of Autonomic Smart Buildings} \label{comp-arch} Figure ~\ref{compdiag} shows the high-level component architecture for autonomic smart buildings. This diagram also shows the main categories of information that must be stored and accessed by the components, and the primary relationships between the components and the data. The arrows indicate architectural relationships among the high-level components rather than the flow of data. Corresponding functionality layers (discussed in sections below) from the B-SMART layered architecture view are shown in bold italics. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/compdiag} \end{center} \caption{\label{compdiag} The B-SMART high-level component diagram.} \end{figure} \subsection{Building Knowledge Repository (BKR)} \label{bkr} As shown in Figure ~\ref{compdiag}, Knowledge is the central and essential component of the IBM MAPE-K autonomic manager reference architecture. The B-SMART reference architecture builds on this concept and defines the essential types of information and the knowledge repository organization required by the smart building autonomic manager. The B-SMART reference architecture does not mandate any specific implementation technologies, physical deployment form factor (such as an on-premises appliance, for example), or physical topology (distributed vs. centralized) for the BKR. The BKR must be able to store the following categories of information: \begin{enumerate} \item Descriptions and classifications of all key building systems capable of generating data. \item Manufacturers’ operating characteristics for sensors and actuators, and for the building systems. \item Initial statistical baselines for all the smart building systems. \item Historical information characterizing the performance of the smart building systems.
\item Real-time information characterizing current performance of the smart building’s systems. \end{enumerate} The rationale for requiring these information types is discussed in depth in sections focusing on Functionality and Technology Layering and the Autonomic Control Loop. The BKR should be subdivided into two zones which manage the building operating data (see Figure ~\ref{compdiag}): \begin{enumerate} \item \textbf{Real Time Data Zone:} This zone contains data describing the current state of the smart building, and a small subset of data describing the building performance in the very recent (e.g. last 24 hours) past. These data are relatively volatile multi-variate time series that can include raw data, results of real-time data transformation, results of on-the-fly data aggregation, and near-real-time analytics. Performance characteristics of this zone must support high rates of inserts and updates as well as near-real-time operations on small volumes of data. The B-SMART reference architecture does not mandate any specific time segment duration that needs to be stored in the Real Time Zone, leaving this for individual smart building implementations to determine. We recommend, however, that the Real Time Zone maintain at least the data collected over the previous 24-hour period, in addition to data describing the current state of the building. This is because the day-night cycle results in periodicity within smart building sensor data that needs to be accounted for as part of real-time analytics. The degree of availability and fault tolerance which the Real Time Zone supports should be determined by individual smart building implementations based on each client’s ability to absorb the extra costs associated with sub-optimal energy performance versus the cost of supporting these qualities of service.
\item \textbf{Historical Data Zone:} This zone contains data describing the technical specifications and the historical performance characteristics of the building. Data could be in the form of multi-variate time series containing raw or transformed data, relations, analytic reports summarizing the building performance, and relations containing building equipment specifications. The Historical Data Zone may also contain persistent storage for rules used by the smart building AI. Performance characteristics of this zone must support fast analytics on large volumes of data (relative to the Real Time Zone) and batch updates. The B-SMART reference architecture does not mandate any specific duration for the time segment of data that must be stored in the Historical Data Zone, but we recommend that at least one year of historical data be retained to enable predictive analytics that need to account for seasonal trends in the multivariate time series data. The Historical Data Zone must implement high availability and disaster recovery protocols because loss of these data will require re-commissioning of the smart building (discussed in Section ~\ref{supporting-socx}). \end{enumerate} B-SMART components, and the manner in which they leverage the BKR, are discussed in the following sections. \subsection{Streaming Platform} \label{streamingplatform} Messages containing sensor data generated by the smart building are transmitted to the Streaming Platform component via the smart building network infrastructure (the Fabric layer, described in more detail in Section ~\ref{layering}). The primary function of the Streaming Platform is to ensure that messages generated by the smart building sensors and actuators are not lost, and the temporal order of their arrival is preserved. It must allow for secure access to sensor data by the Streaming Engine component discussed in the next section.
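The two obligations just described, losing no messages and preserving their temporal order, can be illustrated with an in-memory stand-in for the Streaming Platform. The topic names and message shape below are assumptions made for the example; a production deployment would use a dedicated message broker.

```python
from collections import defaultdict, deque


class InMemoryStreamingPlatform:
    """Toy broker: one FIFO queue per topic, so arrival order is preserved
    and nothing is discarded until a consumer reads it."""

    def __init__(self):
        self._topics = defaultdict(deque)

    def publish(self, topic, message):
        """Append a message to its topic; messages are never dropped."""
        self._topics[topic].append(message)

    def consume(self, topic, max_messages=None):
        """Remove and return messages from a topic in arrival (FIFO) order."""
        queue = self._topics[topic]
        n = len(queue) if max_messages is None else min(max_messages, len(queue))
        return [queue.popleft() for _ in range(n)]
```

Because each topic is a FIFO queue, consumers always observe messages in arrival order, which is the property the downstream Streaming Engine relies on when reconstructing time series.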
The Streaming Platform should support multiple queues or topics to allow separation of messages emitted by different sensors. For the B-SMART architecture, we recommend that the Streaming Platform support once-and-only-once guaranteed message delivery. This recommendation stems from the fact that smart building sensors send only change-of-value data. The Real Time Analytics component described in Sub-section ~\ref{rt-analytics} will need to impute data in order to construct a complete view of the smart building state. Repeat sensor messages can trigger unnecessary data imputation and optimization, degrading performance. B-SMART does not mandate the use of any specific streaming implementation technology for this component. Currently popular technological choices would be Apache Kafka \citep{kafka2022} or Apache ActiveMQ \citep{acitveMQ2022}. \subsection{Streaming Engine} \label{streamingengine} Data stored in the Streaming Platform are read in near-real-time by the Streaming Engine component. The B-SMART reference architecture does not require true real-time processing semantics and leaves the establishment of this requirement to the discretion of the implementing team for each smart building. A message stored in the Streaming Platform component contains data generated by one of the IoT devices (sensors and actuators) that instrument the smart building. Most sensors and actuators generate new messages only when they sense a change in the condition that they are monitoring – for example a change of value in room temperature above a defined threshold. The role of the streaming engine component is to read the disparate data messages stored in the Streaming Platform component and transform them into an integrated and coherent stream of multi-variate time-series data that can be consumed and operated on by the Real Time Analytics component described in the section below.
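A minimal sketch of this transformation is given below, assuming hypothetical sensor names and a fixed reporting clock. It aligns change-of-value messages onto common time steps with forward-fill imputation; a real implementation would run inside a streaming engine rather than over in-memory lists.

```python
def to_multivariate_series(messages, sensor_names, ticks):
    """Align change-of-value sensor messages onto a common clock.

    messages: iterable of (timestamp, sensor_name, value), already time-ordered.
    Returns one row per tick holding the last known value of every sensor
    (forward-fill imputation); a sensor reads None until its first message.
    """
    state = {name: None for name in sensor_names}
    rows, i, msgs = [], 0, list(messages)
    for t in ticks:
        # Fold in every message that has arrived up to and including tick t.
        while i < len(msgs) and msgs[i][0] <= t:
            _, name, value = msgs[i]
            state[name] = value
            i += 1
        rows.append({"t": t, **state})
    return rows
```

Each returned row is one observation of the full building state, which is exactly the shape the downstream analytics components expect.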
The Streaming Engine thus: \begin{enumerate} \item Reads the messages from the topics or queues in the Streaming Platform \item Transforms message data into formats that can be aggregated to form a coherent multi-variate time series \item Imputes data that are missing at any given time (messages from various sensors arrive at disparate times) \item Aggregates sensor data to form a coherent multi-variate time-series that describes the state of the smart building at any given time \textit{t} \item Persists the integrated multi-variate time-series data in the Real Time Data Zone of the BKR at a pre-configured interval. \end{enumerate} The B-SMART reference architecture also does not mandate any specific streaming technology to implement the Streaming Engine component. Some examples of currently popular technologies include Apache Spark \citep{spark2022} and Apache Storm \citep{storm2022}. \subsection{Real Time Analytics (RT Analytics)} \label{rt-analytics} The RT Analytics component can run either in-process with the Streaming Engine or in parallel. It consumes the raw event data from the Streaming Engine, aggregates and normalizes these data, and applies the change detection and real time optimization algorithms. This component persists the aggregated and normalized data into the BKR real time data zone. It also reads building equipment operating specifications and historical building performance data from the historical data zone of the BKR to enable change detection and on-line optimization algorithms. These algorithms are discussed in more depth in Sub-section ~\ref{cdo} in Section ~\ref{layering}. The RT Analytics component invokes the Building AI component when it detects faults. The Building AI component is implemented by the Interfacing layer (discussed in more detail in Section ~\ref{layering}) code and reads and executes the operating rules stored in the historical data zone of the BKR to guide interactions with external actors (e.g.
other smart buildings, facility maintenance personnel, other smart city elements such as smart grids, etc.). \subsection{Batch Analytics} \label{batch-analytics} The architecture of the autonomic smart building does not need to be strictly real time. The Batch Analytics component runs asynchronously to perform Extract, Transform and Load (ETL) operations on the data stored in the BKR. Data stored in the real time zone must be periodically extracted, processed, and moved to the historical zone of the BKR to ensure that the real-time zone performance does not degrade with increasing data volume. The B-SMART reference architecture does not mandate how much data, either by volume or time horizon, must be stored in the respective zones of the BKR, and leaves this up to the specific implementation to determine. The Batch Analytics component also executes the machine learning pipelines that are required to support the operations of the RT Analytics component. These pipelines can involve automated training and evaluation of supervised machine learning algorithms, execution of clustering and anomaly detection algorithms, automated statistical report generation, and simulations designed to support reinforcement learning approaches. The B-SMART reference architecture does not mandate any specific implementation technology for the Batch Analytics component. Popular currently available choices include Apache Spark \citep{spark2022} and Apache Hadoop \citep{hadoop2022}, among others. \subsection{Building AI} \label{building-ai} The Building AI component is a specialized AI trained to interact with external human and non-human actors to handle situations that cannot be resolved autonomically. The Building AI component is activated either explicitly by external actors, such as building maintenance personnel and tenants, or by the RT Analytics component when it detects faults.
The Building AI component runs as a separate process from the Streaming Engine and RT Analytics components to allow building monitoring and optimization activities to continue while interaction with external actors takes place. The architecture of this component is discussed below, as part of the discussion of the Interfacing layer. \section{Functionality and Technology Layering} \label{layering} In computer science and software engineering, the layered architectural pattern is used to establish the conceptual cascading dependency relationship between the sub-systems and components that comprise the architecture. The B-SMART layered architecture pattern is shown in Figure ~\ref{layers}. As with a building, lower layers of the architecture form the foundation. Each successive layer in the architecture builds its functionality using functionality provided by the layer below. Functionality provided by each layer is discussed in the paragraphs below. Table ~\ref{layered-table} summarizes the autonomic properties that must be implemented by each layer of the architecture, and its key responsibilities.
\begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/layers} \end{center} \caption{\label{layers} The B-SMART layered architecture view.} \end{figure} \begin{xltabular}{\linewidth}{XXX} \caption{The B-SMART Architectural Layers and Supported Autonomic Properties.} \label{layered-table} \\ \hline \multicolumn{1}{c}{Layer} & \multicolumn{1}{c}{Key Autonomic Properties} & \multicolumn{1}{c}{Responsibilities} \\ \hline \endfirsthead \multicolumn{3}{c}% {\tablename\ \thetable{} -- continued from previous page} \\ \hline \multicolumn{1}{c}{Layer} & \multicolumn{1}{c}{Key Autonomic Properties} & \multicolumn{1}{c}{Responsibilities} \\ \hline \endhead \hline \multicolumn{3}{r}{{Continued on next page}} \\ \hline \endfoot \hline \endlastfoot Building Equipment&None&Must be capable of being instrumented with sensors and actuators.\\ Sensors and Actuators&Self-configuration \newline Self-description&Enable automation for the building equipment layer.\\ Fabric&Self-assembly&Ensures connectivity among the sensors and actuators and the cognitive sub-system components. Sensors, actuators, and fabric networking elements form a self-organizing system.\\ Discovery and Classification&Self-learning&Searches the Fabric layer to discover and classify sources of data.\\ Integration&Self-organization&Organizes data sources discovered by the D\&C layer into more complex data flows. Incorporates legacy BIM/BAS.\\ Change Detection and Optimization&Self-optimization \newline Self-protection \newline Self-awareness \newline Self-regulation&Detects faults and concept drift in building systems.
Mitigates faults and optimizes building performance in response to changes.\\ Interfacing&Self-description \newline Self-regulation&Interacts with human operators and systems external to the building.\\ \hline \end{xltabular} The layered view of B-SMART (Figure ~\ref{layers}) describes the static relationships between two autonomic smart building sub-systems – the Foundation Sub-system and the Cognitive Sub-system. In the following sections we provide more detailed descriptions of these sub-systems and their individual architectural layers. In the foreseeable future, most smart buildings will be traditional buildings which have been converted to be smart. It is likely that in many cases this conversion will happen in stages, with each stage adding more and more ‘smart’ functionality to the building. Architectural layering of AI-related functionality encourages this process. The Building Equipment (BE), the Sensors and Actuators (S\&A), and the Fabric layers form the Foundation Sub-System of the B-SMART reference architecture. The overall objective for this sub-system is to enable the operation of the Cognitive Sub-system – consisting of the Discovery and Classification (D\&C), Integration, Change Detection and Optimization (CDO), and Interfacing layers discussed in detail in Section ~\ref{cognitive} – that implements the AI aspects of the autonomic smart building. Table ~\ref{layered-tech} summarizes the B-SMART architectural layers, the relevant sub-fields of AI research, and example technologies available today to help implement each layer. The B-SMART reference architecture does not promote any specific implementation technologies but encourages functional and technological layering for smart building implementations. 
\begin{xltabular}{\linewidth}{XXX} \caption{Mapping of AI research fields and the existing technology stack to the B-SMART Architectural Layers.} \label{layered-tech} \\ \hline \multicolumn{1}{c}{Layer} & \multicolumn{1}{c}{Relevant AI sub-fields} & \multicolumn{1}{c}{Existing Technologies} \\ \hline \endfirsthead \multicolumn{3}{c}% {\tablename\ \thetable{} -- continued from previous page} \\ \hline \multicolumn{1}{c}{Layer} & \multicolumn{1}{c}{Relevant AI sub-fields} & \multicolumn{1}{c}{Existing Technologies} \\ \hline \endhead \hline \multicolumn{3}{r}{{Continued on next page}} \\ \hline \endfoot \hline \endlastfoot Building Equipment&N/A&N/A\\ Sensors and Actuators&N/A&Many vendors – e.g. Schneider Electric, ABB, Siemens, Honeywell, GE, others.\\ Fabric&Self-organizing networks&BACnet \citep{bacnet2022}, Modbus \citep{modbus2022}\\ Discovery and Classification&Unsupervised machine learning, supervised machine learning, deep learning&TensorFlow \citep{tensorflow2022}, PyTorch \citep{pytorch2022}, Apache Spark \citep{spark2022}, scikit-learn \citep{scikit-learn}.\\ Integration&AI-assisted data mapping&There are gaps in this field. Some currently available tools include: Talend \citep{talend2022}, and CloverDX \citep{cloverdx2022}. These require human interaction.\\ Change Detection and Optimization&Reinforcement learning, supervised machine learning, evolutionary computing&There are gaps in this field. Currently change detection algorithms need to be hand coded.\\ Interfacing&NLP, rules engines, rule induction&NLTK \citep{nltk2022}, TextBlob \citep{textblob2022}, Gensim \citep{gensim2022}, spaCy \citep{spacy2022}, polyglot \citep{polyglot2022}, scikit-learn \citep{scikit-learn}, others. 
Drools \citep{drools2022}, Easy Rules \citep{easyrules2022}, others.\\ \hline \end{xltabular} \subsection{The Foundation Sub-system} \label{found-sub-system} \subsubsection{Building Equipment (BE)} \label{be} The BE layer groups all the key energy-related systems of the building and their constituent equipment, for example a cooling system consisting of a chiller, pumps, and terminal units or a responsive shading system to minimize heat gains. This layer also includes additional energy-producing and storage systems such as electrical panels, transformers, shading systems and capacitors. These systems do not necessarily need to be ‘smart’, but do need to be capable of being instrumented with sensors and actuators to gather data. Most smart buildings will be conversions of traditional buildings that will, to differing extents, include existing sensors and/or sensor networks such as thermostats, building automation systems (BAS), etc. It is important to note that while the BE elements may be designed with integrated sensors and actuators, this is not mandated by the B-SMART reference architecture. Instead, B-SMART allows for monitoring and control technologies that are part of the S\&A layer to evolve separately because the computer and electronic hardware used to implement the sensors and actuators have historically evolved at a much faster rate than BE elements. Because the BE layer is part of the Foundation sub-system of B-SMART, it should not be imbued with AI capabilities because local equipment optimization may miss the overall building optimization point. Instead, a whole-building approach is prescribed by B-SMART, reserving optimization to the higher levels where information is available from all relevant building systems, sub-systems, and interfaces. \subsubsection{Sensors and Actuators (S\&A)} \label{sanda} The S\&A layer communicates the inputs and outputs between the building elements and the B-SMART cognitive sub-system, transmitted via the fabric layer. 
Within the S\&A layer, the sensors serve as information sources for the higher layers in the reference architecture while the actuators enable actions to be performed on BE elements. While some actuators are physical, such as motorized valve actuators, others may be virtual, for example the signal to change the supply water temperature set point for the building’s boiler or chiller. Note that while a sensor or actuator may be integral to a BE element, the B-SMART architecture treats it as a different conceptual entity to allow it to be separately configured and upgraded. This maximizes the system flexibility, allowing S\&A and BE technologies to evolve at different rates. S\&A devices could then be upgraded, if necessary, multiple times during the building life cycle. Table ~\ref{layered-tech} lists the currently available technologies that can be used to implement this layer. \subsubsection{Fabric} \label{fabric} The Fabric layer connects the sensors and actuators to each other and to the functionality in the higher Discovery and Classification (D\&C) layer. The Fabric layer is predominantly a hardware layer, with software-defined aspects, that consists of specialized controllers and networking elements such as WiFi routers, network switches, network wires, telephone cables, the required power supplies, and data storage needed to establish secure and reliable communications and support the functionality of the D\&C layer. The currently used BACnet \citep{bacnet2022} or Modbus \citep{modbus2022} protocols conceptually fit here as well. To support the cognitive function, however, these legacy systems are insufficient. The Fabric layer should ultimately be an autonomic networking layer – capable of supporting the self-creation, self-description, self-configuration, and self-management autonomic properties. Architectures summarized in \cite{movahedi2012} provide examples of how this can be accomplished. 
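To make the self-assembly idea concrete, the following sketch shows how a fabric-attached device might announce a self-description when it joins the network, and how a simple registry could collect those announcements for the D\&C layer to consume. The message fields, identifiers, and class names are illustrative assumptions only; they are not part of BACnet, Modbus, or any schema mandated by B-SMART.

```python
import json

# Hypothetical self-description message a device might announce when it
# joins the fabric. Field names are illustrative, not part of any standard.
def make_announcement(device_id, kind, unit, protocol):
    return json.dumps({
        "device_id": device_id,  # unique identifier on the fabric
        "kind": kind,            # "sensor" or "actuator"
        "unit": unit,            # engineering unit of the point
        "protocol": protocol,    # transport the device speaks
    })

class FabricRegistry:
    """Collects announcements so the D&C layer can discover devices."""
    def __init__(self):
        self.devices = {}

    def on_announcement(self, message):
        info = json.loads(message)
        self.devices[info["device_id"]] = info

    def by_kind(self, kind):
        return [d for d in self.devices.values() if d["kind"] == kind]

registry = FabricRegistry()
registry.on_announcement(make_announcement("ahu1-sat", "sensor", "degC", "bacnet"))
registry.on_announcement(make_announcement("ahu1-vlv", "actuator", "percent", "modbus"))
print(len(registry.by_kind("sensor")))  # -> 1
```

In a real fabric the announcements would travel over a transport such as multicast or a message bus; the in-memory registry above only illustrates the self-description and discovery hand-off.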
\subsection{Cognitive Sub-system} \label{cognitive} The Cognitive sub-system implements the main autonomic control loop of the building and supports the SOCx process. An overview of the functionality of each layer in the Cognitive sub-system is given in the paragraphs below. \subsubsection{Discovery and Classification (D\&C)} \label{dandc} The role of the D\&C layer is to support the self-learning autonomic property by automatically finding and classifying different types of sensors, actuators, and other IoT devices available in the building. With the data feeds from the sensors and actuators on the S\&A layer, devices and their points can be classified by type using machine learning algorithms \citep{elmokhtari2021}. The D\&C layer relies on the data feeds exposed by the Fabric layer to be able to discover and connect to the different devices in the building. The D\&C layer updates the BKR with information describing the available data feeds and corresponding classifications, thus making it available to the Integration layer, described below. \subsubsection{Integration} \label{integration} The next layer in the B-SMART architecture is the Integration layer, which supports the self-organization autonomic property. This layer takes as input the raw, but identified and classified, data feeds from the D\&C layer and transforms them so that they can be effectively used by the higher cognitive layers. Data transformation operations can include mapping and transformation of data to a canonical format, normalization, imputation, filtering, anonymization, and security and confidentiality-related operations. Integration layer code also includes middleware used to implement the Streaming Platform and the Streaming Engine components (shown in Figure ~\ref{compdiag}). 
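As a minimal illustration of the transformation operations just described, the sketch below imputes missing readings and converts one raw Fahrenheit feed into a canonical Celsius form. The choice of units, the rounding, and the mean-imputation policy are assumptions made for the example only; they are not a schema mandated by B-SMART.

```python
from statistics import mean

# Canonicalize one raw, classified feed: impute gaps (None readings) with
# the window mean, then convert Fahrenheit to canonical Celsius values.
def canonicalize(readings_f):
    present = [r for r in readings_f if r is not None]
    fill = mean(present)                                   # imputation value
    imputed = [r if r is not None else fill for r in readings_f]
    return [round((r - 32.0) * 5.0 / 9.0, 2) for r in imputed]

print(canonicalize([68.0, None, 70.0]))  # -> [20.0, 20.56, 21.11]
```

A production Integration layer would chain many such operations (filtering, anonymization, format mapping) inside the Streaming Engine rather than in a single function.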
\subsubsection{Change Detection and Optimization (CDO)} \label{cdo} The CDO layer uses data stored in the BKR Real Time and Historical Data zones (see Figure ~\ref{compdiag}) to continually find the optimal operating parameters for the smart building. The CDO layer is also responsible for detecting changes in the operating characteristics of the smart building. Change detection is a very important stage in autonomic processing. Several researchers working on autonomic systems in various domains noted that continuous optimization incurs an overhead and reduces the eventual benefit (e.g. \cite{genkin2016}). To maximize the overall benefit, the frequency of parameter space searches must be optimized. This can be accomplished by searching the parameter space only when change is detected. Changes can be broadly classified into two types: \textit{faults} and \textit{concept drift}. Change detection can be implemented using a variety of techniques including, but not limited to, Bayesian statistical methods, clustering algorithms, or supervised machine learning methods. These can be used to identify faults and concept drift affecting the building’s HVAC systems and other peripheral systems such as solar panels, shading systems, elevators, security, communications, etc. Computer vision techniques can be used to detect changes in building occupancy levels, broken pipes and other mechanical failures affecting the building systems, or other warning signs (e.g. flooding, fire, and smoke). CDO layer code is packaged and deployed as part of the RT Analytics and Batch Analytics components. Once changes are detected, they can trigger one of three types of optimization response: a parameter space search without engaging the Interfacing layer, a parameter space search engaging the Interfacing layer, or the direct engagement of the Interfacing layer without a parameter space search. 
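A highly simplified sketch of the fault versus concept-drift distinction is given below: a fault is modelled as a reading outside the equipment's documented operating limits, while drift is a sustained shift of the window mean away from the historical baseline. The limits and tolerance are illustrative; a real CDO implementation would use the statistical or machine learning methods mentioned above.

```python
from statistics import mean

# Classify the current observation window against a historical baseline.
# limits: documented operating range of the instrumented equipment.
# drift_tol: how far the window mean may wander before we call it drift.
def classify_change(window, baseline_mean, limits=(2.0, 10.0), drift_tol=1.5):
    lo, hi = limits
    if any(r < lo or r > hi for r in window):
        return "fault"            # reading outside rated operating limits
    if abs(mean(window) - baseline_mean) > drift_tol:
        return "drift"            # sustained shift away from the baseline
    return "steady"               # no re-optimization needed

print(classify_change([6.1, 6.0, 25.0], baseline_mean=6.0))  # -> fault
print(classify_change([7.9, 8.1, 8.0], baseline_mean=6.0))   # -> drift
print(classify_change([6.2, 5.9, 6.1], baseline_mean=6.0))   # -> steady
```

The return value maps directly onto the three optimization responses described above: "drift" triggers a parameter space search, "fault" engages the Interfacing layer, and "steady" requires no action.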
\subsubsection{Interfacing} \label{interfacing} Serving as the top layer of B-SMART, the Interfacing layer receives input from the CDO layer. When changes are detected in the operating characteristics of the smart building, this layer interfaces with the appropriate external actor(s) to respond. These actions typically involve initiating workflows with humans in the loop, for example to replace faulty equipment. Figure ~\ref{sysctxt} shows the system context diagram for an autonomic smart building, and the role of this layer. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/sysctxt} \end{center} \caption{\label{sysctxt} System context diagram for an autonomic smart building and the role of the Interfacing layer.} \end{figure} As implied by this figure, the Interfacing layer hosts a specialized AI that provides the cognitive functionality necessary to maintain and operate the building and interact with several human and non-human external actors. These include: \begin{enumerate} \item \textbf{Building tenants:} The human inhabitants of the smart building need to be able to interact with the building to adapt the building environment to suit their comfort level. Currently a common interaction involves adjusting the temperature setting of the smart thermostat or filling in a Web-based form indicating the comfort level or lack thereof. Future interaction techniques should focus on NLP and video as primary interaction channels between the tenant and the smart building AI. The interactions between the tenants and the smart building could be initiated by either party, but most commonly it will be the tenants who will contact the smart building AI to address a particular concern. \item \textbf{Building maintenance and management personnel:} This is another very important category of human actors that the smart building needs to interact with. These interactions could be focused on routine maintenance or triggered in response to detected faults. 
For example, if the CDO layer detects that the sensors responsible for controlling the building chiller are reporting erroneous values, the Interfacing layer will contact the appropriate building maintenance technician. To determine which technician to contact, and how to contact him or her, the Interfacing layer may engage a rules engine. While today most interactions will involve contacting the maintenance technician via e-mail or text message, future directions should encourage NLP voice interactions between the smart building AI and these human actors. \item \textbf{Human-driven vehicles:} Vehicles frequently need to interact with smart buildings. One example of such an interaction could involve a human driver contacting the smart building to enquire about the availability of parking spaces and charging stations for electric vehicles. The B-SMART architecture encourages the use of voice and video channels and NLP techniques to accomplish these types of interactions. \item \textbf{Smart grids:} Smart buildings need to exchange information with the smart grid that supplies them, for example to report anticipated energy demand, participate in demand-response programs, or receive information describing the current state of the power grid. These interactions will typically be machine-to-machine and could be initiated by either party. \item \textbf{Other smart buildings:} Other smart buildings may have indirect relationships with the target building, for example through connection to the same smart grid, or direct relationships, for example through shared ownership, occupants, or building systems. \item \textbf{Autonomous driving vehicles:} As with human-driven vehicles, autonomous driving vehicles will need to interact with smart buildings. The B-SMART architecture encourages the use of NLP, rather than APIs, as the universal communication interface for both human-driven and autonomous vehicles. 
The main advantage of this approach is that NLP-based communication can be readily understood and audited by humans. \item \textbf{Other AIs (responsible for complementary domains):} The smart building AI will likely need to communicate with other AIs, such as Google’s Assistant or Apple’s Siri, to perform its functions or accommodate queries from its residents. One example could involve an inquiry by a building resident about upcoming weather conditions in the immediate vicinity of the smart building. This query, after pre-processing to augment it with the precise location of the building, could be passed on to another AI that specializes in this type of query. B-SMART encourages the use of NLP techniques for these types of interactions to facilitate auditing by humans. \end{enumerate} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/interfacinglayers} \end{center} \caption{\label{interfacinglayers} Interfacing layer functionality and technology sub-layers.} \end{figure} The Interfacing layer itself should be implemented using the layered architectural pattern shown in Figure ~\ref{interfacinglayers}. The Interfacing layer leverages the information stored in the BKR to establish the current state of the smart building and to communicate relevant information to human and non-human autonomic actors. The NLP sub-layer supports Text, Voice, and Video communication with human actors. The B-SMART reference architecture strongly encourages the use of NLP-based communications because they can be easily understood and audited by human actors with little to no IT and computer science knowledge. As discussed earlier in this work, the need to hire or acquire IT skills is a major inhibitor in smart building adoption. The Translation sub-layer converts verbal and text natural language queries into queries that can be understood by the Native Query Interface sub-layer that is used by the BKR implementation. 
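As a toy sketch of the Translation sub-layer, the snippet below maps a natural-language utterance onto a hypothetical structured BKR query using simple keyword matching. A production implementation would use an NLP toolkit such as spaCy or NLTK; the intent table and query schema shown here are invented purely for illustration.

```python
# Illustrative intent table: keyword -> (operation, BKR target).
# Both the operations and targets are hypothetical, not a real BKR API.
INTENTS = {
    "temperature": ("read_point", "zone_temperature"),
    "power": ("read_point", "power_draw"),
    "fault": ("list_events", "faults"),
}

def translate(utterance):
    """Translate a natural-language request into a structured query dict."""
    text = utterance.lower()
    for keyword, (op, target) in INTENTS.items():
        if keyword in text:
            return {"op": op, "target": target}
    return {"op": "unknown", "target": None}   # fall through to clarification

print(translate("What is the temperature on the third floor?"))
```

The structured dict produced here stands in for the query that the Native Query Interface sub-layer would execute against the BKR.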
The B-SMART reference architecture does not mandate any specific implementation for the Native Query Interface sub-layer. Each commercial or open-source technology will likely provide its own database-specific implementation of this layer. The B-SMART reference architecture encourages the development of new technologies to implement the NLP and Translation sub-layers. To fully implement the self-description property, the smart building must be able to query the BKR and return the relevant information to the correct external actor. Interactions between the smart building and human actors could include financial reporting, technical reporting, proactive maintenance requests, interactions with the building occupants regarding their comfort level, and reactive maintenance requests. These interactions could be implemented using either exposed APIs or NLP. To implement self-healing, configurable rules would determine which actions need to be performed for a given failure scenario, and in which sequence. For example, if a power failure is detected, the smart building could automatically start the back-up power generator. The building could also send notifications of the event to human actors responsible for building maintenance. Once the building detects that the main power is back on, it would stop the back-up power generator and switch to the main power supply. There are many other failure scenarios, some of which may involve self-healing, while others would require intervention by external human or autonomic actors. \section{The B-SMART Autonomic Control Loop} \label{ctrl-loop} In this section, we present the Autonomic Control Loop of the B-SMART reference architecture. The autonomic cycle view of our reference architecture, which describes the dynamic relationships between the smart building sub-systems, is shown in Figure ~\ref{ctrlloopdiag}. 
Table ~\ref{ctrlstages} summarizes the main stages of the autonomic control loop, the B-SMART architectural layers involved in each stage, the most relevant AI sub-fields, and the currently available technologies that can be used to implement each stage. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/ctrlloopdiag} \end{center} \caption{\label{ctrlloopdiag} The B-SMART autonomic cycle.} \end{figure} \FloatBarrier \begin{table}[h] \caption{\label{ctrlstages} Mapping of the most relevant AI research fields and the existing technology stack to the B-SMART autonomic control loop stages.} \begin{center} \begin{tabularx}{\textwidth}{XXXX} \hline Stage&Involved Layers&Relevant AI Sub-fields&Applicable AI Technologies\\ \hline Monitor&S\&A, Fabric, CDO&Computer vision&OpenCV \citep{opencv2022}\\ Detect Change&CDO&Change detection algorithms, unsupervised machine learning, supervised machine learning&Apache Spark \citep{spark2022}, scikit-learn \citep{scikit-learn}, SciPy \citep{scipy2022}\\ Optimize&CDO&Reinforcement learning, supervised machine learning, evolutionary computing&Open-AI \citep{openai2022}, Keras-RL \citep{kerasrl2022}\\ Interface&Interfacing&NLP, Rules engines&TensorFlow \citep{tensorflow2022}, PyTorch \citep{pytorch2022}\\ \hline \end{tabularx} \end{center} \end{table} \FloatBarrier \subsection{Monitoring, Optimizing, and Interfacing} \label{mon-opt-int} The autonomic smart building will spend most of its time simply monitoring the building state (Monitor stage) rather than performing actions. The actual monitoring of the building state should be handled entirely by the CDO layer code packaged in the RT Analytics component, which consumes data generated by the S\&A layer, transported by the Fabric layer, and/or processed by the Integration layer. The CDO layer compares the data in the current observation window with historical data stored in the BKR Historical Data zone. If change is detected, it is first classified. 
If the change is classified as a \textit{fault}, then the CDO layer passes all the relevant information to the Interfacing layer and continues processing. The Interfacing layer further analyzes the fault and determines the correct set of actions that need to be executed to address the fault. If the change is classified as \textit{concept drift}, the CDO layer performs a parameter space search (Optimize stage in Fig. ~\ref{ctrlloopdiag}) to determine the optimal parameters for the new set of conditions. The newly discovered optimal set of parameters is then stored in the BKR. The CDO layer code continuously monitors the state of the BE systems, detects changes and updates the BKR as required. Interactions between the CDO layer and the Interfacing layer should be asynchronous, allowing for continuous monitoring and optimization of different smart building systems even as some of the systems may be taken offline due to faults and/or scheduled maintenance. The B-SMART reference architecture does not mandate real-time or near-real-time semantics for the Monitor and Detect Change stages. The architecture could be implemented using batch semantics as well, with important processing, such as machine learning algorithm training, happening during operational maintenance windows. B-SMART does encourage asynchronous interaction semantics between the CDO and the Interfacing layers because this leads to a more resilient and fault-tolerant overall architecture. Interaction with external human and non-human actors should be handled entirely by the Interfacing layer. Once the CDO layer identifies faults that need to be handled by the Interfacing layer, it passes context information to the latter. The Interfacing layer then establishes the most appropriate sequence of actions that need to be executed in response to this fault and executes them (\textit{Interface stage} in Fig. ~\ref{ctrlloopdiag}, \textit{Interfacing state} in Fig. ~\ref{statediag}). 
These actions should be performed asynchronously and in parallel. Consider the following scenario. At time \textit{t}, the CDO layer may detect that the building chiller is functioning outside its documented operating parameters. The CDO layer passes the context describing this situation to the Interfacing layer and, in parallel, performs a parameter space search on the chiller to establish the new optimal schedule for temperature set-points to use. At time \textit{t+1}, the CDO layer determines that a power failure has occurred. The CDO layer passes this observation to the Interfacing layer to handle. In response to these events the Interfacing layer: \begin{enumerate} \item At time \textit{t}, contacts building maintenance personnel and/or the vendor responsible for chiller maintenance and interacts with them to schedule a maintenance visit. \item At time \textit{t+1}, starts the building backup-power generator. \item Contacts building maintenance personnel responsible for power maintenance, contacts the smart grid and/or smart city points responsible for information that describes the state of the power grid. \item Contacts the building tenants and management personnel to notify them of the power failure and provide an update on when normal power service is expected to be restored. \end{enumerate} While these interactions are happening, the CDO layer continues monitoring the state of the building systems. Because the power failure at time \textit{t+1} is more serious, it will likely be handled and fixed by the maintenance personnel before the chiller issue detected at time \textit{t}. While waiting for maintenance, the chiller will continue to operate using the newly established optimal schedule of temperature set-points. Once the maintenance personnel perform chiller maintenance, they will confirm this to the Interfacing layer. The Interfacing layer will notify the CDO layer (asynchronously) that a new parameter space search needs to be performed on the building chiller. 
The CDO layer performs the parameter space search on the chiller and stores the new optimal parameters in the BKR. These types of interactions between the Interfacing and the CDO layers form the on-going phase of the SOCx process. \subsection{Smart Building State Transitions} \label{state-transitions} Figure ~\ref{statediag} presents the state transition diagram for an autonomic smart building. While the start-up commissioning SOCx process is on-going, the autonomic smart building remains in the ‘Initializing’ state. Once the start-up SOCx process deploys the BKR and populates it with initial operating parameters, the building transitions into the ‘Optimizing’ state, because, at this point in time, the optimal operating parameters for the building systems have not yet been established. Once the optimal operating parameters have been found, the building transitions into the ‘Detecting Change’ state. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/statediag} \end{center} \caption{\label{statediag} The B-SMART building state transitions.} \end{figure} The smart building remains in this state until the CDO layer code detects one of two change events. If the change event is classified as concept drift, the smart building transitions into the ‘Optimizing’ state, during which the parameter space search is performed. Once the new optimum is found, the smart building transitions back into the ‘Detecting Change’ state. If the change is classified as a fault, the smart building transitions into the ‘Interfacing’ state. While in this state, the Interfacing layer code interacts with external actors to resolve the fault condition. Once the fault is resolved, if repairs do not involve changing, adding, or upgrading BE systems, the smart building transitions back into the ‘Detecting Change’ state. 
If the building systems are changed, added, or upgraded, the maintenance personnel will send a message to the Interfacing layer to put the building into the ‘Interfacing’ state. During the upgrade procedure the building will transition into the ‘Initializing’ state. Once the new initial operating characteristics have been entered into the BKR, the building transitions into the ‘Optimizing’ state. This flow is exactly the same as for the start-up commissioning process described above. \subsubsection{Reacting to Changes in the Building Equipment} \label{equipment-changes} Conversion from a traditional building to a smart building will likely be performed iteratively. As new sensors and BE systems are added to the building, external actors responsible for installing them notify the Interfacing layer. Sensor data feeds are automatically discovered by the Fabric layer, classified by the D\&C layer, and integrated into the data stream by the Integration layer. The Interfacing layer communicates with the external actors to populate the BKR with information describing the newly added systems and sensors and rules describing how to handle failures affecting these systems. The CDO layer automatically initiates parameter space search on any newly discovered systems. Once optimal operating parameters are established, they are stored in the BKR. The CDO layer adds the newly installed systems to the monitoring and change detection activities. These interactions comprise the SOCx on-going commissioning process. \section{Integrating with Legacy Building Automation Systems (BAS)} \label{basintegration} Many traditional buildings are equipped with legacy BAS. These systems may implement some of the capabilities described in B-SMART but will not have the same degree of autonomic functionality and functional and technological decoupling. 
Because legacy BAS are not currently capable of automatically discovering and classifying the different sensors and actuators, but instead rely on manual configuration, the D\&C functionality must be provided separately. In those situations where a BAS exposes APIs that allow it to send messages with payloads describing the state of the systems it manages to the Streaming Platform, it can be integrated with the other smart building components. The Integration layer code packaged in the Streaming Engine can then incorporate these data into the multi-variate time-series that form the smart building data stream. These data can then be captured in the Real Time and Historical Data Zones of the BKR and operated on by the CDO layer code packaged in the RT Analytics component (see Figure ~\ref{compdiag}). Legacy BAS systems also typically provide dashboards accessible over the Web. This allows for an on-the-glass integration strategy in addition, or as an alternative to, message-level integration. This integration strategy will require the Interfacing layer code packaged in the Building AI component (see Figure ~\ref{compdiag}) to display and/or describe using NLP the contents of the dashboard elements exposed by the legacy BAS. The integration strategy preferred by B-SMART is message-level integration, on the middle tier, via message-oriented middleware. This successful integration approach was described, for example, in a case study by El Mokhtari et al. \citep{mokhtari2022}. Alternate integration strategies involving the database and presentation tiers are likely to be complicated by the fact that legacy systems need to adhere to more limited data models and user interfaces than those implemented using the latest AI technologies. \section{Supporting the SOCx Processes} \label{supporting-socx} There are two SOCx processes to consider: \begin{enumerate} \item Start-up Commissioning \item On-going commissioning. 
\end{enumerate} Autonomic smart buildings must support both processes. The sections below discuss how this can be accomplished in more detail. \subsection{Start-up Commissioning} \label{start-up} Start-up commissioning of smart buildings (as distinct from and in addition to activities performed during the commissioning of traditional buildings) must include: \begin{enumerate} \item Identifying all of the sensors and actuators used by the BE layer systems. \item Classifying them by type. \item Ensuring that the data being generated by the sensors has values within the standard operating parameters established by the manufacturers of both the sensor and the BE system that it instruments. \item Capturing and storing the statistical baselines for each piece of equipment. \item Creating and populating the BKR with the above data. \end{enumerate} Start-up commissioning involves leveraging the functionality provided by the D\&C layer and the Integration layer code to establish the data streams that will continuously update the BKR. The autonomic control loop starts once the start-up commissioning process successfully completes. Monitor functionality of the CDO layer is used to analyze the incoming data. Change detection functionality of the same layer is used to identify situations that require an action to be performed by the autonomic smart building manager. The Integration layer includes and supports the Start-Up Commissioning process. Another way to look at this relationship is to consider that one of the key goals of the start-up commissioning process for a smart building must include achieving tight integration of all sensors and actuators, and legacy BIM/BMS, with layers implementing higher-level autonomic functionality. The start-up commissioning process bootstraps the main autonomic control loop of the smart building by populating the BKR. The Interfacing layer coordinates both the start-up commissioning SOCx stage and the on-going SOCx commissioning activities. 
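The five start-up commissioning steps listed above can be sketched as a single bootstrap routine. The BKR is modelled here as a plain dictionary, and the device records, rated ranges, and classification rule are illustrative assumptions rather than a prescribed B-SMART data model.

```python
from statistics import mean, pstdev

def commission(discovered_points):
    """Bootstrap the BKR from points discovered on the fabric."""
    bkr = {"points": {}, "baselines": {}}
    for point in discovered_points:                            # 1. identify
        # Toy classification rule: points that report readings are sensors.
        kind = "sensor" if point["readings"] else "actuator"   # 2. classify
        lo, hi = point["rated_range"]
        in_range = all(lo <= r <= hi for r in point["readings"])  # 3. validate
        baseline = None
        if point["readings"]:                                  # 4. baselines
            baseline = (mean(point["readings"]), pstdev(point["readings"]))
        bkr["points"][point["id"]] = {"kind": kind, "in_range": in_range}
        bkr["baselines"][point["id"]] = baseline               # 5. populate BKR
    return bkr

bkr = commission([{"id": "chw-st", "rated_range": (4.0, 12.0),
                   "readings": [6.0, 6.2, 5.9]}])
print(bkr["points"]["chw-st"]["in_range"])  # -> True
```

The resulting dictionary stands in for the populated BKR that the autonomic control loop consumes once start-up commissioning completes.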
While the ultimate goal is to fully automate these activities, it is understood that, at least in the near-term future, most will require interaction with human actors. \subsection{On-going Commissioning (OCx)} \label{ocx} Throughout the building lifecycle, the CDO layer and Interfacing layer code update the data stored in the BKR. The building state transitions shown in Figure ~\ref{statediag} comprise the on-going commissioning process of the SOCx. Whenever the CDO layer code detects change and triggers an optimization of the building operating parameters, this action in effect represents automated re-commissioning of some of the smart building's systems. Once the new optimum is found, it is recorded in the BKR, creating a new baseline for the smart building. The transition from the Optimize stage to the Detect Change stage also forms a part of the on-going commissioning process. Once the CDO layer code identifies a fault and invokes the Interfacing layer to engage external actors, such as the building maintenance staff, it has in effect triggered re-commissioning of some of the building's systems; this, too, forms a part of the on-going commissioning process. The on-going SOCx process can be generalized to involve the following steps: \begin{enumerate} \item Change detection. If the building is in a steady state, then no re-commissioning of systems is needed. \item Repair or replacement. If the change is a fault, then external actors need to repair or replace some of the building's systems. If the change is not a fault, then this step is skipped. \item Optimization. Performance of newly repaired or replaced components must be automatically optimized. \item Update. Newly established optimal operating characteristics need to be stored in the BKR and become the new baseline for the CDO layer code. \end{enumerate} The B-SMART autonomic reference architecture is thus designed to natively support both the start-up commissioning and on-going commissioning processes for smart buildings.
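The four generalized OCx steps above amount to a single control-loop iteration. The sketch below is purely illustrative; all callables are placeholders for real CDO and Interfacing layer functionality, not B-SMART APIs:

```python
def ocx_cycle(detect_change, is_fault, repair_or_replace, optimize, bkr):
    """One pass of the generalized on-going commissioning (OCx) loop.
    All arguments are placeholder callables standing in for CDO and
    Interfacing layer functionality."""
    change = detect_change()                 # 1. change detection
    if change is None:
        return "steady-state"                #    no re-commissioning needed
    if is_fault(change):
        repair_or_replace(change)            # 2. engage external actors
    new_baseline = optimize(change)          # 3. re-optimize affected systems
    bkr["baseline"] = new_baseline           # 4. record new baseline in the BKR
    return "re-commissioned"
```

Each completed pass that detects a change ends by overwriting the baseline in the BKR, which is exactly the automated re-commissioning described above.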
\section{Applying B-SMART - A Case Study} \label{case-study} As noted in the Section ~\ref{integration}, reference architectures are used to accelerate the software and systems design cycle. This section presents an example case study in which we applied the B-SMART reference architecture to an existing smart building to map out a research and development road map for future enhancements. The Daphne Cockwell Complex (DCC) building of the Toronto Metropolitan University campus has been a showcase smart building and was used for our case study. Smart building research conducted on the DCC has been presented at many scientific conferences \citep{misic2021, mokhtari2022}. Although this building has been extensively instrumented and studied, the general consensus is that its journey to becoming a truly autonomic smart building is far from complete. In the sub-sections below we first present a fit-gap analysis of the DCC relative to our B-SMART reference architecture. \subsection{Fit-Gap Analysis of DCC} A fit-gap analysis of the current DCC architecture against B-SMART was performed using the following methodology: \begin{enumerate} \item Compare the current DCC architecture to the B-SMART high-level component diagram (Figure ~\ref{compdiag}) to identify missing conceptual elements. \item Compare current DCC interfacing capabilities to the B-SMART interfacing capabilities identified in the Section ~\ref{interfacing}, and shown in Figures ~\ref{sysctxt} and ~\ref{interfacinglayers}, to identify missing interfacing capabilities. \item Compare current DCC autonomic control capabilities to the B-SMART autonomic control capabilities identified in the Section ~\ref{ctrl-loop}, and shown in Figure ~\ref{ctrlloopdiag}, to identify missing capabilities required to implement the building autonomic control loop.
\end{enumerate} Figure ~\ref{compdiag_appl} shows the DCC technical solution - the technologies currently being used to implement the operating DCC systems - mapped onto the B-SMART high-level component diagram. The DCC Streaming Platform component is implemented using the Apache Kafka messaging platform deployed in the Amazon Cloud. Custom Python scripts are used to route change-of-value messages from the DCC BAS system to topics on the Kafka broker. The Streaming Engine component is implemented using custom Python code that picks up messages from Kafka topics, extracts payload contents, transforms the contents to the format required by the Amazon Timestream database, and persists the data in that database. The BKR is currently implemented using the Amazon Timestream database. This database is optimized for storing and working with time-series data, and stores raw sensor change-of-value data. It is used both to store real-time data and to perform historical data analysis. Building equipment specifications and the relationships among them (ontology) are stored in the Neo4j graph database. Although there is no dedicated rules engine, some of the building operating rules are stored in the DCC Archibus database. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/compdiag_appl} \end{center} \caption{\label{compdiag_appl} The DCC technical solution mapped onto the B-SMART high-level component diagram.} \end{figure} Notable gaps (see Figure ~\ref{compdiag_appl}) are: \begin{itemize} \item \textbf{Building AI}. \item \textbf{RT Analytics}. \item \textbf{Batch Analytics}. \end{itemize} Currently there is no Building AI that oversees, monitors, and coordinates the DCC systems, and is capable of interacting with other actors as described in the Sub-section ~\ref{building-ai}. Although some of the building systems are capable of semi-autonomous operation, they are not coordinated in any way.
For example, the DCC is equipped with an adjustable shading system. The automatically operated shades adjust position based on a pre-programmed schedule. The building cooling and heating systems also adjust the chiller and boiler set points based on a pre-programmed schedule. These schedules, however, are pre-programmed separately by human maintenance personnel; the two systems are not aware of each other. In many cases it may be possible to maintain the building comfort level by adjusting the shading system without changing the chiller or boiler set point, which would likely result in additional power savings. This, however, is not currently possible due to the lack of an AI that is integrated with all key building systems and capable of coordinating their operation. The DCC IT architecture also currently lacks fully functional RT Analytics and Batch Analytics components. The RT Analytics component, as discussed in the Sub-section ~\ref{rt-analytics}, must be capable of aggregating and normalizing data in real time. To accomplish this it needs to be capable of imputing missing data on-the-fly, because the DCC BAS dispatches only change-of-value messages. This component then needs to perform statistical computations on the incoming data and compare the results with historical data aggregated and produced by the Batch Analytics component. While research projects are currently under way to address this gap, the results are not yet deployed in production. This is further discussed in the Sub-section ~\ref{tech-road-map}. Figure ~\ref{sysctxt_appl} shows the results of the fit-gap analysis of the DCC interfacing capabilities against the B-SMART system context diagrams (Figures ~\ref{sysctxt} and ~\ref{interfacinglayers}). Currently the DCC is able to interface only with the building maintenance personnel, via the DCC BAS Web-based interface that serves status information for the building systems as HTML pages.
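The on-the-fly imputation requirement discussed above arises because a change-of-value stream says nothing about sensors that did not report in a given interval. One common approach, sketched here purely as an illustration (it is not the method deployed at the DCC), is last-observation-carried-forward imputation at each analytics tick:

```python
def impute_on_the_fly(events, sensor_ids, tick_times):
    """Last-observation-carried-forward imputation for a change-of-value
    stream: at each analytics tick, sensors that have not reported since
    the previous tick keep their last known value. Illustrative only;
    field layout and tick scheduling are assumptions."""
    last = {s: None for s in sensor_ids}
    rows, i = [], 0
    events = sorted(events)            # events are (time, sensor_id, value)
    for t in tick_times:
        while i < len(events) and events[i][0] <= t:
            _, sensor, value = events[i]
            last[sensor] = value
            i += 1
        rows.append({"time": t, **last})
    return rows
```

Each emitted row is a complete multi-variate sample, which is what downstream statistical computations and comparisons against Batch Analytics baselines require.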
The DCC is not currently capable of interfacing with the building tenants, management personnel, other smart buildings, other AIs, smart grids, or vehicles. The DCC BAS, which is the only system capable of interfacing with external actors, has no NLP capability. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/sysctxt_appl} \end{center} \caption{\label{sysctxt_appl} The DCC technical solution mapped onto the B-SMART system context diagram to highlight interfacing gaps.} \end{figure} Figure ~\ref{ctrlloopdiag_appl} shows the results of the fit-gap analysis against the B-SMART autonomic control loop shown in Figure ~\ref{ctrlloopdiag}. Currently, the Detect Change, Interface, and Optimize stages of the autonomic loop are only partially implemented. While there are research projects in progress to close the gap in the Detect Change stage, this effort needs to continue until the results are deployed in production (see the discussion in the sub-section below). \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/ctrlloopdiag_appl} \end{center} \caption{\label{ctrlloopdiag_appl} The DCC technical solution mapped onto the B-SMART autonomic control loop diagram.} \end{figure} Limited interfacing capability exists via the DCC BAS Web-based interface. This allows the DCC to transition from the Detect Change stage to the Interface stage and back. The DCC BAS is not currently capable of automatically transitioning from the Detect Change stage to the Optimize stage and back, or from the Optimize stage to the Interface stage and back. Currently these transitions would have to be carried out manually as human-driven procedures. Additional detail can be gleaned by performing fit-gap analysis on Figures ~\ref{interfacinglayers} and ~\ref{statediag}. Information obtained by doing so can be used during the detailed design stage to refine the newly proposed technical solution.
This discussion is omitted in this work for brevity. \subsection{Technology Road-map} \label{tech-road-map} The fit-gap exercise of comparing the existing DCC architecture with our B-SMART reference architecture thus provides a number of architectural shortcomings (gaps) that can be used to build out a research and development road-map for making the DCC an autonomic smart building. Figure ~\ref{road_map} shows the proposed research and development road map that was developed for the DCC based on the results of the fit-gap analysis against the B-SMART reference architecture presented in the previous sub-section. The horizontal axis in the figure represents the passage of time, beginning with the year 2022 and spanning the next six years. The vertical axis represents the technological maturity of the smart building features that are proposed to be introduced into the DCC to make it 'smarter' and more autonomic. Research and development activities are grouped into three stages: \begin{enumerate} \item \textbf{CDO Enablement}. Change detection and optimization capabilities are required to be in place before autonomic enablement can begin, because they are used to trigger state transitions (see Figure ~\ref{statediag}). Therefore, the initial focus has to be on these activities. \item \textbf{Autonomic Enablement}. This stage focuses on resolving the gaps and fully automating the control loop shown in Figure ~\ref{ctrlloopdiag} and the state transitions shown in Figure ~\ref{statediag}. \item \textbf{NLP Enablement}. NLP capabilities are not required to be in place before the previous two stages can proceed, because API-based interfacing capabilities can be used initially. Some of the NLP activities can proceed in parallel with CDO Enablement and Autonomic Enablement activities.
The bulk of the NLP enablement activities should proceed, though, once the Autonomic Enablement activities reach a sufficient level of technical maturity, because NLP-based interfacing will need to be coordinated by autonomic rules and algorithms. \end{enumerate} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/road_map} \end{center} \caption{\label{road_map} The proposed research and development road map for the DCC.} \end{figure} The first gaps to be addressed would have to be those related to the RT Analytics and Batch Analytics components, as part of the CDO Enablement stage. Work focusing on real-time fault detection is currently in progress to help close this gap \citep{mokhtari2022}, and it is anticipated that it will reach technical maturity in 2023. To complement fault detection, concept drift detection needs to be addressed as well. It is proposed to do so in 2024. Once the RT Analytics and Batch Analytics components have been fully implemented and are capable of supporting automated fault and real-time concept drift detection, work to close the autonomic control loop transition gaps can begin. It is proposed to do so in 2024 and 2025, with projects focusing on on-line energy optimization algorithms and rules engine integration. Work focusing on NLP enablement of the DCC can begin relatively early, with a project to put an NLP interface in front of the existing DCC BAS to complement the existing Web-based interface used by the building maintenance staff. Subsequent projects could extend this approach to interface with the DCC management staff, tenants, vehicles, smart grids, other smart buildings, and other AIs. These projects can be staged over a period of 5 years, with the bulk of the development activities taking place after 2025, to benefit from the rules integration that will be implemented as part of the Autonomic Enablement stage.
The DCC BKR will be enhanced iteratively with features and technologies as the above projects proceed. Enhancements may include, for example, introducing a database optimized for data warehousing queries, such as Apache Hive. \section{Discussion} \label{discussion} Smart buildings, and by extension smart cities, represent a new and rapidly evolving field. Smart buildings are generally designed and developed by multi-disciplinary teams that include the more traditional engineers and architects, computer scientists, and IT personnel. Although the rapidly advancing field of AI presents great opportunities for endowing smart buildings with intellect, today little exists to help guide these teams in this endeavor. The B-SMART component model presents a proven, repeatable pattern that demonstrates how to combine AI technologies and big data into a coherent autonomic solution. The layering of the B-SMART architecture encourages different vendors to build interoperable solutions. Systems integrators working on implementing smart buildings should be better able to mix and match products from different vendors to suit their building requirements. This can bring additional benefits when we consider smart cities. For example, basing the Interfacing layer for all smart buildings on NLP will eventually enable the smart buildings in a smart city to literally talk to each other, as well as to their human occupants and maintenance personnel, in a manner that can be readily understood by humans who do not have IT training. The main limitation of this study is related to the general level of maturity of the smart buildings field. This field is still very young and is far from maturity, and there are very few documented examples of autonomic smart buildings. Despite this limitation, B-SMART represents a significant advance in this area and will stimulate both technological advances and architectural maturation of this important field of engineering.
Leveraging B-SMART to speed up the architectural design cycle for a smart building will reduce the costs associated with implementing a smart building, improve the success rate for these projects, and reduce resistance to converting traditional buildings to smart buildings. It will produce smart building designs that are well liked by building occupants and are able to evolve and improve through the iterative introduction of new technologies over time. \section{Conclusions} \label{conclusions} In this paper we presented the Building Systems Management Autonomic Reference Template (B-SMART) - the first reference architecture for autonomic smart buildings. The B-SMART reference architecture is designed to guide the development of a new generation of autonomic building automation systems for smart buildings. B-SMART can dramatically reduce the duration and cost of the IT and software design cycle for smart buildings by providing IT architects and software developers working on smart buildings with a clear pattern to follow. Our reference architecture explains how to apply a wide range of AI, IoT, and big data technologies to construct a coherent autonomic smart building solution. It encourages decoupling of functionality and independent evolution of the key technologies used to construct smart buildings, and encourages standardization and interoperability among smart buildings, smart grids, and smart cities. B-SMART is designed to support the critically important SOCx processes. This contributes to the ongoing discourse within the AEC community regarding smart building performance optimization, providing a framework for the development of future SOCx systems to optimize and maintain building performance. We also presented an example application of B-SMART by developing a technology road map for the DCC smart building of the Toronto Metropolitan University campus.
This road map will serve as a guide for research and development activities focusing on DCC for the next five years. \bibliographystyle{elsarticle-harv}
\section{Introduction} \subsection{Introduction of the model} We denote by $\mathbb{N}$ the set of positive integers, and by $\mathbb{N}_0$ the set of non-negative integers. A plane partition is a double sequence of non-increasing integers with a finite number of non-zero elements. More precisely, $\pi=(\pi_{i,j})_{i,j \geq 1} \in \mathbb{N}_0^{\mathbb{N}\times \mathbb{N}}$ is a plane partition if and only if : \begin{align*} \pi_{i+1,j} & \leq \pi_{i,j} \text{ and } \pi_{i,j+1} \leq \pi_{i,j} \text{ for all } i,j \in \mathbb{N}, \\ |\pi| & := \sum_{i,j \geq 1} \pi_{i,j} < +\infty. \end{align*} To a plane partition we associate a subset of $E:=\mathbb{Z}\times \frac{1}{2}\mathbb{Z}$ via the map : \begin{align*} \pi \mapsto \mathfrak{S}(\pi):= \{ (i-j,\pi_{i,j}-(i+j-1)/2), \hspace{0.1cm} i,j \geq 1 \}. \end{align*} Our probability space is the space of configurations on $E$ : $\Omega := \{0,1\}^E$, equipped with the usual Borel structure generated by the cylinders. The first coordinate of a point $(t,h) \in E$ may be interpreted as the time coordinate, and the second as the space coordinate. For a plane partition $\pi$ and $(t,h) \in E$, we define $c_{(t,h)}(\pi) \in \Omega$ by : \begin{align*} c_{(t,h)}(\pi)=1 \text{ if and only if } (t,h) \in \mathfrak{S}(\pi), \end{align*} and for a subset $m= \{ m_1,...,m_l \} \subset E$, the configuration $c_m(\pi)$ is the product : \begin{align*} c_m=c_{m_1}...c_{m_l}. \end{align*} For $q \in (0,1)$, we consider the geometric probability measure $\mathbb{P}_q$ on the set of all plane partitions given by: \begin{align*} \mathbb{P}_q(\pi) = M q^{|\pi|} \end{align*} where $M$ is the normalization constant given by the MacMahon formula (\cite{stanley}, corollary 7.20.3) : \begin{align*} M=\prod_{n=1}^{+\infty}(1-q^n)^n.
\end{align*} By the inclusion-exclusion principle, $\P_q$ is fully characterized by the quantities : \begin{align*} \mathbb{E}_q \left[ c_m \right]= \P_q \left( m \subset \mathfrak{S}(\pi) \right), \quad m \subset E \text{, $m$ is finite}. \end{align*} The main result of this note, Theorem \ref{thm1} below, is a law of large numbers, as $q=e^{-r}$ tends to $1$, for local patterns $m$ in plane partitions distributed according to $\P_q$ ; it states that the normalized sum : \begin{align*} \Sigma(f,m,r) = r^2 \sum_{(t,h)} f(rt,rh) c_{(t,h)+m}, \end{align*} converges with respect to $\P_q$ to a constant, as $q=e^{-r}$ tends to $1$. Here the sum is taken over a subset of $E$ which can be regarded as the limit shape of a large plane partition scaled by a factor $1/r^2$, and $f$ is a continuous compactly supported function defined on the plane. The constant is explicitly given in terms of the discrete extended sine kernel. \par The determinantal formula due to Okounkov and Reshetikhin (\cite{okounkovreshetikhin}, see also \cite{ferrarispohn}, \cite{borodingorin} or \cite{borodinrains}) is at the center of our proof, although the statement of Theorem \ref{thm1} does not require its knowledge. \subsection{The determinantal formula} Okounkov and Reshetikhin have shown in \cite{okounkovreshetikhin} that the pushforward of $\mathbb{P}_q$ onto $\Omega$ is a determinantal point process. Let us state this fact precisely. We define the kernel $\mathcal{K}_q : E \times E \rightarrow \mathbb{R}$ by : \begin{align} \label{defKq} \mathcal{K}_{q}(t_1,h_1;t_2,h_2)= \frac{1}{(2i\pi)^{2}}\int_{|z|=1 \pm \epsilon}\int_{|w|=1 \mp \epsilon} \frac{1}{z-w} \frac{\Phi(t_1,z)}{\Phi(t_2,w)}\frac{dz dw}{z^{h_1+\frac{|t_1|+1}{2}}w^{-h_2-\frac{|t_2|+1}{2}}} \end{align} where one picks the plus sign for $t_1 \geq t_2$ and the minus sign otherwise.
The function $\Phi$ is defined by : \begin{align*} \Phi(t,z)= &\frac{(q^{1/2}/z;q)_\infty}{(q^{1/2+t}z;q)_\infty} \quad \text{for $t \geq 0$} \\ &\frac{(q^{1/2-t}/z;q)_\infty}{(q^{1/2}z;q)_\infty} \quad \text{for $t < 0$}, \end{align*} where $(x;q)_\infty$ is a $q$ version of the Pochhammer symbol : \begin{align*} (x;q)_{\infty}=\prod_{k=0}^{\infty}(1-xq^{k}), \end{align*} and $\epsilon$ is a sufficiently small positive number, which allows one to avoid the singularities of the ratio : \begin{align*} \frac{\Phi(t_1,z)}{\Phi(t_2,w)}. \end{align*} Observe that there is no need of defining the square root in formula (\ref{defKq}), since, for $(t,h) \in E$, if there exists a plane partition $\pi$ such that $(t,h) \in \mathfrak{S}(\pi)$, then we have by construction that : \begin{align*} h+ \frac{|t|+1}{2} \in \mathbb{Z}. \end{align*} The Okounkov--Reshetikhin determinantal formula is then the following statement : \begin{thm}[Okounkov-Reshetikhin, \cite{okounkovreshetikhin}, 2003] \label{thmokresh} For any positive integer $l \in \mathbb{N}$ and any subset $m=\{ (t_1,h_1),...,(t_l,h_l) \} \subset E$, we have : \begin{align} \label{detformula} \mathbb{E}_q[c_m(\pi)]=\det \left( \mathcal{K}_q(t_i,h_i;t_j,h_j) \right)_{i,j=1}^l \end{align} where $\mathbb{E}_q$ denotes the expectation with respect to $\mathbb{P}_q$. \end{thm} The measure $\P_q$ is a particular case of a Schur process. Schur processes, first introduced in \cite{okounkovreshetikhin}, are dynamical generalizations of Schur measures (\cite{okounkovschurmeasures}). For a more elementary treatment of Schur measures and Schur processes, see e.g. \cite{borodinrains}, \cite{borodingorin} and references therein. See also \cite{johansson} for another approach. \subsection{The limit process} \label{seclimproc} In the same article, Okounkov and Reshetikhin proved a scaling limit theorem for $\mathbb{P}_q$, when $r=-\log(q)$ tends to $0^+$, which we now formulate.
Let us define : \begin{align*}A:= \lbrace (t,x) \in \mathbb{R}^{2} , |2\cosh(t/2)-e^{-x}| < 2 \rbrace \subset \mathbb{R}^2, \end{align*} and for $(\tau,\chi) \in A$, let $z(\tau,\chi)$ be the intersection point of the circles $C(0,e^{-\tau/2})$ and $C(1,e^{-\tau/4-\chi/2})$ with positive imaginary part, see figure 1. \begin{figure} \includegraphics[scale=0.5,clip=true,trim=0cm 0cm 0cm 0cm]{cerclesinter.png} \caption{The circles $C(0,e^{-\tau/2})$ and $C(1,e^{-\tau/4-\chi/2})$ and their intersection points $z=z(\tau,\chi)$ and its complex conjugate $z'=\overline{z(\tau,\chi)}$.} \end{figure} The condition $(\tau,\chi) \in A$ guarantees that $z(\tau,\chi)$ exists and is not real. For $(\tau,\chi) \in A$, we define the translation invariant kernel $\mathcal{S}_{z(\tau,\chi)} : E \rightarrow \mathbb{C}$ : \begin{align} \label{defdynsin} \mathcal{S}_{z(\tau,\chi)}(\Delta t, \Delta h)=\frac{1}{2i\pi}\int_{\overline{z(\tau,\chi)}}^{z(\tau,\chi)}(1-w)^{\Delta t}w^{-\Delta h - \frac{\Delta t}{2}} \frac{dw}{w}, \end{align} where the integration path crosses $(0,1)$ for $\Delta t \geq 0$ and $(-\infty,0)$ for $\Delta t < 0$. For reasons explained below, this kernel will be called the \textit{extended sine kernel}. Then, the following holds : \begin{thm} [Okounkov-Reshetikhin, \cite{okounkovreshetikhin}, 2003] \label{thmcvokresh} For all $(\tau, \chi) \in A$ and all $m=\{ (t_1,h_1),...,(t_l,h_l) \} \subset E$, we have : \begin{align*} \lim_{r\rightarrow 0} \mathbb{E}_{e^{-r}} \left[c_{\frac{1}{r}(\tau,\chi)+m } \right]= \det \left( \mathcal{S}_{z(\tau,\chi)}(t_i-t_j,h_i-h_j) \right)_{i,j=1}^l. \end{align*} \end{thm} In lemma \ref{lemcv} below, we give the speed of convergence. \par For $(\tau, \chi) \in A$, the kernel $\mathcal{S}_{z(\tau,\chi)}$ defines a determinantal point process on $E$, i.e. 
a probability measure $\P_{(\tau,\chi)}$ on $\Omega$ defined by : \begin{align} \label{sindynproc} \forall m=\{ (m^1_1,m^2_1), ..., (m^1_l,m^2_l) \} \subset E, \quad \mathbb{E}_{(\tau,\chi)} [c_m] = \det\left( \mathcal{S}_{z(\tau,\chi)}(m_i^1-m_j^1,m_i^2-m_j^2)\right)_{i,j=1}^l, \end{align} where $\mathbb{E}_{(\tau,\chi)}$ is the expectation with respect to $\P_{(\tau,\chi)}$. Theorem \ref{thmcvokresh} means that, when $q$ approaches $1$, if we scale each coordinate of a point process coming from a plane partition distributed according to $\P_q$ by a factor of $r=-\log(q)$ and then zoom around $(\tau,\chi)$, the obtained point process behaves as if it were distributed according to $\P_{(\tau,\chi)}$. This point process can be seen as a two-dimensional or dynamical version of the usual discrete sine-process on $\mathbb{Z}$ (see e.g. \cite{boo} or \cite{borodingorin} and references therein). Indeed, setting $\Delta t =0$ in (\ref{defdynsin}) leads to : \begin{align*} \mathcal{S}_{z(\tau,\chi)}(0,\Delta h) = e^{\frac{\tau \Delta h}{2}} \frac{\sin(\phi\Delta h)}{\pi\Delta h} \end{align*} where $z(\tau,\chi)= e^{-\frac{\tau}{2}+i\phi}$. Note that we can ignore the factor $e^{\frac{\tau \Delta h}{2}}$ since it will disappear from any determinant of the form (\ref{sindynproc}). \subsection{Main result} For $r >0$, we define the set $A_{r} \subset E$ by : \begin{align*} A_{r}= r^{-1}A \cap E. \end{align*} For brevity, we write $\P_r$ (resp. $\mathbb{E}_r$) instead of $\P_{e^{-r}}$ (resp. $\mathbb{E}_{e^{-r}}$). For a continuous compactly supported function $f : \mathbb{R}^2 \rightarrow \mathbb{R}$, and a finite subset $m \subset E$, we define the random variable : \begin{align} \label{sum} \Sigma(f,m,r)=r^2 \sum_{(t,h) \in A_{r} }f(rt,rh)c_{(t,h)+m} \end{align} and the deterministic integral : \begin{align*} I(f,m) = \int_{A} f(\tau,\chi)\mathbb{E}_{(\tau,\chi)} [ c_m ] d\tau d\chi.
\end{align*} Our Theorem establishes that, under $\P_r$, the sum $\Sigma(f,m,r)$ converges to $I(f,m)$. This theorem can thus be interpreted as a weak law of large numbers for functionals of random plane partitions. \begin{thm} \label{thm1}For every continuous compactly supported function $f: \mathbb{R}^2 \rightarrow \mathbb{R}$, every finite subset $m \subset E$, one has : \begin{align} \forall \varepsilon >0, \quad \lim_{r \rightarrow 0} \mathbb{P}_{r} \left( |\Sigma(f,m,r) - I(f,m) | > \varepsilon \right) =0. \end{align} \end{thm} \begin{rem}The assumption of compactness of the support of the function $f$ is used for the uniformity of constants in estimations of averages and variances. It might be interesting to see if Theorem \ref{thm1} still holds for a wider class of functions, e.g. Schwartz functions. \end{rem} \subsection{Comparison with other models} In the context of the Plancherel measure on usual partitions, the same theorem was proved by A.I. Bufetov in \cite{bufetovgafa}, Lemma 4.4, in order to prove the Vershik-Kerov conjecture concerning the entropy of the Plancherel measure. Indeed, the poissonization of the Plancherel measure is a determinantal point process with the discrete Bessel kernel : \begin{align*} \mathcal{J}_\theta(x,y)=\theta \frac{J_x(2\theta)J_{y+1}(2\theta)-J_{x+1}(2\theta)J_y(2\theta)}{x-y}, \quad x,y \in \mathbb{Z}, \end{align*} where $J_x(2\theta)$ are the Bessel functions. While the crucial inequality \begin{align*} |\mathcal{J}_\theta(x,y)| \leq \frac{C}{|x-y|+1}, \end{align*} which expresses the decay of correlations for the poissonized Plancherel measure, is here almost immediate, such an inequality in our two-dimensional model requires some effort. In lemma \ref{lemdecK} below, we prove analogous inequalities for the covariances, which estimate the decay of correlations. We were not able to control the individual values of the kernel $\mathcal{K}_q$ at different points, but only products of such values.
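For comparison with the Bessel case, we sketch, for the reader's convenience, why the limiting extended sine kernel exhibits the analogous spatial decay: parametrizing the integration arc in (\ref{defdynsin}) at $\Delta t = 0$ by $w=e^{-\frac{\tau}{2}+i\theta}$, $\theta \in (-\phi,\phi)$, where $z(\tau,\chi)=e^{-\frac{\tau}{2}+i\phi}$, one gets :
\begin{align*}
\mathcal{S}_{z(\tau,\chi)}(0,\Delta h)
= \frac{1}{2i\pi}\int_{\overline{z(\tau,\chi)}}^{z(\tau,\chi)} w^{-\Delta h}\,\frac{dw}{w}
= \frac{e^{\frac{\tau \Delta h}{2}}}{2\pi}\int_{-\phi}^{\phi} e^{-i\theta \Delta h}\, d\theta
= e^{\frac{\tau \Delta h}{2}}\,\frac{\sin(\phi \Delta h)}{\pi \Delta h},
\end{align*}
so that, up to the factor $e^{\frac{\tau \Delta h}{2}}$, which cancels in every determinant of the form (\ref{sindynproc}), the kernel decays like $1/|\Delta h|$, the same rate as for the discrete Bessel kernel.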
While the kernel $\mathcal{J}_\theta$ is symmetric, our kernel $\mathcal{K}_q$ is not, and this fact is reflected in the need to take products ; besides, the single value of the kernel of a determinantal process at different points is not always relevant, as one has to consider determinants. \\ \par The extended sine kernel appears in many other models as the kernel of the bulk scaling limit of two-dimensional statistical mechanics models, for example non-intersecting paths (see e.g. \cite{gorinpaths} and references therein). It also has a continuous counterpart arising in Dyson's Brownian motion model (see e.g. \cite{katoritanemura}). We thus think that a similar law of large numbers should hold for a wide class of discrete determinantal point processes which admit the process with the extended sine kernel as a limit in the bulk. \subsection{Organisation of the paper} The paper is organized as follows. In section \ref{sec2}, we introduce notations and state preliminary lemmas, for which the proofs are given later. \par Lemma \ref{lemcv} says that the error term in Theorem \ref{thmcvokresh} is of order less than $r$, and as a consequence, the expectation of the sum $\Sigma$ converges to the integral $I$. \par Lemma \ref{lemdecK} gives a suitable control on the decay of correlations, and, together with Lemma \ref{lemdiag}, implies that the variance of the sum $\Sigma$ tends to zero. \par In section \ref{sec3}, we prove Theorem \ref{thm1}, admitting the lemmas from the preceding section. \par Section \ref{sec4} is devoted to the proofs of the lemmas. \subsection{Acknowledgements} I would like to thank Alexander Bufetov for posing the problem to me and for helpful discussions. I also would like to thank Alexander Boritchev, Nizar Demni and Pascal Hubert for helpful discussions and remarks. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement N 647133).
\section{Notation and preliminary results} \label{sec2} \subsection{Notation} For functions $f,g : [0,+\infty) \rightarrow \mathbb{C}$, we write : \begin{align*} f=O(g) \end{align*} or : \begin{align*} f \lesssim g \end{align*} if there exists $C>0$ and $r_0 >0$, such that for all $r \leq r_0$, one has : \begin{align*} |f(r)| \leq C|g(r)|. \end{align*} We write : \begin{align*} f \asymp g \end{align*} whenever $f=O(g)$ and $g=O(f)$.\\ \par The real part of a complex number $z\in \mathbb{C}$ will be denoted by $\mathfrak{R}z$. \subsection{Preliminary results} The first lemma we need estimates the error term in Theorem \ref{thmcvokresh}. \begin{lem} \label{lemcv} For any compact $K \subset \mathbb{R}^2$, and any finite subset $m \subset E$, there exists $C>0$, such that for all $r>0$ sufficiently small and all $(\tau,\chi) \in A \cap K$, one has : \begin{align} |\mathbb{E}_r[c_{\frac{1}{r}(\tau,\chi)+m}]-\mathbb{E}_{(\tau,\chi)}[c_m]| \leq Cr. \end{align} \end{lem} The following lemma expresses the decay of correlations of the process $\P_r$. \begin{lem}\label{lemdecK} Let $K \subset \mathbb{R}^2$ be a compact set, and let $m \subset E$ be finite. Let $\overline{m}$ denote the supremum norm of $m$ : \begin{align*} \overline{m}=\max \{ |t|,|h|, \hspace{0.1cm} (t,h) \in m \}.
\end{align*} Then for any $\alpha \in (0,1)$, there exists $C$, which only depends on $\alpha$, $K$ and $m$, such that for all $r >0$ sufficiently small and any $(\tau_1,\chi_1),(\tau_2,\chi_2) \in A\cap K$ satisfying : \begin{align} \label{conddist} \max \{ |\tau_1-\tau_2|, |\chi_1-\chi_2| \} > \overline{m}r, \end{align} one has : \begin{align*} \left| \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_1,\chi_1)+m}c_{\frac{1}{r}(\tau_2,\chi_2)+m} \right]-\mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_1,\chi_1)+m}\right]\mathbb{E}_r\left[c_{\frac{1}{r}(\tau_2,\chi_2)+m} \right] \right | \leq \frac{C\exp\left(-r^{-\alpha}\right)}{|\tau_1-\tau_2|^2} \end{align*} when $\tau_1 \neq \tau_2$, and : \begin{align*} \left| \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi_1)+m}c_{\frac{1}{r}(\tau,\chi_2)+m} \right]-\mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi_1)+m}\right]\mathbb{E}_r\left[c_{\frac{1}{r}(\tau,\chi_2)+m} \right] \right | \leq \frac{Cr}{|\chi_1-\chi_2|} \end{align*} when $\tau_1=\tau_2=\tau$. \end{lem} The last lemma we need is obtained as a simple corollary of Proposition \ref{propcont} below. \begin{lem}\label{lemdiag} For any compact set $K \subset \mathbb{R}^2$ and any finite subsets $m,m' \subset E$, there exists $C$ such that for any $(\tau,\chi) \in A\cap K$ and any sufficiently small $r>0$ : \begin{align*} \left| \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi)+m} c_{\frac{1}{r}(\tau,\chi)+m'}\right] - \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi)+m}\right] \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi)+m'} \right] \right| \leq C. \end{align*} \end{lem} \section{Proof of Theorem \ref{thm1}} \label{sec3} Let $K \subset \mathbb{R}^2$ be a compact set containing the support of $f$. We denote by $A_{r,K}$ the set : \begin{align*} A_{r,K}=r^{-1}(A \cap K ) \cap E. 
\end{align*} By Lemma \ref{lemcv}, we have : \begin{align*} \left| \mathbb{E}_r \Sigma(f,m,r) - r^2 \sum_{(t,h) \in A_{r,K}} f(rt,rh) \mathbb{E}_{(rt,rh)} \left( c_{m} \right) \right| \lesssim |A_{r,K}| r^{3}, \end{align*} where $|A_{r,K}|$ is the cardinality of $A_{r,K}$. We first remark that : \begin{align} \label{estimArK} |A_{r,K}| \asymp r^{-2}, \end{align} which implies : \begin{align*} \mathbb{E}_{r} \Sigma (f,m,r)= r^2 \sum_{(t,h) \in A_{r,K}} f(rt,rh) \mathbb{E}_{(rt,rh)} \left(c_{m}\right) + O(r). \end{align*} Observing then that \begin{align*} r^2 \sum_{(t,h) \in A_{r,K}} f(rt,rh) \mathbb{E}_{(rt,rh)} \left(c_{m}\right) \end{align*} is a Riemann sum for the integral $I(f,m)$, we obtain that : \begin{align*} \lim_{r \rightarrow 0} \mathbb{E}_{r} \Sigma(f,m,r) = I(f,m). \end{align*} By the Chebyshev inequality, it now suffices to prove that : \begin{align} \label{var0} \text{Var}_{r} \left( \Sigma(f,m,r) \right) \rightarrow 0 \end{align} as $r$ tends to $0$, where : \begin{multline} \label{variance} \text{Var}_{r} \left( \Sigma(f,m,r) \right) = \mathbb{E}_r \left[ \left(\Sigma(f,m,r) - \mathbb{E}_r \Sigma(f,m,r) \right)^2 \right] \\ =r^4 \sum_{(t_1,h_1),(t_2,h_2) \in A_{r,K}} f(rt_1,rh_1)f(rt_2,rh_2) \\ \times \left(\mathbb{E}_r ( c_{(t_1,h_1)+m}c_{(t_2,h_2)+m} ) - \mathbb{E}_r ( c_{(t_1,h_1)+m} ) \mathbb{E}_r ( c_{(t_2,h_2)+m} ) \right). 
\end{multline} We set $ \overline{m}=\max\{ |t|,|h|, \hspace{0.1cm} (t,h) \in m \}$, and we partition $A_{r,K}^2$ into three sets : \begin{align*} A_{r,K}^2=A_{r,K}^{>} \sqcup A_{r,K}^{>=} \sqcup A_{r,K}^{\leq}, \end{align*} where : \begin{align*} A_{r,K}^{>} &=\left\{ \left((t_1,h_1), (t_2,h_2)\right) \in A_{r,K}^2, \hspace{0.1cm} \max\{ |t_1-t_2|, |h_1-h_2|\} > \overline{m}, \hspace{0.1cm} t_1 \neq t_2 \right\}, \\ A_{r,K}^{>=} &= \left\{ \left((t,h_1), (t,h_2)\right) \in A_{r,K}^2, \hspace{0.1cm} |h_1-h_2| > \overline{m} \right\},\\ A_{r,K}^{\leq} &= A_{r,K}^2 \setminus \left(A_{r,K}^{>} \sqcup A_{r,K}^{>=} \right) = \left\{ \left((t_1,h_1),(t_2,h_2)\right) \in A_{r,K}^2, \hspace{0.1cm} \max\{ |t_1-t_2|, |h_1-h_2|\} \leq \overline{m} \right\}. \end{align*} We first estimate the variance (\ref{variance}) by : \begin{multline} \label{estimvar} \text{Var}_{r} \left( \Sigma(f,m,r) \right) \leq C r^4 \left( \sum_{\left((t_1,h_1),(t_2,h_2)\right) \in A_{r,K}^>} \left|\mathbb{E}_r ( c_{(t_1,h_1)+m}c_{(t_2,h_2)+m} ) - \mathbb{E}_r ( c_{(t_1,h_1)+m} ) \mathbb{E}_r ( c_{(t_2,h_2)+m} ) \right| \right. \\ + \sum_{ \left((t,h_1),(t,h_2)\right) \in A_{r,K}^{>=} } \left|\mathbb{E}_r ( c_{(t,h_1)+m}c_{(t,h_2)+m} ) - \mathbb{E}_r ( c_{(t,h_1)+m} ) \mathbb{E}_r ( c_{(t,h_2)+m} ) \right| \\ \left. + \sum_{\left((t_1,h_1),(t_2,h_2)\right) \in A_{r,K}^{\leq}} \left|\mathbb{E}_r ( c_{(t_1,h_1)+m}c_{(t_2,h_2)+m} ) - \mathbb{E}_r ( c_{(t_1,h_1)+m} ) \mathbb{E}_r ( c_{(t_2,h_2)+m} ) \right|\right), \end{multline} where $C$ only depends on $f$. Let $(t_1,h_1),(t_2,h_2) \in A_{r,K}$. By definition, there exist $(\tau_1,\chi_1),(\tau_2,\chi_2) \in A \cap K \cap rE$ such that : \begin{align*} (t_1,h_1)=\frac{1}{r}(\tau_1,\chi_1), \hspace{0.1cm} (t_2,h_2) =\frac{1}{r}(\tau_2,\chi_2). \end{align*} We first consider the case when $\left((t_1,h_1),(t_2,h_2)\right) \in A_{r,K}^>$. 
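(Before treating the three cases one by one, note that the counting estimates used below, namely $|A_{r,K}^{>}| \asymp r^{-4}$, $|A_{r,K}^{>=}| \asymp r^{-3}$ and $|A_{r,K}^{\leq}| \asymp r^{-2}$, can be checked by brute force on a toy $N \times N$ grid playing the role of $A_{r,K}$, with $N \asymp r^{-1}$; the sketch below is illustrative only, and the values $N=30$ and $\overline{m}=2$ are arbitrary choices.)

```python
# toy check of the counting estimates for the partition of A_{r,K}^2,
# on an N x N grid with N playing the role of 1/r and mbar = overline{m}
N, mbar = 30, 2
pts = [(t, h) for t in range(N) for h in range(N)]
gt = ge = le = 0
for (t1, h1) in pts:
    for (t2, h2) in pts:
        if max(abs(t1 - t2), abs(h1 - h2)) > mbar and t1 != t2:
            gt += 1                    # pair in A^{>}
        elif t1 == t2 and abs(h1 - h2) > mbar:
            ge += 1                    # pair in A^{>=}
        else:
            le += 1                    # pair in A^{<=}

assert gt + ge + le == N**4            # the three sets partition A^2
assert N**4 / 2 < gt <= N**4           # |A^{>}|  is of order N^4 = r^{-4}
assert N**3 / 2 < ge < N**3            # |A^{>=}| is of order N^3 = r^{-3}
assert N**2 <= le <= (2*mbar + 1)**2 * N**2   # |A^{<=}| is of order N^2
```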
The corresponding points $(\tau_1,\chi_1),(\tau_2,\chi_2)$ satisfy condition (\ref{conddist}), and thus by Lemma \ref{lemdecK}, we have in particular the estimate : \begin{align*} \left| \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_1,\chi_1)+m}c_{\frac{1}{r}(\tau_2,\chi_2)+m} \right]-\mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_1,\chi_1)+m}\right]\mathbb{E}_r\left[c_{\frac{1}{r}(\tau_2,\chi_2)+m} \right] \right | \leq Cr, \end{align*} where $C$ is uniform. Since : \begin{align*} |A_{r,K}^>| \asymp r^{-4}, \end{align*} we obtain that : \begin{align} \label{estimcov1} \sum_{\left((t_1,h_1),(t_2,h_2)\right) \in A_{r,K}^>} \left|\mathbb{E}_r ( c_{(t_1,h_1)+m}c_{(t_2,h_2)+m} ) - \mathbb{E}_r ( c_{(t_1,h_1)+m} ) \mathbb{E}_r ( c_{(t_2,h_2)+m} ) \right| \leq Cr^{-3}, \end{align} where $C$ only depends on $f$ and $m$. \par In the case when $\left((t,h_1),(t,h_2)\right) \in A_{r,K}^{>=}$, we have by Lemma \ref{lemdecK} that : \begin{align*} \left| \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi_1)+m}c_{\frac{1}{r}(\tau,\chi_2)+m} \right]-\mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi_1)+m}\right]\mathbb{E}_r\left[c_{\frac{1}{r}(\tau,\chi_2)+m} \right] \right | \leq C, \end{align*} where $C$ is uniform. Since : \begin{align*} |A_{r,K}^{>=}| \asymp r^{-3}, \end{align*} we have : \begin{align}\label{estimcov2} \sum_{\left((t,h_1),(t,h_2)\right) \in A_{r,K}^{>=}} \left|\mathbb{E}_r ( c_{(t,h_1)+m}c_{(t,h_2)+m} ) - \mathbb{E}_r ( c_{(t,h_1)+m} ) \mathbb{E}_r ( c_{(t,h_2)+m} ) \right| \leq Cr^{-3}, \end{align} where $C$ only depends on $K$ and $m$. \par When $\left((t_1,h_1),(t_2,h_2)\right) \in A_{r,K}^{\leq}$, there exist finite subsets $m',m'' \subset E$ and $(\tau,\chi) \in A\cap K$ such that : \begin{align*} (t_1,h_1)+m= \frac{1}{r}(\tau,\chi) +m', \hspace{0.1cm} (t_2,h_2)+m = \frac{1}{r}(\tau,\chi) +m''. 
\end{align*} Observe that there are only a finite number of possible sets $m'$ and $m''$. Thus, by Lemma \ref{lemdiag}, we have : \begin{align*} \left| \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi)+m'} c_{\frac{1}{r}(\tau,\chi)+m''}\right] - \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi)+m'}\right] \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi)+m''} \right] \right| \leq C, \end{align*} where $C$ is uniform. Since : \begin{align*} |A_{r,K}^{\leq} |\asymp r^{-2}, \end{align*} we have : \begin{align} \label{estimcov3} \sum_{\left((t_1,h_1),(t_2,h_2)\right) \in A_{r,K}^{\leq} }\left| \mathbb{E}_r ( c_{(t_1,h_1)+m}c_{(t_2,h_2)+m} ) - \mathbb{E}_r ( c_{(t_1,h_1)+m} ) \mathbb{E}_r ( c_{(t_2,h_2)+m} ) \right| \leq Cr^{-2}. \end{align} Thus, recalling the estimate (\ref{estimvar}), the inequalities (\ref{estimcov1}), (\ref{estimcov2}) and (\ref{estimcov3}) establish (\ref{var0}). Theorem \ref{thm1} is proved, assuming Lemmas \ref{lemcv}, \ref{lemdecK} and \ref{lemdiag}. \section{Proof of Lemmas \ref{lemcv}, \ref{lemdecK} and \ref{lemdiag}} \label{sec4} \subsection{Proof of Lemma \ref{lemcv}}We here follow the proof of \cite{okounkovreshetikhin}, giving the error terms in the asymptotics we use. We define the dilogarithm function as the analytic continuation of the series : \begin{align*} \text{dilog}(1-z)=\sum_{n \geq 1} \frac{z^n}{n^2}, \quad |z|<1, \end{align*} with a cut along the half-line $(1,+\infty)$. We first prove that : \begin{align} \label{estimdilog} -\log(z;q)_\infty = r^{-1}\text{dilog}(1-z) + O(1) \end{align} as $q=e^{-r}$ tends to $1^-$. Indeed, we have : \begin{align*} \log(z;q)_{\infty}&=\sum_{k \geq 0} \log(1-zq^k) = -\sum_{k \geq 0} \sum_{n \geq 1} \frac{z^nq^{nk}}{n} \\ &=-\sum_{n \geq 1} \frac{z^n}{n} \sum_{k \geq 0} q^{nk} =-\sum_{n \geq 1} \frac{z^n}{n} \frac{1}{1-q^n}. \end{align*} With $q=e^{-r}$, we have : \begin{align*} \frac{r}{1-e^{-nr}}=\frac{1}{n -n^2r/2 + \dots}=\frac{1}{n}\left(1+\frac{nr}{2}+\dots\right) 
\end{align*} and thus : \begin{align*} \left| \frac{z^n}{n}\left(\frac{r}{1-e^{-nr}}- \frac{1}{n}\right)\right| \leq rn|z|^n, \end{align*} which establishes (\ref{estimdilog}). \par Let $K \subset \mathbb{R}^2$ be compact and let $(\tau,\chi) \in A\cap K$. We assume that $\tau \geq 0$; see Section \ref{secrk} below for the case $\tau \leq 0$. We introduce the function : \begin{align*} S(z;\tau,\chi)=-(\tau/2+ \chi)\log(z)-\text{dilog}(1-1/z)+ \text{dilog}(1-e^{-\tau}z), \end{align*} and denote by $\gamma_\tau$ the circle : \begin{align*} \gamma_\tau = \{ z \in \mathbb{C}, \hspace{0.1cm} |z|=e^{\tau/2} \}. \end{align*} By the estimate (\ref{estimdilog}) and formula (\ref{defKq}), we have that, for all $z$ and $w$ sufficiently close to $\gamma_\tau$ : \begin{multline} \displaystyle \left| \frac{\Phi(\tau/r+ t_1,z)}{\Phi(\tau/r+t_2,w)}\frac{1}{z^{\chi/r+h_1+\tau/2r+(t_1+1)/2}w^{-\chi/r-h_2-\tau/2r-(t_2+1)/2}} \right| \\ = \exp \left( \frac{1}{r} \left( \mathfrak{R} S(z;\tau,\chi)-\mathfrak{R}S(w;\tau,\chi) \right) + O(1) \right) , \end{multline} where the $O(1)$ term only depends on $K$, $(t_1,h_1)$ and $(t_2,h_2)$. An observation made in \cite{okounkovreshetikhin} states that the real part of $S$ on the circle $\gamma_\tau$ is constant, namely : \begin{align} \label{reS} \mathfrak{R}S(z;\tau,\chi)=-\frac{\tau}{2}(\tau/2+\chi),\quad z \in \gamma_\tau. \end{align} It is also shown in \cite{okounkovreshetikhin} that, since $(\tau,\chi) \in A$, the function $S$ has two distinct critical points on $\gamma_\tau$ : $e^{\tau}z(\tau,\chi)$ and its complex conjugate. 
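Both facts, the constancy (\ref{reS}) and the location of the critical points, can be checked numerically. The following sketch (plain Python, with the dilogarithm computed from its defining series; the values $\tau=1$, $\chi=0$ are arbitrary illustrative choices, assumed to lie in $A$) evaluates $\mathfrak{R}S$ at a few points of $\gamma_\tau$ and computes the critical points as the roots of the quadratic equation $(1-1/z)(1-e^{-\tau}z)=e^{-\tau/2-\chi}$ recalled in Section \ref{secrk} :

```python
import cmath, math

def li2(u, terms=300):
    # dilog(1-u) = sum_{n>=1} u^n / n^2, valid for |u| < 1
    return sum(u**n / n**2 for n in range(1, terms + 1))

def S(z, tau, chi):
    # S(z; tau, chi) = -(tau/2+chi) log z - dilog(1-1/z) + dilog(1-e^{-tau} z)
    return -(tau/2 + chi)*cmath.log(z) - li2(1/z) + li2(cmath.exp(-tau)*z)

tau, chi = 1.0, 0.0          # illustrative point, assumed to lie in A
R = math.exp(tau/2)

# Re S is constant on the circle |z| = e^{tau/2}, equal to -(tau/2)(tau/2+chi)
for theta in (0.3, 1.1, 2.5):
    z = R*cmath.exp(1j*theta)
    assert abs(S(z, tau, chi).real + (tau/2)*(tau/2 + chi)) < 1e-9

# S'(z) = 0 reduces to (1-1/z)(1-e^{-tau}z) = e^{-tau/2-chi}, i.e. to the
# quadratic e^{-tau} z^2 - (1+e^{-tau}-c) z + 1 = 0 with c = e^{-tau/2-chi}
c = math.exp(-tau/2 - chi)
b = 1 + math.exp(-tau) - c
disc = cmath.sqrt(b*b - 4*math.exp(-tau))
z1, z2 = (b + disc)/(2*math.exp(-tau)), (b - disc)/(2*math.exp(-tau))
# two complex conjugate critical points, both on the circle gamma_tau
assert abs(z1 - z2.conjugate()) < 1e-9
assert abs(abs(z1) - R) < 1e-9 and abs(abs(z2) - R) < 1e-9
```

The constancy of the real part is transparent in this parametrization: for $z=e^{\tau/2+i\theta}$ the two dilogarithm arguments $1/z$ and $e^{-\tau}z$ are complex conjugates, so the difference of the two dilogarithms is purely imaginary.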
The computation of the gradient of the real part of $S$ on $\gamma_\tau$ then led the authors of \cite{okounkovreshetikhin} to deform the circle $\gamma_\tau$ into simple contours $\gamma_\tau^>$ and $\gamma_\tau^<$, both crossing the two critical points and satisfying : \begin{equation} \begin{split} z \in \gamma_\tau^> \Rightarrow \mathfrak{R}S(z;\tau,\chi) \geq -\frac{\tau}{2}(\tau/2+\chi), \\ z \in \gamma_\tau^< \Rightarrow \mathfrak{R}S(z;\tau,\chi) \leq -\frac{\tau}{2}(\tau/2+\chi), \end{split} \end{equation} with equality only for $z\in \left\{e^\tau z(\tau,\chi), e^\tau \overline{z(\tau,\chi)} \right\}$, see figure 2. \begin{figure}[!h] \centering \includegraphics[scale=0.6,clip=true,trim=0cm 0cm 0cm 0cm]{cerclesdeformes1.png} \caption{The contours $\gamma_\tau^>$ and $\gamma_\tau^<$ are the thick contours and the circle $\gamma_\tau$ is the dotted circle.} \label{fig1} \end{figure} These simple facts imply that the integral : \begin{align} \label{intdef} \int_{z \in \gamma_\tau^<} \int_{w \in \gamma_\tau^>} \exp \left( \frac{1}{r} \left( S(z;\tau,\chi)-S(w;\tau,\chi) \right)\right) \frac{dzdw}{z-w} \end{align} goes to zero as $r$ tends to zero. Actually, the dominated convergence theorem implies that the integral (\ref{intdef}) is $O\left(\exp\left(-r^{-\alpha}\right)\right)$ for any $\alpha \in (0,1)$. 
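All the asymptotics of this subsection rest on the expansion (\ref{estimdilog}). As a sanity check, the following sketch confirms numerically that the error term stays bounded as $r \to 0$ (the value $z=1/2$ and the truncation lengths are arbitrary choices; the limiting constant $-\log(1-z)/2$ used in the tolerance is what the termwise expansion of $r/(1-e^{-nr})$ suggests) :

```python
import math

def neg_log_pochhammer(z, r, terms=400):
    # -log (z; q)_infty = sum_{n>=1} z^n / (n (1 - q^n)) with q = e^{-r}
    return sum(z**n / (n*(1 - math.exp(-n*r))) for n in range(1, terms + 1))

def dilog_1m(z, terms=400):
    # dilog(1-z) = sum_{n>=1} z^n / n^2
    return sum(z**n / n**2 for n in range(1, terms + 1))

z = 0.5
for r in (1e-2, 1e-3, 1e-4):
    err = neg_log_pochhammer(z, r) - dilog_1m(z)/r
    # the O(1) term is bounded uniformly in r (it tends to -log(1-z)/2)
    assert abs(err + math.log(1 - z)/2) < 0.01
```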
\par Picking up the residue at $z=w$, we arrive at : \begin{multline} \label{sumKq} \mathcal{K}_{e^{-r}} \left( \frac{\tau}{r} + t_1, \frac{\chi}{r} +h_1 ; \frac{\tau}{r} + t_2, \frac{\chi}{r} +h_2\right) = \frac{1}{(2i\pi)^2} \int_{z \in \gamma_\tau^<} \int_{w \in \gamma_\tau^>} \exp \left( \frac{1}{r} \left( S(z;\tau,\chi)-S(w;\tau,\chi) \right)+ O(1) \right) \frac{dzdw}{z-w} \\ + \frac{1}{2i\pi} \int_{e^\tau \overline{z(\tau,\chi)}}^{e^\tau z(\tau,\chi)} \frac{(q^{1/2+\tau/r+t_2}w;q)_\infty}{(q^{1/2+ \tau/r +t_1}w;q)_\infty} \frac{dw}{w^{h_1-h_2+(t_1-t_2)/2}}, \end{multline} where the path of integration for the second integral crosses the interval $(0,e^\tau)$ for $t_1\geq t_2$ and the half-line $(-\infty,0)$ otherwise. By the preceding discussion, the first integral rapidly tends to zero. Observe now that : \begin{align*} \frac{(q^{1/2+\tau/r+t_2}w;q)_\infty}{(q^{1/2+ \tau/r +t_1}w;q)_\infty} = (1+O(r))\left(1-e^{-\tau}w\right)^{t_1-t_2}, \end{align*} where the $O(r)$ term only depends on $K$, $t_1$ and $t_2$. Performing the change of variable $w \mapsto e^{-\tau}w$ in the second integral of (\ref{sumKq}), we arrive at : \begin{align*} \displaystyle \frac{1}{2i\pi} \int_{e^\tau \overline{z(\tau,\chi)}}^{e^\tau z(\tau,\chi)} \frac{(q^{1/2+\tau/r+t_2}w;q)_\infty}{(q^{1/2+ \tau/r +t_1}w;q)_\infty} \frac{dw}{w^{h_1-h_2+(t_1-t_2)/2}} = \left( 1 +O(r) \right) e^{-\tau(h_1-h_2-(t_1-t_2)/2)} \mathcal{S}_{\tau,\chi }( t_1-t_2,h_1-h_2). \end{align*} The factor $e^{-\tau(h_1-h_2-(t_1-t_2)/2)}$ can be ignored, since it disappears from any determinant of the form (\ref{defdynsin}). Lemma \ref{lemcv} is proved. $\square$ \subsection{A remark and a proposition} \label{secrk} For $\tau <0$, one has to replace the function $S$ by : \begin{align*} \tilde{S}(z,\tau,\chi)=-(|\tau|/2+\chi)\log(z)-\text{dilog}(1-z)+\text{dilog}(1-e^{-|\tau|}/z). 
\end{align*} The function $\tilde{S}$ inherits the same properties as the function $S$ : its real part is constant on the circle $\gamma_{|\tau|}$ and it has two complex conjugate critical points on this circle provided $(\tau,\chi) \in A$. This is why we will only consider positive values of $\tau$ in the sequel.\\ \par The critical points of $S$ are the roots of the quadratic equation : \begin{align*} (1-1/z)(1-e^{-\tau}z)=e^{-\tau/2-\chi}. \end{align*} Since these roots depend continuously on $(\tau,\chi)$, we have the following proposition : \begin{prop} \label{propcont} For any fixed $(\Delta t, \Delta h) \in E$, the function : \begin{align*} (\tau,\chi) \mapsto \mathcal{S}_{\tau,\chi}(\Delta t, \Delta h) \end{align*} is continuous. \end{prop} \subsection{Proof of Lemma \ref{lemdecK}} Let $m \subset E$ be finite, of cardinality $l$, let $K \subset \mathbb{R}^2$ be a compact set and let $(\tau_1,\chi_1), (\tau_2,\chi_2) \in A\cap K$ be as in the statement of the lemma. The condition (\ref{conddist}) implies that the sets $\frac{1}{r}(\tau_1,\chi_1)+m$ and $\frac{1}{r}(\tau_2,\chi_2)+ m$ are disjoint. Thus, the expectation : \begin{align*} \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_1,\chi_1)+m} c_{\frac{1}{r}(\tau_2,\chi_2)+m} \right] \end{align*} is a determinant of size $2l$. In the expansion of the determinant as an alternating sum over permutations of size $2l$, one considers the permutations leaving the sets $\frac{1}{r}(\tau_1,\chi_1)+m$ and $\frac{1}{r}(\tau_2,\chi_2)+m$ invariant. The alternating sum over all such permutations is nothing but the product of determinants : \begin{align*} \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_1,\chi_1)+m} \right ] \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_2,\chi_2)+m} \right]. 
\end{align*} Thus, the terms of the remaining sum : \begin{align*} \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_1,\chi_1)+m} c_{\frac{1}{r}(\tau_2,\chi_2)+m} \right] - \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_1,\chi_1)+m} \right ] \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau_2,\chi_2)+m} \right] \end{align*} all involve factors of the form : \begin{align} \label{prodK} \mathcal{K}_{e^{-r}}\left(\frac{1}{r}(\tau_1,\chi_1)+m_{i_1}; \frac{1}{r}(\tau_2,\chi_2)+m_{j_1}\right)\mathcal{K}_{e^{-r}}\left(\frac{1}{r}(\tau_2,\chi_2)+m_{j_2}; \frac{1}{r}(\tau_1,\chi_1)+m_{i_2}\right). \end{align} By Lemma \ref{lemcv} and Proposition \ref{propcont}, the other factors are bounded by a constant depending only on $K$ and $m$. By methods similar to those used in the proof of Lemma \ref{lemcv}, we now show that the product (\ref{prodK}) is small. \par The product (\ref{prodK}) can be written as a quadruple integral : \begin{multline*} \displaystyle \mathcal{K}_{e^{-r}}\left(\frac{1}{r}(\tau_1,\chi_1)+m_{i_1}; \frac{1}{r}(\tau_2,\chi_2)+m_{j_1}\right)\mathcal{K}_{e^{-r}}\left(\frac{1}{r}(\tau_2,\chi_2)+m_{j_2}; \frac{1}{r}(\tau_1,\chi_1)+m_{i_2}\right) \\ = \frac{1}{(2i\pi)^4} \int_{z \in (1+\varepsilon)\gamma_{\tau_1}} dz\int_{w \in (1-\varepsilon)\gamma_{\tau_2}} dw\int_{z' \in (1-\varepsilon)\gamma_{\tau_2}}dz' \int_{w' \in (1+\varepsilon)\gamma_{\tau_1}} dw' \\ \frac{\exp\left(\frac{1}{r}\left(S(z;\tau_1,\chi_1)-S(w;\tau_2,\chi_2)+S(z';\tau_2,\chi_2)-S(w';\tau_1,\chi_1)\right)+O(1)\right)}{(z-w)(z'-w')}. \end{multline*} We first consider the case when $\tau_1 \neq \tau_2$ and, by symmetry, we assume that $\tau_1 > \tau_2$. One can then deform the contours as previously. 
Precisely, we now integrate over : \begin{align*} z \in \gamma_{\tau_1}^<, \hspace{0.1cm} w \in \gamma_{\tau_2}^>, \hspace{0.1cm} z' \in \gamma_{\tau_2}^<, \hspace{0.1cm} w' \in \gamma_{\tau_1}^> \end{align*} in order to have \begin{align*} \mathfrak{R}\left(S(z;\tau_1,\chi_1)-S(w';\tau_1,\chi_1) \right) <0, \quad \text{and} \quad \mathfrak{R}\left(S(z';\tau_2,\chi_2)-S(w;\tau_2,\chi_2) \right) <0, \end{align*} see figure 3. \begin{figure}[!h] \centering \includegraphics[scale=0.3,clip=true,trim=0cm 0cm 0cm 0cm]{cerclesdeformes2.png} \caption{The contours $\gamma_{\tau_1}^>$, $\gamma_{\tau_1}^<$ are the thick contours near the dotted circle with the largest radius, the circle $\gamma_{\tau_1}$ ; the contours $\gamma_{\tau_2}^>$ and $\gamma_{\tau_2}^<$ are the thick contours near the dotted circle with the smallest radius, $\gamma_{\tau_2}$.} \label{fig2} \end{figure} These deformations do not affect the value of the integrals, because they involve separate variables. Since, for all $\alpha \in (0,1)$, we have : \begin{align*} \frac{\exp\left(\frac{1}{r}\left(S(z;\tau_1,\chi_1)-S(w;\tau_2,\chi_2)+S(z';\tau_2,\chi_2)-S(w';\tau_1,\chi_1)\right)\right)}{\exp(-r^{-\alpha})} \rightarrow 0 \end{align*} as $r \rightarrow 0$, for all $z,z',w,w'$ in the new contours except at a finite number of points, and since \begin{align*} \left| \frac{\exp\left(\frac{1}{r}\left(S(z;\tau_1,\chi_1)-S(w;\tau_2,\chi_2)+S(z';\tau_2,\chi_2)-S(w';\tau_1,\chi_1)\right)\right)}{(z-w)(z'-w')} \right| \leq \frac{1}{|e^{\tau_1}-e^{\tau_2}|^2} \end{align*} for all $z,z',w,w'$ in the new contours except at a finite number of points, this concludes the proof for $\tau_1 >\tau_2$. \par For $\tau_1=\tau_2=\tau$, the proof is as follows. One deforms the contours as in the preceding case, but now the deformations affect the value of the kernel, since we cannot avoid the residues at $z=w$ and $z'=w'$. 
We have for example the following case : \begin{equation} \label{prodKq2} \begin{split} \mathcal{K}&_{e^{-r}}\left(\frac{1}{r}(\tau,\chi_1)+(t_1^1,h_1^1); \frac{1}{r}(\tau,\chi_2)+(t_2^1,h_2^1)\right)\mathcal{K}_{e^{-r}}\left(\frac{1}{r}(\tau,\chi_2)+(t_1^2,h_1^2); \frac{1}{r}(\tau,\chi_1)+(t_2^2,h_2^2)\right) \\ &=(1+O(1))\left( \frac{1}{(2i\pi)^2}\int_{z \in \gamma_\tau^{<,1}}\int_{w \in \gamma_\tau^{>,2}}... +\frac{1}{2i\pi}\int_w Res_{z=w} f(z,w;\tau,\chi_1,\chi_2)dw \right) \\ &\times \left(\frac{1}{(2i\pi)^2} \int_{z' \in \gamma_\tau^{<,2}}\int_{w' \in \gamma_\tau^{>,1}}... + \frac{1}{2i\pi}\int_{w'} Res_{z'=w'}f(z',w';\tau,\chi_2,\chi_1)dw' \right), \end{split} \end{equation} where : \begin{equation} \label{res} \begin{split} \int_w Res_{z=w}f(z,w;\tau,\chi_1,\chi_2)dw=\int_{|w|=e^{\tau/2}, \hspace{0.1cm} |\arg(w)|<\phi_{\tau,\chi_2}} \frac{(q^{1/2+\tau/r+t_2}w;q)_\infty}{(q^{1/2+\tau/r+t_1}w;q)_\infty}\frac{dw}{w^{(\chi_1-\chi_2)/r+h_1^1-h_2^1 + t_1^1-t_2^1}}, \\ \int_{w'} Res_{z'=w'}f(z',w';\tau,\chi_2,\chi_1)dw'=\int_{|w'|=e^{\tau/2}, \hspace{0.1cm} |\arg(w')|<\phi_{\tau,\chi_2}} \frac{(q^{1/2+\tau/r+t_2}w';q)_\infty}{(q^{1/2+\tau/r+t_1}w';q)_\infty}\frac{dw'}{w'^{(\chi_2-\chi_1)/r+ h_1^2-h_2^2 + t_1^2-t_2^2}}, \end{split} \end{equation} the argument $\phi_{\tau,\chi_2}$ being an argument of $z(\tau,\chi_2)$, see figure 4. Equality (\ref{prodKq2}) is valid when : \begin{align*} t_1^1 \geq t_2^1, \hspace{0.1cm} t_1^2 \geq t_2^2, \hspace{0.1cm} \text{and } \arg \left( z(\tau,\chi_1) \right) <\arg \left( z(\tau,\chi_2) \right), \end{align*} and the other cases can be treated in a similar way. 
\begin{figure}[!h] \centering \includegraphics[scale=0.5,clip=true,trim=0cm 0cm 0cm 0cm]{cerclesdeformes3.png} \caption{The thick contours $\gamma_{\tau}^{>,1}$ and $\gamma_\tau^{<,1}$ cross the dotted circle $\gamma_\tau$ at the points $z_1=e^\tau z(\tau,\chi_1)$ and $z_1'=e^\tau\overline{z(\tau,\chi_1)}$, while the thick contours $\gamma_{\tau}^{>,2}$ and $\gamma_{\tau}^{<,2}$ cross the circle $\gamma_\tau$ at the points $z_2=e^\tau z(\tau,\chi_2)$ and $z_2'=e^\tau \overline{z(\tau,\chi_2)}$.} \end{figure} Note that the factor : \begin{align*} f_r(w):=\frac{(q^{1/2+\tau/r+t_2}w;q)_\infty}{(q^{1/2+\tau/r+t_1}w;q)_\infty} \end{align*} is bounded, since it tends to : \begin{align*} (1-e^{-\tau}w)^{t_1-t_2} \end{align*} as $r$ tends to $0$. Integrating (\ref{res}) by parts leads to : \begin{equation} \label{ineqres} \begin{split} \left| \int_w Res_{z=w}f(z,w;\tau,\chi_1,\chi_2)dw \right| &\leq C \frac{r\exp\left(\frac{\tau}{2r}(\chi_2-\chi_1)\right)}{|\chi_1-\chi_2|}\\ &+\frac{r\exp\left(\frac{\tau}{2r}(\chi_2-\chi_1)\right)}{|\chi_1-\chi_2|}\left| \int_{|w|=1, \hspace{0.1cm} \arg(w)<\phi_{\tau,\chi_2}}f_r'(e^{-\tau/2}w)\frac{dw}{w^{(\chi_1-\chi_2)/r+\Delta h + \Delta t-1}}\right| \\ & \leq C\frac{r}{|\chi_1-\chi_2|}\exp \left( \frac{\tau}{2r}(\chi_2-\chi_1)\right), \end{split} \end{equation} and : \begin{equation} \label{ineq2} \left| \int_{w'} Res_{z'=w'}f(z',w';\tau,\chi_2,\chi_1)dw' \right| \leq C\frac{r}{|\chi_1-\chi_2|}\exp \left( \frac{\tau}{2r}(\chi_1-\chi_2)\right). \end{equation} It is clear that, by construction, we have : \begin{equation}\label{ineqint} \left|\int_{z \in \gamma_\tau^{<,1}}\int_{w \in \gamma_\tau^{>,2}}... \right| \leq C \exp \left( \frac{\tau}{2r}(\chi_2-\chi_1)\right), \end{equation} and : \begin{equation}\label{ineqint2} \left|\int_{z' \in \gamma_\tau^{<,2}}\int_{w' \in \gamma_\tau^{>,1}}... \right| \leq C \exp \left( \frac{\tau}{2r}(\chi_1-\chi_2)\right). \end{equation} We now expand the product in (\ref{prodKq2}). 
The term : \begin{align*} \int_{z \in \gamma_\tau^{<,1}}\int_{w \in \gamma_\tau^{>,2}}... \times \int_{z' \in \gamma_\tau^{<,2}}\int_{w' \in \gamma_\tau^{>,1}}... \end{align*} is, by construction, smaller than any power of $r$. The estimates (\ref{ineqres}) and (\ref{ineq2}) imply that the product of the integrals of the residues is smaller than : \begin{align*} \frac{Cr^2}{|\chi_1-\chi_2|^2}, \end{align*} while combining (\ref{ineqres}) with (\ref{ineqint2}), and (\ref{ineq2}) with (\ref{ineqint}), shows that the remaining terms are smaller than : \begin{align*} \frac{Cr}{|\chi_1-\chi_2|}. \end{align*} Lemma \ref{lemdecK} is proved. $\square$ \subsection{Proof of Lemma \ref{lemdiag}} Lemma \ref{lemdiag} is proved using Lemma \ref{lemcv} and Proposition \ref{propcont}. Let $K \subset \mathbb{R}^2$ be a compact set, and let $m,m' \subset E$ be finite. Let $(\tau,\chi) \in A\cap K$. By Lemma \ref{lemcv}, we have : \begin{multline*} \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi)+m} c_{\frac{1}{r}(\tau,\chi)+m'}\right] - \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi)+m}\right] \mathbb{E}_r \left[ c_{\frac{1}{r}(\tau,\chi)+m'} \right] \\ = \mathbb{E}_{(\tau,\chi)} \left[ c_{m} c_{m'}\right] - \mathbb{E}_{(\tau,\chi)} \left[ c_m\right] \mathbb{E}_{(\tau,\chi)} \left[ c_{m'} \right]+O(r). \end{multline*} Now, by Proposition \ref{propcont}, the function : \begin{align*} (\tau,\chi) \mapsto \mathbb{E}_{(\tau,\chi)} \left[ c_{m} c_{m'}\right] - \mathbb{E}_{(\tau,\chi)} \left[ c_m\right] \mathbb{E}_{(\tau,\chi)} \left[ c_{m'} \right] \end{align*} is bounded, as long as $(\tau, \chi)$ belongs to a compact set. Lemma \ref{lemdiag} is proved, and with it Theorem \ref{thm1} is completely proved. $\square$
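As a closing sanity check, the determinant splitting used in the proof of Lemma \ref{lemdecK} can be made explicit in the smallest case $l=1$: the two-point correlation is a $2\times 2$ determinant of kernel values, and subtracting the product of the one-point correlations leaves exactly one cross product of kernel values, i.e. one factor of the form (\ref{prodK}). The numbers below are hypothetical kernel values chosen for illustration :

```python
# smallest case l = 1: E[c_x c_y] is the 2x2 determinant of kernel values,
# and subtracting E[c_x] E[c_y] leaves exactly the cross term -K(x,y) K(y,x)
Kxx, Kxy, Kyx, Kyy = 0.9, 0.2, 0.3, 0.8   # hypothetical kernel values
two_point = Kxx*Kyy - Kxy*Kyx             # det [[Kxx, Kxy], [Kyx, Kyy]]
product = Kxx*Kyy                          # product of the 1x1 "blocks"
assert abs((two_point - product) - (-Kxy*Kyx)) < 1e-12
```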
\section{Introduction} \label{intro} In the past there were several attempts to model causality. Yet, there exists no mathematical definition of this notion. It is loosely understood as a relationship between a cause $\xi$ and its effect $\eta$. The usual dependence relationship between $\xi$ and $\eta$ is symmetric (often expressed by a type of correlation). However, there are situations where we have non-symmetric dependence between variables: e.g., the past of a time-series may influence its future, but not vice-versa. Or, in hydrology, a high water level in a river may cause a high water level further down the river flow, but usually not vice-versa. Thus, we also need a non-symmetric relationship between a cause and its effect. The aim of this paper is to provide a mathematically rigorous model of non-symmetric causality. It will be based on Granger's approach to causality \cite{G}. From the algebraic point of view, our model will be based on horizontal sums of Boolean algebras. A preliminary version of our model was published in \cite{BKN1}; the present paper is an extended version of \cite{BKN1}. Our paper is organized as follows: in Section \ref{sec:1} we provide a historical overview of various models of causality. In Section \ref{sec:2} we recall Granger's definition of causality for stationary time-series. Section \ref{sec:3} contains basic definitions and some known facts on orthomodular lattices. In Section \ref{sec:4} we provide a theoretical background for our model of causality. It has two subsections:\\ -- \ref{sec:4-1} contains a theory of bivariate states on orthomodular lattices,\\ -- \ref{sec:4-2} contains an approach that makes it possible to add non-compatible observables. Finally, Section \ref{sec:5} contains our model of causality on horizontal sums of Boolean algebras. 
It has again two subsections:\\ -- \ref{sec:5-1} provides a comparison of random vectors on Boolean algebras and of vectors of observables on horizontal sums of Boolean algebras,\\ -- \ref{sec:5-2} is devoted to Granger's model of causality modified for horizontal sums of Boolean algebras. \section{Historical overview of modelling causality in economics} \label{sec:1} In the 20th century causal inference was frequently associated with multiple correlation and regression. As is well known, the regression of $Y$ on $X$ produces coefficient estimates that are not the algebraic inverses of those produced from the regression of $X$ on $Y$. Although regressions may have a natural causal direction, there is nothing in the data on their own that reveals which direction is the correct one – each of them is an equally appropriate rescaling of a symmetrical and non-causal correlation. This is a problem of \emph{observational equivalence}. For example, we can mention the problem of econometric identification: \emph{how to distinguish a supply curve from a demand curve}. A standard solution to this identification problem is to look for additional causal determinants that discriminate between otherwise simultaneous relationships. A possible solution to this problem is given by the language of exogenous and endogenous variables. Exogenous variables can also be regarded as the causes of the endogenous ones \cite{Hoo08}. In the 1930s Jan Tinbergen \cite{Tin} introduced structural models into modern econometrics. These models express causality in a diagram that uses arrows to indicate causal connections among time-dated variables. Another approach is known as process analysis. Process analysis emphasizes the asymmetry of causality, typically grounded in Hume's criterion of temporal precedence \cite{Mor}. Wold's process analysis belongs to the time-series tradition that ultimately produces Granger causality and vector autoregression. 
Wold's approach relates causality to the invariance properties of the structural econometric model. This approach emphasizes the distinction between endogenous and exogenous variables and the identification and estimation of structural parameters. Herbert Simon \cite{Sim} has shown that causality could be defined in a structural econometric model, not only between exogenous and endogenous variables, but also among the endogenous variables themselves. He has also shown that the conditions for a well-defined causal order are equivalent to the well-known conditions for identification \cite{Hoo08}. Hans Reichenbach \cite{GE12,Rei56}, taking the idea that simultaneous correlated events must have prior common causes, tried to use it to infer the existence of unobserved and unobservable events and to infer causal relations from statistical relations. Reichenbach's common cause principle is a time-asymmetric principle that can be formulated as follows: simultaneous correlated events have a prior common cause that screens off the correlation. This means that if simultaneous values of quantities $A$ and $B$ are correlated, then there are common causes $C_1,C_2,\ldots,C_n$ such that, conditioned upon any combination of values of these quantities at an earlier time, the values of $A$ and $B$ are probabilistically independent, see \cite{Af10,Uff99}. Reichenbach's common cause principle was adopted by Penrose and Percival \cite{pp62} into the law of conditional independence and by Spirtes et al. \cite{SGS93} into the causal Markov condition. Some open problems concerning Reichenbachian common cause systems were formulated and solved by Hofer-Szabó and Rédei in many papers. 
Hofer-Szabó and Rédei \cite{HR06} have shown that, given any non-strict correlation in $(\Omega ,\mathcal S, P)$ and any finite natural number $n>2$, the probability space $(\Omega ,\mathcal S, P)$ can be embedded into a larger probability space in such a manner that the larger space contains a Reichenbachian common cause system of size $n$ for the correlation. Another approach, given by Clive W. J. Granger \cite{G}, introduces a data-based concept of causality without direct reference to background economic theory. This concept has become a fundamental notion for studying dynamic relationships among time series. Granger's causality is an example of the modern probabilistic approach to causality, and it is a natural successor to Hume (see, e.g., \cite{Sup70}). Where Hume requires constant conjunction of cause and effect, probabilistic approaches are content to identify a cause with a factor that raises the probability of the effect: $A$ causes $B$ if $P(B|A) > P(B)$, where the vertical `$|$' indicates `conditioned on'. The asymmetry of causality is secured by requiring the cause $(A)$ to occur before the effect $(B)$ (see \cite{Hoo08}). But the probability criterion is not enough on its own to produce asymmetry, since $P(B|A) > P(B)$ implies $P(A|B) > P(A)$. Granger's causality helps us to understand and measure the relative roles of different causal systems, e.g., between commodity prices and exchange rates. Granger causality has important implications in financial decision making, especially for market participants with short horizons. From a macroeconomic perspective, this can also be useful for interpreting exchange rate movements, financial market monitoring and monetary policy. 
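(The symmetry of bare probability raising noted above, namely that $P(B|A)>P(B)$ holds exactly when $P(A|B)>P(A)$, can be seen on a toy joint distribution; the numbers below are arbitrary illustrative choices.)

```python
from fractions import Fraction as F

# hypothetical joint distribution of two events A and B:
# P(A and B), P(A and not B), P(not A and B), P(not A and not B)
p_ab, p_anb, p_nab, p_nanb = F(3, 10), F(2, 10), F(1, 10), F(4, 10)
p_a = p_ab + p_anb           # P(A) = 1/2
p_b = p_ab + p_nab           # P(B) = 2/5
p_b_given_a = p_ab / p_a     # P(B|A) = 3/5
p_a_given_b = p_ab / p_b     # P(A|B) = 3/4
# probability raising is symmetric: both reduce to P(A and B) > P(A) P(B)
assert (p_b_given_a > p_b) == (p_a_given_b > p_a)
assert p_b_given_a > p_b and p_a_given_b > p_a
```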
Basic economic reasoning on currency demand suggests that the currencies of countries whose exports depend heavily on a particular commodity should be strongly influenced by its price, so commodity price movements should lead (Granger-cause) exchange rate movements (macroeconomic/trade mechanism). \section{Introduction to Granger's model of causality} \label{sec:2} In statistics, the notion of causality is usually identified with a kind of stochastic dependence. Of course, this dependence (e.g. between two random variables) is a symmetric notion. In \cite{G}, Granger defined causality between two stationary time-series ${\mathbf X}=\{X_t\}_{t\in{\mathbb Z}}$ and ${\mathbf Y}=\{Y_t\}_{t\in{\mathbb Z}}$ in a non-symmetric way. There are two basic principles upon which this notion of causality (and the relationship between a cause and its effect) is based. \begin{itemize} \item A cause always happens prior to its effect. \item A cause makes unique changes in the effect. In other words, the causal series contains unique information about the effect series that is not available otherwise. \end{itemize} The precise definition of Granger's causality is the following: \begin{definition}[\cite{G}]\label{def-G} Denote by $U_{t-1}$ all information of the universe that is accumulated by time $t-1$ and by $\bar{X}_{t-1}=\{X_{t-i}\}_{i=1}^{\infty}$ the past of ${\mathbf X}$ by time $t-1$. Let $\tilde{U}_{\bar{X}_{t-1}}$ denote all the information of the universe that is accumulated by time $t-1$ apart from the past $\bar{X}_{t-1}$. Then if $\sigma^2(Y_t|U_{t-1})<\sigma^2(Y_t|\tilde{U}_{\bar{X}_{t-1}})$, we say that ${\mathbf X}$ is causing ${\mathbf Y}$. \end{definition} \begin{remark}\rm\quad \begin{enumerate} \item Since we assume stationarity of the time series ${\mathbf X}$ and ${\mathbf Y}$, the condition $\sigma^2(Y_t|U_{t-1})<\sigma^2(Y_t|\tilde{U}_{\bar{X}_{t-1}})$ is fulfilled for all $t$ whenever it is fulfilled for one $t$. 
This means that the past of the time series ${\mathbf X}$ influences the future of ${\mathbf Y}$ (expressed by means of the conditional variance in Definition \ref{def-G}). \item As Granger pointed out in \cite{G}, we could skip the condition that the time series ${\mathbf X}$ and ${\mathbf Y}$ are stationary, but then the causality between ${\mathbf X}$ and ${\mathbf Y}$ would become time-dependent, i.e., it might occur that at some time stamps $t_i$, $X_{t_i}$ is causing $Y_{t_i}$, while at some other time stamps $t_j$, $X_{t_j}$ would not cause $Y_{t_j}$. \end{enumerate} \end{remark} In fact, this notion of causality is based on the Kolmogorovian conditional probability theory. Granger's theory is used especially in econometrics and finance to model one-sided dependencies, as we have mentioned in the historical overview. \section{Basic definitions and known facts on orthomodular lattices} \label{sec:3} We have already mentioned in the Introduction that our model of causality works on horizontal sums of Boolean algebras. Since horizontal sums of Boolean algebras are special cases of orthomodular lattices, we also recall some basic facts on general orthomodular lattices. For more information on orthomodular lattices and their properties one can consult, e.g., \cite{D-P,Kalm-83,PtakPulm,Var}. \begin{definition}\label{def:OML} Let $(L,{\mathbf 0}_L,{\mathbf 1}_L,\vee ,\wedge ,\,' )$ be a lattice with the greatest element ${\mathbf 1}_L$ and the least element ${\mathbf 0}_L$. Let $':L\to L$ be a unary operation on $L$ with the following properties: \begin{enumerate} \item for all $a\in L$ there is a unique $a'\in L$ such that $(a')' =a$ and $a\vee a'= {\mathbf 1}_L$; \item if $a,b\in L$ and $a\le b$ then $b'\le a' $; \item if $ a,b\in L$ and $a\le b$ then $b=a\vee (a'\wedge b)$ (orthomodular law). \end{enumerate} Then $(L,{\mathbf 0}_L,{\mathbf 1}_L,\vee ,\wedge ,\,' )$ is said to be \emph{an orthomodular lattice}. 
\end{definition} In this paper, for the sake of brevity, we will simply write an orthomodular lattice $L$, skipping the operations whenever it will not cause any confusion. In general, an orthomodular lattice is not distributive. For arbitrary $a,b\in L$ only the following property is guaranteed \[ (a\wedge b)\vee (a\wedge b' )\leq a. \] On the other hand, if $L$ is distributive then it is a Boolean algebra. \begin{definition}\label{def:komort} Let $L$ be an orthomodular lattice. Then elements $a,b\in L$ are called \begin{enumerate} \item[\rm (o1)] \emph{orthogonal} ($a\perp b$) if $a\le b' $, \item[\rm (o2)] \emph{compatible} ($a\leftrightarrow b $) if $a=(a\wedge b)\vee (a\wedge b' )$ and $b=(a\wedge b)\vee (a' \wedge b )$. \end{enumerate} \end{definition} \emph{An orthomodular sub-lattice $L_1$ of $L$} is an orthomodular lattice such that $L_1\subset L$, with operations inherited from $L$ and possessing the same greatest and least elements ${\mathbf 1}_L$ and ${\mathbf 0}_L$, respectively. A distributive orthomodular sub-lattice $\cal B$ is called a Boolean sub-algebra of $L$. Every orthomodular lattice $L$ is a collection of blocks \cite{R2}. A \emph{block} is a maximal set of pairwise compatible elements of $L$, i.e. $L=\bigcup_j B_j$, where the blocks $B_j$ have operations inherited from $L$. Each block in $L$ is a Boolean algebra. \begin{definition}[See, e.g., \cite{D-P}]\label{hor-sum} Let $L$ be an orthomodular lattice with the greatest element ${\mathbf1}_L$ and the least element ${\mathbf0}_L$. Moreover, let $L=\bigcup\limits_{j\in J} B_j$, where $B_j$ are blocks in $L$ for all $j\in J$. We say that $L$ is a horizontal sum of Boolean algebras $B_j$ if \[ B_i\cap B_j=\{{\mathbf0}_L,{\mathbf1}_L\}\quad\mbox{for all $i,j\in J$ such that $i\ne j$.} \] \end{definition} \begin{example}\label{exam-OML}\rm Let $L_1=\{{\mathbf0}_{L_1}, a,a',b,b',{\mathbf1}_{L_1}\}$, as sketched in Fig.~\ref{fig-1}. 
Then $L_1$ is an orthomodular lattice whose blocks are ${B}_1=\{ {\mathbf0}_{L_1}, a,a',{\mathbf1}_{L_1}\}$ and ${B}_2=\{{\mathbf0}_{L_1},b,b',{\mathbf1}_{L_1}\}$. The lattice $L_1$ is the horizontal sum of the Boolean algebras ${B}_1$ and ${B}_2$, since ${B}_1\cap{B}_2=\{{\mathbf0}_{L_1},{\mathbf1}_{L_1}\}$. Let $L_2=\{{\mathbf0}_{L_2},a,b,c,d,e,a',b',c',d',e',{\mathbf1}_{L_2}\}$ such that $a'=b\vee c$, $b'=a\vee c$, $c'=a\vee b=d\vee e$, $d'=c\vee e$, $e'=c\vee d$. $L_2$ is sketched in Fig.~\ref{fig-2}. This orthomodular lattice also has two blocks, namely ${B}_3=\{ {\mathbf0}_{L_2}, a,b,c,a',b',c',{\mathbf1}_{L_2}\}$ and ${B}_4=\{{\mathbf0}_{L_2},c,d,e,c',d',e',{\mathbf1}_{L_2}\}$. The lattice $L_2$ is not the horizontal sum of the Boolean algebras ${B}_3$ and ${B}_4$ since ${B}_3\cap{B}_4=\{{\mathbf0}_{L_2},c,c',{\mathbf1}_{L_2}\}\ne\{{\mathbf0}_{L_2},{\mathbf1}_{L_2}\}$. \begin{figure} \begin{minipage}[b]{0.45\textwidth} \includegraphics[height=4.3cm]{OML-pasting.eps} \caption{\label{fig-1}OML $L_1$} \end{minipage} \begin{minipage}[b]{0.45\textwidth} \includegraphics[height=4.3cm]{OML-2.eps} \caption{\label{fig-2}OML $L_2$} \end{minipage} \end{figure} \end{example} \newpage In this paper we will deal only with $\sigma$-complete orthomodular lattices $L$. Such $\sigma$-complete orthomodular lattices are called orthomodular $\sigma$-lattices ($\sigma$-OML, for brevity). \begin{definition}\label{def:state} A map $m:L\to [0,1]$ is called \emph{a $\sigma$-additive state on $L$} if, for an arbitrary at most countable system of mutually orthogonal elements $a_i\in L$, $i\in I\subset {\mathbb N}$, the following holds \[ m\left(\bigvee\limits_{i\in I} a_i\right)=\sum\limits_{i\in I}m(a_i) \] and $m({\mathbf 1}_L)=1$. \end{definition} As was proven by Greechie \cite{Gree}, there exist orthomodular lattices with no state. \begin{definition}\label{def:observ} Let $L$ be a $\sigma$-OML. 
A $\sigma$-homomorphism $x$ from the Borel sets $\mathcal B({\mathbb R})$ to $L$ such that $x({\mathbb R})={\mathbf 1}_L$ is called \emph{an observable on $L$}.\\ By $\mathcal O$ we will denote the set of all observables on $L$. \end{definition} \begin{definition}\label{def:ranspec} Let $L$ be a $\sigma$-OML and $x$ be an observable on $L$. Then \begin{enumerate} \item[\rm (q1)] the set $R(x)=\{x(E); E\in\mathcal B({\mathbb R})\}$ is called \emph{the range of the observable $x$ on $L$}; \item[\rm (q2)] the set $\sigma(x)=\cap\{E\in\mathcal B({\mathbb R}); x(E)={\mathbf 1}_L\}$ is called \emph{the spectrum of the observable $x$}. \end{enumerate} \end{definition} Directly from the properties of a $\sigma $-homomorphism it follows that $R(x)$ is a Boolean sub-$\sigma$-algebra of $L$ (e.g., \cite{PtakPulm,Var}). \begin{definition}\label{def:comob} Let $L$ be a $\sigma$-OML. Observables $x,y$ are called \emph{compatible} ($x\leftrightarrow y $) if $x(A)\leftrightarrow y(B)$ for all $A,B\in\mathcal B({\mathbb R})$. \end{definition} \begin{theorem}[Loomis-Sikorski Theorem \cite{Var}]\label{L-S} Let $L$ be a $\sigma$-OML and $x,y$ be compatible observables on $L$. Then there exist a $\sigma$-homomorphism $h$ and real functions $f,g$ such that $x(A) =h(f(A))$ and $y(A) =h(g(A))$ for each $A\in\mathcal B({\mathbb R})$ (briefly $x =h\circ f$ and $y =h\circ g$). \end{theorem} If $x\in\mathcal O$ and $m$ is a $\sigma$-additive state on $L$, then $m_x(B)=m(x(B))$, $B\in\mathcal B({\mathbb R})$, is a probability distribution of $x$. Let $(\Omega ,\mathcal S,P)$ be a probability space. Then $\mathcal S$ is a Boolean $\sigma$-algebra and $P$ is a $\sigma$-additive state. Hence $\mathcal S$ is a $\sigma$-OML. Furthermore, if $\xi $ is a random variable on $(\Omega ,\mathcal S, P)$, then $\xi^{-1}$ is an observable. It means that, if we have an observable $x$ on a $\sigma$-OML $L$, we are in the same situation as in the classical probability space. 
We merely use a different language for the standard situation. Problems occur if we have more than just one observable and their ranges are not compatible. \section{Causality and orthomodular lattices} \label{sec:4} In this section we show how it is possible to introduce causality between observables on orthomodular lattices. As a first step we need conditional states and joint distributions (s-maps). \subsection{Bivariate states on orthomodular lattices} \label{sec:4-1} Conditional states and s-maps were introduced in \cite{Ns} and \cite{Nc}, respectively, and their properties were studied, for example, in \cite{NP08}. For a given $\sigma$-OML $L$ with a $\sigma$-additive state, $L_{0}$ will denote the set of all elements $a\in L$ for which there exists a $\sigma$-additive state $m_a$ such that $m_a(a)=1$. In this paper we will assume that \begin{equation}\label{L-0} L_0=L\setminus\{{\mathbf 0}_L\}. \end{equation} \begin{definition}\label{def:constate} Let $L$ be a $\sigma$-OML. Let $f:L\times L_0\to [0,1]$ be a function fulfilling the following \begin{enumerate} \item[\rm(c1)] for each $a\in L_0$, $f(.|a)$ is a $\sigma$-additive state on $L$; \item[\rm(c2)] for each $a\in L_0$, $f(a|a)=1$; \item[\rm(c3)] for mutually orthogonal (at most countably many) elements $a_1,a_2,...\in L_0$ and for all $b\in L$ the following is satisfied $$f\left(b\left|\bigvee_ia_i\right.\right)=\sum_if(b|a_i) f\left(a_i\left|\bigvee_ia_i\right.\right).$$ \end{enumerate} Then $f$ is called \emph{a conditional state on $L$}. \end{definition} \begin{remark}\rm Assume that $L$ is an orthomodular lattice fulfilling \eqref{L-0}. If we want to define a conditional state $f:L\times L_0\to [0,1]$ on $L$, then, of course, the fulfillment of \eqref{L-0} is a necessary condition for the existence of $f$. However, it is an open problem whether this condition is also sufficient. \end{remark} \begin{definition}\label{def:indep} Let $L$ be a $\sigma$-OML and let $f(.|.)$ be a conditional state on $L$. 
For $a,b\in L$ we say that \emph{$b$ is independent of $a$ with respect to the state $f(.|{\mathbf 1}_L)$} if $f(b|{\mathbf 1}_L)=f(b|a)$. \end{definition} In fact, $f(\cdot|{\mathbf 1}_L)$ plays the role of a prior state (prior probability distribution) in the classical definition of independence. This means that Definition \ref{def:indep} is just re-written from the Kolmogorovian probability theory. But unlike in the Kolmogorovian theory, the independence of elements of a $\sigma$-OML is not necessarily symmetric. In \cite{Nc} a conditional state $f$ was constructed in such a way that there are elements $a,b\in L$ for which $f(b|{\mathbf 1}_L)=f(b|a)$ and $f(a|{\mathbf 1}_L)\neq f(a|b)$ (see also Example \ref{exam-smap} later in this paper). This fact implies that the well-known Bayes Theorem may be violated on a $\sigma$-OML. \begin{definition}\label{def:smap} Let $L$ be a $\sigma$-OML. A map $p:L\times L\to [0,1]$ will be called \emph{an s-map on $L$} if the following conditions are fulfilled: \begin{enumerate} \item[\rm(s1)] $p({\mathbf 1}_L,{\mathbf 1}_L)=1$; \item[\rm(s2)] for all $a,b\in L$, if $a\perp b$ then $p(a,b)=0$; \item[\rm(s3)] for an arbitrary sequence $\{a_i\}_{i\in I}$ of elements of $L$ and arbitrary $b\in L$, if $a_i\perp a_j$ for $i\ne j\in I$, then \[p\left(\bigvee_{i\in I}a_i, b\right)=\sum\limits_{i\in I}p(a_i,b)\quad\mbox{\rm and}\quad p\left(b,\bigvee_{i\in I}a_i\right)=\sum\limits_{i\in I}p(b,a_i). \] \end{enumerate} \end{definition} Let $\cal P$ denote the system of all s-maps on $L$ (for fixed $L$) which are $\sigma$-additive in both variables. The relationship between s-maps $p\in{\cal P}$ and conditional states is given by the following proposition. \begin{proposition}[\cite{Ns,N-K14}] Let $L$ be a $\sigma$-OML fulfilling property \eqref{L-0}. \\ {\bf (a)} Assume that there exists a conditional state $f:L\times L_0\to[0,1]$. 
Then there exists an s-map $p\in{\cal P}$ such that for all $a\in L$ and all $b\in L_0$ we have \[ p(a,b)=f(a|b)\,f(b|{\mathbf 1}_L). \] {\bf (b)} Assume that there exists an s-map $p\in{\cal P}$. Then there exists a conditional state $f:L\times L_0\to[0,1]$ if and only if for all $b\in L_0$ there exists a $\sigma$-additive state $m_b:L\to[0,1]$ with $m_b(b)=1$. In such a case the conditional state $f$ can be expressed as \[ f_p(a|b)=\left\{ \begin{array}{ll} \frac{p(a,b)}{p(b,b)},\quad&\mbox{if $p(b,b)\ne0$,}\\ m_b(a),\quad&\mbox{if $p(b,b)=0$,} \end{array}\right. \] where $m_b$ is an arbitrary $\sigma$-additive state for which $m_b(b)=1$. \end{proposition} Let $L$ be a $\sigma$-OML and $p\in {\cal P}$ be an s-map on $L$. Denote $\mu_p(a)=p(a,a)$ for all $a\in L$. Then the following statements hold: \begin{enumerate} \item[\rm(p1)] $\mu_p:L\to [0,1]$ is a $\sigma$-additive state on $L$. \item[\rm(p2)] For all $a,b\in L$ we have that $p(a,b)\leq p(a,a)$. \item[\rm(p3)] If $a\leftrightarrow b$, then $p(a,b)=\mu_p (a\wedge b )$. \item[\rm(p4)] For arbitrary $a,b\in L$ the following equivalence holds \[ f_p(b|{\mathbf 1}_L)=f_p(b|a)\quad\Leftrightarrow \quad p(b,a)=p(b,b)p(a,a). \] \end{enumerate} In what follows, for a given s-map $p\in{\cal P}$ we use the notation $\mu_p(a)=p(a,a)$ for all $a\in L$. \begin{proposition}[\cite{AN}]\label{Jauch-P} Let $L$ be a $\sigma$-OML and $p$ be an s-map on $L$. If $a,b\in L$ are such that $\mu_p(a)=\mu_p(b)=1$, then $p(a,b)=p(b,a)=1$ and moreover $p(a,c)=p(c,a)$ for all $c\in L$. \end{proposition} \begin{definition} Let $L$ be a $\sigma$-OML and $p$ be an s-map on $L$. We say that the s-map $p$ is causal if there exist elements $a,b\in L$ such that $p(a,b)\ne p(b,a)$. In this case the elements $a,b$ are said to be $p$-causal. \end{definition} We will use the following notation \begin{eqnarray*} {\mathcal P}_S &=&\{p\in\mathcal P; p(a,b)=p(b,a)\quad \forall a,b\in L\},\\ {\mathcal P}_N&=&\mathcal P\setminus\mathcal P_S. 
\end{eqnarray*} \begin{remark}\rm The property of an s-map from Proposition \ref{Jauch-P}, \[\mu_p(a)=\mu_p(b)=1\quad \Rightarrow\quad p(a,b)=p(b,a)=1,\] is called the \emph{Jauch--Piron property}. This means that causality (the importance of the order, $p(a,c)$ or $p(c,a)$) can be achieved only if $p(a,a)\ne1$.\\ ${\mathcal P}_S$ contains all non-causal s-maps and ${\mathcal P}_N$ all causal s-maps. \end{remark} \begin{example}\label{exam-smap}\rm Let us consider the orthomodular lattices $L_1$ and $L_2$ from Example \ref{exam-OML}. We will show examples of causal s-maps on these lattices. First we construct an s-map $p_1:L_1^2\to [0,1]$. Because of the additivity of $p_1$ in both variables, it is enough to present the values $p_1(x,y)$ such that $x,y$ are atoms. Besides these values we also give the values where $x$ or $y$ equals ${\mathbf1}$, since $p_1(\cdot,{\mathbf1})=p_1({\mathbf1},\cdot)$ is a univariate state. \begin{table} \caption{\label{tab-p1}Values of the s-map $p_1$} \begin{tabular}{c|ccccc} \hline\noalign{\smallskip} &$a$ &$a'$ &$b$ &$b'$ &${\mathbf1}$\\ \hline $a$& $0.3$& $0$& $0.2$& $0.1$& $0.3$\\ $a'$& $0$& $0.7$& $0.3$& $0.4$& $0.7$\\ $b$& $0.15$& $0.35$& $0.5$& $0$& $0.5$\\ $b'$& $0.15$& $0.35$& $0$& $0.5$& $0.5$\\ ${\mathbf1}$& $0.3$& $0.7$& $0.5$& $0.5$& $1$\\ \noalign{\smallskip}\hline \end{tabular} \end{table} Now we construct an s-map $p_2:L_2^2\to [0,1]$. Also in this case we present only the values $p_2(x,y)$ such that $x,y$ are atoms and the values where $x$ or $y$ equals ${\mathbf1}$. 
\begin{table} \caption{\label{tab-p2}Values of the s-map $p_2$} \begin{tabular}{c|cccccc} \hline\noalign{\smallskip} &$a$ &$b$ &$c$ &$d$ &$e$ &${\mathbf1}$\\ \hline $a$& $0.2$& $0$& $0$& $0.2$& $0$& $0.2$\\ $b$& $0$& $0.4$& $0$& $0.1$& $0.3$ &$0.4$\\ $c$& $0$& $0$& $0.4$& $0$& $0$& $0.4$\\ $d$& $0.15$& $0.15$& $0$& $0.3$& $0$ &$0.3$\\ $e$& $0.05$& $0.25$& $0$& $0$& $0.3$& $0.3$\\ ${\mathbf1}$& $0.2$& $0.4$& $0.4$& $0.3$& $0.3$& $1$\\ \noalign{\smallskip}\hline \end{tabular} \end{table} As we can see in Tables \ref{tab-p1} and \ref{tab-p2}, the s-maps $p_1$ and $p_2$ are non-symmetric, i.e., they are causal. But there is one significant difference between these two s-maps. For $p_1$ we have \[ p_1(a,b)\ne p_1(a,{\mathbf1})\cdot p_1({\mathbf1},b),\qquad p_1(b,a)=p_1(b,{\mathbf1})\cdot p_1({\mathbf1},a), \] i.e., $a$ depends on $b$ but $b$ is independent of $a$ (in the lattice $L_1$).\\ For $p_2$ we have \[ p_2(a,d)\ne p_2(a,{\mathbf1})\cdot p_2({\mathbf1},d)\quad\mbox{and}\quad p_2(d,a)\ne p_2(d,{\mathbf1})\cdot p_2({\mathbf1},a), \] i.e., $a$ depends on $d$ and, at the same time, $d$ depends on $a$ (in the lattice $L_2$). \end{example} We say that an s-map $p:L\times L\to[0,1]$ is \emph{strongly causal} if it is causal and there exists a pair of elements $a,b\in L$ such that $a$ depends on $b$ but $b$ is independent of $a$. The s-map $p_1$ from Example \ref{exam-smap} is strongly causal. The s-map $p_2$ from that example is causal, but not strongly causal. \medskip An important notion for our considerations is also that of conditional expectation. \begin{definition} Let $L $ be a $\sigma$-OML, $p\in \mathcal P$ be an s-map, $x\in{\mathcal O}^1_p$ be an observable whose expected value exists, and $\mathcal B$ be a Boolean sub-$\sigma$-algebra of $L$. 
\emph{A version of the conditional expectation of the observable $x$ with respect to $\mathcal B$} is an observable $z$ (notation $z=E_p(x|\mathcal B)$) such that $R(z)\subset \mathcal B$ and, moreover, $E_{p}(z|a)=E_{p}(x|a)$ for arbitrary $a\in \{u\in\mathcal B;\mu_p(u)\neq 0\}$. \end{definition} Since $R(y)$ is a Boolean sub-$\sigma$-algebra of $L$ for an arbitrary observable $y$, we will write simply $E_p(x|y)=E_p(x|R(y))$. \begin{remark}\rm In fact, the conditional expectation $z=E_p(x|\mathcal B)$ is a projection of the observable $x$ into the Boolean $\sigma$-algebra $\cal B$. This means that if $z= E_p(x|y)$, then $z\leftrightarrow y$. This property implies that the conditional expectation $E_p(x|y)$ behaves exactly as we are used to from the conditional expectation of random variables in the Kolmogorovian probability theory. \end{remark} \subsection{Sum of non-compatible observables} \label{sec:4-2} For compatible observables $x,y$ on a $\sigma$-OML $L$, due to Theorem \ref{L-S} there exist a $\sigma $-homomorphism $h$ and real functions $f,g$ such that $x=h\circ f$ and $y=h\circ g$. This means that $x+y$ is defined by $x+y=h\circ(f+g)$. If $x,y$ are non-compatible then we cannot apply this procedure and $x+y$ does not exist in this sense. In \cite{N-K14} a sum of non-compatible observables was defined. \begin{definition}[\cite{N-K14}] Let $L$ be a $\sigma$-OML and $p\in\mathcal P$. A map $\oplus_p:{\mathcal O}^1_p\times {\mathcal O}^1_p\to {\mathcal O}^1_p$ is called \emph{a summability operator} if it satisfies the following conditions \begin{enumerate} \item[\rm (d1)] $R(\oplus_p(x,y))\subset R(y)$; \item[\rm (d2)] $\oplus_p(x,y)=E_p(x|y)+y$. \end{enumerate} \end{definition} The following basic properties of $\oplus_p$ are proven in \cite{N-K14}. \begin{proposition}[\cite{N-K14}]\label{suma} Let $L$ be a $\sigma$-OML, $\mathcal B$ be a Boolean sub-$\sigma$-algebra of $L$, and $p\in\mathcal P$. Assume $x,y\in{\mathcal O}^1_p$. 
Then the following statements are satisfied \begin{enumerate} \item[\rm(e1)] if $x\leftrightarrow y$ then $\oplus_p(x,y)\leftrightarrow\oplus_p(y,x)$; \item[\rm(e2)] $\oplus_p^\mathcal{B}(x,y)=\oplus_p^\mathcal{B}(y,x)$; \item[\rm(e3)] $E_p\left(\oplus_p^\mathcal{B}(x,y)\right)=E_p(\oplus_p(x,y))=E_p(x)+E_p(y)$; \item[\rm(e4)] if $\sigma (x)=\{x_1,x_2,...,x_n\}$ and $\sigma (y)=\{y_1,y_2,...,y_k\}$ then \[E_p(x)+E_p(y)=\sum\limits_{i}\sum\limits_{j}(x_i+y_j)p(x(\{x_i\}),y(\{y_j\})).\] \end{enumerate} \end{proposition} \section{A model of Granger causality on horizontal sums of Boolean algebras} \label{sec:5} Before turning our attention to the Granger causality, we should say something about random vectors and stochastic processes as a generalization of random vectors. \subsection{Random vectors versus vectors of observables} \label{sec:5-1} We will deal with a measurable space $(\Omega,{\cal S})$, where ${\cal S}$ is a $\sigma$-algebra of measurable events. Denote by ${\cal B}$ the $\sigma$-algebra of Borel subsets of ${\mathbb R}$. A random variable $\xi:\Omega\to{\mathbb R}$ is an ${\cal S}$-measurable function, i.e., for every $B\in{\cal B}$, $\xi^{-1}(B)\in{\cal S}$. Further, let ${\cal B}^2$ and ${\cal S}^2$ denote the direct products ${\cal B}\times{\cal B}$ and ${\cal S}\times{\cal S}$, respectively. By $\sigma({\cal B}^2)$ and $\sigma({\cal S}^2)$ we will denote the smallest $\sigma$-algebras containing the corresponding direct products. Let $\xi$ and $\eta$ be ${\cal S}$-measurable functions. In the Kolmogorovian probability theory the random vector $(\xi,\eta)$ is modelled as a bivariate function such that for every $B\in\sigma({\cal B}^2)$, $(\xi,\eta)^{-1}(B)\in\sigma({\cal S}^2)$. This model works perfectly if $\xi$ and $\eta$ are measurable simultaneously (e.g., two parameters measured on the same objects). But also in this case we are usually interested in knowing the probabilities $P(\xi^{-1}(A),\eta^{-1}(B))$, where $A,B\in{\cal B}$. 
This means that instead of constructing $\sigma({\cal B}^2)$ and $\sigma({\cal S}^2)$ it is enough (up to some exceptions) to work with the corresponding direct products ${\cal B}^2$ and ${\cal S}^2$. Thus the model becomes slightly different from the Kolmogorovian one, especially when we extend this consideration to stochastic processes. A different situation occurs if we consider a random vector $(\xi,\eta)$, but $\xi$ and $\eta$ are not simultaneously measurable. Of course, one possible way to model this situation is to stay within the Kolmogorovian model. In this case we know that $P((\xi,\eta)\in A\times B)=P((\eta,\xi)\in B\times A)$, where $A,B\in{\cal B}$. Instead of the random variables $\xi$ and $\eta$ we can use the observables $\xi^{-1}$ and $\eta^{-1}$. The fact that the observables $\xi^{-1}$ and $\eta^{-1}$ are not simultaneously measurable can be interpreted as their non-compatibility. We have seen in Example \ref{exam-smap} that, unlike a probability measure, s-maps are not necessarily symmetric. This means that, if we denote $a=\xi^{-1}(A)$ and $b=\eta^{-1}(B)$, we might get $p(a,b)\ne p(b,a)$. However, to get non-compatibility, we must leave Boolean algebras and switch to more general structures. We will consider two copies of the $\sigma$-algebra ${\cal S}$, denoted by ${\cal S}_1$ and ${\cal S}_2$. Assume ${\cal S}_1\cap{\cal S}_2=\{\emptyset,\Omega\}$, where $\emptyset$ and $\Omega$ are the bottom and top elements, respectively, of these two $\sigma$-algebras. This means that we can make their horizontal sum in the same way as we did with the blocks $B_1$ and $B_2$ in Example \ref{exam-OML} when constructing the OML $L_1$. The corresponding horizontal sum of ${\cal S}_1$ and ${\cal S}_2$ will be denoted by $\tilde{\cal S}$. In such a way, for arbitrary $A,B\in{\cal B}$ we have $(\xi^{-1}(A),\eta^{-1}(B))\in\tilde{\cal S}\times\tilde{\cal S}$ and $(\eta^{-1}(B),\xi^{-1}(A))\in\tilde{\cal S}\times\tilde{\cal S}$. 
In this situation we have one s-map $p$ modelling the (possibly non-symmetric) distribution of both vectors of observables, $(\xi^{-1},\eta^{-1})$ and $(\eta^{-1},\xi^{-1})$. \begin{remark}[Interpretation of the non-symmetric distribution]\label{rem}\rm We design two different experiments. In experiment no.~1 we first measure a parameter corresponding to $\xi^{-1}$ and then one corresponding to $\eta^{-1}$. In the second experiment we change the order of $\xi^{-1}$ and $\eta^{-1}$. We admit that the relative frequencies of $(\xi^{-1},\eta^{-1})\in A\times B$ and those of $(\eta^{-1}, \xi^{-1})\in B\times A$ might be different (order-dependent). \end{remark} \subsection{Modelling of Granger causality} \label{sec:5-2} Assume that $\{\mathbf X_t\}_{t\in T}$ is a stochastic process. For every time-stamp $t\in T$, $X_t$ is an ${\cal S}$-measurable random variable, where ${\cal S}$ is a Boolean $\sigma$-algebra. If we want to model causality (in the sense of non-symmetric dependence), we have to follow the same procedure as above for random vectors: abandon Boolean algebras and consider horizontal sums of Boolean algebras instead. We will consider $\mathrm{card}(T)$ copies of the $\sigma$-algebra ${\cal S}$, i.e., we will have a family $\{{\cal S}_t\}_{t\in T}$, and we make their horizontal sum. By $\hat{\cal S}$ we denote the resulting horizontal sum. For every time-stamp $t\in T$ and every Borel set $A\in{\cal B}$ we will have $X^{-1}_t(A)\in\hat{\cal S}$. Then, for $s\ne t$, $X^{-1}_t$ and $X^{-1}_s$ are non-compatible observables. We already know that there exists a joint distribution of $X^{-1}_t$ and $X^{-1}_s$ (or, equivalently, a conditional distribution $f(X^{-1}_s|X^{-1}_t)$, which is interesting especially when $s>t$), and by Proposition \ref{suma}, having the conditional distribution $f(X^{-1}_s|X^{-1}_t)$, there also exists their sum. 
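As a quick numerical illustration, outside the formal development, the non-symmetric s-map $p_1$ on the horizontal sum $L_1$ from Example \ref{exam-smap} can be checked mechanically. The following Python sketch (the label encoding and the helper functions are ours, introduced only for this check) encodes Table \ref{tab-p1} on the atoms and verifies the non-symmetry and the one-sided independence that make $p_1$ strongly causal:

```python
# Values of the s-map p1 on the horizontal sum L1 (Table tab-p1);
# keys are (first argument, second argument), elements encoded as labels.
p1 = {
    ('a', 'a'): 0.3,   ('a', "a'"): 0.0,   ('a', 'b'): 0.2,   ('a', "b'"): 0.1,
    ("a'", 'a'): 0.0,  ("a'", "a'"): 0.7,  ("a'", 'b'): 0.3,  ("a'", "b'"): 0.4,
    ('b', 'a'): 0.15,  ('b', "a'"): 0.35,  ('b', 'b'): 0.5,   ('b', "b'"): 0.0,
    ("b'", 'a'): 0.15, ("b'", "a'"): 0.35, ("b'", 'b'): 0.0,  ("b'", "b'"): 0.5,
}

def mu(x):
    """The state mu_p(x) = p(x, x) induced by the s-map."""
    return p1[(x, x)]

def p_x1(x):
    """p1(x, 1) recovered by additivity: 1 = b v b', with b, b' orthogonal."""
    return p1[(x, 'b')] + p1[(x, "b'")]

# (s2): orthogonal elements get value 0
assert p1[('a', "a'")] == 0.0
# additivity reproduces the '1' column of the table (up to rounding)
assert abs(p_x1('a') - mu('a')) < 1e-12

# non-symmetry: a and b are p1-causal
print(p1[('a', 'b')], p1[('b', 'a')])        # 0.2 0.15

# a depends on b ...
print(p1[('a', 'b')] == mu('a') * mu('b'))   # False
# ... but b is independent of a, so p1 is strongly causal
print(p1[('b', 'a')] == mu('b') * mu('a'))   # True
```

Note that $p_1({\mathbf1},x)=p_1(x,{\mathbf1})=\mu_{p_1}(x)$ here, so the independence checks above agree with the products $p_1(x,{\mathbf1})\cdot p_1({\mathbf1},y)$ used in Example \ref{exam-smap}.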
\medskip\noindent {\bf Granger causality.} Assume that we have two (not necessarily stationary) stochastic processes, $\{\mathbf X_t\}_{t\in T}$ and $\{\mathbf Y_t\}_{t\in T}$, where $T$ is the set of all possible time-stamps. According to Definitions 2 and 5 in \cite{G2}, $\{\mathbf Y_t\}_{t\in T}$ causes $\{\mathbf X_t\}_{t\in T}$ if $F(X_{t+1}|Y_t)\ne F(X_{t+1})$, where $F(\cdot|\cdot)$ is a conditional distribution function and $F(\cdot)$ is an unconditional distribution function. \medskip To model causality between the stochastic processes $\{\mathbf X_t\}_{t\in T}$ and $\{\mathbf Y_t\}_{t\in T}$, we need an equivalent of a measurable space such that for every $t,s\in T$ the observables $X^{-1}_t$ and $Y^{-1}_s$ are non-compatible. This means that we need two copies of $\hat{\cal S}$ and have to make their horizontal sum. We denote this newly constructed lattice by $\hat{\cal S}_2$. In this way we get that $F_{(X^{-1}_t,Y^{-1}_t)}$ and $F_{(Y^{-1}_t,X^{-1}_t)}$ may be different functions. In an experiment we are not able to distinguish the orders $(X^{-1}_t,Y^{-1}_t)$ and $(Y^{-1}_t,X^{-1}_t)$ if we measure $X$ and $Y$ at the same time stamp. This means that, as we have already commented in Remark \ref{rem}, measuring the non-symmetric causality experimentally has to follow exactly what Granger proposed in \cite{G,G2}. \section{Conclusion} \label{sec:6} In this paper we have shown parallels between the Granger causality \cite{G} and the modelling of causality on horizontal sums of Boolean algebras, which is based on s-maps and conditional states \cite{Ns,Nc,NP08}. The basic property of Granger's causality is its non-symmetry, i.e., the ability to distinguish between a cause and its effect. Causality based on s-maps and conditional states on orthomodular lattices (and on horizontal sums of Boolean algebras as special orthomodular lattices) bears the same property of non-symmetry. 
This non-symmetry is suitable for modelling causality (dependencies) in stochastic processes (as we have shown in Section \ref{sec:5}), where we are able, in a natural way, to distinguish the cause and its effect. As we have pointed out in Remark \ref{rem}, such non-symmetry (order-dependence) may also occur when measuring two different parameters, $\xi$ and $\eta$, by designing two different experiments -- first measuring $\xi$ and then $\eta$, or vice versa.
\section{\uppercase{Introduction}} When programming the small devices that constitute the nodes of the Internet of Things (IoT), one has to adapt to the limitations of these devices. Apart from their very limited processing power (especially compared to current personal computers, and even mobile devices like smartphones and tablets), the main specificity of these devices is that they are operated on small batteries (e.g.: AAA or button cells). Thus, one of the main challenges with these motes is the need to reduce their energy consumption as much as possible. We want their batteries to last as long as possible, for economic but also practical reasons: it may be difficult---even almost impossible---to change the batteries of some of these motes, because of their locations (e.g.: on top of buildings, under roads, etc.) IoT motes are usually very compact devices: they are generally built around a central integrated chip that contains the main processing unit and several basic peripherals (such as timers, A/D and D/A converters, I/O controllers\ldots) called microcontroller units or MCUs. Apart from the MCU, a mote generally only contains some ``physical-world'' sensors and a radio transceiver for networking. The main radio communication protocol currently used in the IoT field is IEEE 802.15.4. Some MCUs do integrate an 802.15.4 transceiver on-chip. Among the various components that constitute a mote, the most power-consuming block is the radio transceiver. Consequently, to reduce the power consumption of IoT motes, a first key point is to use the radio transceiver only when needed, keeping it powered off as much as possible. The software element responsible for controlling the radio transceiver in an adequate manner is the \emph{MAC~/ RDC (Media Access Control \& Radio Duty Cycle)} layer of the network stack. 
An efficient power-saving strategy for IoT motes thus relies on finding the best trade-off between minimizing the radio duty cycle and keeping networking efficiency at the highest possible level. This is achieved by developing new, ``intelligent'' MAC~/ RDC protocols. To implement new, high-performance MAC~/ RDC protocols, one needs to be able to react to events with good reactivity (the lowest latency possible) and flexibility. These protocols rely on precise timing to ensure efficient synchronization between the different motes and other radio-networked devices of a \emph{Personal Area Network (PAN)}, thus allowing the radio transceivers to be turned on \emph{only} when needed. At the system level, being able to follow such accurate timings means having very efficient interruption management, and making extensive use of hardware timers, which are the most precise timing source available. The second most power-consuming element in a mote, after the radio transceiver, is the MCU itself: every current MCU offers ``low-power modes'', which consist in disabling the various hardware blocks, beginning with the CPU core. The main way to minimize energy consumption with an MCU is thus to disable its features as much as possible, only using them when needed: that effectively means putting the whole MCU to sleep as much as possible. As with the radio transceiver, using the MCU efficiently while keeping the system efficient and reactive means optimal use of interruptions, and of hardware timers for synchronization. Thus, in both cases, we need to make optimal use of interruptions as well as hardware timers. Being able to use them both efficiently without too much hassle implies the use of a specialized operating system (OS), especially to easily benefit from multitasking abilities. That is what we will discuss in this paper. 
\section{\uppercase{Previous work and problem statement}} Specialized OSes for the resource-constrained devices that constitute wireless sensor networks have been designed, published, and made available for quite a long time. \subsection{TinyOS} The first widely used system in this domain was \emph{TinyOS} \cite{TinyOS}. It is an open-source OS, whose first stable release (1.0) was published in September 2002. It is very lightweight, and as such well adapted to limited devices like WSN motes. It has brought many advances in this domain, like the ability to use the Internet Protocol (IP) and routing (RPL) on 802.15.4 networks, including the latest IPv6 version, and to simulate networks of TinyOS motes via TOSSIM \cite{TOSSIM}. Its main drawback is that one needs to learn a specific language---named nesC---to be able to work efficiently with it. This language is quite different from standard C and other common imperative programming languages, and as such can be difficult to master. The presence of that specific language is no coincidence: TinyOS is built on its own specific paradigms: it has a unique stack, from which the different components of the OS are called as statically linked callbacks. This makes the programming of applications complex, especially when it comes to decomposing them into various ``tasks''. The multitasking part is also quite limited: tasks are run in a fixed, queue-like order. Finally, TinyOS requires a custom GNU-based toolchain to be built. All of these limitations, plus a relatively slow development pace (the last stable version dates back to August 2012), have harmed its adoption, and it is no longer the most widely used OS in the domain. \subsection{Contiki} The current reference OS in the domain of WSN and IoT is \emph{Contiki} \cite{ContikiOS}. It is also an open-source OS, which was first released in 2002. 
It is also at the origin of many assets: we can mention, among others, the uIP Embedded TCP/IP Stack \cite{uip}, which has been extended to uIPv6, the low-power Rime network stack \cite{Rime}, and the Cooja advanced network simulator \cite{Cooja}. While a bit more resource-demanding than TinyOS, Contiki is also very lightweight and well adapted to motes. Its greatest advantage over TinyOS is that it is based on standard, well-known OS paradigms, and coded in standard C language, which makes it relatively easy to learn and program. It offers an event-based kernel, implemented using cooperative multithreading, and a complete network stack. All of these features and advantages have made Contiki widespread, making it the reference OS when it comes to WSN. Contiki developers have also made advances in the MAC/RDC domain: many protocols have been implemented as part of the Contiki network stack, and a specifically developed one, ContikiMAC, was published in 2011 \cite{ContikiMAC} and implemented into Contiki as the default RDC protocol (designed to be used with standard CSMA/CA as the MAC layer). However, Contiki's extremely compact footprint and high optimization come at the cost of some limitations that prevented us from using it as our software platform. Contiki OS is indeed not a real-time OS: the processing of ``events''---using Contiki's terminology---is done by the kernel's scheduler, which is based on cooperative multitasking. This scheduler only triggers at a specific, pre-determined rate; on the platforms we are interested in, this rate is fixed at 128~Hz: this corresponds to a time skew of up to 8~milliseconds (8000~microseconds) to process an event, interrupt handling being one of the possible events.
Such a large granularity is clearly a huge problem when implementing high-performance MAC/RDC protocols: knowing that the transmission of a full-length 802.15.4 packet takes about 4~milliseconds (4000~microseconds), a time granularity of 320~microseconds is needed, corresponding to one backoff period (BP). To address this problem, Contiki provides a real-time feature, \texttt{rtimer}, which allows bypassing the kernel scheduler and using a hardware timer to trigger the execution of user-defined functions. However, it has very severe limitations: \begin{itemize} \item only one instance of \texttt{rtimer} is available, thus only one real-time event can be scheduled or executed at any time; this limitation forbids the development of advanced real-time software---like high-performance MAC~/ RDC protocols---or at least makes it very hard; \item moreover, it is unsafe to execute most of Contiki's basic functions (i.e.: kernel, network stack, etc.) from \texttt{rtimer}, even indirectly, because these functions are not designed to handle pre-emption. Contiki is indeed based on cooperative multithreading, whereas the \texttt{rtimer} mechanism seems like an ``independent feature'', coming with its own paradigm. Only a precise set of functions known to be ``interrupt-safe'' (like \texttt{process\_poll()}) can be safely invoked from \texttt{rtimer}; using other parts of Contiki almost certainly means a crash or unpredictable behaviour. In practice, this restriction makes it very difficult to write Contiki extensions (like network stack layer drivers) using \texttt{rtimer}. \end{itemize} Also note that this cooperative scheduler is designed to manage a specific kind of tasks: the \emph{protothreads}. This solution allows managing different threads of execution without each of them needing its own separate stack \cite{Protothreads}. The great advantage of this mechanism is the ability to use a unique stack, thus greatly reducing the amount of RAM needed by the system.
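The timing mismatch above can be made concrete with a small arithmetic sketch (function name is ours): the worst-case event-processing skew of a fixed-rate cooperative scheduler is one full tick period, which at Contiki's 128~Hz rate is about 8~milliseconds, more than twenty times the 320~microsecond backoff period that 802.15.4 timing requires.

```c
#include <stdint.h>
#include <assert.h>

/* One 802.15.4 backoff period, in microseconds (from the text above). */
#define BACKOFF_PERIOD_US 320u

/* Worst-case event-processing skew of a fixed-rate cooperative
 * scheduler: one full tick period. At 128 Hz this is
 * 1000000 / 128 = 7812 us (about 8 ms). */
static uint32_t scheduler_skew_us(uint32_t rate_hz)
{
    return 1000000u / rate_hz;
}
```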
The trade-off is that one must be careful when using certain C constructs (e.g.: it is impossible to use the \texttt{switch} statement in some parts of programs that use protothreads). For all these reasons, we were unable to use Contiki OS to develop and implement our high-performance MAC/RDC protocols. We definitely needed an OS with efficient real-time features and an efficient event-handling mechanism. \subsection{Other options} There are other, less used OSes designed for the WSN/IoT domain, but none of them fulfilled our requirements, for the following reasons: \begin{description} \item[SOS] \cite{SOS} This system's development was cancelled in November 2008; its authors explicitly recommend on their website to ``consider one of the more actively supported alternatives''. \item[Lorien] \cite{LorienOS} While its component-oriented approach is interesting, this system does not seem very widespread. It is currently available for only one hardware platform (TelosB/SkyMote), which seriously limits the portability we can expect from using an OS. Moreover, its development seems to have slowed down quite a bit: the latest available Lorien release was published in July 2011, while the latest commit in the project's SourceForge repository (r46) dates back to January 2013. \item[Mantis] \cite{MantisOS} While this project claims to be Open Source, it has made no public release on its SourceForge web site, and access to the source repository (\texttt{http://mantis.cs.colorado.edu/viewcvs/}) seems to be stalled. Moreover, the project's main web page shows that the last posted news item mentions a first beta to be released in 2007. The last publications about Mantis OS also seem to date from 2007.
All of these elements tend to indicate that this project is abandoned\ldots \item[LiteOS] \cite{LiteOS} This system offers very interesting features, especially the ability to update the nodes' firmware over the air, as well as a built-in hierarchical file system. Unfortunately, it is currently only available on IRIS/MicaZ platforms, and requires AVR Studio for programming (which imposes Microsoft Windows as a development platform). This greatly hinders portability, since LiteOS is clearly strongly tied to the AVR microcontroller architecture. \item[MansOS] \cite{MansOS} This system is very recent and offers many interesting features, like optional preemptive multitasking, a network stack, runtime reprogramming, and a scripting language. It is available on two MCU architectures: AVR and MSP430 (but not ARM). However, none of the real-time features we wanted seems to be available: e.g. only software timers with a 1~millisecond resolution are offered. \end{description} In any case, none of the alternative OSes cited above offers the real-time features we were looking for. \bigskip On the other hand, ``bare-metal'' programming is also unacceptable for us: it would mean sacrificing portability and multitasking; and we would also need to redevelop many tools and APIs to make application programming even remotely practical enough for third-party developers who would want to use our protocols. \bigskip We also considered using an established real-time OS (RTOS) as a base for our work. The current reference when it comes to open-source RTOSes is \emph{FreeRTOS} (\texttt{http://www.freertos.org/}). It is a robust, mature and widely used OS. Its codebase consists of clean, well-documented standard C. However, it offers only core features, and does not provide any network subsystem at all. Redeveloping a whole network stack from scratch would have been too time-consuming.
(Network extensions exist for FreeRTOS, but they are either immature, very limited, or proprietary and commercial software; and most of them are tied to a particular piece of hardware, thus ruining the portability advantage offered by the OS.) \subsection{Summary: Wanted Features} To summarize the issue, what we required is an OS that: \begin{itemize} \item is adapted to the limitations of the deeply-embedded MCUs that constitute the core of WSN/IoT motes; \item provides real-time features powerful enough to support the development of advanced, high-performance MAC~/ RDC protocols; \item includes a network stack (even a basic one) adapted to wireless communication on the 802.15.4 radio medium. \end{itemize} However, none of the established OSes commonly used in the IoT domain (TinyOS, Contiki) or in the broader RTOS spectrum (FreeRTOS) could match our needs. \section{\uppercase{The RIOT Operating System}} Consequently, we focused our interest on \emph{RIOT OS} \cite{RIOT}. This new system---first released in 2013---is also open-source and specialized in the domain of low-power, embedded wireless sensors. It offers many interesting features, which we will now describe. It provides the basic benefits of an OS: portability (it has been ported to many devices powered by ARM, MSP430, and---more recently---AVR microcontrollers) and a comprehensive set of features, including a network stack.
Moreover, it offers key features that are otherwise unknown in the WSN/IoT domain: \begin{itemize} \item an efficient, interrupt-driven, tickless \emph{micro-kernel}; \item that kernel includes a priority-aware task scheduler, providing \emph{pre-emptive multitasking}; \item a highly efficient use of \emph{hardware timers}: all of them can be used concurrently (especially since the kernel is tickless), offering the ability to schedule actions with high granularity; on low-end devices, based on the MSP430 architecture, events can be scheduled with a resolution of 32~microseconds; \item RIOT is entirely written in \emph{standard C language}; and unlike Contiki, there are no restrictions on usable constructs (e.g.: those introduced by the protothreads mechanism); \item a clean and \emph{modular design}, which makes development with and \emph{into} the system itself easier and more productive. \end{itemize} The first three features listed above make RIOT a full-fledged \emph{real-time} operating system. We also believe that the tickless kernel and the optimal use of hardware timers should make RIOT OS a well-suited software platform for optimizing energy consumption on battery-powered, MCU-based devices. A drawback of RIOT, compared to TinyOS or Contiki, is its higher memory footprint: the full network stack (from the PHY driver up to RPL routing with \mbox{6LoWPAN} and MAC~/ RDC layers) cannot be compiled for Sky/TelosB because it overflows the available memory. Right now, constrained devices like MSP430-based motes are limited to the role of what the 802.15.4 standard calls \emph{Reduced Function Devices (RFD)}, the role of \emph{Full Function Devices (FFD)} being reserved for more powerful motes (i.e.: based on ARM microcontrollers). However, we also note that, thanks to its modular architecture, the RIOT kernel, compiled with only PHY and MAC~/ RDC layers, is actually lightweight and consumes little memory.
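With a 32~microsecond hardware-timer resolution, scheduling a delay expressed in microseconds reduces to a tick conversion. The sketch below is ours (it is not the actual RIOT API); rounding up ensures a timer never fires earlier than requested.

```c
#include <stdint.h>
#include <assert.h>

/* Assumed tick resolution on MSP430-class devices, per the text above. */
#define TIMER_TICK_US 32u

/* Convert a microsecond delay into hardware-timer ticks, rounding up
 * so that the timer never fires early. Sketch only, not RIOT's API. */
static uint32_t us_to_ticks(uint32_t delay_us)
{
    return (delay_us + TIMER_TICK_US - 1u) / TIMER_TICK_US;
}
```

For example, one 802.15.4 backoff period of 320~microseconds maps to exactly 10 ticks at this resolution, comfortably finer than the multi-millisecond granularity of a 128~Hz cooperative scheduler.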
We consequently believe that the current situation will improve as the higher layers of the RIOT network stack mature, and that in the future more constrained devices could also be used as FFDs with RIOT OS. \medskip When we began to work with RIOT, it also had two other issues: the MSP430 versions were not stable enough to make real use of the platform; and beyond basic CSMA/CA, no work related to the MAC~/ RDC layer had been done on that system. This is where our contributions fit in. \section{\uppercase{Our contributions}} For our work, we use---as our main hardware platform---IoT motes built around MSP430 microcontrollers. MSP430 is a microcontroller (MCU) architecture from Texas Instruments, offering very low power consumption, a cheap price, and good performance thanks to a custom 16-bit RISC design. This architecture is very common in IoT motes. It is also very well supported, especially by the Cooja simulator \cite{Cooja}, which makes simulations of network scenarios---especially with many devices---much easier to design and test. RIOT OS has historically been developed first on legacy ARM devices (ARM7TDMI-based MCUs), then ported to more recent microcontrollers (ARM Cortex-M) and other architectures (MSP430, then AVR). However, the MSP430 port was, before we improved it, still not as ``polished'' as the ARM code and thus prone to crash. Our contribution can be summarized in the following points: \begin{enumerate} \item analysis of current OSes' (TinyOS, Contiki, etc.)
limitations, and why they are incompatible with the development of real-time extensions like advanced MAC~/ RDC protocols; \item adding debugging features to the RIOT OS kernel, more precisely a mechanism to handle fatal errors: crashed systems can be ``frozen'' to facilitate debugging during development; or, in production, be made to reboot immediately, thus reducing the unavailability of a RIOT-running device to a minimum; \item porting RIOT OS to a production-ready, MSP430-based device: the Zolertia Z1 mote (already supported by Contiki, and used in real-world scenarios running that OS); \item debugging the MSP430-specific portion of RIOT OS---more specifically: the hardware abstraction layer (HAL) of the task scheduler---making RIOT OS robust and production-ready on MSP430-based devices.\\ Note that all of these contributions have been reviewed by RIOT's development team and integrated into the ``master'' branch of RIOT OS' Github repository (i.e.: they are now part of the standard code base of the system); \item running on MSP430-based devices also allows RIOT OS applications to be simulated with the Cooja simulator; this greatly improves the speed and ease of development; \item thanks to these achievements, we now have a robust and full-featured software platform offering all the features needed to develop high-performance MAC/RDC protocols---such as all of the time-slotted protocols. \end{enumerate} As a proof of concept of this last statement, we have implemented one of our own designs, and obtained very promising results, shown in the next section. \section{\uppercase{Use Case: implementing the S-CoSenS RDC protocol}} \subsection{The S-CoSenS Protocol} The first protocol we wanted to implement is S-CoSenS \cite{TheseBNefzi}, which is designed to work on top of the IEEE 802.15.4 physical and MAC (i.e.: CSMA/CA) layers. It is an evolution of the already published CoSenS protocol \cite{CosensConf}: it adds to the latter a sleeping period for energy saving.
Thus, the basic principle of S-CoSenS is to delay the forwarding (routing) of received packets, by dividing the radio duty cycle into three periods: a sleeping period (SP), a waiting period (WP) during which routers listen to the radio medium to collect incoming 802.15.4 packets, and finally a burst transmission period (TP) for transmitting the packets enqueued during the WP. The main advantage of S-CoSenS is its ability to adapt dynamically to the wireless network throughput at runtime, by calculating for each radio duty cycle the length of the SP and WP according to the number of packets relayed during previous cycles. Note that the SP and WP of a same cycle together form the \emph{subframe}; it is the part of a S-CoSenS cycle whose length is computed and known \textit{a priori}; on the contrary, the TP duration is always unknown up to its very beginning, because it depends on the amount of data successfully received during the WP that precedes it. The computation of the WP duration follows a ``sliding average'' algorithm, where the WP duration for each duty cycle is computed from the average of previous cycles as: \begin{eqnarray*} && \overline{\mathrm{WP}_{n}} = \alpha \cdot \overline{\mathrm{WP}_{n-1}} + (1 - \alpha) \cdot \mathrm{WP}_{n-1} \\ && \mathrm{WP}_{n} = \max ( \mathrm{WP}_{min}, \min ( \overline{\mathrm{WP}_{n}}, \mathrm{WP}_{max} ) ) \end{eqnarray*} where $\overline{\mathrm{WP}_{n}}$ and $\overline{\mathrm{WP}_{n-1}}$ are the average WP lengths at the $n^{\mathrm{th}}$ and $(n-1)^{\mathrm{th}}$ cycles, respectively, while $\mathrm{WP}_{n}$ and $\mathrm{WP}_{n-1}$ are the actual WP lengths of the $n^{\mathrm{th}}$ and $(n-1)^{\mathrm{th}}$ cycles; $\alpha$ is a parameter between 0 and 1 representing the relative weight of the history in the computation, and $\mathrm{WP}_{min}$ and $\mathrm{WP}_{max}$ are the low and high limits imposed by the programmer on the WP duration.
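The WP computation above can be sketched directly in C (function names are ours, not from the S-CoSenS sources; all durations share one unit, e.g. microseconds):

```c
#include <assert.h>

static double clamp(double x, double lo, double hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Sliding average: WPbar_n = alpha * WPbar_{n-1} + (1 - alpha) * WP_{n-1} */
static double wp_average_next(double wp_avg_prev, double wp_prev, double alpha)
{
    return alpha * wp_avg_prev + (1.0 - alpha) * wp_prev;
}

/* Actual length: WP_n = max(WPmin, min(WPbar_n, WPmax)) */
static double wp_next(double wp_avg_n, double wp_min, double wp_max)
{
    return clamp(wp_avg_n, wp_min, wp_max);
}
```

The SP for the cycle is then simply the (fixed) subframe duration minus the computed WP.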
The length of the whole subframe being a parameter given at compilation time, SP duration is simply computed by subtracting the calculated duration of WP from the subframe duration for every cycle. The local synchronization between a S-CoSenS router and its leaf nodes is done thanks to a beacon packet, that is broadcasted by the router at the beginning of each cycle. This beacon contains the duration (in microseconds) of the SP and WP for the currently beginning cycle. The whole S-CoSenS cycle workflow for a router is summarized in figure \ref{FigSCosensDutyCycle} hereafter. \begin{figure}[!ht] \centering \begin{tikzpicture}[>=latex] \fill[black] (0cm, -0.25cm) rectangle +(0.2cm, 0.5cm); \draw[->,thick] (0.1cm, 0.25cm) -- +(0, 0.5cm); \draw (0.1cm, 1.3cm) node {Beacon}; \draw[anchor=west] (-0.6cm, 0.9cm) node {(broadcasted)}; \draw[thick] (0cm, -0.25cm) -- +(0, 0.5cm); \foreach \x in {1,2,3,4,5,6} { \fill[lightgray] (0.2cm + \x * 0.25cm, -0.25cm) rectangle +(0.05cm, 0.5cm); } \draw (1.1cm, 0) node {\textbf{SP}}; \draw[thick] (2cm, -0.25cm) -- +(0, 0.5cm); \fill[lightgray] (2cm, -0.25cm) rectangle +(2cm, 0.5cm); \draw (3cm, 0) node {\textbf{WP}}; \draw[thick] (4cm, -0.25cm) -- +(0, 0.5cm); \fill[lightgray] (4cm, -0.25cm) rectangle +(2cm, 0.5cm); \draw (5cm, 0) node {\textbf{TP}}; \draw[thick] (6cm, -0.25cm) -- +(0, 0.5cm); \draw[->] (-0.5cm, 0.25cm) -- +(7cm, 0); \draw[->] (-0.5cm, -0.25cm) -- +(7cm, 0); \draw[->,thick] (2.5cm, 0.75cm) -- +(0, -0.5cm); \draw (2.5cm, 1cm) node {P1}; \draw[->,thick] (3cm, 0.75cm) -- +(0, -0.5cm); \draw (3cm, 1cm) node {P2}; \draw[->,thick] (3.5cm, 0.75cm) -- +(0, -0.5cm); \draw (3.5cm, 1cm) node {P3}; \draw[->,thick] (4.5cm, 0.25cm) -- +(0, 0.5cm); \draw (4.5cm, 1cm) node {P1}; \draw[->,thick] (5cm, 0.25cm) -- +(0, 0.5cm); \draw (5cm, 1cm) node {P2}; \draw[->,thick] (5.5cm, 0.25cm) -- +(0, 0.5cm); \draw (5.5cm, 1cm) node {P3}; \draw (0cm, -0.5cm) .. controls +(0, -0.25cm) .. +(1cm, -0.25cm); \draw (1cm, -0.75cm) .. 
controls +(1cm, 0) .. +(1cm, -0.25cm); \draw (2cm, -1cm) .. controls +(0, 0.25cm) .. +(1cm, 0.25cm); \draw (3cm, -0.75cm) .. controls +(1cm, 0) .. +(1cm, 0.25cm); \draw (2cm, -1.25cm) node {\textbf{Subframe}}; \draw (0cm, -1.5cm) .. controls +(0, -0.25cm) .. +(1.5cm, -0.25cm); \draw (1.5cm, -1.75cm) .. controls +(1.5cm, 0) .. +(1.5cm, -0.25cm); \draw (3cm, -2cm) .. controls +(0, 0.25cm) .. +(1.5cm, 0.25cm); \draw (4.5cm, -1.75cm) .. controls +(1.5cm, 0) .. +(1.5cm, 0.25cm); \end{tikzpicture} \caption{A typical S-CoSenS router cycle.\\ The gray strips in the SP represent the short wake-up-and-listen periods used for inter-router communication.} \label{FigSCosensDutyCycle} \end{figure} An interesting property of S-CoSenS is that leaf (i.e.: non-router) nodes always have their radio transceiver offline, except when they have packets to send. When a data packet is generated on a leaf node, the latter wakes up its radio transceiver, listens and waits for the first beacon emitted by an S-CoSenS router, then sends its packet using CSMA/CA at the beginning of the WP described in the beacon it received. A leaf node puts its transceiver offline during the delay between the beacon and that WP (that is: the SP of the router that emitted the received beacon), and goes back to sleep mode once its packet is transmitted. This whole procedure is shown in figure \ref{FigSCoSenSPktTx}.
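The leaf-node timing just described can be sketched as follows (type and function names are our assumptions, not the actual implementation): the beacon carries the SP and WP durations in microseconds, and a leaf node that hears a beacon at a given instant sleeps through the SP and wakes exactly at the start of the router's WP.

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical beacon payload: the two durations announced by a router. */
typedef struct {
    uint32_t sp_us;  /* sleeping period duration, in microseconds */
    uint32_t wp_us;  /* waiting period duration, in microseconds */
} scosens_beacon_t;

/* Instant at which the leaf node must wake its transceiver (start of WP). */
static uint32_t leaf_wakeup_us(uint32_t t_beacon_us, const scosens_beacon_t *b)
{
    return t_beacon_us + b->sp_us;
}

/* Instant by which the packet must have been sent (end of WP). */
static uint32_t leaf_deadline_us(uint32_t t_beacon_us, const scosens_beacon_t *b)
{
    return t_beacon_us + b->sp_us + b->wp_us;
}
```

Hitting these instants precisely is exactly what requires sub-millisecond hardware-timer resolution on every node.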
\begin{figure}[!h] \centering \begin{tikzpicture}[>=latex] \draw (-0.5cm, 0) node {\large \textit{R}}; \draw[thick] (1cm, -0.25cm) -- +(0, 0.5cm); \draw (2cm, 0) node {\textbf{SP}}; \draw[thick] (3cm, -0.25cm) -- +(0, 0.5cm); \fill[lightgray] (3cm, -0.25cm) rectangle +(2cm, 0.5cm); \draw (4cm, 0) node {\textbf{WP}}; \draw[thick] (5cm, -0.25cm) -- +(0, 0.5cm); \fill[lightgray] (5cm, -0.25cm) rectangle +(0.5cm, 0.5cm); \draw (5.25cm, -0.5cm) node {\textbf{TP}}; \draw[thick] (5.5cm, -0.25cm) -- +(0, 0.5cm); \draw[->] (-0.5cm, 0.25cm) -- +(6.5cm, 0); \draw[->] (-0.5cm, -0.25cm) -- +(6.5cm, 0); \draw (-0.5cm, -1.5cm) node {\large \textit{LN}}; \fill[gray] (0cm, -1.25cm) rectangle +(1.3cm, -0.5cm); \fill[gray] (2.9cm, -1.25cm) rectangle +(0.5cm, -0.5cm); \fill[black] (1cm, -0.25cm) rectangle +(0.2cm, 0.5cm); \draw[->,thick] (1.1cm, 0.25cm) -- +(0, -1.5cm); \draw[anchor=east] (1cm, -0.75cm) node {Beacon}; \fill[black] (1cm, -1.25cm) rectangle +(0.2cm, -0.5cm); \draw[->,very thick] (0cm, -2.5cm) -- +(0, 0.75cm); \draw[anchor=west] (0cm, -2.5cm) node {\footnotesize \textbf{packet arrival}}; \fill[black] (3.1cm, -1.25cm) rectangle +(0.2cm, -0.5cm); \draw[->,thick] (3.2cm, -1.25cm) -- +(0, 1cm); \draw[anchor=west] (3.2cm, -0.75cm) node {P1}; \fill[black] (3.1cm, -0.25cm) rectangle +(0.2cm, 0.5cm); \fill[black] (5.1cm, -0.25cm) rectangle +(0.2cm, 0.5cm); \draw[->,thick] (5.2cm, 0.25cm) -- +(0, 0.5cm); \draw (5.2cm, 1cm) node {P1}; \draw[->] (-0.5cm, -1.25cm) -- +(6.5cm, 0); \draw[->] (-0.5cm, -1.75cm) -- +(6.5cm, 0); \end{tikzpicture} \caption{A typical transmission of a data packet with the S-CoSenS protocol between a leaf node and a router.} \label{FigSCoSenSPktTx} \end{figure} We thus need to synchronize with enough accuracy different devices (that can be based on different hardware platforms) on cycles whose periods are dynamically calculated at runtime, with resolution that needs to be in the sub-millisecond range. 
This is where RIOT OS's advanced real-time features really shine, whereas the other comparable OSes are definitely lacking for that purpose. \subsection{Simulations and Synchronization Accuracy} We have implemented S-CoSenS under RIOT, and made first tests by performing simulations---with Cooja---of an 802.15.4 PAN (Personal Area Network) consisting of a router and ten motes acting as ``leaf nodes''. The ten nodes regularly send data packets to the router, which retransmits these data packets to a nearby ``sink'' device. Both the router and the ten nodes use exclusively the S-CoSenS RDC/MAC protocol. This is summarized in figure \ref{FigPANtest}. \begin{figure}[!h] \centering \begin{tikzpicture}[>=latex] \draw (0, 1cm) circle (0.25cm); \draw (0, 1cm) node {S}; \draw[->,thick] (0, 0.25cm) -- (0, 0.75cm); \draw (0, 0) circle (0.25cm); \draw (0, 0) node {R}; \foreach \x in {6,7,8,9,10} { \fill[white] (\x * 1cm - 8cm, -1.75cm) circle (0.25cm); \draw (\x * 1cm - 8cm, -1.75cm) circle (0.25cm); \draw (\x * 1cm - 8cm, -1.75cm) node {\x}; \draw[->,thick] (\x * 1cm - 8cm, -1.5cm) -- (\x * 0.02cm - 0.16cm, -0.25cm); } \foreach \x in {1,2,3,4,5} { \fill[white] (\x * 1cm - 3cm, -1cm) circle (0.25cm); \draw (\x * 1cm - 3cm, -1cm) circle (0.25cm); \draw (\x * 1cm - 3cm, -1cm) node {\x}; \draw[->,thick] (\x * 1cm - 3cm, -0.75cm) -- (\x * 0.05cm - 0.15cm, -0.25cm); } \end{tikzpicture} \caption{Functional schema of our virtual test PAN.} \label{FigPANtest} \end{figure} Our first tests clearly show an excellent synchronization between the leaf nodes and the router, thanks to the time resolution offered by the RIOT OS event management system (especially the availability of many hardware timers for direct use). This can be seen in the screenshot of our simulation in Cooja, shown in figure \ref{Screenshot}. For readability, the central portion of the timeline window of that screenshot (delimited by a thick yellow rectangle) is magnified in figure \ref{ZoomTimeline}.
\begin{figure*}[ptb] \centering \includegraphics[width=15.75cm]{S-CoSenS-Cooja10.png} \caption{Screenshot of our test simulation in Cooja. (Despite the window title mentioning Contiki, the simulated application is indeed running on RIOT OS.)} \label{Screenshot} \end{figure*} \begin{figure*}[pbt] \centering \includegraphics{S-CoSenS-Cooja10-Timeline.png} \caption{Zoom on the central part of the timeline of our simulation.} \label{ZoomTimeline} \end{figure*} In figure \ref{ZoomTimeline}, the numbers on the left side are the motes' numerical IDs: the router has ID number \textsf{1}, while the leaf nodes have IDs \textsf{2} to \textsf{11}. Grey bars represent the radio transceiver being online for a given mote; blue bars represent packet emission, and green bars correct packet reception, while red bars represent collisions (when two or more devices emit data concurrently) and thus the reception of undecipherable radio signals. Figure \ref{ZoomTimeline} covers a short amount of time (around 100~milliseconds) at the end of a duty cycle of the router: the first 20~milliseconds are the end of the SP, the remaining 80~milliseconds the WP, followed by the beginning of a new duty cycle (the TP has been disabled in our simulation). In our example, four nodes have data to transmit to the router: motes number \textsf{3}, \textsf{5}, \textsf{9}, and \textsf{10}; the other nodes (\textsf{2}, \textsf{4}, \textsf{6}, \textsf{7}, \textsf{8}, and \textsf{11}) are preparing to transmit a packet in the next duty cycle. At the instant marked by the first yellow arrow (in the top left of figure \ref{ZoomTimeline}), the SP ends and the router activates its radio transceiver to enter the WP.
Note how the four nodes that are to send packets (\textsf{3}, \textsf{5}, \textsf{9}, and \textsf{10}) also activate their radio transceivers at \emph{precisely} the same instant: this is thanks to RIOT OS's precise real-time mechanism (based on hardware timers), which allows the different nodes to synchronize precisely on the timing values transmitted in the previous beacon packet. Thanks also to that mechanism, the nodes are able to keep both their radio transceiver \emph{and} their MCU in low-power mode, since the RIOT OS kernel is interrupt-driven. During the waiting period, we also see that several collisions occur; they are resolved by the S-CoSenS protocol by forcing motes to wait a random duration before re-emitting a packet in case of conflict. In our example, our four motes can finally transmit their packets to the router in this order: \textsf{3} (after a first collision), \textsf{5}, \textsf{10} (after two other collisions), and finally \textsf{9}. Note that every time the router (device number \textsf{1}) successfully receives a packet, an acknowledgement is sent back to the emitter: see the very thin blue bars that follow each green bar on the first line. Finally, at the instant marked by the second yellow arrow (in the top right of figure \ref{ZoomTimeline}), the WP ends and a new duty cycle begins. Consequently, the router broadcasts a beacon packet containing PAN timing and synchronization data to all of the ten nodes. We can see that all six nodes waiting to transmit (\textsf{2}, \textsf{4}, \textsf{6}, \textsf{7}, \textsf{8}, and \textsf{11}) go idle after receiving this beacon (beacon packets are broadcast and thus not acknowledged): they go into low-power mode (both at the radio transceiver and MCU level), and will take advantage of RIOT's real-time features to wake up precisely when the router goes back into WP mode and is ready to receive their packets.
\subsection{Performance Evaluation: Preliminary Results} We will now present the first, preliminary results we obtained through the simulations described above. Important: note that \emph{we evaluate here the implementations}, and not the intrinsic advantages or weaknesses of the protocols themselves. We have first focused on QoS results, by computing Packet Reception Rates and end-to-end delays between the various leaf nodes and the sink of the test PAN presented earlier in figure \ref{FigPANtest}, to evaluate the quality of the transmissions allowed by each of the two protocols. For these first tests, we used default parameters for both RDC protocols (ContikiMAC and S-CoSenS), only pushing the CSMA/CA MAC layer of Contiki to make up to 8 attempts at transmitting a same packet, so as to put it on par with our implementation on RIOT OS. We have otherwise not yet tried to tweak the various parameters offered by both RDC protocols to optimize results. This will be the subject of our next experiments. \subsubsection{Packet Reception Rates (PRR)} The results obtained for PRR using both protocols are shown in figure \ref{FigPRRresults} as well as table \ref{TblPRRresults}. \begin{figure} \centering \includegraphics[width=7.5cm]{PRRgraph.png} \caption{PRR results for both ContikiMAC and S-CoSenS RDC protocols, using default values for parameters.} \label{FigPRRresults} \end{figure} \begin{table} \centering \begin{tabular}{|r|r|r|} \hline PAI \textbackslash\ Protocol & ContikiMAC & S-CoSenS \\ \hline 1500 ms & 49.70\% & 98.10\% \\ 1000 ms & 32.82\% & 96.90\% \\ 500 ms & 14.44\% & 89.44\% \\ 100 ms & 0.64\% & 25.80\% \\ \hline \end{tabular} \caption{PRR results for both ContikiMAC and S-CoSenS RDC protocols, using default values for parameters.} \label{TblPRRresults} \end{table} The advantage of S-CoSenS shown in the figure is clear and significant for every packet arrival interval tested.
Except for the ``extreme'' scenario corresponding to an over-saturation of the radio channel, S-CoSenS achieves an excellent PRR ($\gtrapprox 90\%$), while ContikiMAC's PRR is always $\lessapprox 50\%$. \subsubsection{End-To-End Transmission Delays} The results obtained for end-to-end delays using both protocols are shown in figure \ref{FigDelaysResults} and table \ref{TblDelaysResults}. \begin{figure} \centering \includegraphics[width=7.5cm]{DelaysGraph.png} \caption{End-to-end delays results for both ContikiMAC and S-CoSenS RDC protocols, using default values for parameters; note that the vertical axis is drawn with a logarithmic scale.} \label{FigDelaysResults} \end{figure} \begin{table} \centering \begin{tabular}{|r|r|r|} \hline PAI \textbackslash\ Protocol & ContikiMAC & S-CoSenS \\ \hline 1500 ms & 3579 ms & 108 ms \\ 1000 ms & 4093 ms & 108 ms \\ 500 ms & 6452 ms & 126 ms \\ 100 ms & 12913 ms & 168 ms \\ \hline \end{tabular} \caption{End-to-end delays results for both ContikiMAC and S-CoSenS RDC protocols, using default values for parameters.} \label{TblDelaysResults} \end{table} S-CoSenS clearly has the upper hand here as well, so much so that we had to use a logarithmic scale for the vertical axis to keep figure \ref{FigDelaysResults} easily readable. The advantage of S-CoSenS holds whatever the packet arrival interval, our protocol being able to keep delays below an acceptable limit (on the order of hundreds of milliseconds), while ContikiMAC delays rocket up to tens of seconds when the network load increases. \subsubsection{Summary: QoS Considerations} While these are only preliminary results, it seems that being able to leverage real-time features is clearly a significant advantage when designing and implementing MAC/RDC protocols, at least when it comes to QoS results.
\section{\uppercase{Future Works and Conclusion}} We plan, in the near future: \begin{itemize} \item to bring new contributions to the RIOT project: we are especially interested in the portability that the RIOT solution offers us; this OS is indeed being actively ported to many devices based on powerful microcontrollers using the ARM Cortex-M architecture (especially Cortex-M3 and Cortex-M4), and we intend to help in this porting effort, especially on the high-end IoT motes we seek to use in our work (e.g.: as advanced FFD nodes with a full network stack, or routers); \item to use the power of this OS to further advance our work on MAC/RDC protocols; more precisely, we are implementing other innovative MAC/RDC protocols---such as iQueue-MAC \cite{iQueueMAC}---under RIOT, taking advantage of its high-resolution real-time features to obtain excellent performance, optimal energy consumption, and out-of-the-box portability. \end{itemize} RIOT is a powerful real-time operating system, adapted to the limitations of deeply embedded hardware microcontrollers, while offering state-of-the-art techniques (preemptive multitasking, tickless scheduler, optimal use of hardware timers) that---we believe---make it one of the most suitable OSes for the embedded and real-time world. While we were not yet able to accurately quantify energy consumption, we can reasonably expect that lowering the activity of the MCU and radio transceiver will significantly reduce the energy consumption of devices running RIOT OS. This will be the subject of some of our future research work. \bigskip Currently, RIOT OS supports high-level IoT protocols (6LoWPAN/IPv6, RPL, TCP, UDP, etc.). However, it still lacks high-performance MAC~/ RDC layer protocols. Through this work, we have shown that RIOT OS is also suitable for implementing high-performance MAC~/ RDC protocols, thanks to its real-time features (especially its hardware timers management).
Moreover, we have improved the robustness of the existing ports of RIOT OS on MSP430, making it a suitable software platform for tiny motes and devices. \vfill \bibliographystyle{apalike} {\small
\section*{Abstract} Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for associations between genotype and phenotype. In general, these statistical methods view the compound effects of multiple genes on a phenotype as a sum of partial influences of each individual gene and can often miss a substantial part of the heritable effect. Such methods do not make use of any biological knowledge about underlying genotype-phenotype mechanisms. Modeling approaches from the Artificial Intelligence field that incorporate deterministic knowledge into models while performing statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), which employ first-order logic as a framework for combining deterministic knowledge with statistical analysis. Using simple, logistic-regression-type MLNs, we have been able to replicate the results of traditional statistical methods. Moreover, we show that even with quite simple models we are able to go beyond finding independent markers linked to a phenotype by using joint inference that avoids an independence assumption. The method is applied to genetic data on yeast sporulation, a phenotype known to be governed by non-linear interactions between genes. In addition to detecting all of the previously identified loci associated with sporulation, our method is able to identify four additional loci with small effects on sporulation. Since their effect on sporulation is small, these four loci were not detected with standard statistical methods that do not account for dependence between markers due to gene interactions.
We show how gene interactions can be detected using more complex models, which in turn can be used as a general framework for incorporating systems biology with genetics. We propose such future work, which will embody systems knowledge in probabilistic models. \section*{Author Summary} We have taken up the challenge of devising a framework for the analysis of genetic data that is fully functional in the usual statistical correlation analysis used in genome-wide association studies, but also capable of incorporating prior knowledge about biological systems relevant to the genetic phenotypes. We develop a general genetic analysis approach that meets this challenge. We adapt an AI method for learning models, called Markov Logic Networks, which is based on the fusion of Markov Random Fields with first-order logic. Our adaptation of the Markov Logic Network method for genetics allows very complex constraints and a wide variety of model classes to be imposed on probabilistic, statistical analysis. We illustrate the use of the method by analyzing a data set based on sporulation efficiency from yeast, in which we demonstrate gene interactions and identify a number of new loci involved in determining the phenotype. \section*{Introduction} Genome-wide association studies (GWAS) have allowed the detection of many genetic contributions to complex phenotypes in humans (see \emph{www.genome.gov}). Studies of biological networks of different kinds, including genetic regulatory networks, protein-protein interaction networks and others, have made it clear, however, that gene interactions are abundant and are therefore of likely importance for genetic analysis~\cite{Manolio09}. Complex, non-additive interactions between genetic variations are very common and can potentially play a crucial role in determining phenotypes~\cite{Brem05,Drees05,Carter07,CarterDudley09}.
GWAS and similar statistical methods such as classical QTL studies generally assume additive models of gene interaction that attempt to capture a compound effect of multiple genes on a phenotype as a sum of partial influences of each individual gene~\cite{HirschhornDaly05,McCarthy08}. These statistical methods also incorporate no biological knowledge about the underlying processes or phenotypes. Since biological networks are complex, and since variations are numerous, unconstrained searches for associations between genotype and phenotype require large population samples, and can succeed only in detecting a limited range of effects. Without imposing any constraints based on biological knowledge, searching for gene interactions is very challenging, particularly when the input data consist of different data types coming from various sources. The major question that motivated this work is ``\emph{Can we constrain traditional statistical approaches by using biological knowledge to define some known networks that influence patterns in the data, and can such approaches produce more complete genetic models?}'' For example, we might use the patterns present in the genotype data to build more predictive models based on both genotype and phenotype data. Note that the problem of using biological knowledge to constrain a model of genetic interaction is closely connected to the problem of integrating various types of data in a single model. In this article we employ a known Artificial Intelligence (AI) approach (Markov Logic Networks) to reformulate the problem of defining and finding genetic models in a general way and use it to facilitate detection of non-additive gene interactions. This approach allows us to lay the foundations for studies of essentially any kind of genetic model, which we demonstrate for a relatively simple model.
Markov Logic Networks (MLNs) are one of the most general approaches to statistical relational learning, a sub-field of machine learning that combines two kinds of modeling: probabilistic graphical models, namely Markov Random Fields, and first-order logic. Probabilistic graphical models, first proposed by Pearl~\cite{Pearl88}, offer a way to represent joint probability distributions of sets of random variables in a compact fashion. A graphical structure describing the probabilistic independence relationships in these models allows the development of numerous algorithms for learning and inference and makes these models a good choice for handling uncertainty and noise in data. On the other hand, first-order logic allows us to represent and perform inferences over complex, relational domains. Propositional (Boolean) logic, which biologists are most familiar with, describes the truth state on the level of specific instances, while first-order logic allows us to make assertions about the truth state of relations between subsets (classes) of instances. Moreover, using first-order logic we can represent recursive and potentially infinite structures such as Markov chains where a temporal dependency of the current state on the state at the previous time step can be instantiated to an infinite time series. Thus, first-order logic is a very flexible choice for representing general knowledge, such as that we encounter in biology. MLNs merge probabilistic graphical models and first-order logic in a framework that gains the benefits of both representations. Most importantly, the logic component of MLNs provides an interface for adding biological knowledge to a model through a set of first-order constraints. At the same time, MLNs can be seen as a generalization of probabilistic graphical models since any distribution represented by the latter can be represented by the former, and this representation is more compact due to the first-order logic component.
Moreover, various learning and inference algorithms for probabilistic graphical models are applicable to MLNs and are thereby enhanced with logic inference. One key advantage of logic-based probabilistic modeling methods, and in particular MLNs, is that they allow us to work easily with data that are not independent and identically distributed (not i.i.d.). Many statistical and machine learning methods assume that the input data are i.i.d., a very strong, and usually artificial, property that most biological problems do not share. For instance, biological variables most often have a spatial or temporal structure, or can even be explicitly described in a relational database with multiple interacting relations. MLNs thus provide a means for non-i.i.d. learning and joint inference of a model. While the input data used in GWAS and in other genetic studies are rich in complex statistical interdependencies between the data points, MLNs can easily deal with any of these data structures. There are various modeling techniques that employ both probabilistic graphical models and first-order logic~\cite{Poole93,NgoHaddawy97,GlesnerKoller95,SatoKameya97,DeRaedt07,Friedman99,KerstingDeRaedt00,Pless06,RichardsonDomingos06}. Many of them impose different restrictions on the underlying logical representation in order to be able to map the logical knowledge base to a graphical model. One common restriction employed, for example, in~\cite{SatoKameya97,DeRaedt07,KerstingDeRaedt00,Pless06} is to use only \emph{clausal} first-order formulas of the form $b_1 \land b_2 \land \ldots \land b_n \Rightarrow h$ that represent cause-effect relationships. The majority of the methods, such as those introduced in~\cite{NgoHaddawy97,GlesnerKoller95,KerstingDeRaedt00}, use Bayesian networks, directed graphical models, as the probabilistic representation.
However, there are a few approaches~\cite{Pless06,RichardsonDomingos06} that instead use Markov Random Fields, undirected graphical models, to perform inferences. We use Markov Logic Networks~\cite{RichardsonDomingos06}, which merge unrestricted first-order logic with Markov Random Fields, and as a result use the most general probabilistic logic-based modeling approach. In this paper we present this MLN-based approach to understanding complex systems and data sets. Similar to~\cite{Yi05}, which proposed a Bayesian approach for inferring models for QTL detection, our MLN-based approach is a model inference method that goes beyond just hypothesis testing. Moreover, in this paper we describe how we have adapted and applied MLNs to genetic analysis so that complex biological knowledge can be included in the models. We have applied the method to a relatively simple genetic system and data set, the analysis of the genetics of sporulation efficiency in the budding yeast \emph{Saccharomyces cerevisiae}. In this system, recently analyzed by Cohen and co-workers~\cite{Gerke09}, two genetically and phenotypically diverse yeast strains, whose genomes were fully characterized, were crossed and the progeny studied for the genetics of the sporulation phenotype. This provided a genetically complex phenotype with a well-defined genetic context to which to apply our method. \section*{Methods} \subsection*{Markov Random Fields} Consider a set of random variables of the same type, $\mathbf{X}=\{X_i:1 \le i \le N\}$, and a set of possible values (alphabet) $\mathbf{A}=\{A_i:1 \le i \le M\}$, so that any variable can take any value from $A_1$ to $A_M$ (it is easy to extend this to the case of multiple variable types). Consider also a graph, $G$, whose vertices represent variables, $\mathbf{X}$, and whose edges represent probabilistic dependencies among the vertices such that a local Markov property is met.
The local Markov property for a random variable $X_i$ can be formally written as $\Pr(X_i=A_j \mid \mathbf{X} \setminus \{X_i\})=\Pr(X_i=A_j \mid N(X_i))$ that states that a state of random variable $X_i$ is conditionally independent of all other variables given $X_i$'s neighbors $N(X_i)$, $N(X_i) \subseteq \mathbf{X} \setminus \{X_i\}$. Let $\mathbf{C}$ denote the set of all cliques in $G$, where a clique is a subgraph that contains an edge for every pair of its nodes (a complete subgraph). Consider a configuration, $\gamma$, of $\mathbf{X}$ that assigns each variable, $X_i$, a value from $\mathbf{A}$. We denote the space of all configurations as $\mathbf{\Gamma}=\mathbf{A}^{\mathbf{X}}$. A restriction of $\gamma$ to the variables of a specific clique $C$ is denoted by $\gamma_C$. A \emph{Markov Random Field} (MRF) is defined on $\mathbf{X}$ by a graph $G$ and a set of potentials $\mathbf{V} = \{ V_C(\gamma_C): C \in \mathbf{C}, \gamma \in \mathbf{\Gamma} \}$ assigned to the cliques of the graph. Using cliques allows us to explicitly define the topology of models, making MRFs convenient to model long-range, higher-order connections between variables. We encode the relationships between the variables using the clique potentials. By the Hammersley-Clifford theorem, a joint probability distribution represented by an MRF is given by the following Gibbs distribution \begin{equation} \Pr(\gamma) = \frac{1}{Z} \prod_{C \in \mathbf{C}} \exp \left(-V_C(\gamma_C) \right), \label{eq1} \end{equation} where the so-called partition function $Z = \sum_{\gamma \in \mathbf{\Gamma}} \prod_{C \in \mathbf{C}} \exp(-V_C(\gamma_C))$ normalizes the probability to ensure that $\sum_{\gamma \in \mathbf{\Gamma}} \Pr(\gamma) = 1$. 
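As a concrete illustration, both the Gibbs distribution of Eq.~(\ref{eq1}) and the local Markov property can be checked by brute-force enumeration on a toy chain-shaped MRF. The Python sketch below uses illustrative potentials of our own choosing (disagreeing neighbors are penalized); it is not part of the analysis pipeline described in this paper.

```python
import itertools
import math

# Toy chain MRF X1 - X2 - X3 with cliques {X1, X2} and {X2, X3}.
# Illustrative pairwise potential: disagreeing neighbors are penalized.
def V(a, b):
    return 0.0 if a == b else 1.0

def weight(gamma):
    x1, x2, x3 = gamma
    return math.exp(-V(x1, x2)) * math.exp(-V(x2, x3))

configs = list(itertools.product([0, 1], repeat=3))
Z = sum(weight(g) for g in configs)        # partition function
P = {g: weight(g) / Z for g in configs}    # Gibbs distribution, Eq. (1)
assert abs(sum(P.values()) - 1.0) < 1e-12

# Local Markov property: given its only neighbor X2, X1 is
# conditionally independent of X3.
def cond_x1(x1, x2, x3):
    return P[(x1, x2, x3)] / sum(P[(t, x2, x3)] for t in (0, 1))

assert abs(cond_x1(1, 0, 0) - cond_x1(1, 0, 1)) < 1e-12
```

Because the chain factorizes over the two cliques, the conditional distribution of $X_1$ given $X_2$ does not change when $X_3$ changes, which is exactly the local Markov property stated above.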
Without loss of generality we can represent a Markov Random Field as a log-linear model~\cite{Pietra97}: \begin{equation} \Pr(\gamma)=\frac{1}{Z} \exp \left(\sum_i w_i f_i(\gamma) \right), \label{eq2} \end{equation} where $f_i: \mathbf{\Gamma} \rightarrow \mathbb{R}$ are functions defining features of the MRF and $w_i \in \mathbb{R}$ are the weights of the MRF. Usually, the features are indicators of the presence or absence of some attribute, and hence are binary. For instance, we can consider a feature function that is $1$ when some $X_i$ has a particular value and $0$ otherwise. Using these types of indicators, we can make $M$ different features for $X_i$ that can take on $M$ different values. Given some configuration $\gamma_{\mathbf{X} \setminus \{X_i\}}$ of all the variables $\mathbf{X}$ except $X_i$, we can have a different weight for this configuration whenever $X_i$ has a different value. The weights for these features capture the affinity of the configuration $\gamma_{\mathbf{X} \setminus \{X_i\}}$ for each value of $X_i$. Note that the functions defining features can overlap in arbitrary ways providing representational flexibility. One simple mapping of a traditional MRF to a log-linear MRF is to use a single feature $f_i$ for each configuration $\gamma_C$ of every clique $C$ with the weight $w_i = - V_C(\gamma_C)$. Even though in this representation the number of features (the number of configurations) increases exponentially as the size of cliques increases, the Markov Logic Networks described in the next section attempt to reduce the number of features involved in the model specification by using logical functions of the cliques' configurations. Given an MRF, a general problem is to find a configuration $\gamma$ that maximizes the probability $\Pr(\gamma)$. Since the space $\mathbf{\Gamma}$ is very large, performing an exhaustive search is intractable. 
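The simple mapping just mentioned, one binary feature per clique configuration with weight $w_i = -V_C(\gamma_C)$, can be verified on the same kind of toy chain model. The sketch below (our own illustration, with made-up potentials) checks that the log-linear form of Eq.~(\ref{eq2}) reproduces the clique-potential form of Eq.~(\ref{eq1}) exactly.

```python
import itertools
import math

# Chain MRF over (X1, X2, X3); illustrative pairwise potential.
def V(a, b):
    return 0.0 if a == b else 1.0

configs = list(itertools.product([0, 1], repeat=3))

# Clique-potential (Gibbs) form, Eq. (1).
def gibbs_weight(g):
    return math.exp(-V(g[0], g[1]) - V(g[1], g[2]))

# Log-linear form, Eq. (2): one binary indicator feature per clique
# configuration, with weight w = -V_C(gamma_C).
features = []
for a, b in itertools.product([0, 1], repeat=2):
    features.append((lambda g, a=a, b=b: g[0] == a and g[1] == b, -V(a, b)))
    features.append((lambda g, a=a, b=b: g[1] == a and g[2] == b, -V(a, b)))

def loglinear_weight(g):
    return math.exp(sum(w for f, w in features if f(g)))

Z1 = sum(gibbs_weight(g) for g in configs)
Z2 = sum(loglinear_weight(g) for g in configs)
# Both normalized distributions coincide configuration by configuration.
assert all(abs(gibbs_weight(g) / Z1 - loglinear_weight(g) / Z2) < 1e-12
           for g in configs)
```

For every configuration exactly one indicator per clique fires, so the sum of active weights equals $-\sum_C V_C(\gamma_C)$ and the two representations agree term by term.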
For many applications, there are two kinds of information available: prior knowledge about the constraints imposed on the simultaneous configuration of connected variables; and observations about these variables for a particular instance of the problem. The constraints constitute the model of the world and reflect statistical dependencies between values of the neighbors captured in an MRF. For example, when modeling gene association with phenotype, the restrictions on the likelihood of configurations of co-expressed genes may be cast as an MRF with cliques of size 2 and 3 (see figure~\ref{fig1}). In the next section, we give a biological example involving construction of MRFs with cliques of size 3 and 4, and provide more mathematical details. \begin{figure}[!ht] \begin{center} \includegraphics[width=4in]{mrf-example.eps} \end{center} \caption{\textbf{An example of a Markov Random Field.} The MRF represents four genes influencing a phenotype with potentials $V_1, \ldots, V_4$ (blue edges). This model restricts genetic interactions to two pair-wise interactions with potentials $V_5, V_6$ (red cliques).} \label{fig1} \end{figure} \subsection*{Markov Logic Networks: Mapping First-Order Logic to Markov Random Fields} Markov Logic Networks merge Markov Random Fields with first-order logic. In first-order logic (FOL) we distinguish \textbf{constants} and \textbf{variables} that represent objects and their classes in a domain, as well as \textbf{functions} specifying mappings between subgroups of objects, and \textbf{predicates} representing relations among objects or their attributes. We call a predicate \emph{ground} if all its variables are assigned specific values. To illustrate, consider a study of gene interactions through the phenotypic comparison of wild type strains, single mutants, and double mutants such as the one presented in~\cite{Drees05}. 
Consider a set of constants representing genes, $\{ \mathtt{g1,g2} \}$, gene interaction labels, $\{ \mathtt{A,B} \}$, and difference between phenotype values of mutants, $\{ \mathtt{0,1,2} \}$. We define the following predicates: $\mathtt{RelWS/2}$ (a 2-argument predicate which captures a relation between a wild type and a single mutant), $\mathtt{RelWD/3}$ (a relation between a wild type and a double mutant), $\mathtt{RelSS/3}$ (a relation between two single mutants), $\mathtt{RelSD/4}$ (a relation between a single mutant and a double mutant), $\mathtt{Int/3}$ (an interaction between two genes). Using FOL we can define a knowledge base consisting of two formulas: $$ \begin{array}{l} \displaystyle \mathtt{\forall x,y \in \{g1,g2\},\forall c \in \{0,1\},\forall v,u \in \{0,1,2\}, Int(x,y,c) \Rightarrow (RelWS(x,v) \Leftrightarrow RelSD(y,x,y,u))}\\ \displaystyle \mathtt{\forall x,y \in \{g1,g2\},\forall c \in \{0,1\},\forall v,u,w \in \{0,1,2\}, RelWS(x,v) \land RelWS(y,u) \land RelWD(x,y,w) \Rightarrow Int(x,y,c).} \end{array} $$ The first rule represents the knowledge that depending on the type of interaction between two genes, there is a dependency between $\mathtt{RelWS(x,v)}$ and $\mathtt{RelSD(y,x,y,u)}$ relations. The second rule captures the knowledge that three relations, $\mathtt{RelWS(x,v)}$, $\mathtt{RelWS(y,u)}$, and $\mathtt{RelWD(x,y,w)}$, together determine the type of gene interaction. Note that first-order formulas define relations between (potentially infinite) groups of objects or their attributes. Formulas in FOL can be seen as relational templates for constructing models in propositional logic. Therefore, FOL offers a compact way of representing and aggregating relational data. For example, two first-order formulas above can be replaced with 288 propositional formulas since variables $\mathtt{x,y,c}$ can be assigned 2 different values and variables $\mathtt{u,v,w}$ can be assigned 3 values. 
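The count of 288 propositional instantiations can be checked mechanically. The short sketch below (our own check, with the domains taken from the example above) enumerates the ground instances of both formulas.

```python
import itertools

# Domains from the example: genes, interaction labels, and
# differences between phenotype values.
genes = ["g1", "g2"]
labels = [0, 1]
diffs = [0, 1, 2]

# Formula 1 quantifies over x, y, c, v, u;
# formula 2 quantifies over x, y, c, v, u, w.
f1_groundings = len(list(itertools.product(genes, genes, labels, diffs, diffs)))
f2_groundings = len(list(itertools.product(genes, genes, labels, diffs, diffs, diffs)))

assert f1_groundings == 72       # 2 * 2 * 2 * 3 * 3
assert f2_groundings == 216      # 2 * 2 * 2 * 3 * 3 * 3
assert f1_groundings + f2_groundings == 288
```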
Moreover, using the representational power of FOL, we can specify infinite structures such as temporal relations, e.g., $\mathtt{Expression(e1, t1) \land NextTimeStep(t1, t2) \Rightarrow Expression(e2, t2)}$, that can give rise to a theoretically infinite number of propositions. The principal limitation of any strictly formal logic system is that it is not suitable for real applications where data contain uncertainty and noise. For example, the formulas specified earlier hold for the real data most of the time, but not always. If there is at least one data point where a formula does not hold, then the entire model is disregarded as being false. Allowing only two states, true or false, is equivalent to allowing only probability values $1$ or $0$. Markov Logic Networks, however, relax this constraint by allowing a model with unsatisfied formulas, albeit with a probability less than one. The model with the smallest number of unsatisfied formulas will then be the most probable. Markov Logic Networks (MLNs) extend FOL by assigning a weight to each formula indicating its probabilistic strength. An MLN is a collection of first-order formulas $F_i$ with associated weights $w_i$. For each variable of a Markov Logic Network there is a finite set of constants representing the domain of the variable. A Markov Logic Network together with its corresponding constants is mapped to a Markov Random Field as follows. Given the set of all predicates of an MLN, every ground predicate of the MLN corresponds to one random variable of a Markov Random Field whose value is $1$ if the ground predicate is true and $0$ otherwise. Similarly, every ground formula of $F_i$ corresponds to one feature of the log-linear Markov Random Field whose value is $1$ if the ground formula is true and $0$ otherwise. The weight of the feature in the Markov Random Field is the weight $w_i$ associated with the formula $F_i$ in the Markov Logic Network.
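The mapping just described can be sketched on a toy domain. Below (a hypothetical two-strain example of our own, not the yeast data set), a single weighted formula $\mathtt{G(s)} \Rightarrow \mathtt{E(s)}$ is grounded over two strains: each ground predicate becomes a binary variable and each ground formula a binary feature, so a world's unnormalized probability is $\exp(w \cdot n)$, where $n$ is the number of satisfied groundings.

```python
import itertools
import math

# One weighted formula F: G(s) => E(s) with weight w, grounded over
# two strains; a world is the state (G(s1), E(s1), G(s2), E(s2)).
w = 1.5

def n_true(state):
    g1, e1, g2, e2 = state
    # An implication is false only when its antecedent is true and
    # its consequent is false.
    return sum(1 for g, e in [(g1, e1), (g2, e2)] if (not g) or e)

states = list(itertools.product([0, 1], repeat=4))
Z = sum(math.exp(w * n_true(s)) for s in states)
P = {s: math.exp(w * n_true(s)) / Z for s in states}

# Worlds violating more groundings of F are exponentially less
# likely, but still have non-zero probability.
assert P[(1, 1, 1, 1)] > P[(1, 0, 1, 1)] > P[(1, 0, 1, 0)] > 0.0
```

This is the softened semantics described above: a world that falsifies a grounding is penalized by a factor of $e^{-w}$ per violation rather than being ruled out entirely.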
From the original definitions (\ref{eq1}) and (\ref{eq2}) and the fact that features of false ground formulas are equal to $0$, the probability distribution represented by a \emph{ground} Markov Logic Network is given by \begin{equation} \Pr(\gamma)=\frac{1}{Z} \exp \left(\sum_i w_i n_i(\gamma)\right) = \frac{1}{Z} \prod_i \exp \left(-V_i(\gamma_i)n_i(\gamma)\right), \label{eq3} \end{equation} where $n_i(\gamma)$ is a number of true ground formulas of the formula $F_i$ in the state $\gamma$ (which directly corresponds to our data), $\gamma_i$ is the configuration (state) of the ground predicates appearing in $F_i$. $V_i$ is a potential function assigned to a clique which corresponds to $F_i$, and $\exp(-V_i(\gamma_i))=\exp(w_i)$. Note that this probability distribution would change if we changed the original set of constants. Thus, one can view MLNs as templates specifying classes of Markov Random Fields, just like FOL templates specifying propositional formulas. Figure~\ref{fig2} illustrates a portion of a Markov Random Field corresponding to the ground MLN. We assume a set of constants and an MLN specified by the knowledge base from the example above, where a weight is assigned to each formula. \begin{figure}[!ht] \begin{center} \includegraphics[width=4in]{mln-example.eps} \end{center} \caption{ \textbf{An example of a subnetwork of a Markov Random Field unfolded from a Markov Logic Network program.} Each node of the MRF corresponds to a ground predicate (a predicate with variables substituted with constants). The nodes of all ground predicates that appear in a single formula form a clique such as the one highlighted with red. The blue triangular cliques correspond to the first formula of the MLN and are assigned the weight of the formula (2.1). 
The larger rectangular cliques, such as the one colored red, correspond to the second formula of the MLN with the weight 1.5.} \label{fig2} \end{figure} \subsection*{An Example: Application to Yeast Sporulation Dataset} We applied our method to a data set generated by Cohen and co-workers~\cite{Gerke09}. They generated and characterized a set of 374 progeny of a cross between two yeast strains that differ widely in their efficiency of sporulation (a wine and an oak strain). For each of the progeny the sporulation efficiency was measured and assigned a normalized real value between 0 and 1. To generate a discrete value set we then binned and mapped the sporulation efficiencies into 5 integer values. Each yeast progeny strain was genotyped at 225 markers that are uniformly distributed along the genome. Each marker takes on one of two possible values indicating whether it derived from the oak or wine parent genotype. Using Markov Logic Networks, we first model the effect of a single marker on the phenotype, i.e., sporulation efficiency. Define a logistic-regression-type model with the following set of formulas: \begin{equation} \mathtt{\forall strain \in \{1, \ldots ,374\}, \hspace{5pt} G(strain,m,g) \Rightarrow E(strain,v), \hspace{5pt} w_{m,g,v}}, \label{eq4} \end{equation} for every marker $\mathtt{m}$ under consideration (at this point we consider one marker in a model), genotype value $\mathtt{g} \in \{ \mathtt{A},\mathtt{B}\}$, and phenotype value $\mathtt{v} \in \{\mathtt{1}, \ldots, \mathtt{5}\}$. This Markov Logic Network contains two predicates, $\mathtt{G}$ and $\mathtt{E}$. Predicate $\mathtt{G}$ denotes markers' genotype values across yeast crosses, e.g., $\mathtt{G(strain,M1,B)}$ captures all yeast crosses for which the genotype of a marker $\mathtt{M1}$ is $\mathtt{B}$.
Similarly, predicate $\mathtt{E}$ denotes the phenotype (sporulation efficiency) across yeast crosses, for instance, $\mathtt{E(strain,1)}$ captures all yeast strains for which the level of sporulation efficiency is $\mathtt{1}$. The Markov Logic Network (\ref{eq4}) contains 10 formulas, 1 marker of interest times 2 possible genotype values times 5 possible phenotype values. Each formula represents a pattern that holds true across all yeast crosses (indicated by the variable $\mathtt{strain}$) with the same strength (indicated by the weight $\mathtt{w_{m,g,v}}$). In other words, the weight $\mathtt{w_{m,g,v}}$ represents the fitness of the corresponding formula across all strains. Instantiations of the predicate $\mathtt{G}$ represent a set of predictor variables, whereas instantiations of the predicate $\mathtt{E}$ represent a set of target variables (\ref{eq4}). There are 748 ground predicates of $\mathtt{G}$ (assuming we handle only one marker in a model) and 1870 ground predicates of $\mathtt{E}$. Each ground predicate corresponds to a random variable in the corresponding Markov Random Field (see the previous section for more details). Since our MLN contains 10 formulas and there are 374 possible instantiations for each formula, the corresponding log-linear Markov Random Field contains 3740 features, one for each instantiation of every formula. \subsection*{Learning the Weights of MLNs} Each data point in the original dataset corresponds to one ground predicate (either $\mathtt{E}$ or $\mathtt{G}$ in our example). For example, the information that a genotype value of a marker $\mathtt{M71}$ in a strain $\mathtt{S13}$ is equal to $\mathtt{A}$ corresponds to a ground predicate $\mathtt{G(S13,M71,A)}$. 
Therefore, the original dataset can be represented with a collection of ground predicates that logically hold, which in turn is described as a data vector $\mathbf{d} = \langle d_1, \ldots, d_N \rangle$, where $N$ is the number of all possible ground predicates ($N=2618$ in this example). An element $d_i$ of the vector $\mathbf{d}$ is equal to $1$, if the $i$th ground predicate (assuming some order) is true and thus is included in our collection, and $0$ otherwise. Note that this vector representation is possible under a \emph{closed world assumption} stating that all ground predicates that are not listed in our collection are assumed to be false. In order to carry out training of a Markov Logic Network we can use standard Newtonian methods for likelihood maximization. The learning proceeds by iteratively improving weights of the model. At the $j$th step, given weights $\mathbf{w}^{(j)}$, we compute $\nabla_{\mathbf{w}^{(j)}} L(\mathbf{w}^{(j)})$, the gradient of the likelihood, which is our objective function that we maximize. Consequently, we improve the weights by moving in the direction of the positive gradient, $\mathbf{w}^{(j+1)} = \mathbf{w}^{(j)} + \alpha \nabla_{\mathbf{w}^{(j)}} L(\mathbf{w}^{(j)})$, where $\alpha$ is the step size. Recall that the likelihood is given by \begin{equation} L(\mathbf{w} \mid \gamma) = \Pr(\gamma \mid \mathbf{w}) = \frac{1}{Z} \exp \left(\sum_i w_i n_i(\gamma)\right) = \frac{\exp \left(\sum_i w_i n_i(\gamma)\right)} {\sum_{\gamma' \in \mathbf{\Gamma}} \exp \left(\sum_i w_i n_i(\gamma')\right)}, \label{eq5} \end{equation} where $\gamma$ is a state (also called a configuration) of the set of random variables $\mathbf{X}$ and $\mathbf{\Gamma}$ is a space of all possible states. Therefore, the log-likelihood is \begin{equation} \log L(\mathbf{w} \mid \gamma) = \log \Pr(\gamma \mid \mathbf{w}) = \sum_i w_i n_i(\gamma) - \log \left[ \sum_{\gamma' \in \mathbf{\Gamma}} \exp\left(\sum_i w_i n_i(\gamma')\right) \right]. 
\label{eq6} \end{equation} Now derive the gradient with respect to the network weights, \begin{equation} \begin{array}{l} \displaystyle \frac{\partial}{\partial w_j} \log L(\mathbf{w} \mid \gamma) = n_j(\gamma) - \frac{1}{\sum_{\gamma' \in \mathbf{\Gamma}} \left[ Z \Pr(\gamma') \right]} \frac{\partial}{\partial w_j} \sum_{\gamma' \in \mathbf{\Gamma}} \exp \left(\sum_i w_i n_i(\gamma')\right)\\ \displaystyle = n_j(\gamma) - \frac{1}{\sum_{\gamma' \in \mathbf{\Gamma}} \left[ Z \Pr(\gamma') \right]} \sum_{\gamma' \in \mathbf{\Gamma}} \left[ n_j(\gamma')\exp \left(\sum_i w_i n_i(\gamma')\right) \right]\\ \displaystyle = n_j(\gamma) - \sum_{\gamma' \in \mathbf{\Gamma}} \left[ n_j(\gamma')L(\mathbf{w} \mid \gamma') \right]. \end{array} \label{eq7} \end{equation} Note that the sum is computed over \emph{all possible} variable states $\gamma'$. The above expression shows that each component of the gradient is the difference between the number of true instances of the corresponding formula $F_j$ (the number of true ground formulas of $F_j$) and the expected number of true instances of $F_j$ according to the current model. However, computing both components of this difference is intractable. Since the exact number of true ground formulas cannot be tractably computed from data \cite{RichardsonDomingos06}, the number is approximated by sampling the instances of the formula and checking their truth values according to the data. On the other hand, it is also intractable to compute the expected number of true ground formulas as well as the log-likelihood $L(\mathbf{w} \mid \gamma')$. The former involves inference over the model, whereas the latter requires computing the partition function $Z = \sum_{\gamma' \in \mathbf{\Gamma}} \exp \left(\sum_i w_i n_i(\gamma') \right)$.
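On a model small enough to enumerate, the gradient formula of Eq.~(\ref{eq7}) can be checked directly against numerical differentiation. The sketch below uses two toy features over three binary ground predicates, with arbitrary illustrative counts $n_i$ (not the sporulation model).

```python
import itertools
import math

# Two toy formulas with true-grounding counts n_1, n_2 given as
# functions of a state of three binary ground predicates
# (illustrative counts only).
def n(gamma):
    x1, x2, x3 = gamma
    return [x1 + x2, x2 * x3 + (1 - x1)]

configs = list(itertools.product([0, 1], repeat=3))

def log_likelihood(w, gamma):
    Z = sum(math.exp(sum(wi * ni for wi, ni in zip(w, n(g)))) for g in configs)
    return sum(wi * ni for wi, ni in zip(w, n(gamma))) - math.log(Z)

def gradient(w, gamma):
    # d/dw_j log L = n_j(gamma) - E[n_j]: observed minus expected
    # number of true groundings, as in Eq. (7).
    Z = sum(math.exp(sum(wi * ni for wi, ni in zip(w, n(g)))) for g in configs)
    expected = [0.0, 0.0]
    for g in configs:
        p = math.exp(sum(wi * ni for wi, ni in zip(w, n(g)))) / Z
        for j in range(2):
            expected[j] += n(g)[j] * p
    return [n(gamma)[j] - expected[j] for j in range(2)]

# Central-difference check of the analytic gradient.
w, data, eps = [0.3, -0.7], (1, 0, 1), 1e-6
for j in range(2):
    wp, wm = list(w), list(w)
    wp[j] += eps
    wm[j] -= eps
    numeric = (log_likelihood(wp, data) - log_likelihood(wm, data)) / (2 * eps)
    assert abs(numeric - gradient(w, data)[j]) < 1e-5
```

The enumeration over $\mathbf{\Gamma}$ that makes this check possible here is exactly what becomes intractable for realistic numbers of ground predicates, motivating the pseudo-likelihood approximation discussed next.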
One solution, proposed in~\cite{RichardsonDomingos06}, is to maximize the \emph{pseudo-likelihood} \begin{equation} \hat{L}(\mathbf{w} \mid \gamma) = \prod_{j=1}^N \Pr(\gamma_j \mid \gamma_{MB_j};\mathbf{w}), \label{eq8} \end{equation} where $\gamma_j$ is a restriction of the state $\gamma$ to the $j$th ground predicate and $\gamma_{MB_j}$ is a restriction of $\gamma$ to what is called a Markov blanket of the $j$th ground predicate (the state of the Markov blanket according to our data). We elected to use this approach. Similar to the original definition of the Markov blanket in the context of Bayesian networks \cite{Pearl88}, the \emph{Markov blanket} of a ground predicate is the set of other ground predicates that appear together with it in some ground formula. Using the yeast sporulation example, the set of ground predicates $\{ \mathtt{ \forall m, \forall g \mid G(S1,m,g) } \}$ is the Markov blanket of $\mathtt{E(S1,1)}$ due to the knowledge base (\ref{eq4}). Maximization of pseudo-likelihood is computationally more efficient than maximization of likelihood, since it does not involve inference over the model, and thus does not require marginalization over a large number of variables. Currently, we use the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm from the \emph{Alchemy} implementation of MLNs \cite{Kok09} to optimize the pseudo-likelihood. \subsection*{Using MLNs for Querying} After learning is complete, we have a trained Markov Logic Network that can be used for various types of inference. In particular, we can answer such queries as ``\emph{what is the probability that a ground predicate $Q$ is true given that every predicate from a set (a conjunction) of ground predicates $Ev=\{E_1, \ldots, E_m\}$ is true?}'' The ground predicate $Q$ is called a \emph{query} and the set $Ev$ is called the \emph{evidence}. Answering this query amounts to computing the probability $\Pr(Q \mid E_1 \land \ldots \land E_m, MLN)$.
Using the product rule for probabilities we get \begin{equation} \begin{array}{l} \displaystyle \Pr(Q \mid E_1 \land \ldots \land E_m, MLN) = \frac{\Pr(Q \land E_1 \land \ldots \land E_m \mid MLN)}{\Pr(E_1 \land \ldots \land E_m \mid MLN)}\\ \displaystyle = \frac{\sum_{\gamma \in \mathbf{\Gamma}_Q \cap \mathbf{\Gamma}_E} \Pr(\gamma \mid MLN)} {\sum_{\gamma \in \mathbf{\Gamma}_E} \Pr(\gamma \mid MLN)}, \end{array} \label{eq9} \end{equation} where $\mathbf{\Gamma}_P$ is the set of \emph{all possible configurations} in which a ground predicate $P$ is true, and $\mathbf{\Gamma}_E = \mathbf{\Gamma}_{E_1} \cap \ldots \cap \mathbf{\Gamma}_{E_m}$. Computing the above probabilistic query exactly is intractable for the majority of real application problems, including those posed by the complexity encountered in systems biology. Therefore, we need to approximate $\Pr(Q \mid E_1 \land \ldots \land E_m, MLN)$, which can be done using various sampling-based algorithms. Markov Logic Networks adopt a \emph{knowledge-based model construction} approach consisting of two steps: 1. constructing the smallest Markov Random Field from the original MLN that is sufficient for computing the probability of the query, and 2. performing inference over this Markov Random Field using traditional approaches. One of the commonly used inference algorithms is Gibbs sampling, in which at each step we sample a ground predicate $X_j$ given its Markov blanket. In order to define the probability of the node $X_j$ being in the state $\gamma_j$ given the state of its Markov blanket we use the earlier notation. Given the $j$th ground predicate $X_j$, the set of all formulas containing $X_j$ is denoted by $\mathbf{F}_j$. We denote the Markov blanket of $X_j$ as $MB_j$ and a restriction of $\gamma$ to the Markov blanket (the state of the Markov blanket) as $\gamma_{MB_j}$. Similarly, a restriction of the state $\gamma$ to $X_j$ is denoted as $\gamma_j$.
Recall that each formula $F$ of a Markov Logic Network corresponds to a feature of a Markov Random Field, where the feature's value is the truth value $f$ of the formula $F$ depending on the states $\gamma_1, \ldots, \gamma_k$ of the ground predicates $X_1, \ldots, X_k$ constituting the formula and denoted by $f=F|_{\gamma_1, \ldots, \gamma_k}$. Note that $F|_{\gamma_1, \ldots, \gamma_k}$ can also be written as $F|_{\gamma_j, \gamma_{MB_j}}$. Using this notation we can express the probability that the node $X_j$ is in the state $\gamma_j$ when its Markov blanket is in the state $\gamma_{MB_j}$ as \begin{equation} \Pr(\gamma_j \mid \gamma_{MB_j}) = \frac{\exp \left( \sum_{F_i \in \mathbf{F}_j} w_i F_i|_{\gamma_j, \gamma_{MB_j}} \right)} {\sum_{t=0}^1 \exp \left( \sum_{F_i \in \mathbf{F}_j} w_i F_i|_{X_j=t, \gamma_{MB_j}} \right)}. \label{eq10} \end{equation} For Gibbs sampling, we let a Markov chain converge and then estimate the probability that a conjunction of ground predicates is true by counting the fraction of samples from the estimated distribution in which all the ground predicates hold. The Markov chain is run multiple times in order to handle situations in which the distribution has multiple local maxima, so that the Markov chain can avoid being trapped at one of the peaks. One of the current implementations, called \emph{Alchemy}~\cite{Kok09}, attempts to reduce the burn-in time of the Gibbs sampler by applying a local search algorithm for the weighted satisfiability problem, called MaxWalkSat~\cite{Selman93}. \subsection*{Components of the Computational Method} The general overview of the computational method is given in figure~\ref{fig3}. At the first step, the method traverses the set of all markers and assigns an error score to each marker indicating its predictive power.
An error score of a marker corresponds to the performance of an MLN (\ref{eq4}) based on this single marker (the random variable $\mathtt{m}$ of (\ref{eq4}), which essentially chooses the locations of the markers used in the model, has only one value -- the location of this single marker). The algorithm selects markers whose error scores are considerably lower than the average: we selected the outliers that are 3 standard deviations below the mean. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.99\textwidth]{setup.eps} \end{center} \caption{ \textbf{Components of the computational method.} This figure illustrates three major computational components. We used cross-validation to estimate the goodness of fit of a model. Panel \textbf{A} depicts 4-fold cross-validation. The data is partitioned into four sets and at each iteration a model is trained on three sets and then tested on the fourth set, resulting in four prediction error scores that are then averaged for a total cross-validation error. Panel \textbf{B} shows details of model training and testing. Using training data we estimate the weights of each formula of an MLN template. The trained MLN is evaluated on testing data, resulting in an error score. We shuffle the order of the training data and the order of the testing data and repeat the training and testing. The error scores after each evaluation are averaged to produce the average error score of the MLN. Panel \textbf{C} shows how the components illustrated in panels \textbf{A} and \textbf{B} are combined to search for the most informative markers. At the $N$th iteration of the search, given a data set and a set of $N-1$ fixed markers, the method traverses the set of all markers and evaluates models constructed using the fixed markers and a selected marker. Note that at the $N$th iteration, $N$-marker models consider the $N-1$ fixed markers as possible interactors with the next marker to be found.
The method then selects the outlier model, adds the corresponding marker to the set of fixed markers, and repeats the traversal of the markers. The method stops when no outliers are found.} \label{fig3} \end{figure} The current version of the method greedily selects the marker with the lowest score and appends it to a list of fixed markers (which is initially empty). The algorithm then repeats the search for an outlier with the lowest score, but this time a marker's score is computed using an MLN (\ref{eq4}) that estimates the joint effect of the marker under consideration together with all of the currently fixed markers on the phenotype (the variable $\mathtt{m}$ takes on the locations of all fixed markers and the marker under consideration). At each iteration the algorithm expands the list of fixed markers and rescans all of the remaining markers for outliers. The method stops as soon as no more outliers are detected and returns the fixed markers as potential loci associated with the phenotype. Our scanning for predictive genetic markers can be seen as an instance of the \emph{variable selection} problem. We use cross-validation to compare probabilistic models and select the one with the best fit to the data and the smallest number of informative markers. Using cross-validation we assess how a model generalizes to an independent dataset, addressing the model overfitting problem and facilitating unbiased outlier detection. In particular, we use $K$-fold cross-validation, which is illustrated in figure~\ref{fig3}(\textbf{A}). The data set is arbitrarily split into $K$ folds, $\mathbf{D}_1, \ldots, \mathbf{D}_K$, and $K$ iterations are performed. The $i$th iteration of cross-validation selects $\bigcup_{j \ne i} \mathbf{D}_j$ as a training dataset and $\mathbf{D}_i$ as a testing dataset. The model is then trained on the training dataset and the performance of the model is assessed using the testing dataset, resulting in a prediction error.
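This loop can be sketched in a few lines; the \texttt{train} and \texttt{score} callables stand in for MLN weight learning and evaluation, and the toy mean-predicting model is invented purely for illustration:

```python
# Sketch of K-fold cross-validation (the `train` and `score` callables
# stand in for MLN weight learning and MLN evaluation).

def kfold_error(data, k, train, score):
    fold = len(data) // k                 # assumes len(data) divisible by k
    errors = []
    for i in range(k):
        test = data[i * fold:(i + 1) * fold]
        training = data[:i * fold] + data[(i + 1) * fold:]
        model = train(training)
        errors.append(score(model, test))
    return sum(errors) / k                # the cross-validation error

# 374 strains split into 11 folds of 34, as for the sporulation dataset;
# the toy "model" simply predicts the mean training label
data = [(x, x % 2) for x in range(374)]
err = kfold_error(
    data, k=11,
    train=lambda d: sum(y for _, y in d) / len(d),
    score=lambda m, t: sum(abs(m - y) for _, y in t) / len(t),
)
```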
The average of the prediction errors from the $K$ steps, called the cross-validation error, is used as the score of the model. In the case of the yeast sporulation efficiency dataset introduced earlier, we used 11-fold cross-validation (a population of 374 yeast strains can be evenly partitioned into 11 subsets of 34 strains each). The results were generally insensitive to the cross-validation parameters. Recall that we distinguish two types of variables, the \emph{target variables}, whose values are predicted, and the \emph{predictor variables}, whose values are used to predict the values of the target variables. In the example above, sporulation efficiency of a yeast strain is a target variable, whereas genotype markers are predictor variables. Note that in some cases we can treat variables as both targets and predictors (e.g. gene expression in eQTL datasets). During the evaluation phase in the $i$th iteration of cross-validation we consider the testing dataset $\mathbf{D}_i$. Using the knowledge-based model construction approach, we build a Markov Random Field that is small yet sufficient to infer the values of all target variables in the set $\mathbf{D}_i$. The target variables are inferred based on the values of the predictor variables from $\mathbf{D}_i$ (see section ``Using MLNs for Querying''). The model prediction of a target variable $X$ that can take on any value from $\{x_1,x_2,x_3\}$ can be represented as a vector $\hat{\mathbf{v}}=\langle p_1, p_2, p_3 \rangle$, where $p_j = \Pr(X=x_j \mid \mathbf{D}_i, \Theta_{MRF})$ is the probability that $X$ takes on the value $x_j$ given the testing data $\mathbf{D}_i$ and the Markov Random Field with parameters $\Theta_{MRF}$. On the other hand, the actual value of $X$ (provided in the testing dataset $\mathbf{D}_i$) can be represented as a vector $\mathbf{v}=\langle v_1, v_2, v_3 \rangle$, where $v_k = 1$ iff $X=x_k$ in $\mathbf{D}_i$ and $v_j = 0$ for all $j \ne k$.
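For concreteness, the two vectors and the Euclidean distance between them can be computed as follows (the phenotype domain and the predicted probabilities are made up):

```python
import math

# Prediction vector vs. one-hot truth, with Euclidean distance as the
# prediction error (domain and probabilities are hypothetical).

def prediction_error(predicted, true_value, domain):
    v_hat = [predicted.get(x, 0.0) for x in domain]        # model prediction
    v = [1.0 if x == true_value else 0.0 for x in domain]  # one-hot truth
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v_hat, v)))

domain = ["low", "mid", "high"]
err = prediction_error({"low": 0.1, "mid": 0.7, "high": 0.2}, "mid", domain)
```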
The prediction error should then measure the difference between the prediction $\hat{\mathbf{v}}$ and the true value $\mathbf{v}$. We used the Euclidean distance $d(\hat{\mathbf{v}},\mathbf{v})$ to compute the prediction error. This choice may make comparisons with other approaches difficult, since the error is bounded not by $0$ and $1$ but by $0$ and $\sqrt{M}$, where $M$ is the size of the domain of $X$ (3 in our example). Further computation is required to obtain model accuracy, explained variance, and other standard characteristics. On the other hand, the Euclidean distance is certainly sufficient for comparing the predictions of different models. Due to the approximate nature of learning and inference in MLNs (see sections ``Learning the Weights of MLNs'' and ``Using MLNs for Querying''), two structurally identical models, trained on two data sets that differ only in the order of the samples, can generate predictions with slight differences. This is due to the fundamental path-dependency of learning and inference algorithms in knowledge-based model construction. For example, the order of the training data affects the order in which the Markov Random Field is built, which in turn affects the way the approximate reasoning is performed over the field. Path-dependency introduces artificial noise into predictions and considerably reduces our ability to distinguish a signal with a small magnitude (such as a possible minor effect of a genetic locus on a phenotype) from the background. In order to reduce the effect of path-dependency on the overall model prediction we shuffle the input data set and average the resulting predictions. We employed an iterative approach, based on shuffling, for denoising. At each iteration the model is retrained and reevaluated on newly shuffled data and the running mean of the model prediction is computed (see figure~\ref{fig3}(\textbf{B})).
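A sketch of this shuffle-and-average loop, with a running mean and a simple stopping rule (the threshold, window, and the noisy stand-in for a model prediction are placeholders):

```python
import random

# Shuffle-and-average denoising: keep a running mean of the prediction and
# stop once it changes by less than a threshold for a whole window of
# consecutive iterations (parameter values here are placeholders).

def converged_mean(draw_prediction, th=0.0001, window=10, max_iter=100000):
    total, count, stable, mean = 0.0, 0, 0, None
    while count < max_iter:
        total += draw_prediction()  # retrain/reevaluate on shuffled data
        count += 1
        new_mean = total / count
        if mean is not None and abs(new_mean - mean) < th:
            stable += 1
            if stable >= window:
                return new_mean, count
        else:
            stable = 0
        mean = new_mean
    return mean, count

rng = random.Random(1)
# noisy stand-in for a model prediction whose true value is 0.7
mean, iters = converged_mean(lambda: 0.7 + rng.uniform(-0.05, 0.05))
```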
The method incrementally computes the prediction average until convergence is achieved, namely until the difference between the running average at two consecutive iterations is smaller than $Th$ for $W$ consecutive steps. The parameters $Th$ and $W$ are directly connected with the total amount of shuffling and re-estimation performed, as illustrated in figure~\ref{fig4}. \begin{figure}[!ht] \begin{center} \includegraphics[width=3in]{shuffling.eps} \end{center} \caption{ \textbf{The amount of shuffling depends on the threshold and the size of the window.} Note that the number of iterations until convergence of the algorithm increases when the threshold decreases or the window size increases. Note also that selecting tight stopping parameters tends to allow the algorithm to identify more informative markers.} \label{fig4} \end{figure} In order to perform rigorous denoising, we select a low threshold $Th$ ($0.0001$) and a large window size $W$ ($10$). The shuffling-based denoising procedure is applied at each iteration of the cross-validation. Averaging the predictions after data shuffling reduces the amount of artificial noise, enabling the overall method to distinguish markers with a smaller effect on the phenotype (the algorithm detects more informative markers, as illustrated in figure~\ref{fig4}). There are many different strategies for searching for the most informative subset of genetic markers. In this section we used a greedy approach in order to illustrate the general MLN-based modeling framework presented in this paper. In the next section we show that MLN-based modeling that accounts for dependencies between markers through joint-inference allows us to obtain interesting biological results even with a greedy search method.
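The greedy loop described above can be sketched as follows; \texttt{score\_model} stands in for the cross-validated MLN error of the fixed markers plus one candidate, and the toy score function (treating markers 71 and 117 as strong loci) is invented for illustration:

```python
import statistics

# Greedy marker search: repeatedly score every remaining marker jointly
# with the fixed set, keep the strongest 3-sigma outlier, and stop when no
# outlier remains. `score_model` stands in for the cross-validated error.

def outliers_below(scores, n_sigma=3.0):
    mean = statistics.mean(scores.values())
    sd = statistics.pstdev(scores.values())
    return {m for m, s in scores.items() if s < mean - n_sigma * sd}

def greedy_select(markers, score_model, n_sigma=3.0):
    fixed = []
    while True:
        scores = {m: score_model(fixed + [m])
                  for m in markers if m not in fixed}
        found = outliers_below(scores, n_sigma)
        if not found:
            return fixed
        fixed.append(min(found, key=scores.get))

def toy_score(selected):  # lower is better; 71 and 117 are "strong loci"
    return 1.0 - sum({71: 0.4, 117: 0.3}.get(m, 0.0) for m in selected)

chosen = greedy_select(list(range(200)), toy_score)
```

With this toy score the loop first picks marker 71, then 117, and then stops because the remaining scores are all identical (no outlier).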
In order to be confident that the fixed markers are meaningful, we manually selected markers at each iteration of the search from the set of outliers and arrived at a similar set of candidate loci (within the same local region). \section*{Results} The analyses presented in this paper are based on the dataset from~\cite{Gerke09} containing both phenotype (sporulation efficiency) and genotype information of yeast strains derived from one intercross. The results are obtained using our method that searches for the largest set of genetic markers with the strongest compound effect on the phenotype. All the detailed information on the computational components of our method is presented in the Methods section. In their paper Gerke et al.~\cite{Gerke09} identified 5 markers that have an effect on sporulation efficiency, including 2 markers whose effect seems to be very small. Moreover, Gerke et al. provided evidence for non-linear interactions between 3 major loci. The presence of confirmed markers with various effect sizes and non-linear interactions makes the dataset from~\cite{Gerke09} an ideal choice for testing our computational method. Our method allows us to define and to use essentially any genetic model. First we used a simple regression-type model that mimics simple statistical approaches, like GWAS. At the first stage, the method compares the markers according to their individual predictive power. The strength of the effect of a marker on the phenotype is estimated by computing a prediction error score from a regression model based solely on this marker. The top line on the left panel in figure~\ref{fig5} illustrates the error scores of all markers ordered by location ($X$ axis). In figure~\ref{fig5} we observe three loci (around markers 71, 117, and 160) with the strongest effect on sporulation efficiency, which were identified in~\cite{Gerke09}.
\begin{figure}[!ht] \begin{center} \includegraphics[width=0.99\textwidth]{errors.eps} \end{center} \caption{ \textbf{Prediction of sporulation efficiency using a subset of markers.} This figure shows the execution of the algorithm plotted as error scores versus markers for a number of models. The top line in the left plot shows error scores of the models based only on a single marker plotted on the $X$-axis. A green star indicates the outlier marker with the smallest error and purple stars depict other outliers (markers whose error differs from the mean error by 3 standard deviations). The second line shows the error scores of the models based on a corresponding marker together with the previously selected marker (indicated with the green star). All the following lines are interpreted in a similar way. The left plot shows five markers with a large effect. The rest of the identified markers (six markers with a small effect) are illustrated in the right plot.} \label{fig5} \end{figure} At the next stage, our method adds the marker with the strongest effect (marker 71) to all the following models. This allows us to compare all other markers according to their predictive power \emph{in conjunction} with the fixed marker 71. This time the prediction error score of a marker indicates how well this marker \emph{together} with marker 71 predicts the sporulation efficiency. The score is, thus, computed from the regression model based on two markers: marker 71 and a second marker. This is an important distinction from traditional GWAS, where the searches for multiple influential markers are performed independently of each other. In our approach, using MLNs and in particular joint-inference, the compound effect of markers is estimated, allowing us to see possible interactions between markers. The method continues iterating and selects 11 markers before the error no longer improves sufficiently and the computation stops.
Among the selected markers, 5 are the loci previously identified in~\cite{Gerke09} (markers 71, 117, 160, 123, 79), 3 are the markers next to the loci with the strongest effect (markers 72, 116, 161), and 3 are new markers that have not been reported before (markers 57, 14, 20). In addition, the method identifies another marker (marker 130) as a candidate for a locus that has an effect on sporulation efficiency, although this marker was not selected due to its weak predictive power. Notice that even with a relatively simple model, such as logistic regression, and a quite stringent criterion for outliers (3 standard deviations from the mean, a $p$-value of $0.003$ for a normal distribution) we are able to identify more candidate loci than previously reported. We argue that our method is more efficient at discovering markers with a very low individual effect on the phenotype that have non-trivial interactions with other sporulation-affecting loci, due to the use of joint-inference in MLNs. There are several distinct properties of our method that are important to note. First, although the method selects the neighboring markers of the three strongest loci, it does not select a neighbor immediately after the original locus has been identified, because there are better markers to be found. For example, after selecting the first marker 71, the method finds markers 117 and 160, and only then selects marker 72, which is the neighbor of 71. At each stage the method selects the next strongest marker that maximally increases the compound effect of the selected markers. Second, our method does not find markers that do not add sufficient predictive power. The criterion for outliers determines when the method stops and determines the confidence that the added markers have a real effect on the phenotype. For each new marker (57, 14, 20, 130) we examined all genes that were nearby (different actual distances were used).
For example, for marker 14 we considered all genes located between markers 13 and 15. Table~\ref{tab1} shows genes located near these newly identified markers that are involved in either meiosis or sporulation. \begin{table}[!ht] \caption{ \bf{Candidate genes near the new informative markers.}} \begin{tabular}{|c|p{40pt}|p{30pt}|p{45pt}|c|p{220pt}|} \hline Marker & Euclidean Error & Coord. & Candidate Genes & GO & Description\\ \hline 57 & 0.7061 & 6, 103743 & YFL039C, ACT1 & S & Actin, structural protein involved in cell polarization, endocytosis, and other cytoskeletal functions\\ & & & YFL037W, TUB2 & M & Beta-tubulin; associates with alpha-tubulin (Tub1p and Tub3p) to form tubulin dimer, which polymerizes to form microtubules\\ & & & YFL033C, RIM15 & M & Glucose-repressible protein kinase involved in signal transduction during cell proliferation in response to nutrients, specifically the establishment of stationary phase; identified as a regulator of IME2; substrate of Pho80p-Pho85p kinase\\ & & & YFL029C, CAK1 & M & Cyclin-dependent kinase-activating kinase required for passage through the cell cycle, phosphorylates and activates Cdc28p\\ & & & YFL009W, CDC4 & M & F-box protein required for G1/S and G2/M transition, associates with Skp1p and Cdc53p to form a complex, SCFCdc4, which acts as ubiquitin-protein ligase directing ubiquitination of the phosphorylated CDK inhibitor Sic1p\\ & & & YFL005W, SEC4 & S & Secretory vesicle-associated Rab GTPase essential for exocytosis\\ \hline 14 & 0.7009 & 2, 656824 & YBR180W, DTR1 & S & Putative dityrosine transporter, required for spore wall synthesis; expressed during sporulation; member of the major facilitator superfamily (DHA1 family) of multidrug resistance transporters\\ & & & YBR186W, PCH2 & M & Nucleolar component of the pachytene checkpoint, which prevents chromosome segregation when recombination and chromosome synapsis are defective; also represses meiotic interhomolog recombination in the rDNA\\ \hline
130 & 0.7010 & 11, 447373 & YKR029C, SET3 & M & Defining member of the SET3 histone deacetylase complex which is a meiosis-specific repressor of sporulation genes; necessary for efficient transcription by RNAPII\\ & & & YKR031C, SPO14 & S & Phospholipase D, catalyzes the hydrolysis of phosphatidylcholine, producing choline and phosphatidic acid; involved in Sec14p-independent secretion; required for meiosis and spore formation; differently regulated in secretion and meiosis\\ \hline 20 & 0.6972 & 3, 188782 & YCR033W, SNT1 & M & Subunit of the Set3C deacetylase complex that interacts directly with the Set3C subunit, Sif2p; putative DNA-binding protein\\ \hline \end{tabular} \begin{flushleft} The list of genes located near the new markers identified by the MLN-based method. The table shows only the genes that are involved in sporulation or meiosis. Specific information for the genes can be found at \emph{www.yeastgenome.org}. \end{flushleft} \label{tab1} \end{table} The simple logistic regression-type model that was used can be summarized using the first-order formula $\mathtt{G(strain,m,g)} \Rightarrow \mathtt{E(strain,v)}$, which captures the effect of a subset of markers on the phenotype. In order to investigate gene-gene interactions we used a pair-wise model, which can be summarized by the formula $\mathtt{G(strain,m1,g1)} \land \mathtt{G(strain,m2,g2)} \Rightarrow \mathtt{E(strain,v)}$. The pair-wise model subsumes the simple regression model, since whenever $\mathtt{m1}$ and $\mathtt{m2}$ are identical, the pair-wise MLN is mapped to the same set of cliques as those from the simple MLN. However, the pair-wise model defines the dependencies between two loci and a phenotype that are mapped to an additional set of 3-node cliques. The pair-wise model allows us to account \emph{explicitly} for pair-wise gene interactions.
When using the pair-wise model, the joint-inference is performed over an MLN where possible interactions between two markers are specified with first-order formulas. The assumption inherent in genome-wide analyses is that a simple additive effect can be observed when applying the pair-wise model to loci that do not interact: the compound effect is essentially a sum of the individual effects of each locus. On the other hand, for two interacting markers, the pair-wise model is expected to predict a larger-than-additive compound effect. Since the pair-wise model incorporates possible interactions, the prediction error of this model should be smaller than the error of a simple model by an amount that corresponds to how much the interaction information helps to improve the prediction. \begin{figure}[!ht] \begin{center} \includegraphics[width=3in]{compare_simple_and_71.eps} \end{center} \caption{ \textbf{Investigating the 71-117 loci interaction.} This figure compares a standard genome-wide scan made by a simple regression model based on a single marker (red line) and a scan made by a pair-wise model based on two markers, one of which is preset to 71 (blue line). The green lines represent the size of the leftmost red peak corresponding to the difference $d$ between the baseline prediction error of the simple model and the error $error_S(71)$. Pink bars represent the amount by which the difference between $error_S(117)$ and $error_{PW}(71,117)$ exceeds $d$. The large size of the leftmost pink bar indicates a strong interaction between markers 71 and 117.} \label{fig6} \end{figure} By using the pair-wise model, we investigated the presence of interactions between markers 71, 117, and 160, which correspond to the loci with the strongest effect on sporulation efficiency.
We denote the prediction error of a simple regression model based on markers $M_1, \ldots, M_n$ as $error_S(M_1, \ldots, M_n)$, and the error of a pair-wise model based on markers $M_1, \ldots, M_n$ as $error_{PW}(M_1, \ldots, M_n)$. Figure~\ref{fig6} compares the prediction errors of the simple regression model based on one marker (red line) and the errors of the pair-wise model based on two markers, one of which is preset to 71 (blue line). Note that the baseline prediction error of the pair-wise model is the same as $error_S(71)$, which means that on average the choice of a second marker in the pair-wise model does not affect the prediction. There are, however, 2 markers that visibly improve the prediction, namely markers 117 and 160. Note that the prediction error $error_{PW}(71,160)$ (the right blue peak) is lower than $error_S(160)$ (the rightmost red peak) by only a value roughly equal to the difference between the average prediction errors of the simple and pair-wise models (this value is equal to the size of the leftmost red peak). The reduction of the prediction error, when combining markers 71 and 160, is additive, suggesting that there is no interaction between these two markers. On the other hand, if we look at the effect of combining markers 71 and 117, we can see that the prediction improvement using the pair-wise model based on both markers (the size of the left blue peak) is considerably larger than the sum of the prediction improvements of the two simple models taken independently (the leftmost and the middle red peaks). The non-additive improvement suggests that there is an interaction between markers 71 and 117. \begin{figure}[!ht] \begin{center} \includegraphics[width=3in]{compare_simple_and_117.eps} \end{center} \caption{ \textbf{Investigation of 117-71 and 117-160 loci interactions.} This figure compares a standard genome-wide scan using a single-marker model and a scan using a pair-wise model based on two markers, one of which is preset to 117.
See the caption of figure~\ref{fig6} for more details. Both pink bars indicate the presence of non-additive interactions between markers 117 and 71 and markers 117 and 160.} \label{fig7} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=3in]{compare_simple_and_160.eps} \end{center} \caption{ \textbf{Investigating an interaction between markers 117 and 160.} This figure compares a standard genome-wide scan using a single-marker model and a scan using a pair-wise model based on two markers, one of which is preset to 160. See the caption of figure~\ref{fig6} for more details. Note that $error_{PW}(71,160)$ is almost the same as $error_{PW}(117,160)$ even though $error_S(71)$ is considerably lower than $error_S(117)$. The tall pink bar on the right side of the figure indicates a non-additive interaction between markers 117 and 160. On the other hand, the pink bar on the left is almost non-existent, indicating the absence of an interaction between markers 71 and 160.} \label{fig8} \end{figure} Figures~\ref{fig7} and \ref{fig8} show an analysis similar to that illustrated in figure~\ref{fig6}, performed on the other two markers. The analysis shown in figure~\ref{fig7} confirms the interaction between markers 71 and 117 and, additionally, suggests that there is an interaction between markers 117 and 160. Figure~\ref{fig8} confirms the interaction between 117 and 160 and the absence of an interaction between 71 and 160. One can see from figure~\ref{fig8} that the leftmost blue peak indicating $error_{PW}(71,160)$ is a sum of $error_S(71)$ and $error_S(160)$ (the pink bar next to the left pink star is extremely short). On the other hand, the rightmost blue peak is considerably larger than the sum of the individual errors (the pink bar is tall). In fact, $error_{PW}(117,160)$ is almost the same as $error_{PW}(71,160)$. The two predicted interactions, 71-117 and 117-160, were experimentally identified in~\cite{Gerke09}.
The strength of these interactions is significant enough to immediately stand out during the analysis in figure~\ref{fig6}. We next applied this analysis to the set of all nine identified loci (71, 117, 160, 123, 57, 14, 130, 79, 20) in order to quantify possible interactions between every pair of markers. For each pair of markers $A$ and $B$ from the set of loci we compute the prediction errors of a simple model based solely on either $A$ or $B$, denoted as $error_S(A)$ and $error_S(B)$. We also compute the prediction error of a pair-wise model based on both $A$ and $B$, denoted as $error_{PW}(A,B)$. Consequently, the size of a possible interaction between $A$ and $B$, denoted as $i(A,B)$, is estimated using the following expression: \begin{equation} \begin{array}{l} \displaystyle i(A,B) = d(A,B) - d(A) - d(B)\\ \displaystyle = (median - error_{PW}(A,B)) - (median - error_S(A)) - (median - error_S(B))\\ \displaystyle = error_S(A) + error_S(B) - error_{PW}(A,B) - median. \end{array} \label{eq11} \end{equation} Here $median$ is the baseline prediction error of a simple model based on a single marker. We averaged the errors over $10$ independently computed iterations. We next determined how high the value $i(A,B)$ should be in order to confidently predict an interaction between markers $A$ and $B$. We selected $36$ pairs of random markers that were not from the set of nine informative markers and computed $i(A,B)$ for each pair. Since we do not expect any interactions between random, non-informative markers, their $i(A,B)$ values are used to estimate a confidence interval for no interaction. We computed a mean and standard deviation of the set of $i(A,B)$ values corresponding to the randomly chosen markers.
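Equation (\ref{eq11}) can be transcribed directly; with hypothetical error values one can check that a purely additive pair yields a zero score while a larger-than-additive error reduction yields a positive one:

```python
# Direct transcription of eq. (11) with hypothetical error values: i(A,B)
# is the part of the pair-wise model's error reduction that is not explained
# by the sum of the two individual marker effects (relative to the baseline).

def interaction_score(err_s_a, err_s_b, err_pw_ab, median):
    return err_s_a + err_s_b - err_pw_ab - median

# additive pair: err_PW = median - d(A) - d(B), so the score vanishes
additive = interaction_score(0.60, 0.70, 0.55, median=0.75)
# larger-than-additive error reduction: positive interaction signal
signal = interaction_score(0.60, 0.70, 0.45, median=0.75)
```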
We found that a range of $2.54$ standard deviations around the mean completely covers the set of $i(A,B)$ values for all random pairs; we therefore argue that there is a strong likelihood of interaction between markers $A$ and $B$ whenever $i(A,B)$ is more than $3$ standard deviations away from the mean. Whenever $i(A,B)$ is less than $3$ but more than $2.54$ standard deviations away from the mean, we argue that there is a probable interaction between $A$ and $B$. Estimated interactions between identified loci are illustrated as a network of marker interactions in figure~\ref{fig9}, where the color of each link represents the level of confidence of the corresponding interaction. We repeated the estimation of interactions by randomly selecting another $36$ pairs of markers and computing the confidence intervals for the new set ($2.32$ standard deviations). The probable interactions identified from the first experiment were confirmed in the second experiment. Two possible interactions, however, were identified in one experiment but not the other and are depicted in figure~\ref{fig9} with dashed links. This is a result of a slightly shifted mean of the second set of $36$ random marker pairs relative to the first set, and of the marginal size of the effect. Two interactions with very large $i(A,B)$ values, 71-117 and 117-160, were previously identified in~\cite{Gerke09}. We also found several smaller interactions, illustrated in figure~\ref{fig9}, that have not been identified before. Note that since we measure absolute values $i(A,B)$, it is not a surprise that the interactions 71-117 and 117-160 are so large, since the corresponding loci (71, 117, 160) are by far the strongest. Locus 117, which is involved in the strongest interactions, corresponds to the gene \emph{IME1}~\cite{Gerke09}, the master regulator of meiosis (\emph{www.yeastgenome.org}).
Since \emph{IME1} is a very important sporulation gene, it is entirely reasonable that this gene is central to our interaction network (figure~\ref{fig9}). \begin{figure}[!ht] \begin{center} \includegraphics[width=0.99\textwidth]{interactions1.eps} \end{center} \caption{ \textbf{Estimated network of gene-gene interactions.} This figure shows a network of estimated interactions between loci based on genetic data and sporulation phenotype. The color of the links corresponds to the level of confidence of interactions. The three darkest colors are associated with the most likely interactions. The fourth color is associated with an interaction 123-117 that scored high in the first experiment (more than $3$ standard deviations away from the mean), but lower in the second experiment ($2.5$ standard deviations), although still above the confidence level ($2.32$ standard deviations). Two possible interactions depicted with the dashed lines with the lightest color scored above the confidence level in the first experiment ($2.54$ standard deviations), but below the confidence level in the second experiment ($1$ standard deviation). Loci 160, 71, 117, 123, 79 were previously identified in~\cite{Gerke09}. Moreover, the genes of the three major loci were detected: \emph{IME1} (locus 117), \emph{RME1} (locus 71), and \emph{RSF1} (locus 160) \cite{Gerke09}. The two strongest interactions, 160-117 and 117-71, were also identified in~\cite{Gerke09}.} \label{fig9} \end{figure} \section*{Discussion} The method presented in this paper provides a framework for using virtually any genetic model in a genome-wide study because of the high representational power of MLNs. This power stems from the use of general, first-order logic conjoined to probabilistic reasoning. 
Moreover, the use of knowledge-based model approaches that build models based on both data and a \emph{relevant} set of first-order formulas~\cite{Wellman92} allows us to efficiently incorporate prior biological knowledge into a genetic study. The generality of MLNs allows greater representational power than most modeling approaches. The general approach can be viewed as a seamless unification of statistical analysis, model learning and hypothesis testing. As opposed to standard genome-wide approaches to genetics, which assume additivity, the method described in this article does not aim to return values corresponding to the strength of each marker's individual effect. Our method aims at discovering the loci that are involved in determining the phenotype. The method computes the error scores for each marker in the context of the others, representing the strength of each marker's effect in combination with other markers. It will be valuable in the future to derive a scoring technique for markers that can be compared directly with the results of traditional approaches. In general, our approach provides a way of searching for the best model predicting the phenotype from the genetic loci. Since the model and the corresponding joint-inference methodology incorporate the relations between the model variables, we are able to begin a quantitative exploration of possible interactions between genetic loci. Our method shows promise in that it can accommodate complex models with internal relationships among the variables specified. The development of a succinct and clear language and grammar based on FOL for the description of (probabilistic) biological systems knowledge will be critical for the widespread application of this method to genetic analyses. Achieving this goal will also represent a significant step toward the fundamental integration of systems biology and the analysis and modeling of networks with genetics.
Additionally, the development of the biological language describing useful biological constraints can alleviate the computational burden associated with model inference. MLN-based methods, such as ours, that perform both logic and probabilistic inferences are computationally expensive. While increases in computing power steadily reduce the magnitude of this problem, other approaches will also be necessary. Given such a focused biological language, we could tailor the learning and inference algorithms specifically to our needs and thus reduce the overall computational complexity of the method. Another future direction will be to find fruitful connections between a previously developed information theory-based approach to gene interactions~\cite{Carter09} and this AI-derived approach. There are clearly several other applications of this approach in the field of biology. There are, for example, many similarities between the problem discussed here and the problems of data integration. Biomarker identification, particularly for panels of biomarkers, is another important problem that involves many challenges of data integration and that can benefit from our MLN-based approach. Similar to GWAS or QTL mapping, where we search for genetic loci that are linked with a phenotype of interest, in biomarker detection we search for proteins or miRNAs that are associated with a disease state in a set of patient data. Just as in genetics, we can represent biomarkers as a network because there are various underlying biological mechanisms that govern the development of a disease. Often the most informative markers from a medical point of view have weak signals by themselves. MLNs can allow us to incorporate partial knowledge about the underlying biological processes to account for the inter-dependencies, making the detection of the informative biomarkers more effective.
It is clear that the approach described here has the potential to integrate the biomarker problem with human genetics, a key problem in the future development of personalized medicine. \section*{Acknowledgments} The authors are grateful to Aimee Dudley, Barak Cohen and Greg Carter for stimulating discussions and also Barak Cohen for sharing experimental data. This work was supported by a grant from the NSF FIBR program to DG, and the University of Luxembourg-Institute for Systems Biology Program. We also thank Dudley, Tim Galitski, Andrey Rzhetsky and especially Greg Carter for comments on the manuscript.
\section{Introduction} \label{intro} The theory of cosmological inflation \cite{1} represents an elegant solution to the long-standing conundrums affecting the standard cosmology, i.e. the flatness problem, the horizon problem and the monopole problem. Inflation also provides a natural explanation for the seeds, namely the primordial metric fluctuations generating the matter inhomogeneities responsible both for the growth of the large-scale structures visible in the universe and the temperature anisotropies of the Cosmic Microwave Background (CMB). In its simplest version, the so-called single-field slow-roll inflation, the inflationary mechanism is driven by a homogeneous, neutral and minimally coupled scalar field $\phi$, called the inflaton field, typically characterized by an effective scalar potential $V(\phi)$ equipped with an (almost) flat region and a fundamental vacuum state. In the first stage of inflation, the scalar field slowly crosses the plateau of the scalar potential, which behaves like a cosmological constant and triggers an (almost) de Sitter expansion of the universe. At the end of the inflationary period, the inflaton field reaches the steep region of the potential and falls into the global vacuum, where it starts to oscillate. As a consequence, it should then decay to Standard Model (SM) and Beyond-the-Standard-Model (BSM) relativistic particles, reheating the cold universe and giving rise to the graceful exit toward the standard initial radiation-dominated Hot Big Bang (HBB) epoch (for reviews on reheating see \cite{2}). Of course, this simplest scenario is not mandatory. The universe could have experienced nonstandard post-reheating and pre-Big-Bang-Nucleosynthesis (pre-BBN) cosmological phases driven by one or more additional scalar fields, recovering radiation dominance at lower energy scales.
An intriguing possibility, first noticed in \cite{3}, is the presence of additional sterile scalar fields characterized by a faster-than-radiation scaling law of the corresponding energy density. As new cosmological components, they can provide interesting modifications of dark matter annihilation rates and relic abundances \cite{3,4,5,6,7}, inflationary e-folds \cite{8,9}, lepton and baryon asymmetry generation \cite{10,11}, matter-dark matter cogenesis \cite{12} and gravitational wave signals \cite{13}. These scalars are common in theories with extra dimensions and branes \cite{14}, like superstring orientifold models \cite{15}. Indeed, scalars parametrizing the positions of D-branes along the transverse internal directions interact gravitationally with the metric sector and can be engineered so as to be decoupled from the longitudinal oscillation modes, related to SM and BSM fields. In this paper, we consider non-standard cosmologies inspired by orientifolds with D-branes containing, generically, multiple sterile scalar fields entering a non-standard post-reheating phase. We study their effects on minimal thermal leptogenesis, extending the analysis of \cite{10}, where the single-field case has been addressed. The term leptogenesis refers to the process of generation of lepton asymmetry and (induced) baryon asymmetry in the universe. The simplest class of models employs the decay of heavy right-handed neutrinos (RHNs) in the type-I seesaw mechanism \cite{16}. The process involves the CP-violating, out-of-equilibrium decay of lepton-number-violating RHNs. There are also several other versions of leptogenesis, depending for instance on the choice of seesaw mechanism (type-II, type-III), on the presence of supersymmetry (soft leptogenesis) \cite{17} or even on radiative generation \cite{18}. Complete reviews on leptogenesis can be found in \cite{19}.
We limit ourselves to the analysis of the effects of the mentioned insertion of multiple scalar fields on minimal type-I seesaw leptogenesis, although we expect that similar modifications could be applied to other leptogenesis scenarios as well. The paper is organized as follows. In Section II we briefly discuss non-standard cosmology in the post-reheating early universe epoch, with the presence of scalar fields inspired by properties of (super)string theory vacua. In Section III we discuss how standard thermal leptogenesis is modified by the presence of a set of $k$ scalar fields, analyzing the two-scalar-field case in detail. In Section IV, we repeat the study of non-standard leptogenesis in the presence of three active scalar fields. Finally, in Section V, we summarize our results and discuss some open problems. In this paper we use particle natural units $\hbar=c=1$, and we denote by $M_P = 1/\sqrt{8\pi G_N}$ the reduced Planck mass, where $G_N$ is Newton's gravitational constant. \section{D-Brane scalars and non-standard cosmology} \label{Dbrane} In the standard HBB scenario, the universe after reheating should experience a very hot and dense radiation-dominated era.
In that phase, the evolution of the universe is well described by a homogeneous fluid that obeys the Friedmann equation \begin{equation}\label{raddom} H^2(T)\simeq\frac{1}{3M^2_{P}}\rho_{rad}(T), \end{equation} where the radiation energy density at temperature $T$ is \begin{equation}\label{rhorad} \rho_{rad}(T)=\frac{\pi^2}{30}g_{E}(T) T^4 \end{equation} where $H = \dot{a}/a$ denotes the Hubble rate in a Friedmann-Lemaitre-Robertson-Walker (FLRW) metric, $a(t)$ being the standard cosmic scale factor, while $g_{E}$ is the effective number of relativistic degrees of freedom, given by \begin{equation} g_{E}(T)=\sum_{b} g_{b}\left( \frac{T_b}{T}\right)^4 + \frac{7}{8}\sum_{f} g_{f}\left(\frac{T_f}{T}\right)^4, \end{equation} where $b$ and $f$ label bosonic and fermionic contributions, respectively, while $T_b$ and $T_f$ indicate the corresponding temperatures. However, since the reheating phase is largely unknown, there is room for many scenarios involving a graceful exit from inflation and a correspondingly modified path to the HBB era of a radiation-dominated universe. A simple possibility consists in a cosmological stress-energy tensor that, after reheating, could be dominated by non-interacting scalar fields equipped with a faster-than-radiation dilution law of the corresponding energy density. Components of this kind are quite common both in scalar modifications of General Relativity and in theories with extra dimensions. Among them, (super)string theories are (the only) consistent proposals for a UV quantum completion of General Relativity. Moreover, many scalars are naturally present in their spectra. In four dimensions, the scalars result from dimensional reduction of ten-dimensional fields and parametrize the deformations of the internal (compact) manifolds. In orientifold models, which are genuine string theory vacua, additional scalar fields are related to the presence of D-branes, defects where open-string ends slide.
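As a short numerical illustration of Eqs.~\eqref{raddom}-\eqref{rhorad}, the radiation-era Hubble rate can be evaluated as follows (a sketch assuming the Standard Model high-temperature value $g_E \simeq 106.75$; units are GeV throughout):

```python
import math

M_P = 2.435e18   # reduced Planck mass, GeV
G_E = 106.75     # SM relativistic degrees of freedom at high T (assumed)

def rho_rad(T):
    """Radiation energy density rho = (pi^2/30) g_E T^4, in GeV^4."""
    return math.pi**2 / 30.0 * G_E * T**4

def hubble_rad(T):
    """Radiation-era Hubble rate H = sqrt(rho_rad/3) / M_P, in GeV."""
    return math.sqrt(rho_rad(T) / 3.0) / M_P

# At T = 1e11 GeV (a typical leptogenesis scale in the text),
# H is of order 1e4 GeV; note the H ~ T^2 scaling at constant g_E.
H = hubble_rad(1e11)
```

The quadratic scaling $H \propto T^2$ exhibited here is the baseline against which the faster non-standard expansion of the next sections is compared.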
Indeed, the positions of space-time-filling branes along the internal directions are additional parameters that correspond to scalar fields in the effective low energy action. At tree level, the scalars are moduli, i.e. their potential vanishes and their interactions are purely gravitational. In order to stabilize most of them, a known procedure is to introduce flux compactifications, namely adding vevs to some of the internal (form) fields in the spectra. This way, one also gets (partial spontaneous) breaking of supersymmetry and back-reaction on the space-time geometry, resulting in a warping of the metric. The generated scalar potentials are typically steep or dominated by kinetic terms, making the corresponding fields possibly active after the reheating phase. In particular, we consider a set of scalar fields that only interact with the inflaton but are completely decoupled from the rest. They correspond to positions in the transverse internal directions of a set of well separated\footnote{We are assuming that the branes move slowly in the compact space, while the potential felt by their positions gives rise to a quick dilution due to the exotic nature of the corresponding fluid.} D-branes whose dynamics is decoupled from the visible sector (i.e. the SM fields), from other possible hidden (dark) matter sectors and even from the fields related to the longitudinal degrees of freedom on the brane itself. Their mutual interactions can also be neglected and the position fluctuations do not interfere among themselves. One example of this kind of scenario has been given in refs.~\cite{8, 14}, where the transverse position of a probe D-brane behaves exactly as requested, once the DBI and the Wess-Zumino terms describing its dynamics are specialized to a warped geometry. As said, the most important point is that these scalar fields always interact with the inflaton, which can thus decay to them and to the remaining (relativistic) components of the standard reheating fluid.
The previous conditions are necessary in order to avoid relic moduli fields that would overclose the universe or spoil BBN. Let us thus analyze a modification of the evolution of the early universe after the reheating phase, realized through the presence of the mentioned set of scalar fields $\phi_i$ $(i=1,\dots,k)$ \cite{9}. They are assumed to dominate at different time scales until radiation becomes the most relevant component, well before the BBN era, so as to preserve the predictions for the light-element abundances. Given the assumptions, the total energy density after the inflaton decay can be assumed to be \begin{equation}\label{eqn:totalenergydensity} \rho_{tot}(T)=\rho_{rad}(T) + \sum_{i=1}^k \rho_{\phi_i}(T). \end{equation} We introduce the scalar fields in such a way that, for $i>j$, $\rho_{\phi_{i}}$ hierarchically dominates over $\rho_{\phi_{j}}$ at higher temperatures. All the scalar fields, supposed to be completely decoupled from each other and from matter and radiation fields, can be described as perfect fluids diluting faster than radiation. In this respect, the dynamics is encoded in \begin{equation} \dot{\rho}_{\phi_i} + 3H\rho_{\phi_i}( 1 + w_i ) = 0 , \end{equation} where $w_i=w_{\phi_i}$ is the Equation Of State (EoS) parameter of the field $\phi_i$. Integrating this equation one finds \begin{equation} \rho_{\phi_i}(T)=\rho_{\phi_i}(T_i)\left(\frac{a(T_i)}{a(T)} \right)^{4+n_i}, \end{equation} with $n_i=3w_i -1$. The indices\footnote{It should be noticed that the $n_i$'s are not necessarily integers, even though in this paper we use for them integer values.} $n_i$, namely the ``dilution'' coefficients, are understood to satisfy the conditions \begin{equation} n_i>0, \quad n_i<n_{i+1}. \end{equation} $T_i$ can be conveniently identified with the transition temperature at which the contribution of the energy density of $\phi_i$ becomes subdominant with respect to the one of $\phi_{i-1}$.
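For completeness, the power $4+n_i$ follows in one step from the continuity equation:

```latex
\frac{d\rho_{\phi_i}}{\rho_{\phi_i}} = -3(1+w_i)\,\frac{da}{a}
\;\;\Longrightarrow\;\;
\rho_{\phi_i} \propto a^{-3(1+w_i)} = a^{-(4+n_i)},
\qquad n_i = 3w_i - 1 ,
```

so that radiation ($w=1/3$, $n=0$) dilutes as $a^{-4}$, a kination-like scalar ($w=1$, $n=2$) as $a^{-6}$, and any component with $n_i>0$ redshifts away faster than radiation.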
In other words, the scalar fields are such that \begin{eqnarray} \rho_{\phi_i}>\rho_{\phi_{i-1}} \mbox{ for } T>T_i ,\\ \rho_{\phi_i}=\rho_{\phi_{i-1}} \mbox{ for } T=T_i ,\\ \rho_{\phi_i}<\rho_{\phi_{i-1}} \mbox{ for } T<T_i . \end{eqnarray} Using the conservation of the ``comoving'' entropy density \begin{equation} g_S(T)a^3(T)T^3=g_S(T_i)a^3(T_i)T^3_i , \end{equation} where $g_S$, defined by \begin{equation} g_{S}(T)=\sum_{b} g_{b}\left( \frac{T_b}{T}\right)^3 + \frac{7}{8}\sum_{f} g_{f}\left(\frac{T_f}{T}\right)^3, \end{equation} is the effective number of relativistic degrees of freedom associated with entropy, the energy density of the various fields at a temperature $T$ can be expressed in terms of the transition temperatures $T_i$ as \begin{equation}\label{ratioofrhophi} \rho_{\phi_i}(T)=\rho_{\phi_i}(T_i)\left( \frac{g_S(T)}{g_S(T_i)}\right)^{\frac{4+n_i}{3}}\left(\frac{T}{T_i}\right)^{4+n_i}. \end{equation} For the first scalar field $\phi_1$, by definition, the transition temperature coincides with that at the beginning of the radiation-dominated era, $T_1=T_{r}$, so that $\rho_{\phi_1}(T_1)=\rho_{rad}(T_1)$. The second scalar field $\phi_2$ is subdominant compared to $\phi_1$ below the temperature $T_2$. Using Eq.\eqref{ratioofrhophi} and observing that $T_2$ is the transition temperature at which $\rho_{\phi_2}(T_2)=\rho_{\phi_{1}}(T_2)$, one gets \begin{equation}\label{eqn: energy phi2} \rho_{\phi_2}(T)= \rho_{\phi_1}(T_1)\left( \frac{T_2 \, g_S^{1/3}(T_2)}{T_1 \, g_S^{1/3}(T_1)} \right)^{4+n_1} \left( \frac{T\, g_S^{1/3}(T)}{T_2 \, g_S^{1/3}(T_2)} \right)^{4+n_2} . \end{equation} This equation tells us that the energy density of the scalar field $\phi_2$ depends on the ratio of the two scales $T_1$ and $T_2$ that delimit the epoch of $\phi_1$ dominance. In the same way, we can derive the analogous expressions for the other scalar fields.
The energy density carried by the $i$-th field $\phi_i$ can thus be written as \begin{equation}\label{eqn: general result} \rho_{\phi_i}(T)=\rho_{rad}(T_{r})\prod_{j=1}^{i-1}\left(\frac{T_{j+1} \ g_S^{1/3}(T_{j+1})}{T_j \ g_S^{1/3}(T_j)}\right)^{4+n_j} \left(\frac{T \ g_S^{1/3}(T)}{T_i \ g_S^{1/3}(T_i)}\right)^{4+n_i} , \quad i\ge 2 , \end{equation} and, inserted in Eq. \eqref{eqn:totalenergydensity}, it allows one to calculate the total energy density dominating the expansion of the universe after the standard reheating phase, up to the beginning of the radiation-dominated epoch. In particular, using eq. \eqref{rhorad}, one has\footnote{It should be noticed that the amplification parameter ${\mathcal{J}}^2(T)$ is the $\eta(T)$ parameter of ref. \cite{9}.} \begin{equation}\label{eq:totalrho} \rho(T) = \rho_{rad}(T) \ {\cal{J}}^2(T) , \end{equation} where the (positive) ``correction factor'' determining the non-standard evolution in the presence of $k$ additional scalar fields reads \begin{align}\label{eq:complmodfact} {\cal{J}}^2(T) &= 1 + \left(\frac{T_{r} \ g_E^{1/4}(T_{r})}{T \ g_E^{1/4}(T)}\right)^{4} \ \left(\frac{T \ g_S^{1/3}(T)}{T_1 \ g_S^{1/3}(T_1)}\right)^{4+n_1}\nonumber\\ &+\sum_{i=2}^{k} \left(\frac{T_{r} \ g_E^{1/4}(T_{r})}{T \ g_E^{1/4}(T)}\right)^{4} \ \left(\frac{T \ g_S^{1/3}(T)}{T_i \ g_S^{1/3}(T_i)}\right)^{4+n_i}\ \prod_{j=1}^{i-1}\left(\frac{T_{j+1} \ g_S^{1/3}(T_{j+1})}{T_j \ g_S^{1/3}(T_j)}\right)^{4+n_j} , \end{align} where the sum starts at $i=2$ since the $\phi_1$ contribution is displayed explicitly in the first line. Since, by assumption, there is no change in the number of degrees of freedom between the end of the reheating phase and the beginning of the HBB phase, all the ratios of $g_S$ and $g_E$ at different temperatures are of order $1$. Thus, for sufficiently high $T$ and $k$ additional scalar fields, it turns out that (defining $n_0=0$) \begin{equation}\label{eq:modfactor} {\cal{J}}^2(T) \simeq 1 + \sum_{i=1}^{k} \, \prod_{j=1}^{i}\left(\frac{T}{T_j}\right)^{n_j-n_{j-1}} .
\end{equation} As expected, the larger the number of additional scalar fields, the larger the correction factor. Typically, in string-inspired models one cannot have $k\rightarrow \infty$, because the number of scalar fields is related to the number of branes and to the geometric deformations of the internal compactification manifold, which are limited by the rank of the gauge group and by the number of extra dimensions, respectively\footnote{Typically, ``before'' moduli stabilization, one has $\mathcal{O}(100)$ moduli from the compactification manifold and a net number of $\mathcal{O}(30)$ branes. Of course, the number of brane moduli can be made arbitrary by putting (unstable configurations of) brane-antibrane pairs.}. Moreover, it is important to underline a couple of fundamental aspects. First, the properties of these scalars (i.e. dilution parameters and transition temperatures) cannot be completely arbitrary. In particular, the energy density at the production scale (the reheating epoch) must not exceed some cutoff $M$, bounded by the inflationary scale $M_{inf}$. As a consequence, a corresponding strong bound on the reheating temperature $T_{reh}$ follows \cite{9}. For instance, in the case of a single non-standard post-reheating scalar with a dilution parameter $n_1$ and a transition-to-radiation temperature $T_1=T_r$, the necessary condition is just $\rho_{\phi_1}(T_{reh})\leq M^4$, which leads to the bound \begin{equation}\label{eqn: bound_1} T_{reh} \leq \alpha_1 M \left(\frac{T_1}{M} \right)^{\frac{n_1}{4+n_1}}, \quad \alpha_1=\left(\frac{30}{\pi^2 g_E}\right)^{\frac{1}{4+n_1}} . \end{equation} In the case of a pair of non-standard post-reheating scalars, with $\phi_2$ dominating over $\phi_1$ at higher temperatures $T>T_2$, the necessary condition becomes $\rho_{\phi_2}(T_{reh})\leq M^4$ at $T=T_{reh}$.
As a consequence, one gets \begin{equation}\label{enq: bound_2} T_{reh} \leq \alpha_2 M \left( \frac{T_1^{n_1} T_2^{n_2-n_1}}{M^{n_2}} \right)^{\frac{1}{4+n_2}}, \quad \alpha_2=\left(\frac{30}{\pi^2 g_E}\right)^{\frac{1}{4+n_2}} \end{equation} where, by assumption, $n_2>n_1$. Of course, similar expressions can easily be found for more than two additional scalar fields. The second point we would like to stress is that the presence of these additional early cosmological phases typically alters the inflationary number of e-folds \cite{8,9} with an extra contribution $\Delta N(\phi_i,T_{reh})$ proportional to the (logarithm of) ${\cal{J}}(T_{reh})$, i.e. \begin{equation} N_*\sim \xi_* - \frac{1-3w_{reh}}{3(1+w_{reh})}\ln\left(\frac{M_{inf}}{T_{reh}}\right) + \ln\left(\frac{M_{inf}}{M_{Pl}}\right) + \frac{2}{3(1+w_{reh})}\ln{\cal{J}}(T_{reh}) , \end{equation} where $\xi_*\sim 64$ and $w_{reh}$ is the mean value of the EoS parameter of the reheating fluid. This extra factor thus depends on the setup of additional scalar fields (namely the number of scalars and the dilution indices) and on the properties of the reheating scale. However, reasonable assumptions provide an \textit{enhancement} of the number of $e$-folds of the order of $5$-$15$, also allowing refined predictions for most inflationary models. \section{Non-Standard History of Leptogenesis with two scalar fields} \label{leptogen} In this section we probe the effects on leptogenesis of the fast expansion of the universe described above, driven by multiple scalar fields. We consider the simple type-I seesaw mechanism, including heavy Majorana RHNs that generate light neutrino masses and induce lepton number violation. Complex Yukawa interactions with leptons result in CP violation when the RHN decay processes are considered together with loop-mediated contributions. Finally, the out-of-equilibrium decay of RHNs (or of the lightest RHN $N_1$, the so-called $N_1$ leptogenesis that we use here) produces the baryon asymmetry of the universe.
The Lagrangian involving the process is given (for three generations) by \begin{equation}\label{L} {\cal L}_{RHN}=-\lambda_{ik}\bar{l_i}\tilde{\Phi} N_{k} - \frac{1}{2}M_k\bar{N^c}_kN_k + h.c.\, \ \quad i,k=1, 2, 3, \end{equation} where a diagonal flavor basis is selected for the RHNs. The Standard Model Higgs doublet is denoted by $\Phi$, the corresponding conjugate is $\tilde{\Phi}$, while $l$ indicates a SM lepton doublet. With the above BSM extension, one obtains an active neutrino mass matrix \begin{equation}\label{numass} M_{\nu}=-m_{D}^TM^{-1}m_{D}\,\, , \end{equation} where $m_{D}$ denotes the Dirac mass matrix, with entries of order $v_{\Phi}\lambda$ ($v_{\Phi}$ is the vacuum expectation value of the Higgs doublet), and $M$ is the diagonal RHN mass matrix. As mentioned, the amount of CP asymmetry generated in the process of $N_1$ decay for a hierarchical RHN mass distribution $M_3, M_2 \gg M_1$ is measured by \begin{align}\label{CPasy} \epsilon & = \frac{\sum_{\alpha}[\Gamma(N_1 \rightarrow l_{\alpha}+\Phi)-\Gamma(N_1 \rightarrow \bar{l}_{\alpha}+\Phi^{*})]}{\Gamma_1} \nonumber\\ &=-\frac{3}{16\pi}\frac{1}{(\lambda^{\dagger}\lambda)_{11}}\sum_{k=2,3}{\rm Im}[(\lambda^{\dagger}\lambda)^2_{1k}]\frac{M_1}{M_k}\,\, , \end{align} with $\Gamma_1=\frac{M_1}{8\pi}(\lambda^{\dagger}\lambda)_{11}$ being the total decay width of the lightest RHN $N_1$. The asymmetry parameter $\epsilon$ can be used to set a limit on the $N_1$ mass via the Casas-Ibarra (CI) parametrization formalism \cite{20}. Indeed, it turns out that \begin{equation}\label{limit} |\epsilon|\leq \frac{3}{16\pi v_{\Phi}^2}M_1m_{\nu}^{max}\,\, , \end{equation} with $m_{\nu}^{max}$ being the largest light neutrino mass. As a consequence, a lower bound (the Davidson-Ibarra bound \cite{21}, $M_1 \gtrsim 10^9$ GeV) emerges for the mass $M_1$ of the lightest RHN, when neutrino oscillation parameters are taken into account.
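The order of magnitude behind the Davidson-Ibarra bound can be checked with a rough numerical sketch (the input values $v_{\Phi} \simeq 174$ GeV and $m_{\nu}^{max} \simeq 0.05$ eV are illustrative assumptions, not quoted from the text):

```python
import math

V_PHI = 174.0        # Higgs vev in GeV (illustrative convention)
M_NU_MAX = 0.05e-9   # heaviest light-neutrino mass ~ 0.05 eV, in GeV

def epsilon_max(M1):
    """Davidson-Ibarra bound: |eps| <= 3 M1 m_nu^max / (16 pi v^2)."""
    return 3.0 * M1 * M_NU_MAX / (16.0 * math.pi * V_PHI**2)

# For M1 = 1e9 GeV the maximal CP asymmetry is of order 1e-7, roughly
# the size needed for successful thermal leptogenesis; lighter M1
# undershoots it, which is the origin of the lower bound on M1.
eps9 = epsilon_max(1e9)
```

Since the bound scales linearly in $M_1$, requiring an asymmetry of this size pushes $M_1$ to the $10^9$ GeV range quoted above.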
$N_1$ leptogenesis, effective at temperatures $T\gtrsim 10^{12}$ GeV, also induces a constraint on the reheating temperature after inflation, at values $T_{reh}>10^{12} $ GeV. Disregarding the possibility of flavored leptogenesis, we limit ourselves to the usual thermal leptogenesis, solving the simplified Boltzmann equations (BEs). In standard cosmology, with a radiation-dominated universe after reheating, they can be written as \begin{align} &\frac{d Y_{N_1}}{dz} = -z\frac{\Gamma_1}{H_1}\frac{\mathcal{K}_1(z)}{\mathcal{K}_2(z)}\left(Y_{N_1}-Y_{N_1}^{EQ}\right)\,,\label{RHNBE}\\ &\frac{d Y_{L}}{dz} = -\frac{\Gamma_1}{H_1} \left(\epsilon z \frac{\mathcal{K}_1(z)}{\mathcal{K}_2(z)}(Y_{N_1}^{EQ}- Y_{N_1}) + \frac{z^{3} \mathcal{K}_1(z)}{4} Y_L \right)\, \, .\label{asy} \end{align} In Eqs.~\eqref{RHNBE}-\eqref{asy}, $Y_i=n_i/s$ denotes the abundance of particle $i$, namely the ratio of its number density to the entropy density, while $Y_L=(Y_l-Y_{\bar{l}})$ is the lepton asymmetry. The equilibrium abundance of the lightest RHN is \cite{19,22} \begin{equation} Y_{N_1}^{EQ}=\frac{45g}{4\pi^4}\frac{z^2\mathcal{K}_2(z)}{g_{S}}. \end{equation} It should be noticed that Eqs.~\eqref{RHNBE}-\eqref{asy} both depend on the modified Bessel functions ($\mathcal{K}_{1,2}$), on the Hubble parameter $H_1=H(T=M_1)=H(T) z^2$ and on the decay width $\Gamma_1$ of $N_1$ (or on the washout parameter $K=\frac{\Gamma_1}{H_1}$), while the BE for the lepton asymmetry also depends on the asymmetry parameter $\epsilon$. Solutions of these equations can be found in \cite{19}. In the presence of multiple scalar fields as described in Section II, the above BEs for leptogenesis have to be modified. The case with a single additional scalar field can be found in \cite{10}. For simplicity, we consider explicitly the simplest two-scalar-field scenario. Modifications to the BEs arise from the correction to the Hubble parameter (as derived in Sect.~\ref{Dbrane}).
With the assumption $g_S\sim g_E$ at large $T$, the total energy density can be extracted from eqs. \eqref{eq:totalrho}-\eqref{eq:modfactor}, and reads \begin{equation}\label{rho_new} \rho_{tot}(T) = \rho_{rad}(T)+\sum_{i=1}^2\rho_{\phi_i}(T)\, \\ =\rho_{rad}(T)\left\{1+\left(\frac{T}{T_r}\right)^{n_1}\left[1+\left(\frac{T}{T_2}\right)^{(n_2-n_1)}\right]\right\}, \end{equation} where $T\ge T_2$ corresponds to the epoch of $\phi_2$ scalar domination, $T_r\leq T\leq T_2$ represents that of $\phi_1$-dominated expansion, while for $T\le T_r$ ($T_r=T_1$) the universe is fully dominated by radiation. The modified Hubble parameter is thus \begin{equation}\label{Hnew} H_{new}=H\left\{1+\left(\frac{T}{T_r}\right)^{n_1}\left[1+\left(\frac{T}{T_2}\right)^{(n_2-n_1)}\right]\right\}^{1/2}\, \end{equation} and it gives rise to the following modified BEs \begin{align} &\frac{d Y_{N_1}}{dz}= -z \, \frac{\Gamma_1}{H_1} \, \frac{1}{\cal{J}} \, \frac{\mathcal{K}_1(z)}{\mathcal{K}_2(z)}\left(Y_{N_1}-Y_{N_1}^{EQ}\right)\,, \label{RHNBE2}\\ &\frac{d Y_{L}}{dz} = - \frac{\Gamma_1}{H_1} \, \frac{1}{\cal{J}} \, \left(\epsilon z \frac{\mathcal{K}_1(z)}{\mathcal{K}_2(z)}(Y_{N_1}^{EQ}- Y_{N_1}) + \frac{z^{3} \mathcal{K}_1(z)}{4} Y_L \right)\, .\label{asy2} \end{align} A convenient and useful way to write $\cal{J}$ is \begin{equation}\label{factor} {\cal{J}}=\left\{1+\left(\frac{M_1}{T_r z}\right)^{n_1}\left[1+\left(\frac{M_1}{T_r x z}\right)^{(n_2-n_1)}\right]\right\}^{1/2}\, , \end{equation} with $x=\frac{T_2}{T_r}$. Looking at the modified BEs \eqref{RHNBE2} and \eqref{asy2}, it can be observed that, apart from the standard parameters $\epsilon$ and $K=\frac{\Gamma_1}{H_1}$, leptogenesis with two scalar fields depends on a set of four new parameters, $n_1,~n_2~({\rm{or}}~n_2-n_1),~T_r/M_1$ and $x$ (or $T_2$), which naturally modify the abundance of lepton asymmetry $Y_L$ compared to that of standard leptogenesis.
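The behavior of the correction factor in Eq.~\eqref{factor} can be sketched numerically (the parameter values below are illustrative choices of the same kind as those used in the figures; this is not the full Boltzmann solver):

```python
def J_factor(z, M1_over_Tr, x, n1, n2):
    """Two-scalar correction factor of Eq. (factor):
    J = sqrt(1 + (M1/(Tr z))^n1 * [1 + (M1/(Tr x z))^(n2 - n1)])."""
    r = M1_over_Tr / z
    return (1.0 + r**n1 * (1.0 + (r / x)**(n2 - n1)))**0.5

# Illustrative parameters: Tr = 1e-3 M1 (M1/Tr = 1000), T2 = 5 Tr
# (x = 5), dilution indices n1 = 2 and n2 = 3.
J_early = J_factor(1.0, 1000.0, 5.0, 2, 3)     # at T = M1: strong enhancement
J_late = J_factor(1000.0, 1000.0, 5.0, 2, 3)   # at T = Tr: J -> sqrt(2.2)
```

The factor is large at $z\sim 1$, where the decays and inverse decays of $N_1$ are active, and relaxes toward $1$ at low temperatures, which is the mechanism behind the washout suppression discussed below.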
We solve Eqs.~\eqref{RHNBE2} and \eqref{asy2} numerically, considering two possible sets of initial conditions. The first, A, corresponds to the case where the abundance of the RHN $N_1$ is the equilibrium one, $Y_{N_1}^{in}=Y_{N_1}^{eq}$. The second, B, with $Y_{N_1}^{in}=0$, corresponds to a vanishing initial RHN abundance. In both cases, we assume that lepton asymmetry is absent before the decay of $N_1$, $Y_L^{in}=0$. In the next two subsections we discuss in detail the solutions for the quantities involved in the modified BEs. Lepton asymmetry is partially converted into baryon asymmetry by sphalerons \cite{19} \begin{equation}\label{transfer} Y_B=\frac{8n_f+4n_{H}}{22n_f+13n_{H}}Y_L\,\, . \end{equation} Note that $Y_B=\frac{28}{79}~Y_L$ (for $n_H=1, n_f=3$), to be confronted with the observed baryon asymmetry in the universe, $Y_{B}=(8.24-9.38)\times 10^{-11}$ \cite{23}. \subsection{Case $Y_{N_1}^{in}=Y_{N_1}^{EQ}$} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Figures/Yeq_1.pdf} \includegraphics[width=0.45\textwidth]{Figures/Yeq_2.pdf} \caption{\it Evolution of $Y_L$ versus $z$ for initial equilibrium RHN abundance, $T_2=5 \, T_r$ and different $n_1$ values, with $n_2-n_1=1$ (left panel) and $n_2-n_1=2$ (right panel). The double black line(s) describe the baryogenesis threshold.} \label{fig:1} \end{center} \end{figure} In Fig.~\ref{fig:1} (left panel), we show how the lepton asymmetry $Y_L$ evolves as the universe expands with $z$, considering the modified Hubble parameter of eq.~\eqref{Hnew} for different $n_1$ but fixed $n_2-n_1=1$. For the purpose of illustration, the other relevant parameters are kept fixed: in particular, $\epsilon=10^{-5}$ ($M_1=10^{11}~{\rm{GeV}}$), $K=\frac{\Gamma_1}{H_1}=600$, $T_r=10^{-3}M_1$ and $T_2=5T_r$. A substantial increase of $Y_L$ with increasing $n_1$ can be observed, as expected, since a higher $n_1$ corresponds to a faster expansion.
Moreover, increasing $n_1$ also considerably dilutes the washout effect, as manifested by the suppression of the inverse decay into $N_1$. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{Figures/Tr0.01.pdf} \includegraphics[width=0.49\textwidth]{Figures/Tr0.001.pdf} \caption{\it $Y_L$ versus $T_2/T_r$ for $T_r/M_1=0.01$ (left panel) and $T_r/M_1=0.001$ (right panel), with different $n_1$. Solid (dashed) lines refer to $n_2-n_1=1$ ($n_2-n_1=2$). The double black line(s) describe the baryogenesis threshold.} \label{fig:2} \end{center} \end{figure} In order to study the net effect of the presence of the second scalar, we consider a different value, $n_2-n_1=2$, in the right panel of Fig.~\ref{fig:1}, keeping the same set of values for the remaining parameters as in the left panel. Comparing the two plots, it clearly emerges that the second scalar field gives rise to an enhancement of the asymmetry $Y_L$ accompanied by a clear lowering of the washout effect. For high values of $n_1$, however, the second scalar is less important, because the influence of $\phi_1$ is already very efficient, as demonstrated by the $n_1=3$ case, where the fast expansion and the increase of $Y_L$ lead to a negligible washout of the asymmetry. It is, however, worth analyzing the dependence on the other parameters involved. For instance, the ratio between the temperatures separating the successive epochs of scalar domination is very important. To this aim, it is useful to plot the dependence of the lepton asymmetry on the ratio of the two relevant temperatures, $T_2$ and $T_r=T_1$. Results are reported in Fig.~\ref{fig:2} in the range $2\leq T_2/T_r\leq 100$, for the two values $T_r=10^{-2}~M_1$ and $T_r=10^{-3}~M_1$, keeping the $\epsilon$ and $K$ values as in Fig.~\ref{fig:1}. In the left panel, the curves refer to different values of $n_1$ and $T_r=10^{-2}~M_1$, with solid lines corresponding to $n_2-n_1=1$ and dashed lines to $n_2-n_1=2$.
The same plot with $T_r=10^{-3}~M_1$ is reported in the right panel. In the left-panel case, the second scalar influences $Y_L$ only if $T_2\leq 10 \, T_r$, increasingly so with increasing difference $n_2-n_1$, independently of the value of $n_1$. Notice that only for $n_1=3$ can leptogenesis produce the required baryon abundance of the universe (black bar). The behavior changes drastically when $T_r=10^{-3}~M_1$, as shown in the right panel of Fig.~\ref{fig:2}. Indeed, a decrease in the value of $T_r/M_1$ translates into a longer epoch of $\phi_1$ domination and a delayed onset of radiation domination. Thus, it helps to drive the $N_1$ abundance away from equilibrium and to generate lepton asymmetry. It can easily be noticed that the enhancement of $Y_L$ indeed allows the baryon asymmetry constraint to be satisfied already for $n_1=1$, and that there is a higher sensitivity to the difference $n_2-n_1$, especially for large values of $T_2/T_r$, except for the case $n_1=3$ when, as in the previous analysis reported in Fig.~\ref{fig:1}, the washout is practically absent. Finally, it is worth observing that with increasing $T_2/T_r$ the effects related to the presence of $\phi_2$ become less and less important as the temperature decreases, and in particular less prominent at the time of leptogenesis, as follows directly from eq.~\eqref{factor}. For example, for $T_r/M_1=0.01$, the choice $T_2/T_r=100$ implies that the influence of $\phi_2$ on the Hubble parameter ceases at $T=M_1$ while, for $T_2/T_r=10$, $\phi_2$ remains active down to $T=0.1 M_1$, altering the abundance of $Y_L$. As is clear from \eqref{factor}, the second-scalar effect dominates for $M_1/T_r\gg xz$, becoming insignificant if $xz\ge 100$. \subsection{Case $Y_{N_1}^{in}=0$} In this paragraph, we repeat the study of leptogenesis in the presence of two scalars for vanishing initial RHN abundance (conditions ``B'').
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Figures/Y0_1.pdf} \includegraphics[width=0.45\textwidth]{Figures/Y0_2.pdf} \caption{\it Same as Fig.~\ref{fig:1} for zero RHN abundance.} \label{fig:3} \end{center} \end{figure} In Fig.~\ref{fig:3}, we report plots analogous to those in Fig.~\ref{fig:1}, using the same set of parameters $\epsilon$, $M_1$, $T_r$, $T_2$, $K$ and $n_i$. In this scenario, the initially produced $N_1$ are partially compensated by inverse decays, resulting in an oscillation through negative lepton asymmetry before a net positive lepton asymmetry is generated. It should be noticed that for unflavored leptogenesis the Boltzmann equations give rise to solutions with a single bounce in $Y_L$, while this is not the case in more general frameworks, where additional bounces can occur, as shown in \cite{24}. From Fig.~\ref{fig:3} (left panel) it turns out that increasing $n_1$ from $1$ to $3$ reduces the washout of the asymmetry, which tends to enhance the $Y_L$ value. For $n_1=3$, however, the final $Y_L$ abundance decreases significantly. This is due to the fact that a faster expansion also reduces the production of RHNs by inverse decay. Similar effects have already been observed in the presence of a single additional scalar field \cite{10}. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{Figures/0_Tr0.01.pdf} \includegraphics[width=0.49\textwidth]{Figures/0_Tr0.001.pdf} \caption{\it Same as Fig.~\ref{fig:2} with zero initial RHN abundance. The double black line(s) describe the baryogenesis threshold.} \label{fig:4} \end{center} \end{figure} Moreover, this behavior becomes even more prominent when increasing $n_2-n_1$, as depicted in the right panel of Fig.~\ref{fig:3}. Clearly, a faster expansion with respect to the $n_2-n_1=1$ case tends to reduce the lepton asymmetry when starting from $Y_{N_1}^{in}=0$.
Again, it is useful to study $Y_L$ as a function of $T_2/T_r$. An analysis similar to that of the previous paragraph leads to the plots in Fig.~\ref{fig:4}, for $T_r/M_1=10^{-2}$ (left panel) and $T_r/M_1=10^{-3}$ (right panel). The fixed parameters are exactly the same as those in Fig.~\ref{fig:2}. For $T_r/M_1=10^{-2}$, the behavior of $Y_L$ is very similar to the previous case of Fig.~\ref{fig:2} (left panel), and the analysis remains basically the same, even though $Y_{N_1}^{in}=0$. In the case $T_r/M_1=10^{-3}$, when compared with Fig.~\ref{fig:2} (right panel), the quantitative results are quite different, but the qualitative behavior of $Y_L$ with the ratio $T_2/T_r$ is again basically the same. The reduction of the final amount of asymmetry, as mentioned, is due to the relevance of the inverse decay, which induces oscillations in the washout mechanism. In any case, the required amount of baryon asymmetry can still be obtained for a large range of $T_2/T_r$ values, at least when $T_r/M_1=10^{-3}$. \section{Non-Standard History of Leptogenesis with three scalar fields} \label{3scalar} It is quite difficult to solve the BEs for a generic number $k\ge 3$ of additional scalar fields. In order to gauge the trend of the solutions, it is worth proceeding with the $k=3$ example. Already in this case the modifications of the BEs for leptogenesis are involved, with an increased number of free parameters. The correction factor in this case reads \begin{eqnarray}\label{3field} {\cal{J}}= \left\{1+\left(\frac{M_1}{T_r z}\right)^{n_1}\left[1+\left(\frac{M_1}{T_r x z}\right)^{(n_2-n_1)}\left(1+\left(\frac{M_1}{T_r y z}\right)^{(n_3-n_2)} \right)\right]\right\}^{1/2}\,, \end{eqnarray} where $y=T_3/T_r$. Therefore, in the presence of three scalar fields, six new parameters ($n_i$, $i=1,...,3$, $T_r$, $x$ and $y$) are needed to specify the modified Hubble rate.
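The nested structure of $\mathcal{J}$ generalizes straightforwardly, reducing to eq.~\eqref{factor} for $k=2$ and to eq.~\eqref{3field} for $k=3$. The generic-$k$ sketch below is our extrapolation of that nested pattern, not an expression given explicitly in the text.

```python
import numpy as np

# Generic k-field correction factor, built by nesting from the innermost
# (k-th) field outward.  ratios[i] = T_{i+1}/T_r, so ratios = [1.0] for a
# single field, [1.0, x] for two (eq. (factor)) and [1.0, x, y] for three
# (eq. (3field)).  The generic-k form itself is an extrapolation.
def J_k(z, exponents, ratios, Tr_over_M1=1e-3):
    r = 1.0 / (Tr_over_M1 * z)                # r = M1 / (T_r z)
    acc = 1.0
    for i in range(len(exponents) - 1, 0, -1):
        acc = 1.0 + (r / ratios[i])**(exponents[i] - exponents[i - 1]) * acc
    return np.sqrt(1.0 + (r / ratios[0])**exponents[0] * acc)

# cross-check against the explicit two- and three-field expressions
r = 1.0 / (1e-3 * 2.0)
two_field = np.sqrt(1.0 + r * (1.0 + r / 5.0))
three_field = np.sqrt(1.0 + r * (1.0 + (r / 5.0) * (1.0 + r / 10.0)))
print(J_k(2.0, [1, 2], [1.0, 5.0]), two_field)
print(J_k(2.0, [1, 2, 3], [1.0, 5.0, 10.0]), three_field)
```

Each additional field simply multiplies the innermost bracket by a new factor, which is why the number of free parameters grows by two ($n_i$ and $T_i$) per field.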
As described in Section \ref{Dbrane}, in our setting $T_3> T_2 > T_1=T_r$, with successive ordered domination from $\phi_3$ down to radiation. Again, the BEs for leptogenesis are solved for the two choices of the $N_1$ initial abundance already used in the two-scalar-field scenario. \subsection{Case $Y_{N_1}^{in}=Y_{N_1}^{EQ}$} The $Y_L$ abundance is reported in Fig.~\ref{fig:5} for the initial conditions $Y_{N_1}^{in}=Y_{N_1}^{EQ}$. The three curves in the left panel correspond to the choices $n_1=1,~2,~3$, with $T_3=10T_r$ and $n_3-n_2=1$. The other parameters $\epsilon$, $M_1$, $K$, $T_2$ and $T_r$ are kept fixed at the same values considered in Fig.~\ref{fig:1}. A comparison with the two-scalar-field case shows that the washout effect is further reduced where it is non-negligible, {\it i.e.} for $n_1=1$. The effect is even more significant when $n_2-n_1=2$, where the washout is already reduced in the presence of two scalar fields. The dependence on the new parameters $T_3$ and $n_3-n_2$ is also worth exploring, since it can change the behavior of the solutions. To this aim, we take $T_2=5 T_r$ and $T_r=10^{-3}M_1$, and solve the BEs for the four combinations of $n_2-n_1$ and $n_3-n_2$ equal to 1 or 2, also varying $T_3$ among $10 T_r$, $50 T_r$ and $100 T_r$. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Figures/Eq3_1_1_1_5_10.pdf} \includegraphics[width=0.45\textwidth]{Figures/Eq3_1_2_1_5_10.pdf} \caption{\it Effect of three scalar fields on $Y_L$ versus $z$ plots for initial equilibrium RHN abundance and different $n_1$ values with $n_2-n_1=1$ (left panel) and $n_2-n_1=2$ (right panel), with $T_3=10T_r$ and $n_3-n_2=1$. The double black line(s) describe the baryogenesis threshold.} \label{fig:5} \end{center} \end{figure} All the resulting $Y_L$ abundances are plotted in Fig.~\ref{fig:6}.
In the upper-left panel, it can be observed that, as $T_3$ increases, the washout effect becomes more prominent and the lepton asymmetry $Y_L$ decreases. This is expected, because an increase of $T_3$ makes the third scalar insignificant. Indeed, for $T_3=100 T_r$, the dominance of the third scalar terminates at $T=M_1$, well before the decay of $N_1$, without influencing leptogenesis. On the contrary, for $T_3=10 T_r$, it remains effective down to $T=0.1 M_1$, the time of leptogenesis. A similar behavior can be observed in the other three panels, for different choices of $n_3-n_2$ and $n_2-n_1$. The third scalar field affects $Y_L$ when $n_3-n_2$ increases, causing less washout, as evident from the comparison of the $n_3-n_2=2$ and $n_3-n_2=1$ cases. It is also clear that the influence of the third scalar depends upon that of the second. In the lower panels, it can be observed that increasing $n_2-n_1$ reduces the washout effect considerably, (almost) independently of the presence of the third scalar. In conclusion, one may say that an increase of both $n_3-n_2$ and $n_2-n_1$ results in a net decrease of the washout effect. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Figures/Eq3_1_1_1.pdf} \includegraphics[width=0.45\textwidth]{Figures/Eq3_1_1_2.pdf}\\ \includegraphics[width=0.45\textwidth]{Figures/Eq3_1_2_1.pdf} \includegraphics[width=0.45\textwidth]{Figures/Eq3_1_2_2.pdf} \caption{\it $Y_L$ versus $z$ plots for $Y_{N_1}^{in}=Y_{N_1}^{EQ}$ using various $T_3/T_r$ values ($10$, $50$ and $100$) and different $n_3-n_2$ and $n_2-n_1$ combinations of values $1$ and $2$.
The double black line(s) describe the baryogenesis threshold.} \label{fig:6} \end{center} \end{figure} \subsection{Case $Y_{N_1}^{in}=0$} In analogy with the previous discussion of leptogenesis in the presence of two additional scalar fields, in this paragraph we analyze the possible effects of a third scalar field in the case of vanishing initial $N_1$ abundance. As before, Fig.~\ref{fig:7} shows $Y_L$ versus $z$ for different $n_1$ and for $n_2-n_1=1,2$. The remaining parameters are the same as those used in Fig.~\ref{fig:5}. There is a conspicuous amount of washout for the three reported values $n_1=1, 2, 3$ (left panel) at the initial stage, while at later times the washout is limited to the $n_1=1$ case. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Figures/3_1_1_5_10.pdf} \includegraphics[width=0.45\textwidth]{Figures/3_2_1_5_10.pdf} \caption{\it Same as Fig.~\ref{fig:5} for zero initial RHN abundance. The double black line(s) describe the baryogenesis threshold.} \label{fig:7} \end{center} \end{figure} The $Y_L$ value also tends to decrease with increasing $n_1$, reflecting the reduced RHN production from inverse decays due to the faster expansion of the universe, as already observed in the analogous scenario with two scalar fields. In the right panel, for $n_2-n_1=2$, a similar behavior is slightly softened, with the $n_1=1$ case entering a regime of weak final washout. It should be noticed, however, that the value of $Y_L$ is reduced as the net RHN production drops, because of a weaker inverse decay in a faster expansion. It is also useful to extend the analysis related to Fig.~\ref{fig:6} to the case of three scalar fields for variable $T_3/T_r$ and vanishing initial $N_1$ abundance. An inspection of the four panels of Fig.~\ref{fig:8} clearly demonstrates that increasing $n_2-n_1$ or $n_3-n_2$ results in an overall dilution of the washout of the asymmetry.
In spite of this, the transition from strong to weak washout does not guarantee an enhancement of the $Y_L$ value, which is also governed by the production of RHNs from inverse decays. This fact can easily be extracted, for instance, from the $T_3=10 T_r$ plots (in blue) of Fig.~\ref{fig:8}. Initially, the lepton asymmetry enters a weak washout regime when $n_3-n_2$ changes from $1$ (upper left panel) to $2$ (upper right panel), and $Y_L$ increases. However, with a faster expansion ($n_2-n_1=2$, lower left panel), and even more with a larger $n_3-n_2 =2$ (lower right panel), $Y_L$ decreases due to a suppressed RHN production. Things are quite different for the $T_3=50 T_r$ and $T_3=100 T_r$ plots (yellow and green curves, respectively). The produced lepton asymmetry gradually enters a weak washout regime where it is enhanced (upper right and lower left panels) and finally saturates as the washout effect becomes negligible (lower right panel). \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Figures/3_1_1_1.pdf} \includegraphics[width=0.45\textwidth]{Figures/3_1_1_2.pdf}\\ \includegraphics[width=0.45\textwidth]{Figures/3_1_2_1.pdf} \includegraphics[width=0.45\textwidth]{Figures/3_1_2_2.pdf} \caption{\it Same as Fig.~\ref{fig:6} plotted for $Y_{N_1}^{in}=0$ with identical color scheme for different choices of $T_3$. The double black line(s) describe the baryogenesis threshold.} \label{fig:8} \end{center} \end{figure} This behavior differs from the one reported in Fig.~\ref{fig:6} and is related, as discussed, to the combined effects of RHN production and washout. As explained before, large values of $T_3/T_r$ soften the influence of the third scalar field and enhance the washout, since the system is farther from an out-of-equilibrium phase (upper panels). These features disappear when the second-scalar effect becomes stronger (lower panels).
Finally, it should be stressed that the modifications to the BEs are basically linked to modifications of the effective Hubble rate, like the one in eq.~\eqref{Hnew} for the two-field case. It is thus natural to infer that our results are applicable to many other types of unflavored leptogenesis scenarios and are almost model-independent. \section{Discussion and Conclusions} It is notoriously difficult to probe the dynamics of the early universe in the epoch between cosmic inflation and the onset of BBN, whose predictions for the primordial abundances of light elements are in very good agreement with measurements and represent one of the biggest successes of modern cosmology. The post-inflationary evolution is thus highly unconstrained, having only to be compatible with BBN. In particular, all the cosmic relics that contribute to define the $\Lambda$CDM cosmological model, such as dark matter, dark energy, the baryon abundance and the radiation composition, crucially depend on the history around that time. Hence, the expansion rate can be drastically different from that of standard cosmology in models where additional ingredients from fundamental (quantum or modified) gravity theories are present. For instance, four-dimensional (super)string models equipped with D-branes typically contain additional scalar species related to the positions of the branes in the transverse internal directions, which in non-equilibrium configurations could dominate the expansion rate before the radiation-dominated phase. It is plausible that these scalars are active during various processes in the early universe, such as post-reheating dynamics, baryogenesis, leptogenesis, and dark matter freeze-out or freeze-in, and can thus significantly modify the thermal evolution of the universe.
In this paper, we have addressed the effects of the presence of multiple additional (sterile) scalar fields with a faster-than-radiation dilution law in the post-reheating epoch which, if active at the scale of thermal leptogenesis ($T\sim 10^{12}$ GeV), can cause significant changes in the baryon asymmetry of the universe (via leptogenesis). In what follows, we briefly summarize our findings from the study of modified leptogenesis. The Boltzmann equations (BEs) describe the dynamics of the decay of the lightest RHN $N_1$, together with the evolution of the lepton asymmetry abundance $Y_L$. In the presence of the $k$ additional scalar fields defined in Section \ref{Dbrane}, the standard BEs are modified essentially by the introduction of an ``effective'' Hubble rate, $H_{new}(T) = H(T) \, \mathcal{J}(T)$, where $\mathcal{J}$ is defined in eq.~\eqref{eq:complmodfact}. It depends on the exponents $n_i$, on the ``separating temperatures'' $T_i$ and on the effective degrees of freedom active at the corresponding epochs. Another important ingredient is represented by the initial conditions. We have considered the two cases $Y_{N_1}^{in}=Y_{N_1}^{EQ}$ (conditions ``A''), where the initial abundance of the RHN $N_1$ coincides with the equilibrium abundance, and $Y_{N_1}^{in}=0$ (conditions ``B''), with vanishing initial abundance of $N_1$. The initial asymmetry $Y_L^{in}$ is always taken to vanish. The main general results that can be extracted from the numerical solutions of the BEs are the following: \begin{itemize} \item Typically, as the $n_i$ increase, $Y_L$ increases while the washout decreases (see Figs.~1, 3, 5, 7). This is due to the fact that the faster the expansion, the larger the departure from thermal equilibrium. As a consequence, the asymmetry is fed while the washout is disfavored, because there is less inverse decay. \item The relevance of $\phi_{i+1}$ with respect to $\phi_i$ depends upon the difference $n_{i+1}-n_i$.
It clearly grows if $n_{i+1}-n_i$ increases, but $n_i$ must not be too high, otherwise the dominance of $\phi_{i+1}$ sets in too early, in an epoch where the RHN $N_1$ has not yet been produced in sufficient quantity (see Figs.~2, 4, 6, 8). In other words, if $\phi_i$ already absorbs the whole washout, the action of $\phi_{i+1}$ ceases to be significant. \item In the evaluation of the $\phi_i$ contribution to leptogenesis, $T_i$ is of course a fundamental parameter, since the field $\phi_i$ is active only if $T_i<M_1$. Moreover, if the ratio $T_{i+1}/T_i$ decreases, the $\phi_{i+1}$-domination epoch is longer and $Y_L$ becomes larger. All the temperatures have to be related also to $M_1$ and to $T_1=T_r$. \item With the $Y_{N_1}^{in}=Y_{N_1}^{EQ}$ initial conditions, the production of the asymmetry $Y_L$ is typically monotonic, and after a washout $Y_L$ saturates at a certain value. To evaluate whether leptogenesis is efficient enough to generate the required amount of baryon asymmetry, one has to analyze the balance between the values of the dilution exponents $n_i$ and the ratios of the temperatures $T_i$ to the radiation temperature $T_r$ (Figs.~1, 2, 5, 6). \item With the $Y_{N_1}^{in}=0$ initial conditions, there is an oscillation due to the strong initial washout, since the inverse decay of the produced RHN $N_1$ is large at the beginning, while $N_1$ starts from a vanishing abundance. The saturation of $Y_L$ is thus slower and the amount of asymmetry $Y_L$ can be small. As in the previous case, in order to understand whether leptogenesis can generate baryogenesis, one has to evaluate the dependence of $Y_L$ upon the $n_i$ and the temperatures $T_i$ (Figs.~3, 4, 7, 8). \item It is quite clear, however, that in general more scalar fields contribute to increase the value of the asymmetry $Y_L$, and in the worst case they become ineffective for the reasons mentioned above.
However, with vanishing initial $N_1$ abundance, the saturated $Y_L$ value is reduced, due to a weaker inverse decay in a faster expansion. \item We have studied in detail the case with two scalar fields, where it is indeed possible to reproduce the baryon asymmetry of the universe within the range $0.001\le T_r/M_1 \le 0.01$ for thermal leptogenesis, with the chosen set of parameters $M_1=10^{11}$ GeV, $\epsilon=10^{-5}$ and $K=600$ (see Section \ref{leptogen}), for a large interval of $T_2/T_r$ values (Figs.~2, 4). \item In the case of three scalar fields, also studied in detail, it is important to analyze the behavior of the system with initial conditions $Y_{N_1}^{in}=0$ in comparison with the $Y_{N_1}^{in}=Y_{N_1}^{EQ}$ initial conditions. Again, as in the presence of two scalar fields, a decrease of the washout accompanied by a decrease of the $Y_L$ values can be observed. Moreover, one finds that leptogenesis can generate baryogenesis in the interval $10\le T_3/T_r \le 100$ for different $n_2-n_1$ and $n_3-n_2$ (see Section \ref{3scalar} and Figs.~6, 8 for details). \end{itemize} It is important to notice that, since the modifications to leptogenesis are obtained by replacing the Hubble parameter with the effective one, we expect our findings to be applicable to any other thermal leptogenesis model, independently of the choice of the seesaw mechanism. Therefore, one can actually obtain a different regime of consistent parameter space in model-dependent studies due to the influence of these scalar fields. The scale of leptogenesis in the case of the typical type I seesaw model discussed in the present work is very high and out of reach for the ongoing experimental facilities, since the RHN mass must be above $10^9$ GeV \cite{19}.
In general, one may only obtain indirect signals of leptogenesis via observations of neutrinoless double beta decay \cite{25}, via CP violation in neutrino oscillations \cite{26}, from the structure of the mixing matrix \cite{27}, or from constraints relying on Higgs vacuum metastability in the early universe \cite{28,29}, which pin down very tightly the parameter space of heavy-neutrino physics \cite{30}. Several mechanisms with a much lower leptogenesis scale exist, where the RHN masses arise from new physics around the TeV scale: if two RHNs are nearly degenerate in mass, one has \emph{resonant leptogenesis} \cite{31}; alternatives proceed via oscillations of GeV-scale right-handed neutrinos \cite{32}, via Higgs decay \cite{33}, or in a dark-matter-assisted scenario with $1$- to $3$-body decays \cite{34}, all of which allow these models to be probed at the ongoing experimental facilities. In such models, it is plausible to obtain successful leptogenesis via RHNs with mass $M_1 \sim 10$ TeV, assuming an initial thermal abundance for the RHNs along with an almost complete absence of washout. It is possible that the scalar fields discussed in the present work remain active at low scales as well. Therefore, we expect significant deviations from the results reported in such low-scale leptogenesis models once a different thermal expansion is induced by the scalars, provided it takes place above the energy scale at which the sphalerons are active to transfer the asymmetry to the baryon sector. Moreover, possible observations of primordial gravitational waves sourced by topological defects \cite{35}, colliding vacuum bubbles \cite{36} or primordial black holes \cite{37}, together with Cosmic Microwave Background radiation (CMBR) measurements \cite{38}, represent additional and complementary tools to probe leptogenesis at high energy scales. However, it should be stressed that effective models like the one presented here are not easy to distinguish from other mechanisms providing similar effects.
It would thus be interesting to understand whether experimental probes of the proposed scenario can be devised. \section{Acknowledgements} ADB acknowledges financial support from DST, India, under grant number IFA20-PH250 (INSPIRE Faculty Award). This work is supported in part by the DyConn Grant of the University of Rome ``Tor Vergata''.
\section{Introduction} \label{sec:intro} The ability to design new materials with desired properties is crucial to the development of new technology. The design of silicon- and lithium-ion-based materials is a well-known example, which led to the proliferation of consumer hand-held devices today. Materials discovery has historically proceeded via trial and error, with a mixture of serendipity and intuition being the most fruitful path. For example, all major classes of superconductors--from elemental mercury in 1911, to the heavy fermions, cuprates and, most recently, the iron-based superconductors--have been discovered largely by chance~\cite{Greene_2012}. The dream of materials design is to create an effective workflow for discovering new materials by combining our theories of electronic structure, chemistry and computation. It is an inverse problem: start with the desired material properties, and work backwards to the chemical compositions and crystal structures which would lead to them. It requires a conceptual framework for thinking about the physical properties of materials, and sufficiently accurate methods for computing them. In addition, it requires algorithms for predicting crystal structures and testing them for stability against decomposition, efficient codes implementing them, and broadly accessible databases of known materials and their properties. For weakly correlated materials--systems for which band theory works--significant progress on all these fronts has been made. Fermi liquid theory justifies thinking of the excitations of a solid in terms of quasiparticles. Kohn-Sham density functional theory (DFT) is a good tool for computing total energies and a good starting point for computing those quasiparticle properties in perturbation theory in the screened Coulomb interactions. Practical implementations of DFT such as the LDA and GGA have become the underlying workhorse of the scientific community.
Extensive benchmarks of software implementations~\cite{Lejaeghere_2016} have shown that DFT reliably produces the total energy of a given configuration of atoms, enabling comparisons of stability between different chemical polymorphs. The maturity of DFT, combined with searchable repositories of experimental and calculated data (Materials Project~\cite{Jain_2013}, OQMD~\cite{Kirklin2015}, AFLOWlib~\cite{Setyawan2011} and NIMS~\cite{Neugebauer2012}), has fostered the growth of databases of computed materials properties to the point where one can successfully design materials (see for example Refs.~\onlinecite{Fennie_2008, Gautier_2015, Fredeman_2011}). Indeed, these advances are beginning to pan out. The search for new topological materials, such as topological insulators or Weyl semimetals, is now greatly aided by electronic structure calculations (for a recent review see Ref.~\onlinecite{Armitage2017}). Another clear example of this coming of age is the recent prediction of superconductivity in H$_3$S under high pressure near 190~K~\cite{Duan_2014}. Subsequently, hydrogen sulfide was observed to superconduct near 200~K, making it the highest-temperature superconductor discovered so far~\cite{Drozdov_2015}. The situation is different for strongly correlated materials. Many aspects of the physics of correlated electron materials are still not well understood. Correlated systems exhibit novel phenomena not observed in weakly correlated materials: metal-insulator transitions, magnetic order and unconventional superconductivity are salient examples. Designing and optimizing materials with these properties would advance both technology and our understanding of the underlying physics. Furthermore, material-specific predictive theory for this class of materials is not fully developed, so even the direct problem of predicting the properties of correlated materials with known atomic coordinates is very challenging.
It requires going beyond perturbative approaches, and we currently lack methods for reliably modeling materials properties which scale up to the massive number of calculations necessary for materials design purposes. In this article, we examine the challenges of materials design projects involving strongly correlated electron systems. Our goal is to present the state of the art in the field, stressing the outstanding challenges as they pertain to correlated materials, and to propose strategies to solve them. We begin by providing a clear definition of correlations (Sec.~\ref{sec:correlations}), distinguishing two important types, static and dynamic, and some available tools to treat them. Next we introduce the materials design workflow (Sec.~\ref{sec:workflow}). Then we give five examples of materials design in correlated systems to illustrate the application of our ideas (Sec.~\ref{sec:tuning}-\ref{sec:bacoso}) and conclude with a brief outlook. \section{What are correlated materials? Static and dynamic correlations} \label{sec:correlations} The standard model of periodic solids views the electrons in a crystal as freely propagating waves with well-defined quantum numbers, crystal momentum and band index. Dating back to Sommerfeld and Bloch, it now has a firm foundation in Fermi liquid theory and the renormalization group, which explains why the effects of Coulomb interactions disappear or ``renormalize away'' at low energies, and provides an exact description of the excitation spectra in terms of quasiparticles. Another route to the band theory of solids is provided by density functional theory in the Kohn-Sham implementation, where a system of non-interacting quasiparticles is designed so as to reproduce the exact density of the solid.
While this wave picture of a solid has been extraordinarily successful and is the foundation for the description of numerous materials, it fails dramatically for a class of materials which we will denote strongly correlated electron systems. The basic feature of correlated materials is that their electrons cannot be described as non-interacting particles. Since the constituent electrons are strongly coupled to one another, studying the behavior of individual particles generally provides little insight into the macroscopic properties of a correlated material. Often, correlated materials arise when electrons are subjected to two competing tendencies: the kinetic energy of hopping between atomic orbitals promotes band behavior, while the potential energy of electron-electron repulsion prefers atomic behavior. When a system is tuned so that the two energy scales are comparable, neither the itinerant nor the atomic viewpoint is sufficient to capture the physics. The most interesting phases generally occur in this correlated and difficult-to-describe regime, as we shall see in subsequent sections. These ideas have to be sharpened in order to quantify correlation strength, as there is no sharp boundary between weakly and strongly correlated materials. Ultimately one would like to have a methodology which can explain the properties of any solid and which seeks to make predictions for comparison with experimental observations. To arrive at an operational definition of a correlated material, we examine DFT and how it relates to the observed electronic spectra. The key idea behind DFT is that the free energy of a solid can be expressed as a functional of the electron density $\rho(\vec{r})$. Extremizing the free energy functional, one obtains the electronic density of the solid, and the value of the functional at the extremum gives the total free energy of the material.
The functional has the form $\Gamma[\rho]=\Gamma_{univ}[\rho]+\int d^{3}rV_{cryst}(r)\rho(r)$ where $\Gamma_{univ}[\rho]$ is the same for all materials, and the material-specific information is contained in the second term through the crystalline potential. The universal functional is written as a sum of $T[\rho]$, the kinetic energy, $E_{H}$, a Hartree Coulomb energy, and a remainder denoted $F_{xc}$, the exchange correlation free energy. This term needs to be approximated since it is not exactly known, and the simplest approximation is to use the free energy of the electron gas at a given density. This is called the Local Density Approximation (LDA). The extremization of the functional was recast by Kohn and Sham~\cite{Kohn_1965} in the form of a single particle Schr{\"o}dinger equation in Hartree atomic units~$\left(m_{e}=e=\hbar=\frac{1}{4\pi\varepsilon_{0}}=1\right)$ \begin{equation} \left[-\frac{1}{2}\nabla^{2}+V_{KS}\left(\vec{r}\right)\right]\psi_{\vec{k}j}\left(\vec{r}\right)=\epsilon_{\vec{k}j}\psi_{\vec{k}j}\left(\vec{r}\right),\label{Kohn-Sham} \end{equation} whose occupied eigenfunctions reproduce the density of the solid, \begin{equation} \sum_{\vec{k}j}|\psi_{\vec{k}j}(\vec{r})|^{2}f(\epsilon_{\vec{k}j})=\rho(\vec{r}).\label{KS2} \end{equation} It is useful to divide the Kohn-Sham potential into several parts: $V_{KS}=V_{H}+V_{cryst}+V_{xc}$, where one lumps into $V_{xc}$ exchange and correlation effects beyond Hartree. In practice, the exchange-correlation term is difficult to capture, and is generally modeled by the local density approximation introduced above or the generalized gradient approximation (GGA). Density functional calculations using the LDA/GGA approximation have become very precise, so that the uncertainties are almost entirely systematic.
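The orbital-to-density step of Eqs.~(\ref{Kohn-Sham}) and~(\ref{KS2}) can be made concrete with a small numerical sketch. The script below (grid sizes, the stand-in potential and all variable names are our own illustrative choices) diagonalizes a one-dimensional Kohn-Sham-like Hamiltonian for a fixed potential and rebuilds the density from the occupied orbitals; a real DFT code would iterate this step to self-consistency with $V_{KS}[\rho]$.

```python
import numpy as np

n, L = 400, 10.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
v_ks = 0.5 * x**2                      # stand-in Kohn-Sham potential (harmonic well)

# kinetic term -1/2 d^2/dx^2 by central finite differences
main = np.full(n, 1.0 / h**2)
off = np.full(n - 1, -0.5 / h**2)
ham = np.diag(main) + np.diag(off, 1) + np.diag(off, -1) + np.diag(v_ks)

eps, psi = np.linalg.eigh(ham)         # Kohn-Sham eigenvalues and orbitals
psi /= np.sqrt(h)                      # normalize so that sum |psi|^2 h = 1

n_occ = 3                              # fill the three lowest orbitals (T = 0 occupations)
rho = (np.abs(psi[:, :n_occ])**2).sum(axis=1)   # Eq. (KS2) at zero temperature
```

For this harmonic stand-in potential the lowest eigenvalues come out close to the analytic values $\frac{1}{2},\frac{3}{2},\dots$, and the density integrates to the number of occupied orbitals by construction.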
To get a feel for the numbers, convergence criteria of $10^{-1}$ to $10^{-4}$~meV/atom are routinely used whereas differences between experimental and theoretical heats of formation routinely differ by over 100~meV/atom~\cite{Stevanovic_2012}. The eigenvalues $\epsilon_{\vec{k}j}$ of the solution of the self-consistent set of Eqs.~(\ref{Kohn-Sham}) and~(\ref{KS2}) are not to be interpreted as excitation energies. Kohn-Sham excitations are \textit{not} Landau quasiparticle excitations. The latter represent the excitation spectra, which are the experimental observables in angle resolved photoemission and inverse photoemission experiments, and should be extracted from the poles of the one particle Green's function: \begin{equation} G\left(\vec{r},\vec{r'},\omega\right)=\frac{1}{\omega+\frac{1}{2}\nabla^{2}+\mu-V_{H}-V_{cryst}-\Sigma\left(\vec{r},\vec{r',}\omega\right)}.\label{eq:gwr} \end{equation} Here $\mu$ is the chemical potential and we have singled out in Eq.~(\ref{eq:gwr}) the Hartree potential $V_{H}(\vec{r})=\int\frac{\rho\left(\vec{r}'\right)}{|\vec{r}-\vec{r}'|}d^{3}r'$ expressed in terms of the exact density, $V_{cryst}$ is the crystal potential, and we lumped the rest of the effects of the correlation into the self-energy operator $\Sigma\left(\vec{r},\vec{r',}\omega\right)$, which depends on frequency as well as on two space variables. \begin{figure} \includegraphics[width=0.5\textwidth]{Fig_GW_new}\caption{\label{fig:GW} Schematic diagrams for the GW method. Starting from some $G_{0}$ a polarization bubble is constructed, which is used to screen the Coulomb interactions, resulting in an interaction W. This W is then used to compute a self-energy $\Sigma_{GW}$ using W and $G_{0}$. To obtain the full Green's function G in Eq.~(\ref{eq:gwr}), one goes from $\Sigma_{GW}$ to $\Sigma$ by subtracting the necessary single particle potential and uses the Dyson equation $G^{-1}={G_{0}}^{-1}-\Sigma$ as discussed in the text.
Adapted from \onlinecite{DMFT_at_25}.} \end{figure} Since taking $\Sigma=V_{xc}$ generates the Kohn-Sham spectra, we define weakly correlated materials as those for which \begin{equation} \left|\Sigma(\omega)-V_{\text{xc}}\right|\label{eq:deviation} \end{equation} is small at low energies, i.e. those for which the Kohn-Sham spectra are sufficiently close to experimental results. We can refine this definition by taking into account first order perturbation theory in the screened Coulomb interactions, taking LDA as a starting point. This is the $G_{0}W_{0}$ method, which we now describe using diagrams in Fig.~\ref{fig:GW}. This figure first describes the evaluation of the polarization bubble $\Pi$ \begin{equation} \Pi\left(t,t'\right)=G_{0}\left(t,t'\right)G_{0}\left(t',t\right).\label{eq:bubble} \end{equation} Next, the screened Coulomb potential $W$ in the random phase approximation (RPA) is the infinite sum of diagrams depicted, and represents the expression \begin{equation} W^{-1}=v_{Coul}^{-1}-\Pi\label{eq:Q} \end{equation} where $v_{Coul}$ is the bare Coulomb potential. Then one proceeds to the evaluation of a self-energy \begin{equation} \Sigma_{GW}=G_{0}W\label{eq:self_GW} \end{equation} which represents the lowest order contribution in perturbation theory in W (given in real space by Fig.~\ref{fig:GW}), and then $G^{-1}=G_{0}^{-1}-\Sigma$ using Dyson's equation. $G_{0}$ above is just a Green's function of non-interacting particles, and it can thus be defined in various ways, leading to different variants of the GW method. In the ``one-shot'' (that is, a method with no self-consistency loop) GW method (a.k.a. $G_{0}W_{0}$) one uses the LDA Kohn-Sham Green's function \begin{equation} G_{0}\left(i\omega\right)^{-1}=i\omega+\mu+\frac{1}{2}\nabla^{2}-V_{H}-V_{cryst}-V_{xc}^{LDA},\label{eq:GW_G0} \end{equation} and the self-energy is thus taken to be $\Sigma=\Sigma_{GW}-V_{xc}^{LDA}$.
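The physical content of Eqs.~(\ref{eq:bubble}) and~(\ref{eq:Q}) can be seen in the simplest possible setting. The toy script below (a sketch under stated assumptions, not part of any GW code) uses the leading small-$q$ electron-gas polarization $\Pi(q,\omega)\simeq nq^{2}/\omega^{2}$ in atomic units and locates the resulting pole of $W=v_{Coul}/(1-v_{Coul}\Pi)$, which is the plasmon at $\omega_{p}=\sqrt{4\pi n}$.

```python
import numpy as np

n_el = 0.01                            # electron density (illustrative value)
q = 0.1                                # a small wavevector
v = 4.0 * np.pi / q**2                 # bare Coulomb interaction v(q)

def eps_rpa(w):
    """RPA dielectric function 1 - v*Pi with the small-q polarization bubble."""
    pi_bubble = n_el * q**2 / w**2
    return 1.0 - v * pi_bubble

# bisection for the zero of eps_rpa, i.e. the pole of W = v / eps
lo, hi = 0.05, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if eps_rpa(lo) * eps_rpa(mid) <= 0:
        hi = mid
    else:
        lo = mid
w_pole = 0.5 * (lo + hi)               # numerical plasma frequency
```

Because $v(q)\Pi(q,\omega)=4\pi n/\omega^{2}$ at small $q$, the pole is independent of $q$ and sits exactly at $\sqrt{4\pi n}$, which the bisection recovers.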
Throughout this paper we use a matrix notation loosely and view operators as matrices. For example, in the Dyson equations $W,v_{Coul},\Pi$ are operators (matrices) with matrix elements $\bra{r}W(\omega)\ket{r'}=W(\omega,r,r')$, $\bra{r}\Pi(\omega)\ket{r'}=\Pi(\omega,r,r')$, $\bra{r}v_{Coul}\ket{r'}=\frac{1}{|r-r'|}$. This \textbf{$G_{0}W_{0}$ method}, introduced by Hybertsen and Louie \cite{Hybertsen1985}, systematically improves the gaps of all semiconducting materials. We show this in Fig.~\ref{fig:louie}. The success of the $G_{0}W_{0}$ method implies that in this class of materials the exchange-correlation potential is sufficiently close to the exact self-energy that the first order perturbation theory correction $\Sigma_{G_{0}W_{0}}(\omega)-V_{\text{xc}}^{LDA/GGA}$ brings us close enough to the experimental results. \begin{figure*} \includegraphics[scale=0.8]{louie-doctored}\caption{Theoretically-determined semiconductor gap in a one shot LDA $G_{0}W_{0}$ calculation versus experiment (data compiled by E. Shirley). Adapted from Chapter III. ``First-principles theory of electron excitation energies in solids, surface, and defects'' (article author: Steven G. Louie) in Topics in Computational Materials Science, edited by C. Y. Fong (World Scientific, 1998) {[}\onlinecite{louie1998first}{]}. Diamonds are the $G_{0}W_{0}$ excitation gap, while the crosses are the LDA value.\label{fig:louie}} \end{figure*} However, there are many materials (usually containing atoms with open $d$ or $f$ shells) where the photoemission spectra (and many other physical properties) are not so well described by this method. A successful many body theory of the solid state aims to describe all these systems. For the most widely used DFT starting points, LDA and GGA, what is the physical basis for the deviations in Eq.~(\ref{eq:deviation})?
It is useful to think about two limiting cases: one in which the self-energy $\Sigma$ is a strong function of frequency, in which case we talk about dynamical effects, and one where $\Sigma$ is strongly momentum-dependent (in real space, highly non-local) but weakly frequency dependent, in which case we talk about static correlations. In materials with strong dynamical correlations the spectral function displays additional peaks, which are not present in band theory, and reflect the atomic multiplets of the material. Electron correlation is customarily divided into dynamical and non-dynamical, but there is no strict definition of these terms. In the context of Quantum Chemistry calculations, these terms are mainly used to describe the ability of different methods to capture significant correlation effects, and the type of wave function which would approximate the exact solution of the Schr{\"o}dinger equation. Non-dynamical or static correlation in the chemistry context means that energetically-close or degenerate electronic configurations are appreciably present in the wave function. This requires multiple Slater determinants of low lying configurations, and multi-reference methods to describe them, such as the multi-reference Hartree-Fock method, or multi-reference coupled cluster methods. Dynamical correlation refers to a situation where a single Slater determinant, such as a closed shell configuration of some orbitals, is a good reference system, which then needs to be dressed by including double (or higher) excitations from strongly occupied core shells to empty orbitals. In addition, other virtual processes can modify the orbitals of the original Slater determinant. This situation is well described by the standard coupled cluster method, which is considered the gold standard in Quantum Chemistry~\cite{shavitt2009many}.
Confusingly, the chemist's delocalization error corresponds to our definition of a $k$-dependent self-energy, which we denote as static correlation (since it does not involve frequency dependence of the self-energy), while the chemist's static correlation corresponds to what we call dynamical correlation, as it requires a strongly frequency dependent self-energy in condensed matter physics. We use the solid state physicist's convention in this article. Another useful way to classify the correlations is by the level of locality of the self-energy. Introducing a complete basis set of localized wave functions labeled by site and orbital index we can expand the self-energy as \begin{equation} \Sigma\left(\vec{r},\vec{r}',\omega\right)=\sum_{\alpha\vec{R},\beta\vec{R}'}\chi_{\alpha\vec{R}}^{*}(\vec{r})\Sigma{}_{\alpha\vec{R},\beta\vec{R}'}\left(\omega\right)\chi_{\beta\vec{R}'}\left(\vec{r}'\right).\label{eq:sigma_basis} \end{equation} The self-energy is approximately local when the on-site term $R=R'$ in Eq.~(\ref{eq:sigma_basis}) is much larger than the rest. Notice that the notion of locality is defined with reference to a basis set of orbitals. Equation~(\ref{eq:sigma_basis}) allows us to introduce an approximation to the self-energy \cite{tomczak} involving a sum of a non-local but frequency independent term plus a frequency dependent but local self-energy: \begin{equation} \Sigma(\vec{k},\omega)\simeq\Sigma(\vec{k})+\sum_{\vec{R},\alpha\beta\in L}|\vec{R}\alpha\rangle\Sigma_{\alpha\vec{R,\beta}\vec{R}}(\omega)\langle\vec{R}\beta|\label{eq:sigma_ansatz} \end{equation} This ansatz was first introduced by Sadovskii \textit{et al.}~\cite{Sadovskii2005}. It is useful when the sum over orbitals in Eq.~(\ref{eq:sigma_ansatz}) runs over a small set $L$ (much smaller than the size of the basis set), for example over a single shell of $d$ or $f$ orbitals.
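A minimal numerical sketch of the decomposition in Eq.~(\ref{eq:sigma_ansatz}): for an invented, exactly separable model self-energy (all parameters below are hypothetical), the local dynamical part is recovered as the $k$-average and the static non-local part from the high-frequency limit, and the two pieces reassemble the full $\Sigma(\vec{k},i\omega)$.

```python
import numpy as np

nk, nw = 64, 32
beta = 10.0
k = 2 * np.pi * np.arange(nk) / nk                 # 1D Brillouin zone grid
wn = (2 * np.arange(nw) + 1) * np.pi / beta        # fermionic Matsubara frequencies

# invented separable test self-energy:
#   Sigma(k, iw) = Sigma_stat(k) + Sigma_loc(iw)
# with an exchange-like static piece and an atomic-like dynamical tail
sigma = 0.3 * np.cos(k)[:, None] + (-0.5j / wn)[None, :]

sigma_loc = sigma.mean(axis=0)        # local dynamical part: average over k
sigma_stat = sigma[:, -1].real        # static part: real high-frequency limit
ansatz = sigma_stat[:, None] + sigma_loc[None, :]
err = np.max(np.abs(ansatz - sigma))  # vanishes because the model is separable
```

For a general, non-separable self-energy `err` measures how well the ansatz performs; here it is zero to machine precision by construction.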
This form captures both static and dynamical correlations and is also amenable to computation using Dynamical Mean Field Methods, to be introduced in section \ref{sec:How-to-treat}. \section{How to treat correlations\label{sec:How-to-treat}} Having defined correlations as a departure of the Green's function from the results of lowest order perturbation theory around LDA (i.e. $G_{0}W_{0}$), we now review various ways to take correlations into account. One should keep in mind that different materials may require stronger momentum or frequency dependence in their self-energy, and may exhibit different degrees of locality. This section lays out several complementary approaches to treat correlations beyond $G_{0}W_{0}$. They represent different compromises between speed and accuracy, and can target different levels of locality and different correlation strengths. A schematic view of the grand challenge posed by the treatment of correlations in the solid state is presented in Fig.~\ref{fig:axes}, which explains the need to converge the calculations along multiple axes. \begin{figure} \includegraphics[viewport=15bp 512bp 330bp 760bp,clip,scale=0.7]{axes} \caption{Two complementary approaches to the treatment of correlations. One axis represents the systematic perturbative expansion in powers of an interaction (for example the screened Coulomb interaction W). The second axis sums perturbation theory to all orders, but at the local level. When locality is just a lattice site, we have the single site DMFT; improvements involve larger clusters. In addition, when one goes beyond model Hamiltonians towards the realistic treatment of solids, we need to introduce a basis set and check that the results are converged as a function of the size of the basis set.\label{fig:axes}} \end{figure} \textbf{Linearized Self Consistent Quasiparticle GW}. We begin our treatment with the GW approximation, which was introduced in the previous section.
One obvious flaw of the $G_{0}W_{0}$ method is its dependence on the LDA input. This makes the method increasingly inaccurate as the strength of the correlations increases. One way to eliminate this dependence is to introduce some level of self consistency. Hedin\cite{Hedin65} proposed a fully self-consistent $GW$ scheme, namely to use $G_{0}=G$ in Eq.~(\ref{eq:self_GW}). We can think of this as setting $V_{xc}=0$, so it is not used in intermediate steps. There are numerous advantages, however, to using a non-interacting form for $G_{0}$ in the algorithm, and in practice the spectra in fully self-consistent GW turned out to be consistently worse in solids than in the non-self-consistent approach\cite{Holm98}. Nevertheless, GW can be reasonably accurate for total energy calculations, as they can be obtained as stationary points of a functional\cite{Stan06,Kutepov09}. To improve on the spectra relative to $G_{0}W_{0}$ while retaining some level of self consistency, so as not to depend on the starting point, the self-consistent quasi-particle GW (QPGW)~\cite{mark_V} was proposed. Here one uses the ``best'' non-interacting Green's function $G_{0}$, which is defined in terms of an ``exchange and correlation potential'' $V_{xc}^{QPGW}$ chosen to reproduce the spectra of the full G as closely as possible: \begin{equation} G_{0}^{QPGW}\left(i\omega\right)^{-1}=i\omega+\mu+\frac{1}{2}\nabla^{2}-V_{H}-V_{cryst}-V_{xc}^{QPGW}. \end{equation} To determine $V_{xc}^{QPGW}$ (which once again we view as a matrix with matrix elements $\bra{r}V_{xc}^{QPGW}\ket{r'}=V_{xc}^{QPGW}(r)\delta(r-r')$), it was proposed to approximate the spectra and the eigenvectors of G by those of $G_{0}^{QPGW}$, by solving a set of non-linear equations on the real axis\cite{mark_V}. An alternative approach that works on the imaginary axis is to linearize the GW self-energy at each iteration. Namely, after the evaluation of the self-energy in Eq.
(\ref{eq:self_GW}), this quantity is Taylor expanded around zero frequency (hence the name ``linearized''): \[ \Sigma_{lin}(\vec{k},i\omega)=i\omega(1-Z(\vec{k})^{-1})+\Sigma(\vec{k},0) \] and $G_{0}^{QPGW}\left(i\omega\right)$ is obtained by solving the usual Dyson equation with the linearized self-energy, and multiplying the result by the quasiparticle residue, $Z$, to obtain a properly normalized quasiparticle Green's function: \begin{align} & G_{0}^{\textrm{QPGW}}=\label{eq:linearized_QSGW}\\ & \,\sqrt{Z(\vec{k})}[i\omega+\mu+\frac{1}{2}\nabla^{2}-V_{H}-V_{cryst}-\Sigma_{lin}]^{-1}\sqrt{Z(\vec{k})}\nonumber \end{align} Note that this defines the exchange correlation potential of the self-consistent QPGW method. This method, the linearized self-consistent quasiparticle GW, was introduced in Ref.~\onlinecite{Kutepov12}, and an open source code implementing this type of calculation in the linearized augmented plane wave (LAPW) basis set is available in Ref.~\onlinecite{Kutepov17}. The GW or RPA method captures an important physical effect. Electrons are charged objects which interact via the long range Coulomb interactions. Quasiparticles, on the other hand, interact through the screened Coulomb interaction. They are composed of electrons surrounded by screening charges, which reduce the strength and the range of their interaction. For this reason, in many model Hamiltonians describing metals, only the short range repulsion is kept. On the other hand, it is well known that the RPA fails in describing the pair correlation function at short distances. One can say that the GW method captures the long range screening effects of the Coulomb interactions and produces a self-energy which is non-local in space, but with a weak frequency dependence (indeed the self-energy is linear in a broad range of energies).
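The linearization step above can be sketched numerically. Given a model Matsubara self-energy (invented here, with a known residue $Z$ and a small cubic correction standing in for higher-order frequency dependence), the quasiparticle weight is estimated from the slope of $\mathrm{Im}\,\Sigma(i\omega_{n})$ at the first Matsubara frequency, which is all that enters $\Sigma_{lin}$.

```python
import numpy as np

beta = 50.0
wn = (2 * np.arange(8) + 1) * np.pi / beta   # lowest Matsubara frequencies
z_true, s0 = 0.4, 0.2                        # invented target residue and static shift

# model self-energy: linear-in-frequency part plus a small cubic correction
sigma = 1j * wn * (1.0 - 1.0 / z_true) + s0 - 0.05j * wn**3

slope = sigma[0].imag / wn[0]                # slope at the lowest Matsubara point
z_est = 1.0 / (1.0 - slope)                  # quasiparticle residue estimate
sigma_lin = 1j * wn * (1.0 - 1.0 / z_est) + sigma[0].real   # linearized self-energy
```

At low temperature the first Matsubara frequency is small, so the cubic term barely contaminates the slope and the extracted residue is close to the input value.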
It turns out that this method is not able to capture the effects of the short range part of the Coulomb interactions, which in turn induces a self-energy with strong frequency dependence (i.e. strong non-locality in time) but which is much more local in space. \begin{figure} \includegraphics[width=0.4\textwidth]{DMFT_mapping} \caption{\label{fig:dmft_mapping} Dynamical Mean Field Theory (DMFT) maps (or \textbf{truncates}) a lattice model to a single site embedded in a medium (impurity model) with a hybridization strength which is determined self consistently. Adapted from Ref.~\onlinecite{physics_today}. } \end{figure} \textbf{Dynamical Mean Field Theory (DMFT)}. To capture dynamical local correlations one uses Dynamical Mean Field Theory\cite{georges_kotliar_PRB}, which is the natural extension of the Weiss mean field theory of spin systems to quantum mechanical model Hamiltonians. Dynamical Mean Field Theory becomes exact in the limit of infinite dimensions, introduced by Metzner and Vollhardt \cite{dieter_walter}. With suitable extensions it plays an important role in realistically describing strongly correlated electron materials. Here we describe the main intuitive DMFT ideas as a quantum embedding, starting from the example of a one-band Hubbard model (describing $s$ electrons), in which the relevant atomic configurations are $\ket{0},\ket{\uparrow},\ket{\downarrow},\ket{\uparrow\downarrow}$, as described in Fig.~\ref{fig:dmft_mapping}. It involves two steps. The first step focuses on a single lattice site, and describes the rest of the sites by a medium with which an electron at this site hybridizes. This \textbf{truncation\index{truncation}} to a single site problem is common to all mean field theories. In the Weiss mean field theory one selects a spin at a given site, and replaces the rest of the lattice by an effective magnetic field or Weiss field.
In the dynamical mean field theory, the local Hamiltonian at a given site is kept, and the kinetic energy is replaced by a hybridization term with a bath of non-interacting electrons, which allows the atom at the selected site to change its configuration. This is depicted in Fig.~\ref{fig:dmft_mapping}, where we apply the method to the one-band Hubbard model. The system consists of one band of $s$ electrons. The Fourier transform of the hopping integral is given by $t(\overrightarrow{k})$. It is used in the second step, which involves the reconstruction of lattice observables by \textbf{embedding\index{embedding}} the local impurity self-energy into a correlation function of the lattice, \[ G_{latt}(\vec{k},i\omega)^{-1}=i\omega+\mu-t(\vec{k})-\Sigma_{imp}(i\omega). \] Here $\Sigma_{imp}(i\omega)$ is viewed as a functional of the Weiss field. The requirement that $\sum_{k}G_{latt}=G_{loc}$ determines the Weiss field. Table~\ref{tab:dmft_wm} summarizes the analogies between Weiss mean field theory and dynamical mean field theory. \begin{table}[h] \begin{tabular}{l|l} \hline Weiss Mean Field Theory & Dynamical Mean Field Theory\tabularnewline[2mm] \hline Ising Model $\rightarrow$ Single Spin & Hubbard Model $\rightarrow$ \tabularnewline in effective Weiss Field & Impurity in effective bath\tabularnewline \hline Weiss field: $h_{eff}$ & effective bath: $\Delta(\imath\omega_{n})$\tabularnewline \hline Local observable: $m=<s_{i}>$ & Local Observable: $G_{loc}(\imath\omega_{n})$\tabularnewline \hline Self-consistent condition: & Self-consistent condition:\tabularnewline $\tanh\left(\beta\sum_{j}J_{ij}s_{j}\right)=m$ & $i\omega_{n}-E_{imp}-\Delta\left(i\omega_{n}\right)$\tabularnewline & ~~$-\Sigma\left(i\omega_{n}\right)=\left[\sum_{\vec{k}}G_{\vec{k}}\left(i\omega_{n}\right)\right]^{-1}$\tabularnewline \hline \end{tabular} \vspace{3bp} \caption{\label{tab:dmft_wm}Corresponding quantities in Dynamical MFT (right) and Weiss or static MFT in statistical mechanics (left).
} \end{table} The DMFT mapping of a lattice model into an impurity model gives a local picture of the solid, which can then be used to generate lattice quantities such as the electron Green's function and the magnetic susceptibility by computing the corresponding irreducible quantities. This is illustrated in Fig.~\ref{fig:dmft_mapping2}. \begin{figure} \includegraphics[width=0.48\textwidth]{impurity_model1} \caption{\label{fig:dmft_mapping2} The DMFT impurity model is used to generate irreducible quantities such as self-energies and one particle vertices. These are then \textbf{embedded} in the lattice model to generate momentum dependent lattice quantities such as spectral functions, or spin susceptibilities. Adapted from \onlinecite{DMFT_at_25}.} \end{figure} The self-consistent loop of DMFT is summarized in the following iterative cycle \medskip{} \begin{tabular}{|c|c|c|c|c|} \cline{1-1} \cline{3-3} \cline{5-5} & & & & \tabularnewline {\scriptsize{}$E_{imp},$ $\Delta\left(i\omega_{n}\right)$} & {\scriptsize{}$\rightarrow$} & {\scriptsize{}Impurity Solver} & {\scriptsize{}$\rightarrow$} & {\scriptsize{}$\Sigma_{imp}\left(i\omega_{n}\right),$ $G_{loc}\left(i\omega_{n}\right)$}\tabularnewline & & & & \tabularnewline \cline{1-1} \cline{3-3} \cline{5-5} \multicolumn{1}{c}{{\scriptsize{}$\uparrow$}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{{\scriptsize{}$\downarrow$}}\tabularnewline \cline{1-1} \cline{3-3} \cline{5-5} & & {\scriptsize{}$G_{\vec{k}}\left(i\omega_{n}\right)=$} & & \tabularnewline {\scriptsize{}Truncation} & {\scriptsize{}$\leftarrow$} & {\scriptsize{}$\frac{1}{i\omega_{n}+\mu-t\left(\vec{k}\right)-\Sigma\left(i\omega_{n}\right)}$} & {\scriptsize{}$\leftarrow$} & {\scriptsize{}Embedding}\tabularnewline & & & & \tabularnewline \cline{1-1} \cline{3-3} \cline{5-5} \end{tabular} \medskip{} From the simplest model, the one-band Hubbard model, one can proceed to more realistic
descriptions of correlated materials by replacing $t(\vec{k})$ by a tight-binding model Hamiltonian matrix. The DMFT equations can be derived from a functional \begin{align} & \Gamma_{DMFT_{model}}\left[G_{\alpha\beta,\vec{R}},\Sigma_{\alpha\beta,\vec{R}}\right]=\label{eq:DMFT_model}\\ & -\mathrm{Tr}\,\ln\left[i\omega_{n}-H(\vec{k})-\Sigma_{\alpha\beta\vec{R}}(i\omega_{n})\right]\nonumber \\ & -\sum_{n}Tr\left[\Sigma\left(i\omega_{n}\right)G\left(i\omega_{n}\right)\right]+\sum_{\vec{R}}\Phi\left[G_{\alpha\beta,\vec{R}},U\right]\nonumber \end{align} where $\Phi\left[G_{\alpha\beta,\vec{R}},U\right]$ is the Baym-Kadanoff functional - the sum of all two particle irreducible diagrams in terms of the full Green's function $G$ and the Hubbard interaction $U$, which denotes a rank-four tensor $U_{\alpha\beta\gamma\delta}$. It can also be evaluated from the Anderson impurity model expressed in terms of the full local Green's function of the impurity $G$. The impurity model is the engine of a DMFT calculation. Multiple approaches have been used for its solution, and full reviews have been written on the topic. The introduction of the continuous time Monte Carlo method for impurity models~\cite{gull} has provided numerically exact solutions, reducing the computational cost relative to the Hirsch-Fye algorithm that was used in earlier DMFT studies. \textbf{DFT+DMFT method.} This is the next step towards a more realistic description of solids. It was introduced in Refs.~\onlinecite{anisimov1997,lichtenstein}. In these early implementations, it consisted of replacing the Hamiltonian $H(\vec{k})$ by the Kohn-Sham matrix in Eq.~(\ref{eq:DMFT_model}) with a correction to subtract the correlation energy that is contained in the Kohn-Sham Hamiltonian (double counting correction). The original DFT calculations were carried out with an LDA exchange and correlation potential, but they could be done with GGA and other functionals.
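The model DMFT self-consistency cycle discussed above can be condensed into a few lines for the half-filled one-band Hubbard model on the Bethe lattice, where the embedding step reduces to the hybridization $\Delta=t^{2}G$, and where we use a second-order (IPT-style) perturbative approximation as a toy impurity solver. Everything here (grids, parameters, Fourier-transform shortcuts, variable names) is our own illustrative choice, not a production implementation.

```python
import numpy as np

beta, t, U = 16.0, 0.5, 1.0            # inverse temperature, hopping, Hubbard U
nw, ntau = 256, 512                    # positive Matsubara frequencies, tau points
wn = (2 * np.arange(nw) + 1) * np.pi / beta
tau = np.linspace(0.0, beta, ntau)

def to_tau(g_iw):
    """G(iw_n) -> G(tau); the 1/(iw_n) tail is transformed analytically to -1/2."""
    rest = g_iw - 1.0 / (1j * wn)
    phases = np.exp(-1j * np.outer(tau, wn))
    return (2.0 / beta) * (phases * rest).real.sum(axis=1) - 0.5

def to_iw(s_tau):
    """Sigma(tau) -> Sigma(iw_n) by trapezoidal quadrature."""
    w = np.full(ntau, tau[1] - tau[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    return np.exp(1j * np.outer(wn, tau)) @ (w * s_tau)

g = 1.0 / (1j * wn)                    # initial guess: atomic Green's function
for it in range(200):
    g0 = 1.0 / (1j * wn - t**2 * g)    # Weiss field: Bethe-lattice embedding Delta = t^2 G
    sigma = to_iw(U**2 * to_tau(g0)**3)  # 2nd-order self-energy at particle-hole symmetry
    g_new = 1.0 / (1j * wn - t**2 * g - sigma)  # impurity Dyson equation
    if np.max(np.abs(g_new - g)) < 1e-8:
        break
    g = 0.5 * g + 0.5 * g_new          # linear mixing for stability

Z = 1.0 / (1.0 - sigma[0].imag / wn[0])  # quasiparticle weight from the first frequency
```

At this weak coupling the loop converges to a causal metallic solution ($\mathrm{Im}\,G(i\omega_{0})<0$, $\mathrm{Im}\,\Sigma(i\omega_{0})<0$) with a quasiparticle weight between zero and one; a real calculation would replace the perturbative solver by continuous time Monte Carlo.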
Furthermore, the exchange and correlation potential in the Dyson equation for the Green's function can be replaced by another static mean field theory like hybrid DFT or QPGW, but in the following we will use the terminology LDA+DMFT. Starting from the Anderson model Hamiltonian point of view, one divides the orbitals into two sets. The first set contains the large majority of the electrons, which are properly described by the LDA Kohn-Sham matrix. The second set contains the more localized orbitals ($d$-electrons in transition metals and $f$-electrons in rare earths and actinides) which require the addition of DMFT corrections. A subtraction (called the double counting correction, $E_{dc}$) takes into account that the Hartree and exchange correlation effects have been included in these orbitals twice, since they were treated both in LDA and in DMFT. The early LDA+DMFT calculations proceeded in two steps (one-shot LDA+DMFT). First an LDA calculation was performed for a given material. Then a model Hamiltonian was constructed from the resulting Kohn-Sham matrix corrected by $E_{dc}$, written in a localized basis set. The values of a Coulomb matrix for the correlated orbitals were estimated or used as fitting parameters. Finally DMFT calculations were performed to improve on the one particle Green's function of the solid. In reality, the charge is also corrected by the DMFT self-energy, which in turn changes the exchange and correlation potential away from its LDA value. Therefore charge self-consistent LDA+DMFT is needed. This was first implemented in Refs.~\onlinecite{savrasov2004,Savrasov_2001}. For this purpose it is useful to notice that the LDA+DMFT equations can be derived as stationary points of an LDA+DMFT functional, which can be viewed as a functional of the density and the local Green's function of the correlated orbitals. This is a spectral density functional\index{spectral density functional theory}.
Evaluating the functional at the stationary point gives the free energy of the solid, and the stationary Green's function gives the spectral function of the material. We can arrive at the DFT+DMFT functional by performing the substitution $-\frac{1}{2}\nabla^{2}+V_{KS}(\vec{r})$ for $H(\vec{k})$ in the model DMFT functional Eq.~(\ref{eq:DMFT_model}) and then adding terms arising from the density functional theory, namely: \begin{align} & \Gamma_{DFT+DMFT}\left[\rho\left(\vec{r}\right),G_{\alpha\beta,\vec{R}},V_{KS}\left(\vec{r}\right),\Sigma_{\alpha\beta,\vec{R}}\right]\nonumber \\ & =\Gamma_{DMFT_{model}}[H(\vec{k})\rightarrow-\frac{1}{2}\nabla^{2}+V_{KS}(\vec{r})]\nonumber \\ & +\Gamma_{2}[V_{KS}\left(\vec{r}\right),\rho\left(\vec{r}\right)]-\Phi_{DC} \end{align} where \begin{align} \Gamma_{2}[V_{KS}\left(\vec{r}\right),\rho\left(\vec{r}\right)] & =-\int V_{KS}\left(\vec{r}\right)\rho\left(\vec{r}\right)d^{3}r\nonumber \\ & +\int V_{ext}\left(\vec{r}\right)\rho\left(\vec{r}\right)d^{3}r\nonumber \\ & +\frac{1}{2}\int\frac{\rho\left(\vec{r}\right)\rho\left(\vec{r}'\right)}{|\vec{r}-\vec{r}'|}d^{3}rd^{3}r'+E_{xc}^{DFT}\left[\rho\right] \end{align} We then arrive at the DFT+DMFT functional which we write in full below.
\begin{widetext} \begin{eqnarray} & & \Gamma_{DFT+DMFT}\left[\rho\left(\vec{r}\right),G_{\alpha\beta,\vec{R}},V_{KS}\left(\vec{r}\right),\Sigma_{\alpha\beta,\vec{R}}\right]=\nonumber \\ & & -\mathrm{Tr}\ln\left[i\omega_{n}+\mu+\frac{\nabla^{2}}{2}-V_{KS}-\sum_{R,\alpha\beta\in L}\chi_{\alpha\vec{R}}^{*}\left(\vec{r}\right)\Sigma_{\alpha\beta\vec{R}}(i\omega_{n})\chi_{\beta\vec{R}}\left(\vec{r}'\right)\right]\nonumber \\ & & -\int V_{KS}\left(\vec{r}\right)\rho\left(\vec{r}\right)d^{3}r-\sum_{n}\mathrm{Tr}\left[\Sigma\left(i\omega_{n}\right)G\left(i\omega_{n}\right)\right]+\int V_{cryst}\left(\vec{r}\right)\rho\left(\vec{r}\right)d^{3}r\nonumber \\ & & +\frac{1}{2}\int\frac{\rho\left(\vec{r}\right)\rho\left(\vec{r}'\right)}{|\vec{r}-\vec{r}'|}d^{3}rd^{3}r'+E_{xc}^{DFT}\left[\rho\right]+\sum_{\vec{R}}\Phi\left[G_{\alpha\beta,\vec{R}},U\right]-\Phi_{DC}.\label{eq:LDA+DMFT} \end{eqnarray} \end{widetext} $\Phi$ is the sum of two-particle irreducible diagrams written in terms of $G$ and $U$. It was written down first in Ref.~\onlinecite{savrasov2004}, building on the earlier work of Chitra and Kotliar ~\cite{chitra_2001,Chitra2000}. It is essential for total energy calculations which require the implementation of charge self-consistency in the LDA+DMFT method. The first implementation of charge self-consistent LDA+DMFT was carried out in a full potential linear muffin-tin orbital (FP-LMTO) basis set~\cite{savrasov2004}. It was used to compute total energy and phonons of $\delta$-plutonium\cite{Savrasov_2001,dai_phonons}. 
Alternatively, one can include the hybridization function $\Delta$ or the Weiss field $\mathcal{G}$ as an independent variable in the functional in order to see explicitly the free energy of the Anderson Impurity Model, $\mathcal{G}_{\alpha\beta,\overrightarrow{R}}^{-1}=G_{atom,\alpha\beta,\overrightarrow{R}}^{-1}-\Delta_{\alpha\beta,\overrightarrow{R}}$: \[ F_{imp}\left[{\cal {G}}_{\alpha\beta,\vec{R}}^{-1}\right]=-\ln\int D[c^{\dagger}c]e^{-S_{imp}[c^{\dagger},c]} \] with \begin{align*} S_{imp}[{\cal {G}}_{\alpha\beta,\vec{R}}^{-1}] & =-\sum_{\alpha\beta}\int d\tau d\tau^{\prime}c_{\alpha}^{\dagger}(\tau){\cal {G}}_{\alpha\beta,\vec{R}}^{-1}(\tau,\tau^{\prime})c_{\beta}(\tau^{\prime})\\ & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+U_{\alpha\beta\gamma\delta}\int d\tau c_{\alpha}^{\dagger}(\tau)c_{\beta}^{\dagger}(\tau)c_{\delta}(\tau)c_{\gamma}(\tau) \end{align*} So that: \begin{widetext} \begin{eqnarray} & & \Gamma_{DFT+DMFT}\left[\rho\left(\vec{r}\right),G_{\alpha\beta,\vec{R}},V_{KS}\left(\vec{r}\right),\Sigma_{\alpha\beta,\vec{R}},{\cal {G}}_{\alpha\beta,\vec{R}}\right]=\nonumber \\ & & -\mathrm{Tr}\ln\left[i\omega_{n}+\mu+\frac{\nabla^{2}}{2}-V_{KS}-\sum_{R,\alpha\beta\in L}\chi_{\alpha\vec{R}}^{*}\left(\vec{r}\right)\Sigma_{\alpha\beta\vec{R}}(i\omega_{n})\chi_{\beta\vec{R}}\left(\vec{r}'\right)\right]\nonumber \\ & & -\int V_{KS}\left(\vec{r}\right)\rho\left(\vec{r}\right)d^{3}r-\sum_{n}\mathrm{Tr}\left[({\cal {G}}^{-1}-\Sigma\left(i\omega_{n}\right)-G^{-1})G\left(i\omega_{n}\right)\right]+\int V_{cryst}\left(\vec{r}\right)\rho\left(\vec{r}\right)d^{3}r\nonumber \\ & & +\frac{1}{2}\int\frac{\rho\left(\vec{r}\right)\rho\left(\vec{r}'\right)}{|\vec{r}-\vec{r}'|}d^{3}rd^{3}r'+E_{xc}^{DFT}\left[\rho\right]+\sum_{\vec{R}}F_{imp}\left[{\cal {G}}_{\alpha\beta,\vec{R}}^{-1}\right]-\mathrm{Tr}\ln[G_{\alpha\beta,\vec{R}}]-\Phi_{DC}.
\end{eqnarray} \end{widetext} The form of the LDA+DMFT functional makes it clear that the method is independent of the basis set used to implement the electronic structure calculation, provided that the basis is complete enough. On the other hand, it is clearly dependent on the parameter $U$ chosen, on the form of the double counting correction and the choice of the projector (i.e., the orbitals ${\chi_{\alpha}}(\vec{r})$ with $\alpha\in L$ that enter this definition) and the exchange correlation functional $E_{xc}^{DFT}$. A projector of the form $P(r,r')=\sum_{\alpha\beta\in L}\chi_{\alpha\mathbf{\overrightarrow{R}}}^{*}(\overrightarrow{r})\chi_{\beta\mathbf{\overrightarrow{R}}}(\overrightarrow{r}')$ was used to define a truncation from $G$ to $G_{loc}$. The inverse of $P$ is the embedding operator $E$ defined by $P\cdot E=I_{L}$ where $I_{L}$ is the identity operator in the space spanned by the correlated orbitals. If one restricts $E\cdot P$ to the space $L$, one also obtains the identity operator in that space. $E$ is used to define an embedding of the self-energy $\Sigma(r,r')=\sum_{\alpha\beta}E^{\alpha,\beta}(r,r'){\Sigma^{loc}}_{\alpha,\beta}$. However, more general projectors can be considered as long as causality of the DMFT equations is satisfied. Ideas for choosing an optimal projector for LDA+DMFT based on orbitals were presented in Ref.~\onlinecite{indranil}. Choosing suitable projectors (and correspondingly a suitable value of the $U$ matrix and a proper double counting correction) is crucial for the accuracy of an LDA+DMFT calculation as demonstrated recently in the context of the hydrogen molecule~\cite{H2}. DFT+DMFT is now a widely used method. It has been successfully used across the periodic table, and has been implemented in numerous codes~\cite{Amadon08,Amadon12,Park14,Parcollet15,amulet,LISA,LmtART,haule_DMFT,Haule-dmft,Granas12}. Still there is ample room for advances in implementation, and on providing a firm foundation of the method. 
One can view the DFT+DMFT functional written above as an approximation to an exact DFT+DMFT functional, which would yield the exact density and spectra of the solid~\cite{Chitra2000}. This viewpoint has been used recently to provide an expression for the double counting correction $\Phi_{DC}$~\cite{Haule2015}. An alternative perspective goes back to a fully diagrammatic many body theory of the solid, and examines how DFT+DMFT would fit in that framework as an approximation. We turn to this formulation next. \textbf{Fully Diagrammatic Methods:} The free energy of the solid can also be expressed as a functional of $G\left(x,x'\right)$ and $W\left(x,x'\right)$ by means of a Legendre transformation, with the result given in Refs.~\onlinecite{chitra_2001,Almbladh99}: \begin{eqnarray} \Gamma\left[G,W,\Sigma,\Pi\right] & = & -\mathrm{Tr}\ln\left[G_{0}^{-1}-\Sigma\right]-\mathrm{Tr}\left[\Sigma G\right]\nonumber \\ & + & \frac{1}{2}\mathrm{Tr}\ln\left[v_{Coul}^{-1}-\Pi\right]-\frac{1}{2}\mathrm{Tr}\left[\Pi W\right]\nonumber \\ & + & E_{H}+\Phi\left[G,W\right],\label{eq:functional_GW} \end{eqnarray} where $E_{H}=\frac{1}{2}\int\frac{\rho(\vec{r})\rho(\vec{r}')}{\left|\vec{r}-\vec{r}'\right|}d^{3}rd^{3}r'$ is the Hartree energy and $\Phi$ is the sum of all two-particle irreducible diagrams, i.e., those which cannot be divided into two parts by cutting two Green's function lines (which can be either $G$'s or $W$'s). This reformulation is exact and leads to the exact Hedin's equations, shown in Fig. \ref{fig:Hedin's-Equations}. To convert this general method into a tool of practical use, strong approximations have to be introduced. \begin{figure} \includegraphics[scale=0.37]{fd} \caption{Hedin's equations\label{fig:Hedin's-Equations} give an exact representation of the bosonic and fermionic correlation functions as an expansion in $G$ and $W$. They can be obtained by setting the functional derivatives of Eq. (\ref{eq:functional_GW}) to 0. 
$\Phi(G,W)$ (first line) is the set of 2-PI skeleton diagrams (in $G$ and $W$), where by convention the symmetry weights are omitted. The derivative with respect to $G$ (line 2) shows how the self-energy $\Sigma$ is defined in terms of a 3-legged vertex $\Gamma$. The derivative with respect to $W$ (line 3) equals the polarization $\Pi.$ The bottom line shows the definition of the vertex $\Gamma.$} \end{figure} The lowest order graphs of Eq. (\ref{eq:functional_GW}), shown in Fig.~\ref{fig:GW_f}, reproduce the self-consistent GW approximation: taking functional derivatives of the low order functional with respect to its arguments produces the same equations as the GW approximation. \begin{figure} \includegraphics[width=0.5\textwidth]{GW} \caption{\label{fig:GW_f} Lowest order graphs in the $\Phi$-functional of Eq.~(\ref{eq:functional_GW}). They give rise to the fully self-consistent GW approximation, as the saddle point equations. Note that the first term here is the Hartree energy. Adapted from \onlinecite{DMFT_at_25}.} \end{figure} To summarize the discussion so far, we recall that for semiconductors, non-local (but weakly frequency dependent) correlation effects are needed to increase the gap from its LDA value. This admixture of exchange can be done within the GW method, or using hybrid density functionals. It reflects the importance of exchange beyond the LDA, which is due to the long-range but static part of the Coulomb interaction. These are the \textbf{static correlation} effects. It has recently been shown that this type of correlation effect is important in materials near a metal-to-insulator transition such as BaBiO$_{3}$ or HfClN~\cite{yin_kutepov} and can have a dramatic effect in enhancing the electron-phonon interaction relative to its LDA estimated value. In these systems, a strongly $k$-dependent self-energy, $\Sigma(k)$, is much more important than frequency dependence, and here GW methods work well. 
On the other hand, frequency dependence, and its implied non-locality in time, is crucial to capture Mott or Hund's physics. This physics tends to be local in space and can be captured by DMFT. Static mean field theories, such as the LDA, do not capture this non-locality in time, and therefore fail to describe Mott or Hund's phenomena. \textcolor{black}{DFT+DMFT can treat strong frequency dependence, but has k-dependence only as inherited from the k-dependence of the DFT exchange and correlation potential, the k-dependence of the embedding, and the double counting shift. } \textcolor{black}{In real materials both effects are present to some degree, thus motivating physically the ansatz, Eq.~(\ref{eq:sigma_ansatz}). Some examples discussed recently are Ce$_{2}$O$_{3}$ (using hybrid DFT+DMFT) in Ref.~\onlinecite{jacob} and the iron pnictides and chalcogenides in Ref.~\onlinecite{tomczak}.} We now describe a route proposed by Chitra~\cite{chitra_2001,Chitra2000} to embed DMFT into a many-body treatment of electronic structure within a purely diagrammatic approach formulated in the continuum. If one selects a projector, which allows us to define a local Green's function, it was suggested in Refs.~\onlinecite{biermann,Chitra2000,chitra_2001} that one can perform a local approximation and keep only the local higher order graphs in a selected orbital: \begin{align*} & \Phi\left[G,W\right]\simeq\\ & \,\,\Phi_{EDMFT}\left[G_{loc},W_{loc},G_{nonlocal}=0,W_{nonlocal}=0\right]+\\ & \,\,\Phi_{GW}-\Phi_{GW}\left[G_{loc},W_{loc},G_{nonlocal}=0,W_{nonlocal}=0\right] \end{align*} Since the lowest-order graph is already contained in the GW approximation, the local correction starts from the second-order graphs and higher. This $\Phi_{GW+DMFT}$ functional is shown in Fig. \ref{fig:gw-dmft}. \begin{figure} \includegraphics[scale=0.75]{GW-diagrams} \caption{Comparison of the functionals for the methods described in the text. 
The Hartree diagram was dropped since it appears in all methods\cite{Kotliar2006a}.\label{fig:gw-dmft}} \end{figure} These ideas were formulated and fully implemented in the context of a simple extended Hubbard model~\cite{ping_sun1,ping_sun2}. An open problem in this area, explored in Ref.~\onlinecite{ping_sun2}, is the level of self-consistency that should be imposed. This important issue is already present in the implementation of the GW method, and the work of Ref.~\onlinecite{ping_sun2} should be revisited using the lessons from the QPGW method~\cite{tomczak}. There have been a large number of works exploring GW+DMFT and related extensions and combinations, and we refer the reader to recent reviews for the most recent references~\cite{RevModPhys90025003}. Recently, we proposed the self-consistent quasiparticle GW+DMFT~\cite{Choi2016,Tomczak2012c} as a theory that contains both the most successful form of the GW approximation and DMFT as limiting cases. Further understanding of this method requires the systematic treatment of vertex corrections, an approach which is now vigorously pursued. \textcolor{black}{The }\textbf{\textcolor{black}{LDA+U}}\textcolor{black}{{} method was introduced by Anisimov }\textcolor{black}{\emph{et al.}}~\textcolor{black}{\cite{Anisimov1991a}. It was made rotationally invariant in Refs.}~\textcolor{black}{\onlinecite{Dudarev1998,Liechtenstein95}. 
One can view this as a special case of LDA+DMFT, where in the functional (Eq.}~\textcolor{black}{(\ref{eq:LDA+DMFT})) $\Phi$ (the sum of graphs) is restricted to the Hartree-Fock graphs, $\Phi\rightarrow\Phi_{HF}$, and $\Sigma(\imath\omega_{n})$ is replaced by a constant matrix $\lambda$.} Then the LDA+U functional $\Gamma_{LDA+U}$ can be written as follows: \begin{align} & \Gamma_{LDA+U}\left[\rho\left(\vec{r}\right),n_{\alpha\beta},V_{KS}(\vec{r}),\lambda_{\alpha\beta}\right]=\nonumber \\ & \,\,-\mathrm{Tr}\ln\left(i\omega+\mu+\frac{\nabla^{2}}{2}-V_{KS}-\sum_{\alpha\beta\in L}\chi_{\alpha}^{*}\left(\vec{r}\right)\lambda_{\alpha\beta}\chi_{\beta}\left(\vec{r}'\right)\right)\nonumber \\ & \,\,-\int V_{KS}\left(\vec{r}\right)\rho\left(\vec{r}\right)d^{3}r-\lambda_{\alpha\beta}n_{\alpha\beta}+\int V_{cryst}\left(\vec{r}\right)\rho\left(\vec{r}\right)d^{3}r\nonumber \\ & \,\,+E_{H}\left[\rho\left(\vec{r}\right)\right]+E_{xc}^{LDA}\left[\rho\left(\vec{r}\right)\right]+\Phi_{HF}\left[n_{\alpha\beta}\right]-\Phi_{DC}\left[n_{\alpha\beta}\right],\label{eq:LDAU-functional} \end{align} where $n_{\alpha\beta}$ is the occupancy matrix. $\Phi_{HF}$ in Eq.~(\ref{eq:LDAU-functional}) is the Hartree-Fock approximation \begin{equation} \Phi_{HF}\left[n_{\alpha\beta}\right]=\frac{1}{2}\sum_{\alpha\beta\gamma\delta\in L}\left(U_{\alpha\gamma\delta\beta}-U_{\alpha\gamma\beta\delta}\right)n_{\alpha\beta}n_{\gamma\delta},\label{eq:HF-functional} \end{equation} where the indices $\alpha,\beta,\gamma,\delta$\textcolor{black}{{} refer to the fixed angular momentum $L$ of the correlated orbitals, and the matrix $U_{\alpha\beta\gamma\delta}$ is the on-site Coulomb interaction matrix element.} \textcolor{black}{As in DMFT, the on-site Coulomb interaction is already approximately accounted for within the LDA, so it must be subtracted; hence the double-counting term $\Phi_{DC}$. 
One of the popular choices is the so-called ``fully-localized limit'' (FLL), whose form is}~\cite{Anisimov97} \[ \Phi_{DC}^{FLL}\left[n_{\alpha\beta}\right]=\frac{1}{2}\bar{U}\bar{n}\left(\bar{n}\!-\!1\right)-\frac{1}{2}\bar{J}\left[\bar{n}^{\uparrow}\!\left(\bar{n}^{\uparrow}\!-\!1\right)+\bar{n}^{\downarrow}\!\left(\bar{n}^{\downarrow}\!-\!1\right)\right], \] where \begin{align*} \bar{n}^{\sigma}\! & =\!{\displaystyle \sum_{\alpha\in L}}\!n_{\alpha\alpha}^{\sigma},\\ \bar{n}\! & =\!\bar{n}^{\uparrow}\!+\!\bar{n}^{\downarrow},\\ \bar{U} & =\frac{1}{\left(2L+1\right)^{2}}\sum_{\alpha\beta\in L}U_{\alpha\beta\beta\alpha},\\ \bar{J} & =\bar{U}-\frac{1}{2L(2L+1)}\sum_{\alpha\beta\in L}\left(\!U_{\alpha\beta\beta\alpha}\!-\!U_{\alpha\beta\alpha\beta}\!\right). \end{align*} The constant matrix $\lambda_{\alpha\beta}$ is determined by the saddle point equations $\frac{\delta\Gamma_{LDA+U}}{\delta n_{\alpha\beta}}=0$: \begin{align} \lambda_{\alpha\beta} & =\frac{\delta\Phi_{HF}}{\delta n_{\alpha\beta}}-\frac{\delta\Phi_{DC}}{\delta n_{\alpha\beta}}\nonumber \\ & =\sum_{\gamma\delta}\left(U_{\alpha\gamma\delta\beta}-U_{\alpha\gamma\beta\delta}\right)n_{\gamma\delta}\nonumber \\ & \,\,\,-\delta_{\alpha\beta}\left[\bar{U}\left(\bar{n}-\frac{1}{2}\right)-\bar{J}\left(\bar{n}^{\sigma}-\frac{1}{2}\right)\right]. \end{align} \textcolor{black}{The FLL double-counting term tends to work quite well for strongly correlated materials with very localized orbitals. However, for weakly correlated materials, the FLL scheme excessively stabilizes the occupied states and leads to quite unphysical results such as an enhancement of the Stoner factor}~\cite{Petukhov03}\textcolor{black}{. To resolve these problems, the ``around mean-field'' (AMF) scheme was introduced in Ref.}~\onlinecite{Czyzyk94} and further developed in Ref.~\onlinecite{Petukhov03}. \textcolor{black}{One can say that the LDA+U method works when correlations are static and at the same time local. 
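As a concrete numerical check of the averaged parameters $\bar{U}$ and $\bar{J}$ and of the FLL double-counting potential, one can evaluate them for an idealized interaction matrix with a single direct element $U_{0}$ and a single exchange element $J_{0}$ (the parameter values and the occupancy below are illustrative, not taken from any material):

```python
import numpy as np

L = 2                       # d shell
M = 2 * L + 1               # number of correlated orbitals
U0, J0 = 4.0, 0.7           # illustrative Hubbard and Hund parameters (eV)

# Idealized matrix elements: direct U_{alpha beta beta alpha} = U0 for all pairs,
# exchange U_{alpha beta alpha beta} = J0 for alpha != beta (and U0 on the diagonal)
U_direct = np.full((M, M), U0)
U_exchange = np.full((M, M), J0)
np.fill_diagonal(U_exchange, U0)

U_bar = U_direct.sum() / M**2
J_bar = U_bar - (U_direct - U_exchange).sum() / (2 * L * M)
# for this idealized matrix the averages recover U0 and J0 (up to rounding)

# FLL double-counting potential for a paramagnetic shell with n_bar electrons,
# i.e. the delta_{alpha beta} term of the saddle point equation above
n_bar = 6.0
v_dc = U_bar * (n_bar - 0.5) - J_bar * (n_bar / 2 - 0.5)
```

For realistic Slater-parametrized $U_{\alpha\beta\gamma\delta}$ the same two sums define $\bar{U}$ and $\bar{J}$, but they no longer collapse to two independent constants orbital by orbital.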
This is the case, for example, when magnetic or orbital order is very well developed. For a review of the LDA+U method see Ref.}~\textcolor{black}{\onlinecite{Himmetoglu2014}.} \textbf{Slave-Boson Method.} The physics of strongly correlated electron materials requires taking into account, on the same footing, localized quasi-atomic degrees of freedom, which are important at high energies, together with strongly renormalized itinerant quasiparticles, which emerge at low energy. DMFT captures this physics via a sophisticated quantum embedding that requires the solution of a full Anderson impurity model in a self consistent bath. A \textcolor{black}{less accurate but computationally faster method to solve} the strong correlation problem, which precedes DMFT, is the Gutzwiller method, which has been shown to be equivalent to the slave boson method in the saddle point approximation. This approach starts with an exact quantum many body problem, but one enlarges the Hilbert space so as to introduce explicitly operators which describe the different atomic multiplet configurations, and additional fermionic degrees of freedom which will be related to the emergent low energy quasiparticles. The method proceeds by writing a functional integral in the enlarged Hilbert space, supplemented by Lagrange multipliers which enforce multiple constraints. The approach, at zero temperature, is very closely connected to the Gutzwiller method, which appears as a saddle point solution in the functional integral formalism \cite{kotliar_ruckenstein}. In its original formulation this method was not manifestly rotationally invariant, but it was extended in this respect in Refs.~\onlinecite{wolfle1,wolfle2,fresard}. Further generalizations to the multi-orbital formulation and to capture non-local self-energies were introduced in Ref.~\onlinecite{lechermann}, and we denote this formulation as the RISB (rotationally invariant slave boson) method. 
Within the slave particle method it is possible to go beyond mean field theory, and fluctuations around the saddle point generate the Hubbard bands in the one particle spectra~\cite{raimondi}. The RISB method can be used\textcolor{black}{{} to compute the energy of} lattice models. When used in conjunction with the DMFT self-consistency condition, it gives the same results as the direct application of the method to the lattice model~\cite{lanata_prx15}. In this review, we will restrict ourselves to the RISB mean field theory, specifically from the perspective of the functionals that describe the free energy of the system. We explain the physical meaning of the variables used in this method, and summarize succinctly the content of the mean field slave boson equations using a functional approach. A precise operational formulation of the method was only given recently~\cite{Lanata2017}. For pedagogical reasons we start again with a Hubbard model with a tight binding one body Hamiltonian $H(\vec{k})$. The variables used in RISB can be motivated by noticing that the many-body local density matrix $\rho$ (with matrix elements $\bra{\Gamma'}\rho\ket{\Gamma}$) admits a Schmidt decomposition, which can be written in terms of the expectation values of matrices of slave-boson operators $\phi_{Bn}$ and $\phi_{Bn}^{\dagger}$; these become c-number amplitudes $\phi_{Bn},\,\phi_{Bn}^{*}$ at the saddle point and, when the single-particle index $\alpha$ is $M$-dimensional, can be stored as $2^{M}\times2^{M}$ matrices $\Phi,$ $[\Phi]_{An}\equiv\phi_{An}$, $[\Phi^{\dagger}]_{nA}=\phi_{An}^{*}$, so that: \begin{equation} \rho=\Phi\Phi^{\dagger}. \end{equation} The method also introduces fermionic operators $f_{\alpha}$ at each site (site indices are omitted in the following) which will represent the low energy quasiparticles at the mean field level. 
The physical electron operator $\underline{d}$ is represented in the enlarged Hilbert space by \begin{equation} \underline{d}_{\alpha}=\sum_{\beta}R_{\alpha\beta}[\phi]f_{\beta}\label{eq:d_alpha} \end{equation} where the matrix $\mathcal{R}$, with elements $R_{\alpha\beta}$, has at the mean field level the interpretation of the quasiparticle residue, relating the physical electron to the quasiparticles. When $\mathcal{R}$ is small it exhibits the strong renormalizations induced by the electronic correlations. An important feature of the rotational invariant formalism is that the basis that diagonalizes the quasiparticles represented by the operators $f$ is not necessarily the same basis as that which would diagonalize the one electron density matrix expressed in terms of the operators $d$ and $d^{\dagger}$. Of central importance is the expression of the matrix $\mathcal{R}$ in terms of the bosonic amplitudes: \begin{align*} & R_{\alpha\beta}=\\ & \sum_{\gamma}\sum_{ABnm}[F_{\alpha}^{\dagger}]_{AB}[F_{\gamma}]_{mn}[\Phi^{\dagger}]_{nA}[\Phi]_{Bm}\left[\left(\Delta^{p}(1-\Delta^{p})\right)^{-1/2}\right]_{\gamma\beta}\\ & =\sum_{\gamma}\mathrm{Tr}\left[\Phi^{\dagger}F_{\alpha}^{\dagger}\Phi F_{\gamma}\right]\left[\left(\Delta^{p}(1-\Delta^{p})\right)^{-1/2}\right]_{\gamma\beta}. \end{align*} We introduced here the matrices $F$, \[ [F_{\alpha}]_{nm}\equiv{}_{f}\langle n|f_{\alpha}|m\rangle_{f}. \] The matrices \[ \Delta_{\alpha\beta}^{p}\equiv\sum_{Anms}\langle m|f_{\alpha}^{\dagger}|s\rangle\langle s|f_{\beta}|n\rangle\Phi_{nA}^{\dagger}\Phi_{Am}=\mathrm{Tr}\left[F_{\alpha}^{\dagger}F_{\beta}\Phi^{\dagger}\Phi\right] \] have the physical interpretation of a quasiparticle density matrix: \[ \langle f_{\alpha}^{\dagger}f_{\beta}\rangle=\Delta_{\alpha\beta}^{p}. 
\] For a multi-band Hubbard model with a tight-binding one-body Hamiltonian $H(\vec{k})$ and interactions $\sum_{i}H_{i}^{loc}$, the RISB functional, whose extremization gives the slave-boson mean field equations, is expressed in terms of $\phi_{i},\,\phi_{i}^{\dagger}$ (the slave-boson amplitude matrices) and the matrices $\lambda_{i}^{c}$, $\lambda_{i}$, ${\cal D}_{i}$. These are $N\times N$ matrices of Lagrange multipliers: (i) $\lambda_{i}^{c}$ enforces the definition of $\Delta_{i}^{p}$ in terms of the RISB amplitudes, (ii) $\lambda_{i}$ enforces the Gutzwiller constraints and (iii) $\mathcal{D}_{i}$ enforces the definition of $\mathcal{R}_{i}$ in terms of the slave boson amplitudes. Another variable, $E^{c}$, enforces \textcolor{black}{the normalization $\mathrm{Tr}[\Phi\Phi^{\dagger}]=1$. } \textcolor{black}{The variables ${\cal {R}}$,~$\lambda$ can be thought of as a parametrization of the self-energy, while the matrices ${\cal {D}}$, $\lambda^{c}$ are a parametrization of a small impurity model (the dimension of the bath Hilbert space is the same as that of the impurity Hilbert space): ${\cal {D}}$ is the hybridization function of the associated impurity model, while $\lambda^{c}$ parametrizes the energy of the bath. $\Delta^{p}$ describes the quasiparticle occupancies, which are the static analogs of the impurity quasiparticle Green's function. 
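The meaning of $\mathcal{R}$ and $\Delta^{p}$ can be made explicit in the smallest nontrivial case, a single band with spin ($M=2$). The sketch below (amplitudes chosen by hand, not obtained from a self-consistent solution) builds the four-dimensional local Fock space, takes a normalized, paramagnetic, half-filled $\Phi$, and recovers the well-known Gutzwiller quasiparticle weight $Z=|R|^{2}=8D(1-2D)$, with $D$ the double occupancy:

```python
import numpy as np

# Fock space |n_up, n_dn> of one spinful orbital (M = 2), dimension 2^M = 4
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilation operator
P = np.diag([1.0, -1.0])                 # fermionic (Jordan-Wigner) sign
F_up = np.kron(a, np.eye(2))             # matrices [F_alpha]_{nm} = <n|f_alpha|m>
F_dn = np.kron(P, a)                     # (the down spin gives identical results)

# Paramagnetic, half-filled, normalized Phi = diag(e, p, p, d), Tr[Phi Phi^dag] = 1;
# D = d^2 is the double occupancy
D = 0.15
d = np.sqrt(D)
e = d                                    # e = d at half filling
p = np.sqrt((1.0 - 2.0 * D) / 2.0)
Phi = np.diag([e, p, p, d])

# Quasiparticle density matrix: Delta^p = Tr[F^dag F Phi^dag Phi] = 1/2 at half filling
Delta = np.trace(F_up.T @ F_up @ Phi.T @ Phi)

# R = Tr[Phi^dag F^dag Phi F] / sqrt(Delta (1 - Delta)), a scalar here by symmetry
R = np.trace(Phi.T @ F_up.T @ Phi @ F_up) / np.sqrt(Delta * (1.0 - Delta))

Z = R**2
assert np.isclose(Delta, 0.5)
assert np.isclose(Z, 8.0 * D * (1.0 - 2.0 * D))   # Gutzwiller quasiparticle weight
```

At $D=1/4$ (the uncorrelated value) this gives $Z=1$; as the interaction suppresses $D$ toward zero, $Z$ vanishes, signaling the Brinkman--Rice transition.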
} The RISB (Gutzwiller) functional for a model Hamiltonian with a local part which is bundled together with a local interaction term in $H^{loc}$ and a kinetic energy matrix which is non-local $H(\vec{k})^{nonloc}$, was constructed in Ref.~\onlinecite{lanata_prx15}: \begin{widetext} \begin{eqnarray} & & \Gamma_{\text{model}}[\phi,E^{c};\,\mathcal{R},\mathcal{R}^{\dagger},\lambda;\,\mathcal{D},\mathcal{D}^{\dagger},\lambda^{c};\,\Delta^{p}]=\nonumber \\[-0.6mm] & & -\lim_{\mathcal{T}\rightarrow0}\frac{\mathcal{T}}{\mathcal{N}}\sum_{k}\sum_{m\in\mathbb{Z}}\mathrm{Tr}\,\ln\!\left(\frac{1}{i(2m+1)\pi\mathcal{T}-\mathcal{R} H(\vec{k})^{nonloc}\mathcal{R}^{\dagger}-\lambda+\mu}\right)e^{i(2m+1)\pi\mathcal{T}0^{+}}\\ & & +\sum_{i}\mathrm{Tr}\bigg[\phi_{i}^{{\phantom{\dagger}}}\phi_{i}^{\dagger}\,H_{i}^{loc}\!+\!\sum_{a\alpha}\left(\left[\mathcal{D}_{i}\right]_{a\alpha}\,\phi_{i}^{\dagger}\,F_{i\alpha}^{\dagger}\,\phi_{i}^{{\phantom{\dagger}}}\,F_{ia}^{{\phantom{\dagger}}}+\text{H.c.}\right)\!+\!\sum_{ab}\left[\lambda_{i}^{c}\right]_{ab}\,\phi_{i}^{\dagger}\phi_{i}^{{\phantom{\dagger}}}\,F_{ia}^{\dagger}F_{ib}^{{\phantom{\dagger}}}\bigg]\!+\!\sum_{i}E_{i}^{c}\!\left(1\!-\mathrm{Tr}\big[\phi_{i}^{\dagger}\phi_{i}^{{\phantom{\dagger}}}\big]\right)\nonumber \\[-0.6mm] & & -\sum_{i}\bigg[\sum_{ab}\big(\left[\lambda_{i}\right]_{ab}+\left[\lambda_{i}^{c}\right]_{ab}\big)\left[\Delta_{i}^{p}\right]_{ab}+\sum_{ca\alpha}\left(\left[\mathcal{D}_{i}\right]_{a\alpha}\left[\mathcal{R}_{i}\right]_{c\alpha}\big[\Delta_{i}^{p}(1-\Delta_{i}^{p})\big]_{c\alpha}^{\frac{1}{2}}+\text{c.c.}\right)\bigg]\,.\label{Lag-SB} \end{eqnarray} \end{widetext} This method can also be turned into an \textit{ab-initio} DFT+G method (or DFT+RISB). To motivate the construction of a \textbf{DFT+G }functional we simply follow the same path used above to go from the model DMFT Hamiltonian to a DFT+DMFT functional. 
We replace $H(\vec{k})$ by $-\frac{1}{2}\nabla^{2}+V_{KS}(\vec{r})$, which has a local and a nonlocal part, and follow the same steps as in the DFT+DMFT section. \begin{align*} & \Gamma_{DFT+G}\left[\rho\left(\vec{r}\right),V_{KS}\left(\vec{r}\right),\phi,E^{c};\mathcal{R},\mathcal{R}^{\dagger},\lambda;\mathcal{D},\mathcal{D}^{\dagger},\lambda^{c};\Delta_{i}^{p}\right]=\\ & \,\,\,\Gamma_{model}[H(\vec{k})\rightarrow-\frac{1}{2}\nabla^{2}+V_{KS}(\vec{r})]+\Gamma_{2}[V_{KS}\left(\vec{r}\right),\rho\left(\vec{r}\right)]-\\ & \,\,\,\,\,\sum_{i}\Phi_{DC}[\Delta_{i}^{p}] \end{align*} where $\Gamma_{2}$ and $\Phi_{DC}$ are the same functionals defined in the subsection on DMFT. The LDA+RISB and the LDA+G methods are completely equivalent (more precisely, the slave boson method has a gauge symmetry, and a specific gauge needs to be chosen to correspond to the multi-orbital Gutzwiller method introduced in Ref.~\onlinecite{Bunemann2007a}). DFT+G was formulated in Refs.~\onlinecite{Deng2007,Ho08}. The slave boson method in combination with DFT was first used in Ref.~\onlinecite{Savrasov2005} in a non-rotationally-invariant framework and with full rotational invariance in Ref.~\onlinecite{lechermann}. For a recent review see Ref.~\onlinecite{Piefke2017}. \subsubsection*{Comparing the methods, critical discussion, future directions and outlook} For weakly correlated systems we argued in section \ref{sec:correlations} that, once the structure is known, we have a well-defined path to compute their properties using DFT and the $G_{0}W_{0}$ method. Going beyond this requires moving in the space illustrated in Fig.~\ref{fig:axes}. This has to be done while respecting as many general properties as possible, such as conservation laws (Refs.~\onlinecite{Baym1961,PhysRev.127.1391,PhysRevB.96.075155}), sum rules, unitarity and causality (Refs.~\onlinecite{PhysRevB.69.205108,PhysRevB.94.125124,PhysRevB.90.115134}). This is a very difficult problem which is under intensive investigation. 
This section reviewed several Green's-function-based approaches available for studying strongly-correlated-electron materials. The reader may wonder why we considered multiple methods. There are two reasons. First, as stressed throughout the paper and demonstrated in the examples presented in the next sections, there are materials where correlations are mostly static, and others where dynamical correlations dominate the physics. These different types of correlations require different methods. Second, even when two methods treat the same type of correlations, they have different accuracies and computational speeds. Finding the correct trade-off between speed and accuracy will be important, in particular when high throughput studies start becoming feasible for strongly correlated systems. As we strive towards a fully controlled but practical solution of the full many-body problem for solid state physics, we will need more exact and thus slower methods to benchmark the faster but more practical ones. Hence it is important to compare them and understand their connection. Static correlations can be treated by GW methods, and one can view the hybrid-functional exchange-correlation potentials as faster approximations to the QPGW exchange-correlation potential. One can also assess whether the GGA (or LDA) exchange-correlation potential is a good approximation to the self-energy in a given material by checking how close it is to the corresponding self-consistent QPGW exchange-correlation potential. In the same spirit one can understand the successes of LDA+DMFT from the \textbf{GW+DMFT }perspective. One issue is the definition of $U$ in a solid. The functional $\Phi$ can be viewed as the functional of an Anderson impurity model which contains a frequency-dependent interaction $U(\omega)$ obeying the self-consistency condition: \begin{equation} U^{-1}=W_{loc}^{-1}+\Pi_{loc}. 
\end{equation} This provides a link between LDA+DMFT, which uses a parameter $U$, and the GW+DMFT method, where this quantity is self-consistently determined. An important question is thus under which circumstances one can approximate the Hubbard $U$ by its static value. For projectors constructed on a very broad window, $U(\omega)$ is constant over a broad range of frequencies~\cite{Kutepov2010}. An important open question is how one can incorporate efficiently the effects of the residual frequency dependence of this interaction. Another question is the validity of the local ansatz for graphs beyond the $GW$ approximation. This question was first addressed in Ref.~\onlinecite{zein_first}, which showed that the lowest-order $GW$ graph is highly non-local in all semiconductors, as expected since the exchange (Fock) graph is very non-local. On the other hand, higher-order graphs in transition metals in an LMTO basis set were shown to be essentially local. Consider a system such as Cerium, containing light $spd$ electrons and heavier, more correlated, $f$ electrons. We know that for very extended systems, the GW quasiparticle band structure is a good approximation to the LDA band structure. Therefore the self-energy of a diagrammatic treatment of the light electrons can be approximated by the exchange-correlation potential of the LDA (or by other improved static approximations if more admixture of exchange is needed). Diagrams of all orders, but in a local approximation, are used for the $f$ electrons. In the full many-body treatment $\Sigma_{ff}$ is computed using skeleton graphs with $G_{loc}$ and $W_{loc}$. To reach the LDA+DMFT equations, one envisions that at low energies the effects of the frequency dependent interaction $U(\omega)$ can be taken into account by a static $U$, which should be close to (but slightly larger than) $U(\omega=0)$. The $ff$ block of the self-energy now approaches $\Sigma_{ff}-E_{dc}$. 
We reach the LDA+DMFT equations with some additional understanding of the origin of the approximations used to derive them from the GW+DMFT approximation, as summarized schematically in $\Sigma_{GW+DMFT}\left(\vec{k},\omega\right)\longrightarrow$ \[ \left(\begin{array}{cc} 0 & 0\\ 0 & \Sigma_{ff}-E_{dc} \end{array}\right)+\left(\begin{array}{cc} V_{xc}[\vec{k}]_{spd,spd} & V_{xc}[\vec{k}]_{spd,f}\\ V_{xc}[\vec{k}]_{f,spd} & V_{xc}[\vec{k}]_{f,f} \end{array}\right). \] Realistic implementations of combinations of GW and DMFT have not yet reached the maturity of LDA+DMFT implementations, and are a subject of current research. Recent self-consistent implementations include Refs.~\onlinecite{PhysRevMaterials.1.043803,Choi2016}. When strong dynamical correlations are involved, the spectra are very far from those of free fermions. The one electron spectral function $A(\vec{k},\omega)$ displays not only a dispersive quasiparticle peak, but also other features commonly denoted as satellites. The collective excitations, which appear in the spin and charge excitation spectra, do not resemble the particle-hole continuum of the free Fermi gas with additional collective modes (zero sound, spin waves) produced by the residual interactions. Finally, the damping of the elementary excitations in many regimes does not resemble that of a Fermi liquid. Strong dynamical correlations are accompanied by anomalous transport properties, large transfers of optical spectral weight, large mass renormalizations, as well as metal-insulator transitions as a function of temperature or pressure. These can be captured by DMFT, which, combined with electronic structure methods, enables the treatment of these effects in a material-specific setting, but not by LDA+G, which only provides a quasiparticle description of the spectra. 
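The schematic block form of $\Sigma_{GW+DMFT}$ above amounts to a few lines of linear algebra; in the sketch below all dimensions and matrix entries are purely illustrative placeholders:

```python
import numpy as np

n_spd, n_f = 9, 7                        # illustrative block dimensions
n = n_spd + n_f
rng = np.random.default_rng(1)

# Static, k-dependent exchange-correlation matrix on the full spd+f space
V_xc = rng.standard_normal((n, n))
V_xc = (V_xc + V_xc.T) / 2.0

# Dynamical local self-energy of the f shell at one frequency (a small negative
# imaginary part, retarded convention) and the double-counting shift E_dc
Sigma_ff = rng.standard_normal((n_f, n_f)) - 0.05j * np.eye(n_f)
E_dc = 2.0 * np.eye(n_f)

# Assemble Sigma(k, omega): V_xc everywhere, dynamical part only in the ff block
Sigma = V_xc.astype(complex)
Sigma[n_spd:, n_spd:] += Sigma_ff - E_dc

# The spd-spd and off-diagonal blocks remain purely static
assert np.allclose(Sigma[:n_spd, :n_spd], V_xc[:n_spd, :n_spd])
assert np.allclose(Sigma[:n_spd, n_spd:], V_xc[:n_spd, n_spd:])
```

In an actual code this assembly is repeated for every $\vec{k}$ point and Matsubara (or real) frequency, with $V_{xc}$ taken from the converged DFT calculation.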
Many successful comparisons with experimental ARPES, optical, and neutron scattering data have been made over the last two decades using LDA+DMFT, which strikes an excellent compromise between accuracy and speed; it is now the mainstay for the elucidation of structure-property relations in strongly correlated materials. LDA+G can at best describe the quasiparticle features in those spectra. On the other hand, as will be stressed through examples, for total energy evaluations, which are a central part of the material design workflow, faster methods are currently needed. We described above two methods, the LDA+U method and the Gutzwiller RISB method, which fall in this ``fast but less accurate'' category. These methods can be viewed as approximations to the many body problem within a DMFT perspective. As pointed out in Refs.~\onlinecite{lanata_prx15} and \onlinecite{PhysRevB96235139}, the Gutzwiller RISB leads to a DMFT-like impurity solver with a bath consisting of only one site. LDA+U can be viewed as a limiting case of DMFT, where a static local self-energy is considered. There are numerous algorithmic challenges in optimizing studies of materials based on DMFT. While CTQMC runs for solving the Anderson impurity model in the single-orbital case, as well as for 2 or 3 orbitals ($e_{g}$ and $t_{2g}$ electrons), can be completed on one CPU in less than one day even for extremely low temperatures, a full $d$ shell (5 orbitals) requires several days, and the full $f$ shell is still at the border of what can be done with current methods. All this assumes high symmetry situations, where the hybridization function is diagonal. Off-diagonal hybridization introduces severe minus sign problems. Alternative exact diagonalization-based methods, such as NRG or DMRG, will be needed. This would also help with the problem of reducing the uncertainties involved in the process of analytic continuation. 
\textcolor{black}{While the ansatz of Eq.~(\ref{eq:sigma_ansatz}) has reproduced the photoemission spectra of many materials, there have not been high-throughput studies which would enable us to systematically search for deviations. This requires the improvement of computational tools, an area of active research. What if the $\boldsymbol{k}$ and $\omega$ dependencies cannot be disentangled? This situation may arise near a quantum critical point.} Methods to incorporate the non-local correlations beyond DMFT are an important subject of active research, which is reviewed in Ref.~\onlinecite{RevModPhys90025003}. Armed with an understanding of methods to treat correlations and their physical and computational trade-offs, we proceed in section \ref{sec:workflow} to construct a workflow for designing correlated materials. 
\section{Geodesics on a static, negatively curved space} The most familiar description of the metric in an open universe is in hyperbolic coordinates \begin{equation} ds^2=-dt^2+dr^2+\sinh ^2r\left( d\theta ^2+\sin ^2\theta d\phi ^2\right) \ \ . \label{stat} \end{equation} However, the geodesics can be found more simply in a coordinate system $(x,y,z)$ related to $(r,\theta ,\phi )$ by \begin{eqnarray} e^{-z} &=&\cosh {r}-\sinh {r}\cos {\theta } \nonumber \\ e^{-z}x &=&\sin {\theta }\cos {\phi }\sinh {r} \nonumber \\ e^{-z}y &=&\sin {\theta }\sin {\phi }\sinh {r}\ \ . \label{ge} \end{eqnarray} In the $(x,y,z)$ coordinate system the metric is \begin{equation} ds^2=-dt^2+dz^2+e^{-2z}(dx^2+dy^2)\ \ . \label{one} \end{equation} The geodesic equations can be found in the usual way from \begin{equation} {\frac{d^2x^\mu }{d\lambda ^2}}+\Gamma _{\alpha \beta }^\mu {\frac{dx^\alpha }{d\lambda }}{\frac{dx^\beta }{d\lambda }}=0\ \ , \end{equation} but it is more efficient to introduce the Lagrangian \begin{equation} L={\frac 12}\left[ -t^{\prime 2}+z^{\prime 2}+e^{-2z}\left( x^{\prime 2}+y^{\prime 2}\right) \right] \end{equation} where ${}^\prime={d/d\lambda }$. The equations of motion can be found from \begin{equation} \Pi _q^{\prime }-{\frac{\partial L}{\partial q}}=0 \end{equation} where $\Pi _q=\partial L/\partial q^\prime$ is the momentum conjugate to coordinate $q$. Since the Lagrangian is independent of $(t,x,y)$, the corresponding conjugate momenta are conserved, giving \begin{eqnarray} \Pi _t=-t^{\prime } &=&-E_{{\rm i}} \nonumber \\ \Pi _x=e^{-2z}x^\prime &=&\Pi _{x{\rm i}} \nonumber \\ \Pi _y=e^{-2z}y^\prime &=&\Pi _{y{\rm i}} \label{piz} \end{eqnarray} where $E_{{\rm i}},\Pi _{x{\rm i}},\Pi _{y{\rm i}}$ are constants of the motion. The remaining equation, for $z$, is \begin{equation} z^{\prime \prime }+e^{-2z}(x^{\prime 2}+y^{\prime 2})=0\ \ . \end{equation} This second-order equation can be reduced to first order by exploiting the invariance of the length element. 
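Before carrying out this reduction, the equations of motion just derived can be checked numerically. The short fourth-order Runge--Kutta integration below (step size and initial data are arbitrary) confirms that $\Pi_{x}=e^{-2z}x^{\prime}$ and $\Pi_{y}=e^{-2z}y^{\prime}$ stay constant along the flow, and that $z$ turns around, as the sign of $z^{\prime\prime}$ dictates:

```python
import math

def deriv(s):
    # State s = (z, z', x, x', y, y'); from the Euler-Lagrange equations:
    # z'' = -e^{-2z}(x'^2 + y'^2) and (e^{-2z} x')' = 0  =>  x'' = 2 z' x'
    z, vz, x, vx, y, vy = s
    e2 = math.exp(-2.0 * z)
    return [vz, -e2 * (vx * vx + vy * vy), vx, 2.0 * vz * vx, vy, 2.0 * vz * vy]

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv([si + 0.5 * h * ki for si, ki in zip(s, k1)])
    k3 = deriv([si + 0.5 * h * ki for si, ki in zip(s, k2)])
    k4 = deriv([si + h * ki for si, ki in zip(s, k3)])
    return [si + h * (a + 2 * b + 2 * c + d) / 6.0
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

# Arbitrary initial data: z_i = 0, z' > 0, nonzero transverse velocities
s = [0.0, 0.5, 0.0, 0.3, 0.0, 0.4]
Pi_x0 = math.exp(-2.0 * s[0]) * s[3]
Pi_y0 = math.exp(-2.0 * s[0]) * s[5]

h = 1.0e-3
for _ in range(5000):                    # integrate to affine parameter 5
    s = rk4_step(s, h)

z, vz, x, vx, y, vy = s
assert abs(math.exp(-2.0 * z) * vx - Pi_x0) < 1e-8   # Pi_x conserved
assert abs(math.exp(-2.0 * z) * vy - Pi_y0) < 1e-8   # Pi_y conserved
assert vz < 0.0                                       # z has passed its maximum
```

The $t$ equation decouples ($t^{\prime}=E_{\rm i}$ is constant) and is omitted.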
Choosing the affine parameter $\eta $ so that $ds^2/d\eta ^2=\alpha $ ($\alpha =0$ for photons and $\alpha =-1$ for massive particles), we have from (\ref{one}) \begin{equation} \alpha =-E_{{\rm i}}^2+z^{\prime 2}+e^{-2z}\left( x^{\prime 2}+y^{\prime 2}\right) \ \ . \end{equation} Substituting the solutions for ${x^{\prime }}$ and ${y^{\prime }}$ and defining $W_{{\rm i}}^2=\Pi _{x{\rm i}}^2+\Pi _{y{\rm i}}^2$, we can solve this for ${z^{\prime }}$ to find \begin{equation} {z^{\prime }}=\pm \left[ (\alpha +E_{{\rm i}}^2)-e^{2z}W_{{\rm i}}^2\right] ^{1/2}\ \ . \label{velx} \end{equation} The system has been reduced to first-order equations. Notice that $z^{\prime \prime }\le 0$ always. It follows that if ${z^{\prime }}>0$, then $z$ will reach a maximum and the geodesic then reverses direction. Integrating the $z$ equation, we have \begin{equation} \int_{z_{{\rm i}}}^z{\frac{dz}{\left[ (\alpha +E_{{\rm i}}^2)-e^{2z}W_{{\rm i}}^2\right] ^{1/2}}}=\pm (\eta -\eta _{{\rm i}})\ \ . \label{left} \end{equation} This completes the solution. Let the initial time be $\eta _{{\rm i}}=0$. First, if $W_{{\rm i}}^2=0$, the trajectories are simple lines, \begin{equation} {W_{{\rm i}}^2=0\ \ \ \ \ z=z_{{\rm i}}\pm \gamma \eta ,} \end{equation} with $\gamma =(\alpha +E_{{\rm i}}^2)^{1/2}$. If $W_{{\rm i}}\ne 0$ we can integrate (\ref{left}) and then (\ref{ge}) to obtain \begin{eqnarray} W_{{\rm i}}^2e^{2z} &=&\gamma ^2{\frac 1{\cosh ^2(\gamma \eta \mp \beta _{{\rm i}})},} \nonumber \\ x &=&x_{{\rm i}}+{\frac{\Pi _{x{\rm i}}}{W_{{\rm i}}^2}}\gamma \left[ \tanh (\gamma \eta \mp \beta _{{\rm i}})\pm \tanh (\beta _{{\rm i}})\right] , \nonumber \\ y &=&y_{{\rm i}}+{\frac{\Pi _{y{\rm i}}}{W_{{\rm i}}^2}}\gamma \left[ \tanh (\gamma \eta \mp \beta _{{\rm i}})\pm \tanh (\beta _{{\rm i}})\right] . \label{oneway} \end{eqnarray} The last constant of integration that appears here is \begin{equation} \tanh \beta _{{\rm i}}=\sqrt{\gamma ^2-e^{2z_{{\rm i}}}W_{{\rm i}}^2}/\gamma \ \ .
\end{equation} Notice that, unless a particle is shot directly along the axis, it will never reach $z=+\infty $. The constant term proportional to $\tanh (\beta _{{\rm i}})$ could be absorbed into $x_{{\rm i}}$, but then $x_{{\rm i}}$ would not correspond to the value $x(\eta _{{\rm i}}=0)$. Rewriting these in another way, which is useful when tracing geodesics, gives \begin{eqnarray} e^{-z} &=&e^{-z_{{\rm i}}}\left[ \gamma \cosh (\gamma \eta )\mp \sinh (\gamma \eta )\sqrt{\gamma ^2-e^{2z_{{\rm i}}}W_{{\rm i}}^2}\right] \nonumber \\ x &=&x_{{\rm i}}+{\Pi _{x{\rm i}}e^{2z_{{\rm i}}}}\gamma ^2\left[ \frac{\tanh (\gamma \eta )}{\gamma \mp \tanh (\gamma \eta )\sqrt{\gamma ^2-e^{2z_{{\rm i}}}W_{{\rm i}}^2}}\right] \nonumber \\ y &=&y_{{\rm i}}+{\Pi _{y{\rm i}}e^{2z_{{\rm i}}}}\gamma ^2\left[ \frac{\tanh (\gamma \eta )}{\gamma \mp \tanh (\gamma \eta )\sqrt{\gamma ^2-e^{2z_{{\rm i}}}W_{{\rm i}}^2}}\right] \label{another} \end{eqnarray} where \begin{equation} {\frac{\gamma ^2}{\cosh ^2{\beta _{{\rm i}}}\,W_{{\rm i}}^2}}=\exp (2z_{{\rm i}})\ \ . \end{equation} We can write the path parametrically as \begin{equation} e^{2z}={\frac{\gamma ^2}{W_{{\rm i}}^2}}-(x-\hat x_{{\rm i}})^2-(y-\hat y_{{\rm i}})^2 \end{equation} where $\hat x_{{\rm i}}=x_{{\rm i}}\pm (\Pi _{x{\rm i}}/W_{{\rm i}}^2)\gamma \tanh (\beta _{{\rm i}})$ and $\hat y_{{\rm i}}$ is defined analogously. These trajectories have some very odd properties. As already mentioned, the trajectories never reach $z=+\infty $. Any photon travelling along increasing $z$ eventually hits a maximum and then wraps back. As it moves toward $z=-\infty $, the velocities along $x$ and $y$ fall (eqns (\ref{piz})), so that it never reaches infinite values of $x$ or $y$. \subsection{Geodesic flows in an expanding universe} When the expansion of space is included, the full metric becomes that of the open Friedmann universe, \begin{equation} ds^2=-dt^2+a^2(t)\left[ dz^2+e^{-2z}(dx^2+dy^2)\right] \ \ .
\end{equation} The {\it null} geodesic equations are \begin{equation} \ddot t+H\dot t^2=0 \label{time} \end{equation} \begin{equation} \ddot z+e^{-2z}\left( \dot x^2+\dot y^2\right) +2{\cal H}\dot z=0 \end{equation} \begin{equation} \ddot x-2\dot z\dot x+2{\cal H}\dot x=0 \end{equation} \begin{equation} \ddot y-2\dot z\dot y+2{\cal H}\dot y=0\ \ , \end{equation} where an overdot denotes $d/d\lambda $ and ${\cal H}=H\dot t=d\ln {a}/d\lambda $. Notice from $(\ref{time})$ that \begin{equation} \dot t={\frac 1a}\ \ , \end{equation} so $d\lambda =a\,dt$. All of the results of the previous section can quickly be adapted to the case with expansion if a time coordinate is chosen astutely. Now let $\prime {}=d/d\eta $. Then the geodesic equations become \begin{equation} \dot \eta ^2\left[ z^{\prime \prime }+e^{-2z}({x^{\prime }}^2+{y^{\prime }}^2)\right] +z^\prime\left[ \ddot \eta +2{\cal H}\dot \eta \right] =0 \end{equation} \begin{equation} \dot \eta ^2\left[ x^{\prime \prime }-2{z^{\prime }}{x^{\prime }}\right] +{x^{\prime }}\left[ \ddot \eta +2{\cal H}\dot \eta \right] =0 \end{equation} \begin{equation} \dot \eta ^2\left[ y^{\prime \prime }-2{z^{\prime }}{y^{\prime }}\right] +y^\prime\left[ \ddot \eta +2{\cal H}\dot \eta \right] =0\ \ . \end{equation} If we choose the coordinate $\eta $ such that \begin{equation} \ddot \eta +2{\cal H}\dot \eta =0, \end{equation} then the geodesic equations become \begin{eqnarray} z^{\prime \prime }+e^{-2z}\left( {x^{\prime }}^2+{y^{\prime }}^2\right) &=&0 \nonumber \\ x^{\prime \prime }-2{z^{\prime }}{x^{\prime }} &=&0 \\ y^{\prime \prime }-2{z^{\prime }}{y^{\prime }} &=&0 \nonumber \end{eqnarray} which are precisely the same as those on a static, negatively curved hypersurface.
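As a consistency check on this reduction, the conformal-time system can be integrated numerically. The following Python sketch (ours, not part of the paper; initial data, step size, and function names are arbitrary choices) verifies that the conjugate momenta of eqns (\ref{piz}) and the null constraint are conserved along the flow, and that $z(\eta )$ agrees with the closed-form solution (\ref{oneway}) for a photon ($\gamma =1$).

```python
import math

def deriv(s):
    # s = (z, x, y, z', x', y'); primes denote d/d(eta)
    z, x, y, vz, vx, vy = s
    e2 = math.exp(-2.0 * z)
    return (vz, vx, vy,
            -e2 * (vx * vx + vy * vy),   # z'' = -e^{-2z}(x'^2 + y'^2)
            2.0 * vz * vx,               # x'' = 2 z' x'
            2.0 * vz * vy)               # y'' = 2 z' y'

def rk4_step(s, h):
    # one classical fourth-order Runge--Kutta step
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + (h / 6.0) * (a + 2.0 * b + 2.0 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def invariants(s):
    # conjugate momenta Pi_x, Pi_y and the null combination
    # z'^2 + e^{-2z}(x'^2 + y'^2), which should stay equal to E_i^2
    z, x, y, vz, vx, vy = s
    e2 = math.exp(-2.0 * z)
    return e2 * vx, e2 * vy, vz * vz + e2 * (vx * vx + vy * vy)
```

For a photon started at the origin with $\Pi _{x{\rm i}}=0.3$, $\Pi _{y{\rm i}}=0.4$ (so $W_{{\rm i}}=1/2$), the drift in the invariants after a few thousand steps is at round-off level, and $z(\eta )$ reproduces $e^{z}={\rm sech}(\eta -\beta _{{\rm i}})/W_{{\rm i}}$.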
The solutions are then the same as equations (\ref{oneway}) (or equivalently (\ref{another})) with $\gamma =1$ for photons, where \begin{equation} \eta =\int {\frac{d\lambda }{a^2}}=\int {\frac{dt}{a(t)}}\ \ \end{equation} is the usual conformal time. For timelike geodesics, the motion can again be projected onto a static hypersurface with a new affine parameter. Again, we recover the geodesic flows of the previous section with $\alpha =-1$, except that the time parameter for massive particles is not conformal time but rather \cite{lock} \begin{equation} \eta =\int {\frac{dt}{a(v_o+a^2)^{1/2}}}\ \ \end{equation} where $v_o=av(t)/\sqrt{1-v^2}$ and $v^2(t)=g_{ij}\dot x^i\dot x^j$. \section{Tracing geodesics} The microwave background provides a sensitive probe of the curvature of the universe. If we locate the Earth (or our near-Earth satellite) at the origin of the coordinate system, then only photons from the surface of last scatter which travel along radial geodesics will be observed. The geodesic equations are then greatly simplified with respect to the direction of observation on the sky. The photons seen in the sky today can be traced backwards to locate their point of origin. We have six unknowns: \begin{eqnarray} \ z_{{\rm i}}&\leftrightarrow \beta _{{\rm i}}\quad \quad &z_{{\rm i}}^{\prime } \nonumber \\ x_{{\rm i}} &&\Pi _{x{\rm i}} \nonumber \\ y_{{\rm i}} &&\Pi _{y{\rm i}}\ \ . \end{eqnarray} The magnitude of $z_{\rm i}^\prime $ is fixed by eqn (\ref{velx}) once the other initial coordinates are specified; the choice of $z_{\rm i}^\prime $ then reduces to a choice of the sign in eqn (\ref{velx}). There are two sets of boundary conditions: ($i$) the photon position vector is at the origin, and ($ii$) the velocity vector is opposite to the unit vector pointing in the direction of observation. In other words, \begin{eqnarray} (i)\quad \vec x &=&\vec x_o=\vec 0 \nonumber \\ (ii)\quad \hat v &=&-\hat n(\theta ,\phi )\ \ .
\end{eqnarray} Boundary condition $(i)$ places the Earth at the origin of the coordinate system. Using the geodesic solutions, we have \begin{eqnarray} W_{{\rm i}}^2 &=&{\frac 1{\cosh ^2(\eta _o\mp \beta _{{\rm i}})},} \nonumber \\ x_{{\rm i}} &=&-{\frac{\Pi _{x{\rm i}}}{W_{{\rm i}}^2}}\left[ \tanh (\eta _o\mp \beta _{{\rm i}})\pm \tanh (\beta _{{\rm i}})\right] , \nonumber \\ y_{{\rm i}} &=&-{\frac{\Pi _{y{\rm i}}}{W_{{\rm i}}^2}}\left[ \tanh (\eta _o\mp \beta _{{\rm i}})\pm \tanh (\beta _{{\rm i}})\right] \ \ . \label{xc} \end{eqnarray} When evaluated at the origin, the geodesic equations relate the components of the velocity vector today to their initial values: \begin{eqnarray} z_o^{\prime 2} &=&1-W_{{\rm i}}^2 \nonumber \\ x_o^{\prime } &=&\Pi _{x{\rm i}} \nonumber \\ y_o^{\prime } &=&\Pi _{y{\rm i}}\ \ . \end{eqnarray} This velocity vector is normalized to $1$ as it must be for photons. Using \begin{equation} \hat n=\hat r=\sin \theta \cos \phi \hat {{\rm i}}+\sin \theta \sin \phi \hat {{\rm j}}+\cos \theta \hat {{\rm k}}\ ,\ \end{equation} we can rotate this into the $(x,y,z)$ coordinate system at the origin to find \begin{eqnarray} \hat z&\leftrightarrow &\hat {{\rm k}} \nonumber \\ \hat x&\leftrightarrow &\hat {{\rm j}} \nonumber \\ \hat y&\leftrightarrow &\hat {{\rm i}}\ \ . \end{eqnarray} Condition $(ii)$ then gives \begin{eqnarray} z_o^{\prime } &=&-\cos \theta \nonumber \\ x_o^{\prime }=\Pi _{x{\rm i}} &=&-\sin \theta \cos \phi \nonumber \\ y_o^{\prime }=\Pi _{y{\rm i}} &=&-\sin \theta \sin \phi \ \ . \label{vc} \end{eqnarray} It follows that $W_{{\rm i}}^2=\sin ^2\theta $. Also, taking the derivative of the geodesic equation, \begin{equation} z_o^{\prime }=-\tanh (\eta _o\mp \beta _{{\rm i}})=-\cos \theta ,\ \ \end{equation} from which it follows that \begin{equation} \mp \tanh (\beta _{{\rm i}})={\frac{\cos \theta -\tanh (\eta _o)}{1-\tanh (\eta _o)\cos \theta }}\ \ \end{equation} and hence $\beta _{{\rm i}}=\pm (\eta _o-{\rm arctanh}(\cos \theta ))$.
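This last step is easy to check numerically. The short Python sketch below (our illustration; the function names are not from the paper) confirms that the two expressions for $\beta _{{\rm i}}$ coincide (taking the upper sign), and that the velocity components in (\ref{vc}) are correctly normalized for a photon.

```python
import math

def beta_upper_sign(theta, eta_o):
    # beta_i = eta_o - arctanh(cos(theta))   (upper sign choice)
    return eta_o - math.atanh(math.cos(theta))

def minus_tanh_beta(theta, eta_o):
    # -tanh(beta_i) = (cos(theta) - tanh(eta_o)) / (1 - tanh(eta_o) cos(theta))
    c, t = math.cos(theta), math.tanh(eta_o)
    return (c - t) / (1.0 - t * c)

def velocity_today(theta, phi):
    # (z'_o, x'_o, y'_o) at the origin, from boundary condition (ii)
    return (-math.cos(theta),
            -math.sin(theta) * math.cos(phi),
            -math.sin(theta) * math.sin(phi))
```

The first two functions agree because of the subtraction formula for $\tanh $, and the velocity satisfies $z_o^{\prime 2}+x_o^{\prime 2}+y_o^{\prime 2}=1$ with $W_{{\rm i}}^2=\sin ^2\theta $.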
Putting (\ref{vc}) into (\ref{xc}) we find \begin{eqnarray} e^{-z_{{\rm i}}} &=&{\ \cosh (\eta _o)-\sinh (\eta _o)\cos \theta } \nonumber \\ e^{-z_{{\rm i}}}x_{{\rm i}} &=&{\sin \theta \cos \phi }\sinh (\eta _o) \nonumber \\ e^{-z_{{\rm i}}}y_{{\rm i}} &=&{\sin \theta \sin \phi }\sinh (\eta _o)\ \ . \label{xca} \end{eqnarray} These are all radial geodesics in $(r,\theta ,\phi )$; that is, $r=\eta _o-\eta $. The integrated Sachs-Wolfe effect considers only radial geodesics, but processes such as gravitational lensing, which can deflect a photon into the line of sight, would draw from the more general pool of non-radially directed photons. \section{Local and Global anisotropy} The full geodesics on a universe of negative curvature have been obtained in an explicit form most accessible to cosmologists. One arena of renewed interest where the full geodesics may be needed is the case of a small universe. Negatively curved spacetimes can be made small and finite through topological identifications. A small universe could be witnessed with periodic effects or by features in the power spectrum of the microwave background. Topology induces global anisotropy even when the underlying space is locally isotropic. Local anisotropy is also possible in the absence of topological identifications. When homogeneous anisotropies are present, either in the form of shear or rotation in the expansion of the universe, or possibly also in the three-curvature of space, there are only a finite number of homogeneous spaces which can provide an exact description of the geometry of space. These anisotropic spaces were first classified by Bianchi \cite{bian}. They were introduced into cosmology by Taub \cite{taub}, and presented in the most efficient manner by Ellis and MacCallum \cite{mac}.
Since the microwave sky is currently the most significant historical record of the primordial radiation, it is instructive to show how the anisotropic sky patterns created by these different anisotropic universes can be predicted just from a knowledge of the group invariances that generate the homogeneous geometries and their geodesic flows. In order to determine the detailed sky patterns permitted by the Bianchi geometries it is necessary to solve for the evolution of the geodesics on the anisotropic cosmological models either exactly or approximately (in the case of small anisotropy); see for example refs.\ \cite{Nov,hawk,CollHaw,BJS1,BJS2}. Again, the most unusual features arise in open universes. The basic quadrupole pattern arises in the simplest (flat) Bianchi type I universe with zero curvature. The addition of negative curvature focuses this quadrupole into a small hotspot on the sky (there is a preferred direction because there is a direction of lowest and highest expansion rate) in type V universes, which still possess isotropic 3-curvature. If anisotropic curvature is added then we reach the most general class of anisotropic homogeneous spaces and a spiralling of the geodesics is added to the quadrupole or focused quadrupole in the flat or open type VII universes. These particular models have been studied in the past by linearizing the geodesic equations about the isotropic solutions in which the temperature anisotropy of the microwave background is zero \cite{Nov,hawk,CollHaw,dzn,BJS1,BJS2,JDB1,bunn}. They describe the most general anisotropic distortions of flat and open Friedmann universes. However, it is also possible to predict the geometric sky patterns expected in the different Bianchi type universes by simply noting the nature of the groups of motions which define each homogeneous space.
The Bianchi classification of spatially homogeneous anisotropic universes is based on the geometric classification of 3-parameter Lie groups. The action of these groups on the spacelike hypersurfaces of constant time in these universes can be prescribed by three transformations of Cartesian coordinates $(x,y,z)$. Each model possesses two simple translations in the $x$--$y$ plane, with generators $\partial /\partial x$ and $\partial /\partial y$, together with a more complicated motion out of this plane which is different for each group type. If it is considered as a flow from the $z=0$ plane to some other plane, $z=\alpha =$ constant, then the nature of this flow tells us qualitatively what the microwave background anisotropy pattern will look like. In the simplest flat universe of Bianchi type I the $z-$flow is uniform and just maps $(x,y)\rightarrow (x,y).$ This corresponds to a pure quadrupole geodesic temperature anisotropy pattern. In the open Bianchi type V universe the $z-$flow is a pure dilation and maps $(x,y)\rightarrow e^\alpha (x,y),$ with $\alpha $ constant. This dilation describes the hotspot created by the focusing of the quadrupole pattern in open anisotropic universes. The most general non-compact homogeneous universes containing the Friedmann models are of Bianchi type VII$_h$, which contain the flat Friedmann universes when $h=0$ and the open Friedmann universes when $h\neq 0$. In type VII$_h$ the $z-$flow is a rotation plus a dilation: circles of radius $r=(x^2+y^2)^{1/2}=1$ are mapped into circles of radius $r=e^{\alpha \sqrt{h}}$ and rotated by a constant angle $\alpha .$ Observationally, this corresponds to the geodesics producing a focusing of the basic quadrupole (as in type V) with a superimposed spiral twist. In Bianchi type VII$_0$ there is simply a spiral added to the underlying quadrupole with no focusing because the 3-geometry is flat.
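The type VII$_h$ $z-$flow just described (a rotation combined with a dilation) is easy to visualize. The toy Python function below (our own parametrization, purely illustrative) maps a point of the $z=0$ plane through flow parameter $\alpha $ and exhibits the stated scaling of radii, with $h=0$ reducing to the pure spiral of type VII$_0$.

```python
import math

def type_vii_flow(x, y, alpha, h):
    # rotate by the angle alpha and dilate radii by exp(alpha*sqrt(h));
    # h = 0 gives the Bianchi VII_0 flow (pure rotation, no focusing)
    c, s = math.cos(alpha), math.sin(alpha)
    scale = math.exp(alpha * math.sqrt(h))
    return scale * (c * x - s * y), scale * (s * x + c * y)
```

A point on the unit circle is carried to radius $e^{\alpha \sqrt{h}}$, while for $h=0$ the radius is preserved and only the rotation (the ``spiral'') remains.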
The $z-$flow for the closed universes of Bianchi type IX, which contain the closed Friedmann models as special cases, is more complicated. The $z-$plane action corresponds to the following $SO(3)$ invariant motion in polar coordinates based on the $x$-axis, in the region $r<2\pi $ \cite{sik}: \begin{eqnarray*} \{r,\theta ,\phi \} &=&\{\cos ^{-1}[(1-\beta ^2)^{1/2}\cos (\alpha /2)], \\ &&\sin ^{-1}\beta [1-(1-\beta ^2)\cos ^2(\alpha /2)]^{-1/2},\ \frac 12\alpha +\phi _0\}\ \ . \end{eqnarray*} Sky patterns in other, less familiar, Bianchi types can be generated in a similar fashion, if required. Thus, type VI$_h$ is generated by a $z-$flow that combines a shear with a dilation: the hyperbolae $x^2-y^2=A^2$ are mapped into hyperbolae which are rotated by a hyperbolic angle $\beta $ into $x^2-y^2=A^2e^{2\beta \sqrt{-h}}.\ $ \begin{center} \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ \end{center} \vskip10truept We extend special thanks to P. Ferreira and J. Silk for discussions about these ideas. JDB is supported by the PPARC. JJL is supported in part by a President's Postdoctoral Fellowship.
\section{Introduction} Ambj\o{}rn, Boulatov, Durhuus, Jonsson, and others have worked to develop a three-dimensional analogue of the simplicial quantum gravity theory, as provided for two dimensions by Regge \cite{REGGE}. (See \cite{ADJ} and \cite{REGGE1} for surveys.) The discretized version of quantum gravity considers simplicial complexes instead of smooth manifolds; the metric properties are artificially introduced by assigning length $a$ to any edge. (This approach is due to Weingarten \cite{Weingarten} and known as the ``theory of dynamical triangulations''.) A crucial path integral over metrics, the ``partition function for gravity'', is then defined via a weighted sum over all triangulated manifolds of fixed topology. In three dimensions, the whole model is convergent only if the number of triangulated $3$-spheres with $N$ facets grows no faster than $C^N$, for some constant~$C$. But does this hold? How many simplicial spheres are there with $N$ facets, for $N$ large? Without the restriction to ``local constructibility'' this crucial question still represents a major open problem, which was put into the spotlight also by Gromov \cite[pp.~156-157]{Gromov}. Its 2D-analogue, however, was answered a long time ago by Tutte \cite{TUT', TUT}, who proved that there are asymptotically fewer than $\big(\frac{16}{3\sqrt{3}}\big)^N$ combinatorial types of triangulated $2$-spheres. (By Steinitz' theorem, cf.~\cite[Lect.~4]{Z}, this quantity equivalently counts the maximal planar maps on \mbox{$n\ge4$} vertices, which have $N=2n-4$ faces, and also the combinatorial types of simplicial $3$-dimensional polytopes with $N$ facets.) In the following, the adjective ``simplicial'' will often be omitted when dealing with balls, spheres, or manifolds, as all the regular cell complexes and polyhedral complexes that we consider are simplicial. Why are $2$-spheres ``not so many''?
Every combinatorial type of triangulation of the $2$-sphere can be generated as follows (Figure \ref{fig:treetriangles}): First for some even $N\ge4$ build a tree of $N$ triangles (which combinatorially is the same thing as a triangulation of an ($N+2$)-gon), and then glue edges according to a complete matching of the boundary edges. A necessary condition in order to obtain a $2$-sphere is that such a matching is \emph{planar}. Planar matchings and triangulations of ($N+2$)-gons are both enumerated by a Catalan number $C_{N+2}$, and since the Catalan numbers satisfy the exponential bound $C_N=\frac1{N+1}\binom{2N}N<4^N$, we get an exponential upper bound for the number of triangulations. \vspace{-3mm}{}\enlargethispage{3mm} \begin{figure}[htbp] \centering \includegraphics[width=.2\linewidth]{treetriangles.eps}\vskip-4mm \caption{\small How to get an octahedron from a tree of $8$ triangles (i.e., a triangulated $10$-gon).} \label{fig:treetriangles} \end{figure} Neither this simple argument nor Tutte's precise count can be easily extended to higher dimensions. Indeed, we have to deal with three different problems when trying to extend results or methods from dimension two to dimension three: \begin{compactenum}[(i)] \item Many combinatorial types of simplicial $3$-spheres are not realizable as boundaries of convex $4$-polytopes; thus, even though we observe below that there are only exponentially-many simplicial $4$-polytopes with $N$ facets, the $3$-spheres could still be more numerous. \item The counts of combinatorial types according to the number $n$ of vertices and according to the number $N$ of facets are not equivalent any more. We have $3n-10\le N\le \frac{1}{2}n(n-3)$ by the lower resp.\ upper bound theorem for simplicial $3$-spheres.
We know that there are more than $2^{n\sqrt[4]{n}}$ $3$-spheres \cite{Kalai, PZ}, but less than $2^{20 n \log n}$ types of $4$-polytopes with $n$ vertices \cite{Alon, GP}, yet this does not answer the question for a count in terms of the number $N$ of facets. \item While it is still true that there are only exponentially-many ``trees of $N$ tetrahedra'', the matchings that can be used to glue $3$-spheres are not planar any more; thus, they could be more than exponentially-many. If, on the other hand, we restrict ourselves to ``local gluings'', we generate only a limited family of $3$-spheres, as we will show below. \end{compactenum} In the early nineties, new finiteness theorems by Cheeger \cite{Cheeger} and Grove et al.\ \cite{GPW} yielded a new approach, namely, to count $d$-manifolds of ``fluctuating topology'' (not necessarily spheres) but ``bounded geometry'' (curvature and diameter bounded from above, and volume bounded from below). This allowed Bartocci et al.\ \cite{Bartocci} to bound for any $d$-manifold the number of triangulations with $N$ or more facets, under the assumption that no vertex had degree higher than a fixed integer. However, for this it is crucial to restrict the topological type: Already for $d=2$, there are more than exponentially many triangulated $2$-manifolds of bounded vertex degree with $N$ facets. In 1995, the physicists Durhuus and Jonsson \cite{DJ} introduced the class of ``locally constructible'' (LC) $3$-spheres. An LC $3$-sphere (with $N$ facets) is a sphere obtainable from a tree of $N$ tetrahedra, by identifying pairs of adjacent triangles in the boundary. ``Adjacent'' means here ``sharing at least one edge'', and represents a dynamic requirement. 
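To make the two-dimensional count above concrete: triangulations of an $(m+2)$-gon and planar matchings are both governed by Catalan numbers $C_m=\frac 1{m+1}\binom{2m}m$, which obey the exponential bound $C_m<4^m$. A short Python check (illustrative code, not from the paper) of the standard Catalan recurrence and of the bound:

```python
import math

def catalan_by_recurrence(m):
    # C_0..C_m via C_n = sum_{k=0}^{n-1} C_k * C_{n-1-k};
    # C_n counts the triangulations of a convex (n+2)-gon
    cat = [1]
    for n in range(1, m + 1):
        cat.append(sum(cat[k] * cat[n - 1 - k] for k in range(n)))
    return cat
```

The recurrence reflects splitting the polygon at the triangle containing a fixed edge, and the closed form gives $C_m<\binom{2m}m<4^m$ for $m\ge 1$.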
Clearly, every $3$-sphere is obtainable from a tree of $N$ tetrahedra by matching the triangles in its boundary; according to the definition of LC, however, we are allowed to match only those triangles that \emph{are} adjacent -- or that have \emph{become} adjacent by the time of the gluing. Durhuus and Jonsson proved an exponential upper bound on the number of combinatorially distinct LC spheres with $N$ facets. Based also on computer simulations (\cite{AmbVar}, see also \cite{CatKR} and \cite{ABKW}) they conjectured that all $3$-spheres should be LC. A positive solution of this conjecture would have implied that spheres with $N$ facets are at most $C^N$, for a constant $C$ -- which would have been the desired missing link to implement discrete quantum gravity in three dimensions. In the present paper, we show that the conjecture of Durhuus and Jonsson has a negative answer: There are simplicial $3$-spheres that are not LC. (With this, however, we do not resolve the question whether there are fewer than $C^N$ simplicial $3$-spheres on $N$ facets, for some constant~$C$.) On the way to this result, we provide a characterization of LC simplicial $d$-complexes which relates the ``locally constructible'' spheres defined by physicists to concepts that originally arose in topological combinatorics. \begin{thmnonumber} [Theorem~{\ref{thm:hierarchyspheres}}] \label{mainthm:hierarchyspheres} A simplicial $d$-sphere, $d\ge3$, is LC if and only if the sphere after removal of one facet can be collapsed down to a complex of dimension~$d-2$. Furthermore, there are the following inclusion relations between families of simplicial $d$-spheres: \em \[ \{ \textrm{vertex decomposable} \} \subsetneq \{ \textrm{shellable} \} \subseteq \{ \textrm{constructible}\} \subsetneq \{ \textrm{LC}\} \subsetneq \{ \textrm{all $d$-spheres}\}. 
\] \end{thmnonumber} \noindent We use the hierarchy in conjunction with the following extension and sharpening of Durhuus and Jonsson's theorem (who discussed only the case $d=3$). \begin{thmnonumber} [Theorem~\ref{thm:announced}] For fixed $d \geq 2$, the number of combinatorially distinct simplicial LC $d$-spheres with $N$ facets grows no faster than $2^{ d^2 \cdot N }$. \end{thmnonumber} \noindent We will give a proof for this theorem in Section~\ref{sec:numbers}; the same type of upper bound, with the same type of proof, also holds for LC $d$-balls with $N$ facets. Already in 1988 Kalai \cite{Kalai} constructed for every $d \ge 4$ a family of more than exponentially many $d$-spheres on $n$ vertices; Lee \cite{Lee6} later showed that all of Kalai's spheres are shellable. Combining this with Theorem~\ref{thm:announced} and Theorem~\ref{thm:hierarchyspheres}, we obtain the following asymptotic result: \begin{cor*} For fixed $d \ge 4$, the number of shellable simplicial $d$-spheres grows more than exponentially with respect to the number $n$ of vertices, but only exponentially with respect to the number $N$ of facets. \end{cor*} \noindent The hierarchy of Main Theorem~\ref{mainthm:hierarchyspheres} is not quite complete: It is still not known whether constructible, non-shellable $3$-spheres exist (see \cite{EH, KAMEI}). A shellable $3$-sphere that is not vertex-\allowbreak decom\-pos\-able was found by Lockeberg in his 1977 Ph.D.\ work (reported in \cite[p.~742]{KK}; see also \cite{Htocome}). Again, the $2$-dimensional case is much simpler and completely solved: All $2$-spheres are vertex decomposable (see \cite{PB}). In order to show that not all spheres are LC we study in detail simplicial spheres with a ``knotted triangle''; these are obtained by adding a cone over the boundary of a ball with a knotted spanning edge (as in Furch's 1924 paper \cite{FUR}; see also Bing \cite{BING}). Spheres with a knotted triangle cannot be boundaries of polytopes.
Lickorish \cite{LICK} had shown in 1991 that \begin{quote} \emph{a $3$-sphere with a knotted triangle is not shellable if the knot is at least $3$-complicated.} \end{quote} Here ``at least $3$-complicated'' refers to the technical requirement that the fundamental group of the complement of the knot has no presentation with less than four generators. A concatenation of three or more trefoil knots satisfies this condition. In 2000, Hachimori and Ziegler \cite{HACHI, HZ} demonstrated that Lickorish's technical requirement is not necessary for his result: \begin{quote} \emph{a $3$-sphere with {\em any} knotted triangle is not constructible.} \end{quote} In the present work, we re-justify Lickorish's technical assumption, showing that this is {exactly} what we need if we want to reach a stronger conclusion, namely, a topological obstruction to local constructibility. Thus, the following result is established in order to prove that the last inclusion of the hierarchy in Theorem~\ref{thm:hierarchyspheres} is strict. \begin{thmnonumber} [Theorem~\ref{thm:short}] A $3$-sphere with a knotted triangle is not LC if the knot is at least $3$-complicated. \end{thmnonumber} The knot complexity requirement is now necessary, as non-constructible spheres with a single trefoil knot can still be LC (see Example~\ref{thm:examplelick}). The combinatorial topology of $d$-balls and that of $d$-spheres are of course closely related -- our study builds on the well-known connections and also adds new ones. \begin{thmnonumber} [Theorems~\ref{thm:hierarchyballs} and~\ref{thm:mainDballs}] \label{mainthm:hierarchyballs} A simplicial $d$-ball is LC if and only if after the removal of a facet it collapses down to the union of the boundary with a complex of dimension at most $d-2$. 
We have the following hierarchy: \em \[ \Big\{\begin{array}{@{}c@{}} \textrm{vertex}\\ \textrm{decomp.} \end{array}\Big\} \subsetneq \{\textrm{shellable}\} \subsetneq \{\textrm{constructible}\} \subsetneq \{\textrm{LC}\} \subsetneq \Big\{\begin{array}{@{}c@{}} \textrm{collapsible onto a}\\ (d-2)\textrm{-complex} \end{array}\Big\} \subsetneq \{\textrm{all $d$-balls}\}. \] \end{thmnonumber} All the inclusions of Main Theorem \ref{mainthm:hierarchyballs} hold with equality for simplicial $2$-balls. In the case of $d=3$, collapsibility onto a $(d-2)$-complex is equivalent to collapsibility. In particular, we settle a question of Hachimori (see e.g.\ \cite[pp.~54, 66]{Hthesis}) whether all constructible $3$-balls are collapsible. Furthermore, we show in Corollary~\ref{cor:badcollapsible3ball} that some collapsible $3$-balls do not collapse onto their boundary minus a facet, a property that comes up in classical studies in combinatorial topology (compare \cite{CHIL, LICK2}). In particular, a result of Chillingworth can be restated in our language as ``if for any geometric simplicial complex $\Delta$ the support (union) $|\Delta|$ is a convex $3$-dimensional polytope, then $\Delta$ is necessarily an LC $3$-ball'', see Theorem~\ref{thm:chil}. Thus any geometric subdivision of the $3$-simplex is LC. \subsection{Definitions and Notations} \subsubsection{Simplicial regular CW complexes} In the following, we present the notion of ``local constructibility'' (due to Durhuus and Jonsson). Although in the end we are interested in this notion as applied to finite simplicial complexes, the iterative definition of locally constructible complexes dictates that for intermediate steps we must allow for the greater generality of finite ``simplicial regular CW complexes''. A CW complex is \emph{regular} if the attaching maps for the cells are injective on the boundary (see e.g. \cite{BJOE}). 
A regular CW-complex is \emph{simplicial} if for every proper face $F$, the interval $[0, F]$ in the face poset of the complex is boolean. Every simplicial complex (and in particular, any triangulated manifold) is a simplicial regular CW-complex. The $k$-dimensional cells of a regular CW complex $C$ are called $k$-\emph{faces}; the inclusion-maximal faces are called \emph{facets}, and the inclusion-maximal proper subfaces of the facets are called \emph{ridges}. The \emph{dimension} of $C$ is the largest dimension of a facet; \emph{pure} complexes are complexes where all facets have the same dimension. All complexes that we consider in the following are finite, most of them are pure. A \emph{$d$-complex} is a $d$-dimensional complex. Conventionally, the 0-faces are called \emph{vertices}, and the $1$-faces \emph{edges}. (In the discrete quantum gravity literature, the $(d-2)$-faces are sometimes called ``{hinges}'' or ``{bones}'', whereas the edges are sometimes referred to as ``{links}''.) If the union $|C|$ of all simplices of $C$ is homeomorphic to a manifold $M$, then $C$ is a \emph{triangulation} of $M$; if $C$ is a triangulation of a $d$-ball or of a $d$-sphere, we will call $C$ simply a $d$-\emph{ball} (resp. $d$-\emph{sphere}). The \emph{dual graph} of a pure $d$-dimensional simplicial complex~$C$ is the graph whose nodes correspond to the facets of~$C$: Two nodes are connected by an arc if and only if the corresponding facets share a $(d-1)$-face. \subsubsection{Knots} All the knots we consider are \textit{tame}, that is, realizable as $1$-dimensional subcomplexes of some triangulated $3$-sphere. A knot is $m$-\emph{complicated} if the fundamental group of the complement of the knot in the $3$-sphere has a presentation with $m+1$ generators, but no presentation with $m$ generators. By ``at least $m$-complicated'' we mean ``$k$-complicated for some $k\geq m$''. 
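The dual graph is straightforward to compute from the facet list. The following Python sketch (our own helper; it assumes the input is a pure simplicial complex, with facets given as vertex tuples) pairs facets through their shared $(d-1)$-faces:

```python
from collections import defaultdict
from itertools import combinations

def dual_graph(facets):
    # facets: list of vertex tuples, all of the same cardinality d+1
    ridge_to_facets = defaultdict(list)
    for i, facet in enumerate(facets):
        # each (d-1)-face (ridge) is the facet minus one vertex
        for ridge in combinations(sorted(facet), len(facet) - 1):
            ridge_to_facets[ridge].append(i)
    arcs = set()
    for members in ridge_to_facets.values():
        # in a pseudomanifold each ridge lies in at most two facets
        for a, b in combinations(members, 2):
            arcs.add((min(a, b), max(a, b)))
    return arcs
```

For instance, the boundary of the tetrahedron (the four $2$-subsets... rather, the four triangles on vertex set $\{0,1,2,3\}$) has dual graph $K_4$: every pair of triangles shares an edge.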
There exist arbitrarily complicated knots: Goodrick \cite{GOO} showed that the connected sum of $m$ trefoil knots is at least $m$-complicated. Another measure of how tangled a knot can be is the {bridge index} (see e.g.\ \cite[p.~18]{KAWA} for the definition). If a knot has bridge index $b$, the fundamental group of the knot complement admits a presentation with $b$ generators and $b-1$ relations \cite[p.~82]{KAWA}. In other words, the bridge index of a $t$-complicated knot is at least $t+1$. As a matter of fact, the connected sum of $t$ trefoil knots is $t$-complicated, and its bridge index is exactly $t+1$ \cite{EH}. \subsubsection{The combinatorial topology hierarchy} In the following, we review the key properties from the inclusion \[ \{\textrm{shellable}\} \subsetneq \{\textrm{constructible}\} \] valid for all simplicial complexes, and the inclusion \[ \{\textrm{shellable}\} \subsetneq \{\textrm{collapsible}\} \] applicable only for \emph{contractible} simplicial complexes, both known from combinatorial topology (see \ \cite[Sect.~11]{BJOE} for details). \noindent \emph{Shellability} can be defined for pure simplicial complexes as follows: \begin{compactitem}[ -- ] \item every simplex is shellable; \item a $d$-dimensional pure simplicial complex $C$ which is not a simplex is shellable if and only if it can be written as $C= C_1 \cup C_2$, where $C_1$ is a shellable $d$-complex, $C_2$ is a $d$-simplex, and $C_1 \cap C_2$ is a shellable $(d-1)$-complex. \end{compactitem} \emph{Constructibility} is a weakening of shellability, defined by: \begin{compactitem}[ -- ] \item every simplex is constructible; \item a $d$-dimensional pure simplicial complex $C$ which is not a simplex is constructible if and only if it can be written as $C= C_1 \cup C_2$, where $C_1$ and $C_2$ are constructible $d$-complexes, and $C_1 \cap C_2$ is a constructible $(d-1)$-complex. \end{compactitem} \smallskip \noindent Let $C$ be a $d$-dimensional simplicial complex. 
An \emph{elementary collapse} is the simultaneous removal from $C$ of a pair of faces $(\sigma, \Sigma)$ with the following properties: \begin{compactenum}[ -- ] \item $\dim \Sigma = \dim \sigma + 1 $; \item $\sigma$ is a proper face of $\Sigma$; \item $\sigma$ is not a proper face of any other face of $C$. \end{compactenum} (The three conditions above are usually abbreviated by the expression ``$\sigma$ is a free face of $\Sigma$''; some complexes have no free face.) If $C':= C - \Sigma - \sigma$, we say that the complex $C$ \emph{collapses onto} the complex $C'$. We also say that the complex $C$ \emph{collapses onto} the complex $D$, and write $C \searrow D$, if $C$ can be reduced to $D$ by a finite sequence of elementary collapses. Thus a \emph{collapse} refers to a sequence of elementary collapses. A \emph{collapsible} complex is a complex that can be collapsed onto a single vertex. Since $C':= C - \Sigma - \sigma$ is a deformation retract of $C$, each collapse preserves the homotopy type. In particular, all collapsible complexes are contractible. The converse does not hold in general: For example, the so-called ``dunce hat'' is a contractible $2$-complex without free edges, and thus with no elementary collapse to start with. However, the implication ``contractible $\Rightarrow$ collapsible'' holds for all $1$-complexes, and also for shellable complexes of any dimension. A connected $2$-dimensional complex is collapsible if and only if it does \emph{not} contain a $2$-dimensional subcomplex without a free edge. In particular, for $2$-dimensional complexes, if $C \searrow D$ and $D$ is not collapsible, then $C$ is also not collapsible. This no longer holds for complexes $C$ of dimension larger than two \cite{HOGAM}. \subsubsection{LC pseudomanifolds} By a $d$-\emph{pseudomanifold} [possibly with boundary] we mean a finite regular CW-complex $P$ that is pure $d$-dimensional, simplicial, and such that each $(d-1)$-dimensional cell belongs to at most two $d$-cells.
The \emph{boundary} of the pseudomanifold $P$, denoted $\partial P$, is the smallest subcomplex of $P$ containing all the $(d-1)$-cells of $P$ that belong to exactly one $d$-cell of $P$. According to our definition, a pseudomanifold need not be a simplicial complex; it might be disconnected; and its boundary might not be a pseudomanifold. \begin{definition}[Locally constructible pseudomanifold] For $d\ge2$, let $C$ be a pure $d$-dimensional simplicial complex with $N$ facets. A \emph{local construction} for $C$ is a sequence $T_1, T_2, \ldots, T_N, \ldots, T_k$ ($k \geq N$) such that $T_i$ is a $d$-pseudomanifold for each $i$ and \begin{compactenum}[(1)] \item $T_1$ is a $d$-simplex; \item if $i \le N-1$, then $T_{i+1}$ is obtained from $T_i$ by gluing a new $d$-simplex to $T_i$ along one of the $(d-1)$-cells in $\partial T_i$; \item if $i \ge N$, then $T_{i+1}$ is obtained from $T_{i}$ by identifying a pair $\sigma, \tau$ of $(d-1)$-cells in the boundary $\partial T_{i}$ whose intersection contains a $(d-2)$-cell $F$; \item $T_k = C$. \end{compactenum} We say that $C$ is \emph{locally constructible}, or \emph{LC}, if a local construction for $C$ exists. With a slight abuse of notation, we will call each $T_i$ an \emph{LC pseudomanifold}. We also say that $C$ is locally constructed \emph{along $T$}, if $T$ is the dual graph of $T_N$, and thus a spanning tree of the dual graph of~$C$. \end{definition} The identifications described in item (3) above are operations that are not closed with respect to the class of simplicial complexes. Local constructions where all steps are simplicial complexes produce only a very limited class of manifolds, consisting of $d$-balls with no interior $(d-3)$-faces. (When in an LC step the identified boundary facets intersect in \emph{exactly} a $(d-2)$-cell, no $(d-3)$-face is sunk into the interior, and the topology stays the same.)
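The first part of a local construction, steps (1) and (2) of the definition above, simply grows a tree of $d$-simplices. The following Python sketch of this first phase (vertex labels and the greedy choice of boundary cell are our own bookkeeping conventions) also confirms the easy count that a tree of $N$ $d$-simplices has $N(d+1) - 2(N-1) = dN - N + 2$ boundary $(d-1)$-cells:

```python
from collections import Counter
from itertools import combinations

def grow_tree_of_simplices(d, n_gluings):
    """Steps (1)-(2) of a local construction: start with one d-simplex and
    repeatedly glue a fresh d-simplex onto some boundary (d-1)-cell."""
    facets = [frozenset(range(d + 1))]  # T_1: a single d-simplex

    def boundary(fs):  # (d-1)-cells lying in exactly one d-simplex
        cnt = Counter(frozenset(r) for f in fs for r in combinations(f, d))
        return [r for r, c in cnt.items() if c == 1]

    for step in range(n_gluings):
        sigma = boundary(facets)[0]            # any boundary (d-1)-cell
        facets.append(sigma | {d + 1 + step})  # cone over it with a new vertex
    return facets, boundary(facets)

# A "tree of N tetrahedra" (d = 3): each gluing sinks one triangle into
# the interior, so the boundary has N(d+1) - 2(N-1) = dN - N + 2 cells.
facets, bd = grow_tree_of_simplices(3, 4)  # N = 5 tetrahedra
print(len(bd))                             # 3*5 - 5 + 2 = 12
```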
However, since by definition the local construction in the end must arrive at a pseudomanifold $C$ that \textit{is} a simplicial complex, each intermediate step $T_i$ must satisfy severe restrictions: for each $t \leq d$, \begin{compactitem}[ -- ] \item distinct $t$-simplices that are not in the boundary of $T_i$ share at most one $(t-1)$-simplex; \item distinct $t$-simplices in the boundary of $T_i$ that share more than one $(t-1)$-simplex will need to be identified by the time the construction of $C$ is completed. \end{compactitem} Moreover, \begin{compactitem}[ -- ] \item if $\sigma, \tau$ are the two $(d-1)$-cells glued together in the step from $T_i$ to $T_{i+1}$, $\sigma$ and $\tau$ cannot belong to the same $d$-simplex of $T_i$; nor can they belong to two $d$-simplices that are already adjacent in $T_i$. \end{compactitem} For example, in each step of the local construction of a $3$-sphere, no two tetrahedra share more than one triangle. Moreover, any two distinct interior triangles either are disjoint, or they share a vertex, or they share an edge; but they cannot share two edges, nor three; and they also cannot share one edge and the opposite vertex. If we glued together two boundary triangles that belong to adjacent tetrahedra, no matter what we did afterwards, we would not end up with a simplicial complex any more. Roughly speaking, \begin{quote}\emph{a locally constructible $3$-sphere is a triangulated $3$-sphere obtained from a tree of tetrahedra $T_N$ by repeatedly identifying two adjacent triangles in the boundary.} \end{quote} As we mentioned, the boundary of a pseudomanifold need not be a pseudomanifold. However, if $P$ is an LC $d$-pseudomanifold, then $\partial P$ is automatically a $(d-1)$-pseudomanifold. Nevertheless, $\partial P$ may be disconnected, and thus, in general, it is not LC. All LC $d$-pseudomanifolds are simply connected; in case $d=3$, their topology is controlled by the following result. 
\begin{thm}[Durhuus--Jonsson {\cite{DJ}}] \label{thm:topologyLC3-dim} Every LC $3$-pseudomanifold $P$ is homeomorphic to a $3$-sphere with a finite number of ``cacti of $3$-balls'' removed. (A cactus of $3$-balls is a tree-like connected structure in which any two $3$-balls share at most one point.) Thus the boundary $\partial P$ is a finite disjoint union of cacti of $2$-spheres. In particular, each connected component of $\partial P$ is a simply-connected $2$-pseudomanifold. \end{thm} Thus every closed $3$-dimensional LC pseudomanifold is a sphere, while for $d>3$ other topological types such as products of spheres are possible (see Benedetti \cite{Benedetti-JCTA}). \section{On LC Spheres} In this section, we establish the following hierarchy announced in the introduction. \begin{thm} \label{thm:hierarchyspheres} For all $d\ge3$, we have the following inclusion relations between families of simplicial $d$-spheres: \em \[ \{ \textrm{vertex decomposable} \} \subsetneq \{ \textrm{shellable} \} \subseteq \{ \textrm{constructible}\} \subsetneq \{ \textrm{LC}\} \subsetneq \{ \textrm{all $d$-spheres}\}. \] \end{thm} \begin{proof} The first two inclusions, and strictness of the first one, are known; the third one will follow from Lemma~\ref{lem:LCdecomposition} and will be shown to be strict by Example \ref{thm:examplelick} together with Lemma~\ref{thm:suspensions}; finally, Corollary \ref{thm:cor3} will establish the strictness of the fourth inclusion for all $d\ge3$. \end{proof} \subsection{Some $d$-spheres are not LC}\label{sec:notLCspheres} Let $S$ be a simplicial $d$-sphere ($d \geq 2$), and $T$ a spanning tree of the dual graph of $S$. We denote by $K^T$ the subcomplex of $S$ formed by all the $(d-1)$-faces of $S$ that are not intersected by~$T$. \begin{lemma} \label{thm:Scollapse} Let $S$ be any $d$-sphere with $N$ facets.
Then for every spanning tree $T$ of the dual graph of $S$, \begin{compactitem} \item $K^T$ is a contractible pure $(d-1)$-dimensional simplicial complex with $\frac{dN - N + 2}{2}$ facets; \item for any facet $\Delta$ of $S$, $\; \;S - \Delta \; \searrow K^T$. \end{compactitem} \end{lemma} \enlargethispage{3mm} \noindent Any collapse of a $d$-sphere $S$ minus a facet $\Delta$ to a complex of dimension at most $d-1$ proceeds along a dual spanning tree~$T$. To see this, fix a collapsing sequence. We may assume that the collapse of $S - \Delta$ is ordered so that the pairs $( (d-1)\textrm{-face} , \; d\textrm{-face})$ are removed first. Whenever both the following conditions are met: \begin{compactenum} \item $\sigma$ is the $(d-1)$-dimensional intersection of the facets $\Sigma$ and $\Sigma'$ of $S$; \item the pair $(\sigma, \Sigma)$ is removed in the collapsing sequence of $S - \Delta$, \end{compactenum} draw an oriented arrow from the center of $\Sigma'$ to the center of $\Sigma$. This yields a directed spanning tree $T$ of the dual graph of $S$, where $\Delta$ is the root. Indeed, $T$ is \emph{spanning} because all $d$-simplices of $S - \Delta$ are removed in the collapse; it is \emph{connected}, because the only free $(d-1)$-faces of $S-\Delta$, at which the collapse can start, are the proper $(d-1)$-faces of the ``missing simplex'' $\Delta$; it is \emph{acyclic}, because the center of each $d$-simplex of $S- \Delta$ is reached by exactly one arrow. We will say that the collapsing sequence \emph{acts along the tree} $T$ (in its top-dimensional part). Thus the complex $K^T$ appears as an intermediate step of the collapse: It is the complex obtained after the $(N-1)$st pair of faces has been removed from $S - \Delta$.
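To make the complex $K^T$ concrete, the following Python sketch computes it for $S = \partial\Delta^4$, the boundary of the $4$-simplex, a $3$-sphere with $N=5$ facets (our own choice of example and encoding), and checks the facet count $\frac{dN-N+2}{2}$ from the lemma above:

```python
from itertools import combinations

# K^T for S = boundary of the 4-simplex, a 3-sphere with N = 5 facets.
facets = [frozenset(f) for f in combinations(range(5), 4)]
ridges = {frozenset(r) for f in facets for r in combinations(f, 3)}

# Each ridge lies in exactly two facets, giving one arc of the dual graph.
arc_of = {facets[i] & facets[j]: (i, j)
          for i, j in combinations(range(len(facets)), 2)
          if len(facets[i] & facets[j]) == 3}

# Grow some spanning tree T of the dual graph (any choice works here).
tree_ridges, reached = set(), {0}
while len(reached) < len(facets):
    for r, (i, j) in arc_of.items():
        if (i in reached) != (j in reached):
            tree_ridges.add(r)
            reached |= {i, j}

K_T = ridges - tree_ridges  # the (d-1)-faces of S not intersected by T
d, N = 3, len(facets)
print(len(K_T), (d * N - N + 2) // 2)  # 6 6
```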
\begin{definition} \label{thm:facetkilling} By a \textit{facet-killing sequence} for a $d$-dimensional simplicial complex $C$ we mean a sequence $C_0, C_1, \ldots, C_{t-1}, C_t $ of complexes such that $t=f_d(C)$, $C_0=C$, and $C_{i+1}$ is obtained by an elementary collapse that removes a free $(d-1)$-face $\sigma$ of $C_i$, together with the unique facet $\Sigma$ containing $\sigma$. \end{definition} If $C$ is a $d$-complex, and $D$ is a lower-dimensional complex such that $C \searrow D$, there exists a facet-killing sequence $C_0$, $\ldots$, $C_t$ for $C$ such that $C_t \searrow D$. In other words, the collapse of $C$ onto $D$ can be rearranged so that the pairs $\left((d-1)\textrm{-face}, \, d\textrm{-face} \right)$ are removed first. In particular, for any $d$-complex $C$, the following are equivalent: \begin{compactenum} \item there exists a facet-killing sequence for $C$; \item there exists a $k$-complex $D$ with $k \leq d-1$ such that $C \searrow D$. \end{compactenum} What we argued before can be rephrased as follows: \begin{proposition} \label{thm:Salongtrees} Let $S$ be a $d$-sphere, and $\Delta$ a $d$-simplex of $S$. Let $C$ be a $k$-dimensional simplicial complex, with $k \leq d-2$. Then, \[ S- \Delta \; \searrow \; C \ \ \Longleftrightarrow \ \ \exists \; T \hbox{ s.t. } \; K^T \; \searrow C. \] \end{proposition} \noindent The right-hand side in the equivalence of Proposition \ref{thm:Salongtrees} does not depend on the $\Delta$ chosen. So, for any $d$-sphere $S$ and any such complex $C$, either $S - \Delta \searrow C$ holds for every facet $\Delta$, or it holds for no facet~$\Delta$.
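A facet-killing sequence is easy to run greedily: at each step, pick any $(d-1)$-face lying in exactly one remaining facet. For a sphere minus a facet the greedy choice always succeeds, since as long as facets remain, some remaining facet is adjacent to an already removed one, and their shared ridge is free. A minimal Python sketch (the greedy choice and the encoding are our own illustration conventions):

```python
from collections import Counter
from itertools import combinations

def facet_killing_sequence(facets):
    """Greedily remove a free (d-1)-face together with the unique facet
    containing it, until no d-dimensional facet is left."""
    facets = [frozenset(f) for f in facets]
    d = len(facets[0]) - 1
    seq = []
    while facets:
        cnt = Counter(frozenset(r) for f in facets for r in combinations(f, d))
        free = [(r, f) for f in facets
                for r in map(frozenset, combinations(f, d)) if cnt[r] == 1]
        if not free:
            break  # stuck: no free (d-1)-face
        r, f = free[0]
        facets.remove(f)
        seq.append((r, f))
    return seq, facets

# S - Delta for S = boundary of the 4-simplex: all 4 remaining facets die.
S = [frozenset(f) for f in combinations(range(5), 4)]
seq, left = facet_killing_sequence(S[1:])
print(len(seq), len(left))  # 4 0
```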
\begin{figure}[htbp] \begin{minipage}{0.7\linewidth} \hfill \includegraphics[width=.2\linewidth]{collapse.eps} \hfill \includegraphics[width=.2\linewidth]{collapse1.eps} \hfill \includegraphics[width=.2\linewidth]{collapse2.eps} \hfill \includegraphics[width=.2\linewidth]{collapse3.eps} \hfill \caption{\small \textsc{(Above):} A facet-killing sequence of $S - \Delta$, where $S$ is the boundary of a tetrahedron ($d=2$), and $\Delta$ one of its facets. \small \textsc{(Right):} The $1$-complex $K^T$ onto which $S - \Delta$ collapses, and the directed spanning tree $T$ along which the collapse above acts.} \label{fig:collapsealongtrees} \end{minipage} \begin{minipage}{0.27\linewidth} \hfill \includegraphics[width=.84\linewidth]{collapsetree.eps} \end{minipage} \end{figure} One more convention: by a \textit{natural labeling} of a rooted tree $T$ on $n$ vertices we mean a bijection $b : V(T) \longrightarrow \{1, \ldots, n\}$ such that if $v$ is the root, $b(v)=1$, and if $v$ is not the root, there exists a unique vertex $w$ adjacent to $v$ such that $b(w) < b(v)$. \medskip We are now ready to link the LC concept with collapsibility. Take a $d$-sphere $S$, a facet $\Delta$ of~$S$, and a rooted spanning tree $T$ of the dual graph of $S$, with root $\Delta$. Since $S$ is given, fixing $T$ is really the same as fixing the manifold $T_N$ in the local construction of $S$; and at the same time, fixing $T$ is the same as fixing $K^T$. Once $T$, $T_N$, and $K^T$ have been fixed, to describe the first part of a local construction of $S$ (that is, $T_1, \ldots, T_N$) we just need to specify the order in which the $d$-simplices of $S$ have to be added, which is the same as giving a natural labeling of $T$. Besides, natural labelings of $T$ are in bijection with collapses $S-\Delta \searrow K^T$ (the $i$-th facet to be collapsed is the node of $T$ labeled~$i+1$; see Proposition~\ref{thm:Salongtrees}). What if we do not fix $T$? Suppose $S$ and $\Delta$ are fixed.
Then the previous reasoning yields a bijection among the following sets: \begin{compactenum} \item the set of all facet-killing sequences of $S-\Delta$; \item the set of ``natural labelings'' of spanning trees of $S$, rooted at $\Delta$; \item the set of the first parts $(T_1, \ldots, T_N )$ of local constructions for $S$, with $T_1 = \Delta$. \end{compactenum} \noindent Can we understand also the second part of a local construction ``combinatorially''? Let us start with a variant of the ``facet-killing sequence'' notion. \begin{definition} \label{thm:facetmassacre} A \textit{pure facet-massacre} of a pure $d$-dimensional simplicial complex $P$ is a sequence $P_0, P_1, \ldots, P_{t-1}, P_t$ of (pure) complexes such that $t=f_d(P)$, $P_0=P$, and $P_{i+1}$ is obtained from $P_i$ by removing: \begin{compactdesc} \item{(a)} a free $(d-1)$-face $\sigma$ of $P_i$, together with the unique facet $\Sigma$ containing $\sigma$, and \item{(b)} all inclusion-maximal faces of dimension smaller than $d$ that are left after the removal of type (a) or, recursively, after removals of type (b). \end{compactdesc} \end{definition} \noindent In other words, the (b) step removes lower-dimensional facets until one obtains a pure complex. Since $t=f_d(P)$, $P_t$ has no facets of dimension $d$ left, nor inclusion-maximal faces of smaller dimension; hence $P_t$ is empty. The other $P_i$'s are pure complexes of dimension $d$. Notice that the step $P_i \longrightarrow P_{i+1}$ is not a collapse, and does not preserve the homotopy type in general. Of course $P_i \longrightarrow P_{i+1}$ can be ``factorized'' into an elementary collapse followed by the removal of finitely many $k$-faces, with $k <d$. However, this factorization is not unique, as the next example shows. \begin{example} Let $P$ be a full triangle. $P$ admits three different facet-killing collapses (each edge can be chosen as free face), but it admits only one pure facet-massacre, namely $P, \emptyset$.
\end{example} \begin{lemma} Let $P$ be a pure $d$-dimensional simplicial complex. Every facet-killing sequence of $P$ naturally induces a unique pure facet-massacre of $P$. All pure facet-massacres of $P$ are induced by some (possibly more than one) facet-killing sequence. \end{lemma} \begin{proof} The map consists in taking a facet-killing sequence $C_0$, $\ldots$, $C_t$, and ``cleaning up'' the $C_i$ by recursively killing the lower-dimensional inclusion-maximal faces. As the previous example shows, this map is not injective. It is surjective essentially because the removed lower-dimensional faces are of dimension ``too small to be relevant''. In fact, their dimension is at most $d-1$, hence their presence can interfere only with the freeness of faces of dimension at most $d-2$; so the list of all removals of the form $( (d-1)\hbox{-face}, \, d\hbox{-face} )$ in a facet-massacre yields a facet-killing sequence. \end{proof} \begin{thm} \label{thm:Smassacre} Let $S$ be a $d$-sphere; fix a spanning tree $T$ of the dual graph of $S$. The second part of a local construction for $S$ along $T$ corresponds bijectively to a facet-massacre of $K^T$. \end{thm} \begin{proof} Fix $S$ and $T$; $T_N$ and $K^T$ are determined by this. Let us start with a local construction $\left( T_1, \ldots, T_{N-1}, \right) T_N, \ldots, T_k$ for $S$ along $T$. Topologically, $S=T_N / {\sim}$, where ${\sim}$ is the equivalence relation determined by the gluing (two distinct points of $T_N$ are equivalent if and only if they will be identified in the gluing). Moreover, $K^T = \partial T_N / {\sim}$, by the definition of $K^T$. Define $P_0 := K^T = \partial T_N / {\sim}$, and $P_j := \partial T_{N+j} / {\sim}$. We leave it to the reader to verify that $k-N$ and $f_d(K^T)$ are the same integer (see Lemma \ref{thm:Scollapse}), which from now on is called $D$. In particular $P_D=\partial T_k / {\sim} = \partial S / {\sim} = \emptyset$. 
In the first LC step, $T_N \rightarrow T_{N+1}$, we remove from the boundary a free ridge $r$, together with the unique pair $\sigma', \sigma''$ of facets of $\partial T_N$ sharing $r$. At the same time, $r$ and the newly formed face $\sigma$ are sunk into the interior. This step $\partial T_N \longrightarrow \partial T_{N+1}$ naturally induces an analogous step $\partial T_{N}/ {\sim} \longrightarrow \partial T_{N+1}/ {\sim}$, namely, the removal of $r$ and of the (unique!) $(d-1)$-face $\sigma$ containing it. In the $(j+1)$-st LC step, $\partial T_{N+j} \longrightarrow \partial T_{N+j+1}$, we remove from the boundary a ridge $r$ together with a pair $\sigma', \sigma''$ of facets sharing $r$; moreover, we sink into the interior a lower-dimensional face $F$ if and only if we have just sunk into the interior all faces containing $F$. The induced step from $\partial T_{N+j}/ {\sim}$ to $\partial T_{N+j+1}/ {\sim}$ is precisely a ``facet-massacre'' step. For the converse, we start with a ``facet-massacre'' $P_0$, \ldots, $P_D$ of $K^T$, and we have $P_0 = K^T = \partial T_N / {\sim}$. The unique $(d-1)$-face $\sigma_j$ killed in passing from $P_j$ to $P_{j+1}$ corresponds to a unique pair of (adjacent!) $(d-1)$-faces $\sigma_j'$, $\sigma_j''$ in $\partial T_{N+j}$. Gluing them together is the LC move that transforms $T_{N+j}$ into $T_{N+j+1}$. \end{proof} \newpage \begin{remark} \label{thm:newremark} Summing up: \begin{compactitem}[--] \item The first part of a local construction along a tree $T$ corresponds to a facet-killing collapse of $S - \Delta$ (that ends in $K^T$). \item The second part of a local construction along a tree $T$ corresponds to a pure facet-massacre of $K^T$. \item A single facet-massacre of $K^T$ corresponds to many facet-killing sequences of $K^T$. \item By Proposition \ref{thm:Salongtrees}, there exists a facet-killing sequence of $K^T$ if and only if $K^T$ collapses onto some $(d-2)$-dimensional complex $C$.
This $C$ is necessarily contractible, like $K^T$. \end{compactitem} \end{remark} \noindent So $S$ is locally constructible along $T$ if and only if $K^T$ collapses onto some $(d-2)$-dimensional contractible complex $C$, if and only if $K^T$ has a facet-killing sequence. What if we do not fix~$T$? \begin{thm} \label{thm:Dcollapse} Let $S$ be a $d$-sphere ($d \ge 3$). Then the following are equivalent: \begin{compactenum}[\rm 1.] \item S is LC; \item for some spanning tree $T$ of the dual graph of $S$, $K^T$ is collapsible onto some $(d-2)$-dimensional (contractible) complex $C$; \item there exists a $(d-2)$-dimensional (contractible) complex $C$ such that for every facet $\Delta$ of $S$, $S - \Delta \searrow C$; \item for some facet $\Delta$ of $S$, $S - \Delta$ is collapsible onto a $(d-2)$-dimensional contractible complex~$C$. \end{compactenum} \end{thm} \begin{proof} $S$ is LC if and only if it is LC along some tree $T$; thus $(1) \Leftrightarrow (2)$ follows from Remark \ref{thm:newremark}. Besides, $(2) \Rightarrow (3)$ follows from the fact that $ S - \Delta \; \searrow K^T$ (Lemma \ref{thm:Scollapse}), where $K^T$ is independent of the choice of $\Delta$. $(3) \Rightarrow (4)$ is trivial. To show $(4) \Rightarrow (2)$, take a collapse of $S- \Delta$ onto some $(d-2)$-complex $C$; by Proposition \ref{thm:Salongtrees}, there exists some tree $T$ (along which the collapse acts) so that $S- \Delta \searrow K^T$ and $K^T \searrow C$. \end{proof} \begin{cor}\label{thm:corollarycollapse} Let $S$ be a $3$-sphere. Then the following are equivalent: \begin{compactenum}[\rm 1.] \item S is LC; \item $K^T$ is collapsible, for some spanning tree $T$ of the dual graph of $S$; \item $S - \Delta$ is collapsible for every facet $\Delta$ of $S$; \item $S - \Delta$ is collapsible for some facet $\Delta$ of $S$. \end{compactenum} \end{cor} \begin{proof} This follows from the previous theorem, together with the fact that all contractible $1$-complexes are collapsible.
\end{proof} We are now in a position to exploit results by Lickorish about collapsibility. \begin{prop} [Lickorish \cite{LICK}] \label{prop:Lickorish_not_collapsible} Let $\mathfrak{L}$ be a knot on $m$ edges in the $1$-skeleton of a simplicial $3$-sphere $S$. Suppose that $S - \Delta$ is collapsible, where $\Delta$ is some tetrahedron in $S - \mathfrak{L}$. Then $|S| - |\mathfrak{L}|$ is homotopy equivalent to a connected cell complex with one 0-cell and at most $m$ $1$-cells. In particular, the fundamental group of $|S|-|\mathfrak{L}|$ admits a presentation with $m$ generators. \end{prop} Now assume that a certain sphere $S$ containing a knot $\mathfrak{L}$ is LC. By Corollary \ref{thm:corollarycollapse}, $S-\Delta$ is collapsible, for any tetrahedron $\Delta$ not in the knot $\mathfrak{L}$. Hence by Lickorish's criterion the fundamental group $\pi_1 \left(|S| - |\mathfrak{L}|\right)$ admits a presentation with $m$~generators. \begin{thm}\label{thm:short} Any $3$-sphere with a $3$-complicated $3$-edge knot is not LC. More generally, a $3$-sphere with an $m$-gonal knot cannot be LC if the knot is at least $m$-complicated. \end{thm} \begin{example} \label{thm:examplebing} As in the construction of the classical ``Furch--Bing ball'' \cite[p.~73]{FUR} \cite[p.~110]{BING} \cite{ZIE}, we drill a hole into a finely triangulated $3$-ball along a triple pike dive of three consecutive trefoils; we stop drilling one step before destroying the property of having a ball (see Figure~\ref{fig:nonLC}). If we add a cone over the boundary, the resulting sphere $S$ has a three-edge knot which is a connected sum of three trefoil knots. By Goodrick \cite{GOO}, the connected sum of $m$ copies of the trefoil knot is at least $m$-complicated. So, this sphere has a knotted triangle, the fundamental group of whose complement has no presentation with $3$~generators. Hence $S$ cannot be LC.
\end{example} \begin{figure}[htbp] \centering \includegraphics[width=55mm]{nonLC.eps} \caption{\small Furch--Bing ball with a (corked) tubular hole along a triple-trefoil knot. The cone over the boundary of this ball is a sphere that is \emph{not} LC.} \label{fig:nonLC} \end{figure} From this we get a negative answer to the Durhuus--Jonsson conjecture: \begin{cor} \label{thm:strictcontainment2} Not all simplicial 3-spheres are LC. \end{cor} Lickorish also proved a higher-dimensional statement, basically by taking successive suspensions of the 3-sphere in Example \ref{thm:examplebing}. \begin{thm}[Lickorish \cite{LICK}] For each $d \geq 3$, there exists a PL $d$-sphere $S$ such that $S - \Delta$ is not collapsible for any facet $\Delta$ of $S$. \end{thm} To exploit our Theorem \ref{thm:Dcollapse} we need a sphere $S$ such that $S - \Delta$ is not even collapsible to a $(d-2)$-complex. To establish that such a sphere exists, we strengthen Lickorish's result. \begin{definition} \label{thm:dual} Let $K$ be a $d$-manifold, $A$ an $r$-simplex in $K$, and $\hat{A}$ the barycenter of $A$. Consider the barycentric subdivision $sd(K)$ of $K$. The \textit{dual} $A^*$ of $A$ is the subcomplex of $sd(K)$ formed by all the simplices of $sd(K)$ that correspond to flags \[A \subseteq A_0 \subset A_1 \subset \cdots \subset A_k\] of simplices of $K$. \end{definition} $A^*$ is a cone with apex $\hat{A}$, and thus collapsible. If $K$ is PL (see e.g.\ Hudson~\cite{Hudson} for the definition), we can say more: \begin{lemma}[{\cite[Lemma~1.19]{Hudson}}] \label{thm:newman} Let $K$ be a PL $d$-manifold (without boundary), and let $A$ be a simplex in $K$ of dimension $r$. Then \begin{compactitem} \item $A^*$ is a $(d-r)$-ball, and \item if $A$ is a face of an $(r+1)$-simplex $B$, then $B^*$ is a $(d-r-1)$-subcomplex of $\partial \, A^*$.
\end{compactitem} \end{lemma} \noindent We have observed in Lemma \ref{thm:Scollapse} that for any $d$-sphere $S$ and any facet $\Delta$ the ball $S - \Delta$ is collapsible onto a $(d-1)$-complex: In other words, via collapses one can always get \textit{one} dimension down. To get \textit{two} dimensions down is not so easy: Our Theorem \ref{thm:Dcollapse} states that $S - \Delta$ is collapsible onto a $(d-2)$-complex precisely when $S$ is LC. This ``number of dimensions down you can get by collapsing'' can be related to the minimal presentations of certain homotopy groups. The idea of the next theorem is that if one can get $k$ dimensions down by collapsing a manifold minus one facet, then the $(k-1)$-th homotopy group of the complement of any $(d-k)$-subcomplex of the manifold cannot be too complicated to present. \begin{thm} \label{thm:thm0} Let $t$ and $d$ be integers with $0 \leq t \leq d-2$, and let $K$ be a PL $d$-manifold (without boundary). Suppose that $K- \Delta$ collapses onto a $t$-complex, for some facet $\Delta$ of $K$. Then, for each $t$-dimensional subcomplex $\mathfrak{L}$ of $K$, the homotopy group \[\pi_{d-t-1} (|K| - |\mathfrak{L}| )\] has a presentation with $f_t ( \mathfrak{L} )$ generators, while $\pi_i(|K| - |\mathfrak{L}| )$ is trivial for $i< d-t-1$. \end{thm} \begin{proof} As usual, we assume that the collapse of $K- \Delta$ is ordered so that: \begin{compactitem}[--] \item first all pairs $((d-1)\textrm{-face},\; d\textrm{-face} )$ are collapsed; \item then all pairs $((d-2)\textrm{-face},\; (d-1)\textrm{-face} )$ are collapsed; \item $\vdots$ \item finally, all pairs $( t\textrm{-face},\; (t+1)\textrm{-face} )$ are collapsed. \end{compactitem} \noindent Let us put together all the faces that appear above, maintaining their order, to form a single list of simplices \[A_1, A_2, \ldots , A_{2M-1}, A_{2M}. \] In such a list $A_1$ is a free face of $A_2$; $A_3$ is a free face of $A_4$ with respect to the complex $K - A_1 - A_2$; and so on.
In general, $A_{2i-1}$ is a face of $A_{2i}$ for each $i$, and in addition, if $j > 2i$, $A_{2i-1}$ is not a face of $A_{j}$. We set $A_0 := \Delta$, $X_0 := A_0^* = \hat{\Delta}$, and define a finite sequence $X_1, \ldots, X_M$ of subcomplexes of $sd(K)$ as follows: \[ X_j := \bigcup \left\{ A_i^* \hbox{ s.t. } i \in \{0, \ldots, 2j \} \hbox{ and } A_i \notin \mathfrak{L} \right\} , \qquad \hbox{ for } j \in \{ 1, \ldots, M\}. \] None of the $A_{2i}$'s can be in $\mathfrak{L}$, because $\mathfrak{L}$ is $t$-dimensional and $\dim A_{2i} \geq \dim A_{2M} = t+1$. However, exactly $f_t ( \mathfrak{L} )$ of the $A_{2i-1}$'s are in $\mathfrak{L}$. Consider how $X_j$ differs from $X_{j-1}$. There are two cases: \begin{compactitem}[\textbullet{} ] \item If $A_{2j-1}$ is not in $\mathfrak{L}$, \[ X_j = X_{j-1} \; \cup \; A^*_{2j-1} \; \cup \; A^*_{2j} .\] By Lemma \ref{thm:newman}, setting $r=\dim A_{2j-1}$, $A^*_{2j-1}$ is a $(d-r)$-ball that contains in its boundary the $(d-r-1)$-ball $A^*_{2j}$. Thus $|X_j|$ is just $|X_{j-1}|$ with a $(d-r)$-cell attached via a cell in its boundary, and such an attachment does not change the homotopy type. \item If $A_{2j-1}$ is in $\mathfrak{L}$, then \[ X_j = X_{j-1} \; \cup \; A^*_{2j} .\] As this occurs only when $\dim A_{2j-1}=t$, we have that $\dim A_{2j}=t+1$ and $\dim A^*_{2j}=d-t-1$; hence $|X_j|$ is just $|X_{j-1}|$ with a $(d-t-1)$-cell attached via its whole boundary. \end{compactitem} Only in the second case does the homotopy type of $|X_j|$ change at all, and this second case occurs exactly $f_t ( \mathfrak{L} )$ times. Since $X_0$ is one point, it follows that $X_M$ is homotopy equivalent to a bouquet of $f_t ( \mathfrak{L} )$ many $(d-t-1)$-spheres. Now let us list by (weakly) decreasing dimension the faces of $K$ that do not appear in the previous list $A_1, A_2, \ldots , A_{2M-1}, A_{2M}$. We name the elements of this list \[ A_{2M+1}, A_{2M+2}, \ldots, A_F\] (where $\sum_{i=0}^d f_i (K) = F + 1$ because all faces of $K$ appear in $A_0, \ldots, A_F$).
Correspondingly, we recursively define a new sequence of subcomplexes of $sd(K)$ setting $Y_0 := X_M$ and \[Y_h := \left \{ \begin{array}{ll} Y_{h-1} \; & \hbox{ if } A_{2M + h} \in \mathfrak{L} , \\ Y_{h-1} \; \cup \; A^*_{2M + h} & \hbox{ otherwise. } \end{array} \right.\] Since $\dim A_{2M+h} \leq \dim A_{2M+1} = t$, we have that $|Y_h|$ is just $|Y_{h-1}|$ with possibly a cell of dimension at least $d-t$ attached via its whole boundary. Let us consider the homotopy groups of the $Y_h$'s: Recall that $Y_0$ is homotopy equivalent to a bouquet of $f_t ( \mathfrak{L} )$ many $(d-t-1)$-spheres. Clearly, for all $h$, \[\pi_{j}(Y_h) = 0 \hbox{ for each } j \in \{1, \ldots, d-t-2 \}.\] Moreover, the higher-dimensional cell attached to $|Y_{h-1}|$ to get $|Y_h|$ corresponds to the addition of relators to a presentation of $\pi_{d-t-1}(Y_{h-1})$ to get a presentation of $\pi_{d-t-1}(Y_{h})$. This means that for all $h$ the group $\pi_{d-t-1}(Y_h)$ is generated by (at most) $f_t ( \mathfrak{L} )$ elements. The conclusion follows from the fact that, by construction, $Y_{F-2M}$ is the subcomplex of $sd(K)$ consisting of all simplices of $sd(K)$ that have no vertex in $sd(\mathfrak{L})$; and one can easily prove (see \cite[Lemma 1]{LICK}) that such a complex is a deformation retract of $|K| - |\mathfrak{L}|$. \end{proof} \begin{cor} \label{thm:cor1} Let $S$ be a PL $d$-sphere with a $(d-2)$-dimensional subcomplex $\mathfrak{L}$. If the fundamental group of $|S| - |\mathfrak{L}|$ has no presentation with $f_{d-2} ( \mathfrak{L} )$ generators, then $S$ is not LC. \end{cor} \begin{proof} Set $t=d-2$ in Theorem \ref{thm:thm0}, and apply Theorem \ref{thm:Dcollapse}. \end{proof} \begin{cor} \label{thm:cor2} Fix an integer $d \geq 3$. Let $S$ be a $3$-sphere with an $m$-gonal knot in its 1-skeleton, such that the knot is at least $(m \cdot 2^{d-3})$-complicated. Then the $(d-3)$-rd suspension of $S$ is a PL $d$-sphere that is not LC.
\end{cor} \begin{proof} Let $S'$ be the $(d-3)$-rd suspension of $S$, and let $\mathfrak{L}'$ be the subcomplex of $S'$ obtained by taking the $(d-3)$-rd suspension of the $m$-gonal knot $\mathfrak{L}$. Since $|S| - |\mathfrak{L}|$ is a deformation retract of $|S'| - |\mathfrak{L}'|$, they have the same homotopy groups. In particular, the fundamental group of $|S'| - |\mathfrak{L}'|$ has no presentation with $m \cdot 2^{d-3}$ generators. Now $\mathfrak{L}'$ is $(d-2)$-dimensional, and \[f_{d-2} (\mathfrak{L}' ) = 2^{d-3} \cdot f_1 (\mathfrak{L}) = m \cdot 2^{d-3},\] whence we conclude via Corollary \ref{thm:cor1}, since all $3$-spheres are PL (and the PL property is maintained by suspensions). \end{proof} \begin{cor}\label{thm:cor3} For every $d \geq 3$, not all PL $d$-spheres are LC. \end{cor} Theorem \ref{thm:thm0} can be used in connection with the existence of $2$-knots, that is, $2$-spheres embedded in a $4$-sphere in a knotted way (see Kawauchi \cite[p.~190]{KAWA}), to see that there are many non-LC $4$-spheres beyond those that arise by suspension of $3$-spheres. Thus, being ``non-LC'' is not simply induced by classical knots. \subsection{Many spheres are LC} Next we show that all constructible pseudomanifolds are LC. \begin{lemma} \label{lem:LCdecomposition} Let $C$ be a $d$-pseudomanifold. If $C$ can be split in the form $C = C_1 \cup C_2$, where $C_1$ and $C_2$ are LC $d$-pseudomanifolds and $C_1 \cap C_2$ is a strongly connected $(d-1)$-pseudomanifold, then $C$ is LC. \end{lemma} \begin{proof} Notice first that $C_1 \cap C_2 = \partial C_1 \cap \partial C_2$. In fact, every ridge of $C$ belongs to at most two facets of~$C$, hence every $(d-1)$-face $\sigma$ of $C_1 \cap C_2$ is contained in exactly one $d$-face of $C_1$ and in exactly one $d$-face of $C_2$. Each $C_i$ is LC; let us fix a local construction for each of them, and call $T_i$ the tree along which $C_i$ is locally constructed.
Choose some $(d-1)$-face $\sigma$ in $C_1\cap C_2$, which thus specifies a $(d-1)$-face in the boundary of $C_1$ and of $C_2$. Let $C'$ be the pseudomanifold obtained by attaching $C_1$ to $C_2$ along the two copies of $\sigma$. $C'$ can be locally constructed along the tree obtained by joining $T_1$ and $T_2$ by an edge across $\sigma$: Just redo the same moves of the local constructions of the $C_i$'s. So $C'$ is LC. If $C_1 \cap C_2$ consists of one simplex only, then $C' \equiv C$ and we are already done. Otherwise, by the strong connectedness assumption, the facets of $ C_1 \cap C_2$ can be labeled $0, 1, \ldots, m$, so that: \begin{compactitem} \item the facet labeled by $0$ is $\sigma$; \item each facet labeled by $k \geq 1$ is adjacent to some facet labeled $j$ with $j < k$. \end{compactitem} Now for each $i \geq 1$, glue together the two copies of the facet $i$ inside $C'$. All these gluings are \emph{local} because of the labeling chosen, and we eventually obtain $C$. Thus, $C$ is LC. \end{proof} Since all constructible simplicial complexes are pure and strongly connected \cite{BJOE}, we obtain for simplicial $d$-pseudomanifolds that \[ \{ \hbox{constructible} \} \ \subseteq\ \{ \hbox{LC} \}. \] The previous containment is strict: Let $C_1$ and $C_2$ be two LC simplicial $3$-balls on $7$ vertices consisting of $7$ tetrahedra, as indicated in Figure~\ref{fig:nonLCmanifold}. (The $3$-balls are cones over the subdivided triangles on their fronts.) \begin{figure}[htbp] \centering \includegraphics[width=0.58\linewidth]{nonBuchsbaum.eps} \caption{\small Gluing the simplicial $3$-balls along the shaded $2$-dimensional subcomplex gives an LC, non-constructible $3$-pseudomanifold.} \label{fig:nonLCmanifold} \end{figure} Glue them together along the shaded strongly connected subcomplex in their boundaries (which uses $5$ vertices and $4$ triangles).
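In fact, a quick inclusion-exclusion count (assuming, as in Figure~\ref{fig:nonLCmanifold}, that the two balls meet exactly in the shaded subcomplex) gives \[ f_0 (C_1 \cup C_2) \,=\, 7 + 7 - 5 \,=\, 9, \qquad f_3 (C_1 \cup C_2) \,=\, 7 + 7 \,=\, 14, \] since the shared subcomplex is $2$-dimensional and therefore contains no tetrahedra.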
The resulting simplicial complex $C$, on $9$ vertices and $14$ tetrahedra, is LC by Lemma \ref{lem:LCdecomposition}, but the link of the top vertex is an annulus, and hence not LC. In fact, the complex $C$ is not constructible, since the link of the top vertex is not constructible. Also, $C$ is not $2$-connected: it retracts to a $2$-sphere. So, LC $d$-pseudomanifolds are not necessarily $(d-1)$-connected. Since all constructible $d$-complexes are $(d-1)$-connected, and every constructible $d$-pseudomanifold is either a $d$-sphere or a $d$-ball \cite[Prop.~1.4, p.~374]{HS}, the previous argument produces many examples of $d$-pseudomanifolds with boundary that are LC, but not constructible. None of these examples, however, will be a sphere (or a ball). We will prove in Theorem~\ref{thm:LCnonconstructibleballs} that there are LC $3$-balls that are not constructible; we now show that for $d$-spheres, for every $d \geq 3$, the containment $\{ \textrm{constructible} \} \subseteq \{ \textrm{LC} \}$ is strict. \begin{lemma} \label{thm:suspensions} Suppose that a $3$-sphere $\bar{S}$ is LC but not constructible. Then for all $d \geq 3$, the $(d-3)$-rd suspension of $\bar{S}$ is a $d$-sphere that is also LC but not constructible. \end{lemma} \begin{proof} Whenever $S$ is an LC $d$-sphere, $v*S$ is an LC $(d+1)$-ball. (The proof is straightforward from the definition of ``local construction''.) Thus the suspension $(v*S) \cup (w*S)$ is also LC by Lemma~\ref{lem:LCdecomposition}. On the other hand, the suspension of a non-constructible sphere is a non-constructible sphere \cite[Corollary 2]{HZ}. \end{proof} Of course, we still have to show that the $3$-sphere $\bar{S}$ in the assumption of Lemma \ref{thm:suspensions} really exists. This will be established in Example \ref{thm:examplelick}, using Corollary \ref{thm:corollarycollapse} as follows.
\begin{lemma} \label{thm:coneoverbeethoven} Let $B$ be a $3$-ball, $v$ an external point, and $B \cup v * \partial B$ the $3$-sphere obtained by adding to $B$ a cone over its boundary. If $B$ is collapsible, then $B \cup v * \partial B$ is LC. \end{lemma} \begin{proof} By Corollary \ref{thm:corollarycollapse}, and since $B$ is collapsible, all we need to prove is that $(B \cup v * \partial B) - (v*\sigma)$ collapses onto $B$, for some triangle $\sigma$ in the boundary of $B$. As all $2$-balls are collapsible, and $\partial B - \sigma$ is a $2$-ball, there is some vertex $P$ in $\partial B$ such that $\partial B - \sigma \searrow P$. This naturally induces a collapse of $v*\partial B \; - \; v*\sigma$ onto $\partial B \, \cup \, v*P$, according to the correspondence \[\sigma \hbox{ is a free face of } \Sigma \; \; \Longleftrightarrow \; \; v*\sigma \hbox{ is a free face of } v*\Sigma.\] Collapsing the edge $v*P$ down to $P$, we get $v*\partial B \; - \; v*\sigma \searrow \partial B$. In the collapse given here, the pairs of faces removed are all of the form $(v*\sigma, v*\Sigma)$; thus, the faces in $\partial B$ are removed together with subfaces (and not with superfaces) in the collapse. This means that the freeness of the faces in $\partial B$ is not needed; so when we glue back $B$ the collapse $v*\partial B \; - \; v*\sigma \searrow \partial B$ can be read off as $B \;\cup \; v * \partial B \; - \; v*\sigma \; \searrow \; B.$ \end{proof} \begin{example} \label{thm:examplelick} In \cite{LICKMAR}, Lickorish and Martin described a collapsible $3$-ball $B$ with a knotted spanning edge. This was also obtained independently by Hamstrom and Jerrard \cite{HAM}. The knot can be taken to be an arbitrary $2$-bridge knot (for example, the trefoil knot).
Merging $B$ with the cone over its boundary, we obtain a knotted $3$-sphere $\bar{S}$, which is LC (by Lemma \ref{thm:coneoverbeethoven}; see also \cite{LICK}) but not constructible (because it is knotted; see \cite[p.~54]{Hthesis} or \cite{HZ}). \end{example} \begin{remark} \label{thm:remarklick} In his 1991 paper \cite[p.~530]{LICK}, Lickorish announced (for a proof see \cite[pp.~100--103]{Benedetti-diss}) that ``with a little ingenuity'' one can get a sphere $S$ with a $2$-complicated triangular knot (the double trefoil), such that $S - \Delta$ is collapsible for some facet $\Delta$. Such a sphere is LC by Corollary \ref{thm:corollarycollapse}. \end{remark} \begin{example} \label{ex:knottedLC} The triangulated knotted $3$-sphere $S^3_{13, 56}$ realized by Lutz \cite{LUTZ1} has $13$ vertices and $56$ facets. Since it contains a $3$-edge trefoil knot in its $1$-skeleton, $S^3_{13, 56}$ cannot be constructible, according to Hachimori and Ziegler~\cite{HZ}. Let $B_{13, 55}$ be the $3$-ball obtained by removing the facet $\Delta = \{1,2,6,9\}$ from $S^3_{13, 56}$. Let $\sigma$ be the triangle $\{2,6,9\}$. Then $B_{13, 55}$ collapses to the $2$-disc $\partial \Delta - \sigma$ (F. H. Lutz, personal communication; see \cite[pp.~106--107]{Benedetti-diss}). Since all $2$-discs are collapsible, $B_{13, 55}$ is collapsible, so $S^3_{13, 56}$ is LC. \end{example} \begin{cor} For each $d \geq 3$, not all LC $d$-spheres are constructible. In particular, a knotted $3$-sphere can be LC (but not constructible) if the knot is just $1$-complicated or $2$-complicated. \end{cor} The knot in the $1$-skeleton of the ball $B$ in Example \ref{thm:examplelick} consists of a path on the boundary of $B$ together with a ``spanning edge'', that is, an edge in the interior of $B$ with both extremes on $\partial B$. This edge determines the knot, in the sense that any other path on $\partial B$ between the two extremes of this edge closes it up into an equivalent knot.
For these reasons such an edge is called a \emph{knotted spanning edge}. More generally, a \emph{knotted spanning arc} is a path of edges in the interior of a $3$-ball, such that both extremes of the path lie on the boundary of the ball, and any boundary path between these extremes closes it into a knot. (According to this definition, the relative interior of a knotted spanning arc is allowed to intersect the boundary of the $3$-ball; this is the approach of Ehrenborg and Hachimori in \cite{EH}.) Example \ref{thm:examplelick} can then be generalized by adopting the idea that Hamstrom and Jerrard used to prove their ``Theorem B'' \cite[p.~331]{HAM}, as follows. \begin{thm} \label{thm:HJ} Let $K$ be any $2$-bridge knot (e.g. the trefoil knot). For any positive integer $m$, there exists a collapsible $3$-ball $B_m$ with a knotted spanning arc of $m$ edges, such that the knot is the connected sum of $m$ copies of $K$. \end{thm} \begin{proof} By the work of Lickorish--Martin \cite{LICKMAR} (see also \cite{HAM} and Example \ref{thm:examplelick}) there exists a collapsible $3$-ball $B$ with a knotted spanning edge $[x, y]$, the knot being $K$. So if $m=1$ we are already done. Otherwise, take $m$ copies $B^{(1)}$, $\ldots$, $B^{(m)}$ of the ball $B$ and glue them all together by identifying the vertex $y^{(i)}$ of $B^{(i)}$ with the vertex $x^{(i+1)}$ of $B^{(i+1)}$, for each $i$ in $\{1, \ldots, m-1\}$. The result is a cactus of $3$-balls $C_m$. By induction on $m$, it is easy to see that a cactus of $m$~collapsible $3$-balls is collapsible. To obtain a $3$-ball from $C_m$, we thicken the junctions between the $3$-balls by attaching $m-1$ square pyramids with apex $y^{(i)} \equiv x^{(i+1)}$. Each pyramid can be triangulated into two tetrahedra to make the final complex simplicial. Let $B_m$ be the resulting $3$-ball.
All the spanning edges of the $B^{(i)}$'s are concatenated in $B_m$ to yield a knotted spanning arc of $m$~edges, the knot being equivalent to the $m$-fold connected sum of $K$ with itself. Moreover, the ``extra pyramids'' introduced can be collapsed away. This yields a collapse of the ball $B_m$ onto the complex $C_m$, which is collapsible. \end{proof} \begin{comment} Otherwise, let $\sigma_x$ and $\sigma_y$ be triangles in $\partial C$ containing the vertex $x$ resp.~$y$. Fix a collapse of $C$. After possibly modifying the triangulation of $C$ a bit, we can assume that both $\sigma_x$ and $\sigma_y$ are removed in pairs of the type (edge, triangle) in the collapse of~$C$: In fact, suppose that $\sigma_x$ is removed together with a tetrahedron $\Sigma_x$ in the collapse of $C$. Let us subdivide $\sigma_x$ stellarly into three triangles $\sigma_1$, $\sigma_2$, and $\sigma_3$ (with $x \in \sigma_1$): the tetrahedron $\Sigma_x$ is therefore subdivided into three tetrahedra $\Sigma_1$, $\Sigma_2$, $\Sigma_3$, where each $\sigma_j$ is contained in $\Sigma_j$. Let $C'$ be the new triangulation obtained. It still has a knotted spanning edge. Moreover, a collapse of $C'$ can be read off from the collapse of $C$, by replacing the elementary collapse $(\sigma_x, \Sigma_x)$ with the six elementary collapses \[ (\sigma_2, \Sigma_2), (\sigma_3, \Sigma_3), (\Sigma_1 \cap \Sigma_2, \Sigma_1), (\sigma_2 \cap \sigma_3, \Sigma_2 \cap \Sigma_3), (\sigma_1 \cap \sigma_3, \Sigma_1 \cap \Sigma_3), (\sigma_1 \cap \sigma_2, \sigma_1) . \] So, up to replacing $C$ with $C'$ and $\sigma_x$ with $\sigma_1$, we can assume that $\sigma_x$ is removed together with an edge; the same holds for $\sigma_y$. Now take $m$ copies $C^{(1)}$, $\ldots$, $C^{(m)}$ of the ball $C$, and glue them all together by identifying the boundary facet $\sigma_y^{(i)}$ of $C^{(i)}$ with the boundary facet $\sigma_x^{(i+1)}$ of $C^{(i+1)}$, for all $i \in \{1, \ldots, m-1\}$.
Let us call $B_m$ the resulting $3$-ball. All the spanning edges of the $C^{(i)}$'s are concatenated in $B_m$ to yield a knotted spanning arc of $m$ edges, the knot being equivalent to the $m$-fold connected sum of $K$ with itself. By construction, each $C^{(i)}$ admits a collapse in which $\sigma_x^{(i)}$ and $\sigma_y^{(i)}$ are collapsed away together with an edge; thus, the ``freeness'' of $\sigma_x^{(i)}$ and $\sigma_y^{(i)}$ is irrelevant for the collapsibility of $C^{(i)}$. Hence, $B_m$ is collapsible. \end{comment} \begin{cor} \label{thm:LCknotted} A $3$-sphere with an $m$-complicated $(m+2)$-gonal knot can be LC. \end{cor} \begin{proof} Let $S_m = B_m \cup (v*\partial B_m)$, where $B_m$ is the $3$-ball constructed in the previous theorem. By Lemma \ref{thm:coneoverbeethoven}, $S_m$ is LC. The spanning arc of $m$ edges is closed up in $v$ to form an $(m+2)$-gon. \end{proof} \begin{remark} \label{rem:LCknotted} The bound given by Corollary \ref{thm:LCknotted} can be improved: In fact, for each positive integer $m$ there exists an LC $3$-sphere with an $(m+1)$-complicated $(m+2)$-gonal knot. The proof is rather long, so we omit it here, referring the reader to \cite[pp.~100--103]{Benedetti-diss}. \end{remark} The spheres mentioned in Corollary~\ref{thm:LCknotted} and Remark~\ref{rem:LCknotted} are not vertex decomposable, not shellable and not constructible, because of the following result about the bridge index. \begin{thm}[Ehrenborg, Hachimori, Shimokawa \cite{EH, HS}] \label{thm:EHS} Suppose that a $3$-sphere (or a $3$-ball) $S$ contains a knot of $m$ edges. \begin{compactitem} \item[--] If the bridge index of the knot exceeds $\frac{m}{3}$, then $S$ is not vertex decomposable; \item[--] If the bridge index of the knot exceeds $\frac{m}{2}$, then $S$ is not constructible. \end{compactitem} \end{thm} \noindent The bridge index of a $t$-complicated knot is at least $t+1$.
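Spelled out for a knot on $m$ edges, this lower bound combines with Theorem \ref{thm:EHS} as follows: if the knot is $k$-complicated with $k \geq \lfloor \frac{m}{3} \rfloor$, then \[ \hbox{bridge index} \; \geq \; k+1 \; \geq \; \left\lfloor \tfrac{m}{3} \right\rfloor + 1 \; > \; \tfrac{m}{3}, \] and analogously with $\frac{m}{2}$ in place of $\frac{m}{3}$.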
So, if a knot is at least $\lfloor\frac{m}{3}\rfloor$-complicated, its bridge index automatically exceeds $\frac{m}{3}$. Thus, Ehrenborg--Hachimori--Shimokawa's theorem, the results of Hachimori and Ziegler in \cite{HZ}, the previous examples, and our present results blend into the following new hierarchy. \begin{thm} A $3$-sphere with a non-trivial knot consisting of\\ \begin{tabular}{rl} $3$ edges, $1$-complicated & is not constructible, but can be LC. \\ $3$ edges, $2$-complicated & is not constructible, but can be LC. \\ \phantom{xxxxxxxx} $3$ edges, $3$-complicated or more & is not LC. \\ $4$ edges, $1$-complicated & is not vertex dec., but can be shellable. \\ $4$ edges, $2$- or $3$-complicated & is not constructible, but can be LC. \\ $4$ edges, $4$-complicated or more & is not LC. \\ $5$ edges, $1$-complicated & is not vertex dec., but can be shellable. \\ $5$ edges, $2$-, $3$- or $4$-complicated & is not constructible, but can be LC. \\ $5$ edges, $5$-complicated or more & is not LC. \\ $6$ edges, $1$-complicated & can be vertex decomposable. \\ $6$ edges, $2$-complicated & is not vertex dec., but can be LC. \\ $6$ edges, $3$-, $4$- or $5$-complicated & is not constructible, but can be LC. \\ $6$ edges, $6$-complicated or more & is not LC. \\ \vdots\qquad{} & \qquad \vdots\\ $m$ edges, $k$-complicated, $k \geq \lfloor\frac{m}{3}\rfloor$ & is not vertex decomposable. \\ $m$ edges, $k$-complicated, $k \geq \lfloor\frac{m}{2}\rfloor$ & is not constructible. \\ $m$ edges, $k$-complicated, $k \leq m-1$ & can be LC. \\ $m$ edges, $k$-complicated, $k \geq m$ & is not LC. \end{tabular} \end{thm} \noindent The same conclusions are valid for $3$-balls that contain a knot, up to replacing the word ``LC'', wherever it occurs, with the word ``collapsible''. (See Lemma~\ref{thm:coneoverbeethoven}, Corollary~\ref{thm:mainballs} and \cite{HZ}.) \begin{comment} We do not know whether for $m \geq 3$ a $3$-sphere with an $m$-complicated $(m+1)$-gonal knot can be LC.
However, an LC sphere with a $2$-complicated triangular knot was found by Lickorish (see Remark \ref{thm:remarklick}). \end{comment} \smallskip One may also derive from Zeeman's theorem (``given any simplicial $3$-ball, there is a positive integer $r$ so that its $r$-th barycentric subdivision is collapsible'' \cite[Chapters I and III]{Zeeman}) that any $3$-sphere will become LC after sufficiently many barycentric subdivisions. On the other hand, there is no fixed number $r$ of subdivisions that is sufficient to make \emph{all} $3$-spheres LC. (For this, use sufficiently complicated knots, together with Theorem~\ref{thm:short}.) \section{On LC Balls} The combinatorial topology of $d$-balls and of $d$-spheres is intimately related: Removing any facet $\Delta$ from a $d$-sphere $S$ we obtain a $d$-ball $S-\Delta$, and adding a cone over the boundary of a $d$-ball $B$ we obtain a $d$-sphere $S_B$. We do have a combinatorial characterization of LC $d$-balls, which we will reach in Theorem \ref{thm:mainDballs}; it is a bit more complicated, but otherwise analogous to the characterization of LC $d$-spheres as given in Main Theorem~\ref{mainthm:hierarchyspheres}. \begin{thm} \label{thm:hierarchyballs} For simplicial $d$-balls, we have the following hierarchy: \em \[ \Big\{\begin{array}{@{}c@{}} \textrm{vertex}\\ \textrm{decomp.} \end{array}\Big\} \subsetneq \{\textrm{shellable}\} \subsetneq \{\textrm{constructible}\} \subsetneq \{\textrm{LC}\} \subsetneq \Big\{\begin{array}{@{}c@{}} \textrm{collapsible onto a}\\ (d-2)\textrm{-complex} \end{array}\Big\} \subsetneq \{\textrm{all $d$-balls}\}. \] \end{thm} \begin{proof} The first two inclusions are known. We have already seen that all constructible complexes are LC (Lemma~\ref{lem:LCdecomposition}). Every LC $d$-ball is collapsible onto a $(d-2)$-complex by Corollary \ref{cor:LCcollapsible-dimd}.
Let us see next that all inclusions are strict for $d=3$: For the first inclusion this follows from Lockeberg's example of a $4$-polytope whose boundary is not vertex decomposable. For the second inclusion, take Ziegler's non-shellable ball from \cite{ZIE}, which is constructible by construction. A non-constructible $3$-ball that is LC will be provided by Theorem \ref{thm:LCnonconstructibleballs}. A collapsible $3$-ball that is not LC will be given in Theorem~\ref{thm:collapsiblenonLC}. Finally, Bing and Goodrick showed that not every $3$-ball is collapsible \cite{BING, GOO}. To show that the inclusions are strict for all $d\ge3$, we argue as follows. For the first four inclusions we get this from the case $d=3$, since \begin{compactitem}[ -- ] \item cones are always collapsible, \item the cone $v*B$ is vertex decomposable resp.\ shellable resp.\ constructible if and only if $B$ is, \item and in Proposition \ref{prop:LCcones} we will show that $v * B$ is LC if and only if $B$ is. \end{compactitem} For the last inclusion and $d\ge3$, we look at the $d$-balls obtained by removing a facet from a non-LC $d$-sphere. These exist by Corollary~\ref{thm:cor2}; they do not collapse onto a $(d-2)$-complex by Theorem~\ref{thm:Dcollapse}. \end{proof} \subsection{Local constructions for $d$-balls} We begin with a relative version of the notions of ``facet-killing sequence'' and ``facet massacre'', which we introduced in Subsection~\ref{sec:notLCspheres}. \begin{definition} Let $P$ be a pure $d$-complex. Let $Q$ be a proper subcomplex of $P$, either pure $d$-dimensional or empty. A \textit{facet-killing sequence of $(P,Q)$} is a sequence $P_0, P_1, \ldots, P_{t-1}, P_t$ of simplicial complexes such that $t=f_d(P)-f_d(Q)$, $P_0=P$, and $P_{i+1}$ is obtained from $P_i$ by removing a pair $(\sigma, \Sigma)$ such that $\sigma$ is a free $(d-1)$-face of $\Sigma$ that does not lie in~$Q$ (which also implies that $\Sigma\notin Q$).
\end{definition} It is easy to see that $P_t$ has the same $d$-faces as $Q$. The version of facet-killing sequences given in Definition \ref{thm:facetkilling} is a special case of this one, namely the case when $Q$ is empty. \begin{definition} Let $P$ be a pure $d$-dimensional simplicial complex. Let $Q$ be either the empty complex, or a pure $d$-dimensional proper subcomplex of $P$. A \textit{pure facet-massacre of $(P,Q)$} is a sequence $P_0, P_1, \ldots, P_{t-1}, P_t$ of (pure) complexes such that $t=f_d(P) - f_d(Q)$, $P_0=P$, and $P_{i+1}$ is obtained from $P_i$ by removing: \begin{compactenum}[(a)] \item a pair $(\sigma, \Sigma)$ such that $\sigma$ is a free $(d-1)$-face of $\Sigma$ that does not lie in $Q$, and \item all inclusion-maximal faces of dimension smaller than $d$ that are left after the removal of type (a) or, recursively, after removals of type (b). \end{compactenum} \end{definition} \noindent Necessarily $P_t=Q$ (and when $Q = \emptyset$ we recover the notion of facet-massacre of $P$, which we introduced in Definition \ref{thm:facetmassacre}). It is easy to see that a step $P_i \longrightarrow P_{i+1}$ can be factored (not in a unique way) into an elementary collapse followed by a removal of faces of dimensions smaller than $d$ that makes $P_{i+1}$ a pure complex. Thus, a single pure facet-massacre of $(P,Q)$ corresponds to many facet-killing sequences of $(P,Q)$. We will apply both definitions to the pair $(P,Q) = (K^T, \partial B)$, where $K^T$ is defined for balls as follows. \begin{definition} If $B$ is a $d$-ball with $N$ facets and $T$ is a spanning tree of the dual graph of $B$, define $K^T$ as the subcomplex of $B$ formed by all $(d-1)$-faces of $B$ that are not hit by $T$.
\end{definition} \begin{lemma} \label{thm:Bcollapse} In the notation above, \begin{compactitem} \item $K^T$ is a pure $(d-1)$-dimensional simplicial complex, containing $\partial B$ as a subcomplex; \item $K^T$ has $D + \frac{b}{2}$ facets, where $b$ is the number of facets in $\partial B$, and $D:=\frac{dN-N+2}{2}$; \item for any $d$-simplex $\Delta$ of $B$, $\; \;B - \Delta \; \searrow K^T$; \item $K^T$ is homotopy equivalent to a $(d-1)$-dimensional sphere. \end{compactitem} \end{lemma} We introduce another convenient piece of terminology. \begin{definition}[seepage] Let $B$ be a simplicial $d$-ball. A \emph{seepage} is a $(d-1)$-dimensional subcomplex $C$ of $B$ whose $(d-1)$-faces are exactly the $(d-1)$-faces of $\partial B$. \end{definition} A seepage is not necessarily pure; actually there is only one pure seepage, namely $\partial B$ itself. Since $K^T$ contains $\partial B$, a collapse of $K^T$ onto a seepage must remove all the $(d-1)$-faces of $K^T$ that are not in $\partial B$: This is what we called a facet-killing sequence of $(K^T, \partial B)$. \begin{proposition} \label{thm:Balongtrees} Let $B$ be a $d$-ball, and $\Delta$ a $d$-simplex of $B$. Let $C$ be a seepage of $B$. Then, \[ B- \Delta \; \searrow \; C \; \; \Longleftrightarrow \; \exists \; T \hbox{ s.t. } \; K^T \; \searrow C. \] \end{proposition} \begin{proof} Analogous to the proof of Proposition \ref{thm:Salongtrees}. The crucial assumption is that no face of $\partial B$ is removed in the collapse (since all boundary faces are still present in the final complex $C$). \end{proof} If we fix a spanning tree $T$ of the dual graph of $B$, we then have a 1-1 correspondence between the following sets: \begin{compactenum} \item the set of collapses $B- \Delta \; \searrow K^T$; \item the set of ``natural labelings'' of $T$, where $\Delta$ is labeled by $1$; \item the set of the first parts $(T_1, \ldots, T_N)$ of local constructions for $B$, with $T_1 = \Delta$.
\end{compactenum} \begin{thm} \label{thm:Bmassacre} Let $B$ be a $d$-ball; fix a facet $\Delta$, and a spanning tree $T$ of the dual graph of $B$, rooted at $\Delta$. The second part of a local construction for $B$ along $T$ corresponds bijectively to a facet-massacre of $(K^T, \partial B)$. \end{thm} \begin{proof} Let us start with a local construction $\left[ T_1, \ldots, T_{N-1}, \right] T_N, \ldots, T_k$ for $B$ along $T$. Topologically, $B=T_N / {\sim}$, where ${\sim}$ is the equivalence relation determined by the gluing, and $K^T = \partial T_N / {\sim}$. $K^T$ has $D + \frac{b}{2}$ facets (see Lemma \ref{thm:Bcollapse}), and all of them, except the $b$ facets in the boundary, represent gluings. Thus we have to describe a sequence $P_0, \ldots, P_t$ with $t=D - \frac{b}{2}$. But the local construction $\left[ T_1, \ldots, T_{N-1}, \right] T_N, \ldots, T_k$ produces $B$ (which has $b$ facets in the boundary) from $T_N$ (which has $2D$ facets in the boundary, cf.\ Lemma \ref{thm:treesA}) in $k-N$ steps, each removing a pair of facets from the boundary. So, $2D - 2(k-N)=b$, which implies $k-N=t$. Define $P_0 := K^T = \partial T_N / {\sim}$, and $P_j := \partial T_{N+j} / {\sim}$. In the first LC step, $T_N \rightarrow T_{N+1}$, we remove from the boundary a free ridge $r$, together with the unique pair $\sigma', \sigma''$ of facets of $\partial T_N$ sharing $r$. At the same time, $r$ and the newly formed face $\sigma$ are sunk into the interior; so obviously neither $\sigma$ nor $r$ will appear in $\partial B$. Each LC step $T_{N+j} \rightarrow T_{N+j+1}$ naturally induces an analogous step $\partial T_{N+j}/ {\sim} \longrightarrow \partial T_{N+j+1}/ {\sim}$, namely, the removal of $r$ and of the unique $(d-1)$-face $\sigma$ containing it, with $r$ not in $\partial B$. The rest is analogous to the proof of Theorem \ref{thm:Smassacre}.
\end{proof} \vspace{0.01\linewidth} \par \noindent Thus, $B$ can be locally constructed along a tree $T$ if and only if $K^T$ collapses onto some seepage. What if we do not fix the tree $T$ or the facet $\Delta$? \begin{lemma} \label{thm:astussia} Let $B$ be a $d$-ball; let $\sigma$ be a $(d-1)$-face in the boundary $\partial B$, and let $\Sigma$ be the unique facet of $B$ containing $\sigma$. Let $C$ be a subcomplex of $B$. If $C$ contains $\partial B$, the following are equivalent: \begin{compactenum}[ \rm1. ] \item $B - \Sigma \; \searrow \; C;$ \item $B - \Sigma - \sigma \; \searrow \; C - \sigma$; \item $B \; \searrow \; C - \sigma$. \end{compactenum} \end{lemma} \begin{thm} \label{thm:mainDballs} Let $B$ be a $d$-ball. Then the following are equivalent: \begin{compactenum}[ \rm1. ] \item $B$ is LC; \item $K^T$ collapses onto some seepage $C$, for some spanning tree $T$ of the dual graph of $B$; \item there exists a seepage $C$ such that for every facet $\Delta$ of $B$ one has $B - \Delta \; \searrow C$; \item $B - \Delta \; \searrow C$, for some facet $\Delta$ of $B$, and for some seepage $C$; \item there exists a seepage $C$ such that for every facet $\sigma$ of $\partial B$ one has $B \; \searrow \; C - \sigma$; \item $B \; \searrow \; C - \sigma$, for some facet $\sigma$ of $\partial B$, and for some seepage $C$. \end{compactenum} \end{thm} \begin{proof} The equivalences $1 \Leftrightarrow 2 \Leftrightarrow 3 \Leftrightarrow 4$ are established analogously to the proof of Theorem \ref{thm:Dcollapse}. Finally, Lemma \ref{thm:astussia} implies that $3 \Rightarrow 5 \Rightarrow 6 \Rightarrow 4$. \end{proof} \begin{cor} \label{cor:LCcollapsible-dimd} Every LC $d$-ball collapses onto a $(d-2)$-complex. \end{cor} \begin{proof} By Theorem~\ref{thm:mainDballs}, the ball $B$ collapses onto the union of the boundary of $B$ minus a facet with some complex of dimension at most $d-2$.
The boundary of $B$ minus a facet is a $(d-1)$-ball; thus it can be collapsed down to dimension $d-2$, and the additional $(d-2)$-complex will not interfere. \end{proof} \begin{cor} \label{thm:mainballs} Let $B$ be a $3$-ball. Then the following are equivalent: \begin{compactenum}[ \rm1. ] \item $B$ is LC; \item $K^T \searrow \partial B$, for some spanning tree $T$ of the dual graph of $B$; \item $B - \Delta \; \searrow \partial B$, for every facet $\Delta$ of $B$; \item $B - \Delta \; \searrow \partial B$, for some facet $\Delta$ of $B$; \item $B \; \searrow \; \partial B - \sigma$, for every facet $\sigma$ of $\partial B$; \item $B \; \searrow \; \partial B - \sigma$, for some facet $\sigma$ of $\partial B$. \end{compactenum} \end{cor} \begin{figure}[htbp] \centering \includegraphics[width=.18\linewidth]{seepage.eps} \caption{\small A seepage of a $3$-ball.} \label{fig:seepage} \end{figure} \begin{proof} When $B$ has dimension 3, any seepage $C$ of $B$ is a $2$-complex containing $\partial B$, plus some edges and vertices. If a complex homotopy equivalent to $S^2$ collapses onto $C$, then $C$ is also homotopy equivalent to $S^2$, thus $C$ can only be $\partial B$ with some trees attached (see Figure \ref{fig:seepage}), which implies that $C \searrow \partial B$. \end{proof} \begin{cor} \label{thm:LCcollapsible} All LC $3$-balls are collapsible. \end{cor} \begin{proof} If $B$ is LC, it collapses to some $2$-ball $\partial B - \sigma$, but all $2$-balls are collapsible. \end{proof} \begin{cor} All constructible $3$-balls are collapsible. \end{cor} For example, Ziegler's ball, Gr\"unbaum's ball, and Rudin's ball are collapsible (see \cite{ZIE}). \begin{remark} The locally constructible $3$-balls with $N$ facets are precisely the $3$-balls that admit a ``special collapse'', namely a collapse such that, after the first elementary collapse, no triangle of $\partial B$ is collapsed away in the next $N-1$ elementary collapses.
Such a collapse acts along a dual (directed) tree of the ball, whereas a generic collapse acts along an acyclic graph that might be disconnected. \end{remark} One could argue that maybe ``special collapses'' are not that special: Perhaps every collapsible $3$-ball has a collapse that removes only one boundary triangle in its top-dimensional phase? This is not so: We will produce a counterexample in the next subsection (Theorem \ref{thm:collapsiblenonLC}). \begin{thm}\label{thm:LCnonconstructibleballs} For every $d\ge3$, not all LC $d$-balls are constructible. \end{thm} \begin{proof} If $B$ is a non-constructible $d$-ball and $v$ is a new vertex, then $v*B$ is a non-constructible $(d+1)$-ball. Also, it is easy to see that if $B$ is LC then $v \ast B$ is also LC (cf.\ Proposition \ref{prop:LCcones}). Therefore, it suffices to prove the claim for $d=3$. In Example~\ref{ex:knottedLC} we described a $3$-ball $B_{13,55}$ that collapses onto its boundary minus a facet. By Corollary~\ref{thm:mainballs}, $B_{13,55}$ is LC. At the same time, $B_{13,55}$ contains a $3$-edge trefoil knot, which prevents $B_{13,55}$ from being constructible \cite[Thm.~1]{HZ}. \end{proof} \subsection{3-Balls without interior vertices} Here we show that a simplicial $3$-ball with all vertices on the boundary cannot contain any knotted spanning edge if it is LC, but might contain some if it is collapsible. We use this fact to establish our hierarchy for $d$-balls (Theorem \ref{thm:hierarchyballs}). Let us fix some notation first. Recall that by Theorem \ref{thm:topologyLC3-dim}, each connected component of the boundary of a simplicial LC $3$-pseudomanifold is homeomorphic to a simply-connected union of $2$-spheres, any two of which share at most one point. Let us call \emph{pinch points} the points shared by two or more spheres in the boundary of an LC $3$-pseudomanifold.
\begin{definition}[Steps of types (i)--(ix) in LC constructions] \label{defn:cases} Any admissible step in a local construction of a $3$-pseudomanifold falls into one of the following nine types: \begin{compactenum}[(i)\ ] \item attaching a tetrahedron along a triangle; \item identifying two boundary triangles that share exactly 1 edge; \item identifying two boundary triangles that share 1 edge and the opposite vertex; \item identifying two boundary triangles that share 2 edges that meet in a pinch point; \item identifying two boundary triangles that share 2 edges that do not meet in a pinch point; \item identifying two boundary triangles that share 3 edges, all of whose vertices are pinch points; \item identifying two boundary triangles that share 3 edges, two of whose vertices are pinch points; \item identifying two boundary triangles that share 3 edges, one of whose vertices is a pinch point; \item identifying two boundary triangles that share 3 edges, none of whose vertices is a pinch point. \end{compactenum} \end{definition} {\noindent}For example, the first $N - 1$ steps of any local construction of a $3$-pseudomanifold with $N$ tetrahedra are all of type (i); the last step in the local construction of a $3$-sphere is necessarily of type~(ix). The following table summarizes the effects of the different steps: \begin{center} \small \begin{tabular}{rcc} \emph{ step type } & \emph{ no.\ of interior vertices } & \emph{ no.\ of connected components of the boundary}\\ (i) & + 0 & + 0 \\ (ii) & + 0 & + 0 \\ (iii) & + 0 & $\quad \; \;$+ 0 (*)\\ (iv) & + 0 & + 1 \\ (v) & + 1 & + 0 \\ (vi) & + 0 & + 3 \\ (vii) & + 1 & + 2 \\ (viii) & + 2 & + 0 \\ (ix) & + 3 & -- 1 \end{tabular} \end{center} where the asterisk recalls that a type (iii) step \emph{almost} disconnects the boundary, pinching it in a point. Now, let $B$ be an LC $3$-ball \emph{without} interior vertices. Steps of type (v), (vii), (viii) or (ix) sink respectively one, one, two and three vertices into the interior, so they cannot occur in the local construction of $B$.
Furthermore, any identification of type (vi) or (iv) increases the number of connected components in the boundary, hence it must be followed by at least one step of type (ix), which destroys a connected component of the boundary. Yet (ix) is forbidden, so no identification of type (vi) or (iv) can occur. Finally, the ``pinching step'' (iii) needs to be followed by one of the steps (vi), (vii), (viii) or (ix) in order to restore the ball topology -- but such steps are forbidden. This leads us to the following Lemma: \newpage \begin{lemma} \label{lem:sudoku} Let $B$ be an LC $3$-pseudomanifold. The following are equivalent: \begin{compactenum}[\rm (1)] \item in some local construction for $B$ all steps are of type (i) or (ii); \item in every local construction for $B$ all steps are of type (i) or (ii); \item $B$ is a $3$-ball without interior vertices. \end{compactenum} \end{lemma} We will use Lemma \ref{lem:sudoku} to obtain examples of non-LC $3$-balls. We already know that non-collapsible balls are not LC, by Corollary \ref{thm:LCcollapsible}: so a $3$-ball with a knotted spanning edge cannot be LC if the knot is the sum of two or more trefoil knots. (See also Bing \cite{BING} and Goodrick \cite{GOO}.) What about balls with a spanning edge realizing a single trefoil knot? \begin{proposition} \label{prop:knottednotLC} An LC $3$-ball without interior vertices does not contain any knotted spanning edge. \end{proposition} \begin{proof} An LC $3$-ball $B$ without interior vertices is obtained from a tree of tetrahedra via local gluings of type (ii), by Lemma \ref{lem:sudoku}. A tree of tetrahedra has no interior edge. Each type~(ii) step preserves the existing spanning edges (because it does not sink vertices into the interior), and creates one more spanning edge $e$, clearly unknotted (because the other two edges of the sunk triangle form a boundary path that ``closes up'' the edge $e$ onto an $S^1$ bounding a disc inside $B$). 
It is easy to verify that the subsequent type (ii) steps leave such an edge $e$ spanning and unknotted. \end{proof} \begin{remark} The presence of knots/knotted spanning edges is not the only obstruction to local constructibility. Bing's thickened house with two rooms \cite{BING, HACHIweb} is a $3$-ball $B$ with all vertices on the boundary, so that every interior triangle of $B$ has at most one edge on the boundary $\partial B$. Were $B$ LC, every step in its local construction would be of type~(ii) (by Lemma \ref{lem:sudoku}); in particular, the last triangle to be sunk into the interior of $B$ would have exactly two edges on the boundary of $B$. Thus Bing's thickened house with two rooms cannot be LC, even if it does not contain a knotted spanning edge. \end{remark} \begin{example} Furch's $3$-ball \cite[p.~73]{FUR} \cite[p.~110]{BING} can be triangulated without interior vertices (see e.g. \cite{HACHIweb}). Since it contains a knotted spanning edge, by Proposition \ref{prop:knottednotLC} Furch's ball is not LC. \end{example} \begin{remark}\label{remark:hachimori} In \cite[Lemma 2]{HACHI}, Hachimori claimed that any $3$-ball $C$ obtained from a constructible $3$-ball $C'$ via a type~(ii) step is constructible. This would imply by Lemma \ref{lem:sudoku} that all LC $3$-balls without interior vertices are constructible, which is stronger than Proposition \ref{prop:knottednotLC} since constructible $3$-balls do not contain knotted spanning edges \cite[Lemma 1]{HZ}. Unfortunately, Hachimori's proof \cite[p. 227]{HACHI} is not satisfactory: If $C'=C'_1 \cup C'_2$ is a constructible decomposition of $C'$, and $C_i$ is the subcomplex of $C$ with the same facets as $C'_i$, then $C=C_1 \cup C_2$ need not be a constructible decomposition for $C$.
(For example, if the two glued triangles both lie on $\partial C_1'$, and if the two vertices that the triangles do not have in common lie in $C_1' \cap C_2'$, then $C_1 \cap C_2$ is not a $2$-ball and one of $C_1$ and $C_2$ is not a $3$-ball.) At present we do not know whether Hachimori's claim is true: Does $C'$ admit a different constructible decomposition that survives the type~(ii) step? On this depends the correctness of the algorithm \cite[p.~227]{HACHI} \cite[p.~101]{Hthesis} to test \emph{constructibility} of $3$-balls without interior vertices by cutting them open along triangles with exactly two boundary edges. However, we point out that Hachimori's algorithm can be validly used to decide the \emph{local constructibility} of $3$-balls without interior vertices: In fact, by Lemma \ref{lem:sudoku}, the algorithm proceeds by reversing the LC-construction of the ball. \end{remark} \medskip \noindent We can now move on to complete the proof of our Theorem \ref{thm:hierarchyballs}. Inspired by Proposition \ref{prop:knottednotLC}, we show that a \emph{collapsible} $3$-ball without interior vertices may contain a knotted spanning edge. Our construction is a tricky version of Lickorish--Martin's (see Example \ref{thm:examplelick}). \begin{thm} \label{thm:collapsiblenonLC} Not all collapsible $3$-balls are LC. \end{thm} \begin{proof} Start with a large $m \times m \times 1$ pile of cubes, triangulated in the standard way, and take away two distant cubes, leaving only their bottom squares $X$ and $Y$. The $3$-complex $C$ obtained can be collapsed vertically onto its square basis; in particular, it is collapsible, and has no interior vertices. 
Let $C'$ be a $3$-ball with two tubular holes drilled away, but where (1) each hole has been corked at the bottom with a $2$-disk, and (2) the tubes are disjoint but intertwined, so that a closed path that passes through both holes, and between them traverses the top resp.\ bottom face of $C'$, yields a trefoil knot (see Figure \ref{fig:LickMar}). \enlargethispage{3mm} \begin{figure}[htbp] \centering\vskip-2mm \includegraphics[height=36mm]{LickorishMartin3.eps}\vskip-3mm \caption{\small $C$ and $C'$ are obtained from a $3$-ball by drilling away two tubular holes, and then ``corking'' the holes on the bottom with $2$-dimensional membranes.} \label{fig:LickMar} \end{figure} $C$ and $C'$ are homeomorphic. Any homeomorphism induces on $C'$ a collapsible triangulation with no interior vertices. $X$ and $Y$ correspond via the homeomorphism to the corking membranes of $C'$, which we will call correspondingly $X'$ and~$Y'$. To get from $C'$ to a ball with a knotted spanning edge we will carry out two more steps: \begin{compactenum}[(i)] \item create a single edge $[x', y']$ that goes from $X'$ to $Y'$; \item thicken the ``bottom'' of $C'$ a bit, so that $C'$ becomes a $3$-ball and $[x', y']$ becomes an interior edge (even if its endpoints are still on the boundary). \end{compactenum} We perform both steps by adding cones over $2$-disks to the complex. Such steps preserve collapsibility, but in general they produce interior vertices; thus we choose ``specific'' disks with few interior vertices. \begin{compactenum}[(i)] \item Provided $m$ is large enough, one finds a ``nice'' strip $F_1,F_2,\dots,F_k$ of triangles on the bottom of $C'$, such that $F_1\cup F_2\cup \dots\cup F_k$ is a disk without interior vertices, $F_1$ has a single vertex $x'$ in the boundary of $X'$, while $F_k$ has a single vertex $y'$ in the boundary of $Y'$, and the whole strip intersects $X'\cup Y'$ only in $x'$ and~$y'$.
Then we add a cone to $C'$, setting \[C_1 \ :=\ C' \cup \left( y'*(F_1\cup F_2\cup \dots\cup F_{k-1}) \right). \] (An explicit construction of this type is carried out in \cite[pp.~164-165]{HZ}.) Thus one obtains a collapsible $3$-complex $C_1$ with no interior vertex, and with a direct edge from $X'$ to $Y'$. \item Let $R$ be a $2$-ball inside the boundary of $C_1$ that contains in its interior the $2$-complex $X' \cup Y' \cup [x',y']$, and such that every interior vertex of $R$ lies either in $X'$ or in $Y'$. Take a new point $z'$ and define $C_2 \ :=\ C_1 \cup (z' * R)$. \end{compactenum} As $z' * R$ collapses onto $R$, it is easy to verify that $C_2$ is a collapsible $3$-ball with a knotted spanning edge $[x', y']$. By Proposition \ref{prop:knottednotLC}, $C_2$ is not LC. \end{proof} \begin{cor}\label{cor:badcollapsible3ball} There exists a collapsible $3$-ball $B$ such that for any boundary facet $\sigma$, the ball~$B$ does not collapse onto $\partial B - \sigma$. \end{cor} Theorem~\ref{thm:collapsiblenonLC} can be extended to higher dimensions by taking cones. In fact, even though the link of an LC complex need not be LC, the link of an LC closed star is indeed LC. \begin{proposition}\label{prop:LCcones} Let $C$ be a $d$-pseudomanifold and $v$ a new point. $C$ is LC if and only if $v*C$ is LC. \end{proposition} \begin{proof} The implication ``if $C$ is LC, then $v*C$ is LC'' is straightforward. For the converse, assume $T_i$ and $T_{i+1}$ are intermediate steps in the local construction of $v*C$, so that passing from $T_i$ to $T_{i+1}$ we glue together two adjacent $d$-faces $\sigma', \sigma''$ of $\partial T_i$. Let $F$ be any $(d-1)$-face of $T_i$. If $F$ does not contain $v$, then $F$ is in the boundary of $v * C$, so $F \in \partial T_{i+1}$. Therefore, $F$ cannot belong to the intersection of $\sigma'$ and $\sigma''$, which is sunk into the interior of $T_{i+1}$. 
So, every $(d-1)$-face in the intersection $\sigma' \cap \sigma''$ must contain the vertex $v$. This implies that $\sigma' = v * S'$ and $\sigma'' = v * S''$, with $S'$ and $S''$ \emph{distinct} $(d-1)$-faces. $S'$ and $S''$ must share some $(d-2)$-face, otherwise $\sigma'$ and $\sigma''$ would not be adjacent. So from a local construction of $v * C$ we can read off a local construction of $C$. \end{proof} \begin{cor} \label{thm:badlycollapsibleDball} For every $d\ge3$, not all collapsible $d$-balls are LC. \end{cor} \begin{proof} All cones are collapsible. If $B$ is a non-LC $d$-ball, then $v \ast B$ is a non-LC $(d+1)$-ball by Proposition \ref{prop:LCcones}. \end{proof} We conclude this chapter by observing that Chillingworth's theorem, ``every geometric triangulation of a convex $3$-dimensional polytope is collapsible'', can be strengthened as follows. \begin{thm}[Chillingworth \cite{CHIL}] \label{thm:chil} Every $3$-ball embeddable as a convex subset of the Euclidean $3$-space $\mathbb{R}^3$ is LC. \end{thm} \begin{proof} The argument of Chillingworth for collapsibility runs by showing that $B \searrow \; \partial B - \sigma$, where $\sigma$ is any triangle in the boundary of $B$. Now Theorem \ref{thm:mainballs} ends the proof. \end{proof} Thus any subdivided $3$-simplex is LC. If Hachimori's claim is true (see Remark \ref{remark:hachimori}), then any subdivided $3$-simplex with all vertices on the boundary is also constructible. (So far we can only exclude the presence of knotted spanning edges in it: See Lemma \ref{lem:sudoku}.) However, a subdivided $3$-simplex might be non-shellable even if it has all vertices on the boundary (Rudin's ball is an example). \section{Upper bounds on the number of LC $d$-spheres.}\label{sec:numbers} For fixed $d\geq 2$ and a suitable constant $C$ that depends on $d$, there are fewer than $C^N$ combinatorial types of LC $d$-spheres with $N$ facets.
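As a numerical companion to this section (our own sketch, not part of the proofs), the Fuss--Catalan numbers $C_d(N)=\frac{1}{(d-1)N+1}\binom{dN}{N}$ used below, and the explicit base $d\,\big(\tfrac{d}{d-1}\big)^{d-1} 2^{(2d^2-d)/3}$ of the final exponential bound, can be evaluated directly:

```python
from math import comb

def fuss_catalan(d, N):
    """C_d(N) = binom(dN, N) / ((d-1)N + 1): the number of planted plane
    d-ary trees with N non-leaf vertices (equivalently, of rooted trees
    of N d-simplices)."""
    q, r = divmod(comb(d * N, N), (d - 1) * N + 1)
    assert r == 0  # the quotient is always an integer
    return q

def bound_constant(d):
    """Base of the exponential upper bound for LC d-spheres with N facets."""
    return d * (d / (d - 1)) ** (d - 1) * 2 ** ((2 * d * d - d) / 3)

# d = 2 recovers the ordinary Catalan numbers:
print([fuss_catalan(2, N) for N in range(1, 6)])  # [1, 2, 5, 14, 42]
print(bound_constant(3))                          # 216.0
```

For $d=3$ the constant evaluates to exactly $216$, and for $d=4$ to just below $6117$, matching the explicit bounds quoted at the end of this section.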
Our proof for this fact is a $d$-dimensional version of the main theorem of Durhuus \& Jonsson \cite{DJ}, and allows us to determine an explicit constant $C$, for any $d$. It consists of two different phases: \begin{compactenum}[1. ] \item we observe that there are fewer trees of $d$-simplices than planted plane $d$-ary trees, which are counted by order~$d$ Fuss--Catalan numbers; \item we count the number of ``LC matchings'' according to ridges in the tree of simplices. \end{compactenum} \subsection{Counting the trees of $d$-simplices.} Here we establish that there are fewer than $ C_d (N) := \frac{1}{(d-1) N + 1} \binom{d N}{N} $ trees of $N$ $d$-simplices. \begin{lemma} \label{thm:treesA} Every tree of $N$ $d$-simplices has $(d-1)N+2$ boundary facets of dimension $d-1$ and $N-1$ interior faces of dimension $d-1$. \\ It has $\frac d2 ((d-1)N+2)$ faces of dimension $d-2$, all of them lying in the boundary. \end{lemma} By \emph{rooted} tree of simplices we mean a tree of simplices $B$ together with a distinguished facet $\delta$ of $\partial B$, whose vertices have been labeled $1, 2, \dots, d$. Rooted trees of $d$-simplices are in bijection with ``planted plane $d$-ary trees'', that is, plane rooted trees such that every non-leaf vertex has exactly $d$ (left-to-right-ordered) sons; cf.~\cite{Matousek}. \begin{proposition}\label{prop:countdtrees} There is a bijection between rooted trees of $N$ $d$-simplices and planted plane $d$-ary trees with $N$ non-leaf vertices, which in turn are counted by the Fuss--Catalan numbers $ C_d (N) = \frac{1}{(d-1) N + 1} \, \binom{d N}{N}$. Thus, the number of combinatorially-distinct trees of $N$ $d$-simplices satisfies \[\frac{1}{(d-1)N+2} \; \frac{1}{d!} \; C_d(N)\ \ \le\ \ \# \; \{ \hbox{ trees of } N \; \hbox{ $d$-simplices } \}\ \ \le\ \ C_d(N).
\] \end{proposition} \begin{proof} Given a rooted tree of $d$-simplices with a distinguished facet $\delta$ in its boundary, there is a unique extension of the labeling of the vertices of $\delta$ to a labeling of all the vertices by labels $1,2,\dots,d+1$, such that no two adjacent vertices get the same label. Thus each $d$-simplex receives all $d+1$ labels exactly once. Now, label each $(d-1)$-face by the unique label that none of its vertices has. With this we get an edge-labeled rooted $d$-ary tree whose non-leaf vertices correspond to the $N$ $d$-simplices; the root corresponds to the $d$-simplex that contains $\delta$, and the labeled edges correspond to all the $(d-1)$-faces other than $\delta$. We get a plane tree by ordering the down-edges at each non-leaf vertex left to right according to the label of the corresponding $(d-1)$-face. The whole process is easily reversed, so that we can get a rooted tree of $d$-simplices from an arbitrary planted plane $d$-ary tree. There are exactly $ C_d (N) = \frac{1}{(d-1) N + 1} \, \binom{d N}{N}$ planted plane $d$-ary trees with $N$ interior vertices (see e.g.\ Aval \cite{Aval}; the integers $C_2(N)$ are the ``Catalan numbers'', which appear in many combinatorial problems, see e.g.\ Stanley \cite[Ex.~6.19]{Stanley2}). Any tree of $N$ $d$-simplices has exactly $(d-1)N+2$ boundary facets, so it can be rooted in exactly $\left((d-1)N + 2 \right)d!$ ways, which, however, need not be inequivalent. This explains the first inequality claimed in the proposition. Finally, combinatorially-inequivalent trees of $d$-simplices also yield inequivalent rooted trees, whence the second inequality follows.
\end{proof} \begin{cor}\label{cor:exponentialbounds} The number of trees of $N$ $d$-simplices, for $N$ large, is bounded by \[ \binom{dN}{N} \; \sim \; \Big( d \cdot \big(\tfrac{d}{d-1} \big)^{d-1} \Big)^N \ < \ (d e)^N .\] \end{cor} \subsection{Counting the matchings in the boundary.} We know from the previous section that there are exponentially many trees of $N$ $d$-simplices. Our goal is to find an exponential upper bound for the LC spheres obtainable by a matching of adjacent facets in the boundary of one fixed tree of simplices. \begin{thm} \label{thm:announced} Fix $d \geq 2$. The number of combinatorially distinct LC $d$-spheres (or LC $d$-balls) with $N$ facets, for $N$ large, is not larger than \[ \Big( d \cdot \big(\tfrac{d}{d-1} \big)^{d-1} \cdot 2^{\; \frac{2d^2 - d}{3}} \Big)^N .\] \end{thm} \begin{proof} Let us fix a tree of $N$ $d$-simplices $B$. We adopt the word ``couple'' to denote a pair of facets in the boundary of $B$ that are glued to one another during the local construction of $S$. Let us set $D:=\frac12(2+N(d-1))$, which is an integer. By Lemma \ref{thm:treesA}, the boundary of the tree of $N$ $d$-simplices contains $2 D$ facets, so each perfect matching is just a set of $D$ pairwise disjoint couples. We are going to partition every perfect matching into ``rounds''. The first round will contain couples that are adjacent in the boundary of the tree of simplices. Recursively, the $(i+1)$-th round will consist of all pairs of facets that \emph{become} adjacent only after a pair of facets are glued together in the $i$-th round. Selecting a pair of adjacent facets is the same as choosing the ridge between them; and by Lemma \ref{thm:treesA}, the boundary contains $d D$ ridges. Thus the first round of identifications consists in choosing $n_1$ ridges out of $d D$, where $n_1$ is some positive integer. 
After each identification, at most $d-1$ new ridges are created; so, after this first round of identifications, there are at most $(d-1)n_1$ new pairs of adjacent facets. In the second round, we identify $2n_2$ of these newly adjacent facets: as before, it is a matter of choosing $n_2$ ridges, out of the at most $(d-1) n_1$ just created ones. Once this is done, at most $(d-1) n_2$ ridges are created. And so on. We proceed this way until all the $2 \, D$ facets in the boundary of $B$ have been matched (after $f$ steps, say). Clearly $n_1 + \ldots + n_f = D$, and since the $n_i$'s are positive integers, $f \leq D$ must hold. This means there are at most \[ \sum_{f=1}^{D} \quad \sum_{\begin{array}{c} n_1, \ldots, n_f \\ n_i \geq 1, \,\sum n_i = D \\ n_{i+1} \leq (d-1)n_i \end{array}} \binom{d D}{n_1} \binom{(d-1)n_1}{n_2} \binom{(d-1)n_2}{n_3} \cdots \binom{(d-1) n_{f-1}}{n_f} \] possible perfect matchings of $(d-1)$-simplices in the boundary of a tree of $N$ $d$-simplices. We sharpen this bound by observing that not all ridges may be chosen in the first round of identifications. For example, we should exclude those ridges that belong to just two $d$-simplices of $B$. An easy double-counting argument reveals that in a tree of $d$-simplices, the number of ridges belonging to at least 3 $d$-simplices is less than or equal to $\frac{N}{3} \; \binom{d+1}{2}$. So in the upper bound above we may replace the first factor $\binom{d D}{n_1}$ with the smaller factor $\binom{\frac{N}{3} \; \binom{d+1}{2}} {n_1}$. To bound the sum from above, we use $\binom{n}{k} \leq 2^n$ and $n_1 + \cdots + n_{f-1}<n_1 + \cdots + n_f=D$, while ignoring the conditions $n_{i+1} \leq (d-1)n_i$. Thus we obtain the upper bound \[ \hbox{\large 2 }^{\frac{N}{3} \binom{d+1}{2} + \frac{N}{2}(d-1)^2 + (d-1)} \; \cdot \; \sum_{f=1}^{D} \binom{D -1}{f-1} \ \ =\ \ \hbox{\large 2 }^{\frac{N}{3} (2d^2 - d) + (d-1)}. \] The factor $2^{d-1}$ is asymptotically negligible. 
Thus the number of ways to fold a tree of $N$ $d$-simplices into a sphere via a local construction sequence is smaller than $2^{\; \frac{2d^2 - d}{3} \; N}$. Combining this with Proposition \ref{prop:countdtrees}, we conclude the proof for the case of $d$-spheres. We leave the adaptation of the proof for $d$-balls (or general LC $d$-pseudomanifolds) to the reader. \end{proof} The upper bound of Theorem~\ref{thm:announced} can be simplified in many ways. For example, for $d \geq 16$ it is smaller than $\sqrt[3]{4}^{d^2 N}$. From Theorem \ref{thm:announced} we obtain explicit upper bounds: \begin{compactitem} \item there are fewer than $216^N$ LC $3$-spheres with $N$ facets, \item there are fewer than $6117^N$ LC $4$-spheres with $N$ facets, \end{compactitem} and so on. We point out that these upper bounds are not sharp, as we overcounted both on the combinatorial side and on the algebraic side. When $d=2$, Tutte's upper bound is asymptotically $3.08^N$, whereas the one given by our formula is $16^N$. When $d=3$, however, our constant is smaller than what follows from Durhuus--Jonsson's original argument: \begin{compactitem}[ -- ] \item we improved the matchings-bound from $384^N$ to $32^N$; \item for the count of trees of tetrahedra we obtain an essentially sharp bound of $6.75^N$. (The value implicit in the Durhuus--Jonsson argument \cite[p. 184]{DJ} is larger since one has to take into account that different trees of tetrahedra can have the same unlabeled dual graph.) \end{compactitem} \begin{cor} For any fixed $d\ge2$, there are exponential lower and upper bounds for the number of LC $d$-spheres on $N$ facets. \end{cor} \begin{proof} We have just obtained an upper bound; we also get a lower bound from Proposition \ref{prop:countdtrees}/\allowbreak Corollary \ref{cor:exponentialbounds}, since the boundary of a tree of $(d+1)$-simplices is a stacked {$d$-sphere}, and for $d\ge2$ the stacked $d$-sphere determines the tree of $(d+1)$-simplices uniquely.
\end{proof} We know very little about the number of LC $d$-spheres with $N$ facets when $d$ is not constant and $N$ is relatively small (say, bounded by a polynomial) in terms of $d$ --- and whether the LC condition is crucial for that. Compare Kalai \cite{Kalai}. \medskip \noindent \textbf{Acknowledgement.} We are very grateful to Matthias Staudacher, Davide Gabrielli, Niko Witte, Raman Sanyal, Thilo R\"{o}rig, Frank Lutz, Gil Kalai, and Emo Welzl for useful discussions and references. Many thanks also to the anonymous referees for the very careful reviews. \begin{small}
\section{Introduction} Perturbation theory can be a frustrating tool for field theorists. Sometimes it provides extremely accurate answers; sometimes it is not even qualitatively correct. In recent years, our main goal has been to construct modified perturbative series which are convergent and accurate. As briefly reviewed in Section \ref{sec:large}, our approach consists in removing large field configurations in a way that preserves the closeness to the correct answer. In the case of quenched $QCD$, there are several questions that are relevant for this approach and that have been addressed. How sensitive is the average plaquette $P$ to a large field cutoff \cite{effects04}? How does $P$ behave when the coupling becomes negative \cite{gluodyn04}? How does $P$ differ from its weak coupling expansion \cite{burgio97,rakow2002}? Are all the derivatives of $P$ with respect to $\beta$ continuous in the crossover region? The analysis \cite{rakow2002,third} of the weak series for $P$ up to order 10 \cite{direnzo2000} suggests an (unexpected) singularity in the second derivative of $P$, or in other words in the third derivative of the free energy. In the following, we report our recent attempts to find this singularity. As all the technical details regarding this question have just appeared in a preprint \cite{third}, we will only summarize the main results, leaving room for more discussion regarding the difference between the series and the numerical values of $P$. \section{Large field configurations and perturbation theory} \label{sec:large} The reason why perturbation theory sometimes fails is well understood for scalar field theory. Large field configurations have little effect on commonly used observables but are important for the average of large powers of the field and dominate the large order behavior of perturbative series. A simple way to remove the large field configurations consists in restricting the range of integration for the scalar fields.
\begin{equation} \prod_x \int_{-\phi_{max}}^{\phi_{max}}d\phi_x \ . \nonumber\end{equation} For a generic observable $Obs.$ in a $\lambda \phi^4$ theory, we then have \begin{equation} Obs.(\lambda )\simeq\sum_{k=0}^{K}a_k(\phi_{max})\lambda^k \ . \nonumber\end{equation} The method produces series which apparently converge in nontrivial cases such as the anharmonic oscillator and the $D=3$ Dyson hierarchical model \cite{convpert,tractable}. The modified theory with a field cutoff differs from the original theory. Fortunately, it seems possible, for a fixed order in perturbation theory, to adjust the field cutoff to an optimal value $\phi_{max}(\lambda,K)$ in order to minimize or eliminate the discrepancy with the (usually unknown) correct value of the observable in the original theory. In a simple example \cite{optim}, the strong coupling expansion can be used to calculate approximately this optimal $\phi_{max}(\lambda,K)$. This method provides an approximate treatment of the weak to strong coupling crossover, and we hope it can be extended to gauge theory, where this crossover \cite{kogut80} is a difficult problem. The calculation of the modified coefficients remains a challenge; however, approximately universal features of the transition between the small and large field cutoff limits for the modified coefficients of the anharmonic oscillator \cite{asymp} suggest the existence of simple analytical formulas to describe the field cutoff dependence of large-order coefficients. This method needs to be extended to the case of lattice gauge theories. Important differences with the scalar case need to be understood. For compact groups such as $SU(N)$, the gauge fields are not arbitrarily large. Consequently, it is possible to define a sensible theory at negative $\beta=2N/g^2$. However, the average plaquette tends to two different values in the two limits $g^2\rightarrow \pm 0$ \cite{gluodyn04}. This precludes the existence of a regular perturbative series about $g^2=0$.
A first order phase transition near $\beta =-22$ was also observed \cite{gluodyn04} for $SU(3)$. The impossibility of having a convergent perturbative series about $g^2=0$ is well understood \cite{plaquette} in the case of the partition function for a single plaquette, which after gauge fixing to the identity on three links reads \begin{equation} Z=\int dU {\rm e} ^{-\beta(1-\frac{1}{N}{\rm Re\, Tr}\, U)}\ . \end{equation} If we expand the group element $U=e^{igA}$ with $A=A^aT^a$ and the Haar measure in powers of $g$, we obtain a converging sum that allows us to calculate $Z$ accurately; however, the ``coefficients'' are $g$-dependent. This comes from the finite bounds of integration of the gauge fields, which are proportional to $1/g$. If $g^2$ is small and positive, we can extend the range of integration to infinity with errors that seem controlled by $\rm{e}^{-2\beta}$. By ``decompactifying'' the gauge fields, we have transformed a converging sum into a power series in $g$ with constant coefficients growing factorially with the order. The situation is now reminiscent of the scalar case and can be treated using this analogy. We can introduce a gauge invariant field cutoff that is treated as a $g$-independent quantity. For a given order in $g$, one can use the strong coupling expansion to determine the optimal value of this cutoff. This provides a significant improvement in regions where neither weak nor strong coupling is adequate \cite{plaquette}. This program can in principle be extended to LGT on $D$-dimensional lattices; however, the calculation of the modified coefficients is difficult. An appropriately modified version of the stochastic method seems to be the most promising for this task. As the technology for completing this task is being developed, we will discuss several questions about the average plaquette and its perturbative expansion.
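To make the one-plaquette discussion concrete, here is a small numerical sketch for the $U(1)$ analog (our simplification; the text treats $SU(N)$), where $Z(\beta)=\int_{-\pi}^{\pi}\frac{d\theta}{2\pi}\,{\rm e}^{-\beta(1-\cos\theta)}={\rm e}^{-\beta}I_0(\beta)$: the compact integral is finite and easy to evaluate, while decompactifying produces the standard asymptotic expansion of $I_0$, whose coefficients grow factorially, so its partial sums first approach $Z$ and then blow up.

```python
import math

def z_exact(beta, n=4096):
    """Compact one-plaquette integral for U(1) (midpoint rule; the
    integrand is smooth and periodic, so this converges very fast)."""
    h = 2 * math.pi / n
    s = sum(math.exp(-beta * (1 - math.cos(-math.pi + (i + 0.5) * h)))
            for i in range(n))
    return s * h / (2 * math.pi)

def z_asymptotic(beta, order):
    """Partial sum of the decompactified (asymptotic) series
    Z ~ (2 pi beta)^(-1/2) * sum_k a_k / beta^k with
    a_k = ((2k)!)^2 / (k!^3 2^(5k)) -- factorially growing coefficients."""
    total = 0.0
    for k in range(order + 1):
        a_k = math.factorial(2 * k) ** 2 / (math.factorial(k) ** 3 * 2 ** (5 * k))
        total += a_k / beta ** k
    return total / math.sqrt(2 * math.pi * beta)

beta = 5.0
# Truncating near the minimal term gives an error of order exp(-2*beta) ...
print(abs(z_asymptotic(beta, 10) - z_exact(beta)))
# ... while pushing to much higher orders makes the partial sums diverge.
print(abs(z_asymptotic(beta, 40) - z_exact(beta)))
```

The ${\rm e}^{-2\beta}$ scale of the best achievable truncation error in this toy model is consistent with the error estimate quoted in the text for extending the range of integration.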
\section{The average plaquette and its perturbative expansion in quenched $QCD$} We now consider an $SU(3)$ lattice gauge theory in 4 dimensions without quarks (quenched $QCD$). We use the Wilson action without improvement. Our main object will be the average plaquette action, denoted $P$, which can be expressed as $-\partial (\ln (Z)/6L^4)/\partial \beta$. The effect of a gauge invariant field cutoff is very small but of a different size below, near or above $\beta=5.6$ (see Fig. 6 of Ref. \cite{effects04}). This is in agreement with the idea that modifying the weight of the large field configurations affects the crossover behavior \cite{mack78}. The weak coupling series for $P$ has been calculated up to order 10 in Ref. \cite{direnzo2000}: \begin{equation} P_W(1/\beta)=\sum_{m=1}^{10} b_m \beta^{-m} +\dots \ . \nonumber\end{equation} The coefficients are given in Table 1. The values corresponding to the series and the numerical data calculated on a $16^4$ lattice are shown in Fig. \ref{fig:pade}. A discrepancy becomes visible below $\beta = 6$. The situation can be improved by using Pad\'e approximants; however, they do not show any change in curvature and often have poles near $\beta=5.2$. For comparison, Pad\'e approximants for the strong coupling expansion \cite{balian74err} depart visibly from the numerical values when $\beta$ becomes slightly larger than 5. In conclusion, it is not clear that by combining the two series we can get complete information regarding the crossover behavior. \begin{figure} \includegraphics[width=2.8in,angle=0]{wpade2.eps} \includegraphics[width=2.8in,angle=0]{strpade2.eps} \caption{Regular weak series (blue) and 4/6 weak Pad\'e (red) for the plaquette (left); 7/7 strong Pad\'e (right). } \label{fig:pade} \end{figure} The difference between the weak coupling expansion $P_W$ and the numerical data $P$ can be further analyzed.
From the example of the one-plaquette model \cite{plaquette}, one could infer that by adding the tails of integration, we should make errors of order $\rm{e}^{-C\beta}$, for some constant $C$. Consistent with this argument, the difference should scale as a power of the lattice spacing, namely \begin{equation} P_{Non Pert.}=(P-P_W)\propto a^A \propto \left({\rm e}^{-\frac{4\pi^2}{33}\beta} \right)^A \ . \end{equation} A case for $A=2$ has been made in Ref. \cite{burgio97} based on a series of order 8. Another analysis supports $A=4$ (the canonical dimension of $F_{\mu \nu}F^{\mu \nu }$) \cite{rakow2002,rakowthese}. Fig. \ref{fig:apower} shows fits at different orders and in different regions that support each of these possibilities. It would be interesting to study cases where long series are available and non-perturbative effects well understood in order to define a prescription to extract the power properly. \begin{figure} \includegraphics[width=2.9in,angle=0]{sd8.eps} \includegraphics[width=2.9in,angle=0]{sd10bis.eps} \caption{$\log_{10}|P-P_W|$ for order 8 (left) and 10 (right, in a different range of $\beta$); the constant is fitted assuming $a^2$ (blue) or $a^4$ (red). } \label{fig:apower} \end{figure} The series $P_W$ has another intriguing feature: $r_m=b_m/b_{m-1}$, the ratio of two successive coefficients, seems to extrapolate to a value near 6 when $m\rightarrow\infty$ \cite{rakow2002}. This suggests a behavior of the form \begin{equation} P=(1/\beta _c -1/\beta )^{-\gamma } (A_0 + A_1 (1/\beta _c -1/\beta)^{ \Delta } +\dots)\ , \label{eq:convpar} \nonumber\end{equation} as encountered in the study of the critical behavior of spin models. We have reanalyzed \cite{third} the series using estimators \cite{nickel80} known as the extrapolated ratio ($\widehat{R}_m$) and the extrapolated slope ($\widehat{S}_m$) in order to estimate $\beta_c$ and $\gamma $.
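As a rough back-of-the-envelope illustration (our own, using the coefficients $b_m$ of Table 1; the simple first-order extrapolation $\widehat{R}_m = m\,r_m-(m-1)\,r_{m-1}$ below is only a stand-in for the estimators of Ref. \cite{nickel80}):

```python
# Coefficients b_1 ... b_10 of the weak-coupling series for P (Table 1)
b = [2, 1.2208, 2.9621, 9.417, 34.39, 136.8, 577.4, 2545, 11590, 54160]

# plain ratios r_m = b_m / b_{m-1}, m = 2..10
r = {m: b[m - 1] / b[m - 2] for m in range(2, 11)}

# first-order extrapolation removing the 1/m correction:
# if r_m ~ beta_c (1 + c/m), then m r_m - (m-1) r_{m-1} -> beta_c
R_hat = {m: m * r[m] - (m - 1) * r[m - 1] for m in range(3, 11)}

print(round(r[10], 3))      # plain ratios are still far from their limit
print(round(R_hat[10], 3))  # already close to the quoted value 5.74
```

Even this crude extrapolation lands near $\beta_c \simeq 5.74$, the value quoted in the text from the more refined analysis.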
We found that the weak series suggests \begin{equation} \label{eq:critical} P\propto (1/5.74-1/\beta)^{1.08} \ . \end{equation} These estimators are sensitive to small variations in the coefficients and show a remarkable stability when the volume is increased from $8^4$ to $24^4$. The numbers are in good agreement with the estimates of Ref. \cite{rakow2002} obtained with other methods. A finite radius of convergence is not expected, and one does not expect any singularity between the limits where confinement and asymptotic freedom hold. It may simply be that the series is too short to draw conclusions about its asymptotic behavior. A simple example where this happens \cite{third} is \begin{equation} Q(\beta)=\int_0^{\infty}dt {\rm e}^{-t}t^{\alpha}[1-t\beta_c/(\alpha \beta)]^{-\gamma} \ , \end{equation} with $\alpha$ sufficiently large. If $m\ll\alpha$, $r_m\simeq \beta_c(1+(\gamma -1)/m)$. For $m\gg\alpha$ we have $r_m \propto m$ and the coefficients grow factorially. If we take Eq. (\ref{eq:critical}) seriously, it implies that the second derivative of $P$ diverges near $\beta =5.7$. We have searched for such a singularity \cite{third}. We have shown that the peak in the third derivative of the free energy present on $4^4$ lattices disappears if the size of the lattice is increased isotropically up to a $10^4$ lattice. On the other hand, on $4\times L^3$ lattices, a jump in the third derivative persists when $L$ increases. Its location coincides with the onset of a non-zero average for the Polyakov loop and seems consequently related to the finite temperature transition. It should be noted that the possibility of a third-order phase transition has been discussed for effective theories of the Polyakov loop \cite{pisarski}. A few words are in order about the tadpole improvement \cite{lepage92} of the weak series.
If we consider the resummation \begin{equation} P_W(1/\beta)=\sum_{m=1}^{K} e_m \beta_R^{-m} + O(\beta_R^{-K-1}) \end{equation} with $\beta_R=\beta (1-\sum_{m\geq 1} b_m \beta^{-m})$, the ratios $e_{m}/e_{m-1}$ stay close to $-1.5$ for $m$ up to 7, but seem to oscillate more strongly at larger $m$. \begin{table}[h] \begin{tabular}{||c||c|c|c|c|c|c|c|c|c|c||} \hline $m$&1&2&3&4&5&6&7&8&9&10\cr \hline $b_m$& 2 & 1.2208 & 2.9621 & 9.417 & 34.39 & 136.8 & 577.4 & 2545 & 11590 &54160 \cr $e_m$&2 & -2.779 &3.637 &-3.961 &4.766 & -3.881 & 6.822 & -1.771 & 17.50 & 48.08 \cr \hline \end{tabular} \caption{$b_m$: regular coefficients; $e_m$: tadpole-improved coefficients} \end{table} This research was supported in part by the Department of Energy under Contract No. FG02-91ER40664. We thank G. Burgio, F. di Renzo and P. Rakow for interesting discussions.
\section{Bar construction and Koszul duality for operads} In this section, we recall the definition of the reduced bar construction and the definition of Koszul duality for operads. For more details and references, we refer the reader to \cite{Benoit}. \subsection{Augmentation ideal of an operad} The {\it identity operad} is defined by $I(r)=\mathbb K$ for $r=1$ and $I(r)=0$ for $r \neq 1$. An operad $\mathcal P$ equipped with a morphism $\epsilon : \mathcal P \rightarrow I$ is called an augmented operad. The augmentation ideal of $\mathcal P$ is $\tilde{\mathcal P}= \ker \epsilon$. As $\epsilon$ is a retraction of the operad unit $I \rightarrow \mathcal P$, we have a splitting $\mathcal P = I \oplus \tilde{\mathcal P}$. \subsection{Reduced bar construction} Recall that the {\it suspension of a dg-module} $M$ is the dg-module $\Sigma M$ defined by $\mathbb K e \otimes M$, where $\deg(e)=1$. We have a natural identification $(\Sigma M)_d = M_{d-1}$. For a non-graded operad $\mathcal P$, the module $\Sigma \tilde{\mathcal P}(r)$ is equal to the module $\tilde{\mathcal P}(r)$ in degree 1 and is zero in degree $* \neq 1$. The {\it reduced bar construction} $B(\mathcal P)$ is a quasi-cofree cooperad defined by $F^c(\Sigma \tilde{\mathcal P})$, the cofree cooperad generated by the suspension $\Sigma \tilde{\mathcal P}$. The bar construction $B(\mathcal P)$ is equipped with a differential given by a coderivation $\partial : F^c(\Sigma \tilde{\mathcal P}) \rightarrow F^c(\Sigma \tilde{\mathcal P})$ which is determined by the partial composition products of $\mathcal P$. Recall that $F^c(\Sigma \tilde{\mathcal P})$ is generated by tensors $\bigotimes_{v\in V(\tau)} x_v$, where $\tau$ ranges over trees, the notation $V(\tau)$ refers to the set of vertices of $\tau$, and $x_v$ is an element of $\Sigma \tilde{\mathcal P}$ associated to each vertex. More details on this construction are given in section \ref{bardetails}. \vspace{0.3cm} We are interested in the homology of this bar complex.
To calculate it, we use the fact that many operads come equipped with a weight grading. \subsection{Modules equipped with a weight grading}\label{poidstens} We consider $\mathbb K$-modules $V$ equipped with a weight grading, that is a splitting $V = \bigoplus V_{(s)}$. In the case of a dg-module $V$, the homogeneous components $V_{(s)}$ are supposed to be sub-dg-modules of $V$. A tensor product of modules equipped with a weight grading inherits a natural weight grading such that $(V \otimes W) _{(n)} = \bigoplus_{s+t=n} V_{(s)} \otimes W_{(t)}$. \subsection{Operads equipped with a weight grading}\label{poidsoperadelib} An operad $\mathcal P$ is {\it equipped with a weight grading} if each term $\mathcal P(n)$ is weight graded and the composition product $\mathcal P \circ \mathcal P \rightarrow \mathcal P$ preserves the weight grading. Equivalently, this condition asserts that the partial composition products of homogeneous elements $p \in \mathcal P_{(s)}(m)$ and $ q \in \mathcal P_{(t)}(n)$ satisfy $p \circ_i q \in \mathcal P_{(s+t)} (m+n-1)$. An operad equipped with a weight grading is called {\it connected} if $$\mathcal P_{(0)}(r) = \left \{ \begin{array}{rl} \mathbb K . 1 & \text{for $r=1$} \\ 0 & \text{else} \end{array} \right. $$ A connected operad is automatically augmented, the augmentation being the projection onto the weight $0$ component. We have $\tilde{\mathcal P}_{(s)}=\mathcal P_{(s)}$ if $s \neq 0$. In what follows, we will use the free operad $F(M)$. This operad has a natural weight grading which makes it a weight-graded operad. Recall briefly that the free operad $F(M)$, like the cofree cooperad $F^c(M)$, is spanned by tensors on trees $\bigotimes_{v\in V(\tau)} x_v$, representing formal compositions of operations. The weight of such a tensor in $F(M)$ is given by its number of factors $x_v$. Notice that the free operad is connected. We will go back to the construction of $F(M)$ in section \ref{operadelibre}.
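On the level of dimensions, the weight grading of a tensor product from section \ref{poidstens} behaves like a convolution (the product of the associated Poincar\'e series). A minimal sketch, with hypothetical dimension sequences chosen only for illustration:

```python
def tensor_dims(dim_v, dim_w):
    """Dimension sequence of (V tensor W) by weight, from those of V and W:
    dim (V (x) W)_(n) = sum over s+t=n of dim V_(s) * dim W_(t)."""
    n_max = len(dim_v) + len(dim_w) - 2
    out = [0] * (n_max + 1)
    for s, dv in enumerate(dim_v):
        for t, dw in enumerate(dim_w):
            out[s + t] += dv * dw
    return out

# V with dimensions (1, 2) in weights 0, 1; W with dimensions (1, 0, 3):
print(tensor_dims([1, 2], [1, 0, 3]))  # [1, 2, 3, 6]
```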
\subsection{Homogeneous operadic ideals and quotients} A {\it homogeneous operadic ideal} is an operadic ideal $I$ such that $I= \bigoplus I_{(s)}$, where $I_{(s)} = I \cap \mathcal P_{(s)}$. We observe that the quotient of an operad equipped with a weight grading by a homogeneous ideal is naturally equipped with a weight grading. This assertion is an obvious generalization of a classical result for algebras. \subsection{Quadratic operads} A {\it quadratic operad} is an operad such that $\mathcal P = F (M)/I$, where $I=(\overline{R})$ is the operadic ideal generated by $R \subset F_{(2)}(M)$. For the Koszul duality, we use the sub-$\Sigma_*$-module $\overline{R} \subset F_{(2)}(M)$ generated by $R$. This sub-$\Sigma_*$-module generates the same operadic ideal $(R) = (\overline{R})$. We will see that the elements of $(\overline{R})$ are represented by trees where one of the vertices is labelled by an element of $\overline{R}$ and the other vertices by elements of $M$. A quadratic operad has a natural weight grading, induced by the weight grading of the free operad. For a quadratic operad such that $M(0)=0$, we have automatically $$\mathcal P_{(0)}(r) = \left \{ \begin{array}{rl} \mathbb K . 1, & \text{if $r=1,$} \\ 0, & \text{otherwise.} \end{array} \right. $$ We have a natural isomorphism $\mathcal P_{(1)}(r) = M(r)$. Moreover, we have $\mathcal P_{(2)}(r) = F_{(2)} (M)(r)/\overline{R}(r)$. The operads associated to associative, commutative, and Lie algebras, respectively, are quadratic. \subsection{Weight grading on the bar construction} If $\mathcal P$ is equipped with a weight grading, then $B(\mathcal P)$ has an induced weight grading. Formally, we use that $B(\mathcal P)$ is spanned by tensors $\bigotimes_v p_v$. The weight of such a tensor is the sum of the weights of the factors $p_v$, as defined in section \ref{poidstens}. The differential is homogeneous.
If we suppose that $\mathcal P_{(0)}$ is reduced to $\mathbb K.1$, then $\Sigma \tilde{\mathcal P}_{(0)}=0$. Hence the elements $p_v$ which occur in the treewise tensors of $B(\mathcal P)$ have weight at least $1$. As a consequence, we have $B_d(\mathcal P)_{(s)}=0$ if $d>s$. \subsection{Koszul operads}\label{Koszul} We work with the definition given by Fresse in \cite{Benoit}. It generalizes the original definition of Ginzburg and Kapranov \cite{GK} to operads which are not generated by binary operations. One says that a connected weight-graded operad $\mathcal P$ is Koszul if $H_*(B_*(\mathcal P)_{(s)})=0$ for $* \neq s$ (in words, if the homology of its bar construction is concentrated on the diagonal $*=s$). The {\it Koszul construction} is defined by $$K(\mathcal P)_{(s)}=H_s(B_*(\mathcal P)_{(s)}, \delta)= \ker (\delta : B_s(\mathcal P)_{(s)} \rightarrow B_{s-1}(\mathcal P)_{(s)} ) .$$ From the definition, $K(\mathcal P)_{(s)}$ is concentrated in degree $s$. We observe that the inclusion $K_d(\mathcal P)_{(s)} \rightarrow B_d(\mathcal P)_{(s)}$ is a morphism of complexes. The operad $\mathcal P$ is Koszul if and only if the inclusion morphism $K(\mathcal P) \rightarrow B(\mathcal P)$ is a quasi-isomorphism. \section{The language of trees} Trees allow us to represent graphically the elements of the free operad and of the bar construction. The goal of this section is to define the conventions used throughout the article to describe the structure of a tree. \vspace{0.3cm} \subsection{Vertices and edges}\label{defarbres} An {\it $n$-tree} is an abstract oriented tree together with one {\it outgoing edge} (the root of the tree) and $n$ {\it ingoing edges} (the entries of the tree) indexed by the set $\left\{1, \ldots, n\right\}$.
Formally, an $n$-tree $\tau$ is determined by a set of {\it vertices } $V(\tau)$ and by a set of {\it edges } $e \in E(\tau)$ oriented from a source $s(e) \in V(\tau) \coprod \left\{1, \ldots, n\right\}$ to a target $t(e) \in V(\tau) \coprod \left\{0\right\}$, with the following conditions: \begin{enumerate} \item There is a unique edge $e \in E(\tau)$ such that $t(e)=0$. We call this edge the {\it root}. \item For every vertex $v \in V(\tau)$, there is a unique $e \in E(\tau)$ such that $s(e) = v$. \item For every $i \in \left\{1, \ldots, n\right\}$, there is a unique $e$ such that $s(e)=i$. This edge is the $i$th entry of the tree. \item For every vertex $v$, there is a sequence of edges $e_1, \ldots, e_k$ such that $s(e_1)=v$, $t(e_j)=s(e_{j+1})$ for every $j \in \left[ 1,k-1 \right]$ and $t(e_k)= 0$. \end{enumerate} These conditions imply that the set $V(\tau) \coprod \left\{1, \ldots, n\right\}$ is equipped with a partial order so that $s(e)>t(e)$ for any edge $e$. The minimum of the order is $0$. There is an associated partial order on edges. The set $E'(\tau)$ of {\it internal edges} is the set $E(\tau)$ of edges minus the ingoing edges and the outgoing edge. We call a {\it leaf} the source of an ingoing edge. We draw trees with leaves on top and the root at the bottom. We say that a leaf $i$ is {\it linked to a vertex $v$} if there is a monotonic path of edges between $i$ and $v$. We assume also that a leaf $i$ is linked to itself. We define the entries of the vertex $v$ by $I_v = \left\{s(e), e \in E(\tau) \textrm{ such that } t(e)=v \right\}$. Then a tree structure is determined by a partition of the set $V(\tau) \coprod \left\{1, \ldots, n\right\}$ of the form $\coprod_{v \in V(\tau) \coprod \left\{0\right\}} I_v$.
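As an illustration, the four conditions above can be checked mechanically on a finite encoding of a tree. The sketch below is purely illustrative (the encoding and the names are ours, not part of the text): vertices are arbitrary labels, leaves are the integers $1,\ldots,n$, the root symbol is $0$, and an edge is a pair (source, target).

```python
from collections import Counter

def is_n_tree(vertices, edges, n):
    """Check conditions 1-4 for a candidate n-tree given by oriented edges."""
    leaves = set(range(1, n + 1))
    nodes = set(vertices) | leaves
    sources = [s for s, t in edges]
    # conditions 2 and 3: every vertex and every leaf is the source of exactly one edge
    if Counter(sources) != Counter(nodes):
        return False
    # condition 1: exactly one edge has target 0 (the root)
    if sum(1 for s, t in edges if t == 0) != 1:
        return False
    # every target must be a vertex or the root symbol 0
    if any(t not in set(vertices) | {0} for s, t in edges):
        return False
    # condition 4: from every vertex, following targets reaches 0 (no cycles)
    parent = dict(edges)  # source -> target, well defined by the first check
    for v in vertices:
        seen, cur = set(), v
        while cur != 0:
            if cur in seen or cur not in parent:
                return False
            seen.add(cur)
            cur = parent[cur]
    return True

# the 2-tree with a single vertex 'a':
print(is_n_tree({'a'}, [(1, 'a'), (2, 'a'), ('a', 0)], 2))  # True
# two edges pointing to the root: condition 1 fails
print(is_n_tree({'a'}, [(1, 'a'), (2, 0), ('a', 0)], 2))    # False
```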
\vspace{0.3cm} \subsection{Tree isomorphisms} An {\it isomorphism of $n$-trees} $f : \tau \rightarrow \tau'$ is defined by two bijections $$f_V : V(\tau) \rightarrow V(\tau') \textrm{ and } f_E : E(\tau) \rightarrow E(\tau') $$ which preserve the structure of the tree (the source and target of every edge). We extend $f_V$ by the identity on $\left\{1, \ldots, n\right\}$, so that the relation $I_{f_V (v)} = f_V (I_v)$ holds for every $v \in V(\tau) \coprod \left\{0\right\}$. The $n$-trees and their isomorphisms define a category. \vspace{0.3cm} \subsection{The $\Sigma_*$-category of trees} Let $T(n)$ be the category defined by the $n$-trees and their isomorphisms. This category has a weight splitting: $$T(n) = \coprod_{r=0}^{\infty}T_{(r)}(n),$$ where $T_{(r)}(n)$ is the category formed by trees with $r$ vertices. We can generalize the construction of $T(n)$ by indexing the entries of an $n$-tree by a set $I= \left\{i_1, \ldots, i_n\right\}$ of $n$ elements. We obtain the category $T(I)$ of $I$-trees. A bijection $u : I \rightarrow I'$ induces a functor $u_* : T(I) \rightarrow T(I')$ such that $u_*(T_{(r)}(I)) \subset T_{(r)}(I')$. A permutation $w \in \Sigma_n$ induces a functor from the category of $n$-trees to itself. Hence the symmetric group acts on $n$-trees. \vspace{0.3cm} \subsection{Subtrees}\label{sousarbre} A {\it subtree} $\sigma$ of a tree $\tau$ is a tree determined by subsets $V(\sigma) \subset V(\tau)$ and $E(\sigma) \subset E(\tau)$ such that $$v \in V(\sigma) \Longleftrightarrow \forall e \in E(\tau), (e \in E(\sigma) \Leftrightarrow s(e)=v \text{ or } t(e)=v).$$ The source and the target of an internal edge $e$ in $\sigma$ are the source and the target of $e$ in $\tau$. A leaf $s(e)$ in $\sigma$ is labelled by the minimum of the leaves which are linked to $s(e)$ in $\tau$. Graphically, a subtree corresponds to a connected part of the graph of the tree.
The subtree $\sigma$ of a tree $\tau$ generated by an edge $e \in E'(\tau)$ is the tree $\tau_e$ such that $V(\tau_e)=\{s(e),t(e)\}$ and $E'(\tau_e)=\{e\}$. The ingoing edges relative to $s(e)$ and $t(e)$ are kept, and the outgoing edge is linked to $t(e)$. The leaves are labelled as specified above. \vspace{0.3cm} \subsection{The operad of trees} One equips the sequence of categories $T(n)$ with the structure of an operad. The partial composition product $$ \circ_i : T_{(r)}(m) \times T_{(s)}(n) \rightarrow T_{(r+s)}(m+n-1)$$ is defined as follows. For $\sigma \in T_{(r)}(m)$ and $\tau \in T_{(s)}(n)$, the composite tree $\sigma \circ_i \tau$ is obtained by grafting the root of $\tau$ to the $i$th entry of $\sigma$ (\textit{cf}. figure 1 in the appendix at the end of the article). \vspace{0.3cm} \subsection{The module of treewise tensors} Let $M$ be a $\Sigma_*$-module. A {\it module of treewise tensors} $\tau(M)$ is associated to any tree $\tau$. Let $v$ be a vertex of $\tau$. Call $n_v$ the cardinality of $I_v$. Let $M(I_v)$ be the $\mathbb K$-module generated by tensors $f \otimes_{\Sigma_{n_v}} x_v$ where $x_v \in M(n_v)$ and $f$ is a bijection from $\left\{1, \ldots, n_v \right\}$ to the entries of $x_v$. One sets $$\tau(M) = \bigotimes_{v \in V(\tau)} M(I_v).$$ Observe that this construction is functorial in $\tau$: an isomorphism of trees $f : \tau \rightarrow \tau'$ induces a morphism $f_* : \tau(M) \rightarrow \tau'(M)$. In practice, we see a treewise tensor as a tree with vertices labelled by elements of $M$, or equivalently a tensor product arranged on a tree. Recall that a tree $\tau$ is called a corolla if it has only one vertex. For a corolla, we have an identification $\tau(M) \cong M(n)$ where $n$ is the number of entries of $\tau$.
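The grafting operation $\circ_i$ can be sketched on a simplified planar encoding, where a tree is either a leaf label or a tuple of subtrees. This toy encoding (ours, not the categorical one used in the text) forgets vertex labels, but it illustrates how the entries are relabelled so that $\sigma \circ_i \tau$ has $m+n-1$ entries:

```python
def leaves(tree):
    """Leaf labels of a tree, read off the nested-tuple encoding."""
    if isinstance(tree, int):
        return [tree]
    out = []
    for child in tree:
        out += leaves(child)
    return out

def graft(sigma, i, tau):
    """sigma o_i tau: tau's leaves j become j+i-1, sigma's leaves > i shift by n-1."""
    n = len(leaves(tau))
    def relabel_tau(t):
        return t + i - 1 if isinstance(t, int) else tuple(relabel_tau(c) for c in t)
    def insert(t):
        if isinstance(t, int):
            if t == i:
                return relabel_tau(tau)
            return t + n - 1 if t > i else t
        return tuple(insert(c) for c in t)
    return insert(sigma)

sigma = (1, 2)        # a corolla with two entries
tau = (1, (2, 3))     # a 3-tree
print(graft(sigma, 2, tau))  # (1, (2, (3, 4))), a tree with 2+3-1 = 4 entries
```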
\vspace{0.3cm} \subsection{The free operad}\label{operadelibre} The free operad has an explicit expansion so that $$F(M)(n)= \bigoplus_{\tau \in T(n)} \tau(M) / \cong.$$ In $F(M)$, the relation $\cong$ identifies treewise tensors which correspond to each other under an isomorphism. Explicitly, for $x \in \tau(M)$ and $x' \in \tau'(M)$, we have $x' \cong x$ if and only if $x'=f_* x$ for an isomorphism $f : \tau \rightarrow \tau'$. In this representation, the weight grading of the free operad defined in section \ref{poidsoperadelib} is given by the number of vertices of the tree. \vspace{0.3cm} \subsection{Construction without quotient}\label{sansquotient} Throughout the paper, we work with trees (called {\it reduced}) satisfying $I_v \neq \emptyset$ for every vertex $v$. A reduced tree has no automorphism except the identity. If $M(0)=0$, then the free operad involves only treewise tensors $x \in \tau(M)$ where $\tau$ is reduced. An operad is called {\it reduced} if it is spanned by treewise tensors on reduced trees. \vspace{0.3cm} We are going to use that a reduced tree has a canonical planar representation. This representation is determined by an ordering of the entries of each vertex $v$. We determine an order on $I_{v}$ in the following way: \begin{enumerate} \item To every $v'$ in $I_{v}$, we associate the minimum of the leaves linked to $v'$. \item We place the vertices $v'$ and the leaves directly linked to $v$ from left to right above $v$ in ascending order. \end{enumerate} The order gives a bijection between $\{1,\ldots,n_v\}$ and the entries of $v$. This bijection gives an isomorphism $M(I_v)\simeq M(n_v)$, for each $v\in V(\tau)$. As a consequence, for the module of treewise tensors $\tau(M)$, we obtain $\tau(M)\simeq\bigotimes_{v\in V(\tau)} M(n_v)$. To obtain a canonical representation of elements of the free operad, we fix also a set $T'(n)$ of representatives of isomorphism classes of $n$-trees.
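The min-leaf ordering just described can be made concrete. In the hypothetical sketch below (an encoding of our own choosing), a reduced tree is given by a map sending each vertex to its set of entries, where entries are either vertices or integer leaves, and the entries of each vertex are sorted by the minimal leaf linked to them:

```python
def min_leaf(node, children):
    """Minimal leaf linked to a node; a leaf (an integer) is linked to itself."""
    if isinstance(node, int):
        return node
    return min(min_leaf(entry, children) for entry in children[node])

def planar_order(children):
    """Entries of each vertex from left to right, in ascending min-leaf order."""
    return {v: sorted(entries, key=lambda e: min_leaf(e, children))
            for v, entries in children.items()}

# a 4-tree: the root vertex 'u' has entries {leaf 3, vertex 'v'}, and 'v' has {4, 1, 2}
children = {'u': {3, 'v'}, 'v': {4, 1, 2}}
print(planar_order(children))  # {'u': ['v', 3], 'v': [1, 2, 4]}
```

The vertex 'v' is placed to the left of the leaf 3 above the root because its minimal linked leaf is 1.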
The expansion of the free operad then reads: $$F(M)(n)\simeq\bigoplus_{\tau \in T'(n)} \tau(M) \simeq\bigoplus_{\tau \in T'(n)} \bigl\{\bigotimes_{v\in V(\tau)} M(n_v)\bigr\}.$$ \section{The Poincar\'e-Birkhoff-Witt criterion} The aim of this section is to give the PBW criterion. We define the notion of a PBW basis for an operad, generalizing what Priddy did in the case of algebras (\textit{cf}. \cite{Prid}). \vspace{0.3cm} \subsection{A basis of treewise tensors and of the free operad}\label{defbasetauM} Let $M$ be a $\Sigma_*$-module, with an ordered basis $B^M$ (as a $\mathbb K$-module) and such that $M(0)=0$. For every tree $\tau$, we define a {\it monomial basis} $B^{F(M)}_\tau$ of $\tau(M)$ in the following way. We use the planar representation of $\tau$, giving an isomorphism $\tau(M) \cong \bigotimes_v M(n_v)$. An element $\bigotimes_v m_v$ belongs to $B^{F(M)}_\tau$ if and only if each $m_v$ is in $B^M$. We set $B^{F(M)} = \coprod_\tau B^{F(M)}_\tau$. A {\it pointed shuffle of a composition $\alpha \circ_i \beta$ } is a permutation preserving the order of the entries of each treewise tensor in the partial composition product and preserving the entry $i$. More explicitly, for $\alpha$ a treewise tensor with $s$ entries and $\beta$ a treewise tensor with $t$ entries, a permutation $w \in \Sigma_{s+t-1}$ is a pointed shuffle if the orders of the entries of $\alpha$ and of $\beta$ are the same as in the composition $w. \alpha \circ_i \beta$ and if the minimum of the entries of $\beta$ in the composition is $i$. This definition implies that the entries labelled $1$ to $i-1$ are not modified. \begin{observ}\label{baseuniq} The basis $B^{F(M)}$ is the only basis such that \begin{itemize} \item $B^{F(M)}_\tau=B^{M(n)}$ if $\tau$ is a corolla with $n$ entries. \item For all treewise tensors $\alpha \in \sigma(M), \beta \in \tau(M)$ and every pointed shuffle $w$, we have $$w. \alpha \circ_i \beta \in B^{F(M)}_{w.
\sigma \circ_i \tau} \Leftrightarrow \alpha \in B^{F(M)}_\sigma \text { and } \beta \in B^{F(M)}_\tau.$$ \end{itemize} \end{observ} \subsection{Order on the basis of the treewise tensors}\label{defordre} We choose an order on the monomial basis of $F(M)(r)$ for every $r$ in $\mathbb N$, satisfying the compatibility condition: for $\alpha, \alpha'$ with $m$ entries and $\beta,\beta'$ with $n$ entries, we have \begin{equation} \left \{ \begin{array}{l} \alpha \leq \alpha' \\ \beta \leq \beta' \end{array} \right. \Rightarrow \forall i, w.\alpha \circ_i \beta \leq w.\alpha' \circ_i \beta', \ \forall w \text{ pointed shuffle}. \nonumber \end{equation} \subsection{Example of a suitable order}\label{ordre} Let $\alpha$ be a treewise tensor with $n$ entries. We associate a sequence of $n$ words $(\underline{a}_1,\underline{a}_2, \ldots,\underline{a}_n)$ to $\alpha$ in the following way: for all $i$, there exists a unique monotonic path of vertices from the root to $i$, and $\underline{a}_i$ is the word composed (from left to right) of the labels of these vertices (from bottom to top). Recall that $M$ has an ordered basis. If $\underline a$ and $\underline b$ are two words, we first compare the lengths of the words ($\underline a < \underline b$ if $l(\underline a)<l(\underline b)$, where $l$ is the length) and if they are equal, we compare them lexicographically (each letter being an element of the ordered basis $B^M$). We can then compare two treewise tensors with the same number of entries $\alpha$ (associated to $(\underline{a}_1,\underline{a}_2, \ldots,\underline{a}_n)$) and $\beta$ (associated to $(\underline{b}_1,\underline{b}_2, \ldots,\underline{b}_n)$), such that $\alpha \neq \beta$, by comparing $\underline{a}_1$ with $\underline{b}_1$, then $\underline{a}_2$ with $\underline{b}_2$, etc. This defines the strict part of the order relation. \begin{prop} The order defined above satisfies the compatibility condition of section \ref{defordre}.
\end{prop} \begin{proof} Let $\alpha$ and $\alpha'$ be treewise tensors with $n$ entries, such that $\alpha \leq \alpha'$. Let $\beta$ and $\beta'$ be treewise tensors with $m$ entries, such that $\beta \leq \beta'$. Let $(\underline{a}_1,\underline{a}_2, \ldots,\underline{a}_n)$, resp. $(\underline{a}'_1,\underline{a}'_2, \ldots,\underline{a}'_n)$, be the word sequence associated to $\alpha$, resp. $\alpha'$. Let $(\underline{b}_1,\underline{b}_2, \ldots,\underline{b}_m)$, resp. $(\underline{b}'_1,\underline{b}'_2, \ldots,\underline{b}'_m)$, be the word sequence associated to $\beta$, resp. $\beta'$. The word sequence associated to the composite $\alpha \circ_i \beta$ has the form $(\underline{a}_1,\underline{a}_2, \ldots,\underline{a}_i \underline{b}_1, \underline{a}_i \underline{b}_2,\ldots, \underline{a}_i \underline {b}_m, \underline{a}_{i+1}, \ldots, \underline{a}_n)$ where $\underline{a}_i\underline {b}_j$ is the concatenation of $\underline{a}_i$ and $\underline{b}_j$. Similarly, the word sequence $(\underline{a}'_1,\underline{a}'_2, \ldots,\underline{a}'_i \underline{b}'_1, \underline{a}'_i \underline{b}'_2,\ldots, \underline{a}'_i\underline b'_m, \underline{a}'_{i+1}, \ldots, \underline{a}'_n)$ is associated to $\alpha' \circ_i \beta'$. To begin with, note that the length of $\underline {a}_i \underline {b}_j$ is the sum of the lengths of $\underline {a}_i$ and $\underline {b}_j$. We compare $\alpha \circ_i \beta$ and $\alpha' \circ_i \beta'$, as they both have $n+m-1$ entries. As $\alpha \leq \alpha'$, we have $(\underline a_1,\underline a_2, \ldots,\underline a_{i-1}) \leq (\underline a'_1,\underline a'_2, \ldots,\underline a'_{i-1})$. If the inequality is strict, we are done, as our order is lexicographical in the sequence. If the inequality is an equality, we look at $\underline a_i$ and $\underline a'_i$. If $\underline a_i < \underline a'_i$, then $\underline a_i \underline b_1 < \underline a'_i \underline b'_1$.
If $\underline a_i=\underline a'_i$, comparing $\underline a_i \underline b_j$ with $\underline a'_i \underline b'_j$ is the same as comparing $\underline b_j$ with $\underline b'_j$ for all $j$. So $\underline a_i \underline b_1, \underline a_i \underline b_2,\ldots, \underline a_i \underline b_m \leq \underline a'_i \underline b'_1, \underline a'_i \underline b'_2 ,\ldots, \underline a'_i \underline b'_m $. If the inequality is strict, we are done; otherwise we look at the remainder of the sequence. As $\alpha \leq \alpha'$ and $(\underline a_1 ,\underline a_2 , \ldots,\underline a_i ) = (\underline a'_1 ,\underline a'_2 , \ldots,\underline a'_i )$, we have $(\underline a_{i+1} , \ldots, \underline a_n ) \leq (\underline a'_{i+1} , \ldots, \underline a'_n )$. Finally we have $\alpha \circ_i \beta \leq \alpha' \circ_i \beta'$. To show that $w.\alpha \circ_i \beta \leq w.\alpha' \circ_i \beta'$ for all pointed shuffles $w$, we see how the pointed shuffles act on the sequence of words associated to a composition of treewise tensors. The pointed shuffles induce a shuffle (in the usual sense) between the set composed of the $\underline a_i \underline b_j$ for all $j$ and the set composed of the $\underline a_j $ for $j>i$. A shuffle preserves the order within each set it acts on, and the order we have defined on the treewise tensors looks at the associated words recursively. As a consequence, the order between $w.\alpha \circ_i \beta$ and $w.\alpha' \circ_i \beta'$ is the same as the one between $\alpha \circ_i \beta$ and $\alpha' \circ_i \beta'$. \end{proof} \subsection*{Remark} We call this order the {\it lexicographical order}.
Another suitable order, the {\it reverse-length lexicographical order}, can be defined in a similar way. If $\underline a$ and $\underline b$ are two words, we first compare the lengths of the words ($\underline a > \underline b$ if $l(\underline a)<l(\underline b)$, where $l$ is the length) and if they are equal, we compare them lexicographically (each letter being an element of the ordered basis $B^M$). The proof of the compatibility condition of section \ref{defordre} is the same. \vspace{0.3cm} \subsection{Restriction of a treewise tensor to a subtree} Let $\alpha=\bigotimes_{v \in \tau} m_v$ be a treewise tensor. The restriction of $\alpha$ to a subtree $\sigma$ of $\tau$ is the tensor $\alpha_{|\sigma} =\bigotimes_{v \in V(\sigma)} m_v$, which gives an element of $\sigma(M)$. We use this notion for a subtree $\sigma = \tau_e$ generated by an edge $e$ (defined in section \ref{sousarbre}). \subsection{Poincar\'e-Birkhoff-Witt basis} Let $\mathcal P$ be a reduced operad defined by $F(M)/(\overline{R})$. A {\it PBW basis} for $\mathcal P$ is a set $B^\mathcal P \subset B^{F(M)}$ of elements representing a basis of the $\mathbb K$-module $\mathcal P$, containing $1$, $B^M$ and for every $\tau$ a subset $B_\tau^\mathcal P$ of $B_\tau^{F(M)}$, satisfying the following conditions: \begin{enumerate} \item For $\alpha \in B_\sigma^\mathcal P$, $\beta \in B_\tau^\mathcal P$ and $w$ a pointed shuffle, either $w. \alpha \circ_i \beta$ is in $B_{w. \sigma \circ_i \tau}^\mathcal P$, or the elements of the basis $\gamma \in B^\mathcal P$ which appear in the unique decomposition $w. \alpha \circ_i \beta \equiv \sum_\gamma c_\gamma \gamma$ satisfy $\gamma > w. \alpha \circ_i \beta$ in $F(M)$. \item A treewise tensor $\alpha$ is in $B_\tau^\mathcal P$ if and only if for every internal edge $e$ of $\tau$, the restricted treewise tensor $\alpha_{|\tau_e}$ is in $B^\mathcal P_{\tau_e}$. \end{enumerate} \subsection*{Remark} This definition generalizes Priddy's definition for algebras (\textit{cf}. \cite{Prid}).
Recall that an algebra $A$ is equivalent to an operad $\mathcal P_A$ such that $\mathcal P_A(r) = \left \{ \begin{array}{rl} A, & \text{for $r=1$,} \\ 0, & \text{otherwise.} \end{array} \right. $ The algebra $A$ has a PBW basis in Priddy's sense if and only if the operad $\mathcal P_A$ has a PBW basis in our sense. \begin{observ} Condition 1 is equivalent to condition 1': \begin{itemize} \item[(1')] For $\alpha$ in $B^{F(M)}$, either $\alpha \in B^\mathcal P$, or the elements of the basis $\gamma \in B^\mathcal P$ which appear in the unique decomposition $\alpha \equiv \sum_\gamma c_\gamma \gamma$ satisfy $\gamma > \alpha$ in $F(M)$. \end{itemize} \end{observ} \begin{proof} Condition 1' obviously implies condition 1. For the converse direction, we use an induction on the number of vertices of $\alpha$ in $B^{F(M)}$ and observation \ref{baseuniq}. \end{proof} \begin{prop}\label{condquad} Assume that $M$ is finitely generated. If condition 1 holds when $\alpha$ and $\beta$ are corollas, and condition 2 holds, then condition 1 is true for all $\alpha$ and $\beta$. \end{prop} \begin{proof} Equivalently, we can say: let $M$ be finitely generated. If condition 1' holds when $\alpha$ has only one internal edge and condition 2 holds, then condition 1' is true for all $\alpha$. We prove this equivalent proposition. Let $\alpha$ be in $B_\tau^{F(M)} \setminus B_\tau^\mathcal P$. Condition 2 implies that there exists an internal edge $e$ such that $\alpha_{|\tau_e} \notin B^\mathcal P_{\tau_e}$. By condition 1', we can write $\alpha_{|\tau_e} \equiv \sum_\gamma c_\gamma \gamma$, where $\gamma > \alpha_{|\tau_e}$ in $F(M)$. We replace $\alpha_{|\tau_e}$ by $\sum_\gamma c_\gamma \gamma$ in $\alpha$. This gives another representation $\alpha \equiv \sum_{\gamma'} c_{\gamma'} \gamma'$ such that $\gamma' > \alpha$ (because the order is compatible with the partial composition product). If all $\gamma'$ are in $B^\mathcal P$, we are done.
Otherwise, we iterate the method. We obtain other representatives of $\alpha$ as sums of treewise tensors which are strictly larger at each step. As the number of trees with a specified number of entries is finite and the basis of $M$ is also finite, the number of treewise tensors with a specified number of entries is finite as well. So the process stops after a finite number of steps, and the treewise tensors we obtain at the end are in $B^\mathcal P$. \end{proof} \begin{theo}\label{thpbw} A reduced operad which has a PBW basis is Koszul. \end{theo} The proof of this statement is achieved in the next section. \section{Proof of the Poincar\'e-Birkhoff-Witt criterion}\label{demopbw} To show this result, we describe $B(\mathcal P)$ and a basis of it more precisely. Then we use a filtration to study the homology of $E^0 B(\mathcal P)(r)_\lambda $. \subsection{Explicit description of $B(\mathcal P)$}\label{bardetails} By definition, $B(F(M))$ is equal to $\bigoplus_{\sigma} \sigma(F(M))$. Explicitly, a generator of $\sigma(F(M))$ corresponds to a tree $\sigma$ labelled with trees labelled by elements of $M$, that is a treewise tensor composed of treewise tensors on $M$. We can represent it by a large tree $\tau$ labelled by elements of $M$ and equipped with a splitting into subtrees $\tau_{comp}$, which we can see as connected components. The $\tau_{comp}$ are separated by {\it cutting edges} which form a subset $D \subset E'(\tau)$. The union of the internal edges of the subtrees $\tau_{comp}$ forms a set $S \subset E'(\tau)$ such that $S \coprod D = E'(\tau)$. We will work with $S$, the set of {\it marking edges}. The marking edges $S$ determine the decomposition of a treewise tensor $\alpha$ into $\bigotimes \alpha_{comp}$ where $\alpha_{comp}=\alpha_{|\tau_{comp}}$ are the factors in $F(M)$. So we identify an element of $B(F(M))$ with a pair $(\alpha, S)$, with $\alpha \in \tau(M)$ (\textit{cf}. figure 2).
$$B(F(M)) \cong \bigoplus_{\tau, S} (\tau(M), S).$$ \vspace{0.3cm} We now examine the differential structure of $B(F(M))$. The differential $\delta$ is given by $$\delta(\alpha, S)= \sum_{e \in E'(\tau) - S} \pm (\alpha,S \coprod \{e\})$$ for $\alpha$ a treewise tensor associated to the tree $\tau$. The operation $(\alpha, S) \mapsto (\alpha, S \coprod \{e\})$ represents a partial composition at the edge $e$ for the element in $B(F(M))$ represented by $(\alpha,S)$. Notice that the differential changes only the marking and not $\tau(M)$. Hence $\delta(\bigoplus_{S} (\tau(M), S)) \subset \bigoplus_{S} (\tau(M), S)$. \subsection{Description and basis of $B(\mathcal P)$} First, let $B_\tau^{B(F(M))}$ be the natural basis of treewise tensors on the tree $\tau$ labelled with elements of $B^{F(M)}$. Set also $B^{B(F(M))} = \coprod_{\tau} B_\tau^{B(F(M))}$. As $\mathcal P = F(M)/(\overline{R})$, the reduced bar construction $B(\mathcal P)$ is a quotient of $B(F(M))$. Two elements $(\alpha,S)$ and $(\alpha',S')$ are identified in $B(\mathcal P)$ if and only if $S=S'$ and every factor $\alpha_{comp}$ is identified to $\alpha'_{comp}$ in $\mathcal P$. We define $B^{B(\mathcal P)}$, a set of elements in $B(F(M))$ representing a basis of $B(\mathcal P)$, starting from the basis $B^\mathcal P$ as follows: an element $(\beta, S)$ in $ (\tau(M), S)$ is in $B_\tau^{B(\mathcal P)} \subset B_\tau^{B(F(M))}$ if every one of its factors $\beta_{\tau_{comp}}$ lies in $B_{\tau_{comp}}^{\mathcal P}$. The element $\beta$ is an element of $B^{F(M)}_\tau$, the basis defined in section \ref{defbasetauM}. \subsection*{Definition} An edge $e$ is said to be {\it admissible} if the restricted treewise tensor $\alpha_{|\tau_e}$ is in $B^{\mathcal P}$. The set $Adm_{\alpha}$ is the set of the admissible edges of $\alpha$.
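The role of the admissible edges can be tested on a toy model. As a sanity check of the acyclicity argument used in section \ref{prophomo} below, the following sketch (brute-force code of our own, with signs ignored by working over the field with two elements) builds the complex spanned by the markings $S \subset Adm$, with differential $S \mapsto \sum_{e \in Adm - S} S \coprod \{e\}$, and verifies that its homology vanishes as soon as $Adm \neq \emptyset$:

```python
from itertools import combinations

def rank_gf2(mat):
    """Rank of a 0/1 matrix over the field with two elements."""
    mat = [row[:] for row in mat]
    rank = 0
    ncols = len(mat[0]) if mat else 0
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(mat)) if mat[r][col]), None)
        if pivot is None:
            continue
        mat[rank], mat[pivot] = mat[pivot], mat[rank]
        for r in range(len(mat)):
            if r != rank and mat[r][col]:
                mat[r] = [(a + b) % 2 for a, b in zip(mat[r], mat[rank])]
        rank += 1
    return rank

def homology_dims(adm):
    """Mod-2 homology of the complex of markings S inside adm, graded by |S|."""
    adm = sorted(adm)
    n = len(adm)
    basis = {k: [frozenset(c) for c in combinations(adm, k)] for k in range(n + 1)}
    def dmat(k):
        # matrix of d : degree k -> degree k+1, d(S) = sum of the S u {e}, e not in S
        rows, cols = basis.get(k + 1, []), basis[k]
        return [[1 if c < r else 0 for c in cols] for r in rows]
    ranks = {k: rank_gf2(dmat(k)) for k in range(n + 1)}
    dims = []
    for k in range(n + 1):
        kernel = len(basis[k]) - ranks[k]
        image = ranks[k - 1] if k >= 1 else 0
        dims.append(kernel - image)
    return dims

print(homology_dims([]))                  # [1] : only the empty marking survives
print(homology_dims(['e1', 'e2', 'e3']))  # [0, 0, 0, 0] : acyclic for Adm non-empty
```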
\begin{observ} We have an equivalence $$(\alpha,S) \in B^{B(\mathcal P)} \Leftrightarrow S \subset Adm_{\alpha}.$$ \end{observ} \subsection{Filtration of $B(\mathcal P)$} We consider a filtration $\displaystyle{B(\mathcal P) =\bigcup_{\lambda \in I} B(\mathcal P)_\lambda}$ where $I$ is a poset. This poset $I$ is defined by the basis of $F(M)$ and by the partial order specified in section \ref{defordre}. In practice, we forget the cutting and we use the partial order of the basis of $F(M)=\bigoplus_{\tau} \tau(M)$. Explicitly, an element $(\alpha,S) \in B(\mathcal P)(r)$ is in $B(\mathcal P)(r)_\lambda$ if and only if $\alpha \geq \lambda$. Hence $$B(\mathcal P)(r) = \bigcup_{\lambda \in I(r)} B(\mathcal P)(r)_\lambda$$ where $I(r)$ is the monomial basis of $F(M)(r)$, a partially ordered set (with the order from section \ref{defordre}). \vspace{0.3cm} Observe that $B(\mathcal P)(r)_\lambda$ is a subcomplex of $B(\mathcal P)(r)$. In fact, the differential $\delta$ corresponds to a partial composition product, modifying the cutting (which we forget in the filtration). Condition 1 of a PBW basis ensures that an element is sent to a sum of larger or equal elements. The differential $d^0$ induced by $\delta$ in the quotient preserves the factor $E^0 B(\mathcal P)(r)_\lambda$, which is generated by the pairs $(\lambda, S)$ which belong to $B^{B(\mathcal P)}$. Note that $d^0_e : (\lambda,S) \mapsto (\lambda,S \coprod \{e\})$, so we can write $d^0= \sum_{e} \pm d^0_e$, taking the sum over the edges $e$ such that $d^0_e(\lambda)$ remains in the basis. \begin{lemm} An edge $e$ is admissible if and only if $d^0_e (\lambda,S) \neq 0$. \end{lemm} \begin{proof} The differential $d^0_e$ transforms a non-marked admissible edge into a marked admissible edge, by condition 1 of a PBW basis. Conversely, if $d^0_e (\lambda,S) \neq 0$, then by condition 2 (converse direction), the edge is admissible.
\end{proof} \vspace{0.3cm} \subsection{Homology of $E^0 B(\mathcal P)(r)_\lambda$}\label{prophomo} The quotient $E^0 B(\mathcal P)(r)_\lambda $ is generated by the pairs $(\lambda, S)$ where $S$ ranges over the subsets of $Adm_{\lambda}$, the set of admissible edges. The differential $d^0$ sends $(\lambda,S)$ to $\displaystyle{\sum_{e \in Adm_\lambda - S} \pm\, (\lambda, S \coprod \{e\})}$. This combinatorial complex is the dual of the oriented complex $C_ {\ast}(\Delta^{Adm_\lambda})_{+}$ of the simplex $\Delta^{Adm_\lambda}$ augmented over $\mathbb K$, with the augmentation term added in $C_ {\ast}(\Delta^{Adm_\lambda})_{+}$. The inclusion of the summand spanned by $(\lambda, \emptyset)$ in $E^0 B(\mathcal P)(r)_\lambda$ is dual to the augmentation $C_ {\ast}(\Delta^{Adm_\lambda})_{+} \rightarrow \mathbb K$. If $Adm_{\lambda}=\emptyset$, then the complex is reduced to a unique generator $(\lambda, \emptyset)$. Every component $\tau_{comp}$ is reduced to a vertex (we cut on all edges). This implies that the weight of $\lambda$ is equal to its degree. If $Adm_{\lambda}$ is not empty, then the homology is zero. \vspace{0.3cm} We conclude from these assertions that $H_*E^0 B(\mathcal P)(r)_\lambda =0$ if the weight is different from the degree. The filtration is compatible with the weight. Hence the associated spectral sequence splits. We obtain $H_*B(\mathcal P)=0$ when the weight is different from the degree. This completes the proof of theorem \ref{thpbw}. \qed \section{Result on the dual of a Poincar\'e-Birkhoff-Witt operad} In this section we consider a reduced quadratic operad $\mathcal P=F(M)/(R)$ such that $M$ is a finitely generated $\Sigma_*$-module. Recall that the Koszul construction $K(\mathcal P)$ defined in section \ref{Koszul} is a cooperad, and its linear dual $K(\mathcal P)^\#$ is an operad.
The goal of this section is to prove the following result: \begin{theo} If $\mathcal P$ is a PBW operad, then the dual operad $K(\mathcal P)^\#$ is also a PBW operad. \end{theo} We determine a basis of $K(\mathcal P)^\#$, and we prove that it defines a PBW basis. \vspace{0.3cm} \subsection{Basis of $K(\mathcal P)^\#$} First, in order to work with a filtration, we pick a total order which is a refinement of an order satisfying the condition of section \ref{defordre}. The only thing we use is that the subquotient $E^0 B(\mathcal P)(r)_\lambda$ remains unchanged if we replace the partial order by any refinement. We work here with a fixed number of entries $r$ and a fixed weight $n$. There is a finite number of trees with $n$ vertices and $r$ entries, so there is a finite basis of treewise tensors with $n$ vertices labelled by elements of $M$ and $r$ entries. This finite totally ordered set of treewise tensors can be written symbolically $\Lambda_{n,r}=\{ 0 < 1 < \ldots < \lambda < \lambda +1 < \ldots < \mu \}$. Recall that an element $(\alpha,S) \in B(\mathcal P)(r)$ is in $B(\mathcal P)(r)_\lambda$ if and only if $\alpha \geq \lambda$. In what follows, we write $F_\lambda = B(\mathcal P)(r)_\lambda$ and $E^0_\lambda=E^0 B_{(n)}(\mathcal P)(r)_\lambda $. We have a finite filtration of $B_{(n)}(\mathcal P)(r)$: $$B_{(n)}(\mathcal P)(r) = F_0 \supseteq F_1 \supseteq \ldots \supseteq F_\lambda \supseteq F_{\lambda+1} \supseteq \ldots \supseteq F_\mu = E^0 _\mu.$$ \begin{sublemm} For every $\lambda \in \Lambda_{n,r}$, the homology $H_{n-1} F_\lambda$ is $0$. \end{sublemm} \begin{proof} We use a decreasing induction. For $\lambda=\mu$, we have $F_\mu = E^0_\mu$. We know that $H_{n-1} E^0 _\mu=0$ as the weight is different from the degree, so $H_{n-1} F_\mu$ is $0$. Suppose the result holds for $\lambda+1$.
The long exact sequence in homology induced by $0 \rightarrow F_{\lambda+1} \rightarrow F_{\lambda} \rightarrow E^0 _\lambda \rightarrow 0$ gives $$ \ldots \rightarrow H_{n-1} F_{\lambda+1} \rightarrow H_{n-1} F_{\lambda} \rightarrow H_{n-1} E^0 _\lambda \rightarrow \ldots .$$ The first term is $0$ by induction, and the third term is $0$ because the weight is different from the degree. So $H_{n-1} F_{\lambda}=0$. \end{proof} Another part of the long exact sequence gives the short exact sequence: $$ 0 \rightarrow H_{n} F_{\lambda+1} \rightarrow H_{n} F_{\lambda} \rightarrow H_{n} E^0 _\lambda \rightarrow 0. $$ The first term $0$ comes from $H_{n+1} E^0_\lambda$ and the second one from $H_{n-1} F_{\lambda+1}$. These short exact sequences can be put together in a diagram, where the vertical arrows are cokernels: \[\xymatrix{ H_{n} F_{\mu} \ar[r] & \ldots \ar[r] & H_{n} F_{\lambda+1}\ar[d]\ar[r] & H_{n} F_{\lambda}\ar[r]\ar[d]& \ldots \ar[r] & H_{n} F_{0} .\\ & & H_{n} E^0 _{\lambda+1} & H_{n} E^0 _{\lambda}& & & \\ }\] Recall that $H_{n} F_\mu=0$ and $H_{n} F_{0}=H_n (B_{(n)}(\mathcal P)(r))$. We dualize this diagram, using that $H_n(C^\#)= H_n(C)^\#$, and we get the following diagram, where the vertical arrows are kernels: \[\xymatrix{ 0 & \ldots \ar[l] & H_{n} (F_{\lambda+1})^\#\ar[l] & H_{n} (F_{\lambda})^\#\ar[l]& \ldots \ar[l] & K(\mathcal P)^\#_{(n)}(r). \ar[l]\\ & & H_{n} (E^0 _{\lambda+1})^\#\ar[u] & H_{n} (E^0 _{\lambda})^\#\ar[u]& & & \\ }\] We showed in section \ref{prophomo} that $$H_{n} (E^0 _{\lambda}) = \left \{ \begin{array}{ll} \mathbb K, & \textrm{ if } Adm_{\lambda}=\emptyset, \\ 0, & \textrm{ otherwise.} \end{array} \right. $$ Thus we have $K(\mathcal P)^\#_{(n)}(r) = \bigoplus_\lambda \mathbb K$, the sum ranging over the $\lambda \in \Lambda_{n,r}$ with $Adm_{\lambda}=\emptyset$. Recall $K(\mathcal P)^\#$ is a quotient of $F(\Sigma^{-1}M^\#)$.
\begin{lemm} A basis of the $\mathbb K$-module $K(\mathcal P)^\#$ is represented by $\{ \lambda^\# \in F(\Sigma^{-1}(M^\#)) \ | \ Adm_{\lambda}=\emptyset\}$. \end{lemm} \begin{proof} This result is an obvious consequence of the description of $K(\mathcal P)^\#_{(n)}(r)$ in the previous paragraph. \end{proof} These treewise tensors $\lambda$ are determined by the following property: the restricted treewise tensor induced by any edge $e$ is not in $B^\mathcal P$. \subsection{A PBW basis of $K(\mathcal P)^\#$} Ginzburg and Kapranov showed in \cite{GK} that $K(\mathcal P)^\#= F(\Sigma^{-1}(M^\#))/(R')$, where $\overline{R'}$ is determined by the exact sequence $$ 0 \rightarrow \overline{R'} \rightarrow F_{(2)}(\Sigma^{-1}(M^\#)) \rightarrow K_{(2)}(\mathcal P)^\# \rightarrow 0.$$ We have to determine $\overline {R'} \subset F_{(2)}(\Sigma^{-1}(M^\#))$ explicitly. As $\overline R$ is characterized by $0 \rightarrow \overline R \rightarrow F_{(2)}(M) \rightarrow \mathcal P_{(2)} \rightarrow 0$, we have dually $$ 0 \rightarrow \phi(\Sigma^{-2} \overline R^\bot) \rightarrow F_{(2)}(\Sigma^{-1}(M^\#)) \rightarrow K_{(2)}(\mathcal P)^\# \rightarrow 0,$$ where $\phi$ is the isomorphism between $\Sigma^{-2} F_{(2)}(M^\#)$ and $F_{(2)}(\Sigma^{-1}(M^\#))$. We have to study $\phi(\Sigma^{-2} \overline R^\bot)$. The main difficulty comes from the suspensions, which introduce signs. Signs are induced by the classical commutation rule $g \otimes f = (-1)^{|f|.|g|}f \otimes g$. The suspension has degree $+1$. Recall that $F_{(2)}(M)$ is the module of treewise tensors with exactly one internal edge. Its basis $B^{F_{(2)}(M)}$ can be decomposed into $\displaystyle{\{\alpha_i\}_{i \in I} \coprod \{\alpha_j\}_{j \in J}}$ where $\forall i \in I, \alpha_i \notin B^\mathcal P $ and $ \forall j \in J, \alpha_j \in B^\mathcal P$. The space of relations is $\displaystyle{\overline R= \Span \{ \alpha_i-\sum_{ j \in J} c_{ij} \alpha_j ; i \in I \} } \subset F_{(2)}(M)$.
A classic result of linear algebra gives $\displaystyle{\overline R^\bot= \Span \lbrace \alpha_j^\# + \sum_{ i \in I} c_{ij} \alpha_i^\# ; j \in J \rbrace} \subset F_{(2)}(M^\#)$. For $x_1 \in M(n_1)$ and $x_2 \in M(n_2)$, the definition yields the relation $$\phi (\Sigma^{-2} w. x_1^\# \circ_i x_2^\#)= \epsilon(w) (-1)^{|x_1|} w. \Sigma^{-1} x_1^\# \circ_i \Sigma^{-1} x_2^\#,$$ where $\epsilon(w)$ denotes the signature of the permutation $w$. As $\Sigma^{-2} \overline R^\bot \subset \Sigma^{-2} F_{(2)}(M^\#)$, we have $\phi(\Sigma^{-2} \overline R^\bot) \subset F_{(2)}(\Sigma^{-1}(M^\#))$. We have $\phi(\Sigma^{-2} \overline R^\bot)= \displaystyle{\Span \lbrace \phi(\Sigma^{-2} \alpha_j^\#) + \sum_{ i \in I} c_{ij} \phi(\Sigma^{-2} \alpha_i^\#) ; j \in J \rbrace }$, where we identify naturally $F_{(2)}(M^\#)$ and $F_{(2)}(M)^\#$. \begin{theo} Consider the set $B^\#$ formed by treewise tensors $\beta$ in $F(\Sigma^{-1}(M^\#))$, such that every subtensor generated by an internal edge of $\beta$ is in the set $\lbrace \phi(\Sigma^{-2} \alpha_i^\#) ; i \in I \rbrace$. This set $B^\#$ forms a PBW basis of $K(\mathcal P)^\#$ for the opposite order (denoted $<^\#$). \end{theo} Note that $B^\#$ is uniquely determined by the elements $\phi(\Sigma^{-2} \alpha_i^\#), i \in I$. \begin{proof} From the descriptions of $\overline R^\bot$ and of the basis of $K(\mathcal P)^\#$, we observe that $B^\#$ is a basis of the module $K(\mathcal P)^\#$. Also, condition 2 of a PBW basis holds by definition. Let us show condition 1. Here signs and suspensions do not interfere. As the $\alpha_j, j \in J$ are the quadratic part of a PBW basis, we have $\alpha_i<\alpha_j$ whenever $c_{ij} \neq 0$. Hence we have $\alpha_i^\# >^\# \alpha_j^\#$ if $c_{ij} \neq 0$. As a consequence, condition 1 is verified for tensors with only one internal edge ({\it cf.} \ref{condquad}). As $M^\#$ is finitely generated, this implies condition 1 of a PBW basis.
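The duality between $\overline R$ and $\overline R^\bot$ can be checked on a toy example (our own illustration: coefficients chosen at random over $\mathbb R$, with the tensors $\alpha_i, \alpha_j$ replaced by coordinate vectors; this is not part of the operadic formalism):

```python
import numpy as np

# Identify F_(2)(M) with R^(nI+nJ), the first nI coordinates standing for the
# non-basis tensors alpha_i (i in I), the last nJ for the basis tensors alpha_j (j in J).
rng = np.random.default_rng(0)
nI, nJ = 2, 2
c = rng.standard_normal((nI, nJ))  # made-up coefficients c_{ij}

# R-bar is spanned by the rows alpha_i - sum_j c_{ij} alpha_j ...
R = np.hstack([np.eye(nI), -c])
# ... and the claimed orthogonal complement by alpha_j^# + sum_i c_{ij} alpha_i^#
Rperp = np.hstack([c.T, np.eye(nJ)])

# every pairing between the two families vanishes: c_{ij} - c_{ij} = 0
assert np.allclose(R @ Rperp.T, 0)
# and the dimensions add up to dim F_(2)(M) = nI + nJ
assert np.linalg.matrix_rank(R) + np.linalg.matrix_rank(Rperp) == nI + nJ
```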
\end{proof} \subsection{Remark} When the module $M$ is non-graded, we identify $M$ with a dg-object concentrated in degree $0$. In the original construction by Ginzburg and Kapranov \cite{GK}, the Koszul dual $\mathcal P^!$ is only defined for quadratic operads generated by binary operations. The original $\mathcal P^!$ is an operadic suspension of $K(\mathcal P)^\#$. The presentation by generators and relations has to be rewritten as $\mathcal P^! = F(M^\#)/(R'')$, and thus the operations in the dual $\mathcal P^!$ remain in degree 0 if they were originally in degree 0 in $\mathcal P$. This is not possible when the generators are not binary. Because of signs, the space of relations $R''$ is here $\Span \lbrace \alpha_j^* + \sum_{ i \in I} c_{ij} \alpha_i^* ; j \in J \rbrace$, where $\alpha_k^*= \epsilon(w) (-1)^{|x_1|(n_2-1)} (-1)^{(i-1)(n_2-1)} w. x_1^\# \circ_i x_2^\# \in F_{(2)}(M^\#)$ if $\alpha_k = w. x_1 \circ_i x_2$. The operad $\mathcal P^!$ also has a PBW basis, whose quadratic part is composed of the treewise tensors $\alpha_i^*, i \in I$. In the case of operads generated by binary operations, we will work with $\mathcal P^!$ rather than $K(\mathcal P)^\#$, and determine the treewise tensors $\alpha_i^*, i \in I$ and the generating relations $R''$. \section{Case of non-symmetric operads} We obtain in a similar way a PBW criterion in the case of non-symmetric operads. \subsection{Non-symmetric operads and planar trees} A non-symmetric operad is defined as an operad, but without the action of symmetric groups. For more details, we refer the reader to \cite{MSS}. We can represent compositions in a non-symmetric operad by planar trees. The planar structure of a tree is determined by a total order on every set of entries $I_v$ for vertices $v \in V(\tau)$, as in the construction explained in section \ref{sansquotient}. The planar structure induces a total order on the entries of the tree.
When we work with non-symmetric operads, we always consider planar trees with a natural numbering of the entries, the numbering preserving the order. The non-symmetric free operad $F_{ns}(M)$ is associated to a non-symmetric module, a sequence of modules $M(n), n \in \mathbb N$ without an action of symmetric groups. We just replace abstract trees by planar trees in this construction. \subsection{Order on the treewise tensors} We define an order as in the symmetric case, the only difference being that we forget pointed shuffles. Let $M$ be a non-symmetric module, with an ordered basis $B^M$. For every planar tree $\tau$, we have a natural {\it monomial basis} $B^{F(M)}_\tau$ of $\tau(M)$: an element of this basis is the tree $\tau$ labelled with elements of $B^M$. We choose an order on the monomial basis of $F_{ns}(M)(r)$ for every $r$ in $\mathbb N$, verifying the following condition: for $\alpha, \alpha'$ with $m$ entries and $\beta,\beta'$ with $n$ entries, we have \begin{equation} \left \{ \begin{array}{l} \alpha \leq \alpha' \\ \beta \leq \beta' \end{array} \right. \Rightarrow \forall i, \alpha \circ_i \beta \leq \alpha' \circ_i \beta'. \nonumber \end{equation} \subsection{Non-symmetric PBW basis} We define this notion as in the symmetric case, but without pointed shuffles. Let $\mathcal P$ be a non-symmetric operad, defined by $F_{ns}(M)/(R)$.
A {\it PBW basis} of $\mathcal P$ is a set $B^\mathcal P \subset F_{ns}(M)$ of representatives of a basis of the module $\mathcal P$, containing $1$, $B^M$ and for all $\tau$ a subset $B_\tau^\mathcal P$ of $B_\tau^{F(M)}$, verifying the following properties: \begin{enumerate} \item For $\alpha \in B_\sigma^\mathcal P$, $\beta \in B_\tau^\mathcal P$, either $\alpha \circ_i \beta$ is in $B_{\sigma \circ_i \tau}^\mathcal P$, or the elements of the basis $\gamma \in B^\mathcal P$ which appear in the unique decomposition $\alpha \circ_i \beta \equiv \sum_\gamma c_\gamma \gamma$ verify $\gamma > \alpha \circ_i \beta$ in $F(M)$. \item A treewise tensor $\alpha$ is in $B_\tau^\mathcal P$ if and only if for every internal edge $e$ of $\tau$, the restricted treewise tensor $\alpha_{|\tau_e}$ lies in $B^\mathcal P_{\tau_e}$. \end{enumerate} \subsection{Symmetrization} The forgetful functor from $\Sigma_*$-modules to sequences of (non-symmetric) modules has a left adjoint $\_ \otimes \Sigma_*$. If $\mathcal P$ is a non-symmetric operad, then the associated $\Sigma_*$-module $\mathcal P \otimes \Sigma_*$ has a natural operad structure. For a free operad, we obtain $F_{ns}(M_{ns}) \otimes \Sigma_* = F(M_{ns} \otimes \Sigma_*)$. We extend the order relation from $F_{ns}(M_{ns})$ to $F_{ns}(M_{ns}) \otimes \Sigma_*$, setting: \begin{equation} \alpha \otimes \sigma \leq \alpha' \otimes \sigma' \text{ if } \left \{ \begin{array}{l} \sigma = \sigma' \\ \alpha \leq \alpha' \end{array} \right. . \nonumber \end{equation} We do not compare the elements if $\sigma \neq \sigma'$. \begin{lemm} A symmetric PBW basis of $\mathcal P$ is given by the orbits of a non-symmetric PBW basis. \end{lemm} \begin{proof} Easy. \end{proof} As a corollary, we have: \begin{theo} A non-symmetric operad which has a non-symmetric PBW basis is Koszul, and the non-symmetric dual operad has a non-symmetric PBW basis, which can be explicitly determined from the other basis.
\qed \end{theo} \section{Examples} We know that the following operads are Koszul: commutative $\mathcal C$, associative $\mathcal A$ and Lie $\mathcal Lie$ (\textit{cf}. Ginzburg and Kapranov \cite {GK}). We use our PBW criterion on these examples and on some other operads. To simplify notations, we sometimes write relations with treewise tensors in the operad, and sometimes inline in the associated algebra. We do not draw the root of the trees. In the examples with operads generated by binary operations, we will work with $\mathcal P^!$, and determine the treewise tensors $\alpha_i^*, i \in I$ and the generating relations $R''$. Otherwise we consider the dual $K(\mathcal P)^\#$. Recall that by condition 2, a treewise tensor is in the basis if and only if every subtensor generated by an edge is in the basis. As a consequence, we specify only the quadratic part of the basis to determine the basis completely. Verifications are omitted. There are two main methods to find PBW bases: \begin{itemize} \item We can start from a basis, and we need to find an order on $M$ making it a PBW basis (we have to check that it verifies conditions 1 and 2). \item We can start from an ordered basis of $M$, which forces the choice of the quadratic part (because of the relations). We then construct the set generated by this quadratic part (we are assured it verifies conditions 1 and 2) and we need to check that it is a basis (as a $\mathbb K$-module). \end{itemize} \subsection{The associative operad} In the binary case, the associative operad is generated by a single binary operation $\mu$, which verifies the associativity relation $\mu(a,\mu(b,c))= \mu (\mu (a,b), c)$. For the lexicographical order, the quadratic part of a non-symmetric PBW basis is given by $\begin{array}{c} \xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{\mu}\ar@{-}[dr] & & & \\ & & *{\mu} & & }\end{array}$.
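By condition 2, this quadratic basis element, the left comb, generates the whole basis: in each arity the non-symmetric associative operad has the left-comb tensor as its single basis element. A small sketch (our own encoding of planar binary trees as nested pairs, not taken from the paper) illustrates how associativity rebrackets any treewise tensor into this normal form:

```python
from functools import reduce

def leaves(t):
    """In-order leaves of a planar binary tree encoded as nested pairs."""
    if isinstance(t, int):
        return [t]
    return leaves(t[0]) + leaves(t[1])

def normalize(t):
    """Left-comb normal form: associativity rebrackets any tree to
    mu(...mu(mu(l1,l2),l3)...,ln) without changing the leaf order."""
    return reduce(lambda x, y: (x, y), leaves(t))

# mu(1, mu(2, mu(3,4)))  rebrackets to  mu(mu(mu(1,2),3),4)
assert normalize((1, (2, (3, 4)))) == (((1, 2), 3), 4)
# any other bracketing of 1..4 has the same normal form
assert normalize(((1, 2), (3, 4))) == (((1, 2), 3), 4)
```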
The dual operad also has a non-symmetric PBW basis, whose quadratic part is given by $\begin{array}{c} \xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\mu}\ar@{-}[dl] & \\ & & *{\mu} & & }\end{array}$. The relation is still the associativity relation, so the associative operad is self-dual. \subsection{Generalization for higher associative operads}\label{pasbinaire} It is possible to generalize the notion of associativity to operations of arity larger than $2$. For more details, we refer the reader to \cite{Gned}. The operads here are not generated by binary operations. In the ternary case, one can define two types of associative operad. The totally associative operad satisfies $\mu(\mu(a,b,c),d,e)= \mu (a, \mu (b,c,d), e) = \mu(a,b,\mu(c,d,e))$, while the partially associative operad satisfies $\mu(\mu(a,b,c),d,e) + \mu (a, \mu (b,c,d), e) + \mu(a,b,\mu(c,d,e))=0$. For the lexicographical order, the quadratic part of a non-symmetric PBW basis of the totally associative operad is $\begin{array}{c} \xymatrix@M=1pt@C=6pt@R=6pt{ 1\ar@{-}[dr] & 2\ar@{-}[d]& 3\ar@{-}[dl] & 4\ar@{-}[ddl] & 5\ar@{-}[ddll] \\ & *{\mu}\ar@{-}[dr] & & & \\ & & *{\mu} & & }\end{array}$. As a consequence, this operad is Koszul, and its dual $K(\mathcal P)^\#$ is the partially associative operad where operations are in degree $1$, with the quadratic part of a PBW basis composed of $\begin{array}{c} \xymatrix@M=1pt@C=6pt@R=6pt{ 1\ar@{-}[ddrr] & 2\ar@{-}[dr]& 3\ar@{-}[d] & 4\ar@{-}[dl] & 5\ar@{-}[ddll] \\ & &*{\mu}\ar@{-}[d] & & \\ & & *{\mu} & & }\end{array}$ and $\begin{array}{c} \xymatrix@M=1pt@C=6pt@R=6pt{ 1\ar@{-}[ddrr] & 2\ar@{-}[ddr]& 3\ar@{-}[dr] & 4\ar@{-}[d] & 5\ar@{-}[dl] \\ & & &*{\mu}\ar@{-}[dl] & \\ & & *{\mu} & & }\end{array}$ for the reverse-length lexicographical order. The same result can be shown for larger arities (with signs depending on the parity), {\it cf}. \cite{Gned}.
\subsection{The commutative and Lie operads} The commutative operad is generated by a single binary operation $\mu$, which verifies commutativity and associativity. $$\mu(a,b)=\mu(b,a) \ \text{and} \ \mu(a,\mu(b,c))= \mu (\mu (a,b), c)$$ The relations in degree $2$ are \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{\mu}\ar@{-}[dr] & & & \\ & & *{\mu} & & }$\end{array} = \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{\mu}\ar@{-}[dr] & & & \\ & & *{\mu} & & }$\end{array} = \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\mu}\ar@{-}[dl] & \\ & & *{\mu} & & }$\end{array}\end{equation*} For the reverse-length lexicographical order, we get \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{\mu}\ar@{-}[dr] & & & \\ & & *{\mu} & & }$\end{array} < \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{\mu}\ar@{-}[dr] & & & \\ & & *{\mu} & & }$\end{array} < \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\mu}\ar@{-}[dl] & \\ & & *{\mu} & & }$\end{array}, \end{equation*} We check easily that the maximal element $\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\mu}\ar@{-}[dl] & \\ & & *{\mu} & & }$\end{array}$ in the quadratic relations is the quadratic part of a PBW basis of the commutative operad (and as a consequence, this operad is Koszul). 
The dual operad is also Koszul, and the quadratic part of a PBW basis consists of two treewise tensors $\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{[,]}\ar@{-}[dr] & & & \\ & & *{[,]} & & }$\end{array} \text{ and } \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{[,]}\ar@{-}[dr] & & & \\ & & *{[,]} & & }$\end{array}$, where $[,]$ is the dual of $\mu$ and is anticommutative. The relation in the dual operad is \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{[,]}\ar@{-}[dr] & & & \\ & & *{[,]} & & }$\end{array} - \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{[,]}\ar@{-}[dr] & & & \\ & & *{[,]} & & }$\end{array} = \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{[,]}\ar@{-}[dl] & \\ & & *{[,]} & & }$\end{array}\end{equation*} We recognize the Jacobi relation. So the operad $\mathcal Lie$ is the dual operad of $\mathcal C$, and as a consequence is Koszul. Note that we retrieve the basis of Reutenauer in \cite[Section 5.6.2]{Reutenauer}. \subsection{The Poisson operad} The $\mathcal Poisson$ operad can be defined as $\mathcal C \circ \mathcal Lie$. Explicitly, it is generated by $M= \mathbb K . \bullet \oplus \mathbb K [sgn] [,]$, with the relations $$a \bullet (b \bullet c) = (a \bullet b) \bullet c \text { (Associativity)}$$ $$[[a,b],c]+[[b,c],a]+[[c,a],b]=0 \text{ (Jacobi)}$$ $$[a \bullet b,c]= a \bullet [b,c]+ b \bullet [a,c] \text{ (Poisson)}$$ We use the lexicographical order and we set $\bullet > [,]$. We already know the quadratic part of a PBW basis for $\mathcal Lie$ and $\mathcal C$.
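The Jacobi relation appearing above can be sanity-checked numerically on a standard example of a Lie bracket, the cross product on $\mathbb R^3$ (an illustration of ours, independent of the operadic argument):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))  # three random vectors in R^3

def bracket(x, y):
    # the cross product is a standard Lie bracket on R^3
    return np.cross(x, y)

# antisymmetry: [a,b] = -[b,a]
assert np.allclose(bracket(a, b), -bracket(b, a))
# Jacobi: [[a,b],c] + [[b,c],a] + [[c,a],b] = 0
jac = bracket(bracket(a, b), c) + bracket(bracket(b, c), a) + bracket(bracket(c, a), b)
assert np.allclose(jac, 0)
```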
After some calculations for the action of $\Sigma_3$ on the Poisson relation, we can determine the quadratic part of a PBW basis : \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{\bullet}\ar@{-}[dr] & & & \\ & & *{\bullet} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{[,]}\ar@{-}[dr] & & & \\ & & *{[,]} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{[,]}\ar@{-}[dr] & & & \\ & & *{[,]} & & }$\end{array}, \end{equation*} \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{\bullet}\ar@{-}[dr] & & & \\ & & *{[,]} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{[,]}\ar@{-}[dr] & & & \\ & & *{\bullet} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{[,]}\ar@{-}[dr] & & & \\ & & *{\bullet} & & }$\end{array}. 
\end{equation*} So the Poisson operad is Koszul and the quadratic part of a PBW basis of its dual is: \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{[,]^\#}\ar@{-}[dl] & \\ & & *{[,]^\#} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\bullet^\#}\ar@{-}[dl] & \\ & & *{\bullet^\#} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{\bullet^\#}\ar@{-}[dr] & & & \\ & & *{\bullet^\#} & & }$\end{array}, \end{equation*} \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{\bullet^\#}\ar@{-}[dr] & & & \\ & & *{[,]^\#} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{[,]^\#}\ar@{-}[dl] & \\ & & *{\bullet^\#} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\bullet^\#}\ar@{-}[dl] & \\ & & *{[,]^\#} & & }$\end{array}. \end{equation*} The operation $\bullet^\#$ is anticommutative and satisfies the Jacobi relation, while the operation $[,]^\#$ is commutative and associative. The two operations together satisfy a Poisson relation. So we have retrieved that $\mathcal Poisson$ is self-dual, which was already proved by Markl using distributive laws in \cite{Markl}. \subsection{The Perm and Prelie operads} The $\mathcal Perm$ operad is defined by a single operation $\bullet$ satisfying: $(x\bullet y) \bullet z = x \bullet (y \bullet z)= x \bullet (z \bullet y)$. Let $\tau$ be the transposition $(12) \in \Sigma_2$.
For the lexicographical order and $\bullet > \tau \bullet$, the quadratic part of a PBW basis is given by \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{\bullet}\ar@{-}[dr] & & & \\ & & *{\bullet} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{\tau \bullet}\ar@{-}[dr] & & & \\ & & *{ \bullet} & & }$\end{array} \text{and} \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{\tau \bullet}\ar@{-}[dr] & & & \\ & & *{\bullet} & & }$\end{array}. \end{equation*} The duals of the nine quadratic treewise tensors in the complement form a PBW basis of the dual operad, and the relation ideal is generated by \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & *{\bullet^\#}\ar@{-}[dr] & & & \\ & & *{\bullet^\#} & & }$\end{array}- \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 3\ar@{-}[dl] & & 2 \ar@{-}[ddll] \\ & *{\bullet^\#}\ar@{-}[dr] & & & \\ & & *{\bullet^\#} & & }$\end{array} - \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\bullet^\#}\ar@{-}[dl] & \\ & & *{\bullet^\#} & & }$\end{array} + \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\tau \bullet^\#}\ar@{-}[dl] & \\ & & *{\bullet^\#} & & }$\end{array}=0. \end{equation*} This relation is known to define the $\mathcal Prelie$ operad. So we have shown that $\mathcal Prelie$ and $\mathcal Perm$ are Koszul and dual to each other. This was already proved by Chapoton and Livernet in \cite{CL}.
\subsection{The $m-Dend$ operad} A $\mathbb K$-vector space $V$ is an $m$-dendriform algebra if it is equipped with $m$ binary operations $\bullet_1, \ldots, \bullet_m: V^{\otimes 2} \longrightarrow V$ verifying, for all $x,y,z \in V$, the axioms $$ (x\prec y)\prec z = x \prec (y \star z), \ \ \ \ (x \prec y)\bullet_i z =x\bullet_i (y \succ z) \ \forall \, 2 \leq i \leq m-1, $$ $$ (x \succ y)\prec z = x \succ (y \prec z), \ \ \ \ (x \succ y)\bullet_i z =x\succ (y \bullet_i z) \ \forall \, 2 \leq i \leq m-1, $$ $$(x \star y)\succ z =x\succ (y \succ z) , \ \ \ \ (x\bullet_i y)\prec z = x \bullet_i (y \prec z) \ \forall \, 2 \leq i \leq m-1, $$ $$(x\bullet_i y)\bullet_j z = x \bullet_i (y \bullet_j z) \ \ \forall \, 2 \leq i<j \leq m-1, $$ where $\bullet_1 := \succ$, $\bullet_m := \prec$ and $x \star y := x\prec y + x\succ y$. We work with the associated operad, which was introduced by Leroux in \cite{Leroux}. He conjectured that it is Koszul for $m>2$. For $m=2$, the operad is the classical dendriform operad, introduced by Loday, and is Koszul \cite{Loday2}.
For the lexicographical order and $\bullet_i < \bullet_j$ if $i<j$, the quadratic part of a non-symmetric PBW basis is defined by all treewise tensors on $\begin{array}{c} \xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[dr] & & 2\ar@{-}[dl] & & 3 \ar@{-}[ddll] \\ & \ar@{-}[dr] & & & \\ & & & & } \end{array}$ and the following tensors: \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\prec}\ar@{-}[dl] & \\ & & *{\prec} & & }$\end{array}, \begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\bullet_i}\ar@{-}[dl] & \\ & & *{\prec} & & }$\end{array} \forall \, 2 \leq i \leq m-1 , \end{equation*} \begin{equation*}\begin{array}{c} $\xymatrix@M=1pt@C=6pt@R=6pt{ 1 \ar@{-}[ddrr] & & 2\ar@{-}[dr] & & 3 \ar@{-}[dl] \\ & & & *{\bullet_j}\ar@{-}[dl] & \\ & & *{\bullet_i} & & }$\end{array}\forall \, 2 \leq j \leq i \leq m-1. \end{equation*} We have proved that the $m-Dend$ operad is Koszul, and so its dual $m-Tetra$ (calculated in \cite{Leroux}) is Koszul too. \section*{Acknowledgements} I am grateful to Benoit Fresse for many useful discussions on this subject. I would also like to thank Muriel Livernet, Jean-Louis Loday, Martin Markl, Elisabeth Remm and Bruno Vallette for their comments. \clearpage
\section{The ATLAS experiment} \label{sec:atlasdet} The ATLAS detector is described in detail elsewhere \cite{Aad:2008zzm}. The main system used for this analysis is the calorimeter, divided into electromagnetic and hadronic parts. The lead/liquid-argon (LAr) electromagnetic calorimeter is split into three regions:\footnote{ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis pointing along the beam axis. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upward. Cylindrical coordinates ($r$, $\phi$) are used in the transverse plane, $\phi$ being the azimuthal angle around the beam axis, referred to the $x$-axis. The pseudorapidity is defined in terms of the polar angle $\theta$ with respect to the beamline as $\eta=-\ln\tan(\theta/2)$. When dealing with massive jets and particles, the rapidity $y = 1/2 \ln (E+p_z)/(E-p_z)$ is used, where $E$ is the jet energy and $p_z$ is the $z$-component of the jet momentum. The transverse momentum \pt is defined as the component of the momentum transverse to the beam axis. } the barrel ($\abseta < 1.475$), the end-cap ($1.375 < \abseta < 3.2$), and the forward ($3.1< \abseta <4.9$) regions. The hadronic calorimeter is divided into four regions: the barrel ($\abseta < 0.8$) and the extended barrel ($0.8 < \abseta < 1.7$) made of scintillator/steel, the end-cap ($1.5 < \abseta < 3.2$) with LAr/copper modules, and the forward calorimeter covering ($3.1 < \abseta < 4.9$) composed of LAr/copper and LAr/tungsten modules. The tracking detectors, consisting of silicon pixels, silicon microstrips, and transition radiation tracking detectors immersed in a 2\T axial magnetic field provided by a solenoid, are used to reconstruct charged-particle tracks in the pseudorapidity region $\abseta < 2.5$. 
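The coordinate definitions in the footnote can be illustrated with a short standalone sketch (our own, not ATLAS software); it also checks the standard fact that rapidity reduces to pseudorapidity in the massless limit:

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), from the polar angle theta (radians)."""
    return -math.log(math.tan(theta / 2.0))

def rapidity(E, pz):
    """y = 1/2 ln((E+pz)/(E-pz)), from the jet energy and z-momentum."""
    return 0.5 * math.log((E + pz) / (E - pz))

# a particle emitted at 90 degrees to the beam has eta = 0
assert abs(pseudorapidity(math.pi / 2)) < 1e-12

# for a massless particle E = |p|, so y equals eta:
# take |p| = 10 (arbitrary units) at theta = 0.5 rad, pz = |p| cos(theta)
p, theta = 10.0, 0.5
assert abs(rapidity(p, p * math.cos(theta)) - pseudorapidity(theta)) < 1e-9
```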
Outside the calorimeter, a large muon spectrometer measures the momenta and trajectories of muons deflected by a large air-core toroidal magnetic system. \section{Data-taking conditions} \label{sec:conditions} Data-taking periods in ATLAS are divided into intervals of approximately uniform running conditions called luminosity blocks, with a typical duration of one minute. For a given pair of colliding beam bunches, the expected number of $pp$ collisions per bunch crossing averaged over a luminosity block is referred to as \actmu. The average of \actmu over all bunches in the collider for a given luminosity block is denoted by \avgmu. In 2011 the peak luminosity delivered by the accelerator at the start of an LHC fill increased, causing \avgmu to change from 5 at the beginning of the data-taking period to more than 18 by the end. A \emph{bunch train} in the accelerator is generally composed of 36 proton bunches with 50\ns \emph{bunch spacing}, followed by a larger 250\ns window before the next bunch train. The overlay of multiple collisions, either from the same bunch crossing or due to electronic signals present from adjacent bunch crossings, is termed \emph{pileup}. There are two separate, although related, contributions to consider. In-time pileup refers to additional energy deposited in the detector due to simultaneous collisions in the same bunch crossing. Out-of-time pileup is a result of the 500\ns pulse length of the LAr calorimeter readout, compared to the 50\ns spacing between bunch crossings, leaving residual electronic signals in the detector, predominantly from previous interactions. The shape of the LAr calorimeter pulse is roughly a 100\ns peak of positive amplitude, followed by a shallower 400\ns trough of negative amplitude. Due to this trough, out-of-time pileup results by design in a subtraction of measured energy in the LAr calorimeter, providing some compensation for in-time pileup. 
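The compensation mechanism described above can be caricatured with a toy bipolar pulse (made-up amplitudes and functional form, not the true LAr response): a 100\ns positive lobe followed by a 400\ns negative lobe normalized to zero net area, so that a uniform rate of out-of-time interactions cancels on average:

```python
import numpy as np

# toy bipolar pulse: 100 ns positive lobe, 400 ns negative lobe (1 ns steps)
t_peak, t_trough = 100, 400
peak = np.sin(np.pi * np.arange(t_peak) / t_peak)          # positive half-sine
trough = -np.sin(np.pi * np.arange(t_trough) / t_trough)   # negative half-sine
# scale the trough so the total integral vanishes (net-zero-area design)
trough *= peak.sum() / (-trough).sum()
pulse = np.concatenate([peak, trough])

# short positive peak, long shallow negative trough
assert pulse[:t_peak].min() >= 0 and pulse[t_peak:].max() <= 0
# net area ~ 0: constant pileup from earlier crossings cancels on average
assert abs(pulse.sum()) < 1e-9
```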
\section{Summary and conclusions} \label{sec:conclusions} Cross-section measurements are presented for dijet production in $pp$ collisions at \onlyseventev centre-of-mass energy as a function of dijet mass up to $5\TeV$, and for half the rapidity separation $\ystar < 3.0$. These measurements are based on the full data sample collected with the ATLAS detector during 2011 at the LHC, corresponding to an integrated luminosity of \lumifbshort. Jets are reconstructed with the \AKT algorithm using two values of the radius parameter, \rfour and \rsix, in order to compare data and theory for two different regimes of hadronization, parton shower, underlying event, and higher-order QCD corrections. The measurements are corrected for detector effects to the particle level and compared to theoretical predictions. Fixed-order NLO QCD calculations by \nlojet, corrected for non-perturbative effects, and \powheg NLO matrix element calculations interfaced to a parton-shower MC generator are considered. In both cases, corrections to account for electroweak effects are applied. The statistical uncertainties are smaller than in the previous ATLAS measurement of dijet production using data collected during 2010. In particular, the improved measurement in the high dijet-mass region can be used to constrain the PDFs at high momentum fraction. The correlations between the two measurements, which should be accounted for in a common fit of the PDFs, are non-trivial due to the different treatments of the jet energy calibration uncertainty. Detailed quantitative comparisons between the measured cross-sections and theoretical predictions are presented, using a frequentist method employing a $\chi^2$ definition generalized for asymmetric uncertainties and their correlations. Good agreement is observed when using the CT10, NNPDF2.1 and MSTW 2008 PDF sets for the theoretical predictions, for both values of the jet radius parameter, \rfour and \rsix.
Disagreement is observed in some ranges of dijet mass and \ystar when using the HERAPDF1.5 PDF set, for both values of the jet radius parameter. Even stronger disagreement is observed between data and the \nlojet predictions using the ABM11 PDF set, indicating the sensitivity of the measurements to the choice of PDF set. The sensitivity of the frequentist method to assumptions about the correlations between the jet energy calibration uncertainties is tested and found to be small. However, tests of the impact of assumptions about the correlations between the theoretical uncertainties, dominant in the comparison between data and the SM, would be desirable. The \emph{CLs} method is employed using measured cross-sections to explore possible contributions to dijet production from physics beyond the Standard Model. An example using a model of contact interactions is shown, computed using full NLO QCD. An exclusion of compositeness scales $\Lambda < 6.9$--7.7\TeV is achieved, depending on the PDF set used for the calculation. This analysis gives results similar to those from a previous ATLAS analysis using the same data set, that relied on detailed detector simulations of the contact interaction contribution, and does not improve the lower limit on $\Lambda$. The method presented here can be used to confront other new models using measurements of dijet production in $pp$ collisions. \section{Trigger, jet reconstruction and data selection} \label{sec:datasel} This measurement uses the full data set of $pp$ collisions collected with the ATLAS detector at \onlyseventev centre-of-mass energy in 2011, corresponding to an integrated luminosity of \lumifbshort \cite{Aad:2013ucp}. Only events collected during stable beam conditions and passing all data-quality requirements are considered in this analysis. At least one primary vertex, reconstructed using two or more tracks, each with $\pt > 400\MeV$, must be present to reject cosmic ray events and beam background. 
The primary vertex with the highest $\sum p_{\mathrm{T}}^{2}$ of associated tracks is selected as the hard-scatter vertex. Due to the high luminosity, a suite of single-jet triggers was used to collect data, where only the trigger with the highest \pt threshold remained unprescaled. An event must satisfy all three levels of the jet trigger system based on the transverse energy (\et) of jet-like objects. Level-1 provides a fast, hardware decision using the combined \et of low-granularity calorimeter towers. Level-2 performs a simple jet reconstruction in a window around the geometric region passed at Level-1, with a threshold generally 20 \GeV higher than at Level-1. Finally, a jet reconstruction using the \AKT algorithm with \rfour is performed over the entire detector solid angle by the Event Filter (EF). The EF requires transverse energy thresholds typically 5 \GeV higher than those used at Level-2. The efficiencies of the central jet triggers ($\abseta < 3.2$) relevant to this analysis are shown in figure \ref{fig:TrigEff}. They are determined using an unbiased sample known to be fully efficient in the jet-\pt range of interest. The unbiased sample can be either a random trigger, or a single-jet trigger that has already been shown to be fully efficient in the relevant jet-\pt range. The efficiency is presented in a representative rapidity interval ($1.2 \leq \absy < 2.1$), as a function of calibrated jet \pt, for jets with radius parameter \rsix. Triggers are used only where the probability that a jet fires the trigger is $> 99\%$. Because events are triggered using \rfour jets, the \pt at which calibrated \rsix jets become fully efficient is significantly higher than that for calibrated \rfour jets. To take advantage of the lower \pt at which \rfour jets are fully efficient, distinct \pt ranges are used to collect jets for each value of the radius parameter. \begin{figure}[htp!] 
\begin{center} \includegraphics[width=0.75\textwidth]{triggereff} \caption{ Jet trigger efficiency as a function of calibrated jet \pt for jets in the interval $1.2 \leq \absy < 2.1$ with radius parameter \rsix, shown for various trigger thresholds. The energy of jets in the trigger system is measured at the electromagnetic energy scale (EM scale), which correctly measures the energy deposited by electromagnetic showers. } \label{fig:TrigEff} \end{center} \end{figure} A dijet event is considered in this analysis if either the leading jet, the subleading jet, or both are found to satisfy one of the jet trigger requirements. Because of the random nature of the prescale decision, some events are taken not because of the leading jet, but instead because of the subleading jet. This two-jet trigger strategy results in an increase in the sample size of about $10\%$. To properly account for the combined prescale of two overlapping triggers the ``inclusion method for fully efficient combinations'' described in ref. \cite{Lendermann:2009ah} is used. After events are selected by the trigger system, they are fully reconstructed offline. The input objects to the jet algorithm are three-dimensional ``topological'' clusters \cite{Lampl:2008zz} corrected to the electromagnetic energy scale. Each cluster is constructed from a seed calorimeter cell with $|E_{\rm cell}| > 4\sigma$, where $\sigma$ is the RMS of the total noise of the cell from both electronic and pileup sources. Neighbouring cells are iteratively added to the cluster if they have $|E_{\rm cell}| > 2\sigma$. Finally, an outer layer of all surrounding cells is added. A calibration that accounts for dead material, out-of-cluster losses for pions, and calorimeter response, is applied to clusters identified as hadronic by their topology and energy density \cite{Barillari:1112035}. This additional calibration serves to improve the energy resolution from the jet-constituent level through the clustering and calibration steps. 
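The seeding and growing scheme just described can be sketched for a one-dimensional toy "calorimeter" (hypothetical illustrative code, not the ATLAS implementation, which works on three-dimensional cell neighbourhoods):

```python
def topocluster_1d(cells, sigma):
    """Toy 1D version of topological clustering: seed cells with
    |E| > 4*sigma, iteratively add neighbours with |E| > 2*sigma,
    then add one outer layer of all surrounding cells."""
    seeds = [i for i, E in enumerate(cells) if abs(E) > 4 * sigma]
    clusters, used = [], set()
    for seed in seeds:
        if seed in used:
            continue
        cluster, frontier = {seed}, {seed}
        # Growth step: neighbours above the 2*sigma threshold.
        while frontier:
            grown = set()
            for i in frontier:
                for j in (i - 1, i + 1):
                    if 0 <= j < len(cells) and j not in cluster \
                            and abs(cells[j]) > 2 * sigma:
                        grown.add(j)
            cluster |= grown
            frontier = grown
        # Outer layer: all direct neighbours, with no threshold.
        outer = set()
        for i in cluster:
            for j in (i - 1, i + 1):
                if 0 <= j < len(cells) and j not in cluster:
                    outer.add(j)
        cluster |= outer
        used |= cluster
        clusters.append(sorted(cluster))
    return clusters

# Cell energies in units of the noise RMS (sigma = 1):
cells = [0.1, 0.3, 2.5, 9.0, 3.0, 0.2, 0.1]
print(topocluster_1d(cells, 1.0))  # one cluster grown around the 9-sigma seed
```

The noise-dependent thresholds suppress pileup and electronic noise while keeping the tails of genuine showers via the outer layer.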
Taken as input to the \AKT jet reconstruction algorithm, each cluster is considered as a massless particle with an energy $E=\sum E_{\rm cell}$, and a direction given by the energy-weighted barycentre of the cells in the cluster with respect to the geometrical centre of the ATLAS detector. The four-momentum of an uncalibrated jet is defined as the sum of four-momenta of the clusters making up the jet. The jet is then calibrated in four steps: \begin{enumerate} \item Additional energy due to pileup is subtracted using a correction derived from MC simulation and validated in situ as a function of \avgmu, the number of primary vertices (\npv) in the bunch crossing, and jet $\eta$ \cite{ATLAS:2012lla}. \item The direction of the jet is corrected such that the jet originates from the selected hard-scatter vertex of the event instead of the geometrical centre of ATLAS. \item Using MC simulation, the energy and the position of the jet are corrected for instrumental effects (calorimeter non-compensation, additional dead material, and effects due to the magnetic field) and the jet energy scale is restored on average to that of the particle-level jet. For the calibration, the particle-level jet does not include muons and non-interacting particles. \item An additional in situ calibration is applied to correct for residual differences between MC simulation and data, derived by combining the results of $\gamma$--jet, $Z$--jet, and multijet momentum balance techniques. \end{enumerate} The full calibration procedure is described in detail in ref. \cite{ATLAS-CONF-2013-004}. The jet acceptance is restricted to $\absy < 3.0$ so that the trigger efficiency remains $> 99\%$. Furthermore, the leading jet is required to have $\pt > 100 \GeV$ and the subleading jet is required to have $\pt > 50 \GeV$ to be consistent with the asymmetric cuts imposed on the theoretical predictions. 
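The construction of the uncalibrated jet four-momentum described above, from massless clusters defined by an energy and a direction, can be sketched as follows (illustrative code with hypothetical inputs, not the ATLAS reconstruction software):

```python
import math

def cluster_fourvec(E, eta, phi):
    """Massless cluster: |p| = E, with direction given by (eta, phi)."""
    pt = E / math.cosh(eta)
    return (E, pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta))

def jet_fourvec(clusters):
    """Uncalibrated jet four-momentum: sum of the cluster four-momenta."""
    E, px, py, pz = 0.0, 0.0, 0.0, 0.0
    for c in clusters:
        e, x, y, z = cluster_fourvec(*c)
        E, px, py, pz = E + e, px + x, py + y, pz + z
    return E, px, py, pz

def mass(E, px, py, pz):
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two massless clusters with different directions give a massive jet:
jet = jet_fourvec([(50.0, 0.10, 0.00), (30.0, -0.05, 0.08)])
print(mass(*jet))  # > 0, even though each constituent is massless
```

This makes explicit why jets acquire a mass from the angular spread of their constituents, which is relevant for the rapidity definition used in the dijet-mass observable.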
Part of the data-taking period was affected by a read-out problem in a region of the LAr calorimeter, causing jets in this region to be poorly reconstructed. Since the same unfolding procedure is used for the entire data sample, events are rejected if either the leading or the subleading jet falls in the region $-0.88 < \phi < -0.5$, for all $\abseta$, in order to avoid a bias in the spectra. This requirement results in a loss in acceptance of approximately 10\%. The inefficiency is accounted for and corrected in the data unfolding procedure. The leading and subleading jets must also fulfil the ``medium'' quality criteria as described in ref. \cite{ATLAS-CONF-2012-020}, designed to reject cosmic rays, beam halo, and detector noise. If either jet fails these criteria, the event is not considered. More than four (two) million dijet events are selected with these criteria using jets with radius parameter \rfour (\rsix), with the difference in sample size resulting mostly from the trigger requirements. \section{Introduction} \label{sec:intro} Measurements of jet production in $pp$ collisions at the LHC \cite{Evans:2008zzb} test the predictions of Quantum Chromodynamics (QCD) at the largest centre-of-mass energies explored thus far in colliders. They are useful for understanding the strong interaction and its coupling strength \alphas, one of the fundamental parameters of the Standard Model. The measurement of cross-sections as a function of dijet mass is also sensitive to resonances and new interactions, and can be employed in searches for physics beyond the Standard Model. Where no new contribution is found, the cross-section results can be exploited to study the partonic structure of the proton. In particular, the high dijet-mass region can be used to constrain the parton distribution function (PDF) of gluons in the proton at high momentum fraction. 
Previous measurements at the Tevatron in $p\bar{p}$ collisions have shown good agreement between predictions from QCD calculations and data for lower dijet mass \cite{Aaltonen:2008dn,Abazov:2010fr}. Recent results from the LHC using $pp$ collisions have extended the measurement of dijet production cross-sections to higher dijet-mass values \cite{Chatrchyan:2011qta,Aad:2011fc,Chatrchyan:2012bja}. While good agreement between data and several theoretical predictions has been observed in general, the predicted cross-section values using the CT10 PDF set \cite{Lai:2010vv} at high dijet mass tend to be larger than those measured in data. In this paper, measurements of the double-differential dijet cross-sections are presented as functions of the dijet mass and half the rapidity separation of the two highest-\pt jets. The measurements are made at the particle level (see section \ref{sec:xsdef} for a definition), using the iterative, dynamically stabilized (IDS) unfolding method \cite{Malaescu:2011yg} to correct for detector effects. The use of an integrated luminosity more than a factor of 100 larger than in the previous ATLAS publication \cite{Aad:2011fc} improves the statistical power in the high dijet-mass region. In spite of the increased number of simultaneous proton--proton interactions in the same beam bunch crossing during 2011 data taking, improvements in the jet energy calibration result in an overall systematic uncertainty smaller than previously achieved \cite{ATLAS-CONF-2013-004}. The measurements are compared to next-to-leading-order (NLO) QCD calculations \cite{Nagy:2003tz}, as well as to an NLO matrix element calculation by \powheg \cite{Nason:2004rx}, which is interfaced to the parton-shower Monte Carlo generator \pythia. Furthermore, a quantitative comparison of data and theory predictions is made using a frequentist method.
New particles, predicted by theories beyond the Standard Model (SM), may decay into dijets that can be observed as narrow resonances in dijet mass spectra \cite{Baur:1987ga}. New interactions at higher energy scales, parameterized using an effective theory of contact interactions \cite{Eichten:1983hw}, may lead to modifications of dijet cross-sections at high dijet mass. Searches for these deviations have been performed by ATLAS \cite{ATLAS:2012pu} and CMS \cite{Chatrchyan:2012bf}. The approach followed here is to constrain physics beyond the SM using unfolded cross-sections and the full information on their uncertainties and correlations. This information is also provided in HepData \cite{Buckley:2010jn}. Providing the results in this form has the advantage of allowing new models to be confronted with data without the need for additional detector simulations. This paper considers only one theory of contact interactions, rather than presenting a comprehensive list of exclusion ranges for various models. The results are an illustration of what can be achieved using unfolded data, and do not seek to improve the current exclusion range. The content of the paper is as follows. The ATLAS detector and data-taking conditions are briefly described in sections \ref{sec:atlasdet} and \ref{sec:conditions}, followed by the cross-section definition in section \ref{sec:xsdef}. Sections \ref{sec:mcsamp} and \ref{sec:theory} describe the Monte Carlo samples and theoretical predictions, respectively. The trigger, jet reconstruction, and data selection are presented in section \ref{sec:datasel}, followed by studies of the stability of the results under different luminosity conditions (pileup) in section \ref{sec:pileup}. Sections \ref{sec:unfolding} and \ref{sec:sysunc} discuss the data unfolding and systematic uncertainties on the measurement, respectively. These are followed by the introduction of a frequentist method for the quantitative comparison of data and theory predictions in section \ref{sec:stattest}.
The cross-section results are presented in section \ref{sec:results}, along with quantitative statements of the ability of the theory prediction to describe the data. An application of the frequentist method for setting a lower limit on the compositeness scale of a model of contact interactions is shown in section \ref{sec:setlimits}. Finally, the conclusions are given in section \ref{sec:conclusions}. \section{Monte Carlo samples} \label{sec:mcsamp} The default Monte Carlo (MC) generator used to simulate jet events is \pythia 6.425 \cite{Sjostrand:2006za} with the ATLAS Underlying Event Tune AUET2B \cite{ATL-PHYS-PUB-2011-009}. It is a leading-order (LO) generator with $2 \to 2$ matrix element calculations, supplemented by leading-logarithmic parton showers ordered in \pt. A simulation of the underlying event is also provided, including multiple parton interactions. The Lund string model \cite{Andersson:1979ij} is used to simulate the hadronization process. To simulate pileup, minimum-bias events\footnote{ Events passing a trigger with minimum requirements, and which correspond mostly to inelastic $pp$ collisions, are called minimum-bias events. } are generated using \pythia 8 \cite{Sjostrand:2007gs} with the 4C tune \cite{Corke:2010yf} and MRST LO$^{**}$ proton PDF set \cite{Sherstnev:2008dm}. The number of minimum-bias events overlaid on each signal event is chosen to model the distribution of \avgmu throughout the data-taking period. To estimate the uncertainties on the hadronization and parton-shower modelling, events are also generated by \herwigpp 2.5.2 \cite{Bahr:2008pv,Gieseke:2011na,Corcella:2000bw} using the UE-EE-3 tune \cite{Gieseke:2012ft}. In this LO generator, the parton shower follows an angular ordering, and a clustering model \cite{Webber:1983if} is used for the hadronization. The effect of the underlying event is included using the eikonal multiple-scattering model \cite{Bahr:2008dy}. Two PDF sets are considered for both MC generators.
For the nominal detector simulation, the MRST LO$^{**}$ proton PDF set is used. Additionally, versions of the same tunes based on the CTEQ6L1 \cite{Pumplin:2002vw} proton PDF set are used to assess uncertainties on the non-perturbative corrections (see section \ref{subsec:theoryunc}). The output four-vectors from these event generators are passed to a detector simulation \cite{Aad:2010ah} based on \geant \cite{Agostinelli:2002hh}. Simulated events are digitized to model the detector responses, and then reconstructed using the software used to process data. \section{Acknowledgements} We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently. We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWF and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF, DNSRC and Lundbeck Foundation, Denmark; EPLANET, ERC and NSRF, European Union; IN2P3-CNRS, CEA-DSM/IRFU, France; GNSF, Georgia; BMBF, DFG, HGF, MPG and AvH Foundation, Germany; GSRT and NSRF, Greece; ISF, MINERVA, GIF, DIP and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; FOM and NWO, Netherlands; BRF and RCN, Norway; MNiSW and NCN, Poland; GRICES and FCT, Portugal; MNE/IFA, Romania; MES of Russia and ROSATOM, Russian Federation; JINR; MSTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SER, SNSF and Cantons of Bern and Geneva, Switzerland; NSC, Taiwan; TAEK, Turkey; STFC, the Royal Society and Leverhulme Trust, United Kingdom; DOE and NSF, United States of America. 
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities worldwide. \section{Stability of the results under pileup conditions} \label{sec:pileup} \begin{figure}[ht!] \begin{center} \subfigure[\ystarone]{\includegraphics[width=0.45\textwidth]{muact_band_eta1}} \subfigure[\ystarthree]{\includegraphics[width=0.45\textwidth]{muact_band_eta3}} \caption{ Luminosity-normalized dijet yields as a function of dijet mass for two ranges of \actmu, in the range (a) \ystarone and (b) \ystarthree. The measurements are shown as ratios with respect to the full luminosity-normalized dijet yields. The gray bands represent the uncertainty on the jet energy calibration that accounts for the \avgmu and \npv dependence, propagated to the luminosity-normalized dijet yields. The statistical uncertainty shown by the error bars is propagated assuming no correlations between the samples. This approximation has a small impact, and does not reduce the agreement observed within the pileup uncertainties. } \label{fig:pileup} \end{center} \end{figure} The dijet mass is sensitive to the effects of pileup through the energies, and to a lesser extent the directions, of the leading and subleading jets. As such, it is important to check the stability of the measurement with respect to various pileup conditions. The effects of pileup are removed at the per-jet level during the jet energy calibration (see section \ref{sec:datasel}). To check for any remaining effects, the integrated luminosity delivered for several ranges of \actmu is determined separately for each trigger used in this analysis. This information is then used to compute the luminosity-normalized dijet yields in different ranges of \actmu. 
A comparison of the yields for two ranges of \actmu is shown in figure \ref{fig:pileup}. While statistical fluctuations are present, the residual bias is covered by the uncertainties on the jet energy calibration due to \avgmu and \npv, derived through the in situ validation studies (see section \ref{sec:sysunc}) \cite{ATLAS-CONF-2013-004,ATLAS:2012lla}. The dependence of the luminosity-normalized dijet yields on the position in the accelerator bunch train is also studied. Because the first bunches in a train do not fully benefit from the compensation of previous bunches due to the long LAr calorimeter pulses, a bias is observed in the jet energy calibration. This bias can be studied by defining a control region using only events collected from the middle of the bunch train, where full closure in the jet energy calibration is obtained. By comparing the luminosity-normalized dijet yields using the full sample to that from the subsample of events collected from the middle of the bunch train, any remaining effects on the measurement due to pileup are estimated. An increase in the luminosity-normalized dijet yields using the full sample compared to the subsample from the middle of the bunch train is observed, up to 5\% at low dijet mass. This increase is well described in the MC simulation, so that the effect is corrected for during the unfolding step. All remaining differences are covered by the jet energy calibration uncertainty components arising from pileup. The stability of the luminosity-normalized dijet yields in the lowest dijet-mass bins is studied as a function of the date on which the data were collected. Since portions of the hadronic calorimeter became non-functional during the course of running\footnote{ Up to 5\% of the modules in the barrel and extended barrels of the hadronic calorimeter were turned off by the end of data taking. } and pileup increased throughout the year, this provides an important check of the stability of the result. 
The observed variations are consistent with those due to \actmu, which on average increased until the end of data taking. Pileup is found to be the dominant source of variations, whereas detector effects are small in comparison. \section{Cross-section results} \label{sec:results} Measurements of the dijet double-differential cross-sections as a function of dijet mass in various ranges of \ystar are shown in figure \ref{fig:MassSummary04} for \AKT jets with values of the radius parameter \rfour and \rsix. The cross-sections are measured up to a dijet mass of 5 \TeV and $\ystar$ of $3.0$, and are seen to decrease quickly with increasing dijet mass. The NLO QCD calculations by \nlojet using the CT10 PDF set, which are corrected for non-perturbative and electroweak effects, are compared to the measured cross-sections. No major deviation of the data from the theoretical predictions is observed over the full kinematic range, covering almost eight orders of magnitude in measured cross-section values. More detailed quantitative comparisons are made between data and theoretical predictions in the following subsection. At a given dijet mass, the absolute cross-sections using jets with radius parameter \rfour are smaller than those using \rsix. This is due to the smaller value of the jet radius parameter, resulting in a smaller contribution to the jet energy from the parton shower and the underlying event. Tables summarising the measured cross-sections are provided in Appendix \ref{app:tables}. The quadrature sum of the uncertainties listed in the tables is the total uncertainty on the measurement, although the in situ, pileup, and flavour columns are each composed of two or more components. The full set of cross-section values and uncertainty components, each of which is fully correlated in dijet mass and \ystar but uncorrelated with the other components, can be found in HepData \cite{Buckley:2010jn}. 
\addtolength{\subfigcapskip}{-12pt} \begin{figure}[!htp] \begin{center} \subfigure[\rfour]{ \includegraphics[width=0.6\textwidth]{mass04} } \subfigure[\rsix]{ \includegraphics[width=0.6\textwidth]{mass06} } \caption{ Dijet double-differential cross-sections for \AKT jets with radius parameter \rfour and \rsix, shown as a function of dijet mass in different ranges of \ystar. To aid visibility, the cross-sections are multiplied by the factors indicated in the legend. The error bars indicate the statistical uncertainty on the measurement, and the dark shaded band indicates the sum in quadrature of the experimental systematic uncertainties. For comparison, the NLO QCD predictions of \nlojet using the CT10 PDF set, corrected for non-perturbative and electroweak effects, are included. The renormalization and factorization scale choice $\mu$ is as described in section \ref{sec:theory}. The hatched band shows the uncertainty associated with the theory predictions. Because of the logarithmic scale on the vertical axis, the experimental and theoretical uncertainties are only visible at high dijet mass, where they are largest. } \label{fig:MassSummary04} \end{center} \end{figure} \addtolength{\subfigcapskip}{0pt} \subsection{Quantitative comparison with \nlojet predictions}\label{subsec:nlojetresult} The ratio of the NLO QCD predictions from \nlojet, corrected for non-perturbative and electroweak effects, to the data is shown in figures \ref{fig:pdf04_1}--\ref{fig:pdf06_2} for various PDF sets. The CT10, HERAPDF1.5, epATLJet13, MSTW 2008, NNPDF2.3, and ABM11 PDF sets are used. As discussed in section \ref{sec:sysunc}, the individual experimental and theoretical uncertainty components are fully correlated between \mass and \ystar bins. As such, the frequentist method described in section \ref{sec:stattest} is necessary to make quantitative statements about the agreement of theoretical predictions with data.
For this measurement, the \nlojet predictions using the MSTW 2008, NNPDF2.3 and ABM11 PDF sets have smaller theoretical uncertainties than those using the CT10 and HERAPDF1.5 PDF sets. Due to the use of ATLAS jet data in the PDF fit, the predictions using the epATLJet13 PDF set have smaller uncertainties at high dijet mass compared to those using the HERAPDF1.5 PDF set, when only considering the uncertainties due to the experimental inputs for both. \afterpage{\clearpage} The frequentist method is employed using the hypothesis that the SM is the underlying theory. Here, the NNPDF2.1 PDF set is considered instead of NNPDF2.3 due to the larger number of replicas available, which allows a better determination of the uncertainty components. The epATLJet13 PDF set is not considered since the full set of uncertainties is not available. The resulting observed p-values are also shown in figures \ref{fig:pdf04_1}--\ref{fig:pdf06_2}, where all \mass bins are considered in each range of \ystar separately. The agreement between the data and the various theories is good (observed p-value greater than $5\%$) in all cases, except the following. For the predictions using HERAPDF1.5, the smallest observed p-values are seen in the range \ystarthree for jets with radius parameter \rfour, and in the range \ystartwo for jets with radius parameter \rsix, both of which are $<5\%$. For the predictions using ABM11, the observed p-value is $<0.1\%$ for each of the first three ranges of $\ystar<1.5$, for both values of jet radius parameter. The results using the ABM11 PDF set also show observed p-values of less than $5\%$, for the range \ystarfive for jets with radius parameter \rfour, and \ystarfour for \rsix jets. Figure~\ref{fig:Limits_NLO_chi2withCorrelationsAsymmSyst_CT10} shows the \chisq distribution from pseudo-experiments for the SM hypothesis using the CT10 PDF set, as well as the observed \chisq value. 
The full range of dijet mass in the first three ranges of $\ystar < 1.5$, for jets with radius parameter \rsix, is considered. The mean, median, $\pm 1\sigma$ and $\pm 2\sigma$ regions of the \chisq distribution from pseudo-experiments around the median, and number of degrees of freedom are also indicated. Figure \ref{fig:Limits_NLO_chi2withCorrelationsAsymmSyst_CT10_y4} shows that a broadening of the \chisq distribution from pseudo-experiments is observed when additional degrees of freedom are included by combining multiple ranges of \ystar. For both \chisq distributions, the mean and median are close to the number of degrees of freedom. This shows that the choice of generalized \chisq as the test statistic in the frequentist method behaves as expected. Because the data at larger values of \ystar are increasingly dominated by experimental uncertainties, the sensitivity to the proton PDFs is reduced. For this reason, when considering a combination only the first three ranges of $\ystar<1.5$ are used. While the low dijet-mass region provides constraints on the global normalization, due to the increased number of degrees of freedom it also introduces a broadening of the \chisq distribution from the pseudo-experiments that reduces sensitivity to the high dijet-mass region. To focus on the regions where the PDF uncertainties are large while the data still provide a good constraint, the comparison between data and SM predictions is also performed using a high dijet-mass subsample. The high dijet-mass subsample is restricted to $m_{12}> 1.31 \TeV$ for \ystarone, $m_{12}> 1.45 \TeV$ for \ystartwo, and $m_{12}> 1.60 \TeV$ for \ystarthree. \begin{sidewaysfigure}[!htbp] \begin{center} \includegraphics[width=0.90\textwidth]{pdf04_1} \caption{ Ratio of the NLO QCD predictions of \nlojet to the measurements of the dijet double-differential cross-section as a function of dijet mass in different ranges of \ystar. 
The results are shown for jets identified using the \AKT algorithm with radius parameter \rfour. The predictions of \nlojet using different PDF sets (CT10, HERAPDF1.5, and epATLJet13) are shown. The renormalization and factorization scale choice $\mu$ is as described in section \ref{sec:theory}. Observed p-values resulting from the comparison of theory with data are shown considering all \mass bins in each range of \ystar separately. The HERAPDF1.5 analysis accounts for model and parameterization uncertainties as well as experimental uncertainties. The theoretical predictions are labelled with \emph{exp. only} when the model and parameterization uncertainties are not included. } \label{fig:pdf04_1} \end{center} \end{sidewaysfigure} \begin{sidewaysfigure}[!htbp] \begin{center} \includegraphics[width=0.90\textwidth]{pdf04_2} \caption{ Ratio of the NLO QCD predictions of \nlojet to the measurements of the dijet double-differential cross-section as a function of dijet mass in different ranges of \ystar. The results are shown for jets identified using the \AKT algorithm with radius parameter \rfour. The predictions of \nlojet using different PDF sets (MSTW 2008, NNPDF2.3 and ABM11) are shown. The renormalization and factorization scale choice $\mu$ is as described in section \ref{sec:theory}. Observed p-values resulting from the comparison of theory with data are shown considering all \mass bins in each range of \ystar separately. Here, the observed p-value is shown for the NNPDF2.1 PDF set, while the lines are for the NNPDF2.3 PDF set (see text). } \label{fig:pdf04_2} \end{center} \end{sidewaysfigure} \begin{sidewaysfigure}[!htbp] \begin{center} \includegraphics[width=0.90\textwidth]{pdf06_1} \caption{ Ratio of the NLO QCD predictions of \nlojet to the measurements of the dijet double-differential cross-section as a function of dijet mass in different ranges of \ystar. The results are shown for jets identified using the \AKT algorithm with radius parameter \rsix. 
The predictions of \nlojet using different PDF sets (CT10, HERAPDF1.5, and epATLJet13) are shown. The renormalization and factorization scale choice $\mu$ is as described in section \ref{sec:theory}. Observed p-values resulting from the comparison of theory with data are shown considering all \mass bins in each range of \ystar separately. The HERAPDF1.5 analysis accounts for model and parameterization uncertainties as well as experimental uncertainties. The theoretical predictions are labelled with \emph{exp. only} when the model and parameterization uncertainties are not included. } \label{fig:pdf06_1} \end{center} \end{sidewaysfigure} \begin{sidewaysfigure}[!htbp] \begin{center} \includegraphics[width=0.90\textwidth]{pdf06_2} \caption{ Ratio of the NLO QCD predictions of \nlojet to the measurements of the dijet double-differential cross-section as a function of dijet mass in different ranges of \ystar. The results are shown for jets identified using the \AKT algorithm with radius parameter \rsix. The predictions of \nlojet using different PDF sets (MSTW 2008, NNPDF2.3 and ABM11) are shown. The renormalization and factorization scale choice $\mu$ is as described in section \ref{sec:theory}. Observed p-values resulting from the comparison of theory with data are shown considering all \mass bins in each range of \ystar separately. Here, the observed p-value is shown for the NNPDF2.1 PDF set, while the lines are for the NNPDF2.3 PDF set (see text). } \label{fig:pdf06_2} \end{center} \end{sidewaysfigure} \begin{figure}[!htbp] \begin{center} \subfigure[\ystarone]{ \includegraphics[width=0.7\linewidth]{chi2_sm_Eta1}\hspace{0.0cm} \label{fig:Limits_NLO_chi2withCorrelationsAsymmSyst_CT10_y1} } \\ \subfigure[$\ystar < 1.5$]{ \includegraphics[width=0.7\linewidth]{chi2_sm_Eta1-3} \label{fig:Limits_NLO_chi2withCorrelationsAsymmSyst_CT10_y4} } \caption{ The \chisq distribution of pseudo-experiments (black histogram) for NLO QCD using the CT10 PDF set. 
The renormalization and factorization scale choice $\mu$ is as described in section \ref{sec:theory}. The full information on the uncertainties, including their asymmetries and correlations, is used for both the pseudo-experiments and the \chisq calculation. The black vertical dashed (solid) line indicates the median (mean) of the distribution, while the green (yellow) band indicates the $\pm 1\sigma\ (\pm 2\sigma)$ region. The blue dot-dashed vertical lines indicate the observed \chisq, with the corresponding observed p-value given in the legend (see text). The pink dotted lines show the number of degrees of freedom, 21 for (a) and 61 for (b). The plots correspond to (a) the range \ystarone , and (b) the first three ranges of $\ystar < 1.5$ combined, for jets reconstructed with radius parameter \rsix. } \label{fig:Limits_NLO_chi2withCorrelationsAsymmSyst_CT10} \end{center} \end{figure} Table \ref{tab:variousPdfDataVsQCD_1} presents a reduced summary of the observed p-values for the comparison of measured cross-sections and SM predictions in both the full and high dijet-mass regions using various PDF sets, for both values of the jet radius parameter. For the first three ranges of $\ystar < 1.5$, as well as their combination, theoretical predictions using the CT10 PDF set have an observed p-value $>6.6\%$ for all ranges of dijet mass, and are typically much larger than this. For the HERAPDF1.5 PDF set, good agreement is found at high dijet mass in the ranges \ystarone and \ystarthree (not shown), both with observed p-values $>15\%$. Disagreement is observed for jets with radius parameter \rsix when considering the full dijet-mass region for the first three ranges of $\ystar < 1.5$ combined, where the observed p-value is $2.5\%$. This is due to the differences already noted for the range \ystartwo, where the observed p-value is $0.9\%$ (see figure \ref{fig:pdf06_1}).
Disagreement is also seen when limiting to the high dijet-mass region and combining the first three ranges of $\ystar < 1.5$, resulting in an observed p-value of $0.7\%$. The observed p-values for the MSTW 2008 and NNPDF2.1 PDF sets are always $>12.5\%$ in the ranges shown in table \ref{tab:variousPdfDataVsQCD_1}. This is particularly relevant considering these two PDF sets provide small theoretical uncertainties at high dijet mass. A strong disagreement, where the observed p-value is generally $<0.1\%$, is observed for the ABM11 PDF set for the first three ranges of $\ystar < 1.5$ and both values of the jet radius parameter. \clearpage \begin{table}[!t] \begin{center} \begin{tabular}{|ccccc|} \hline PDF set & \ystar ranges & mass range & \multicolumn{2}{c|}{${\rm P}_{\rm obs}$} \\ & & (full/high) & \rfour & \rsix \\ \hline & \ystarone & high & $0.742$ & $0.785$ \\ CT10 & $\ystar < 1.5$ & high & $0.080$ & $0.066$ \\ & $\ystar < 1.5$ & full & $0.324$ & $0.168$ \\ \hline & \ystarone & high & $0.688$ & $0.504$ \\ HERAPDF1.5 & $\ystar < 1.5$ & high & $0.025$ & $0.007$ \\ & $\ystar < 1.5$ & full & $0.137$ & $0.025$ \\ \hline & \ystarone & high & $0.328$ & $0.533$ \\ MSTW 2008 & $\ystar < 1.5$ & high & $0.167$ & $0.183$ \\ & $\ystar < 1.5$ & full & $0.470$ & $0.352$ \\ \hline & \ystarone & high & $0.405$ & $0.568$ \\ NNPDF2.1 & $\ystar < 1.5$ & high & $0.151$ & $0.125$ \\ & $\ystar < 1.5$ & full & $0.431$ & $0.242$ \\ \hline & \ystarone & high & $0.024$ & $<10^{-3}$ \\ ABM11 & $\ystar < 1.5$ & high & $<10^{-3}$ & $<10^{-3}$ \\ & $\ystar < 1.5$ & full & $<10^{-3}$ & $<10^{-3}$ \\ \hline \end{tabular} \caption{ Sample of observed p-values obtained in the comparison between data and the NLO QCD predictions using the CT10, HERAPDF1.5, MSTW 2008, NNPDF2.1 and ABM11 PDF sets, with values of the jet radius parameter \rfour and \rsix. 
Results are presented for the range \ystarone, as well as the combination of the first three ranges of $\ystar < 1.5$, performing the test in the full dijet-mass range or restricting it to the high dijet-mass subsample. The full information on uncertainties, including their asymmetries and correlations, is used for both the pseudo-experiments and the \chisq calculation. } \label{tab:variousPdfDataVsQCD_1} \end{center} \end{table} It is possible to further study the poor agreement observed at high dijet mass for the combination of the first three ranges of $\ystar<1.5$ when using the HERAPDF1.5 PDF set by exploring the four variations described in section \ref{subsec:theoryunc}. The observed p-values using variations 1, 2, and 4 in the \nlojet predictions, shown in table \ref{tab:variousPdfDataVsQCD_2}, are generally similar to those using the default HERAPDF1.5 PDF set. However, much smaller p-values are observed for variation 3, which has a more flexible parameterization for the valence $u$-quark contribution. Including the present dijet measurement in the PDF analysis should provide a better constraint on the choice of parameterization. 
\begin{table}[!t] \begin{center} \begin{tabular}{|ccccc|} \hline PDF set & \ystar ranges & mass range & \multicolumn{2}{c|}{${\rm P}_{\rm obs}$} \\ HERAPDF1.5 & & (full/high) & \rfour & \rsix \\ \hline & \ystarone & high & $0.692$ & $0.493$ \\ variation 1 & $\ystar < 1.5$ & high & $0.018$ & $0.005$ \\ & $\ystar < 1.5$ & full & $0.113$ & $0.018$ \\ \hline & \ystarone & high & $0.667$ & $0.453$ \\ variation 2 & $\ystar < 1.5$ & high & $0.025$ & $0.008$ \\ & $\ystar < 1.5$ & full & $0.124$ & $0.024$ \\ \hline & \ystarone & high & $0.424$ & $0.209$ \\ variation 3 & $\ystar < 1.5$ & high & $<10^{-3}$ & $<10^{-3}$ \\ & $\ystar < 1.5$ & full & $0.001$ & $<10^{-3}$ \\ \hline & \ystarone & high & $0.677$ & $0.525$ \\ variation 4 & $\ystar < 1.5$ & high & $0.031$ & $0.010$ \\ & $\ystar < 1.5$ & full & $0.160$ & $0.031$ \\ \hline \end{tabular} \caption{ Sample of observed p-values obtained in the comparison between data and the NLO QCD predictions based on the variations (described in section \ref{subsec:theoryunc}) of the HERAPDF1.5 PDF analysis, with values of the jet radius parameter \rfour and \rsix. Results are presented for the range \ystarone, as well as the combination of the first three ranges of $\ystar < 1.5$, performing the test in the full dijet-mass range or restricting it to the high dijet-mass subsample. The full information on uncertainties, including their asymmetries and correlations, is used for both the pseudo-experiments and the \chisq calculation. } \label{tab:variousPdfDataVsQCD_2} \end{center} \end{table} The studies described above, as well as most of the previously published analyses, use test statistics that exploit the information on the sizes and bin-to-bin correlations of the uncertainties. As such, the results are sensitive to the assumptions used to derive the uncorrelated uncertainty components, in particular those of the jet energy calibration. 
To improve upon previous studies, the analysis is repeated using the two different correlation assumptions described in section \ref{sec:sysunc}. Theoretical predictions using the CT10 PDF set in the range \ystarone for the high dijet-mass subsample, with jet radius parameter \rsix, are considered. For both the stronger and weaker correlation assumptions the observed p-values differ by $<0.1\%$ from the value of $78.5\%$ found using the default jet energy calibration uncertainty components. This is because the impact on the total uncertainty and its correlations is only at the few-percent level for most bins, even though these assumptions lead to a relative change of up to 8\% in the experimental uncertainty, and up to 12\% in its correlations. Quantitative comparisons of theoretical predictions with data would benefit from alternative assumptions for the correlations of the theoretical uncertainties, in particular the large PDF uncertainty components. \subsection{Comparison with \powheg predictions} The ratios of the theoretical predictions from \powheg to data are shown in figures~\ref{fig:MassRatio04} and \ref{fig:MassRatio06} for values of the jet radius parameter \rfour and \rsix. Because of the difficulty in propagating multiple PDF sets through the \powheg generation, direct calculations of the theoretical uncertainties are not made and only qualitative comparisons are provided. However, the theoretical uncertainties are expected to be similar to those shown for the \nlojet calculations using the CT10 PDF set. \begin{sidewaysfigure}[!htbp] \begin{center} \includegraphics[width=0.90\textwidth]{ratio04} \caption{ Ratio of the POWHEG predictions to the measurements of the dijet double-differential cross-sections as a function of dijet mass in different ranges of \ystar. The results are shown for jets identified using the \AKT algorithm with jet radius parameter \rfour. 
The predictions of \powheg with parton-shower MC simulation by \pythia are shown for the AUET2B and Perugia 2011 tunes. The statistical (total systematic) uncertainties of the measurements are indicated as error bars (shaded bands). } \label{fig:MassRatio04} \end{center} \end{sidewaysfigure} \begin{sidewaysfigure}[!htbp] \begin{center} \includegraphics[width=0.90\textwidth]{ratio06} \caption{ Ratio of the POWHEG predictions to the measurements of the dijet double-differential cross-sections as a function of dijet mass in different ranges of \ystar. The results are shown for jets identified using the \AKT algorithm with jet radius parameter \rsix. The predictions of \powheg with parton-shower MC simulation by \pythia are shown for the AUET2B and Perugia 2011 tunes. The statistical (total systematic) uncertainties of the measurements are indicated as error bars (shaded bands). } \label{fig:MassRatio06} \end{center} \end{sidewaysfigure} \afterpage{\clearpage} The \powheg predictions show no major deviations from data, with the Perugia 2011 tune resulting in slightly larger cross-sections ($10$--$20\%$) than the AUET2B tune. At high dijet mass, the difference in the theory calculations is less pronounced, decreasing to a few percent at low values of \ystar. For the range \ystarone, theoretical predictions underestimate the measured cross-sections by up to $25\%$ at larger values of dijet mass. In the range \ystartwo, this difference at high dijet mass is less, at around $15\%$. At larger values of \ystar, the experimental sensitivity decreases so that no strong statements can be made. The ratios of the theoretical predictions to the cross-section measurements using jets with radius parameter \rfour are observed to be larger than those for \rsix. This is similar to the trend observed previously for inclusive jet cross-sections from ATLAS \cite{Aad:2011fc}. 
While this trend is particularly apparent for the Perugia 2011 tune, it is almost negligible for the AUET2B tune at high dijet mass. \section{Exploration and exclusion of contact interactions} \label{sec:setlimits} To illustrate the sensitivity of the measurements to physics beyond the SM, the model of QCD plus contact interactions (CIs) \cite{Gao:2011ha} with left--left coupling and destructive interference between CIs and QCD as implemented by the CIJET program \cite{Gao:2013kp} is considered. The use of NLO QCD calculations for the CI portion unifies the treatment of the two predictions. For this study, the measurement is restricted to the high dijet-mass subsample $\mass>1.31\TeV$ in the range \ystarone, where theory predicts the largest effect. Because QCD predicts a more uniform \ystar distribution compared to CIs, where events are preferentially produced at smaller values of \ystar, this region provides the highest sensitivity to the CI contribution. Furthermore, the additional QCD plus CIs matrix elements are sensitive to different quark--gluon compositions. Given that the positive electroweak corrections and the CIs produce similar contributions, the electroweak corrections are not applied to the CI portion. This results in a conservative approach when setting limits for the kinematic region considered here. Non-perturbative corrections are also not applied to the CI portion, but have a negligible size of less than 1\% at high dijet mass where CI contributions are most significant. The PDF sets for which the SM predictions in this region provide a good description of the data are considered: CT10, HERAPDF1.5, MSTW 2008, and NNPDF2.1. Even when considering the ABM11 PDF set in only the low dijet-mass region $\mass < 1.31\TeV$, where CIs were previously excluded \cite{Abe:1996mj,Abazov:2009ac}, the observed p-value is $<0.1\%$ in the range \ystarone. For this reason, the ABM11 PDF set is not considered in this example. 
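As an illustrative sketch of how the CI contribution enters the prediction (not the CIJET implementation itself): in the standard contact-interaction form, the QCD--CI interference term scales as $1/\Lambda^2$ and the pure CI term as $1/\Lambda^4$, with destructive interference corresponding to a negative interference coefficient. All numerical coefficients below are hypothetical.

```python
def qcd_plus_ci(sigma_qcd, c_interf, c_pure, lam):
    """Per-bin dijet cross-section for QCD plus contact interactions:
    the QCD-CI interference scales as 1/Lambda^2 (negative coefficient
    for destructive interference) and the pure CI term as 1/Lambda^4.
    All coefficients are hypothetical, for illustration only."""
    return sigma_qcd + c_interf / lam**2 + c_pure / lam**4

# Hypothetical one-bin illustration (arbitrary units), Lambda in TeV:
# the CI contribution dies away as Lambda grows, recovering pure QCD
for lam in (6.5, 8.0, 100.0):
    print(round(qcd_plus_ci(1.00, -2.0, 400.0, lam), 3))
```

The prediction approaches the pure QCD value in the limit $\Lambda\to\infty$, consistent with the observed and expected \chisq converging towards a constant difference in the scans over $\Lambda$.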
Figure \ref{fig:Limits_QCDpCI6p5TeV_chi2withCorrelationsAsymmSyst_CT10} shows the \chisq distributions from pseudo-experiments for both the SM (background) and QCD plus CIs (signal$+$background), as well as the observed and expected \chisq values. Here, the hypothesis for the \chisq calculation is the prediction of QCD plus CIs using the CT10 PDF set with compositeness scale $\Lambda=6.5 \TeV$. For this example, jets with radius parameter \rsix are considered. The observed p-value for the signal$+$background model, i.e. the integral of the distribution above the observed \chisq, is significantly smaller than that obtained for the SM pseudo-data, resulting in an exclusion of the compositeness scale $\Lambda = 6.5\TeV$ with a \emph{CLs} value $<0.001$. Similar results are obtained for jets reconstructed with radius parameter \rfour, as well as for the MSTW 2008, NNPDF2.1 and HERAPDF1.5 PDF sets. \begin{figure}[!t] \begin{center} \includegraphics[width=0.7\linewidth]{chi2_smplusci_eta1} \caption{ The \chisq distribution from pseudo-experiments of QCD plus CIs (black histogram) and of the SM background (red histogram) using the full information on the uncertainties, including their asymmetries and correlations, for both the pseudo-experiments (PEs) and the \chisq calculation. The theoretical hypothesis is the NLO QCD plus CIs prediction based on the CT10 PDF set and a compositeness scale $\Lambda=6.5\TeV$. The blue (red) dashed vertical line indicates the observed (expected) \chisq, with the corresponding observed (expected) \emph{CLs} value given in the legend (see text). The plot corresponds to the measurement in the high dijet-mass subsample for the range \ystarone and jet radius parameter \rsix. } \label{fig:Limits_QCDpCI6p5TeV_chi2withCorrelationsAsymmSyst_CT10} \end{center} \end{figure} \begin{table}[t!] 
\begin{center} \begin{tabular}{|ccccc|} \hline PDF set & \multicolumn{4}{c|}{\hspace{1cm} $\Lambda$ [\TeV{}]} \\ & \multicolumn{2}{c}{\hspace{1cm} \rfour} & \multicolumn{2}{c|}{\hspace{1cm} \rsix} \\ & \hspace{1cm} Exp & Obs & \hspace{1cm} Exp & Obs \\ \hline CT10 & \hspace{1cm} $7.3$ & $7.2$ & \hspace{1cm} $7.1$ & $7.1$ \\ HERAPDF1.5 & \hspace{1cm} $7.5$ & $7.7$ & \hspace{1cm} $7.3$ & $7.7$ \\ MSTW 2008 & \hspace{1cm} $7.3$ & $7.0$ & \hspace{1cm} $7.1$ & $6.9$ \\ NNPDF2.1 & \hspace{1cm} $7.3$ & $7.2$ & \hspace{1cm} $7.2$ & $7.0$ \\ \hline \end{tabular} \caption{ Expected and observed lower limits at the $95\%$ CL on the compositeness scale $\Lambda$ of the NLO QCD plus CIs model using the CT10, HERAPDF1.5, MSTW 2008, and NNPDF2.1 PDF sets, for values of the jet radius parameter \rfour and \rsix, using the measurements in the range \ystarone for the high dijet-mass subsample. } \label{tab:LambdaLimitValues} \end{center} \end{table} \afterpage{\clearpage} In order to exclude a range for the compositeness scale, a scan of the observed \emph{CLs} value as a function of $\Lambda$ is performed. This is shown in figure \ref{fig:Limits_LambdaScan_chi2withCorrelationsAsymmSyst_HM_CT10:b}, whereas figure \ref{fig:Limits_LambdaScan_chi2withCorrelationsAsymmSyst_HM_CT10:a} shows the scan over $\Lambda$ of the observed and expected \chisq values from which the \emph{CLs} values are derived. The observed and expected \chisq values follow each other closely for small values of $\Lambda$, converging towards a constant difference in the limit $\Lambda\to\infty$. The scans are for the theoretical predictions with jet radius parameter \rsix, using the CT10 PDF set, and exclude the range $\Lambda < 7.1\TeV$ at the $95\%$ CL. The granularity of the scan is 0.25\TeV, and the final value is obtained using a linear interpolation between points. 
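The interpolation step can be sketched as follows; the scan values below are hypothetical stand-ins for the measured \emph{CLs} curve, which rises with $\Lambda$ as the CI signal shrinks.

```python
import numpy as np

def limit_from_scan(lam, cls, alpha=0.05):
    """Lower limit on Lambda at the (1 - alpha) CL: linear interpolation
    to the point where the CLs scan crosses the exclusion threshold."""
    lam, cls = np.asarray(lam, float), np.asarray(cls, float)
    # first scan point at or above the threshold (0 if the scan never crosses)
    i = int(np.argmax(cls >= alpha))
    if i == 0:
        return float(lam[0])
    x0, x1, y0, y1 = lam[i - 1], lam[i], cls[i - 1], cls[i]
    return float(x0 + (alpha - y0) * (x1 - x0) / (y1 - y0))

# Hypothetical scan with the 0.25 TeV granularity quoted in the text
lam_scan = [6.50, 6.75, 7.00, 7.25, 7.50]
cls_scan = [0.001, 0.010, 0.030, 0.080, 0.200]
print(limit_from_scan(lam_scan, cls_scan))
```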
A summary of the lower limits on $\Lambda$ obtained using the different PDF sets is shown in table \ref{tab:LambdaLimitValues} for both values of the jet radius parameter. While the expected limits are in the range $7.1$--$7.5\TeV$, the observed ones span the range $6.9$--$7.7\TeV$, the larger values being obtained for the HERAPDF1.5 PDF set. When considering the variations of the HERAPDF1.5 PDFs, values of $\Lambda < 7.6$--$7.8\TeV$ are excluded. \begin{figure}[!t] \begin{center} \subfigure[ ]{ \includegraphics[width=0.7\linewidth]{scan_cls} \label{fig:Limits_LambdaScan_chi2withCorrelationsAsymmSyst_HM_CT10:b} } \\ \subfigure[ ]{ \includegraphics[width=0.7\linewidth]{scan_chi2} \label{fig:Limits_LambdaScan_chi2withCorrelationsAsymmSyst_HM_CT10:a} } \caption{ Scan of (a) \emph{CLs} value and (b) \chisq for NLO QCD plus CIs as a function of $\Lambda$, using the CT10 PDF set. The green~(yellow) bands indicate the $\pm 1\sigma(\pm 2\sigma)$ regions, from pseudo-experiments of (a) the SM background and (b) the QCD plus CIs. The full information on uncertainties, including their asymmetries and correlations, is used for both the pseudo-experiments and the \chisq calculation. The dashed horizontal line in (a) indicates the $95\%$ CL exclusion, computed using the observed (expected) p-value shown by the blue (red) dashed line. In (b) the black dashed (solid) line indicates the median (mean) of the \chisq distribution of pseudo-experiments. The blue (red) dashed lines indicate the observed (expected) \chisq. The plots correspond to the measurement with jet radius parameter \rsix in the range \ystarone, restricted to the high dijet-mass subsample. } \label{fig:Limits_LambdaScan_chi2withCorrelationsAsymmSyst_HM_CT10} \end{center} \end{figure} \afterpage{\clearpage} The sensitivity to the assumptions about the correlations between the jet energy calibration uncertainty components is also tested.
An effect at the level of $0.01\TeV$ on the lower limit for $\Lambda$ is observed, consistent with the small changes seen for the comparisons with the SM predictions in section \ref{sec:results}. To compare these results with those obtained by previous studies \cite{ATLAS:2012pu} using an approximate (rather than exact) NLO QCD plus CIs calculation, the cross-sections presented here are also used to test the approximate NLO QCD plus CIs prediction. A scaling factor computed from the ratio of the NLO to LO QCD calculations is applied to the sum of the LO QCD plus CIs predictions, together with their interference. This factor ranges between $0.85$ and $1.20$, depending on the dijet mass and \ystar region. Here, for the sake of consistency with previous studies, no electroweak corrections are applied to the QCD or CI contributions. While the cross-sections used here include dijet-mass values $\mass > 1.31\TeV$ in the range \ystarone, the previous results include dijet-mass values $\mass > 2.6\TeV$ in the range $\ystar < 1.7$. Values of $\Lambda < 6.6$--$7.3\TeV$ are excluded at the $95\%$ CL using the predictions based on the different PDF sets, with expected lower limits in the range $\Lambda < 6.6$--$6.9\TeV$. This result is comparable to the observed (expected) lower limit of $7.6\TeV$ ($7.7\TeV$) obtained by the reconstruction-level analysis using a normalized dijet angular distribution \cite{ATLAS:2012pu}, and does not improve the previous result. \section{Frequentist method for a quantitative comparison of data and theory spectra} \label{sec:stattest} The comparison of data and theoretical predictions at the particle level rather than at the reconstruction level has the advantage that data can be used to test any theoretical model without the need for further detector simulation. Because the additional uncertainties introduced by the unfolding procedure are at the sub-percent level, they do not have a significant effect on the power of the comparison. 
The frequentist method described here provides quantitative statements about the ability of SM predictions to describe the measured cross-sections. An extension, based on the \emph{CLs} technique \cite{Read:2002a}, is used to explore potential deviations in dijet production due to contributions beyond the SM (see section \ref{sec:setlimits}). \subsection{Test statistic} The test statistic, which contains information about the degree of deviation of one spectrum from another, is the key input for any quantitative comparison. The use of a simple $\chi^2$ test statistic such as \ifdraft \begin{linenomath} \fi \begin{equation} \chi^2\left( \mathbf{d}; \mathbf{t} \right) = \sum_{i}{ \left( \frac{d_i - t_i}{\sigma_i(t_i)} \right)^2 } , \label{Eq:chi2diag} \end{equation} \ifdraft \end{linenomath} \fi comparing data ($\mathbf{d}$) and theoretical predictions ($\mathbf{t}$), accounts only for the uncertainties on individual bins ($\sigma_i$). This ignores the statistical and, more importantly, the systematic correlations between bins. Therefore, it has a reduced sensitivity to the differences between theoretical predictions and the measurements compared to other more robust test statistics. The use of a covariance matrix ($C$) in the $\chi^2$ definition, \ifdraft \begin{linenomath} \fi \begin{equation} \chi^2\left( \mathbf{d}; \mathbf{t} \right) = \sum_{i,j} \left( d_i - t_i \right) \cdot \left[C^{-1}(\mathbf{t})\right]_{ij} \cdot \left( d_j - t_j \right), \label{Eq:chi2covMat} \end{equation} \ifdraft \end{linenomath} \fi is a better approximation. However, the covariance matrix is built from symmetrized uncertainties and thus cannot account for the asymmetries between positive and negative uncertainty components. An alternative \chisq definition, based on fits of the uncertainty components, was proposed in refs.~\cite{D'Agostini:1993uj} and~\cite{Blobel:2003wa}. For symmetric uncertainties it is equivalent to the definition in eq.~(\ref{Eq:chi2covMat}) \cite{Botje:2001fx}.
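As a minimal numerical sketch of eq.~(\ref{Eq:chi2covMat}), with purely illustrative two-bin inputs:

```python
import numpy as np

def chi2_cov(d, t, cov):
    """Chi-square with a full covariance matrix: residuals between data d
    and theory t, weighted by the inverse of the covariance matrix C."""
    r = np.asarray(d, float) - np.asarray(t, float)
    return float(r @ np.linalg.solve(np.asarray(cov, float), r))

# Two bins with absolute uncertainties of 10 on a theory of 100,
# 50% correlated (all numbers hypothetical)
chi2 = chi2_cov(d=[105.0, 98.0], t=[100.0, 100.0],
                cov=[[100.0, 50.0],
                     [50.0, 100.0]])
print(round(chi2, 3))
```

With the off-diagonal terms set to zero, the value reverts to that of the uncorrelated definition in eq.~(\ref{Eq:chi2diag}).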
However, it allows a straightforward generalization of the \chisq statistic, accounting for asymmetric uncertainties by separating them from the symmetric ones in the test statistic, which is now: \ifdraft \begin{linenomath} \fi \begin{equation} \begin{split} \chi^2\left( \mathbf{d}; \mathbf{t} \right) = \min_{\beta_a} & \left\{ \sum_{i,j} \left[ d_i - \left(1+\sum_{a}{\beta_a\cdot \left(\boldsymbol{\epsilon}_{a}^{\pm}(\beta_a)\right)_i} \right) t_i \right] \cdot \left[C_\mathrm{su}^{-1}(\mathbf{t})\right]_{ij} \right. \\ & \left. \vphantom{\sum_{i,j}} \cdot \left[ d_j - \left(1+\sum_{a}{\beta_a\cdot \left(\boldsymbol{\epsilon}_a^{\pm}(\beta_a)\right)_j} \right) t_j \right] + \sum_{a}{\beta_a^2} \right\}, \end{split} \label{Eq:chi2asymmSyst} \end{equation} \ifdraft \end{linenomath} \fi where $C_\mathrm{su}$ is the covariance matrix built using only the \emph{symmetric} uncertainties, and $\beta_a$ are the profiled coefficients of the \emph{asymmetric} uncertainties which are varied in a fit that minimizes the \chisq. Here $\boldsymbol{\epsilon}^\pm_a$ is the positive component of the $a^\mathrm{th}$ asymmetric relative uncertainty if the fitted value of $\beta_a$ is positive, or the negative component otherwise. The magnitude of each $\boldsymbol{\epsilon}^\pm_a$ corresponds to the relative effect on the theory prediction of a one standard deviation shift of parameter $a$. For this analysis, an uncertainty component is considered asymmetric when the absolute difference between the magnitudes of the positive and negative portions is larger than 1\% of the cross-section in at least one bin. Due to the large asymmetric uncertainties on the theoretical predictions, this $\chi^2$ definition is not only a better motivated choice, but provides stronger statistical power than the two approaches mentioned above. The theoretical and experimental uncertainties, including both statistical and systematic components, are included in the \chisq fit. 
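The minimization over the coefficients $\beta_a$ in eq.~(\ref{Eq:chi2asymmSyst}) can be sketched as follows; the minimizer choice and the one-bin inputs are illustrative assumptions, not the analysis implementation.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_asymm(d, t, cov_sym, eps_plus, eps_minus):
    """Asymmetric chi-square: each asymmetric component shifts the theory
    by beta_a times its positive relative uncertainty (eps_plus) when the
    profiled coefficient beta_a is positive, or by beta_a times the
    magnitude of its downward relative uncertainty (eps_minus) when
    beta_a is negative; the betas are fitted to minimize the chi-square,
    each adding a beta_a^2 penalty term."""
    d, t = np.asarray(d, float), np.asarray(t, float)
    cinv = np.linalg.inv(np.asarray(cov_sym, float))
    eps_p, eps_m = np.atleast_2d(eps_plus), np.atleast_2d(eps_minus)

    def objective(beta):
        shift = np.zeros_like(t)
        for b, ep, em in zip(beta, eps_p, eps_m):
            shift = shift + b * (ep if b >= 0 else em)
        r = d - (1.0 + shift) * t
        return float(r @ cinv @ r + np.sum(beta ** 2))

    return float(minimize(objective, np.zeros(len(eps_p)),
                          method="Nelder-Mead").fun)

# One bin: theory 100 with an absolute symmetric uncertainty of 5 and one
# asymmetric component of +10% up, 5% down (hypothetical numbers)
val = chi2_asymm(d=[110.0], t=[100.0], cov_sym=[[25.0]],
                 eps_plus=[[0.10]], eps_minus=[[0.05]])
print(round(val, 3))
```

In this toy case the minimum can be found analytically at $\beta = 0.8$, giving $\chi^2 = 0.8$; the fit pulls the theory upwards, trading residual against penalty.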
This relies on a detailed knowledge of the individual uncertainty components and their correlations with each other. Sample inputs to the \chisq function using the SM predictions based on the CT10 PDF set are shown in figure~\ref{fig:asymmetricSystematicsAndCorrelations_CT10}. To illustrate the asymmetries, examples of the most asymmetric uncertainty components are shown, namely the theoretical uncertainties due to the scale choice and two uncertainty components of the CT10 PDF set. The positive and negative relative uncertainties are shown in figure \ref{fig:asymmetricSystematicsAndCorrelations_CT10_a} as functions of the bin number, corresponding to those in the cross-section tables in Appendix \ref{app:tables}. The signed difference between the magnitudes of the positive and negative relative uncertainties ($up-(-down)$) are shown in figure \ref{fig:asymmetricSystematicsAndCorrelations_CT10_b} as functions of the bin number. The importance of the asymmetric uncertainties is highlighted in figure \ref{fig:asymmetricSystematicsAndCorrelations_CT10_c} through the comparison of the relative total symmetric uncertainty with the relative total uncertainty. Uncertainty components exhibiting asymmetries of up to 22\% are present, and are an important fraction of the total uncertainty, in particular at high dijet mass. The largest asymmetric uncertainties are individually comparable to the size of the total symmetric ones. The correlation matrix computed for the symmetric uncertainties is shown in figure \ref{fig:asymmetricSystematicsAndCorrelations_CT10_d}. Strong correlations of $90\%$ or more are observed among neighbouring bins of dijet mass, mostly due to the jet energy scale and resolution, while correlations are smaller between different ranges of \ystar. The total symmetric uncertainty and its correlation matrix define the symmetric covariance matrix in eq.~(\ref{Eq:chi2asymmSyst}). 
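The last step, building the symmetric covariance matrix from the per-bin total symmetric uncertainties and their correlation matrix, amounts to $C_{ij} = \sigma_i \rho_{ij} \sigma_j$; a sketch with hypothetical two-bin numbers:

```python
import numpy as np

def covariance_from_correlation(sigma, rho):
    """Symmetric covariance matrix from absolute per-bin uncertainties
    sigma and their correlation matrix rho: C_ij = sigma_i*rho_ij*sigma_j."""
    s = np.asarray(sigma, float)
    return np.outer(s, s) * np.asarray(rho, float)

# Hypothetical example: two neighbouring dijet-mass bins with absolute
# uncertainties of 5 and 8, 90% correlated (as for bins dominated by the
# jet energy scale)
print(covariance_from_correlation([5.0, 8.0], [[1.0, 0.9],
                                               [0.9, 1.0]]))
```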
\begin{figure}[htp] \begin{center} \subfigure[\label{fig:asymmetricSystematicsAndCorrelations_CT10_a} Asymmetric uncertainties.]{ \includegraphics[width=0.45\linewidth]{asymm_inputs} }\hspace{0.4cm} \subfigure[\label{fig:asymmetricSystematicsAndCorrelations_CT10_b} Asymmetry of the uncertainties.]{ \includegraphics[width=0.45\linewidth]{asymmetry} } \\ \subfigure[\label{fig:asymmetricSystematicsAndCorrelations_CT10_c} Relative total uncertainty~(black) and relative total symmetric uncertainty~(blue).]{ \includegraphics[width=0.45\linewidth]{total_symm} }\hspace{0.4cm} \subfigure[\label{fig:asymmetricSystematicsAndCorrelations_CT10_d} Correlation matrix for the symmetric uncertainties~(statistical and systematic).]{ \includegraphics[width=0.45\linewidth]{correlations} } \caption{ Sample inputs to the asymmetric generalization of the $\chi^2$ function. The uncertainty components with the largest asymmetries are shown in (a), which includes the renormalization and factorization scale, and two uncertainty components (PDF comp.) of the CT10 PDF set. The asymmetries of the same uncertainty components are shown in (b), defined as the signed difference between the magnitudes of the positive and negative portions ($up-(-down)$). The relative total symmetric uncertainty (blue dashed line) and relative total uncertainty (black line) are shown in (c), along with the correlation matrix computed from the symmetric uncertainties in (d). These plots correspond to the theoretical prediction using the CT10 PDF set, and radius parameter \rsix. For each plot, the horizontal axis covers all dijet-mass bins (ordered according to increasing dijet mass) for three ranges in \ystar{}: starting from the left end the range \ystarone, then \ystartwo~and finally \ystarthree. The \mass--\ystar bin numbers correspond to those in the cross-section tables in Appendix \ref{app:tables}. 
} \label{fig:asymmetricSystematicsAndCorrelations_CT10} \end{center} \end{figure} \subsection{Frequentist method} The \chisq distribution expected for experiments drawn from the parent distribution of a given theory hypothesis is required in order to calculate the probability of measuring a specific \chisq value under that theory hypothesis. This is obtained by generating a large set of pseudo-experiments that represent fluctuations of the theory hypothesis due to the full set of experimental and theoretical uncertainties. The theory hypothesis can be the SM, or the SM with any of its extensions, depending on the study being carried out. In the generation of pseudo-experiments, the following sources of uncertainty are considered: \begin{itemize} \item Statistical uncertainties: an eigenvector decomposition of the statistical covariance matrix resulting from the unfolding procedure is performed. The resulting eigenvectors are taken as Gaussian-distributed uncertainty components. \item Systematic experimental and theoretical uncertainties: the symmetric components are taken as Gaussian distributed, while a two-sided Gaussian distribution is used for the asymmetric ones. \end{itemize} For each pseudo-experiment, the \chisq value is computed between the pseudo-data and the theory hypothesis. In this way, the \chisq distribution that would be expected for experiments drawn from the theory hypothesis is obtained without making assumptions about its shape. The \emph{observed \chisq} (\chiobs) value is computed using the data and the theory hypothesis. To quantify the compatibility of the data with the theory, the ratio of the area of the \chisq distribution with $\chisq > \chiobs$ to the total area is used. This fractional area, called a p-value, is the observed probability, under the assumption of the theory hypothesis, to find a value of \chisq with equal or lesser compatibility with the hypothesis relative to what is found with \chiobs. 
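A simplified sketch of this procedure, restricted to symmetric Gaussian uncertainties (the analysis additionally samples the asymmetric components with two-sided Gaussians), with hypothetical three-bin inputs:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def observed_pvalue(theory, cov, chi2_obs, n_pe=20000):
    """Observed p-value: the fraction of pseudo-experiments, drawn from
    the theory hypothesis fluctuated by its Gaussian uncertainties, whose
    chi-square with respect to the theory exceeds the observed value."""
    cinv = np.linalg.inv(cov)
    pseudo = rng.multivariate_normal(theory, cov, size=n_pe)
    r = pseudo - theory
    chi2 = np.einsum("ni,ij,nj->n", r, cinv, r)
    return float(np.mean(chi2 > chi2_obs))

# Hypothetical 3-bin spectrum with uncorrelated 10% uncertainties; the
# p-value then approaches the chi-square tail probability for 3 degrees
# of freedom (about 0.26 for an observed chi-square of 4.0)
t = np.array([100.0, 50.0, 20.0])
cov = np.diag((0.1 * t) ** 2)
p = observed_pvalue(t, cov, chi2_obs=4.0)
print(round(p, 2))
```

In the Gaussian toy case the pseudo-experiment distribution reproduces the analytic \chisq distribution; the value of the method lies in cases where asymmetries and correlations make no analytic shape available.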
If the observed p-value (\pobs) is smaller than 5\%, the theoretical prediction is considered to poorly describe the data at the 95\% CL. The comparison of a theory hypothesis that contains an extension of the SM to the data is quantified using the \emph{CLs} technique \cite{Read:2002a}, which accounts for cases where the signal is small compared to the background. This technique relies on the computation of two \chisq distributions: one corresponding to the \emph{background} model (the SM) and another to the \emph{signal$+$background} model (the SM plus any extension), each with respect to the assumption of the signal$+$background model. These distributions are calculated using the same technique described earlier, namely through the generation of large sets of pseudo-experiments: \begin{itemize} \item In the case of the background-only distribution, the \chisq values are calculated between each background pseudo-experiment and the signal$+$background prediction. The \emph{expected \chisq} (\chiexp) is defined as the median of this \chisq distribution. \item In the case of the signal$+$background distribution, the \chisq values are calculated between each signal$+$background pseudo-experiment and the signal$+$background prediction. \end{itemize} These two \chisq distributions are subsequently used to calculate two p-values: (1) the observed signal$+$background p-value ($p_{s+b}$), which is defined in the same way as the one described above, i.e. the fractional area of the signal$+$background \chisq distribution above \chiobs; and (2) the observed background p-value ($p_b$), which is defined as the fractional area of the background-only \chisq distribution below \chiobs and measures the compatibility of the background model with the data. In the \emph{CLs} technique, these two p-values are used to construct the quantity $\mathit{CLs} = p_{\mathrm{s}+\mathrm{b}} / (1 - p_\mathrm{b})$, from which the decision to exclude a given signal$+$background prediction is made. 
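In code, the construction is a one-liner (the p-values below are hypothetical):

```python
def cls_value(p_sb, p_b):
    """CLs = p_(s+b) / (1 - p_b): the signal+background p-value divided by
    one minus the background p-value, protecting against the exclusion of
    models to which the data have no sensitivity."""
    return p_sb / (1.0 - p_b)

# Hypothetical p-values for a disfavoured signal+background model
print(cls_value(p_sb=0.01, p_b=0.60))
```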
The theory hypothesis is excluded at 95\% CL if the quantity \emph{CLs} is less than 0.05. This technique has the advantage, compared to the use of $p_{s+b}$ alone, that theoretical hypotheses to which the data have little or no sensitivity are not excluded. For comparison, the expected exclusion is calculated using the same procedure, except using the expected \chisq value instead of the observed \chisq value. \section{Experimental uncertainties} \label{sec:sysunc} The uncertainty on the jet energy calibration is the dominant uncertainty for this measurement. Complete details of its derivation can be found in ref.~\cite{ATLAS-CONF-2013-004}. Uncertainties in the central region are determined from in situ calibration techniques, such as the transverse momentum balance in $Z/\gamma$--jet and multijet events, for which a comparison between data and MC simulation is performed. The uncertainty in the central region is propagated to the forward region using transverse momentum balance between a central and a forward jet in dijet events. The difference in the balance observed between MC simulation samples generated with \pythia and \herwig results in an additional large uncertainty in the forward region. The uncertainty due to jet energy calibration on each individual jet is $1$--$4\%$ in the central region, and up to 5\% in the forward region. The improvement of the in situ jet calibration techniques over the single-particle response used for data taken in 2010~\cite{Aad:2011he} leads to a reduction in the magnitude of the uncertainty compared to that achieved in the 2010 measurement, despite the increased level of pileup. As a result of the different techniques employed in the 2010 and current analyses, the correlations between the two measurements are non-trivial. The uncertainty due to the jet energy calibration is propagated to the measured cross-sections using MC simulation. 
Each jet in the sample is scaled up or down by one standard deviation of a given uncertainty component, after which the luminosity-normalized dijet yield is measured from the resulting sample. The yields from the original sample and the samples where all jets were scaled are unfolded, and the difference is taken as the uncertainty due to that component. Since the sources of jet energy calibration uncertainty are taken as uncorrelated with each other, the corresponding uncertainty components on the cross-section are also taken as uncorrelated. Because the correlations between the various experimental uncertainty components are not perfectly known, two additional jet energy calibration uncertainty configurations are considered. They have \emph{stronger} and \emph{weaker} correlations with respect to the nominal configuration, depending on the number of uncertainty sources considered as fully correlated or independent of one another~\cite{ATLAS-CONF-2013-004}. Jet energy and angular resolutions are estimated using MC simulation, after using an angular matching of particle-level and reconstruction-level jets. The resolution is obtained from a Gaussian fit to the distribution of the ratio (difference) of reconstruction-level and particle-level jet energy (angle). Jet energy resolutions are cross-checked in data using in situ techniques such as the bisector method in dijet events \cite{Aad:2012ag}, where good agreement is observed with MC simulation. The uncertainty on the jet energy resolution comes from varying the selection parameters for jets, such as the amount of nearby jet activity, and depends on both jet \pt and jet $\eta$. The jet angular bias is found to be negligible, while the resolution varies between 0.005 radians and 0.07 radians. An uncertainty of $10\%$ on the jet angular resolution is shown to cover the observed differences in a comparison between data and MC simulation. 
The resolution uncertainties are propagated to the measured cross-sections through the transfer matrix. All jets in the MC sample are smeared according to the uncertainty on the resolution, either the jet energy or jet angular variable. To reduce the dependence on the MC sample size, this process is repeated for each event 1000 times. The transfer matrix resulting from this smeared sample is used to unfold the luminosity-normalized dijet yields, and the deviation from the measured cross-sections unfolded using the original transfer matrix is taken as a systematic uncertainty. The uncertainty due to the jet reconstruction inefficiency as a function of jet \pt is estimated by comparing the efficiency for reconstructing a calorimeter jet, given the presence of an independently measured track-jet of the same radius, in data and in MC simulation. Here, a track-jet refers to a jet reconstructed using the \AKT algorithm, considering as input all tracks in the event with $\pt > 500 \MeV$ and $\abseta < 2.5$, and which are assumed to have the mass of a pion. Since this method relies on tracking, its application is restricted to the acceptance of the tracker for jets of $\abseta < 1.9$. For jets with $\pt > 50 \GeV$, relevant for this analysis, the reconstruction efficiency in both the data and the MC simulation is found to be $100\%$ for this rapidity region, leading to no additional uncertainty. The same efficiency is assumed for the forward region, where jets have more energy for a given value of \pt; therefore, their reconstruction efficiency is likely to be as good as or better than that of jets in the central region. Comparing the jet quality selection efficiency for jets passing the ``medium'' quality criteria in data and MC simulation, an agreement of the efficiency within $0.25\%$ is found~\cite{ATLAS-CONF-2012-020}. Because two jets are considered for each dijet system, a $0.5\%$ systematic uncertainty on the cross-sections is assigned. 
The impact of a possible mis-modelling of the spectrum shape in MC simulation, introduced through the unfolding as described in section \ref{sec:unfolding}, is also included. The luminosity uncertainty is $1.8\%$ \cite{Aad:2013ucp} and is fully correlated between all data points. The bootstrap method \cite{Bohm:2010tb} has been used to evaluate the statistical significance of all systematic uncertainties described above. The individual uncertainties are treated as fully correlated in dijet mass and \ystar, but uncorrelated with each other, for the quantitative comparison described in section \ref{sec:stattest}. The total uncertainty ranges from $10\%$ at low dijet mass up to $25\%$ at high dijet mass for the range $\ystar < 0.5$, and increases for larger $\ystar$. \section{Theoretical predictions and uncertainties} \label{sec:theory} The measured dijet cross-sections are compared to fixed-order NLO QCD predictions by \nlojet \cite{Nagy:2003tz}, corrected for non-perturbative effects in the fragmentation process and in the underlying event using calculations by \pythia 6.425. A NLO matrix element calculation by \powheg, which is interfaced to the \pythia parton-shower MC generator, is also considered. Both the \nlojet and \powheg predictions are corrected to account for NLO electroweak effects. When used in this paper, the term \emph{Standard Model predictions} refers to NLO QCD calculations corrected for non-perturbative and electroweak effects. \subsection{Theoretical predictions} \label{subsec:theorycalc} The fixed-order $O(\alphas^3)$ QCD calculations are performed with the \nlojet program interfaced to APPLGRID~\cite{Carli:2010rw} for fast convolution with various PDF sets. 
The following proton PDF sets are considered for the theoretical predictions: CT10 \cite{Lai:2010vv}, HERAPDF1.5 \cite{HERAPDF15}, epATLJet13 \cite{Carli:1447073}, MSTW 2008 \cite{Martin:2009iq}, NNPDF2.1 \cite{Ball:2010de,Forte:2010ta} and NNPDF2.3 \cite{Ball:2012cx}, and ABM11 \cite{Alekhin:2013dmy}. The epATLJet13 PDF set is the result of simultaneously using in the fit ATLAS jet data, collected at centre-of-mass energies of $2.76\TeV$ and $7\TeV$, and HERA-I $ep$ data. In the previous ATLAS measurement \cite{Aad:2011fc}, a scale choice was introduced to ensure that the cross-sections remained positive for large values of \ystar. Although values of \ystar greater than 3.0 are not considered here, the same choice is used for consistency. The renormalization ($\mu_\mathrm{R}$) and factorization ($\mu_\mathrm{F}$) scales are set to \ifdraft \begin{linenomath} \fi \begin{equation} \mu = \mu_\mathrm{R} = \mu_\mathrm{F} = \pt^\mathrm{max} \mathrm{e}^{0.3 \ystar}, \end{equation} \ifdraft \end{linenomath} \fi where $\pt^\mathrm{max}$ is the \pt of the leading jet. Further details can be found in ref. \cite{Ellis:1992en}. Non-perturbative corrections are evaluated using leading-logarithmic parton-shower generators, separately for each value of the jet radius parameter. The corrections are calculated as bin-by-bin ratios of the dijet differential cross-section at the particle level, including hadronization and underlying event effects, over that at the parton level. The nominal corrections are calculated using \pythia 6.425 with the AUET2B tune derived for the MRST LO$^{**}$ PDF set. The non-perturbative corrections as a function of dijet mass are shown in figure \ref{fig:npcorr:a} for the range \ystarthree. 
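Two ingredients just described, the scale choice and the bin-by-bin non-perturbative correction, can be sketched schematically as below. The helper names and inputs are hypothetical placeholders, not the \nlojet/\pythia machinery.

```python
import math

# Illustrative helpers for two ingredients described above; toy code,
# not the actual NLO calculation or parton-shower generators.

def scale_choice(pt_max, ystar):
    """mu_R = mu_F = pt_max * exp(0.3 * ystar), pt_max being the leading-jet pt."""
    return pt_max * math.exp(0.3 * ystar)

def np_corrections(particle_level, parton_level):
    """Bin-by-bin ratio of particle-level to parton-level cross-sections."""
    return [num / den for num, den in zip(particle_level, parton_level)]

def apply_corrections(nlo_xsec, corrections):
    """Multiply a fixed-order prediction by the non-perturbative corrections."""
    return [x * c for x, c in zip(nlo_xsec, corrections)]
```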
Comparisons are also made to the \powheg~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd} NLO matrix element calculation\footnote{ The folding values of \verb+foldcsi=5+, \verb+foldy=10+, \verb+foldphi=2+ associated with the spacing of the integration grid as described in ref. \cite{Nason:2007vt}, are used. } interfaced to a parton-shower MC generator, using the CT10 PDF set. For these calculations the factorization and renormalization scales are set to the default value of $\pt^{\mathrm{Born}}$, the transverse momentum in the $2 \to 2$ process before the hardest emission. The parton-level configurations are passed to the \pythia generator for fragmentation and underlying event simulation. Both the AUET2B and Perugia 2011 \cite{Skands:2010ak} tunes are considered. The version of the program used includes an explicit calculation of all hard emission diagrams \cite{Nason:2013uba,Carli:1447073}. This provides improved suppression of rare events, removing fluctuations that arise from the migration of jets with low values of parton-level \pt to larger values of particle-level \pt. Corrections for electroweak tree-level effects of $O(\alpha\alphas, \alpha^2)$ as well as weak loop effects of $O(\alpha\alphas^2)$ \cite{Dittmaier:2012kx} are applied to both the \nlojet and \powheg predictions. The calculations are derived for NLO electroweak processes on a LO QCD prediction of the observable in the phase space considered here. In general, electroweak effects on the cross-sections are $<1\%$ for values of $\ystar \ge 0.5$, but reach up to 9\% for $\mass > 3\TeV$ in the range \ystarone. The magnitude of the corrections on the theoretical predictions as a function of dijet mass in several ranges of \ystar is shown in figure \ref{fig:npcorr:b}. The electroweak corrections show almost no dependence on the jet radius parameter. 
\begin{figure}[tbp] \centering \subfigure[Non-perturbative corrections]{ \includegraphics[width=.45\textwidth]{npcorr} \label{fig:npcorr:a} } \subfigure[Electroweak corrections]{ \includegraphics[width=.45\textwidth]{ewcorr} \label{fig:npcorr:b} } \caption{ Non-perturbative corrections (ratio of particle-level cross-sections to parton-level cross-sections) obtained using various MC generators and tunes are shown in (a), for the differential dijet cross-sections as a function of dijet mass in the range \ystarthree with values of jet radius parameter \rfour and \rsix. Uncertainties are taken as the envelope of the various curves. Electroweak corrections are shown in (b) as a function of dijet mass in multiple ranges of \ystar \cite{Dittmaier:2012kx}, for jet radius parameter \rsix. } \label{fig:npcorr} \end{figure} \subsection{Theoretical uncertainties}\label{subsec:theoryunc} To estimate the uncertainty due to missing higher-order terms in the fixed-order calculations, the renormalization scale is varied up and down by a factor of two. The factorization scale, which specifies the separation between the short-distance hard scatter and the long-distance hadronization, is also varied by a factor of two. All permutations of these two scale choices are considered, except for cases where the renormalization and factorization scales are varied in opposite directions. In these extreme cases, logarithmic factors in the theoretical calculations may become large, resulting in instabilities in the prediction. The maximal deviations from the nominal prediction are taken as the uncertainty due to the scale choice. The scale uncertainty is generally within $^{+5\%}_{-15\%}$ for the \rfour calculation, and $\pm 10\%$ for the \rsix calculation. The uncertainties on the cross-sections due to that on $\alphas$ are estimated using two additional proton PDF sets, for which different values of $\alphas$ are assumed in the fits.

This follows the recommended prescription in ref. \cite{Lai:2010nw}, such that the effect on the PDF set as well as on the matrix elements is included. The resulting uncertainties are approximately $\pm 4\%$ across all dijet-mass and \ystar ranges considered in this analysis. The multiple uncorrelated uncertainty components of each PDF set, as provided by the various PDF analyses, are also propagated through the theoretical calculations. The PDF analyses generally derive these from the experimental uncertainties on the data used in the fits. For the results shown in section \ref{sec:results} the standard Hessian sum in quadrature \cite{Pumplin:2001ct} of the various independent components is taken. The NNPDF2.1 and NNPDF2.3 PDF sets are exceptions, where uncertainties are expressed in terms of \emph{replicas} instead of by independent components. These replicas represent a collection of equally likely PDF sets, where the data entering the PDF fit were varied within their experimental uncertainties. For the plots shown in section \ref{sec:results}, the uncertainties on the NNPDF PDF sets are propagated using the RMS of the replicas, producing equivalent PDF uncertainties on the theoretical predictions. For the frequentist method described in section \ref{sec:stattest}, these replicas are used to derive a covariance matrix for the theoretical predictions. The eigenvector decomposition of this matrix provides a set of independent uncertainty components, which can be treated in the same way as those in the other PDF sets. In cases where variations of the theoretical parameters are also available, these are treated as additional uncertainty components assuming a Gaussian distribution. For the HERAPDF1.5, NNPDF2.1, and MSTW 2008 PDF sets, the results of varying the heavy-quark masses within their uncertainties are taken as additional uncertainty components. 
The nominal value of the charm quark mass is taken as $m_{c} = 1.40 \GeV$, and the bottom quark mass as $m_{b} = 4.75 \GeV$. For NNPDF2.1 the mass of the charm quark is provided with a symmetric uncertainty, while that of the bottom quark is provided with an asymmetric uncertainty. NNPDF2.3 does not currently provide an estimate of the uncertainties due to heavy-quark masses. For HERAPDF1.5 and MSTW 2008, asymmetric uncertainties for both the charm and bottom quark masses are available. When considering the HERAPDF1.5 PDF set, the uncertainty components corresponding to the strange quark fraction and the $Q^2$ requirement are also included as asymmetric uncertainties. In addition, the HERAPDF1.5 analysis provides four additional PDF sets arising from the choice of the theoretical parameters and fit functions, which are treated here as separate predictions. These are referred to as \emph{variations} in the following, and include: \begin{enumerate} \item Varying the starting scale $Q^2_0$ from $1.9\GeV^2$ to $1.5\GeV^2$, while adding two additional parameters to the gluon fit function. \item Varying the starting scale $Q^2_0$ from $1.9\GeV^2$ to $2.5\GeV^2$. \item Including an extra parameter in the valence $\textit{u}$-quark fit function. \item Including an extra parameter in the $\bar{\textit{u}}$-quark fit function. \end{enumerate} Including the maximal deviations due to these four variations with respect to the original doubles the magnitude of the theoretical uncertainties at high dijet mass, as seen in section \ref{sec:results}. When performing the quantitative comparison described in section \ref{sec:stattest}, the four variations are treated as distinct PDF sets, since they are obtained using different parameterizations and their statistical interpretation in terms of uncertainties is not well defined. 
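The envelope over such parameterization variations can be sketched schematically as below; the bin contents are toy placeholders, not the HERAPDF1.5 predictions.

```python
# Schematic sketch of taking maximal per-bin deviations of variation
# predictions from the nominal one, as described above for the HERAPDF1.5
# variations. Toy inputs only.

def variation_envelope(nominal, variations):
    """Return per-bin (up, down) maximal deviations from the nominal."""
    up, down = [], []
    for i, n in enumerate(nominal):
        devs = [v[i] - n for v in variations]
        up.append(max(max(devs), 0.0))    # upward envelope (>= 0)
        down.append(min(min(devs), 0.0))  # downward envelope (<= 0)
    return up, down
```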
The uncertainties on the theoretical predictions due to those on the PDFs range from $2\%$ at low dijet mass up to $20\%$ at high dijet mass for the range of smallest \ystar values. For the largest values of \ystar, the uncertainties reach $100$--$200\%$ at high dijet mass, depending on the PDF set. The uncertainties on the non-perturbative corrections, arising from the modelling of the fragmentation process and the underlying event, are estimated as the maximal deviations of the corrections from the nominal (see section \ref{subsec:theorycalc}) using the following three configurations: \pythia with the AUET2B tune and CTEQ6L1 PDF set, and \herwigpp 2.5.2 with the UE-EE-3 tune using the MRST LO$^{**}$ or CTEQ6L1 PDF sets. In addition, the statistical uncertainty due to the limited size of the sample generated using the nominal tune is included. The uncertainty increases from $1\%$ for the range \ystar$<0.5$, up to $5\%$ for larger values of \ystar. The statistical uncertainty significantly contributes to the total uncertainty only for the range \ystarsix. There are several cases where the PDF sets provide uncertainties at the $90\%$ confidence level (CL); in particular, the PDF uncertainty components for the CT10 PDF set, as well as $\alphas$ uncertainties for the CT10, HERAPDF1.5, and NNPDF2.1/2.3 PDF sets. In these cases, the magnitudes of the uncertainties are scaled to the $68\%$ CL to match the other uncertainties. In general, the scale uncertainties are dominant at low dijet mass, and the PDF uncertainties are dominant at high dijet mass. \section{Data unfolding} \label{sec:unfolding} The cross-sections as a function of dijet mass are obtained by unfolding the data distributions, correcting for detector resolutions and inefficiencies, as well as for the presence of muons and neutrinos in the particle-level jets (see section \ref{sec:xsdef}). The same procedure as in ref. 
\cite{Aad:2011fc} is followed, using the iterative, dynamically stabilized (IDS) unfolding method \cite{Malaescu:2011yg}, a modified Bayesian technique. To account for bin-to-bin migrations, a transfer matrix is built from MC simulations, relating the particle-level and reconstruction-level dijet mass, and reflecting all the effects mentioned above. The matching is done in the \mass--\ystar plane, such that only a requirement on the presence of a dijet system is made. Since migrations between neighbouring bins predominantly occur due to jet energy resolution smearing the dijet mass, and less frequently due to jet angular resolution, the unfolding is performed separately for each range of \ystar. Data are unfolded to the particle level using a three-step procedure, consisting of correcting for matching inefficiency at the reconstruction level, unfolding for detector effects, and correcting for matching inefficiency at the particle level. The final result is given by the equation: \ifdraft \begin{linenomath} \fi \begin{equation} N_i^\mathrm{part} = \Sigma_j N_j^\mathrm{reco} \cdot \epsilon_j^\mathrm{reco}A_{ij}/\epsilon^\mathrm{part}_i \end{equation} \ifdraft \end{linenomath} \fi where $i$ ($j$) is the particle-level (reconstruction-level) bin index, and $N_k^\mathrm{part}$ ($N_k^\mathrm{reco}$) is the number of particle-level (reconstruction-level) events in bin $k$. The quantities $\epsilon_k^\mathrm{reco}$ ($\epsilon_k^\mathrm{part}$) represent the fraction of reconstruction-level (particle-level) events matched to particle-level (reconstruction-level) events for each bin $k$. The element of the unfolding matrix, $A_{ij}$, provides the probability for a reconstruction-level event in bin $j$ to be associated with a particle-level event in bin $i$. The transfer matrix is improved through a series of iterations, where the particle-level MC distribution is reweighted to the shape of the unfolded data spectrum. 
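The single unfolding step given by the equation above can be sketched directly. The transfer matrix, efficiencies, and yields below are toy placeholders; in the measurement they are taken from MC simulation.

```python
# Direct sketch of the unfolding step above:
#   N_i^part = sum_j N_j^reco * eps_j^reco * A_ij / eps_i^part
# with toy inputs; the real transfer matrix comes from MC simulation.

def unfold(n_reco, eff_reco, A, eff_part):
    """Apply one unfolding step; A[i][j] maps reco bin j to particle bin i."""
    n_part = []
    for i in range(len(eff_part)):
        total = sum(n_reco[j] * eff_reco[j] * A[i][j] for j in range(len(n_reco)))
        n_part.append(total / eff_part[i])
    return n_part
```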
The number of iterations is chosen such that the bias in the closure test (see below) is at the sub-percent level. This is achieved after one iteration for this measurement. The statistical uncertainties following the unfolding procedure are estimated using pseudo-experiments. Each event in the data is fluctuated using a Poisson distribution with a mean of one before applying any additional event weights. For the combination of this measurement with future results, the pseudo-random Poisson distribution is seeded uniquely for each event, so that the pseudo-experiments are fully reproducible. Each resulting pseudo-experiment of the data spectrum is then unfolded using a transfer matrix and efficiency corrections obtained by fluctuating each event in the MC simulation according to a Poisson distribution. Finally, the unfolded pseudo-experiments are used to calculate the covariance matrix. In this way, the statistical uncertainty and bin-to-bin correlations for both the data and the MC simulation are encoded in the covariance matrix. For neighbour and next-to-neighbour bins the statistical correlations are generally $1$--$15\%$, although the systematic uncertainties are dominant. The level of statistical correlation decreases quickly for bins with larger dijet-mass separations. A data-driven closure test is used to derive the bias of the spectrum shape due to mis-modelling by the MC simulation. The particle-level MC simulation is reweighted directly in the transfer matrix by multiplying each column of the matrix by a given weight. These weights are chosen to improve the agreement between data and reconstruction-level MC simulation. The modified reconstruction-level MC simulation is unfolded using the original transfer matrix, and the result is compared with the modified particle-level spectrum. 
The resulting bias is considered as a systematic uncertainty.\footnote{ For this measurement the IDS method results in the smallest bias when compared with the bin-by-bin technique used in ref. \cite{Aad:2010ad} or the SVD method \cite{Hocker:1995kb}. } \section{Cross-section definition} \label{sec:xsdef} Jet cross-sections are defined using jets reconstructed by the \AKT algorithm \cite{Cacciari:2008gp} implemented in the FastJet \cite{Cacciari:2011ma} package. In this analysis, jets are clustered using two different values of the radius parameter, \rfour and \rsix. Non-perturbative (hadronization and underlying event) and perturbative (higher-order corrections and parton showers) effects are different, depending on the choice of the jet radius parameter. By performing the measurement for two values of the jet radius parameter, different contributions from perturbative and non-perturbative effects are probed. The fragmentation process leads to a dispersion of the jet constituents, more of which are collected for jets with a larger radius parameter. Particles from the underlying event make additional contributions to the jet, and have less of an effect on jets with a smaller radius parameter. Additionally, hard emissions lead to higher jet multiplicity when using a smaller jet radius parameter. Measured cross-sections are corrected for all experimental effects, and thus are defined at the \emph{particle-level} final state. Here particle level refers to stable particles, defined as those with a proper lifetime longer than $10\ps$, including muons and neutrinos from decaying hadrons \cite{Buttar:2008jx}. Events containing two or more jets are considered, with the leading (subleading) jet defined as the one within the range $\absy<3.0$ with the highest (second highest) \pt. Dijet double-differential cross-sections are measured as functions of the dijet mass \mass and half the rapidity separation $\ystar = |y_1 - y_2|/2$ of the two leading jets. 
This rapidity separation is invariant under a Lorentz boost along the $z$-direction, so that in the dijet rest frame $y_1^\prime = -y_2^\prime = \ystar$. The leading (subleading) jet is required to have $\pt > 100 \GeV$ ($\pt > 50 \GeV$). The requirements on \pt for the two jets are asymmetric to improve the stability of the NLO calculation~\cite{Frixione:1997ks}. The measurement is made in six ranges of $\ystar < 3.0$, in equal steps of $0.5$. In each range of \ystar, a lower limit on the dijet mass is chosen in order to avoid the region of phase space affected by the requirements on the \pt of the two leading jets.
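For two massless jets, the observables defined in this section can be illustrated with the following hypothetical helper; the function name and inputs are not from the analysis.

```python
import math

# Hypothetical helper computing the observables defined above for two
# massless jets given (pt, y, phi); illustration only.

def dijet_observables(pt1, y1, phi1, pt2, y2, phi2):
    """Return (dijet mass, ystar) for two massless jets."""
    m2 = 2.0 * pt1 * pt2 * (math.cosh(y1 - y2) - math.cos(phi1 - phi2))
    return math.sqrt(m2), abs(y1 - y2) / 2.0
```

Shifting both rapidities by the same amount (a longitudinal boost) leaves both the dijet mass and \ystar unchanged, as the text notes.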
\section{Introduction} \subsection{Background} Let $n$ be a positive integer, $G$ be a multiplicative group and let $\pmb{\nu}=(\nu_1,\ldots,\nu_n)$ be in $G^n$. We say that $\pmb{\nu}$ is multiplicatively dependent if there is a non-zero vector $\mathbf{k}=(k_1,\ldots,k_n)\in \mathbb{Z}^n$ for which \begin{equation} \label{eq:MultDep} \pmb{\nu}^{\mathbf{k}}=\nu^{k_1}_1\cdots \nu^{k_n}_n=1. \end{equation} We denote by ${\mathcal M}_n(G)$ the set of multiplicatively dependent vectors in $G^n$. For instance, the set ${\mathcal M}_n(\mathbb{C}^*)$ of multiplicatively dependent vectors in $(\mathbb{C}^*)^n$ is of Lebesgue measure zero, since it is a countable union of sets of measure zero. Further, if we fix an exponent vector $\mathbf{k}$, the subvariety of $(\mathbb{C}^*)^n$ determined by~\eqref{eq:MultDep} is an algebraic subgroup of $(\mathbb{C}^*)^n$. For multiplicatively dependent vectors of algebraic numbers there are two kinds of questions which have been extensively studied. The first question concerns the exponents in~\eqref{eq:MultDep}. Given a multiplicatively dependent vector $\pmb{\nu}$ it follows from the work of Loxton and van der Poorten~\cite{LvdP,vdPL}, Matveev~\cite{Matveev}, and Loher and Masser~\cite[Corollary~3.2]{LM} (attributed to K.~Yu) that there is a relation of the form~\eqref{eq:MultDep} with a non-zero vector $\mathbf{k}$ with small coordinates. The second question is to find comparison relations among the heights of the coordinates. For example, Stewart~\cite[Theorem~1]{Stewart} has given an inequality for the heights of the coordinates of such a vector (of low multiplicative rank, in the terminology of Section~\ref{sec:rank}), and a lower bound for the sum of the heights of the coordinates is implied in~\cite{Vaaler}. In this paper, we obtain several asymptotic formulas for the number of multiplicatively dependent $n$-tuples whose coordinates are algebraic numbers of fixed degree, or within a fixed number field, and bounded height.
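As a concrete illustration of the condition~\eqref{eq:MultDep} for rational coordinates, one may search over a bounded set of exponent vectors. This is only a brute-force sketch: a negative answer within the bound does not prove multiplicative independence.

```python
from fractions import Fraction
from itertools import product

# Brute-force check of nu_1^{k_1} * ... * nu_n^{k_n} = 1 over exponent
# vectors with |k_i| <= bound. Illustration only: a False result within
# the bound is NOT a proof of multiplicative independence.

def is_mult_dependent(nu, bound=5):
    n = len(nu)
    for k in product(range(-bound, bound + 1), repeat=n):
        if all(ki == 0 for ki in k):
            continue  # the exponent vector must be non-zero
        value = Fraction(1)
        for x, ki in zip(nu, k):
            value *= Fraction(x) ** ki
        if value == 1:
            return True
    return False
```

For example, $(2,4)$ is multiplicatively dependent via $2^2\cdot 4^{-1}=1$, while $(2,3)$ admits no such relation by unique factorization.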
Aside from the results mentioned above, to the best of our knowledge, this natural question has never been addressed in the literature. We remark that the above question is interesting in its own right, but is also partially motivated by the works~\cite{OstSha,SV}, where multiplicatively independent vectors play an important role. \subsection{Rank of multiplicative independence} \label{sec:rank} The following notion plays a crucial role in our argument, and is also of independent interest. Let $\overline{{\mathbb Q}}$ be an algebraic closure of the rational numbers ${\mathbb Q}$. For each $\pmb{\nu}$ in $(\overline{{\mathbb Q}}^*)^n$, we define $s$, the \textit{multiplicative rank} of $\pmb{\nu}$, in the following way. If $\pmb{\nu}$ has a coordinate which is a root of unity, we put $s=0$; otherwise let $s$ be the largest integer with $1\leq s\leq n$ for which any $s$ coordinates of $\pmb{\nu}$ form a multiplicatively independent vector. Notice that \begin{equation} \label{mn} 0\leq s\leq n-1, \end{equation} whenever $\pmb{\nu}$ is multiplicatively dependent. \subsection{Conventions and notation} \label{sec:con} For any algebraic number $\alpha$, let $$ f(x)=a_dx^d+\cdots+a_1x+a_0 $$ be the minimal polynomial of $\alpha$ over the integers ${\mathbb Z}$ (so with content $1$ and positive leading coefficient). Suppose that $f$ is factored as $$ f(x)=a_d(x-\alpha_1)\cdots (x-\alpha_d) $$ over the complex numbers $\mathbb{C}$. The \textit{naive height} ${\mathrm H}_0(\alpha)$ of $\alpha$ is given by $$ {\mathrm H}_0(\alpha)=\max\{|a_d|,\ldots,|a_1|,|a_0|\}, $$ and ${\mathrm H}(\alpha)$, the height of $\alpha$, also known as the \textit{absolute Weil height} of $\alpha$, is defined by $$ {\mathrm H}(\alpha)=\(a_d\prod^d_{i=1}\max\{1,|\alpha_i|\}\)^{1/d}. $$ Let $K$ be a number field of degree $d$ (over ${\mathbb Q}$).
We use the following standard notation: \begin{itemize} \item $r_1$ and $r_2$ for the number of real and non-real embeddings of $K$, respectively, and put $r=r_1+r_2-1$; \item $D,h, R$ and $\zeta_K$ for the discriminant, class number, regulator and Dedekind zeta function of $K$, respectively; \item $w$ for the number of roots of unity in $K$. \end{itemize} Note that $r$ is exactly the rank of the unit group of the ring of algebraic integers of $K$. As usual, let $\zeta(s)$ be the Riemann zeta function. For any real number $x$, let $\lceil x\rceil$ denote the smallest integer greater than or equal to $x$, and let $\lfloor x\rfloor$ denote the greatest integer less than or equal to $x$. We always implicitly assume that $H$ is large enough, in particular so that the logarithmic expressions $\log H$ and $\log \log H$ are well-defined. In the sequel, we use the Landau symbols $O$ and $o$ and the Vinogradov symbol $\ll$. We recall that the assertions $U=O(V)$ and $U\ll V$ are both equivalent to the inequality $|U|\le cV$ with some positive constant $c$, while $U=o(V)$ means that $U/V\to 0$. We also use the asymptotic notation $\sim$. For a finite set $S$ we use $\card{S}$ to denote its cardinality. Throughout the paper, the implied constants in the symbols $O$ and $\ll$ only depend on the given number field $K$, the given degree $d$, or the dimension $n$. \subsection{Counting vectors within a number field} \label{sec:fixK} Let $K$ be a number field of degree $d$. Denote the set of algebraic integers of $K$ of height at most $H$ by ${\mathcal B}_K(H)$ and the set of algebraic numbers of $K$ of height at most $H$ by ${\mathcal B}^*_K(H)$. Set $$ B_K(H)=\card{{\mathcal B}_K(H)} \qquad \mbox{and} \qquad B^*_K(H) = \card{{\mathcal B}^*_K(H)}. $$ Put $$ C_1(K)=\frac{2^{r_1}{(2\pi)}^{r_2}d^r}{|D|^{1/2}r!}. 
$$ It follows directly from the work of Widmer~\cite[Theorem~1.1]{Widmer2} (taking $n=e=1$ there) that \begin{equation} \label{BK} B_K(H)=C_1(K)H^d(\log H)^r+O\(H^d(\log H)^{r-1}\). \end{equation} If $r=0$, then~\eqref{BK} can be improved to (see~\cite[Theorem~1.1]{Barroero1}) \begin{equation} \label{BK0} B_K(H)=C_1(K)H^d+O(H^{d-1}). \end{equation} We remark that the estimate in~\eqref{BK} is stated in~\cite[Chapter~3, Theorem~5.2]{Lang} without the explicit constant $C_1(K)$, and moreover Barroero~\cite{Barroero2} has obtained similar estimates for the number of algebraic $S$-integers with fixed degree and bounded height. Define $$ C_2(K)=\frac{2^{2r_1}(2\pi)^{2r_2}2^rhR}{|D|w \zeta_K(2)}. $$ Schanuel~\cite[Corollary to Theorem~3]{Schanuel} proved in 1979 (see also~\cite[Equation~(1.5)]{Masser2}) that \begin{equation} \label{B*K} B^*_K(H)=C_2(K)H^{2d}+O\( H^{2d-1}(\log H)^{\sigma(d)} \), \end{equation} where $\sigma(1)=1$ and $\sigma(d)=0$ for $d>1$. Note that the height in~\cite{Schanuel} is our height to the power $d$. For any positive integer $n$, we denote by $L_{n,K}(H)$ the number of multiplicatively dependent $n$-tuples whose coordinates are algebraic integers of height at most $H$, and we denote by $L^*_{n,K}(H)$ the number of multiplicatively dependent $n$-tuples whose coordinates are algebraic numbers of height at most $H$. Put $$ C_3(n,K)=\frac{n(n+1)}{2}w C_1(K)^{n-1}. $$ \begin{thm} \label{thm:MnK} Let $K$ be a number field of degree $d$ over ${\mathbb Q}$ and let $n$ be an integer with $n\geq 2$. We have \begin{equation} \label{MnK} \begin{split} L_{n,K}(H)=C_3(n,K)&H^{d(n-1)}(\log H)^{r(n-1)} \\ &+O\(H^{d(n-1)}(\log H)^{r(n-1)-1}\); \end{split} \end{equation} if furthermore $K={\mathbb Q}$ or is an imaginary quadratic field, we have \begin{equation} \label{MnQ} L_{n,K}(H)=C_3(n,K)H^{d(n-1)}+O\(H^{d(n-3/2)}\). 
\end{equation} \end{thm} We remark that when $K={\mathbb Q}$ a better error term than that given in~\eqref{MnQ} is stated in Theorem~\ref{thm:Mnd} below, more precisely, see~\eqref{Mnd'}. We estimate $L^*_{n,K}(H)$ next. Put $$ C_4(n,K)=n^2 w C_2(K)^{n-1}. $$ \begin{thm} \label{thm:MnK2} Let $K$ be a number field of degree $d$, and let $n$ be an integer with $n\geq 2$. Then, we have \begin{equation} L^*_{n,K}(H) =C_4(n,K)H^{2d(n-1)} +O\(H^{2d(n-1)-1}g(H)\), \end{equation} where \begin{equation*} g(H)=\left\{ \begin{array}{ll} \log H & \textrm{if $d=1$ and $n=2$}\\ \exp(c\log H/\log\log H) & \textrm{if $d=1$ and $n>2$}\\ 1 & \textrm{if $d>1$ and $n\ge 2$}, \end{array} \right. \end{equation*} and $c$ is a positive number depending only on $n$. \end{thm} We now outline the strategy of the proofs. Given a number field $K$, we define $L_{n,K,s}(H)$ and $L^*_{n,K,s}(H)$ to be the number of multiplicatively dependent $n$-tuples of multiplicative rank $s$ whose coordinates are algebraic integers in ${\mathcal B}_K(H)$ and algebraic numbers in ${\mathcal B}^*_K(H)$ respectively. It follows from~\eqref{mn} that \begin{equation} \label{MnK=} \left\{ \begin{array}{ll} L_{n,K}(H)=L_{n,K,0}(H)+\cdots+L_{n,K,n-1}(H)\\ \\ L^*_{n,K}(H)=L^*_{n,K,0}(H)+\cdots+L^*_{n,K,n-1}(H). \end{array} \right. \end{equation} The main term in~\eqref{MnK} comes from the contributions of $L_{n,K,0}(H)$ and $L_{n,K,1}(H)$ in~\eqref{MnK=}, and the main term in Theorem~\ref{thm:MnK2} comes from the contributions of $L^*_{n,K,0}(H)$ and $L^*_{n,K,1}(H)$ in~\eqref{MnK=}. To prove Theorems~\ref{thm:MnK} and~\ref{thm:MnK2}, we make use of~\eqref{MnK=} and the following result. \begin{prop} \label{prop:MnKm} Let $K$ be a number field of degree $d$. Let $n$ and $s$ be integers with $n\geq 2$ and $0 \le s\leq n-1$. 
Then, there exist positive numbers $c_1$ and $c_2$, which depend on $n$ and $K$, such that \begin{equation} \label{MnKm} L_{n,K,s}(H)<H^{d(n-1)-d\(\lceil (s+1)/2\rceil-1\)}\exp(c_1\log H/\log\log H) \end{equation} and \begin{equation} \label{M*nKm} L^*_{n,K,s}(H)<H^{2d(n-1)-d\(\lceil (s+1)/2\rceil-1\)}\exp(c_2\log H/\log\log H). \end{equation} \end{prop} In Section~\ref{sec:low}, we show that, when $K={\mathbb Q}$ and $s=n-1$, the estimate~\eqref{MnKm} cannot be improved by much; see Theorem~\ref{thm:MnQm}. In particular, it does not hold with $\exp(c_1\log H/\log\log H)$ replaced by a quantity which is $o((\log H)^{(k-1)^2})$, where $n=2k$. \subsection{Counting vectors of fixed degree} Let $d$ be a positive integer, and let ${\mathcal A}_d(H)$, respectively ${\mathcal A}^*_d(H)$, be the set of algebraic integers of degree $d$ (over ${\mathbb Q}$), respectively algebraic numbers of degree $d$, of height at most $H$. We set $$ A_d(H) = \card{{\mathcal A}_d(H)} \qquad \mbox{and} \qquad A^*_d(H) = \card{{\mathcal A}^*_d(H)}. $$ Put $$ C_5(d)=d2^d\prod^{\lfloor(d-1)/2\rfloor}_{j=1}\frac{d(2j)^{d-2j-1}}{(2j+1)^{d-2j}} $$ and $$ C_6(d)=\frac{d2^d}{\zeta(d+1)}\prod^{\lfloor(d-1)/2\rfloor}_{j=1}\frac{(d+1)(2j)^{d-2j}}{(2j+1)^{d-2j+1}}. $$ It follows from the work of Barroero~\cite[Theorem~1.1]{Barroero1} that (see also~\cite[Equation~(1.2)]{Barroero1} for a previous estimate with a weaker error term which follows from~\cite[Theorem~6]{Chern}) \begin{equation} \label{Ad} A_d(H)=C_5(d)H^{d^2}+O\( H^{d(d-1)}(\log H)^{\rho(d)} \), \end{equation} where $\rho(2)=1$ and $\rho(d)=0$ for any $d\ne 2$. Further, Masser and Vaaler~\cite[Equation~(7)]{Masser1} have shown that (see also~\cite[Equation~(1.5)]{Masser2}) \begin{equation} \label{A*d} A^*_d(H)=C_6(d)H^{d(d+1)}+O\(H^{d^2}(\log H)^{\vartheta(d)}\), \end{equation} where $\vartheta(1)=\vartheta(2)=1$ and $\vartheta(d)=0$ for any $d\geq 3$. 
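As a quick numerical sanity check on these constants (an illustrative sketch, not part of the argument; the function names are ours), one can evaluate $C_5(d)$ and $C_6(d)$ directly and, for $d=1$, compare a brute-force count of the rationals of height at most $H$, the height of $p/q$ in lowest terms with $q\ge 1$ being $\max\{|p|,q\}$, against the prediction $A^*_1(H)\approx C_6(1)H^2=(12/\pi^2)H^2$ from~\eqref{A*d}:

```python
from math import gcd, pi

def C5(d):
    # C_5(d) = d * 2^d * prod_{j=1}^{floor((d-1)/2)} d*(2j)^(d-2j-1) / (2j+1)^(d-2j)
    prod = 1.0
    for j in range(1, (d - 1) // 2 + 1):
        prod *= d * (2 * j) ** (d - 2 * j - 1) / (2 * j + 1) ** (d - 2 * j)
    return d * 2 ** d * prod

def C6(d, zeta_terms=10 ** 5):
    # C_6(d) = d * 2^d / zeta(d+1) * prod_{j=1}^{floor((d-1)/2)} (d+1)*(2j)^(d-2j) / (2j+1)^(d-2j+1)
    zeta = sum(n ** -(d + 1) for n in range(1, zeta_terms + 1))  # truncated zeta value
    prod = 1.0
    for j in range(1, (d - 1) // 2 + 1):
        prod *= (d + 1) * (2 * j) ** (d - 2 * j) / (2 * j + 1) ** (d - 2 * j + 1)
    return d * 2 ** d / zeta * prod

def count_rationals(H):
    # A*_1(H): rationals p/q in lowest terms, q >= 1, with max(|p|, q) <= H
    count = 1  # the rational 0, of height 1
    for q in range(1, H + 1):
        for p in range(1, H + 1):
            if gcd(p, q) == 1:
                count += 2  # both p/q and -p/q
    return count

H = 400
ratio = count_rationals(H) / (C6(1) * H * H)  # should be close to 1
```

With $H=400$ the ratio is within a few percent of $1$, consistent with the error term in~\eqref{A*d} for $d=1$.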
For any positive integer $n$, we denote by $M_{n,d}(H)$ the number of multiplicatively dependent $n$-tuples whose coordinates are algebraic integers in ${\mathcal A}_d(H)$, and we denote by $M^*_{n,d}(H)$ the number of multiplicatively dependent $n$-tuples whose coordinates are algebraic numbers in ${\mathcal A}_d^*(H)$. For each positive integer $d$, we define $w_0(d)$ to be the number of roots of unity of degree $d$. Let $\varphi$ denote Euler's totient function. Since $\varphi(k)\gg k/\log\log k$ for any integer $k\ge 3$, it follows that \begin{equation} \label{eq:w0w1} w_0(d)\ll d^2 \log\log d , \end{equation} where $d\ge 3$ and the implied constant is absolute. We remark that $w_0(d)$ can be zero, such as for an odd integer $d>1$. Given positive integers $n$ and $d$, we define $C_7(n,d)$ and $C_8(n,d)$ as $$ C_7(n,d)=\(nw_0(d)+n(n-1)\)C_5(d)^{n-1} $$ and $$ C_8(n,d)=\(nw_0(d)+2n(n-1)\)C_6(d)^{n-1}. $$ \begin{thm} \label{thm:Mnd} Let $d$ and $n$ be positive integers with $n\geq 2$. Then, the following hold. \begin{itemize} \item[(i)] We have \begin{equation} \label{Mnd} M_{n,d}(H) = C_7(n,d)H^{d^2(n-1)} + O\(H^{d^2(n-1)-d/2}\); \end{equation} furthermore if $d=2$ or $d$ is odd, we have \begin{equation} \label{Mnd'} \begin{split} M_{n,d}(H) = C_7&(n,d)H^{d^2(n-1)} \\ & + O\(H^{d^2(n-1)-d}\exp(c_0\log H/\log\log H)\) \end{split} \end{equation} and \begin{equation} \label{M2d} M_{2,d}(H) = C_7(2,d)H^{d^2} + O\(H^{d^2-d}(\log H)^{\rho(d)}\), \end{equation} where $c_0$ is a positive number which depends only on $n$ and $d$, and $\rho(d)$ has been defined in~\eqref{Ad}. 
\item[(ii)] We have \begin{equation} \label{M*nd} M^*_{n,d}(H) = C_8(n,d)H^{d(d+1)(n-1)}+O\(H^{d(d+1)(n-1)-d/2}\log H\); \end{equation} furthermore if $d=2$ or $d$ is odd, we have \begin{equation} \label{M*nd'} \begin{split} M^*_{n,d}(H) = C_8&(n,d)H^{d(d+1)(n-1)} \\ & +O\(H^{d(d+1)(n-1)-d}\exp(c\log H/\log\log H)\) \end{split} \end{equation} and \begin{equation} \label{M*2d} M^*_{2,d}(H) = C_8(2,d)H^{d(d+1)} + O\(H^{d^2}(\log H)^{\vartheta(d)}\), \end{equation} where $c$ is a positive number which depends only on $n$ and $d$, and $\vartheta(d)$ is defined in~\eqref{A*d}. \end{itemize} \end{thm} We remark that the case $d=1$ has already been covered by Theorems~\ref{thm:MnK} and~\ref{thm:MnK2}. However, in this case the error term in~\eqref{Mnd'} is $H^{n-2+o(1)}$, which is better than that in~\eqref{MnQ} taken with $d=1$. The strategy for proving Theorem~\ref{thm:Mnd} is similar to that used for Theorems~\ref{thm:MnK} and~\ref{thm:MnK2}. For each integer $s$ with $0\leq s\leq n-1$, we define $M_{n,d,s}(H)$ and $M^*_{n,d,s}(H)$ to be the number of multiplicatively dependent $n$-tuples of multiplicative rank $s$ whose coordinates are algebraic integers in ${\mathcal A}_d(H)$ and algebraic numbers in ${\mathcal A}^*_d(H)$ respectively. Just as in~\eqref{MnK=}, we have \begin{equation} \label{Mnd=} \left\{ \begin{array}{ll} M_{n,d}(H)=M_{n,d,0}(H)+\cdots+M_{n,d,n-1}(H)\\ \\ M^*_{n,d}(H)=M^*_{n,d,0}(H)+\cdots+M^*_{n,d,n-1}(H). \end{array} \right. \end{equation} For the proof of Theorem~\ref{thm:Mnd}, we make use of~\eqref{Mnd=} and the following result. \begin{prop} \label{prop:Mndm} Let $d$, $n$ and $s$ be integers with $d \ge 1$, $n\geq 2$ and $0 \le s \leq n-1$. 
Then, there exist positive numbers $c_1$ and $c_2$, which depend on $n$ and $d$, such that \begin{equation} \label{Mndm} M_{n,d,s}(H)<H^{d^2(n-1)-d(\lceil (s+1)/2\rceil-1)}\exp(c_1\log H/\log\log H) \end{equation} and \begin{equation} \label{M*ndm} \begin{split} M^*_{n,d,s}(H)&<H^{d(d+1)(n-1)-d(\lceil (s+1)/2 \rceil-1)}\\ &\qquad \qquad \qquad \exp(c_2\log H/\log\log H). \end{split} \end{equation} \end{prop} We remark that, for $s\ge 2$, the estimate~\eqref{Mndm} improves on the trivial upper bound $H^{d^2(n-1)}$, and~\eqref{M*ndm} improves on the trivial upper bound $H^{d(d+1)(n-1)}$. \section{Preliminaries} \subsection{Weil height} We first record a well-known result about the absolute Weil height; see~\cite[Chapter 3]{Lang}. \begin{lem} Let $\alpha$ be a non-zero algebraic number, and let $k$ be an integer. Then $$ {\mathrm H}(\alpha^k)={\mathrm H}(\alpha)^{|k|}. $$ \end{lem} \begin{proof} This follows from the product formula and the fact that $$ {\mathrm H}(\alpha)=\prod_v\max\{1,|\alpha|_v\}, $$ where the product is taken over all inequivalent valuations $v$ appropriately normalized; see for example~\cite[Chapter~3, \S1]{Lang}. \end{proof} Next we need a result that allows us to compare the naive height ${\mathrm H}_0$ and the absolute Weil height ${\mathrm H}$. \begin{lem} \label{lem:H0} Let $\alpha$ be an algebraic number of degree $d$. Then $$ {\mathrm H}_0(\alpha)\leq \(2{\mathrm H}(\alpha)\)^d. $$ \end{lem} \begin{proof} This follows from noticing that the coefficients of the minimal polynomial $f$ of $\alpha$ can be expressed in terms of elementary symmetric polynomials in the roots of $f$; see for example~\cite[Equation~(6)]{Mahler}. \end{proof} For the proofs of Theorems~\ref{thm:MnK} and~\ref{thm:MnK2}, we also need the following result. \begin{lem} \label{lem:Ha} Let $\alpha$ be an algebraic number of degree $d$, and let $a$ be the leading coefficient of the minimal polynomial of $\alpha$ over the integers. 
Then $$ {\mathrm H}(a\alpha)\leq 2^{d-1}{\mathrm H}(\alpha)^d. $$ \end{lem} \begin{proof} By definition, we have $$ {\mathrm H}(\alpha)=\(a\prod^d_{i=1}\max\{1,|\alpha_i|\}\)^{1/d}, $$ where $\alpha_1,\ldots,\alpha_d$ are the roots of the minimal polynomial of $\alpha$. Then, $a\alpha$ is an algebraic integer, and $$ {\mathrm H}(a\alpha)=\(\prod^d_{i=1}\max\{1,|a\alpha_i|\}\)^{1/d}. $$ Thus \begin{align*} {\mathrm H}(a\alpha)^d & \leq a^d\prod^d_{i=1}\max\{1,|\alpha_i|\} =a^{d-1}{\mathrm H}(\alpha)^d, \end{align*} which, together with Lemma~\ref{lem:H0}, implies that $$ {\mathrm H}(a\alpha)^d\leq \(2{\mathrm H}(\alpha)\)^{d(d-1)}{\mathrm H}(\alpha)^d=2^{d(d-1)}{\mathrm H}(\alpha)^{d^2}, $$ and so $$ {\mathrm H}(a\alpha)\leq 2^{d-1}{\mathrm H}(\alpha)^d $$ as required. \end{proof} \subsection{Multiplicative structure of algebraic numbers} Let $K$ be a number field, and let $H$ be a positive real number. We denote by $U_K(H)$ the number of units in the ring of algebraic integers of $K$ of height at most $H$. \begin{lem} \label{lem:units} Let $K$ be a number field, and let $r$ be the rank of the unit group as defined in Section~\ref{sec:con}. Then, there exists a positive number $c$, depending on $K$, such that $$ U_K(H)<c(\log H)^r. $$ \end{lem} \begin{proof} This is~\cite[Part~(ii) of Theorem~5.2 of Chapter~3]{Lang}. \end{proof} The next result shows that if algebraic numbers $\alpha_1,\ldots,\alpha_n$ are multiplicatively dependent, then we can find a relation as~\eqref{eq:MultDep}, where the exponents are not too large. Such a result has found application in transcendence theory, see for example~\cite{Baker,Matveev,vdPL,Stark}. \begin{lem} \label{lem:exponent} Let $n\geq 2$, and let $\alpha_1,\ldots,\alpha_n$ be multiplicatively dependent non-zero algebraic numbers of degree at most $d$ and height at most $H$. 
Then, there is a positive number $c$, which depends only on $n$ and $d$, and there are rational integers $k_1,\ldots,k_n$, not all zero, such that $$ \alpha^{k_1}_1\cdots\alpha^{k_n}_n=1 $$ and $$ \max_{1\leq i\leq n}|k_i|<c(\log H)^{n-1}. $$ \end{lem} \begin{proof} This follows from~\cite[Theorem~1]{vdPL}. For an explicit constant $c$, we refer to~\cite[Corollary~3.2]{LM}. \end{proof} Let $x$ and $y$ be positive real numbers with $y$ larger than 2, and let $\psi(x,y)$ denote the number of positive integers not exceeding $x$ which contain no prime factors greater than $y$. Put $$ Z=\(\log\(1+\frac{y}{\log x}\)\)\frac{\log x}{\log y}+\(\log\(1+\frac{\log x}{y}\)\)\frac{y}{\log y} $$ and $$ u=(\log x)/(\log y). $$ \begin{lem} \label{lem:psixy} For $2<y\leq x$, we have \begin{align*} \psi(x&,y)\\ &=\exp\( Z \(1+O((\log y)^{-1})+O((\log\log x)^{-1})+O((u+1)^{-1}) \) \). \end{align*} \end{lem} \begin{proof} This is~\cite[Theorem~1]{dB}. \end{proof} \subsection{Counting special algebraic numbers} In this section, we count two special kinds of algebraic numbers. \begin{lem} \label{lem:coeff} Let $K$ be a number field of degree $d$, and let $u$ and $v$ be non-zero integers with $u>0$. Then, there is a positive number $c$, which depends on $K$, such that the number of elements $\alpha$ in $K$ of height at most $H$, whose minimal polynomial has leading coefficient $u$ and constant coefficient $v$, is at most $$ \exp(c\log H/\log\log H). $$ \end{lem} \begin{proof} Let $c_1,c_2,\ldots$ denote positive numbers depending on $K$. Let $N_{K/{\mathbb Q}}$ be the norm function from $K$ to ${\mathbb Q}$. Suppose that $\alpha$ is an element of $K$ of height at most $H$ whose minimal polynomial has leading coefficient $u$ and constant coefficient $v$. Then, we see that $u\alpha$ is an algebraic integer in $K$, and $$ N_{K/{\mathbb Q}}(\alpha) = (-1)^d v/u \qquad \mbox{and} \qquad N_{K/{\mathbb Q}}(u\alpha) = (-1)^d u^{d-1}v. 
$$ By Lemma~\ref{lem:Ha}, we further have ${\mathrm H}(u\alpha) \le 2^{d-1}H^d$. Note that $u$ is fixed, so the number of such $\alpha$ does not exceed the number of algebraic integers $\beta \in K$ of height at most $2^{d-1}H^d$ and satisfying \begin{equation} \label{eq:norm eq} N_{K/{\mathbb Q}}(\beta) = (-1)^d u^{d-1}v. \end{equation} We say that two algebraic integers $\beta_1$ and $\beta_2$ in $K$ are equivalent if the principal integral ideals $\langle \beta_1 \rangle$ and $\langle \beta_2 \rangle$ are equal. We note that, using~\cite[Chapter~3, Equation~(7.8)]{BS}, the number $E$ of equivalence classes of solutions of~\eqref{eq:norm eq} is at most $\tau(|u^{d-1}v|)^d$, where, for any positive integer $k$, $\tau(k)$ denotes the number of positive integers which divide $k$. By Wigert's Theorem, see~\cite[Theorem~317]{Hardy}, \begin{equation} \label{eq:E1} E<\exp \( c_1\log (3|uv|)/\log\log (3|uv|) \). \end{equation} Further by Lemma~\ref{lem:H0} $u$ and $v$ are at most $(2H)^d$ in absolute value, hence \begin{equation} \label{eq:E2} E<\exp(c_{2}\log H/\log\log H). \end{equation} Besides, if two solutions $\beta_1$ and $\beta_2$ of~\eqref{eq:norm eq} are equivalent, then $\beta_1/\beta_2$ is a unit $\eta$ in the ring of algebraic integers of $K$. But $$ {\mathrm H}(\eta) \leq {\mathrm H}(\beta_1){\mathrm H}((\beta_2)^{-1}) \leq 2^{2(d-1)}H^{2d}. $$ By Lemma~\ref{lem:units} the number of such units is at most \begin{equation}\label{eq:U} U_K(2^{2(d-1)}H^{2d}) \leq c_{3}( \log H)^r. \end{equation} Our result now follows from~\eqref{eq:E2} and~\eqref{eq:U}. \end{proof} We remark that if we set $u=1$, then Lemma~\ref{lem:coeff} gives an upper bound for the number of algebraic integers in $K$ of norm $\pm v$ and of height at most $H$. 
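The appeal to Wigert's theorem above can be illustrated computationally. The following sketch (illustrative only, not part of the argument; the function names are ours) computes the divisor function $\tau$ by trial division and samples the normalized quantity $\log\tau(k)\cdot\log\log k/\log k$, which Wigert's theorem asserts stays bounded:

```python
from math import log

def tau(k):
    """Number of positive divisors of k, by trial division up to sqrt(k)."""
    count, d = 0, 1
    while d * d <= k:
        if k % d == 0:
            count += 2 if d * d < k else 1  # pair divisor d with k // d
        d += 1
    return count

def wigert_ratio(k):
    # Wigert: log tau(k) = O(log k / log log k), so this ratio is bounded
    # even at highly composite arguments.
    return log(tau(k)) * log(log(k)) / log(k)
```

For instance $\tau(720720)=240$ (since $720720=2^4\cdot 3^2\cdot 5\cdot 7\cdot 11\cdot 13$), while \texttt{wigert\_ratio(720720)} is only about $1.06$.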
Given an integer $d\ge 1$, let ${\mathcal C}^*_d(H)$ be the set of algebraic numbers $\alpha$ of degree $d$ and height at most $H$ such that $\alpha\eta$ is also of degree $d$ for some root of unity $\eta\ne \pm 1$, and let ${\mathcal C}_d(H)$ be the set of algebraic integers contained in ${\mathcal C}^*_d(H)$. Here, we want to estimate the sizes of ${\mathcal C}_d(H)$ and ${\mathcal C}^*_d(H)$. For this, we need some preparation. Given a polynomial $f=a_dX^d + \cdots + a_1X +a_0\in {\mathbb Q}[X]$ of degree $d$, we call it \textit{degenerate} if it has two distinct roots whose quotient is a root of unity. Besides, we define its \textit{height} as $$ {\mathrm H}(f) = \max\{|a_d|,\ldots,|a_1|,|a_0|\}, $$ and we denote by $G_f$ the \textit{Galois group} of the splitting field of $f$ over ${\mathbb Q}$. Let $S_d$ be the full symmetric group on $d$ symbols. Define $$ {\mathcal E}_d(H) =\{\textrm{monic $f\in {\mathbb Z}[X]$ of degree $d$: ${\mathrm H}(f)\le H$ and $G_f\ne S_d$} \} $$ and $$ {\mathcal E}^*_d(H) =\{\textrm{$f\in {\mathbb Z}[X]$ of degree $d$: ${\mathrm H}(f)\le H$ and $G_f\ne S_d$} \}. $$ The study of the sizes of ${\mathcal E}_d(H)$ and ${\mathcal E}^*_d(H)$ was initiated by van der Waerden~\cite{Waerden}. Here, we recall a recent result due to Dietmann~\cite[Theorem~1]{Dietmann2013}: \begin{equation} \label{eq:Galois1} |{\mathcal E}_d(H)| \ll H^{d-1/2}. \end{equation} Besides, by a result of Cohen~\cite[Theorem~1]{Cohen} (taking $K={\mathbb Q}, s=n+1$ and $r=1$ there), we directly have \begin{equation} \label{eq:Galois2} |{\mathcal E}^*_d(H)| \ll H^{d+1/2}\log H. \end{equation} We also put $$ {\mathcal F}_d(H) = \{\textrm{monic $f\in {\mathbb Z}[X]$ of degree $d:$\, ${\mathrm H}(f)\le H$, $f$ is degenerate} \} $$ and $$ {\mathcal F}^*_d(H) = \{\textrm{$f\in {\mathbb Z}[X]$ of degree $d:$\, ${\mathrm H}(f)\le H$, $f$ is degenerate} \}. 
$$ Applying~\cite[Theorems~1 and~4]{DS}, we have \begin{equation} \label{eq:degenerate} |{\mathcal F}_d(H)| \ll H^{d-1} \qquad \mbox{and} \qquad |{\mathcal F}^*_d(H)| \ll H^d. \end{equation} We are now ready to prove the following lemma. \begin{lem} \label{lem:special} We have: \begin{itemize} \item[(i)] for any integer $d\ge 1$, $$ |{\mathcal C}_d(H)| \ll H^{d(d-1/2)} \quad \text{and} \quad |{\mathcal C}^*_d(H)|\ll H^{d(d+1/2)}\log H; $$ \item[(ii)] for $d=2$ or for $d$ odd, $$ |{\mathcal C}_d(H)| \ll H^{d(d-1)} \qquad \mbox{and} \qquad |{\mathcal C}^*_d(H)|\ll H^{d^2}. $$ \end{itemize} \end{lem} \begin{proof} Pick an arbitrary element $\alpha \in {\mathcal C}_d(H)$. We let $f$ be its minimal polynomial over ${\mathbb Z}$, and let the $d$ roots of $f$ be $\alpha_1,\ldots,\alpha_d$ with $\alpha_1=\alpha$. Since $\alpha$ is of height at most $H$, by Lemma~\ref{lem:H0} we have $$ {\mathrm H}(f)\le (2H)^d. $$ By definition, there is a root of unity $\eta \ne \pm 1$ such that $\alpha\eta$ is also of degree $d$. If $\eta \in {\mathbb Q}(\alpha)$, then under an isomorphism sending $\alpha$ to $\alpha_i$, $\eta$ is mapped to one of its conjugates $\eta_i$ in ${\mathbb Q}(\alpha_i)$, which implies that $\eta \in {\mathbb Q}(\alpha_i)$ for any $1\le i \le d$. Indeed, the image $\eta_i$ of $\eta$ in ${\mathbb Q}(\alpha_i)$ multiplicatively generates the same group as $\eta$, and thus $\eta$ is a power of $\eta_i$, so $\eta \in {\mathbb Q}(\alpha_i)$. Hence $\bigcap_{i=1}^{d}{\mathbb Q}(\alpha_i) \ne {\mathbb Q}$, and so we must have $G_f \ne S_d$, that is, \begin{equation} \label{eq:f in E} f \in {\mathcal E}_d((2H)^d). \end{equation} Furthermore, since $f$ is irreducible, in this case $d \ne 2$. We also note that since $\eta$ is of even degree $\varphi(k)$, where $k > 2$ is the smallest positive integer with $\eta^k = 1$, this case does not happen when $d$ is odd. Now, we assume that $\eta \not\in {\mathbb Q}(\alpha)$. 
Let $K={\mathbb Q}(\eta,\alpha_1,\ldots,\alpha_d)$, and let $G$ be the Galois group ${\rm Gal}(K/{\mathbb Q})$, where $K$ is indeed a Galois extension over ${\mathbb Q}$. We write $G$ as a disjoint union $G= \bigcup_{i=1}^{d}G_i$, where $$ G_i = \{\phi\in G: \, \phi(\alpha)=\alpha_i \}. $$ So, for each $1\le i \le d$, $$ G_i \alpha\eta = \{\phi(\alpha\eta):\, \phi \in G_i \} =\{\alpha_i\phi(\eta):\, \phi \in G_i \}. $$ Since $\alpha\eta$ is of degree $d$, we have \begin{equation} \label{eq: Union Card} \left | \bigcup_{i=1}^{d}G_i \alpha\eta \right| = d. \end{equation} Since $\alpha_1=\alpha$, we have $G_1 = {\rm Gal}(K/{\mathbb Q}(\alpha))$. Since $\eta \not\in {\mathbb Q}(\alpha)$, there exist two morphisms $\phi_1,\phi_2 \in G_1$ such that $\phi_1(\eta) \ne \phi_2(\eta)$. That is, $|G_1 \alpha\eta| \ge 2$. Trivially, $|G_i \alpha\eta| \ge 1$ for $2\le i \le d$. We now see from~\eqref{eq: Union Card} that there are two distinct indices $i, j$ such that $G_i\alpha\eta \cap G_j\alpha\eta \ne \emptyset$, which implies that $\alpha_i/\alpha_j$ is a root of unity and thus $f$ is degenerate, that is, \begin{equation} \label{eq:f in F} f \in {\mathcal F}_d((2H)^d). \end{equation} Hence, if $\alpha \in {\mathcal C}_d(H)$, then combining~\eqref{eq:f in E} and~\eqref{eq:f in F} with~\eqref{eq:Galois1} and~\eqref{eq:degenerate}, respectively, we derive the first inequality in~(i). If $d=2$ or $d$ is odd, by the above discussion we always have~\eqref{eq:f in F}, and thus the first inequality in~(ii) follows from~\eqref{eq:degenerate}. Similar arguments also apply to estimate $|{\mathcal C}^*_d(H)|$ by using~\eqref{eq:Galois2} and~\eqref{eq:degenerate}. \end{proof} \section{Proofs of Propositions~\ref{prop:MnKm} and~\ref{prop:Mndm}} \subsection{Proof of Proposition~\ref{prop:MnKm}} Let $c_3,c_4,\ldots$ denote positive numbers depending on $n$ and $K$. 
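To illustrate the kind of relation manipulated below: when the coordinates are rational, a dependence $\nu_1^{k_1}\cdots\nu_n^{k_n}=1$ with exponents bounded as in Lemma~\ref{lem:exponent} can be found by exhaustive search. The following sketch is illustrative only, not part of the proof (the function name is ours), and is feasible precisely because the exponent range can be taken small:

```python
from fractions import Fraction
from itertools import product

def small_dependence(nums, bound):
    """Search for a non-trivial relation nums[0]**k0 * ... * nums[-1]**k_{n-1} == 1
    with all |ki| <= bound; return the exponent tuple, or None if no such
    relation exists within that exponent range."""
    n = len(nums)
    for ks in product(range(-bound, bound + 1), repeat=n):
        if all(k == 0 for k in ks):
            continue  # skip the trivial relation
        value = Fraction(1)
        for x, k in zip(nums, ks):
            value *= Fraction(x) ** k  # exact rational arithmetic
        if value == 1:
            return ks
    return None
```

For example, \texttt{small\_dependence([2, 3, 12], 2)} returns \texttt{(-2, -1, 1)}, reflecting the relation $2^{-2}\cdot 3^{-1}\cdot 12=1$, while \texttt{small\_dependence([2, 3], 3)} returns \texttt{None}, as $2$ and $3$ are multiplicatively independent.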
Let $\pmb{\nu}=(\nu_1,\ldots,\nu_n)$ be a multiplicatively dependent vector of multiplicative rank $s$ whose coordinates are from $K$ and have height at most $H$. Set $m = s+1$. Then, there are $m$ distinct integers $j_1,\ldots,j_m$ from $\{1,\ldots,n\}$ for which $\nu_{j_1},\ldots,\nu_{j_m}$ are multiplicatively dependent and there are non-zero integers $k_{j_1},\ldots,k_{j_m}$ for which \begin{equation} \label{eq:mult} \nu^{k_{j_1}}_{j_1}\cdots \nu^{k_{j_m}}_{j_m}=1, \end{equation} and further by Lemma~\ref{lem:exponent}, we can assume that \begin{equation} \label{eq:expo} \max\{|k_{j_1}|,\ldots,|k_{j_m}|\}<c_3(\log H)^{m-1}. \end{equation} Let $P$ be the set of indices $i$ for which $k_i$ is positive, and let $N$ be the set of indices $i$ for which $k_i$ is negative. Then \begin{equation} \label{eq:sep} \prod_{i\in P}\nu^{k_i}_{i}=\prod_{i\in N}\nu^{-k_i}_{i}. \end{equation} Plainly, either $\card{P}$ or $\card{N}$ is at least $\lceil m/2\rceil$. Let $I=\{j_1,\ldots,j_m\}$, and let $I_0$ be the subset of $I$ consisting of the indices $i$ for which $k_i$ is positive if $\card{P}\geq\lceil m/2 \rceil$, and otherwise let $I_0$ be the subset of $I$ consisting of the indices $i$ for which $k_i$ is negative. Note that \begin{equation} \label{eq:I0} \card{ I_0}\geq\left\lceil\frac{m}{2}\right\rceil. \end{equation} It follows from~\eqref{eq:sep} that \begin{equation} \label{eq:maineq} \prod_{i\in I_0}\nu^{|k_i|}_i=\prod_{i\in I\backslash I_0}\nu^{|k_i|}_i. \end{equation} For each coordinate $\nu_i$, $i\in I$, let $a_i$ be the leading coefficient of the minimal polynomial of $\nu_i$ over the integers. Note that $a_i\nu_i$ is an algebraic integer, and we can rewrite~\eqref{eq:maineq} as \begin{equation} \label{eq:maineq1} \prod_{i\in I_0}(a_i\nu_i)^{|k_i|}=\prod_{i\in I_0}a^{|k_i|}_i\prod_{i\in I\backslash I_0}\nu_i^{|k_i|}. \end{equation} We first establish~\eqref{MnKm}. 
Accordingly, we fix non-zero algebraic integers $\nu_i \in {\mathcal B}_K(H)$ for $i$ from $\{1,\ldots,n\}\backslash I_0$ and estimate the number of solutions of~\eqref{eq:maineq} in algebraic integers $\nu_i$, $i\in I_0$, from ${\mathcal B}_K(H)$. Observe that the number of cases when we consider an equation of the form~\eqref{eq:maineq} is, by~\eqref{eq:expo}, at most $$ \binom{n}{m}\(2c_3(\log H)^{(m-1)}\)^m B_K(H)^{n-\card{ I_0}}, $$ and, by~\eqref{BK} and~\eqref{eq:I0}, is at most \begin{equation} \label{eq:cases1} c_4H^{d(n-\lceil m/2 \rceil)}(\log H)^{c_5}. \end{equation} Let $q_1,\ldots,q_t$ be the primes which divide $$ \prod_{i\in I\backslash I_0}N_{K/{\mathbb Q}}(\nu_i), $$ where $N_{K/{\mathbb Q}}$ is the norm from $K$ to ${\mathbb Q}$. Since the height of $\nu_i$ is at most $H$, it follows from Lemma~\ref{lem:H0} that \begin{equation} \label{eq:normv} |N_{K/{\mathbb Q}}(\nu_i)|\leq (2H)^d, \quad i=1,2,\ldots,n, \end{equation} and since $\card {I\backslash I_0}\leq n$, we see that \begin{equation} \label{eq:normv2} \left|\prod_{i\in I\backslash I_0}N_{K/{\mathbb Q}}(\nu_i)\right|\leq (2H)^{dn}. \end{equation} Let $p_1,\ldots,p_k$ be the first $k$ primes, where $k$ satisfies $$ p_1\cdots p_k\leq\left|\prod_{i\in I\backslash I_0}N_{K/{\mathbb Q}}(\nu_i)\right|<p_1\cdots p_{k+1}. $$ Let $T$ denote the number of positive integers up to $(2H)^d$ which are composed only of primes from $\{q_1,\ldots,q_t\}$. We see that $T$ is bounded from above by the number of positive integers up to $(2H)^d$ which are composed of primes from $\{p_1,\ldots,p_k\}$. By~\eqref{eq:normv2}, we obtain $$ \sum_{\textrm{prime $p\le p_k$}} \log p \ll \log H, $$ which, combined with the prime number theorem, yields $$ p_k<c_6\log H. $$ Therefore we have $$ T\leq \psi\((2H)^d, c_6\log H \), $$ and thus by Lemma~\ref{lem:psixy}, \begin{equation} \label{eq:T1} T<\exp(c_7\log H/\log\log H). 
\end{equation} It follows that if $(\nu_i,i\in I_0)$ is a solution of~\eqref{eq:maineq}, then $|N_{K/{\mathbb Q}}(\nu_i)|$ is composed only of primes from $\{q_1,\ldots,q_t\}$, and so $N_{K/{\mathbb Q}}(\nu_i)$ is one of at most $2T$ integers of absolute value at most $(2H)^d$. Let $a$ be one of those integers. By Lemma~\ref{lem:coeff}, the number of algebraic integers $\alpha$ from $K$ of height at most $H$ for which \begin{equation} \label{eq:normeq} N_{K/{\mathbb Q}}(\alpha)=a \end{equation} is at most $\exp(c_8\log H/\log \log H)$. Therefore, by~\eqref{eq:T1} and~\eqref{eq:normeq}, the number of $\card{ I_0}$-tuples $(\nu_i, i\in I_0)$ which give a solution of~\eqref{eq:maineq} is at most $\exp(c_{9}\log H/\log\log H)$. Recalling $m = s+1$, we see that our bound~\eqref{MnKm} now follows from~\eqref{eq:cases1}. We now establish~\eqref{M*nKm}. We first remark, by Lemmas~\ref{lem:H0} and~\ref{lem:Ha}, that \begin{equation} \label{eq:ai} 0< a_i \leq (2H)^d \end{equation} and \begin{equation} \label{eq:ainu} {\mathrm H}(a_i\nu_i)\leq 2^{d-1}H^d, \end{equation} for $i=1,\ldots,n$. Moreover, without loss of generality we can assume that $I \setminus I_0$ is not empty. Indeed, if $I \setminus I_0$ is empty, then we can replace an arbitrary coordinate $\nu_i$, $i\in I$, by its inverse $\nu_i^{-1}$. In view of~\eqref{eq:maineq1}, we proceed by fixing $a_i$ for $i$ in $I_0$ and $\nu_i$ for $i$ in $\{1,\ldots,n\}\backslash I$. Since $I\backslash I_0$ is non-empty, say it contains $i_1$. We further fix $\nu_i$ for $i$ in $I\backslash I_0$ with $i\neq i_1$, and then the corresponding leading coefficient $a_i$ is also fixed. Let $$ \beta=\prod_{i\in I_0}a^{|k_i|}_i\prod_{\substack{i\in I\backslash I_0 \\ i \ne i_1}}(a_i\nu_i)^{|k_i|}, $$ which is a fixed non-zero algebraic integer, so that $N_{K/{\mathbb Q}}(\beta)$ is a fixed non-zero integer. 
Note that the left-hand side of~\eqref{eq:maineq1} is an algebraic integer, so $\beta \nu_{i_1}$ is an algebraic integer, and then $N_{K/{\mathbb Q}}(\beta\nu_{i_1})$ is also an algebraic integer. Thus, the leading coefficient $a_{i_1}$ divides $N_{K/{\mathbb Q}}(\beta)$. It follows that the prime factors of $a_{i_1}$ divide $$ \prod_{i\in I_0}a_i\prod_{\substack{i\in I\backslash I_0 \\ i \ne i_1}}N_{K/{\mathbb Q}}(a_i\nu_i). $$ Since the heights of $\nu_1, \ldots, \nu_n$ are at most $H$, we see, as in the proof of the estimate~\eqref{eq:T1}, that there are at most $\exp(c_{10}\log H/\log\log H)$ possibilities for the leading coefficient $a_{i_1}$. Note that by Lemma~\ref{lem:H0} there are at most $2(2H)^d$ possibilities for the constant coefficient of the minimal polynomial of $\nu_{i_1}$. Thus, by Lemma~\ref{lem:coeff}, there are at most \begin{equation} \label{eq:22} H^d\exp(c_{11} \log H/\log \log H) \end{equation} possible values of $\nu_{i_1}$ that we need to consider. In total we have, by~\eqref{B*K}, \eqref{eq:ai} and~\eqref{eq:22}, at most \begin{align*} \binom{n}{m}\(2c_3(\log H)^{(m-1)}\)^m (2H)^{d\card{ I_0}} & H^{2d(n-\card{ I_0}-1)}H^d \\ & \exp(c_{11} \log H/ \log \log H) \end{align*} equations of the form~\eqref{eq:maineq1}. Since $\card{ I_0}\geq\lceil \frac{m}{2}\rceil$, the number of such equations is at most \begin{equation} \label{eq:total} H^{2dn-d(\lceil \frac{m}{2}\rceil +1)}\exp(c_{12}\log H/ \log \log H). \end{equation} Let us put \begin{equation} \label{eq:gamma0} \gamma_0=\prod_{i\in I_0}a_i^{|k_i|}\prod_{i\in I\backslash I_0}(a_i\nu_i)^{|k_i|} \end{equation} and $$ \gamma_1=\prod_{i\in I\backslash I_0}a^{|k_i|}_i. $$ Notice that once $\nu_i$ is fixed for $i$ in $I\backslash I_0$, so is $a_i$ and thus $\gamma_1$ is fixed. 
Then,~\eqref{eq:maineq1} can be rewritten as \begin{equation} \label{eq:maineq2} \gamma_1\prod_{i\in I_0}(a_i\nu_i)^{|k_i|}=\gamma_0, \end{equation} and we seek an estimate for the number of solutions of~\eqref{eq:maineq2} in algebraic numbers $\nu_i$ from ${\mathcal B}^*_K(H)$ with leading coefficient $a_i$ for $i \in I_0$. Note that $\gamma_0$ is an algebraic integer and $\gamma_1$ is an integer. Let $q_1,\ldots,q_t$ be the prime factors of $$ \prod_{i\in I_0}a_i\prod_{i\in I\backslash I_0}N_{K/{\mathbb Q}}(a_i\nu_i). $$ Then, by~\eqref{eq:gamma0} and~\eqref{eq:maineq2}, for each index $i\in I_0$ the prime factors of $N_{K/{\mathbb Q}}(a_i\nu_i)$ are from $\{q_1,\ldots, q_t\}$. It follows from~\eqref{eq:ai}, \eqref{eq:ainu} and Lemma~\ref{lem:H0} that $$ \left|\prod_{i\in I_0}a_i\prod_{i\in I\backslash I_0}N_{K/{\mathbb Q}}(a_i\nu_i)\right| \leq (2H)^{d\card{ I_0}}(2^dH^d)^{d\card{I\backslash I_0}} \leq (2H)^{d^2n}. $$ We can now argue as in our proof of~\eqref{MnKm} that the number of solutions of~\eqref{eq:maineq2} in algebraic integers $a_i\nu_i$, $i\in I_0$, from $K$ of height at most $2^{d-1}H^d$ is at most $\exp(c_{13}\log H/\log\log H)$. The result~\eqref{M*nKm} now follows from~\eqref{eq:total}. \subsection{Proof of Proposition~\ref{prop:Mndm}} Let $c_3,c_4,\ldots$ denote positive numbers depending on $n$ and $d$. Suppose that $\pmb{\nu}=(\nu_1,\ldots,\nu_n)$ is a multiplicatively dependent vector of multiplicative rank $s$ whose coordinates are from ${\mathcal A}^*_d(H)$, and set $m = s+1$. Then, there are $m$ distinct integers $j_1,\ldots,j_m$ from $\{1,\ldots,n\}$ for which $\nu_{j_1},\ldots,\nu_{j_m}$ are multiplicatively dependent and there are non-zero integers $k_{j_1},\ldots,k_{j_m}$ for which~\eqref{eq:mult} holds, and by Lemma~\ref{lem:exponent}, we can suppose that~\eqref{eq:expo} holds. Let $I=\{j_1,\ldots,j_m\}$ and $I_0$ be defined as in the proof of Proposition~\ref{prop:MnKm}, so that~\eqref{eq:I0} and~\eqref{eq:maineq} hold. 
We first establish~\eqref{Mndm}. Fixing non-zero algebraic integers $\nu_i \in {\mathcal A}_d(H)$ for $i \in \{1,\ldots,n\}\backslash I_0$, we want to estimate the number of solutions of~\eqref{eq:maineq} in algebraic integers $\nu_i \in {\mathcal A}_d(H)$ for $i\in I_0$. The number of cases when we consider an equation of the form~\eqref{eq:maineq} is, by~\eqref{eq:expo}, at most $$ \binom{n}{m}\(2c_3(\log H)^{m-1}\)^{m}A_d(H)^{n-\card{I_0}}, $$ which, by~\eqref{Ad}, is at most \begin{equation} \label{eq:cases4} c_4H^{d^2(n-\card{I_0})}(\log H)^{m(m-1)}. \end{equation} For each $i\in I_0$, by~\eqref{eq:maineq} the prime factors of $N_{{\mathbb Q}(\nu_i)/{\mathbb Q}}(\nu_i)$ divide $$ \prod_{j\in I \setminus I_0} N_{{\mathbb Q}(\nu_j)/{\mathbb Q}}(\nu_j). $$ Just as in the proof of Proposition~\ref{prop:MnKm}, we can apply Lemma~\ref{lem:H0} and Lemma~\ref{lem:psixy} to conclude that, for $i\in I_0$, $N_{{\mathbb Q}(\nu_i)/{\mathbb Q}}(\nu_i)$ is one of at most $T$ integers, where, as in~\eqref{eq:T1}, $$ T<\exp(c_5\log H/\log\log H). $$ Then, estimating the number of possible choices of the minimal polynomial of $\nu_i$ over the integers by using Lemma~\ref{lem:H0}, we see that there are at most \begin{equation} \label{eq:viI0} d\(2(2H)^d+1\)^{d-1}\exp(c_5\log H/\log\log H) \end{equation} possible values of each $\nu_i$ for $i \in I_0$. We now fix $\card{I_0}-1$ of the terms $\nu_i$ with $i$ in $I_0$. Let $i_0 \in I_0$ denote the index of the term which is not fixed. Then, $\nu_{i_0}$ is a solution of \begin{equation} \label{eq:subeq} x^{|k_{i_0}|}=\eta_0, \end{equation} where $$ \eta_0=\prod_{\substack{i\in I_0 \\ i\neq i_0}}\nu^{-|k_i|}_i\prod_{i\in I\backslash I_0}\nu^{|k_i|}_i. $$ If $\nu_{i_0}$ and $\mu_{i_0}$ are two solutions of~\eqref{eq:subeq} from ${\mathcal A}_d(H)$, then $\nu_{i_0}/\mu_{i_0}$ is a $|k_{i_0}|$-th root of unity. 
But the degree of $\nu_{i_0}/\mu_{i_0}$ is at most $d^2$, and so there are at most $c_6$ possibilities for $\nu_{i_0}/\mu_{i_0}$ when $d$ is fixed. It follows from~\eqref{eq:viI0} that each equation~\eqref{eq:maineq} has at most \begin{equation} \label{eq:solutions1} H^{d(d-1)(\card{I_0}-1)}\exp(c_7\log H/\log\log H) \end{equation} solutions. Thus, by~\eqref{eq:cases4} and~\eqref{eq:solutions1}, we have \begin{equation} \label{eq:Mndm1} M_{n,d,s}(H)<H^{d^2(n-\card{I_0})+d(d-1)(\card{I_0}-1)}\exp(c_8\log H/\log\log H). \end{equation} Further, by~\eqref{eq:I0}, \begin{equation} \label{eq:dI0} d^2(n-\card{I_0})+d(d-1)(\card{I_0}-1)\leq d^2(n-1)-d\(\left\lceil\frac{m}{2}\right\rceil-1\). \end{equation} Now,~\eqref{Mndm} follows from~\eqref{eq:Mndm1} and~\eqref{eq:dI0}. We next establish~\eqref{M*ndm}. For each $i\in I$, let $a_i$ denote the leading coefficient of the minimal polynomial of $\nu_i$ over the integers. Without loss of generality, we can assume that $I \setminus I_0$ is not empty. Indeed, if $I \setminus I_0$ is empty, then we can replace an arbitrary coordinate $\nu_i$, $i\in I$, by its inverse $\nu_i^{-1}$. In view of~\eqref{eq:maineq1}, we proceed by first fixing positive integers $a_i$ for $i \in I_0$. Since $I\backslash I_0$ is non-empty, say it contains $i_1$. We next fix $\nu_i$ for $i \in \{1,\ldots,n\}\backslash I_0$ with $i \neq i_1$, and then the corresponding $a_i$ is also fixed. Let $$ \beta=\prod_{i\in I_0}a^{|k_i|}_i\prod_{\substack{i\in I\backslash I_0 \\ i \ne i_1}}(a_i\nu_i)^{|k_i|}, $$ which is a fixed non-zero algebraic integer. Notice that the left-hand side of~\eqref{eq:maineq1} is an algebraic integer, so $\beta \nu_{i_1}$ is also an algebraic integer, and thus as in the proof of~\eqref{M*nKm} the prime factors of the leading coefficient $a_{i_1}$ divide $$ \prod_{i\in I_0}a_i\prod_{\substack{i\in I\backslash I_0 \\ i \ne i_1}}N_{{\mathbb Q}(\nu_i)/{\mathbb Q}}(a_i\nu_i). 
$$ Since the heights of $\nu_1, \ldots, \nu_n$ are at most $H$ and their degrees are all equal to $d$, we see, as in the proof of~\eqref{eq:T1}, that there are at most $\exp(c_9 \log H/ \log\log H)$ possibilities for the leading coefficient $a_{i_1}$. Then, combining this result with Lemma~\ref{lem:H0}, we know that the number of the possibilities for the minimal polynomial of $\nu_{i_1}$ is at most $$ H^{d^2}\exp(c_{10} \log H / \log \log H). $$ Thus, there are at most \begin{equation} \label{eq:nu_i1} H^{d^2}\exp(c_{11} \log H / \log \log H) \end{equation} possible values of $\nu_{i_1}$ that we need to consider. Hence, the number of cases of the equation~\eqref{eq:maineq1} to be considered is, by~\eqref{eq:expo}, \eqref{eq:ai} and~\eqref{eq:nu_i1}, at most \begin{align*} \binom{n}{m}\(2c_3(\log H)^{m-1}\)^{m}(2H)^{d\card{I_0}} &A^*_d(H)^{n-\card{I_0}-1}H^{d^2} \\ & \exp(c_{11} \log H / \log \log H), \end{align*} which, by~\eqref{A*d}, is at most \begin{equation} \label{eq:cases5} H^{d(d+1)(n-\card{I_0}-1)+d\card{I_0}+d^2}\exp(c_{12} \log H / \log \log H). \end{equation} We now estimate the number of solutions of~\eqref{eq:maineq1} in algebraic numbers $\nu_i\in {\mathcal A}^*_d(H)$ for $i \in I_0$ with minimal polynomial having leading coefficient $a_i$. It follows from~\eqref{eq:maineq1} that for each $i\in I_0$ the prime factors of $N_{{\mathbb Q}(\nu_i)/{\mathbb Q}}(a_i\nu_i)$ divide $$ \prod_{j\in I_0}a_j\prod_{j\in I\backslash I_0}N_{{\mathbb Q}(\nu_j)/{\mathbb Q}}(a_j\nu_j). $$ Thus, by Lemma~\ref{lem:H0}, Lemma~\ref{lem:Ha} and Lemma~\ref{lem:psixy}, as in the proof of~\eqref{eq:T1}, there is a set of at most $T$ integers, where $$ T<\exp(c_{13}\log H/\log\log H), $$ and $N_{{\mathbb Q}(\nu_i)/{\mathbb Q}}(a_i\nu_i)$ belongs to that set. Since $a_i$ is fixed, the norm $N_{{\mathbb Q}(\nu_i)/{\mathbb Q}}(\nu_i)$ also belongs to a set of cardinality at most $T$ for $i \in I_0$. 
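The reduction from $N_{{\mathbb Q}(\nu_i)/{\mathbb Q}}(a_i\nu_i)$ to $N_{{\mathbb Q}(\nu_i)/{\mathbb Q}}(\nu_i)$, and the passage from the norm to the constant coefficient in the next step, rest on a standard identity which we record as a brief sketch for the reader's convenience: if $\nu$ has minimal polynomial $a_dx^d+\cdots+a_1x+a_0 \in {\mathbb Z}[x]$ with $a_d>0$ and conjugates $\nu^{(1)},\ldots,\nu^{(d)}$, then

```latex
$$ N_{{\mathbb Q}(\nu)/{\mathbb Q}}(\nu)=\prod_{j=1}^{d}\nu^{(j)}=(-1)^d\,\frac{a_0}{a_d}, \qquad\text{whence}\qquad N_{{\mathbb Q}(\nu)/{\mathbb Q}}(a_d\nu)=(-1)^d a_d^{\,d-1}a_0. $$
```

In particular, once the leading coefficient is fixed, the norm and the constant coefficient of the minimal polynomial determine one another.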
Notice that for the minimal polynomial of $\nu_i, i \in I_0$, if $N_{{\mathbb Q}(\nu_i)/{\mathbb Q}}(\nu_i)$ is fixed, then the constant coefficient is also fixed, because the leading coefficient $a_i$ has already been fixed. Hence, counting possible choices of the minimal polynomial of $\nu_i$ by using Lemma~\ref{lem:H0}, we see that there are at most \begin{equation} \label{eq:viI0*} H^{d(d-1)}\exp(c_{14}\log H/\log\log H) \end{equation} possible values of $\nu_i$ for $i \in I_0$. We now fix $\card{I_0}-1$ of the coordinates $\nu_i$ with $i\in I_0$ and argue as before to conclude from~\eqref{eq:viI0*} that each equation~\eqref{eq:maineq1} has at most \begin{equation} \label{eq:solutions2} H^{d(d-1)(\card{I_0}-1)}\exp(c_{15}\log H/\log\log H) \end{equation} solutions. Thus, by~\eqref{eq:cases5} and~\eqref{eq:solutions2}, we obtain \begin{equation} \label{eq:M*ndm1} \begin{split} M^*_{n,d,s}(H) &<H^{d(d+1)(n-\card{I_0}-1)+d\card{I_0}+d^2+d(d-1)(\card{I_0}-1)} \\ & \qquad \qquad \qquad \qquad \qquad \exp(c_{16}\log H/\log\log H). \end{split} \end{equation} Observing that \begin{align*} &d(d+1)(n-\card{I_0}-1)+d\card{I_0}+d^2 +d(d-1)(\card{I_0}-1) \\ & \quad =d(d+1)(n-1)-d(\card{I_0}-1), \end{align*} our result~\eqref{M*ndm} now follows from~\eqref{eq:I0} and~\eqref{eq:M*ndm1}. \section{Proof of Main Results} \subsection{Proof of Theorem~\ref{thm:MnK}} By~\eqref{MnK=} and~\eqref{MnKm}, there is a positive number $c$ which depends on $n$ and $K$ such that \begin{equation} \label{eq:MnKH} \begin{split} L_{n,K}(H) =L_{n,K,0}(H)&+L_{n,K,1}(H)\\ &+O(H^{d(n-1)-d}\exp(c\log H/\log\log H)). \end{split} \end{equation} Each such vector $\pmb{\nu}$ of multiplicative rank $0$ has an index $i_0$ for which $\nu_{i_0}$ is a root of unity. 
Accordingly, we have $$ nw(B_K(H)-w-1)^{n-1}\leq L_{n,K,0}(H)\leq nwB_K(H)^{n-1}, $$ and thus by~\eqref{BK} \begin{equation} \label{eq:MnK1} \begin{split} L_{n,K,0}(H)=nwC_1(K)^{n-1}&H^{d(n-1)}(\log H)^{r(n-1)}\\ & +O\(H^{d(n-1)}(\log H)^{r(n-1)-1}\). \end{split} \end{equation} We next estimate $L_{n,K,1}(H)$. Each such vector $\pmb{\nu}$ of rank $1$ has a pair of indices $(i_0,i_1)$, two coordinates $\nu_{i_0}$ and $\nu_{i_1}$ from ${\mathcal B}_K(H)$ and non-zero integers $k_{i_0}$ and $k_{i_1}$ such that $\nu^{k_{i_0}}_{i_0}\nu^{k_{i_1}}_{i_1}=1$. There are $n(n-1)/2$ pairs $(i_0,i_1)$. By Lemma~\ref{lem:exponent}, the number of such vectors associated with two distinct such pairs $(i_0,i_1)$ and $(i_2,i_3)$ is \begin{equation} \label{eq:twop} O\(B_K(H)^{n-2} (\log H)^4\). \end{equation} We now estimate the number of $n$-tuples $\pmb{\nu}$ whose coordinates are from ${\mathcal B}_K(H)$ for which $$ \nu^{k_{i_0}}_{i_0}\nu^{k_{i_1}}_{i_1}=1 $$ with $(k_{i_0},k_{i_1})$ equal to $(t,t)$ or $(t,-t)$ for some non-zero integer $t$. We have $(B_K(H)-w-1)^{n-2}$ choices for the coordinates of $\pmb{\nu}$ associated with indices different from $i_0$ and $i_1$, because they are non-zero and not roots of unity. Also there are $B_K(H)-w-1$ choices for the $i_0$-th coordinate, and once it is determined, say $\nu_{i_0}$, then the $i_1$-th coordinate is of the form $\eta\nu_{i_0}$ or $\eta\nu^{-1}_{i_0}$, where $\eta$ is a root of unity from $K$. Note that $$ {\mathrm H}(\eta\nu_{i_0})={\mathrm H}(\nu_{i_0})={\mathrm H}(\eta\nu^{-1}_{i_0}), $$ and that $\eta\nu^{-1}_{i_0}$ is only counted when $\nu_{i_0}$ is a unit in the ring of algebraic integers of $K$. Thus, we have \begin{equation} \label{eq:majority} \(B_K(H)-w-1\)^{n-2}\(\(B_K(H)-w-1\)w+\(U_K(H)-w\)w\) \end{equation} such vectors of rank $1$ associated with $(i_0,i_1)$. 
So, by~\eqref{BK}, \eqref{eq:twop}, \eqref{eq:majority} and Lemma~\ref{lem:units}, the number of such vectors of rank $1$ associated with an exponent vector $\mathbf{k}$ with $k_{i_0}=t$, $k_{i_1}=\pm t$ for $t$ a non-zero integer is \begin{equation} \label{eq:majority1} \begin{split} \frac{n(n-1)}{2}wC_1(K)^{n-1}H^{d(n-1)}&(\log H)^{r(n-1)}\\ & +O\(H^{d(n-1)}(\log H)^{r(n-1)-1}\). \end{split} \end{equation} It remains to estimate the number of such vectors of multiplicative rank $1$ associated with an exponent vector $\mathbf{k}$ with $k_{i_0}=t_1$ and $k_{i_1}=t_2$ with $t_1\neq \pm t_2$ and $t_1$ and $t_2$ non-zero integers. Let $\nu_1, \nu_2 \in {\mathcal B}_K(H)$ be associated with $t_1,-t_2$ respectively. In this case $$ \nu^{t_1}_1=\nu^{t_2}_2. $$ We first consider the case when $t_1$ and $t_2$ are of opposite signs. Then, $\nu_1$ and $\nu_2$ are units in the ring of algebraic integers of $K$, and so by Lemma~\ref{lem:units} the number of such vectors is \begin{equation} \label{eq:extra1} O\((\log H)^{2r}B_K(H)^{n-2}\). \end{equation} It remains to consider the case when $t_1$ and $t_2$ are both positive. Without loss of generality, we assume that $0<t_1 < t_2$, and also $t_2 \ll \log H$ by Lemma~\ref{lem:exponent}. If $t_2 = 2t_1$, then $\nu_1$ is determined by $\nu_2^2$ up to a root of unity contained in $K$, and also we have ${\mathrm H}(\nu_2) \le H^{1/2}$. So, the number of such pairs $(\nu_1,\nu_2)$ is $O(H^{d/2}(\log H)^r)$ by using~\eqref{BK}, and thus the number of such vectors of rank $1$ is \begin{equation} \label{eq:t22t1} O\(H^{d/2}(\log H)^rB_K(H)^{n-2}\). \end{equation} If $t_1$ divides $t_2$ and $t_2/t_1 \ge 3$, then we have ${\mathrm H}(\nu_2) \le H^{1/3}$, and so, as above, the number of such vectors of rank $1$ is \begin{equation} \label{eq:t23t1} O\(H^{d/3}(\log H)^{r+1}B_K(H)^{n-2}\). \end{equation} Now, we assume that $t_1$ does not divide $t_2$. Let $t$ be the greatest common divisor of $t_1$ and $t_2$. 
Note that $t_1/t \ge 2$ and $t_2/t \ge 3$. Put \begin{equation} \label{eq:nu1nu2} \gamma=\nu^{t_1}_1=\nu^{t_2}_2, \end{equation} and let $\beta$ be a root of $x^{t_1t_2}-\gamma$. Observe that $$ \beta^{t_1}=\eta_1\nu_2\quad \text{and}\quad \beta^{t_2}=\eta_2\nu_1 $$ for some $t_1t_2$-th roots of unity $\eta_1$ and $\eta_2$. There exist integers $u$ and $v$ with $ut_1+vt_2=t$, and so $$ \beta^t=\beta^{t_1u}\beta^{t_2v} =\eta_1^u\nu_2^u\eta_2^v\nu^v_1=\eta\alpha $$ for $\eta$ a $t_1t_2$-th root of unity and $\alpha$ an algebraic integer of $K$. Therefore \begin{equation} \label{eq:theta} (\eta\alpha)^{t_2/t}=\beta^{t_2}=\eta_2\nu_1, \end{equation} and so \begin{equation} \label{eq:Htheta} {\mathrm H}(\alpha)^{t_2/t}={\mathrm H}(\nu_1). \end{equation} Since ${\mathrm H}(\nu_1)\leq H$, we see, from~\eqref{eq:theta} and~\eqref{eq:Htheta}, that $\nu_1$ is determined up to a $t_1t_2$-th root of unity, by an algebraic integer of $K$ of height at most $H^{t/t_2}\le H^{1/3}$. Thus, by~\eqref{BK} and Lemma~\ref{lem:exponent}, the number of such pairs $(\nu_1,\nu_2)$ is $O(H^{d/3}(\log H)^{r+4})$, hence the number of such vectors of rank $1$ is \begin{equation} \label{eq:extra2} O\(H^{d/3}(\log H)^{r+4}B_K(H)^{n-2}\). \end{equation} Thus, by~\eqref{BK}, \eqref{eq:majority1}, \eqref{eq:extra1}, \eqref{eq:t22t1}, \eqref{eq:t23t1} and~\eqref{eq:extra2}, we get \begin{equation} \label{eq:MnK2} \begin{split} L_{n,K,1}(H) =\frac{n(n-1)}{2}wC_1&(K)^{n-1} H^{d(n-1)}(\log H)^{r(n-1)} \\ & +O\(H^{d(n-1)}(\log H)^{r(n-1)-1}\). \end{split} \end{equation} The estimate~\eqref{MnK} now follows from~\eqref{eq:MnKH}, \eqref{eq:MnK1} and~\eqref{eq:MnK2}. Finally, assume that $K$ is the rational number field ${\mathbb Q}$ or an imaginary quadratic field. Then, $r=0$, and so $B_K(H) = C_1(K)H^d+O(H^{d-1})$ by \eqref{BK0}. 
Repeating the above process, we obtain $$ L_{n,K,0}(H) = nwC_1(K)^{n-1}H^{d(n-1)} + O(H^{d(n-1)-1}) $$ and $$ L_{n,K,1}(H) = \frac{n(n-1)}{2}wC_1(K)^{n-1} H^{d(n-1)} + O\(H^{d(n-3/2)}\), $$ where the second error term comes from \eqref{eq:t22t1} (and also~\eqref{eq:majority} when $d=2$). Hence, noticing~\eqref{eq:MnKH} and $d=1$ or $2$, we obtain~\eqref{MnQ}. \subsection{Proof of Theorem~\ref{thm:MnK2}} By~\eqref{MnK=} and~\eqref{M*nKm}, we have \begin{equation} \label{eq:M*nK1234} \begin{split} L^*_{n,K}(H)= L^*_{n,K,0}&(H) + L^*_{n,K,1}(H) \\ & +O\(H^{2d(n-1)-d}\exp(c_2\log H/\log\log H)\). \end{split} \end{equation} As in the proof of Theorem~\ref{thm:MnK}, we obtain, by using~\eqref{B*K} in place of~\eqref{BK}, \begin{equation} \label{eq:M*nK1} L^*_{n,K,0}(H)=nwC_2(K)^{n-1}H^{2d(n-1)}+O\(H^{2d(n-1)-1}(\log H)^{\sigma(d)}\), \end{equation} where $\sigma(1)=1$ and $\sigma(d)=0$ for $d>1$. Similarly, we find that \begin{equation} \label{eq:M*nK2} \begin{split} L^*_{n,K,1}(H)=n(n-1)wC_2(K)^{n-1}&H^{2d(n-1)}\\ & +O\(H^{2d(n-1)-1}(\log H)^{\sigma(d)}\), \end{split} \end{equation} where the main difference from the proof of~\eqref{eq:MnK2} is that the contribution from the exponent vectors $(k_{i_0},k_{i_1})$ equal to $(t,t)$ is the same as when $(k_{i_0},k_{i_1})$ is equal to $(t,-t)$. The desired result now follows from~\eqref{eq:M*nK1234}, \eqref{eq:M*nK1} and~\eqref{eq:M*nK2} by noticing that $$ L^*_{2,K}(H)= L^*_{2,K,0}(H) + L^*_{2,K,1}(H). $$ \subsection{Proof of Theorem~\ref{thm:Mnd}} We first establish~\eqref{Mnd}. By~\eqref{Mnd=} and~\eqref{Mndm}, we have \begin{equation} \label{eq:Mnd12} \begin{split} M_{n,d}(H)=M_{n,d,0}&(H)+M_{n,d,1}(H) \\ &+O\(H^{d^2(n-1)-d}\exp(c_1\log H/\log\log H)\). \end{split} \end{equation} Note that each such vector $\pmb{\nu}$ of multiplicative rank 0 has a coordinate which is a root of unity of degree $d$. 
So, in view of the definition of $w_0(d)$ in~\eqref{eq:w0w1} we have $$ nw_0(d)\(A_d(H)-w_0(d)\)^{n-1} \le M_{n,d,0}(H)\le nw_0(d)A_d(H)^{n-1}, $$ and thus by~\eqref{Ad} and~\eqref{eq:w0w1}, \begin{equation} \label{eq:Mnd1} \begin{split} M_{n,d,0}(H) = nw_0(d)C_5(d)^{n-1}&H^{d^2(n-1)} \\ & + O\(H^{d^2(n-1)-d}(\log H)^{\rho(d)}\). \end{split} \end{equation} We remark that $M_{n,d,0}(H) =0$ if $w_0(d)=0$. Moreover, arguing as in the proof of Theorem~\ref{thm:MnK}, we find that the main contribution to $M_{n,d,1}(H)$ comes from vectors associated with an exponent vector $\mathbf{k}$ which has two non-zero components one of which is $t$ and the other of which is $\pm t$ with $t$ a non-zero integer. Notice that the number $U_d(H)$ of algebraic integers which are units of degree $d$ and height at most $H$ satisfies (by using Lemma~\ref{lem:H0}) \begin{equation} \label{eq:Ud} U_d(H)=O\(H^{d(d-1)}\). \end{equation} We then deduce from~\eqref{Ad}, \eqref{eq:w0w1}, \eqref{eq:Ud} and Lemma~\ref{lem:special} that \begin{equation} \label{eq:Mnd2} M_{n,d,1}(H)=n(n-1)C_5(d)^{n-1}H^{d^2(n-1)} +O\(H^{d^2(n-1)-d/2}\); \end{equation} if furthermore $d=2$ or $d$ is odd, then \begin{equation} \label{eq:Mnd2'} M_{n,d,1}(H)=n(n-1)C_5(d)^{n-1}H^{d^2(n-1)} +O\(H^{d^2(n-1)-d}\log H\). \end{equation} Here, we need to note that for an algebraic integer $\alpha$ of degree $d$ and a root of unity $\eta \ne \pm 1$, $\alpha\eta$ might not be of degree $d$. The desired asymptotic formula~\eqref{Mnd} now follows from~\eqref{eq:Mnd12}, \eqref{eq:Mnd1} and~\eqref{eq:Mnd2}. In order to show~\eqref{Mnd'}, we use~\eqref{eq:Mnd2'} instead of~\eqref{eq:Mnd2}. Besides,~\eqref{M2d} follows from~\eqref{eq:Mnd1} and~\eqref{eq:Mnd2'} by noticing that $$ M_{2,d}(H) = M_{2,d,0}(H) + M_{2,d,1}(H). $$ Finally, we prove~\eqref{M*nd}, \eqref{M*nd'} and~\eqref{M*2d}. 
By~\eqref{Mnd=} and~\eqref{M*ndm}, we have \begin{equation} \label{eq:M*nd1234} \begin{split} M^*_{n,d}(H)= &M^*_{n,d,0}(H) + M^*_{n,d,1}(H) \\ & +O\(H^{d(d+1)(n-1)-d}\exp(c_2\log H/\log\log H)\). \end{split} \end{equation} As before, we have, by using~\eqref{A*d}, \begin{equation} \label{eq:M*nd1} \begin{split} M^*_{n,d,0}(H)=nw_0(d)C_6(d)^{n-1}&H^{d(d+1)(n-1)} \\ &+ O\(H^{d(d+1)(n-1)-d}(\log H)^{\vartheta(d)}\). \end{split} \end{equation} As in~\eqref{eq:Mnd2} and~\eqref{eq:Mnd2'}, we find that \begin{equation} \label{eq:M*nd2} \begin{split} M^*_{n,d,1}(H)=2n(n-1)C_6&(d)^{n-1}H^{d(d+1)(n-1)} \\ &+O\(H^{d(d+1)(n-1)-d/2}\log H\); \end{split} \end{equation} if furthermore $d=2$ or $d$ is odd, we have \begin{equation} \label{eq:M*nd2'} \begin{split} M^*_{n,d,1}(H)=2n(n-1)&C_6(d)^{n-1}H^{d(d+1)(n-1)} \\ &+O\(H^{d(d+1)(n-1)-d}(\log H)^{\vartheta(d)}\). \end{split} \end{equation} So,~\eqref{M*nd} follows from~\eqref{eq:M*nd1234}, \eqref{eq:M*nd1} and~\eqref{eq:M*nd2}; then using~\eqref{eq:M*nd2'} instead of~\eqref{eq:M*nd2} gives~\eqref{M*nd'}. In order to deduce ~\eqref{M*2d}, we apply~\eqref{eq:M*nd1} and~\eqref{eq:M*nd2'} and notice that $$ M^*_{2,d}(H) = M^*_{2,d,0}(H) + M^*_{2,d,1}(H). $$ \section{Lower Bound} \label{sec:low} In this section, we shall prove that~\eqref{MnKm} is sharp, apart from a factor $H^{o(1)}$, when $n=s+1$ is even and $K={\mathbb Q}$. We need the following slight extension of~\cite[Lemma~2.3]{MS}. \begin{lem}\label{lem:prod=} Let $k$ and $q$ be integers with $k\geq 2$ and $q \geq 2$. Let $\pmb{\gamma}=(\gamma_1, \ldots, \gamma_k) $ with $\gamma_1, \ldots, \gamma_k$ positive real numbers. 
Then, there exists a positive number $\Gamma(q,\pmb{\gamma}) $ such that for $T \to \infty$, we have $$ \mathop{\sum\, \ldots \sum}_{\substack{ a_1\cdots a_k= b_1\cdots b_k\\ \gcd(a_ib_i, q) = 1\\ 1\le a_i,b_i \le T^{\gamma_i}\\i=1, \ldots, k}} \,\, 1 \quad \sim \quad \Gamma(q,\pmb{\gamma}) T^{\gamma}(\log T)^{(k-1)^2}, $$ where $\gamma = \gamma_1 + \cdots +\gamma_k$. \end{lem} \begin{proof} The proof proceeds along the same lines as in the proof of~\cite[Lemma~2.3]{MS}. The only difference is that the primes $p$ which divide $ q$ are now excluded from the Euler products that appear in~\cite{MS}. \end{proof} We show that apart perhaps from the factor $\exp(c_1 \log H/ \log\log H)$ the estimate~\eqref{MnKm} in Proposition~\ref{prop:MnKm} is sharp when $n$ is even, $s = n-1$ and $K={\mathbb Q}$. \begin{thm} \label{thm:MnQm} Let $n=2k$, where $k$ is an integer with $k>1$. Then, for sufficiently large $H$, there exists a positive number $c$ depending on $n$ such that \begin{equation} \label{MnQm} L_{n,{\mathbb Q},n-1}(H) \ge cH^{k}(\log H)^{(k-1)^2}. \end{equation} \end{thm} \begin{proof} Fix $n-2$ distinct odd primes $p_i$, $q_i$, $i=2, \ldots, k$. Given positive integers $a_1,\ldots,a_k,b_1,\ldots,b_k$, we first set $$ \nu_1= 2 p_2\cdots p_k a_1 \qquad \mbox{and} \qquad \nu_{k+1} = 2 q_2\cdots q_k b_1. $$ After this we set $$ \nu_i = q_i a_i \qquad \mbox{and} \qquad \nu_{k+i} = p_i b_{i} , \quad i = 2, \ldots, k. $$ Clearly, if $a_1\cdots a_k = b_1\cdots b_k$ with $\gcd(a_ib_i, 2 p_2q_2\cdots p_kq_k) =1$ for any $2\le i \le k$, then the integer vector $\pmb{\nu}=(\nu_1,\ldots,\nu_n)$ is multiplicatively dependent of rank $n-1$ by noticing that $\nu_1\cdots \nu_k = \nu_{k+1}\cdots \nu_n$ and that there is no non-empty subset $\{i_1,\ldots ,i_m\}$ of $\{1, \ldots ,n\}$ of size less than $n$ for which \begin{equation} \label{eq:mult1} \nu_{i_1}^{j_{i_1}}\cdots \nu_{i_m}^{j_{i_m}}=1, \end{equation} with $j_{i_1}, \ldots ,j_{i_m}$ non-zero integers. 
For sufficiently large $H$, we choose such integers $a_i, b_i \le c_1H$ for some positive number $c_1$ depending only on the above fixed primes such that we have $|\nu_i| \le H$ for each $1\le i \le n$. Then, each such vector $\pmb{\nu}$ contributes to $L_{n,{\mathbb Q},n-1}(H)$. Now applying Lemma~\ref{lem:prod=} to count such vectors (taking $T=c_1H$ and $\gamma_i=1$ for each $i=1, \ldots, k$), we derive $$ L_{n,{\mathbb Q},n-1}(H) \ge cH^{k}(\log H)^{(k-1)^2}, $$ where $c$ is a positive number depending on $n$. \end{proof} \section{Comments} It might be of interest to investigate in more detail how tight our bounds are in Propositions~\ref{prop:MnKm} and~\ref{prop:Mndm}. In Section~\ref{sec:low} we have taken an initial step in this direction. It would be interesting to study multiplicatively dependent vectors of polynomials over finite fields. In this case the degree plays the role of the height. While we expect that most of our results can be translated to this case many tools need to be developed and this should be of independent interest. \section*{Acknowledgements} The first author was supported in part by Gruppo Nazionale per le Strutture Algebriche, Geometriche e le loro Applicazioni from Istituto Nazionale di Alta Matematica ``F. Severi''. The research of the second and third authors was supported by the Australian Research Council Grant DP130100237. The research of the fourth author was supported in part by the Canada Research Chairs Program and by Grant A3528 from the Natural Sciences and Engineering Research Council of Canada.
\section{#1}} \setcounter{footnote}{1} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \usepackage{color} \usepackage{amsthm} \theoremstyle{plain} \usepackage{thmtools,thm-restate} \usepackage{bm} \usepackage{tikz-cd} \usepackage{hyperref} \newcommand{\comment}[1]{} \newcommand{\bydef}{\stackrel{\rm def}{=}} \newcommand{\clb}{\mathcal{B}} \newcommand{\cld}{\mathcal{D}} \newcommand{\clh}{\mathcal{H}} \newcommand{\clk}{\mathcal{K}} \newcommand{\clU}{\mathcal{U}} \newcommand{\clY}{\mathcal{Y}} \newcommand{\clV}{\mathcal{V}} \newcommand{\clW}{\mathcal{W}} \theoremstyle{theorem} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}[theorem]{Claim} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{definition}[theorem]{Definition} \newtheorem*{theorem*}{Theorem} \newtheorem{protoeg}[theorem]{Example} \newtheorem{protoremark}[theorem]{Remark} \newtheorem{protodefinition}[theorem]{Definition} \renewcommand{\theequation}{\arabic{equation}} \renewcommand{\thetheorem}{\arabic{theorem}} \newcommand{\EE}[1]{\E\l[#1\r]} \newcommand\Tr{{\mbox{Tr}}} \newcommand\tr{{\mbox{tr}}} \newcommand\mnote[1]{} \newcommand\be{\begin{equation*}} \newcommand\bel[1]{{\mnote{#1}}\be\label{#1}} \newcommand\ee{\end{equation*}} \newcommand\lb[1]{\label{#1}\mnote{#1}} \newcommand\ben{\begin{equation}} \newcommand\een{\end{equation}} \newcommand\bes{\begin{eqnarray*}} \newcommand\ees{\end{eqnarray*}} \newcommand\bex{\begin{exercise}} \newcommand\eex{\end{exercise}} \newcommand\beg{\begin{example}} \newcommand\eeg{\end{example}} \newcommand\benu{\begin{enumerate}} \newcommand\eenu{\end{enumerate}} \newcommand\beit{\begin{itemize}} \newcommand\eeit{\end{itemize}} \newcommand\berk{\begin{remark}} \newcommand\eerk{\end{remark}} \newcommand\bdefn{\begin{definition}} \newcommand\edefn{\end{definition}} \newcommand\bthm{\begin{theorem}} \newcommand\ethm{\end{theorem}} 
\newcommand\bprf{\begin{proof}} \newcommand\eprf{\end{proof}} \newcommand\blem{\begin{lemma}} \newcommand\elem{\end{lemma}} \newcommand{\as}{\mbox{\hspace{.3cm} a.s.}} \newcommand{\dist}{\mbox{\rm dist}} \newcommand{\supp}{\mbox{\rm supp}} \newcommand{\Arg}{\mbox{\rm Arg}} \newcommand{\Poi}{\mbox{\rm Poi}} \newcommand{\Vol}{\mbox{\rm Vol}} \newcommand{\Cov}{\mbox{\rm Cov}} \newcommand{\sm}{{\raise0.3ex\hbox{$\scriptstyle \setminus$}}} \newcommand{\intc}{\int_0^{2\pi}} \newcommand{\agau}{a complex Gaussian analytic function} \newcommand{\gi}{\,|\,} \newcommand{\area}{\operatorname{area}} \newcommand{\wA}{\widetilde{A}} \newcommand{\zb}{\overline{z}} \newcommand{\ontop}{\genfrac{}{}{0cm}{2}} \def\mb{\mbox} \def\l{\left} \def\r{\right} \def\sig{\sigma} \def\lam{\lambda} \def\alp{\alpha} \def\eps{\epsilon} \def\tends{\rightarrow} \def\Lap{\Delta} \newcommand{\const}{\operatorname{const}} \newcommand{\ip}[2]{\langle#1,#2\rangle} \renewcommand{\Pr}[1]{\,\mathbb P\,\(\,#1\,\)\,} \newcommand{\ti}{\tilde} \newcommand{\Wi}[1]{:#1:} \newcommand{\imply}{\;\;\;\Longrightarrow\;\;\;} \def\Kdet{{\mathbb K}} \def\STAR{\bigstar} \def\CYCLE{\diamondsuit} \def\CHI{\mathchoice{\raise2pt\hbox{$\chi$}}{\raise2pt\hbox{$\chi$}}{\raise1.3pt\hbox{$\scriptstyle\chi$}}{\raise0.8pt\hbox{$\scriptscriptstyle\chi$}}} \def\smalloplus{\raise1pt\hbox{$\,\scriptstyle \oplus\;$}} \newtheorem{thm}{Theorem}[section] \newtheorem{defn}[thm]{Definition} \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{rslt}[thm]{Result} \newtheorem{obs}[thm]{Observation} \newtheorem{prop}[thm]{Proposition} \newtheorem{rem}[thm]{Remark} \newtheorem{note}[thm]{Note} \numberwithin{equation}{section} \newcommand{\bb}[1]{\mathbb{#1}} \newcommand{\cl}[1]{\mathcal{#1}} \def\textmatrix#1&#2\\#3&#4\\{\bigl({#1 \atop #3}\ {#2 \atop #4}\bigr)} \def\dispmatrix#1&#2\\#3&#4\\{\left({#1 \atop #3}\ {#2 \atop #4}\right)} \begin{document} \title[Interpolating sequences and Toeplitz corona theorem]{Interpolating 
sequences and the Toeplitz corona theorem on the symmetrized bidisk } \author[Bhattacharyya]{Tirthankar Bhattacharyya} \address[Bhattacharyya]{Department of Mathematics, Indian Institute of Science, Bangalore 560 012, India.} \email{[email protected]} \author[Sau]{Haripada Sau} \address[Sau]{Department of Mathematics, Virginia Tech, Blacksburg, VA 24061-0123, USA.} \email{[email protected], [email protected]} \thanks{MSC2010: 46E22, 47A13, 47A25, 47A56} \thanks{Key words and phrases: Symmetrized bidisk, Realization formula, Interpolation, Operator--valued kernel, Reproducing kernel Hilbert space, Toeplitz corona problem.} \thanks{This research is supported by University Grants Commission, India via CAS. Work of the second author was largely done at the Indian Institute of Technology Bombay.} \date{\today} \maketitle \begin{abstract} This paper is a continuation of work done in \cite{BS}. It contains two new theorems about bounded holomorphic functions on the symmetrized bidisk -- a characterization of interpolating sequences and a Toeplitz corona theorem. \end{abstract} \section{Statement of main results} \subsection{Introduction} This paper extends two important results in operator theory and complex function theory on the unit disk to the symmetrized bidisk $$\mathbb{G}=\{(z_1+z_2,z_1z_2): |z_1|<1, |z_2|<1\},$$ namely, a characterization of interpolating sequences (Theorem 1) and the Toeplitz-corona theorem (Theorem 2). These two seemingly unrelated results are unified by the fact that both are applications of the statement and the method of proof of the Realization Theorem for operator-valued bounded holomorphic functions on $\mathbb G$ of norm no larger than $1$. The symmetrized bidisk is non-convex, but polynomially convex. 
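A minimal numerical sketch of this last statement (the membership test comes straight from the definition of $\mathbb{G}$; the specific points below are our own illustrative choices): a point $(s,p)$ lies in $\mathbb{G}$ exactly when both roots of $\lambda^2-s\lambda+p$ lie in the open unit disk, and the segment joining two points of $\mathbb{G}$ can leave $\mathbb{G}$.

```python
import numpy as np

def in_sym_bidisk(s, p):
    """(s, p) is in G iff both zeros of x**2 - s*x + p lie in the open unit disk."""
    roots = np.roots([1, -s, p])
    return bool(np.all(np.abs(roots) < 1))

# Two points of G: the symmetrizations of (0.9, 0.9) and (0.9i, 0.9i).
a = (1.8, 0.81)
b = (1.8j, -0.81)
mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # midpoint of the segment

print(in_sym_bidisk(*a), in_sym_bidisk(*b), in_sym_bidisk(*mid))  # True True False
```

Since the midpoint falls outside $\mathbb{G}$, the domain is not convex; polynomial convexity is a deeper fact and is not checked here.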
It is interesting to both complex analysts and operator theorists -- for dilation and related results (\cite{ay-Edinburgh}, \cite{BPSR}), for a rich function theory (\cite{AYRealization}, \cite{BS}) and for its complex geometry (\cite{ay-geometry}, \cite{KZ}). Let $\mathcal L$ be a Hilbert space. All Hilbert spaces in this note are separable and are over the complex field. Let $\mathcal{B}(\mathcal L)$ denote the algebra of all bounded operators acting on $\mathcal L$. A function $k:\mathbb G \times \mathbb G\to \mathcal B(\mathcal L)$ is called {\it{positive semi-definite}} if for all $n \ge 1$, all $\lambda_1, \lambda_2, \ldots , \lambda_n$ in $\mathbb G$ and all $h_1, h_2, \ldots , h_n$ in $\mathcal L$, it is true that \begin{eqnarray}\label{kercond} \sum_{i,j=1}^n \langle k(\lambda_i,\lambda_j)h_i , h_j \rangle \geq0. \end{eqnarray} If moreover, $k$ is holomorphic in the first variable, anti-holomorphic in the second variable and $k(\lambda,\lambda)\neq 0$ for every $\lambda\in \mathbb G$, then it is called a $kernel$. A {\it{weak kernel}} $k$ is a function $k:\mathbb G \times \mathbb G\to \mathcal B(\mathcal L)$ that is holomorphic in the first variable and anti-holomorphic in the second such that (\ref{kercond}) holds with no requirement of being non-zero on the diagonal. In what follows, a kernel or a weak kernel will be assumed to be scalar-valued, i.e., when $\mathcal L=\mathbb C$, unless otherwise mentioned. 
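To make condition (\ref{kercond}) concrete, here is a small numerical illustration with a toy choice of kernel (our own example, not one singled out in the text): $k((s,p),(t,q))=(1-p\overline{q})^{-1}$ depends only on the second coordinates, is holomorphic in the first variable and anti-holomorphic in the second, and inherits positive semi-definiteness from the Szeg\H{o} kernel of the disk, so its Gram matrices at points of $\mathbb{G}$ should be positive semi-definite up to rounding.

```python
import numpy as np

def k(lam, mu):
    # lam = (s, p), mu = (t, q): a Szego-type weak kernel pulled back from the p-coordinate
    return 1.0 / (1.0 - lam[1] * np.conj(mu[1]))

rng = np.random.default_rng(0)
# Sample points (z1 + z2, z1*z2) of G from pairs z1, z2 in the open unit disk.
zs = rng.uniform(-0.7, 0.7, (8, 2)) + 1j * rng.uniform(-0.7, 0.7, (8, 2))
pts = [(z1 + z2, z1 * z2) for z1, z2 in zs]

gram = np.array([[k(a, b) for b in pts] for a in pts])
eigs = np.linalg.eigvalsh(gram)  # gram is Hermitian by construction
print(eigs.min() > -1e-8)        # positive semi-definite up to numerical error
```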
It is elementary that for every $\mathcal B(\mathcal L)$-valued positive semi-definite function $k$, there is a Hilbert space $H_k$ consisting of $\mathcal L$-valued functions on $\mathbb G$ such that the set of functions \begin{equation} \label{kernelspace} \{\sum_{i=1}^n k(\cdot, \lambda_i)h_i: n \in \mathbb N, \; h_i\in \mathcal L \text{ and }\lambda_i\in \mathbb G\} \end{equation} is dense in $H_k$ and $\langle f, k(\cdot,\lambda)h\rangle_{H_k}=\langle f(\lambda), h\rangle_{\mathcal L}$ for any $f \in H_k$, $h \in \mathcal L$ and $\lambda \in \mathbb G$. If moreover, $k$ is a kernel (or a weak kernel), then the functions are holomorphic. The operator theory comes into play because of the following. Let $(T_1, T_2)$ be a pair of commuting bounded operators acting on a Hilbert space and $\sigma(T_1,T_2)$ be the Taylor joint spectrum of $(T_1,T_2)$. A polynomially convex compact set $X \subseteq \mathbb C^2$ is called a {\em{spectral set}} for $(T_1, T_2)$, if $\sigma (T_1, T_2) \subseteq X$ and $$ \| \xi(T_1, T_2) \| \le \sup_X | \xi |$$ for any polynomial $\xi$ in two variables. A typical point of the symmetrized bidisk will be denoted by $(s,p)$. The terminology {\it$\Gamma$--contraction} in the following definition is by now classical and was introduced by Agler and Young in \cite{ay-jot}. \begin{defn} \label{Gamma--contraction} A pair of commuting bounded operators $(S, P)$ on a Hilbert space $\mathcal H$ having the closed symmetrized bidisk $\Gamma$ as a spectral set is called a $\Gamma$--contraction. Thus $(S,P)$ is a $\Gamma$--contraction if and only if $\| \xi (S,P) \| \le \sup_{\mathbb G} |\xi|$ for all polynomials $\xi$ in two variables. A $\mathcal B(\mathcal L)$-valued kernel $k$ on $\mathbb G$ is called admissible if the pair $(M_s,M_p)$ of multiplication by the co-ordinate functions is a $\Gamma$--contraction on $H_k$. 
Similarly, a kernel $k$ on $\mathbb D^2$ ($\mathbb D$ being the unit disk in the complex plane) is called admissible if the multiplication operators $M_{z_1}$ and $M_{z_2}$ by the coordinate functions are contractions on $H_k$. \end{defn} Note that the definition of admissibility is attuned to the domain. For the bidisk, we demand that the operator pair of multiplication by the coordinate functions has $\overline{\mathbb D^2}$ as a spectral set whereas for the symmetrized bidisk, the demand is that the operator pair of multiplication by the coordinate functions has $\Gamma$ as a spectral set. \subsection{Interpolating Sequences} \label{IS} A sequence $\{(s_j,p_j): j \geq 1\}$ of points in $\mathbb{G}$ is called an {\em interpolating sequence} for $H^\infty(\mathbb{G})$, the algebra of bounded analytic functions on $\mathbb{G}$, if for every bounded sequence $w=\{w_j: j \geq 1\}$ of complex numbers, there exists a function $f$ in $H^\infty(\mathbb{G})$ such that $f(s_j,p_j)=w_j$ for each $j\geq 1$. Interpolating sequences for the algebra $H^\infty(\mathbb D)$ of bounded analytic functions on $\mathbb D$ were characterized by Carleson \cite{Carleson-IntSeq}. One of his characterizations of interpolating sequences is that a sequence $\{z_j\}$ in $\mathbb D$ is interpolating if and only if there exists $\delta > 0$ such that $$ \prod_{j\neq k}\left|\frac{z_j-z_k}{1-\overline{z_k}z_j}\right| \geq \delta, \text{ for all } k. $$ In \S 3, we shall see that there exist uncountably many Carleson-type sufficient conditions for a sequence in $\mathbb G$ to be interpolating (Lemma \ref{suff}). A sequence $\{(s_j,p_j):j\geq 1\}$ of points in $\mathbb G$ is called {\em strongly separated} if there exists a constant $M$ such that, for each $i$ there is an $f_i$ in $H^\infty (\mathbb G)$ of norm at most $M$ that satisfies $f_i(s_i,p_i ) = 1$ and $f_i(s_j,p_j ) = 0$ for all $j$ other than $i$. 
And the sequence is called {\em weakly separated} if whenever $i \neq j$, there exists a function $f_{ij}$ in $H^\infty (\mathbb G)$ of norm at most $M$ that satisfies $f_{ij}(s_i,p_i) = 1$ and $f_{ij}(s_j,p_j) = 0$. Note that an interpolating sequence is strongly separated and a strongly separated sequence is weakly separated. For a given sequence $\{(s_j,p_j): j \geq 1\}$ in $\mathbb{G}$ and a kernel $k$ on $\mathbb{G}$, the {\em normalized Grammian} of $k$ is the following infinite matrix $$ G_k=\left( \frac{k((s_i,p_i),(s_j,p_j))}{\sqrt{k((s_i,p_i),(s_i,p_i))}\sqrt{k((s_j,p_j),(s_j,p_j))}} \right)_{i,j=1}^\infty. $$ The following theorem characterizes the interpolating sequences on the symmetrized bidisk, which will be proved in \S 3. Interpolating sequences on the bidisk were characterized in \cite{Ag-Mc IntSeq}. \begin{theorem}[Characterization of Interpolating Sequences] Let $\{(s_j,p_j): j \geq 1\}$ be a sequence in $\mathbb{G}$. Then the following are equivalent: \begin{enumerate} \item[(i)] The sequence $\{(s_j,p_j): j \geq 1\}$ is an interpolating sequence for $H^\infty(\mathbb{G})$; \item[(ii)]For all admissible kernels $k$, the normalized Grammians are uniformly bounded below, i.e., there exists an $N>0$ such that \begin{eqnarray}\label{grammbelow} G_k \geq \frac{1}{N}I \end{eqnarray}for every admissible kernel $k$; \item[(iii)] The sequence $\{(s_j,p_j): j \geq 1\}$ is strongly separated and for all admissible kernels $k$, the normalized Grammians are uniformly bounded above, i.e., there exists an $M>0$ such that \begin{eqnarray}\label{grammabove} G_k \leq MI \end{eqnarray} for every admissible kernel $k$. \item[(iv)] Items (ii) and (iii) both hold. \end{enumerate} \label{Interpolate} \end{theorem} Although (iv) is redundant in view of (ii) and (iii), we have listed it because in the course of the proof, we shall first show that (i) is equivalent to (iv), and shall then show that (ii) is equivalent to (iii). 
It is clear that (i) and (iv) together are equivalent to (ii) and (iii) together. \subsection{Toeplitz Corona Theorem} The Corona Theorem for $H^\infty(\mathbb D)$ is a statement about its maximal ideal space. Obviously, $\mathbb D$ is contained in the maximal ideal space $M_{H^\infty(\mathbb D)}$ of the Banach algebra $H^\infty(\mathbb D)$ by means of identification of a $w\in \mathbb D$ with the multiplicative linear functional of evaluation, $f\to f(w)$ for all $f\in H^\infty(\mathbb D)$. It is usually difficult to find the maximal ideal space of a Banach algebra. Kakutani asked whether the {\em{corona}} $M_{H^\infty(\mathbb D)}\smallsetminus \overline{\mathbb D}$ (in the weak-star topology) is empty or in other words, whether $\mathbb D$ is dense in $M_{H^\infty(\mathbb D)}$ in the natural weak-star topology. Elementary functional analysis shows that Kakutani's question is equivalent to the following: Given $\varphi_1$, $\varphi_2$, \dots, $\varphi_N$ in $H^\infty(\mathbb D)$ satisfying \begin{eqnarray}\label{coronacondition} |\varphi_1(z)|^2+|\varphi_2(z)|^2+\cdots+|\varphi_N(z)|^2 \geq \delta^2 >0 \text{ for all $z \in \mathbb{D}$}, \end{eqnarray}for some $\delta >0$, is it true that there are functions $\psi_1,\psi_2,\dots,\psi_N$ in $H^\infty (\mathbb{D})$ such that \begin{eqnarray}\label{coronaconclusion} \psi_1\varphi_1+\psi_2\varphi_2+\cdots+\psi_N\varphi_N = 1? \end{eqnarray} It is easy to see that the converse implication is true, so that (\ref{coronacondition}) is a necessary condition for (\ref{coronaconclusion}). The sufficiency was proved, and hence Kakutani's question was answered affirmatively by Carleson \cite{Carleson}. This triggered a rather long list of research work on issues related to the corona theorem. 
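A toy instance of (\ref{coronacondition}) and (\ref{coronaconclusion}) (our own illustration; the functions are hypothetical choices, not from the text): $\varphi_1(z)=z$ and $\varphi_2(z)=1-z$ satisfy the corona condition with $\delta^2=1/2$, the infimum being attained at $z=1/2$, and the constant functions $\psi_1=\psi_2=1$ already solve the Bezout identity.

```python
import numpy as np

phi1 = lambda z: z
phi2 = lambda z: 1 - z

# Sample the open unit disk on a grid and estimate the corona lower bound.
x, y = np.meshgrid(np.linspace(-1, 1, 400), np.linspace(-1, 1, 400))
z = (x + 1j * y)[x**2 + y**2 < 1]

lower = (np.abs(phi1(z))**2 + np.abs(phi2(z))**2).min()
print(round(float(lower), 3))  # close to the exact infimum 1/2

# psi1 = psi2 = 1 gives psi1*phi1 + psi2*phi2 = 1 identically.
assert np.allclose(phi1(z) + phi2(z), 1)
```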
First, H\"{o}rmander \cite{Hormander} introduced a different approach based on an appropriate inhomogeneous $\bar{\partial}$-equation; see \cite{corona survey} and references therein for a beautiful discussion and various results in this direction. Then Wolff produced a simpler proof than Carleson's; see \cite{corona survey} for Wolff's solution. Coburn and Schechter in \cite{Coburn-Schechter} and Arveson in \cite{Arveson} came up with an operator inequality to replace (\ref{coronacondition}): \begin{eqnarray}\label{T-coronacondition} M_{\varphi_1}M_{\varphi_1}^*+M_{\varphi_2}M_{\varphi_2}^*+\cdots+M_{\varphi_N}M_{\varphi_N}^* \geq \delta^2 >0, \end{eqnarray} where the notation $M_\varphi$, for $\varphi \in H^\infty(\mathbb D)$, stands for the operator of multiplication by $\varphi$ on $H^2(\mathbb D)$, also called the Toeplitz operator with symbol $\varphi$. Coburn and Schechter were interested in interpolation (in other words, when an ideal in a Banach algebra contains the identity), whereas Arveson's motivation was to search for an operator theoretic proof of the corona theorem. Using the Szeg\H{o} kernel, it is elementary to see that (\ref{T-coronacondition}) implies (\ref{coronacondition}). Both papers mentioned above proved that (\ref{T-coronacondition}) implies (\ref{coronaconclusion}). Arveson achieved a bound: if $\|\varphi_i\|_\infty\leq 1$ for each $1\leq i\leq N$, then $\psi_1, \psi_2,\dots,\psi_N$ can be so chosen that $$ \|\psi_i\|_\infty\leq 4N\delta^{-3}. $$ Using the Corona Theorem, one can show that (\ref{coronaconclusion}) implies (\ref{T-coronacondition}). Thus, in the disk, all three statements are equivalent. In a general domain, the equivalence of (\ref{coronaconclusion}) and (\ref{T-coronacondition}) is called the Toeplitz Corona Theorem. Agler and McCarthy proved the Toeplitz corona theorem for the bidisk in \cite{ag-Mc NP}. Our second contribution in this note is the Toeplitz corona theorem for the symmetrized bidisk.
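To illustrate the claim above that (\ref{T-coronacondition}) implies (\ref{coronacondition}), here is the standard one-line computation; it uses only the eigenvector property of adjoint multipliers on the Szeg\H{o} kernel.

```latex
Let $k_w(z) = (1 - \bar w z)^{-1}$ be the Szeg\H{o} kernel of
$H^2(\mathbb D)$, so that $M_\varphi^* k_w = \overline{\varphi(w)}\, k_w$
for every $\varphi \in H^\infty(\mathbb D)$ and $w \in \mathbb D$.
Applying (\ref{T-coronacondition}) to the vector $k_w$,
$$
\sum_{i=1}^N |\varphi_i(w)|^2 \, \|k_w\|^2
 = \sum_{i=1}^N \| M_{\varphi_i}^* k_w \|^2
 = \Big\langle \sum_{i=1}^N M_{\varphi_i} M_{\varphi_i}^* \, k_w ,\; k_w
   \Big\rangle
 \geq \delta^2 \, \|k_w\|^2 ,
$$
and dividing by $\|k_w\|^2 = (1-|w|^2)^{-1} \neq 0$ gives
(\ref{coronacondition}) at the point $w$.
```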
Before we state the theorem, we need to make a few comments about $H^\infty(\mathbb G)$. Recall that a scalar-valued kernel $k$ on $\mathbb G \times \mathbb G$ is admissible if the pair $(M_s, M_p)$ of multiplication operators on $H_k$ forms a $\Gamma$--contraction. There is a characterization of $H^\infty(\mathbb G)$ through admissible kernels: a function $\varphi$ in $H^\infty(\mathbb G)$ has norm no larger than $1$ if and only if $M_\varphi$ is a contraction on $H_k$ for every admissible kernel $k$ on $\mathbb G \times \mathbb G$. In other words, a function $\varphi$ is in $H^\infty(\mathbb G)$ if and only if the operator $M_\varphi$ is a bounded operator on $H_k$ for all admissible kernels $k$. We refer the reader to Lemma 3.1 of \cite{BS} for the proof. Buoyed by this fact, we may ask whether the admissible kernels can be replaced by measures on the distinguished boundary $b\Gamma$ of the symmetrized bidisk. This is the Shilov boundary with respect to the algebra of functions continuous on $\Gamma$ and holomorphic on $\mathbb G$. It turns out that $$b\Gamma = \{ (z_1 + z_2, z_1z_2): |z_1| = |z_2| = 1\},$$ see Theorem 1.3 in \cite{ay-jot}. For a regular Borel measure $\mu$ on $b\Gamma$ (respectively on $\mathbb{T}^2$), let $H^2(b\Gamma,\mu)$ (respectively $H^2(\mathbb{T}^2,\mu)$) denote the closure of all polynomials in $L^2(b\Gamma,\mu)$ (respectively $L^2(\mathbb{T}^2,\mu)$). For a function $\varphi \in H^\infty(\mathbb{G})$, we consider its radial limit, which exists almost everywhere with respect to the Lebesgue measure on $b\Gamma$, and denote it by $\varphi$ itself. However, $M_\varphi$ need not be defined as a bounded operator on $H^2(b\Gamma,\mu)$. To remedy this situation, consider the following scaling of the function $\varphi$: $$ \varphi_r(s,p):=\varphi(rs,r^2p)\text{ for all } (s,p)\in \mathbb{G} \text{ and }0\leq r<1. $$ Now, $M_{\varphi_r}$ is a bounded operator on $H^2(b\Gamma,\mu)$.
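The scaling is harmless because, in symmetrized coordinates, it is just the radial contraction of the bidisk; the following elementary verification (included as an aside) shows why $M_{\varphi_r}$ is bounded.

```latex
If $(s,p) \in \mathbb G$, write $(s,p) = (z_1 + z_2,\, z_1 z_2)$ with
$z_1, z_2 \in \mathbb D$. Then
$$
(rs,\, r^2 p) = \big( r z_1 + r z_2,\ (r z_1)(r z_2) \big)
$$
is the symmetrization of $(r z_1, r z_2) \in \mathbb D^2$, so
$(rs, r^2 p) \in \mathbb G$. The same computation for $(s,p) \in \Gamma$
shows that $\varphi_r$ is holomorphic on a neighborhood of $\Gamma$; in
particular $\varphi_r$ is bounded on $\Gamma$ and (since $\Gamma$ is
polynomially convex) is a uniform limit of polynomials there, which is
why $M_{\varphi_r}$ acts boundedly on $H^2(b\Gamma,\mu)$.
```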
For two Hilbert spaces $\mathcal L_1$ and $\mathcal L_2$, let $H^\infty(\mathbb G, \mathcal B (\mathcal L_1, \mathcal L_2))$ denote the Banach space of $\mathcal B (\mathcal L_1, \mathcal L_2)$-valued bounded analytic functions on $\mathbb G$. Given an admissible kernel $k$ on $\mathbb G$ and $\Phi \in H^\infty(\mathbb G, \mathcal B (\mathcal L_1, \mathcal L_2))$, it is natural to consider the multiplication operator $M_\Phi$ from the Hilbert space $H_k(\mathbb G) \otimes \mathcal L_1$ (identified as a Hilbert space of $\mathcal L_1$-valued functions) into $H_k(\mathbb G) \otimes \mathcal L_2$. Similarly, $M_\Phi^\mu$ will denote the multiplication operator from $H^2(b\Gamma,\mu) \otimes \mathcal L_1$ into $H^2(b\Gamma,\mu) \otimes \mathcal L_2$. Finally, given a domain $\Omega$ and two functions $k_1,k_2:\Omega\times \Omega \to B(\mathcal L)$, following Agler and McCarthy \cite{ag-Mc}, we use the notation $k_1 \oslash k_2$ for the $B(\mathcal L \otimes \mathcal L)$-valued function on $\Omega\times \Omega $ defined by $$ (k_1 \oslash k_2)(z,w)=k_1(z,w) \otimes k_2(z,w) $$ for all $z$, $w$ in $\Omega$. \vspace*{5mm} \begin{theorem}[Toeplitz Corona Theorem on the Symmetrized Bidisk]\label{TC-G} Let $\Phi$ be a function in $H^\infty(\mathbb G, \mathcal B (\mathcal L_1, \mathcal L_2))$ and let $\delta$ be a positive number. The following statements are equivalent. \begin{enumerate} \item[(1)] There is a $\Psi \in H^\infty(\mathbb G, \mathcal B(\mathcal L_2, \mathcal L_1))$ of norm no larger than $1/\delta$ such that \begin{align}\label{LeftInverse} \Phi (s,p) \Psi (s,p) = I_{\mathcal L_2} \end{align} for all $(s,p) \in \mathbb G$. \item[(2)] For every regular Borel measure $\mu$ on $b\Gamma$, the operator $$ M_{\Phi_r}^\mu (M_{\Phi_r}^\mu)^* - \delta I_{H^2(b\Gamma,\mu) \otimes \mathcal L_2}$$ is positive for every $0< r < 1$.
\item[(3)] For any $\mathcal B(\mathcal L_2)$-valued admissible kernel $k$ on $\mathbb G$, the function $$(\Phi (s,p) \Phi(t,q)^* - \delta I_{\mathcal L_2})\oslash k((s,p),(t,q)) $$ is positive semi-definite. \end{enumerate} \end{theorem} This theorem will be proved in \S 5. At this stage, one may wonder whether the Toeplitz corona theorem on the symmetrized bidisk follows from that on the bidisk. We explain below why it does not. Let us start with a $\Phi$ satisfying the condition (1) in the theorem above. In bidisk coordinates, the equation (\ref{LeftInverse}) can be written as $$ \Phi(\gamma(z_1,z_2))\Psi(\gamma(z_1,z_2))=I_{\mathcal L_2}, $$ where $\gamma: \mathbb D^2 \rightarrow \mathbb G$ is the symmetrization map $\gamma(z_1,z_2)=(z_1+z_2,z_1z_2)$. A straightforward application of Theorem 11.65 in \cite{ag-Mc} implies the following two facts: \begin{enumerate} \item[(2$'$)] For every $\mathcal{B}(\mathcal{L}_2)$-valued admissible kernel $k$ on $\mathbb D^2$, the function $$ \big((\Phi \circ \gamma(z_1,z_2))({\Phi \circ \gamma(w_1,w_2)})^* - \delta I_{\mathcal L_2}\big) \oslash k((z_1,z_2),(w_1,w_2))$$ on $\mathbb{D}^2\times\mathbb{D}^2$ is positive semi-definite; or equivalently, \item[(3$'$)] For every regular Borel measure $\mu$ on $\mathbb T^2$ and $0< r < 1$, the operator $$ M^\mu_{\Phi \circ \gamma_r} (M^\mu_{\Phi \circ \gamma_r})^* - \delta I_{H^2(\mathbb T^2,\mu) \otimes \mathcal L_2}$$ is positive, where $\gamma_r:\mathbb D^2\to \mathbb G$ is the map $$\gamma_r(z_1,z_2)=(rz_1+rz_2,r^2z_1z_2).$$ \end{enumerate} Moreover, (2$'$) and (3$'$) are equivalent. The challenge is to produce a $\Psi$ as in condition (1) when (2$'$) and (3$'$) hold. What one can get is a function $\Psi$ in $H^\infty(\mathbb D^2, \mathcal B(\mathcal L_2, \mathcal L_1))$ such that \begin{align}\label{Con-Nesconds} \Phi(\gamma(z_1,z_2))\Psi(z_1,z_2)=I_{\mathcal L_2}. \end{align} This is due to Theorem 11.65 in \cite{ag-Mc} again.
However, $\Psi$ need not be symmetric, i.e., $\Psi(z_1,z_2)$ need not be the same operator as $\Psi(z_2,z_1)$, and hence, in general, it does not give rise to a function on the symmetrized bidisk. One case when the necessary conditions (2$'$) and (3$'$) are sufficient as well is when $\Phi(s,p)$ is a one-to-one operator for every $(s,p)$ in $\mathbb G$. We leave it to the reader to check that $\Psi$ turns out to be symmetric in this case. This shows the need to prove the Toeplitz corona theorem for the symmetrized bidisk separately, although the theorem is well established in the bidisk. In fact, it has been observed many times that results in the bidisk do not imply results in the symmetrized bidisk; naive attempts to deduce results for the symmetrized bidisk from the corresponding results for the bidisk run into difficulty. The proof that rational dilation succeeds on the symmetrized bidisk (see \cite{ay-Edinburgh}) is an example: it needed a substantial amount of effort and tools, while the rational dilation theorem was known to hold in the bidisk due to And\^o \cite{ando}. This is the case for the two theorems stated above too. There are at least two reasons why this happens: the admissible kernels on the symmetrized bidisk bear no relation to the admissible kernels on the bidisk, and there are uncountably many parametrized coordinate functions (to be defined in the next section) on the symmetrized bidisk as opposed to only two on the bidisk. We conclude this section by noting the results of Amar in \cite{Amar}, where he proved a Toeplitz Corona theorem for a bounded convex domain in $\mathbb C^n$ in terms of measures on the boundary. Results for the symmetrized bidisk do not follow from Amar's results because the symmetrized bidisk is not convex. \section{Background on the Realization Theorem} One of the most important results in the area of holomorphic functions and in the theory of Hilbert space operators is the realization formula.
A function $f$ is in $H^\infty(\mathbb D)$ and satisfies $\| f \|_\infty \le 1$ if and only if there is a Hilbert space $\mathcal H$ and a unitary operator $$U = \dispmatrix A & B \\ C & D \\ : \mathbb C \oplus \mathcal H \rightarrow \mathbb C \oplus \mathcal H$$ such that $$ f(z) = A + zB (I - zD)^{-1} C.$$ Agler generalized this elegantly to the bidisk in \cite{ag}. He showed that a function $f$ is in $H^\infty(\mathbb D^2)$ and satisfies $\| f \|_\infty \le 1$ if and only if there is a graded Hilbert space $\mathcal H = \mathcal H_1 \oplus \mathcal H_2$ and a unitary operator $$U = \dispmatrix A & B \\ C & D \\ : \mathbb C \oplus \mathcal H \rightarrow \mathbb C \oplus \mathcal H$$ such that writing $P_i$ for the projection from $\mathcal H$ onto $\mathcal H_i$ for $i=1,2$, we have $$ f(z) = A + B (z_1 P_1 + z_2 P_2) (I - D (z_1 P_1 + z_2 P_2) )^{-1} C.$$ The importance of realization formulae lies in their applications to several interesting areas of research, including Pick interpolation, Beurling type submodules and the corona problem; see \cite{ag-Mc NP}, \cite{BL}, \cite{BK} and \cite{DM} and the book \cite{ag-Mc}. The Realization Theorem for the symmetrized bidisk was proved in \cite{AYRealization} and \cite{BS} for scalar-valued functions. Here we need a version of it for operator-valued functions, which we shall state and {\em not} prove: all the crucial concepts of the proof are present in the scalar case, and the same proof, with the necessary modifications, continues to hold when the functions are operator-valued. We shall need one more level of generalization of the concept of kernels than what has already been explained.
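As a sanity check on the one-variable formula (this example is ours, not taken from the references), one can realize a single M\"obius factor with a $2\times 2$ unitary.

```latex
Take $f(z) = \dfrac{z-a}{1-\bar a z}$ with $a \in \mathbb D$, let
$\mathcal H = \mathbb C$, and put
$$
U = \begin{pmatrix} A & B \\ C & D \end{pmatrix}
  = \begin{pmatrix} -a & \sqrt{1-|a|^2} \\ \sqrt{1-|a|^2} & \bar a
    \end{pmatrix},
$$
which is unitary because its columns are orthonormal. Then
$$
A + z B (I - z D)^{-1} C
 = -a + \frac{z\,(1-|a|^2)}{1 - \bar a z}
 = \frac{-a(1-\bar a z) + z(1-|a|^2)}{1-\bar a z}
 = \frac{z-a}{1-\bar a z} = f(z).
$$
```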
For two $C^*$-algebras $\mathcal A$ and $\mathcal C$, a function $\Delta: \mathbb G\times \mathbb G\to \mathcal B(\mathcal A,\mathcal C)$ is called a completely positive function if $$ \sum_{i,j=1}^Nc_i^*\Delta\big( \lambda_i, \lambda_j \big)(a_i^*a_j)c_j $$ is a non-negative element of $\mathcal C$ for any positive integer $N$, any $N$ points $\lambda_1, \lambda_2,\dots, \lambda_N$ of $\mathbb G$, any $N$ elements $a_1,a_2,\dots,a_N$ from $\mathcal A$ and any $N$ elements $c_1,c_2,\dots,c_N$ from $\mathcal C$. We give an example of such a completely positive function. Let $\delta:\overline{\mathbb D}\times \mathbb G\times \mathbb G\to \mathcal B(\mathcal L)$ be a function such that for each $\alpha \in \overline{\mathbb D}$, the function $\delta(\alpha, \cdot, \cdot)$ is a positive semi-definite function on $\mathbb G$ and for every fixed $(s,p)$ and $(t,q)$ in $\mathbb G$, the function $\delta(\cdot, (s,p), (t,q))$ is a Borel measurable function on $\overline{\mathbb D}$. Given a positive regular Borel measure $\mu$ on $\overline{\mathbb D}$, the function $\Delta^\delta_\mu:\mathbb G \times \mathbb G \to \mathcal B(C(\overline{\mathbb D}), \mathcal B(\mathcal L))$ defined by \begin{equation} \label{ExampleofDelta} \Delta^\delta_\mu\left( (s,p), (t,q) \right) (h) = \int_{\overline{\mathbb D}} h(\cdot) \delta(\cdot , (s,p), (t,q) ) d\mu \end{equation} is a completely positive function on $\mathbb G$. More details on these functions can be found in \cite{BBLS}. When we use the word kernel or the phrase weak kernel, holomorphy in the first component and anti-holomorphy in the second component are built in, whereas when we use the word function, as in a completely positive function, no such holomorphy is assumed. For $\alpha \in \overline{\mathbb D}$ and $(s,p) \in \mathbb G$, let \begin{eqnarray}\label{testfunction} \varphi(\alpha, s , p) = \frac{2 \alpha p - s}{2 - \alpha s}.
\end{eqnarray} Since $|s| < 2$ for all $(s,p) \in \mathbb G$ and $\alpha \in \overline{\mathbb D}$, this function is well-defined on $\overline{\mathbb D} \times \mathbb G$. Agler and Young proved (Theorem 2.1, \cite{ay-geometry}) that \begin{equation} \label{co-ordinates} (s,p) \in \mathbb G \mbox{ if and only if } \varphi(\alpha, s, p) \in \overline{\mathbb D} \end{equation} for all $\alpha$ in the closed unit disk. For this reason, we call the family $\{\varphi(\alpha, \cdot): \alpha \in \overline{\mathbb D}\}$ the {\em parametrized coordinate functions} for the symmetrized bidisk. We note that for every $\alpha \in \overline{\mathbb D}$, the function $\varphi(\alpha , \cdot)$ is in the norm unit ball of $H^\infty(\mathbb G)$, and for every $(s,p) \in \mathbb G$, the function $\varphi( \cdot , s, p)$ is in $C(\overline{\mathbb D})$. The following lemma gives an equivalent formulation of admissibility of a kernel $k$ on $\mathbb{G}$ in terms of the coordinate functions. See Lemma 3.2 of \cite{BS} for a proof. \begin{lem} \label{admi} A $\mathcal{B}(\mathcal{L})$-valued kernel $k$ on $\mathbb G$ is admissible if and only if the following $\mathcal{B}(\mathcal{L})$-valued function $$ (1 - \varphi(\alpha, s, p) \overline{\varphi(\alpha, t, q)}) k\big( (s,p) , (t,q) \big)$$ on $\mathbb{G}\times\mathbb{G}$ is positive semi-definite for every $\alpha \in \overline{\mathbb D}$. \end{lem} \noindent \textbf{Realization Theorem for Operator-Valued Functions.} {\em Let $\mathcal L_1$ and $\mathcal L_2$ be two Hilbert spaces, $Y$ be any subset of $\mathbb G$ and $f:Y \to \mathcal{B}(\mathcal L_1, \mathcal L_2)$ be any function. Then the following statements are equivalent.
\begin{description} \item[(H)] There exists a function $F$ in $H^\infty(\mathbb G,\mathcal{B}(\mathcal L_1, \mathcal L_2))$ with $\| F \|_\infty \le 1$ and $F|_Y=f$; \item[(M)] The function $$((s,p),(t,q))\mapsto (I_{\mathcal L_2} - f(s,p)f(t,q)^*) \oslash k( (s,p) , (t,q)) $$ is a weak kernel for every $\mathcal B(\mathcal L_2)$-valued admissible kernel $k$ on $Y$; \item[(D)] There is a completely positive function $\Delta: Y \times Y \to \mathcal B\big(C(\overline{\mathbb D}), \mathcal B(\mathcal L_2)\big)$ such that for every $(s,p)$ and $(t,q)$ in $Y$, $$ I_{\mathcal L_2} - f(s,p)f(t,q)^* = \Delta ( (s,p), (t,q)) \big( 1 - \varphi(\cdot, s, p) \overline{\varphi(\cdot , t, q)} \big) ;$$ \item[(R)] There is a Hilbert space $\mathcal H$, a unital $*$-representation $\pi : C(\overline{\mathbb D}) \rightarrow B(\mathcal H)$ and a unitary $V : \mathcal L_1 \oplus \mathcal H \rightarrow \mathcal L_2 \oplus \mathcal H$ such that writing $V$ as $$ V = \left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right)$$ we have $f(s,p) = A + B \pi(\varphi(\cdot, s, p)) \big( I_\mathcal H - D \pi(\varphi(\cdot, s, p)) \big)^{-1} C$, for every $(s,p)\in Y$. \end{description}} \section{Interpolating sequences -- Proof of Theorem \ref{Interpolate}} The celebrated Pick interpolation theorem, now studied for a hundred years, characterizes those data $\lambda_1$, $\lambda_2$,$\dots$, $\lambda_N$ in $\mathbb D$ and $w_1,w_2,\dots,w_N$ in $\overline{\mathbb D}$ for which there is a function $f\in H^\infty(\mathbb D)$ interpolating the data: there is an $f\in H^\infty(\mathbb D)$ with $\|f\|_\infty\leq 1$ and $f(\lambda_i)=w_i$, $i=1,2,\dots,N$ if and only if $$ \left(\left( \frac{1-w_i\overline{w_j}}{1-\lambda_i\overline{\lambda_j}} \right)\right)_{i,j=1}^N $$is a positive semi-definite matrix. For detailed discussions on Pick interpolation in various contexts, see \cite{BBFt} and \cite{Bt}. In \cite{BS}, we proved the Interpolation Theorem for the symmetrized bidisk. 
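Before passing to the operator-valued statement, we recall (as an aside, with our own computation) how the two-point case of the classical Pick theorem reduces to the Schwarz--Pick inequality.

```latex
For $N = 2$ and $|w_1|, |w_2| < 1$, the diagonal entries of the Pick
matrix are positive, and its determinant is non-negative if and only if
$$
\frac{(1-|w_1|^2)(1-|w_2|^2)}{|1-w_1\overline{w_2}|^2}
 \;\geq\;
\frac{(1-|\lambda_1|^2)(1-|\lambda_2|^2)}{|1-\lambda_1\overline{\lambda_2}|^2}.
$$
By the identity $1 - d(z_1,z_2)^2 =
\dfrac{(1-|z_1|^2)(1-|z_2|^2)}{|1-\overline{z_2} z_1|^2}$ for the
pseudo-hyperbolic distance
$d(z_1,z_2) = \left| \dfrac{z_1-z_2}{1-\overline{z_2} z_1} \right|$,
positivity of the Pick matrix is exactly the Schwarz--Pick inequality
$d(w_1,w_2) \leq d(\lambda_1,\lambda_2)$.
```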
The version of the Interpolation Theorem for operator-valued functions is as follows. We again omit the proof because it is similar to the proof of the scalar version in \cite{BS}. \vspace*{5mm} \noindent \textbf{Interpolation Theorem for Operator-Valued Functions.} {\em Let $\mathcal{L}_1$ and $\mathcal{L}_2$ be Hilbert spaces and $W_1,W_2,\dots,W_N \in \mathcal{B}(\mathcal{L}_1,\mathcal{L}_2)$. Let $(s_1,p_1),(s_2,p_2),\dots,(s_N,p_N)$ be $N$ distinct points in $\mathbb{G}$. Then there exists a function $f$ in the closed unit ball of $H^\infty\big(\mathbb{G},\mathcal{B}(\mathcal{L}_1,\mathcal{L}_2)\big)$ interpolating each $(s_i,p_i)$ to $W_i$ if and only if \begin{eqnarray}\label{pickmatrix} \big((I_{\mathcal{L}_2}-W_iW_j^*) \otimes k\big((s_i,p_i),(s_j,p_j)\big)\big)_{i,j=1}^N \end{eqnarray} is a positive operator on $\oplus_{i=1}^N \mathcal L_2$, for every $\mathcal{B}(\mathcal{L}_2)$-valued admissible kernel $k$ on $\mathbb{G}$.} The Interpolation Theorem mentioned above was stated in Subsection 1.2, page 508 of \cite{BS} for scalar-valued functions. The following lemma is a straightforward consequence of that and hence we leave the proof to the reader. It deals with an infinite data set. \begin{lem}\label{infinitePick} Let $\{\lambda_j=(s_j,p_j): j \geq 1\}$ be a sequence of points in $\mathbb{G}$ and $w=\{w_j: j \geq 1\}$ be a bounded sequence of complex numbers. Then there exists a function $f_w$ in $H^\infty(\mathbb{G})$ with $\|f_w\|_\infty \leq C_w$ such that $f_w(\lambda_j)=w_j$ for each $j\geq 1$ if and only if $$ (i,j) \mapsto (C_w^2 - w_i \overline{w}_j) k(\lambda_i , \lambda_j) $$ is a positive semi-definite function on $\mathbb N \times \mathbb N$ for every admissible kernel $k$ on $\mathbb{G}$.
\end{lem} For an interpolating sequence $\Upsilon = \{(s_j,p_j): j \geq 1\}$ in $\mathbb{G}$, the following constant is called the {\em constant of interpolation}: $$ \sup_{\|(w_j)\|_\infty \leq 1}\inf\{\|f\|_\infty: f \in H^\infty(\mathbb{G}), f(s_j,p_j)=w_j, j=1,2,3, \dots\}. $$ This constant depends on $\Upsilon$, and we show below that it is finite for any interpolating sequence $\Upsilon$. To that end, define a linear operator $L_\Upsilon:H^\infty(\mathbb{G}) \to l^\infty$ by $$ f \mapsto (f(s_1,p_1),f(s_2,p_2),f(s_3,p_3),\dots). $$ Clearly, $L_\Upsilon$ is a contraction. Recall that the definition of an interpolating sequence (given in Subsection \ref{IS}) precisely means that $L_\Upsilon$ is onto. Let $\mathcal{N}$ be the null space of $L_\Upsilon$. Then the natural map $\tilde{L}_\Upsilon:H^\infty(\mathbb{G})/\mathcal{N} \to l^\infty$ is a bounded bijection and hence, by the open mapping theorem, an isomorphism. Let $R_\Upsilon$ be the inverse of $\tilde{L}_\Upsilon$ and $w=\{w_j: j \geq 1\}$ be a sequence in $l^\infty$. Then $R_\Upsilon(w)=f_w + \mathcal{N}$, where $f_w \in H^\infty(\mathbb{G})$ is such that $f_w(s_j,p_j)=w_j$ for each $j\geq 1$. We claim that $\|R_\Upsilon\|$ is the constant of interpolation for $\Upsilon$. Indeed, \begin{eqnarray*} \|R_\Upsilon\| = \sup_{\|(w_j)\|_\infty \leq 1}\|f_w+\mathcal{N}\| &=& \sup_{\|(w_j)\|_\infty \leq 1} \inf\{\|f_w +g\|_\infty: g \in \mathcal{N}\} \\ &=&\sup_{\|(w_j)\|_\infty \leq 1} \inf\{\|f\|_\infty: f(s_j,p_j)=w_j, j=1,2,3, \dots\}. \end{eqnarray*} The next lemma gives a decomposition of a completely positive function. The proof is along the lines of Proposition 3.3 in \cite{DM} and hence we omit it. \begin{lem}\label{crucial-lemma} Let $Y$ be a subset of $\mathbb G$.
If $\Delta:Y \times Y \to \mathcal B\big(C(\overline{\mathbb D}), \mathcal B(\mathcal L)\big)$ is a completely positive function, then there is a Hilbert space $\mathcal H$ and a function $L:Y \to B\big(C(\overline{\mathbb D}), \mathcal B(\mathcal H,\mathcal L)\big)$ such that for every $h_1$, $h_2 \in C(\overline{\mathbb D})$ and $(s,p)$, $(t,q)\in Y$, \begin{eqnarray}\label{kernel-decomposition} \Delta\big((s,p), (t,q)\big)(h_1\overline{h_2})=L(s,p)[h_1] (L(t,q)[h_2])^*. \end{eqnarray} Moreover, there is a unital $*$-representation $\pi:C(\overline{\mathbb D})\to \mathcal B(\mathcal H)$ such that for every $h_1$, $h_2 \in C(\overline{\mathbb D})$ and $(s,p) \in Y$, \begin{eqnarray}\label{star-represt.} L(s,p)(h_1h_2) = \pi(h_1)L(s,p)h_2. \end{eqnarray} \end{lem} We are now ready for the proof of Theorem \ref{Interpolate}. \textbf{Proof of Theorem \ref{Interpolate}} $(i)\Rightarrow (iv)$: Let $\{(s_j,p_j): j \geq 1\}$ be an interpolating sequence for $H^\infty(\mathbb{G})$ with constant of interpolation $M$. This means that for every $w=(w_j)$ with $\|w\|_\infty \leq 1$, there exists a function $f$ in $H^\infty(\mathbb{G})$ such that $f(s_j,p_j)=w_j$ and $\|f\|_\infty\leq M$. Therefore, by Lemma \ref{infinitePick}, we have \begin{eqnarray}\label{Pickequivalent} \sum_{i,j=1}^nc_i\overline{c_j}w_i\overline{w_j}k((s_i,p_i),(s_j,p_j))\leq M^2\sum_{i,j=1}^nc_i\overline{c_j}k((s_i,p_i),(s_j,p_j)), \end{eqnarray} for any $n \in \mathbb N$ and any complex numbers $c_1, c_2, \ldots ,c_n$. Now the proof depends on choosing $w_j$ and $c_j$ appropriately. Choose $w_j=\exp(i \theta_j)$ and let $(c_j)$ be any sequence in $l^2$.
Then we have $$ \sum_{i,j=1}^nc_i\overline{c_j}\exp( i (\theta_i-\theta_j))k((s_i,p_i),(s_j,p_j))\leq M^2\sum_{i,j=1}^nc_i\overline{c_j}k((s_i,p_i),(s_j,p_j)), $$ which, after integrating with respect to $\theta_1,\theta_2,\dots ,\theta_n$ over $[0,2\pi]\times[0,2\pi]\times \cdots \times [0,2\pi]$ and dividing both sides by $(2\pi)^n$, becomes $$ \sum_{j=1}^n|c_j|^2k((s_j,p_j),(s_j,p_j))\leq M^2\sum_{i,j=1}^nc_i\overline{c_j}k((s_i,p_i),(s_j,p_j)), $$ which, after replacing $c_j$ by $c_j':=c_j/\sqrt{k((s_j,p_j),(s_j,p_j))}$, becomes $$ \sum_{j=1}^n|c_j|^2\leq M^2\sum_{i,j=1}^nc_i\overline{c_j}G_k(i,j). $$ Similarly, choosing $w_j=\exp(i \theta_j)$ and $c_j'':=\exp(-i \theta_j)c_j/\sqrt{k((s_j,p_j),(s_j,p_j))}$ in (\ref{Pickequivalent}) and integrating as above, we get $$ \sum_{i,j=1}^nc_i\overline{c_j}G_k(i,j) \leq M^2 \sum_{j=1}^n|c_j|^2. $$ Consequently, whenever we have an interpolating sequence $\{(s_j,p_j): j \geq 1\}$ with $M$ as its constant of interpolation, we have for every admissible kernel $k$ \begin{eqnarray}\label{Grammcond} \frac{1}{M^2}\sum_{j=1}^n|c_j|^2\leq \sum_{i,j=1}^nc_i\overline{c_j}G_k(i,j)\leq M^2 \sum_{j=1}^n|c_j|^2, \end{eqnarray} where $G_k(i,j)$ is the $(i,j)$-th entry of the Grammian matrix associated to $k$ and the interpolating sequence. Since this is true for every $n \ge 1$, we have shown that the constants $M$ and $N$ in (\ref{grammabove}) and (\ref{grammbelow}) can be chosen to be the square of the constant of interpolation. $(iv)\Rightarrow (i)$: Conversely, suppose (\ref{grammabove}) and (\ref{grammbelow}) hold for some constants $M$ and $N$. Without loss of generality, we can assume that $M$ and $N$ are the same. Therefore for every admissible kernel $k$ and $(c_j)$ in $l^2$, we have \begin{eqnarray}\label{theconverse} \frac{1}{M}\sum_{j}|c_j|^2\leq \sum_{i,j}c_i\overline{c_j}G_k(i,j)\leq M \sum_{j}|c_j|^2.
\end{eqnarray} To prove that $\{(s_j,p_j): j \geq 1\}$ is an interpolating sequence, given any bounded sequence $w=\{w_1, w_2, \ldots\}$, we need to find a constant $C_w$ such that for every admissible kernel $k$, \begin{eqnarray}\label{auxilaryzero} \left( \left( \; \; (C_w^2 - w_i \overline{w}_j) k(\lambda_i , \lambda_j) \; \; \right) \right) \geq 0, \end{eqnarray} which, by Lemma \ref{infinitePick}, will prove our assertion. For any integer $n \ge 1$, choosing $$\widetilde{c_j} = \left\{ \begin{array}{c} c_jw_j\sqrt{k((s_j,p_j),(s_j,p_j))} \text{ if } 1 \le j \le n,\\ 0 \text{ if } j > n \end{array} \right. $$ in the second inequality of (\ref{theconverse}), we get \begin{equation}\label{auxilaryone} \sum_{i,j=1}^nc_i\overline{c_j}w_i\overline{w_j}k((s_i,p_i),(s_j,p_j))\leq M \big(\sup_{i\ge 1} |w_i|\big)^2 \sum_{j=1}^n|c_j|^2k((s_j,p_j),(s_j,p_j)). \end{equation} Choosing $$\widetilde{c_j}' = \left\{ \begin{array}{c} c_j\sqrt{k((s_j,p_j),(s_j,p_j))} \text{ if } 1 \le j \le n,\\ 0 \text{ if } j > n \end{array} \right. $$ in the first inequality of (\ref{theconverse}), we get \begin{eqnarray}\label{auxilarytwo} \sum_{j=1}^n|c_j|^2k((s_j,p_j),(s_j,p_j))\leq M\sum_{i,j=1}^nc_i\overline{c_j}k((s_i,p_i),(s_j,p_j)). \end{eqnarray} Combining (\ref{auxilaryone}) and (\ref{auxilarytwo}) we get $$ \sum_{i,j=1}^nc_i\overline{c_j}w_i\overline{w_j}k((s_i,p_i),(s_j,p_j))\leq M^2 \big(\sup_{i\ge 1} |w_i|\big)^2 \sum_{i,j=1}^nc_i\overline{c_j}k((s_i,p_i),(s_j,p_j)). $$ Now, for any $l^2$ sequence $(c_j)$, we have the inequality above for any $n\ge 1$. Putting $C_w = M \big(\sup_{i\ge 1} |w_i|\big)$, the required inequality (\ref{auxilaryzero}) follows. We now prove that $(ii)$ is equivalent to $(iii)$. First observe that condition (\ref{grammbelow}) is equivalent to the following: \begin{eqnarray}\label{schurproduct} (N-\delta_{ij})\cdot k((s_i,p_i),(s_j,p_j)) \geq 0, \end{eqnarray}for every admissible kernel $k$ on $\mathbb{G}$. Let $Y=\{(s_j,p_j): j\geq 1\}$. 
By the scalar-valued version of the Realization Theorem described in Section 2 (i.e., the way it is stated in Subsection 1.3, page 510 of \cite{BS}), there is a completely positive function $\Delta: Y \times Y \to C(\overline{\mathbb D})^*$ such that for every $i,j\geq 1$, $$ N-\delta_{ij}=\Delta((s_i,p_i),(s_j,p_j))\big(1-\varphi(\cdot,s_i,p_i)\overline{\varphi(\cdot,s_j,p_j)}\big). $$ Let $\{e_j:j\geq1\}$ be the canonical orthonormal basis of $l^2$. Rewriting the above, we get $$ N + \Delta ( (s_i,p_i),(s_j,p_j)) \big( \varphi(\cdot, s_i, p_i) \overline{\varphi(\cdot , s_j, p_j)} \big) = \langle e_i,e_j \rangle + \Delta ((s_i,p_i),(s_j,p_j)) \big( 1 \big).$$ By Lemma \ref{crucial-lemma}, there is a Hilbert space $\mathcal H$ and a function $L : Y \rightarrow B(C(\overline{\mathbb D}) ,\mathcal H)$ such that $$\Delta \left((s_i,p_i),(s_j,p_j) \right) (h_1 \overline{h_2}) = \langle L(s_i,p_i) h_1, L(s_j,p_j)h_2 \rangle_{\mathcal H}$$ for every $h_1, h_2$ in $C(\overline{\mathbb D})$. Hence $$ N + \langle L (s_i,p_i) \varphi(\cdot , s_i, p_i), L (s_j,p_j) \varphi(\cdot , s_j, p_j) \rangle = \langle e_i,e_j \rangle + \langle L (s_i,p_i) 1 , L (s_j,p_j) 1 \rangle.$$ By equation (\ref{star-represt.}), this is the same as $$ N + \langle \pi \varphi(\cdot , s_i, p_i) L (s_i,p_i) 1, \pi \varphi(\cdot , s_j, p_j) L (s_j,p_j) 1 \rangle = \langle e_i,e_j \rangle + \langle L (s_i,p_i) 1 , L (s_j,p_j) 1 \rangle.$$ Now we can define an isometry $V$ from the span of $$\{\sqrt{N} \oplus \pi \varphi(\cdot , s_j, p_j) L (s_j,p_j) 1 : j\geq1\}\subseteq \mathbb C\oplus \mathcal H$$ into the span of $\{e_j \oplus L (s_j,p_j) 1: j\geq1\}\subseteq l^2\oplus \mathcal H$ such that for each $ j\geq1$ $$ V \left( \begin{array}{c} \sqrt N \\ \pi \varphi(\cdot , s_j, p_j) L (s_j,p_j) 1 \end{array} \right) = \left( \begin{array}{c} e_j \\ L (s_j,p_j) 1 \end{array} \right)$$ and then extending by linearity.
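The last displayed identity says precisely that the two families of vectors have the same Grammian; this is why $V$ is well defined and isometric (a routine verification, recorded here for completeness).

```latex
For finitely many scalars $c_j$,
$$
\Big\| \sum_j c_j \big( \sqrt N \oplus
  \pi \varphi(\cdot, s_j, p_j) L(s_j,p_j) 1 \big) \Big\|^2
 = \sum_{i,j} c_i \overline{c_j}
   \big( \langle e_i, e_j \rangle
   + \langle L(s_i,p_i) 1,\, L(s_j,p_j) 1 \rangle \big)
 = \Big\| \sum_j c_j \big( e_j \oplus L(s_j,p_j) 1 \big) \Big\|^2 ,
$$
so $V$ preserves norms of finite linear combinations and extends to an
isometry between the two closed spans.
```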
By a standard technique of adding an infinite dimensional Hilbert space to $\mathcal H$, if required, we can extend $V$ to a unitary from $\mathbb C \oplus \mathcal H$ onto $l^2 \oplus \mathcal H$. Now, write $V$ as \begin{eqnarray}\label{actionofV} \left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right) \end{eqnarray} and define $F:\mathbb G\to \mathcal B(\mathbb C,l^2)$ as $$ F(s,p) = A + B \pi \varphi(\cdot , s, p) (I_{\mathcal H} - D \pi \varphi(\cdot , s, p) )^{-1} C . $$ By the Realization Theorem, $F$ is a bounded analytic function and $\|F\|_\infty\leq 1$. By (\ref{actionofV}), we have \begin{eqnarray} A \sqrt{N} + B \pi \varphi(\cdot , s_j, p_j) L (s_j, p_j) 1 & = & e_j \mbox{ and} \label{formulaforphi} \\ C \sqrt{N} + D \pi \varphi(\cdot , s_j, p_j) L (s_j, p_j) 1 & = & L(s_j, p_j) 1. \nonumber \end{eqnarray} Eliminating $L(s_j, p_j) 1$, we get that the function $\Phi:\mathbb G\to \mathcal B(\mathbb C,l^2)$ defined by $\Phi=\sqrt N F$ has the property $\Phi(s_j,p_j)=e_j$ for each $j \geq 1$. Therefore we have proved the following lemma. \begin{lem}\label{boundedbelow} Condition (\ref{grammbelow}) is equivalent to the existence of a function $\Phi$ in $H^\infty(\mathbb{G},\mathcal{B}(\mathbb{C}, l^2))$ of norm at most $\sqrt{N}$ such that $\Phi(s_j,p_j)=e_j$ for each $j \geq 1$. \end{lem} Also note that condition (\ref{grammabove}) is equivalent to $$ (M\delta_{ij}-1)k((s_i,p_i),(s_j,p_j)) \geq 0. $$ Proceeding as above, one gets the following result. \begin{lem}\label{boundedabove} Condition (\ref{grammabove}) is equivalent to the existence of a function $\Psi$ in $H^\infty(\mathbb{G},\mathcal{B}( l^2, \mathbb{C}))$ of norm at most $\sqrt{M}$ such that $\Psi(s_j,p_j)e_j=1$ for each $j \geq 1$. \end{lem} Now we are ready to prove that $(ii)$ and $(iii)$ are equivalent. Suppose $(ii)$ holds.
Then by Lemma \ref{boundedbelow} there exists a function $\Phi$ in $H^\infty(\mathbb{G},\mathcal{B}(\mathbb{C}, l^2))$ of norm at most $\sqrt{N}$ such that $\Phi(s_j,p_j)=e_j$ for each $j \geq 1$. Let $\phi_1,\phi_2,\dots$ be functions on $\mathbb{G}$ such that $$ \Phi(s,p)=(\phi_1(s,p),\phi_2(s,p),\dots)^t. $$ The norm of $\Phi$ is no greater than $\sqrt{N}$. This implies that for all $(s,p)\in \mathbb{G}$, $$\sum_i|\phi_i(s,p)|^2 \leq N.$$ Define $\Psi(s,p) = \Phi(s,p)^t$. Then $\Psi$ is a function in $H^\infty(\mathbb{G},\mathcal{B}( l^2, \mathbb{C}))$ of norm at most $\sqrt{N}$ such that $\Psi(s_i,p_i)e_i=1$ for each $i$, and hence by Lemma \ref{boundedabove} condition (\ref{grammabove}) holds with the constant $N$ in place of $M$. Note that for each $i \geq 1$, $\phi_i$ is the function such that $\phi_i(s_j,p_j)=\delta_{ij}$, for all $j \geq 1$. Hence $\{(s_j,p_j): j\geq 1\}$ is strongly separated. Conversely, suppose $(iii)$ holds. Then by Lemma \ref{boundedabove} there exists a function $\Psi$ in $H^\infty(\mathbb{G},\mathcal{B}( l^2, \mathbb{C}))$ of norm at most $\sqrt{M}$ such that $\Psi(s_i,p_i)e_i=1$ for each $i \geq 1$. Write $\Psi$ as $$ \Psi(s,p)=(\psi_1(s,p),\psi_2(s,p),\dots), $$ where the functions $\psi_i$ are such that $\sum_i|\psi_i(s,p)|^2 \leq M$ and $\psi_i(s_i,p_i)=1$ for each $i$. Moreover, the sequence $\{(s_j,p_j): j\geq 1\}$ is strongly separated. This means that there exist a constant $L$ and functions $\varphi_i$ on $\mathbb{G}$ such that $\varphi_i(s_j,p_j)=\delta_{ij}$ for each $j$ and $\|\varphi_i\|_\infty \leq L$. Define $$ \Phi(s,p)=(\varphi_1(s,p)\psi_1(s,p),\varphi_2(s,p)\psi_2(s,p),\dots)^t. $$ Clearly, $\|\Phi\|_\infty \leq L\sqrt{M}$ and $\Phi(s_i,p_i)=e_i$ for all $i$, which, by Lemma \ref{boundedbelow}, proves that $(ii)$ holds. Note that $(i)$ and $(iv)$ together are equivalent to $(ii)$ and $(iii)$ together. We have proved that $(i)$ is equivalent to $(iv)$, and $(ii)$ is equivalent to $(iii)$.
Hence the proof of Theorem \ref{Interpolate} is complete. \qed We end this section with a sufficient condition for a sequence to be interpolating. Suppose $\{(s_j,p_j): j \geq 1\}$ is a sequence of points in $\mathbb{G}$ such that for some $\alpha$ in $\overline{\mathbb{D}}$, the sequence $\{z_j=\varphi(\alpha,s_j,p_j): j\geq 1\}$ in $\mathbb{D}$ is interpolating, where $\varphi(\alpha, \cdot)$ is the coordinate function as defined by (\ref{testfunction}). Then the sequence $\{(s_j,p_j): j \geq 1\}$ is also interpolating, because for each bounded sequence $w=\{w_j: j \geq 1\}$, the function $g_w\circ \varphi ( \alpha, \cdot )$ interpolates $(s_j,p_j)$ to $w_j$, where $g_w$ is a function in $H^\infty(\mathbb D)$ that interpolates $z_j$ to $w_j$. So we have the following Carleson-type condition. \begin{lem}\label{suff} Let $\{(s_j,p_j): j \geq 1\}$ be a sequence of points in $\mathbb{G}$. Let $\alpha$ be in $\overline{\mathbb{D}}$ and $\varphi(\alpha,\cdot)$ be the coordinate function as defined in (\ref{testfunction}). Denote $z_j:=\varphi(\alpha,s_j,p_j)$. If there exists $\delta>0$ such that $$ \prod_{j\neq k}\left|\frac{z_j-z_k}{1-\overline{z_k}z_j}\right| \geq \delta, \text{ for all }k, $$ then $\{(s_j,p_j): j \geq 1\}$ is an interpolating sequence. \end{lem} \section{Cyclic $\Gamma$--isometries} This short section proves a result about cyclic $\Gamma$--isometries. The main result of this section will be used in the proof of Theorem 2. A $\Gamma$--contraction $(R,U)$ is called a $\Gamma$--{\em unitary} if $U$ is a unitary operator. In such a case, $R$ and $U$ are normal operators and the joint spectrum $\sigma(R,U)$ of $(R,U)$ is contained in the distinguished boundary of $\Gamma$.
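The prototypical example, which also explains the algebraic conditions appearing in the characterizations that follow, is the pair built from two commuting unitaries (a direct verification, included as an aside).

```latex
Let $U_1, U_2$ be commuting unitaries on a Hilbert space $\mathcal H$
and set $R = U_1 + U_2$, $U = U_1 U_2$. Then $U$ is unitary,
$\|R\| \leq \|U_1\| + \|U_2\| = 2$, and, using commutativity,
$$
R^* U = (U_1^* + U_2^*)\, U_1 U_2 = U_2 + U_1 = R .
$$
Moreover, the joint spectrum of $(R,U)$ lies in
$\{(z_1+z_2,\, z_1 z_2) : |z_1| = |z_2| = 1\} = b\Gamma$, by the
spectral mapping theorem applied to the commuting pair $(U_1,U_2)$.
```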
A $\Gamma$--contraction $(T,V)$ acting on a Hilbert space $\mathcal K$ is called a $\Gamma$--{\em isometry} if there exist a Hilbert space $\mathcal{N}$ containing $\mathcal{K}$ and a $\Gamma$--unitary $(R,U)$ on $\mathcal{N}$ such that $\mathcal{K}$ is left invariant by both $R$ and $U$, and $$T = R|_{\mathcal{K}} \mbox{ and } V = U|_{\mathcal{K}}.$$ In other words, $(T,V)$ is a $\Gamma$--isometry if it has a $\Gamma$--unitary extension $(R,U)$. The following two theorems are from \cite{ay-jot} and characterize $\Gamma$--unitaries and $\Gamma$--isometries. \begin{thm} \label{G-unitary} Let $(R,U)$ be a pair of commuting operators defined on a Hilbert space $\mathcal{H}.$ Then the following are equivalent: \begin{enumerate} \item $(R,U)$ is a $\Gamma$--unitary; \item there exist commuting unitary operators $U_{1}$ and $ U_{2}$ on $\mathcal{H}$ such that $$R= U_{1}+U_{2},\quad U= U_{1}U_{2};$$ \item $U$ is unitary, $R=R^*U$, and $\| R \| \leq 2$; \item $U$ is a unitary and $R = W + W^* U$ for some unitary $W$ commuting with $U$. \end{enumerate} \end{thm} \begin{thm} \label{G-isometry} Let $T, V$ be commuting operators on a Hilbert space $\mathcal{H}.$ The following statements are all equivalent: \begin{enumerate} \item $(T,V)$ is a $\Gamma$--isometry; \item $(T,V)$ is a $\Gamma$--contraction and $V$ is an isometry; \item $V$ is an isometry, $T=T^*V$ and $\| T \| \leq 2.$ \end{enumerate} \end{thm} A $\Gamma$--coisometry is the adjoint (componentwise) of a $\Gamma$--isometry. Agler and Young proved the following remarkable result which we shall need. \begin{thm}[Agler and Young, Theorem 3.1 in \cite{ay-jot}] \label{dilation} Let $(S,P)$ be a $\Gamma$--contraction on a Hilbert space $\mathcal H$. There exists a Hilbert space $K$ containing $\mathcal H$ and a $\Gamma$--coisometry $(S^\flat, P^\flat)$ on $K$ such that $\mathcal H$ is invariant under $S^\flat$ and $P^\flat$, and $S = S^\flat|_{\mathcal H}, P = P^\flat|_{\mathcal H}$. 
\end{thm} Let $\mu$ be a regular Borel measure on $b\Gamma$. On $H^2(b\Gamma , \mu)$, define two commuting bounded operators $$(M_s^\mu f)(s,p) = sf(s,p) \mbox{ and } (M_p^\mu f)(s,p) = pf(s,p).$$ Since $(s,p) \in b\Gamma$ (equivalently, $|p|=1, s = \overline{s}p$ and $|s|\le 2$), it is easy to check that $(M_s^\mu ,M_p^\mu )$ is a $\Gamma$--isometry on $H^2(b\Gamma , \mu)$. Indeed, according to one of the characterizations of a $\Gamma$--isometry given above, we need to show that $M_p^\mu $ is an isometry, $M_s^\mu = (M_s^\mu)^*M_p^\mu$ and $\|M_s^\mu\| \le 2$, all of which follow from the fact that $(s,p) \in b\Gamma$. Moreover, $(M_s^\mu ,M_p^\mu )$ is {\em cyclic} with the constant function $1$ serving as the cyclic vector because $$\overline{{\rm span}} \{(M_s^\mu)^m(M_p^\mu)^n 1 : m,n \in \mathbb N\} = H^2(b\Gamma, \mu).$$ Conversely, if $(T,V)$ is a $\Gamma$--isometry on $\mathcal H$ with a cyclic vector $h_0$, we extend it to a $\Gamma$--unitary $(R,U)$ on $K$, say. Since $R$ and $U$ are commuting normal operators, the $C^*$-algebra $C^*(R,U)$ generated by them is commutative. The closure of the subspace $\{Xh_0: X \in C^*(R,U)\}$ is a reducing subspace of $(R,U)$ and contains $\mathcal H$ as an invariant subspace of $(R,U)$. So, we can, without loss of generality, assume $K$ to be the above space. Hence, $(R,U)$ is a {\em minimal} dilation. The $\Gamma$--unitary $(R,U)$ is cyclic too (i.e., $K = \overline{\{Xh_0: X \in C^*(R,U)\}}$) with the same cyclic vector $h_0$. Applying Gelfand theory to $C^*(R,U)$ and remembering that the joint spectrum of $(R,U)$ is contained in $b\Gamma$, we get a measure $\mu$ on $b\Gamma$ such that $(R,U)$ is unitarily equivalent to $(M_s^\mu,M_p^\mu)$ on $L^2(b\Gamma, \mu)$ and the $\Gamma$--isometry $(T,V)$ is the restriction of $(M_s^\mu,M_p^\mu)$ to $H^2(b\Gamma, \mu)$. Summing up, we have proved the following. 
\begin{lem}\label{cyclic} A commuting pair of bounded operators $(T,V)$ is a cyclic $\Gamma$--isometry if and only if there is a regular Borel measure $\mu$ on $b\Gamma$ such that $(T,V)$ is unitarily equivalent to $(M_s^\mu,M_p^\mu)$ on $H^2(b\Gamma, \mu)$. \end{lem} \section{The Toeplitz corona theorem on the symmetrized bidisk -- Proof of Theorem \ref{TC-G}} The following lemma plays a pivotal role in the proof of Theorem \ref{TC-G}. \begin{lem}\label{factor} Let $Y$ be a subset of $\mathbb G$ and $J:Y\times Y\to \mathcal{B}(\mathcal{L})$ be a continuous self-adjoint (i.e., $J((s,p),(t,q))=J((t,q),(s,p))^*$) function. If \begin{eqnarray}\label{weaker assumption} J \oslash k : \big((s,p), (t,q)\big) \mapsto J\big((s,p), (t,q)\big) \otimes k\big((s,p), (t,q)\big) \end{eqnarray}is positive semi-definite for every $\mathcal{B}(\mathcal{L})$-valued admissible weak kernel $k$, then there is a completely positive function $\Delta: Y \times Y \to \mathcal B\big(C(\overline{\mathbb D}), \mathcal B(\mathcal L)\big)$ such that for every $(s,p),(t,q)$ in $Y$, \begin{eqnarray*} J\big((s,p), (t,q)\big) = \Delta((s,p),(t,q))\big(1-\varphi(\cdot,s,p)\overline{\varphi(\cdot,t,q)}\big). \end{eqnarray*} \end{lem} \begin{proof} We first prove the result for finite subsets of $Y$ and then apply Kurosh's theorem. Let $\mathcal F=\{(s_j,p_j):1\leq j\leq N\}$ be a finite subset of $Y$ of cardinality $N$. Consider the following subset of $N \times N$ self-adjoint operator matrices with entries in $\mathcal{B}(\mathcal{L})$, \begin{align*} \mathcal{W} & = \big\{\left[\Delta\big((s_i,p_i),(s_j,p_j)\big)\left(1-\varphi(\cdot,s_i,p_i)\overline{\varphi(\cdot,s_j,p_j)}\right)\right]_{i,j=1}^N:\\&\Delta: \mathcal F \times \mathcal F \to \mathcal{B}\left(C(\overline{\mathbb D}), \mathcal{B}(\mathcal L)\right) \text{ is a completely positive function}\big\}. 
\end{align*} The subset $\mathcal{W}$ of $\mathcal{B}(\mathcal{L}^N)$ is a wedge in the vector space of $N \times N$ self-adjoint matrices with entries from $\mathcal B(\mathcal L)$ in the sense that it is convex and if we multiply a member of $\mathcal{W}$ by a non-negative real number, then the element remains in $\mathcal{W}$. Since $\mathcal{B}(\mathcal{L}^N)$ is the dual of $\mathcal{B}_1(\mathcal{L}^N)$, the ideal of trace class operators acting on $\mathcal{L}^N$, it has its natural weak-star topology. We shall show that it is closed. This will require some work. We shall pick up the proof of the lemma after we show that the wedge is closed. Let $$ K_\nu\big((s_i,p_i),(s_j,p_j)\big)=\Delta_\nu\big((s_i,p_i),(s_j,p_j)\big)\left(1-\varphi(\cdot,s_i,p_i)\overline{\varphi(\cdot,s_j,p_j)}\right) $$ be a net in $\mathcal W$ which is indexed by $\nu$ in some index set and which converges to an $N \times N$ self-adjoint $\mathcal B(\mathcal L)$-valued matrix $K=(K_{ij})$ with respect to the weak-star topology. This means that for every $X=(X_{kl})\in \mathcal{B}_1(\mathcal{L}^N)$, the net of scalars $\tr(K_{\nu}X)$ converges to $\tr(KX)$. Let us use a special $X$. Consider two vectors $u$ and $v$ in $\mathcal{L}$ and choose $X$ to be the block operator matrix which has $u\otimes v$ in the $(ji)$-th entry and zeroes elsewhere. Then we get $$ \langle \Delta_\nu\big((s_i,p_i),(s_j,p_j)\big)\left(1-\varphi(\cdot,s_i,p_i)\overline{\varphi(\cdot,s_j,p_j)}\right)u,v \rangle \to \langle K_{ij}u, v\rangle $$ for all $i=1,2, \ldots , N$ and all $j=1,2, \ldots , N$. In particular, we have $$ \langle \Delta_\nu\big((s_i,p_i),(s_i,p_i)\big)\left(1-|\varphi(\cdot,s_i,p_i)|^2\right)u,u \rangle \to \langle K_{ii}u, u\rangle $$for all $u\in \mathcal L$ and $1\leq i \leq N$. 
Let us recall that for any $(s,p) \in \mathbb G$, $\sup\{ | \varphi (\alpha , s, p) | : \alpha \in \overline{\mathbb D} \} < 1$, so that we have an $\epsilon >0$ satisfying $1-|\varphi(\cdot,s_i,p_i)|^2\geq \epsilon \cdot 1$ for each $1\leq i \leq N$. Hence for each $1\leq i \leq N$, $$ \langle \Delta_\nu\big((s_i,p_i),(s_i,p_i)\big)\left(1-|\varphi(\cdot,s_i,p_i)|^2\right)u,u \rangle \geq \epsilon \langle \Delta_\nu\big((s_i,p_i),(s_i,p_i)\big)(1)u,u \rangle. $$ Since the left hand side of the above inequality converges for every $i$ and there are only finitely many $i$'s, we have a positive constant $M(u)$ depending only on $u$ such that $$ \sup_\nu \langle \Delta_\nu \big((s_i,p_i),(s_i,p_i)\big)(1)u,u \rangle <M(u). $$ Since $\|h\|_\infty^2-|h(\cdot)|^2\geq 0$ for every $h \in C(\overline{\mathbb D})$, we have \begin{eqnarray*} \langle \Delta_\nu\big((s_i,p_i),(s_i,p_i)\big)(|h|^2)u,u \rangle\leq \|h\|_\infty^2\langle \Delta_\nu\big((s_i,p_i),(s_i,p_i)\big)(1)u,u \rangle \leq \|h\|_\infty^2M(u). \end{eqnarray*} For a completely positive function $\Delta$, we have for every $h_1,h_2 \in C(\overline{\mathbb D})$ and $u,v \in \mathcal L$, $$ |\langle \Delta\big( (s,p), (t,q) \big)(h_1\overline{h_2})u,v\rangle|^2 \leq\langle \Delta\big( (s,p), (s,p) \big)(|h_1|^2)u,u\rangle\langle \Delta\big( (t,q), (t,q) \big)(|h_2|^2)v,v\rangle, $$ which immediately gives a bound on the off-diagonal entries, \begin{eqnarray*} |\langle \Delta_\nu\big((s_i,p_i),(s_j,p_j)\big)(h)u,v \rangle|^2\leq \|h\|_\infty^2M(u)M(v), \end{eqnarray*}for every $h \in C(\overline{\mathbb D})$, all $u,v \in \mathcal L$ and every $\nu$. Therefore, for every $h \in C(\overline{\mathbb D})$ and $u,v \in \mathcal L$, the net $\{\langle \Delta_\nu\big((s_i,p_i),(s_j,p_j)\big)(h)u,v \rangle\}$ is bounded, for each $1\leq i,j \leq N$. 
Since the set $\mathcal F$ is finite, we get a subnet $\nu_l$ such that $\{\langle \Delta_{\nu_l}\big((s_i,p_i),(s_j,p_j)\big)(h)u,v \rangle\}$ converges to some complex number (depending on $i,j,h,u$ and $v$). Now we define a completely positive function $\Delta: \mathcal F \times \mathcal F \to \mathcal{B}\left(C(\overline{\mathbb D}), \mathcal{B}(\mathcal L)\right)$ by $$ \langle \Delta\big( (s_i,p_i), (s_j,p_j) \big)(h)u,v \rangle = \lim_l\langle \Delta_{\nu_l}\big( (s_i,p_i), (s_j,p_j) \big)(h)u,v \rangle $$ and extend it trivially to $Y \times Y$. Consequently, for every $u,v \in \mathcal L$, $$ \langle \Delta\big( (s_i,p_i), (s_j,p_j) \big)\left(1-\varphi(\cdot,s_i,p_i)\overline{\varphi(\cdot,s_j,p_j)}\right)u,v \rangle =\langle K_{ij}u,v \rangle \text{ for each }1\leq i,j\leq N, $$ proving that $\mathcal{W}$ is weak-star closed and hence operator norm closed too. Continuing the proof of the lemma, for a function $\delta:\overline{\mathbb D}\times \mathbb G\times \mathbb G\to \mathcal B(\mathcal L)$ such that for each $\alpha \in \overline{\mathbb D}$, $\delta(\alpha, \cdot, \cdot)$ is a weak kernel on $\mathbb G$, let $\Delta^\delta_\mu$ be as defined in (\ref{ExampleofDelta}). Consider the two functions $b,d:\overline{\mathbb D}\times \mathbb G\times \mathbb G\to \mathcal B(\mathcal L)$ defined by \begin{eqnarray}\label{the-b-kernel} b(\alpha,(s,p),(t,q))=\frac{I_\mathcal L}{1-\varphi(\alpha,s,p)\overline{\varphi(\alpha,t,q)}}, \end{eqnarray} and \begin{eqnarray}\label{the-d-kernel} d(\alpha,(s,p),(t,q))=\frac{[{\bf u}(s,p) \otimes {\bf u}(t,q)]}{1-\varphi(\alpha,s,p)\overline{\varphi(\alpha,t,q)}}, \end{eqnarray}where ${\bf u}:\mathbb G\to \mathcal L$ is a function and for two elements $u_1,u_2$ of $\mathcal L$, $(u_1\otimes u_2)$ denotes the bounded operator on $\mathcal L$ defined by $$ (u_1\otimes u_2)(h)=\langle h,u_2\rangle u_1. 
$$ Then for a probability measure $\mu$ on $\overline{\mathbb D}$, we have $$\Delta^b_\mu\big( (s,p), (t,q) \big)\big(1-\varphi(\cdot,s,p)\overline{\varphi(\cdot,t,q)}\big)=I_\mathcal L \text{ for every $(s,p),(t,q)\in \mathbb G$}$$and $$\Delta^d_\mu\big( (s,p), (t,q) \big)\big(1-\varphi(\cdot,s,p)\overline{\varphi(\cdot,t,q)}\big)={\bf u}(s,p)\otimes {\bf u}(t,q)\text{ for every }(s,p),(t,q)\in \mathbb G$$ and hence we conclude that the block operator matrix with each entry being $I_{\mathcal L}$ is in $\mathcal W$, and that if $u_1,u_2,\dots,u_N$ are any vectors in $\mathcal L$, then the $N \times N$ matrix $$ D(i,j)=u_i\otimes u_j \text{ for each } 1\leq i,j \leq N $$ is also in $\mathcal{W}$. We now show that the restriction $J_{\mathcal F}$ of $J$ to $\mathcal F \times \mathcal F$ is in $\mathcal{W}$. Suppose on the contrary that $J_{\mathcal F}$ is not in $\mathcal{W}$. Then it is well-known as a consequence of the Hahn--Banach separation theorem that these two can be separated by a weak-star continuous linear functional $L$ on $\mathcal{B}(\mathcal{L}^N)$. Specifically, applying part (b) of Theorem 3.4 of \cite{Rud}, we get such an $L$ whose real part is non-negative on $\mathcal{W}$ and strictly negative on $J_{\mathcal F}$. We replace this linear functional by its real part, i.e., $\frac{1}{2}(L(T)+\overline{L(T)})$, and denote it by $L$ itself. Thus, without loss of generality we can take $L$ to be real-valued. Since $L$ is weak-star continuous, $L$ has a specific form. In fact, there is an $N \times N$ self-adjoint operator matrix $K$ with entries in the ideal of trace class operators such that $$ L(T)=\tr(TK). $$ This is also well-known and can be found for example in Theorem 1.3 of Chapter V of \cite{conway func}. Let us define $K^t$ by $K^t(\lambda_i, \lambda_j)=K(\lambda_j, \lambda_i)^t$, where we write $\lambda_i$ for the point $(s_i,p_i)$. Let $\{e_n : n \in \mathbb{N}\}$ be an orthonormal basis for $\mathcal{L}$. 
For $u=\sum c_m e_m$ and $v=\sum d_n e_n$ in $\mathcal{L}$, we make a note of the following fact about $K^t$, which will be used later in the proof. \begin{eqnarray*} \langle K^t(\lambda_i, \lambda_j)u,v \rangle&=&\sum_{m, n}c_m \bar{d_{n}}\langle K(\lambda_j, \lambda_i)^te_m,e_n \rangle\\ &=&\sum_{m,n}c_m \bar{d_{n}}\langle K(\lambda_j, \lambda_i)e_n,e_m \rangle = \langle K(\lambda_j, \lambda_i)\bar{v},\bar{u} \rangle, \end{eqnarray*} where $\bar{u}=\sum \bar{c}_m e_m$ and $\bar{v}=\sum \bar{d_n} e_n$. It is simple to show that $K^t$ is a $\mathcal{B}(\mathcal{L})$-valued positive semi-definite kernel on $\mathcal F$, i.e., \begin{eqnarray}\label{action on D} \sum_{i,j=1}^N\langle K^t(\lambda_i, \lambda_j)u_j,u_i \rangle \geq 0, \end{eqnarray} where $u_1,u_2,\dots,u_N$ are arbitrary vectors in $\mathcal{L}$. The following shows that (\ref{action on D}) is the action of $L$ on the kernel $D(i,j)=[\bar{u_i} \otimes \bar{u_j}]$ and hence we are done. \begin{eqnarray*} 0 \leq L(D)= \tr(D K)=\sum_{i,j=1}^N\tr(D_{ij}K_{ji})&=&\sum_{i,j=1}^N\tr([\bar{u_i} \otimes K_{ji}^*\bar{u_j}])=\sum_{i,j=1}^N\langle \bar{u_i}, K_{ji}^*\bar{u_j} \rangle \\&=& \sum_{i,j=1}^N \langle K_{ji}\bar{u_i}, \bar{u_j} \rangle = \sum_{i,j=1}^N \langle K^t(\lambda_i, \lambda_j)u_j,u_i \rangle. \end{eqnarray*} The next step is to show that $K^t$ is admissible. Lemma \ref{admi} will be used now. This is a matter of choosing the completely positive function judiciously. Note that for each $\alpha \in \overline{\mathbb D}$ and a function $u:\mathbb G \to \mathcal L$, the function $\Delta^\alpha: \mathbb G \times \mathbb G \to \mathcal{B}\left(C(\overline{\mathbb D}), \mathcal{B}(\mathcal L)\right)$ defined by $$ \Delta^\alpha\big( (s,p), (t,q) \big)(h)=h(\alpha)[u(s,p)\otimes u(t,q)] $$is completely positive, because $\Delta^\alpha=\Delta^\delta_\mu$, as defined in (\ref{ExampleofDelta}) with $\delta(\cdot,(s,p),(t,q))=u(s,p)\otimes u(t,q)$ and $\mu$ being the point mass measure at $\alpha$. 
This implies that for each $\alpha\in \overline{\mathbb D}$ and vectors $u_1,u_2,\dots,u_N$ in $\mathcal L$, the following $\mathcal B(\mathcal L)$-valued $N\times N$ matrix $$ A(\alpha)=\left(\left(\big(1-\varphi(\alpha,s_i,p_i)\overline{\varphi(\alpha,s_j,p_j)}\big)[u_i\otimes u_j]\right)\right)_{i,j=1}^N $$is in $\mathcal{W}$. The fact that $L$ is non-negative on $\mathcal{W}$ shows that $K^t$ is admissible. Therefore by hypothesis, the $\mathcal{B}(\mathcal{L} \otimes \mathcal{L})$-valued function $J_{\mathcal F} \oslash K^t$ on $Y \times Y$ is positive semi-definite, which means that for every choice of vectors $\{u_i\}_{i=1}^N$ in $\mathcal{L} \otimes \mathcal{L}$, we have \begin{eqnarray}\label{fun} \sum_{i,j=1}^N \langle J_{\mathcal F} \oslash K^t(\lambda_i, \lambda_j)u_j, u_i\rangle \geq 0. \end{eqnarray} For a positive integer $R$, choose $u_i=\sum_{m =1}^R e_m \otimes e_m$ for each $i$. Note that for this choice of $u_i$, (\ref{fun}) is the same as \begin{eqnarray}\label{fun1} \sum_{i,j=1}^N\sum_{m, n =1}^R \langle J_{\mathcal F} (\lambda_i, \lambda_j)e_m, e_n \rangle\langle K^t(\lambda_i, \lambda_j)e_m, e_n \rangle \geq 0. \end{eqnarray} On the other hand, \begin{eqnarray*} L(J_{\mathcal F})&=&\sum_{i,j=1}^N\tr(J_{\mathcal F}(\lambda_i, \lambda_j)K(\lambda_j, \lambda_i)) \\ &=& \sum_{i,j=1}^N\sum_{n =1}^\infty\langle J_{\mathcal F}(\lambda_i, \lambda_j)K(\lambda_j, \lambda_i)e_n, e_n\rangle\\ &=& \sum_{i,j=1}^N\sum_{m, n=1}^\infty \langle J_{\mathcal F} (\lambda_i, \lambda_j)e_m, e_n \rangle\langle K(\lambda_j, \lambda_i)e_n, e_m \rangle \\ &=&\sum_{i,j=1}^N\sum_{m, n=1}^\infty \langle J_{\mathcal F} (\lambda_i, \lambda_j)e_m, e_n \rangle\langle K^t(\lambda_i, \lambda_j)e_m, e_n \rangle \geq 0, \end{eqnarray*} where the last inequality follows from (\ref{fun1}) by letting $R \to \infty$. Now $L(J_{\mathcal F})$ being non-negative contradicts the fact that $L$ is strictly negative at $J_{\mathcal F}$, and hence the assumption that $J_{\mathcal F}$ is not in $\mathcal{W}$. 
Therefore the restriction $J_{\mathcal F}$ of $J$ to every finite subset $\mathcal F$ of $Y$ must be in $\mathcal{W}$. Now an application of Kurosh's theorem finishes the proof. \end{proof} We shall actually prove the following general theorem from which the Toeplitz corona theorem for the symmetrized bidisk follows. \begin{theorem}\label{gentpsymmbiD} Let $\mathcal{L}_1, \mathcal{L}_2$ and $\mathcal{L}_3$ be Hilbert spaces and $Y$ be a subset of $\mathbb G$. Suppose $\Phi : Y\to \mathcal{B}(\mathcal{L}_1,\mathcal{L}_2)$ and $\Theta:Y\to \mathcal{B}(\mathcal{L}_3,\mathcal{L}_2)$ are given functions. Then the following statements are equivalent: \begin{enumerate} \item[(1)] There exists a function $\Psi$ in the closed unit ball of $H^\infty\big(\mathbb{G},\mathcal{B}(\mathcal{L}_3,\mathcal{L}_1)\big)$ such that $$ \Phi(s,p)\Psi(s,p)=\Theta(s,p) $$ for all $(s,p) \in Y$; \item[(2)] The function $$[\Phi(s,p)\Phi(t,q)^*-\Theta(s,p)\Theta(t,q)^*]\oslash k\big((s,p),(t,q)\big)$$ is positive semi-definite on $Y$ for every $\mathcal{B}(\mathcal{L}_2)$-valued admissible kernel $k$ on $Y$; \item[(3)] There exists a completely positive function $\Delta:Y \times Y \to \mathcal B\big(C(\overline{\mathbb D}), \mathcal B(\mathcal L_2)\big)$ such that for all $(s,p)$, $(t,q)$ in $Y$, \begin{eqnarray*} \Phi(s,p)\Phi(t,q)^*-\Theta(s,p)\Theta(t,q)^*&=&\Delta ( (s,p), (t,q)) \big( 1 - \varphi(\cdot, s, p) \overline{\varphi(\cdot , t, q)} \big). \end{eqnarray*} \end{enumerate} \end{theorem} \begin{proof} The proof will require the technique of the proof of the Realization Theorem. $(1)\Rightarrow(2):$ Suppose $(1)$ holds. 
Since $\Psi$ is in the closed unit ball of $H^\infty\big(\mathbb{G},\mathcal{B}(\mathcal{L}_3,\mathcal{L}_1)\big)$, we apply part $({\bf{M}})$ of the Realization Theorem with $Y=\mathbb G$ and $f=\Psi$ to get that $$ \left(I_{\mathcal{L}_1}-\Psi(s,p)\Psi(t,q)^*\right)\oslash k\big((s,p),(t,q)\big) $$ is positive semi-definite for every $\mathcal{B}(\mathcal{L}_2)$-valued admissible kernel $k$ on $\mathbb{G}$. Now part $(2)$ follows from the following simple observation: \begin{eqnarray*} &&[\Phi(s,p)\Phi(t,q)^*-\Theta(s,p)\Theta(t,q)^*]\oslash k\big((s,p),(t,q)\big) \\ &=& \Phi(s,p)\big(I_{\mathcal{L}_1}-\Psi(s,p)\Psi(t,q)^*\big)\Phi(t,q)^*\oslash k\big((s,p),(t,q)\big). \end{eqnarray*} $(2)\Rightarrow(3):$ This is Lemma \ref{factor}. $(3)\Rightarrow(1):$ This part of the proof uses a lurking isometry argument to construct the function $\Psi$. Suppose there exists a completely positive function $\Delta:Y \times Y \to \mathcal B\big(C(\overline{\mathbb D}), \mathcal B(\mathcal L_2)\big)$ such that for every $(s,p)$, $(t,q)$ in $Y$, \begin{eqnarray*} \Phi(s,p)\Phi(t,q)^*-\Theta(s,p)\Theta(t,q)^*&=&\Delta ( (s,p), (t,q)) \big( 1 - \varphi(\cdot, s, p) \overline{\varphi(\cdot , t, q)} \big) . 
\end{eqnarray*} We re-arrange the terms in the above equation and apply Lemma \ref{crucial-lemma} to obtain a Hilbert space $\mathcal H$, a function $L:Y\to \mathcal B\big(C(\overline{\mathbb D}), \mathcal B(\mathcal H,\mathcal L_2)\big)$ and a unital $*$-representation $\pi:C(\overline{\mathbb D})\to \mathcal B(\mathcal H)$ such that $$ \Phi(s,p)\Phi(t,q)^*+L(s,p)(\varphi(\cdot,s,p))L(t,q)(\varphi(\cdot,t,q))^*=\Theta(s,p)\Theta(t,q)^*+L(s,p)(1)L(t,q)(1)^*, $$which implies that there exists an isometry $V_1$ from $$\overline{\text{span}}\{\Phi(t,q)^*e\oplus L(t,q)(\varphi(\cdot,t,q))^*e:(t,q)\in Y,\; e\in \mathcal L_2\}\subset \mathcal{L}_1 \oplus \mathcal{H}$$ onto $$\overline{\text{span}}\{\Theta(t,q)^*e\oplus L(t,q)(1)^*e :(t,q)\in Y,\; e\in \mathcal L_2\}\subset \mathcal{L}_3 \oplus \mathcal{H}$$ such that for all $(t,q)\in Y$ and $e\in \mathcal L_2$, \begin{eqnarray}\label{tpkaction} \left( \begin{array}{c} \Phi(t,q)^* \\ \pi(\varphi(\cdot,t,q))^*L(t,q)(1)^* \end{array} \right)e \xrightarrow{V_1} \left( \begin{array}{c} \Theta(t,q)^* \\ L(t,q)(1)^* \end{array} \right)e. \end{eqnarray} We add an infinite-dimensional summand to $\mathcal{H}$, if necessary, to extend $V_1$ to a unitary from $\mathcal{L}_1 \oplus \mathcal{H}$ onto $\mathcal{L}_3 \oplus \mathcal{H}$. Decompose $V_1$ as the $2\times 2$ block operator matrix $$\left(\begin{array}{cc} A_1 & B_1\\ C_1 & D_1\\ \end{array} \right) $$and define the function $\Psi$ on $\mathbb G$ by $$ \Psi(t,q)^*=A_1+B_1\pi(\varphi(\cdot,t,q))^*(I_\mathcal H-D_1\pi(\varphi(\cdot,t,q))^*)^{-1}C_1. $$ Then by the Realization Theorem $\Psi$ is a contractive multiplier and by (\ref{tpkaction}) it satisfies $\Psi(t,q)^*\Phi(t,q)^*=\Theta(t,q)^*$ for all $(t,q)$ in $Y$. Hence $(1)$ holds. \end{proof} \begin{proof}[\underline{Proof of Theorem \ref{TC-G}}:] Note that the equivalence of parts (1) and (3) in Theorem \ref{TC-G} follows from Theorem \ref{gentpsymmbiD} when one chooses $Y=\mathbb G$ and $\Theta=\sqrt{\delta}$. 
We complete the proof of Theorem \ref{TC-G} by establishing $(1)\Rightarrow(2)\Rightarrow(3)$. $(1)\Rightarrow(2):$ Denote the operator $(M^\mu_{{\psi_1}_r},M^\mu_{{\psi_2}_r},\dots,M^\mu_{{\psi_N}_r})^t$ by $M^\mu_{\psi_r}$. The inequality in part (1) shows that $$M^{\mu *}_{\psi_r}M^\mu_{\psi_r}\leq \frac{1}{\delta} I_{H^2(b\Gamma,\mu)},$$which implies $$ M^{\mu}_{\psi_r}{M^{\mu}_{\psi_r}}^*\leq\frac{1}{\delta}I_N, $$which after conjugation by $(M^\mu_{{\varphi_1}_r},M^\mu_{{\varphi_2}_r},\dots,M^\mu_{{\varphi_N}_r})=:M^\mu_{\varphi_r}$ gives $$ M^\mu_{\varphi_r} M^{\mu}_{\psi_r}{M^{\mu *}_{\psi_r}}{M^\mu_{\varphi_r}}^*\leq \frac{1}{\delta}M^\mu_{\varphi_r}{M^\mu_{\varphi_r}}^*, $$which establishes part (2), since $M^\mu_{\varphi_r} M^{\mu}_{\psi_r}=I_{H^2(b\Gamma,\mu)}$. $(2)\Rightarrow(3):$ Let $k$ be an admissible $\mathcal B (\mathcal L_2)$-valued kernel on $\mathbb G$. As in \eqref{kernelspace}, we get a Hilbert space $H_k$ of $\mathcal L_2$-valued functions on $\mathbb G$. Define two operators $S$ and $P$ on $H_k$ by $$Sf(s,p) = sf(s,p) \text{ and } Pf(s,p) = pf(s,p), \text{ where } (s,p) \in \mathbb G.$$ Since $k$ is admissible, the pair $(S,P)$ is a $\Gamma$--contraction. Hence, by Theorem \ref{dilation}, there is a $\Gamma$--coisometry $(S^\flat, P^\flat)$ which extends $(S,P)$. By assumption, we have $$\Phi_r(M_s^\mu, M_p^\mu) \Phi_r(M_s^\mu, M_p^\mu)^* - \delta I \ge 0 \text{ for all } 0<r<1$$ for every measure $\mu$ on $b\Gamma$. By virtue of Lemma \ref{cyclic}, this means that $$\Phi_r(T, V) \Phi_r(T, V)^* - \delta I \ge 0 \text{ for all } 0<r<1$$ and for any cyclic $\Gamma$--isometry $(T,V)$. Now suppose $(T,V)$ is a $\Gamma$--isometry on $\mathcal H$ and $h \in \mathcal H$. Consider the subspace $$ \mathcal M = \overline{\rm span} \{ T^mV^nh : m,n \ge 0\}. $$ This is an invariant subspace for $T$ and $V$. Let $T^\prime = T|_{\mathcal M}$ and $V^\prime = V|_{\mathcal M}$. Then, $(T^\prime , V^\prime)$ is a cyclic $\Gamma$--isometry. 
So, for all $0<r<1$, we have $$\langle \Phi_r(T, V) \Phi_r(T, V)^* h , h \rangle = \| \Phi_r(T, V)^* h \|^2 \ge \| P_{\mathcal M} \Phi_r(T, V)^* h \|^2 = \| \Phi_r(T^\prime , V^\prime)^* h \|^2 \ge \delta \| h \|^2 $$ because of the cyclicity of $(T^\prime , V^\prime)$. Thus we have $$\Phi_r(T, V) \Phi_r(T, V)^* - \delta I \ge 0 \text{ for all } 0<r<1$$ and for any $\Gamma$--isometry $(T,V)$. Now, making use of Theorem \ref{dilation}, we get $$\Phi_r(S, P) \Phi_r(S, P)^* - \delta I \ge 0 \text{ for all } 0<r<1.$$ This implies that $$ M_\Phi M_\Phi^* - \delta I = \Phi(S, P) \Phi(S, P)^* - \delta I \ge 0,$$ which is what was required to prove. \end{proof} \begin{rem} We would like to conclude by noting that the proof of $(2)\Rightarrow(3)$ of Theorem \ref{TC-G} needs a characterization of cyclic $\Gamma$-isometries. Since a $\Gamma$-isometry in general cannot be obtained as a symmetrization of a pair of isometries (see \cite{ay-jot} for more on $\Gamma$-isometries), the result on cyclic $\Gamma$-isometries cannot be made to follow from the corresponding result on pairs of isometries. This is an example of the challenges that we alluded to at the end of Section 1. \end{rem}
\section{Introduction} Lorentz invariance (LI) of physical laws is one of the cornerstones of modern physics. There are a number of experiments confirming this symmetry at the energies we can reach now. For example, on the classical level, rotation invariance has been tested in Michelson-Morley experiments, and boost invariance has been tested in Kennedy-Thorndike experiments \cite{reviews}. Although LI is well established experimentally up to now, we cannot say for sure that it is still valid at higher energies. Moreover, modern astrophysical and cosmological data (e.g. UHECR, dark matter, dark energy, etc.) point to a possible LI violation (LV). To address these challenges, there are a number of attempts to create new physical models, such as M/string theory, Kaluza-Klein models, brane-world models, etc. \cite{reviews}. In this paper we investigate an LV test related to the photon dispersion measure (PhDM). This test is based on the LV effect of a phenomenological energy-dependent speed of photons \cite{a98,ellis00,km01,sarkar,myers,piran03,jlm03}; for recent studies see Ref. \cite{MP06} and references therein. The formalism that we use is based on the analogy with electromagnetic wave propagation in a magnetized medium, and extends previous works \cite{cfj90,jlm03,Tina1}. In our model, instead of propagating in a magnetized medium, the electromagnetic waves propagate in a vacuum filled with a scalar field $\psi$. LV occurs because of an interaction term $\mathrm{f}(\psi )F^2$, where $F$ is the amplitude of the electromagnetic field. Such an interaction might have different origins. In string theory $\psi $ could be the dilaton field \cite{string,damour-pol}. The field $\psi$ could be associated with geometrical moduli. In brane-world models a similar term describes an interaction between the bulk dilaton and the Standard Model fields on the brane \cite{zhuk}. In Ref. \cite{klp}, such an interaction was obtained in $N=4$ supergravity in four dimensions. 
In Kaluza-Klein models the term $\mathrm{f}(\psi )F^2$ has a purely geometrical origin, and it appears in the effective, dimensionally reduced, four-dimensional action (see e.g. \cite{kubyshin,GSZ}). In particular, in reduced Einstein-Yang-Mills theories, the function $\mathrm{f}(\psi )$ coincides (up to a numerical prefactor) with the volume of the internal space. Phenomenological (exactly solvable) models with spherical symmetries were considered in Refs.~\cite{melnikov}. To be more specific, we consider the model which is based on the reduced Einstein-Yang-Mills theory \cite{GSZ}, where the term $\propto \psi F^2$ describes the interaction between the conformal excitations of the internal space (gravexcitons) and photons. It is clear that a similar LV effect exists for all types of interactions of the form $\mathrm{f}(\psi )F^2$ mentioned above. Obviously, the interaction term $\mathrm{f}(\psi )F^2$ modifies the Maxwell equations and, consequently, results in a modified dispersion relation for photons. We show that this modification has a rather specific form. For example, we demonstrate that the refractive indices for the left and right circularly polarized waves coincide with each other. Thus, rotational invariance is preserved. However, the speed of electromagnetic wave propagation in vacuum differs from the speed of light $c$. This difference implies a time delay effect which can be measured via the propagation of high-energy GRB photons over cosmological distances (see e.g. Ref. \cite{MP06}). It is clear that gravexcitons should not overclose the Universe and should not result in variations of the fine structure constant. These demands lead to certain constraints on gravexcitons (see Refs. \cite{GSZ,iwara}). We use the time delay effect, caused by the interaction between photons and gravexcitons, to get additional bounds on the parameters of gravexcitons. 
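To get a feel for the size of the effect such time-delay measurements probe, the following is a rough order-of-magnitude sketch (our illustration in Python; it uses the generic first-order energy-dependent delay $\Delta t \sim (E/M_{QG})\,D/c$ of the phenomenological approach cited above, with the quantum-gravity scale taken at the Planck mass, not the specific gravexciton prediction derived in this paper; the numerical inputs are assumptions for illustration only):

```python
# Generic linear-order LV time delay accumulated by a photon of energy E
# over a distance D: dt ~ (E / M_QG) * (D / c).
M_PL_GEV = 1.22e19        # Planck mass in GeV (assumed quantum-gravity scale)
C_M_S = 2.998e8           # speed of light, m/s
GPC_IN_M = 3.086e25       # one gigaparsec in metres

def lv_time_delay(e_gev, distance_m, m_qg_gev=M_PL_GEV):
    """First-order energy-dependent delay relative to a low-energy photon."""
    return (e_gev / m_qg_gev) * (distance_m / C_M_S)

dt = lv_time_delay(10.0, GPC_IN_M)   # a 10 GeV GRB photon from ~1 Gpc
```

A 10 GeV photon travelling about a gigaparsec accumulates a delay of order $0.1$ s, which is the kind of delay GRB timing studies are sensitive to.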
The starting point of our investigation is the Abelian part of the $D$-dimensional action of the Einstein-Yang-Mills theory: \be{1.0} S_{EM}= -\frac12 \int\limits_{M}d^{D}x\sqrt{|g|}\, F_{MN}F^{MN}\, , \end{equation} where the $D$-dimensional metric, $g = g_{MN}(X)dX^M\otimes dX^N = g^{(0)}(x)_{\mu \nu}dx^{\mu}\otimes dx^{\nu} + a_1^2(x)g^{(1)}$, is defined on the product manifold $M=M_0\times M_1$. Here, $M_0$ is the $(D_0=d_0+1)$-dimensional external space. The $d_1$-dimensional internal space $M_1$ has a constant curvature with the scale factor $a_1(x) \equiv L_{Pl}\exp \beta^1(x)$. Dimensional reduction of the action \rf{1.0} results in the following effective $D_0$-dimensional action \cite{GSZ}: \be{1.00} \bar S_{EM}= -\frac {1}{2} \int_{M_0}d^{D_0}x\sqrt{|\tilde g^{(0)}|} \left[\left(1 - \mathcal D \kappa_0 \psi \right) F_{\mu\nu}F^{\mu \nu}\right], \end{equation} which is written in the Einstein frame with the $D_0$-dimensional metric, $\tilde g^{(0)}_{\mu \nu}=(\exp d_1\bar \beta^{1})^{-2/(D_0-2)}g^{(0)}_{\mu \nu}$. Here, $\kappa_0 \psi \equiv -\bar \beta^1 \sqrt{(D_0-2)/d_1(D-2)}\ll 1$ and $\bar \beta^1 \equiv \beta^1 - \beta^1_0$ are small fluctuations of the internal space scale factor over the stable background $\beta^1_0$ (the $0$ subscript denotes the present-day value). These small fluctuations/oscillations of the internal space scale factor have the form of a scalar field (the so-called gravexciton \cite{GZ1}) with a mass $m_{\psi}$ defined by the curvature of the effective potential (see \cite{GZ1} for details). Action \rf{1.00} is defined under the approximation $\kappa_0 \psi < 1$, which obviously holds for\footnote{In the brane-world model the prefactor $\kappa_0$ in the expression for $\kappa_0\psi$ is replaced by a parameter proportional to $M^{-1}_{EW}$ \cite{zhuk}. Thus, the smallness condition holds for $\psi < M_{EW}$.} $\psi < M_{Pl}$. 
Here, $\kappa_0^2 = 8\pi/M^2_{Pl}$ is the four-dimensional gravitational constant, $M_{Pl}$ is the Planck mass, and $\mathcal D = 2 \sqrt{d_1/[(D_0-1)(D-1)]}$ is a model-dependent constant. The Lagrangian density for the scalar field $\psi$ reads: $\mathcal{L}_{\psi} = \sqrt{|\tilde g^{(0)}|}(-\tilde g^{\mu\nu}\psi_{,\mu}\psi_{,\nu}-m_{\psi}^2 \psi \psi)/2$. For simplicity we assume that $\tilde g^{(0)}$ is the flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric with the scale factor $a(t)$. Let us consider Eq. \rf{1.00}. It is worth noting, first, that the $D_0$-dimensional field strength tensor, $F_{\mu \nu}$, is gauge invariant.\footnote{Eq. \rf{1.00} can be rewritten in the more familiar form $\bar S_{EM}=-(1/2)\int_{M_0}d^{D_0}x\sqrt{|\tilde g^{(0)}|} \bar F_{\mu\nu}\bar F^{\mu \nu}$ \cite{GSZ}. The field strength tensor $\bar F_{\mu\nu}$ is not gauge invariant here.} Second, action \rf{1.00} is conformally invariant in the case $D_0=4$. The transform to the Einstein frame does not break the gauge invariance of action \rf{1.00}, and the electromagnetic field tensor is antisymmetric as usual, $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$. Varying \rf{1.00} with respect to the electromagnetic vector potential gives \be{1.3} \partial_{\nu}\left[ \sqrt{-g} \left(1-\mathcal D\kappa_0 \psi \right)F^{\mu \nu} \right]=0. \end{equation} The second term in the round brackets, giving the contribution $\mathcal D\kappa_0 \psi F^{\mu\nu}$, reflects the interaction between photons and the scalar field $\psi$, and, as we show below, it is responsible for LV. In particular, the coupling between photons and the scalar field $\psi$ makes the speed of photons different from the standard speed of light. Eq. (\ref{1.3}) together with the Bianchi identity (which is preserved in the considered model due to the gauge invariance of the tensor $F_{\mu\nu}$ \cite{GSZ}) defines a complete set of the generalized Maxwell equations. As we noted, action \rf{1.00} is conformally invariant in the $4$-dimensional space-time. 
So, it is convenient to write the flat FLRW metric $\tilde g^{(0)}$ in the conformally flat form $\tilde g^{(0)}_{\mu\nu}=a^2\eta_{\mu \nu}$, where $\eta_{\mu\nu}$ is the Minkowski metric. Using the standard definition of the electromagnetic field tensor, $F_{\mu \nu}$, we obtain the complete set of Maxwell equations in vacuum, \begin{eqnarray} {\bf \nabla} \cdot {\bf B}&=&0\, , \label{divB} \\ {\bf \nabla} \cdot {\bf E}&=& \frac{{\mathcal D}\kappa_0}{1- {\mathcal D}\kappa_0 \psi} ({\bf \nabla} \psi \cdot {\bf E})\, , \label{divE} \\ {\bf \nabla} \times {\bf B}&=&\frac{\partial {\bf E}}{\partial \eta} - \frac{{\mathcal D} {\kappa_0\dot\psi}}{1- {\mathcal D}\kappa_0 \psi} {\bf E} \nonumber \\ &+& \frac{{\mathcal D}\kappa_0}{1- {\mathcal D}\kappa_0 \psi} [{\bf \nabla} \psi \times {\bf B}]\, , \label{rotB} \\ {\bf \nabla} \times {\bf E}&=&-\frac{\partial {\bf B}}{\partial \eta}\, , \label{rotE} \end{eqnarray} where all operations are performed in the Minkowski space-time, $\eta$ denotes conformal time related to physical time $t$ via $dt = a(\eta)d\eta$, and an overdot represents a derivative with respect to conformal time $\eta$. Eqs. \rf{divB} and \rf{rotE} correspond to the Bianchi identity and, since it is preserved, keep their usual forms. Eqs. \rf{divE} and \rf{rotB} are modified due to the interaction between photons and gravexcitons ($\propto \mathcal D \kappa_0 \psi$). These modifications have a simple physical meaning: the interaction between photons and the scalar field $\psi$ acts as an effective electric charge $e_{eff}$. This effective charge is proportional to the scalar product of the $\psi$-field gradient and the ${\bf E}$ field, and it vanishes for a homogeneous $\psi$ field. The modification of Eq. (\ref{rotB}) corresponds to an effective current ${\bf J}_{eff}$, which depends on both the electric and magnetic fields.
This effective current is determined by the variations of the $\psi$ field in time ($\dot\psi$) and space ($\nabla \psi$). In the case of a homogeneous $\psi$ field the effective current is still present and LV takes place. The modified Maxwell equations are conformally invariant. To account for the expansion of the Universe we rescale the field components as ${\bf{B, E}} \rightarrow a^2\, {\bf{B, E}}$ \cite{grasso}. To obtain a dispersion relation for photons, we use the Fourier transform between position and wavenumber spaces, \begin{eqnarray} {\bf F}({\bf k}, \omega) &=& \int \int d \eta ~ d^3\!x \, e^{-i(\omega \eta - {\bf k} \cdot {\bf x})} {\bf F}({\bf x}, \eta )\, , \nonumber \\ {\bf F}({\bf x}, \eta ) &=& \frac{1}{(2\pi)^4}\int \int {d\omega}~ {d^3\!k} e^{i(\omega \eta - {\bf k}\cdot {\bf x})} {\bf F}({\bf k}, \omega )\, . \label{field1} \end{eqnarray} Here, ${\bf F}$ is a vector function describing either the electric or the magnetic field, $\omega$ is the angular frequency of the electromagnetic wave measured today, and ${\bf k}$ is the wave-vector. We assume that the field $\psi$ is an oscillatory field with frequency $\omega_{\psi}$ and momentum $\bf q$, so $ \psi({\bf x}, \eta )=Ce^{i(\omega_\psi \eta - {\bf q} \cdot {\bf x})}\, ,\quad C=\mbox{\rm const}\,$. Eq. (\ref{divB}) implies ${\bf B} \perp {\bf k}$. Without loss of generality, and for simplicity of description, we assume that the wave-vector ${\bf k}$ is oriented along the ${\bf z}$ axis. Using Eq. \rf{rotE} we get ${\bf E} \perp {\bf B}$. A linearly polarized wave can be expressed as a superposition of left (L, $-$) and right (R, $+$) circularly polarized (LCP and RCP) waves. Using the polarization basis of Sec.~1.1.3 of Ref.~\cite{varshalovich89}, we define $E^\pm =(E_x \pm i E_y)/\sqrt{2}$. Rewriting Eqs. (\ref{divB}) - \rf{rotE} in components,{\footnote{We have obtained a system of 6 equations with respect to the 6 components of the vectors ${\bf E}$ and ${\bf B}$.
This system has non-trivial solutions only if its determinant vanishes. From this condition we get the dispersion relation. The Faraday rotation effect is absent if the matrix has a diagonal form. }} for the LCP and RCP waves we get, \begin{equation} (1-n_{+}^2)E^{+}=0, ~~~~~~~~~~~(1-n_{-}^2)E^{-}=0\, , \label{dispersion1} \end{equation} where $n_{+}$ and $n_-$ are the refractive indices for the RCP and LCP electromagnetic waves \begin{equation} n_+^2= \frac{k^2 \left[1-{\mathcal D} \kappa_0 \psi (1+q_{z}/k)\right]}{\omega^2\left[1-{\mathcal D} \kappa_0\psi(1+ \omega_\psi /\omega)\right]}= n_-^2\, . \label{n-} \end{equation} When Lorentz invariance (LI) is preserved, electromagnetic waves propagating in vacuum have $n_+=n_-=n=k/\omega \equiv 1$. For electromagnetic waves propagating in a magnetized plasma, $k/\omega \neq 1$, and the difference between the LCP and RCP refractive indices describes the Faraday rotation effect, $\alpha \propto \omega (n_+ - n_-)$ \cite{krall}. In the considered model, since $n_+=n_-$, the rotation effect is absent, but the propagation speed of electromagnetic waves in vacuum differs from the speed of light $c$ (see also Ref. \cite{Cantcheff:2004dn} for LV induced by the coupling of the electromagnetic field to another generic field). This difference implies a propagation time-delay effect, $\Delta t = \Delta l (1-\partial k/\partial \omega)$ ($\Delta l$ is the propagation distance), where $\Delta t$ is the difference between the photon travel time and that of a ``photon'' which travels at the speed of light $c$. Here, $t$ is the physical synchronous time. This formula does not take into account the evolution of the Universe. However, it is easy to show that the effect of the Universe expansion is negligibly small.
Solving the dispersion relation as a quadratic equation, we obtain \be{dispersion} \frac{\partial k}{\partial \omega}\simeq \pm\left\{1+\frac{1}{2}\left[\frac{\omega_{\psi}^2-q_z^2}{4\omega^2}\right]({\mathcal D} \kappa_0\psi)^2\right\}\, , \end{equation} where the $\pm$ signs correspond to forward- and backward-propagating photons, respectively. The modified inverse group velocity \rf{dispersion} shows that the LV effect can be measured if we know the gravexciton frequency $\omega_{\psi}$, the $z$-component of the momentum, $q_z$, and the amplitude $\psi$. For our estimates, we assume that $\psi$ is an oscillatory field satisfying (in the local Lorentz frame) the dispersion relation $\omega_\psi^2 = m_\psi^2 + {\bf q}^2$, where $m_{\psi}$ is the gravexciton mass\footnote{To get the physical values of the corresponding parameters we should rescale them by the scale factor $a$.}. Unfortunately, we do not have any information concerning the parameters of gravexcitons (some estimates can be found in \cite{GSZ,iwara}). Thus, we intend to use possible LV effects (supposing they are caused by the interaction between photons and gravexcitons) to set limits on the gravexciton parameters. For example, we can easily get the following estimate for the upper limit of the amplitude of the gravexciton oscillations: \be{limit} |\psi| \approx \frac{1}{\sqrt{\pi}\, \mathcal D}\, \sqrt{\left|\frac{\Delta t}{\Delta l}\right|}\, \frac{\omega}{m_{\psi}}\, M_{Pl}\, , \end{equation} where for $\omega$ and $m_{\psi}$ we can use their physical values. In the case of gamma-ray bursts (GRBs) with $\omega \sim 10^{21}\div 10^{22}$Hz $\sim 10^{-4}\div 10^{-3}$GeV and $\Delta l \sim 3\div 5\times 10^{9}$y $\sim 10^{17}$sec, the typical upper limit for the time delay is $\Delta t \sim 10^{-4}$sec \cite{MP06}. For these values the upper limit on the amplitude of the gravexciton oscillations is{\footnote{We thank R.
Lehnert for pointing out that, in addition to the time-delay effect, the Cherenkov effect could be used to constrain the coupling strength between the electromagnetic field and the $\psi$ field \cite{cherenkov}.}} \be{limit2} \left|\kappa_0\psi \right| \approx \frac{10^{-13}\mbox{GeV}}{m_{\psi}}\, . \end{equation} This estimate shows that our approximation $\kappa_0 \psi <1$ works for gravexciton masses $m_{\psi}>10^{-13}$GeV. Future measurements of the time-delay effect for GRBs at frequencies $\omega \sim 1-10$GeV would significantly strengthen this limit, up to $m_{\psi} > 10^{-9}$GeV. On the other hand, Cavendish-type experiments \cite{dvali,cavendish} exclude fifth-force particles with masses $m_{\psi} \lesssim 1/(10^{-2}\mbox{cm}) \sim 10^{-12}$GeV, which is rather close to our lower bound for the $\psi$-field masses. Accordingly, we slightly shift the considered lower mass limit to $m_{\psi} \geq 10^{-12}\mbox{GeV}$. These masses are considerably higher than the mass corresponding to the equality between the energy densities of matter and radiation (matter/radiation equality), $m_{eq}\sim H_{eq}\sim 10^{-37}$GeV, where $H_{eq}$ is the Hubble ``constant'' at matter/radiation equality. This means that such $\psi$-particles start to oscillate during the radiation dominated epoch (see appendix). Another bound on the $\psi$-particle masses comes from the condition of their stability. With respect to the decay $\psi \to \gamma \gamma$, the life-time of $\psi$-particles is $\tau \sim (M_{Pl}/m_{\psi})^3t_{Pl}$ \cite{GSZ}, and stability requires that the decay time be greater than the age of the Universe. Accordingly, we consider light gravexcitons with masses $m_{\psi} \le 10^{-21} M_{Pl} \sim 10^{-2}\mbox{GeV} \sim 20 m_e$ (where $m_e$ is the electron mass). An additional restriction arises from the condition that such cosmological gravexcitons should not overclose the observable Universe.
This reads $m_{\psi} \lesssim m_{eq}(M_{Pl}/\psi_{in})^4$, which implies the following restriction on the amplitude of the initial oscillations: $\psi_{in}\lesssim \left(m_{eq}/m_{\psi}\right)^{1/4}M_{Pl} \ll M_{Pl}$ \cite{iwara}. Thus, for the range of masses $10^{-12}\mbox{GeV}\leq m_{\psi}\leq 10^{-2}\mbox{GeV}$, we obtain respectively $\psi_{in}\lesssim 10^{-6}M_{Pl}$ and $\psi_{in}\lesssim 10^{-9}M_{Pl}$. According to Eq. \rf{a.3}, we can also estimate the amplitude of the oscillations of the considered gravexcitons at the present time. Together with the non-overcloseness condition, we obtain from this expression that $|\kappa_0\psi| \sim 10^{-43}$ for $m_{\psi}\sim 10^{-12}$GeV and $\psi_{in}\sim 10^{-6}M_{Pl}$, and $|\kappa_0\psi| \sim 10^{-53}$ for $m_{\psi}\sim 10^{-2}$GeV and $\psi_{in}\sim 10^{-9}M_{Pl}$. Obviously, this is much less than the upper limit \rf{limit2}. Note that, as we mentioned above, gravexcitons with masses $m_{\psi}\gtrsim 10^{-2}$GeV can start to decay at the present epoch. However, taking into account the estimate $|\kappa_0\psi| \sim 10^{-53}$, we can easily see that their energy density $\rho_{\psi} \sim (|\kappa_0\psi|^2/8\pi)M_{Pl}^2m_{\psi}^2\sim 10^{-55}\mbox{g}/\mbox{cm}^3$ is much less than the present energy density of the radiation $\rho_{\gamma}\sim 10^{-34}\mbox{g}/\mbox{cm}^3$. Thus, $\rho_{\psi}$ contributes negligibly to $\rho_{\gamma}$. Otherwise, gravexcitons with masses $m_{\psi} \gtrsim 10^{-2}$GeV would be observed at the present time, which is obviously not the case. Additionally, it follows from Eq. (42) in Ref. \cite{GSZ} that, to avoid the problem of the fine structure constant variation, the amplitude of the initial oscillations should satisfy the condition $\psi_{in} \lesssim 10^{-5}M_{Pl}$, which obviously agrees with our upper bound $\psi_{in} \lesssim 10^{-6}M_{Pl}$. Summarizing, we have shown that LV effects can give additional restrictions on the parameters of gravexcitons.
First, we found that gravexcitons should not be lighter than $10^{-13}$GeV. This is very close to the limit following from the fifth-force experiments. Moreover, experiments with GRBs at frequencies $\omega > 1$GeV can shift this lower limit significantly, making it much stronger than the fifth-force estimates. Together with the non-overcloseness condition, this estimate leads to an upper limit on the amplitude of the initial gravexciton oscillations: it should not exceed $\psi_{in}\lesssim 10^{-6}M_{Pl}$. Thus, the bound on the initial amplitude obtained from the fine structure constant variation is one order of magnitude weaker than ours, even in the limiting case of the gravexciton masses. Increasing the gravexciton mass makes our limit stronger. Our estimates for the present-day amplitude of the gravexciton oscillations, following from the limitations obtained above, show that we cannot use the LV effect for the direct detection of gravexcitons. Nevertheless, the obtained bounds can be useful for astrophysical and cosmological applications. For example, let us suppose that gravexcitons with masses $m_{\psi}>10^{-2}$GeV are produced in some regions during the late stages of the Universe's expansion and GRB photons travel to us through these regions. Then, Eq. \rf{a.3} is not valid for such gravexcitons of astrophysical origin, and the only upper limit on the amplitude of their oscillations (in these regions) follows from Eq. \rf{limit2}. In the case of TeV masses we get $|\kappa_0\psi|\sim 10^{-16}$. If GRB photons have frequencies up to 1 TeV, $\omega \sim 1$TeV, then this estimate is increased by 6 orders of magnitude. \bigskip {\bf Acknowledgments} We thank G. Dvali, G. Gabadadze, A. Gruzinov, G. Melikidze, B. Ratra, and A. Starobinsky for stimulating discussions. T. K. and A. Zh. acknowledge the hospitality of the Abdus Salam International Centre for Theoretical Physics (ICTP), where this work was started. A.Zh.
would like to thank the Theory Division of CERN for its kind hospitality during the final stage of this work. T.K. acknowledges partial support from the INTAS 061000017-9258 and Georgian NSF ST06/4-096 grants. \renewcommand{\theequation}{A.\arabic{equation}} \subsection{Appendix: Dynamics of Light Gravexcitons} \setcounter{equation}{0} In this appendix we briefly summarize the main properties of the light gravexcitons necessary for our investigation. A more detailed description can be found in Refs. \cite{GSZ,iwara}. The effective equation of motion for a massive cosmological gravexciton\footnote{We have seen that the interaction between gravexcitons and ordinary matter (in our case, 4D photons) is suppressed by the Planck scale. Thus, gravexcitons are weakly interacting massive particles (WIMPs).} is \be{a.1} \frac{d^2}{dt^2} \psi + (3H+\Gamma)\frac{d}{dt} \psi + m_\psi^2 \psi= 0\, , \end{equation} where $H\sim 1/t$ and $\Gamma\sim m_{\psi}^3/M_{Pl}^2$ are the Hubble parameter and the decay rate ($\psi \to \gamma \gamma$), respectively. This equation shows that at times when the Hubble parameter drops below the gravexciton mass, $H \lesssim m_{\psi}$, the scalar field begins to oscillate (i.e. the time $t_{in}\sim H^{-1}_{in}\sim 1/m_{\psi}$ roughly indicates the beginning of the oscillations): \be{a.2} \psi \approx C B(t) \cos (m_{\psi}t +\delta)\, . \end{equation} We consider cosmological gravexcitons with masses $10^{-12}\mbox{GeV}\leq m_{\psi}\leq 10^{-2}$GeV. The lower bound follows both from the fifth-force experiments and from Eq. \rf{limit2}. The upper bound follows from the demand that the life-time of these particles (with respect to the decay $\psi \to \gamma \gamma$) is larger than the age of the Universe: $\tau = 1/\Gamma \sim \left(M_{Pl}/m_{\psi}\right)^3 t_{Pl} \ge 10^{19}\mbox{sec} > t_{univ} \sim 4\times 10^{17}$ sec. Thus, we can neglect the decay processes for these gravexcitons.
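The power-law decay of the oscillation envelope implied by Eq. \rf{a.1} can be illustrated with a short numerical integration. The sketch below is ours, not part of the original analysis: it works in units $m_{\psi}=1$, neglects the (tiny) decay term $\Gamma$, assumes $H=s/t$, and checks that for $s=1/2$ the envelope falls off as $a^{-3/2}\propto t^{-3s/2}$.

```python
# Gravexciton equation of motion (a.1) in an expanding background:
#   psi'' + 3 H psi' + m^2 psi = 0,  H = s/t  (s = 1/2: radiation domination).
# In the oscillatory regime m >> H the WKB envelope decays as a^{-3/2} ~ t^{-3s/2}.
# Units with m_psi = 1; Gamma neglected; simple fixed-step RK4 integration.

def rhs(t, psi, dpsi, s=0.5, m=1.0):
    return dpsi, -3.0 * s / t * dpsi - m * m * psi

def integrate(t0, t1, psi, dpsi, n):
    h = (t1 - t0) / n
    t, samples = t0, []
    for _ in range(n):
        k1p, k1d = rhs(t, psi, dpsi)
        k2p, k2d = rhs(t + h / 2, psi + h / 2 * k1p, dpsi + h / 2 * k1d)
        k3p, k3d = rhs(t + h / 2, psi + h / 2 * k2p, dpsi + h / 2 * k2d)
        k4p, k4d = rhs(t + h, psi + h * k3p, dpsi + h * k3d)
        psi += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        dpsi += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        t += h
        samples.append((t, abs(psi)))
    return samples

samples = integrate(100.0, 1600.0, 1.0, 0.0, 150000)
early = max(a for t, a in samples if t <= 150.0)
late = max(a for t, a in samples if t >= 1550.0)
ratio = late / early   # expect roughly (1550/100)**(-3/4), i.e. about 0.13
```

The measured envelope ratio between the late and early oscillation windows reproduces the $t^{-3/4}$ scaling that appears below as the prefactor $B(t)$ for $s=1/2$.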
Additionally, it can easily be seen that these particles start to oscillate before $t_{eq}\sim H^{-1}_{eq}$, when the energy densities of matter and radiation become equal to each other (matter/radiation equality). According to the present WMAP data for the $\Lambda$CDM model, $H_{eq} \equiv m_{eq} \sim 10^{-56}M_{Pl} \sim 10^{-28}\mbox{eV}$. Thus, the considered particles have masses $m_{\psi} \gg m_{eq}$ and start to oscillate during the radiation dominated stage. They will not overclose the observable Universe if the following condition is satisfied: $m_{\psi} \lesssim m_{eq}(M_{Pl}/\psi_{in})^4$, where $\psi_{in}$ is the amplitude of the initial oscillations at the moment $t_{in}$ (see Eq. (18) in Ref. \cite{iwara}). The prefactors $C$ and $B(t)$ in Eq. \rf{a.2} for the considered light gravexcitons read $C\sim (\psi_{in}/M_{Pl})\left(M_{Pl}/m_{\psi}\right)^{3/4}$ and $B(t) \sim M_{Pl}\left(M_{Pl}t\right)^{-3s/2}$, respectively. Here, $s=1/2,2/3$ for oscillations during the radiation dominated and matter dominated stages, respectively. We are interested in the gravexciton oscillations at the present time $t=t_{univ}$. In this case $s=2/3$, and for $B(t_{univ})$ we obtain $B(t_{univ})\sim t^{-1}_{univ}\approx 10^{-61}M_{Pl}$. Thus, the amplitude of the light gravexciton oscillations at the present time reads \be{a.3} |\kappa_0\psi| \sim 10^{-60} \frac{ \psi_{in}}{M_{Pl}} \left(\frac{M_{Pl}}{m_{\psi}}\right)^{3/4}\, . \end{equation}
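The orders of magnitude quoted in the main text can be cross-checked directly from Eqs. \rf{limit}, \rf{limit2} and \rf{a.3}. The short script below is only a sanity check; the value $M_{Pl}\approx 1.2\times 10^{19}$GeV and the choice of an $O(1)$ prefactor $\mathcal D$ are our assumptions.

```python
import math

# (i) GRB time-delay bound: Eq. (limit) with kappa_0 = sqrt(8 pi)/M_Pl gives
#     |kappa_0 psi| * m_psi = (2*sqrt(2)/D) * sqrt(|dt/dl|) * omega
#     (M_Pl cancels), which should reproduce Eq. (limit2) ~ 1e-13 GeV.
D = 1.0                  # model-dependent prefactor, assumed O(1)
dt, dl = 1e-4, 1e17      # sec: time-delay upper limit and propagation distance
omega = 1e-3             # GeV: GRB photon energy ~ 1e22 Hz
bound = 2.0 * math.sqrt(2.0) / D * math.sqrt(dt / dl) * omega  # GeV

# (ii) Present-day amplitude, Eq. (a.3):
#     |kappa_0 psi| ~ 1e-60 (psi_in/M_Pl) (M_Pl/m_psi)^(3/4)
M_Pl = 1.2e19  # GeV (assumed value)

def amplitude(m_psi, psi_in_over_MPl):
    return 1e-60 * psi_in_over_MPl * (M_Pl / m_psi) ** 0.75

a_light = amplitude(1e-12, 1e-6)   # expect ~ 1e-43 for m_psi ~ 1e-12 GeV
a_heavy = amplitude(1e-2, 1e-9)    # expect ~ 1e-53 for m_psi ~ 1e-2 GeV
```

Both present-day amplitudes come out many orders of magnitude below the time-delay bound, consistent with the conclusion that the LV effect cannot be used for direct detection of cosmological gravexcitons.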
\section{Introduction} \label{SEC1} Consider the convex polynomial optimization problem \begin{equation*} \begin{array}{crl} (P_0) & \inf & f(x) \\ & s.t. & g_i(x)\leq 0,\ i=1,\ldots,m, \end{array} \end{equation*} where $f$ and the $g_i$'s are convex polynomials. The model problem $(P_0)$ admits a hierarchy of semidefinite programming (SDP) relaxations, known as the Lasserre hierarchy of SDP-relaxations. More generally, the Lasserre hierarchy of SDP relaxations is often used to solve nonconvex polynomial optimization problems with compact feasible sets \cite{Lasserre,Lasserre2}, and it has finite convergence generically, as shown recently in \cite{nie2012}. In particular, if $f$ and $g_i, i=1,2,\ldots, m$, are SOS-convex polynomials (see Definition \ref{def:SOS_convex}), then $(P_0)$ enjoys an exact SDP-relaxation in the sense that the optimal values of $(P_0)$ and its relaxation problem are equal and the relaxation problem attains its optimum under the Slater constraint qualification (\cite[Theorem 3.4]{Lasserre2}). The class of SOS-convex polynomials is a numerically tractable subclass of convex polynomials, and it contains convex quadratic functions and convex separable polynomials \cite{Parrilo,HeNie10}. The SOS-convexity of a polynomial can be numerically checked by solving a semidefinite programming problem. The exact SDP-relaxation of a convex optimization problem is a highly desirable feature because SDP problems can be efficiently solved \cite{BV,las-hand}. However, the data of real-world convex optimization problems are often uncertain (that is, they are not known exactly at the time of the decision) due to estimation errors, prediction errors or lack of information. Recently, the robust optimization approach has emerged as a powerful way to treat optimization under data uncertainty.
It is known that a robust convex quadratic optimization problem under ellipsoidal data uncertainty enjoys an exact SDP relaxation, as it can be equivalently reformulated as a semidefinite programming problem (see \cite{robust}). In the same vein, Goldfarb and Iyengar \cite{goldfab} have shown that robust convex quadratic optimization problems under restricted ellipsoidal data uncertainty can be equivalently reformulated as second-order cone programming problems. Unfortunately, an exact SDP relaxation may fail for a robust convex (not necessarily quadratic) polynomial optimization problem under restricted ellipsoidal data uncertainty (see Example \ref{ex:3.1}). This raises the fundamental question: Do some classes of robust convex (not necessarily quadratic) polynomial optimization problems possess exact SDP relaxations? This question has motivated us to study SOS-convex polynomial optimization problems under uncertainty. In this paper, we study the SOS-convex polynomial optimization problem $(P_0)$ in the face of data uncertainty. This model problem under data uncertainty in the constraints can be captured by the model problem \begin{equation*} \begin{array}{crl} (UP_0) & \inf & f(x) \\ & s.t. & g_i(x,v_i)\leq 0,\ i=1,\ldots,m, \end{array} \label{primal} \end{equation*} where $v_i$ is an uncertain parameter and $v_i \in \mathcal{V}_i$ for some compact uncertainty set $\mathcal{V}_i \subset \mathbb{R}^{q_i}$, $q_i \in \mathbb{N}$, $f:\mathbb{R}^n\rightarrow\mathbb{R}$ is a SOS-convex polynomial and $g_i:\mathbb{R}^n\times\mathbb{R}^{q_i}\rightarrow\mathbb{R}$, $i=1,\ldots,m$, are functions such that for each $v_i\in\mathbb{R}^{q_i}$, $g_i(\cdot,v_i)$ is a SOS-convex polynomial. As solutions to convex optimization problems are generally sensitive to data uncertainty, even a small uncertainty in the data can affect the quality of the optimal solution of a convex optimization problem, making it far from an optimal solution and unusable from a practical viewpoint.
Consequently, how to find robust optimal solutions that are immunized against data uncertainty has become an important question in convex optimization and has recently been extensively studied in the literature (see \cite{robust,BN1,jl-siam,jeya-boris,jeya-li-wang}). Following the robust optimization approach, the robust counterpart of $(UP_0)$, which finds a robust solution of $(UP_0)$ that is immunized against all possible uncertain scenarios, is given by \begin{equation*} \begin{array}{crl} (P) & \inf & f(x) \\ & s.t. & g_i(x,v_i)\leq 0,\ \forall v_i\in \mathcal{V}_i,i=1,\ldots,m, \end{array} \label{robust} \end{equation*} and is called a \textit{robust SOS-convex polynomial optimization problem} or, simply, a \textit{robust SOSCP}. In the robust counterpart, the uncertain inequality constraints are enforced for all realizations of the uncertainties $v_i\in \mathcal{V}_i, \ i=1,\ldots,m$. A sum of squares (SOS) relaxation problem of $(P)$ with degree $k$ is the model problem \begin{equation*} \begin{array}{cr} (D_k) & \sup\limits_{\mu\in \mathbb{R}, v_i\in \mathcal{V}_i, \lambda_i\geq 0} \left\{\mu\ :\ f(\cdot) +\sum\limits_{i=1}^{m}\lambda_i g_i(\cdot,v_i)-\mu \in \Sigma^2_k \right\} \\ \end{array} \label{relaxed} \end{equation*} where $\Sigma^2_k$ denotes the set of all sum of squares polynomials with degree no larger than $k$. The model $(D_k)$ is, in fact, the sum of squares relaxation of the robust Lagrangian dual, examined recently in \cite{beck,jeya-goberna,jl-siam,jeya-boris,jeya-li-wang}. The following contributions are made in this paper to robust convex optimization. \smallskip I. We first derive a complete characterization of the solution of a robust SOS-convex polynomial optimization problem $(P)$ in terms of sums of squares polynomials under a normal cone constraint qualification that is shown to be the weakest condition for the characterization.
We show that the sum of squares characterization can be numerically checked for some classes of uncertainty sets by solving a semidefinite programming problem. \smallskip II. We establish that the value of a robust SOS-convex optimization problem (P) can be found by solving a sum-of-squares programming problem. This is done by proving an exact sum-of-squares relaxation of the robust SOS-convex optimization problem $(P)$. \smallskip III. Although the sum of squares relaxation problem $(D_k)$ is NP-hard for general classes of uncertainty sets, we prove that, for the classes of polytopic and ellipsoidal uncertainty sets, the relaxation problem can equivalently be re-formulated as a semidefinite programming problem. This shows that these uncertainty sets, which allow second-order cone re-formulations of robust quadratically constrained optimization problems \cite{goldfab}, permit exact SDP relaxations for a broad class of robust SOS-convex optimization problems. The relaxation problem provides an alternative formulation of an exact second-order cone relaxation for the robust quadratically constrained optimization problem studied in \cite{goldfab}. \medskip \noindent The outline of the paper is as follows. Section 2 presents necessary and sufficient conditions for robust optimality and derives an SOS-relaxation for robust SOS-convex optimization problems. Section 3 provides numerically tractable classes of robust convex optimization problems by presenting exact SDP-relaxations. \section{Solutions of Robust SOSCPs} \label{SEC2} We begin with some definitions and preliminaries on polynomials. We say that a real polynomial $f$ is a sum-of-squares polynomial \cite{Laurent_survey} if there exist real polynomials $f_j$, $j=1,\ldots,r$, such that $f=\sum_{j=1}^rf_j^2$. The set consisting of all sum of squares real polynomials is denoted by $\Sigma^2$. Moreover, the set consisting of all sum of squares real polynomials with degree at most $d$ is denoted by $\Sigma^2_d$.
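To make the sum-of-squares notion concrete, consider the toy polynomial $p(x)=x^4+2x^2+1=(x^2+1)^2$: it is a sum of squares precisely because $p(x)=z^TQz$ for the monomial vector $z=(1,x,x^2)^T$ and some positive semidefinite Gram matrix $Q$. In practice such a $Q$ is found by semidefinite programming; the sketch below (ours, for illustration only) simply exhibits one by hand and verifies it numerically.

```python
import numpy as np

# Gram-matrix view of an SOS polynomial:
#   p(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2 = z^T Q z,  z = (1, x, x^2)^T,
# with Q positive semidefinite (here Q = v v^T for v = (1, 0, 1)^T).
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

psd = bool(np.all(np.linalg.eigvalsh(Q) >= -1e-12))

# spot-check the identity p(x) = z^T Q z on a grid of points
xs = np.linspace(-2.0, 2.0, 9)
z = np.vstack([np.ones_like(xs), xs, xs**2])   # monomial basis evaluated at xs
vals = np.einsum('in,ij,jn->n', z, Q, z)       # z^T Q z at each point
target = xs**4 + 2.0 * xs**2 + 1.0
```

Searching over all symmetric $Q$ satisfying the coefficient-matching constraints, subject to $Q\succeq 0$, is exactly the semidefinite program behind the numerical SOS checks used throughout the paper.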
The space of all real polynomials on $\mathbb{R}^n$ is denoted by $\mathbb{R}[x]$ and the set of all $n\times r$ matrix polynomials is denoted by $\mathbb{R}[x]^{n \times r}$. \begin{definition} {\bf (SOS matrix polynomial)} We say a matrix polynomial $F \in \mathbb{R}[x]^{n \times n}$ is a SOS matrix polynomial if $F(x)=H(x)H(x)^T$ where $H(x) \in \mathbb{R}[x]^{n \times r}$ is a matrix polynomial for some $r \in \mathbb{N}$. \end{definition} \begin{definition} \label{def:SOS_convex} {\bf (SOS-Convex polynomial} {\cite{HeNie10})} A real polynomial $f$ on $\mathbb{R}^n$ is called \textit{SOS-convex} if the Hessian matrix function $F:x\mapsto \nabla^2 f(x)$ is a SOS matrix polynomial. \end{definition} Clearly, a SOS-convex polynomial is convex. However, the converse is not true, that is, there exists a convex polynomial which is not SOS-convex \cite{Parrilo}. It is known that any convex quadratic function and any convex separable polynomial is a SOS-convex polynomial. Moreover, a SOS-convex polynomial can be non-quadratic and non-separable. For instance, $f(x)=x_1^8+x_1^2+x_1x_2+x_2^2$ is a SOS-convex polynomial (see \cite{HeNie10}) which is non-quadratic and non-separable. \begin{lemma}[{\bf SOS-Convexity \& sum-of-squares} {\cite[Lemma 8]{HeNie10}}] \label{polysos} Let $f$ be a SOS-convex polynomial on $\mathbb{R}^n$. If $f(u)=0$ and $\nabla f(u)=0$ for some $u\in\mathbb{R}^n$, then $f$ is a sum-of-squares polynomial. \end{lemma} The following existence result for solutions of a convex polynomial optimization problem will also be useful for our later analysis. \begin{lemma}[{\bf Solutions of convex polynomial optimization} {\cite[Theorem 3]{BeKl02}}] \label{minattain} Let $f_0,f_1,\ldots,f_m$ be convex polynomials on $\mathbb{R}^n$ and let $C:=\left\{x \in \mathbb{R}^n : f_i(x) \leq 0, i=1,\ldots,m\right\}$. If $\inf\limits_{x\in C}f_0(x)>-\infty$ then $\operatorname{argmin}\limits_{x\in C}f_0(x) \neq \emptyset$. 
\end{lemma} We note that it is possible to reduce a convex polynomial optimization problem to a quadratic optimization problem by introducing new variables. For example, $\min_{x \in \mathbb{R}}\{x^2:x^4 \le 1\}$ can be converted to the quadratic optimization problem $\min_{(x,t) \in \mathbb{R}^2}\{x^2:t^2 \le 1, t=x^2\}$. However, introducing new variables results in a problem which may not satisfy the required convexity (here, the constraint $t=x^2$ is not convex). Recall that for a convex set $A\subset\mathbb{R}^{n}$, the normal cone of $A$ at $x\in A$ is given by $N_A(x):=\left\{v\in \mathbb{R}^n : v^T(y-x)\leq 0,\,\forall y\in A\right\}.$ Let $F:=\left\{x : g_i(x,v_i)\leq 0\ \forall v_i\in \mathcal{V}_i, i=1,\ldots,m\right\} \neq \emptyset$. We say that the \emph{normal cone condition} holds for $F$ at $x\in F$ provided that $$ N_F(x) = \left\{\sum_{i=1}^{m} \lambda_i \nabla_x g_i(x,v_i) : \lambda_i\geq 0, v_i\in \mathcal{V}_i, \lambda_i g_i(x,v_i)=0\right\},$$ where $\nabla_x$ denotes the gradient with respect to the variable $x$. It is known from \cite{jl-siam} that the normal cone condition is guaranteed by the following robust Slater condition: $\left\{x\in\mathbb{R}^n : g_i(x,v_i)< 0, \ \forall v_i\in \mathcal{V}_i, i=1,\ldots,m\right\} \neq \emptyset.$ On the other hand, the normal cone condition is, in general, weaker than the robust Slater condition. In the following theorem we first prove that the normal cone condition guarantees a robust solution characterization involving sums-of-squares representations for robust SOSCPs. \begin{theorem}[{\bf Sum-of-squares characterization of solutions}] \label{alternative2} Let $f:\mathbb{R}^n\rightarrow\mathbb{R}$ be a SOS-convex polynomial and let $g_i:\mathbb{R}^n\times\mathbb{R}^{q_i}\rightarrow\mathbb{R}$, $i=1,\ldots,m$, be functions such that for each $v_i\in\mathbb{R}^{q_i}$, $g_i(\cdot,v_i)$ is a SOS-convex polynomial with degree at most $d_i$.
Let $\mathcal{V}_i\subset \mathbb{R}^{q_i}$ be compact and $F:=\left\{x\in\mathbb{R}^n : g_i(x,v_i)\leq 0\ \forall v_i\in \mathcal{V}_i, i=1,\ldots,m\right\} \neq \emptyset$. Suppose that ${\rm argmin}_{x\in F}f(x)\neq\emptyset$ and the normal cone condition holds at $x^*\in F$. Then, $x^*$ is a minimizer of $\min_{x \in F}f(x)$ if and only if $(\exists\ \bar{v}_i\in \mathcal{V}_i, \, \lambda_i \in \mathbb{R}_+, i=1,\ldots,m, \, \sigma_0\in \Sigma^2_{k_0})$ $(\forall \, x \in \mathbb{R}^n)$ \begin{equation}\label{eq:py}f(x)-f(x^*)+\sum_{i=1}^{m}\lambda_i g_i\left(x,\bar{v}_i\right) = \sigma_0(x),\end{equation} where $k_0$ is the smallest even number such that $k_0 \geq \max\left\{\deg f, \max_{1 \leq i \leq m}d_i\right\}$. \end{theorem} \begin{proof} [(if part)] This follows easily from the fact that $f(x)-f(x^*)=\sigma_0(x)-\sum_{i=1}^{m}\lambda_i g_i\left(x,\bar{v}_i\right) \geq 0$ for all $x\in F$, as $\sigma_0(x) \ge 0$ and $\lambda_i \ge 0$. \smallskip [(only if part)] If $f(x^*) = \min_{x\in F}f(x)$, then, by the optimality condition of convex optimization, $-\nabla f(x^*) \in N_F(x^*)$. By the normal cone condition, there exist $\bar{v}_i\in \mathcal{V}_i$, $\lambda_i\geq 0$, $i=1,\ldots,m$, with $\lambda_i g_i(x^*,\bar{v}_i) = 0$, such that $-\nabla f(x^*) = \sum_{i=1}^{m} \lambda_i \nabla_x g_i(x^*,\bar{v}_i)$. Let $L(x):=f(x)-f(x^*)+\sum_{i=1}^{m}{\lambda_i g_i(x,\bar{v}_i)}$, for $x\in\mathbb{R}^n$. Observe that $L(x^*)=0$ and $\nabla L(x^*)=0$. Clearly, $L$ is SOS-convex since $f$ and $g_i(\cdot,\bar{v}_i)$ are all SOS-convex. So, Lemma \ref{polysos} guarantees that $L$ is a sum of squares polynomial. Moreover, the degree of $L$ is not larger than $k_0$. So, there exists $\sigma_0 \in\Sigma^2_{k_0}$ such that $f(x)-f(x^*)+\sum_{i=1}^{m}{\lambda_i g_i\left(x,\bar{v}_i\right)} = \sigma_0(x) \quad \forall x\in\mathbb{R}^n.$ Thus, the conclusion follows.
\end{proof} Next, we show that the normal cone condition is indeed the weakest qualification for the robust solution characterization, in the sense that, if the normal cone condition fails at some feasible point, then there exists a SOS-convex real polynomial $f$ for which the robust solution characterization fails. \begin{theorem}[{\bf Weakest qualification for solution characterization}] \label{charact2} Let $g_i:\mathbb{R}^n\times\mathbb{R}^{q_i}\rightarrow\mathbb{R}$, $i=1,\ldots,m$, be functions such that for each $v_i\in\mathbb{R}^{q_i}$, $g_i(\cdot,v_i)$ is a SOS-convex polynomial with degree at most $d_i$. Let $F:=\left\{x\in\mathbb{R}^n : g_i(x,v_i)\leq 0\ \forall v_i\in \mathcal{V}_i, i=1,\ldots,m\right\} \neq \emptyset$ and let $\mathcal{V}_i\subset \mathbb{R}^{q_i}$ be compact. Then, the following statements are equivalent: \begin{enumerate} \item[{\rm (i)}] For each SOS-convex real polynomial $f$ on $\mathbb{R}^n$ with ${\rm argmin}_{x\in F}f(x)\neq\emptyset$, {$f(x^*) = \min\limits_{x\in F}f(x)\ \Leftrightarrow \ \left[\exists\,\bar{v}_i\in \mathcal{V}_i, \lambda_i\geq 0 : f(\cdot)-f(x^*)+\sum_{i=1}^{m}\lambda_i g_i\left(\cdot,\bar{v}_i\right) \in \Sigma^2_{k_0} \right]$} where $k_0$ is the smallest even number such that $k_0 \geq \max\left\{\deg f, \max_{1 \leq i \leq m}d_i\right\}$. \item[{\rm (ii)}] $N_F(x) = \left\{\sum_{i=1}^{m} \lambda_i \nabla_x g_i(x,v_i) : \lambda_i\geq 0, v_i\in \mathcal{V}_i, \lambda_i g_i(x,v_i)=0\right\}$, for all $x\in F$. \end{enumerate} \end{theorem} \begin{proof} It suffices to show that (\emph{i}) $\Rightarrow$ (\emph{ii}), since the converse statement has already been shown in Theorem \ref{alternative2}. In fact, we just need to show that $ N_F(x) \subset \{\sum_{i=1}^{m} \lambda_i \nabla_x g_i(x,v_i) : \lambda_i\geq 0, v_i\in \mathcal{V}_i, \lambda_i g_i(x,v_i)=0\},$ for any $x\in F$, since the converse inclusion always holds. Let $x^*\in F$ be arbitrary. If $ w\in N_F(x^*)$ then $-w^T(x-x^*)\geq 0$ for all $x\in F$.
Let $f(x):=-w^T(x-x^*)$. Then, $\min_{x\in F}f(x) = f(x^*) = 0$. Since any affine function is SOS-convex, applying (\emph{i}), there exist $\bar{v}_i\in \mathcal{V}_i$, $\lambda_i\geq 0$, for $i=1,\ldots,m$, and $\sigma_0\in \Sigma^2_{k_0}$ such that, for all $x\in \mathbb{R}^n$, \begin{equation} -w^T(x-x^*) + \sum_{i=1}^{m}\lambda_i g_i\left(x,\bar{v}_i\right) = \sigma_0(x) \geq 0. \label{equ2} \end{equation} Letting $x=x^*$, we see that $\sum_{i=1}^{m}\lambda_i g_i\left(x^*,\bar{v}_i\right) \geq 0$. This together with $x^*\in F$ implies $\lambda_i g_i\left(x^*,\bar{v}_i\right) = 0$, $i=1,\ldots,m$. So, \eqref{equ2} implies that $\sigma_0(x^*)=0$ and $0=\nabla\sigma_0(x^*) = -w+\sum_{i=1}^{m}\lambda_i \nabla_x g_i\left(x^*,\bar{v}_i\right)$. Then, $w\in \{ \sum_{i=1}^{m} \lambda_i \nabla_x g_i(x^*,v_i) : \lambda_i\geq 0, v_i\in \mathcal{V}_i, \lambda_i g_i(x^*,v_i)=0 \}.$ Thus, the conclusion follows. \end{proof} It is worth noting that the sum-of-squares condition characterizing the solution of a robust SOSCP can be numerically verified by solving semi-definite programming problems for some uncertainty sets. We illustrate this with two simple examples: (1) SOS-convex constraints with finite uncertainty sets; (2) quadratic constraints with spectral norm data uncertainty sets. The numerical tractability of more general classes of robust SOS-convex optimization problems under sophisticated classes of uncertainty sets will be discussed in Section 3. \subsection*{SOS-convex constraints and finite uncertainty sets} Suppose that $\mathcal{V}_i=\left\{v_i^1,\ldots,v_i^{s_i}\right\}$ for any $i\in\left\{1,\ldots,m\right\}$. Then, the robust SOS-convex polynomial optimization problem takes the form \begin{equation*} (P_1) \ \ \ \min_{x \in \mathbb{R}^n} \{f(x): g_i(x,v_i^j)\leq 0,\ \forall j=1,\ldots,s_i,\ \forall i=1,\ldots,m\}, \end{equation*} and the minimum is attained by virtue of Lemma \ref{minattain}.
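For convex quadratic data, the sum-of-squares certificate of Theorem \ref{alternative2} for $(P_1)$ amounts to a linear matrix inequality: a quadratic polynomial is a sum of squares exactly when its Gram matrix is positive semidefinite. The following is a minimal numerical sketch of this check; the instance, the candidate minimizer $x^*=0$ and the multipliers $\lambda$ are illustrative hand-picked choices (in general one would search for the multipliers with an SDP solver), not part of the development above.

```python
import numpy as np

# Toy instance of (P_1): f(x) = x^2 and g(x, v) = v*x - 1 with the finite
# uncertainty set V = {1, -1}, so F = [-1, 1] and x* = 0 with f(x*) = 0.
# Each quadratic p is stored in homogenized form p(x) = z^T M z, z = (x, 1).
M_f = np.array([[1.0, 0.0],
                [0.0, 0.0]])                     # f(x) = x^2
M_g = [np.array([[0.0,  0.5], [ 0.5, -1.0]]),    # g(x,  1) =  x - 1
       np.array([[0.0, -0.5], [-0.5, -1.0]])]    # g(x, -1) = -x - 1

# Certificate: f - f(x*) + sum_j lam_j g_j must be a sum of squares, i.e.
# its Gram matrix must be positive semidefinite (here f(x*) = 0).
lam = np.array([0.0, 0.0])       # illustrative multipliers; lam = 0 works here
gram = M_f + lam[0] * M_g[0] + lam[1] * M_g[1]
min_eig = float(np.linalg.eigvalsh(gram).min())
print(min_eig >= -1e-9)          # True: x* = 0 is certified robust-optimal
```

A failed check (a negative eigenvalue for every admissible choice of multipliers) would mean the candidate point is not robust-optimal.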
Let $x^*$ be a feasible solution of $(P_1)$ and suppose that the normal cone condition is satisfied at $x^*$. Let $k_0$ be the smallest even number such that $k_0 \geq \max_{1 \le i \le m, 1 \le j \le s_i}\{\deg f,\deg g_i(\cdot,v_i^j)\}$. In this case, (\ref{eq:py}) in Theorem \ref{alternative2} is equivalent to the condition that there exist ${\lambda}_i^j \geq 0$, $i=1,\ldots,m, j=1,\ldots,s_i$, and $\sigma_0\in \Sigma_{k_0}^2$ such that \begin{equation} \label{eq:0} f(x)-f(x^*)+\sum_{1 \le i \le m, 1 \le j \le s_i}{\lambda}_i^j g_i\left(x, {v}_i^j\right) = \sigma_0(x) \mbox{\ for all\ } x\in\mathbb{R}^n. \end{equation} Indeed, it is easy to see that (\ref{eq:py}) in Theorem \ref{alternative2} implies \eqref{eq:0}. On the other hand, \eqref{eq:0} immediately gives us that $x^*$ is a solution of $(P_1)$, which implies (\ref{eq:py}). Therefore, a solution of a SOS-convex polynomial optimization problem under finite uncertainty sets can be efficiently verified by solving a semidefinite programming problem. \subsection*{Quadratic constraints under spectral norm uncertainty} Consider the following SOS-convex polynomial optimization problem with quadratic constraints under spectral norm uncertainty: \begin{equation*} \min_{x \in \mathbb{R}^n} \{f(x) :x^TB_ix + 2b_i^T x + \beta_i \leq 0,\ i=1,\ldots,m\}, \end{equation*} where the data $(B_i,b_i,\beta_i) \in S^n \times \mathbb{R}^n \times \mathbb{R}$, $i=1,\ldots,m$, are uncertain and belong to the spectral norm uncertainty set $$\mathcal{V}_i=\{(B_i,b_i,\beta_i) \in S^n \times \mathbb{R}^n \times \mathbb{R} : \|\left(\begin{array}{cc} B_i & b_i\\ b_i^T & \beta_i \end{array} \right)-\left(\begin{array}{cc} \overline{B}_i & \overline{b}_i\\ \overline{b}_i^T & \overline{\beta}_i \end{array} \right)\|_{\rm spec} \leq \varepsilon_i\},$$ for some $\varepsilon_i \ge 0$, $\overline{B}_i \succeq 0$, $\overline{b}_i \in \mathbb{R}^n$ and $\overline{\beta}_i \in \mathbb{R}$.
Here, $S^{n}$ denotes the space of symmetric $n\times n$ matrices and $\|\cdot\|_{\rm spec}$ denotes the spectral norm defined by $\|M\|_{\rm spec}=\sqrt{\lambda_{\max}(M^TM)}$ where $\lambda_{\max}(C)$ is the maximum eigenvalue of the matrix $C$. The corresponding robust counterpart of the above problem is \begin{equation*} \begin{array}{crl} (P_2) & \min & f(x) \\ & s.t. & x^TB_ix + 2b_i^T x + \beta_i \leq 0, \ \forall \, (B_i,b_i,\beta_i) \in \mathcal{V}_i, \ i=1,\ldots,m. \end{array} \end{equation*} Let $k$ be the smallest even number such that $k \geq \max\{\deg f,2\}$. In this case, (\ref{eq:py}) in Theorem \ref{alternative2} becomes $(\exists\,(B_i,b_i,\beta_i)\in \mathcal{V}_i,\,\lambda \in \mathbb{R}^m_+,\,\sigma_0\in\Sigma_k^2)$ $(\forall x \in \mathbb{R}^n)$ $f(x)-f(x^*)+\sum_{i=1}^{m}\lambda_i (x^TB_ix + 2b_i^T x + \beta_i ) = \sigma_0(x)$. This, in turn, is equivalent to the condition that there exist $\lambda_i \geq 0$, $i=1,\ldots,m$, and $\sigma_0\in \Sigma_k^2$ such that \begin{equation} \label{eq:001} f(x)-f(x^*)+\sum_{i=1}^{m}\lambda_i (x^T(\overline{B}_i+\varepsilon_i I_n) x + 2\overline{b}_i^T x + \overline{\beta}_i+\varepsilon_i ) = \sigma_0(x) \mbox{\ for all\ } x\in\mathbb{R}^n. \end{equation} In fact, \eqref{eq:001} implies (\ref{eq:py}) as $(\overline{B}_i+\varepsilon_i I_n,\overline{b}_i,\overline{\beta}_i+\varepsilon_i) \in \mathcal{V}_i$.
On the other hand, note that, for all $(B_i,b_i,\beta_i) \in \mathcal{V}_i$, $\left(\begin{array}{cc} \overline{B}_i+\varepsilon_i I_n & \overline{b}_i\\ \overline{b}_i^T & \overline{\beta}_i+\varepsilon_i \end{array} \right)-\left(\begin{array}{cc} B_i & b_i\\ b_i^T & \beta_i \end{array} \right)$ is a positive semidefinite matrix, and hence, for each $i=1,\ldots,m$, $$h_i(x):= \left(\begin{array}{c} x\\ 1 \end{array} \right)^T(\left(\begin{array}{cc} \overline{B}_i+\varepsilon_i I_n & \overline{b}_i\\ \overline{b}_i^T & \overline{\beta}_i+\varepsilon_i \end{array} \right)-\left(\begin{array}{cc} B_i & b_i\\ b_i^T & \beta_i \end{array} \right))\left(\begin{array}{c} x\\ 1 \end{array} \right)$$ is a sum of squares. So, (\ref{eq:py}) implies that there exist $\lambda_i \ge 0$ and $(B_i,b_i,\beta_i) \in \mathcal{V}_i$ such that \begin{eqnarray*} & & f(x)-f(x^*)+\sum_{i=1}^{m}\lambda_i (x^T(\overline{B}_i+\varepsilon_i I_n) x + 2\overline{b}_i^T x + \overline{\beta}_i+\varepsilon_i ) \\ &= & f(x)-f(x^*)+\sum_{i=1}^{m}\lambda_i (x^TB_ix + 2b_i^T x + \beta_i )+\sum_{i=1}^m\lambda_ih_i(x), \end{eqnarray*} is a sum-of-squares polynomial with degree at most $k$. Therefore, a solution of a quadratic optimization problem under spectral norm data uncertainty can also be efficiently verified by solving a semidefinite programming problem. Next, we examine how to find the optimal value of a robust SOSCP by solving a sum of squares relaxation problem. In particular, the corresponding sum of squares relaxation problem can often be equivalently reformulated as semi-definite programming problems under various commonly used data uncertainty sets. \begin{theorem}[{\bf Exact sum of squares relaxation}] \label{alternative} Let $f:\mathbb{R}^n\rightarrow\mathbb{R}$ be a SOS-convex polynomial.
Let $g_i:\mathbb{R}^n\times\mathbb{R}^{q_i}\rightarrow\mathbb{R}$, $i=1,\ldots,m$, be functions such that for each $x \in \mathbb{R}^n$, $g_i(x,\cdot)$ is concave, $g_i(\cdot,v_i)$ is a SOS-convex polynomial for each $v_i\in \mathcal V_i$ with degree at most $d_i$, and $\mathcal V_i \subset \mathbb{R}^{q_i}$ are convex compact sets. Let $\left\{x\in\mathbb{R}^n : g_i(x,v_i) < 0\ \forall v_i\in \mathcal{V}_i, i=1,\ldots,m\right\} \neq \emptyset$. Then, we have $$\inf\{f(x): g_i(x,v_i)\leq 0\ \forall v_i\in \mathcal{V}_i, i=1,\ldots,m\} = \max_{\lambda_i \geq 0, v_i \in \mathcal{V}_i}\{ \mu: f(\cdot)+\sum_{i=1}^{m}{{\lambda}_i g_i\left(\cdot,{v}_i\right)}-\mu \in \Sigma_{k_0}^2 \},$$ where $k_0$ is the smallest even number such that $k_0 \geq \max\left\{\deg f, \max_{1 \leq i \leq m}d_i\right\}$. \end{theorem} \begin{proof} Note that, for any $\lambda_i \geq 0$, $v_i \in \mathcal{V}_i$ with $f(\cdot)+\sum_{i=1}^{m}{{\lambda}_i g_i\left(\cdot,{v}_i\right)}-\mu \in \Sigma_{k_0}^2$ and any point $x \in \mathbb{R}^n$ such that $g_i(x,v_i) \leq 0$ for all $v_i\in \mathcal{V}_i$, one has $f(x) \geq f(x)+\sum_{i=1}^{m}{{\lambda}_i g_i\left(x,{v}_i\right)} \geq \mu$. So, we see that $\inf\{f(x): g_i(x,v_i)\leq 0\ \forall v_i\in \mathcal{V}_i,\ i=1,\ldots,m\} \geq \max_{\lambda_i \geq 0, v_i \in \mathcal{V}_i}\{ \mu: f(\cdot)+\sum_{i=1}^{m}{{\lambda}_i g_i\left(\cdot,{v}_i\right)}-\mu \in \Sigma_{k_0}^2 \}.$ \smallskip To see the reverse inequality, we may assume without loss of generality that $c:=\inf\{f(x): g_i(x,v_i)\leq 0\ \forall v_i\in \mathcal{V}_i,\ i=1,\ldots,m\} \in \mathbb{R}$. 
By the usual convex programming duality and the robust Slater condition, \begin{eqnarray*} \inf\{f(x):g_i(x,v_i) \le 0, \, \forall \, v_i \in \mathcal{V}_i\} &=& \inf\{f(x): \max_{v_i \in \mathcal{V}_i} g_i(x,v_i) \le 0,\ i=1,2,\ldots, m\} \\ &=& \max\limits_{\substack{\lambda_i\geq 0}}\:\inf_{x\in\mathbb{R}^n}\left\{f(x)+\sum_{i=1}^{m}\lambda_i \max_{v_i \in \mathcal{V}_i}g_i(x,v_i)\right\} \\ & = & \max\limits_{\substack{\lambda_i\geq 0}}\:\inf_{x\in\mathbb{R}^n} \max_{v_i \in \mathcal{V}_i}\left\{f(x)+\sum_{i=1}^{m}\lambda_i g_i(x,v_i)\right\}, \end{eqnarray*} where the attainment of the maximum is guaranteed by the robust Slater condition. Note that $x \mapsto g_i(x,v_i)$ is SOS-convex (and so convex) and $v_i \mapsto g_i(x,v_i)$ is concave. From the convex-concave minimax theorem, we have, for each $\lambda_i \ge 0$, $$ \inf_{x\in\mathbb{R}^n} \max_{v_i \in \mathcal{V}_i}\left\{f(x)+\sum_{i=1}^{m}\lambda_i g_i(x,v_i)\right\}=\max\limits_{v_i\in \mathcal{V}_i} \inf_{x\in\mathbb{R}^n}\left\{f(x)+\sum_{i=1}^{m}\lambda_i g_i(x,v_i)\right\}. $$ So, $\inf\{f(x):g_i(x,v_i) \le 0, \, \forall \, v_i \in \mathcal{V}_i\} = \max\limits_{v_i\in \mathcal{V}_i, \lambda_i \ge 0} \inf_{x\in\mathbb{R}^n}\left\{f(x)+\sum_{i=1}^{m}\lambda_i g_i(x,v_i)\right\} . $ Hence, there exist $\bar{v}_i\in \mathcal{V}_i$ and $\bar{\lambda}_i\geq 0$, for $i=1,\ldots,m$, such that $f(x) + \sum_{i=1}^{m}\bar{\lambda}_i g_i\left(x,\bar{v}_i\right) \geq c$ for all $x\in\mathbb{R}^n.$ Let $h(x):=f(x) + \sum_{i=1}^{m}{\bar{\lambda}_i g_i\left(x,\bar{v}_i\right)} - c $. Then, $h\geq 0$ and it is also a SOS-convex polynomial, as $f(\cdot)$ and $g_i\left(\cdot,\bar{v}_i\right)$ are all SOS-convex. So, by Lemma \ref{minattain}, we obtain that $\min_{x\in\mathbb{R}^n}h(x) = h(x^*)$ for some $x^*\in \mathbb{R}^n$. The polynomial $L(x):=h(x)-h(x^*)$ is again SOS-convex. Moreover, $L(x^*)=0$ and $\nabla L(x^*)=0 $. Then, $L$ is a sum of squares polynomial as a consequence of Lemma \ref{polysos}.
So we get that $f(x)+\sum_{i=1}^{m}{\bar{\lambda}_i g_i\left(x,\bar{v}_i\right)} - c - h(x^*) = \sigma_1(x) \quad \forall x\in\mathbb{R}^n,$ where $\sigma_1 \in\Sigma^2_{k_0}$ and $k_0$ is the smallest even number such that $k_0 \geq \max\left\{\deg f, \max_{1 \leq i \leq m}d_i\right\}$. Note that a sum-of-squares polynomial must be of even degree. As $h(x^*) \geq 0$, $\sigma_0(\cdot) := \sigma_1(\cdot) + h(x^*)$ is also a sum of squares with degree at most $k_0$. Therefore, $\sigma_0 \in \Sigma^2_{k_0} $ and $f(x)+\sum_{i=1}^{m}{\bar{\lambda}_i g_i\left(x,\bar{v}_i\right)} - c = \sigma_0(x)$ for all $x\in\mathbb{R}^n.$ Hence, $c \leq \displaystyle \max_{\lambda_i \geq 0, v_i \in \mathcal{V}_i}\{ \mu: f(\cdot)+\sum_{i=1}^{m}{{\lambda}_i g_i\left(\cdot,{v}_i\right)}-\mu \in \Sigma_{k_0}^2 \}$ and the conclusion follows. \end{proof} \begin{remark}[{\bf Intractability of general SOS-relaxation problems}] In general, finding the optimal value of a robust SOSCP using the SOS-relaxation problem can still be an intrinsically hard problem. For example, consider the following robust convex quadratic optimization problem: \begin{eqnarray*} (P_4) & \inf & x^TAx + 2a^T x +\alpha \\ & s.t. & g_i(x,v_i) \leq 0, \quad \forall\, v_i : v_i^T Q_i^l v_i \leq 1,\ l=1,\ldots,k,\ i=1,\ldots,m, \end{eqnarray*} where \begin{equation} \label{eq:04} g_i(x,v_i)=\|(\overline{B}_i^0+\sum_{j=1}^s v_i^j \overline{B}_i^j )x\|^2 + 2(\overline{b}_i^0+\sum_{j=1}^s v_i^j \overline{b}_i^j)^Tx + (\overline{\beta}_i^0+\sum_{j=1}^s v_i^j \overline{\beta}_i^j ), \end{equation} $v_i=(v_i^1,\ldots,v_i^s)$, $Q_i^l \succ 0$, for $l=1,\ldots,k$, $i=1,\ldots,m$, and $k \geq 2$. It was shown in \cite[Section 3.2.2]{BN1} that checking the robust feasibility of the problem $(P_4)$ is an NP-hard problem even under the robust Slater condition.
By applying Theorem \ref{alternative} with $f(x)=0$, we see that the robust feasibility problem of $(P_4)$ is equivalent to the condition that the optimal value of the following problem is zero: \[ \sup\limits_{\mu\in \mathbb{R}, v_i^TQ_i^lv_i \le 1, \lambda_i\geq 0} \left\{\mu\ :\ \sum_{i=1}^{m}\lambda_i g_i(\cdot,v_i)-\mu \in \Sigma^2_2 \right\}. \] This, in particular, shows that finding the optimal value of a SOS-relaxation of a robust SOSCP, via Theorem \ref{alternative}, is also NP-hard, and so, in general, it cannot be equivalently rewritten as a tractable semi-definite programming problem. Furthermore, observe that for the above intractable case, $v_i \mapsto g_i(x,v_i)$ with $g_i(x,v_i)$ defined in \eqref{eq:04} is not affine. \end{remark} A semidefinite programming approximation scheme for solving a robust nonconvex polynomial optimization problem has been given in \cite{Lasserre_robust}, where it was shown that the optimal value of the robust optimization problem can be approximated arbitrarily closely by the optimal values of a sequence of semi-definite programming relaxation problems under mild conditions. In the next section, we show that, in the case of affine data parametrization (that is, $v_i \mapsto g_i(x,v_i)$ is affine), the optimal value of the robust SOS-convex polynomial optimization problem can be found by solving {\it a single semi-definite programming problem} under two commonly used data uncertainty sets: polytopic uncertainty and ellipsoidal uncertainty.
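To make Theorem \ref{alternative} concrete, consider the one-dimensional instance $f(x)=x^2$, $g(x,v)=v-x$, $\mathcal{V}=[1,2]$: the robust feasible set is $\{x \ge 2\}$ with optimal value $4$, and the hand-picked choices $\bar{v}=2$, $\bar{\lambda}=4$, $\mu=4$ give the certificate $x^2+4(2-x)-4=(x-2)^2 \in \Sigma^2_2$. The following numerical sanity check of this instance is purely illustrative; all data are chosen by hand rather than by a solver.

```python
import numpy as np

# Instance: f(x) = x^2, g(x, v) = v - x <= 0 for all v in V = [1, 2],
# so the robust feasible set is {x >= 2} and inf f = 4, attained at x = 2.
# Hand-picked certificate: v_bar = 2, lam_bar = 4, mu = 4 give
#   f(x) + lam_bar * g(x, v_bar) - mu = x^2 - 4x + 4 = (x - 2)^2.
gram = np.array([[1.0, -2.0],    # x^2 - 4x + 4 = z^T gram z with z = (x, 1)
                 [-2.0, 4.0]])
sos_ok = float(np.linalg.eigvalsh(gram).min()) >= -1e-9   # Gram matrix PSD

# Cross-check the primal value by brute force over the feasible set.
xs = np.linspace(2.0, 10.0, 100001)
primal = float((xs ** 2).min())

print(sos_ok, round(primal, 6))   # True 4.0
```

The agreement between the certified lower bound $\mu=4$ and the brute-force primal value is exactly the exactness asserted by Theorem \ref{alternative}.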
\setcounter{equation}{0} \section{Exact SDP-Relaxations \& Affine Parameterizations} In this section we consider the robust SOSCP under affinely parameterized data uncertainty: \begin{equation*} \begin{array}{crl} (P) & \inf & f(x) \\ & \mbox{s.t.} & g_i(x,v_i)\leq 0,\ \forall v_i\in \mathcal{V}_i,i=1,\ldots,m, \end{array} \end{equation*} where $f$ is a SOS-convex polynomial and the data is affinely parameterized in the sense that \begin{equation*} g_i(\cdot,v_i)=g_i^{(0)}(\cdot) + \sum_{r=1}^{q_i} v_i^{(r)}g_i^{(r)}(\cdot), \quad v_i=(v_i^{(1)},\ldots,v_i^{(t_i)},v_i^{(t_i+1)},\ldots,v_i^{(q_i)}) \in \mathcal{V}_i \subseteq \mathbb{R}_{+}^{t_i} \times \mathbb{R}^{q_i-t_i}, \end{equation*} where $g_i^{(r)}$, for $r=1,\ldots,t_i$, are SOS-convex polynomials, and $g_i^{(r)}$, for $r=t_i+1,\ldots,q_i$, are affine functions, for all $i=1,\ldots,m$. In the following, we show, in the case of two commonly used uncertain sets, that the SOS-relaxation is exact and the relaxation problems can be represented as semi-definite programming (SDP) problems whenever a robust Slater condition is satisfied. \subsection{Polytopic data uncertainty} Consider the robust SOSCP with polytopic uncertainty, that is, $t_i=q_i$ and $\mathcal{V}_i=\bar{\mathcal{V}}_i$, where $\bar{\mathcal{V}}_i$ is given by \begin{equation*} \bar{\mathcal{V}}_i :=\{v_i=(v_i^{(1)},\ldots,v_i^{(q_i)})\in \mathbb{R}^{q_i}: v_i^{(r)} \geq 0, A_i v_i = b_i\}, \end{equation*} for some matrix $A_i=(A_i^{jr}) \in \mathbb{R}^{l_i \times q_i}$ and $b_i=(b_i^j) \in \mathbb{R}^{l_i}$ such that $\bar{\mathcal{V}}_i$ is compact. Examples include the case where $\mathcal{V}_i$ is a simplex, i.e., $ \mathcal{V}_i = \{v_i \in \mathbb{R}^{q_i} : v_i^{(r)} \geq 0, \sum_{r=1}^{q_i}v_i^{(r)}=1\}$ or more generally where $A_i\in \mathbb{R}^{l_i \times q_i}$ and $\{x \in \mathbb{R}^{q_i}:A_ix=0\} \cap \mathbb{R}_+^{q_i}=\{0\}$. 
For the robust SOSCP $(P)$ with polytopic uncertainty sets $\bar{\mathcal{V}}_i$, named $(P^p)$, and each $k \in \mathbb{N}$, the corresponding relaxation problem $(D_k^p)$ can be stated as \begin{equation} \label{eq:06} \begin{array}{ccl} (D_k^p) & \max\limits_{\mu, w_i^r} & \mu \\ & s.t. & f + \sum\limits_{i=1}^{m} \sum\limits_{r=0}^{q_i} w_i^r g_i^{(r)} - \mu \in \Sigma^2_{k} \\ & & \sum\limits_{r=1}^{q_i} A_{i}^{jr} w_i^r = w_i^0 b_i^j, \ \forall j=1,\ldots,l_i, i=1,\ldots,m, \\ & & \mu\in\mathbb{R}, w_i^r \geq 0, \ \forall r=0,1,\ldots,q_i, i=1,\ldots,m. \end{array} \end{equation} Let $k_0$ be the smallest even number such that $k_0 \geq \max_{0 \leq r \leq q_i, 1 \leq i \leq m}\{\deg f,\deg g_i^{(r)}\}$. Note that the relaxation problem $(D_k^p)$ can be equivalently rewritten as a semidefinite programming problem. \begin{theorem}[{\bf Exact SDP-relaxation under polytopic data uncertainty}]\label{cor:1} Consider the uncertain SOS-convex polynomial optimization problem under polytope data uncertainty $(P^p)$ and its relaxation problem $(D_k^p)$. Suppose that $$\left\{ x\in\mathbb{R}^n : g_i^{(0)}(x)+\sum_{r=1}^{q_i}v_i^{(r)}g_i^{(r)}(x)<0\ \forall v_i\in \bar{\mathcal{V}}_i, i=1,\ldots,m\right\} \neq \emptyset.$$ Then, the minimum of $(P^p)$ is attained and $\min(P^p)=\max(D_{k_0}^p)$, where $k_0$ is the smallest even number such that $k_0 \geq \max_{0 \leq r \leq q_i, 1 \leq i \leq m}\{\deg f,\deg g_i^{(r)}\}$. \end{theorem} \begin{proof} Denote the extreme points of $\bar{\mathcal{V}}_i$ by $v_i^1,\ldots,v_i^{s_i}$, $i=1,\ldots,m$. As $v_i \mapsto g_i(x,v_i)$ is affine, $\max_{v_i \in \bar{\mathcal{V}}_i}g_i(x,v_i)=\max_{1 \leq j \leq s_i}g_i(x,v_i^j)$ for each fixed $x \in \mathbb{R}^n$. So, $\min(P^p)$ can be equivalently rewritten as follows: $\min\limits_{x\in\mathbb{R}^n}\left\{ f(x) : g_i(x,v_{i}^j)\leq 0, \forall j=1,\ldots,s_i,\,\forall i=1,\ldots,m\right\}$. So, the minimum in the primal problem is attained in virtue of Lemma \ref{minattain}. 
Now, according to Theorem \ref{alternative}, we have \begin{eqnarray} \label{dualpolyA} \min(P^p) & = & \min\limits_{x\in\mathbb{R}^n} \left\{ f(x) : g_i^{(0)}(x)+\sum_{r=1}^{q_i} v_i^{(r)}g_i^{(r)}(x) \leq 0 \ \forall v_i\in \bar{\mathcal{V}}_i, i=1,\ldots,m\right\} \nonumber \\ & = & \max\limits_{\substack{\mu\in \mathbb{R}, \lambda_i \ge 0 \\ v_i^{(r)} \ge 0, A_i v_i = b_i}} \left\{\mu\ :\ f + \sum\limits_{i=1}^{m} \lambda_i \left(g_i^{(0)}+\sum_{r=1}^{q_i} v_i^{(r)}g_i^{(r)} \right) - \mu \in \Sigma^2_{k_0} \right\}. \end{eqnarray} Let $w_i^{0}:=\lambda_i$ and $w_i^r:=\lambda_i v_i^{(r)}$ for $r=1,\ldots,q_i$. Observe that, for each $i=1,\ldots,m$, $$\lambda_i \geq 0, v_i^{(r)} \geq 0\ \forall r=1,\ldots,q_i, A_i v_i = b_i $$ is equivalent to $ w_i^r \geq 0 \ \forall r=0,1,\ldots,q_i, \sum_{r=1}^{q_i} A_{i}^{jr}w_i^r = w_i^0 b_i^j \ \forall j=1,\ldots,l_i.$ So, the maximization problem in \eqref{dualpolyA} collapses to that one in \eqref{eq:06} with $k=k_0$. Thus, $\min(P^p)=\max(D^p_{k_0})$. \end{proof} The above theorem illustrates that, under the robust strict feasibility condition, an exact SDP relaxation holds for robust SOS-convex polynomial optimization problem under polytopic data uncertainty. In the special case of robust convex quadratic optimization problem, such an exact SDP relaxation result was given in \cite{BN1,BV}. \subsection{Restricted ellipsoidal data uncertainty} Consider the robust SOSCP with a restrictive ellipsoidal uncertainty, that is, $\mathcal{V}_i=\hat{\mathcal{V}}_i$ where $\hat{\mathcal{V}}_i$ is given by \begin{equation}\label{unC} \hat{\mathcal{V}}_i :=\{ v_i \in \mathbb{R}^{q_i} : v_i^{(r)} \geq 0, r=1,\ldots,t_i, \|(v_i^{(1)},\ldots,v_i^{(t_i)})\| \leq 1, \|(v_i^{(t_i+1)},\ldots,v_i^{(q_i)})\| \leq 1\}. 
\end{equation} In the case where all the SOS-convex polynomials are convex quadratic functions, the above problem collapses to the robust quadratic optimization problem under restrictive ellipsoidal uncertainty set which was examined in \cite{goldfab}. It is worth noting that the restriction of $v_i^{(r)} \ge 0$ is essential. Indeed, as pointed out in \cite{goldfab}, if this nonnegative restriction is dropped, the corresponding robust quadratic optimization problem becomes NP-hard. It should also be noted that SOS-convexity of $g_i(\cdot, v)$ may not be preserved if $v_i^{(r)}$ is negative for some $r$ because $g_i(\cdot,v)=g_i^{(0)}+\sum_{r=1}^{q_i} v_i^{(r)}g_i^{(r)}$ and $g_i^{(r)}$'s are all SOS-convex polynomials. We begin this case by providing a numerically tractable characterization for a point $x$ to be feasible for the robust SOSCP under the above restricted ellipsoidal data uncertainty. \begin{lemma}[{\bf Robust feasibility characterization}] Let $x \in \mathbb{R}^n$ and let $\hat{\mathcal{V}}_i$ be given in (\ref{unC}). Then $x$ is feasible for the robust SOSCP problem (P), that is, $g_i^{(0)}(x)+\sum_{r=1}^{q_i} v_i^{(r)}g_i^{(r)}(x) \leq 0$, $\forall v_i\in \hat{\mathcal{V}}_i$, $i=1,2, \ldots, m$, if and only if, for each $i=1,\ldots,m$, the following second-order cone programming problem has a non-negative optimal value \begin{equation*} \begin{array}{cl} \max\limits_{\mu_1,\mu_2,\lambda_i^{(r)}} & -a_i^{(0)}-\mu_1-\mu_2 \\ s.t. & \|\big(a_i^{(1)}+\lambda_i^{(1)},\ldots, a_i^{(t_i)}+\lambda_i^{(t_i)}\big)\|\le \mu_1, \\ & \|\big(a_i^{(t_i+1)},\ldots, a_i^{(q_i)}\big)\|\le \mu_2, \\ & \mu_1, \mu_2 \geq 0, \lambda_i^{(r)}\geq 0, \ \forall r=1,\ldots,t_i, \end{array} \end{equation*} where $a_i^{(r)}=g_i^{(r)}(x)$, $r=0,1,\ldots,q_i$. \end{lemma} \begin{proof} Fix $i \in \{1,\ldots,m\}$. Note that $x$ is feasible for robust SOSCP under the restricted ellipsoidal uncertainty is equivalent to $$ \left. 
\begin{array}{c} \|(v_i^{(1)},\ldots,v_i^{(t_i)})\| \leq 1, \, v_i^{(r)} \geq 0, \, r=1,\ldots,t_i \\ \|(v_i^{(t_i+1)},\ldots,v_i^{(q_i)})\| \leq 1 \end{array} \right\} \Rightarrow -g_i^{(0)}(x)-\sum_{r=1}^{q_i} v_i^{(r)} g_i^{(r)}(x) \geq 0. $$ Using the standard Lagrangian duality theorem, this can be equivalently rewritten as \begin{equation*} 0 \leq \inf\limits_{v_i \in \mathbb{R}^{q_i}} \left\{ -g_i^{(0)}(x)-\sum_{r=1}^{q_i} v_i^{(r)} g_i^{(r)}(x) : \begin{array}{l} \|(v_i^{(1)},\ldots,v_i^{(t_i)})\| \leq 1, \, v_i^{(r)} \geq 0, \, r=1,\ldots,t_i \\ \|(v_i^{(t_i+1)},\ldots,v_i^{(q_i)})\| \le 1 \end{array} \right\} \end{equation*} \begin{equation*} = \max\limits_{\substack{\mu_1,\mu_2 \ge 0 \\ \lambda_i \in\mathbb{R}^{t_i}_+ }} \ \inf\limits_{v_i \in \mathbb{R}^{q_i}} \ \left\{ \begin{array}{c} -g_i^{(0)}(x)-\sum\limits_{r=1}^{q_i} v_i^{(r)} g_i^{(r)}(x) + \mu_1(\|(v_i^{(1)},\ldots,v_i^{(t_i)})\| -1) \\ +\mu_2(\|(v_i^{(t_i+1)},\ldots,v_i^{(q_i)})\|-1)-\sum\limits_{r=1}^{t_i}\lambda_i^{(r)}v_i^{(r)} \ \end{array} \right\} \end{equation*} \begin{equation*} = \max\limits_{\substack{\mu_1,\mu_2 \ge 0 \\ \lambda_i \in\mathbb{R}^{t_i}_+ }} \ \left\{ -g_i^{(0)}(x)-\mu_1-\mu_2 : \begin{array}{l} \|\big(g_i^{(1)}(x)+\lambda_i^{(1)},\ldots, g_i^{(t_i)}(x)+\lambda_i^{(t_i)}\big)\|\le \mu_1, \\ \|\big(g_i^{(t_i+1)}(x),\ldots, g_i^{(q_i)}(x)\big)\|\le \mu_2. \end{array} \right\} \end{equation*} Hence, the equivalence follows. \end{proof} For the robust SOSCP $(P)$ with restricted ellipsoidal uncertainty sets $\hat{\mathcal{V}}_i$, named $(P^e)$, and each $k \in \mathbb{N}$, the corresponding relaxation problem $(D_k^e)$ can be stated as \begin{equation} \label{eq:08} \begin{array}{ccl} (D_k^e) & \max\limits_{\mu, w_i^r} & \mu \\ & s.t. 
& f + \sum\limits_{i=1}^{m} \sum\limits_{r=0}^{q_i} w_i^r g_i^{(r)} - \mu \in \Sigma^2_{k} \\ & & \|(w_i^{1},\ldots,w_i^{t_i})\| \leq w_i^{0}, \ \forall i=1,\ldots,m, \\ & & \|(w_i^{t_i+1},\ldots,w_i^{q_i})\| \leq w_i^{0}, \ \forall i=1,\ldots,m, \\ & & w_i^r \geq 0, \ \forall r=0,1,\ldots,t_i, \ \forall i=1,\ldots,m. \\ & & \mu\in\mathbb{R}, w_i^r \in \mathbb{R}, \ \forall r=t_i+1\ldots,q_i,\ \forall i=1,\ldots,m. \end{array} \end{equation} Let $k_0$ be the smallest even number such that $k_0\geq \max_{0 \leq r \leq q_i, 1 \leq i \leq m}\{ \deg f, \deg g_i^{(r)}\}$. As in the polytopic case, the relaxation problem $(D_k^e)$ can be equivalently rewritten as a semidefinite programming problem with an additional second-order cone constraint. \begin{theorem}[{\bf Exact SDP-relaxation under restricted ellipsoidal uncertainty}] \label{cor43} Consider the uncertain SOS-convex polynomial optimization problem under ellipsoidal data uncertainty $(P^e)$ and its relaxation problem $(D_k^e)$. Suppose that $\{x\in\mathbb{R}^n : g_i^{(0)}(x)+\sum_{r=1}^{q_i} v_i^{(r)} g_i^{(r)}(x)<0 \ \forall v_i\in \hat{\mathcal{V}}_i, i=1,\ldots,m\} \neq \emptyset.$ Then, $\inf(P^e)=\max(D_{k_0}^e)$. \end{theorem} \begin{proof} Using Theorem \ref{alternative}, we obtain that \begin{eqnarray} \label{dualpoly1} \inf(P^e) & = & \inf\limits_{x\in\mathbb{R}^n}\left\{ f(x) : g_i^{(0)}(x)+\sum_{r=1}^{q_i} v_i^{(r)}g_i^{(r)}(x) \leq 0, \ \forall v_i\in \hat{\mathcal{V}}_i, i=1,\ldots,m\right\} \nonumber \\ & = & \max\limits_{\substack{ \mu\in\mathbb{R}, \lambda_i\geq 0 \\ v_i\in \hat{\mathcal{V}}_i}} \left\{\mu : f + \sum\limits_{i=1}^{m}\lambda_i g_i^{(0)} + \sum\limits_{i=1}^{m}\sum_{r=1}^{q_i} \lambda_i v_i^{(r)} g_i^{(r)} -\mu \in \Sigma^2_{k_0} \right\}. \end{eqnarray} To see the conclusion, define $w_i^{0}:=\lambda_i$ and $w_i^{r}:=\lambda_i v_i^{(r)}$, $r=1,\ldots,q_i$. 
It can be verified that, for each $i=1,\ldots,m$, $\lambda_i \geq 0,v_i\in \hat{\mathcal{V}}_i $ is equivalent to \begin{eqnarray*} \|(w_i^{1},\ldots,w_i^{t_i})\| & \leq & w_i^{0}, \quad w_i^r \geq 0, \ \forall r=0,1,\ldots,t_i, \\ \|(w_i^{t_i+1},\ldots,w_i^{q_i})\| & \leq & w_i^{0}, \quad w_i^r \in \mathbb{R}, \ \forall r=t_i+1\ldots,q_i. \end{eqnarray*} So, the maximization problem in \eqref{dualpoly1} collapses to that one in \eqref{eq:08} with $k=k_0$. Thus, we get $\inf(P^e)=\max(D^e_{k_0})$. \end{proof} In the following example, we show that an exact SDP relaxation may fail for a robust convex (but not SOS-convex) polynomial optimization problem with linear constraints under restricted ellipsoidal data uncertainty. \begin{example}{\bf (Failure of exact SDP relaxation for convex polynomial optimization)}\label{ex:3.1} Let $f$ be a convex homogeneous polynomial with degree at least $2$ in $\mathbb{R}^n$ which is not a sum-of-squares polynomial (see \cite{Laurent1,convexA}, for the existence of such polynomials). Consider the following robust convex polynomial optimization problem under ellipsoidal data uncertainty: \begin{eqnarray*} & \min_{x \in \mathbb{R}^n} & f(x) \\ & \mbox{ s.t. } & v^Tx -1 \le 0, \, \forall \, v\in \mathcal{V}, \end{eqnarray*} where $\mathcal{V}=\{u\in \mathbb{R}^n : \|u\| \le 1\}$ is the uncertainty set and $g(x,v)=v^Tx -1$. It is easy to see that the strict feasibility condition is satisfied. We now show that our SDP relaxation is not exact. To see this, as $f$ is a convex homogeneous polynomial with degree at least $2$ (which is necessarily nonnegative), we first note that $\inf_{x\in \mathbb{R}^n}\{f(x): g(x,v) \le 0, \, \forall \, v\in \mathcal{V}\}=0$. The claim will follow if we show that, for any $(w_1^0,w_1^1,\ldots,w_1^n) \in \mathbb{R}^{n+1}$ with $\|(w_1^1,\ldots,w_1^n) \| \le w_1^0$, $f(x)+(-1)w_1^0+\sum_{i=1}^n w_1^i x_i-0 \notin \Sigma_d^2$. 
Otherwise, there exists a sum of squares polynomial $\sigma$ with degree at most $d$ such that \begin{equation}\label{eq:representation} f(x)+(-1)w_1^0+ \sum_{i=1}^n w_1^i x_i=\sigma(x), \ \mbox{ for all } x \in \mathbb{R}^n, \end{equation} for some $(w_1^0,w_1^1,\ldots,w_1^n) \in \mathbb{R}^{n+1}$ with $\|(w_1^1,\ldots,w_1^n) \| \le w_1^0$. Note that $\sigma$ is a sum-of-squares (and so nonnegative) and $w_1^0 \ge 0$. So, $f(x) \ge h(x):=-\sum_{i=1}^n w_1^i x_i$. As $f$ is a convex homogeneous polynomial with degree $m$ and $m \ge 2$, $h \equiv 0$. (Indeed, if there exists $\hat{x}$ such that $h(\hat{x}) \neq 0$, then by replacing $\hat{x}$ with $-\hat{x}$, we can assume that $h(\hat{x}) >0$. Now, we have $t^mf(\hat{x})=f(t \hat{x}) \ge h(t\hat{x})=th(\hat{x})$, for all $t > 0$. This shows that $\frac{f(\hat{x})}{h(\hat{x})} \ge \frac{1}{t^{m-1}}$, for all $t>0$. This is a contradiction as $\frac{1}{t^{m-1}}\rightarrow \infty$ as $t \rightarrow 0$.) Hence, $f=\sigma+w_1^0$, and so $f$ is a sum-of-squares polynomial. This contradicts our construction of $f$. Therefore, our relaxation is not exact. \end{example} \begin{corollary}[{\bf Robust SOSCP with a sum of quadratic and separable functions}] \label{cor431} Consider the uncertain convex polynomial optimization problem under ellipsoidal data uncertainty $(P^e)$ and its relaxation problem $(D_k^e)$, where each $g_i^{(r)}$ is the sum of a separable convex polynomial and a convex quadratic function, i.e., $g_i^{(r)}(x)=\sum_{l=1}^nh_{il}^{(r)}(x_l)+\frac{1}{2}x^TB_i^{(r)}x + \big(b_i^{(r)}\big)^T x + \beta_i^{(r)}$ for some convex univariate polynomials $h_{il}^{(r)}$, $B_i^{(r)} \succeq 0$, $b_i^{(r)} \in \mathbb{R}^n$ and $\beta_i^{(r)} \in \mathbb{R}$. Suppose that $\{x\in\mathbb{R}^n : g_i^{(0)}(x)+\sum_{r=1}^{q_i} v_i^{(r)} g_i^{(r)}(x)<0 \ \forall v_i\in \hat{\mathcal{V}}_i, i=1,\ldots,m\} \neq \emptyset.$ Then, we have $\inf(P^e)=\max(D_{k_0}^e)$.
\end{corollary} \begin{proof} The conclusion follows from the preceding theorem by noting that the sum of a separable convex polynomial and a convex quadratic function is a SOS-convex polynomial. \end{proof} \begin{corollary}[{\bf Robust convex quadratic problem}] \label{cor:CQP} For problem $(P^e)$ and its relaxation problem $(D_2^e)$, let $f(x)= x^TAx + 2a^T x +\alpha$, $g_i^{(r)}(x) =x^TB_i^{(r)}x + 2\big(b_i^{(r)}\big)^T x + \beta_i^{(r)}$, $r=0,1,\ldots,t_i$, and $g_i^{(r)}(x) =\big(b_i^{(r)}\big)^T x + \beta_i^{(r)}$, $r=t_i+1,\ldots,q_i$, where $A,B_i^{(r)} \succeq 0$, $a,b_i^{(r)} \in \mathbb{R}^n$ and $\alpha,\beta_i^{(r)} \in \mathbb{R}$. Suppose that $\{x\in\mathbb{R}^n : g_i^{(0)}(x)+\sum_{r=1}^{q_i} v_i^{(r)} \ g_i^{(r)}(x)<0\ \forall v_i\in \hat{\mathcal{V}}_i, i=1,\ldots,m\} \neq \emptyset.$ Then, $\inf(P^e)=\max(D_{2}^e)$ and $\max(D_2^e)$ can be written as the following semi-definite programming problem \begin{eqnarray}\label{eq:ut} & \max\limits_{\mu, w_i^r} & \mu \nonumber \\ & s.t. & \begin{pmatrix} A+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{t_i} w_i^{r}B_i^{(r)} & a+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i} w_i^{r}b_i^{(r)} \\ (a+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i} w_i^{r}b_i^{(r)})^T & \alpha+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i} w_i^{r}\beta_i^{(r)}-\mu \end{pmatrix} \succeq 0 \\ & & \|(w_i^{1},\ldots,w_i^{t_i})\| \leq w_i^{0}, \ \forall i=1,\ldots,m, \nonumber \\ & & \|(w_i^{t_i+1},\ldots,w_i^{q_i})\| \leq w_i^{0}, \ \forall i=1,\ldots,m, \nonumber \\ & & w_i^r \geq 0, \ \forall r=0,1,\ldots,t_i, \ \forall i=1,\ldots,m, \nonumber \\ & & \mu\in\mathbb{R}, w_i^r \in \mathbb{R}, \ \forall r=t_i+1\ldots,q_i,\ \forall i=1,\ldots,m. 
\nonumber \end{eqnarray} \end{corollary} \begin{proof} As $f$ and each $g_i^{(r)}$, $r=1,\ldots,q_i$, are convex quadratic functions, the conclusion follows by applying Theorem \ref{cor43} and noting just that $f + \sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i} w_i^{r} g_i^{(r)} -\mu \in \Sigma^2_{2}\,$ is, in this particular case, equivalent to $ \begin{pmatrix} A+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{t_i} w_i^{r}B_i^{(r)} & a+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i} w_i^{r}b_i^{(r)} \\ (a+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i} w_i^{r}b_i^{(r)})^T & \alpha+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i} w_i^{r}\beta_i^{(r)}-\mu \end{pmatrix} \succeq 0. $ \end{proof} We now present a numerical example verifying the exact SDP relaxation for a robust SOS-convex polynomial optimization problem where the objective function is neither a quadratic function nor a separable function. \begin{example}{\bf (Exact SDP relaxation for a robust non-quadratic SOS-convex problem)} Consider the following robust SOSCP \begin{eqnarray*} (P_5) & \min & x_1^4+2x_1^2-2x_1x_2+x_2^2 \\ &\mbox{ s.t. } & v_1x_1+v_2x_2 \le 1, \ \forall \ \|(v_1,v_2)\| \le 1. \end{eqnarray*} It is easy to verify that global solution of $(P_5)$ is $(0,0)$ with optimal value zero, and robust Slater condition is satisfied. The corresponding $4th$-order relaxation problem is given by \begin{equation*} \max\limits_{\substack{\mu \in \mathbb{R}, \lambda \geq 0 \\ \|(v_1,v_2)\| \le 1}} \{ \mu : x_1^4+2x_1^2-2x_1x_2+x_2^2+\lambda (v_1x_1+v_2x_2-1)-\mu \in \Sigma_4^2\}. \end{equation*} It can be equivalently reformulated as the following semi-definite programming problem: \begin{eqnarray*} & \max\limits_{\mu, w, W} & \mu \\ & s.t. 
& W_{11}=-w^0-\mu, 2W_{12}=w^1, 2W_{13}=w^2, 2W_{23}+2W_{14}=-2, \\ & & 2W_{16}+W_{33}=1, 2W_{15}+W_{22}=2, W_{55}=1, \\ & & W_{ij}=0 \ \,\forall (i,j) \notin \{(2,2),(2,3),(3,2),(3,3),(5,5)\} \cup \{\cup_{j=1}^6 (1,j)\} \cup \{\cup_{j=1}^6 (j,1)\}, \\ & & \|(w^1,w^2)\| \leq w^0, \mu\in\mathbb{R}, w=(w^0,w^1,w^2)\in\mathbb{R}^3, W=(W_{ij})\in S^6_+. \end{eqnarray*} Let $\mu^*$ be the optimal value of the above SDP problem, attained at a maximizer $(\mu^*,\hat{w},\hat{W})$. Since $\hat{W} \succeq 0$, we have $\hat{W}_{11} \geq 0$, which implies $-\hat{w}^0 - \mu^* \geq 0$, and so, $\mu^* \leq -\hat{w}^0 \leq 0 $. On the other hand, define $\overline{W}\in S^6_+$ by $\overline{W}_{33} = \overline{W}_{55} = 1$, $\overline{W}_{22}=2$, $\overline{W}_{23}=\overline{W}_{32}=-1$ and $\overline{W}_{ij}=0$ otherwise. Let $\overline{w}^0 = \overline{w}^1 = \overline{w}^2 = 0$ and $\overline{\mu}=0$. It is not hard to verify that $\overline{W} \succeq 0$ and that $(\overline{\mu},\overline{w}, \overline{W})$ is a feasible point for the above SDP problem. So, $\mu^* \geq 0$. Thus, $\mu^*=0$, which shows that the SDP relaxation is exact. \end{example} We note that, in the special case of a quadratically constrained optimization problem with linear objective function under restrictive ellipsoidal data uncertainty, the linear matrix inequality constraint in (\ref{eq:ut}) reduces to \[ \left(\begin{array}{cc} \sum\limits_{i=1}^{m}\sum\limits_{r=0}^{t_i} w_i^{r}(L_i^{(r)})^TL_i^{(r)} & a+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i} w_i^{r}b_i^{(r)} \\ (a+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i} w_i^{r}b_i^{(r)})^T & \alpha+\sum\limits_{i=1}^{m}\sum\limits_{r=0}^{q_i}w_i^{r}\beta_i^{(r)}-\mu \end{array}\right) \succeq 0, \] where $L_i^{(r)} \in \mathbb{R}^{s \times n}$ is a matrix such that $B_i^{(r)}=(L_i^{(r)})^TL_i^{(r)}$.
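Before moving on, the claims in the example above admit a quick numerical sanity check, independent of any SDP solver: the objective of $(P_5)$ decomposes as $x_1^4+x_1^2+(x_1-x_2)^2\geq 0$, and the robust constraint at a point $x$ holds exactly when $\|x\|\leq 1$. A small pure-Python sketch (sampling the uncertainty set on the unit circle; the function names are ours, for illustration only):

```python
import math, random

def objective(x1, x2):
    # Objective of (P_5): x1^4 + 2 x1^2 - 2 x1 x2 + x2^2
    return x1**4 + 2*x1**2 - 2*x1*x2 + x2**2

def robust_feasible(x1, x2, samples=200):
    # Check v1*x1 + v2*x2 <= 1 for sampled (v1, v2) on the unit circle;
    # the worst case is v = x/||x||, so the constraint amounts to ||x|| <= 1.
    for i in range(samples):
        t = 2 * math.pi * i / samples
        if math.cos(t) * x1 + math.sin(t) * x2 > 1 + 1e-12:
            return False
    return True

# (0, 0) is robustly feasible with objective value 0 ...
assert robust_feasible(0.0, 0.0) and objective(0.0, 0.0) == 0.0

# ... and the objective is nonnegative everywhere, since
# x1^4 + 2 x1^2 - 2 x1 x2 + x2^2 = x1^4 + x1^2 + (x1 - x2)^2 >= 0.
random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-2, 2), random.uniform(-2, 2)
    assert objective(x1, x2) >= 0.0
```

This confirms the stated optimal value $\mu^*=0$ numerically; the SDP reformulation above is still needed to certify it.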
It then follows from \cite{Loboetal98} (see also \cite[page 277]{Nestrov}) that this linear matrix inequality can be equivalently written as second-order cone constraints. So, for a quadratically constrained optimization problem with linear objective function under restrictive ellipsoidal data uncertainty, the sums-of-squares relaxation problem can be equivalently rewritten as a second-order cone programming problem, and hence, exact second-order cone relaxation holds under the robust strict feasibility condition. A second-order cone reformulation of a robust quadratic optimization problem with linear objective function under restrictive ellipsoidal data uncertainty was first shown in \cite{goldfab}. Our corollary provides an alternative second-order cone reformulation for this class of problems. \section{Conclusion} In this paper, we studied robust solutions and semidefinite programming (SDP) relaxations for SOS-convex polynomial optimization problems in the face of data uncertainty. We established sums-of-squares polynomial representations characterizing robust solutions and exact SDP relaxations of robust SOS-convex optimization problems under various commonly used uncertainty sets. It follows readily from our SDP-relaxation results that the optimal value of a robust SOS-convex polynomial optimization problem can be found by solving a single semi-definite programming problem. On the other hand, it is known that, using the Lasserre hierarchy together with a moment approach \cite{Lasserre}, one can obtain a sequence of points converging to a minimizer of the original polynomial optimization problem. Employing the moment approach of \cite{Lasserre,Lasserre_robust} together with our exact SDP relaxation results, one can likewise obtain a sequence of points converging to a minimizer of a given robust SOS-convex polynomial optimization problem.
\subsection{Notation} \label{subsec:notation} We first define the notation that will be used in the sequel; see Table~\ref{table:notation} for quick reference. We denote the $N$ SI frames by $\mathbf{S}^{1}, \ldots, \mathbf{S}^{N}$, one of which is guaranteed to be available at the decoder buffer when M-frame $\mathbf{M}$ is decoded. We denote a desired target picture by $\mathbf{T}$ and for notational convenience we will include it in the set of SI frames as $\mathbf{S}^{0} = \mathbf{T}$. We denote the group of fixed-size code blocks in $\mathbf{M}$ that are encoded in merge mode by $\mathcal{B}_M$. Each block has $K$ pixels. We denote by $\mathbf{x}^{n}_b$ the $b$-th block in SI frame $\mathbf{S}^n$ coded in merge mode. Each block $\mathbf{x}^{n}_b$ is transformed into the DCT domain as $\mathbf{Y}^{n}_b = [Y^{n}_b(0), \ldots, Y^{n}_b(K-1)]$, where $Y^{n}_b(k)$ is the $k$-th DCT coefficient of $\mathbf{x}^{n}_b$. We denote by $X^{n}_b(k)$ the $k$-th quantized coefficient (\textit{q-coeff}) given uniform quantization step size $Q$: \begin{equation}\label{eq:XYQ} X^n_b(k) = \mathrm{round} \left( \frac{Y^n_b(k)}{Q} \right) , \end{equation} where $\mathrm{round}(x)$ is the standard rounding operation to the nearest integer. \subsection{Formulation} \label{subsec:formulation} We consider two different problems based on the reconstruction requirement with respect to the desired target $\mathbf{T}$. One typically chooses $\mathbf{T}$ {\em a priori}, \textit{e.g.}, by encoding the target picture independently (intra only) and using the decoded version as $\mathbf{T}$. The first problem requires the M-frame to reconstruct \textit{identically} to desired target $\mathbf{T}$: \begin{problem}{{\bf Fixed Target Merging}} \label{prob:fixed} (Section~\ref{sec:target}). Find M-frame $\mathbf{M}$ such that the decoder, taking as input \textit{any} one of the SI frames $\mathbf{S}^{n}$ and $\mathbf{M}$, can reconstruct $\mathbf{T}$ identically as output. 
\end{problem} Because of the differences between SI frames $\mathbf{S}^n$ and desired target $\mathbf{T}$, there may be situations where a high rate is required for $\mathbf{M}$ (\textit{e.g.}, due to motion in the video sequence, the target frame is very different from previously transmitted frames). In this case, we allow the reconstruction to deviate from desired target $\mathbf{T}$ in order to reduce the rate required for $\mathbf{M}$ by optimizing a rate-distortion criterion: \begin{problem}{{\bf Optimized Target Merging}}\label{prob:opt} (Section~\ref{sec:Solving}). Find $\mathbf{M}^*$ and $\bar{\mathbf{T}}(\mathbf{M}^*)$ so that the decoder, taking as input \textit{any} one of SI frames $\mathbf{S}^{n}$ and $\mathbf{M}^*$, can always reconstruct $\bar{\mathbf{T}}(\mathbf{M}^*)$ as output, and where $\mathbf{M}^*$ is an RD-optimal solution for a given weight parameter $\lambda$, \textit{i.e.}, \begin{equation}\label{eq:formular} \mathbf{M}^* = \arg \min_{\mathbf{M}} D(\mathbf{T}, \bar{\mathbf{T}}(\mathbf{M})) + \lambda R(\mathbf{M}) , \end{equation} where $D(\mathbf{T}, \bar{\mathbf{T}}(\mathbf{M}))$ is the distortion incurred (with respect to $\mathbf{T}$) when choosing $\bar{\mathbf{T}}(\mathbf{M})$ as the common reconstructed frame, and $R(\mathbf{M})$ is the rate needed to transmit $\mathbf{M}$. \end{problem} The second problem essentially states that the \textit{reconstruction target} $\bar{\mathbf{T}}(\mathbf{M})$ is RD-optimized with respect to desired target $\mathbf{T}$, while the first problem requires identical reconstruction to desired target $\mathbf{T}$. Note that in both problem formulations we avoid coding drift since they guarantee identical reconstruction for any SI frame, but a solution to Problem~\ref{prob:opt} will be shown to lead to significantly lower coding rates. 
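Both formulations operate on the quantized DCT coefficients of (\ref{eq:XYQ}). The quantization map can be sketched in a few lines of Python; this is an illustrative sketch under our own naming (not the codec implementation), assuming round-half-away-from-zero, since the text only specifies rounding to the nearest integer:

```python
import math

def quantize(y, Q):
    """q-coeff X = round(Y / Q) as in (eq:XYQ); 'round' is taken here as
    round-half-away-from-zero (an assumption -- the text just says round)."""
    if y >= 0:
        return int(math.floor(y / Q + 0.5))
    return -int(math.floor(-y / Q + 0.5))

def dequantize(x, Q):
    # Decoder-side reconstruction of the DCT coefficient from its q-coeff.
    return x * Q

# The per-coefficient reconstruction error never exceeds Q/2.
Q = 4
for y in [-10.0, -3.0, 0.0, 2.1, 7.9, 13.0]:
    assert abs(y - dequantize(quantize(y, Q), Q)) <= Q / 2
```

The final loop checks the familiar bound that uniform quantization with step size $Q$ incurs at most $Q/2$ error per coefficient, which is what the distortion terms in the sequel build on.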
\subsection{Piecewise Constant Function for Single Merging} \label{subsec:PWC} A merge operation must, given the q-coeff $X^n_b(k)$ of any SI frame $\mathbf{S}^n$, $n \in \{1, \ldots, N\}$, reconstruct an identical value $\bar{X}_b(k)$, for all frequencies $k$. We use a PWC function $f(x)$ as the chosen merging operator, with {\em shift} $c$ and {\em step size} $W$ parameters selected for each frequency $k$ of each block $b$ encoded in merge mode (see Fig.~\ref{fig:pwc}). The selection of these parameters influences the RD performance of this merging operation for the optimized target merging case. We now focus our discussion on how $c$ and $W$ are selected for each coefficient. Because the optimization is the same for each frequency $k$, we will drop the frequency index $k$ for simplicity of presentation. Examples of PWC functions are \texttt{ceiling}, \texttt{round}, \texttt{floor}, etc. In this paper, we employ the \texttt{floor} function\footnote{We define the \texttt{floor} function so as to minimize the maximum difference between the original $x$ and the reconstructed $f(x)$, given shift $c$ and step size $W$.}: \begin{equation} f(x) = \left\lfloor \frac{x+c}{W} \right\rfloor W + \frac{W}{2} - c. \label{eq:floor} \end{equation} From Fig.\;\ref{fig:pwc}, it is clear that there are numerous combinations of parameters $W$ and $c$ such that identical merging is ensured---\textit{i.e.}, all $X^n_b$ map to the same constant interval. Note also that the choice of $W$ depends on how spread out the various $X^0_b, \ldots, X^N_b$ are, that is, how correlated the SI blocks are to each other. In contrast, $c$ is used to select a desired reconstruction value $X^0_b$. Thus, because the level of correlation can be assumed to be relatively consistent across blocks, a \textit{step size $W_{\mathcal{B}_M}$ is selected once for all blocks $b \in \mathcal{B}_M$ for a given frequency}.
On the other hand, since the actual reconstruction value will be different from block to block, the {\em shift $c_b$ will be selected on a per block basis for a given frequency}. Before formulating the problem of optimizing the choice of $c$ and $W$, we derive constraints under which this selection is made by determining: \begin{itemize} \item The minimum value of $W$ that guarantees identical merging, \item The choice of $c$ that guarantees correct reconstruction, \item The effective range of $c$. \end{itemize} We first compute a minimum step size $W$ to enable identical merging for blocks $b$ in $\mathcal{B}_M$. Let $Z_b^*$ be the \textit{maximum pair difference} between any pair of q-coeffs of a given frequency in block $b$, \textit{i.e.}, \begin{equation}\label{eq:WX} Z_b^* = \max_{i,j \in \{0, \ldots, N\}} \left( X^i_b - X^j_b \right) = X^{\max}_b - X^{\min}_b , \end{equation} where $X^{\max}_b$ and $X^{\min}_b$ are respectively the maximum and minimum q-coeffs among the SI frames, \textit{i.e.}, \begin{equation} X^{\max}_b = \max_{n = 0, \ldots, N} X^n_b, ~~~~~ X^{\min}_b = \min_{n = 0, \ldots, N} X^n_b. \end{equation} Given $Z_b^*$, we next define the \textit{group-wise maximum pair difference} $Z_{\mathcal{B}_M}^*$ for the blocks in group $\mathcal{B}_M$: \begin{equation} Z_{\mathcal{B}_M}^* = \max_{b \in \mathcal{B}_M} Z_b^* . \label{eq:Wk} \end{equation} Since all $X^n_b$ are integers, $Z_{\mathcal{B}_M}^*$ is also an integer. We can now establish a minimum for step size $W_{\mathcal{B}_M}$ above which identical merging for all blocks $b \in \mathcal{B}_M$ is achievable: \begin{fact}{\textbf{Minimum Step Size for Identical Merging}}\label{fact:min-step}: a step size $W_{\mathcal{B}_M} > Z_{\mathcal{B}_M}^*$ is large enough for the \texttt{floor} function $f(X^n_b)$ in (\ref{eq:floor}) to merge any $X^{n}_b$ in $\mathcal{B}_M$ to a same value $\bar{X}_b$.
\end{fact} Since each $\mathbf{S}^n$ is a coarse approximation of (and thus is similar to) desired target $\mathbf{T}$, the $\mathbf{S}^n$'s themselves are similar. Hence, the largest difference $Z_b^*$ should be small in the typical case. Indeed, we observe empirically that $Z^*_b$ follows an exponential distribution (one-sided because $Z_b^*$ is non-negative). Fig.\;\ref{fig:Z} shows the probability distribution of $Z^*_b$ for $k=16$ and $k=32$. We can see that 80\% of the blocks have $Z^*_b \leq 5$. Assuming that $Z^*_b$ follows such an exponential (one-sided Laplacian) distribution, the maximum $Z_{\mathcal{B}_M}^*$ is typically much larger than the average $Z^*_b$. This will be shown to be useful for the optimized merging of Section~\ref{sec:Solving}. \begin{figure}[htb] \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=45mm]{figures/Z16.eps}}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=45mm]{figures/Z32.eps}}\medskip \end{minipage} \vspace{-0.2in} \caption{Two examples of the probability distribution of $Z^*_b$ with three SI frames at $Q = 1$ for \texttt{Balloons} at frequency $k=16$ and $k=32$.} \label{fig:Z} \end{figure} Fact~\ref{fact:min-step} states that step size $W_{\mathcal{B}_M}$ is wide enough so that $X^{0}_b,\ldots, X^{N}_b$ can all fall on the same interval in $f(x)$, as shown in Fig.\;\ref{fig:pwc}. However, given $W_{\mathcal{B}_M}$, shift $c_b$ must still be appropriately chosen \textit{per block} to achieve identical merging. Mathematically, identical merging means that the \texttt{floor} function with parameters $c_b$ and $W_{\mathcal{B}_M}$ produces the same integer output for all inputs $X^n_b$, that is: \begin{equation} \left\lfloor \frac{X^{n}_b + c_b}{W_{\mathcal{B}_M}} \right\rfloor = \left\lfloor \frac{X^{0}_b + c_b}{W_{\mathcal{B}_M}} \right\rfloor , ~~~ \forall n \in \{1, \ldots, N\} .
\label{eq:cCondition} \end{equation} Thus for all $X^n_b$, we must have for some $m \in \mathbb{Z}$ that: \begin{equation}\label{equ:W} mW_{\mathcal{B}_M} \leq X^{n}_b + c_b < (m+1)W_{\mathcal{B}_M} , ~~ \forall n \in \{0, \ldots, N\} . \end{equation} Instead of considering all $X^n_b$'s, it is sufficient to consider only the maximum and minimum values, so that the maximum range for $c_b$ that guarantees identical reconstruction is: \begin{equation} mW_{\mathcal{B}_M} - X^{\min}_b \leq c_b < (m+1)W_{\mathcal{B}_M} - X^{\max}_b \label{eq:c_bound} \end{equation} for some integer $m$. Note that given step size $W_{\mathcal{B}_M}$, $c_b$ and $c_b + m W_{\mathcal{B}_M}$ lead to the same output: \begin{align} f(x) = & \left\lfloor \frac{x + c_b + m W_{\mathcal{B}_M}}{W_{\mathcal{B}_M}}\right\rfloor W_{\mathcal{B}_M} + \frac{W_{\mathcal{B}_M}}{2} - (c_b + m W_{\mathcal{B}_M}) \nonumber \\ = & \left\lfloor \frac{x + c_b}{W_{\mathcal{B}_M}}\right\rfloor W_{\mathcal{B}_M} + \frac{W_{\mathcal{B}_M}}{2} - c_b . \nonumber \end{align} Thus it is sufficient to consider at most $W_{\mathcal{B}_M}$ different values of $c_b$ as possible candidates. Define $\alpha = X^{\min}_b \bmod W_{\mathcal{B}_M} $ and $\beta = X^{\max}_b \bmod W_{\mathcal{B}_M} $ and consider the two possible cases. \begin{itemize} \item In case (i), $X^{\min}_b = m W_{\mathcal{B}_M} + \alpha$ and $X^{\max}_b = m W_{\mathcal{B}_M} + \beta$, where $\alpha < \beta$, so that $X^{\min}_b$ and $X^{\max}_b$ fall in the same interval when there is no shift, $c_b=0$. Hence we can have $ -\alpha \leq c_b < W_{\mathcal{B}_M} - \beta$ in order to keep both $X^{\min}_b$ and $X^{\max}_b$ in the interval $[m W_{\mathcal{B}_M}, (m+1) W_{\mathcal{B}_M})$. \item In case (ii), $X^{\min}_b = m W_{\mathcal{B}_M} + \alpha$ and $X^{\max}_b = (m+1) W_{\mathcal{B}_M} + \beta$, where $\beta < \alpha$, \textit{i.e.}, when $c_b=0$, $X^{\min}_b$ and $X^{\max}_b$ fall in neighboring intervals.
Here we can have $ -\alpha \leq c_b < -\beta $ to move $X^{\max}_b$ down to the interval $[m W_{\mathcal{B}_M}, (m+1) W_{\mathcal{B}_M})$, or have $ W_{\mathcal{B}_M} -\alpha \leq c_b < W_{\mathcal{B}_M} -\beta $ to move $X^{\min}_b$ up to the interval $[(m+1) W_{\mathcal{B}_M}, (m+2) W_{\mathcal{B}_M})$. \end{itemize} Note that the selection of $W_{\mathcal{B}_M}$ (Fact~\ref{fact:min-step}) implies that $X^{\max}_b - X^{\min}_b < W_{\mathcal{B}_M}$, and $\alpha = \beta$ only if $X^{\min}_b = X^{\max}_b$, in which case there is no merging needed and any $c_b$ would suffice. \begin{figure}[htb] \begin{minipage}[b]{0.47\linewidth} \centering \centerline{\includegraphics[width=45mm]{figures/alphaBeta.eps}}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.47\linewidth} \centering \centerline{\includegraphics[width=45mm]{figures/alphaBeta2.eps}}\medskip \end{minipage} \vspace{-0.2in} \caption{Two cases of $X^{\min}_b$ and $X^{\max}_b$ (left: $\alpha < \beta$ and right: $\alpha > \beta$) and their implications on the feasible range of shift $c_b$.} \label{fig:alphaBeta} \end{figure} The two cases ($\alpha < \beta$ and $\alpha > \beta$) are illustrated in Fig.\;\ref{fig:alphaBeta}. Note that, since $ X^{\max}_b \geq X^{\min}_b$ by definition, we will be in Case (ii) whenever $\beta < \alpha$. Thus we can summarize this result as: \begin{fact}{\textbf{Maximum Feasible Range $\mathcal{F}_b$ for Shift $c_b$}}\label{fact:feasible}: For the shift $c_b$ to provide identical merging of q-coeffs $X^0_b, \ldots, X^N_b$ to a same value $\bar{X}_b$, given step size $W_{\mathcal{B}_M}$: \[ c_b \in \mathcal{F}_b = [-\alpha, W_{\mathcal{B}_M} - \beta) \;\; {\rm if} \;\; \alpha < \beta \] and \[ c_b \in \mathcal{F}_b = [W_{\mathcal{B}_M} -\alpha, W_{\mathcal{B}_M} - \beta) \;\; {\rm if} \;\; \alpha > \beta \] with $\alpha = X^{\min}_b \bmod W_{\mathcal{B}_M} $ and $\beta = X^{\max}_b \bmod W_{\mathcal{B}_M} $.
\end{fact} \subsection{Formulation of Merge Frame RD-Optimization} \label{sec:RD-opt} In order to formulate the PWC function parameter optimization problem, we first define the distortion $d_b$ as the squared difference between coefficient $Y_b^0$ of the desired target $\mathbf{T}$ and the reconstructed coefficient $f(X^0_b) \, Q$: \begin{equation} d_b ~=~ | \, Y_b^0 - f(X^0_b) \, Q \, |^2 . \label{eq:distort} \end{equation} Because shift $c_b$ will always be chosen within the feasible range defined in Fact~\ref{fact:feasible}, all q-coeffs $X^n_b$ will map to the same value $f(X^n_b), \forall n \in \{0, \ldots, N\}$. Thus we only need to compute the distortion for $f(X^0_b)$ in (\ref{eq:distort}). For the $k$-th q-coeff in block group $\mathcal{B}_M$, the encoder will have to transmit to the decoder: \begin{enumerate} \item one step size $W_{\mathcal{B}_M}(k) > Z^*_{\mathcal{B}_M}(k)$ for each group $\mathcal{B}_M$. \item one shift $c_b(k)$ for each block $b$ in group $\mathcal{B}_M$. \end{enumerate} The cost of encoding a single $W_{\mathcal{B}_M}(k)$ for all $k$-th q-coeffs in group $\mathcal{B}_M$ is small, while the cost of encoding $|\mathcal{B}_M|$ shifts $c_b(k)$ for each of the $k$-th q-coeffs can be significant. Thus we consider only the rate associated with $c_b(k)$ in our optimization. Note that since the high-frequency DCT coefficients of a given code block are very likely zero, we can insert an \textit{End of Block} (EOB) flag $E_b$ to signal that the remaining high-frequency q-coeffs in block $b$ in a raster-scan order are 0. Effective use of $E_b$ can reduce the amount of transmitted PWC function parameters\footnote{In the fixed target merging case, $E_b$ is inserted when the remaining high-frequency q-coeffs of a block $b$ in target $\mathbf{T}$ are exactly zero. In the optimized target case, $E_b$ can be inserted in an RD-optimal manner on a per-block basis, similar to what is done in coding standards such as H.264~\cite{wiegand03}.}.
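To make Facts~\ref{fact:min-step} and \ref{fact:feasible} concrete, the merging mechanics can be sketched in Python on toy integer q-coeffs. This is an illustration under our own naming, not the codec implementation; only integer shifts are enumerated, which suffices since $c_b$ and $c_b + mW_{\mathcal{B}_M}$ produce the same output:

```python
import math

def f(x, c, W):
    # PWC floor function of (eq:floor): all inputs falling in the same
    # interval [mW - c, (m+1)W - c) map to that interval's midpoint.
    return math.floor((x + c) / W) * W + W / 2.0 - c

def feasible_shifts(q, W):
    # Maximum feasible range F_b for shift c (Fact 2), for integer q-coeffs
    # q = [X^0_b, ..., X^N_b] and step size W > max(q) - min(q) (Fact 1).
    alpha = min(q) % W
    beta = max(q) % W
    if alpha <= beta:                     # case (i): same interval at c = 0
        return range(-alpha, W - beta)
    return range(W - alpha, W - beta)     # case (ii): neighboring intervals

# Example: q-coeffs of one block from the target and two SI frames.
q = [7, 5, 9]
W = max(q) - min(q) + 1                   # minimum admissible step size
for c in feasible_shifts(q, W):
    merged = {f(x, c, W) for x in q}
    assert len(merged) == 1               # identical merging for every feasible shift
```

Among the feasible shifts, the encoder picks the one whose common reconstruction lies closest to the target q-coeff $X^0_b$; that choice is exactly what the RD optimization below trades off against the rate of coding $c_b$.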
In summary, we can define the RD optimized target merging problem as: \begin{equation} \label{equ:opt} \min_{W_{\mathcal{B}_M}(k), \, c_b(k)} \quad \sum_{b \in \mathcal{B}_M} D_b + \lambda R_b , ~~~ \begin{array}{l} W_{\mathcal{B}_M}(k) > Z^*_{\mathcal{B}_M}(k) \\ c_b(k) \in \mathcal{F}_b(k) \end{array} \end{equation} with distortion $D_b$ and rate $R_b$ for block $b$ calculated as: \begin{eqnarray*} D_b & = & \sum_{k=0}^{E_b} d_b(k) + \sum_{k = E_b+1}^{K-1} Y_b^0(k)^2 \\ R_b & = & \sum_{k=0}^{E_b} R(c_b(k)) , \end{eqnarray*} where $d_b(k)$ is defined in (\ref{eq:distort}) and $R(c_b(k))$ is the rate to encode $c_b(k)$. We discuss how we tackle this optimization in Section~\ref{sec:Solving}. \section{Introduction} \label{sec:intro} \input{intro} \section{Related Work} \label{sec:related} \input{related} \section{System Overview} \label{sec:system} \input{system} \section{Problem Formulation} \label{sec:formulation} \input{formulation} \section{Fixed Target Merging} \label{sec:target} \input{target} \section{Optimized Target Merging} \label{sec:Solving} \input{Solving} \section{Experiments} \label{sec:results} \input{results} \section{Conclusion} \label{sec:conclude} \input{conclude} \bibliographystyle{IEEEtran} \subsection{Interactive Media Applications} The H.264 video coding standard \cite{wiegand03} introduced the concept of \textit{SP-frames}~\cite{karczewicz03} for stream-switching. In a nutshell, first the difference between one SI frame and the target picture is \textit{lossily} coded as the primary SP-frame. Then, the difference between each additional SI frame and the reconstructed primary SP-frame is \textit{losslessly} coded as a secondary SP-frame; lossless coding ensures identical reconstruction between the primary and each of the secondary SP-frames. One drawback of SP-frames is coding inefficiency. Due to lossless coding in secondary SP-frames, their sizes can be significantly larger than those of conventional P-frames.
Furthermore, the number of secondary SP-frames required is equal to the number of SI frames, thus resulting in significant storage costs. As we will discuss, our proposed scheme encodes only one merge frame for all SI frames, and hence the storage requirement is lower than for SP-frames. While DSC has been proposed for designing interactive and stream-switching mechanisms in the past decade~\cite{wu08,aaron04,nmcheung06,mcheung08vcip,mcheung09pcs}, it is neither widely used nor adopted into any video coding standard, partly due to the computational complexity required for bit-plane and channel coding in common DSC implementations. In contrast, in this work, our proposed coding tool involves only quantization (PWC function) and entropy coding of function parameters, both of which are computationally simple. Further, we demonstrate coding gain over a previously proposed DSC-based approach \cite{mcheung09pcs} in Section~\ref{sec:results}. One of the primary applications of our proposed merge frame is interactive media systems, which have attracted considerable interest~\cite{cheungCheung:13}. In particular, a range of media data types have been considered for interactive applications in the past: images~\cite{taubman:03}, light-fields~\cite{steinbach:08a,steinbach:08b}, volumetric images~\cite{ortega:09}, videos~\cite{wee:99,lin:01,fu:06,nmcheung06,mcheung08vcip,devaux:07,taubman:11} and high-resolution videos~\cite{girod:11,Mavlankar:10,Halawa:11,Pang:11}. While it is conceivable that our proposed merge frame can be applicable in some of these use scenarios for which DSC techniques have been proposed, here we focus on real-time switching among multiple pre-encoded video streams, as discussed in Section \ref{sec:system}. This paper extends our earlier work~\cite{dai13} by providing a more detailed presentation and evaluation of the system, as well as introducing two new concepts. First, we study the fixed target merging case (Section \ref{sec:target}).
Second, for the optimized target merging case, we develop a new algorithm to compute a locally optimal probability function $P(c)$ for shift $c$---one that leads to more efficient entropy coding of $c$, \textit{and} small signal reconstruction distortion after merging (Section \ref{sec:Solving}). We will show in our experiments, described in Section \ref{sec:results}, that our new algorithm leads to significantly better RD performance than our previously published work~\cite{dai13}. \subsection{Experimental Setup} \label{subsec:ExperiSet} We use four different multiview video test sequences with resolution 1024x768 for scenarios 1 and 3: \texttt{Balloons}, \texttt{Kendo}\footnote{http://www.tanimoto.nuee.nagoya-u.ac.jp/mpeg/mpeg\_ftv.html}, \texttt{Lovebird1} and \texttt{Newspaper}\footnote{ftp://203.253.128.142}. The viewpoints of each sequence are shown in Table~\ref{tab:SVS}. For scenario 2, we use four single-view video sequences with resolution 1920x1080: \texttt{BasketballDrive}, \texttt{Cactus}, \texttt{Kimono1} and \texttt{ParkScene}\footnote{ftp://ftp.tnt.uni-hannover.de/testsequences/}. \begin{table}[htb] \begin{small} \begin{center} \renewcommand{\arraystretch}{1.2} \renewcommand{\multirowsetup}{\centering} \caption{Viewpoints of each multiview sequence.}\label{tab:SVS} \begin{tabular}{|c|c|} \hline Sequence Name & Viewpoints \\ \hline Balloons & 1, 3, 5 \\ \hline Kendo & 1, 3, 5 \\ \hline Lovebird1 & 4, 6, 8 \\ \hline Newspaper & 3, 4, 5 \\ \hline \end{tabular} \end{center} \end{small} \end{table} We compare the coding performance of our proposed scheme against two schemes\footnote{Here $QP_{A}$ denotes the quantization parameter for coding DCT coefficients in approach $A$.}: the SP-frame~\cite{karczewicz03} in H.264 and the D-frame proposed in~\cite{cheung2010rate}. The $QP$ for the D-frame is set equal to $QP_{SI}$ to maintain consistent quality.
For multi-view scenarios 1 and 3, we encoded three streams from three viewpoints: the center view was set as the target, to which the other two side views can switch at a defined switching point. For Scenario 2, we encoded the single-view video at three different bit-rates and then switched among them. The bit-rates for the three streams were decided according to \textit{additive increase multiplicative decrease} (AIMD) rate control behavior in TCP and TFRC~\cite{chiu1989analysis}: one stream has twice the target stream's bit-rate, while the other has a slightly smaller bit-rate (0.9 times the target stream's bit-rate). The results are shown in plots of PSNR versus coding rate for a switched frame. M-frame parameters are selected as follows. In Scenario 1, different $QP_{M}$ will result in different rates, and so we set $QP_{M}$ equal to $QP_{SI}$, as was done for D-frames. However, for optimized target merging, the coding rate is determined mainly by the number of spikes in the distribution, and not by $QP_{M}$. In our experiments, as similarly done in High Efficiency Video Coding (HEVC), we first empirically compute $\lambda$ as a function of the SI frame's $QP_{SI}$: \begin{equation} \lambda = 2^{0.6QP_{SI}-12} \text{.} \end{equation} The number of spikes in the distribution is driven by the selected $\lambda$. We then set $QP_{M} = 1$ to maintain small quantization error. For mode selection among \textit{skip}, \textit{intra} and \textit{merge}, for each block $b$ we first examine the q-coeffs $X^n_b(k)$ of the $N$ SI frames. If the $X^n_b(k)$ of all $K$ frequencies are identical across the SI frames, then block $b$ is coded as \textit{skip}. Otherwise, selection between \textit{intra} and \textit{merge} is done based on an RD criterion. In HEVC, large code block sizes were introduced, which bring significant coding gains on high-resolution sequences~\cite{sullivan2012overview}.
Motivated by this observation, we also investigated the effect of different block sizes ($4 \times 4$, $8 \times 8$, $16 \times 16$) on coding performance. We also compare our current proposal against the performance of our previous work~\cite{dai13}, where the block size is fixed at $8\times 8$, the initial probability distribution of shift $P(c_b)$ is not optimized, and no RD-optimized EOB flag is employed. The corresponding PSNR-bitrate curves for scenario 3 are shown in Fig.\;\ref{fig:ESB}. \begin{figure}[htb] \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/ES/BLB.eps}} \centerline{(a) \texttt{Balloons}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/ES/KDB.eps}} \centerline{(b) \texttt{Kendo}}\medskip \end{minipage} \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/ES/LBB.eps}} \centerline{(c) \texttt{Lovebird1}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/ES/NPB.eps}} \centerline{(d) \texttt{Newspaper}}\medskip \end{minipage} \vspace{-0.15in} \caption{PSNR vs. encoding rate comparison with different block sizes for sequences \texttt{Balloons}, \texttt{Kendo}, \texttt{Lovebird1} and \texttt{Newspaper}.} \label{fig:ESB} \end{figure} From Fig.\;\ref{fig:ESB}, we observe that block size $16 \times 16$ provides the best coding performance at all bit-rates. One reason for the superior performance of large blocks in the M-frame is the following: because SI frames are already reconstructions of the target frames (albeit slightly different), motion compensation is not necessary, so the benefit of smaller blocks typical in video coding is diminished. We note that in general an optimal block size per frame can be selected by the encoder \textit{a priori} and encoded as side information to inform the decoder.
In the following experiments, the block size will be fixed at $16 \times 16$ for best performance. Further, we also observe that our proposed method achieves a significant coding performance gain compared to our previous method in \cite{dai13} over all bit-rate regions, showing the effectiveness of our newly proposed optimization techniques. \subsection{Effectiveness of ``Spike + Uniform'' Distribution} \label{subsec:verify} In order to verify the effectiveness of our proposed ``Spike + Uniform'' (\texttt{SpU}) probability distribution $P(c_b)$ for shift parameter $c_b$, we choose a competing na\"ive distribution for $P(c_b)$ as follows: first, we compute the distortion-minimizing $g(c^0)$ as the initial probability distribution. Next, we compute the RD-optimal $c_b$ for each block $b \in \mathcal{B}_M$ via (\ref{eq:RDopt}) for a single iteration using the initialized probability distribution and compute a new $P^\prime(c_b)$. This $P^\prime(c_b)$ is then used to compute the rate to encode each $c_b$ of a merge block $b$. The difference between $P^\prime(c_b)$ and our proposed $P^t(c_b)$ is that $P^\prime(c_b)$ in general is an arbitrarily shaped distribution, not a skewed ``spiky'' distribution. Experimental results of the M-frame using these distributions are shown in Fig.\;\ref{fig:Naive}. \begin{figure}[htb] \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/ES/BLN.eps}} \centerline{(a) \texttt{Balloons}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/ES/KDN.eps}} \centerline{(b) \texttt{Kendo}}\medskip \end{minipage} \vspace{-0.15in} \caption{PSNR vs.
encoding rate comparison of the proposed \texttt{SpU} and na\"ive distributions for sequences \texttt{Balloons}, \texttt{Kendo}.} \label{fig:Naive} \end{figure} We observe from Fig.\;\ref{fig:Naive} that our proposed \texttt{SpU} distribution outperforms the na\"ive distribution in the high bit-rate region and is comparable in the low bit-rate region. This is because in the low bit-rate region $\lambda$ is very large, so that for any initial distribution, after one iteration, only one spike will remain, and the number of iterations required for convergence is very small. \subsection{Scenario 1: Static View Switching} \label{subsec:S1} We first test our proposed M-frame in the static view switching scenario for multi-view sequences. Three views are encoded using the same $QP$. The fixed target merging algorithm described in Section~\ref{sec:target} is used to facilitate switching to neighboring views among pictures of the same instant, as shown in Fig.~\ref{fig:SVS}. Specifically, we constructed M- / D- frames to enable static view-switching from view 1 or 3 to target view 2. We first use H.264 to encode two SI frames (P-frames) using $\Pi_{2, 2}$ as the target and $\Pi_{1, 2}$ and $\Pi_{3, 2}$ as predictors, respectively. This results in encoded rates $\mathcal{R}_{1, 2}$ and $\mathcal{R}_{3, 2}$ for the two SI frames, respectively. Then we encoded an M- / D- frame to merge these two SI frames identically to $\Pi_{2, 2}$. The corresponding rates for the M-frame and D-frame are $\mathcal{R}^M_{2,2}$ and $\mathcal{R}^D_{2,2}$, respectively. Since the SP-frame in H.264 cannot perform fixed target merging, it is not tested in this scenario. We assume that the switching probability is equal for views 1 and 3, \textit{i.e.}, 0.5 each.
Then the overall rate for the D-frame is calculated as: \begin{equation} \mathcal{R}^{D} = \frac{\mathcal{R}_{1, 2} + \mathcal{R}_{3, 2}}{2} + \mathcal{R}^D_{2, 2} \text{.} \end{equation} The overall rate for our proposed M-frame using the fixed target merging scheme is calculated as: \begin{equation} \mathcal{R}^{M} = \frac{\mathcal{R}_{1, 2} + \mathcal{R}_{3, 2}}{2} + \mathcal{R}^M_{2, 2} \text{.} \end{equation} \begin{figure}[htb] \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S1/BL.eps}} \centerline{(a) \texttt{Balloons}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S1/KD.eps}} \centerline{(b) \texttt{Kendo}}\medskip \end{minipage} \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S1/LB.eps}} \centerline{(c) \texttt{Lovebird1}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S1/NP.eps}} \centerline{(d) \texttt{Newspaper}}\medskip \end{minipage} \vspace{-0.15in} \caption{PSNR versus
encoding rate comparing the proposed M-frame using the fixed target merging scheme with the D-frame for sequences \texttt{Balloons}, \texttt{Kendo}, \texttt{Lovebird1} and \texttt{Newspaper} in the static view switching scenario.} \label{fig:S1} \end{figure} \begin{table}[htb] \begin{center} \renewcommand{\arraystretch}{1.2} \renewcommand{\multirowsetup}{\centering} \caption{BD-rate reduction of the proposed M-frame using the fixed target merging scheme compared to the D-frame in the static view switching scenario.}\label{tab:S1MD} \begin{tabular}{|c|c|} \hline Sequence Name & M-frame \emph{vs.} D-frame \\ \hline Balloons & -31.7\% \\ \hline Kendo & -40.1\% \\ \hline Lovebird1 & -35.7\% \\ \hline Newspaper & -31.1\% \\ \hline \end{tabular} \end{center} \end{table} The coding results are shown in Fig.~\ref{fig:S1} and the BD-rate~\cite{bjontegaard2008improvements} comparison can be found in Table~\ref{tab:S1MD}. We observe from Table~\ref{tab:S1MD} that our proposed M-frame using the fixed target merging scheme achieves up to 40.1\% BD-rate reduction compared to the D-frame. Further, from Fig.\;\ref{fig:S1} we observe that our M-frame is better than the D-frame in all bit-rate regions, especially in the low and high bit-rate regions, mainly due to the skip block and EOB flag tools. In the high bit-rate region, due to the small distortion in SI frames, more blocks are classified as skip blocks, which efficiently reduces the bits needed to encode the M-frame, while in the low bit-rate region more coefficients are set to zero and skipped due to the EOB flag. This shows the effectiveness of our proposed M-frame using the fixed target merging scheme compared to the D-frame. \subsection{Scenario 2: Bit-rate Adaptation} \label{subsec:S2} We next conducted experiments in the bit-rate adaptation scenario for single-view video sequences. The M-frame is encoded in an RD-optimized manner, as described in Section~\ref{sec:Solving}, with the system framework shown in Fig.~\ref{fig:DVS}.
Three streams of different rates are encoded according to AIMD rate control behavior. We constructed M-/D-frames to enable stream-switching from stream 1, 2 or 3 to target stream 2 under different bit-rates. We first encoded three SI frames using $\Pi_{2, 2}$ as the target and $\Pi_{1, 1}$, $\Pi_{2, 1}$ and $\Pi_{3, 1}$ as references, respectively. This results in encoded rates $\mathcal{R}_{1, 1}$, $\mathcal{R}_{2, 1}$ and $\mathcal{R}_{3, 1}$ for the three SI frames, respectively. Then we encoded an M-/D-frame to merge these three SI frames into an identical frame. The corresponding rates for the M-frame and D-frame are $\mathcal{R}^M_{2,2}$ and $\mathcal{R}^D_{2,2}$, respectively. We also constructed SP-frames to enable stream-switching from stream 1, 2 or 3 to target stream 2. We first encoded a primary SP-frame using $\Pi_{2, 2}$ as the target and $\Pi_{2, 1}$ as the reference. We then losslessly encoded two secondary SP-frames using the primary SP-frame as the target and $\Pi_{1, 1}$, $\Pi_{3, 1}$ as references, respectively. $\mathcal{R}^S_{2, 1}$ denotes the rate for the primary SP-frame, while $\mathcal{R}^S_{1, 1}$ and $\mathcal{R}^S_{3, 1}$ denote the rates for the two secondary SP-frames. As measures of transmission rate, we consider both the average and worst-case code rates during a stream-switch. For the average case, in the absence of application-dependent information, we assume that the probability of stream-switching is equal for all streams.
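Under this equal switching-probability assumption, the average- and worst-case per-switch transmission costs defined next can be sketched numerically as follows. This is a minimal sketch; all rate values are hypothetical placeholders, not measured results.

```python
def switch_rates(si_rates, merge_rate):
    """Average and worst-case per-switch cost when one SI (P-)frame
    plus one merge frame (M- or D-frame) must be transmitted."""
    avg = sum(si_rates) / len(si_rates) + merge_rate
    worst = max(si_rates) + merge_rate
    return avg, worst

# Hypothetical per-SI-frame rates (kbits) standing in for R_{1,1},
# R_{2,1}, R_{3,1}.
si = [120.0, 90.0, 150.0]
avg_m, worst_m = switch_rates(si, merge_rate=30.0)    # M-frame merge
avg_d, worst_d = switch_rates(si, merge_rate=80.0)    # D-frame merge

# An SP-frame switch sends a single SP-frame; secondary SP-frames are
# losslessly coded and hence far larger than the primary one.
sp = [400.0, 45.0, 420.0]                             # hypothetical sizes
avg_sp, worst_sp = sum(sp) / len(sp), max(sp)
```

The large gap between the average and worst-case SP-frame costs in this toy setting mirrors the asymmetry between primary and losslessly coded secondary SP-frames discussed in the results.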
Thus, the overall rate for the RD-optimized M-frame is calculated as: \begin{equation} \mathcal{R}^{M}_{T_A} = \frac{\mathcal{R}_{1, 1} + \mathcal{R}_{2, 1} + \mathcal{R}_{3, 1}}{3} + \mathcal{R}^M_{2, 2} \text{.} \end{equation} The overall rate for the D-frame is calculated as: \begin{equation} \mathcal{R}^{D}_{T_A} = \frac{\mathcal{R}_{1, 1} + \mathcal{R}_{2, 1} + \mathcal{R}_{3, 1}}{3} + \mathcal{R}^D_{2, 2} \text{.} \end{equation} The overall rate for the SP-frame is calculated as: \begin{equation} \mathcal{R}^{SP}_{T_A} = \frac{\mathcal{R}^S_{1, 1} + \mathcal{R}^S_{2, 1} + \mathcal{R}^S_{3, 1}}{3} \text{.} \end{equation} \begin{figure}[htb] \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S4/AV/BD.eps}} \centerline{(a) \texttt{BasketballDrive}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S4/AV/CT.eps}} \centerline{(b) \texttt{Cactus}}\medskip \end{minipage} \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S4/AV/KN.eps}} \centerline{(c) \texttt{Kimono1}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S4/AV/PS.eps}} \centerline{(d) \texttt{ParkScene}}\medskip \end{minipage} \vspace{-0.15in} \caption{PSNR versus encoding rate comparing the proposed RD-optimized M-frame with the D-frame and SP-frame for sequences \texttt{BasketballDrive}, \texttt{Cactus}, \texttt{Kimono1} and \texttt{ParkScene} in the average case.} \label{fig:S4AV} \end{figure} \begin{table*}[htb] \begin{center} \renewcommand{\arraystretch}{1.2} \renewcommand{\multirowsetup}{\centering} \caption{BD-rate reduction of the RD-optimized M-frame compared to the D-frame and SP-frame in scenario 2.}\label{tab:S2} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Sequence Name} & \multicolumn{2}{c|}{M-frame \emph{vs.} D-frame} & \multicolumn{2}{c|}{M-frame \emph{vs.} SP-frame} \\
\cline{2-5} & Average Case & Worst Case & Average Case & Worst Case \\ \hline Balloons & -63.4\% & -63.7\% & -17.0\% & -39.4\% \\ \hline Kendo & -63.5\% & -63.2\% & -18.8\% & -42.1\% \\ \hline Lovebird1 & -65.6\% & -65.4\% & -36.3\% & -49.9\% \\ \hline Newspaper & -56.3\% & -56.7\% & -19.5\% & -43.8\% \\ \hline \end{tabular} \end{center} \end{table*} The coding results for the average case are shown in Fig.~\ref{fig:S4AV} and the BD-rate comparison can be found in Table~\ref{tab:S2}. We observe from Table~\ref{tab:S2} that our proposed RD-optimized M-frame achieves up to 65.6\% BD-rate reduction compared to the D-frame and 36.3\% BD-rate reduction compared to the SP-frame. Moreover, from Fig.\;\ref{fig:S4AV} we observe that our proposed RD-optimized M-frame is better than the D-frame and SP-frame in all bit-rate regions. Note that for the SP-frame case, a higher switching probability to the primary SP-frame would result in a smaller average rate. For the worst case, the rate for the M-frame is calculated as: \begin{equation} \mathcal{R}^{M}_{T_W} = \max(\mathcal{R}_{1, 1}, \mathcal{R}_{2, 1}, \mathcal{R}_{3, 1}) + \mathcal{R}^M_{2, 2} \text{.} \end{equation} The rate for the D-frame is calculated as: \begin{equation} \mathcal{R}^{D}_{T_W} = \max(\mathcal{R}_{1, 1}, \mathcal{R}_{2, 1}, \mathcal{R}_{3, 1}) + \mathcal{R}^D_{2, 2} \text{.} \end{equation} The rate for the SP-frame is calculated as: \begin{equation} \mathcal{R}^{SP}_{T_W} = \max(\mathcal{R}^S_{1, 1}, \mathcal{R}^S_{2, 1}, \mathcal{R}^S_{3, 1}) \text{.} \end{equation} \begin{figure}[htb] \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S4/WS/BD.eps}} \centerline{(a) \texttt{BasketballDrive}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S4/WS/CT.eps}} \centerline{(b) \texttt{Cactus}}\medskip \end{minipage} \begin{minipage}[b]{.45\linewidth} \centering
\centerline{\includegraphics[width=4.8cm]{figures/S4/WS/KN.eps}} \centerline{(c) \texttt{Kimono1}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S4/WS/PS.eps}} \centerline{(d) \texttt{ParkScene}}\medskip \end{minipage} \vspace{-0.15in} \caption{PSNR versus encoding rate comparing the RD-optimized M-frame with the D-frame and SP-frame for sequences \texttt{BasketballDrive}, \texttt{Cactus}, \texttt{Kimono1} and \texttt{ParkScene} in the worst case.} \label{fig:S4WS} \end{figure} The coding results for the worst case are shown in Fig.~\ref{fig:S4WS} and the BD-rate comparison can be found in Table~\ref{tab:S2}. We observe from Table~\ref{tab:S2} that our proposed RD-optimized M-frame achieves up to 65.4\% BD-rate reduction compared to the D-frame and 49.9\% BD-rate reduction compared to the SP-frame. We also observe in Table~\ref{tab:S2} that the performance difference between the average and worst cases for the D-frame is small. However, for the SP-frame the difference is large. This is due to the lossless coding of secondary SP-frames, which makes them much larger than the primary SP-frame (typically 10 times larger). \subsection{Scenario 3: Dynamic View Switching} \label{subsec:S3} Finally, we conducted experiments in the dynamic view switching scenario for multi-view video sequences. Three views are encoded using the same $QP$. The detailed frame structures for the M-frame, D-frame and SP-frame are the same as in Section~\ref{subsec:S2}, and the overall rate calculations for the average and worst cases are identical as well.
\begin{figure}[htb] \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S3/AV/BL.eps}} \centerline{(a) \texttt{Balloons}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S3/AV/KD.eps}} \centerline{(b) \texttt{Kendo}}\medskip \end{minipage} \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S3/AV/LB.eps}} \centerline{(c) \texttt{Lovebird1}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S3/AV/NP.eps}} \centerline{(d) \texttt{Newspaper}}\medskip \end{minipage} \vspace{-0.15in} \caption{PSNR versus encoding rate comparing the proposed RD-optimized M-frame with the D-frame and SP-frame for sequences \texttt{Balloons}, \texttt{Kendo}, \texttt{Lovebird1} and \texttt{Newspaper} in the average case.} \label{fig:S3AV} \end{figure} \begin{table*}[htb] \begin{center} \renewcommand{\arraystretch}{1.2} \renewcommand{\multirowsetup}{\centering} \caption{BD-rate reduction of the RD-optimized M-frame compared to the D-frame and SP-frame in scenario 3.}\label{tab:S3} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Sequence Name} & \multicolumn{2}{c|}{M-frame \emph{vs.} D-frame} & \multicolumn{2}{c|}{M-frame \emph{vs.} SP-frame} \\ \cline{2-5} & Average Case & Worst Case & Average Case & Worst Case \\ \hline Balloons & -55.1\% & -53.0\% & -19.2\% & -35.0\% \\ \hline Kendo & -53.8\% & -53.6\% & -19.3\% & -36.4\% \\ \hline Lovebird1 & -57.5\% & -58.7\% & -11.3\% & -28.7\% \\ \hline Newspaper & -51.6\% & -50.4\% & -5.0\% & -12.9\% \\ \hline \end{tabular} \end{center} \end{table*} \begin{figure}[htb] \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S3/WS/BL.eps}} \centerline{(a) \texttt{Balloons}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering
\centerline{\includegraphics[width=4.8cm]{figures/S3/WS/KD.eps}} \centerline{(b) \texttt{Kendo}}\medskip \end{minipage} \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S3/WS/LB.eps}} \centerline{(c) \texttt{Lovebird1}}\medskip \end{minipage} \hfill \begin{minipage}[b]{.45\linewidth} \centering \centerline{\includegraphics[width=4.8cm]{figures/S3/WS/NP.eps}} \centerline{(d) \texttt{Newspaper}}\medskip \end{minipage} \vspace{-0.15in} \caption{PSNR versus encoding rate comparing the proposed RD-optimized M-frame with the D-frame and SP-frame for sequences \texttt{Balloons}, \texttt{Kendo}, \texttt{Lovebird1} and \texttt{Newspaper} in the worst case.} \label{fig:S3WS} \end{figure} The coding results of dynamic view switching for the average case and worst case are shown in Fig.~\ref{fig:S3AV} and \ref{fig:S3WS}, respectively. The BD-rate comparison for both cases can be found in Table~\ref{tab:S3}. From Table~\ref{tab:S3} we observe that our proposed RD-optimized M-frame achieves up to 57.5\% BD-rate reduction compared to the D-frame and up to 19.3\% compared to the SP-frame in the average case, and up to 58.7\% and 36.4\%, respectively, in the worst case. \subsection{Selection of $W_{\mathcal{B}_M}$} \label{subsec:W} Note that, by definition of $Z^*_{\mathcal{B}_M}$, all $X^n_b$ are guaranteed to fit within an interval of size $W_{\mathcal{B}_M}$ as long as $W_{\mathcal{B}_M} > Z^*_{\mathcal{B}_M}$, provided we transmit an appropriate $c_b$ (Fact \ref{fact:min-step}). Reducing $W_{\mathcal{B}_M}$ from $2 + 2 Z_{\mathcal{B}_M}$ can reduce the rate required to transmit $c_b$, since $c_b$ can take at most $W_{\mathcal{B}_M}$ different values. As shown in Section~\ref{subsec:PWC}, we observe empirically that $Z^*_b$ follows a Laplacian distribution (Fig.\;\ref{fig:Z}).
Thus, for a large block group $\mathcal{B}_M$, $Z^*_{\mathcal{B}_M} = \max_{b\in\mathcal{B}_M} Z^*_b$ will in general be much larger than a typical $Z^*_b$. Because $Z^*_b \geq Z_b$, in practice $W_{\mathcal{B}_M} > 2 Z_b + 2$ holds for many blocks $b$, so the target $X^0_b$ can still be reconstructed for those blocks. Thus, we propose to select $W_{\mathcal{B}_M} = Z^*_{\mathcal{B}_M} + 1$, which guarantees that for the worst-case block all SI values are in the same interval, with an appropriate choice of $c_b$ to be discussed next. \subsection{RD-optimal Selection of Shifts} \label{subsec:RDoptShift} Given a chosen $W_{\mathcal{B}_M}$, according to Fact~\ref{fact:feasible} there will be multiple values of $c_b$ that guarantee identical reconstruction for all $X^n_b$. To enable efficient entropy coding of $c_b$, it is desirable to have a skewed probability distribution $P(c_b)$ of $c_b$. We design an algorithm to promote a skewed $P(c_b)$ iteratively. We first propose how to initialize $P(c_b)$, and then discuss how to update $P(c_b)$ in subsequent iterations. We optimize shift $c_b$ via the following RD cost function: \begin{equation} \min_{0 \leq c_b < W_{\mathcal{B}_M} \,|\, c_b \in \mathcal{F}_b} d_b + \lambda (- \log P(c_b)) , \label{eq:RDopt} \end{equation} where the rate term is approximated as the negative log of the probability $P(c_b)$ of candidate $c_b$, and $d_b$ is the distortion term computed using (\ref{eq:distort}). The difficulty in using objective (\ref{eq:RDopt}) to compute the optimal $c^*_b$ lies in how to define $P(c_b)$ \textit{prior} to selection of $c_b$. Our strategy is to initialize a skewed distribution $P(c_b)$ to promote a low coding rate, perform optimization (\ref{eq:RDopt}) for each block $b \in \mathcal{B}_M$, then update $P(c_b)$ based on statistics of the selected $c_b$'s, and repeat until $P(c_b)$ converges.
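One pass of this per-block selection via the RD cost above, given a current distribution $P(c_b)$, can be sketched as follows. The quadratic distortion and the single-spike distribution used here are illustrative stand-ins, not the actual models of this paper.

```python
import math

def rd_optimal_shift(feasible, distortion, prob, lam):
    """RD-optimal shift for one block: pick the feasible c_b that
    minimizes d_b + lam * (-log2 P(c_b))."""
    return min(feasible,
               key=lambda c: distortion(c) + lam * (-math.log2(prob(c))))

# Illustrative stand-ins: a 'spike + uniform' P(c_b) with one spike at
# c = 4, and a quadratic distortion penalty around a hypothetical
# distortion-minimizing shift c^o = 6.
def prob(c):
    return 0.7 if c == 4 else 0.3 / 9        # 10 candidate shifts in total

def distortion(c):
    return (c - 6) ** 2

feasible = range(10)
c_low = rd_optimal_shift(feasible, distortion, prob, lam=0.1)    # distortion-driven
c_high = rd_optimal_shift(feasible, distortion, prob, lam=50.0)  # rate-driven
```

With a small $\lambda$ the distortion-minimizing shift wins; with a large $\lambda$ the spike wins, which is how large $\lambda$ values pull the selected shifts onto a few high-probability values.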
In order to choose an initial distribution $P(c_b)$, we note that a distribution with a small number of spikes has lower entropy than a smooth distribution (see Fig.~\ref{fig:ProbC} as an example). Choosing $c_b$ values following such a discrete distribution (\textit{e.g.}, left in Fig.~\ref{fig:ProbC}) means that we reduce the number of possible $c_b$, which may increase $d_b$. Thus, if $\lambda$ in (\ref{eq:RDopt}) is small, in order to reduce distortion one can increase the number of spikes in $P(c_b)$. In this paper, we propose to induce a multi-spike probability distribution $P(c_b)$, where the appropriate number of spikes depends on the desired tradeoff between distortion and rate in (\ref{eq:RDopt}). \begin{figure}[htb] \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=45mm]{figures/ProbC1}}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=45mm]{figures/ProbC2}}\medskip \end{minipage} \vspace{-0.2in} \caption{Two examples of shift distribution $P(c_b)$. The left distribution has a small number of spikes and low entropy (1.22). The right distribution is smooth but has high entropy (4.38).} \label{fig:ProbC} \end{figure} Since $c_b$ is constrained to be in the feasible region $\mathcal{F}_b$ defined in Fact~\ref{fact:feasible}, it is possible that when we restrict $c_b$ to just a few values as in Fig.~\ref{fig:ProbC}\,(left), there will be some blocks $b$ for which none of the ``spikes'' in $P(c_b)$ fall within their $\mathcal{F}_b$. In order to guarantee identical reconstruction, such blocks must be able to select non-spike values as shifts $c_b$. Thus we propose a ``spike + uniform'' distribution $P(c_b)$: \begin{equation} P(c_b) = \left\{ \begin{array}{ll} ~~ p^s_i & \mbox{if} ~ c_b = c^s_i \\ ~~ p_c & \mbox{o.w.} \end{array} \right.
\label{eq:Pcb} \end{equation} where $\{c^s_1,\ldots, c^s_H\}$ are the $H$ spikes, each with probability $p^s_i$, and $p_c$ is a small constant for non-spike shift values. $p_c$ is chosen so that $P(c_b)$ sums to $1$. \subsubsection{Computing distribution $P(c_b)$ for fixed $H$} We now discuss how we compute $P(c_b)$ for a given $H$. Empirically we observe that for a reasonable number of spikes (\textit{e.g.}, $H \geq 3$), the majority of blocks (typically $99\%$ or more) in $\mathcal{B}_M$ have at least one spike in their feasible region $\mathcal{F}_b$. Thus, to simplify our computation we first ignore the feasibility constraint and employ an iterative \textit{rate-constrained Lloyd-Max} algorithm (rc-LM)~\cite{sullivan1996efficient} to identify spike locations. We illustrate the operations of rc-LM to initialize $H$ spike locations for $H=3$ as follows. Let $c^o_b$ be the shift value that minimizes \textit{only} distortion for block $b$. Let $g(c^o)$ be the probability distribution of the distortion-minimizing shift $c^o$ for blocks in $\mathcal{B}_M$, where $0 \leq c^o < W_{\mathcal{B}_M}$. $g(c^o)$ can be computed empirically for group $\mathcal{B}_M$. Without loss of generality, we define quantization bins for the three spikes $c^s_1$, $c^s_2$ and $c^s_3$ as $[0, b_1)$, $[b_1, b_2)$ and $[b_2, W_{\mathcal{B}_M})$ respectively. The expected distortion $D(\{c^s_i\})$ given three spikes is: \begin{equation} \sum\limits_{c^o = 0}^{b_1 - 1} |c^o-c^s_1|^2 g(c^o) + \sum\limits_{c^o = b_1}^{b_2 - 1} |c^o-c^s_2|^2 g(c^o) + \sum\limits_{c^o = b_2}^{W_{\mathcal{B}_M}-1} |c^o-c^s_3|^2 g(c^o) \end{equation} where $D(\{c^s_i\})$ is computed as the sum of squared differences between $c^o$ and the spike $c^s_i$ of the bin that $c^o$ is assigned to.
Having defined distortion $D(\{c^s_i\})$, the initial spike locations $c^s_i$ given $H$ spikes can be found as follows: i) construct $H$ spikes evenly spaced in the interval $[0, W_{\mathcal{B}_M})$, ii) use the conventional Lloyd-Max algorithm with no rate constraint to converge to a set of $H$ bin centroids $c^s_i$. Next, taking rate into account, the RD cost of the three spikes can be written as: \vspace{-0.1in} \begin{small} \begin{equation} \begin{array}{l} D(\{c^s_i\}) + \lambda \left( -\log (\sum\limits_{c^o=0}^{b_1-1} g(c^o)) -\log (\sum\limits_{c^o=b_1}^{b_2-1} g(c^o)) -\log (\sum\limits_{c^o=b_2}^{W_{\mathcal{B}_M}-1} g(c^o)) \right) \end{array} \label{eq:rdSpike} \end{equation} \end{small}\noindent (\ref{eq:rdSpike}) is essentially the aggregate of RD costs (\ref{eq:RDopt}) for all blocks in $\mathcal{B}_M$. To minimize (\ref{eq:rdSpike}), rc-LM alternately optimizes bin boundaries $b_i$ and spike locations $c^s_i$ until convergence. With the spikes $c^s_i$ fixed, each bin boundary $b_i$ is optimized via exhaustive search in the range $[c^s_i, c^s_{i+1})$ to minimize both rate and distortion in (\ref{eq:rdSpike}). With the bin boundaries $b_i$ fixed, the optimal $c^s_i$ can be computed simply as the bin centroid: \begin{equation} c^s_i = \frac{\sum_{c^o = b_i}^{b_{i+1}-1} g(c^o) c^o}{\sum_{c^o=b_i}^{b_{i+1}-1} g(c^o)} \label{eq:centroid} \end{equation} where $b_0 = 0$ and $b_3 = W_{\mathcal{B}_M}$. Upon convergence, we can then identify the small fraction of blocks with no spikes in their feasible regions $\mathcal{F}_b$ and assign an appropriate constant $p_c$ so that $P(c_b)$ is well defined according to (\ref{eq:Pcb}). Computing $P(c_b)$ with $H$ spikes where $H \neq 3$ can be done similarly. \subsubsection{Finding the optimal $P(c_b)$} \label{subsubsec:prob} To find the optimal $P(c_b)$, we add an outer loop to this $P(c_b)$ construction procedure to search for the optimal number of spikes $H$.
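The rc-LM alternation just described can be sketched as follows. This is a simplified sketch: the empirical distribution over shift values is a hypothetical example, the logarithm is taken base 2, and ties in the exhaustive boundary search are broken toward smaller values.

```python
import math

def rc_lloyd_max(g, lam, H=3, iters=10):
    """Rate-constrained Lloyd-Max sketch: alternately optimize bin
    boundaries (exhaustive search between neighboring spikes) and
    spike locations (bin centroids), reducing the distortion plus
    lam times the sum of -log2 of the per-bin probability masses."""
    W = len(g)
    spikes = [round((i + 0.5) * W / H) for i in range(H)]   # even spacing init
    bounds = [0] + [(spikes[i] + spikes[i + 1]) // 2
                    for i in range(H - 1)] + [W]

    def bin_cost(lo, hi, s):
        mass = sum(g[lo:hi])
        dist = sum((c - s) ** 2 * g[c] for c in range(lo, hi))
        return dist + lam * (-math.log2(mass) if mass > 0 else 0.0)

    for _ in range(iters):
        # (i) boundary update: exhaustive search between adjacent spikes
        for i in range(1, H):
            bounds[i] = min(
                range(spikes[i - 1] + 1, spikes[i] + 1),
                key=lambda b, i=i: bin_cost(bounds[i - 1], b, spikes[i - 1])
                                 + bin_cost(b, bounds[i + 1], spikes[i]))
        # (ii) centroid update within each bin
        for i in range(H):
            mass = sum(g[bounds[i]:bounds[i + 1]])
            if mass > 0:
                spikes[i] = round(sum(c * g[c]
                                      for c in range(bounds[i], bounds[i + 1])) / mass)
    return spikes, bounds

# Hypothetical empirical distribution g over W = 12 shift values,
# with three modes around shifts 1, 5 and 9.
g = [0.05, 0.25, 0.05, 0.0, 0.05, 0.25, 0.05, 0.0, 0.05, 0.2, 0.05, 0.0]
spikes, bounds = rc_lloyd_max(g, lam=1.0)   # spikes converge to the modes
```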
Pseudo-code of the complete algorithm is shown in Algorithm \ref{alg:ProbC}. In practice, we observe that the number of iterations until convergence is small. \begin{algorithm}[htb] \caption{Computing the optimal shift distribution $P(c_b)$} \label{alg:ProbC} \begin{algorithmic}[1] \FOR{each number of spikes $H \in [1, W_{\mathcal{B}_M}]$} \STATE Initialize distribution $P^o(c_b)$ via LM; \STATE $t = 0$; \REPEAT \STATE $t = t + 1$; \STATE Update $H$ spike locations $c^s_i$ via (\ref{eq:centroid}); \STATE Update bin boundaries $b_i$ by minimizing (\ref{eq:rdSpike}); \STATE Compute $p_c$ for a new $P^t(c_b)$; \UNTIL{ $\| P^{t-1}(c_b) - P^{t}(c_b) \| \leq \epsilon$ } \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Comparison with Coset Coding} \label{subsec:coset} We now discuss the similarity between our proposed approaches and coset coding methods in DSC~\cite{pradhan:03}. Consider first fixed target merging of one q-coeff of a single block $b$. In a scalar implementation of coset coding, given possible SI values $X^n_b, n \in \{1, \ldots, N\}$, seen as ``noisy'' versions of a target $X^0_b$, the largest difference $Z_{b} = \max_{n} | X^n_b - X^0_b|$ with respect to $X^0_b$ is first computed. The coset size $W$ is then selected such that $W > 2 Z_{b}$. The coset index $i_b = X^0_b \bmod W$ is computed at the encoder for transmission. At the decoder, the reconstructed value $\hat{X}_b$ is the integer closest to the received SI $X^n_b$ with the same coset index $i_b$, \textit{i.e.}, \begin{equation} \hat{X}_b = \arg \min_{X \in \mathbb{Z}} |X^n_b - X| ~~~ \mbox{s.t.} ~~ i_b = X \bmod W \label{eq:coset} \end{equation} Using the aforementioned coset coding scheme for blocks $b \in \mathcal{B}_M$, coding of $i_b = X^0_b \bmod W = X^0_{b,2}$ per block is necessary, where coset size $W$ is chosen such that $W > 2 Z_{\mathcal{B}_M}$.
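The scalar coset encoder and decoder just described can be sketched as follows; the numeric values are illustrative. The second example previews what happens when $W$ is chosen too small relative to the SI noise.

```python
def coset_encode(x0, W):
    """Encoder side of scalar coset coding: transmit only the coset
    index i_b = x0 mod W of the target value."""
    return x0 % W

def coset_decode(xn, i, W):
    """Decoder side: the integer closest to the received side
    information xn whose coset index equals i."""
    base = xn - (xn - i) % W              # largest value <= xn in coset i
    return min((base, base + W), key=lambda x: abs(xn - x))

# Target q-coeff x0 = 37 with SI values within Z = 3 of it: any coset
# size W > 2Z (here W = 8) recovers x0 from every possible SI value.
ok = all(coset_decode(xn, coset_encode(37, 8), 8) == 37
         for xn in range(34, 41))

# With W = 5 <= 2Z, identical reconstruction is no longer guaranteed:
# the SI value 34 decodes to a different member of the coset.
wrong = coset_decode(34, coset_encode(37, 5), 5)
```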
In our fixed target merging scheme using PWC functions, we code a shift $c_b = W^{\#}_{\mathcal{B}_M}/2 - X^0_{b,2}$ for each block $b$, where step size $W^{\#}_{\mathcal{B}_M}$ is also proportional to $2 Z_{\mathcal{B}_M}$. Comparing the two schemes, one can see that the number of choices that need to be sent to the decoder is the same (one of $W^{\#}_{\mathcal{B}_M}$ possible values in both cases). Both the shift value $c_b$ and the index $i_b$ are functions of $X^0_{b,2}$, the LSBs of $X^0_b$, which are likely to have an approximately uniform distribution. Thus the overhead rate should be the same for both coset coding and fixed target merging. Consider now the optimized merging case. In this scenario we are able to choose $W_{\mathcal{B}_M} = Z^*_{\mathcal{B}_M} + 1$---likely much smaller than $2 Z_{\mathcal{B}_M} \leq 2 Z^*_{\mathcal{B}_M}$---so that we can still guarantee identical reconstruction, with a reduction in rate that comes at the cost of an increase in distortion. As for the coset coding approach, if we were to choose a smaller $W_{\mathcal{B}_M}$ as well, we could in fact no longer guarantee identical reconstruction. This is because when $W_{\mathcal{B}_M} < 2 Z_{\mathcal{B}_M}$ there will be cases where not all the $X^n_b$ are in the same interval, and thus the same $i_b$ will lead to two different values at the decoder depending on the SI received. This imperfect merging will lead to undesirable coding drift in the following predicted frames, as discussed in Section \ref{sec:system}. \subsection{IVSS System Overview} \begin{figure}[ht] \centering \includegraphics[width = 0.45\textwidth]{figures/dvs.eps} \caption{Example of an acyclic picture interactivity graph for {\em dynamic view switching}. Each picture $\Pi_{v,t}$ has subscript indicating its view index $v$ and time instant $t$.
After viewing picture $\Pi_{2,1}$ of stream 2, the client can choose to keep watching the same stream and jump to $\Pi_{2,2}$, or switch to $\Pi_{1,2}$ or $\Pi_{3,2}$ of stream 1 and 3, respectively.} \label{fig:DVS} \end{figure} We provide an overview of our proposed coding system for \textit{interactive video stream switching} (IVSS), in which our proposed \textit{merge frame} is a key enabling component. In the sequel, a ``picture'' is a raw captured image in a video sequence, while a ``frame'' is a particular coded version of the picture (\textit{e.g.}, I-frame, P-frame). In this terminology, a ``picture'' can have multiple coded versions or ``frames''. In an IVSS system, there are multiple pre-encoded video streams that are related (\textit{e.g.}, videos capturing the same 3D scene from different viewpoints \cite{cheung11tip}). During video playback of a single stream, at a \textit{switch instant}, the client can switch from a picture of the original stream to a picture of a different destination stream. Fig.\;\ref{fig:DVS} illustrates an example \textit{picture interactivity graph} for three streams, where there is a switch instant every two pictures in time. An arrow $\Pi_p \rightarrow \Pi_q$ indicates that a switch is possible from picture $\Pi_p$ to picture $\Pi_q$. This particular graph is \textit{acyclic}, {\em i.e.}, it has no loops and we cannot have both $\Pi_p \rightarrow \Pi_q$ and $\Pi_q \rightarrow \Pi_p$. \begin{figure}[ht] \centering \includegraphics[width = 0.4\textwidth]{figures/svs.eps} \caption{Example of a cyclic picture interactivity graph for {\em static view switching}. Each picture $\Pi_{v,t}$ has subscript indicating its view index $v$ and time instant $t$. 
After viewing $\Pi_{2,2}$ of stream 2, the client can choose to keep watching stream 2 in time and jump to $\Pi_{2,3}$, or change to $\Pi_{1,2}$ or $\Pi_{3,2}$ of stream 1 and 3, respectively, corresponding to the same time instant as $\Pi_{2,2}$.} \label{fig:SVS} \end{figure} The scenario in Fig.\;\ref{fig:DVS} is an example of \textit{dynamic view switching}~\cite{huang12}, where a frame at time $t$ is always followed by a frame at time $t+1$. In contrast, in \textit{static view switching} a user can stop temporal playback and interactively select the angle from which to observe a 3D scene frozen in time \cite{lou05}. Fig.\;\ref{fig:SVS} shows an example of static view switching, where the corresponding graph is \textit{cyclic}, \textit{i.e.}, it contains loops so that we can have both $\Pi_p \rightarrow \Pi_q$ and $\Pi_q \rightarrow \Pi_p$. We will discuss the merge frame design for the cyclic case in Section~\ref{sec:target}. \subsection{Stream-Switch Mechanism in IVSS} At a given switch instant, stream switching works as follows. First, for each possible switch $\Pi_p \rightarrow \Pi_q$, we encode a P-frame $P_{q\,|\,p}$ for $\Pi_q$, where a decoded version of $\Pi_p$ is used as a predictor. Reconstructed $P_{q\,|\,p}$ is called a \textit{side information} (SI) frame, which constitutes a particular reconstruction of destination $\Pi_q$. Because there are in general multiple origins for a given destination (the \textit{in-degree} of the destination picture in the picture interactivity graph), there are multiple corresponding SI frames. Having multiple reconstructions of the same picture $\Pi_q$ creates a problem for the following frame(s) that use $\Pi_q$ as a predictor for predictive coding, because one does not know \textit{a priori} which reconstructed SI frame $P_{q\,|\,p}$ will be available at the decoder buffer for prediction.
This illustrates the need for our proposed merge frame (called \textit{M-frame} in the sequel) $M_q$, which is an \textit{extra} frame corresponding to destination $\Pi_q$. Correct decoding of $M_q$ means a unique reconstruction of $\Pi_q$, no matter which SI frame $P_{q\,|\,p}$ is actually available at the decoder. \begin{figure}[ht] \centering \includegraphics[width = 0.4\textwidth]{figures/usecase.eps} \caption{Example of stream-switching from one pre-encoded stream to another using merge frame. SI frames $P_{1,3}^{(1)}$ and $P_{1,3}^{(2)}$ are first constructed using predictors $P_{1,2}$ and $P_{2,2}$, respectively. M-frame $M_{1,3}$ is encoded using the two SI frames. I-, P- and M-frames are represented as circles, squares and diamonds, respectively.} \label{fig:usecase} \end{figure} As an illustration, in Fig.\;\ref{fig:usecase} two P-frames, $P_{1,3}^{(1)}$ and $P_{1,3}^{(2)}$, generated from predictors $P_{1,2}$ and $P_{2,2}$ respectively, are the SI frames. An M-frame $M_{1,3}$ is added to merge the SI frames to produce an identical reconstruction for $\Pi_{1,3}$. During a stream-switch, the server can transmit any one of the two SI frames \textit{and} $M_{1,3}$ leading to the same reconstructed frame for $\Pi_{1,3}$, thus avoiding coding drift in the following frame $P_{1,4}$. Note that one P-frame and one M-frame are sent. An alternative approach based on SP frames would require sending a primary SP-frame $S^1_{1,3}$ (using $P_{1,2}$ as the predictor) for the switch $\Pi_{1,2} \rightarrow \Pi_{1,3}$, or a losslessly coded secondary SP-frame $S^2_{1,3}$ (using $P_{2,2}$ as the predictor) for the switch $\Pi_{2,2} \rightarrow \Pi_{1,3}$. SP-frame approaches are asymmetric; rate is much lower when only a primary SP-frame is needed. In contrast, the switching cost using M-frame is always the same (P- and M-frames are transmitted). As will be shown, a combination of a P-frame and an M-frame requires lower rate than a secondary SP-frame. 
\subsection{Merge Frame Overview} In our proposed M-frame, each fixed-size code block in an SI frame is first transformed to the DCT domain. DCT coefficients are then quantized. The quantized coefficients across SI frames (called \textit{q-coeffs} for short in the sequel) are then examined. If the q-coeffs of a given block are very different across SI frames, then the overhead to merge them to the target q-coeffs would be large. Thus, we will encode the block as a conventional intra block. On the other hand, if the q-coeffs of a given block are already identical across all SI frames, then we can simply inform the decoder that the q-coeffs can be used without further processing. Finally, if the q-coeffs across SI frames are not identical but are similar, then each q-coeff is merged identically to a target value via our proposed merge operator. Hence, there are three coding modes for each code block: \textit{intra}, \textit{skip} and \textit{merge}. In this paper, we focus our attention on optimizing the parameters in \textit{merge} mode, as the \textit{intra} and \textit{skip} modes are straightforward. \subsection{Fixed Target Reconstruction using Merge Operator} We first show that given a target reconstruction value $a$ and a step size $W$, we can always find a shift $c$ such that $f(x)$ in (\ref{eq:floor}) satisfies $f(x) = a$ for all inputs $x$ in the interval $[a - W/2, a+W/2)$. To see this, first write the target reconstruction value as $a = a_1 W + a_2$, where $a_1$ and $a_2 = a \bmod W$ are integers and $0 \leq a_2 < W$. Similarly, we write input $x = a_1 W + x_2$, where the integer $x_2$ can be bounded: \begin{align} a - \frac{W}{2} \leq & ~~x ~ < a + \frac{W}{2} \nonumber \\ a_1 W + a_2 - \frac{W}{2} \leq & ~~ a_1 W + x_2 ~ < a_1 W + a_2 + \frac{W}{2} \nonumber \\ a_2 - \frac{W}{2} \leq & ~~ x_2 ~ < a_2 + \frac{W}{2} \label{eq:boundX2} \end{align} We now set $c = \frac{W}{2} - a_2$.
We show that this ensures $f(x) = a$ for $x \in [a - W/2, a + W/2)$: \begin{align} f(x) & = \left\lfloor \frac{a_1 W + x_2 + \frac{W}{2} - a_2}{W} \right\rfloor W + \frac{W}{2} - \left( \frac{W}{2} - a_2 \right) \label{eq:targetMerge} \\ & = a_1 W + a_2 = a . \nonumber \end{align} where the second equality holds because $x_2 + \frac{W}{2} - a_2$ in the numerator of the ``round-down'' operator argument can be bounded in $[0, W)$ using (\ref{eq:boundX2}): \begin{align} a_2 - \frac{W}{2} + \frac{W}{2} - a_2 \leq & ~~ x_2 + \frac{W}{2} - a_2 ~ < a_2 + \frac{W}{2} + \frac{W}{2} - a_2 \nonumber \\ 0 \leq & ~~ x_2 + \frac{W}{2} - a_2 ~ < W \end{align} Next, recall from Section~\ref{subsec:PWC} that we include the desired target $\mathbf{T}$ as the first SI frame $\mathbf{S}^0$. For a given frequency of a particular block $b$, we first compute the \textit{maximum target difference} $Z_b$ as the largest absolute difference between the target q-coeff $X^0_b$ and $X^n_b$ of any SI frame $\mathbf{S}^n$, \textit{i.e.}, \begin{equation} Z_b = \max_{n \in \{1, \ldots, N\}} \left| X^0_b - X^n_b \right| \label{eq:maxDiff} \end{equation} Based on this, we can choose the step size and shift according to the following lemma. \begin{lemma} \label{lemma:merge} Choosing step size $W^{\#}_b = 2 Z_b + 2$ and shift $c_b = W^{\#}_b/2 - X^0_{b,2}$, where $X^0_{b,2} = X^0_b \bmod W^{\#}_b$, guarantees that $f(X^n_b) = X^0_b, ~ \forall n \in \{0, \ldots, N\}$. Note that $W^{\#}_b$ is an even number, and $c_b$ is an integer as required. \end{lemma} \begin{proof} Given shift $c_b = W^{\#}_b/2 - X^0_{b,2}$, showing $X^n_b \in [X^0_{b} - W^{\#}_b/2, X^0_{b} + W^{\#}_b/2)$ implies $f(X^n_b) = X^0_b, ~ \forall n \in \{0, \ldots, N\}$. Defining step size $W^{\#}_b = 2 Z_b + 2$ means the required interval for $X^n_b$ can be rewritten as $[X^0_{b} - Z_b - 1, X^0_{b} + Z_b + 1)$. By the definition of $Z_b$, we know $X^0_b - Z_b \leq X^n_b \leq X^0_b + Z_b$. Hence the required interval for $X^n_b$ is met.
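This construction is easy to check numerically. The sketch below (Python, illustrative only) implements the merge operator in the form implied by (\ref{eq:targetMerge}), $f(x) = \lfloor (x+c)/W \rfloor W + W/2 - c$, together with the shift choice $c = W/2 - (a \bmod W)$; the values of $a$ and $W$ are arbitrary examples:

```python
def merge(x, W, c):
    # merge operator f(x) = floor((x + c) / W) * W + W/2 - c,
    # with W an even step size and c an integer shift
    return (x + c) // W * W + W // 2 - c

def shift_for_target(a, W):
    # c = W/2 - (a mod W) guarantees f(x) = a on [a - W/2, a + W/2)
    return W // 2 - (a % W)

W, a = 8, 13                      # example step size and target value
c = shift_for_target(a, W)
assert all(merge(x, W, c) == a for x in range(a - W // 2, a + W // 2))
assert merge(a + W // 2, W, c) != a   # just outside the interval
```

Python's floor division and nonnegative modulo make the sketch work for negative targets as well, mirroring the integer arithmetic assumed in the derivation.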
\end{proof} Note that we can achieve fixed target merging for a given $X^0_b$ as long as the step size is at least $W^{\#}_b$. For example, we can assign the same step size $W^{\#}_{\mathcal{B}_M}$ to all blocks in a group $\mathcal{B}_M$, so that we reduce the rate overhead: \begin{equation} W^{\#}_{\mathcal{B}_M} = 2 + 2 Z_{\mathcal{B}_M} \label{eq:perfectStepSize} \end{equation} where $Z_{\mathcal{B}_M} = \max_{b \in \mathcal{B}_M} Z_b$ is the \textit{group-wise maximum target difference}, and $Z_b$, the block-wise maximum target difference for block $b$, is computed using (\ref{eq:maxDiff}). In summary: \begin{enumerate} \item We define a set of blocks $\mathcal{B}_M$ and use $W^{\#}_{\mathcal{B}_M}(k)$ computed using (\ref{eq:perfectStepSize}) for frequency $k$ of all blocks in $\mathcal{B}_M$. \item For block $b$, we set shift $c_b(k) = W^{\#}_{\mathcal{B}_M}(k)/2 - X^0_{b,2}(k)$, where $X^0_{b,2}(k) = X^0_b(k) \bmod W^{\#}_{\mathcal{B}_M}(k)$. A different shift is used for each frequency $k$ and block $b$, and transmitted as part of the M-frame along with $W^{\#}_{\mathcal{B}_M}(k)$. \end{enumerate}
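The two-step procedure can be sketched as follows (Python, with hypothetical q-coeff values; the per-frequency index $k$ is dropped for brevity, so each list holds one q-coeff per block of the group):

```python
def merge(x, W, c):
    # merge operator f(x) = floor((x + c) / W) * W + W/2 - c
    return (x + c) // W * W + W // 2 - c

def group_step_size(targets, si_frames):
    # group-wise step size W = 2 + 2*Z, where Z is the group-wise
    # maximum absolute difference between target and SI q-coeffs
    Z = max(abs(t - s[b]) for s in si_frames for b, t in enumerate(targets))
    return 2 * Z + 2

targets   = [10, -4, 7]                 # target q-coeffs X^0_b (hypothetical)
si_frames = [[11, -5, 7], [9, -4, 8]]   # q-coeffs of two SI frames
W = group_step_size(targets, si_frames)
for b, t in enumerate(targets):
    c = W // 2 - (t % W)                # per-block shift from the lemma
    assert all(merge(s[b], W, c) == t for s in si_frames)
```

Sharing one step size across the group trades a slightly larger $W$ for a single transmitted parameter, which is exactly the rate saving the text describes.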
\section{Introduction} Nuclear reactors are the most intense man-made sources of antineutrinos, and they have been involved in neutrino physics since the first detection of the neutrino. Since then, they have been a key component in the study of neutrino properties, most importantly the study of neutrino oscillations. In particular, the experimental effort was focused on the determination of the neutrino mixing angles $\theta_{12}$ and $\theta_{23}$, and more recently on the precise measurement of the final angle $\theta_{13}$. In these measurements, information on the reactor antineutrino flux and spectrum is integral to the analysis, and affects the final values. In the quest for the precise determination of the $\theta_{13}$ mixing angle, the Double Chooz~\cite{Abe2012}, RENO~\cite{Ahn2012} and Daya Bay~\cite{An2012} experimental facilities have provided a wealth of information. During the analysis of the data, it was noticed that the measured antineutrino spectrum was systematically lower than predicted, in all three facilities. The question was further complicated by a reevaluation of the reactor flux, which produced a total discrepancy between the measured and the expected antineutrino spectrum of approximately 6\%~\cite{Mention2011,Huber2011}, the so-called ``reactor antineutrino anomaly''~\cite{Hayes2016}. Further reports have confirmed the existence of the anomaly and uncovered an unexplained structural feature in the antineutrino spectrum at antineutrino energies between 5 MeV and 7 MeV~\cite{An2015,An2016,Dwyer2015,Dwyer2015a}. The existence of the anomaly has spurred an active discussion on the nature of neutrinos and the possible existence of sterile neutrinos. However, the uncertainties in the determination of the theoretical lepton spectra are relatively large and may explain the anomaly, especially since no anomalous neutrino disappearance was observed in a recent report~\cite{Aartsen2016}.
Theoretical determination of the antineutrino spectra is based on available data, i.e., the energies of the transitions, their parity and angular momentum, and the corresponding branching ratios. Two approaches are used in order to determine the total reactor lepton spectra: (i) the conversion method, and (ii) the ``ab initio'' summation method. With the conversion method one uses the precisely measured aggregate electron spectrum to fit a relatively small (compared to the total number of measured transitions) set of virtual transitions from which one obtains the corresponding antineutrino spectrum. Measurements were performed in Grenoble on $^{235}$U, $^{239}$Pu and $^{241}$Pu~\cite{Feilitzsch1982,Hahn1989,Schreckenbach1985} and in Garching on $^{238}$U~\cite{Haag2014}. This method requires no information on, but also provides no insight into, fission yields and branching ratios. The ``ab initio'' summation method takes the opposite approach. By combining individual electron and antineutrino spectra for each branching ratio of each fission fragment, where the endpoint energies and relative probabilities are taken from data, the total lepton spectra are obtained. While this method should, in principle, be able to reproduce the spectra in full, the results are dependent on the accuracy of the available data. In particular, the incorrect assignment of feeding probabilities of excited states in the daughter nuclei can have a significant impact~\cite{Algora2010,Fallot2012}. Approximately 25\% of all transitions are forbidden; however, in the limit of a vanishing electron mass, the shape factors for forbidden transitions are symmetric under the exchange of electron and antineutrino energies~\cite{Huber2011}. Thus, in most studies the allowed shape factor is also used for forbidden transitions. Nevertheless, a recent study reveals significant differences in the total antineutrino spectra depending on the treatment of forbidden transitions~\cite{Hayes2014}.
The authors have treated all unique forbidden transitions as unique first-forbidden transitions, and all nonunique forbidden transitions as either allowed, unique first-forbidden $2^{-}$ transitions, nonunique $0^{-}$ or nonunique $1^{-}$ transitions. Treating all transitions equally introduces a systematic uncertainty in the results, but the authors have shown a consistent and non-negligible effect of forbidden transitions on the total antineutrino spectra coming from products of fissile material in a typical reactor. {More recently, in Ref.~\cite{Hayen2019} microscopic calculations of the first-forbidden transitions in reactor antineutrino spectra have been performed. Explicit calculation of the shape factor showed differences in cumulative electron and antineutrino spectra in comparison to usually employed approximations. It has been shown that forbidden decays represent an essential ingredient for reliable understanding of reactor antineutrino spectra~\cite{Hayen2019}, thus further research to assess their role is called for.} Very recently, a study performed at the Daya Bay experimental facility observed a correlation between reactor core fuel evolution and changes in the reactor antineutrino flux and its energy spectrum. In fact, in a careful analysis of 2.2 million inverse beta decay events over 1230 days, a discrepancy between the assumed and measured effective fission fractions was uncovered, with a major impact on the total antineutrino spectrum: a 7.8\% difference for $^{235}$U. Whether this change may account for the complete reactor antineutrino anomaly, or just a significant part of it, will require further study. In light of these developments, even in the case of no anomaly, it is essential to quantitatively determine the effect of first-forbidden transitions on the total lepton spectra coming from the $\beta$-decay of fission fragments of the main reactor fuel materials.
In this contribution we present the first fully theoretical calculation of electron and antineutrino spectra which properly takes into account the specific shape factors for each transition. We employ the relativistic Hartree-Bogoliubov (RHB) model to describe the nuclear ground state~\cite{Vretenar2005a}, and use the proton-neutron relativistic quasiparticle random phase approximation (pn-RQRPA) formulated in the canonical single-nucleon basis of the RHB model to obtain the excited states~\cite{Paar2004}. With this fully self-consistent model we have calculated the total decay rates and branching ratios for all $\beta$-unstable fission fragments, properly taking into account the shape factors of first-forbidden transitions. We generated the lepton spectra for each transition, weighted them with their corresponding branching ratios and fission yields and obtained the total electron and antineutrino spectra per fission. This process was performed for all four major contributors to the reactor antineutrino spectrum: $^{235}$U, $^{238}$U, $^{239}$Pu and $^{241}$Pu. Here we present the results for $^{235}$U and $^{239}$Pu, which are the two isotopes that provide the dominant contribution over the fuel cycle. The results for the remaining two isotopes are very similar and do not affect the final conclusions. \section{Evaluation of $e^{-}$ and $\bar{\nu}_{e}$ spectra} \label{sec:theory} During the operation of a nuclear reactor the fissile isotopes fission, generating a distribution of fission products. Many of the fission products are unstable and $\beta$-decay towards stability, \begin{equation} \label{eq:decay} {}^{A}_{Z}X_{N} \to \, {}^{A}_{Z+1}Y_{N-1} + e^{-} + \bar{\nu}_{e} \end{equation} emitting an electron and an antineutrino in the process, with their maximal energies being determined by the difference in energies between the initial and final states.
Of all the material in the reactor core, isotopes $^{235}$U, $^{238}$U, $^{239}$Pu and $^{241}$Pu are responsible for more than 99.7\% of all antineutrinos~\cite{An2016a}. Of those, $^{235}$U and $^{239}$Pu contribute more than 90\% of fissions in a reactor, with $^{235}$U dominating at the start of the burn-up cycle, but contributing roughly equally with $^{239}$Pu after 20000 MWD/TU. For a particular transition between the ground state of the parent nucleus and a state (ground state or excited) in the daughter nucleus, the electron spectrum is of the form \begin{equation} \label{eq:spec4} S_{f}^{i}(E) = W\sqrt{W^{2} - 1}(W_{0} - W)^{2} C(W) \delta(Z,W), \end{equation} where $W$ and $W_{0}$ are the electron energy and the maximum electron energy, respectively, both in units of the electron mass. $C(W)$ is the shape factor and $\delta(Z,W)$ is the correction factor that compensates for various approximations. The correction factor takes into account the fact that the electron is moving in the Coulomb field of the nucleus (the Fermi function), the effects of the finite size of the charge distribution $L_{0}(Z,W)$ and the weak-interaction finite-size correction $C(Z,W)$. For the treatment of these corrections we follow Ref.~\cite{Huber2011}, and neglect the corrections arising from weak magnetism and screening effects, as the size of these corrections is less than 2\% for all antineutrino energies. The shape factor is a critical quantity that determines the electron spectrum. For allowed decays it is simply equal to the Gamow-Teller strength and does not depend on energy, and thus does not affect the spectrum shape. In the case of first-forbidden transitions, the shape factor is energy dependent and can be written as \begin{equation} \label{eq:shape} C(W) = k\left(1 + aW + bW^{-1} + cW^{2} \right), \end{equation} where $k$, $ka$, $kb$ and $kc$ are given by combinations of transition matrix elements~\cite{Behrens1971,Behrens1982,Marketin2016}.
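For illustration, the uncorrected single-transition spectrum of Eq. (\ref{eq:spec4}) with the polynomial shape factor of Eq. (\ref{eq:shape}) can be evaluated directly. The Python sketch below omits the correction factor $\delta(Z,W)$, and the shape-factor coefficients are placeholders, not calculated matrix elements:

```python
import math

def phase_space(W, W0):
    # phase-space part W * sqrt(W^2 - 1) * (W0 - W)^2,
    # with energies W, W0 in units of the electron mass (valid for 1 <= W <= W0)
    return W * math.sqrt(W * W - 1.0) * (W0 - W) ** 2

def shape_factor(W, k, a, b, c):
    # C(W) = k * (1 + a*W + b/W + c*W^2); reduces to a constant (allowed case)
    # when a = b = c = 0
    return k * (1.0 + a * W + b / W + c * W * W)

def spectrum(W, W0, k=1.0, a=0.0, b=0.0, c=0.0):
    # uncorrected single-transition spectrum: phase space times shape factor
    return phase_space(W, W0) * shape_factor(W, k, a, b, c)

# the spectrum vanishes at both endpoints W = 1 and W = W0
assert spectrum(1.0, 5.0) == 0.0 and spectrum(5.0, 5.0) == 0.0
```

Varying the hypothetical coefficients $a$, $b$, $c$ reproduces the qualitative behavior discussed next: an energy-dependent shape factor shifts and reshapes the spectrum relative to the allowed one.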
Thus the electron spectrum coming from forbidden transitions can significantly differ from the shape of the allowed transitions, depending on which matrix elements dominate for a particular transition. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{multipoles.eps} \caption{(color online) \label{fig:multipoles}The electron spectra for transitions of equal energy but different angular momenta. {The calculated spectra obtained with Gamow-Teller (GT) transitions (solid black line), $0^{-}$ (solid red line), $1^{-}$ (dashed green line), and $2^{-}$ (dot-dashed blue line) transitions are separately displayed.} The spectra are shown without any corrections.} \end{figure} In Fig. \ref{fig:multipoles} we show the {calculated electron spectra for a set of} hypothetical transitions of equal energy, but different angular momenta. The full black line denotes the spectrum assuming an allowed shape factor, {including only Gamow-Teller transitions}. The spectrum obtained by assuming a $0^{-}$ transition is denoted with a full red line, which follows the allowed spectrum almost completely. The shape factor of $0^{-}$ transitions consists of two terms, one of which is energy independent and dominates the total transition strength. Thus, the shape of the spectrum for $0^{-}$ transitions is almost identical to the shape of the Gamow-Teller transitions. This explains the excellent agreement that was achieved in Ref.~\cite{Sonzogni2015}, where the $\beta$ spectra for $^{92}$Rb and $^{96}$Y were described assuming an allowed shape, even though both decays are dominantly ($> 95\%$) $0^{-} \to 0^{+}$. This is an important point because some of the nuclei that contribute the most decay by $0^{-}$ transitions (see Tables II and III in Ref.~\cite{Sonzogni2015}). Higher angular momentum transitions have a significantly different shape from the allowed spectrum, and may have a noticeable impact on the total antineutrino spectra.
In particular, the components of $1^{-}$ transitions are found in all terms of Eq. (\ref{eq:shape}), and the relative importance of the individual matrix elements shapes the final transition spectrum. Typically, though, the maximum of the $1^{-}$ transitions is shifted to higher energies compared to Gamow-Teller transitions, and the spectrum is slightly narrower overall. $2^{-}$ spectra are, in general, wider than the allowed spectra for the same transition energy. In many decays, first-forbidden transitions provide an appreciable contribution to the total decay rate, and thus to the total $\beta$ and $\bar{\nu}$ spectra~\cite{Moeller2003,Marketin2016,Mustonen2016}. This is particularly true in very neutron-rich nuclei, where additional neutrons in the higher shell enable more parity-changing transitions. As the most neutron-rich fission products decay towards stability, they undergo on average 6 decays. In this way, the distortion of the lepton spectra arising from forbidden transitions may accumulate to produce a noticeable effect on the total observed spectrum. To establish a quantitative measure of this effect, we have determined the electron and antineutrino spectra for the four contributing isotopes in reactors using a fully theoretical ``ab initio'' approach. Using the decay data obtained in a large-scale calculation of $\beta$-decay properties of r-process nuclei~\cite{Marketin2016}, we have generated the $\beta$ and $\bar{\nu}$ spectra for each transition $S_{f}^{i}$. To obtain the lepton spectrum arising from a single decay of a particular nuclide, we weight the transition spectra $S_{f}^{i}$ with their respective branching ratios and sum, \begin{equation} \label{eq:spec3} S_{f}(E) = \sum_{i} \frac{{\lambda}_{i}}{{\lambda}_{tot}} S_{f}^{i}(Z,A,E_{max},E,J^{\pi}). \end{equation} Here $f$ denotes a particular fission fragment, and $i$ denotes a transition to a particular final state in the daughter nucleus, where the sum runs over all energetically allowed transitions.
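The branching-ratio weighting in Eq. (\ref{eq:spec3}) can be sketched as follows (Python; the two transitions, their partial rates and triangular toy spectra are hypothetical, standing in for calculated $S_{f}^{i}$):

```python
def nuclide_spectrum(E, transitions):
    # S_f(E) = sum_i (lambda_i / lambda_tot) * S_f^i(E):
    # each transition contributes its spectrum weighted by its branching ratio
    lam_tot = sum(rate for rate, _ in transitions)
    return sum(rate / lam_tot * s(E) for rate, s in transitions)

# two hypothetical transitions with endpoints at 5 and 8 (arbitrary units)
transitions = [(0.7, lambda E: max(5.0 - E, 0.0)),
               (0.3, lambda E: max(8.0 - E, 0.0))]

# above the lower endpoint only the second transition contributes
assert abs(nuclide_spectrum(6.0, transitions) - 0.3 * 2.0) < 1e-12
```

The same weighted-sum pattern, with fission yields in place of branching ratios, gives the total spectrum per actinide in the following equation.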
Finally, these spectra are weighted by their respective cumulative fission yields and summed in order to obtain the total electron and antineutrino spectra for a particular actinide, \begin{equation} \label{eq:spec2} S_{k}(E) = \sum_{f} Y^{(k)}_{f} S_{f}(E), \end{equation} where $k$ stands for $^{235}$U, $^{238}$U, $^{239}$Pu or $^{241}$Pu, $Y_{f}$ are the fission yields~\cite{Katakura2012} and the sum runs over all fission fragments. {In the present study, both independent and cumulative fission yields are adopted from the database of JAEA~\cite{Katakura2012}. We note that the fission yields provided by different nuclear databases are slightly different, and that may affect the aggregate spectra.} To assess the effect of first-forbidden transitions on the total lepton spectrum we perform two calculations: (i) a baseline calculation where all transitions are treated as allowed, and (ii) a calculation where we take into account the shape factors of parity-changing transitions. In Fig.~\ref{fig:U235spectra} we plot the resulting electron (top panel) and antineutrino (bottom panel) spectra for $^{235}$U, where the theoretical results are denoted by full lines, and the data by the dashed black line. The results for $^{238}$U, $^{239}$Pu and $^{241}$Pu are very similar and provide no additional insight. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{U235_spectra.eps} \caption{(color online) \label{fig:U235spectra} (top panel) The electron spectra obtained by treating all transitions as allowed (black dashed line) and electron spectra obtained by taking into account shape factors of first-forbidden transitions compared with available data. (bottom panel) Same as top panel for antineutrino spectra.} \end{figure} The calculated spectra deviate significantly from the measurements, especially at high lepton energies.
In particular, in the description of $\beta$ decays it is difficult to predict transitions to low-lying states in the daughter nuclei with the standard 1p-1h RPA, as it cannot describe the fragmentation and spreading of transitions. This problem can be addressed by using second RPA or particle-vibration coupling models such as in Ref.~\cite{Robin2016}. Additionally, the nuclei which contribute the most at high energies are nuclei with an odd number of nucleons, which were not treated properly in the calculation of $\beta$-decay half-lives. A possible solution may be the equal filling approximation as used in Ref.~\cite{Shafer2016}, where the authors observe a low-lying Gamow-Teller state (see Fig.~1 and the following discussion). In fact, a very detailed description of the structure of all the decaying nuclei is required to fully reproduce the data. This includes the properties of both the ground state and the excited states with accuracy beyond the capabilities of current models. {We note that the calculated values for the beta decay rates may deviate from the measurements, especially for low energy transitions. However, as already discussed for the $\beta$-decay rates for the r-process in Ref.~\cite{Marketin2016}, our agreement with measured data improves with increasing $Q$-values, and this is the relevant aspect for the high energy part of the spectra.} At the scale used in Fig.~\ref{fig:U235spectra} there is no visible difference between the two calculations, with and without taking into account the shape factor of first-forbidden transitions. This is to be expected, as the magnitude of the anomaly is only 6\%, and changes of the spectrum comparable to the anomaly will not be visible. However, by examining the ratio of the spectrum obtained by taking into account the shape factor of forbidden transitions and the spectrum obtained by assuming the allowed shape for all transitions we can obtain valuable information.
This ratio is shown in Fig.~\ref{fig:U235} both for the electrons, denoted by a dashed black line, and antineutrinos, denoted by the full red line. Note that the energy threshold for the detection of antineutrinos in the inverse beta decay is 1.8 MeV, thus only the results above this energy are shown. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{U235_ff_ratio.eps} \caption{(color online) \label{fig:U235} (top panel) Ratio of electron spectra calculated with and without taking the first-forbidden shape factor into account. (bottom panel) Same as top panel, but for antineutrino spectra.} \end{figure} The results indicate that, by taking into account the effect of first-forbidden transitions, the total theoretical antineutrino spectrum of $^{235}$U is lowered in the energy region from 2.5 to 6 MeV, with the largest effect being centered around 5 MeV. The magnitude of the reduction is up to 2\%, which is roughly half of the reported anomaly. The effect is also the strongest in the energy region where a shoulder was observed in all three experimental facilities (see Section 5 of Ref.~\cite{Hayes2016}). These results are systematic, in that they appear in the calculation for all four contributing isotopes in the reactor core: $^{235}$U, $^{238}$U, $^{239}$Pu and $^{241}$Pu. {Our results are comparable to previous studies; in particular, Fig.~\ref{fig:U235} displays a similar effect of the forbidden transitions on the reactor antineutrino spectra as Fig.~3 in Ref.~\cite{Hayes2014}.} In Ref.~\cite{Hayes2014}, it was found that the uncertainties introduced by forbidden transitions equal approximately 4\%, and the results of the present calculation fully agree with that value. For energies above 8 MeV, the majority of transitions that provide the dominant contribution to the spectra are transitions within odd-A nuclei, which are very difficult to describe with the model.
Additionally, the antineutrino spectra are very low at such high energies and thus we do not display the results above this energy. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{Pu239_ff_ratio.eps} \caption{(color online) \label{fig:Pu239} Same as Fig. \ref{fig:U235} for the case of $^{239}$Pu.} \end{figure} \section{Conclusion} In summary, we have performed the first self-consistent theoretical calculation of electron and antineutrino spectra resulting from the $\beta$-decay of fission products of the main isotopes found inside a typical nuclear reactor, including forbidden transitions. In particular, the focus was on the treatment of first-forbidden transitions and their impact on the shape of the resulting lepton spectra. Having examined the three components of the first-forbidden transitions, we show that the $0^{-}$ transitions have the same shape as the allowed transitions, while the $1^{-}$ and $2^{-}$ transitions deviate from the allowed shape and affect the total spectra significantly. By properly treating first-forbidden transitions we observe a change of the antineutrino spectra of approximately 3\%, which is in agreement with previous studies, and is comparable to the magnitude of the anomaly itself. Therefore, proper treatment of the first-forbidden transitions is important in the study of reactor antineutrino spectra and should be taken into account in any high-precision determination of the reactor spectra. \subsection{Acknowledgments} This work was supported in part by the Helmholtz International Center for FAIR within the framework of the LOEWE program launched by the State of Hesse, by the {Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 279384907 - SFB 1245 "Nuclei: From Fundamental Interactions to Structure and Stars"}, the IAEA Research Contract No.
18094/R0, the Croatian Science Foundation under the project Structure and Dynamics of Exotic Femtosystems (IP-2014-09-9159) and by the QuantiXLie Centre of Excellence, a project co-financed by the Croatian Government and the European Union through the European Regional Development Fund, the Competitiveness and Cohesion Operational Programme (KK.01.1.1.01). \\
\section{Introduction} \label{sec:intro} Open-domain question answering (OpenQA) aims to answer factual open questions with a large external corpus of passages. Current approaches to OpenQA usually adopt a two-stage retriever-reader paradigm~\cite{chen2017reading, zhu2021retrieving} to fetch the final answer span. The performance of OpenQA systems is largely bounded by the retriever, as it determines the evidential documents for the reader to examine. Traditional retrievers, such as TF-IDF and BM25 \cite{robertson2009probabilistic}, are considered incapable of adapting to scenarios where deep semantic understanding is required. Recent works \cite{lee2019latent, karpukhin2020dense, qu2021rocketqa} show that by fine-tuning pre-trained language models on sufficient downstream data, dense retrievers can significantly outperform traditional term-based retrievers. Considering the data-hungry nature of neural retrieval models, extensive efforts~\cite{lee2019latent,chang2020pre,sachan2021end} have been made to design self-supervised tasks to pre-train the retriever. However, these pre-training tasks construct relevance signals relying largely on easily attainable sentence-level or document-level contextual relationships. For example, the relationship between a sentence and its originating context~(shown by the ICT query in Figure \ref{Fig:img1}) may not be sufficient to facilitate question-passage matching for the tasks of OpenQA. We also find that these pre-trained retrievers still fall far behind BM25 in our pilot zero-shot study. In order to address the shortcomings of the matching-oriented pre-training tasks mentioned above, we propose a pre-training method with better surrogates of real natural question-passage (Q-P) pairs. We consider two conditions of relevance within Q-P pairs, similar to the process of distantly supervised retriever learning~\cite{mintz2009distant, chen2017reading}.
\begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \setlength{\parskip}{0pt} \item[1)] \textbf{\EVI} \quad The evidence, such as entities and their corresponding relations, should exist across the query and the targeted passage, as they both discuss similar facts or events related to the answer. \item[2)] \textbf{Answer Containing} \quad The golden passage should contain the answer of the query, which means that a text span within the passage can provide the information-seeking target of the query. \end{itemize} In this paper, we propose \textbf{H}yper\textbf{L}ink-induced \textbf{P}re-training (\name), a pre-training method to learn effective Q-P relevance induced by the hyperlink topology within naturally-occurring Web documents. Specifically, these Q-P pairs are automatically extracted from online documents, with relevance adequately designed via hyperlink-based topology to facilitate downstream retrieval for question answering. Figure \ref{Fig:img1} shows a comparison between the human-written query and different pseudo queries. Guided by hyperlinks, our \name~query holds the relevance of answer containing with the passage~(the query title occurs in the passage). Meanwhile, the \name~query can introduce far more effective relevance of \evi~than other pseudo queries by deeply mining the hyperlink topology, e.g., the dual-link structure. In Figure \ref{Fig:img1}, the \name~query and the passage both contain information corresponding to the same fact of \textit{``Mitja Okorn directed the film Letters to Santa''}. This makes our pseudo query low-cost and a good surrogate for the manually written query. Our contributions are two-fold. First, we present a hyperlink-induced relevance construction methodology that can better facilitate downstream passage retrieval for question answering; specifically, we propose a pre-training method: Hyperlink-induced Pre-training (\name).
Second, we conduct evaluations on six popular QA datasets, investigating the effectiveness of our approach under zero-shot, few-shot, multi-hop, and out-of-domain (OOD) scenarios. The experiments show that \name~outperforms BM25 in most cases under the zero-shot scenario, and other pre-training methods under all scenarios. \begin{figure*}[tb] \begin{center} \includegraphics[width=1\textwidth]{img/img2_b.jpg} \caption{The figure on the left shows a partial Wikipedia graph where two types of pseudo Q-P pairs $(a_i,b_j)$ and $(c_k,d_l)$ are presented. The text boxes on the right show two concrete examples of \name~Q-P pairs, where text highlighted in green gives evidence while orange indicates the answer span.} \label{Fig:main} \end{center} \end{figure*} \section{Related Work} \textbf{Dense Retriever Pre-training} \quad Previous works have attempted to conduct additional pre-training for dense retrievers on various weakly supervised data. \citet{web_search_pretrain_1} and \citet{web_search_pretrain_2} pre-trained ranking models on click-logs and BM25-induced signals, respectively, for web search. \citet{lee2019latent} proposed the inverse cloze task~(ICT) to pre-train a dense retrieval model, which randomly selects sentences as pseudo queries and matches them to the passages they originate from. Besides, \citet{chang2020pre} proposed the pre-training tasks of wiki link prediction~(WLP) and body first selection~(BFS). Similar to our work, the WLP task also leveraged the hyperlinks within Wikipedia to construct relevant text pairs. However, as shown in Figure~\ref{Fig:img1}, the WLP pseudo query can only ensure a weak doc-wise contextual relationship with the passage. \citet{guu2020retrieval} proposed the masked-salient-span pre-training task, which optimizes a retrieval model by the distant supervision of a language model objective.
As a follow-up, \citet{sachan2021end} combined ICT with the masked-salient-span task and further improved the pre-training effectiveness. \noindent \textbf{Data Augmentation via Question Generation} \quad \citet{zero-shot-qg}, \citet{reddy2021towards} and \citet{ouguz2021domain} all investigate training a dense retriever on questions synthesized by large question generation~(QG) models. Targeting the zero-shot setting, \citet{zero-shot-qg} trained a question generator on general-domain question-passage pairs from community platforms and publicly available academic datasets. \citet{reddy2021towards} focused more on domain transfer and trained the QG model on QA datasets of Wikipedia articles. \citet{ouguz2021domain} uses synthetically generated questions from the PAQ dataset~\cite{paq} and post-comment pairs from a dataset of Reddit conversations for retrieval pre-training. Recently, \citet{shinoda2021can} reveals that QG models tend to generate questions with high lexical overlap, which amplifies the bias of QA datasets. In contrast to these studies, our method focuses on a more general setting where the retriever is trained only on naturally occurring web documents, and has no access to any downstream datasets. \section{Hyperlink-induced Pre-training~(\name)} \label{sec:methodology} In this section, we first discuss the background of OpenQA retrieval, then our methodology and training framework. \subsection{Preliminaries} \noindent \textbf{Passage Retrieval} \quad Given a question $q$, passage retrieval aims to provide a set of relevant passages $p$ from a large corpus $\mathcal{D}$. Our work adopts Wikipedia as the source corpus, and each passage is a disjoint segment within a document from $\mathcal{D}$. \noindent \textbf{OpenQA Q-P Relevance} \quad For OpenQA, a passage $p$ is considered relevant to the query $q$ if $p$ conveys similar facts and contains the answer to $q$.
These two conditions of relevance, namely \evi~and answer containing, are properly introduced into the \name~Q-P pairs under the guidance of the desired hyperlink structures, as we discuss below. To better formulate the relevance of pseudo Q-P pairs, we denote the sequence of passages within a document as $A = [a_1, a_2, ..., a_{n_A}]$ where $A \in \mathcal{D}$. The topical entity and the title of document $A$ and its passage splits are denoted as $e_A$ and $t_A$, respectively. We use $m_A$ to indicate a mention of entity $e_A$, i.e., a hypertext span linking to document $A$. Note that the mention span $m_A$ is usually identical to the document title $t_A$ or a variant of it. Further, we define $\mathcal{F}_{(p)}$ as the entity-level factual information conveyed by the passage $p$, which is the set consisting of the topical entity $e_{P}$ and the entities mentioned within passage $p$. \noindent \textbf{\EVI~in \name} \quad With appropriately designed hyperlink topologies, our \name~Q-P pairs guarantee the co-occurrence of entities which are presented as hypertext or topics in $q$ and $p$. This is considered evidence shared across the Q-P pair: \begin{gather} \mathcal{F}_{(q)} \cap \mathcal{F}_{(p)} \neq \emptyset \label{con:evidence0} \end{gather} Furthermore, we conjecture that \name~is more likely to achieve fact-level relevance than mere entity-level overlap. We conduct human evaluation in Section \ref{sec:human} and case studies in Appendix \ref{appendix:case} to support this conjecture. Moreover, we demonstrate that any Q-P pair containing hyperlink-induced factual evidence, which can be represented as triples, is covered by our proposed topologies; see Appendix \ref{appendix:reduction}. \noindent \textbf{Answer Containing in \name} \quad We consider the document title $t_Q$ as the information-seeking target of $q$.
Accordingly, the relevance of answer containing can be formulated as \begin{gather} t_Q \subseteq p \label{con:answer0} \end{gather} The rationale behind this is that both the natural question and the Wikipedia document are intended to describe related facts and events regarding a targeted object, whereas the object is an answer for a question but a topical entity for a Wikipedia document. This similarity leads us to take the document title as the information-seeking target of its context. \subsection{Hyperlink-induced Q-P Pairs} \label{sec:HIS} Based on an analysis of how queries match their evidential passages in the NQ~\cite{kwiatkowski2019natural} dataset, we propose two kinds of hyperlink topology for relevance construction: Dual-link and Co-mention. We present our exploratory data analysis on the NQ dataset in Appendix \ref{appendix:analysis_evidence}. Here we discuss the desired hyperlink topologies and the corresponding relevance of the pseudo Q-P pairs. \noindent \textbf{Dual-link (DL)}\quad Among all NQ training samples, 55\% of questions mention the title of their corresponding golden passage. This observation motivates us to leverage the dual-link~(DL) topology for relevance construction. We consider a passage pair $(a_i, b_j)$ to follow the dual-link topology if the two passages link to each other.
An example of a DL pair $(a_i, b_j)$ is shown in Figure \ref{Fig:main}, in which passage $b_j$ mentions the title of document $A$ as $m_A$, satisfying the condition of answer containing: \begin{gather} t_A \approx m_A {\rm \quad and \quad} m_A \subseteq b_j \label{con:answer1} \end{gather} \noindent Further, since the passages $a_{i}$ and $b_{j}$ both mention the topical entity of the other, the entities $e_{A}$ and $e_{B}$ appear in both passages as evidence: \begin{gather} \{e_A, e_B\} \subseteq \mathcal{F}_{(a_i)} \cap \mathcal{F}_{(b_j)} \label{con:evidence1} \end{gather} \noindent \textbf{Co-mention (CM)} \quad Among all NQ training samples, about 40\% of questions fail to match the dual-link condition but mention the same third-party entity as their corresponding golden passages. In light of this observation, we utilize another topology, Co-mention~(CM). We consider a passage pair $(c_k, d_l)$ to follow the Co-mention topology if both passages link to a third-party document $E$ and $d_l$ links to $c_k$. Figure \ref{Fig:main} illustrates a CM pair $(c_k, d_l)$ where answer containing is ensured as the title of $c_k$ occurs in $d_l$: \begin{gather} t_C \approx m_C {\rm \quad and \quad} m_C \subseteq d_l \label{con:answer2} \end{gather} \noindent Since both $c_k$ and $d_l$ mention a third-party entity $e_E$, and $e_C$ is the topical entity of $c_k$ as well as a mentioned entity in $d_l$, we have entity-level evidence across $c_k$ and $d_l$: \begin{gather} \{e_C, e_E\} \subseteq \mathcal{F}_{(c_k)} \cap \mathcal{F}_{(d_l)} \label{con:evidence2} \end{gather} In practice, we use sentence-level queries which contain the corresponding evidential hypertext, and we do not prepend the title to the passage, in order to reduce superficial entity-level overlap. To improve the quality of CM pairs, we filter out pairs whose co-mentioned entity ranks in the top 10\% of Wikipedia entities by in-degree.
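Concretely, the DL and CM conditions above reduce to membership tests over the outgoing-link sets of the two passages; the following is a minimal sketch (the \texttt{Passage} record and its field names are hypothetical illustrations, not our actual data structures):

```python
from dataclasses import dataclass, field

@dataclass
class Passage:
    doc_title: str                            # title t of the enclosing document
    text: str
    links: set = field(default_factory=set)   # titles of documents this passage links to

def is_dual_link(q: Passage, p: Passage) -> bool:
    # DL: each passage links to the other's document, so each mentions
    # the other's topical entity (and p contains q's title as hypertext).
    return q.doc_title in p.links and p.doc_title in q.links

def is_co_mention(q: Passage, p: Passage) -> bool:
    # CM: p links back to q's document, and both passages link to a
    # common third-party document E.
    shared = (q.links & p.links) - {q.doc_title, p.doc_title}
    return q.doc_title in p.links and len(shared) > 0
```

In this sketch a DL pair such as $(a_i, b_j)$ satisfies both directions of linking, while a CM pair such as $(c_k, d_l)$ only requires the back-link plus a shared third-party target.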
We also present pseudo code in Appendix \ref{appendix:pseudo} to illustrate how we construct our pseudo Q-P pairs. Furthermore, we highlight that \name~has the following advantages: 1) it introduces more semantic variation and paraphrasing for better text matching; 2) the hypertext reflects users' potential interests or needs for relevant information, which is consistent with the downstream information-seeking purpose. \subsection{Bi-encoder Training} We adopt a BERT-based bi-encoder to encode queries and passages separately into $d$-dimensional vectors. The output representation is derived from the last hidden state of the [CLS] token, and the final matching score is measured by the inner product: \begin{align} & h_q = {\rm BERT_Q}(q)({\rm[CLS]}) \nonumber \\ & h_p = {\rm BERT_P}(p)({\rm[CLS]}) \nonumber \\ & {\rm S}(p,q) = h_q^{\rm T} \cdot h_p \nonumber \end{align} Let $B=\{ \langle q_{i}, p_{i}^{+}, p_{i}^{-} \rangle \}_{i=1}^{n} $ be a mini-batch with $n$ instances. Each instance contains a question $q_{i}$ paired with a positive passage $p_{i}^{+}$ and a negative passage $p_{i}^{-}$. With in-batch negative sampling, each question $q_{i}$ considers all the passages in $B$ except its own gold $p_{i}^{+}$ as negatives, resulting in $2n-1$ negatives per question in total. We use the negative log-likelihood of the positive passage as our loss for optimization: \begin{align} & L(q_{i}, p_{i}^{+}, p_{i,1}^{-},...,p_{i,2n-1}^{-}) \nonumber \\ = & -\log\frac {e^{S(q_{i},p_{i}^{+})}} { e^{S(q_{i},p_{i}^{+})} + {\textstyle \sum_{j=1}^{2n-1}} e^{S(q_{i},p_{i,j}^{-})} } \nonumber \end{align} \section{Experimental Setup} In this section, we discuss the pre-training corpus preparation, the downstream datasets, the hyper-parameters, and the basic setup for our experiments. \subsection{Pre-training Corpus} We adopt Wikipedia as our source corpus $\mathcal{D}$ for pre-training, as it is the largest encyclopedia, covering diverse topics with good content quality and linking structures.
We choose the snapshot 03-01-2021 of an English Wikipedia dump and process it with WikiExtractor\footnote{Available at https://github.com/attardi/wikiextractor} to obtain clean text. After filtering out documents with blank text or a title shorter than three letters, we follow previous work~\cite{karpukhin2020dense} and split the remaining documents into disjoint chunks of 100 words as passages, resulting in over 22 million passages in the end. \subsection{Downstream Datasets} We evaluate our method on several open-domain question answering benchmarks, described below. \noindent \textbf{Natural Questions (NQ)} \cite{kwiatkowski2019natural} is a popular QA dataset with real queries from Google Search and annotated answers from Wikipedia. \noindent \textbf{TriviaQA} \cite{joshi2017triviaqa} contains question-answer pairs scraped from trivia websites. \noindent \textbf{WebQuestions (WQ)} \cite{berant2013semantic} consists of questions generated by the Google Suggest API with entity-level answers from Freebase. \noindent \textbf{HotpotQA} (Fullwiki) \cite{yang2018hotpotqa} is a human-annotated multi-hop question answering dataset. \noindent \textbf{BioASQ} \cite{tsatsaronis2015overview} is a competition on biomedical semantic indexing and question answering; we evaluate on its factoid questions from task 8B. \noindent \textbf{MS MARCO} (Passage Ranking) \cite{nguyen2016ms} consists of real-world user queries and a large collection of Web passages extracted by the Bing search engine. \noindent \textbf{Retrieval Corpus} \quad For downstream retrieval, we use the 21M Wikipedia passages provided by DPR \cite{karpukhin2020dense} for NQ, TriviaQA and WQ. For BioASQ, we take the abstracts of PubMed articles from task 8A with the same split as \citet{reddy2021towards}. For HotpotQA and MS MARCO, we use the official corpus.
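The 100-word passage splitting described above can be sketched as follows (assuming simple whitespace tokenization, which may differ from the actual preprocessing pipeline):

```python
def split_into_passages(doc_text: str, size: int = 100) -> list:
    """Split a document into disjoint passages of `size` words each;
    the final passage keeps whatever words remain."""
    words = doc_text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
```

Applied over the filtered Wikipedia dump, this kind of disjoint chunking is what yields the 22M-passage pre-training corpus.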
\subsection{Implementation Details} During pre-training, we train the bi-encoder for 5 epochs with parameters shared, using a batch size of 400 and an Adam optimizer \citep{kingma2014adam} with a learning rate of $2 \times 10^{-5}$ and linear scheduling with 10\% warm-up steps. Our \name~and all the reproduced baselines are trained on 20 million Q-P pairs with in-batch negative sampling, and the best checkpoints are selected based on the average rank of gold passages evaluated on the NQ dev set. The pre-training takes around 3 days using eight NVIDIA V100 32GB GPUs. For downstream fine-tuning, we use the same hyper-parameters for all experiments. Specifically, we fine-tune the pre-trained models for 40 epochs with a batch size of 256 and the same optimizer and learning-rate settings as in pre-training. We conduct evaluation on the respective dev sets to select the best checkpoints, and we use the last checkpoint if there is no dev set or test set (e.g. HotpotQA). More details can be found in Appendix \ref{appendix:parameter}. \subsection{Baselines} Most existing baselines have been implemented under different experimental settings, which have a substantial effect on retrieval performance. To ensure fairness, we reproduce several pre-training methods (ICT, WLP, BFS, and their combination) under the same experimental setting, including batch size, base model, amount of pre-training data, and so on. The only difference between our method and the re-implemented baselines is the self-supervision signal derived from the respective pre-training samples. Our reproduced BM25 baseline is better than that reported in \citet{karpukhin2020dense}, and the re-implemented pre-training methods also perform better than those reported in recent work\footnote{Our reproduced ICT and BFS surpass the reproduction from recent work \cite{ouguz2021domain} by 15 and 12 points, respectively, in terms of top-20 retrieval accuracy on the NQ test set under the zero-shot setting.}.
In addition, we include the work REALM \cite{guu2020retrieval} as a baseline which has recently been reproduced by \citet{sachan2021end} using 240 GPUs and is named masked salient spans (MSS). We note that most related works gain improvements from varying downstream setting or synthetic pre-training with access to the downstream data of respective domain, which is out of the scope of our interests. \begin{table*} \centering \scalebox{0.8}{ \begin{tabular}{lccccccccc} \hline & \multicolumn{3}{c}{NQ} & \multicolumn{3}{c}{TriviaQA} & \multicolumn{3}{c}{WQ} \\ & top5 & top20 & top100 & top5 & top20 & top100 & top5 & top20 & top100 \\ \hline & \multicolumn{9}{c}{w/o fine-tuning (zero-shot)} \\ \cline{2-10} $\textrm{BM25}^{\dag}$ & 43.6 & 62.9 & 78.1 & \textbf{66.4} & 76.4 & 83.2 & 42.6 & 62.8 & 76.8 \\ $\textrm{ICT}^{\dag}$ \cite{lee2019latent} & 23.4 & 40.7 & 58.1 & 33.3 & 51.3 & 69.9 & 19.9 & 36.2 & 56.0 \\ $\textrm{WLP}^{\dag}$ \cite{chang2020pre} & 28.5 & 47.3 & 65.3 & 51.3 & 67.0 & 79.1 & 26.9 & 49.0 & 68.1 \\ $\textrm{BFS}^{\dag}$ \cite{chang2020pre} & 31.0 & 49.9 & 67.5 & 43.8 & 61.1 & 74.7 & 28.5 & 48.0 & 67.7 \\ $\textrm{ICT+WLP+BFS}^{\dag}$ \cite{chang2020pre} & 32.3 & 50.2 & 68.0 & 49.7 & 65.5 & 78.3 & 28.4 & 47.8 & 67.5 \\ MSS \cite{sachan2021end} & 41.7 & 59.8 & 74.9 & 53.3 & 68.2 & 79.4 & - & - & - \\ \name & \textbf{51.2} & \textbf{70.2} & \textbf{82.0} & 65.9 & \textbf{76.9} & \textbf{84.0} & \textbf{49.3} & \textbf{66.9} & \textbf{80.8} \\ \hline & \multicolumn{9}{c}{w/ fine-tuning} \\ \cline{2-10} $ \textrm{No Pre-train}^{\dag}$ & 68.5 & 79.6 & 86.5 & 71.3 & 79.7 & 85.0 & 61.6 & 74.5 & 81.7 \\ $\textrm{ICT}^{\dag}$ \cite{lee2019latent} & 69.8 & 81.1 & 87.0 & 70.4 & 79.8 & 85.5 & 63.7 & 75.5 & 83.4 \\ $\textrm{WLP}^{\dag}$ \cite{chang2020pre} & 69.8 & \textbf{81.4} & 87.4 & 73.1 & 81.5 & 86.1 & 64.5 & 75.2 & 83.9 \\ $\textrm{BFS}^{\dag}$ \cite{chang2020pre} & 68.7 & 80.1 & 86.5 & 72.8 & 80.8 & 86.0 & 63.0 & 75.1 & 83.5 \\ $\textrm{ICT+WLP+BFS}^{\dag}$ 
\cite{chang2020pre} & 68.9 & 80.9 & 87.7 & 74.6 & 82.2 & 86.5 & 64.1 & \textbf{76.7} & 84.4 \\ \name & \textbf{70.9} & \textbf{81.4} & \textbf{88.0} & \textbf{75.3} & \textbf{82.4} & \textbf{86.9} & \textbf{65.5} & 76.5 & \textbf{84.5} \\ \hline \end{tabular} } \caption{Top-k $(k \in \{5,20,100\})$ retrieval accuracy, measured as the percentage of top-$k$ retrieved passages that contain the answer. The upper block of the table describes the performance under the zero-shot setting, while the lower block describes the full-set fine-tuning setting. $\dag$: our re-implementation.} \label{table:main-result} \vspace{-0.2cm} \end{table*} \section{Experiments} \subsection{Main Results} Table \ref{table:main-result} shows the retrieval accuracy of different models on three popular QA datasets under the zero-shot and full-set fine-tuning settings. Under the zero-shot setting, \name~consistently outperforms BM25 except for the top-5 retrieval accuracy on TriviaQA, while all other pre-training baselines are far behind. We attribute the smaller margin over BM25 on TriviaQA to a high overlap between questions and passages, which gives the term-based retriever a clear advantage. We investigate the coverage of the question tokens that appear in the gold passage and find that the overlap is indeed higher in TriviaQA (62.8\%) than in NQ (60.7\%) and WQ (57.5\%). After fine-tuning, all models with intermediate pre-training give better results than the vanilla DPR, while our \name~achieves the best results in nearly all cases. Among ICT, WLP and BFS, we observe that WLP is the most competitive with or without fine-tuning, and additional improvements can be achieved by combining the three. This observation indicates that pre-training with diverse relevance leads to better generalization to downstream tasks, while document-wise relevance is more adaptable for OpenQA retrieval.
The advantage of document-wise relevance may come from the fact that texts in different documents are likely written by different parties, providing fewer superficial cues for text matching, which is beneficial for downstream retrieval. Our \name~learns both coarse-grained document-wise relationships and fine-grained entity-level evidence, which results in a significant improvement. \subsection{Few-shot Learning} To investigate the retrieval effectiveness in a more realistic scenario, we conduct experiments on few-shot learning. Specifically, we fine-tune the pre-trained models on large datasets (NQ, TriviaQA) with $m$ ($m\in\{16, 128, 1024\}$) samples and present the few-shot retrieval results in Table \ref{table:few-shot}. With only a few hundred labeled examples for fine-tuning, all the models with intermediate pre-training perform better than those without, and \name~outperforms the others by a larger margin when $m$ is smaller. Moreover, among the three re-implemented baselines, WLP gains the largest improvement with an increasing number of samples, outperforming ICT and BFS when a thousand labelled samples are provided for fine-tuning.
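The few-shot fine-tuning above optimizes the same in-batch objective as pre-training: each question is scored against all $2n$ passages in the batch, of which $2n-1$ act as negatives. A minimal NumPy sketch of this loss (the function name and array shapes are illustrative, not our actual implementation):

```python
import numpy as np

def in_batch_nll(h_q, h_pos, h_neg):
    """Mean negative log-likelihood with in-batch negatives.

    h_q, h_pos, h_neg: (n, d) arrays of question, positive-passage and
    extra-negative embeddings.  For question i, its positive sits at
    column i of the score matrix; the remaining 2n-1 batch passages
    serve as negatives."""
    h_p = np.concatenate([h_pos, h_neg], axis=0)         # (2n, d) candidate pool
    scores = h_q @ h_p.T                                 # (n, 2n) inner products
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    n = h_q.shape[0]
    return -log_probs[np.arange(n), np.arange(n)].mean()
```

With all-zero embeddings every passage is equally likely and the loss reduces to $\log 2n$, while well-aligned question/positive pairs drive it toward zero.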
\begin{table}[H] \centering \scalebox{0.7}{ \centering \begin{tabular}{lcccccc} \hline & \multicolumn{3}{c}{NQ} & \multicolumn{3}{c}{TriviaQA} \\ & top5 & top20 & top100 & top5 & top20 & top100 \\ \hline & \multicolumn{6}{c}{m = 16} \\ \cline{2-7} No Pre-train & 12.7 & 24.2 & 40.2 & 18.6 & 32.6 & 51.0 \\ ICT & 37.1 & 54.4 & 70.5 & 47.2 & 62.5 & 75.8 \\ WLP & 29.8 & 48.2 & 65.5 & 51.4 & 66.9 & 79.2 \\ BFS & 39.8 & 57.9 & 73.2 & 46.9 & 62.2 & 75.2 \\ \name & \textbf{51.9} & \textbf{70.3} & \textbf{81.6} & \textbf{65.9} & \textbf{76.9} & \textbf{84.0} \\ \hline & \multicolumn{6}{c}{m = 128} \\ \cline{2-7} No Pre-train & 38.0 & 53.4 & 68.8 & 38.0 & 53.4 & 68.8 \\ ICT & 47.0 & 64.2 & 77.4 & 58.5 & 71.4 & 81.0 \\ WLP & 44.9 & 62.4 & 76.6 & 63.1 & 74.5 & 82.6 \\ BFS & 44.4 & 62.8 & 76.7 & 59.2 & 71.7 & 80.8 \\ \name & \textbf{55.2} & \textbf{71.3} & \textbf{81.8} & \textbf{67.7} & \textbf{77.7} & \textbf{84.4} \\ \hline & \multicolumn{6}{c}{m = 1024} \\ \cline{2-7} No Pre-train & 49.7 & 66.4 & 78.8 & 54.0 & 67.2 & 77.6 \\ ICT & 55.9 & 72.2 & 83.7 & 63.8 & 75.7 & 83.3 \\ WLP & 57.2 & 73.6 & 83.9 & 67.2 & 77.5 & 84.5 \\ BFS & 53.7 & 71.7 & 83.1 & 63.6 & 75.3 & 83.1 \\ \name & \textbf{60.6} & \textbf{76.4} & \textbf{85.3} & \textbf{70.2} & \textbf{79.8} & \textbf{85.4} \\ \hline \end{tabular}} \caption{Few-shot retrieval accuracy on NQ and TriviaQA test sets after fine-tuning with $m$ annotated samples.} \label{table:few-shot} \end{table} \subsection{Out-of-domain (OOD) Scenario} \label{sec:ood} \begin{table*}[] \centering \scalebox{0.8}{ \begin{tabular}{lcccccccccc} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Negative} & \multicolumn{3}{c}{NQ} & \multicolumn{3}{c}{TriviaQA} & \multicolumn{3}{c}{WebQ} \\ & & top5 & top20 & top100 & top5 & top20 & top100 & top5 & top20 & top100 \\ \hline \multirow{2}{*}{Dual-link} & 0 & 46.2 & 64.7 & 78.0 & 60.5 & 73.0 & 81.2 & 44.6 & 65.2 & 78.8 \\ & 1 & 49.0 & 67.8 & 79.7 & 62.0 & 73.8 & 82.1 & 48.4 & \textbf{67.1} & 79.5 \\ \hline 
\multirow{2}{*}{Co-mention} & 0 & 35.8 & 57.1 & 75.1 & 58.9 & 73.1 & 82.6 & 36.2 & 58.9 & 76.2 \\ & 1 & 42.5 & 62.2 & 77.9 & 63.2 & 75.8 & 83.7 & 45.4 & 64.5 & 78.9 \\ \hline \multirow{2}{*}{\name} & 0 & 45.7 & 66.0 & 79.9 & 62.6 & 75.2 & 83.0 & 43.9 & 64.1 & 79.4 \\ & 1 & \textbf{51.2} & \textbf{70.2} & \textbf{82.0} & \textbf{65.9} & \textbf{76.9} & \textbf{84.0} & \textbf{49.3} & 66.9 & \textbf{80.8} \\ \hline \end{tabular}} \caption{Ablation studies on different types of topologies and negatives: the retrieval accuracy of models trained with different types of Q-P pairs and additional negatives on the NQ, TriviaQA, and WebQ datasets.} \label{table:abl} \vspace{-0.3cm} \end{table*} While \name~is pre-trained on Wikipedia pages, we conduct additional experiments on the BioASQ and MS MARCO datasets with non-Wikipedia corpora to further verify its out-of-domain (OOD) generalization. Following \citet{gururangan2020don}, we measure the similarity between corpora by computing the vocabulary overlap of the top 10K frequent words (excluding stopwords). We observe a vocabulary overlap of 36.2\% between BioASQ and Wikipedia and of 61.4\% between MS MARCO and Wikipedia, indicating that these two domains differ considerably from our pre-training corpus. The results of zero-shot retrieval on the BioASQ and MS MARCO datasets are presented in Table \ref{table:ood}. For BioASQ, \name~is competitive with both BM25 and AugDPR~\cite{reddy2021towards} while significantly outperforming ICT, WLP, and BFS. Note that AugDPR is a baseline that has access to NQ labeled data, whereas our \name~is trained in an unsupervised way. For MS MARCO, \name~consistently outperforms other pre-training methods but falls behind BM25 under the zero-shot setting.
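The vocabulary-overlap measure used above can be sketched as follows (whitespace tokenization, the stopword handling, and the normalization by the larger vocabulary are simplifying assumptions of this illustration):

```python
from collections import Counter

def vocab_overlap(corpus_a, corpus_b, k=10_000, stopwords=frozenset()):
    """Overlap of the top-k frequent non-stopword vocabularies of two
    corpora (lists of documents), normalized by the larger vocabulary."""
    def top_k(corpus):
        counts = Counter(w for doc in corpus for w in doc.lower().split()
                         if w not in stopwords)
        return {w for w, _ in counts.most_common(k)}
    va, vb = top_k(corpus_a), top_k(corpus_b)
    return len(va & vb) / max(len(va), len(vb))
```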
We conjecture that the performance degradation on MS MARCO is attributed to two factors: 1) the Q-P lexical overlap of MS MARCO (65.7\%) is higher than that of BioASQ (48.7\%) as well as the other datasets; 2) the information-seeking target of an MS MARCO query is the entire passage rather than a short answer span, which is at odds with our proposed answer-containing relevance. We also observe that pre-training exclusively with DL pairs achieves better results on MS MARCO, indicating the generality of the relevance induced by the DL topology. \begin{table}[H] \centering \scalebox{0.8}{ \begin{tabular}{lcccc} \hline & \multicolumn{2}{c}{BioASQ} & \multicolumn{2}{c}{MS MARCO} \\ & top20 & top100 & R@20 & R@100 \\ \hline BM25 & $\textrm{42.1}^{\ddag}$ & $\textrm{50.5}^{\ddag}$ & \textbf{49.0} & \textbf{69.0} \\ DPR & $\textrm{34.7}^{\ddag}$ & $\textrm{46.9}^{\ddag}$ & - & - \\ AugDPR & $\textrm{41.4}^{\ddag}$ & $\textrm{52.4}^{\ddag}$ & - & - \\ \hline ICT & 8.9 & 18.6 & 10.8 & 19.5 \\ WLP & 29.7 & 44.3 & 18.4 & 36.0 \\ BFS & 28.4 & 41.9 & 28.0 & 44.7 \\ \hline \name~(DL) & \textbf{46.0} & 56.9 & 42.0 & 62.6 \\ \name~(CM) & 37.8 & 54.7 & 26.6 & 47.3 \\ \name~(DL+CM) & 40.8 & \textbf{58.3} & 37.3 & 60.0 \\ \hline \end{tabular}} \caption{Top-20/100 zero-shot retrieval accuracy on BioASQ and top-20/100 zero-shot recall on MS MARCO. \ddag: \cite{reddy2021towards}} \label{table:ood} \vspace{-0.2cm} \end{table} \subsection{Multi-hop Retrieval} While \name~aims to acquire the ability to match document-wise concepts and facts, it is natural to ask about its capability in multi-hop scenarios. We evaluate our methods on HotpotQA in a single-hop manner. Specifically, for each query, we randomly select one of the two golden passages as a positive passage and one additional passage with a high TF-IDF score as a negative passage. Our models are further fine-tuned on the HotpotQA training set and evaluated on the bridge and the comparison type questions from the development set, respectively.
The results are shown in Table \ref{table:hotpot}, which reveals that \name~consistently outperforms the other methods, with up to an 11-point improvement in top-5 retrieval accuracy on bridge questions. Furthermore, WLP yields a 4-point advantage on average over ICT and BFS on bridge questions, showing that document-wise relevance contributes to better associative abilities. We include a case study in Appendix \ref{appendix:hotpot}. \begin{table}[H] \centering \scalebox{0.7}{ \centering \begin{tabular}{lcccccc} \hline \multirow{2}{*}{} & \multicolumn{3}{c}{Bridge} & \multicolumn{3}{c}{Comparison} \\ & top5 & top20 & top100 & top5 & top20 & top100 \\ \hline No Pre-train & 25.0 & 40.5 & 58.0 & 83.0 & 94.2 & 97.4 \\ ICT & 28.1 & 43.8 & 61.8 & 84.8 & 94.4 & 98.3 \\ WLP & 32.1 & 49.1 & 66.0 & 89.7 & 97.3 & 99.2 \\ BFS & 29.0 & 44.7 & 62.1 & 87.4 & 95.8 & 98.7 \\ \name & \textbf{36.9} & \textbf{53.0} & \textbf{68.5} & \textbf{94.4} & \textbf{98.5} & \textbf{99.5} \\ \hline \end{tabular} } \caption{Retrieval accuracy on questions from the HotpotQA dev set, measured as the percentage of top-$k$ retrievals which include both gold passages.} \label{table:hotpot} \vspace{-0.2cm} \end{table} \section{Analysis} \subsection{Ablation Study} \label{sec:abl} To better understand how different key factors affect the results, we conduct ablation experiments, with results shown in Table \ref{table:abl}. \noindent \textbf{Hyperlink-based Topologies} \quad Our proposed dual-link (DL) and co-mention (CM) Q-P pairs provide evidence induced by different hyperlink-based topologies. To examine their respective effectiveness, we pre-train retrievers on Q-P pairs derived from each topology and their combination. We present zero-shot retrieval results in Table \ref{table:abl}, which show that retrievers pre-trained on DL pairs have a distinct advantage over those pre-trained on CM pairs, while combining both gives an extra improvement.
\noindent \textbf{Negative Passage} \quad In practice, negative sampling is essential for learning a high-quality encoder. Besides in-batch negatives, our reported \name~employs one additional negative for each query. We further explore the impact of the additional negatives during pre-training. In our ablation study, pre-training with additional negatives improves the results significantly, which may be attributed to using more in-batch pairs for text matching. More details on the implementation and negative sampling strategies can be found in Appendix \ref{appendix:negative}. \subsection{Analysis on Q-P Overlap} We carry out an extensive analysis of the Q-P lexical overlap in the retrieval task. Specifically, we tokenize $q$ and $p$ using the BERT tokenizer and measure the Q-P overlap as the proportion of the question tokens that appear in the corresponding passage. Based on the degree of Q-P overlap, we divide the NQ dev set into five categories for further analysis. \noindent \textbf{Distribution of Q-P Overlap} \quad Figure \ref{Fig:overlap-data} shows that both the pre-training and the retrieved pairs of \name~have an overlap distribution more similar to the downstream NQ dataset than the other methods, which implies consistency between the relevance provided by \name~and that in real information-seeking scenarios. \begin{figure}[htbp] \centering \begin{minipage}[t]{0.23\textwidth} \centering \includegraphics[width=1\textwidth]{img/HLP-overlap-data.jpg} \end{minipage} \begin{minipage}[t]{0.23\textwidth} \centering \includegraphics[width=1\textwidth]{img/HLP-overlap-pretrain-top1.jpg} \end{minipage} \caption{Distribution of overlap on pseudo and downstream Q-P pairs (left), and that between the query and the top-1 passage retrieved by different pre-trained models (right).} \label{Fig:overlap-data} \vspace{-0.2cm} \end{figure} \noindent \textbf{Retrieval Performance vs.
Q-P Overlap} \quad Figure~\ref{Fig:overlap-performance} shows the top-20 retrieval accuracy on samples with varying degrees of Q-P overlap. Both figures show that the retrievers are more likely to return answer-containing passages when there is higher Q-P overlap, suggesting that all these models can exploit lexical overlap for passage retrieval. Under the zero-shot setting, \name~outperforms all the methods except BM25 when the overlap ratio is larger than 0.8, which reflects the strong reasoning ability of \name~and the overlap-dependent nature of term-based retrievers. After fine-tuning, models with additional pre-training perform better than the vanilla DPR, while \name~outperforms all other methods in most cases. It is important to note that although \name~is pre-trained on more high-overlap text pairs, it performs better than all the other methods when less overlap is available. We speculate that this is because the overlap in \name~Q-P pairs mostly comes from factual information, such as entities, which introduces fewer superficial cues, allowing for better adaptation to the downstream cases. \begin{figure}[htbp] \centering \begin{minipage}[t]{0.23\textwidth} \centering \includegraphics[width=1\textwidth]{img/HLP-overlap-pretrain.jpg} \end{minipage} \begin{minipage}[t]{0.23\textwidth} \centering \includegraphics[width=1\textwidth]{img/HLP-overlap-finetune.jpg} \end{minipage} \caption{Top-20 retrieval accuracy of pre-training (left) and fine-tuning (right) on the divided NQ dev set.} \label{Fig:overlap-performance} \vspace{-0.2cm} \end{figure} \subsection{Human Evaluation on Q-P pairs} \label{sec:human} We conduct a human evaluation to investigate the proportion of Q-P pairs that convey similar fact-level information. Specifically, we randomly selected one hundred examples from our constructed Q-P pairs and asked annotators to identify whether the query and the corresponding passage convey similar facts.
Each case is evaluated by three annotators and the result is determined by their votes. Our results are shown in Table \ref{table:human}, and we further present case studies in Appendix \ref{appendix:case}. \begin{table}[H] \centering \scalebox{0.8}{ \begin{tabular}{cccc} \hline & DL & CM & WLP \\ \hline Votes & 61\% & 40\% & 15\% \\ \hline \end{tabular}} \caption{Human evaluation on pseudo Q-P pairs constructed by different methods.} \label{table:human} \vspace{-0.2cm} \end{table} \section{Conclusion} This paper proposes Hyperlink-induced Pre-training (\name), a pre-training method for OpenQA passage retrieval by leveraging the online textual relevance induced by hyperlink-based topology. Our experiments show that \name~gains significant improvements across multiple QA datasets under different scenarios, consistently outperforming other pre-training methods. Our method provides insights into OpenQA passage retrieval by analyzing the underlying bi-text relevance. Future work involves addressing tasks like MS MARCO where the granularity of the information-seeking target is at the passage level. \section{Acknowledgments} This work is partially supported by the Hong Kong RGC GRF Project 16202218, CRF Project C6030-18G, C1031-18G, C5026-18G, AOE Project AoE/E-603/18, China NSFC No. 61729201. We thank all the reviewers for their insightful comments.
\section{Introduction} \label{introduction} The study of the phase diagram of fundamental--adjoint pure gauge systems, \begin{equation} S = \beta_f \sum_{P} [1 - \frac{1}{N} {\rm Re} {\rm Tr} U_{P} ] + \beta_a \sum_{P} [1 - \frac{1}{N^2} {\rm Tr} U^\dagger_{P} {\rm Tr} U_{P}] , \label{eq:aaction} \end{equation} in the early 1980s revealed a non-trivial phase structure with first order (bulk) transitions in the region of small $\beta_f$ \cite{Greensite81,Bhanot82}. In particular, a line of first order transitions points towards the fundamental axis and, after terminating, extends as a roughly straight line of bulk crossovers to the fundamental axis and beyond. This non-trivial phase structure, and in particular the critical endpoint, has been argued to be associated with the dip in the discrete $\beta$-function of the theory with standard Wilson action, which occurs in the region where the bulk crossover line crosses the fundamental axis. In two recent papers, Gavai, Grady and Mathur \cite{Gavai94} and Mathur and Gavai \cite{Gavai94b} returned to investigating the behavior of pure gauge SU(2) theory in the fundamental--adjoint plane at finite, non-zero temperature. They raised doubts about the bulk nature of the phase transition and claimed that their results were consistent with the transitions, for lattices with temporal extent $N_t = 4$, 6 and 8, being of thermal, deconfining nature, displaced toward weak coupling with increasing $N_t$. On the transition line for a fixed $N_t$ there is then a switch from second order behavior near the fundamental axis to first order behavior at larger adjoint coupling. In a Landau--Ginzburg model of the effective action in terms of Polyakov lines, Mathur \cite{Mathur95} reported that he could reproduce the claimed behavior seen in the numerical simulations.
These results, should they be confirmed, are rather unsettling, since they contradict the usual universality picture of lattice gauge theory with a second order deconfinement transition for gauge group SU(2). Puzzled by the finding of Ref.~\cite{Gavai94}, we studied the finite temperature behavior of pure gauge SU(3) theory in the fundamental--adjoint plane \cite{SU3_F_A,SU3_F_Ab}. We obtained results in agreement with the usual universality picture: there is a first order bulk transition line ending at \begin{equation} (\beta^*_f,\beta^*_a) = (4.00(7), 2.06(8)) . \label{eq:endpoint} \end{equation} The thermal deconfinement transition lines for fixed $N_t$ (being of first order for gauge group SU(3)) in the fundamental adjoint plane are ordered such that the thermal transition for smaller $N_t$ occurs to the left, at smaller $\beta_f$, than that for a larger $N_t$. In this order the thermal transition lines join on to the bulk transition line. The thermal transition line for $N_t = 4$ joins the bulk transition line very close to the critical endpoint. This is shown in Figure 4 of Ref.~\cite{SU3_F_Ab}, reproduced here as Figure~\ref{fig:phas_diag_T}. \begin{figure} \begin{center} \vskip 30mm \leavevmode \epsfysize=360pt \epsfbox[90 40 580 490]{phas_diag_T.eps} \end{center} \caption{The phase diagram together with the thermal deconfinement transition points for $N_t=2$, 4, 6 and 8 from Ref.~\protect\cite{SU3_F_Ab}. The lower plot shows an enlargement of the region around the end point of the bulk transition.} \label{fig:phas_diag_T} \end{figure} To solidify this picture, which is in agreement with the usual universality scenario, we have continued the investigation studying zero-temperature observables. We computed the string tension and the masses of some glueballs, in particular the $0^{++}$ glueball, along the thermal transition line for $N_t = 4$. 
For universal continuum behavior, $\sqrt{\sigma}$ and the glueball masses should be constant along the thermal transition line for a fixed $N_t$, leading to constant ratios $T_c/\sqrt{\sigma}$ and $m_g/\sqrt{\sigma}$. Of course, at small $N_t$, corresponding to a large lattice spacing $a$, we expect to see some deviations from this constant behavior. However, we find large deviations for $m_{0^{++}}/\sqrt{\sigma}$ as we approach the critical endpoint of the bulk transition along the $N_t=4$ thermal transition line. The scalar glueball mass decreases dramatically, much more than could be expected from simple scaling violations at a large lattice spacing. On the other hand, this is not really a surprise since at the critical endpoint at least one mass in the $0^{++}$-channel has to vanish. We elaborate on our findings in the next sections and then discuss the implications for the scaling behavior along the fundamental (or Wilson) axis. \section{Observables and Analysis} \label{analysis} We have made simulations of the model with action (\ref{eq:aaction}) along the thermal transition line for $N_t=4$, and continued along the bulk transition line, on a $12^4$ lattice. Experience indicates this size to be sufficient to avoid significant finite size effects. The simulations were carried out with a 10-hit Metropolis algorithm tuned to an acceptance rate of about 50\%. Observables were measured every 20 sweeps after thermalization. Close to the fundamental axis, this resulted in essentially statistically independent measurements, while closer to the critical endpoint the autocorrelation time was significantly increased. We computed finite $T$ approximants to the potential from time-like Wilson loops constructed with `APE'-smeared spatial links \cite{APE_smear} to increase the overlap with the ground state potential. On- and off-axis spatial paths were considered with distances $R = n$, $\sqrt{2} n$, $\sqrt{3} n$ and $\sqrt{5} n$, with $n = 1$, 2, \dots an integer.
The string tension was then extracted from the usual fit \begin{equation} V(\vec R) = V_0 - \frac{e}{R} + l \left( G_L(\vec R) - \frac{1}{R} \right) + \sigma R . \label{eq:fit_form} \end{equation} Here $G_L(\vec R)$ is the lattice Coulomb potential, included in the fit to take account of short distance lattice artefacts. Our fits are fully correlated $\chi^2$-fits with the correlations estimated by bootstrap, after binning to alleviate autocorrelation effects. The results of the best fits are listed in Table~\ref{tab:fits}. \begin{table}[ht] \centerline{% \begin{tabular}{|l|l|l|l|r|l|l|c|} \hline $\beta_a$ & $\beta_{fc}(N_t=4)$ & $\beta_f$ & $L$ & $N_{meas}$ & $\sqrt{\sigma}$ & $m_{0^{++}}$ & $m_{2^{++}}(t=1)$ \\ \hline 0.0 & 5.6925(2) & 5.7$^a$ & & & 0.4099( 12) & 0.97( 2) & 2.39(13) \\ 0.5 & 5.25(5) & 5.25 & 12 & 500 & 0.4218( 28) & 0.93(11) & 2.29(13) \\ 1.0 & 4.85(5) & 4.85 & 12 & 500 & 0.4024( 82) & 0.78(28) & 2.45(19) \\ 1.5 & 4.45(5) & 4.45 & 12 & 500 & 0.3743( 51) & 0.56(17) & 2.13( 9) \\ 2.0 & 4.035(5) & 4.03 & 12 & 1000 & 0.555 ( 11) & 0.34( 6) & 3.11(16) \\ 2.0 & 4.035(5) & 4.035 & 12 & 2000 & 0.4725(128) & 0.20( 4) & 3.17(14) \\ 2.0 & 4.035(5) & 4.04 & 12 & 1000 & 0.3750( 24) & 0.27( 8) & 2.37( 9) \\ \hline 2.25 & 3.8475(25) & 3.8475$^b$ & 12 & 500 & 0.619 ( 18) & 0.37(10) & 3.15(37) \\ 2.25 & 3.8475(25) & 3.8475$^c$ & 12 & 500 & 0.3005( 22) & 0.61( 8) & 1.59( 7) \\ 2.25 & 3.8475(25) & 3.8475$^c$ & 16 & 500 & 0.2965( 19) & 0.62( 5) & 1.72( 9) \\ \hline \end{tabular}% \caption{The results in the neighborhood of the $N_t=4$ thermal transition line. 
Comments: (a) $\protect\sqrt{\sigma}$ at $\beta_a=0.0$ comes from \protect\cite{MTc_sig}, $m_{0^{++}}$ from \protect\cite{GF11_gb}; we did not take their best value, but rather the effective mass from the same distance as at $\beta_a=0.5$; $m_{2^{++}}$ is from \protect\cite{dFSST86}; (b) in the disordered phase and (c) in the ordered phase on the bulk transition.} \label{tab:fits} \medskip\noindent \end{table} We also computed glueball correlation functions in the $0^{++}$, $2^{++}$ and $1^{+-}$ channel that can be built from simple plaquette operators. In an attempt to improve the signals we built these plaquettes not only from the original, but also from the smeared links, already used for the computation of the potential. Not surprisingly for computations done around the critical coupling for the $N_t = 4$ thermal phase transition, we did not obtain a significant signal in the $1^{+-}$ channel and only an effective mass from time distances $t = 0/1$ in the $2^{++}$ channel -- we had 500 measurements everywhere, except for $\beta_a = 2.0$, near the critical endpoint where the number was increased, as given in Table~\ref{tab:fits}. In the $0^{++}$ channel we got a signal at distance $t = 1/2$ at small $\beta_a$ and out to $t = 3/4$ at $\beta_a = 2.0$. Our best results are also given in Table~\ref{tab:fits}. The quantities $\sqrt{\sigma}$ and $m_{0^{++}}$ are shown in Figure~\ref{fig:m_and_sig} plotted versus $\beta_a$. As can be seen, $\sqrt{\sigma}$ remains approximately constant along the thermal transition line for $N_t=4$ -- the errors shown in the figure are statistical only; no estimate of the error from the uncertainty in the determination of $\beta_{fc}$ has been attempted except for $\beta_a=2.0$. There, the computation has been repeated for two nearby couplings, also listed in Table~\ref{tab:fits}; the variation with $\beta_f$ becomes so rapid that the error in the determination of $\beta_{fc}$ becomes the dominating factor. 
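The glueball masses quoted above are effective masses obtained from ratios of the correlator at successive time separations. The following sketch illustrates the standard estimator $m_{\rm eff}(t)=\ln[C(t)/C(t+1)]$ on a synthetic single-exponential correlator; the data and parameters here are illustrative only, not our measurements:

```python
import math

def effective_mass(corr):
    """Effective masses m_eff(t) = ln(C(t)/C(t+1)) from a correlator C(t)."""
    return [math.log(corr[t] / corr[t + 1]) for t in range(len(corr) - 1)]

# Synthetic single-exponential correlator C(t) = A * exp(-m t) with m = 0.5
A, m = 2.0, 0.5
corr = [A * math.exp(-m * t) for t in range(6)]
print(effective_mass(corr))  # every entry equals 0.5 for a pure exponential
```

For a pure exponential the estimator is flat in $t$; in practice one looks for such a plateau at the largest time separations that still carry a usable signal.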
While our estimate for $m_{2^{++}}$ (an unreliable estimate, since we had to use distances $t=0$ and 1 to obtain enough of a signal to extract an effective mass) also remains approximately constant, $m_{0^{++}}$ shows a remarkable decrease as the critical endpoint is approached. This observed behavior suggests that the mass in the $0^{++}$ channel vanishes at the critical endpoint, thereby giving strong additional evidence for the existence of this critical endpoint: at a critical point at least one mass must vanish, and it must lie in the $0^{++}$ channel for the continuum limit to be rotationally invariant. Since no other observable seems to be dramatically affected by the critical endpoint, we conjecture that the continuum theory one would obtain there is simply the (trivial) $\phi^4$ theory. To substantiate this claim somewhat, we made a fit to the scalar mass of the form \begin{equation} m_{0^{++}} = A \left( \beta^*_a - \beta_a \right)^p \label{eq:massfit} \end{equation} expected near a critical point. A 3-parameter fit gave $A=0.76(11)$, $p=0.35(20)$ and $\beta^*_a=2.02(6)$ with a $\chi^2$ of $0.29$ for 2 dof. Note that the estimate for $\beta^*_a$ is in agreement with the previous estimate (\ref{eq:endpoint}) obtained in \cite{SU3_F_Ab} from fits to the jump in the plaquette across the bulk transition line. Within its large error, the exponent $p$ is compatible with the mean field value $0.5$ of $\phi^4$ theory, up to logarithmic corrections. Since the errors of the fit parameters are rather large, we also made a fit with $\beta^*_a$ held fixed at its value $2.06$ of (\ref{eq:endpoint}). This fit gave $A=0.71(3)$ and $p=0.44(5)$ with a $\chi^2$ of $0.47$ for 3 dof. Again, $p$ is compatible with the mean field value. Finally, a fit with $\beta^*_a=2.06$ and $p=0.5$ both held fixed gave $A=0.68(1)$ with a $\chi^2$ of $1.50$ for 4 dof. This last, still very acceptable fit is included in Figure~\ref{fig:m_and_sig}.
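As an aside, a fit with $\beta^*_a$ held fixed can be illustrated by a simple linearized least-squares sketch: taking logarithms turns the power law into a straight line in $\log(\beta^*_a-\beta_a)$. The snippet below is a hedged illustration on synthetic, noise-free data; it neglects the error weighting and correlations of a proper $\chi^2$ fit:

```python
import math

def fit_power_law(beta_a, masses, beta_star):
    """Fit m = A * (beta_star - beta_a)^p with beta_star held fixed,
    via linear regression on log m = log A + p * log(beta_star - beta_a)."""
    xs = [math.log(beta_star - b) for b in beta_a]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    # Ordinary least-squares slope and intercept (unweighted)
    p = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    A = math.exp(ybar - p * xbar)
    return A, p

# Synthetic data generated with A = 0.7, p = 0.5, beta_star = 2.06
beta_a = [0.0, 0.5, 1.0, 1.5, 2.0]
masses = [0.7 * (2.06 - b) ** 0.5 for b in beta_a]
A, p = fit_power_law(beta_a, masses, beta_star=2.06)
print(A, p)  # recovers A = 0.7, p = 0.5
```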
\begin{figure} \begin{center} \leavevmode \epsfysize=360pt \epsfbox[90 40 580 490]{m_and_sig.eps} \end{center} \caption{$\protect\sqrt{\sigma}$ (octagons) and $m_{0^{++}}$ (squares) as a function of $\beta_a$ along the thermal transition line for $N_t=4$. At $\beta_a=2.25$ we show the results from both phases at the bulk transition. The dashed vertical line gives the location of the critical endpoint, (\protect\ref{eq:endpoint}), with the dotted lines indicating the error band. The curve is a fit to $m_{0^{++}}=A(2.06-\beta_a)^{1/2}$.} \label{fig:m_and_sig} \end{figure} \begin{figure} \begin{center} \leavevmode \epsfysize=360pt \epsfbox[90 40 580 490]{pot_b3p8475_ba2p25.eps} \end{center} \caption{The potential in both phases on the bulk transition line at $(\beta_f,\beta_a)=(3.8475,2.25)$ on $12^4$ lattices ($\Box$ and $\times$) and on a $16^4$ lattice ($\circ$).} \label{fig:pot_ba2p25} \end{figure} While we believe we have assembled impressive further evidence for the bulk nature of the phase transition line in the fundamental--adjoint plane and of its critical endpoint, we decided to make one further test. We computed the potential in both phases on the bulk transition line at $\beta_a = 2.25$. This is about the location where the $N_t=6$ thermal transition line joins onto the bulk transition line according to Ref.~\cite{SU3_F_Ab}. Hence, even in the ordered phase a lattice of size $12^4$ should be large enough to obtain a reliable potential. To make sure that this is indeed the case, we repeated the computation in the ordered phase on a $16^4$ lattice. As can be seen in Table~\ref{tab:fits}, the finite size effect on the $12^4$ lattice does indeed appear negligible. On both sides of the bulk transition -- at identical couplings: we did not observe even an attempt at tunneling from one phase to the other -- we find a clearly confining potential. Therefore the bulk transition does not, as expected, affect confinement, provided the lattices are large enough.
Of course, the string tension (and glueball masses) jump from one side of the bulk transition to the other. Indeed, as the bulk transition line is approached from the fundamental axis, the string tension varies more and more rapidly -- and the thermal deconfinement transition lines for different $N_t$ come closer together -- {\it i.e.,} the dip in the step $\beta$-function becomes deeper until a jump is developed through the bulk transition line. \section{Implications for Scaling} \label{scaling} In the previous section we have corroborated the existence of a first order bulk phase transition line ending in a critical endpoint. We have provided evidence that physical observables are little affected by this phase transition line, except for a stronger dependence on $\beta_f$ at fixed $\beta_a$ -- a deepening of the dip in the step $\beta$-function -- and eventually a jump across the bulk transition line. The notable exception to this is the glueball mass in the $0^{++}$ channel, which decreases as the critical endpoint is approached and vanishes there. \begin{figure} \begin{center} \leavevmode \epsfysize=360pt \epsfbox[90 40 580 490]{m0_ov_sig.eps} \end{center} \caption{$10T_c/\protect\sqrt{\sigma}$ (octagons) and $m_{0^{++}}/\protect\sqrt{\sigma}$ (squares) as a function of $\beta$ for the fundamental (Wilson) action.} \label{fig:m0_ov_sig} \end{figure} The influence of the critical endpoint on the $0^{++}$ glueball still appears visible in the crossover region on the fundamental (Wilson) axis. This was first argued in Ref.~\cite{Rajan86}. The $0^{++}$ glueball is lighter in the crossover region, leading to a different scaling behavior than other observables. This can be seen in Figure~\ref{fig:m0_ov_sig} where we show the latest data for $T_c/\sqrt{\sigma}$ from Ref.~\cite{Boyd95} and $m_{0^{++}}/\sqrt{\sigma}$ with $\sqrt{\sigma}$ taken from Refs.~\cite{MTc_sig,B-S_sig} and the glueball mass from Refs.~\cite{GF11_gb,M-T_gb,Bali_gb}.
While $T_c/\sqrt{\sigma}$ stays approximately constant in the $\beta$ interval shown, $m_{0^{++}}/\sqrt{\sigma}$ decreases visibly in the crossover region around $\beta = 5.7$. It has long been known that the scalar glueball mass scales differently in the crossover region than $T_c$ and the string tension, whose scaling behavior as a function of $\beta$ deviates from asymptotic scaling but agrees with the step $\beta$-function found in MCRG computations. The scalar glueball seemed more compatible with asymptotic scaling. However, in view of our finding that the different behavior of the scalar glueball mass comes from the influence of the nearby critical endpoint, this asymptotic scaling behavior appears to be accidental. It would be interesting to determine the scaling behavior of other (pure gauge) observables. We conjecture that they would behave more like $T_c$ or $\sqrt{\sigma}$ than like the scalar glueball mass. Unfortunately, no reliable, large-volume glueball masses for $J^{PC}$ quantum numbers other than $0^{++}$ for couplings in the crossover region exist in the literature to check this conjecture. \section*{Acknowledgements} This work was partly supported by the DOE under grants \#~DE-FG05-85ER250000 and \#~DE-FG05-92ER40742. The computations were carried out on the cluster of IBM RS6000's at SCRI.
\section{Introduction} To study the linear polarization of asteroids and other point source objects, the Dual-Beam Imaging Polarimeter (DBIP) was commissioned in March of 2007 \citep{JRM_dbip}. In August of 2007 we expanded DBIP's capabilities to include analysis of circular polarization with the addition of a quarterwave plate. Typically, the most important quantities for analysis are the fractional polarizations $q=Q/I$, $u=U/I$, and $v=V/I$, expressed as percentages, and in the following text we will deal with these quantities when we refer to polarization measurements. Here we present our subsequent calibration and determination of systematic errors, which were found to be comparable to statistical errors for typical observing situations: $\sim0.1\%$ polarization. \section{Optical Setup} The original setup of DBIP was a serial arrangement of a halfwave plate in an encoded rotation stage, a filter and a double-calcite Savart plate placed between the telescope and the $2k \times 2k$ Tektronix CCD camera. To extend DBIP to full-Stokes sensitivity, a quarterwave plate in a rotation stage was placed ahead of the halfwave plate. This setup allows for simultaneous measurement of linear and circular polarization, though at the potential cost of increased crosstalk between polarizations, which is discussed further in \S\ref{JRM_crosstalk}. Figure~\ref{JRM_fig.optics}, modified from \citet{JRM_dbip}, shows a schematic representation of the new optical path with the quarterwave plate added. \begin{figure}[h] \plotfiddle{MasieroFig1.eps}{1.75in}{0}{40}{40}{-100}{0} \caption{Schematic of the optical components of DBIP. Modified from \citet{JRM_dbip}.} \label{JRM_fig.optics} \end{figure} \section{Calibration} As with any optical system, misalignments and imperfections in the components will lead to errors in measurement.
In the case of DBIP, the waveplates are the components most sensitive to these errors, as they are the only moving parts and require precisely determined angular zero-points. Errors in angular alignment of the waveplate or tilt with respect to the optical axis, as well as chromatic retardance or fast-axis angle variations, will show up in our system as variations in measured position angle of polarization, depolarization of the signal, or crosstalk between linear and circular polarization. To minimize and quantify these errors, we performed an extensive calibration campaign. \subsection{Waveplate Alignment} \label{JRM_align} Our first step of calibration was to determine the alignment of the waveplate zero-points using known standard stars. Having already aligned the halfwave plate against standards before the installation of the quarterwave plate \citep{JRM_dbip}, we were able to re-observe one of the same polarization standards (NGC 2024-1) in full-Stokes mode to align the quarterwave zero-point while confirming that we could reproduce the linear polarization results for this target. The set of observations of NGC 2024-1, both before and after the addition of the quarterwave plate, is listed in Table~\ref{JRM_tab.stds}, where a circular polarization value of ``---'' indicates a measurement taken before the installation of the quarterwave plate.
\begin{table}[h] \caption{Polarized Standard Star Observations} \smallskip \begin{center} {\scriptsize \begin{tabular}{ccccccc} \tableline \noalign{\smallskip} Name & Obs Date & $\%~$Lin Pol$_{lit}$ & $\theta_{lit}$ & $\%~$Lin Pol$_{obs}$ & $\theta_{obs}$ & $\%~$Circ Pol$_{obs}$ \\ \noalign{\smallskip} \tableline \noalign{\smallskip} BD-12 5133 & 3/24/07 & $4.37 \pm 0.04$ & $146.84 \pm 0.25$ & $4.26 \pm 0.01$ & $146.20 \pm 0.10$ & --- \\ \tableline NGC 2024-1 & 3/24/07 & $9.65 \pm 0.06$ & $135.47 \pm 0.59$ & $9.70 \pm 0.02$ & $136.11 \pm 0.05$ & --- \\ NGC 2024-1 & 1/17/08 & $9.65 \pm 0.06$ & $135.47 \pm 0.59$ & $9.68 \pm 0.04$ & $135.78 \pm 0.12$ & $-0.04 \pm 0.04$\\ NGC 2024-1 & 3/12/08 & $9.65 \pm 0.06$ & $135.47 \pm 0.59$ & $9.65 \pm 0.02$ & $135.75 \pm 0.05$ & $ 0.12 \pm 0.02$\\ \tableline BD-13 5073 & 5/14/08 & $3.66 \pm 0.02$ & $152.55 \pm 0.11$ & $4.62 \pm 0.05$ & $149.28 \pm 0.31$ & $ 0.07 \pm 0.05$\\ \tableline BD-12 5133 & 5/14/08 & $4.37 \pm 0.04$ & $146.84 \pm 0.25$ & $4.30 \pm 0.05$ & $146.26 \pm 0.31$ & $-0.04 \pm 0.05$\\ BD-12 5133 & 6/11/08 & $4.37 \pm 0.04$ & $146.84 \pm 0.25$ & $4.29 \pm 0.03$ & $145.17 \pm 0.21$ & $ 0.02 \pm 0.03$\\ \tableline VI Cyg 12 & 6/11/08 & $8.95 \pm 0.09$ & $115.00 \pm 0.30$ & $8.69 \pm 0.04$ & $116.04 \pm 0.13$ & $-0.16 \pm 0.04$\\ \noalign{\smallskip} \tableline \end{tabular} } \label{JRM_tab.stds} \end{center} \end{table} \begin{table}[h] \caption{Unpolarized Standard Star Observations} \smallskip \begin{center} {\scriptsize \begin{tabular}{cccccc} \tableline \noalign{\smallskip} Name & Obs Date & $\%~$Lin Pol$_{lit}$ & $\%~$Lin Pol$_{obs}$ & $\theta_{obs}$ & $\%~$Circ Pol$_{obs}$ \\ \noalign{\smallskip} \tableline \noalign{\smallskip} HD 64299 & 03/23/07 & $0.15 \pm 0.03$ & $0.10 \pm 0.01$ & $83 \pm 4 $ & --- \\ \tableline WD 1615-154 & 03/24/07 & $0.05 \pm 0.03$ & $0.02 \pm 0.02$ & --- & --- \\ WD 1615-154 & 03/12/08 & $0.05 \pm 0.03$ & $0.04 \pm 0.03$ & --- & $-0.01 \pm 0.03$ \\ WD 1615-154 & 05/14/08 & $0.05 \pm 
0.03$ & $0.02 \pm 0.03$ & --- & $0.08 \pm 0.03$ \\ WD 1615-154 & 06/11/08 & $0.05 \pm 0.03$ & $0.19 \pm 0.06$ & $158 \pm 9 $ & $0.05 \pm 0.06$ \\ \tableline BD+28d4211 & 08/29/07 & $0.05 \pm 0.03$ & $0.02 \pm 0.02$ & --- & $0.00 \pm 0.02$ \\ \tableline WD 2149+021 & 08/30/07 & $0.04 \pm 0.01$ & $0.03 \pm 0.02$ & --- & $0.00 \pm 0.02$ \\ \tableline G191B2B & 01/17/08 & $0.06 \pm 0.04$ & $0.02 \pm 0.03$ & --- & $0.01 \pm 0.03$ \\ \noalign{\smallskip} \tableline \end{tabular} } \label{JRM_tab.unpol} \end{center} \end{table} \subsection{Instrumental Polarization} In order to test for instrumental polarization or depolarization, we have observed polarized and unpolarized standard stars over a $15$ month baseline. Tables~\ref{JRM_tab.stds} and \ref{JRM_tab.unpol} give our measured polarizations and position angles for polarized and unpolarized standard stars, respectively, as well as literature values for these objects from \citet{JRM_fossati07}, \citet{JRM_hubbleSTD2} and the Keck/LRISp standards\footnote{http://www2.keck.hawaii.edu/inst/lris/polarimeter/polarimeter.html}. Our measurements for both polarized and unpolarized standards agree within $3~\sigma$ of the literature values, confirming that instrument systematics are less than a $0.1\%$ effect. The only exceptions to this are the observations of BD-13 5073 and WD 1615-154. BD-13 5073 clearly shows evidence of variation in the amplitude and direction of polarization from the literature values over only a few years, showing it cannot be depended upon as a polarized standard. Our observation of WD 1615-154 on 6/11/08 shows anomalously high polarization compared to literature values and our previous observations at the $\sim3~\sigma$ level. With the current data it is unclear if the polarization properties of the object have changed or if this measurement is just an outlier. 
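For reference, the tabulated degree and position angle of linear polarization follow from the fractional Stokes parameters in the standard way, $P=\sqrt{q^2+u^2}$ and $\theta=\frac{1}{2}\arctan(u/q)$. A minimal sketch of the conversion; the input values are illustrative, not measurements:

```python
import math

def linear_polarization(q, u):
    """Degree (same units as q, u) and position angle (degrees, in [0, 180))
    of linear polarization from fractional Stokes parameters q = Q/I, u = U/I."""
    p = math.hypot(q, u)
    theta = 0.5 * math.degrees(math.atan2(u, q)) % 180.0
    return p, theta

p, theta = linear_polarization(q=3.0, u=4.0)
print(p, theta)  # p = 5.0, theta ≈ 26.57 degrees
```

Note the factor of one half in the angle: the polarization position angle is a headless (mod 180 degrees) quantity.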
\subsection{Crosstalk} \label{JRM_crosstalk} Instrumental crosstalk between Stokes vectors is one of the more subtle errors that can affect polarization measurements, and quantifying its magnitude is a critical step toward obtaining high-precision polarimetry. Crosstalk between linear Stokes vectors ($Q$ to $U$ or $U$ to $Q$) happens when the zero-point location of the halfwave retarder is offset from the defined $Q$ direction, and is easily corrected by aligning the waveplate, as discussed above in \S\ref{JRM_align}. Crosstalk from circular to linear and back ($V$ to $Q\&U$ or $Q\&U$ to $V$) can be caused by waveplate offsets or by variations in retardance of the waveplates as a function of wavelength, position on the plate, etc. To fully determine the instrumental crosstalk, light with a known polarization must be sent through the instrument and measured upon exiting. It is possible to do this by installing polarizers just in front of the instrument during observations \citep[recent examples include][etc.]{JRM_perrin08,JRM_snik06}; however, due to limited observing time and space in the optical path this was not a feasible calibration method for DBIP. Instead, we performed lab bench calibrations to quantify instrumental crosstalk. To measure $Q\&U$ to $V$ crosstalk, we sent a collimated light source through a linear polarizer into DBIP and measured one of the two exiting beams with a precision photodiode. By stepping the angular position of the input linear polarizer, we were able to fully characterize this linear-to-circular crosstalk. To measure $V$ to $Q\&U$ crosstalk, a fixed quarterwave plate was installed after the linear polarizer. By stepping the linear polarizer again, we were able to vary the input polarization from circular to linear and back and characterize this circular-to-linear crosstalk.
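Schematically, extracting the crosstalk offset and amplitude from such stepped-polarizer scans amounts to fitting a sinusoid of known period to the measured Stokes parameter. The sketch below uses a discrete Fourier projection on synthetic, equally spaced data as an illustrative stand-in for the chi-squared minimization actually used; the numbers are invented for the example:

```python
import math

def fit_sinusoid(angles_deg, values, period_deg):
    """Offset and amplitude of v(t) = offset + amp * sin(2*pi*t/period + phase),
    via discrete Fourier projection; exact for equally spaced, full-period data."""
    n = len(values)
    offset = sum(values) / n
    w = 2 * math.pi / period_deg
    a = 2 / n * sum(v * math.cos(w * t) for t, v in zip(angles_deg, values))
    b = 2 / n * sum(v * math.sin(w * t) for t, v in zip(angles_deg, values))
    return offset, math.hypot(a, b)

# Synthetic crosstalk curve: offset -1.4, amplitude 1.4 (percent), period 180 deg
angles = [10 * i for i in range(36)]
data = [-1.4 + 1.4 * math.sin(2 * math.pi * t / 180 + 0.3) for t in angles]
off, amp = fit_sinusoid(angles, data, period_deg=180)
print(off, amp)  # recovers offset = -1.4, amplitude = 1.4
```

The 180-degree period reflects the usual doubling of the polarizer angle in the measured Stokes parameters.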
\begin{figure}[h] \plotfiddle{MasieroFig2.eps}{4in}{0}{40}{40}{-110}{0} \caption{Polarization crosstalk measurements for DBIP for two cases: (a) for input of pure linear polarization rotated through 360 degrees, and (b) for input of rotated linear polarization passed through a fixed quarterwave plate. Sinusoids for both cases were fitted using a chi-squared minimizer, and are labeled with the Stokes vectors they represent. } \label{JRM_fig.crosstalk} \end{figure} Figure~\ref{JRM_fig.crosstalk} shows our measured polarization state for both setups as the input linear polarizer was stepped through $360~$degrees. Sinusoids describe the behavior of the polarization vectors in both setups and were fit to the data with a chi-squared minimizer to determine percentage crosstalk as well as any depolarization from the optics. We find that the circular polarization measured when inputting pure linearly polarized light had an amplitude of $1.4\%$ and an offset of $-1.4\%$, with crosstalk preferentially occurring when both $Q$ and $U$ are positive (around $22^\circ$ on our plot). We measure a depolarization of both linear signals varying from $<0.5\%$ near $22^\circ$ to $\sim4\%$ near $112^\circ$. Since these crosstalk errors are only a few percent of the input signal, for sources with ``typical'' polarizations (i.e. $5-10\%$) the crosstalk error is comparable to the desired statistical errors of $\sim0.1\%$ polarization, as we measured for our polarized and unpolarized standards. For the measurement of circular-to-linear crosstalk, we find a small amount of linear $u$ polarization generated from a purely circularly polarized input beam, preferentially when $+v$ is input. Note that the phased oscillation of $q$ and $u$ indicates a small misalignment of the quarterwave plate axes with the instrument-defined $Q$ orientation, while it is the slight phase offset between $-u$ and $+q$ that is produced by the crosstalk.
We also see clear evidence for a depolarization of the circularly polarized signal at the level of $\sim4\%$ of the input polarization. For targets with total circular polarizations less than a few percent, these errors should be comparable to the $0.1\%$ noise error typically obtained in our observations. \section{Conclusion} We have presented the results of our extended commissioning of DBIP into full-Stokes mode. Crosstalk values are shown to be small and can be ignored for objects with ``typical'' (i.e. $<5\%$) polarizations. With errors well constrained, DBIP is now available for scientific studies of linear and circular polarization of point sources in the optical. DBIP has already been successfully used to characterize the polarization-phase curves of asteroids with some of the most pristine compositions in the inner solar system \citep{JRM_aquitania}, and a number of other programs to study small solar system bodies are currently under way. \acknowledgements JM would like to thank Robert Jedicke and Colin Aspin for providing funding to attend the Astronomical Polarimetry 2008 conference. JM was partially funded under NASA PAST grant NNG06GI46G. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit on Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to work with telescopes located on this sacred mountain.
\section{Introduction} In this paper, we investigate how the two notions of \emph{Grothendieck bifibration} and of \emph{Quillen model category} may be suitably combined together. We are specifically interested in the situation of a Grothendieck bifibration $p\from \cat{E}\to\cat{B}$ where the basis category~$\cat{B}$ as well as each fiber $\cat{E}_A$ for an object~$A$ of the basis category~$\cat{B}$ is equipped with a Quillen model structure. Our main purpose will be to identify necessary and sufficient conditions on the Grothendieck bifibration $p\from \cat{E}\to\cat{B}$ to ensure that the total category $\cat{E}$ inherits a model structure from the model structures assigned to the basis~$\cat{B}$ and to the fibers~$\cat{E}_A$'s. We start our inquiry by recalling the fundamental relationship between bifibrations and adjunctions. This connection will guide us all along the paper. Our plan is indeed to proceed by analogy, and to carve out a notion of \emph{Quillen bifibration} playing the same role for Grothendieck bifibrations as the notion of \emph{Quillen adjunction} plays today for the notion of adjunction. \medbreak \paragraph{Grothendieck bifibrations and adjunctions.} We will generally work with \emph{cloven} bifibrations. Recall that a \emph{cleavage on a Grothendieck fibration} is a choice, for every morphism~$u\from A\to B$ and for every object~$Y$ above~$B$, of a cartesian morphism $\cart u Y\from \pull u Y\to Y$ above~$u$. Dually, a \emph{cleavage on a Grothendieck opfibration} is a choice, for every morphism~$u\from A\to B$ and for every object~$X$ above~$A$, of a left cartesian morphism $\cocart u X \from X\to \push u X$ above~$u$. 
In a cloven Grothendieck fibration, every morphism $u\from A\to B$ in the basis category~$\cat{B}$ induces a functor \begin{equation} \label{eq:right-cleavage}% \begin{tikzcd}[column sep=.6em] u^{\ast} & : & \cat{E}_B\arrow[rrrr] &&&& \cat{E}_A \end{tikzcd} \end{equation} Symmetrically, in a cloven Grothendieck opfibration, every morphism $u\from A\to B$ in the basis category~$\cat{B}$ induces a functor \begin{equation} \label{eq:left-cleavage}% \begin{tikzcd}[column sep=.6em] u_{!} & : & \cat{E}_A\arrow[rrrr] &&&& \cat{E}_B \end{tikzcd} \end{equation} A \emph{cloven bifibration} (or more simply a bifibration) is a left and right Grothendieck fibration~$p\from \cat{E}\to\cat{B}$ equipped with a cleavage on both sides. \medbreak Formulated in this way, a bifibration $p\from \cat{E}\to\cat{B}$ is simply the ``juxtaposition'' of a left and of a right Grothendieck fibration, with no apparent connection between the two structures. A remarkable phenomenon, however, is that the two fibrational structures are in fact strongly interdependent. Indeed, it appears that in a bifibration $p\from \cat{E}\to\cat{B}$, the pair of functors (\ref{eq:right-cleavage}) and (\ref{eq:left-cleavage}) associated to a morphism $u\from A\to B$ defines an adjunction between the fiber categories \begin{displaymath} \begin{tikzcd}[column sep=.8em] u_{!} & : & {\cat{E}_A} \arrow[rrrr, yshift=-.5ex] &&&& {\cat{E}_B} \arrow[llll, yshift=.5ex] & : & u^{\ast} \end{tikzcd} \end{displaymath} where the functor $u_{!}$ is left adjoint to the functor~$\pull u$. The bond between bifibrations and adjunctions is even tighter when one looks at it from the point of view of indexed categories.
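Explicitly, the adjointness of the pair $(\push u,\pull u)$ amounts to a bijection, natural in $X$ and $Y$,
\begin{displaymath}
\mathrm{Hom}_{\cat{E}_B}(\push u X,\,Y) \;\cong\; \mathrm{Hom}_{\cat{E}_A}(X,\,\pull u Y)
\end{displaymath}
obtained by observing that both sides classify the morphisms $X\to Y$ of the total category~$\cat{E}$ living above $u\from A\to B$: every such morphism factors uniquely through the cocartesian lift $\cocart u X$ on the one hand, and through the cartesian lift $\cart u Y$ on the other.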
Recall that a (covariantly) \emph{indexed category} of basis~$\cat{B}$ is defined as a pseudofunctor \begin{equation} \label{eq:pseudofunctor}% \begin{tikzcd}[column sep=1em] \cat{P} & : & \cat{B}\arrow[rrrr] &&&& \concrete{Cat} \end{tikzcd} \end{equation} where $\concrete{Cat}$ denotes the 2-category of categories, functors and natural transformations. Every cloven Grothendieck opfibration $p\from \cat{E}\to\cat{B}$ induces an indexed category $\cat{P}$ which transports every object~$A$ of the basis~$\cat{B}$ to the fiber category $\cat{E}_A$, and every morphism $u\from A\to B$ of the basis to the functor $\push u \from \cat{E}_A\to\cat{E}_B$. Conversely, the Grothendieck construction enables one to construct a cloven Grothendieck opfibration $p\from \cat{E}\to\cat{B}$ from an indexed category $\cat{P}$. This back-and-forth translation defines an equivalence of categories between \begin{center} \begin{tabular}{ccc} \begin{tabular}{c} the category of \\ cloven Grothendieck opfibrations \\ with basis category~$\cat{B}$ \end{tabular} & \hspace{1em} $\rightleftharpoons$ \hspace{1.3em} & \begin{tabular}{c} the category of \\ indexed categories \\ with basis category~$\cat{B}$ \end{tabular} \end{tabular} \end{center} All this is well-known. What is a little bit less familiar (possibly) and which matters to us here is that this correspondence may be adapted to Grothendieck bifibrations, in the following way. 
Consider the 2-category $\concrete{Adj}$ with categories as objects, with adjunctions \begin{equation} \label{eq:M-N-adjunction}% \begin{tikzcd}[column sep=.8em] L & \from & {\cat{M}} \arrow[rrrr, yshift=-.5ex] &&&& {\cat{N}} \arrow[llll, yshift=.5ex] & \from & R \end{tikzcd} \end{equation} as morphisms from $\cat{M}$ to $\cat{N}$, and with natural transformations \begin{displaymath} \begin{tikzcd}[column sep=.8em, row sep=0em] \theta & \from & {L_1} \arrow[rrrr, double, -implies] &&&& {L_2} & \from & \cat{M} \arrow[rrrr] &&&& \cat{N} \end{tikzcd} \end{displaymath} between the left adjoint functors as 2-dimensional cells $\theta\from (L_1,R_1)\Rightarrow (L_2,R_2)$. In the same way as we have done earlier, an \emph{indexed category-with-adjunctions} with basis category $\cat{B}$ is defined as a pseudofunctor \begin{equation} \label{eq:pseudofunctor-to-adj}% \begin{tikzcd}[column sep=1em] \cat{P} & \from & \cat{B}\arrow[rrrr] &&&& \concrete{Adj} \end{tikzcd} \end{equation} For the same reasons as in the case of Grothendieck opfibrations, there is an equivalence of categories between \begin{center} \begin{tabular}{ccc} \begin{tabular}{c} the category of \\ cloven bifibrations \\ with basis category~$\cat{B}$ \end{tabular} & \hspace{.5em} $\rightleftharpoons$ \hspace{.8em} & \begin{tabular}{c} the category of \\ indexed categories-with-adjunctions \\ with basis category~$\cat{B}$ \end{tabular} \end{tabular} \end{center} From this it follows, among other consequences, that a cloven bifibration $p\from \cat{E}\to\cat{B}$ is the same thing as a cloven right fibration where the functor $\pull u \from \cat{E}_B\to\cat{E}_A$ comes equipped with a left adjoint $\push u \from \cat{E}_A\to\cat{E}_B$ for every morphism~$u\from A\to B$ of the basis category~$\cat{B}$. \medbreak By way of illustration, consider the ordinal category $\lincat 2$ with two objects $0$ and $1$ and a unique non-identity morphism $u\from 0\to 1$.
By the discussion above, a Grothendieck bifibration $p:\cat{E}\to \cat{B}$ on the basis category $\cat{B}=\lincat 2$ is the same thing as an adjunction~(\ref{eq:M-N-adjunction}). The correspondence relies on the observation that every adjunction~(\ref{eq:M-N-adjunction}) can be turned into a bifibration $p\from \cat{E}\to \cat{B}$ where the category $\cat{E}$ is defined as the category of \emph{collage} associated to the adjunction~$(L,R)$, with fibers $\cat{E}_0=\cat{M}$, $\cat{E}_1=\cat{N}$ and mediating functors $u^*=R$ and $u_!=L$; see \cite{street:fib-in-bicat} for the notion of collage. For that reason, the Grothendieck construction for bifibrations may be seen as a generalized and fibrational notion of collage. \paragraph{Model structures and Quillen adjunctions.} Seen from that angle, the notion of Grothendieck bifibration provides a fibrational counterpart (and also a far-reaching generalization) of the fundamental notion of adjunction between categories. This perspective opens a firm connection with modern homotopy theory, thanks to the notion of \emph{Quillen adjunction} between model categories. Recall that a \emph{model structure} on a category~$\cat{M}$ delineates three classes $\class C$, $\class W$, $\class F$ of maps called \emph{cofibrations}, \emph{weak equivalences} and \emph{fibrations} respectively; these classes of maps are moreover required to satisfy a number of properties recalled in definition~\ref{def:model-category}. A fibration or a cofibration which is at the same time a weak equivalence is called \emph{acyclic}. \begin{remark} \label{rem:about-limits-in-model-categories}% By extension, we sometimes find it convenient to call \emph{model structure} a category~$\cat{M}$ \emph{together} with its model structure $(\class C, \class W, \class F)$.
The appropriate name for that notion would be \emph{model category}, but the terminology is already used in the literature for a \emph{finitely complete} and \emph{finitely cocomplete} category~$\cat{C}$ equipped with a model structure $(\class C, \class W, \class F)$. The extra completeness assumptions play a role in the construction of the homotopy category $\Ho{\cat{C}}$, and they are thus integrated into the accepted definition of a ``model category''. We prefer to work with ``model structures'' for two reasons. On the one hand, the construction of $\Ho{\cat{C}}$ can be performed using the weaker assumption that the category~$\cat{C}$ has finite products and finite coproducts, as noticed by Egger~\cite{egger:model-cat-no-eq}. On the other hand, the extra completeness assumptions are independent of the relationship between Grothendieck bifibrations and model structures, and may be treated separately. \end{remark} We recall below the notions of left and right Quillen functor between model structures. \begin{definition}[Quillen functors] A functor $F\from \cat{M}\to\cat{N}$ between two model structures $\cat{M}$ and $\cat{N}$ is called a \emph{left} \emph{Quillen} \emph{functor} when it transports every cofibration of~$\cat{M}$ to a cofibration of~$\cat{N}$ and every acyclic cofibration of~$\cat{M}$ to an acyclic cofibration of~$\cat{N}$. Dually, a functor $F\from \cat{M}\to\cat{N}$ is called a \emph{right Quillen functor} when it transports every fibration of~$\cat{M}$ to a fibration of~$\cat{N}$ and every acyclic fibration of~$\cat{M}$ to an acyclic fibration of~$\cat{N}$. A functor $F\from \cat{M}\to\cat{N}$ which is at the same time a left and a right Quillen functor is called a \emph{Quillen functor}. \end{definition} A simple argument shows that a Quillen functor $F\from \cat{M}\to\cat{N}$ transports every weak equivalence of~$\cat{M}$ to a weak equivalence of~$\cat{N}$.
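The argument may be spelled out in one line (it only uses the factorization axiom and the 2-out-of-3 property recalled in definition~\ref{def:model-category}): given a weak equivalence $w\from A\to B$ of~$\cat{M}$, the factorization axiom provides a decomposition \begin{displaymath} w \; = \; p\circ i \qquad \mbox{with $i$ a cofibration and $p$ an acyclic fibration,} \end{displaymath} and the 2-out-of-3 property ensures that the cofibration~$i$ is acyclic as well. The Quillen functor~$F$ thus transports $i$ to an acyclic cofibration and $p$ to an acyclic fibration of~$\cat{N}$, so that $Fw=Fp\circ Fi$ is a composite of two weak equivalences, hence a weak equivalence of~$\cat{N}$.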
For that reason, a Quillen functor is the same thing as a functor which transports every cofibration, weak equivalence or fibration $f\from A\to B$ of~$\cat{M}$ to a map $Ff\from FA\to FB$ with the same status in the model structure of~$\cat{N}$. \medbreak The notion of \emph{Quillen adjunction} relies on the following observation. \begin{proposition}\label{proposition/LR} Suppose given an adjunction \begin{equation} \label{eq:M-N-Quillen-adjunction}% \begin{tikzcd}[column sep=.8em] L & \from & {\cat{M}} \arrow[rrrr, yshift=-.5ex] &&&& {\cat{N}} \arrow[llll, yshift=.5ex] & \from & R \end{tikzcd} \end{equation} between two model categories~$\cat{M}$ and~$\cat{N}$. The following assertions are equivalent: \begin{itemize} \item the left adjoint functor~$L\from \cat{M}\to\cat{N}$ is a left Quillen functor, \item the right adjoint functor~$R\from \cat{N}\to\cat{M}$ is a right Quillen functor. \end{itemize} \end{proposition} \begin{definition}[Quillen adjunctions] An adjunction $L \from \cat M \leftrightarrows \cat N \cofrom R$ between two model categories~$\cat{M}$ and~$\cat{N}$ is called a \emph{Quillen adjunction} when the equivalent assertions of Prop.~\ref{proposition/LR} hold. \end{definition} \paragraph{Quillen bifibrations.} At this stage, we are ready to introduce the notion of \emph{Quillen bifibration} which we will study in the paper. We start by observing that whenever the total category~$\cat{E}$ of a functor $p\from \cat{E}\to\cat{B}$ is equipped with a model structure $(\class C_{\cat{E}},\class W_{\cat{E}},\class F_{\cat{E}})$, every fiber~$\cat{E}_A$ associated to an object~$A$ of the basis category~$\cat{B}$ comes equipped with three classes of maps denoted $\class C_A$, $\class W_A$, $\class F_A$ called cofibrations, weak equivalences and fibrations above the object~$A$, respectively.
The classes are defined in the expected way: \begin{displaymath} \class C_A = \class C_{\cat{E}} \cap \class{Hom}_A \quad\quad \class W_A = \class W_{\cat{E}} \cap \class{Hom}_A \quad\quad \class F_A = \class F_{\cat{E}} \cap \class{Hom}_A \end{displaymath} where $\class{Hom}_A$ denotes the class of maps~$f$ of the category~$\cat{E}$ above the object~$A$, that is, such that $p(f)=\id A$. We declare that the model structure $(\class C_{\cat{E}},\class W_{\cat{E}},\class F_{\cat{E}})$ on the total category~$\cat{E}$ \emph{restricts} to a model structure on the fiber~$\cat{E}_A$ when the three classes $\class C_A$, $\class W_A$, $\class F_A$ satisfy the properties required of a model structure on the category~$\cat{E}_A$. \medbreak \noindent This leads us to the main concept of the paper: \begin{definition}[Quillen bifibrations] A Quillen bifibration $p\from \cat{E}\to\cat{B}$ is a Grothendieck bifibration where the basis category~$\cat{B}$ and the total category~$\cat{E}$ are equipped with a model structure, in such a way that \begin{itemize} \item the functor $p\from \cat{E}\to\cat{B}$ is a Quillen functor, \item the model structure of~$\cat{E}$ restricts to a model structure on the fiber~$\cat{E}_A$, for every object~$A$ of the basis category~$\cat{B}$. \end{itemize} \end{definition} This definition of Quillen bifibration deserves a few comments. The first requirement that $p\from \cat{E}\to\cat{B}$ is a Quillen functor means that every cofibration, weak equivalence and fibration $f\from X\to Y$ of the total category~$\cat{E}$ lies above a map $u\from A\to B$ of the same status in the model category~$\cat{B}$. This condition makes sense, and we will see in section~\ref{sec:quillen-bifib} that it is satisfied in a number of important examples.
The second requirement means that the model structure $(\class C_{\cat{E}},\class W_{\cat{E}},\class F_{\cat{E}})$ combines into a single model structure on the total category~$\cat{E}$ the family of model structures $(\class C_A,\class W_A,\class F_A)$ on the fiber categories~$\cat{E}_A$. \paragraph{A Grothendieck construction for Quillen bifibrations.} The notion of Quillen bifibration is tightly connected to the notion of Quillen adjunction, thanks to the following observation established in section~\ref{sec:quillen-bifib}. \begin{proposition} In a Quillen bifibration $p\from \cat{E}\to\cat{B}$, the adjunction \begin{displaymath} \begin{tikzcd}[column sep=.8em] \push u & \from & {\cat{E}_A} \arrow[rrrr, yshift=-.5ex] &&&& {\cat{E}_B} \arrow[llll, yshift=.5ex] & \from & u^* \end{tikzcd} \end{displaymath} is a Quillen adjunction, for every morphism $u\from A\to B$ of the basis category~$\cat{B}$. \end{proposition} \noindent It follows that a Quillen bifibration induces an \emph{indexed model structure} \begin{equation} \label{eq:pseudofunctor-to-quillen}% \begin{tikzcd}[column sep=1em] \cat{P} & \from & \cat{B}\arrow[rrrr] &&&& \concrete{Quil} \end{tikzcd} \end{equation} defined as a pseudofunctor from a model structure~$\cat{B}$ to the 2-category $\concrete{Quil}$ of model structures, Quillen adjunctions, and natural transformations. Our main contribution in this paper is to formulate necessary and sufficient conditions for a Grothendieck construction to hold in this situation. More specifically, we resolve the following problem. \medbreak \noindent \emph{A.
Hypothesis of the problem.} We suppose given an indexed model structure as defined in~(\ref{eq:pseudofunctor-to-quillen}) or, equivalently, a Grothendieck bifibration $p\from \cat{E}\to\cat{B}$ where \begin{itemize} \item the basis category~$\cat{B}$ is equipped with a model structure~$(\class C,\class W,\class F)$, \item every fiber~$\cat{E}_A$ is equipped with a model structure $(\class C_A,\class W_A,\class F_A)$, \item the adjunction $(\push u,\pull u)$ is a Quillen adjunction, for every morphism $u\from A\to B$ of the basis category~$\cat{B}$. \end{itemize} \noindent \emph{B. Resolution of the problem.} We find necessary and sufficient conditions to ensure that there exists a model structure $(\class C_{\cat{E}},\class W_{\cat{E}},\class F_{\cat{E}})$ on the total category~$\cat{E}$ such that \begin{itemize} \item the Grothendieck bifibration $p\from \cat{E}\to\cat{B}$ defines a Quillen bifibration, \item for every object~$A$ of the basis category, the model structure $(\class C_{\cat{E}},\class W_{\cat{E}},\class F_{\cat{E}})$ of the total category~$\cat{E}$ restricts to the model structure $(\class C_A,\class W_A,\class F_A)$ of the fiber~$\cat{E}_A$. \end{itemize} We establish in the course of the paper (see section~\ref{sec:quillen-bifib}) that there exists at most one solution to the problem, which is obtained by defining the cofibrations and fibrations of the total category~$\cat{E}$ in the following way: \begin{itemize} \item a morphism $f\from X\to Y$ of the total category~$\cat{E}$ is a \emph{total cofibration} when it factors as $X\to Z\to Y$ where $X\to Z$ is a cocartesian map above a cofibration $u\from A\to B$ of~$\cat{B}$, and $Z\to Y$ is a cofibration in the fiber~$\cat{E}_B$, \item a morphism $f\from X\to Y$ of the total category~$\cat{E}$ is a \emph{total fibration} when it factors as $X\to Z\to Y$ where $Z\to Y$ is a cartesian map above a fibration $u\from A\to B$ of~$\cat{B}$, and $X\to Z$ is a fibration in the fiber~$\cat{E}_A$.
\end{itemize} \begin{proposition}[Uniqueness of the solution] When the solution $(\class C_{\cat{E}},\class W_{\cat{E}},\class F_{\cat{E}})$ exists, it is uniquely determined by the fact that its cofibrations and fibrations are the total cofibrations and total fibrations of the total category~$\cat{E}$, respectively. \end{proposition} Besides the formulation of Quillen bifibrations, our main contribution is to devise two conditions called \eqref{hyp:weak-conservative} for \emph{homotopical conservativity} and \eqref{hyp:hBC} for \emph{homotopical Beck-Chevalley}, and to show (see theorem~\ref{thm:main}) that they are sufficient and necessary for the solution to exist. \subsection{Related works} The interplay between bifibred categories and model structures was first explored by Roig in \cite{roig:model-bifibred}, providing results in differential graded homological algebra. Stanculescu then spotted a mistake in Roig's theorem and subsequently corrected it in \cite{stanculescu:bifib-model}. Finally, \cite{harpaz-prasma:grothendieck-model} tackles the problem of reflecting Lurie's Grothendieck construction for $\infty$-categories at the level of model categories, hence giving a model for lax colimits of diagrams of $\infty$-categories. This work is directly in line with, and greatly inspired by, these papers. In our view, both Roig-Stanculescu's and Harpaz-Prasma's results suffer from flaws. The former introduces a very strong asymmetry, leaving natural expectations unmet. For example, for any Grothendieck bifibration $p\from \cat E \to \cat B$, the opposite functor $\op p\from \op{\cat E} \to \op{\cat B}$ is also a Grothendieck bifibration. One would thus expect that whenever Roig-Stanculescu's result applies to the functor $p$, providing in this way a model structure on $\cat E$, it also applies to $\op p$, yielding on $\op{\cat E}$ the opposite model structure.
This is not the case: for almost every $p$ to which the result applies, it does not apply to the functor $\op p$. The latter result by Harpaz and Prasma on the contrary forces the symmetry by imposing a rather strong assumption: the adjoint pair $(\push u,\pull u)$ associated to a morphism $u$ of the base $\cat B$, already required to be a Quillen adjunction in \cite{roig:model-bifibred} and \cite{stanculescu:bifib-model}, needs in addition to be a Quillen equivalence whenever $u$ is a weak equivalence. While this is a key property for their applications, it puts aside real-world examples that nevertheless satisfy the conclusion of the result. The goal of this paper is to lay out a common framework fixing these flaws. This is achieved in theorem \ref{thm:main} by giving necessary and sufficient conditions for the resulting model structure on $\cat E$ to be the one described in both cited results. \paragraph{Plan of the paper.} Section \ref{sec:liminaries} recalls the basic facts we will need later about Grothendieck bifibrations and model categories. It also introduces \define{intertwined weak factorization systems}, a notion that pops up here and there on forums and the n\nobreakdash-Category Caf\'e, but does not appear in the literature to the best of our knowledge. Its interest mostly resides in the fact that it singles out the 2-out-of-3 property of weak equivalences in a model category from the other more {\em combinatorial} properties. Finally, we recall in that section a result of \cite{stanculescu:bifib-model} in order to make this paper self-contained. Section \ref{sec:actual-thm} contains the main theorem~\ref{thm:main} that we previously announced. Its proof is cut into two parts: first we prove the necessity of conditions~\ref{hyp:weak-conservative} and \ref{hyp:hBC}, and then we show that they are sufficient as well.
The proof of necessity is the easy part and comes somehow as a bonus, while the proof of sufficiency is much harder and exposes how conditions~\eqref{hyp:weak-conservative} and \eqref{hyp:hBC} play their role. Section \ref{sec:examples} illustrates theorem~\ref{thm:main} with some applications in usual homotopical algebra. First, it gives an original view on Kan's theorem about Reedy model structures by stating it in a bifibrational setting. It should be said that this was our motivating example. We realized that neither Roig-Stanculescu's nor Harpaz-Prasma's theorem could be applied to the Reedy construction, although the conclusion of these results gave back Kan's theorem. As in any of those {\em too good not to be true} situations, we took that as an incentive to strip down the previous results in order to only keep what makes them {\em tick}, which eventually led to the equivalence of theorem \ref{thm:main}. Section \ref{subsec:versus-hp-rs} gives more details about Roig-Stanculescu's and Harpaz-Prasma's theorems, and explains how their analysis started the process of this work. \paragraph{Convention.} All written diagrams commute unless said otherwise. When objects are missing and replaced by a dot, they can be inferred from the other information on the diagram. Gray parts help to understand the diagram's context. \paragraph{Acknowledgments.} The authors are grateful to Clemens Berger for making them aware of important references at the beginning of this work, and to Georges Maltsiniotis for an early review of theorem~\ref{thm:main} and instructive discussions around possible weakenings of the notion of Quillen bifibration. \section{Liminaries} \label{sec:liminaries} \subsection{Grothendieck bifibrations} \label{subsec:lim-bifib} In this section, we recall a number of basic definitions and facts about Grothendieck bifibrations. \medbreak Given a functor $p:\cat{E}\to\cat{B}$, we shall use the following terminology.
The categories~$\cat{B}$ and~$\cat{E}$ are called the \emph{basis category}~$\cat{B}$ and the \emph{total category}~$\cat{E}$ of the functor $p:\cat{E}\to\cat{B}$. We say that an object $X$ of the total category~$\cat{E}$ is above an object $A$ of the basis category~$\cat{B}$ when $p(X)=A$ and, similarly, that a morphism $f:X\to Y$ is above a morphism $u:A\to B$ when $p(f)=u$. The fiber of an object~$A$ in the basis category~$\cat{B}$ with respect to $p$ is defined as the subcategory of $\cat{E}$ whose objects are the objects~$X$ such that $p(X)=A$ and whose morphisms are the morphisms~$f$ such that $p(f)=\id A$. In other words, the fiber of~$A$ is the category of objects above~$A$, and of morphisms above the identity $\id A$. The fiber is denoted $p_A$ or $\cat{E}_A$ when no confusion is possible. \medbreak A morphism $f:X\to Y$ in a category~$\cat{E}$ is called \emph{cartesian} with respect to the functor $p:\cat{E}\to\cat{B}$ when the commutative diagram $$ \begin{tikzcd}[column sep=2em, row sep=1em] \cat{E}(Z,X) \arrow[rr,"{f\circ -}"] \arrow[dd,"{p}", swap] && \cat{E}(Z,Y) \arrow[dd,"p"] \\ \\ \cat{B}(C,A) \arrow[rr,"{u\circ -}"] && \cat{B}(C,B) \end{tikzcd} $$ is a pullback diagram for every object $Z$ in the category~$\cat{E}$. Here, we write~$u:A\to B$ and~$C$ for the images $u=p(f)$ and $C=p(Z)$ of the morphism~$f$ and of the object~$Z$, respectively. Unfolding the definition, this means that for every pair of morphisms $v:C\to A$ and $g:Z\to Y$ above $u\circ v:C\to B$, there exists a unique morphism $h:Z\to X$ above $v$ such that $f\circ h=g$.
The situation may be depicted as follows: \begin{center} $\begin{array}{c} \begin{tikzcd}[column sep=1.2em, row sep=.8em] Z \arrow[rrrrrrd, bend left, "g"] \arrow[rrd,dashed,"h"] \arrow[dd,dotted,no head] \\ && X \arrow[rrrr,"f"] \arrow[dd,dotted,no head] &&&& Y \arrow[dd,dotted,no head] \\ C \arrow[rrrrrrd, bend left] \arrow[rrd,"v"{swap}] \\ && A\arrow[rrrr,"u"{swap}] &&&& B \end{tikzcd} \end{array}$ \end{center} Dually, a morphism $f:X\to Y$ in a category~$\cat{E}$ is called \emph{cocartesian} with respect to the functor $p:\cat{E}\to\cat{B}$ when the commutative diagram $$ \begin{tikzcd}[column sep=2em, row sep=1em] \cat{E}(Y,Z) \arrow[rr,"{-\circ f}"] \arrow[dd,"{p}", swap] && \cat{E}(X,Z) \arrow[dd,"p"] \\ \\ \cat{B}(B,C) \arrow[rr,"{-\circ u}"] && \cat{B}(A,C) \end{tikzcd} $$ is a pullback diagram for every object $Z$ in the category~$\cat{E}$. This means that for every pair of morphisms $v:B\to C$ and $g:X\to Z$ above $v\circ u:A\to C$, there exists a unique morphism $h:Y\to Z$ above~$v$ such that $h\circ f=g$. Diagrammatically: \begin{center} $\begin{array}{c} \begin{tikzcd}[column sep=1.2em, row sep=.8em] && &&&& Z \arrow[dd,dotted,no head] \\ X\arrow[rrrr,"f"] \arrow[rrrrrru, bend left, "g"] \arrow[dd,dotted,no head] &&&& Y \arrow[rru,dashed,"h"] \arrow[dd,dotted,no head] \\ && &&&& C \\ A\arrow[rrrr,"u"{swap}] \arrow[rrrrrru, bend left] &&&& B \arrow[rru,"v"{swap}] \end{tikzcd} \end{array}$ \end{center} A functor $p:\cat{E}\to\cat{B}$ is called a \emph{Grothendieck fibration} when for every morphism $u:A\to B$ and for every object~$Y$ above~$B$, there exists a cartesian morphism $f:X\to Y$ above~$u$. Symmetrically, a functor $p:\cat{E}\to\cat{B}$ is called a \emph{Grothendieck opfibration} when for every morphism $u:A\to B$ and for every object~$X$ above~$A$, there exists a cocartesian morphism $f:X\to Y$ above~$u$.
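A standard example worth keeping in mind (it is classical, and not needed in the sequel): when a category~$\cat{C}$ has pullbacks, the codomain functor \begin{displaymath} \mathrm{cod} \from \cat{C}^{\to} \longrightarrow \cat{C} \end{displaymath} from the arrow category of~$\cat{C}$ is both a Grothendieck fibration and a Grothendieck opfibration. Its fiber over an object~$A$ is the slice category~$\cat{C}/A$; a cartesian morphism above $u\from A\to B$ is provided by a pullback square along~$u$, while a cocartesian morphism above~$u$ is provided by a commutative square whose top side is an identity. The adjunction induced in this way between the fibers is the familiar adjunction $\Sigma_u\dashv \pull u$ between post-composition with~$u$ and pullback along~$u$.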
Note that a functor $p:\cat{E}\to\cat{B}$ is a Grothendieck opfibration precisely when the functor $p^{op}:\cat{E}^{op}\to\cat{B}^{op}$ is a Grothendieck fibration. A \emph{Grothendieck bifibration} is a functor $p:\cat{E}\to\cat{B}$ which is at the same time a Grothendieck fibration and opfibration. \begin{definition} A \define{cloven Grothendieck bifibration} is a functor $p \from \cat E \to \cat B$ together with \begin{itemize} \item for any $Y \in \cat E$ and $u \from A \to pY$, an object $\pull u Y \in \cat E$ and a cartesian morphism $\cart[p] u Y \from \pull u Y \to Y$ above $u$, \item for any $X \in \cat E$ and $u \from pX \to B$, an object $\push u X \in \cat E$ and a cocartesian morphism $\cocart[p] u X\from X \to \push u X$ above $u$. \end{itemize} \end{definition} When the context is clear enough, we might omit the index $p$. The domain category $\cat E$ is often called the \define{total category} of $p$, and its codomain $\cat B$ the \define{base category}. We shall use this terminology when suited. \begin{remark} If $\cat E$ and $\cat B$ are small relative to a universe $\mathbb U$ in which we suppose the axiom of choice, then a cloven Grothendieck bifibration is exactly the same as the original notion of Grothendieck bifibration. Hence, in this article, we treat the two names as synonyms. \end{remark} The data of such cartesian and cocartesian morphisms gives two factorizations of an arrow $f\from X \to Y$ above some arrow $u\from A \to B$: one through $\cocart u X$, with a fiber morphism $\pushfact f$ in $\fiber {\cat E} B$, and one through $\cart u Y$, with a fiber morphism $\pullfact f$ in $\fiber {\cat E} A$.
See the diagram below: \begin{displaymath} \begin{tikzcd} X \ar[dr,"f"{description}] \ar[d,"\pullfact f"swap] \ar[r,""] & \push u X \ar[d,"\pushfact f"] \\ \pull u Y \ar[r,""swap]& Y \end{tikzcd} \end{displaymath} In turn, this allows $\push u$ and $\pull u$ to be extended as \define{adjoint functors}: \begin{displaymath} \push u \from \fiber{\cat E} A \rightleftarrows \fiber{\cat E} B \cofrom \pull u \end{displaymath} where the action of $\push u$ on a morphism $k\from X \to X'$ of $\fiber{\cat E} A$ is given by $\pushfact{(\cocart u {X'}\circ k)}$ and the action of $\pull u$ on a morphism $\ell \from Y' \to Y$ is given by $\pullfact{(\ell \circ \cart u {Y'})}$: \begin{displaymath} \begin{tikzcd} X \ar[d,"k"swap] \ar[r,""] & \push u X \ar[d,"\push u (k)"] \\ X' \ar[r,""swap]& \push u {X'} \end{tikzcd} \hskip 5em \begin{tikzcd} \pull u {Y'} \ar[d,"\pull u \ell"swap] \ar[r,""] & Y' \ar[d,"\ell"] \\ \pull u Y \ar[r,""swap]& Y \end{tikzcd} \end{displaymath} This gives a mapping $\cat B \to \concrete{Adj}$ from the category $\cat B$ to the $2$-category $\concrete{Adj}$ of adjunctions: it maps an object $A$ to the fiber $\fiber{\cat E}A$, and a morphism $u$ to the push-pull adjunction $(\push u,\pull u)$. 
This mapping is even a pseudofunctor: \begin{itemize} \item For any $A \in\cat B$ and $X \in \fiber {\cat E} A$, we can factor $\id X \from X \to X$ through $\cocart {\id A}{X}$ and $\cart {\id A} X$: \begin{displaymath} \begin{tikzcd} X \drar[equal] \rar["\cocart {\id A}X"] & \push{(\id A)} X \dar["\pushfact{(\id X)}"] \\ {} & X \end{tikzcd} \hskip 8em \begin{tikzcd} X \drar[equal] \dar["\pullfact{(\id X)}"swap] & \\ \pull{(\id A)} X \rar["\cart {\id A}X"swap] & X \end{tikzcd} \end{displaymath} In particular, by looking at the diagram on the left, both $\cocart {\id A} X \circ \pushfact{(\id X)}$ and the identity of $\push{(\id A)} X$ are solutions to the problem of finding an arrow $f$ above $\id A$ such that $f\cocart{\id A} X = \cocart{\id A} X$: by the uniqueness condition on cocartesian morphisms, they are equal; otherwise said, $\pushfact{(\id X)}$ is an isomorphism with inverse $\cocart{\id A}X$. Dually, looking at the diagram on the right, we deduce that $\pullfact{(\id X)}$ is an isomorphism with inverse $\cart{\id A} X$.
Everything is natural in $X$, so we end up with \begin{displaymath} \push{(\id A)} \simeq \id {\fiber{\cat E}A} \simeq \pull{(\id A)} \end{displaymath} \item For any $u\from A\to B$ and $v \from B \to C$ in $\cat B$, and for any $X \in \fiber{\cat E}A$, the cocartesian morphism $\cocart {vu} X\from X \to \push{(vu)} X$ is above $vu$ by definition, hence factors as $h\cocart u X$ for a unique $h$ above $v$, yielding $\pushfact h$ as a morphism in $\fiber{\cat E} C$ such that the following commutes: \begin{displaymath} \begin{tikzcd} X \rar["\cocart u X"] \ar[drr,bend right=15,"\cocart{vu}{X}"swap] & \push u X \rar["\cocart v {\push u X}"] \drar["h"swap] & \push v {\push u X} \dar["\pushfact h"] \\ & & \push {(vu)} X \end{tikzcd} \end{displaymath} Writing simply $k$ for the composite $\cocart v {\push u X} \circ \cocart u X$, the following commutes: \begin{displaymath} \begin{tikzcd} X \drar["\cocart u X"swap] \ar[rr,"\cocart{vu}{X}"] & & \push {(vu)} X \ar[dd,"\pushfact k"] \\ & \push u X \drar["\cocart v {\push u X}"swap] & \\ & & \push v {\push u X} \end{tikzcd} \end{displaymath} Clearly $\pushfact h\pushfact k$ and $\id{\push {(vu)} X}$ both are solutions to the problem of finding $f$ above $\id C$ such that $f\cocart{vu} X = \cocart{vu} X$: the uniqueness condition in the definition of cocartesian morphisms forces them to be equal. Conversely, we use the cocartesianness of $\cocart u X$ and $\cocart v {\push u X}$ in two steps: first $\pushfact k \pushfact h \cocart v {\push u X} = \cocart v {\push u X}$ because they both answer the problem of finding $f$ above $v$ such that $f \cocart u X = \cocart v {\push u X} \circ \cocart u X$; from which we deduce $\pushfact k \pushfact h = \id{\push v \push u X}$ as they both answer the problem of finding a map $f$ above $\id C$ such that $f \cocart v {\push u X} = \cocart v{\push u X}$. In the end, $\pushfact h$ and $\pushfact k$ are isomorphisms, inverse to each other.
Everything we did was natural in $X$, hence we have \begin{displaymath} \push {(vu)} \simeq \push v \push u \end{displaymath} \item The dual argument shows that $\pull {(vu)} \simeq \pull u \pull v$. \item To rigorously prove the pseudofunctoriality of $\cat B \to \concrete{Adj}$, we should show that the isomorphisms we have exhibited above are coherent. This is true, but irrelevant to this work, so we will skip it. \end{itemize} The pseudofunctoriality relates, through an isomorphism, the chosen (co)cartesian morphism above a composite $vu$ with the composite of the chosen (co)cartesian morphisms above $u$ and $v$. The following lemma gives some kind of extension of this result. \begin{lemma} \label{lem:pseudo-push-iso}% Let $u\from A \to B$, $v\from B \to C$ and $w\from C \to D$ in $\cat B$. Suppose $f \from X \to Y$ in $\cat E$ is above the composite $wvu$. Then for the unique maps $h\from\push v \push u X \to Y$ and $k\from \push {(vu)} X \to Y$ above $w$ that fill the commutative triangles \begin{displaymath} \begin{tikzcd} X \rar["\cocart u X"] \ar[drrr,bend right=15,"f"swap] & \push u X \rar["\cocart v {\push u X}"] \ar[drr,lightgray] & \push v \push u X \drar["h"] & \\ & & & Y \end{tikzcd} \quad% \begin{tikzcd} X \ar[rr,"\cocart {vu} X"] \ar[drrr,bend right=15,"f"swap] & & \push {(vu)} X \drar["k"] & \\ & & & Y \end{tikzcd} \end{displaymath} there exists an isomorphism $\phi$ in the fiber $\fiber{\cat E}C$ such that $h\phi = k$. \end{lemma} \begin{proof} We know there is an isomorphism $\phi \from \push{(vu)} X \to \push v \push u X$ above $\id C$ such that $\phi \cocart {vu} X = \cocart v {\push u X} \circ \cocart u X$. But then $h\phi \from \push{(vu)} X \to Y$ is above $w$ and fills the same triangle $k$ does in the statement: by uniqueness, $k = h\phi$.
\begin{displaymath} \begin{tikzcd} & & \push {(vu)} X \dar["\phi"] \ar[ddr,bend left,"k"] & \\ X \ar[urr,"\cocart {vu} X",bend left=20] \rar["\cocart u X"] \ar[drrr,bend right=15,"f"swap] & \push u X \rar["\cocart v {\push u X}"] & \push v \push u X \drar["h"] & \\ & & & Y \end{tikzcd} \end{displaymath} \end{proof} Of course, we have the dual statement, which admits a dual proof. \begin{lemma} \label{lem:pseudo-pull-iso}% Let $u\from A \to B$, $v\from B \to C$ and $w\from C \to D$ in $\cat B$. Suppose $f \from X \to Y$ in $\cat E$ is above the composite $wvu$. Then for the unique maps $h\from X \to \pull v \pull w Y$ and $k\from X \to \pull {(wv)} Y$ above $u$ that fill the commutative triangles \begin{displaymath} \begin{tikzcd} X \ar[drrr,bend left=15,"f"] \ar[drr,lightgray] \drar["h"swap] & & & \\ & \pull v \pull w Y \rar["\cart v {\pull w Y}"swap] & \pull w Y \rar["\cart w Y"swap] & Y \end{tikzcd} \quad% \begin{tikzcd} X \ar[drrr,bend left=15,"f"] \drar["k"swap] & & & \\ & \pull {(wv)} Y \ar[rr,"\cart {wv}Y"swap] & & Y \end{tikzcd} \end{displaymath} there exists an isomorphism $\phi$ in the fiber $\fiber{\cat E}B$ such that $\phi k = h$. \end{lemma} Suppose now that we have a chain of composable maps in $\cat B$: \begin{displaymath} \begin{tikzcd} A_0 \rar["u_1"] & A_1 \rar["u_2"] & \dots \rar["u_n"] & A_n \end{tikzcd} \end{displaymath} And let $f \from X \to Y$ be a map above the composite $u_n\cdots u_1$. Choose $0\leq i,j\leq n$ such that $i+j\leq n$.
Then, using (co)cartesian choices above maps in $\cat B$, one can construct two canonical maps associated to $f$: these are the unique maps \begin{displaymath} \begin{aligned} h &\from \push {(u_i)} \cdots \push {(u_1)} X \to \pull {(u_{n-j+1})} \cdots \pull {(u_n)} Y \\ &\text{and} \\ k &\from \push{(u_i\cdots u_1)} X \to \pull{(u_n\cdots u_{n-j+1})} Y \end{aligned} \end{displaymath} above $u_{n-j}\cdots u_{i+1}\from A_i \to A_{n-j}$ (which is defined as $\id {A_i}$ in case $i+j=n$) filling in the following commutative diagrams: \begin{displaymath} \begin{tikzcd}[column sep=small] X \rar["\lambda"] & \dots \rar["\lambda"] & \push {(u_i)} \cdots \push{(u_1)} X \ar[phantom,ddr,""{coordinate,name=Z}] & & & \ar[from=1-1,to=3-6,"f"swap,rounded corners,to path={(\tikztostart.south) |- (Z) [near end]\tikztonodes -| (\tikztotarget.north)}] \ar[from=1-3,to=3-4,"h"{fill=white},crossing over] \\ & & & & & \\ & & & \pull {(u_{n-j+1})} \cdots \pull {(u_n)} Y \rar["\rho"] & \dots \rar["\rho"] & Y \\ &&&&& \\ X \ar[rr,"\lambda"] & & \push {(u_i\cdots u_1)} X \ar[phantom,ddr,""{coordinate,name=Z}] & & & \ar[from=5-1,to=7-6,"f"swap,rounded corners,to path={(\tikztostart.south) |- (Z) [near end]\tikztonodes -| (\tikztotarget.north)}] \ar[from=5-3,to=7-4,"k"{fill=white},crossing over] \\ & & & & & \\ & & & \pull {(u_n\cdots u_{n-j+1})} Y \ar[rr,"\rho"] & & Y \end{tikzcd} \end{displaymath} By applying the previous lemmas multiple times, we get the following useful corollary.
\begin{corollary} \label{cor:pseudo-iso-in-fiber}% There are fiber isomorphisms $\phi$ and $\psi$ such that the following commutes: \begin{displaymath} \begin{tikzcd}[column sep=small] X \rar["\lambda"] & \dots \rar["\lambda"] & \push {(u_i)} \cdots \push{(u_1)} X \ar[phantom,ddr,""{coordinate,name=Z}] & & & \ar[from=1-1,to=3-6,"f",rounded corners,to path={(\tikztostart.south) |- (Z) -| (\tikztotarget.north) [near start]\tikztonodes}] \ar[from=1-1,to=4-3,"\lambda"{swap},crossing over] \ar[from=1-3,to=3-4,"h"{fill=white},crossing over] \\ & & & & & \\ & & & \pull {(u_{n-j+1})} \cdots \pull {(u_n)} Y \rar["\rho"] & \dots \rar["\rho"] & Y \\ & & \push {(u_i\cdots u_1)} X \ar[phantom,ddr,""{coordinate,name=Z}] \ar[uuu,"\phi",crossing over] & & & \ar[from=4-3,to=6-4,"k"{fill=white},crossing over] \\ & & & & & \\ & & & \pull {(u_n\cdots u_{n-j+1})} Y \ar[uuu,"\psi",crossing over] & & \ar[from=6-4,to=3-6,"\rho"{swap}] \end{tikzcd} \end{displaymath} \end{corollary} We will extensively use this corollary when $i+j = n$. Indeed, in that case $h,k,\phi,\psi$ all lie in the same fiber $\fiber{\cat E}{A_i}$ and then $h$ and $k$ are isomorphic as arrows in that fiber. Every property of $h$ that is invariant under isomorphism of arrows thus also holds of $k$, and conversely.
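By way of illustration (this merely specializes the previous lemmas, and introduces no new material): take $n=2$, $i=2$ and $j=0$, that is, a map $f\from X \to Y$ above a composite $u_2u_1$, with all the pushes performed on the left-hand side. This is exactly the situation of lemma~\ref{lem:pseudo-push-iso} applied with an identity as third map: the two canonical maps \begin{displaymath} h \from \push{(u_2)}\push{(u_1)} X \to Y \qquad\qquad k \from \push{(u_2u_1)} X \to Y \end{displaymath} both live in the fiber $\fiber{\cat E}{A_2}$ and differ by a fiber isomorphism $\phi$ such that $h\phi = k$; in particular, $h$ belongs to a class of maps of $\fiber{\cat E}{A_2}$ closed under isomorphism of arrows if and only if $k$ does.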
\subsection{Weak factorization systems}% \label{subsec:wfs-intertwined}% In any category $\cat M$, we denote $j \weakorth q$, and we say that $j$ has the left lifting property relatively to $q$ (or that $q$ has the right lifting property relatively to $j$), when for any commutative square of the form \begin{displaymath} \begin{tikzcd} A \ar[r] \ar[d,"j" swap] & C \ar[d,"q"] \\ B \ar[r] & D \end{tikzcd} \end{displaymath} there exists a morphism $h \from B \to C$, making the two triangles commute in the following diagram: \begin{displaymath} \begin{tikzcd} A \ar[r] \ar[d,"j" swap] & C \ar[d,"q"] \\ B \ar[r] \ar[ur,"h" description] & D \end{tikzcd} \end{displaymath} Such a morphism~$h$ is called a \emph{lift} of the original commutative square. \medbreak A \define{weak factorization system} on a category $\cat M$ is the data of a pair $(\class L,\class R)$ of classes of arrows in $\cat M$ such that \begin{displaymath} \class L = \{ j : \forall q \in \class R, j\weakorth q\} \quad \text{and} \quad \class R = \{ q : \forall j \in \class L, j\weakorth q\} \end{displaymath} and such that every morphism $f$ of $\cat M$ may be factored as $f=qj$ with $j\in \class L,q\in \class R$. The elements of $\class L$ are called the \define{left maps} and the elements of $\class R$ the \define{right maps} of the factorization system. Let now $\cat M$ and $\cat N$ be categories, each equipped with a weak factorization system. Then an adjunction $L \from \cat M \rightleftarrows \cat N \cofrom R$ is said to be \define{wfs-preserving} if the left adjoint $L$ preserves the left maps, or equivalently if the right adjoint $R$ preserves the right maps. \medskip As a key ingredient in the proof of our main result, the following lemma deserves to be stated fully and independently. It explains how to construct a weak factorization system on the total category of a Grothendieck bifibration, given that the basis and the fibers all have one and that the adjunctions arising from the bifibration are wfs-preserving.
\begin{lemma}[Stanculescu]% \label{lem:stan-lemma}% Let $\pi \from \cat F \to \cat C$ be a Grothendieck bifibration with weak factorization systems $(\class L_C,\class R_C)$ on each fiber $\fiber{\cat F}C$ and $(\class L, \class R)$ on $\cat C$. If the adjoint pair $(\push u,\pull u)$ is a wfs-preserving adjunction for every morphism $u$ of $\cat C$, then there is a weak factorization system $(\class L_{\cat F},\class R_{\cat F})$ on $\cat F$ defined by \begin{displaymath} \begin{aligned} \class L_{\cat F} &= \{ f \from X \to Y \in \cat F : \pi(f) \in \class L , \pushfact f \in \class L_{\pi Y} \}, \\ \class R_{\cat F} &= \{ f \from X \to Y \in \cat F : \pi(f) \in \class R , \pullfact f \in \class R_{\pi X} \} \end{aligned} \end{displaymath} \end{lemma} Since the proof in \cite[2.2]{stanculescu:bifib-model} is based on a different (yet equivalent) definition of weak factorization systems, we give a proof in our language for the reader's convenience. \begin{proof} Let us begin with the easy part, which is the factorization property. For a map $f\from X \to Y$ of $\cat F$, one gets a factorization $\pi(f) = r \ell$ in $\cat C$ with $\ell \from \pi X \to C \in \class L$ and $r\from C \to \pi Y\in \class R$. It induces a fiber morphism $\push \ell X \to \pull r Y$ in $\fiber{\cat F}C$ that we can in turn factor as $r_C\ell_C$ with $\ell_C \in \class L_C$ and $r_C \in \class R_C$. \begin{displaymath} \begin{tikzcd} X \rar \drar[densely dashed,"\tilde \ell",swap] & \push \ell X \dar["\ell_C"] & \\ & \cdot \dar["r_C",swap] \drar[densely dashed,"\tilde r"] & \\ & \pull r Y \rar & Y \end{tikzcd} \end{displaymath} The desired factorization of $f$ is then $\tilde r \tilde \ell$ where $\tilde r$ is the morphism of $\cat F$ such that $\pi(\tilde r) = r$ and $\pullfact {\tilde r} = r_C$, and $\tilde \ell$ the one such that $\pi (\tilde \ell) = \ell$ and $\pushfact{\tilde \ell} = \ell_C$. This is summed up in the diagram above. 
Lifting properties follow the same kind of pattern: take the image under $\pi$ and solve the lifting problem in $\cat C$, then push and pull in $\cat F$ so as to end up in a fiber, where everything goes smoothly. Take a map ${j \from X \to Y} \in \class L_{\cat F}$ and let us show that it lifts against elements of $\class R_{\cat F}$. Consider in $\cat F$ a commutative square with the map $q$ on the right in $\class R_{\cat F}$: \begin{displaymath} \begin{tikzcd} X \dar["j",swap] \rar["f"] & V \dar["q"] \\ Y \rar["g",swap] & W \end{tikzcd} \end{displaymath} By definition, $\pi (j) \in \class L$ has the left lifting property against $\pi(q)$, hence there is a lift $h$: \begin{displaymath} \begin{tikzcd} \pi X \dar["\pi(j)",swap] \rar["\pi(f)"] & \pi V \dar["\pi(q)"] \\ \pi Y \rar["\pi(g)",swap] \urar["h"] & \pi W \end{tikzcd} \end{displaymath} Now filling the original square with $\tilde h \from Y \to V$ above $h$ is equivalent to filling the following induced solid square in $\fiber{\cat F} {\pi Y}$: \begin{displaymath} \begin{tikzcd} \color{lightgray} X \drar["j",swap,lightgray] \ar[rrr,bend left,"f",lightgray] \rar[lightgray] & \cdot \dar["\pushfact j",swap] \rar & \cdot \dar["\pull h (\pullfact q)"] \rar[lightgray] & \color{lightgray} V \drar[lightgray,"q"] \dar[lightgray,"\pullfact q"] & \\ & Y \rar \ar[rrr,bend right,"g",swap,lightgray] & \cdot \rar[lightgray] & \color{lightgray} \cdot \rar[lightgray] & \color{lightgray} W \end{tikzcd} \end{displaymath} But $\pushfact j \in \class L_{\pi Y}$, and $\pull h$ is the right adjoint of a wfs-preserving adjunction, hence maps the right map $\pullfact q$ of $\fiber{\cat F}{\pi V}$ to a right map in $\fiber{\cat F}{\pi Y}$: so there is such a filler. Conversely, if $j \from X \to Y$ in $\cat F$ has the left lifting property relatively to all maps of $\class R_{\cat F}$, then one has to show that it is in $\class L_{\cat F}$. 
Consider in $\fiber{\cat F}{\pi Y}$ a commutative square as \begin{displaymath} \begin{tikzcd} \color{lightgray} X \rar[lightgray,"\cocart {\pi(j)} X"] \drar["j",swap,lightgray] & \cdot \rar["f"] \dar["\pushfact j"] & Y' \dar["q"] \\ & Y \rar["g",swap] & Y'' \end{tikzcd} \qquad q \in \class R_{\pi Y} \end{displaymath} Then, because $q$ also is in $\class R_{\cat F}$, there is an $h \from Y \to Y'$ such that $g = qh$ and $hj = f\cocart{\pi(j)} X$. But then $h\pushfact j$ and $f$ are both solutions to the problem of factoring $hj$ through the cocartesian arrow $\cocart{\pi(j)} X$, hence they are equal. This means that $h$ is a filler of the original square in the fiber $\fiber{\cat F}{\pi Y}$. We conclude that $\pushfact j$ is a left map in its fiber. Now consider a commutative square in $\cat C$: \begin{displaymath} \begin{tikzcd} \pi X \dar["\pi(j)",swap] \rar["f"] & C \dar["q"] \\ \pi Y \rar["g",swap] & D \end{tikzcd} \qquad q \in \class R \end{displaymath} It induces a commutative square in $\cat F$: \begin{displaymath} \begin{tikzcd} X \dar["j",swap] \rar & \pull q \push g Y \dar["\kappa"] \\ Y \rar & \push g Y \end{tikzcd} \end{displaymath} Now the arrow on the right is cartesian above a right map, hence is in $\class R_{\cat F}$ by definition. So $j$ lifts against it, giving us a filler $h \from Y \to \pull q \push g Y$ whose image $\pi(h) \from \pi Y \to C$ fills the square in $\cat C$. We conclude that $\pi(j)$ is a left map of $\cat C$. In the end, $j \in \class L_{\cat F}$ as we wanted to show. Similarly, we can show that $\class R_{\cat F}$ is exactly the class of maps that have the right lifting property against all maps of $\class L_{\cat F}$. 
\end{proof} \subsection{Intertwined weak factorization system and model categories}% \label{subsec:lim-model}% Quillen introduced model categories in \cite{quillen:homotopical-algebra} as categories with sufficient structural analogies with the category of topological spaces so that a sensible notion of {\em homotopy between maps} can be provided. Not necessarily obvious at first sight are the redundancies of Quillen's definition. Even though intentionally important for the conceptual understanding of a model category, the extra checks they require can turn a simple proof into a painful process. To ease things a little bit, this part is dedicated to extracting the minimal definition of a model category, at the cost of trading topological intuition for combinatorial comfort. \medskip Recall the definition of a model structure. \begin{definition}% \label{def:model-category}% A model structure on a category $\cat M$ is the data of three classes of maps $\class C$, $\class W$, $\class F$ such that: \begin{enumerate}[label=(\roman*)] \item $\class W$ has the 2-out-of-3 property, i.e.\ if two elements among $\{f,g,gf\}$ are in $\class W$ for composable morphisms $f$ and $g$, then so is the third, \item $(\class C,\class W \cap \class F)$ and $(\class C \cap \class W,\class F)$ both are weak factorization systems. \end{enumerate} \end{definition} The morphisms in $\class W$ are called the \define{weak equivalences}, those in $\class C$ the \define{cofibrations} and those in $\class F$ the \define{fibrations.} Given the role played by the two classes $\class C \cap W$ and $\class F \cap \class W$, we also give names to their elements: a fibration (respectively a cofibration) which is also a weak equivalence is called an \define{acyclic fibration} (respectively an \define{acyclic cofibration}). 
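Before dissecting this definition, it may help to keep in mind the historical example, classical and recalled here only for orientation: the category $\mathbf{Top}$ of topological spaces carries a model structure with
\begin{displaymath}
\class W = \text{weak homotopy equivalences}, \qquad \class F = \text{Serre fibrations}, \qquad \class C = \text{retracts of relative cell complexes}.
\end{displaymath}
In that structure, the acyclic fibrations are the Serre fibrations that also are weak homotopy equivalences, in accordance with the terminology just introduced.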
\begin{remark} It is crucial for the rest of the document to remark that there is some redundancy in the previous definition: in a model structure, any two of the three classes $\class C,\class W,\class F$ determine the last one. Indeed, knowing $\class C$ and $\class W$ gives us $\class F$ as the class of morphisms having the right lifting property relatively to every element of $\class C \cap \class W$. Dually, $\class C$ is given as the class of morphisms having the left lifting property relatively to every element of $\class W \cap \class F$. Finally, and it is the relevant case for the purpose of this article, the weak equivalences are exactly those morphisms that we can write $qj$ where $j$ is an acyclic cofibration and $q$ is an acyclic fibration. The first inclusion $\{qj: q \in \class F \cap \class W, j \in \class C \cap \class W \} \subseteq \class W$ is a direct consequence of the 2-out-of-3 property. The converse inclusion is given by applying one of the two weak factorization systems and then using the 2-out-of-3 property: if $w\in \class W$, it is writable as $w=qj$ with $q \in \class F, j\in \class C \cap \class W$; but then, $w$ and $j$ being weak equivalences, $q$ also is. Hence the conclusion. \end{remark} Recall also the notion of morphisms between model structures: a \define{Quillen adjunction} between two model structures $\cat M$ and $\cat N$ is an adjunction ${L \from \cat M \rightleftarrows \cat N \cofrom R}$ which is wfs-preserving for both the weak factorization system (acyclic cofibrations, fibrations) and the (cofibrations, acyclic fibrations) one. Finally, to conclude these reminders about model structures, let us introduce some new vocabulary. \begin{definition}[Homotopically conservative functor] A functor $F \from \cat M \to \cat N$ between model structures is said to be \define{homotopically conservative} if it preserves and reflects weak equivalences. 
\end{definition} \begin{remark} To get one's head around this terminology, let us make two observations: \begin{enumerate}[label=(\arabic*)] \item If $\cat M$ and $\cat N$ are endowed with the trivial model structure, in which weak equivalences are isomorphisms and cofibrations and fibrations are all morphisms, then the notion boils down to the usual conservative functors. \item Every functor $F \from \cat M \to \cat N$ preserving weak equivalences induces a functor $\Ho F \from \Ho {\cat M} \to \Ho {\cat N}$. Given that weak equivalences are saturated in a model category, homotopically conservative functors are exactly those $F$ such that $\Ho F$ is conservative as a usual functor. \end{enumerate} \end{remark} \medskip Let us pursue with the following definition, apparently absent from the literature. \begin{definition} A weak factorization system $(\class L_1,\class R_1)$ on a category $\cat C$ is \define{intertwined} with another $(\class L_2,\class R_2)$ on the same category when: \begin{displaymath} \class L_1 \subseteq \class L_2 \qquad\text{ and }\qquad \class R_2 \subseteq \class R_1. \end{displaymath} \end{definition} The careful reader will notice that the properties $\class L_1 \subseteq \class L_2$ and $\class R_2 \subseteq \class R_1$ are actually equivalent to each other, but the definition is more naturally stated in this way. A similar notion is formulated by Shulman for orthogonal factorization systems, in a blog post on the $n$-Category Caf\'e \cite{shulman:ncat-ternary}, with a brief mention, at the end, of a version for weak factorization systems. This is the only appearance of such objects known to us. The similarity with the weak factorization systems of a model category is immediately noticeable and in fact it goes further than a mere resemblance, as indicated in the following two results. \begin{proposition} Let $(\class L_1,\class R_1)$ together with $(\class L_2,\class R_2)$ form intertwined weak factorization systems on a category $\cat C$. 
Denoting $\class W = \class R_2 \circ \class L_1$, the following class identities hold: \begin{displaymath} \class L_1 = \class W \cap \class L_2, \qquad \class R_2 = \class W \cap \class R_1. \end{displaymath} \end{proposition} \begin{proof} Let us prove the first identity only, as the second one is strictly dual. Suppose $f\from A \to B \in \class L_1$, then $f \in \class L_2$ by the very definition of intertwined weak factorization systems, and $f = \id B f \in \class W$, hence the first inclusion: $\class L_1 \subseteq \class W \cap \class L_2$. Conversely, take $f \in \class W \cap \class L_2$. Then in particular there exist $j\in \class L_1$ and $q \in \class R_2$ such that $f = qj$. Put otherwise, the following square commutes: \begin{displaymath} \begin{tikzcd} A \ar[d,"f",swap] \ar[r,"j"] & C \ar[d,"q"] \\ B \ar[r,equal] & B. \end{tikzcd} \end{displaymath} But $f$ is in $\class L_2$ and $q$ is in $\class R_2 \subseteq \class R_1$, hence there is a lift $s \from B \to C$ such that $qs = \id B$ and $sf = j$. Now for any $p \in \class R_1$ and any commutative square \begin{displaymath} \begin{tikzcd} A \ar[dd,"f"swap,bend right] \ar[d,"j"] \ar[r,"x"] & D \ar[dd,"p"] \\ C \ar[d,"q"]& \\ B \ar[r,"y"] & E \end{tikzcd} \end{displaymath} there is a lift $h \from C \to D$, obtained by taking advantage of the fact that $j$ has the left lifting property against $p$. Then $hs \from B \to D$ provides a lift showing that $f$ has the left lifting property against $p$: indeed $phs = yqs = y$ and $hsf = hsqj = hj = x$. Having the left lifting property against any morphism in $\class R_1$, the morphism $f$ ought to be in $\class L_1$, hence providing the reverse inclusion: $\class W \cap \class L_2 \subseteq \class L_1$. \end{proof} \begin{corollary} \label{cor:intertwined-model}% Let $(\class L_1,\class R_1)$ and $(\class L_2,\class R_2)$ form intertwined weak factorization systems on a category $\cat M$, and denote again $\class W = \class R_2 \circ \class L_1$. 
The category $\cat M$ has a model structure with weak equivalences $\class W$, fibrations $\class R_1$ and cofibrations $\class L_2$ if and only if $\class W$ has the 2-out-of-3 property. \end{corollary} Of course, in that case, we also get the class of acyclic cofibrations as $\class L_1$ and the class of acyclic fibrations as $\class R_2$. \medskip So there it is: we have shredded apart the notion of a model structure to the point that what remains is the pretty tame notion of intertwined factorization systems $(\class L_1,\class R_1)$ and $(\class L_2,\class R_2)$ such that $\class R_2\circ\class L_1$ has the 2-out-of-3 property. But it has the neat advantage of being easily checkable, especially in the context of formal constructions, as is the case in this paper. It also emphasizes the fact that Quillen adjunctions are really the right notion of morphisms for intertwined weak factorization systems and have {\em a priori} nothing to do with weak equivalences. We shall really insist on this point: everything that follows in the main theorem can be restated with mere intertwined weak factorization systems in place of model structures and still holds; in fact this represents the easy part of the theorem, and all the hard core of the result resides in the 2-out-of-3 property, as is usual with model structures. \section{Quillen bifibrations} \label{sec:quillen-bifib} Recall from the introduction that a \define{Quillen bifibration} is a Grothendieck bifibration $p \from \cat E \to \cat B$ between categories with model structures such that: \begin{enumerate}[label=(\roman*)] \item the functor~$p$ is both a left and right Quillen functor, \item the model structure on $\cat E$ restricts to a model structure on the fiber $\fiber{\cat E}A$, for every object $A$ of the category~$\cat B$. 
\end{enumerate} In this section, we show that in a Quillen bifibration the model structures on the basis~$\cat B$ and on every fiber $\cat E_A$ determine the original model structure on the total category $\cat E$. In the remainder of this section, we fix a Quillen bifibration $p\from \cat E \to \cat B$. \begin{lemma} \label{lem:quillen-bifib-push-pull-are-quillen}% For every morphism $u \from A \to B$ in $\cat B$, the adjunction $\push u \from \fiber{\cat E}A \rightleftarrows \fiber{\cat E}B \cofrom \pull u$ is a Quillen adjunction. \end{lemma} \begin{proof} Let $f\from X\to Y$ be a cofibration in the fiber $\fiber{\cat E}A$. We want to show that the morphism $\push u (f)$ of $\fiber{\cat E}B$ is a cofibration. Take an arbitrary acyclic fibration $q \from W \to Z$ in $\fiber{\cat E} B$ and a commutative square in that fiber: \begin{displaymath} \begin{tikzcd} \push u X \rar["g"] \dar["\push u (f)"swap] & W \dar["q"] \\ \push u Y \rar["g'"swap] & Z \end{tikzcd} \end{displaymath} We need to find a lift $h \from \push u Y \to W$ making the diagram commute, i.e.\ such that $qh = g'$ and $h\push u(f) = g$. 
Let us begin by precomposing with the square defining $\push u(f)$: \begin{displaymath} \begin{tikzcd} X \rar["\lambda"] \dar["f"swap] & \push u X \rar["g"] \dar["\push u (f)"swap] & W \dar["q"] \\ Y \rar["\lambda"swap] & \push u Y \rar["g'"swap] & Z \end{tikzcd} \end{displaymath} As a cofibration, $f$ has the left lifting property against $q$, providing a map ${k\from Y \to W}$ that makes the following commute: \begin{displaymath} \begin{tikzcd}[row sep=large] X \rar["\lambda"] \dar["f"swap] & \push u X \rar["g"] \dar[lightgray,"\push u (f)"{swap,near start}] & W \dar["q"] \\ Y \rar["\lambda"swap] \ar[urr,"k"{near start},crossing over] & \push u Y \rar["g'"swap] & Z \end{tikzcd} \end{displaymath} Now we use the cocartesian property of $\cocart u Y \from Y \to \push u Y$ on $k$, to find a map $h\from \push u Y \to W$ above the identity $\id B$ such that $h\cocart u Y = k$. All that remains to show is that $qh = g'$ and $h\push u (f) = g$. Notice that $qh$ and $g'$ both solve the problem of finding a map $x \from \push u Y \to Z$ above $\id B$ such that $x\cocart u Y = qk$: hence, by the uniqueness condition in the cocartesian property of $\cocart u Y$, they must be equal. Similarly, $h\circ\push u (f)$ and $g$ solve the problem of finding $x \from \push u X \to W$ above $\id B$ such that $x\cocart u X = kf$: the cocartesian property of $\cocart u X$ allows us to conclude that they are equal. In the end, $\push u(f)$ has the left lifting property against every acyclic fibration of $\fiber{\cat E}B$, so it is a cofibration. We prove dually that the image $\pull u f$ of a fibration $f$ in $\fiber{\cat E} B$ is a fibration of the fiber $\fiber{\cat E} A$. \end{proof} \begin{lemma} \label{lem:quill-bifib-cocart-above-cof}% A cocartesian morphism in $\cat E$ above a (acyclic) cofibration of $\cat B$ is a (acyclic) cofibration. \end{lemma} \begin{proof} Let $f\from X \to Y$ be cocartesian above a cofibration $u \from A \to B$ in $\cat B$. 
Given a commutative square of $\cat E$ \begin{equation} \label{eq:first-square}% \begin{tikzcd} X \dar["f"{swap}] \rar["g"] & W \dar["q"] \\ Y \rar["g'"{swap}] & Z \end{tikzcd} \end{equation} with $q$ an acyclic fibration, we can take its image in $\cat B$: \begin{displaymath} \begin{tikzcd} A \dar["u"{swap}] \rar & pW \dar["p(q)"] \\ B \rar & pZ \end{tikzcd} \end{displaymath} Since $u$ is a cofibration and $p(q)$ an acyclic fibration, there exists a morphism~$h \from B \to pW$ making the expected diagram commute: \begin{displaymath} \begin{tikzcd} A \dar["u"{swap}] \rar & pW \dar["p(q)"] \\ B \rar \arrow[ru,"h" description] & pZ \end{tikzcd} \end{displaymath} Because $f$ is cocartesian, we know that there exists a (unique) map $\tilde h \from Y \to W$ above $h$ making the diagram below commute: \begin{displaymath} \begin{tikzcd} X \dar["f"{swap}] \rar["g"] & W \\ Y \urar["\tilde h"{swap}] & \end{tikzcd} \end{displaymath} For the morphism~$\tilde h$ to be a lift in the first commutative square~(\ref{eq:first-square}), it remains to show that~$q\tilde h = g'$. Because $\tilde h$ is above $h$ and $p(q)h = p(g')$, we have that the composite~$q\tilde h$ is above $p(g')$. Moreover $q\tilde h f = q g = g' f$. Using the uniqueness property in the universal definition of cocartesian maps, we deduce $q\tilde h = g'$. We have just shown that the cocartesian morphism~$f$ is weakly orthogonal to every acyclic fibration, and we thus conclude that $f$ is a cofibration. The case of cocartesian morphisms above acyclic cofibrations is treated in a similar way. \end{proof} The same argument establishes the dual statement: \begin{lemma} \label{lem:quill-bifib-cart-above-fib}% A cartesian morphism in $\cat E$ above a (acyclic) fibration of $\cat B$ is a (acyclic) fibration. 
\end{lemma} \begin{proposition} \label{prop:quill-bifib-charac-total-cof}% A map $f\from X \to Y$ in $\cat E$ is a (acyclic) cofibration if and only if $p(f)$ is a (acyclic) cofibration in $\cat B$ and $\pushfact f$ is a (acyclic) cofibration in the fiber $\fiber{\cat E}{pY}$. \end{proposition} \begin{proof} One direction of the equivalence is easy: if $p(f) = u \from A\to B$ is a cofibration, then so is the cocartesian morphism $\cocart u X$ above it by lemma \ref{lem:quill-bifib-cocart-above-cof}; if moreover $\pushfact f$ is a cofibration in the fiber $\fiber{\cat E}B$, then $f = \pushfact f \cocart u X$ is a composite of cofibrations, hence a cofibration itself. Conversely, suppose that $f\from X\to Y$ is a cofibration in $\cat E$. Then surely $p(f) = u\from A\to B$ also is a cofibration in $\cat B$, since $p$ is a left Quillen functor. Now we want to show that $\pushfact f \from \push u X \to Y$ is a cofibration in the fiber $\fiber{\cat E}B$. Consider a commutative square in that fiber \begin{displaymath} \begin{tikzcd} \push u X \rar["g"] \dar["\pushfact f"{swap}] & W \dar["q"] \\ Y \rar["g'"{swap}] & Z \end{tikzcd} \end{displaymath} where $q$ is an acyclic fibration of the fiber $\fiber{\cat E}B$, and $g,g'$ are arbitrary morphisms in that fiber. Since $f$ itself is a cofibration in~$\cat{E}$, we know that there exists a lift $h \from Y \to W$ for the outer square (with four sides $f$, $q$, $g\cocart u X$ and $g'$) of the following diagram: \begin{displaymath} \begin{tikzcd} X \drar["f"{swap}] \rar["\cocart u X"] & \push u X \rar["g"] \dar[lightgray,"\pushfact f"] & W \dar["q"] \\ & Y \rar["g'"{swap}] \urar["h"] & Z \end{tikzcd} \end{displaymath} Now, it remains to show that $h\pushfact f = g$. We already know that $g\cocart u X = h \pushfact f \cocart u X$, and taking advantage of the fact that the morphism $\cocart u X$ is cocartesian, we only need to show that $p(g) = p(h \pushfact f)$. 
Since $g$ and $\pushfact f$ are fiber morphisms, this amounts to showing that $h$ is a fiber morphism as well. This follows from the fact that $qh = g'$ and that $q$ and $g'$ are fiber morphisms. \end{proof} In the same way, we get the dual statement: \begin{proposition} \label{prop:quill-bifib-charac-total-fib}% A map $f\from X \to Y$ in $\cat E$ is a (acyclic) fibration if and only if $p(f)$ is a (acyclic) fibration in $\cat B$ and $\pullfact f$ is a (acyclic) fibration in the fiber $\fiber{\cat E}{pX}$. \end{proposition} In particular, this means that the model structure on the total category $\cat E$ is entirely determined by the model structures on the basis $\cat B$ and on each fiber~$\cat E_B$ of the bifibration. As these characterizations turn out to be important for what follows, we shall name them. \begin{definition} \label{def:total-model}% Let $p\from \cat E \to \cat B$ be a Grothendieck bifibration such that its basis $\cat B$ and each fiber $\fiber{\cat E}A\ (A\in\cat B)$ have a model structure. \begin{itemize} \item a \define{total cofibration} is a morphism $f\from X \to Y$ of $\cat E$ above a cofibration $u\from A \to B$ of $\cat B$ such that $\pushfact f$ is a cofibration in the fiber $\fiber{\cat E} B$, \item a \define{total fibration} is a morphism $f\from X \to Y$ of $\cat E$ above a fibration $u\from A \to B$ of $\cat B$ such that $\pullfact f$ is a fibration in the fiber $\fiber{\cat E} A$, \item a \define{total acyclic cofibration} is a morphism $f\from X \to Y$ of $\cat E$ above an acyclic cofibration $u\from A \to B$ of $\cat B$ such that $\pushfact f$ is an acyclic cofibration in the fiber $\fiber{\cat E} B$, \item a \define{total acyclic fibration} is a morphism $f\from X \to Y$ of $\cat E$ above an acyclic fibration $u\from A \to B$ of $\cat B$ such that $\pullfact f$ is an acyclic fibration in the fiber $\fiber{\cat E} A$. 
\end{itemize} \end{definition} Using this terminology, the two propositions \ref{prop:quill-bifib-charac-total-cof} and \ref{prop:quill-bifib-charac-total-fib} just established come together as: the cofibrations, fibrations, acyclic cofibrations and acyclic fibrations of a Quillen bifibration are necessarily the total ones. Note also that the definitions of total cofibration and total fibration given in definition~\ref{def:total-model} coincide with the ones given in the introduction. \medskip We end this section by giving simple examples of Quillen bifibrations. They should serve as both a motivation and a guide for the reader to navigate through the following definitions and proofs: it certainly worked that way for the authors. \begin{example}% \label{ex:main-thm-motiv}% ~% \begin{enumerate}[label=(\arabic*)]% \item One of the simplest instances of a Grothendieck bifibration other than the identity functor is the projection from a product: \begin{displaymath} p\from \cat M \times \cat B \to \cat B \end{displaymath} Cartesian and cocartesian morphisms coincide and are those of the form $(\id M,u)$ for $M \in \cat M$ and $u$ a morphism of $\cat B$. In particular, one has $\pullfact {(f,u)} = (f,\id A)$ and $\pushfact {(f,u)} = (f,\id B)$ for any $u \from A \to B$ in $\cat B$ and any $f$ in $\cat M$. If $\cat B$ and $\cat M$ are model categories, each fiber $\fiber p A \simeq \cat M$ inherits a model structure from $\cat M$, and the total fibrations and cofibrations coincide precisely with those of the usual model structure on the product $\cat M \times \cat B$. 
\item For a category $\cat B$, one can consider the codomain functor: \begin{displaymath} \mathrm{cod} \from \functorcat{\lincat 2}{\cat B} \to \cat B,\, (X\overset f \to A) \mapsto A \end{displaymath} Cocartesian morphisms above $u$ relatively to $\mathrm{cod}$ are those commutative squares of the form \begin{displaymath} \begin{tikzcd} X \dar["f"] \ar[r,equal] & X \dar["uf"] \\ A \ar[r,"u"] & B \end{tikzcd} \end{displaymath} whereas cartesian morphisms above $u$ are the pullback squares along $u$. Hence $\mathrm{cod}$ is a Grothendieck bifibration whenever $\cat B$ admits pullbacks. If moreover $\cat B$ is a model category, then each fiber $\fiber \mathrm{cod} A \simeq \slice{\cat B} A$ inherits a model structure (namely, an arrow is a fibration or a cofibration when it is one as an arrow of $\cat B$), and the total fibrations and cofibrations coincide with those of the injective model structure on $\functorcat{\lincat 2}{\cat B}$: i.e.\ a cofibration is a commutative square with the top and bottom arrows being cofibrations in $\cat B$, whereas fibrations are those commutative squares \begin{displaymath} \begin{tikzcd} X \ar[dd,"f"] \ar[rr] \drar["h",densely dashed] & & Y \ar[dd,"g"] \\ & \fiberprod A Y {u,g} \dlar \urar & \\ A \ar[rr,"u"] & & B \end{tikzcd} \end{displaymath} where both $u$ and $h$ are fibrations in $\cat B$. \item Similarly, the total fibrations and cofibrations of the Grothendieck bifibration $\mathrm{dom} \from \functorcat{\lincat 2}{\cat B} \to \cat B$ over a model category $\cat B$ are exactly those of the projective model structure on $\functorcat{\lincat 2}{\cat B}$. \item In both \cite{stanculescu:bifib-model} and \cite{harpaz-prasma:grothendieck-model}, the authors prove a theorem similar to ours, putting a model structure on the total category of a Grothendieck bifibration under specific hypotheses. In both cases, fibrations and cofibrations of this model structure end up being the total ones. 
The following theorem in particular encompasses these two results. \end{enumerate} \end{example} \section{A Grothendieck construction for Quillen bifibrations}% \label{sec:actual-thm}% Now we have the tools to move on to the main goal of this paper, which is to turn a Grothendieck bifibration $p \from \cat E \to \cat B$ into a Quillen bifibration whenever both the basis category $\cat B$ and every fiber $\fiber{\cat E}A \ (A\in\cat B)$ admit model structures in such a way that all the pairs of adjoint push and pull functors between fibers are ``homotopically well-behaved''. To be more precise, we now suppose $\cat B$ to be equipped with a model structure $(\class C,\class W,\class F)$, and each fiber $\fiber {\cat E} A$ ($A\in \cat B$) to be equipped with a model structure $(\class C_A,\class W_A,\class F_A)$. We also make the following \showcase{fundamental assumption}: \begin{displaymath} \tag{Q\relax}% \label{hyp:quillen-push-pull}% \parbox{\dimexpr\linewidth-7em}{% \strut% For all $u$ in $\cat B$, the adjoint pair $(\push u,\pull u)$ is a Quillen adjunction. \strut% }% \end{displaymath} We defined in definition~\ref{def:total-model} notions of total cofibrations and total fibrations, as well as their acyclic counterparts. These are reminiscent of what happens with Quillen bifibrations, but they can be defined for any Grothendieck bifibration whose basis and fibers have model structures. We must insist that in that framework, {\em total cofibrations} and {\em total fibrations} are only names, and by no means do they endow the total category $\cat E$ with a model structure. Indeed, the goal of this section, and to some extent even the goal of this paper, is to provide a complete characterization, under hypothesis~\eqref{hyp:quillen-push-pull}, of the Grothendieck bifibrations $p\from \cat E \to \cat B$ for which the total cofibrations and total fibrations make $p$ into a Quillen bifibration. 
For the rest of this section, we shall denote $\totalcof{\cat E}$, $\totalfib{\cat E}$, $\totalacof{\cat E}$ and $\totalafib{\cat E}$ for the respective classes of total cofibrations, total fibrations, total acyclic cofibrations, and total acyclic fibrations, that is: \begin{displaymath} \begin{aligned} \totalcof{\cat E} &= \{f \from X \to Y \in \cat E : p(f) \in \class C, \pushfact f \in \class C_{pY} \}, \\ \totalfib{\cat E} &= \{f \from X \to Y \in \cat E : p(f) \in \class F, \pullfact f \in \class F_{pX} \}, \\ \totalacof{\cat E} &= \{f \from X \to Y \in \cat E : p(f) \in \class W \cap \class C, \pushfact f \in \class W_{pY} \cap \class C_{pY} \}, \\ \totalafib{\cat E} &= \{f \from X \to Y \in \cat E : p(f) \in \class W \cap \class F, \pullfact f \in \class W_{pX} \cap \class F_{pX} \} \end{aligned} \end{displaymath} \subsection{Main theorem} In order to state the theorem correctly, we will need some vocabulary. Recall that the {\em mate} $\mu \colon \push {u'} \pull v \to \pull {v'} \push u$ associated to a commutative square of $\cat B$ \begin{displaymath} \begin{tikzcd} A \ar[d,"u'" swap] \ar[r,"v"] & C \ar[d,"u"] \\ C' \ar[r,"v'" swap] & B \end{tikzcd} \end{displaymath} is the natural transformation constructed at point $Z \in \fiber {\cat E} C$ in two steps as follows: the composite \begin{displaymath} \pull v Z \to Z \to \push u Z \end{displaymath} which is above $uv$, factors through the cartesian arrow $\cart {v'} {\push u Z} \from \pull {v'} \push u Z \to \push u Z$ (because $v'u' = uv$) into a morphism $\pull v Z \to \pull {v'} {\push u Z}$ above $u'$, which in turn factors through the cocartesian arrow $\cocart {u'} {\pull v Z} \from \pull v Z \to \push {u'} \pull v Z$, giving rise to $\mu_Z$, as summarized in the diagram below. 
\begin{displaymath} \begin{tikzcd} \pull v Z \ar[rr,"\rho"] \dar["\lambda"] & & Z \ar[dd,"\lambda"] \\ \push {u'} \pull v Z \drar[densely dashed,"\mu_Z"] & & \\ & \pull {v'} \push u Z \rar["\rho"] & \push u Z \end{tikzcd} \end{displaymath} \begin{definition} A commutative square of $\cat B$ is said to satisfy the {\em homotopical Beck-Chevalley condition} if its mate is pointwise a weak equivalence. \end{definition} Consider then the following properties on the Grothendieck bifibration $p$: \begin{displaymath} \tag{hBC\relax} \label{hyp:hBC}% \parbox{\dimexpr\linewidth-7em}{% \strut Every commutative square of $\cat B$ of the form \begin{displaymath} \begin{tikzcd}[ampersand replacement=\&] A \ar[d,"u'" swap] \ar[r,"v"] \& C \ar[d,"u"] \\ C' \ar[r,"v'" swap] \& B \end{tikzcd} \qquad \begin{aligned} u,u' &\in \class C \cap \class W, \\ v,v' &\in \class F \cap \class W \end{aligned} \end{displaymath} satisfies the homotopical Beck-Chevalley condition. \strut }% \end{displaymath} and \begin{displaymath} \tag{hCon\relax} \label{hyp:weak-conservative}% \parbox{\dimexpr\linewidth-5em}{% \strut The functors $\push u$ and $\pull v$ are homotopically conservative whenever $u$ is an acyclic cofibration and $v$ an acyclic fibration. \strut }% \end{displaymath} The theorem states that this is exactly what it takes to make the names ``total cofibrations'' and ``total fibrations'' legitimate, and to turn $p \from \cat E \to \cat B$ into a Quillen bifibration. \begin{theorem}% \label{thm:main}% Under hypothesis \eqref{hyp:quillen-push-pull}, the total category $\cat E$ admits a model structure with $\totalcof{\cat E}$ and $\totalfib{\cat E}$ as cofibrations and fibrations respectively if and only if properties \eqref{hyp:hBC} and \eqref{hyp:weak-conservative} are satisfied. In that case, the functor $p \from \cat E \to \cat B$ is a Quillen bifibration. 
\end{theorem} The proof begins with a very candid remark that we promote to a proposition because we shall use it several times in the rest of the proof. \begin{proposition} \label{prop:total-intertwined-wfs}% $(\totalacof{\cat E},\totalfib{\cat E})$ and $(\totalcof{\cat E},\totalafib{\cat E})$ are intertwined weak factorization systems. \end{proposition} \begin{proof} Obviously $\totalacof{\cat E} \subseteq \totalcof{\cat E}$ and $\totalafib{\cat E} \subseteq \totalfib{\cat E}$. Independently, a direct application of lemma \ref{lem:stan-lemma} shows that $(\totalacof{\cat E},\totalfib{\cat E})$ and $(\totalcof{\cat E},\totalafib{\cat E})$ are both weak factorization systems on $\cat E$. \end{proof} The strategy to prove theorem \ref{thm:main} then goes as follows: \begin{itemize} \item first we show the necessity of conditions~\eqref{hyp:hBC} and~\eqref{hyp:weak-conservative}: if $\totalcof{\cat E}$ and $\totalfib{\cat E}$ are the cofibrations and fibrations of a model structure on $\cat E$, then hypotheses~\eqref{hyp:hBC} and~\eqref{hyp:weak-conservative} are met, \item next comes the harder part, sufficiency: because of proposition \ref{prop:total-intertwined-wfs}, it is enough to show that the induced class $\totalweak{\cat E} = \totalafib{\cat E} \circ \totalacof{\cat E} $ of {\em total weak equivalences} has the 2-out-of-3 property to conclude through corollary \ref{cor:intertwined-model}. \end{itemize} \subsection{Proof, part I: necessity} \label{subsec:necessity-main} Throughout this section, we suppose that $\totalcof{\cat E}$ and $\totalfib{\cat E}$ provide respectively the cofibrations and fibrations of a model structure on the total category $\cat E$. We denote by $\totalweak{\cat E}$ the corresponding class of weak equivalences. First, we prove a technical lemma, directly following from proposition \ref{prop:total-intertwined-wfs}, that will be used extensively in what follows.
Informally, it states that the names given to the members of $\totalacof{\cat E}$ and $\totalafib{\cat E}$ are not foolish. \begin{lemma} \label{lem:total-acyclic-are-acyclic}% $\totalacof{\cat E} = \totalweak{\cat E} \cap \totalcof{\cat E}$ and $\totalafib{\cat E} = \totalweak{\cat E} \cap \totalfib{\cat E}$. \end{lemma} \begin{proof} By proposition \ref{prop:total-intertwined-wfs}, we know that both $(\totalacof{\cat E},\totalfib{\cat E})$ and $(\totalweak{\cat E} \cap \totalcof{\cat E},\totalfib{\cat E})$ are weak factorization systems with the same class of right maps, hence their classes of left maps must coincide. Similarly the weak factorization systems $(\totalcof{\cat E},\totalafib{\cat E})$ and $(\totalcof{\cat E}, \totalweak{\cat E} \cap \totalfib{\cat E})$ have the same class of left maps, hence their classes of right maps coincide. \end{proof} \begin{corollary} \label{cor:inclusion-weak-conservative}% For any object $A$ of $\cat B$, the inclusion functor $\fiber{\cat E}A \to \cat E$ is homotopically conservative. \end{corollary} \begin{proof} The preservation of weak equivalences comes from the fact that acyclic cofibrations and acyclic fibrations of $\fiber{\cat E}A$ are elements of $\totalacof{\cat E}$ and $\totalafib{\cat E}$ respectively. Thus, by lemma \ref{lem:total-acyclic-are-acyclic}, they are elements of $\totalweak{\cat E}$. Conversely, suppose that $f$ is a map of $\fiber{\cat E} A$ which is a weak equivalence of $\cat E$. We want to show that $f$ is a weak equivalence of the fiber $\fiber{\cat E}A$. The map $f$ factors in the fiber $\fiber{\cat E}A$ as $f = qj$ where $j \in \class C_A \cap \class W_A$ and $q \in \class F_A$. We just need to show that $q \in \class W_A$. By lemma \ref{lem:total-acyclic-are-acyclic}, $j$ is also a weak equivalence of $\cat E$. By the 2-out-of-3 property of $\class W_{\cat E}$, the map $q$ is a weak equivalence of $\cat E$. As a fibration of $\fiber{\cat E}A$, $q$ is also a fibration of $\cat E$.
This establishes that $q$ is an acyclic fibration of $\cat E$. By lemma \ref{lem:total-acyclic-are-acyclic}, $q$ is thus an element of $\totalafib{\cat E}$. This concludes the proof that $q = \pullfact q$ is an acyclic fibration, and thus a weak equivalence, in the fiber $\fiber{\cat E}A$. \end{proof} \begin{proposition}[Property (hCon\relax)] If $j\from A \to B$ is an acyclic cofibration in $\cat B$, then $\push j \from \fiber{\cat E} A \to \fiber{\cat E} B$ is homotopically conservative. If $q\from A \to B$ is an acyclic fibration in $\cat B$, then $\pull q \from \fiber{\cat E} B \to \fiber{\cat E} A$ is homotopically conservative. \end{proposition} \begin{proof} We only prove the first part of the proposition, as the second one is dual. Recall that the image $\push j (f)$ of a map $f\from X \to Y$ of $\fiber{\cat E} A$ is computed as the unique morphism of $\fiber{\cat E} B$ making the following square commute: \begin{displaymath} \begin{tikzcd} X \rar \dar["f",swap] & \push j X \dar["\push j(f)"] \\ Y \rar & \push j Y \end{tikzcd} \end{displaymath} The horizontal morphisms in the diagram are cocartesian above the acyclic cofibration $j$. As such they are elements of $\totalacof{\cat E}$, and thus weak equivalences in $\cat E$ by lemma \ref{lem:total-acyclic-are-acyclic}. By the 2-out-of-3 property of $\totalweak {\cat E}$, $f$ is a weak equivalence in $\cat E$ if and only if $\push j (f)$ is one in $\cat E$. Corollary \ref{cor:inclusion-weak-conservative} then allows us to conclude: $f$ is a weak equivalence in the fiber $\fiber{\cat E} A$ if and only if $\push j (f)$ is one in the fiber $\fiber{\cat E} B$. \end{proof} \begin{proposition}[Property (hBC\relax)] Commutative squares of $\cat B$ of the form \begin{displaymath} \begin{tikzcd} A \rar["v"] \dar["u'",swap] & C \dar["u"] \\ C' \rar["v'",swap] & B \end{tikzcd} \quad u,u' \in \class C \cap \class W \quad v,v' \in \class F \cap \class W \end{displaymath} satisfy the homotopical Beck-Chevalley condition.
\end{proposition} \begin{proof} Recall that for such a square in $\cat B$, the component of the mate $\mu \from \push {u'} \pull v \to \pull {v'} \push u$ at $Z \in \fiber{\cat E}C$ is defined as the unique map of $\fiber{\cat E}{C'}$ making the following diagram commute: \begin{displaymath} \begin{tikzcd} \pull v Z \ar[rr,"\rho"] \dar["\lambda"] & & Z \ar[dd,"\lambda"] \\ \push {u'} \pull v Z \drar["\mu_Z"] & & \\ & \pull {v'} \push u Z \rar["\rho"] & \push u Z \end{tikzcd} \end{displaymath} Arrows labelled $\rho$ and $\lambda$ are respectively cartesian above acyclic fibrations and cocartesian above acyclic cofibrations, hence weak equivalences of $\cat E$ by lemma \ref{lem:total-acyclic-are-acyclic}. By applying the 2-out-of-3 property of $\totalweak{\cat E}$ three times in a row, we conclude that the fiber map $\mu_Z$ is a weak equivalence of $\cat E$, hence also of $\fiber{\cat E}{C'}$ by corollary \ref{cor:inclusion-weak-conservative}. \end{proof} \subsection{Proof, part II: sufficiency} \label{subsec:sufficiency} We have established the necessity of \eqref{hyp:hBC} and \eqref{hyp:weak-conservative} in theorem \ref{thm:main}. We now prove the sufficiency of these conditions. This is the hard part of the proof. Recall that every fiber $\fiber{\cat E}A$ of the Grothendieck bifibration $p \from \cat E \to \cat B$ is equipped with a model structure in such a way that \eqref{hyp:quillen-push-pull} is satisfied. From now on, we make the additional assumptions that \eqref{hyp:hBC} and \eqref{hyp:weak-conservative} are satisfied. We will write $\totalweak{\cat E} = \totalafib{\cat E} \circ \totalacof{\cat E}$ for the class of maps that can be written as a total acyclic cofibration followed by a total acyclic fibration. The overall goal of this section is to prove that \begin{claim*} $(\totalcof{\cat E},\totalweak{\cat E},\totalfib{\cat E})$ defines a model structure on the total category $\cat E$.
\end{claim*} By proposition \ref{prop:total-intertwined-wfs}, we already know that $(\totalacof{\cat E},\totalfib{\cat E})$ and $(\totalcof{\cat E},\totalafib{\cat E})$ are intertwined weak factorization systems. It then follows from corollary \ref{cor:intertwined-model} that we only need to show that the class $\totalweak{\cat E}$ of \define{total weak equivalences} satisfies the 2-out-of-3 property. \medskip A first step is to get a better understanding of the total weak equivalences. For $f \from X \to Y$ in $\cat E$ such that $p(f) = vu$ for two composable morphisms $u\from pX \to C$ and $v\from C \to pY$ of $\cat B$, there is a unique morphism inside the fiber $\fiber{\cat E} C$ \begin{displaymath} \middlefact f u v \from \push u X \to \pull v Y \end{displaymath} such that $f = \cart v Y \circ \middlefact f u v \circ \cocart u X$. This morphism $\middlefact f u v$ can be constructed as $\pullfact k$ where $k$ is the unique morphism above $v$ factorizing $f$ through $\cocart u X$; or equivalently as $\pushfact \ell$ where $\ell$ is the unique morphism above $u$ factorizing $f$ through $\cart v Y$. This is summed up in the following commutative diagram: \begin{displaymath} \begin{tikzcd}[row sep=large, column sep=huge] X \rar["\lambda"] \drar[densely dashed,"\ell" swap] & \push u X \dar["\pushfact \ell = \middlefact f u v = \pullfact k" description] \drar[densely dashed,"k"] & \\ & \pull v Y \rar["\rho" swap] & Y \end{tikzcd} \end{displaymath} Notice that, in particular, a morphism $f$ of $\totalweak{\cat E}$ is exactly a morphism of $\cat E$ for which \showcase{there exists} a factorization $p(f) = qj$ with $j \in \class W \cap \class C$ and $q \in \class W \cap \class F$ such that $\middlefact f j q$ is a weak equivalence in the corresponding fiber.
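It will be convenient to record this characterization explicitly (this is merely a restatement, in the notation just introduced, of the definition of $\totalweak{\cat E}$):
\begin{displaymath}
  f \in \totalweak{\cat E}
  \quad\Longleftrightarrow\quad
  \text{there exist } j \in \class W \cap \class C \text{ and } q \in \class W \cap \class F
  \text{ with } p(f) = qj \text{ and } \middlefact f j q \in \class W_{C},
\end{displaymath}
where $C$ denotes the intermediate object of the factorization $p(f) = qj$.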
We shall strive to show that, under our hypotheses \eqref{hyp:weak-conservative} and \eqref{hyp:hBC}, a morphism $f$ of $\totalweak{\cat E}$ satisfies the stronger property that $\middlefact f j q$ is a weak equivalence \showcase{for all} such factorizations $p(f) = qj$. This is the content of proposition \ref{prop:exists-forall}. We start by showing the property in the particular case where $p(f)$ is an acyclic cofibration (lemma \ref{lem:exists-forall-above-acof}) or an acyclic fibration (lemma \ref{lem:exists-forall-above-afib}). \begin{lemma} \label{lem:exists-forall-above-acof}% Suppose that $f\from X \to Y$ is a morphism of $\cat E$ such that $p(f)$ is an acyclic cofibration in $\cat B$. If $p(f) = qj$ with $j \in \class W \cap \class C$ and $q \in \class W \cap \class F$, then $\pushfact f$ is a weak equivalence if and only if $\middlefact f j q$ is a weak equivalence. \end{lemma} \begin{proof} Since $p(f) = qj$, lemma \ref{lem:pseudo-push-iso} provides an isomorphism $\phi$ in the fiber $\fiber{\cat E}{pY}$ such that $\pushfact f = \tilde {\pushfact f} \phi$, where $\tilde {\pushfact f}$ is the morphism obtained by pushing in two steps: \begin{displaymath} \begin{tikzcd} X \rar["\lambda"] \ar[rrr,bend left,"\lambda"] \ar[drr,"f"{swap}] & \push j X \rar["\lambda"] & \push q \push j X \dar["\tilde{\pushfact f}"] & \push {(qj)} X \dlar["\pushfact f"] \lar["\simeq","\phi"{swap}] \\ & & Y & \end{tikzcd} \end{displaymath} By definition, $\middlefact f j q$ is the image of $\tilde {\pushfact f}$ under the natural bijection $\hom[\fiber{\cat E}{pY}]{\push q \push j X}{Y} \overset \simeq\to \hom[\fiber{\cat E}{C}]{\push j X}{\pull q Y}$. So it can be written $\middlefact f j q = \pull q (\tilde {\pushfact f}) \circ \eta_{\push j X}$ using the unit $\eta$ of the adjunction $(\push q,\pull q)$.
We can now complete the previous diagram as follows: \begin{displaymath} \begin{tikzcd}[row sep=large,column sep=large] X \rar["\lambda"] \ar[rrr,bend left=20,"\lambda"] \ar[drr,lightgray,"f"{very near start,swap}] & \push j X \dar[crossing over,"\eta"{fill=white}] \rar["\lambda"] \ar[dd,crossing over,bend right=60,"\middlefact fjq" swap]& \push q \push j X \dar["\tilde{\pushfact f}"] & \push {(qj)} X \dlar["\pushfact f"] \lar["\simeq","\phi"{swap}] \\ & \pull q \push q \push j X \dar["\pull q(\tilde{\pushfact f})"] \urar[crossing over,"\rho" swap] & Y & \\ & \pull q Y \urar["\rho" swap] & & \end{tikzcd} \end{displaymath} Proving that $\eta_{\push j X}$ is a weak equivalence is then enough to conclude: in that case $\middlefact f j q$ is a weak equivalence if and only if $\pull q(\tilde {\pushfact f})$ is such by the 2-out-of-3 property; $\pull q(\tilde {\pushfact f})$ is a weak equivalence if and only if $\tilde {\pushfact f}$ is a weak equivalence in $\fiber{\cat E}{pY}$ by \eqref{hyp:weak-conservative}; and finally $\tilde {\pushfact f}$ is a weak equivalence if and only if $\pushfact f$ is such because they are isomorphic as arrows in $\fiber{\cat E}{pY}$. So it remains to show that $\eta_{\push j X}$ is a weak equivalence in its fiber. Since $p(f) = qj$, the following square commutes in $\cat B$: \begin{displaymath} \begin{tikzcd} pX \rar[equal,"\id {pX}"] \dar["j" swap]& pX \dar["qj"] \\ C \rar["q" swap] & pY \end{tikzcd} \end{displaymath} This is a square of the correct form to apply \eqref{hyp:hBC}: hence the associated mate at component $X$ \begin{displaymath} \mu_X \from \push j \pull {(\id {pX})} X \to \pull q \push {(qj)} X \end{displaymath} is a weak equivalence in the fiber $\fiber{\cat E} C$.
Corollary \ref{cor:pseudo-iso-in-fiber} ensures that $\mu_X$ is isomorphic, as an arrow of $\fiber{\cat E}C$, to the unique fiber morphism factoring $\cocart{q} {\push j X}$ through $\cart q {\push q \push j X}$: \begin{displaymath} \begin{tikzcd} \push j X \rar["\lambda"] \dar & \push q \push j X \\ \pull q \push q \push j X \urar["\rho" swap] & \end{tikzcd} \end{displaymath} This is exactly the definition of the unit $\eta$ at $\push j X$. Isomorphic morphisms being weak equivalences together, $\eta_{\push j X}$ is also acyclic in $\fiber{\cat E} C$. \end{proof} Of course, one gets the dual lemma by dualizing the proof, which we leave to the reader to write down. \begin{lemma} \label{lem:exists-forall-above-afib}% Let $f\from X \to Y$ be a morphism of $\cat E$ such that $p(f)$ is an acyclic fibration in $\cat B$. If $p(f) = qj$ with $j \in \class W \cap \class C$ and $q \in \class W \cap \class F$, then $\pullfact f$ is a weak equivalence if and only if $\middlefact f j q$ is a weak equivalence. \end{lemma} We shall now prove the key proposition of this section. \begin{proposition} \label{prop:exists-forall}% Let $f \from X \to Y$ in $\cat E$. If $p(f) = qj = q'j'$ for some $j,j' \in \class W \cap \class C$ and $q,q' \in \class W \cap \class F$, then $\middlefact f j q$ is a weak equivalence if and only if $\middlefact f {j'} {q'}$ is a weak equivalence. \end{proposition} \begin{proof} By hypothesis the following square commutes in $\cat B$: \begin{displaymath} \begin{tikzcd} pX \rar["j'"] \dar["j" swap] & C' \dar["q'"] \\ C \rar["q" swap] & pY \end{tikzcd} \end{displaymath} Since $j$ is an acyclic cofibration and $q'$ an (acyclic) fibration, there is a filler $h \from C \to C'$ for the previous square, which is a weak equivalence by the 2-out-of-3 property. Hence it can be factored as $h=h_fh_c$, an acyclic cofibration $h_c$ followed by an acyclic fibration $h_f$ in $\cat B$.
Write $j'' = h_cj$ and $q'' = q'h_f$, which are respectively an acyclic cofibration and an acyclic fibration as composites of such, and produce a new factorization $p(f) = q''j''$. \begin{displaymath} \begin{tikzcd} pX \ar[rr,"j'"] \ar[dd,"j" swap] \drar["j''"] & & C' \ar[dd,"q'"] \\ & C'' \urar["h_f"] \drar["q''"] & \\ C \urar["h_c"] \ar[rr,"q" swap] & & pY \end{tikzcd} \end{displaymath} Write $r$ for the composite $\middlefact f {j'} {q'} \circ \cocart {j'} X \from X \to \push {j'} X \to \pull {q'} Y$. Then $r$ is above the acyclic cofibration $j' = h_fj''$ and lemma \ref{lem:exists-forall-above-acof} can be applied: $\pushfact r$ is a weak equivalence in $\fiber{\cat E}{C'}$ if and only if $\middlefact r {j''} {h_f} \from \push {j''} X \to \pull {(h_f)} \pull{q'} Y$ is a weak equivalence in $\fiber{\cat E}{C''}$. And by definition $\pushfact r = \middlefact f {j'} {q'}$. So $\middlefact f {j'} {q'}$ is a weak equivalence in $\fiber{\cat E}{C'}$ if and only if $\middlefact r {j''} {h_f}$ is such in $\fiber{\cat E}{C''}$. Similarly write $s$ for the composite $\cart q Y \circ \middlefact f j q \from \push j X \to \pull q Y\to Y$. Then $s$ is above the acyclic fibration $q = q''h_c$ and lemma \ref{lem:exists-forall-above-afib} can be applied: $\pullfact s$ is a weak equivalence in $\fiber{\cat E}{C}$ if and only if $\middlefact s {h_c} {q''} \from \push {(h_c)} \push j X \to \pull {q''} Y$ is a weak equivalence in $\fiber{\cat E}{C''}$. And by definition $\pullfact s = \middlefact f j q$. So $\middlefact f j q$ is a weak equivalence in $\fiber{\cat E}{C}$ if and only if $\middlefact s {h_c} {q''}$ is such in $\fiber{\cat E}{C''}$. Now recall that $j'' = h_cj$ and $q'' = q'h_f$.
By lemmas \ref{lem:pseudo-push-iso} and \ref{lem:pseudo-pull-iso}, there exist isomorphisms $\push {j''} X \simeq \push{h_c}\push{j} X$ and $\pull{q''}Y \simeq \pull {h_f} \pull {q'} Y$ in the fiber $\fiber{\cat E}{C''}$ making the following commute: \begin{displaymath} \begin{tikzcd}[row sep = large]% \push {h_c} \push {j} X \drar["\middlefact s {h_c} {q''}" swap] \rar["\simeq"] & \push{j''} X \dar["\middlefact f {j''} {q''}" description] \drar["\middlefact r {j''} {h_f}"] & \\ & \pull{q''} Y \rar["\simeq"] & \pull {h_f} \pull {q'} Y \end{tikzcd} \end{displaymath} In particular, the morphisms $\middlefact r {j''} {h_f}$ and $\middlefact s {h_c} {q''}$ are weak equivalences together. We conclude the argument: $\middlefact f {j'} {q'}$ is a weak equivalence in $\fiber{\cat E}{C'}$ if and only if $\middlefact r {j''} {h_f}$ is such in $\fiber{\cat E}{C''}$, if and only if $\middlefact s {h_c} {q''}$ is such in $\fiber{\cat E}{C''}$, if and only if $\middlefact f j q$ is a weak equivalence in $\fiber{\cat E}C$. \end{proof} The previous result allows the following ``trick'': to prove that a map $f$ of $\cat E$ is in $\totalweak{\cat E}$, you just need to find \showcase{some} factorization $p(f) = qj$ as an acyclic cofibration followed by an acyclic fibration such that $\middlefact f j q$ is acyclic inside its fiber (this is just the definition of $\totalweak{\cat E}$ after all); but given that $f \in \totalweak{\cat E}$, you can use the fact that $\middlefact f j q$ is a weak equivalence for \showcase{every} admissible factorization of $p(f)$! \medskip% We shall use this trick extensively in the proof of the 2-out-of-3 property for $\totalweak{\cat E}$. This will conclude the proof of sufficiency in theorem \ref{thm:main}. \begin{proposition} The class $\totalweak{\cat E}$ has the 2-out-of-3 property. \end{proposition} \begin{proof} We suppose given a commutative triangle $h = gf$ in the total category $\cat E$, and we proceed by case analysis.
First case: suppose that $f,g \in \totalweak{\cat E}$; we want to show that $h \in \totalweak{\cat E}$. Since $f$ and $g$ are elements of $\totalweak {\cat E}$, there exists a pair of factorizations $p(f) = qj$ and $p(g) = q'j'$ with $j,j'$ acyclic cofibrations and $q,q'$ acyclic fibrations of $\cat B$ such that both $\middlefact f j q$ and $\middlefact g {j'} {q'}$ are weak equivalences in their respective fibers. The weak equivalence $j'q$ of $\cat B$ can be factorized as $q''j''$ with $j''$ an acyclic cofibration and $q''$ an acyclic fibration. We write $i = j''j$ and $r = q'q''$, and we notice that $p(h) = ri$, as depicted below. \begin{equation} \label{eq:decompose-pf-and-pg-acyclic}% \begin{tikzcd} pX \ar[rr,bend left=40, "i"] \rar["j"] \drar[lightgray,"p(f)" swap] \ar[ddrr,lightgray,bend left,"p(h)"] & A \dar["q"{fill=white},crossing over] \rar["j''"] & C \dar["q''"] \ar[dd,bend left=50,"r"] \\ & pY \rar["j'"{fill=white},crossing over] \drar[lightgray,"p(g)" swap] & B \dar["q'"] \\ & & pZ \end{tikzcd} \end{equation} Since $i$ is an acyclic cofibration and $r$ is an acyclic fibration, it is enough to show that $\middlefact h i r \from \push i X \to \pull r Z$ is a weak equivalence in $\fiber{\cat E}C$ in order to conclude that $h \in \totalweak{\cat E}$.
Since $i = j''j$ and $r=q'q''$, corollary \ref{cor:pseudo-iso-in-fiber} states that it is equivalent to show that the isomorphic arrow $\tilde h \from \push {j''} \push j X \to \pull {q''} \pull {q'} Z$ is a weak equivalence, where $\tilde h$ is defined as the unique arrow in the fiber $\fiber{\cat E}C$ making the following commute: \begin{displaymath} \begin{tikzcd} X \rar["\lambda"] \ar[drrrr,"h" near end,rounded corners,to path={(\tikztostart.south) |- ([yshift=1em]\tikztotarget.north) -- (\tikztotarget)}] & \push j X \rar["\lambda"] & \push {j''} {\push j X} \ar[d,"\tilde h"{fill=white},crossing over] & & \\ & & \pull{q''} \pull{q'} Z \rar["\rho" swap] & \pull {q'} Z \rar["\rho" swap] & Z \end{tikzcd} \end{displaymath} Since $h=gf$, such an arrow $\tilde h$ is given by the composite \begin{displaymath} \begin{tikzcd}[column sep=huge] \push {j''} \push {j} X \rar["\push {j''}(\middlefact f {j} {q})"] & \push {j''} \pull q Y \rar["\mu_Y"] & \pull{q''}\push{j'}Y \rar["\pull {q''}(\middlefact g {j'} {q'})"] & \pull {q''} \pull {q'} Z \end{tikzcd} \end{displaymath} where $\mu_Y$ is the component at $Y$ of the mate $\mu \from \push{j''}\pull q \to\pull{q''}\push{j'}$ of the commutative square $q''j'' = j'q$ of $\cat B$ (see diagram \eqref{eq:decompose-pf-and-pg-acyclic} above).
\begin{displaymath} \begin{tikzcd}% X \ar[rrrr,lightgray,"f",bend left=20] \rar["\lambda"] & \push j X \rar["\lambda"] \dar["\middlefact f j q"]& \push {j''} {\push j X} & & \color{lightgray} Y \ar[ddd,lightgray,"g",bend left=20] \\ & \pull q Y \ar[urrr,"\rho" near end,lightgray,bend left=10] \rar["\lambda"{fill=white}] & \push{j''} \pull q Y \dar["\mu_Y"] \uar["\push {j''} (\middlefact f j q)"{fill=white,swap},crossing over,leftarrow] & & \\ & & \pull{q''} \push{j'} Y \rar["\rho" swap] \dar["\pull {q''}(\middlefact g {j'} {q'})" swap] & \push {j'} Y \dar["\middlefact g {j'} {q'}" swap] \ar[uur,lightgray,"\lambda",leftarrow,bend left=10] & \\ & & \pull{q''} \pull{q'} Z \rar["\rho" swap] & \pull {q'} Z \rar["\rho" swap] & Z \end{tikzcd} \end{displaymath} % We can conclude that $\tilde h$ is a weak equivalence in $\fiber{\cat E}C$ because it is a composite of such. Indeed: \begin{itemize} \item hypothesis \eqref{hyp:hBC} can be applied to the square $q''j''=j'q$, and so $\mu_Y$ is a weak equivalence in $\fiber{\cat E}C$, \item and by hypothesis \eqref{hyp:weak-conservative}, the functors $\push{j''}$ and $\pull {q''}$ map the weak equivalences $\middlefact f j q$ and $\middlefact g {j'} {q'}$ to weak equivalences in $\fiber{\cat E}C$. \end{itemize} Suppose now that $f$ and $h$ are in $\totalweak {\cat E}$; we will show that $g$ is as well. Since $p(f)$ and $p(h)$ are weak equivalences in $\cat B$, we can use the 2-out-of-3 property of $\class W$ to deduce that $p(g)$ is one too. By hypothesis, $p(f) = qj$ with $j\in \class C \cap \class W$ and $q \in \class F \cap \class W$ and $\middlefact f j q$ a weak equivalence. Also write $p(g) = q'j'$ for some $j'\in \class C \cap \class W$ and $q' \in \class F \cap \class W$. We are done if we show that $\middlefact g {j'} {q'}$ is a weak equivalence. But in that situation, one can define $j'',q'',i,r,$ and $\tilde h$ as before.
So we end up with the same big diagram, except that this time $\push {j''} (\middlefact f j q)$, $\mu_Y$ and the composite $\tilde h$ are weak equivalences of $\fiber{\cat E} C$, yielding $\pull {q''}(\middlefact g {j'} {q'})$ as a weak equivalence by the 2-out-of-3 property. But $\pull {q''}$ being homotopically conservative by \eqref{hyp:weak-conservative}, this shows that $\middlefact g {j'} {q'}$ is a weak equivalence in $\fiber{\cat E}B$. The last case, where $g$ and $h$ are in $\totalweak{\cat E}$, is strictly dual. \end{proof} \section{Illustrations} \label{sec:examples} From the very start, our work has been motivated by the idea that the Reedy model structure can be reconstructed by applying a series of Grothendieck constructions of model categories. The key observation is that the latching and matching functors define a bifibration at each step of the construction of the model structure. We explain in section \ref{subsec:reedy} how the Reedy construction can be revisited from our bifibrational point of view. In section \ref{subsec:gen-reedy}, we describe how to adapt this approach to express generalized Reedy constructions in a similar fashion. In section \ref{subsec:versus-hp-rs}, we recall the previous notions of bifibration of model categories appearing in the literature and, although all of them are special cases of Quillen bifibrations, we indicate why they do not fit our purpose. \subsection{A bifibrational view on Reedy model structures} \label{subsec:reedy} Recall that a \define{Reedy category} is a small category $\cat R$ together with two subcategories $\cat R^+$ and $\cat R^-$ and a degree function $d \from \ob{\cat R} \to \lambda$ for some ordinal $\lambda$ such that \begin{itemize} \item every morphism $f$ admits a unique factorization $f = f^+f^-$ with $f^- \in \cat R^-$ and $f^+ \in \cat R^+$, \item non-identity morphisms of $\cat R^+$ strictly raise the degree and those of $\cat R^-$ strictly lower it.
\end{itemize} For such a Reedy category, let $\deginf{\cat R} \mu$ denote the full subcategory spanned by objects of degree strictly less than $\mu$. In particular, $\cat R = \deginf{\cat R}\lambda$. Remark also that every $\deginf{\cat R}\mu$ inherits a structure of Reedy category from $\cat R$. We are interested in the structure of the category of diagrams of shape $\cat R$ in a complete and cocomplete category $\cat C$. The category $\cat C$ is in particular tensored and cotensored over $\concrete{Set}$, these being respectively given by \begin{displaymath} S \mathbin \odot C = \coprod_{s \in S} C, \qquad S \mathbin \pitchfork C = \prod_{s \in S} C, \qquad S\in\concrete{Set}, C\in \cat C. \end{displaymath} For every $r \in \cat R$ of degree $\mu$, a diagram $X \from \deginf{\cat R}\mu \to \cat C$ induces two objects in $\cat C$, called the \define{latching} and \define{matching} objects of $X$ at $r$, respectively defined as: \begin{displaymath} \latching r X = \colimend{s \in \deginf{\cat R}\mu} \hom[\cat R] s r \mathbin \odot X_s , \qquad \matching r X = \limend{s \in \deginf{\cat R}\mu} \hom[\cat R] r s \mathbin \pitchfork X_s \end{displaymath} By abuse, we also write $\latching r X$ and $\matching r X$ for the latching and matching objects of the restriction to $\deginf{\cat R}\mu$ of some $X \from \deginf{\cat R}\kappa \to \cat C$ with $\kappa \geq \mu$. In particular, when $\kappa = \lambda$, $X$ is a diagram of shape the entire category $\cat R$ and we retrieve the textbook notion of latching and matching objects (see for instance \cite{hovey:model-cat}). The universal properties of limits and colimits induce a family of canonical morphisms $\alpha_r \from \latching r X \to \matching r X$, which can also be understood in the following way.
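Before doing so, a brief aside: the unique factorization axiom of Reedy categories is concrete enough to be checked mechanically on the motivating example, the simplex category $\Delta$, which is Reedy with $d([n]) = n$, with $\Delta^+$ the injective monotone maps and $\Delta^-$ the surjective ones; the factorization $f = f^+f^-$ is then the usual surjection-injection factorization of a monotone map. The following Python sketch (ours, purely illustrative and independent of the text) verifies this exhaustively in low degree:

```python
from itertools import product

def factor(f):
    """Factor a monotone map f: [m] -> [n], encoded as the value tuple
    (f(0), ..., f(m)), into f = f_plus . f_minus with f_minus a
    surjection (a map of Delta^-) and f_plus an injection (of Delta^+)."""
    image = sorted(set(f))
    rank = {v: r for r, v in enumerate(image)}
    f_minus = tuple(rank[v] for v in f)   # surjection [m] ->> [k]
    f_plus = tuple(image)                 # injection  [k] >-> [n]
    return f_minus, f_plus

def compose(g, f):
    """(g . f)(i) = g(f(i)), for maps encoded as value tuples."""
    return tuple(g[i] for i in f)

# Exhaustive check over all monotone maps [3] -> [2].
m, n = 3, 2
monotone = [f for f in product(range(n + 1), repeat=m + 1)
            if all(f[i] <= f[i + 1] for i in range(m))]
for f in monotone:
    f_minus, f_plus = factor(f)
    assert compose(f_plus, f_minus) == f                 # f = f^+ f^-
    assert set(f_minus) == set(range(len(f_plus)))       # f^- surjective
    assert len(set(f_plus)) == len(f_plus)               # f^+ injective
```

Only existence and the membership conditions are checked here; uniqueness holds because $f^-$ is forced to be the map collapsing $f$ onto its image. We now come back to the canonical morphisms $\alpha_r$.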
First, one notices that the two functors $\deginf{\cat R}{\mu+1} \to \cat C$ defined by \begin{displaymath} r \mapsto \left\{ \begin{aligned} X_r \ &\text{if $d(r) < \mu$}\\ \latching r X \ &\text{if $d(r) = \mu$}\\ \end{aligned} \right. ,\qquad r \mapsto \left\{ \begin{aligned} X_r \ &\text{if $d(r) < \mu$}\\ \matching r X \ &\text{if $d(r) = \mu$}\\ \end{aligned} \right. \end{displaymath} are the skeleton and coskeleton of $X$, which provide left and right Kan extensions of $X$ along the inclusion $\deginf i \mu \from \deginf{\cat R}{\mu} \to \deginf{\cat R}{\mu+1}$. We write these two functors $\latching \mu X$ and $\matching \mu X$ respectively. The family of morphisms $\alpha_r$ then describes the unique natural transformation $\alpha \from \latching \mu X \to \matching \mu X$ that restricts to the identity on $\deginf {\cat R} \mu$. The following property is, in our opinion, the key feature of Reedy categories. \begin{proposition} \label{prop:key-feature-reedy}% Extensions of a diagram $X \from \deginf{\cat R}\mu \to \cat C$ to $\deginf{\cat R}{\mu+1}$ are in one-to-one correspondence with families of factorizations of the $\alpha_r$'s \begin{displaymath} \left( \latching r X \to \bullet \to \matching r X \right)_{r \in \cat R,d(r) = \mu} \end{displaymath} \end{proposition} \begin{proof} One direction is easy. Every extension $\hat X \from \deginf{\cat R}{\mu+1} \to \cat C$ of $X$ produces such a family of factorizations, but this has nothing to do with the Reedy structure: for every $r$ of degree $\mu$ in $\cat R$, the functoriality of $\hat X$ ensures that there is a coherent family of morphisms $X_s = \hat X_s \to \hat X_r$ for each arrow $s \to r$, and symmetrically a coherent family of morphisms $\hat X_r \to \hat X_{s'} = X_{s'}$ for each arrow $r \to s'$.
Hence one obtains the factorization of $\alpha_r$ given by the universal properties of limits and colimits: \begin{displaymath} \latching r X \to \hat X_r \to \matching r X. \end{displaymath} The useful feature is the converse: whereas usually, to construct an extension of $X$, one would have to define images for the arrows $r \to r'$ between objects of degree $\mu$ in a functorial way, here every such family automatically induces these arrows! This is a fortunate effect of the unique factorization property. Given factorizations $\latching r X \to X_r \to \matching r X$, one can define $X(f)$ for $f \from r \to r'$ as follows: factor $f = f^+f^-$ with $f^- \from r \to s$ lowering the degree and $f^+ \from s \to r'$ raising it, so that in particular $s \in \deginf{\cat R}\mu$; $f^-$ then gives rise to a canonical projection $\matching r X \to X_s$ and $f^+$ to a canonical injection $X_s \to \latching {r'} X$; the desired arrow $X(f)$ is given by the composite \begin{displaymath} X_r \to \matching r X \to X_s \to \latching {r'} X \to X_{r'} \end{displaymath} That this extension is well defined and functorial follows from the uniqueness of factorizations in the Reedy category $\cat R$. \end{proof} \medskip From now on, we fix a model category $\cat M$, that is a complete and cocomplete category $\cat M$ with a model structure $(\class C,\class W,\class F)$. The motivation behind Kan's notion of Reedy categories is to give sufficient conditions on $\cat R$ to equip $\functorcat{\cat R}{\cat M}$ with a model structure where weak equivalences are pointwise. \begin{definition}% Let $\cat R$ be Reedy.
The \define{Reedy triple} on the functor category $\functorcat{\cat R}{\cat M}$ is the data of the three following classes: \begin{itemize} \item Reedy cofibrations: those $f \from X \to Y$ such that for all $r \in\cat R$, the map $\latching r Y \sqcup_{\latching r X} X_r \to Y_r$ is a cofibration, \item Reedy weak equivalences: those $f \from X \to Y$ such that for all $r \in \cat R$, $f_r \from X_r \to Y_r$ is a weak equivalence, \item Reedy fibrations: those $f \from X \to Y$ such that for all $r \in\cat R$, the map $X_r \to \matching r X \times_{\matching r Y} Y_r$ is a fibration. \end{itemize} \end{definition} Kan's theorem about Reedy categories, of which our main result gives a slick proof, then states as follows: the Reedy triple makes $\functorcat{\cat R}{\cat M}$ into a model category. On a first reading, this definition/theorem is quite astonishing: the distinguished morphisms are defined through these latching and matching objects, and it is not clear, apart from being driven by the proof, why we should emphasize those constructions so much. We shall say a word about that later. \begin{remark} \label{rem:restr-along-inj-isofib}% Before going into proposition~\ref{prop:reedy-restriction-bifibration} below, we need to make a quick remark about extensions of diagrams up to isomorphism. Suppose given an injective-on-objects functor $i \from \cat A \to \cat B$ between small categories and a category $\cat C$; then for every diagram $D \from \cat A \to \cat C$, every diagram $D' \from \cat B \to \cat C$ and every isomorphism $\alpha \from D\to D'i$, there exists a diagram $D''\from \cat B \to \cat C$ isomorphic to $D'$ such that $D''i = D$ (and the isomorphism $\beta\from D'' \to D'$ can be chosen so that $\beta i = \alpha$). Informally, this says that every ``up to isomorphism'' extension of $D$ can be rectified into a strict extension of $D$.
Put formally, we are claiming that the restriction functor $\restr i \from \functorcat{\cat B}{\cat C} \to \functorcat{\cat A}{\cat C}$ is an isofibration. Although it can be shown easily by hand, we would like to present an alternate proof based on homotopical algebra. Taking a universe $\mathbb U$ big enough for $\cat C$ to be small relative to $\mathbb U$, we can consider the folk model structure on the category $\concrete{Cat}$ of $\mathbb U$-small categories. With its usual cartesian product, $\concrete{Cat}$ is a closed monoidal model category in which every object is fibrant. It follows that $\functorcat{-}{\cat C}$ maps cofibrations to fibrations (see \cite[Remark 4.2.3]{hovey:model-cat}). Then, the injective-on-objects functor $i \from \cat A\to \cat B$ is a cofibration, so it is mapped to a fibration $\restr i \from \functorcat{\cat B}{\cat C} \to \functorcat{\cat A}{\cat C}$. Recall that fibrations in $\concrete{Cat}$ are precisely the isofibrations, and we obtain the result. \end{remark} \begin{proposition} \label{prop:reedy-restriction-bifibration}% Let $\cat R$ be Reedy. The restriction functor $\restr{\deginf i \mu} \from \functorcat{\deginf{\cat R}{\mu+1}}{\cat M} \to \functorcat{\deginf{\cat R}{\mu}}{\cat M}$ is a Grothendieck bifibration. \end{proposition} \begin{proof} The claim is that a morphism $f \from X \to Y$ is cartesian precisely when the following diagram is a pullback square: \begin{equation} \label{eq:cartesian-claim} \begin{tikzcd}[column sep=large] X \dar \rar["f"] & Y \dar \\ \matching \mu p X \rar["\matching \mu p(f)" swap] & \matching \mu p Y \end{tikzcd} \end{equation} where the vertical arrows are the components at $X$ and $Y$ of the unit $\eta$ of the adjunction $(p,\matching \mu)$.
Indeed, such a diagram is a pullback square if and only if the following square is a pullback for all $Z$: \begin{displaymath} \begin{tikzcd}[column sep=large] \hom[\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}] Z X \dar["\eta_X \circ {-}"] \rar["f\circ{-}"] & \hom[\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}] Z Y \dar["\eta_Y \circ {-}"] \\ \hom[\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}] Z {\matching \mu p X} \rar["\matching \mu p(f) \circ {-}" swap] & \hom[\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}] Z {\matching \mu p Y} \end{tikzcd} \end{displaymath} We can take advantage of the adjunction $(p,\matching \mu)$ and its natural isomorphism \begin{displaymath} \phi_{Z,A} \from \hom[\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}] Z {\matching \mu A} \simeq \hom[\functorcat{\deginf{\cat R}{\mu}}{\cat M}] {pZ} A \end{displaymath} As in any adjunction, this isomorphism is related to the unit by the following identity: for any $g \from Z \to X$, $p(g) = \phi(\eta_X g)$. So in the end, the square in~(\ref{eq:cartesian-claim}) is a pullback if and only if for every $Z$ the outer square of the following diagram is a pullback: \begin{displaymath} \begin{tikzcd}[column sep=large]% \hom[\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}] Z X \dar["\eta_X \circ {-}"] \rar["f\circ{-}"] \ar[dd,"p"{swap},rounded corners,to path={-- ([xshift=-2em]\tikztostart.west) -- ([xshift=-2em]\tikztotarget.west) \tikztonodes -- (\tikztotarget)}] & \hom[\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}] Z Y \dar["\eta_Y \circ {-}"] \ar[dd,"p",rounded corners,to path={-- ([xshift=2em]\tikztostart.east) -- ([xshift=2em]\tikztotarget.east) \tikztonodes -- (\tikztotarget)}] \\ \hom[\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}] Z {\matching \mu p X} \rar["\matching \mu p(f) \circ {-}" swap] \dar["\phi"{swap},"\rotatebox{-90}{$\simeq$}"] & \hom[\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}] Z {\matching \mu p Y} \dar["\phi"{swap},"\rotatebox{-90}{$\simeq$}"] \\ \hom[\functorcat{\deginf{\cat R}{\mu}}{\cat M}] {pZ} {pX} 
\rar["p(f) \circ {-}" swap] & \hom[\functorcat{\deginf{\cat R}{\mu}}{\cat M}] {pZ} {pY} \end{tikzcd} \end{displaymath} This is exactly the definition of a cartesian morphism. Dually, we can prove that cocartesian morphisms are those $f \from X \to Y$ such that the following is a pushout square: \begin{displaymath} \begin{tikzcd} \latching \mu p X \dar \rar["\latching \mu p(f)"] & \latching \mu p Y \dar \\ X \rar["f" swap] & Y \end{tikzcd} \end{displaymath} Now for $u \from A \to pY$ in $\functorcat{\deginf{\cat R}{\mu}}{\cat M}$, one should construct a cartesian morphism $f \from X \to Y$ above $u$. First notice that we constructed $\matching \mu$ in such a way that $p \matching \mu = \id{}$ (even more, the counit $p \matching \mu \to \id{}$ is the identity natural transformation). So $\matching \mu A$ is above $A$ and we could be tempted to take, for the wanted $f$, the morphism $\kappa \from \matching \mu A \times_{\matching \mu pY} Y \to Y$ appearing in the following pullback square: \begin{equation} \label{eq:pullback-not-exactly-above}% \begin{tikzcd} \bullet \dar \rar["\kappa"] & Y \dar \\ \matching \mu A \rar["\matching \mu u" swap] & \matching \mu p Y \end{tikzcd} \end{equation} But $\kappa$ is not necessarily above $u$. Indeed, as a right adjoint, $p$ preserves pullbacks. So we get that the following is a pullback in $\functorcat{\deginf{\cat R}\mu}{\cat M}$: \begin{displaymath} \begin{tikzcd} p(\bullet) \dar \rar["p(\kappa)"] & pY \dar["\id Y"] \\ A \rar["u" swap] & pY \end{tikzcd} \end{displaymath} We certainly know another pullback square of the same diagram, namely \begin{displaymath} \begin{tikzcd} A \dar["\id A" swap] \rar["u"] & pY \dar["\id Y"] \\ A \rar["u" swap] & pY \end{tikzcd} \end{displaymath} So, by universal property, we obtain an isomorphism $\alpha\from A \to p(\matching \mu A \times_{\matching \mu pY} Y)$.
Now we summon remark~\ref{rem:restr-along-inj-isofib} to get an extension $X$ of $A$ and an isomorphism $\beta \from X \to \matching \mu A \times_{\matching \mu pY} Y$ above $\alpha$. The wanted $f \from X \to Y$ is then just the composite $\kappa\beta$, which is cartesian because the outer square in the following is a pullback (as we chose~\eqref{eq:pullback-not-exactly-above} to be one): \begin{displaymath} \begin{tikzcd} X \drar["\beta"] \ar[dd] \ar[rr,"f"] & & Y \ar[dd] \\ & \bullet \dlar \urar["\kappa"] & \\ \matching \mu A \ar[rr,"\matching \mu u" swap] & & \matching \mu p Y \end{tikzcd} \end{displaymath} The fact that the vertical map $X \to \matching \mu A = \matching \mu p X$ is indeed the unit $\eta$ of the adjunction at component $X$ comes directly from the fact that its image by $p$ is $\id A$. The existence of cocartesian morphisms above any $u \from pX \to B$ is strictly dual, using this time the cocontinuity of $p$ as a left adjoint. \end{proof} \begin{remark} First, we should notice that proposition \ref{prop:key-feature-reedy} makes the following multievaluation functor an equivalence: \begin{displaymath} \tag{I} \label{eq:fiber-multieval}% \fiber{\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}} A \xrightarrow \sim \prod_{r \in \cat R, d(r) = \mu} \coslice{\slice{\cat M}{\matching r A}}{\latching r A} \end{displaymath} The notation $\coslice{\slice{\cat M}{\matching r A}}{\latching r A}$ is slightly abusive and means the coslice category of $\slice{\cat M}{\matching r A}$ by $\alpha_r$, or equivalently the slice category of $\coslice{\cat M}{\latching r A}$ by $\alpha_r$.
Secondly, we can draw from the previous proof that for a morphism $f \from X \to Y$, the fiber morphisms $\pullfact f$ and $\pushfact f$ are, modulo identification \eqref{eq:fiber-multieval}, the respective induced families defining the Reedy triple: \begin{displaymath} (X_r \to \matching r X \times_{\matching r Y} Y_r)_{r,d(r)=\mu} ,\qquad (X_r \sqcup_{\latching r X} \latching r Y \to Y_r)_{r,d(r)=\mu} \end{displaymath} So here it is: the reason behind those {\em a priori} mysterious morphisms, involving latching and matching objects, is nothing other than the witness of a hidden bifibrational structure. Bringing this to light was a tremendous leap in our conceptual understanding of Reedy model structures and their generalizations. \end{remark} The following proposition is the induction step for successor ordinals in the usual proof of the existence of Reedy model structures. Our main theorem \ref{thm:main} allows a very smooth argument. \begin{proposition} \label{prop:reedy-from-main-thm}% If the Reedy triple on $\functorcat{\deginf{\cat R}{\mu}}{\cat M}$ forms a model structure, then it is also the case on $\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}$. \end{proposition} \begin{proof} Of course, the goal is to use theorem \ref{thm:main} on the Grothendieck bifibration $\restr{\deginf i \mu} \from \functorcat{\deginf{\cat R}{\mu+1}}{\cat M} \to \functorcat{\deginf{\cat R}{\mu}}{\cat M}$. By hypothesis, the base $\functorcat{\deginf{\cat R}{\mu}}{\cat M}$ has a model structure given by the Reedy triple. Each fiber $\fiber{(\restr{\deginf i \mu})} A$ above a diagram $A$ is endowed, via identification \eqref{eq:fiber-multieval}, with the product model structure: indeed, if $\cat N$ is a model category, so are its slice categories $\slice{\cat N}N$ and coslice categories $\coslice{\cat N}N$, a morphism being defined to be a cofibration, a fibration or a weak equivalence if it is one in $\cat N$; products of model categories are model categories by taking the pointwise defined structure.
All in all, it means that the following makes the fiber $\fiber{(\restr{\deginf i \mu})} A$ into a model category: a fiber map $f \from X \to X'$ in $\fiber{(\restr{\deginf i \mu})} A$ is a cofibration, a fibration or a weak equivalence if and only if $f_r \from X_r \to X'_r$ is one for every $r \in \cat R$ of degree $\mu$. Now the proof amounts to showing that hypotheses \eqref{hyp:quillen-push-pull}, \eqref{hyp:weak-conservative} and \eqref{hyp:hBC} are satisfied in this framework. Let us first tackle \eqref{hyp:quillen-push-pull}. Suppose $u \from A \to B$ in $\functorcat{\deginf{\cat R}{\mu}}{\cat M}$ and $f \from Y \to Y'$ a fiber morphism at $B$. Then by definition of the cartesian morphisms in $\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}$, $\pull u f$ is the unique map above $A$ making the following diagram commute for all $r$ of degree $\mu$: \begin{equation} \label{eq:pullback-reedy}% \begin{tikzcd} (\pull u Y)_r \rar \dar["(\pull u f)_r" swap] & Y_r \dar["f_r"] \\ (\pull u Y')_r \rar \dar & Y'_r \dar \\ \matching r A \rar["\matching r u" swap] & \matching r B \end{tikzcd} \end{equation} where the lower square and outer square are pullback diagrams. By the pasting lemma, so is the upper square. Hence $(\pull u f)_r$ is a pullback of $f_r$, and as such is an (acyclic) fibration whenever $f_r$ is one. This proves that $\pull u$ is right Quillen for any $u$, that is \eqref{hyp:quillen-push-pull}. Goals \eqref{hyp:weak-conservative} and \eqref{hyp:hBC} will be handled in much the same way, relying on the following well-known fact about Reedy model structures \cite[lemma 15.3.9]{hirschhorn:loc}: for $r \in \cat R$ of degree $\mu$, the functor $\matching r \from \functorcat{\deginf {\cat R} \mu}{\cat M} \to \cat M$ preserves acyclic fibrations\footnote{Actually it is right Quillen, but we will not need that much here.}.
This has a wonderful consequence: if $u$ is an acyclic fibration of $\functorcat{\deginf {\cat R} \mu}{\cat M}$, any pullback of $\matching r u$ is an acyclic fibration, hence a weak equivalence. So the horizontal arrows of the upper square of diagram \eqref{eq:pullback-reedy} are weak equivalences. By the 2-out-of-3 property, $f_r$ on the right is a weak equivalence if and only if $(\pull u f)_r$ is one. This being true for each $r \in \cat R$ of degree $\mu$ makes $\pull u$ homotopically conservative whenever $u$ is an acyclic fibration. This validates half of the property \eqref{hyp:weak-conservative}. The other half is proven dually, resting on the dual lemma: for any $r \in \cat R$ of degree $\mu$, the latching functor $\latching r \from \functorcat{\deginf {\cat R} \mu}{\cat M} \to \cat M$ preserves acyclic cofibrations; one then deduces that pushouts of $\latching r u$ are weak equivalences whenever $u$ is an acyclic cofibration. It remains to show \eqref{hyp:hBC}. Everything is already in place and it is just a matter of expressing it.
For a commutative square of $\functorcat{\deginf {\cat R} \mu}{\cat M}$ \begin{displaymath} \begin{tikzcd} A \rar["v"] \dar["u'" swap] & C \dar["u"] \\ C' \rar["v'" swap] & B \end{tikzcd} \end{displaymath} with $u,u'$ Reedy acyclic cofibrations and $v,v'$ Reedy acyclic fibrations, the mate at an extension $Z$ of $C$ is the unique fiber morphism $\nu_Z \from (\push {u'} \pull v Z) \to (\pull {v'} \push u Z)$ making the following commute for every $r \in \cat R$ of degree $\mu$: \begin{displaymath} \begin{tikzcd} & \color{lightgray} \matching r A \ar[rr,"\matching r v",lightgray] & & \color{lightgray} \matching r C & \\ \color{lightgray} \latching r A \dar["\latching r u'" swap,lightgray] \rar[lightgray] & (\pull v Z)_r \ar[rr] \dar \uar[lightgray] & & Z_r \ar[dd] \uar[lightgray] & \color{lightgray} \latching r C \lar[lightgray] \ar[dd,"\latching r v",lightgray] \\ \color{lightgray} \latching r C' \rar[lightgray] & (\push {u'} \pull v Z)_r \drar["(\nu_Z)_r"] & & & \\ & & (\pull {v'} \push u Z)_r \rar \dar[lightgray] & (\push u Z)_r \dar[lightgray] & \color{lightgray} \latching r B \lar[lightgray] \\ & & \color{lightgray} \matching r C' \rar["\matching r v'" swap,lightgray] & \color{lightgray} \matching r B & \end{tikzcd} \end{displaymath} where the grayscaled squares are either pullbacks (when involving matching objects) or pushouts (when involving latching objects). So by the same argument as above, the horizontal and vertical arrows of the pentagon are weak equivalences, making the $r$-component of the mate $(\nu_Z)_r$ a weak equivalence as well, by the 2-out-of-3 property. Theorem \ref{thm:main} now applies, and yields a model structure on $\functorcat{\deginf{\cat R}{\mu+1}}{\cat M}$ which is readily the Reedy triple. \end{proof} \subsection{Notions of generalized Reedy categories} \label{subsec:gen-reedy} From time to time, people stumble across almost Reedy categories and build {\em ad hoc} workarounds to end up with a structure ``à la Reedy''.
The most popular such generalizations are probably Cisinski's \cite{cisinski:thesis} and Berger-Moerdijk's \cite{berger-moerdijk:gen-reedy}, allowing for non-trivial automorphisms. In \cite{shulman:gen-reedy}, Shulman establishes a common framework for every such known generalization of Reedy categories (including enriched ones, which go beyond the scope of this paper). Roughly put, Shulman defines {\em almost-Reedy categories} to be those small categories $\cat C$ with a degree function on the objects satisfying the following property: taking $x$ of degree $\mu$ and denoting $\deginf{\cat C}\mu$ the full subcategory of $\cat C$ of objects of degree strictly less than $\mu$, and ${\cat C}_x$ the full subcategory of $\cat C$ spanned by $\deginf{\cat C}\mu$ and $x$, the diagram category $\functorcat{\cat C_x}{\cat M}$ is obtained as the {\em bigluing} (to be defined below) of two nicely behaved functors $\functorcat{\deginf{\cat C}\mu}{\cat M} \to \cat M$, namely the weighted colimit and weighted limit functors, respectively weighted by $\hom[\cat C] - x$ and $\hom[\cat C] x -$. In particular, usual Reedy categories are recovered when realizing that the given formulas of latching and matching objects are precisely these weighted colimits and limits. \medskip In order to understand completely the generalization proposed in \cite{shulman:gen-reedy}, we propose an alternative view on the Reedy construction which we presented in detail in the previous section. For starters, here is a nice consequence of theorem~\ref{thm:main}: \begin{lemma} \label{lem:quillen-bifib-by-pullback}% Suppose there is a strict pullback square of categories \begin{displaymath} \begin{tikzcd} \cat F \rar \dar["q"] & \cat E \dar["p"] \\ \cat C \rar["F"] & \cat B \end{tikzcd} \end{displaymath} in which $\cat C$ has a model structure and $p$ is a Quillen bifibration.
If \begin{enumerate}[label=(\roman*)] \item \label{item:weak-conservative-through-F}% $\push{F(u)}$ and $\pull{F(v)}$ are homotopically conservative whenever $u$ is an acyclic cofibration and $v$ an acyclic fibration in $\cat C$, \item \label{item:hBC-through-F}% $F$ maps squares of the form \begin{displaymath} \begin{tikzcd} A \rar["v"] \dar["u'"] & C \dar["u"] \\ C' \rar["v'"] & B \end{tikzcd} \end{displaymath} with $u,u'$ acyclic cofibrations and $v,v'$ acyclic fibrations in $\cat C$ to squares in $\cat B$ that satisfy the homotopical Beck-Chevalley condition, \end{enumerate} then $q$ is also a Quillen bifibration. \end{lemma} \begin{proof} Denote $p' \from \cat B \to \concrete{Adj}$ the pseudo functor $A \mapsto \fiber{\cat E}A$ associated to $p$. Then it is widely known that the pullback $q$ of $p$ along $F$ is the bifibration obtained by Grothendieck construction of the pseudo functor $p'F\from \cat C \to \concrete{Adj}$. It has fiber $\fiber{\cat F}C = \fiber{\cat E}{FC}$ at $C \in\cat C$, which has a model structure; and for any $u \from C \to D$ in $\cat C$, the adjunction \begin{displaymath} \push u \from \fiber{\cat F}C \rightleftarrows \fiber{\cat F}D \cofrom \pull u \end{displaymath} is given by the pair $(\push {F(u)}, \pull {F(u)})$ defined by $p$. Hence theorem~\ref{thm:main} asserts that $q$ is a Quillen bifibration as soon as \eqref{hyp:hBC} and \eqref{hyp:weak-conservative} are satisfied. The conditions of the lemma are precisely there to ensure that this is the case. \end{proof} Now recall that $\standsimp 1$ and $\standsimp 2$ are the posetal categories associated to $\{0<1\}$ and $\{0<1<2\}$ respectively, and write $c \from \standsimp 1 \to \standsimp 2$ for the functor associated with the mapping $0 \mapsto 0, 1\mapsto 2$. 
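Concretely, $\restr c$ maps a diagram $X_0 \to X_1 \to X_2$ of $\functorcat{\standsimp 2}{\cat M}$ to the composite arrow $X_0 \to X_2$; unfolding this (a routine verification, spelled out here for illustration), the fiber of $\restr c$ over an object $k \from A \to B$ of $\functorcat{\standsimp 1}{\cat M}$ is the category of factorizations \begin{displaymath} A \to N \to B \end{displaymath} of $k$, a morphism from $A \to N \to B$ to $A \to N' \to B$ being an arrow $N \to N'$ of $\cat M$ commuting with the two factorizations.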
Given a Reedy category $\cat R$ and an object $r$ of degree $\mu$, denote $\deginf i r \from \deginf{\cat R}\mu \to \deginf{\cat R}r$ the inclusion of the full subcategory of $\cat R$ spanned by the objects of degree strictly less than $\mu$ into the one spanned by the same objects plus $r$. Then proposition~\ref{prop:key-feature-reedy} asserts that the following is a strict pullback square of categories: \begin{displaymath} \begin{tikzcd} \functorcat{\deginf{\cat R}r}{\cat M} \rar \dar["\restr{\deginf i r}"{swap}] & \functorcat{\standsimp 2}{\cat M} \dar["\restr c"] \\ \functorcat{\deginf{\cat R}\mu}{\cat M} \rar["\alpha_r"{swap}] & \functorcat{\standsimp 1}{\cat M} \end{tikzcd} \end{displaymath} where the bottom functor maps every diagram $X \from \deginf {\cat R} \mu \to \cat M$ to the canonical arrow $\alpha_r \from \latching r X \to \matching r X$. Moreover the functor $\restr c$ is a Grothendieck bifibration: one can easily verify that an arrow in $\functorcat{\standsimp 2}{\cat M}$ \begin{displaymath} \begin{tikzcd} \bullet \rar["f"] \dar & \bullet \dar \\ \bullet \rar["g"] \dar & \bullet \dar \\ \bullet \rar["h"] & \bullet \end{tikzcd} \end{displaymath} is cartesian if and only if the bottom square is a pullback, and is cocartesian if and only if the top square is a pushout. In particular, for each object $k\from A\to B$ of $\functorcat{\standsimp 1}{\cat M}$ we have a model structure on its fiber $\fiber{(\restr c)}k \simeq \coslice{\slice{\cat M}B}A$. Stability of cofibrations under pushout and of fibrations under pullback in the model category $\cat M$ translates into saying that hypothesis~\eqref{hyp:quillen-push-pull} is satisfied by $\restr c$. In other words, by equipping the basis category $\functorcat{\standsimp 1}{\cat M}$ with the trivial model structure, theorem~\ref{thm:main} applies (\eqref{hyp:hBC} and \eqref{hyp:weak-conservative} are vacuously met) and makes $\restr c$ a Quillen bifibration.
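To see the Reedy triple at play on the smallest example, one can regard $\standsimp 1$ itself as a Reedy category, with $d(0)=0$, $d(1)=1$ and the unique non-identity arrow $0 \to 1$ raising the degree. A routine computation, spelled out here for illustration, shows that for a diagram $X \from \standsimp 1 \to \cat M$, that is an arrow $X_0 \to X_1$ of $\cat M$, \begin{displaymath} \latching 0 X = \emptyset, \qquad \latching 1 X = X_0, \qquad \matching 0 X = \matching 1 X = \ast \end{displaymath} where $\emptyset$ and $\ast$ denote the initial and terminal objects of $\cat M$ (all the involved colimits and limits are taken over empty categories, except the one defining $\latching 1 X$). Hence a morphism $f \from X \to Y$ is a Reedy cofibration exactly when both $X_0 \to Y_0$ and $\latching 1 Y \sqcup_{\latching 1 X} X_1 = Y_0 \sqcup_{X_0} X_1 \to Y_1$ are cofibrations, and a Reedy fibration exactly when both $X_0 \to Y_0$ and $X_1 \to Y_1$ are fibrations: the Reedy triple on $\functorcat{\standsimp 1}{\cat M}$ is the projective model structure on the arrow category of $\cat M$.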
The content of the proof of proposition~\ref{prop:reedy-from-main-thm} is precisely showing conditions~\ref{item:weak-conservative-through-F} and~\ref{item:hBC-through-F} of lemma~\ref{lem:quillen-bifib-by-pullback}. We can then conclude that $\restr{\deginf i r} \from \functorcat{\deginf {\cat R} r}{\cat M} \to \functorcat{\deginf {\cat R} \mu}{\cat M}$ is a Quillen bifibration as in proposition~\ref{prop:reedy-from-main-thm}. The result of \cite[Theorem 3.11]{shulman:gen-reedy} falls within this view. Shulman defines the \define{bigluing} of a natural transformation $\alpha \from F \to G$ between two functors $F,G \from \cat M \to \cat N$ as the category $\bigluing \alpha$ whose: \begin{itemize} \item objects are factorizations \begin{displaymath} \alpha_M \from FM \overset f \to N \overset g \to GM \end{displaymath} \item morphisms $(f,g) \overset {(h,k)} \to (f',g')$ are commutative diagrams of the form \begin{displaymath} \begin{tikzcd} FM \rar["f"] \dar["F(h)" swap] & N \rar["g"] \dar["k"] & GM \dar["G(h)"] \\ FM' \rar["{f'}" swap] & N' \rar["{g'}" swap] & GM' \end{tikzcd} \end{displaymath} \end{itemize} Otherwise put, the category $\bigluing \alpha$ is a pullback as in: \begin{displaymath} \begin{tikzcd} \bigluing \alpha \rar \dar & \functorcat{\standsimp 2}{\cat N} \dar["\restr c"] \\ \cat M \rar["\alpha"{swap}] & \functorcat{\standsimp 1}{\cat N} \end{tikzcd} \end{displaymath} In the same fashion as in the proof of proposition \ref{prop:reedy-from-main-thm}, we can show that conditions~\ref{item:weak-conservative-through-F} and~\ref{item:hBC-through-F} are satisfied for the bottom functor (which we abusively also name $\alpha$) when $F$ maps acyclic cofibrations to {\em couniversal weak equivalences} and $G$ maps acyclic fibrations to {\em universal weak equivalences}. By a couniversal weak equivalence is meant a map every pushout of which is a weak equivalence; by a universal weak equivalence is meant a map every pullback of which is a weak equivalence.
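Let us record a simple observation about these notions. Every acyclic fibration of $\cat N$ is a universal weak equivalence: acyclic fibrations are stable under pullback, being characterized by the right lifting property against cofibrations, and they are in particular weak equivalences. Dually, every acyclic cofibration is a couniversal weak equivalence. In particular, the hypotheses on $F$ and $G$ are met as soon as $F$ is left Quillen and $G$ is right Quillen.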
Now lemma~\ref{lem:quillen-bifib-by-pullback} directly proves Shulman's theorem. \begin{theorem}[Shulman] \label{thm:shulman}% Suppose $\cat N$ and $\cat M$ are both model categories. Let $\alpha \from F\to G$ be a natural transformation between functors $F,G \from \cat M \to \cat N$ such that: \begin{itemize} \item $F$ is cocontinuous and maps acyclic cofibrations to couniversal weak equivalences, \item $G$ is continuous and maps acyclic fibrations to universal weak equivalences. \end{itemize} Then $\bigluing \alpha$ is a model category whose: \begin{itemize} \item cofibrations are the maps $(h,k)$ such that both $h$ and the map $FM' \sqcup_{FM} N \to N'$ induced by $k$ are cofibrations in $\cat M$ and $\cat N$ respectively, \item fibrations are the maps $(h,k)$ such that both $h$ and the map $N \to GM \times_{GM'} N'$ induced by $k$ are fibrations in $\cat M$ and $\cat N$ respectively, \item weak equivalences are the maps $(h,k)$ where both $h$ and $k$ are weak equivalences in $\cat M$ and $\cat N$ respectively. \end{itemize} \end{theorem} Maybe the best way to understand this theorem is to see it at play. Recall that a generalized Reedy category in the sense of Berger and Moerdijk is a kind of Reedy category with degree-preserving isomorphisms: precisely, it is a category $\cat R$ with a degree function $d: \ob{\cat R} \to \lambda$ and wide subcategories $\cat R^+$ and $\cat R^-$ such that: \begin{itemize} \item \showcase{non-invertible} morphisms of $\cat R^+$ strictly raise the degree while those of $\cat R^-$ strictly lower it, \item isomorphisms all preserve the degree, \item $\cat R^+ \cap \cat R^-$ contains exactly the isomorphisms as morphisms, \item every morphism $f$ can be factorized as $f=f^+f^-$ with $f^+ \in \cat R^+$ and $f^- \in\cat R^-$, and such a factorization is unique up to isomorphism, \item if $\theta$ is an isomorphism and $\theta f = f$ for some $f \in \cat R^-$, then $\theta$ is an identity.
\end{itemize} The central result in \cite{berger-moerdijk:gen-reedy} goes as follows: \begin{enumerate}[label=(\arabic*)] \item the latching and matching objects at $r\in\cat R$ of some $X \from \cat R \to \cat M$ are defined as in the classical case, but now the automorphism group $\aut r$ acts on them, so that $\latching r X$ and $\matching r X$ are objects of $\functorcat{\aut r}{\cat M}$ rather than mere objects of $\cat M$. \item suppose that $\cat M$ is such that every $\functorcat{\aut r}{\cat M}$ bears the projective model structure, and define Reedy cofibrations, Reedy fibrations and Reedy weak equivalences as usual but considering the usual induced maps $X_r \sqcup_{\latching r X} \latching r Y \to Y_r$ and $X_r \to Y_r \times_{\matching r Y} \matching r X$ in $\functorcat{\aut r}{\cat M}$, not in $\cat M$. \item then Reedy cofibrations, Reedy fibrations and Reedy weak equivalences give $\functorcat{\cat R}{\cat M}$ a model structure. \end{enumerate} In that framework, theorem~\ref{thm:shulman} is applied repeatedly with $\alpha$ being the canonical natural transformation between $\latching r, \matching r \from \functorcat{\deginf{\cat R}\mu}{\cat M} \to \functorcat{\aut r}{\cat M}$ whenever $r$ is of degree $\mu$. In particular, here we see the importance of being able to vary the codomain category $\cat N$ of Shulman's result in each successor step, and not to work with a homogeneous $\cat N$ all along. \subsection{Related works on Quillen bifibrations} \label{subsec:versus-hp-rs} Our work builds on the papers \cite{roig:model-bifibred}, \cite{stanculescu:bifib-model} on the one hand, and \cite{harpaz-prasma:grothendieck-model} on the other hand, whose results can be seen as special instances of our main theorem \ref{thm:main}. In these two lines of work, a number of sufficient conditions are given in order to construct a Quillen bifibration.
The fact that their conditions and constructions are special cases of ours follows from the equivalence established in theorem \ref{thm:main}. As a matter of fact, it is quite instructive to review and to point out the divergences between the two approaches and ours, since it also provides a way to appreciate the subtle aspects of our construction. \medskip Let us state the two results and comment on them. \begin{theorem}[Roig, Stanculescu] \label{thm:rs}% Let $p \from \cat E \to \cat B$ be a Grothendieck bifibration. Suppose that $\cat B$ is a model category with structure $(\totalcof\null,\totalweak\null,\totalfib\null)$ and that each fiber $\fiber {\cat E} A$ is also a model category with structure $(\totalcof A,\totalweak A,\totalfib A)$. Suppose also assumption \eqref{hyp:quillen-push-pull}. Then $\cat E$ is a model category with \begin{itemize} \item cofibrations the total ones, \item weak equivalences those $f \from X \to Y$ such that $p(f) \in \totalweak\null$ and $\pullfact f \in \totalweak{pX}$, \item fibrations the total ones, \end{itemize} provided that \begin{enumerate}[label=(\roman*)] \item\label{hyp:rs-hcons} $\pull u$ is homotopically conservative for all $u \in \totalweak\null$, \item\label{hyp:rs-unit} for $u \from A \to B$ an acyclic cofibration in $\cat B$, the unit of the adjoint pair $(\push u,\pull u)$ is pointwise a weak equivalence in $\fiber{\cat E}A$. \end{enumerate} \end{theorem} The formulation of the theorem is not symmetric, since it emphasizes the cartesian morphisms over the cocartesian ones in the definition of weak equivalences.
This lack of symmetry in the definition of the weak equivalences has the unfortunate effect of giving a similar bias to the sufficient conditions: in order to obtain the weak factorization systems, cocartesian morphisms above acyclic cofibrations should be acyclic, which is the meaning of the apparently weird condition \ref{hyp:rs-unit}; at the same time, cartesian morphisms above acyclic fibrations should also be acyclic, but this is vacuously true with the definition of weak equivalences in theorem \ref{thm:rs}. Condition \ref{hyp:rs-hcons} is only here for the 2-out-of-3 property, which boils down to it. \begin{theorem}[Harpaz, Prasma] Let $p \from \cat E \to \cat B$ be a Grothendieck bifibration. Suppose that $\cat B$ is a model category with structure $(\totalcof\null,\totalweak\null,\totalfib\null)$ and that each fiber $\fiber {\cat E} A$ is also a model category with structure $(\totalcof A,\totalweak A,\totalfib A)$. Suppose also assumption \eqref{hyp:quillen-push-pull}. Then $\cat E$ is a model category with \begin{itemize} \item cofibrations the total ones, \item weak equivalences those $f \from X \to Y$ such that $u = p(f) \in \totalweak\null$ and $\pull u(r) \circ \pullfact f \in \totalweak{pX}$, where $r \from Y \to Y^{\rm fib}$ is a fibrant replacement of $Y$ in $\fiber{\cat E}{pY}$, \item fibrations the total ones, \end{itemize} provided that \begin{enumerate}[label=(\roman*')] \item\label{hyp:hp-qequiv} the adjoint pair $(\push u,\pull u)$ is a Quillen equivalence for all $u \in \totalweak\null$, \item\label{hyp:hp-hcons} $\push u$ and $\pull v$ preserve weak equivalences whenever $u$ is an acyclic cofibration and $v$ an acyclic fibration. \end{enumerate} \end{theorem} At first glance, Harpaz and Prasma introduce the same asymmetry as Roig and Stanculescu in the definition of weak equivalences.
They show however that, under condition \ref{hyp:hp-qequiv}, weak equivalences can be equivalently described as those $f \from X \to Y$ such that $u = p(f) \in\totalweak\null$ and \begin{displaymath} \push u X^{\rm cof} \to \push u X \to Y \in \totalweak{pY} \end{displaymath} where the first arrow is the image by $\push u$ of a cofibrant replacement $X^{\rm cof} \to X$. Hence, they manage to adapt Roig-Stanculescu's result and to make it self-dual. There is a cost however, namely condition \ref{hyp:hp-qequiv}. Informally, it says that weakly equivalent objects of $\cat B$ should have fibers with the same homotopy theory. Harpaz and Prasma observe moreover that under \ref{hyp:hp-qequiv}, \ref{hyp:rs-hcons} and \ref{hyp:rs-unit} imply \ref{hyp:hp-hcons}. The condition is quite strong: in particular for the simple Grothendieck bifibration $\mathrm{cod} \from \functorcat{\lincat 2}{\cat B} \to \cat B$ of example \ref{ex:main-thm-motiv}, it is equivalent to the fact that the model category $\cat B$ is right proper. This explains why condition \ref{hyp:hp-qequiv} has to be weakened in order to recover the Reedy construction, as we do in this paper. \medskip It is possible to understand our work as a reflection on these results, in the following way. A common pattern in the train of thought developed in the three papers \cite{roig:model-bifibred,stanculescu:bifib-model,harpaz-prasma:grothendieck-model} is their strong focus on cartesian and cocartesian morphisms above \emph{weak equivalences}. Looking at what it takes to construct weak factorization systems using Stanculescu's lemma (cf.\ lemma \ref{lem:stan-lemma}), it is quite unavoidable to {\em push} along (acyclic) cofibrations and {\em pull} along (acyclic) fibrations in order to put everything in a common fiber, and then to use the fiberwise model structure.
On the other hand, apparently \emph{nothing} compels us to push or to pull along weak equivalences of $\cat B$ in order to define a model structure on $\cat E$. This is precisely the Ariadne's thread that we followed in the paper: organize everything so that cocartesian morphisms above (acyclic) cofibrations are (acyclic) cofibrations, and cartesian morphisms above (acyclic) fibrations are (acyclic) fibrations. This line of thought requires in particular to see every weak equivalence of the basis category $\cat{B}$ as the \emph{composite} of an acyclic cofibration followed by an acyclic fibration. One hidden source of inspiration for this divide comes from the dualities of proof theory, and the intuition that pushing along an (acyclic) cofibration should be seen as a \emph{positive operation} (or a constructor) while pulling along an (acyclic) fibration should be seen as a \emph{negative operation} (or a deconstructor), see~\cite{mellies-zeilberger-lics-2016,mellies-zeilberger-mscs-2017} for details. All the rest, and in particular hypotheses \eqref{hyp:weak-conservative} and \eqref{hyp:hBC}, follows from that perspective, together with the idea of applying the framework to reunderstand the Reedy construction from a bifibrational point of view. \medskip Let us finally mention that we are currently preparing a companion paper~\cite{cagne-mellies-companion-2017} where we carefully analyze the relationship between the functor $\Ho p: \Ho{\cat E} \to \Ho{\cat B}$ between the homotopy categories $\Ho{\cat E}$ and $\Ho{\cat B}$ obtained from a Quillen bifibration $p:\cat E\to\cat B$ by Quillen localisation, and the Grothendieck bifibration $q:\cat F\to\cat B$ obtained by localising each fiber $\fiber{\cat E}A$ of the Quillen bifibration~$p$ independently as $\fiber{\cat F} A = \Ho{\fiber{\cat E}A}$. \addcontentsline{toc}{section}{References} \bibliographystyle{alpha}
\section{Introduction} Random-order arrival is often assumed in theoretical studies of online algorithms in order to obtain much better performance guarantees than under worst-case assumptions. This amounts to assuming full randomness of the instance ordering before it is handed to the algorithm. In practice, however, this assumption is often too strong: it is reasonable to assume some randomness in the input sequence, but not that the arrival ordering is uniformly random. In this paper we study the limits of this randomness for secretary problems. The secretary problem was introduced as the problem of irrevocably hiring the best secretary out of $n$ rankable applicants and first analyzed in~\cite{Lindley61,dynkin1963optimum,ChowMRS64,GilbertM66}. In this problem the goal is to find the best strategy when choosing between a sequence of alternatives. In particular, an asymptotically optimal algorithm with success probability $\frac{1}{e}$ was proposed when perfect randomness is available (i.e., random orders are chosen uniformly at random from the set of all $n!$ permutations). Gilbert and Mosteller~\cite{GilbertM66} showed that, with perfect randomness, no algorithm can achieve a better probability of success than some simple {\em wait-and-pick} algorithm with a specific threshold $m\in [n-1]$ (which can be proved to lie in $\{\lfloor n/e \rfloor,\lceil n/e \rceil\}$). Wait-and-pick algorithms are deterministic: they observe values until some pre-defined threshold step $m\in [n-1]$, and after that they accept the first value that is larger than all the previously observed ones, or the last occurring value otherwise. The seminal work of Kesselheim, Kleinberg, and Niazadeh~\cite{KesselheimKN15} (STOC'15) initiated an investigation into relaxations of the random-ordering assumption for the secretary problem.
In particular, they define a distribution over permutations to be {\em admissible} if there exists an algorithm which guarantees at least a constant probability of selecting the element of maximum value over permutations from this distribution, no matter what values the adversary assigns to elements; the distribution is {\em optimal} if the constant probability approaches the best secretary bound (e.g., $1/e$ for the classic one) as the number of elements, $n$, goes to infinity. They specifically raise the following main question for the secretary problem. {\em ``What natural properties of a distribution suffice to guarantee that it is admissible? What properties suffice to guarantee that it is optimal?''}~\cite{KesselheimKN15} For example, they show that two sets of properties of distributions over permutations, namely {\em block-independence} and {\em uniform-induced-ordering}, yield optimal distributions. More importantly, motivated by the theory of pseudorandomness, Kesselheim et al.~\cite{KesselheimKN15} raise the question of the minimum entropy of an admissible/optimal distribution over permutations and whether there is an explicit construction that achieves the minimum entropy. They prove a tight bound of $\Theta(\log\log n)$ on the minimum entropy of an admissible distribution for the secretary problem. More precisely, they prove that if a distribution over permutations has entropy $o(\log\log n)$ then no algorithm (deterministic or randomized) achieves a constant probability of success. They also present a polynomial-time construction of a set of $\mbox{polylog }(n)$ permutations such that a wait-and-pick algorithm choosing a random order uniformly from this set (i.e., with entropy $O(\log\log n)$) achieves probability of success $\frac{1}{e} - \omega(\frac{1}{(\log\log\log(n))^{c}})$, for any positive constant $c < 1$.
Their construction involves several reduction steps and uses a composition of three Reed-Solomon codes together with auxiliary composition functions (see their full arXiv version~\cite{KesselheimKN15-arxiv} for more details and a better understanding of their approach). Though they also consider the entropy of admissible distributions for the classic {\em multiple-choice secretary} problem (a.k.a.~the $k$-secretary problem, used interchangeably in this paper; see~\cite{HajiaghayiKP04,kleinberg2005multiple,babaioff2007matroids} for introduction and analysis), the bounds they obtain are far from tight, as we describe next. In this paper, we study the entropy of both admissible and optimal distributions of permutations for the multiple-choice secretary problem and provide tight bounds. This completely resolves the entropy-optimality question for the multiple-choice secretary problem. In particular, we construct a distribution with entropy $\Theta(\log\log n)$ such that a deterministic threshold-based algorithm achieves a nearly-optimal competitive ratio $1-O(\log(k)/k^{1/3})$ for $k=O((\log n)^{3/14})$, which improves the previous best construction by Kesselheim, Kleinberg and Niazadeh~\cite{KesselheimKN15} in two ways. First, our solution works for an exponentially larger range of the parameter $k$, since in~\cite{KesselheimKN15} $k=O((\log\log\log n)^{\epsilon})$ for some $\epsilon\in (0,1)$. Second, our algorithm is a simple deterministic {\em single-threshold} algorithm (the only randomness lies in drawing a permutation from a uniform distribution), while the algorithm in~\cite{KesselheimKN15} uses additional randomness. We also prove a corresponding lower bound on the entropy of optimal solutions to the $k$-secretary problem, matching the entropy of our algorithm. No such lower bound was previously known for the $k$-secretary problem.
We further show the strength of our techniques by obtaining fine-grained results for optimal distributions of permutations for the secretary problem (equivalent to $1$-secretary). For entropy $\Theta(\log\log n)$, we precisely characterize the success probability of uniform distributions that is below, and close to, $1/e$, and construct such distributions in polynomial time. Furthermore, we prove that the higher entropy $\Theta(\log n)$ suffices for a success probability above $1/e$, but no uniform probability distribution with small support and entropy strictly less than $\log n$ can have success probability above $1/e$. Last but not least, with the maximum entropy $\Theta(n \log n)$ of the uniform distribution with support of size $n!$, we find the precise formula $OPT_n$ for the optimal success probability of any secretary algorithm. In addition, we prove that any secretary algorithm that uses any, not necessarily uniform, distribution has success probability at most $OPT_n$. This improves the result of Samuels from 1981 \cite{Samuels81}, who proved that under the uniform distribution no secretary algorithm can achieve a success probability of $1/e + \varepsilon$, for any constant $\varepsilon > 0$. \subsection{Preliminaries} Let $[i] = \{1,2,\ldots,i\}$, and let $n$ be the number of arriving elements/items. Each of them has a unique index $i\in [n]$ and a corresponding unique value $v(i)$ assigned to it by an adversary. The adversary knows the algorithm and the distribution of random arrival orders. Let $\Pi_n$ denote the set of all $n !$ permutations of the sequence $(1,2,\ldots,n)$. A {\em probability distribution} $p$ over $\Pi_n$ is a function $p : \Pi_n \longrightarrow [0,1]$ such that $\sum_{\pi \in \Pi_n} p(\pi) = 1$.
The {\em Shannon entropy}, or simply {\em entropy}, of the probability distribution $p$ is defined as ${\mathcal{H}}(p) = - \sum_{\pi \in \Pi_n} p(\pi) \cdot \log(p(\pi))$, where $\log$ has base $2$ and, if $p(\pi)=0$ for some $\pi \in \Pi_n$, we use the convention $0 \cdot \log(0) = 0$. Given a distribution $\mathcal{D}$ on $\Pi_n$, $\pi \sim \mathcal{D}$ means that $\pi$ is sampled from $\mathcal{D}$. A special case of a distribution, convenient to construct efficiently, arises when we are given a (multi-)set $\mathcal{L}\subseteq \Pi_n$ of permutations, called a {\em support}, and the random order is selected uniformly at random (u.a.r.\ for short) from this set; in this case we write $\pi \sim \mathcal{L}$. The entropy of this distribution is $\log |\mathcal{L}|$. We call such an associated probability distribution {\em uniform}, and otherwise {\em non-uniform}. We often abbreviate ``random variable'' to r.v., and ``uniformly at random'' to u.a.r. For a positive integer $k < n$, let $[n]_k$ be the set of all $k$-element subsets of $[n]$. Given a sequence of (not necessarily sorted) {\em values} $v(1),v(2),\ldots,v(n) \in \mathbb{R}$, we denote by $ind(k\rq{}) \in \{1,2,\ldots,n\}$ the index of the element with the $k\rq{}$th largest value, that is, the $k\rq{}$th largest value is $v(ind(k\rq{}))$. \noindent {\bf Wait-and-pick algorithms.} An algorithm for the $k$-secretary problem is called {\em wait-and-pick} if it only observes the first $m$ values ($m \in \{1,2,\ldots,n-1\}$ is a fixed observation period threshold, called a {\em time threshold}), selects one of the observed values $x$ ($x$ is a fixed {\em value threshold}), and then selects every value of at least $x$ received after position $m$; however, it cannot select more than $k$ values in this way, and it may also select the last $i$ values (even if they are smaller than $x$) provided it selected only $k-i$ values before that.
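To make the entropy bookkeeping concrete, the following small sketch (ours, purely illustrative) checks that a uniform distribution over a support $\mathcal{L}$ of permutations has entropy $\log |\mathcal{L}|$, while the full uniform distribution over $\Pi_n$ has the maximum entropy $\log(n!)$:

```python
import math
from itertools import permutations

def shannon_entropy(probs):
    # H(p) = -sum p(pi) * log2 p(pi), with the convention 0 * log(0) = 0
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 4
pi_n = list(permutations(range(1, n + 1)))   # Pi_n, here |Pi_n| = n! = 24

# uniform distribution over the whole Pi_n: entropy log2(n!)
h_full = shannon_entropy([1 / len(pi_n)] * len(pi_n))
assert abs(h_full - math.log2(math.factorial(n))) < 1e-9

# uniform distribution over a small support L (here |L| = 4): entropy log2|L| = 2
support = pi_n[:4]
h_small = shannon_entropy([1 / len(support)] * len(support))
assert abs(h_small - 2.0) < 1e-9
```

The gap between $\log(n!) = \Theta(n\log n)$ and $\log|\mathcal{L}|$ for a polylogarithmic support is exactly the saving that the low-entropy constructions discussed in this paper aim for.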
We also consider a sub-class of wait-and-pick algorithms which, as their value threshold $x$, choose the $\tau$-th largest value, for some $\tau \in \{1,2,\ldots,m\}$, among the first $m$ observed values. In this case we say that such a wait-and-pick algorithm has a {\em statistic} $\tau$, and the value $x$ is then also called a statistic. The definition of wait-and-pick algorithms applies also to the secretary problem, i.e., with $k=1$. It has been shown that some wait-and-pick algorithms are optimal in the case of perfect randomness in the selection of the random arrival order, see \cite{GilbertM66}. \section{Our results and techniques} \subsection{Multiple-choice secretary ($k$-secretary) problem} \paragraph{Main contribution: algorithmic results.} Our algorithms, as well as many algorithms in the literature, are of wait-and-pick type. Our main result is a tight bound for the optimal policy for the multiple-choice secretary problem under low-entropy distributions. In the two theorems below we assume that the adversarial values satisfy $v(1) \geq v(2) \geq \cdots \geq v(n)$. \begin{theorem}\label{theorem:main_derand_1} For any $k < (\log\log{n})^{1/4}$ there exists a distribution $\mathcal{D}$ over permutations such that $$\mathbb{E}_{\sigma\sim\mathcal{D}}(ALG(\sigma)) \ge \bigg(1 - \frac{k^2}{\sqrt{\log\log{n}}}\bigg)\bigg(1-\frac{4\log{k}}{k^{1/3}}\bigg)\sum_{i = 1}^{k}v(i),$$ where $ALG$ is a deterministic wait-and-pick multiple-choice secretary algorithm with time threshold $m = n / k^{1/3}$. The distribution $\mathcal{D}$ has the optimal entropy $O(\log\log{n})$ and can be computed in time polynomial in $n$.
\end{theorem} \begin{theorem}\label{theorem:main_derand_2} For any $k < \frac{\log{n}}{\log\log{n}}$ there exists a distribution $\mathcal{D}$ over permutations such that $$\mathbb{E}_{\sigma\sim\mathcal{D}}(ALG(\sigma)) \ge \bigg(1 - \frac{k^2}{\sqrt{\log{n}}}\bigg)\bigg(1-\frac{5\log{k}}{k^{1/3}}\bigg)\sum_{i = 1}^{k}v(i),$$ where $ALG$ is a deterministic wait-and-pick multiple-choice secretary algorithm with time threshold $m = \frac{n}{k^{1/3}}$. The distribution $\mathcal{D}$ has the optimal entropy $O(\log\log{n})$ and can be computed in time polynomial~in~$n$. \end{theorem} \noindent Detailed analyses of these results can be found in Section~\ref{sec:low-entropy-k-secretary}: that of Theorem \ref{theorem:main_derand_1} as Theorem~\ref{thm:first-main-proof} and that of Theorem \ref{theorem:main_derand_2} as Theorem~\ref{thm:second-main-proof}, respectively. \noindent {\bf Optimality of our results vs previous results.} Theorem \ref{theorem:main_derand_2} achieves a nearly-optimal competitive ratio $1-O(\log(k)/k^{1/3})$, with provably minimal entropy $O(\log \log n)$, when $k=O((\log n)^{3/14})$, for the $k$-secretary problem. The ratio $(1-O(1/\sqrt{k}))$ is the best possible for the $k$-secretary problem in the random order model, see \cite{kleinberg2005multiple,Gupta_Singla,AgrawalWY14}. The previous best result, by Kesselheim, Kleinberg and Niazadeh~\cite{KesselheimKN15}, achieves competitive ratio $(1-O(1/k^{1/3})-o(1))$ using entropy $O(\log \log n)$. Optimality of the entropy follows from our new lower bounds in Theorems \ref{thm:lower-general} and \ref{thm:lower}; see the discussion after Theorem \ref{thm:lower} below.
Note that such lower bounds were not known before for the $k$-secretary problem. Theorems \ref{theorem:main_derand_1} and \ref{theorem:main_derand_2} improve the previous best results by Kesselheim, Kleinberg and Niazadeh~\cite{KesselheimKN15} even further: our solution works for an exponentially larger range of the parameter $k$, since in~\cite{KesselheimKN15} $k=O((\log\log\log n)^{\epsilon})$ for some $\epsilon\in (0,1)$. Finally, our algorithm is a simple deterministic {\em single-threshold} algorithm (only drawing a permutation from a uniform distribution with entropy $O(\log \log n)$), while the algorithm in~\cite{KesselheimKN15} uses additional randomness. In fact, the randomized algorithm in \cite{KesselheimKN15} uses internal randomness of entropy at least $\Omega((\log k)^2)$, see Proposition \ref{prop:Kessel_Large_Entropy} in Section~\ref{sec:previous-suboptimality}. Their construction of the distribution on random orders that their algorithm uses has entropy $O(\log \log n)$, but it only applies to $k=O((\log\log\log n)^{\epsilon})$ for some fixed $\epsilon\in (0,1)$. However, if their randomized algorithm were used with higher values of the parameter $k$, for example $k = \Theta(\log n)$ as in our case, the entropy of its internal randomization would be $\Omega((\log \log n)^2)$, which is asymptotically larger than the optimal entropy $O(\log \log n)$ for the random orders. \medskip \noindent {\bf Technical contributions.} Our starting point for Theorems \ref{theorem:main_derand_1} and \ref{theorem:main_derand_2} is a probabilistic analysis, in Section \ref{section:prob_analysis_k-secr}, of wait-and-pick multiple-choice secretary algorithms. In this analysis we exploit the fact that the indicator random variables indicating whether indices fall in an interval in a random permutation are {\em negatively associated}, see, e.g., \cite{D_Wajc_2017}.
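The wait-and-pick rule for the $k$-secretary problem is easy to simulate. The sketch below is our own illustrative Monte-Carlo experiment (with time threshold $m = n/k^{1/3}$ and a statistic of roughly $km/n$, as discussed above; all function names are ours), estimating the fraction of $\sum_{i\le k} v(i)$ collected under a uniformly random arrival order:

```python
import random

def wait_and_pick_k(values, perm, k, m, tau):
    """Observe the first m arrivals, set the value threshold x to the tau-th
    largest observed value, then accept arrivals >= x (at most k picks);
    any remaining quota is filled with the final arrivals."""
    arrival = [values[i] for i in perm]
    x = sorted(arrival[:m], reverse=True)[tau - 1]
    picked = []
    for t in range(m, len(arrival)):
        remaining = len(arrival) - t
        if len(picked) < k and (arrival[t] >= x or remaining <= k - len(picked)):
            picked.append(arrival[t])
    return sum(picked)

random.seed(0)
n, k = 3000, 27
m = n // round(k ** (1 / 3))            # time threshold n / k^{1/3}
tau = max(1, k * m // n)                # statistic roughly k * m / n
values = list(range(n, 0, -1))          # adversarial values v(1) >= ... >= v(n)
opt = sum(values[:k])                   # sum of the k largest values

trials = 100
avg = sum(wait_and_pick_k(values, random.sample(range(n), n), k, m, tau)
          for _ in range(trials)) / trials
ratio = avg / opt                       # empirical competitive ratio
assert 0.6 < ratio <= 1.0
```

Roughly, the loss comes from the skipped prefix of length $m$ and from the fluctuation of the threshold statistic around the $k$-th largest value, which is what the probabilistic analysis of Section \ref{section:prob_analysis_k-secr} quantifies.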
Kesselheim, Kleinberg and Niazadeh~\cite{KesselheimKN15} provide explicit constructions of small-entropy probability distributions for the $k$-secretary algorithms (for both $k=1$ and $k > 1$) by using a product of three Reed-Solomon codes and an explicit enumeration of the resulting lower-dimensional permutations. Our approach builds on their idea of using Reed-Solomon codes by formalizing a notion of {\em dimensionality reduction}, see Section \ref{section:dim_reduction_2}. We provide two dimension-reduction constructions. The first one (Theorem \ref{theorem:main_derand_1}) uses only a product of two Reed-Solomon codes and explicit enumeration. The second one (Theorem \ref{theorem:main_derand_2}) uses only one Reed-Solomon code and, instead of explicit enumeration, its second step is completely new: an algorithmic derandomization of our probabilistic analysis from Section \ref{section:prob_analysis_k-secr}. Our new derandomization technique is based on the method of conditional expectations with a special pessimistic estimator for the failure probability. This estimator is derived from the proof of the Chernoff bound and is inspired by Young's \cite{Young95} oblivious rounding. We obtain it by first combinatorializing the Hoeffding argument in Theorem \ref{theorem:Hoeffding_k-secretary}, defining the notion of $(k + \lfloor k^{2/3} \log {k} \rfloor)$-tuples and replacing the Hoeffding argument by a Chernoff argument (the combinatorialization introduces $0/1$ r.v.'s instead of the r.v.'s that assume adversarial values in Theorem \ref{theorem:Hoeffding_k-secretary}). We show two applications of this technique of dimension reduction followed by Chernoff-bound derandomization: to the $k$-secretary problem and to the classic $1$-secretary problem.
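The derandomization pattern, i.e., the method of conditional expectations steered by a pessimistic estimator lifted from the proof of the Chernoff bound, can be illustrated on a self-contained toy problem (set balancing; this is our illustration, not the paper's $k$-tuple estimator). The estimator is an explicit upper bound on the failure probability, starts below $1$, and never increases as variables are fixed, so the final deterministic assignment satisfies all constraints:

```python
import math, random

def balance(A, n):
    """Deterministically choose signs x in {-1,+1}^n so that every row sum
    |A_i . x| is at most t = sqrt(2 n ln(2m)), by greedily minimizing a
    Chernoff-style pessimistic estimator of the failure probability."""
    m = len(A)
    t = math.sqrt(2 * n * math.log(2 * m))
    lam = t / n                              # exponent from the Chernoff proof

    def estimator(x):
        # sum over rows and both deviation signs of E[e^{lam*s*(A_i . X)}]/e^{lam*t};
        # an undecided coordinate X_j contributes its average cosh(lam * a_ij)
        total = 0.0
        for row in A:
            for s in (+1, -1):
                val = 1.0
                for j in range(n):
                    if j < len(x):
                        val *= math.exp(s * lam * row[j] * x[j])
                    else:
                        val *= math.cosh(lam * row[j])
                total += val / math.exp(lam * t)
        return total

    x = []
    for _ in range(n):
        # fixing the next sign cannot increase the estimator: keep the better choice
        x.append(+1 if estimator(x + [+1]) <= estimator(x + [-1]) else -1)
    return x, t

random.seed(1)
n_cols, n_rows = 40, 8
A = [[random.randint(0, 1) for _ in range(n_cols)] for _ in range(n_rows)]
x, t = balance(A, n_cols)
assert all(abs(sum(a * s for a, s in zip(row, x))) <= t for row in A)
```

The correctness argument is the one used throughout this approach: the initial estimator value is below $1$ by the Chernoff computation, each greedy step averages over the two choices and therefore cannot increase it, and a violated constraint at the end would force the estimator above $1$.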
For the former problem, it implies an entropy-optimal algorithm with an optimal competitive factor (our main results, Theorems \ref{theorem:main_derand_1} and \ref{theorem:main_derand_2}), and for the latter it gives a fine-grained analysis of the best possible success probability of algorithms (see Theorems \ref{Thm:double_reduction_final} and \ref{Thm:single_reduction_final} below). The only problem-specific parts of this new derandomization technique are the combinatorialization ($(k + \lfloor k^{2/3} \log {k} \rfloor)$-tuples for $k$-secretary in Section \ref{sec:derand_Hoeffding}, and $k$-tuples for $1$-secretary in Section \ref{sec:existential-proof}) and the computation of conditional probabilities (see Algorithms \ref{algo:Cond_prob_2} and \ref{algo:Cond_prob}). The rest of the technique, e.g., the pessimistic estimator, the main algorithm and its analysis, is the same for both problems. The running time of the resulting derandomization algorithm is at least $n^k$, and to make it polynomial we design {\em dimension reductions}. We propose two dimension-reduction methods based on a refined use of Reed-Solomon codes. As a new technical ingredient, we construct a family of functions that have a bounded number of collisions and whose preimages are of almost the same sizes (up to an additive $1$), by carefully using algebraic properties of polynomials (see Lemma~\ref{lem:Reed_Solomon_Construction}). This gives our first dimension-reduction construction, which together with the derandomization leads to Theorem \ref{theorem:main_derand_2}; see details in Section \ref{sec:low-entropy-k-secretary}. Our second construction of such a function family is based on a direct product of two Reed-Solomon codes (Section \ref{subsec:product-two-codes}), leading to Theorem \ref{theorem:main_derand_1}; see details in Section~\ref{sec:low-entropy-k-secretary}. The use of Reed-Solomon codes is inspired by Kesselheim, Kleinberg and Niazadeh~\cite{KesselheimKN15, KesselheimKN15-arxiv}.
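The collision property exploited by such function families is already visible in a bare Reed-Solomon evaluation map. The sketch below (illustrative parameters of our own choosing, not the paper's construction) identifies each element of $[n]$ with a polynomial of degree at most $d$ over $\mathbb{F}_q$, where $n = q^{d+1}$; since two distinct such polynomials agree on at most $d$ points, any pair of elements collides under at most $d$ of the $q$ maps $i \mapsto f(i,a)$:

```python
import random

q, d = 101, 3                  # prime field size and polynomial degree bound
n = q ** (d + 1)               # elements of [n] <-> coefficient vectors in F_q^{d+1}

def coeffs(i):
    """Base-q digits of i, read as polynomial coefficients c_0, ..., c_d."""
    return [(i // q ** j) % q for j in range(d + 1)]

def f(i, a):
    """Reed-Solomon coordinate: evaluate element i's polynomial at point a."""
    return sum(c * pow(a, j, q) for j, c in enumerate(coeffs(i))) % q

random.seed(2)
for _ in range(100):
    i, j = random.sample(range(n), 2)
    collisions = sum(f(i, a) == f(j, a) for a in range(q))
    assert collisions <= d     # distinct degree-<=d polynomials agree on <= d points
```

For each fixed point $a$, the map $i \mapsto f(i,a)$ also has perfectly balanced preimages (each of the $q$ output values has exactly $q^{d}$ preimages), a toy analogue of the balanced-preimage requirement added in our constructions.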
Our construction significantly improves and simplifies their constructions: we add the constraint on the sizes of preimages, use only one or two codes instead of three, and do not need any auxiliary composition functions. The constraint on preimages, precisely tailored for the $k$-secretary problem, allows us to apply more direct techniques of finding permutation distributions over a set with reduced dimension. This constraint is crucial for proving the competitive ratios. Both constructions are computable in polynomial time and we believe that they are of independent interest. We augment the dimension reductions with a detailed analysis of how they relate to the error guarantees of the $k$-secretary ($1$-secretary, resp.) algorithms, see Section~\ref{sec:low-entropy-k-secretary} (Appendix \ref{sec:two_distrbiutions_proofs-classic}, resp.). \paragraph{Lower bounds.} We are the first to prove two lower bounds on the entropy of $k$-secretary algorithms achieving expected competitive ratio $1-\epsilon$. Their proofs can be found in Section~\ref{sec:lower-bounds}. The first one holds for any algorithm, but works only for $k\le \log^a n$ for some constant $a\in (0,1)$. \begin{theorem}\label{thm:lower-general} Assume $k\le \log^a n$ for some constant $a\in (0,1)$, and let $\epsilon\in (0,1)$ be a given parameter. Then any algorithm (even fully randomized) solving the $k$-secretary problem while drawing permutations from some distribution on $\Pi_n$ with entropy $H\le \frac{1-\epsilon}{9} \log\log n$ cannot achieve an expected competitive ratio of at least $1-\epsilon$ for sufficiently large $n$. \end{theorem} The second lower bound on entropy is for wait-and-pick algorithms, for any $k<n/2$. \begin{theorem}\label{thm:lower} Any wait-and-pick algorithm solving the $k$-secretary problem, for $k<n/2$, with expected competitive ratio of at least $1-\epsilon$ requires entropy $\Omega(\min\{\log 1/\epsilon,\log \frac{n}{2k}\})$.
\end{theorem} It follows from Theorem~\ref{thm:lower-general} that entropy $\Omega(\log\log n)$ is necessary for any algorithm to achieve even a constant positive competitive ratio $1-\epsilon$, for $k=O(\log^a n)$, where $a<1$. In particular, it proves that our upper bounds in Theorems~\ref{theorem:main_derand_1} and~\ref{theorem:main_derand_2} are tight. Theorem~\ref{thm:lower} implies that entropy $\Omega(\log\log n)$ is necessary for any wait-and-pick algorithm to achieve a close-to-optimal competitive ratio $1-\Omega(\frac{1}{k^a})$, for {\em any} $k<n/2$, where the constant $a\le 1/2$. Even more, in such a case entropy $\Omega(\log k)$ is necessary, which could be as large as $\Omega(\log n)$ for $k$ polynomial in $n$. \medskip \noindent {\bf Technical contributions.} The lower bound for all algorithms builds on the concept of semitone sequences with respect to the set of permutations used by the algorithm, proposed in \cite{KesselheimKN15} in the context of the $1$-secretary problem. Intuitively, in each permutation of the set, the semitone sequence always positions the next element before or after the previous elements of the sequence (in some permutations it could be before, in others after). Such sequences proved useful in fooling $1$-secretary algorithms by assigning different orders of values, but turned out to be hard to extend to the general $k$-secretary problem. The reason is that, in the latter, there are two challenges requiring new concepts. First, there are $k$ picks of values by the algorithm, instead of one; this creates additional dependencies in the probabilistic part of the proof (cf.\ Lemma~\ref{lem:lower-random-adv}), which we overcome by introducing a more complex parametrization of events and an inductive proof.
Second, the algorithm does not always have to choose the maximum value to guarantee approximation ratio $1-\epsilon$, or it can still choose the maximum value regardless of the order of values assigned to the semitone sequence. To address these challenges, we not only consider different orders of the values in the proof (as was done for $1$-secretary in~\cite{KesselheimKN15}), but also expand them in such a way that the algorithm has to pick the largest value but cannot pick it without considering the order (which is hard for an algorithm working on semitone sequences). This leads to so-called hard assignments of values and their specific distribution in Lemma~\ref{lem:lower-random-adv}, resembling a biased binary search; see details in Section~\ref{sec:lower-general}. The lower bound for wait-and-pick algorithms, presented in Section~\ref{sec:lower-wait-and-pick}, is obtained by constructing a virtual bipartite graph with neighborhoods defined by the elements occurring on the left-hand side of the permutation threshold, and then by analyzing relations between sets of elements on one side of the graph and sets of permutations represented by nodes on the other side. \subsection{Classical secretary ($1$-secretary) problem}\label{sec:classic-secretary-low-entropy} \noindent {\bf Characterization and lower bounds.} We prove in Proposition \ref{Thm:optimum_expansion} a characterization of the optimal success probability $OPT_n$ of secretary algorithms, which is complemented by an existential result in Theorem~\ref{th:optimal_secr_existence}. When the entropy is the maximum $\Theta(n \log n)$ of the uniform distribution on the set of all $n!$ permutations, we find the precise formula for the optimal success probability of the best secretary algorithm, $OPT_n = 1/e + c_0/n + \Theta((1/n)^{3/2})$, where $c_0 = 1/2 - 1/(2e)$, see Proposition \ref{Thm:optimum_expansion}, Part 1.
We prove that any secretary algorithm that uses any, not necessarily uniform, distribution has success probability at most $OPT_n$ (Part 2, Proposition \ref{Thm:optimum_expansion}). This improves the result of Samuels \cite{Samuels81}, who proved that under the uniform distribution no secretary algorithm can achieve a success probability of $1/e + \varepsilon$, for any constant $\varepsilon > 0$. We then prove that even entropy $\Theta(\log n)$ suffices for a success probability above $1/e$ (Corollary \ref{cor:above_1_over_e}). But, interestingly, no uniform probability distribution with small support and entropy strictly less than $\log n$ can have success probability above $1/e$ (Part 3, Proposition~\ref{Thm:optimum_expansion}). \vspace*{1ex} \noindent {\bf Algorithmic results.} By adapting the techniques of dimension reduction and derandomization via the Chernoff bound developed for the $k$-secretary problem, we obtain the following fine-grained results for the classical secretary problem. \begin{theorem}\label{Thm:double_reduction_final} There exists a permutation distribution $\mathcal{D}_{n}$ with entropy $O(\log\log{n})$ such that the wait-and-pick $1$-secretary algorithm with time threshold $\lfloor n/e \rfloor$, executed on $\mathcal{D}_{n}$, picks the highest element with probability at least $\frac{1}{e} - 3 \frac{(\log\log\log{n})^{5/2}}{\sqrt{\log\log{n}}}$. The distribution $\mathcal{D}_{n}$ can be computed in time polynomial in $n$.
\end{theorem} \begin{theorem}\label{Thm:single_reduction_final} There exists a permutation distribution $\mathcal{D}_{n}$ that can be computed in time $O(n)$ and has entropy $O(\log\log{n})$, such that the wait-and-pick $1$-secretary algorithm with time threshold $\lfloor n/e \rfloor$, executed on a permutation drawn from $\mathcal{D}_{n}$, picks the best element with probability at least $\frac{1}{e} - \frac{(C_{1}\log\log n)^2}{\log^{C/2}{n}} - o\left(\frac{(\log\log n)^2}{\log^{C/2}{n}}\right)$, where $C > 0$ can be any fixed constant and $C_1 = \frac{C}{\log(e/(e-1))}$. \end{theorem} \noindent {\bf Our results vs previous results.} Proofs of Theorems \ref{Thm:double_reduction_final} and \ref{Thm:single_reduction_final} can be found in Appendix \ref{sec:two_distrbiutions_proofs-classic}. The original analysis in \cite{Lindley61,dynkin1963optimum} shows that this algorithm's success probability with the full $\Theta(n \log n)$ entropy is at least $1/e - 1/n$. Theorem \ref{Thm:single_reduction_final} has a better error bound than Theorem \ref{Thm:double_reduction_final}, but is more complex and builds on the first result. Theorem \ref{Thm:single_reduction_final} guarantees almost the same success probability as our existential proof (Theorem \ref{th:optimal_secr_existence}), and the entropy of these distributions is the optimal $O(\log \log n)$. It also improves, doubly exponentially, on the additive error to $OPT_n$ of $\omega(\frac{1}{(\log\log\log(n))^{c}})$ due to Kesselheim, Kleinberg and Niazadeh~\cite{KesselheimKN15,KesselheimKN15-arxiv}, which holds for any positive constant $c < 1$. \vspace*{1ex} \noindent {\bf Technical contributions.} We obtain Theorems \ref{Thm:double_reduction_final} and \ref{Thm:single_reduction_final} by the same dimension reduction and derandomization techniques designed for the $k$-secretary problem.
To apply these techniques, we need to develop the problem-specific parts for the $1$-secretary problem: a probabilistic analysis leading to combinatorialization, and an algorithm for computing conditional probabilities (see Algorithm \ref{algo:Cond_prob}). We present a new probabilistic analysis of the $1$-secretary problem in Theorem \ref{th:optimal_secr_existence}. Towards this aim we identify a useful parameterization of the problem, denoted $k \in \{2,3, \ldots,n\}$, which is interpreted as corresponding to the $k$ largest adversarial values. We characterize the precise probability of success of any wait-and-pick algorithm with time threshold $m$ by analyzing how the set of $k$ largest adversarial values is located with respect to the threshold $m$. This leads to Theorem \ref{th:optimal_secr_existence}. This analysis lets us combinatorialize the problem by defining $k$-tuples, which are ordered subsets of size $k$ of $[n]$. While all other parts of the derandomization are the same as for the $k$-secretary problem, the main derandomization algorithm is now Algorithm \ref{algo:Find_perm} (instead of Algorithm \ref{algo:Find_perm_2}), and the conditional-probabilities Algorithm \ref{algo:Cond_prob} replaces Algorithm \ref{algo:Cond_prob_2}. \section{Further related work} In this section, we present recent literature on important online stopping theory concepts such as the secretary problem, the prophet inequality, and the prophet secretary problem. \vspace{-0.11in} \paragraph{Secretary Problem.} In this problem, we receive a sequence of randomly permuted numbers in an online fashion. Every time we observe a new number, we have the option to stop the sequence and select the most recent number. The goal is to maximize the probability of selecting the maximum of all numbers. The pioneering works of Lindley~\cite{Lindley61} and Dynkin~\cite{dynkin1963optimum} present a simple but elegant algorithm that succeeds with probability $1/e$. In particular, they show that the best strategy, a.k.a.~wait-and-pick, is to skip the first $1/e$ fraction of the numbers and then take the first number that exceeds all its predecessors. Although simple, this algorithm captures the essence of the best strategies for many generalizations of the secretary problem. Interestingly, Gilbert and Mosteller~\cite{GilbertM66} show that when the values are drawn i.i.d.~from a known distribution, there is a wait-and-pick algorithm that selects the best value with probability approximately 0.5801 (see~\cite{DBLP:conf/aistats/EsfandiariHLM20} for a generalization to non-identical distributions).
The connection between the secretary problem and online auction mechanisms has been explored by the work of Hajiaghayi, Kleinberg and Parkes~\cite{HajiaghayiKP04} and has brought lots of attention to this classical problem in computer science theory. In particular, this work introduces the {\em multiple-choice value version} of the problem, also known as the $k$-secretary problem (the original secretary problem only considers rankings and not values), in which the goal is to maximize the expected sum of the selected numbers, and discusses its applications in limited-supply online auctions. Kleinberg~\cite{kleinberg2005multiple} later presents a tight $(1-O(\sqrt{1/k}))$-approximation algorithm for the multiple-choice secretary problem, resolving an open problem of~\cite{HajiaghayiKP04}. The bipartite matching variant is studied by Kesselheim et al.~\cite{kesselheim2013optimal}, who give a $1/e$-approximation solution using a generalization of the classical algorithm. Babaioff et al.~\cite{babaioff2007matroids} consider the {\em matroid} version and give an $\Omega(1/\log k)$-approximation algorithm when the set of selected items has to be an independent set of a rank $k$ matroid. Other generalizations of the secretary problem, such as the submodular variant, were initially studied by Bateni, Hajiaghayi, and ZadiMoghaddam~\cite{BHZ13} and by Gupta, Roth, Schoenebeck, and Talwar~\cite{DBLP:conf/wine/GuptaRST10}. \vspace{-0.11in} \paragraph{Prophet Inequality.} In the prophet inequality problem, we are initially given $n$ distributions for each of the numbers in the sequence. Then, similar to the secretary problem setting, we observe the numbers one by one, and can stop the sequence at any point and select the most recent observation. The goal is to maximize the ratio between the expected value of the selected number and the expected value of the maximum of the sequence.
This problem was first introduced by Krengel and Sucheston~\cite{krengel1977semiamarts,krengel1978semiamarts}, who gave a tight $1/2$-approximation algorithm. Later on, the research investigating the relation between prophet inequalities and online auctions was initiated by the work of Hajiaghayi, Kleinberg, and Sandholm~\cite{hajiaghayi2007automated}. In particular, this work considers the multiple-choice variant of the problem in which a selection of $k$ numbers is allowed and the goal is to maximize the ratio between the sum of the selected numbers and the sum of the $k$ maximum numbers. The best result on this topic is due to Alaei~\cite{alaei2014bayesian}, who gives a $(1-{1}/{\sqrt{k+3}})$-approximation algorithm. This factor almost matches the lower bound of $1-\Omega(\sqrt{1/k})$ already known from the prior work of Hajiaghayi et al.~\cite{hajiaghayi2007automated}. Motivated by applications in online ad-allocation, Alaei, Hajiaghayi and Liaghat~\cite{AHL13} study the bipartite matching variant of prophet inequality and achieve the tight factor of $1/2$. Feldman et al.~\cite{feldman2015combinatorial} study the generalizations of the problem to combinatorial auctions in which there are multiple buyers and items and every buyer, upon her arrival, can select a bundle of available items. Using a posted pricing scheme they achieve the same tight bound of $1/2$. Furthermore, Kleinberg and Weinberg~\cite{KW-STOC12} study the problem when a selection of multiple items is allowed under a given set of matroid feasibility constraints and present a $1/2$-approximation algorithm. Yan \cite{yan2011mechanism} improves this bound to $1-1/e\approx 0.63$ when the arrival order can be determined by the algorithm. More recently, Liu, Paes Leme, P{\'{a}}l, Schneider, and Sivan~\cite{DBLP:conf/sigecom/LiuLPSS21} obtain the first Efficient PTAS (i.e., a $1+\epsilon$ approximation for any constant $\epsilon> 0$) when the arrival order can be determined by the algorithm.
Prophet inequality (as well as the secretary problem) has also been studied beyond matroids and matchings. For the intersection of $p$ matroids, Kleinberg and Weinberg~\cite{KW-STOC12} gave an $O(1/p)$-approximation prophet inequality. Later, D\"{u}tting and Kleinberg~\cite{dutting2015polymatroid} extended this result to polymatroids. Rubinstein~\cite{rubinstein2016beyond} and Rubinstein and Singla~\cite{RS-SODA17} consider prophet inequalities and the secretary problem for arbitrary downward-closed set systems. Babaioff et al.~\cite{babaioff2007matroids} show a lower bound of $\Omega(\log n/\log\log n)$ for this problem. Prophet inequalities have also been studied for many combinatorial optimization problems (see e.g. \cite{DEHLS17,garg2008stochastic,gobel2014online,Meyerson-FOCS01}). \vspace{-0.11in} \paragraph{Prophet Secretary.} The original prophet inequality setting assumes that either the buyer values or the buyer arrival order is chosen by an adversary. In practice, however, it is often conceivable that there is no adversary acting against you. Can we design better strategies in such settings? The {\em prophet secretary} model introduced by Esfandiari, Hajiaghayi, Liaghat, and Monemizadeh~\cite{EHLM17} is a natural way to consider such a process, where we assume both {\em stochastic knowledge} about buyer values and that the buyers arrive in a uniformly random order. The goal is to design a strategy that maximizes the expected accepted value, where the expectation is over the random arrival order, the stochastic buyer values, and also any internal randomness of the strategy. This work indeed introduced a natural combination of the fundamental prophet and secretary problems above. More formally, in the \textit{prophet secretary} problem we are initially given $n$ distributions $\mathcal{D}_1,\ldots,\mathcal{D}_n$ from which $X_1,\ldots,X_n$ are drawn.
Then, after applying a random permutation $\pi(1),\ldots,\pi(n)$, the values of the items are given to us in an online fashion, i.e., at step $i$ both $\pi(i)$ and $X_{\pi(i)}$ are revealed. The goal is to stop the sequence in a way that maximizes the expected value\footnote{Over all random permutations and draws from the distributions.} of the most recent item. Esfandiari, Hajiaghayi, Liaghat, and Monemizadeh~\cite{EHLM17} provide an algorithm that uses different thresholds for different items, and achieves an approximation factor of $1-1/e$ when $n$ tends to infinity. Beating the factor of $1-\frac{1}{e}\approx 0.63$ substantially for the prophet secretary problem, however, has been very challenging. A recent result by Azar et al.~\cite{ACK18} and then Correa et al.~\cite{CorreaSZ19} improves this bound by $\frac{1}{30}$ to $1-\frac{1}{e}+\frac{1}{30}\approx 0.665$. For the special case of {\em single item i.i.d.}, Hill and Kertz~\cite{hill1982comparisons} give a characterization of the hardest distribution, and Abolhasani et al.~\cite{abolhassani2017beating} show that one can get a $0.73$-approximation. Recently, this factor has been improved to the tight bound of $0.745$ by Correa et al.~\cite{correa2017posted}. However, finding the tight bound for the general prophet secretary problem still remains the main~open~problem. \section{Probabilistic analysis of $k$-secretary algorithms}\label{section:prob_analysis_k-secr} Let $v(1), v(2), \ldots, v(n)$ be the sequence of values chosen by the adversary. Let $a_{1}, \ldots, a_{k}$ be the indices of the $k$ biggest values in non-increasing order. In this section we consider a wait-and-pick algorithm with time threshold $m$ and statistic $k\cdot\frac{m}{n}$. The algorithm reads the first $m$ values from the input. Then, it sets $t$ (the statistic value) to the $k\cdot\frac{m}{n}$-th largest value among the values seen so far.
From this point on, it adds to the final value the first $k$ values that are greater than $t$; this sum is the output of the algorithm. We will provide in this section a probabilistic analysis of such $k$-secretary algorithms under the uniform distribution over the set $\Pi_n$ of all permutations. In the next lemma we will exploit the fact that the indicator random variables indicating whether given indices fall in an interval of a random permutation are {\em negatively associated}, see, e.g., \cite{D_Wajc_2017}. \begin{lemma}\label{lemma:Chernoff-per} Let $X$ denote a random variable that counts the number of values from the set $\{1,2, \ldots, a\}$, for an integer $1 \le a \ll n$, that are on positions smaller than the threshold $m$ in a random permutation $\sigma \sim \Pi_n$. Define $\mu = \mathbb{E}(X) = a\frac{m}{n}$.
Then for any $\delta \in (0,1)$, we obtain that \[ \mathbb{P}\mathrm{r}(|X - \mu| \ge \delta \mu) \le 2 \exp(-\delta^{2}\mu/3) \ . \] \end{lemma} \begin{proof} For a number $i$ in the set $\{1,2, \ldots, a\}$ consider an indicator random variable $X_{i}$ equal to $1$ if the position of the number $i$ is in the first $m$ positions of a random permutation $\sigma$, and equal to $0$ otherwise. We have that $X = \sum_{i=1}^{a} X_{i}$. Using standard techniques, for instance Lemma~8 and Lemma~9(ii) from \cite{D_Wajc_2017}, we obtain that the random variables $X_{1}, \ldots, X_{a}$ are negatively associated (NA) and we can apply the Chernoff concentration bound to their sum. Observe here that $\mathbb{E}(X) = \mathbb{E}(\sum_{i=1}^{a} X_{i}) = \mu$. Therefore, by Theorem~5 in \cite{D_Wajc_2017}, we have that $$ \mathbb{P}\mathrm{r}(X \geq (1+\delta)\mu) \leq \left(\frac{\exp(\delta)}{(1+\delta)^{(1+\delta)}}\right)^{\mu} \,\, \mbox{for any} \,\, \delta > 0\, , $$ and $$ \mathbb{P}\mathrm{r}(X \leq (1-\delta)\mu) \leq \left(\frac{\exp(-\delta)}{(1-\delta)^{(1-\delta)}}\right)^{\mu} \,\, \mbox{for any} \,\, \delta \in (0,1) \, . $$ By a well-known bound, shown for instance in \cite[page 5]{M_Goemans_2015}, we have that $\left(\frac{\exp(\delta)}{(1+\delta)^{(1+\delta)}}\right)^{\mu} \leq \exp(\frac{-\delta^2 \mu}{2+\delta}) < \exp(-\delta^2 \mu/3)$, where the last inequality follows by $\delta < 1$. Similarly, it is known that $\left(\frac{\exp(-\delta)}{(1-\delta)^{(1-\delta)}}\right)^{\mu} \leq \exp(-\delta^2 \mu/2)$ \cite{HagerupR90}, which together with the above implies that $$ \mathbb{P}\mathrm{r}(|X - \mu| \geq \delta \mu) \leq 2 \cdot \exp(-\delta^2 \mu/3) \, \mbox{ for any } \, \delta \in (0,1) \, , $$ see \cite[Corollary 5]{M_Goemans_2015}.
\end{proof} In the next lemma, we analyze the expected result of the wait-and-pick algorithm on a random permutation. Recall here how the algorithm works. First, it reads the first $m$ elements. Then, it computes the $k\frac{m}{n}$-th greatest element read so far, whose value we denote by $t$ (the statistic). Finally, it reads the remaining portion of the elements one by one and adds up the values of the first $k$ elements whose values are greater than $t$. This sum is the final result of the algorithm. Since we draw from the uniform distribution over all permutations, we can assume w.l.o.g. that $v(1) \ge v(2) \ge \ldots \ge v(k)$, in other words, the $k$ greatest values chosen by the adversary are on positions $1, \ldots, k$. \begin{lemma}\label{lemma:Success_k-secretary} For a given adversarial sequence of values $v(1) \ge v(2) \ge \ldots \ge v(k) \ge \ldots \ge v(n)$, consider a wait-and-pick algorithm $ALG$ with time threshold $m = \frac{n}{k^{1/3}}$ and statistic threshold $\tau = k\frac{m}{n} = k^{2/3}$, where $k$ is large enough, $\log k \geq 8$.
Then $$\mathbb{E}_{\sigma \sim \Pi_n}(ALG(\sigma)) \ge \bigg(1 - \frac{3\log{k}}{k^{1/3}}\bigg)\bigg(1 - \frac{1}{k}\bigg)\sum_{i = 1}^{k} v(i) \ , $$ where $ALG(\sigma)$ is a random variable denoting the output of the algorithm on a random permutation $\sigma \sim \Pi_n$. \end{lemma} \begin{proof} First, we show that with high probability the statistic value $t$ used by the algorithm is a value from the set $\{v(k-k^{2/3}\log {k}), v(k-k^{2/3}\log {k}+1), \ldots, v(k+k^{2/3}\log {k})\}$. Let us define $a = k - k^{2/3}\log{k}$ and a random variable $X$ denoting the number of values from the set $\{1, 2, \ldots, a\}$ that are on the first $m$ positions in a random permutation $\sigma$, as defined in the proof of Lemma \ref{lemma:Chernoff-per}. Also denote $\mu_{X} = \mathbb{E}(X) = \frac{m}{n}(k - k^{2/3}\log{k})$. By the choice of $m$, we have $\mu_{X} = k^{2/3} - k^{1/3}\log{k}$. Let $\delta = k^{1/3} \log{k} / \mu_X$. By Lemma~\ref{lemma:Chernoff-per}, we obtain that $$\mathbb{P}\mathrm{r}(|X - \mu_{X}| \ge \delta \cdot \mu_X = k^{1/3} \log{k}) \le 2\exp{(-\delta^{2} \mu_X/3)} \implies$$ $$ \mathbb{P}\mathrm{r}(X \ge \mu_{X} + k^{1/3} \log{k}) \le 2 \exp\Big( -\frac{k^{2/3} \log^{2}k}{3 \mu_X} \Big) \le \frac{1}{2k} \, , $$ where the last inequality holds because $\log k \geq 8$. We observe that $\mu_{X} + k^{1/3} \log{k} = k^{2/3} - k^{1/3}\log{k} + k^{1/3} \log{k} = \tau$. This yields $$\mathbb{P}\mathrm{r}(X \ge \tau) \le \frac{1}{2k} \, , $$ which proves that with probability at most $\frac{1}{2k}$, the statistic $t$ will be assigned to a value greater than $v(k -k^{2/3}\log k)$. Let $b = k + k^{2/3}\log{k}$ and $Y$ be a random variable denoting the number of values from the set $\{1,2, \ldots, b\}$ that are on the first $m$ positions in a random permutation $\sigma$. Observe that $\mu_Y = \mathbb{E}(Y) = \frac{m}{n}(k + k^{2/3}\log{k})$.
We will now show that, by an analogous Chernoff bound argument as before, we have $$\mathbb{P}\mathrm{r}(Y \le \tau) \le \frac{1}{2k}.$$ Namely, $\mu_{Y} = k^{2/3} + k^{1/3}\log{k}$, and let $\delta = k^{1/3} \log{k} / \mu_Y$. By Lemma~\ref{lemma:Chernoff-per} $$\mathbb{P}\mathrm{r}(|Y - \mu_{Y}| \ge \delta \cdot \mu_Y = k^{1/3} \log{k}) \le 2\exp{(-\delta^{2} \mu_Y/3)} \implies$$ $$ \mathbb{P}\mathrm{r}(Y \le \underbrace{\mu_{Y} - k^{1/3} \log{k}}_{= \tau}) \le 2 \exp\Big( -\frac{k^{2/3} \log^{2}k}{3 \mu_Y} \Big) = 2 \exp\Big( -\frac{2 k^{2/3} \log^{2}k}{6 (k^{2/3} + k^{1/3}\log{k}) } \Big) \le \frac{1}{2k} \, , $$ where the last inequality holds because $\log k \geq 8$. Next, we observe that the event $t \in \{v(k-k^{2/3}\log{k}), v(k-k^{2/3}\log{k} + 1),\ldots, v(k+k^{2/3}\log{k})\}$ follows from the complement of the union of the events $X \ge \tau$ and $Y \le \tau$, therefore by the union bound we obtain $$\mathbb{P}\mathrm{r}(t \in \{v(k-k^{2/3}\log{k}), v(k-k^{2/3}\log{k} + 1), \ldots, v(k+k^{2/3}\log{k})\}) \ge 1 - \frac{1}{k}.$$ Consider now the $i$-th greatest value $v(i)$ from the $(1-\epsilon)k$ greatest values assigned by the adversary, $i \in \{1,2,\ldots, (1-\epsilon)k\}$, where $\epsilon \geq \frac{\log k}{k^{1/3}}$, and for simplicity we write $(1-\epsilon)k$ instead of $\lfloor(1-\epsilon)k\rfloor$. Note that in order for the algorithm to choose $v(i)$, it is necessary that the largest possible value of the statistic $t$ (attained at position $k-k^{2/3}\log{k}$) be smaller than the smallest possible value of $v(i)$ (attained at position $i = (1-\epsilon)k$); hence we need to assume that $(1-\epsilon)k \leq k-k^{2/3}\log{k}$, implying that $\epsilon \geq \frac{\log k}{k^{1/3}}$. Let us indeed take $\epsilon = \frac{\log k}{k^{1/3}}$. Let $V_{i}$ be an indicator random variable denoting whether the algorithm added element $v(i)$ to its sum or not.
First, we observe that with probability $(1 - \frac{m}{n})$ the index $i$ is on positions $\{m+1, \ldots, n\}$ in a random permutation $\sigma$. Next, by the argument in the previous paragraph, with probability at least $1 - \frac{1}{k}$ we obtain that $t \in \{v(k-k^{2/3}\log{k}), v(k-k^{2/3}\log{k} + 1), \ldots, v(k+k^{2/3}\log{k})\}$. The algorithm chooses the first $k$ values greater than $t$ that lie on positions $\{m+1, \ldots, n\}$. Assuming that $t \in \{v(k-k^{2/3}\log{k}), v(k-k^{2/3}\log{k} + 1), \ldots, v(k+k^{2/3}\log{k})\}$, there are at most $k + k^{2/3}\log{k}$ values greater than $t$ in the whole permutation, thus the probability that the algorithm picks value $v(i)$, conditioned on the fact that this value appears on positions $\{m+1, \ldots, n\}$, is at least $(1 - \frac{1}{k})(\frac{k}{k + k^{2/3}\log{k}}) \ge (1 - \frac{1}{k})(1 - k^{-1/3}\log{k})$. In consequence, the random variable $V_{i}$ equals one with probability at least $(1 - \frac{m}{n})(1 - \frac{1}{k})(1 - k^{-1/3}\log{k})$. By the linearity of expectation, we obtain the following bound on the competitive ratio of the wait-and-pick algorithm $ALG$: \[ \mathbb{E}(ALG(\sigma)) \ge \mathbb{E}\Bigg(\sum_{i = 1}^{(1-\epsilon)k} v(i) V_{i}\Bigg) \ge \bigg(1 - \epsilon \bigg)\bigg(1 - \frac{m}{n}\bigg)\bigg(1 - \frac{1}{k}\bigg)\bigg(1 - k^{-1/3}\log{k}\bigg)\sum_{i = 1}^{k} v(i) \] \[ \ge \bigg(1 - \frac{3\log{k}}{k^{1/3}}\bigg)\bigg(1 - \frac{1}{k}\bigg)\sum_{i = 1}^{k} v(i) \ , \] since $\epsilon = k^{-1/3}\log{k}$ and $\frac{m}{n} = \frac{1}{k^{1/3}}$, which proves the lemma. \end{proof} In the next theorem, we use the probabilistic method to leverage any competitive multiple-choice secretary algorithm working on the uniform distribution over all permutations to an algorithm working on distributions of permutations with much smaller entropy. Denote $V = \sum_{i=1}^{k} v(i)$. The following result holds.
\begin{theorem}\label{theorem:Hoeffding_k-secretary} Consider any algorithm $ALG$ solving the multiple-choice secretary problem with the following competitive ratio \[ \mathbb{E}_{\pi \sim \Pi_n}(ALG(\pi)) \ge (1-\epsilon_2) \cdot V \ , \] for some $\epsilon_2 \in (0,1)$. Then, for any $\delta \in (0,1)$, there exists a multi-set $\mathcal{L}$ of permutations of size at most $\ell = \frac{k(\log{n}+\log{k})}{\delta^2(1-\epsilon_2)^{2}}$ such that \[ \mathbb{E}_{\pi \sim \mathcal {L}}(ALG(\pi)) \ge (1-\delta)(1-\epsilon_2) \cdot V \ . \] \end{theorem} \begin{proof} Fix any particular ordering of adversarial values. Let $\{\pi_{1}, \ldots, \pi_{\ell}\}$ be random permutations drawn independently from the uniform distribution over all permutations. Define $\mu = \sum_{i=1}^{\ell} \mathbb{E}(ALG(\pi_i))$. By the properties of the algorithm $ALG$ we get that $\mathbb{E}(ALG(\pi_{i})) \ge (1-\epsilon_2) \cdot V$, which implies $\mu \ge (1-\epsilon_2) \ell V$. We also note that $ALG(\pi_{i}) \le V$ with probability $1$. Thus, from Hoeffding's inequality we obtain \[ \mathbb{P}\mathrm{r}\bigg(\sum_{i=1}^{\ell}ALG(\pi_{i}) < (1-\delta)\mu\bigg) \le \exp\bigg(\frac{-2\delta^2\mu^{2}}{\ell V^{2}}\bigg) \le \exp\bigg(\frac{-2\delta^2((1-\epsilon_2)\ell V)^{2}}{\ell V^{2}}\bigg) \] \[ = \exp(-2\delta^2(1-\epsilon_2)^{2} \ell) \ . \] Now, there are at most $\binom{n}{k}\cdot k! \le \exp(k\log{n} + k\log{k})$ possibilities of assigning the $k$ biggest values to different positions in a permutation. Thus, assuming that $\ell \ge \frac{k\log{n} + k\log{k}}{\delta^2(1-\epsilon_2)^{2}}$, the union bound implies that there exists a (deterministic) (multi-)set of permutations $\pi'_{1}, \ldots, \pi'_{\ell}$ such that for each adversarial order, the inequality $\sum_{i=1}^{\ell} ALG(\pi'_{i}) \ge (1-\delta)(1-\epsilon_2)\cdot V$ holds. Selecting the set $\{ \pi'_{1}, \ldots, \pi'_{\ell}\}$ as $\mathcal{L}$ guarantees the bound stated in the theorem.
\end{proof} \noindent {\bf Remark.} Taking as algorithm $ALG$ in Theorem \ref{theorem:Hoeffding_k-secretary} the wait-and-pick algorithm from Lemma \ref{lemma:Success_k-secretary}, we can plug $1-\epsilon_2 = \left(1 - \frac{3\log{k}}{k^{1/3}}\right)\left(1 - \frac{1}{k}\right)$, and obtain a $(1-\delta)(1-\epsilon_2)$-competitive algorithm using a low-entropy uniform distribution on the multi-set $\mathcal{L}$ of permutations. \section{Derandomization via Chernoff bound for the $k$-secretary problem}\label{sec:derand_Hoeffding} Our goal here is to derandomize the Hoeffding argument from Theorem \ref{theorem:Hoeffding_k-secretary}. Unlike that theorem, we will use the Chernoff bound (see below for details). We will first precisely model the random experiment and events from Lemma \ref{lemma:Success_k-secretary} to be able to compute their (conditional) probabilities. For simplicity of notation we will, in this section, write $(1-\epsilon)k$ and $k\frac{m}{n}$ instead of $\lfloor (1-\epsilon)k\rfloor$ and $\lfloor k\frac{m}{n}\rfloor$, respectively. \noindent {\bf Combinatorialization.} Let $\hat{S} = \{j_0, j_1,\ldots, j_{k + \lfloor k^{2/3} \log {k} \rfloor}\}$, called a $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple, be an ordered subset $\{j_1,\ldots, j_{k + \lfloor k^{2/3} \log {k} \rfloor}\} \subseteq [n]$ of $k + \lfloor k^{2/3} \log {k} \rfloor$ indices with a distinguished index $j_0 \in \{j_1,\ldots, j_{(1-\epsilon)k}\}$. The idea of $\hat{S}$ is to model the positions in the adversarial permutation of the $k + \lfloor k^{2/3} \log {k} \rfloor$ largest adversarial values $v(1),v(2), \ldots,v(k + \lfloor k^{2/3} \log {k} \rfloor)$, and $j_0$ is the position of one of the first $(1-\epsilon)k$ largest values. Let $\mathcal{K}$ be the set of all such $k + \lfloor k^{2/3} \log {k} \rfloor$-tuples.
To be precise, the adversary assigned value $v(u)$ (the $u$-th largest adversarial value, $u \geq 1$) to the position $j_u$ in his/her permutation and the random permutation $\pi \in \Pi_n$ places this value at the position $\pi^{-1}(j_u)$, for each $u \in \{1,2,\ldots,k + \lfloor k^{2/3} \log {k} \rfloor\}$. Let us choose independently and u.a.r.~a permutation $\pi \in \Pi_n$. Recall that we consider the following algorithm. First, it reads the first $m$ elements $\pi(1),\ldots,\pi(m)$. Then, it computes the $k\frac{m}{n}$-th greatest element among $\pi(1),\ldots,\pi(m)$, whose (adversarial) value we denote by $t$, the statistic. Finally, it reads the remaining elements $\pi(m+1),\ldots,\pi(n)$ one by one, and adds up the values of the first $k$ elements whose values are greater than $t$. This sum is the final result of the algorithm. \\ \noindent {\bf Re-proving Lemma \ref{lemma:Success_k-secretary}.} Under the experiment of choosing independently and u.a.r.~a permutation $\pi \in \Pi_n$, referring to Lemma \ref{lemma:Success_k-secretary}, we define an event $A_{j_0}$ that the distinguished value $v(u')$, where $j_0 = j_{u'}$, appears to the right of position $m$ in $\pi$, that is, $A_{j_0} = \{\pi^{-1}(j_0) > m\}$. We define event $B_u$ that the statistic $t = v(u)$, that is, $B_u = \{t = v(u)\}$, for each $u \in \{k - \lfloor k^{2/3} \log {k} \rfloor,k - \lfloor k^{2/3} \log {k} \rfloor + 1, \ldots,k + \lfloor k^{2/3} \log {k} \rfloor\}$. Let finally $C_{j_0}$ be the event that the algorithm chooses, among the $k$ elements, the distinguished value $v(u')$, where $j_0 = j_{u'}$ for some $u' \in \{1,2,\ldots,(1-\epsilon)k\}$.
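For concreteness, the wait-and-pick procedure recalled above can be sketched as follows. This is a minimal Python illustration written by us (function and variable names are ours, not from the paper's pseudocode); ties and the rounding of the statistic rank $k\frac{m}{n}$ are handled in the simplest way, with the rank clamped to be at least $1$:

```python
def wait_and_pick(values, perm, k, m):
    """Sketch of the k-secretary wait-and-pick algorithm with time
    threshold m and statistic rank k*m/n (floored, at least 1).

    `values` holds the adversarial values v(1),...,v(n) (0-indexed),
    `perm` lists, arrival position by position, which adversarial
    index arrives there.  Returns the sum of the accepted values.
    """
    n = len(values)
    observed = [values[perm[i]] for i in range(n)]
    tau = max(k * m // n, 1)
    # statistic t: the tau-th largest value among the first m observations
    t = sorted(observed[:m], reverse=True)[tau - 1]
    total, picked = 0, 0
    for v in observed[m:]:          # selection phase
        if v > t and picked < k:    # accept the first k values above t
            total += v
            picked += 1
    return total
```

Averaging this quantity over permutations drawn from a candidate distribution gives an empirical estimate of the expected output analyzed in Lemma \ref{lemma:Success_k-secretary}.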
In the proof of Lemma \ref{lemma:Success_k-secretary}, we show that \begin{eqnarray} \mathbb{P}\mathrm{r}\left[A_{j_0} \, \cap \, \left(\bigcup_{u=k - \lfloor k^{2/3} \log {k} \rfloor}^{k + \lfloor k^{2/3} \log {k} \rfloor} B_u\right) \, \cap \, C_{j_0}\right] \geq \left(1 - \frac{m}{n}\right)\left(1 - \frac{1}{k}\right)\left(1 - \frac{\log k}{k^{1/3}}\right) = \rho_k\, \label{eq:exp_lb_1}, \end{eqnarray} where event $A_{j_0}$ holds with probability $1-m/n$, and conditioned on $A_{j_0}$, event $B = \bigcup_{u=k - \lfloor k^{2/3} \log {k} \rfloor}^{k + \lfloor k^{2/3} \log {k} \rfloor} B_u$ holds with probability at least $1-1/k$, and conditioned on these two previous events, event $C_{j_0}$ holds with probability at least $1-\frac{\log k}{k^{1/3}}$. Observe that the events $B_u$ are mutually disjoint, therefore $\mathbb{P}\mathrm{r}\left[A_{j_0} \, \cap \, \left(\bigcup_{u=k - \lfloor k^{2/3} \log {k} \rfloor}^{k + \lfloor k^{2/3} \log {k} \rfloor} B_u\right) \, \cap \, C_{j_0}\right] = \sum_{u=k - \lfloor k^{2/3} \log {k} \rfloor}^{k + \lfloor k^{2/3} \log {k} \rfloor} \mathbb{P}\mathrm{r}[A_{j_0} \cap B_u \cap C_{j_0}]$. \\ First, as noted above, $\mathbb{P}\mathrm{r}[A_{j_0}] = 1-m/n$. Conditioned on $A_{j_0}$, event $B_u$ holds iff there exists a subset of indices $J \subseteq \{j_1,\ldots, j_{u-1}\} \setminus \{j_0\}$ such that $|J| = k \frac{m}{n} - 1$, and $\forall j \in J : \pi^{-1}(j) \leq m$, $\pi^{-1}(j_u) \leq m$, and $\forall j' \in \{j_1,\ldots, j_{u-1}\} \setminus J : \pi^{-1}(j') > m$. By using Bayes' formula on conditional probabilities, this leads to (the proof of Lemma \ref{l:Prob_Algo_2} in Section \ref{sec:Cond_Prob_Thm_4_2} has a detailed justification of this formula): $$ \mathbb{P}\mathrm{r}[A_{j_0} \cap B_u] = \mathbb{P}\mathrm{r}[A_{j_0}] \cdot \binom{u-2}{k\frac{m}{n}-1} \cdot \left(\prod_{j=1}^{k\frac{m}{n}} \frac{m - (j-1)}{n-1 - (j-1)}\right) \cdot \left(\prod_{j'=1}^{u-k\frac{m}{n}-1} \frac{n - m - j'}{n-(1+k\frac{m}{n}) - (j'-1)}\right) \, .
$$ Conditioned on $A_{j_0}$ and $B_u$, when such a set $J$ is chosen, event $C_{j_0}$ holds iff $v(u')$ (where $j_0=j_{u'}$ for some $u' \in \{1,2,\ldots,(1-\epsilon)k\}$) appears in permutation $\pi$ among the first $k$ values to the right from threshold $m$ that are greater than the statistic $t=v(u)$. This surely happens (with probability $1$) if $u \in \{k - \lfloor k^{2/3} \log {k} \rfloor,\ldots, k + \lfloor k^{2/3} \log {k} \rfloor\}$ is such that $|\{j_1,\ldots, j_{u-1}\} \setminus J| \leq k$. If, on the other hand, $u \in \{k - \lfloor k^{2/3} \log {k} \rfloor,\ldots, k + \lfloor k^{2/3} \log {k} \rfloor\}$ is such that $|\{j_1,\ldots, j_{u-1}\} \setminus J| > k$, then this happens with probability $k/|\{j_1,\ldots, j_{u-1}\} \setminus J|$, as the probability that index $j_{u'}$ occupies any fixed one of the $k$ first positions among the elements $\{j_1,\ldots, j_{u-1}\} \setminus J$ is $(w-1)!/w! = 1/w$, where $w = |\{j_1,\ldots, j_{u-1}\} \setminus J|$. Therefore we finally obtain $$ \mathbb{P}\mathrm{r}[A_{j_0} \cap B_u \cap C_{j_0}] = \mathbb{P}\mathrm{r}[A_{j_0} \cap B_u] \cdot \min \left\{\frac{k}{u - k\frac{m}{n}}, 1\right\} \, . $$ \noindent {\bf Probabilistic existence proof (Theorem \ref{theorem:Hoeffding_k-secretary})}. Let us fix a $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple $\hat{S} = \{j_0, j_1,\ldots, j_{k + \lfloor k^{2/3} \log {k} \rfloor}\} \in \mathcal{K}$. We say that an independently and u.a.r.~chosen $\pi \in \Pi_n$ is {\em successful} for $\hat{S}$ iff event $A_{j_0} \, \cap \, B \, \cap \, C_{j_0}$ holds. We also say that $\pi$ {\em covers} $\hat{S}$ if $\pi$ is successful for $\hat{S}$. By the above argument, $\pi$ is successful with probability at least $\rho_k$ from (\ref{eq:exp_lb_1}). We choose independently $\ell$ permutations $\pi_1, \ldots, \pi_{\ell}$ from $\Pi_n$ u.a.r., as in Theorem \ref{theorem:Hoeffding_k-secretary}, and $\mathcal{L} = \{\pi_1, \ldots, \pi_{\ell}\}$.
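Schematically, the sampling experiment just described draws $\ell$ permutations and asks that every tuple be successful for a large enough fraction of them. The following minimal sketch (ours; the success predicate is a stand-in abstracting the event $A_{j_0} \cap B \cap C_{j_0}$, and the toy predicate below checks only the event $A_{j_0}$ for $n=3$, $m=1$) captures this coverage check:

```python
import itertools

def covers_all(perms, tuples, successful, rho, delta):
    """Check whether each tuple S is well-covered by the sampled
    permutations: at least a (1 - delta) * rho fraction of them must
    be successful for S, where `successful(p, S)` abstracts the
    intersection of the events A, B and C from the analysis."""
    need = (1 - delta) * rho * len(perms)
    return all(sum(successful(p, S) for p in perms) >= need
               for S in tuples)

# Toy illustration: "success" is just the event that the distinguished
# index lands strictly after the time threshold m = 1, for n = 3.
perms = list(itertools.permutations(range(3)))   # here: all of Pi_3
tuples = [(j,) for j in range(3)]
after_m = lambda p, S: p.index(S[0]) >= 1        # position of S[0] in p
```

In this toy case each index falls after position $0$ in exactly $4$ of the $6$ permutations, so `covers_all(perms, tuples, after_m, 2/3, 0.0)` holds, matching the fraction $\rho = 1 - m/n = 2/3$.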
Let $X^{\hat{S}}_1, \ldots, X^{\hat{S}}_{\ell}$ be random variables such that $X^{\hat{S}}_s = 1$ if the corresponding random permutation $\pi_s$ is successful for the $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple $\hat{S}$, and $X^{\hat{S}}_s = 0$ otherwise, for $s = 1,2, \ldots, \ell$. Then for $X^{\hat{S}} = X^{\hat{S}}_1 + \cdots + X^{\hat{S}}_{\ell}$ we have that $\mathbb{E}[X^{\hat{S}}] \geq \rho_k \ell$ and by the Chernoff bound, like in Theorem \ref{theorem:Hoeffding_k-secretary}, we have \begin{eqnarray} \mathbb{P}\mathrm{r}[X^{\hat{S}} < (1-\delta) \cdot \rho_k \ell] < \exp(-\delta^2 \rho_k \ell/2) \, , \label{eqn:Chernoff_Hoeffding_111} \end{eqnarray} for any $0 < \delta < 1$; note that we use here the Chernoff rather than the Hoeffding bound, because the r.v.'s $X^{\hat{S}}_s$ are $0/1$. The probability that there exists a $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple $\hat{S} \in \mathcal{K}$ for which there does not exist a $(1-\delta) \rho_k$ fraction of successful permutations among these $\ell$ random permutations is, by the union bound, bounded as: \vspace*{-1ex} \[ \mathbb{P}\mathrm{r}[\exists \hat{S} \in \mathcal{K} : X^{\hat{S}} < (1-\delta) \cdot \rho_k \ell] \,\, < \,\, |\mathcal{K}| \cdot \exp(-\delta^2 \rho_k \ell/2) \, . \] Observe that $|\mathcal{K}| = \binom{n}{k'} (k')! (1-\epsilon) k$ and $\binom{n}{k'}\cdot (k')! \le \exp(k'\log{n} + k'\log{k'})$, where $k' = k + \lfloor k^{2/3} \log {k} \rfloor$. This gives $|\mathcal{K}| \leq \exp(k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)})$. This implies that the above probability is strictly smaller than $1$ if $\ell \geq \frac{k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)}}{\delta^2 \rho_k/2}$. Therefore, all $|\mathcal{K}| = \binom{n}{k'} (k')! (1-\epsilon) k$ of the $k + \lfloor k^{2/3} \log {k} \rfloor$-tuples $\hat{S}$ are covered with strictly positive probability for such $\ell$.
This means that there exist $\Theta\left(\frac{k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)}}{\delta^2 \rho_k}\right)$ permutations such that if we choose one of them u.a.r., then for any $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple $\hat{S}$, this permutation will be successful with probability at least $(1-\delta) \rho_k$, which is the competitive ratio of the $k$-secretary algorithm with threshold $m$. \subsection{Derandomization of Theorem \ref{theorem:Hoeffding_k-secretary} }\label{sec:derandomization} \begin{theorem}\label{Thm:Derandomization_2} Suppose that we are given integers $n$ and $k$, such that $n \geq 1$, $n > k$, $\log k \geq 8$, and error parameters $\delta \in (0,1)$ and $\epsilon = \frac{\log k}{k^{1/3}}$. Define $\rho_k = \left(1 - \frac{m}{n}\right)\left(1 - \frac{1}{k}\right)\left(1 - \frac{\log k}{k^{1/3}}\right)$. Then for $$\ell \geq \frac{k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)}}{\delta^2 \rho_k/2} \, , \mbox{ where } \, k' = k + \lfloor k^{2/3} \log {k} \rfloor \, , $$ there exists a deterministic algorithm (Algorithm \ref{algo:Find_perm_2}) that finds a multi-set $\mathcal{L} = \{\pi_1,\pi_2,\ldots,\pi_{\ell}\}$ of $n$-element permutations $\pi_j \in \Pi_n$, for $j \in [\ell]$, such that for every $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple there are at least $(1-\delta) \cdot \rho_k \ell$ successful permutations from $\mathcal{L}$ (for the $k$-secretary wait-and-pick algorithm with time threshold $m$ and statistic $k\frac{m}{n}$). The running time of this algorithm to compute the multi-set $\mathcal{L}$ is $$O\left((1-\epsilon) \cdot \ell \cdot k^{k+1} \cdot n^{k+2} \cdot \left(k + \lfloor k^{2/3} \log {k} \rfloor\right)^{k m/n} \cdot poly \log(n)\right) \, .$$ The resulting $k$-secretary wait-and-pick algorithm with time threshold $m$ and statistic $k\frac{m}{n}$ chooses a permutation $\pi \in \mathcal{L}$ u.a.r.~and achieves an expected competitive ratio of at least $(1-\epsilon) \cdot (1-\delta) \cdot \rho_k$.
\end{theorem} \vspace*{-1ex} \noindent We present here the proof of Theorem \ref{Thm:Derandomization_2}, whose missing details can be found in Section \ref{sec:derandomization-proofs_2}. \smallskip \noindent {\bf Preliminaries.} To derandomize the Hoeffding argument of Theorem \ref{theorem:Hoeffding_k-secretary}, we will derive a special conditional expectations method with a pessimistic estimator. We will model an experiment to choose u.a.r.~a permutation $\pi_j \in \Pi_n$ by independent \lq\lq{}index\rq\rq{} r.v.'s $X^i_j$: $\mathbb{P}\mathrm{r}[X^i_j \in \{1,2,\ldots, n-i+1\}] = 1/(n-i+1)$, for $i \in [n]$, to define $\pi = \pi_j \in \Pi_n$ ``sequentially": $\pi(1) = X^1_j$, $\pi(2)$ is the $X^2_j$-th element in $I_1 = \{1,2,\ldots,n\} \setminus \{\pi(1)\}$, $\pi(3)$ is the $X^3_j$-th element in $I_2 = \{1,2,\ldots,n\} \setminus \{\pi(1), \pi(2)\}$, etc, where elements are increasingly ordered. Suppose random permutations $\mathcal{L} = \{\pi_1, \ldots, \pi_\ell\}$ are generated using $X^1_j, X^2_j, \ldots, X^n_j$ for $j\in[\ell]$. Given a $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple $\hat{S} \in \mathcal{K}$, recall the definition of r.v.~$X^{\hat{S}}_j$ for $j \in [\ell]$ given above. For $X^{\hat{S}} = X^{\hat{S}}_1 + \cdots + X^{\hat{S}}_{\ell}$ and $\delta \in (0,1)$, we have that $\mathbb{E}[X^{\hat{S}}] \geq \rho_k \ell$ and $ \mathbb{P}\mathrm{r}[X^{\hat{S}} < (1-\delta) \cdot \rho_k \ell] < \exp(-\delta^2 \rho_k \ell/2) $, and $$\mathbb{P}\mathrm{r}[\exists \hat{S} \in \mathcal{K} : X^{\hat{S}} < (1-\delta) \cdot \rho_k \ell] \, < \, 1 \,\, \mbox{ for } \, \ell \geq \frac{k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)}}{\delta^2 \rho_k/2} \, . $$ We call the $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple $\hat{S} \in \mathcal{K}$ {\em not well-covered} if $X^{\hat{S}} < (1-\delta) \cdot \rho_k \ell$ (then a new r.v.~$Y^{\hat{S}} = 1$), and {\em well-covered} otherwise (then $Y^{\hat{S}} = 0$). Let $Y = \sum_{\hat{S} \in \mathcal{K}} Y^{\hat{S}}$. 
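The sequential generation of a permutation from the index r.v.'s $X^i_j$ can be sketched as follows (a minimal illustration of the process described above; fixing the first $r$ draws and leaving the rest random yields exactly a semi-random permutation).

```python
import random

def sequential_permutation(n, rng=random):
    # pi(i) is the X^i-th smallest element not used so far,
    # where the index r.v. X^i is uniform on {1, ..., n-i+1}
    remaining = list(range(1, n + 1))  # increasingly ordered unused elements
    pi = []
    for i in range(1, n + 1):
        x = rng.randint(1, n - i + 1)  # draw X^i
        pi.append(remaining.pop(x - 1))
    return pi

pi = sequential_permutation(8)
```

Each outcome has probability $\prod_{i=1}^{n} \frac{1}{n-i+1} = \frac{1}{n!}$, so $\pi$ is indeed uniform on $\Pi_n$.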
By the above argument $\mathbb{E}[Y] = \sum_{\hat{S} \in \mathcal{K}} \mathbb{E}[Y^{\hat{S}}] < 1$ if $\ell \geq 2 [k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)}]/[\delta^2 \rho_k]$. We will keep the expectation $\mathbb{E}[Y]$ below $1$ in each step of derandomization, and these steps will sequentially define the permutations in $\mathcal{L}$. \smallskip \noindent {\bf Outline of derandomization.} We will choose permutations $\{\pi_1,\pi_2,\ldots,\pi_{\ell}\}$ sequentially, one by one, where $\pi_1 = (1,2,\ldots,n)$ is the identity permutation. For some $s \in [\ell-1]$ let permutations $\pi_1,\ldots,\pi_s$ have already been chosen ({\em ``fixed"}). We will choose a {\em ``semi-random"} permutation $\pi_{s+1}$ position by position using $X^i_{s+1}$. Suppose that $\pi_{s+1}(1),$ $ \pi_{s+1}(2),..., \pi_{s+1}(r)$ are already chosen for some $r \in [n-1]$, where the $\pi_{s+1}(i)$ for $i \in [r-1]$ are fixed and final, while $\pi_{s+1}(r)$ is fixed but not final yet. We will vary $\pi_{s+1}(r) \in [n] \setminus \{\pi_{s+1}(1), \pi_{s+1}(2),..., \pi_{s+1}(r-1)\}$ to choose the best value for $\pi_{s+1}(r)$, assuming that $\pi_{s+1}(r+1), \pi_{s+1}(r+2),..., \pi_{s+1}(n)$ are random. Permutations $\pi_{s+2},\ldots,\pi_{\ell}$ are {\em ``fully-random"}. \smallskip \noindent {\bf Conditional probabilities.} Given $\hat{S} \in \mathcal{K}$ and $r \in [n-1]$, observe that $X^{\hat{S}}_{s+1}$ depends only on $\pi_{s+1}(1),\pi_{s+1}(2),$ $\ldots,$ $ \pi_{s+1}(r)$. We will show how to compute the {\bf conditional probabilities} (Algorithm \ref{algo:Cond_prob_2} in Appendix \ref{sec:Cond_Prob_Thm_4_2}) $\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)]$ ($=\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1]$ if $r=0$), where the randomness is over the random positions $\pi_{s+1}(r+1),\pi_{s+1}(r+2), \ldots, \pi_{s+1}(n)$. Theorem \ref{theorem:semi-random-conditional_2} is proved in Section \ref{sec:Cond_Prob_Thm_4_2}.
\begin{theorem}\label{theorem:semi-random-conditional_2} Suppose that values $\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)$ have already been fixed for some $r \in \{0\} \cup [n]$. There exists a deterministic algorithm (Algorithm \ref{algo:Cond_prob_2}, Section \ref{sec:Cond_Prob_Thm_4_2}) to compute $\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)]$, where the random event is the random choice of the semi-random permutation $\pi_{s+1}$ conditioned on its first $r$ elements already being fixed. Its running time is $$O((k + \lfloor k^{2/3} \log {k} \rfloor)^{km/n} \cdot n \cdot poly \log(n)) \, ,$$ where $m \in \{2,3,\ldots,n-1\}$ is the threshold of the $k$-secretary algorithm. \end{theorem} \smallskip \noindent {\bf Pessimistic estimator.} Let $\hat{S} \in \mathcal{K}$. Denote $\mathbb{E}[X^{\hat{S}}_j] = \mathbb{P}\mathrm{r}[X^{\hat{S}}_j = 1] = \mu_j$ for each $j \in [\ell]$, and $\mathbb{E}[X^{\hat{S}}] = \sum_{j=1}^{\ell} \mu_j = \mu$. By (\ref{eq:exp_lb_1}) we have that $\mu_j \geq \rho_k$, for each $j \in [\ell]$. We will now use Raghavan's proof of the Hoeffding bound, see \cite{Young95}, for any $\delta > 0$, using that $\mu_j \geq \rho_k$ (see more details in Section \ref{sec:Pess_Est_proofs_2}): \begin{eqnarray*} \mathbb{P}\mathrm{r}\left[X^{\hat{S}} < (1-\delta) \cdot \ell \cdot \rho_k \right] &\leq& \prod_{j=1}^{\ell} \frac{1-\delta \cdot \mathbb{E}[X^{\hat{S}}_j]}{(1-\delta)^{(1-\delta)\rho_k}} < \prod_{j=1}^{\ell} \frac{\exp(- \delta \mu_j)}{(1-\delta)^{(1-\delta)\rho_k}} \leq \prod_{j=1}^{\ell} \frac{\exp(- \delta \rho_k)}{(1-\delta)^{(1-\delta)\rho_k}} \nonumber \\ &=& \frac{1}{\exp(b(-\delta) \ell \rho_k)} \,\, < \,\, \frac{1}{\exp(\delta^2 \ell \rho_k/2)} \, , \end{eqnarray*} where $b(x) = (1+x) \ln(1+x) - x$, and the last inequality follows by $b(-x) > x^2/2$, see, e.g., \cite{Young95}.
Thus, the union bound implies: \begin{eqnarray} \mathbb{P}\mathrm{r}\left[\exists \hat{S} \in \mathcal{K} : X^{\hat{S}} < (1-\delta) \cdot \ell \rho_k \right] \,\, \leq \,\, \sum_{\hat{S} \in \mathcal{K}}\prod_{j=1}^{\ell} \frac{1-\delta \cdot \mathbb{E}[X^{\hat{S}}_j]}{(1-\delta)^{(1-\delta)\rho_k}} \label{Eq:Union_Bound_1_2} \, . \end{eqnarray} \noindent We will derive a pessimistic estimator of the failure probability in (\ref{Eq:Union_Bound_1_2}). Let $\phi_j(\hat{S}) = 1$ if $\pi_j$ is successful for $\hat{S}$, and $\phi_j(\hat{S}) = 0$ otherwise; then the failure probability (\ref{Eq:Union_Bound_1_2}) is at most: \begin{eqnarray} & & \sum_{\hat{S} \in \mathcal{K}}\prod_{j=1}^{\ell} \frac{1-\delta \cdot \mathbb{E}[\phi_j(\hat{S})]}{(1-\delta)^{(1-\delta)\rho_k}} \label{eq:first_term_2} \\ &= & \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} \frac{1-\delta \cdot \phi_j(\hat{S})}{(1-\delta)^{(1-\delta)\rho_k}}\right) \cdot \left(\frac{1-\delta \cdot \mathbb{E}[\phi_{s+1}(\hat{S})]}{(1-\delta)^{(1-\delta)\rho_k}}\right) \cdot \left(\frac{1-\delta \cdot \mathbb{E}[\phi_j(\hat{S})]}{(1-\delta)^{(1-\delta)\rho_k}}\right)^{\ell - s - 1} \label{eq:second_term_2} \\ &\leq& \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} \frac{1-\delta \cdot \phi_j(\hat{S})}{(1-\delta)^{(1-\delta)\rho_k}}\right) \cdot \left(\frac{1-\delta \cdot \mathbb{E}[\phi_{s+1}(\hat{S})]}{(1-\delta)^{(1-\delta)\rho_k}}\right) \cdot \left(\frac{1-\delta \cdot \rho_k}{(1-\delta)^{(1-\delta)\rho_k}}\right)^{\ell - s - 1} \nonumber \\ &=& \, \Phi(\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)) \label{Eq:Pessimistic_Est_2} \, , \end{eqnarray} where equality (\ref{eq:second_term_2}) is the conditional expectation under: (fixed) permutations $\pi_1,\ldots,\pi_s$ for some $s \in [\ell-1]$, the (semi-random) permutation $\pi_{s+1}$ currently being chosen, and (fully random) permutations $\pi_{s+2},\ldots,\pi_{\ell}$.
The first term (\ref{eq:first_term_2}) is less than $|\mathcal{K}| / \exp(\delta^2 \ell \rho_k/2)$, which is strictly smaller than $1$ for large $\ell$. Let us denote $\mathbb{E}[\phi_{s+1}(\hat{S})] = \mathbb{E}[\phi_{s+1}(\hat{S}) \, | \, \pi_{s+1}(r) = \tau] = \mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau]$, where positions $\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)$ were fixed in the semi-random permutation $\pi_{s+1}$, $\pi_{s+1}(r)$ was fixed in particular to $\tau \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$, and it can be computed by using the algorithm from Theorem \ref{theorem:semi-random-conditional_2}. This gives our pessimistic estimator $\Phi$. Because $s$ is fixed for all steps where the semi-random permutation is being decided, $\Phi$ is uniformly proportional to $\Phi_1$: \vspace*{-3ex} \begin{eqnarray} \Phi_1 = \sum_{\hat{S} \in \mathcal{K}} \left(\prod_{j=1}^{s} (1-\delta \cdot \phi_j(\hat{S}))\right) \cdot (1-\delta \cdot \mathbb{E}[\phi_{s+1}(\hat{S})]), \nonumber \\ \Phi_2 = \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} (1-\delta \cdot \phi_j(\hat{S}))\right) \cdot \mathbb{E}[\phi_{s+1}(\hat{S})] \, . \label{Eq:Pessimistic_Est_Obj} \end{eqnarray} \vspace*{-1ex} \noindent Recall that $\pi_{s+1}(r)$ in the semi-random permutation was fixed but not final. To make it final, we choose the $\pi_{s+1}(r) \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$ that minimizes $\Phi_1$, which is equivalent to maximizing $\Phi_2$. The proof of Lemma \ref{lem:potential_correct_2} can be found in Section \ref{sec:Pess_Est_proofs_2}. \begin{lemma}\label{lem:potential_correct_2} $\Phi(\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r))$ is a pessimistic estimator of the failure probability in (\ref{Eq:Union_Bound_1_2}), if $\ell \geq 2 [k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)}]/[\delta^2 \rho_k]$.
\end{lemma} \begin{proof} (of Theorem \ref{Thm:Derandomization_2}) See the precise details of this proof in Section \ref{sec:Pess_Est_proofs_2}. \end{proof} \begin{algorithm}[t!] \SetAlgoVlined \DontPrintSemicolon \KwIn{Positive integers $n$, $k \leq n$, $\ell \geq 2$, such that $\log k \geq 8$.} \KwOut{A multi-set $\mathcal{L} \subseteq \Pi_n$ of $\ell$ permutations.} /* This algorithm uses Function ${\sf Prob}(A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0},\hat{S})$ from Algorithm \ref{algo:Cond_prob_2}, Section \ref{sec:Cond_Prob_Thm_4_2}. */ $\pi_1 := (1,2,\ldots,n)$ /* Identity permutation */ $\mathcal{L} := \{\pi_1\}$ Let $\mathcal{K}$ be the set of all $k + \lfloor k^{2/3} \log {k} \rfloor$-tuples. \For{$\hat{S} \in \mathcal{K}$}{$w(\hat{S}) := 1-\delta \cdot \phi_1(\hat{S})$ \label{Alg:Weight_Init_2}} \For{$s = 1 \ldots \ell - 1$}{ \For{$r = 1 \ldots n$}{ \For{$\hat{S} \in \mathcal{K} \,\,$ {\tt (let} $\hat{S} = \{\hat{s}_0, \hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{k + \lfloor k^{2/3} \log {k} \rfloor}\}${\tt )}}{ \For{$\tau \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$}{ $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0} \, | \, \pi_{s+1}(1), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau] := {\sf Prob}(A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0},\hat{S})$, for $u =k - \lfloor k^{2/3} \log {k} \rfloor, k - \lfloor k^{2/3} \log {k} \rfloor+1, \ldots, k + \lfloor k^{2/3} \log {k} \rfloor$. $\mathbb{E}[\phi_{s+1}(\hat{S}) \, | \, \pi_{s+1}(r) = \tau] := \sum_{u=k - \lfloor k^{2/3} \log {k} \rfloor}^{k + \lfloor k^{2/3} \log {k} \rfloor} \mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}\, | \, \pi_{s+1}(1), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau]$ } } Choose $\pi_{s+1}(r) = \tau$ for $\tau \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$ to maximize $\sum_{\hat{S} \in \mathcal{K}} w(\hat{S}) \cdot \mathbb{E}[\phi_{s+1}(\hat{S}) \, | \, \pi_{s+1}(r) = \tau]$. 
} $\mathcal{L} := \mathcal{L} \cup \{\pi_{s+1}\}$ \For{$\hat{S} \in \mathcal{K}$}{ $w(\hat{S}) := w(\hat{S}) \cdot (1-\delta \cdot \phi_{s+1}(\hat{S}))$ \label{Alg:Weight_Update_2} } } \Return $\mathcal{L}$ \caption{Find permutations distribution ($k$-secretary)} \label{algo:Find_perm_2} \end{algorithm} \section{Derandomization for $k$-secretary: details of the proof of Theorem \ref{theorem:Hoeffding_k-secretary} }\label{sec:derandomization-proofs_2} \subsection{Conditional probabilities and proof of Theorem \ref{theorem:semi-random-conditional_2}}\label{sec:Cond_Prob_Thm_4_2} Let $\hat{S} = \{\hat{s}_0, \hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{k + \lfloor k^{2/3} \log {k} \rfloor}\} \in \mathcal{K}$ be a $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple. Recall the process of generating a random permutation $\pi_j$ by the index random variables $X^1_j, X^2_j, \ldots,$ $ X^n_j$, which generate elements $\pi_j(1), \pi_j(2), \ldots, \pi_j(n)$ sequentially, one-by-one, in this order. We will define an algorithm to compute $\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)]$ for the semi-random permutation $\pi_{s+1}$, by re-proving Lemma \ref{lemma:Success_k-secretary} from Section \ref{sec:derand_Hoeffding}. Slightly abusing notation, for $r=0$ we let $\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)] = \mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1]$. In this case, we will also show below how to compute $\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1]$ when $\pi_{s+1}$ is fully random. \smallskip \noindent {\bf Proof of Theorem \ref{theorem:semi-random-conditional_2}.}
Recall that $\hat{S} = \{\hat{s}_0, \hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{k + \lfloor k^{2/3} \log {k} \rfloor}\}.$ If $r=0$ and $\pi_{s+1}$ is fully random then by the approach of re-proving Lemma \ref{lemma:Success_k-secretary} from Section \ref{sec:derand_Hoeffding}, we have that $$ \mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1] = \mathbb{P}\mathrm{r}\left[A_{\hat{s}_0} \, \cap \, \left(\bigcup_{u=k - \lfloor k^{2/3} \log {k} \rfloor}^{k + \lfloor k^{2/3} \log {k} \rfloor} B_u\right) \, \cap \, C_{\hat{s}_0}\right] = \sum_{u=k - \lfloor k^{2/3} \log {k} \rfloor}^{k + \lfloor k^{2/3} \log {k} \rfloor} \mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] \, , $$ where \vspace*{-5mm} \begin{eqnarray} \mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = \left(1 - \frac{m}{n}\right) \cdot \binom{u-2}{k\frac{m}{n}-1} \cdot \left(\prod_{j=1}^{k\frac{m}{n}} \frac{m - (j-1)}{n-1 - (j-1)}\right) \cdot \nonumber \\ \cdot \left(\prod_{j'= 1}^{u-k\frac{m}{n}-1} \frac{n - m - j'}{n-(1+k\frac{m}{n}) - (j'-1)}\right) \cdot \min \left\{\frac{k}{u - k\frac{m}{n}}, 1\right\} \label{eqn:prob_11} \, . \end{eqnarray} Assume from now on that $r \geq 1$. Suppose now that values $\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)$ have already been chosen for some $r \in [n]$, i.e., they all are fixed and final, except that $\pi_{s+1}(r)$ is fixed but not final. The algorithm will be based on an observation that the random process of generating the remaining values $\pi_{s+1}(r+1),\pi_{s+1}(r+2), \ldots, \pi_{s+1}(n)$ can be viewed as choosing u.a.r.~a random permutation of values in the set $[n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)\}$; so this random permutation has length $n-r$. 
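For the fully random case $r=0$, formula (\ref{eqn:prob_11}) can be evaluated directly. The Python sketch below is illustrative only: it assumes that $k\frac{m}{n}$ is an integer (as for the thresholds considered), and the sample parameter values are ours.

```python
from math import comb

def prob_A_Bu_C(n, m, k, u):
    # Closed-form evaluation of the probability for a fully random permutation
    # (r = 0); this sketch assumes the statistic's rank t = k*m/n is an integer.
    t = k * m // n
    assert k * m % n == 0
    p = (1.0 - m / n) * comb(u - 2, t - 1)
    for j in range(1, t + 1):        # J and s_u fall into positions {1,...,m}
        p *= (m - (j - 1)) / (n - 1 - (j - 1))
    for jp in range(1, u - t):       # the remaining u-t-1 indices fall into {m+1,...,n}
        p *= (n - m - jp) / (n - (1 + t) - (jp - 1))
    return p * min(k / (u - t), 1.0)

p = prob_A_Bu_C(n=20, m=10, k=4, u=4)
```

Summing these values over $u$ in the admissible range then gives $\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1]$ for the fully random case.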
\pagebreak \thispagestyle{empty} \vspace*{-2mm} \begin{algorithm}[H] \SetAlgoVlined \DontPrintSemicolon \SetKwFunction{FMain}{${\sf Prob}$} \SetKwProg{Fn}{Function}{:}{} \Fn{\FMain{$A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}$, $\hat{S}$}}{ $p := 0$; $q := 0$; \hspace*{3mm} /* For loop below iterates over all subsets $J$ of size $k \frac{m}{n} - 1$. */\; \For{$J \mbox{ s.t. 
} J \subseteq \{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus \{\hat{s}_0\} \mbox{ and } |J| = k \frac{m}{n} - 1$}{ \If{$(\{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{u-1}\} \setminus J) \cap \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(\min(r,m))\} \not = \emptyset$\label{alg_2:step_1}}{$q:=\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 0$}\Else{ /* Now $(\{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{u-1}\} \setminus J) \cap \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(\min(r,m))\} = \emptyset$.*/\; \label{alg_2:line_7} \If{$r \leq m$\label{alg_2:step_2}}{ Let $J' = (J \cup \{\hat{s}_u\}) \cap \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)\}$.\;\label{alg_2:set_J_prime} \If{$|J'| + m-r < k\frac{m}{n}$\label{alg_2:step_2_1}}{ $q := \mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 0$ }\Else{ \If{$r = m$\label{alg_2:step_2_2}}{$q:=\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = \min \left\{\frac{k}{u - k\frac{m}{n}}, 1\right\}$\label{alg_2:prob_0_1}} \If{$r < m$\label{alg_2:step_2_3}}{ $k' := k\frac{m}{n}-|J'|$\; $q:=\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = \frac{n-m}{n-r} \cdot \left(\prod_{j=1}^{k'} \frac{m-r - (j-1)}{n-r-1 - (j-1)}\right) \cdot \left(\prod_{j'=1}^{u-k'-1} \frac{n - m - j'}{n-r-(1+k') - (j'-1)}\right) \cdot \min \left\{\frac{k}{u - k\frac{m}{n}}, 1\right\}$\; \label{alg_2:complicated_prob_1} } } }\Else{ /* We have now $r > m$ */\label{alg_2:step_3}\; \If{$J \cup \{\hat{s}_u\} \not \subseteq \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(m)\}$}{$q:=\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 0$}\Else{ /* We have here $J \cup \{\hat{s}_u\} \subseteq \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(m)\}$ */\; Let $T = \{\pi_{s+1}(m + 1),\pi_{s+1}(m + 2), \ldots, \pi_{s+1}(r)\}$.\; \label{alg_2:set_T} \If{$|\{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{u-1}\} \setminus J| \leq k$}{ $q:=\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 
1$}\Else{ \If{$\hat{s}_0 \in T$\label{alg_2:s_0_in_T}}{ Let $\hat{s}_0 = \pi_{s+1}(\tau)$ for some $\tau \in \{m + 1, m + 2, \ldots, r \}$.\; Let $J'' = (\{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{u-1}\} \setminus J) \cap \{\pi_{s+1}(m + 1),\pi_{s+1}(m + 2), \ldots, \pi_{s+1}(\tau - 1)\}$.\; \If{$|J''| < k$}{$q:=\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 1$} \If{$|J''| \geq k$}{$q:=\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 0$\label{alg_2:s_0_in_T_END}} }\Else{ Let $J'' = (\{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{u-1}\} \setminus J) \cap \{\pi_{s+1}(m + 1),\pi_{s+1}(m + 2), \ldots, \pi_{s+1}(r)\}$.\; \label{alg_2:s_0_not_in_T} \If{$|J''| \geq k$}{$q:=\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 0$} \If{$|J''| < k$}{$q:=\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = \min \left\{\frac{k - |J''|}{u - |J''| - k\frac{m}{n}}, 1\right\}$\label{alg_2:s_0_not_in_T_END}} } } } } } $p:=p+q$; \; } \KwRet $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = p$; \; } \caption{Conditional probabilities ($k$-secretary)} \label{algo:Cond_prob_2} \end{algorithm} \noindent To compute $$ \mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)] = $$ $$ = \sum_{u=k - \lfloor k^{2/3} \log {k} \rfloor}^{k + \lfloor k^{2/3} \log {k} \rfloor} \mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0} \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)] \, , $$ we proceed as follows. For simplicity, we will write below $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}]$ instead of $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0} \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)]$. 
We will only show in Algorithm \ref{algo:Cond_prob_2} how to compute the probabilities $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}]$, and to obtain $\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1]$ one needs to compute $\sum_{u=k - \lfloor k^{2/3} \log {k} \rfloor}^{k + \lfloor k^{2/3} \log {k} \rfloor} \mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}]$. \begin{lemma}\label{l:Prob_Algo_2} Algorithm \ref{algo:Cond_prob_2} ${\sf Prob}(A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0},\hat{S})$ correctly computes $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = \mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0} \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots,$ $\pi_{s+1}(r)]$ in time $O((k + \lfloor k^{2/3} \log {k} \rfloor)^{km/n} \cdot n \cdot poly \log(n))$. \end{lemma} \begin{proof} We first show correctness. When computing the conditional probability $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = \mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0} \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)]$, we iterate over all subsets $J$ in the for loop and sum up these probabilities in variable $p$. This reflects the binomial coefficient $\binom{u-2}{k\frac{m}{n}-1}$ in equation (\ref{eqn:prob_11}). Recall the discussion in re-proving Lemma \ref{lemma:Success_k-secretary} in Section \ref{sec:derand_Hoeffding}, where index $\hat{s}_u$ is supposed to be the statistic $t$ that needs to be in positions $\{1,2,\ldots,m\}$ of permutation $\pi_{s+1}$. The set $J \subseteq \{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus \{\hat{s}_0\}$ contains $k\frac{m}{n} - 1$ indices which correspond to values that are larger than the value corresponding to $\hat{s}_u$. Indices from $J$ are also supposed to be in positions $\{1,2,\ldots,m\}$ of $\pi_{s+1}$. 
The other indices from the set $\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J$ correspond to values higher than the value of $\hat{s}_u$ and they must be in positions $\{m+1,m+2,\dots,n\}$ of $\pi_{s+1}$. These conditions together mean that the value of $\hat{s}_u$ defines the statistic $t$. Therefore, when the condition in line \ref{alg_2:step_1} of Algorithm \ref{algo:Cond_prob_2} holds, then not all of the indices from $\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J$ are in positions $\{m+1,m+2,\dots,n\}$, meaning that $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 0$. When this condition does not hold, then we consider two cases, $r \leq m$ in line \ref{alg_2:step_2}, and $r > m$ in line \ref{alg_2:step_3}. When $r \leq m$ holds in line \ref{alg_2:step_2}, then we know that the indices $\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J$ are in random positions $\{r+1,r+2,\dots,n\}$ of $\pi_{s+1}$. These indices are in positions $\{m+1,m+2,\dots,n\}$ in line \ref{alg_2:step_2_2} surely (with probability $1$) when $r=m$, and we will compute the probability that they are in positions $\{m+1,m+2,\dots,n\}$ in line \ref{alg_2:step_2_3} when $r < m$. Now, recall that the indices from $J \cup \{\hat{s}_u\}$ define the statistic $t$, so they must be in positions $\{1,2,\dots,m\}$. Set $J' \subseteq J \cup \{\hat{s}_u\}$ from line \ref{alg_2:set_J_prime} contains those indices from $J \cup \{\hat{s}_u\}$ which are on the fixed (non-random) positions $\{1,2,\dots,r\}$, so we need to compute the probability that the remaining indices $J \setminus J'$ are in positions $\{r+1,r+2,\ldots,m\}$, which will be done in line \ref{alg_2:step_2_3} when $r < m$. When $|J'| + m-r < k\frac{m}{n}$ in line \ref{alg_2:step_2_1}, then the set $J \cup \{\hat{s}_u\}$ does not fit in positions from $1$ up to $m$ in $\pi_{s+1}$, therefore $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 0$. 
Otherwise, if $|J'| + m-r \geq k\frac{m}{n}$ and $r=m$ in line \ref{alg_2:step_2_2}, then observe that at this point all indices from $J \cup \{\hat{s}_u\}$ are surely in positions $\{1,2,\dots,m\}$, and all indices from $\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J$ are surely in positions $\{m+1,m+2,\dots,n\}$. Now, we only need to compute the probability that the index $\hat{s}_0$ will be chosen by the $k$-secretary algorithm as one of the first $k$ indices with values larger than that of the statistic $t$. This surely happens if $|\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J| \leq k$. If, on the other hand, $|\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J| > k$, then this happens with probability $k/|\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J|$, as the probability that index $\hat{s}_0$ is on any of the $k$ first positions among the elements $\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J$ is $(w-1)!/w! = 1/w$, where $w = |\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J|$. Observing that $|\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J| = u-1 - (k\frac{m}{n} - 1) = u - k\frac{m}{n}$, we have $ \mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = \min \left\{\frac{k}{u - k\frac{m}{n}}, 1\right\} \, $ as in line \ref{alg_2:prob_0_1}. Coming to line \ref{alg_2:step_2_3} when $r < m$: with probability $\frac{n-m}{n-r}$ in line \ref{alg_2:complicated_prob_1}, the index $\hat{s}_0$ will be in positions $\{m+1,m+2,\ldots,n\}$ among the random positions $\{r+1,r+2,\ldots,n\}$ of $\pi_{s+1}$. Conditioned on this event, the probability that the indices $J \setminus J'$ ($|J \setminus J'| = k'$) are in random positions $\{r+1,r+2,\ldots,m\}$ is $$ \prod_{j=1}^{k'} \frac{m-r - (j-1)}{n-r-1 - (j-1)} $$ in line \ref{alg_2:complicated_prob_1}. 
Conditioned on all those events (about index $\hat{s}_0$ and about indices $J \setminus J'$), the probability that the remaining indices from the set $R =\{\hat{s}_1, \ldots, \hat{s}_{u-1}, \hat{s}_u\} \setminus ((J \setminus J') \cup \{\hat{s}_0\})$ (noting that $|R| = u-k'-1$) are in random positions $\{m+1,m+2,\ldots,n\}$ (conditioning on the fact that $k'+1$ of the random positions $\{r+1,r+2,\ldots,n\}$ are already occupied by the previous $k'+1$ indices from the set $(J \setminus J') \cup \{\hat{s}_0\}$) is $$\prod_{j'=1}^{u-k'-1} \frac{n - m - j'}{n-r-(1+k') - (j'-1)} $$ in line \ref{alg_2:complicated_prob_1}. Finally, the last part, $\min \left\{\frac{k}{u - k\frac{m}{n}}, 1\right\}$, in the probability calculated in line \ref{alg_2:complicated_prob_1}, is the probability that index $\hat{s}_0$ is among the first $k$ largest values chosen by the $k$-secretary algorithm. The argument for this last part is the same as above argument for line \ref{alg_2:prob_0_1}. We will now analyze the case of $r > m$ from line \ref{alg_2:step_3}. In this case, the probability $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}]$ can only be non-zero if $J \cup \{\hat{s}_u\} \subseteq \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(m)\}$. Let us summarize the situation now. The indices $J \cup \{\hat{s}_u\}$ are surely in positions $\{1,2,\ldots,m\}$ as they should be. The indices $\{\hat{s}_1, \ldots, \hat{s}_{u-1}\} \setminus J$ are surely in positions $\{m+1,m+2,\ldots,n\}$ as they should be, because condition in line \ref{alg_2:line_7} holds. Therefore, the only property that we need to ensure now is that the index $\hat{s}_0$ is among the first $k$ largest values chosen by the $k$-secretary algorithm. We will do that by using the same argument as that for line \ref{alg_2:prob_0_1} above. Let $T = \{\pi_{s+1}(m + 1),\pi_{s+1}(m + 2), \ldots, \pi_{s+1}(r)\}$, see line \ref{alg_2:set_T}. 
If $|\{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{u-1}\} \setminus J| \leq k$ then $\hat{s}_0$ is surely among the first $k$ largest chosen values, so $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 1$. We now analyze lines \ref{alg_2:s_0_in_T}-\ref{alg_2:s_0_in_T_END}. If index $\hat{s}_0$ is in set $T$ on position $\tau$ in permutation $\pi_{s+1}$, then set $J''$ contains all indices from $\{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{u-1}\} \setminus J$ that are before position $\tau$ in permutation $\pi_{s+1}$ and have (adversarial) values higher than that of the statistic $t$. So if $|J''| < k$ then index $\hat{s}_0$ is surely chosen as one of the first $k$ values larger than $t$ and $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 1$. If $|J''| \geq k$, then index $\hat{s}_0$ is surely not chosen as one of the first $k$ values larger than $t$ and so $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 0$. To conclude the proof, we now analyze lines \ref{alg_2:s_0_not_in_T}-\ref{alg_2:s_0_not_in_T_END}. We have that $\hat{s}_0 \not \in T$, so $\hat{s}_0$ is in random positions $\{r+1,r+2,\ldots,n\}$ in $\pi_{s+1}$. Set $J''$ contains all indices from $\{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{u-1}\} \setminus J$ that are on non-random positions $\{m + 1, m + 2, \ldots, r\}$ in $\pi_{s+1}$. Therefore, if $|J''| \geq k$, then index $\hat{s}_0$ is surely not chosen as one of the first $k$ values larger than $t$ and $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = 0$. If $|J''| < k$, then index $\hat{s}_0$ will be chosen as one of the first $k$ values larger than $t$ with probability $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}] = \min \left\{\frac{k - |J''|}{u - |J''| - k\frac{m}{n}}, 1\right\}$, where the argument is the same as that for line \ref{alg_2:prob_0_1} above. We now argue about the implementation of the algorithm. 
The main for loop iterates over all subsets $J$ of size $k\frac{m}{n}-1$ of a set of size $u-2$. Because $u \leq k + \lfloor k^{2/3} \log {k} \rfloor$, there are at most $(k + \lfloor k^{2/3} \log {k} \rfloor)^{km/n}$ such subsets. The main kind of operations inside each iteration of the for loop are operations on subsets of the set $[n]$, namely set membership and set intersection, which can easily be implemented in time $O(n)$. The other kind of operations in computing $\mathbb{P}\mathrm{r}[A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0}]$ are divisions of numbers from the set $[n]$ and multiplications of the resulting rational expressions. Clearly, each of these arithmetic operations can be performed in time $O(poly \log(n))$. This means that this algorithm can be implemented in the total running time of $O((k + \lfloor k^{2/3} \log {k} \rfloor)^{km/n} \cdot n \cdot poly \log(n))$ as claimed. \end{proof} \noindent The proof of the above lemma finishes the proof of Theorem \ref{theorem:semi-random-conditional_2}. \subsection{Pessimistic estimator}\label{sec:Pess_Est_proofs_2} Let $\hat{S} \in \mathcal{K}$ be a $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple. Recall that $X^{\hat{S}} = X^{\hat{S}}_1 + \ldots + X^{\hat{S}}_{\ell}$. Denote also $\mathbb{E}[X^{\hat{S}}_j] = \mathbb{P}\mathrm{r}[X^{\hat{S}}_j = 1] = \mu_j$ for each $j \in [\ell]$, and $\mathbb{E}[X^{\hat{S}}] = \sum_{j=1}^{\ell} \mu_j = \mu$. 
We will now use Raghavan's proof of the Hoeffding bound, see \cite{Young95}; for any $\delta \in (0,1)$: \begin{eqnarray*} \mathbb{P}\mathrm{r}\left[X^{\hat{S}} < (1-\delta) \cdot \mu\right] &=& \mathbb{P}\mathrm{r}\left[\prod_{j=1}^{\ell} \frac{(1-\delta)^{X^{\hat{S}}_j}}{(1-\delta)^{(1-\delta)\mu_j}} \geq 1\right] \\ &\leq& \mathbb{E}\left[\prod_{j=1}^{\ell} \frac{1-\delta \cdot X^{\hat{S}}_j}{(1-\delta)^{(1-\delta)\mu_j}}\right] \\ &=& \prod_{j=1}^{\ell} \frac{1-\delta \cdot \mathbb{E}[X^{\hat{S}}_j]}{(1-\delta)^{(1-\delta)\mu_j}} \\ &<& \prod_{j=1}^{\ell} \frac{\exp(- \delta \mu_j)}{(1-\delta)^{(1-\delta)\mu_j}} \\ &=& \frac{1}{\exp(b(-\delta) \mu)}\, , \end{eqnarray*} where $b(x) = (1+x) \ln(1+x) - x$; the second step uses Bernoulli's inequality $(1+x)^r \leq 1 + rx$, which holds for $0 \leq r \leq 1$ and $x \geq -1$, together with Markov's inequality, and the last inequality uses $1-x \leq \exp(-x)$, which is strict if $x \not = 0$. By (\ref{eq:exp_lb_1}) we have that $\mu_j \geq \rho_k$ for each $j \in [\ell]$. Then we can further upper bound the last line of Raghavan's proof to obtain $\frac{1}{\exp(b(-\delta) \mu)} \leq \frac{1}{\exp(b(-\delta) \ell \rho_k)}$. Theorem \ref{theorem:Hoeffding_k-secretary} guarantees existence of the multiset $\mathcal{L}$ of permutations by bounding $\mathbb{P}\mathrm{r}[X^{\hat{S}} < (1-\delta) \cdot \rho_k \ell] \leq \exp(-\delta^2 \rho_k \ell/2)$, see (\ref{eqn:Chernoff_Hoeffding_111}); note that here we use the Chernoff rather than the Hoeffding bound used in that theorem.
Now, repeating Raghavan's proof with each $\mu_j$ replaced by $\rho_k$ implies that \begin{eqnarray} \mathbb{P}\mathrm{r}\left[X^{\hat{S}} < (1-\delta) \cdot \ell \cdot \rho_k \right] &\leq& \prod_{j=1}^{\ell} \frac{1-\delta \cdot \mathbb{E}[X^{\hat{S}}_j]}{(1-\delta)^{(1-\delta)\rho_k}} \label{Eq:Raghavan-1_2} \\ &<& \prod_{j=1}^{\ell} \frac{\exp(- \delta \mu_j)}{(1-\delta)^{(1-\delta)\rho_k}} \nonumber \\ &\leq& \prod_{j=1}^{\ell} \frac{\exp(- \delta \rho_k)}{(1-\delta)^{(1-\delta)\rho_k}} \nonumber \\ &=& \frac{1}{\exp(b(-\delta) \ell \rho_k)} \,\, < \,\, \frac{1}{\exp(\delta^2 \ell \rho_k/2)} \label{Eq:Raghavan-2_2} \, , \end{eqnarray} where the last inequality follows by the well-known fact that $b(-x) > x^2/2$, see, e.g., \cite{Young95}. By this argument and by the union bound we obtain that: \begin{eqnarray} \mathbb{P}\mathrm{r}\left[\exists \hat{S} \in \mathcal{K} : X^{\hat{S}} < (1-\delta) \cdot \ell \rho_k \right] \,\, \leq \,\, \sum_{\hat{S} \in \mathcal{K}}\prod_{j=1}^{\ell} \frac{1-\delta \cdot \mathbb{E}[X^{\hat{S}}_j]}{(1-\delta)^{(1-\delta)\rho_k}} \label{Eq:Union_Bound_2} \, . \end{eqnarray} Let us define a function $\phi_j(\hat{S})$ which is equal to $1$ if permutation $\pi_j$ is successful for the $k + \lfloor k^{2/3} \log {k} \rfloor$-tuple $\hat{S}$, and $0$ otherwise. The above proof upper bounds the probability of failure by the expectation of $$ \sum_{\hat{S} \in \mathcal{K}}\prod_{j=1}^{\ell} \frac{1-\delta \cdot \phi_j(\hat{S})}{(1-\delta)^{(1-\delta)\rho_k}} \, , $$ which is less than $|\mathcal{K}| / \exp(\delta^2 \ell \rho_k/2)$, and hence strictly smaller than $1$ for appropriately large $\ell$. Suppose that we have so far chosen the (fixed) permutations $\pi_1,\ldots,\pi_s$ for some $s \in \{1,2,\ldots,\ell-1\}$, the (semi-random) permutation $\pi_{s+1}$ is currently being chosen, and the remaining (fully random) permutations, if any, are $\pi_{s+2},\ldots,\pi_{\ell}$.
The conditional expectation is then \begin{eqnarray} & & \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} \frac{1-\delta \cdot \phi_j(\hat{S})}{(1-\delta)^{(1-\delta)\rho_k}}\right) \cdot \left(\frac{1-\delta \cdot \mathbb{E}[\phi_{s+1}(\hat{S})]}{(1-\delta)^{(1-\delta)\rho_k}}\right) \cdot \left(\frac{1-\delta \cdot \mathbb{E}[\phi_j(\hat{S})]}{(1-\delta)^{(1-\delta)\rho_k}}\right)^{\ell - s - 1} \nonumber \\ &\leq& \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} \frac{1-\delta \cdot \phi_j(\hat{S})}{(1-\delta)^{(1-\delta)\rho_k}}\right) \cdot \left(\frac{1-\delta \cdot \mathbb{E}[\phi_{s+1}(\hat{S})]}{(1-\delta)^{(1-\delta)\rho_k}}\right) \cdot \left(\frac{1-\delta \cdot \rho_k}{(1-\delta)^{(1-\delta)\rho_k}}\right)^{\ell - s - 1} \nonumber \\ &=& \, \Phi(\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)) \label{Eq:Pessimistic_Est-2_2} \, , \end{eqnarray} where in the inequality we used that $\mathbb{E}[\phi_j(\hat{S})] \geq \rho_k$ for each of the not-yet-fixed permutations. Note that \begin{eqnarray*} \mathbb{E}[\phi_{s+1}(\hat{S})] &=& \mathbb{E}[\phi_{s+1}(\hat{S}) \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau] \\ &=& \mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau] \, , \end{eqnarray*} where the positions $\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)$ have already been fixed in the semi-random permutation $\pi_{s+1}$, with $\pi_{s+1}(r)$ fixed in particular to $\tau \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$, and this probability can be computed using the algorithm from Theorem \ref{theorem:semi-random-conditional_2}. This gives the pessimistic estimator $\Phi$ of the failure probability in (\ref{Eq:Union_Bound_2}) for our derandomization.
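The greedy step at the heart of this method of conditional probabilities can be sketched in executable form. The following is an illustrative toy, not the paper's Algorithm~\ref{algo:Find_perm_2}: the function names and the stand-in conditional-probability routine `toy_prob` are our own, replacing the exact conditional probabilities of Theorem~\ref{theorem:semi-random-conditional_2}.

```python
import itertools

# Schematic sketch of one derandomization step: fix the next position
# pi_{s+1}(r) to the value tau that maximizes the weighted sum of
# conditional success probabilities, sum_S w(S) * E[phi_{s+1}(S)].

def greedy_fix_position(prefix, n, tuples, weights, cond_success_prob):
    """Choose tau in [n] minus prefix maximizing sum_S w(S)*Pr[success | prefix+tau]."""
    free = [t for t in range(n) if t not in prefix]
    best_tau, best_obj = None, -1.0
    for tau in free:
        obj = sum(
            weights[S] * cond_success_prob(prefix + [tau], S)
            for S in tuples
        )
        if obj > best_obj:
            best_tau, best_obj = tau, obj
    return best_tau

# Toy instance: "success" for tuple S is more likely once its first element
# appears in the already-fixed prefix (a stand-in conditional probability).
def toy_prob(prefix, S):
    return 1.0 if S[0] in prefix else 0.5

tuples = list(itertools.permutations(range(4), 2))
weights = {S: 1.0 for S in tuples}
tau = greedy_fix_position([0, 2], 4, tuples, weights, toy_prob)
assert tau in (1, 3)
```

By property (c) of the pessimistic estimator, some choice of `tau` never increases the estimator, so iterating this step over all free positions of all $\ell$ permutations drives the failure bound below $1$.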
Because $s$ is fixed for all steps where the semi-random permutation is being decided, this pessimistic estimator is uniformly proportional to $$ \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} \left(1-\delta \cdot \phi_j(\hat{S})\right)\right) \cdot \left(1-\delta \cdot \mathbb{E}[\phi_{s+1}(\hat{S})]\right) \, . $$ Recall that the value of $\pi_{s+1}(r)$ in the semi-random permutation was fixed but not final. To make it fixed and final, we simply choose the value $\pi_{s+1}(r) \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$ that minimizes this last expression, which is equivalent to maximizing \begin{eqnarray} \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} \left(1-\delta \cdot \phi_j(\hat{S})\right)\right) \cdot \mathbb{E}[\phi_{s+1}(\hat{S})] \, . \label{Eq:Pessimistic_Est_Obj-2_2} \end{eqnarray} \ignore{ \begin{lemma} The above potential $\Phi(\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r))$ is a pessimistic estimator if $\ell \geq \frac{2 k \ln(n)}{\rho (\delta)^2}$. \end{lemma} } \begin{proof} (of Lemma \ref{lem:potential_correct_2}) This follows from the following three properties: (a) it is an upper bound on the conditional probability of failure; (b) it is initially strictly less than $1$; (c) some new value of the next index variable in the partially fixed semi-random permutation $\pi_{s+1}$ can always be chosen without increasing it. Property (a) follows from (\ref{Eq:Raghavan-1_2}) and (\ref{Eq:Union_Bound_2}). To prove (b) we see by (\ref{Eq:Raghavan-2_2}) and (\ref{Eq:Union_Bound_2}) that $$ \mathbb{P}\mathrm{r}\left[\exists \hat{S} \in \mathcal{K} : X^{\hat{S}} < (1-\delta) \cdot \ell \rho_k \right] < |\mathcal{K}| / \exp(\delta^2 \ell \rho_k/2) \, . $$ Observe that $|\mathcal{K}| = \binom{n}{k'} (k')! (1-\epsilon) k$ and $\binom{n}{k}\cdot k! \le \exp(k\log{n} + k\log{k})$, where $k' = k + \lfloor k^{2/3} \log {k} \rfloor$. So $|\mathcal{K}| \leq \exp(k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)})$.
Therefore we obtain the following condition on $\ell$ $$ \frac{\exp(k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)})}{\exp(\delta^2 \ell \rho_k/2)} \leq 1 \,\,\, \Leftrightarrow \,\,\, \ell \geq \frac{2 \cdot [k'\log{n} + k'\log{k'} + \log{((1-\epsilon)k)}]}{\delta^2 \rho_k} \, . $$ Properties (a) and (b) thus follow by the above arguments and by the assumption on $\ell$. Part (c) follows because $\Phi$ is an expected value conditioned on the choices made so far. For the precise argument let us observe that \begin{eqnarray*} & & \mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)] \\ &=& \sum_{\tau \in T} \frac{1}{n-r+1} \cdot \mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau] \, , \end{eqnarray*} where $T = [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$. Then by (\ref{Eq:Pessimistic_Est-2_2}) we obtain \begin{eqnarray*} & & \Phi(\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)) \\ &=& \sum_{\tau \in T} \frac{1}{n-r+1} \cdot \Phi(\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau) \\ &\geq& \min \{ \Phi(\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau) \,\, : \,\, \tau \in T \} \, , \end{eqnarray*} which implies part (c). \end{proof} \begin{proof} (of Theorem \ref{Thm:Derandomization_2}) The computation of the conditional probabilities ${\sf Prob}(A_{\hat{s}_0} \cap B_u \cap C_{\hat{s}_0},\hat{S})$ by Algorithm \ref{algo:Cond_prob_2} is correct by Theorem \ref{theorem:semi-random-conditional_2}. Algorithm \ref{algo:Find_perm_2} is a direct translation of the optimization of the pessimistic estimator $\Phi$.
In particular, observe that the correctness of the weight initialization in Line \ref{Alg:Weight_Init_2} of Algorithm \ref{algo:Find_perm_2}, and of the weight updates in Line \ref{Alg:Weight_Update_2}, follow from the form of the pessimistic estimator objective function in (\ref{Eq:Pessimistic_Est_Obj-2_2}). The value of the pessimistic estimator $\Phi$ is strictly smaller than $1$ at the beginning, and in each step it is not increased, by the properties of the pessimistic estimator (Lemma \ref{lem:potential_correct_2}). Moreover, at the last step all values of all $\ell$ permutations will be fixed, that is, there will be no randomness in the computation of $\Phi$. Observe that $\Phi$ is an upper bound on the expected number of $k + \lfloor k^{2/3} \log {k} \rfloor$-tuples from $\mathcal{K}$ that are not well-covered. So at the end of the derandomization process the number of such $k + \lfloor k^{2/3} \log {k} \rfloor$-tuples will be $0$, implying that all these $k + \lfloor k^{2/3} \log {k} \rfloor$-tuples will be well-covered, as desired. A straightforward analysis of the running time of Algorithm \ref{algo:Find_perm_2} and Lemma \ref{l:Prob_Algo_2} imply that its running time can be bounded by $O((1-\epsilon) \cdot \ell \cdot k^{k+1} \cdot n^{k+2} \cdot (k + \lfloor k^{2/3} \log {k} \rfloor)^{k m/n} \cdot poly \log(n))$.
\end{proof} \section{Improved dimension reduction for the $k$-secretary problem}\label{section:dim_reduction_2} A set $\mathcal{F}$ of functions $f : [n] \rightarrow [\ell]$ is called a \emph{dimensionality-reduction} set with parameters $(n,\ell, d)$ if it satisfies the following two conditions: \vspace*{2mm} \noindent (1) the number of functions that agree on any pair of distinct elements of the domain is bounded:\\ $\forall_{i,j \in [n], i \neq j} : |\{f \in \mathcal{F} : f(i) = f(j) \}| \le d;$ \vspace*{2mm} \noindent (2) for each function, the elements of the domain are almost uniformly partitioned among the elements of the image: $\forall_{i \in [\ell] ,f \in \mathcal{F}} : |f^{-1}(i)| \le \frac{n}{\ell} + o(\ell).$ \vspace*{2mm} A dimensionality-reduction set of functions is key in our approach to finding a probability distribution that guarantees a high success probability for wait-and-pick $k$-secretary algorithms. When applied once, it reduces the size of the permutations that need to be considered for optimal success probability from $n$ elements to $\ell$ elements. Conditions (1) and (2) ensure that the resulting set of $\ell$-element permutations can be lifted back to $n$-element permutations without much loss of~success~probability. Kesselheim, Kleinberg and Niazadeh~\cite{KesselheimKN15} were the first to use this type of reduction in the context of secretary problems. Our refinement is the new condition (2). This condition significantly strengthens the reduction for wait-and-pick algorithms and has substantial consequences for the later constructions of low-entropy distributions. In particular, condition (2) is crucial in proving the bounds on the competitive ratio. \subsection{A polynomial time construction of the set $\mathcal{F}$} We show a general pattern for constructing a set of functions that reduces the dimension of permutations from $n$ to $q < n$, for which we use refined Reed-Solomon codes.
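Before the formal statement, the construction can be sketched in executable form. This is an illustrative Python sketch with small, arbitrary parameters (the names `build_family` and `poly_eval` are ours): polynomials of degree at most $d$ over $\mathbb{F}_q$ are enumerated in blocks sharing the same non-constant part, the matrix is truncated to its first $n$ rows, and each column defines one function; the two properties claimed in the lemma below are then verified directly.

```python
from itertools import product

def poly_eval(coeffs, x, q):
    # coeffs = (a_1, ..., a_d) encodes a_1*x + ... + a_d*x^d over F_q
    return sum(a * pow(x, e, q) for e, a in enumerate(coeffs, start=1)) % q

def build_family(n, q, d):
    """Columns of the (truncated) q^{d+1} x q matrix M_{i,q'} = f_i(q')."""
    assert q ** (d + 1) >= n
    # polynomials with zero free term, in a fixed order
    gs = list(product(range(q), repeat=d))
    funcs = []
    for qp in range(q):  # one function per field element q'
        column = []
        for j in range(q ** d):          # block j uses the j-th such polynomial
            for c in range(q):           # constant terms 0, 1, ..., q-1
                column.append((poly_eval(gs[j], qp, q) + c) % q)
        funcs.append(column[:n])         # keep the first n rows only
    return funcs

n, q, d = 20, 5, 2
F = build_family(n, q, d)
# Property 1: any two indices collide under at most d of the q functions.
for i in range(n):
    for j in range(i + 1, n):
        assert sum(f[i] == f[j] for f in F) <= d
# Property 2: every preimage has size floor(n/q) or floor(n/q) + 1.
for f in F:
    for v in range(q):
        assert f.count(v) in (n // q, n // q + 1)
```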
\begin{lemma} \label{lem:Reed_Solomon_Construction} There exists a set $\mathcal{F}$ of functions $f : [n] \longrightarrow [q]$, for some prime integer $q \geq 2$, such that for any two distinct indices $i, j \in [n]$, $i \not = j$, we have $$|\{f \in \mathcal{F} : f(i) = f(j)\}| \leq d \,\,\,\,\, \mbox{and} \,\,\,\,\, \forall {q' \in [q]} : |f^{-1}(q')| \in \left\{\myfloor{n/q}, \myfloor{n/q} + 1\right\},$$ where $1 \le d < q$ is an integer such that $n \le q^{d + 1}$. Moreover, $|\mathcal{F}| = q$ and the set $\mathcal{F}$ can be constructed in deterministic time polynomial in $n, q, d$. \end{lemma} \begin{proof} Let us take any finite field $\mathbb{F}$ of size $q \geq 2$. It is known that $q$ must be of the form $q = p^r$, where $p$ is a prime number and $r \geq 1$ is an integer; this has been proved by Galois, see \cite[Chapter 19]{Stewart_Book}. We will carry out our construction assuming that $\mathbb{F} = \mathbb{F}_q$ is the Galois field, where $q$ is a prime number. Let us take the prime $q$ and the integer $d \geq 1$ such that $q^{d+1} \geq n$; we take the smallest such prime number and then an appropriate smallest $d$ such that $q^{d+1} \geq n$. Let us now consider the set $\mathbb{F}[x]$ of univariate polynomials over the field $\mathbb{F}$ of degree at most $d$. The number of such polynomials is exactly $|\mathbb{F}[x]| = q^{d+1}$. Since $q$ is prime, we have $\mathbb{F}_q = \{0,1,\ldots,q-1\}$. We will now define the following $q^{d+1} \times q$ matrix $M = (M_{i,q'})_{i \in [q^{d+1}], q' \in \{0,1,\ldots,q-1\}}$ whose rows correspond to polynomials from $\mathbb{F}[x]$ and columns -- to elements of the field $\mathbb{F}_q$.
Let now $\mathbb{G} \subset \mathbb{F}[x]$ be the set of all polynomials from $\mathbb{F}[x]$ with the free term equal to $0$, that is, all polynomials of the form $\sum_{i=1}^d a_i x^i \in \mathbb{F}[x]$, where all coefficients $a_i \in \mathbb{F}_q$, listed in {\em any} fixed order: $\mathbb{G} =\{g_1(x), g_2(x),\ldots,g_{q^d}(x)\}$. To define matrix $M$ we will list all polynomials from $\mathbb{F}[x]$ in the following order $\mathbb{F}[x] = \{f_1(x), f_2(x),\ldots,f_{q^{d+1}}(x)\}$, defined as follows. The first $q$ polynomials $f_1(x), f_2(x),\ldots,f_q(x)$ are $f_i(x) = g_1(x) + i-1$ for $i \in \{1,\ldots,q\}$; note that here $i-1 \in \mathbb{F}_q$. The next $q$ polynomials $f_{q+1}(x), f_{q+2}(x),\ldots,f_{2q}(x)$ are $f_{q+i}(x) = g_2(x) + i-1$ for $i \in \{1,\ldots,q\}$, and so on. In general, the polynomials $f_{qj+1}(x), f_{qj+2}(x),\ldots,f_{qj+q}(x)$ are defined by $f_{qj+i}(x) = g_{j+1}(x) + i-1$ for $i \in \{1,\ldots,q\}$, for any $j \in \{0,1,\ldots,q^d - 1\}$. We are now ready to define matrix $M$: $M_{i,q'} = f_i(q')$ for any $i \in [q^{d+1}], q' \in \{0,1,\ldots,q-1\}$. From matrix $M$ we define the set of functions $\mathcal{F}$ by taking precisely the first $n$ rows of matrix $M$ (recall that $q^{d+1} \geq n$) and letting the columns of this truncated matrix define the functions in the set $\mathcal{F}$. More formally, $\mathcal{F} = \{h_{q'} : q' \in \{0,1,\ldots,q-1\}\}$, where each function $h_{q'} : [n] \longrightarrow [q]$ for $q' \in \{0,1,\ldots,q-1\}$ is defined as $h_{q'}(i) = f_i(q')$ for $i \in \{1,2,\ldots,n\}$. We will now prove that $|h_{q'}^{-1}(q'')| \in \left\{\myfloor{\frac{n}{q}}, \myfloor{\frac{n}{q}} + 1\right\}$ for each function $h_{q'} \in \mathcal{F}$ and each $q'' \in \{0,1,\ldots,q-1\}$. Let us focus on column $q'$ of matrix $M$.
Intuitively, the property that we want to prove follows from the fact that when this column is partitioned into $q^{d+1} / q$ ``blocks'' of $q$ consecutive elements, each such block is a permutation of the set $\{0,1,\ldots,q-1\}$ of elements of the field $\mathbb{F}_q$. More formally, the $j$th such ``block'' for $j \in \{0,1,\ldots,q^d - 1\}$ contains the elements $f_{qj+i}(q')$ for all $i \in \{1,\ldots,q\}$. But by our construction we have that $f_{qj+i}(q') = g_{j+1}(q') + i-1$ for $i \in \{1,\ldots,q\}$. Here, $g_{j+1}(q') \in \mathbb{F}_q$ is a fixed element of the Galois field $\mathbb{F}_q$, and the elements $f_{qj+i}(q')$ for $i \in \{1,\ldots,q\}$ of the ``block'' are obtained by adding each element $i-1$ of the field $\mathbb{F}_q$ to $g_{j+1}(q')$. This, by the properties of the field $\mathbb{F}_q$, implies that $f_{qj+i}(q')$ for $i \in \{1,\ldots,q\}$ form a permutation of the set $\{0,1,\ldots,q - 1\}$.\\ \noindent {\em Claim.} For any given $j \in \mathbb{F}_q = \{0,1,\ldots,q-1\}$ the values $j+i$, for $i \in \{0,1,\ldots,q-1\}$, where the addition is in the field $\mathbb{F}_q$ modulo $q$, form a permutation of the set $\{0,1,\ldots,q-1\}$, that is, $\{j+i : i \in \{0,1,\ldots,q-1\}\} = \{0,1,\ldots,q-1\}$. \\ \noindent \begin{proof} In this proof we assume that addition and subtraction are in the field $\mathbb{F}_{q}$. The multiset $\{ j + i : i \in \{0,1,\ldots,q-1\} \} \subseteq \mathbb{F}_{q}$ consists of $q$ values, thus it suffices to show that all these values are distinct. Assume to the contrary that there exist two distinct elements $i$, $i' \in \mathbb{F}_{q}$ such that $j + i = j + i'$. It follows that $i' - i = 0$ in $\mathbb{F}_{q}$. This cannot hold, since $i$ and $i'$ are distinct elements of $\{0,1,\ldots,q-1\}$, so $i' - i \not\equiv 0 \pmod{q}$.
\end{proof} The property that $|h_{q'}^{-1}(q'')| \in \left\{\myfloor{\frac{n}{q}}, \myfloor{\frac{n}{q}} + 1\right\}$ now follows from the fact that in the definition of the function $h_{q'}$ all the initial ``blocks'' $\{ f_{qj+i}(q') : i \in \{1,\ldots,q\}\}$ for $j \in \{0,1,\ldots,\myfloor{\frac{n}{q}} - 1\}$ are fully used, and the last ``block'' $\{ f_{qj+i}(q') : i \in \{1,\ldots,q\}\}$ for $j = \myfloor{\frac{n}{q}}$ is only partially used. Finally, we prove that $|\{f \in \mathcal{F} : f(i) = f(j)\}| \leq d$. This simply follows from the fact that any two distinct polynomials $g, h \in \mathbb{F}[x]$, being of degree at most $d$, can agree on at most $d$ elements of the field $\mathbb{F}_q = \{0,1,\ldots,q - 1\}$. This last property is true because the polynomial $g(x)-h(x)$ is a non-zero polynomial of degree at most $d$, and therefore it has at most $d$ zeros in the field $\mathbb{F}_q$. Let us finally observe that the total number of polynomials, $q^{d+1}$, in the set $\mathbb{F}[x]$ can be exponential in $n$. However, this construction can easily be implemented in time polynomial in $n,q,d$, because we only need the initial $n$ of these polynomials. Thus we can simply disregard the remaining $q^{d+1} - n$ polynomials. \end{proof} \begin{corollary} \label{cor:dim-red-1} Observe that setting $q \in \Omega(\log{n})$, $d \in \Theta(q)$ in Lemma~\ref{lem:Reed_Solomon_Construction} results in a dimensionality-reduction set of functions $\mathcal{F}$ with parameters $(n,q, \sqrt{q})$. Moreover, the set $\mathcal{F}$ has size $q$ and, as long as $q \in O(n)$, it can be computed in time polynomial in $n$. \end{corollary} \subsection{Product of two Reed-Solomon codes}\label{subsec:product-two-codes} In the following we show that two Reed-Solomon codes composed together can produce a set of dimensionality-reduction functions with parameters $(n, \log\log{n}, \sqrt{\log\log{n}})$. Assume we are given an integer $n$.
Let $\ell_{2}$ be a prime number and $d_{2} = \myceil{\sqrt{\ell_{2}}}$. Choose $\ell_{1}$ to be a prime number in the interval $\left[ \frac{1}{2}\ell_{2}^{d_{2}}, \ell_{2}^{d_{2}} \right]$ and $d_{1} = \myceil{\sqrt{\ell_1}}$. The number $\ell_{1}$ exists by Bertrand's postulate on the distribution of prime numbers. Additionally, the choice of the numbers $\ell_{1}$ and $\ell_{2}$ must be such that $$\text{a) } {\ell_1}^{d_1} \ge n \text{, and b) } {\ell_2}^{d_2} = O(poly(n)) \, .$$ If these two conditions are satisfied, Lemma~\ref{lem:Reed_Solomon_Construction} ensures that we can construct a set $\mathcal{F}_{1}$ of functions $f : [n] \rightarrow [\ell_{1}]$ with parameters $n$, $q := \ell_1$, $d := d_{1}$ in time $O(poly(n))$. Let $\mathcal{F}_{2}$ be another set of functions $f : [\ell_{1}] \rightarrow [\ell_{2}]$, specified by Lemma~\ref{lem:Reed_Solomon_Construction} with parameters $\ell_{1}$, $q := \ell_{2}$, $d := d_{2}$. The set $\mathcal{F}_{2}$ can also be constructed in time polynomial in $n$. We compose a set $\mathcal{F}$ of functions $f : [n] \rightarrow [\ell_{2}]$ from the sets $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$ in the following way: $\mathcal{F} = \left\{ f_{2} \circ f_{1} : f_{1} \in \mathcal{F}_{1}, f_{2} \in \mathcal{F}_{2} \right\}$. Observe that $|\mathcal{F}| = |\mathcal{F}_{1}|\cdot |\mathcal{F}_{2}| = \ell_{1} \ell_{2}$. Next, we show that the properties obtained from Lemma~\ref{lem:Reed_Solomon_Construction} for the sets $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$ lift to the set $\mathcal{F}$. \begin{lemma} \label{lem:differen-val-prob} For any two distinct numbers $i, j \in [n]$ we have: $$|\{f \in \mathcal{F} : f(i) = f(j)\}| \leq \ell_{2}d_{1} + \ell_{1}d_{2}$$ \end{lemma} \begin{proof} Take two distinct $i, j \in [n]$. Consider a function $f \in \mathcal{F}$. From the construction of $\mathcal{F}$ we know that $f = f_{2} \circ f_{1}$ for some $f_{1} \in \mathcal{F}_{1}$, $f_{2} \in \mathcal{F}_{2}$.
Now, if $f(i) = f(j)$ then $f_{2}(f_{1}(i)) = f_{2}(f_{1}(j))$, which means that either $f_{1}(i) = f_{1}(j)$ or $f_{2}(i') = f_{2}(j')$, where $i' = f_{1}(i), j' = f_{1}(j)$ and $i' \neq j'$. For the fixed pair of indices $i, j$ the number of functions $f_{1} \in \mathcal{F}_{1}$ such that $f_{1}(i) = f_{1}(j)$ is at most $d_{1}$, therefore the first case can happen at most $|\mathcal{F}_{2}|d_{1} = \ell_{2} d_{1}$ times. Similarly, the second case can happen at most $|\mathcal{F}_{1}|d_{2} = \ell_{1} d_{2}$ times. The sum of these two bounds gives us the desired estimate. \end{proof} \begin{lemma} \label{lem:bins-sizes} For any function $f \in \mathcal{F}$ we have: $$\forall {\ell' \in [\ell_{2}]} : |f^{-1}(\ell')| \le \myfloor{\frac{n}{\ell_{2}}} + 3\myfloor{\frac{n}{\ell_{1}}}$$ \end{lemma} \begin{proof} Let us fix an integer $\ell' \in [\ell_{2}]$ and a function $f \in \mathcal{F}$. Observe that the function $f$ has a unique decomposition $f = f_{2} \circ f_{1}$ with $f_{1} \in \mathcal{F}_{1}$, $f_{2} \in \mathcal{F}_{2}$, thus $|f^{-1}(\ell')| = |(f_{2}\circ f_{1})^{-1}(\ell')|$. From Lemma~\ref{lem:Reed_Solomon_Construction} we have that the set $f_{2}^{-1}(\ell')$ has either $\myfloor{\frac{\ell_{1}}{\ell_{2}}}$ or $\myfloor{\frac{\ell_{1}}{\ell_{2}}} + 1$ elements. Similarly, for fixed $\ell'' \in [\ell_{1}]$ the set $f_{1}^{-1}(\ell'')$ has either $\myfloor{\frac{n}{\ell_{1}}}$ or $\myfloor{\frac{n}{\ell_{1}}} + 1$ elements.
These two bounds combined give us $$|(f_{2}\circ f_{1})^{-1}(\ell')| \in \left[ \myfloor{\frac{n}{\ell_{1}}} \myfloor{\frac{\ell_{1}}{\ell_{2}}}, \myfloor{\frac{n}{\ell_{1}}} \myfloor{\frac{\ell_{1}}{\ell_{2}}} + \myfloor{\frac{n}{\ell_{1}}} + \myfloor{\frac{\ell_{1}}{\ell_{2}}} + 1 \right] \implies$$ $$|(f_{2}\circ f_{1})^{-1}(\ell')| \in \left[ \myfloor{\frac{n}{\ell_{1}}} \myfloor{\frac{\ell_{1}}{\ell_{2}}}, \myfloor{\frac{n}{\ell_{2}}} + 3\myfloor{\frac{n}{\ell_{1}}} \right],$$ where the last implication follows from the fact that $\myfloor{\frac{a}{b}}\myfloor{\frac{b}{c}} \le \myfloor{\frac{a}{c}} + \myfloor{\frac{b}{c}} + 1$. \end{proof} \begin{corollary}\label{cor:dim-red-2} For any $q \le \log\log{n}$ there exists a dimensionality-reduction set of functions with parameters $(n, q, \sqrt{q})$. Moreover, such a set has size $q^{\sqrt{q}}$ and can be computed in time polynomial in $q^{\sqrt{q}}$. \end{corollary} \begin{proof} Consider the above construction for parameters $\ell_{2} := q$ and $\ell_{1} := q^{\sqrt{q}}$. We can easily check that these parameters satisfy conditions a) and b) of the above construction. Sets $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$ can be computed in time $O(q^{\sqrt{q}})$ and $O(q)$, respectively, due to Lemma~\ref{lem:Reed_Solomon_Construction}. The correctness follows from Lemmas~\ref{lem:differen-val-prob} and \ref{lem:bins-sizes}. \end{proof} \ignore{Reducing dimension from $n$ to $\log{n}$ may be sometimes insufficient, e.g., one may want to enumerate the set of all permutations, but the set of all $\log{n}$-element permutations is superpolynomial in $n$. To tackle this problem we propose a construction based on composition of two dimensionality-reduction set of functions, $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$. Roughly speaking, the goal of the first set is to reduce the dimension from $n$ to $\log{n}$, while the goal of the second set is to reduce the dimension from $\log{n}$ to $\log\log{n}$.
Such composition is not a totally new concept. It was proposed by Kesselheim et al.~\cite{KesselheimKN15}. Our contribution, however, is in showing that each function that belongs to the composition of those two sets, $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$, can preserve the property of equal preimage sizes, that is, it satisfies condition (2) from the definition of dimensionality-reduction set of functions. The complete reduction is provided in Appendix~\ref{sec:reed-solomon-proofs}. The concise and formal summary is~given~below. \begin{corollary}\label{cor:dim-red-2} For any $\epsilon > 0$ and $q \ge (\log\log{n})^{\epsilon}$ there exists a dimensionality-reduction set of functions with parameters $(n, q, \sqrt{q})$. Moreover, such set has size $q^{\sqrt{q}}$ and can be computed in polynomial time in $q^{\sqrt{q}}$. \end{corollary} } \section{Low entropy distributions for the $k$-secretary problem}\label{sec:low-entropy-k-secretary} In this section, we give a general framework for leveraging a dimensionality-reduction set of functions together with a set of permutations over dimension $\ell$ to a set of permutations over a dimension $n > \ell$, such that a wait-and-pick multiple-choice secretary algorithm executed on the latter distribution achieves the competitive ratio of the former distribution. For simplicity of notation we assume in this section that $v(1) \geq v(2) \geq \cdots \geq v(n)$. Let us fix a dimensionality-reduction set of functions $\mathcal{F}$, a set $\mathcal{L}$ of $\ell$-element permutations, and a wait-and-pick algorithm $ALG$ such that: $$\mathbb{E}_{\pi \sim \mathcal{L}}(ALG(\pi)) \ge (1-\epsilon)\sum_{i = 1}^{k} v(i) \ .$$ Consider the following random experiment: first we draw u.a.r.\ a function $f$ from $\mathcal{F}$, and then draw u.a.r.\ a permutation $\pi$ from $\mathcal{L}$. We can relate an $n$-element permutation to this experiment as follows.
First, the function $f$ determines for each $u \in [n]$ the number of the block $f(u)\in [\ell]$ to which $u$ is assigned. The permutation $\pi$ sets the order of these blocks. Ultimately, the $n$-element permutation is created by first ordering the blocks according to $\pi$, and then listing the numbers from each block in one sequence, preserving the order of the blocks. The order of the numbers inside a single block is irrelevant. Let $\mathcal{F} \circ \mathcal{L}$ be the set of $n$-element permutations which is the support of the constructed distribution. The key properties of this random construction are twofold. First, if the probability that a pair of fixed indices $i,j$ ends up in the same block is $\frac{d}{\ell}$, then by the union bound the probability that the indices of the $k$ largest adversarial elements are assigned to pairwise different blocks is at least $1 - \frac{k^2 d}{\ell}$. Second, if the blocks are of roughly the same size, the relative order of the blocks to which these indices are assigned will be the same as the relative order of these indices in the larger permutation. Moreover, the order of the blocks with respect to the time threshold in the smaller permutation will be the same as the order of these indices with respect to the threshold in the larger permutation. These properties let us smoothly carry over properties of the wait-and-pick algorithm, e.g., a wait-and-pick algorithm successful on the smaller $\ell$-element permutation will also be successful on the larger $n$-element permutation. The above reasoning gives us Theorem~\ref{thm:dim-red-and-permutations}. \begin{theorem} \label{thm:dim-red-and-permutations} Let $\mathcal{F}$ be a set of dimensionality-reduction functions with parameters $(n, \ell, d)$ s.t.~$\ell^2 < \frac{n}{\ell}$, and $\mathcal{L}$ be a multiset of $\ell$-element permutations.
Let $ALG$ be a wait-and-pick algorithm with a time threshold $m \in [\ell-1]$ that achieves a $(1-\epsilon)$ competitive ratio on the uniform distribution over $\mathcal{L}$: $$\mathbb{E}_{\pi \sim \mathcal{L}}(ALG(\pi)) \ge (1-\epsilon)\sum_{i = 1}^{k} v(i).$$ Then, a wait-and-pick algorithm $ALG'$ with a time threshold $ \lfloor m \frac{n}{\ell} \rfloor \in [n-1]$ executed on the uniform distribution over the set $\mathcal{F} \circ \mathcal{L}$ achieves $$\mathbb{E}_{\pi \sim \mathcal{F} \circ \mathcal{L}}(ALG'(\pi)) \ge \bigg(1 - \frac{k^{2}}{d}\bigg)(1-\epsilon)\sum_{i = 1}^{k} v(i) \, .$$ The set $\mathcal{F} \circ \mathcal{L}$ can be computed in time $O(|\mathcal{F}| \cdot |\mathcal{L}|)$. \end{theorem} The first application of the introduced framework comes from combining the doubly-composed dimensionality-reduction set of functions with the set of all permutations of size $\log\log{n}$. \begin{theorem}\label{thm:first-main-proof} For any $k < (\log\log{n})^{1/4}$ there exists a permutations distribution $\mathcal{D}$ such that $$\mathbb{E}_{\sigma\sim\mathcal{D}}(ALG(\sigma)) \ge \bigg(1 - \frac{k^2}{\sqrt{\log\log{n}}}\bigg)\bigg(1-\frac{4\log k}{k^{1/3}}\bigg)\sum_{i = 1}^{k}v(i),$$ where $ALG$ is a wait-and-pick multiple-choice secretary algorithm with time threshold $m = n / k^{1/3}$. The distribution $\mathcal{D}$ has the optimal entropy $O(\log\log{n})$ and can be computed in time polynomial in $n$. \end{theorem} \begin{proof} Let us set $\ell = \log\log{n}$. Consider a dimensionality-reduction set of functions $\mathcal{F}$ given by Corollary~\ref{cor:dim-red-2} with parameters $(n, \ell, \sqrt{\ell})$. Note that the size of the set $\mathcal{F}$ is $O(\ell^{\sqrt{\ell}}) = O(\log{n})$. Let $\mathcal{L}$ be the set of all $\ell$-element permutations. From Stirling's approximation we obtain that $\log(|\mathcal{L}|) = \log(\ell !)
= O(\ell\log{\ell}) = O(\log\log{n})$, thus $|\mathcal{L}| = O(\log(n))$ and we can enumerate all permutations in $\mathcal{L}$ in time polynomial in $n$. By Lemma~\ref{lemma:Success_k-secretary}, we get that the wait-and-pick algorithm $ALG'$ executed on the uniform distribution over the set $\mathcal{L}$ with time threshold $m = \frac{\ell}{k^{1/3}} = \frac{\log\log{n}}{k^{1/3}}$ gives the following competitive ratio: $$\mathbb{E}_{\pi \sim \mathcal{L}}(ALG'(\pi)) \ge \Big(1-\frac{3\log{k}}{k^{1/3}}\Big)\Big(1-\frac{1}{k}\Big) \sum_{i = 1}^{k}v(i).$$ By Theorem~\ref{thm:dim-red-and-permutations} applied to the sets $\mathcal{F}$ and $\mathcal{L}$, the wait-and-pick algorithm with time threshold $m' := \frac{n}{k^{1/3}}$ achieves the following competitive ratio $$\mathbb{E}_{\pi \sim \mathcal{L'}}(ALG(\pi)) \ge \Big(1-\frac{k^{2}}{\sqrt{\log\log{n}}}\Big)\Big(1-\frac{3\log{k}}{k^{1/3}}\Big)\Big(1-\frac{1}{k}\Big) \sum_{i = 1}^{k}v(i)$$ $$\ge \Big(1-\frac{k^{2}}{\sqrt{\log\log{n}}}\Big)\Big(1 - \frac{4\log{k}}{k^{1/3}}\Big)\sum_{i = 1}^{k}v(i) \, ,$$ where $\mathcal{L}'$ is the multiset derived from the construction of Theorem~\ref{thm:dim-red-and-permutations}. As the distribution $\mathcal{D}$ we take the uniform distribution on the multiset $\mathcal{L}'$. Since $|\mathcal{L'}| = O(\log{n})$, the theorem is proven. \end{proof} A much stronger implication of the reduction framework can be proven when, instead of the set of all permutations of size $O(\log\log{n})$, we consider the set of $\log{n}$-element permutations constructed by the method of the pessimistic estimator from Section~\ref{sec:derand_Hoeffding}. This yields the following.
\begin{theorem}\label{thm:second-main-proof} For any $k \le \frac{\log{n}}{\log\log{n}}$ there exists a distribution $\mathcal{D}$ over permutations such that $$\mathbb{E}_{\sigma\sim\mathcal{D}}(ALG(\sigma)) \ge \bigg(1 - \frac{k^2}{\sqrt{\log{n}}}\bigg)\bigg(1-\frac{5\log{k}}{k^{1/3}}\bigg)\sum_{i = 1}^{k}v(i),$$ where $ALG$ is a wait-and-pick multiple-choice secretary algorithm with time threshold $m = n / k^{1/3}$. The distribution $\mathcal{D}$ has the optimal entropy $O(\log\log{n})$ and can be computed in polynomial time in $n$. \end{theorem} \begin{proof} Let us set $\ell = \log{n}$. Consider a dimensionality-reduction set of functions $\mathcal{F}$ given by Corollary~\ref{cor:dim-red-1} with parameters $(n, \ell, \sqrt{\ell})$. Note that the size of the set $\mathcal{F}$ is $O(poly \log (n))$. Next, consider a set $\mathcal{L}$ of $\ell$-element permutations given by Theorem~\ref{Thm:Derandomization_2} with parameters $\delta = \frac{1}{k^{1/3}}$ and $\epsilon = \frac{\log k}{k^{1/3}}$. The set has size $$|\mathcal{L}| \le \frac{k'\log{\ell} + k'\log{k'} + \log{((1-\epsilon)k)}}{\delta^2 \rho_k/2} \le O(k^2 \log{\ell}) \, , $$ where $k'=k + \lfloor k^{2/3} \log {k} \rfloor$, and can be computed in time $$O\left((1-\epsilon) \cdot \ell \cdot k^{k+1} \cdot \ell^{k+2} \cdot \left(k + \lfloor k^{2/3} \log {k} \rfloor\right)^{k m/\ell} \cdot poly \log(\ell)\right) \, , $$ which is $O(poly (n))$ for our choice of parameters $\ell, k, \epsilon$, and $m = \frac{\ell}{k^{1/3}}$. Theorem~\ref{Thm:Derandomization_2} also implies that for every $(k + \lfloor k^{2/3} \log {k} \rfloor)$-tuple of adversarial values there are at least $\big(1-\frac{1}{k^{1/3}}\big)\rho_{k}|\mathcal{L}|$ successful permutations. In consequence, the wait-and-pick algorithm $ALG$ with time threshold $\frac{\ell}{k^{1/3}}$ has the following competitive ratio when executed on the uniform distribution over the set $\mathcal{L}$.
$$\mathbb{E}_{\sigma\sim\mathcal{L}}(ALG(\sigma)) \ge (1-\epsilon)(1 - \delta)\rho_{k}\sum_{i = 1}^{k}v(i) = (1-\epsilon)(1 - \delta)\Big(1 - \frac{m}{\ell}\Big)\Big(1 - \frac{1}{k}\Big)\Big(1 - \frac{\log{k}}{k^{1/3}}\Big)\sum_{i = 1}^{k}v(i)$$ $$\ge \Big(1 - \frac{5\log{k}}{k^{1/3}} \Big)\sum_{i = 1}^{k}v(i) \, .$$ Finally, we combine the sets $\mathcal{F}$ and $\mathcal{L}$ using Theorem~\ref{thm:dim-red-and-permutations} and obtain a set of $n$-element permutations $\mathcal{F} \circ \mathcal{L}$. Let $\mathcal{D}$ be the uniform distribution over the set $\mathcal{F} \circ \mathcal{L}$. By Theorem~\ref{thm:dim-red-and-permutations} we obtain $$\mathbb{E}_{\sigma\sim\mathcal{D}}(ALG(\sigma)) \ge \bigg(1 - \frac{k^2}{\sqrt{\log{n} }}\bigg)\bigg(1-\frac{5\log{k}}{k^{1/3}}\bigg)\sum_{i = 1}^{k}v(i) \, ,$$ where $ALG$ is the wait-and-pick algorithm with threshold $m' = \frac{n}{k^{1/3}}$, which proves the claimed competitive ratio. By Corollary \ref{cor:dim-red-1}, $|\mathcal{F}|=\ell$. Since $|\mathcal{F} \circ \mathcal{L}| = |\mathcal{F}|\cdot|\mathcal{L}| \le \log{n} \cdot O(k^{2} \cdot \log{\log{n}})$, the entropy of the distribution is $O(\log\log{n})$. The polynomial-time computability also follows from the upper bounds on the sizes of the sets $\mathcal{F}$ and $\mathcal{L}$. \end{proof} \section{Lower bounds for the $k$-secretary problem} \label{sec:lower-bounds} \ignore{ \subsection{Optimality of $(1-1/\sqrt{k})$ competitive ratio} \label{sec:optimality-k-secretary} We will show now that any, even randomized, algorithm for the $k$-secretary problem cannot have competitive ratio better than $(1-1/\sqrt{k})$, not only when it uses a uniform probability distribution $\mathcal{D}_{\Pi_n}$ on the set of all permutations $\Pi_n$ (this fact is well known \cite{kleinberg2005multiple,Gupta_Singla}), but also when it uses any distribution on $\Pi_n$.
The main idea is to view a randomized algorithm $A$ (with some internal random bits) and the distribution $\mathcal{D}_{\Pi_n}$ together as a randomized algorithm $B = (A,\mathcal{D}_{\Pi_n})$ for the $k$-secretary problem. The randomness of the algorithm $B$ is the randomness of $A$ together with the randomness in $\mathcal{D}_{\Pi_n}$. Algorithm $B$ first samples $\pi \sim \mathcal{D}_{\Pi_n}$ and then runs $A$ on the items ordered according to permutation $\pi$. \begin{proposition}\label{prop:optimal_comp_ratio} The best possible competitive ratio of any, even randomized, algorithm $A$ for the $k$-secretary problem is $(1-\Omega(1/\sqrt{k}))$ even if it uses a random order chosen from {\em any} distribution on $\Pi_n$. \end{proposition} \begin{proof} Let us fix any deterministic algorithm $A$ for the $k$-secretary problem and any fixed permutation $\pi \in \Pi_n$. This pair $B=(A,\pi)$ can be viewed as a deterministic algorithm for the $k$-secretary problem where $A$ is executed on the items in order given by $\pi$. We will follow now an argument outlined in the survey by Gupta and Singla \cite{Gupta_Singla}. By Yao’s minimax principle \cite{Yao77}, it suffices to give a distribution over instances of adversarial assignments of values to items, that causes a large loss for any deterministic algorithm, in this case algorithm $B$. Suppose that each item has value $0$ with probability $1 - \frac{k}{n}$, and otherwise, it has value $1$ with probability $\frac{k}{2n}$, or value $2$ with the remaining probability $\frac{k}{2n}$. The number of non-zero items is therefore $k \pm O(\sqrt{k})$ with high probability by Chernoff bound, with about half $1$’s and half $2$’s. Therefore, the optimal value of this $k$-secretary instance is $V^* = 3k/2 \pm O(\sqrt{k})$ with high probability. Ideally, we want to pick all the $2$’s and then fill the remaining $k/2 \pm O(\sqrt{k})$ slots using the $1$’s. However, consider the state of the algorithm $B$ after $n/2$ arrivals. 
Since the algorithm does not know how many $2$’s will arrive in the second half, it does not know how many $1$’s to pick in the first half. Hence, it will either lose about $\Theta(\sqrt{k})$ $2$’s in the second half, or it will pick $\Theta(\sqrt{k})$ too few $1$’s from the first half. Either way, the algorithm will lose $\Omega(V^*/\sqrt{k})$ value. \pk{Is the argument below now formal enough?} Now by applying the Yao's principle, this loss applies to any deterministic worst-case adversarial assignment of values $\{0,1,2\}$ to the items in $[n]$ and any randomized algorithm that is a probability distribution on any deterministic $k$-secretary algorithms. Therefore, this loss also applies to any randomized algorithm that is a probability distribution $\mathcal{D}_{\mbox{pairs}}$ on the pairs $(A,\pi)$ of deterministic algorithm $A$ and permutation $\pi \in \Pi_n$. Let us now fix any randomized algorithm $B$ for the $k$-secretary problem. This algorithm is a probability distribution $\mathcal{D}_B$ on some set of deterministic $k$-secretary algorithms. Let us also choose {\em any} probability distribution $\mathcal{D}_{\Pi_n}$ on the set of permutations $\Pi$. The product distribution $(\mathcal{D}_B, \mathcal{D}_{\Pi_n})$ is an example of distribution of type $\mathcal{D}_{\mbox{pairs}}$ above. Therefore, the above lower bound applies to the randomized algorithm $(\mathcal{D}_B, \mathcal{D}_{\Pi_n})$, which first samples $\pi \sim \mathcal{D}_{\Pi_n}$ and then executes the randomized algorithm $B = \mathcal{D}_B$ on the items in order given by $\pi$. This argument shows that no randomized algorithm $B$ that uses {\em any} probability distribution on random orders can have a competitive ratio better than $(1-\Omega(1/\sqrt{k}))$. 
\end{proof} } \subsection{Entropy lower bound for $k=O(\log^a n)$, for some constant $a\in (0,1)$} \label{sec:lower-general} Our proof of a lower bound on the entropy of any $k$-secretary algorithm achieving ratio $1-\epsilon$, for a given $\epsilon\in (0,1)$, stated in Theorem~\ref{thm:lower-general}, generalizes the proof for the (classic) secretary problem in \cite{KesselheimKN15}. The generalization goes in two directions: first, we reduce the problem of selecting the largest value to the $k$-secretary problem of achieving ratio $1-\epsilon$, by considering a special class of hard assignments of values. Second, when analyzing the former problem, we have to accommodate the fact that our algorithm aiming at selecting the largest value can pick $k$ elements, while the classic adversarial algorithm can pick only one element. Below is an overview of the lower bound analysis. We consider a subset of permutations, $\Pi\subseteq \Pi_n$, of size $\ell$ on which the distribution is concentrated enough (see Lemma~\ref{lem:entropy-support} proved in~\cite{KesselheimKN15}). Next, we fix a semitone sequence $(x_1,\ldots,x_s)$ w.r.t. $\Pi$ of length $s=\frac{\log n}{\ell+1}$ and consider a specific class of hard assignments of values, defined later. A semitone sequence with respect to $\pi$, introduced in~\cite{KesselheimKN15}, is defined recursively as follows: an empty sequence is semitone with respect to any permutation $\pi$, and a sequence $(x_1, \ldots , x_s)$ is semitone w.r.t. $\pi$ if $\pi(x_s) \in\{ \min_{i\in [s]} \pi(x_i), \max_{i\in [s]} \pi(x_i)\}$ and $(x_1, \ldots, x_{s-1})$ is semitone w.r.t. $\pi$. It has been shown that for any given set $\Pi$ of $\ell$ permutations of $[n]$, there is always a sequence of length $s=\frac{\log n}{\ell+1}$ that is semitone with respect to all $\ell$ permutations. Let $V^*=\{1,\frac{k}{1-\epsilon},(\frac{k}{1-\epsilon})^2,\ldots,(\frac{k}{1-\epsilon})^{n-1}\}$.
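The recursive definition of a semitone sequence can be checked directly. Below is a minimal Python sketch of such a check (our illustration, not part of the construction; \texttt{positions} maps an element to its position under $\pi$, and the function name is ours):

```python
def is_semitone(xs, positions):
    """Check recursively whether xs = (x_1, ..., x_s) is semitone w.r.t.
    a permutation given by `positions`, which maps element -> position.

    The empty sequence is semitone; otherwise x_s must occupy the minimum
    or the maximum position among x_1, ..., x_s, and the prefix
    (x_1, ..., x_{s-1}) must be semitone as well.
    """
    if not xs:
        return True
    pos = [positions[x] for x in xs]
    # the last element must sit at an extreme position among the sequence
    if positions[xs[-1]] not in (min(pos), max(pos)):
        return False
    return is_semitone(xs[:-1], positions)
```

For instance, under the identity permutation every sequence listed in increasing order of positions is semitone, while a sequence whose last element sits strictly between the positions of earlier elements is not.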
An assignment is {\em hard} if the values of the semitone sequence form a permutation of some subset of $V^*$ while elements not belonging to the semitone sequence have value $\frac{1-\epsilon}{k}$. Note that the values allocated by a hard assignment to elements not in the semitone sequence are negligible, in the sense that the sum of any $k$ of them is $1-\epsilon$ while the sum of the $k$ largest values in the whole assignment is much bigger than $k$. Intuitively, every $k$-secretary algorithm achieving ratio $1-\epsilon$ must select the largest value in hard assignments (which is in the semitone sequence) with probability at least $1-\epsilon$ -- this requires analyzing how efficient deterministic online algorithms selecting $k$ out of $s$ values are at finding the maximum value on a certain random distribution of hard assignments (see Lemma~\ref{lem:lower-random-adv}) and applying Yao's principle to get an upper bound on the probability of success of any randomized algorithm against hard assignments (see Lemma \ref{lem:lower-deterministic-adv}). For the purpose of this proof, let us fix $k\le \log^a n$ for some constant $a\in (0,1)$, and a parameter $\epsilon \in (0,1)$ (which could be a function of $n,k$). \begin{lemma} \label{lem:lower-random-adv} Consider a set of $\ell<\log n - 1$ permutations $\Pi\subseteq \Pi_n$ and a semitone sequence $(x_1,\ldots,x_s)$ w.r.t. the set $\Pi$ of length $s=\frac{\log n}{\ell+1}<\log n$. Consider any deterministic online algorithm that for any given $\pi\in \Pi$ aims at selecting the largest value, using at most $k$ picks, against the following distribution of hard assignments. Let $V=V^*$.
We proceed recursively: $v(x_s)$ is the middle element of $V$, and we apply the recursive procedure at random: (i) on the sequence $(x_1,\ldots,x_{s-1})$ and the new set $V$ containing the $|V|/2$ {\em smallest} elements in $V$ with probability $\frac{1}{s}$ (i.e., $v(x_s)$ is larger than the values of the remaining elements with probability $1/s$), and (ii) on the sequence $(x_1,\ldots,x_{s-1})$ and the new set $V$ containing the $|V|/2$ {\em largest} elements in $V$ with probability $\frac{s-1}{s}$ (i.e., $v(x_s)$ is smaller than the values of the remaining elements with probability $(s-1)/s$). \ignore{ First, $z$ is selected from $V^*$ u.a.r. Let $V=V_z$ be the pool of values to be assigned to the semitone sequence 1-1. Then, we proceed recursively: $v(x_s)=\max V$ with probability $\frac{1}{s}$ and $v(x_s)=\min V$ with probability $\frac{s-1}{s}$, while the assignment of the remaining values from $V\setminus \{v(x_s)\}$ to $(x_1,\ldots,x_{s-1})$ is done recursively and independently. } Then, for any $\pi\in\Pi$, the algorithm selects the maximum value with probability at most~$\frac{k}{s}$. \end{lemma} \begin{proof} We start by observing that the hard assignments produced in the formulation of the lemma are disjoint -- this follows directly from the fact that the set $V$ of available values is an interval in $V^*$ and it shrinks by half in each step; the number of steps is $s<\log n$, so in each recursive step the set $V$ is non-empty. In the remainder we prove the claimed probability bound. Let $A_t^i$, for $1\le t\le s$ and $0\le i\le k$, be the event that the algorithm picks at most $i$ values from $v(x_1),\ldots,v(x_t)$. Let $B_t$ be the event that the algorithm picks the largest of the values $v(x_1),\ldots,v(x_t)$ in one of its picks. Let $C_t$ be the event that the algorithm picks the value $v(x_t)$. We prove, by induction on the lexicographic pair $(t,i)$, that $\Pr{B_t|A_t^i}\le \frac{i}{t}$. The base case, for any pair of parameters $(t,i=t)$, clearly holds: $\Pr{B_t|A_t^t}\le 1$.
Consider an inductive step for $i< t\le s$. Since, by the definition of semitone sequence $(x_1,\ldots,x_s)$, element $x_t$ could be either before all elements $x_1,\ldots,x_{t-1}$ or after all elements $x_1,\ldots,x_{t-1}$ in permutation $\pi$, we need to analyze both of these cases: \vspace*{1ex} \noindent {\bf Case 1: $\pi(x_t)<\pi(x_1),\ldots,\pi(x_{t-1})$.} Consider the algorithm when it receives the value of $x_t$. It has not seen the values of elements $x_1,\ldots,x_{t-1}$ yet. Assume that the algorithm already picked $k-i$ values before processing element $x_t$. Note that, due to the definition of the hard assignment in the formulation of the lemma, the knowledge of values occurring by element $x_t$ only informs the algorithm about set $V$ from which the adversary draws values for sequence $(x_1,\ldots,x_{t-1})$; thus this choice of values is independent, for any fixed prefix of values until the occurrence of element $x_t$. We use this property when deriving the probabilities in this considered case. 
We consider two conditional sub-cases, depending on whether either $C_t$ or $\neg C_t$ holds, starting from the former: \[ \Pr{B_t|A_t^i \& C_t} = \frac{1}{t} + \frac{t-1}{t} \cdot \Pr{B_{t-1}|A_{t-1}^{i-1} \& C_t} = \frac{1}{t} + \frac{t-1}{t} \cdot \Pr{B_{t-1}|A_{t-1}^{i-1}} = \frac{1}{t} + \frac{t-1}{t} \cdot \frac{i-1}{t-1} = \frac{i}{t} \ , \] where \begin{itemize} \item the first equation comes from the fact that $v(x_t)$ is the largest among $v(x_1),\ldots,v(x_t)$ with probability $\frac{1}{t}$ (and it contributes to the formula because of the assumption $C_t$ that the algorithm picks $v(x_t)$) and $v(x_t)$ is not the largest among $v(x_1),\ldots,v(x_t)$ with probability $\frac{t-1}{t}$ (in which case the largest value must be picked within the first $v(x_1),\ldots,v(x_{t-1})$ using $i-1$ picks), and \item the second equation comes from the fact that $B_{t-1}$ and $C_t$ are independent, and \item the last equation holds by the inductive assumption for $(t-1,i-1)$. \end{itemize} In the complementary condition $\neg C_t$ we have: \[ \Pr{B_t|A_t^i \& \neg C_t} = \frac{t-1}{t} \cdot \Pr{B_{t-1}|A_{t-1}^{i} \& \neg C_t} = \frac{t-1}{t} \cdot \Pr{B_{t-1}|A_{t-1}^{i}} = \frac{t-1}{t} \cdot \frac{i}{t-1} = \frac{i}{t} \ , \] where \begin{itemize} \item the first equation follows because if the algorithm does not pick $v(x_t)$ then the largest of the values $v(x_1),\ldots,v(x_t)$ must be within $v(x_1),\ldots,v(x_{t-1})$ and not $v(x_t)$ (the latter happens with probability $\frac{t-1}{t}$), and \item the second equation comes from the fact that $B_{t-1}$ and $\neg C_t$ are independent, and \item the last equation holds by the inductive assumption for $(t-1,i)$. \end{itemize} Hence, \[ \Pr{B_t|A_t^i} = \Pr{B_t|A_t^i \& C_t}\cdot \Pr{C_t} + \Pr{B_t|A_t^i \& \neg C_t}\cdot \Pr{\neg C_t} = \frac{i}{t} \cdot \left( \Pr{C_t} + \Pr{\neg C_t}\right) = \frac{i}{t} \ . \] This concludes the analysis of Case 1.
\vspace*{1ex} \noindent {\bf Case 2: $\pi(x_t)>\pi(x_1),\ldots,\pi(x_{t-1})$.} Consider the algorithm when it receives the value of $x_t$. It has already seen the values of elements $x_1,\ldots,x_{t-1}$; therefore, we can only argue about conditional event on the success in picking the largest value among $v(x_1),\ldots,v(x_{t-1})$, i.e., event $B_{t-1}$. Consider four conditional cases, depending on whether either of $C_t,\neg C_t$ holds and whether either of $B_{t-1},\neg B_{t-1}$ holds, starting from sub-case $B_{t-1}\& C_t$: \[ \Pr{B_t|A_t^i \& B_{t-1} \& C_t} = \frac{\Pr{B_t\& B_{t-1}|A_t^i \& C_t}}{\Pr{B_{t-1}|A_t^i \& C_t}} = \frac{\Pr{B_t\& B_{t-1}|A_t^i \& C_t}}{\Pr{B_{t-1}|A_{t-1}^{i-1}}} = 1 \ , \] since the algorithm already selected the largest value among $v(x_1),\ldots,v(x_{t-1})$ (by $B_{t-1}$) and now it also selects $v(x_t)$ (by $C_t$). We also used the observation $A_t^i\& C_t = A_{t-1}^{i-1}$. Next sub-case, when the conditions $\neg B_{t-1}\& C_t$ hold, implies: \[ \Pr{B_t|A_t^i \& \neg B_{t-1} \& C_t} = \frac{\Pr{B_t\& \neg B_{t-1}|A_t^i \& C_t}}{\Pr{\neg B_{t-1}|A_t^i \& C_t}} = \frac{\Pr{B_t\& \neg B_{t-1}|A_t^i \& C_t}}{1-\Pr{B_{t-1}|A_{t-1}^{i-1}}} = \frac{1}{t} \ , \] because when the maximum value among $v(x_1),\ldots,v(x_{t-1})$ was not selected (by $\neg B_{t-1}$) the possibility that the selected (by $C_t$) $v(x_t)$ is the largest among $v(x_1),\ldots,v(x_{t})$ is $\frac{1}{t}$, by definition of values $v(\cdot)$. As in the previous sub-case, we used $A_t^i\& C_t = A_{t-1}^{i-1}$. 
When we put the above two sub-cases together, for $B_{t-1}\& C_t$ and $\neg B_{t-1}\& C_t$, we get: \[ \Pr{B_t\& B_{t-1}|A_t^i \& C_t} + \Pr{B_t\& \neg B_{t-1}|A_t^i \& C_t} = \Pr{B_{t-1}|A_{t-1}^{i-1}} \cdot 1 + \left(1-\Pr{B_{t-1}|A_{t-1}^{i-1}}\right) \cdot \frac{1}{t} = \] \[ = \frac{i-1}{t-1} + \left(1-\frac{i-1}{t-1}\right) \cdot \frac{1}{t} = \frac{(i-1)t+(t-i)}{(t-1)t} = \frac{(t-1)i}{(t-1)t} = \frac{i}{t} \ , \] where the first equation comes from the previous sub-cases, the second is by the inductive assumption, and the others follow by simple arithmetic. We now consider the two remaining sub-cases, starting from $B_{t-1}\& \neg C_t$: \[ \Pr{B_t|A_t^i \& B_{t-1} \& \neg C_t} = \frac{\Pr{B_t\& B_{t-1}|A_t^i \& \neg C_t}}{\Pr{B_{t-1}|A_t^i \& \neg C_t}} = \frac{\Pr{B_t\& B_{t-1}|A_t^i \& \neg C_t}}{\Pr{B_{t-1}|A_{t-1}^i}} = \frac{t-1}{t} \ , \] since the algorithm already selected the largest value among $v(x_1),\ldots,v(x_{t-1})$ (by $B_{t-1}$) but does not select $v(x_t)$ (by $\neg C_t$); hence it holds the largest of $v(x_1),\ldots,v(x_{t})$ exactly when $v(x_t)$ is not the largest among them, which happens with probability $\frac{t-1}{t}$. We also used the observation $A_t^i\& \neg C_t = A_{t-1}^{i}$. The next sub-case, when the conditions $\neg B_{t-1}\& \neg C_t$ hold, implies: \[ \Pr{B_t|A_t^i \& \neg B_{t-1} \& \neg C_t} = \frac{\Pr{B_t\& \neg B_{t-1}|A_t^i \& \neg C_t}}{\Pr{\neg B_{t-1}|A_t^i \& \neg C_t}} = \frac{\Pr{B_t\& \neg B_{t-1}|A_t^i \& \neg C_t}}{1-\Pr{B_{t-1}|A_{t-1}^i}} = 0 \ , \] because the algorithm neither selected the maximum value among $v(x_1),\ldots,v(x_{t-1})$ (by $\neg B_{t-1}$) nor picks $v(x_t)$ (by $\neg C_t$), so it cannot have picked the largest among $v(x_1),\ldots,v(x_{t})$. We also used the observation $A_t^i\& \neg C_t = A_{t-1}^{i}$.
When we put the last two sub-cases together, for $B_{t-1}\& \neg C_t$ and $\neg B_{t-1}\& \neg C_t$, we get: \[ \Pr{B_t\& B_{t-1}|A_t^i \& \neg C_t} + \Pr{B_t\& \neg B_{t-1}|A_t^i \& \neg C_t} = \Pr{B_{t-1}|A_{t-1}^i} \cdot \frac{t-1}{t} + \left(1-\Pr{B_{t-1}|A_{t-1}^i}\right) \cdot 0 = \] \[ = \frac{i}{t-1} \cdot \frac{t-1}{t} = \frac{i}{t} \ , \] where the first equation comes from the previous sub-cases, the second is by the inductive assumption for $(t-1,i)$, and the last follows by simple arithmetic. Hence, similarly as in Case 1, we have \[ \Pr{B_t|A_t^i} = \Pr{B_t|A_t^i \& C_t}\cdot \Pr{C_t} + \Pr{B_t|A_t^i \& \neg C_t}\cdot \Pr{\neg C_t} = \frac{i}{t} \cdot \left( \Pr{C_t} + \Pr{\neg C_t}\right) = \frac{i}{t} \ . \] This concludes the analysis of Case 2, and also the inductive proof. It follows that $\Pr{B_s|A_s^k} \le \frac{k}{s}$, and since $\Pr{A_s^k} = 1$ (as the algorithm makes at most $k$ picks in the whole semitone sequence), we get $\Pr{B_s}\le\frac{k}{s}$. \end{proof} Applying Yao's principle~\cite{Yao77} to Lemma~\ref{lem:lower-random-adv}, we get: \begin{lemma} \label{lem:lower-deterministic-adv} Fix any $\epsilon \in (0,1)$. For any set $\Pi\subseteq \Pi_n$ of $\ell<\log n -1$ permutations and any probability distribution on it, and for any online algorithm using $k$ picks to select the maximum value, there is an adversarial (worst-case) hard assignment of values to elements in $[n]$ such that: (i) the maximum assigned value is unique and bigger than the other used values by a factor of at least $\frac{k}{1-\epsilon}$, (ii) the highest $s=\frac{\log n}{\ell+1}$ values are at least $1$, while the remaining ones are $\frac{1-\epsilon}{k}$, (iii) the algorithm selects the maximum allocated value with probability at most $\frac{k}{s}$. \end{lemma} \begin{proof} Consider a semitone sequence w.r.t.
the set of permutations $\Pi$, which has length $s=\frac{\log n}{\ell+1}$ (it exists as shown in \cite{KesselheimKN15}), and restrict for now to this sub-sequence of the whole $n$-value sequence. Consider any online algorithm that ignores elements that are not in this sub-sequence. We apply Yao's principle~\cite{Yao77} to Lemma~\ref{lem:lower-random-adv}: the latter provides an upper bound on the success probability (of selecting the largest value) of any deterministic $k$-secretary algorithm, for inputs being hard assignments selected from the distribution specified in Lemma~\ref{lem:lower-random-adv}. Yao's principle implies that there is a deterministic (worst-case) adversarial hard assignment of values from the set $V^*\cup \{\frac{1-\epsilon}{k}\}$ such that for any (even randomized) algorithm and probability distribution on $\Pi$, the probability that the algorithm selects the largest of the assigned values with at most $k$ picks is at most $\frac{k}{s}$. The hard assignment satisfies, by definition, also the first two conditions in the lemma statement. \ignore{ Now, we consider the whole sequence of $n$ values. The adversary assigns value $\frac{1-\epsilon}{k}$ to all elements not from the semitone sequence, and uses the assignment to the semitone sequence described in the previous paragraph. Clearly, this assignment satisfies conditions (i) and (ii) of the lemma.
If this algorithm selected the largest value (in the whole sequence) with probability larger than $\frac{k}{s}$, and thus violate condition (iii), then, since non-semitone elements are clearly recognized by their values, we could restrict the algorithm to the semitone sequence and obtain contradiction with Since $1/k$ is smaller by factor at least $k$ from any element in $V$, the algorithm can clearly reject all elements not from the semitone sequence (if not, it gets less than $k$ choices left for elements in the semitone sequence, while gaining not more than a value of a single element in the semitone sequence, more precisely, $1$). } \end{proof} We can extend Lemma~\ref{lem:lower-deterministic-adv} to any distribution on a set $\Pi$ of permutations of $[n]$ with an entropy $H$, by using the following lemma from~\cite{KesselheimKN15}, in order to obtain the final proof of Theorem~\ref{thm:lower-general} (re-stated below). \begin{lemma}[\cite{KesselheimKN15}] \label{lem:entropy-support} Let $\pi$ be drawn from a finite set $\Pi_n$ by a distribution of entropy $H$. Then, for any $\ell \ge 4$, there is a set $\Pi \subseteq \Pi_n$, $|\Pi| \le \ell$, such that $\Pr{\pi\in\Pi} \ge 1 - \frac{8H}{\log(\ell - 3)}$. \end{lemma} \noindent {\bf Theorem~\ref{thm:lower-general} } {\em Assume $k\le \log^a n$ for some constant $a\in (0,1)$. Then, any algorithm (even fully randomized) solving $k$-secretary problem while drawing permutations from some distribution on $\Pi_n$ with an entropy $H\le \frac{1-\epsilon}{9} \log\log n$, cannot achieve the expected ratio of at least $1-\epsilon$, for any $\epsilon\in (0,1)$ and sufficiently large $n$.} \vspace{1ex} \begin{proof} Let us fix $\ell=\sqrt{\frac{\log n}{k}}-1$. By Lemma~\ref{lem:entropy-support}, there is a set $\Pi\subseteq \Pi_n$ of size at most $\ell$ such that $\Pr{\pi\in\Pi} \ge 1 - \frac{8H}{\log(\ell - 3)}$. Let $s=\frac{\log n}{\ell +1}$ be the length of a semitone sequence w.r.t. $\Pi$. 
By Lemma~\ref{lem:lower-deterministic-adv} applied to the conditional distribution on the set $\Pi$, there is an adversarial hard assignment of values such that the probability of selecting the largest value is at most $\frac{k}{s}$. Summing up the events and using Lemma~\ref{lem:entropy-support}, the probability of the algorithm selecting the largest value is at most \[ \frac{k}{s}\cdot 1 + \frac{8H}{\log(\ell - 3)} = \frac{k\cdot (\ell+1)}{\log n} + \frac{8H}{\log(\ell - 3)} = \sqrt{\frac{k}{\log n}} + \frac{8H}{\log(\sqrt{\frac{\log n}{k}} - 4)} \ , \] which is smaller than $1-\epsilon$, for any $\epsilon\in (0,1)$, for sufficiently large $n$, because $k\le \log^a n$, where $a\in (0,1)$ is a constant, and $H\le \frac{1-\epsilon}{9} \log\log n$. To complete the proof, recall that, by the definition of hard assignments and statements (i) and (ii) in Lemma~\ref{lem:lower-deterministic-adv}, the maximum value selected from $V$ is unique and bigger than the other values by a factor of at least $\frac{k}{1-\epsilon}$; therefore, the event of selecting $k$ values achieving ratio $1-\epsilon$ is a sub-event of the considered event of selecting the largest value. Thus, the probability of the former is upper bounded by the probability of the latter, and so the ratio $1-\epsilon$ cannot be achieved either. \end{proof} \subsection{Entropy lower bound for wait-and-pick algorithms} \label{sec:lower-wait-and-pick} Assume that there is a deterministic wait-and-pick algorithm for the $k$-secretary problem with competitive ratio $1-\epsilon$. Let $m$ be the time threshold and $\tau$ be any statistic (i.e., the algorithm uses the $\tau$-th largest element among the first $m$ elements as the statistic, and then, from the elements after position $m$, it selects every element greater than or equal to the statistic). Our analysis works for any statistic. Let $\ell$ be the number of permutations, from which the order is chosen uniformly at random.
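For concreteness, the wait-and-pick behavior just described can be sketched in Python as follows (our simplified model, not the paper's construction: values arrive in the given order, are assumed distinct, and at most $k$ picks are made):

```python
def wait_and_pick(values, m, tau, k):
    """Observe the first m arriving values without picking anything, set
    the statistic to the tau-th largest value among them, then pick every
    later value greater than or equal to the statistic, up to k picks."""
    statistic = sorted(values[:m], reverse=True)[tau - 1]
    picked = []
    for v in values[m:]:
        if len(picked) == k:
            break
        if v >= statistic:
            picked.append(v)
    return picked
```

For instance, on the arrival order $3, 7, 2, 9, 8, 1, 10$ with $m = 3$, $\tau = 1$, and $k = 2$, the statistic is $7$ and the algorithm picks $9$ and $8$.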
We prove that no wait-and-pick algorithm achieves simultaneously an inverse-polynomially (in $k$) small error $\epsilon$ and entropy asymptotically smaller than $\log k$. More precisely, we re-state and prove Theorem~\ref{thm:lower}. \vspace*{3ex} \noindent {\bf Theorem~\ref{thm:lower} } {\em Any wait-and-pick algorithm solving the $k$-secretary problem with competitive ratio of at least $(1-\epsilon)$ requires entropy $\Omega(\min\{\log 1/\epsilon,\log \frac{n}{2k}\})$.} \vspace*{3ex} \begin{proof} W.l.o.g., and to simplify the analysis, we can assume that $1/\epsilon$ is an integer. We create a bipartite graph $G=(V,W,E)$, where $V$ is the set of $n$ elements, $W$ corresponds to the set of $\ell$ permutations, and the neighborhood of a node $i\in W$ is defined as the set of elements (in $V$) which are on the left hand side of threshold $m$ in the $i$-th permutation. It follows that $|E|=\ell\cdot m$. Let $d$ denote the average degree of a node in $V$, i.e., $d=\frac{|E|}{n}=\frac{\ell \cdot m}{n}$. Consider first the case when $m\ge k$. We prove that $\ell \ge 1/\epsilon$. Consider a different strategy of the adversary: it processes elements $i\in W$ one by one, and selects $\epsilon \cdot k$ neighbors of element $i$ that have not been selected before, adding them to a set $K$. This is continued until the set $K$ has $k$ elements or all elements in $W$ have been considered. Note that if during the above construction the current set $K$ has at most $k(1-\epsilon)$ elements, the adversary can find $\epsilon \cdot k$ neighbors of the currently considered $i\in W$ that are different from the elements in $K$ and thus can be added to $K$, by the assumption $m\ge k$. If the construction stops because $K$ has $k$ elements, it means that $\ell\ge 1/\epsilon$, because $1/\epsilon$ elements in $W$ had to be processed in the construction.
If the construction stops because all the elements in $W$ have been processed but $K$ is of size smaller than $k$, it means that $|W|=\ell<1/\epsilon$; however, if we top up the set $K$ with arbitrary elements in $V$ so that the resulting $K$ is of size $k$, then no matter what permutation is selected the algorithm misses at least $\epsilon \cdot k$ elements in $K$, and thus its value is smaller than $(1-\epsilon)$ times the optimum, and we get a contradiction. Thus we have proved $\ell \ge 1/\epsilon$, and hence the entropy needed is at least $\log 1/\epsilon$, which for optimal algorithms with $\epsilon=\Theta(k^{-1/2})$ gives entropy $\Theta(\log k)$. Consider now the complementary case when $m<k$. The following has to hold: $\ell\cdot (m+k)\ge n$. This is because in the opposite case the adversary could allocate value $1$ to an element which does not occur in the first $m+k$ positions of any of the $\ell$ permutations, and value $\frac{1-\epsilon}{k}$ to all other elements -- in such a scenario, the algorithm would pick the first $k$ elements after the time threshold position $m$ (as it sees, and thus chooses, the same value all this time -- this follows straight from the definition of wait-and-pick thresholds), for any of the $\ell$ permutations, obtaining the total value of $1-\epsilon$, while the optimum is clearly $1+(k-1)\cdot\frac{1-\epsilon}{k}>1$, contradicting the approximation ratio $1-\epsilon$ of the algorithm. It follows from the inequality $\ell\cdot (m+k)\ge n$ that $\ell \ge \frac{n}{m+k} > \frac{n}{2k}$, and thus the entropy is $\Omega(\log\frac{n}{2k})$. To summarize both cases, the entropy is $\Omega(\min\{\log 1/\epsilon,\log \frac{n}{2k}\})$. \ignore{ \dk{The following does not apply for now} Assume, for the sake of contradiction, that for any constant $\eta>0$, $\ell=o(k^\eta)$ and $\epsilon=\Theta(k^{-\eta})$.
By a pigeonhole principle, there is a set of elements $K\subseteq V$ such that $|K|=k$ and an average degree of an element in $K$ is at least $d$. Then, by the uniformity of distribution assumption, we have that ??? and consequently, each permutation must have at most $\epsilon \cdot k$ elements in from $K$ on their left hand side (considering adversary who allocates the same value to all elements in $K$ and arbitrary small value to others), thus in total the number of occurrences of elements from $K$ on left hand sides on the considered $\ell$ permutations is at most $\epsilon \cdot k \cdot \ell$. On the other hand, the sum of degrees of elements in $k$ is at least $k\cdot d \ge k\cdot \frac{\ell \cdot m}{n}$. They together imply $\epsilon \ge \frac{m}{n}$, and consequently, $m\le n\epsilon$. We would like to show that for $k$ sufficiently large but still sub-logarithmic, the entropy is $\Omega(\log\log n)$. } \end{proof} In particular, it follows from Theorem~\ref{thm:lower} that for $k$ such that $k$ is super-polylogarithmic and sub-$\frac{n}{\mbox{polylog } n}$, the entropy of approximation-optimal algorithms is $\omega(\log\log n)$. Moreover, if $k$ is within range of some polynomials of $n$ of degrees smaller than $1$, the entropy is $\Omega(\log n)$. \subsection{$\Omega(\log\log n + (\log k)^2)$ entropy of previous solutions} \label{sec:previous-suboptimality} All previous solutions but~\cite{KesselheimKN15} used uniform distributions on the set of all permutations of $[n]$, which requires large entropy $\Theta(n\log n)$.\footnote{% Some of them also used additional randomness, but with negligible entropy $o(n\log n)$.} In~\cite{KesselheimKN15}, the $k$-secretary algorithm uses $\Theta(\log\log n)$ entropy to choose a permutation u.a.r. from a given set, however, it also uses recursively additional entropy to choose the number of blocks $q'$. 
It starts with $q'$ being polynomial in $k$, and in a recursive call it selects a new $q'$ from the binomial distribution $Binom(q',1/2)$. It continues until $q'$ becomes $1$. Below we estimate from below the total entropy needed for this random process. Let $X_i$, for $i=1,\ldots,\tau$, denote the values of $q'$ selected in subsequent recursive calls, where $\tau$ is the first index such that $X_\tau=1$. We have $X_1=Binom(q',1/2)$ and, recursively, $X_{i+1}=Binom(X_i,1/2)$. We need to estimate the joint entropy ${\mathcal{H}}(X_1,\ldots,X_\tau)$ from below. Joint entropy can be expressed using conditional entropy as follows:
\begin{equation}
\label{eq:conditional-entropy}
{\mathcal{H}}(X_1,\ldots,X_\tau) = {\mathcal{H}}(X_1) + \sum_{i=2}^\tau {\mathcal{H}}(X_i|X_{i-1},\ldots,X_1) \ .
\end{equation}
By the properties of $Binom(q',1/2)$ and the fact that $q'$ is polynomial in $k$, its entropy is ${\mathcal{H}}(X_1)=\Theta(\log q')=\Theta(\log k)$. We have:
\[
{\mathcal{H}}(X_i|X_{i-1},\ldots,X_1) = \sum_{q_1\ge \ldots \ge q_{i-1}} \Pr{X_1=q_1,\ldots,X_{i-1}=q_{i-1}} \cdot {\mathcal{H}}(X_i|X_1=q_1,\ldots,X_{i-1}=q_{i-1})
\]
\[
= \sum_{q_1\ge \ldots \ge q_{i-1}} \Pr{X_1=q_1,\ldots,X_{i-1}=q_{i-1}} \cdot {\mathcal{H}}(X_i|X_{i-1}=q_{i-1})
\]
\[
= \Theta\left(\Pr{X_1\in (\frac{1}{3}q',\frac{2}{3}q'), X_2\in (\frac{1}{3}X_{1},\frac{2}{3}X_{1}),\ldots,X_{i-1}\in (\frac{1}{3}X_{i-2},\frac{2}{3}X_{i-2})}\right) \cdot {\mathcal{H}}\left(X_i\Big| X_{i-1}\in (\frac{1}{3^{i-1}}q',\frac{2^{i-1}}{3^{i-1}}q')\right) \ ,
\]
where the first equation is the definition of conditional entropy, the second follows from the fact that once $q_{i-1}$ is fixed, the variable $X_i$ does not depend on the preceding $q_{i-2},\ldots,q_1$, and the final asymptotics follows from applying the Chernoff bound to each of $X_1,\ldots,X_{i-1}$ and taking the union bound.
Therefore, for $i\le \frac{1}{2}\log_3 q'$, we have
\[
{\mathcal{H}}(X_i|X_{i-1},\ldots,X_1) = (1-o(1)) \cdot {\mathcal{H}}(X_i|X_{i-1}\in\Theta(\text{poly}(k))) = \Theta(\log k) \ .
\]
Consequently, putting all the above into Equation~(\ref{eq:conditional-entropy}), we get
\[
{\mathcal{H}}(X_1,\ldots,X_\tau) = \Theta(\log_3 k \cdot \log k) = \Theta(\log^2 k) \ .
\]
The above proof leads to the following.

\begin{proposition}\label{prop:Kessel_Large_Entropy}
The randomized $k$-secretary algorithm of Kesselheim, Kleinberg and Niazadeh \cite{KesselheimKN15} uses randomization that has a total entropy $\Omega(\log\log n + (\log k)^2)$, where entropy $\log\log n$ corresponds to the distribution from which it samples a random order, and entropy $(\log k)^2$ corresponds to the internal random bits of the algorithm.
\end{proposition}

Our algorithm shaves off the additive $\Theta(\log^2 k)$ from this formula for all $k$ up to nearly $\log n$.

\section{Lower bounds and characterization for the classical secretary problem}\label{section:lb_classic_secr}

Unlike in the $k$-secretary problem, given a wait-and-pick algorithm for the classical secretary ($1$-secretary) problem, we will denote its time threshold by $m_0$ (we reserve $m$ to be used as a variable threshold in the analysis). We will first determine the optimal success probability of the best secretary algorithms. Let $f(k,m)=\frac{m}{k} (H_{k-1}-H_{m-1})$, where $H_k$ is the $k$-th harmonic number, $H_k = 1 + \frac{1}{2} + \frac{1}{3} + \ldots + \frac{1}{k}$. It is easy to prove that $f(n,m_0)$ is the exact success probability of the wait-and-pick algorithm with threshold $m_0$ when the random order is given by choosing u.a.r.~a permutation $\pi \in \Pi_n$, see \cite{Gupta_Singla}.
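As a quick numerical sanity check of this formula (our own illustration, not part of the analysis), one can evaluate $f(n,m)$ over all thresholds and confirm that the maximizer sits at $m \approx n/e$ with value close to $1/e$:

```python
import math

def harmonic_prefix(n: int):
    """H[i] = i-th harmonic number, with H[0] = 0."""
    H = [0.0] * (n + 1)
    for i in range(1, n + 1):
        H[i] = H[i - 1] + 1.0 / i
    return H

def success_prob(n: int, m: int, H) -> float:
    """f(n, m) = (m/n) * (H_{n-1} - H_{m-1})."""
    return (m / n) * (H[n - 1] - H[m - 1])

# Locate the best wait-and-pick threshold for a few illustrative n.
results = {}
for n in (100, 1000, 10000):
    H = harmonic_prefix(n)
    m_star = max(range(1, n), key=lambda m: success_prob(n, m, H))
    results[n] = (m_star, success_prob(n, m_star, H))
```

In each case the maximizing threshold lands on $\lfloor n/e \rfloor$ or $\lfloor n/e \rfloor + 1$, and the maximum exceeds $1/e$ by roughly $c_0/n$.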
\begin{lemma}\label{lemma:f_expansion_1}
The following asymptotic behavior holds, if $k \rightarrow \infty$ and $|j|\le \sqrt{k}$ is such that $m=k/e+j$ is an integer in $[k]$:
\[
f\left(k, \frac{k}{e} + j\right) = \frac{1}{e} - \left(\frac{1}{2e} - \frac{1}{2} + \frac{e j^2}{2k} \right) \frac{1}{k} + \Theta\left( \left( \frac{1}{k} \right)^{3/2} \right) \ .
\]
\end{lemma}

The proof of Lemma~\ref{lemma:f_expansion_1} is in Appendix~\ref{sec:optimal-f_expansion_1}. We will now precisely characterize the maximum of the function $f$. Recall that $f(k,m)=\frac{m}{k} (H_{k-1}-H_{m-1})$, and note that $1\le m\le k$. We have the discrete derivative~of~$f$:
$
h(m) = f(k,m+1)-f(k,m) = \frac{1}{k} (H_{k-1}-H_m-1) \ ,
$
which is positive for $m\le m_0$ and negative otherwise, for some $m_0=\max\{m>0: H_{k-1}-H_m-1>0\}$.

\begin{lemma}\label{l:deriv_bounds_1}
There exists an absolute constant $c > 1$ such that for any integer $k \geq c$, we have that $h\left(\lfloor \frac{k}{e} \rfloor - 1\right) > 0$ and $h\left(\lfloor \frac{k}{e} \rfloor + 1 \right) < 0$. Moreover, function $f(k,\cdot)$ achieves its maximum for $m \in \{\lfloor \frac{k}{e} \rfloor, \lfloor \frac{k}{e} \rfloor + 1\}$, and is monotonically increasing for smaller values of $m$ and monotonically decreasing for larger values of $m$.
\end{lemma}

Lemma~\ref{l:deriv_bounds_1} is proved in Appendix~\ref{sec:optimal-deriv_bounds_1}. Proposition \ref{Thm:optimum_expansion} below gives a characterization of the optimal success probability $OPT_n$ of secretary algorithms, complemented by the existential result in Theorem~\ref{th:optimal_secr_existence}.

\begin{proposition}\label{Thm:optimum_expansion}
\
\begin{enumerate}
\item\label{Thm:optimum_expansion_1} The optimal success probability of the best secretarial algorithm for the problem with $n$ items which uses a uniform random order from $\Pi_n$ is $OPT_n = 1/e + c_0/n + \Theta((1/n)^{3/2})$, where $c_0 = 1/2 - 1/(2e)$.
\item\label{Thm:optimum_expansion_2} The success probability of any secretarial algorithm for the problem with $n$ items which uses any probabilistic distribution on $\Pi_n$ is at most $OPT_n = 1/e + c_0/n + \Theta((1/n)^{3/2})$.
\item\label{Thm:optimum_expansion_3} There exists an infinite sequence of integers $n_1 < n_2 < n_3 < \ldots$, such that the success probability of any deterministic secretarial algorithm for the problem with $n \in \{n_1,n_2, n_3,\ldots\}$ items which uses any uniform probabilistic distribution on $\Pi_n$ with support of size $\ell < n$ is strictly smaller than $1/e$.
\end{enumerate}
\end{proposition}

\begin{proof}
\noindent Part \ref{Thm:optimum_expansion_1}. Gilbert and Mosteller~\cite{GilbertM66} proved that under the uniform random order (maximum entropy), the probability of success is maximized by a wait-and-pick algorithm with some threshold. Another important property, used in many papers (cf.~Gupta and Singla \cite{Gupta_Singla}), is that the function $f(n,m)$ describes the probability of success of the wait-and-pick algorithm with threshold $m$. Consider wait-and-pick algorithms with threshold $m\in [n-1]$. By Lemma \ref{l:deriv_bounds_1}, function $f(n,\cdot)$ achieves its maximum for threshold $m\in \left\{\left\lfloor \frac{n}{e} \right\rfloor,\left\lfloor \frac{n}{e} \right\rfloor + 1\right\}$, and by Lemma \ref{lemma:f_expansion_1}, taken for $k=n$, it can be seen that for any admissible value of $j$ (i.e., such that $n/e+j$ is an integer and $|j|\le 1$, thus also for $j\in \left\{\left\lfloor \frac{n}{e} \right\rfloor -n/e,\left\lfloor \frac{n}{e} \right\rfloor + 1-n/e\right\}$ for which $f(n,m)$ achieves its maximum), and for $c_0 = 1/2 - 1/(2e)$:
$
f\left(n, \frac{n}{e}+j\right) = \frac{1}{e} + \frac{c_0}{n} + \Theta\left( \left( \frac{1}{n} \right)^{3/2} \right) \, .
$

\noindent Part \ref{Thm:optimum_expansion_2}. Consider a probability distribution on the set $\Pi_n$ which for every permutation $\pi\in\Pi_n$ assigns a probability $p_\pi$ of being selected. Suppose that the permutation selected by the adversary is $\sigma \in \Pi_n$. Given a permutation $\pi\in\Pi_n$ selected by the algorithm, let $\chi(\sigma,\pi) = 1$ if the algorithm is successful on the adversarial permutation $\sigma$ and its selected permutation $\pi$, and $\chi(\sigma,\pi) = 0$ otherwise.
Given a specific adversarial choice $\sigma \in \Pi_n$, the total weight of permutations resulting in success of the secretarial algorithm is
$
\sum_{\pi\in\Pi_n} p_{\pi} \cdot \chi(\sigma,\pi) \, .
$
Suppose now that the adversary selects its permutation $\sigma$ uniformly at random from $\Pi_n$. The expected total weight of permutations resulting in success of the secretarial algorithm is
$
\sum_{\sigma \in \Pi_n} q_{\sigma} \cdot \left(\sum_{\pi\in\Pi_n} p_{\pi} \cdot \chi(\sigma,\pi)\right)
$, where $q_{\sigma} = 1/n!$ for each $\sigma \in \Pi_n$. The above sum can be rewritten as follows:
$
\sum_{\sigma \in \Pi_n} q_{\sigma} \cdot \left(\sum_{\pi\in\Pi_n} p_{\pi} \cdot \chi(\sigma,\pi)\right) = \sum_{\pi\in\Pi_n} p_{\pi} \cdot \left(\sum_{\sigma \in \Pi_n} q_{\sigma} \cdot \chi(\sigma,\pi)\right) \, ,
$
and now we can treat the permutation $\pi$ as fixed and adversarial, and the permutation $\sigma$ as chosen by the algorithm uniformly at random from $\Pi_n$; hence, by Part \ref{Thm:optimum_expansion_1}, we have that
$
\sum_{\sigma \in \Pi_n} q_{\sigma} \cdot \chi(\sigma,\pi) \leq OPT_n
$. This implies that the expected total weight of permutations resulting in success of the secretarial algorithm is at most
$
\sum_{\pi\in\Pi_n} p_{\pi} \cdot OPT_n = OPT_n \, .
$
Therefore, there exists a permutation $\sigma\in\Pi_n$ for which the success probability is at most $OPT_n$. Thus no secretarial algorithm can achieve success probability $> OPT_n$ on every adversarial permutation $\sigma\in\Pi_n$.

\noindent Part \ref{Thm:optimum_expansion_3}. Let $\ell_i = 10^i$ and $n_i = 10 \ell_i$ for $i \in \mathbb{N}_{\geq 1}$. Let us take the infinite decimal expansion of $1/e = 0.367879441171442\ldots$ and define as $d_i > 1$ the integer that is built from the first $i$ digits in this decimal expansion after the decimal point, that is, $d_1 = 3$, $d_2 = 36$, $d_3 = 367$, and so on.
The sequence $d_i/\ell_i$ has the following properties: $\lim_{i \rightarrow +\infty} d_i/\ell_i = 1/e$ and, for each $i = 1,2,\ldots$, we have that $d_i/\ell_i < 1/e < (d_i + 1)/\ell_i$. Moreover, whenever the $(i+1)$-st digit of the decimal expansion of $1/e$ is not $9$, we also have $j/\ell_i \not \in [1/e, 1/e + 1/n_i]$ for all $j \in \{0,1,2,\ldots, \ell_i\}$; since $1/e$ is irrational, its expansion is not eventually all $9$s, so this holds for infinitely many indices $i$, and we restrict the sequence $n_1 < n_2 < \ldots$ to such indices. Let us now take any $n = n_i$ for some (large enough) such $i \in \mathbb{N}_{\geq 1}$ and consider the secretary problem with $n = n_i$ items. Consider also any deterministic secretarial algorithm for this problem that uses any uniform probability distribution on the set $\Pi_{n_i}$ with support of size $\ell_i$. By Part \ref{Thm:optimum_expansion_2} the success probability of this algorithm using this probability distribution is at most $OPT_{n_i} = 1/e + c_0/n_i + \Theta((1/n_i)^{3/2})$. Because the algorithm is deterministic and the distribution is uniform with support of size $\ell_i$, its success probability belongs to the set $\{j/\ell_i : j \in \{0,1,2,\ldots, \ell_i\}\}$. We observe now that $j/\ell_i \not \in [1/e, 1/e + c_0/n_i + \Theta((1/n_i)^{3/2})]$ for $j \in \{0,1,2,\ldots, \ell_i\}$. This fact holds by the construction and by the fact that the constant $c_0 \in (0,1)$, where we also take $i \in \mathbb{N}_{\geq 1}$ large enough to deal with the term $\Theta((1/n_i)^{3/2})$. Thus the success probability of this algorithm is strictly below $1/e$.
\end{proof}

\section{Probabilistic analysis of classical secretary algorithms}
\label{sec:existential}

\begin{theorem}\label{th:optimal_secr_existence}
Given any integer parameter $3 \leq k \le n - m_{0}$ and any $0 < \varepsilon' < 1$, there exists a multi-set $\mathcal{L}$ of permutations from $\Pi_n$ (that is, it may contain multiple copies of some permutations) of size $|\mathcal{L}| \leq O\left(\frac{k \cdot \log n}{(\varepsilon')^2}\right)$ such that if we choose one of these permutations u.a.r.~from $\mathcal{L}$, then the optimal wait-and-pick $1$-secretary algorithm with time threshold $m_{0}$ achieves a success probability of at least
$$ (1-\varepsilon') \rho_k, \mbox{ where } \rho_k = OPT_n - \frac{2}{k} \left( \frac{n - m_{0}}{n - 1} \right)^{k}. $$
The value $OPT_n$ denotes the probability of success of the algorithm with time threshold $m_0$ when a permutation is chosen u.a.r.~from the set $\Pi_{n}$. We assume here that $m_{0} = \alpha n$ for a constant $\alpha \in (0,1)$ such that $\rho_3 =\Theta(1)$ (this holds, e.g., when $\alpha = 1/e$ and $n$ is large enough).
\end{theorem}

\noindent {\bf Proof sketch.} The complete proof is deferred to Appendix~\ref{sec:existential-proof}. We will use the probabilistic method to show the existence of the multi-set $\mathcal{L}$. First, consider a random experiment that chooses u.a.r.~a single permutation $\pi$ from $\Pi_n$. We estimate the probability of success of the secretarial algorithm with time threshold $m_{0}$. (Below, $ind(i)$ refers to the position in permutation $\pi$ of the $i$-th largest adversarial value among $v(1),\ldots,v(n)$; that is, this value is held by the element $\pi(ind(i))$.)
This probability is lower bounded by the probability of the union of the following disjoint events $E_i$, $i=2,3,\ldots, k$, where
$
E_ i = A_i \cap B_i \cap C_i, \,\,\,\,
A_i = \{ ind(i) \in \{1,2,\ldots, m_{0} \}\}, \,\,\,\,
B_i = \bigcap_{j=1}^{i-1} \{ ind(j) \in \{m_{0} + 1, \ldots, n \} \},
$
and
$
C_i = \{ \forall j=2,3,\ldots,i-1 : ind(1) < ind(j) \} \, .
$
We say that $\pi$ {\em covers} the ordered $i$-tuple $\hat{S} = \{\pi(ind(i)), \pi(ind(1)), \pi(ind(2)), \ldots, \pi(ind(i-1)) \}$ if event $E_i$ holds. By applying the chain rule for conditional probabilities, conditioning on the events as they are listed below from left to right, we obtain:
\vspace*{-1ex}
\[
\mathbb{P}\mathrm{r}[E_i] = \mathbb{P}\mathrm{r}\left[A_i \cap B_i \cap C_i\right] = \frac{m_{0}}{n} \cdot \left( \prod_{j=1}^{i-1} \frac{n - m_{0} - (j - 1)}{(n - 1) - (j - 1)} \right) \cdot \frac{(i-2)!}{(i-1)!} \, .
\]
\vspace*{-2ex}
\noindent It follows that
\vspace*{-1ex}
\[
OPT_n = \sum\limits_{2 \le i \le n - m_{0} + 1} \mathbb{P}\mathrm{r}[E_{i}] \le \sum\limits_{2 \le i \le k } \mathbb{P}\mathrm{r}[E_{i}] + \frac{m_{0}}{n\cdot k} \left( \frac{n - m_{0}}{n - 1} \right)^{k} \cdot \frac{n-1}{m_{0} - 1} \, , \mbox{ and then}
\]
\vspace*{-2ex}
\begin{eqnarray}
\sum_{i=2}^{k} \mathbb{P}\mathrm{r}[E_i] \,\, \geq \,\, OPT_n - \frac{n-1}{m_{0} - 1} \cdot \frac{m_{0}}{n \cdot k} \left( \frac{n - m_{0}}{n - 1} \right)^{k} \,\, \ge \,\, OPT_n - \frac{2}{k} \left( \frac{n - m_{0}}{n - 1} \right)^{k} \, . \label{Eq:Rho_Lower_Bound}
\end{eqnarray}

\noindent {\bf Combinatorialization.} Let us define an ordered $k$-tuple $\hat{S} = \{j_1, j_2, \ldots, j_k\} \subseteq [n]$. We say that an independently and u.a.r.~chosen $\pi \in \Pi_n$ is {\em successful} for $\hat{S}$ iff event $\bigcup_{i=2}^k E_i$ holds, where
$
E_i = \{ \pi \mbox{ covers } i\mbox{-tuple }$ $\{j_i, j_1, j_2, \ldots, j_{i-1}\} \}
$, for $i \in \{2,3,\ldots,k\}$. By the above argument, $\pi$ is successful with probability at least $\rho_k$.
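The product formula for $\Pr[E_i]$ above can be verified exactly by enumerating all $n!$ arrival orders on a small instance (an independent check with illustrative parameters $n=6$, $m_0=2$; not part of the proof):

```python
import math
from fractions import Fraction
from itertools import permutations

n, m0 = 6, 2   # small illustrative parameters

def brute_force(i: int) -> Fraction:
    """Pr[E_i] = Pr[A_i ∩ B_i ∩ C_i] by exhaustive enumeration."""
    hits = 0
    for perm in permutations(range(1, n + 1)):      # perm[t-1] = value at time t
        # ind[j] = 1-based arrival position of the j-th largest value
        ind = {j: perm.index(n + 1 - j) + 1 for j in range(1, i + 1)}
        if (ind[i] <= m0                                        # A_i
                and all(ind[j] > m0 for j in range(1, i))       # B_i
                and all(ind[1] < ind[j] for j in range(2, i))): # C_i
            hits += 1
    return Fraction(hits, math.factorial(n))

def formula(i: int) -> Fraction:
    """(m0/n) * prod_{j=1}^{i-1} (n-m0-(j-1))/((n-1)-(j-1)) * (i-2)!/(i-1)!."""
    p = Fraction(m0, n)
    for j in range(1, i):
        p *= Fraction(n - m0 - (j - 1), (n - 1) - (j - 1))
    return p * Fraction(math.factorial(i - 2), math.factorial(i - 1))

assert all(brute_force(i) == formula(i) for i in range(2, 5))
```

The two computations agree as exact rationals, e.g. $\Pr[E_2] = 4/15$ and $\Pr[E_3] = 1/10$ for these parameters.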
Next, we choose independently $\ell = c \log(n^k)$ permutations $\pi_1, \ldots, \pi_{\ell}$ from $\Pi_n$ u.a.r., for a fixed constant $c \geq 1$. These permutations will comprise the multi-set $\mathcal{L}$. Let $X^{\hat{S}}_1, \ldots, X^{\hat{S}}_{\ell}$ be random variables such that $X^{\hat{S}}_t = 1$ if the corresponding random permutation $\pi_t$ is successful for the $k$-tuple $\hat{S}$, and $X^{\hat{S}}_t = 0$ otherwise, for $t = 1,2, \ldots, \ell$. Then for $X^{\hat{S}} = X^{\hat{S}}_1 + \cdots + X^{\hat{S}}_{\ell}$ we have that $\mathbb{E}[X^{\hat{S}}] \geq \rho_k \ell = c \rho_k \log(n^k)$ and by the Chernoff bound
$
\mathbb{P}\mathrm{r}[X^{\hat{S}} < (1-\varepsilon') \cdot \rho_k \ell] \leq 1/n^{(\varepsilon')^2 c \rho_k k/2}
$, for any constant $0 < \varepsilon' < 1$. Now, the probability that there exists a $k$-tuple $\hat{S} = \{j_1, j_2, \ldots, j_k\}$ for which there does not exist a $(1-\varepsilon') \rho_k$ fraction of successful permutations among these $\ell=c \log(n^k)$ random permutations is
\vspace*{-1ex}
\[
\mathbb{P}\mathrm{r}[\exists k\mbox{-tuple } \hat{S} : X^{\hat{S}} < (1-\varepsilon') c \rho_k \log(n^k)] \leq \binom{n}{k} \cdot k! / n^{(\varepsilon')^2 c \rho_k k/2} \leq \frac{1}{ n^{(\varepsilon')^2 c \rho_k k/2 - k} } \, ,
\]
\vspace*{-1.5ex}
\noindent by the union bound. Thus all $ \binom{n}{k} k!$ ordered $k$-tuples $\hat{S}$ are covered with probability $\geq 1 - \frac{1}{ n^{(\varepsilon')^2 c \rho_k k/2 - k} } > 0$, for $c > 4/(\rho_3 (\varepsilon')^2) = \Theta(1/ (\varepsilon')^2)$. So, there exist $\Theta( \log(n^k)/(\varepsilon')^2)$ permutations such that if we choose one of them u.a.r., then for any $k$-tuple $\hat{S}$, the chosen permutation will be successful with probability at least $(1-\varepsilon') \rho_k$, which lower bounds the success probability of the algorithm with threshold $m_{0}$.
In this proof we obtained a multi-set ${\mathcal L}$ of $\Theta( \log(n^k)/(\varepsilon')^2)$ permutations; taking $k = \Theta(\log(n))$ and $\varepsilon' = \Theta(1/n)$, we see by Theorem \ref{th:optimal_secr_existence} and by Part \ref{Thm:optimum_expansion_1} of Proposition \ref{Thm:optimum_expansion} that with entropy $O(\log (n))$ we can achieve success probability above $1/e$:

\begin{corollary}\label{cor:above_1_over_e}
There exists a multi-set $\mathcal{L} \subseteq \Pi_n$ of size $|\mathcal{L}| \leq O\left(\log^2(n) \cdot n^2\right)$ such that if we choose one of these permutations u.a.r.~from $\mathcal{L}$, then the optimal secretarial algorithm with time threshold $m_{0} = \lfloor n/e \rfloor$ achieves a success probability of at least $\frac{1}{e} + \Theta\left(\frac{1}{n}\right)$.
\end{corollary}

\section{Derandomization via Chernoff bound for the $1$-secretary problem}\label{sec:derandomization-classic}

\begin{theorem}\label{Thm:Derandomization}
Suppose that we are given integers $n$ and $k$ such that $n > k \geq 3$, and an error parameter $\varepsilon' > 0$. Define $\rho_k = OPT_n - \frac{2}{k} \left(1 - \frac{1}{e} \right)^{k}$. Then for $\ell = \Theta(\frac{2k\log{n}}{\rho_k (\varepsilon')^{2}})$ there exists a deterministic algorithm (Algorithm \ref{algo:Find_perm}) that finds a multi-set $\mathcal{L} = \{\pi_1,\pi_2,\ldots,\pi_{\ell}\}$ of $n$-element permutations $\pi_j \in \Pi_n$, for $j \in [\ell]$, such that for every $k$-tuple there are at least $(1-\varepsilon')\cdot \rho_k \ell$ successful permutations from $\mathcal{L}$. The running time of this algorithm is $O(k \cdot \ell \cdot n^{k+2} \cdot \mathrm{poly}\log(n))$.
\end{theorem}
\vspace*{-1ex}

\noindent We now present the proof of Theorem \ref{Thm:Derandomization}. Missing details in this section and the full proof of Theorem \ref{Thm:Derandomization} can be found in Appendix \ref{sec:derandomization-proofs}.
\noindent {\bf Preliminaries.} To derandomize the Chernoff argument of Theorem \ref{th:optimal_secr_existence}, we will use the same method of conditional expectations with a pessimistic estimator as for the $k$-secretary problem, with only some problem-specific differences. We model the experiment of choosing u.a.r.~a permutation $\pi_j \in \Pi_n$ by independent ``index'' r.v.'s $X^i_j$, for $i \in [n]$, each uniform: $\mathbb{P}\mathrm{r}[X^i_j = t] = 1/(n-i+1)$ for every $t \in \{1,2,\ldots, n-i+1\}$. They define $\pi = \pi_j \in \Pi_n$ ``sequentially'': $\pi(1) = X^1_j$, $\pi(2)$ is the $X^2_j$-th element in $I_1 = \{1,2,\ldots,n\} \setminus \{\pi(1)\}$, $\pi(3)$ is the $X^3_j$-th element in $I_2 = \{1,2,\ldots,n\} \setminus \{\pi(1), \pi(2)\}$, etc., where the elements of each set $I_r$ are taken in increasing order. Suppose random permutations $\mathcal{L} = \{\pi_1, \ldots, \pi_\ell\}$ are generated using $X^1_j, X^2_j, \ldots, X^n_j$ for $j\in[\ell]$. Given a $k$-tuple $\hat{S} \in \mathcal{K}$, recall the definition of the r.v.~$X^{\hat{S}}_j$ for $j \in [\ell]$ from the proof of Theorem \ref{th:optimal_secr_existence}. For $X^{\hat{S}} = X^{\hat{S}}_1 + \ldots + X^{\hat{S}}_{\ell}$ and $\varepsilon' \in (0,1)$, we have $\mathbb{E}[X^{\hat{S}}] \geq \rho_k \ell$, and
$
\mathbb{P}\mathrm{r}[X^{\hat{S}} < (1-\varepsilon') \cdot \rho_k \ell] \leq 1/\exp((\varepsilon')^2 \rho_k \ell/2) = 1/n^{(\varepsilon')^2 c \rho_k k/2},
$
where $\ell = c \log(n^k)$. We call the $k$-tuple $\hat{S} \in \mathcal{K}$ {\em not well-covered} if $X^{\hat{S}} < (1-\varepsilon') \cdot \rho_k \ell$ (in which case we set a new r.v.~$Y^{\hat{S}} = 1$), and {\em well-covered} otherwise (then $Y^{\hat{S}} = 0$). Let $Y = \sum_{\hat{S} \in \mathcal{K}} Y^{\hat{S}}$. By the above argument, $\mathbb{E}[Y] = \sum_{\hat{S} \in \mathcal{K}} \mathbb{E}[Y^{\hat{S}}] < 1$ if $c \geq 1/(\varepsilon')^2$.
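This sequential construction is the standard Lehmer-code/Fisher--Yates correspondence between index tuples and permutations; the sketch below (an illustration of the encoding, not the paper's code) decodes $(X^1,\ldots,X^n)$ into $\pi$ and checks that the map is a bijection onto $\Pi_n$ for a small $n$, so uniform independent indices induce a uniform permutation:

```python
from itertools import permutations, product

def decode(indices) -> tuple:
    """pi(i) is the X^i-th smallest element not used so far (X^i is 1-based)."""
    remaining = list(range(1, len(indices) + 1))   # kept in increasing order
    return tuple(remaining.pop(x - 1) for x in indices)

n = 5
# X^i ranges over {1, ..., n-i+1}, so the index space has exactly n! tuples.
index_space = product(*(range(1, n - i + 1) for i in range(n)))
images = {decode(ix) for ix in index_space}
assert images == set(permutations(range(1, n + 1)))   # bijection onto Pi_n
```

Since the $n!$ index tuples map one-to-one onto the $n!$ permutations, fixing a prefix of indices corresponds exactly to fixing a prefix $\pi(1),\ldots,\pi(r)$ of the permutation.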
We will keep the expectation $\mathbb{E}[Y]$ below $1$ in each step of the derandomization, where these steps will sequentially define the permutations of the set $\mathcal{L}$.

\noindent {\bf Outline of derandomization.} Let $\pi_1$ be the identity permutation. For some $s \in [\ell-1]$ let permutations $\pi_1,\ldots,\pi_s$ have already been chosen ({\em ``fixed''}). We will choose a {\em ``semi-random''} permutation $\pi_{s+1}$ position by position using $X^i_{s+1}$. Suppose that $\pi_{s+1}(1),$ $ \pi_{s+1}(2),\ldots, \pi_{s+1}(r)$ are already chosen for some $r \in [n-1]$, where all $\pi_{s+1}(i)$ ($i \in [r-1]$) are fixed and final, except $\pi_{s+1}(r)$ which is fixed but not final yet. We will vary $\pi_{s+1}(r) \in [n] \setminus \{\pi_{s+1}(1), \pi_{s+1}(2),\ldots, \pi_{s+1}(r-1)\}$ to choose the best value for $\pi_{s+1}(r)$, assuming that $\pi_{s+1}(r+1), \pi_{s+1}(r+2),\ldots, \pi_{s+1}(n)$ are random. Permutations $\pi_{s+2},\ldots,\pi_{\ell}$ are {\em ``fully random''}.

\noindent {\bf Deriving a pessimistic estimator.} Given $\hat{S} \in \mathcal{K}$, observe that $X^{\hat{S}}_{s+1}$ depends only on $\pi_{s+1}(1),\pi_{s+1}(2),$ $\ldots,$ $ \pi_{s+1}(r)$. We will show how to compute the {\bf conditional probabilities} (Algorithm \ref{algo:Cond_prob}~in~App.~\ref{sec:Cond_Prob_Thm_4}) $\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)]$ ($=\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1]$ if $r=0$), where the randomness is over the random positions $\pi_{s+1}(r+1),\pi_{s+1}(r+2), \ldots, \pi_{s+1}(n)$. Theorem \ref{theorem:semi-random-conditional} is proved in App.~\ref{sec:Cond_Prob_Thm_4}.

\begin{theorem}\label{theorem:semi-random-conditional}
Suppose that values $\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)$ have already been fixed for some $r \in \{0\} \cup [n]$.
There exists a deterministic algorithm (Algorithm \ref{algo:Cond_prob}, Appendix \ref{sec:Cond_Prob_Thm_4}) to compute $\mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)]$, where the random event is the random choice of the semi-random permutation $\pi_{s+1}$ conditioned on its first $r$ elements already being fixed. Its running time is $O(k \cdot n \cdot \mathrm{poly}\log(n))$, where $m_0 \in \{2,3,\ldots,n-1\}$ is the threshold of the secretarial algorithm.
\end{theorem}

\noindent {\bf Pessimistic estimator.} Let $\hat{S} \in \mathcal{K}$. Denote $\mathbb{E}[X^{\hat{S}}_j] = \mathbb{P}\mathrm{r}[X^{\hat{S}}_j = 1] = \mu_j$ for each $j \in [\ell]$, and $\mathbb{E}[X^{\hat{S}}] = \sum_{j=1}^{\ell} \mu_j = \mu$. By Proposition \ref{Thm:optimum_expansion}, $f\left(n, \frac{n}{e}\right) = OPT_n = \frac{1}{e} + \frac{c_0}{n} + \Theta\left( \left( \frac{1}{n} \right)^{3/2} \right)$, where $c_0 = 1/2 - 1/(2e)$. By (\ref{Eq:Rho_Lower_Bound}) in the proof of Theorem \ref{th:optimal_secr_existence} and by Lemma \ref{lemma:f_expansion_1}, we obtain that $\mu_j \geq \rho_k \geq \frac{1}{e} - \Theta(1/k)$, for each $j \in [\ell]$.
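For a feel of the magnitudes involved (an illustrative computation with a hypothetical $n$; not part of the analysis): with $m_0 = \lfloor n/e \rfloor$, already $\rho_3$ is a positive constant, and $\rho_k$ approaches $OPT_n$ geometrically fast in $k$:

```python
import math

def f(n: int, m: int) -> float:
    """Success probability (m/n) * (H_{n-1} - H_{m-1}) of threshold m."""
    return (m / n) * sum(1.0 / i for i in range(m, n))

n = 10_000                                   # illustrative problem size
m0 = int(n / math.e)
opt = max(f(n, m0), f(n, m0 + 1))            # OPT_n, computed numerically
rho = {k: opt - (2 / k) * ((n - m0) / (n - 1)) ** k for k in (3, 5, 10, 20)}
```

Numerically $\rho_3 \approx 0.2$ while $\rho_{20}$ is already within $10^{-3}$ of $OPT_n \approx 1/e$, so a small constant $k$ suffices for $\rho_k = \Theta(1)$.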
We will now use Raghavan's proof of the Chernoff bound, see \cite{Young95}, for any $\varepsilon' > 0$, using that $\mu_j \geq \rho_k$ (see more details in Appendix \ref{sec:Pess_Est_proofs}):
\begin{eqnarray*}
\mathbb{P}\mathrm{r}\left[X^{\hat{S}} < (1-\varepsilon') \cdot \ell \cdot \rho_k \right] &\leq& \prod_{j=1}^{\ell} \frac{1-\varepsilon' \cdot \mathbb{E}[X^{\hat{S}}_j]}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}} < \prod_{j=1}^{\ell} \frac{\exp(- \varepsilon' \mu_j)}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}} \leq \prod_{j=1}^{\ell} \frac{\exp(- \varepsilon' \rho_k)}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}} \nonumber \\
&=& \frac{1}{\exp(b(-\varepsilon') \ell \rho_k)} \,\, < \,\, \frac{1}{\exp((\varepsilon')^2 \ell \rho_k/2)} \, ,
\end{eqnarray*}
where $b(x) = (1+x)\ln(1+x) - x$ and the last inequality follows from $b(-x) > x^2/2$, see, e.g., \cite{Young95}. Thus, the union bound implies:
\begin{eqnarray}
\mathbb{P}\mathrm{r}\left[\exists \hat{S} \in \mathcal{K} : X^{\hat{S}} < (1-\varepsilon') \cdot \ell \rho_k \right] \,\, \leq \,\, \sum_{\hat{S} \in \mathcal{K}}\prod_{j=1}^{\ell} \frac{1-\varepsilon' \cdot \mathbb{E}[X^{\hat{S}}_j]}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}} \label{Eq:Union_Bound_1} \, .
\end{eqnarray}

\noindent Let $\phi_j(\hat{S}) = 1$ if $\pi_j$ is successful for $\hat{S}$, and $\phi_j(\hat{S}) = 0$ otherwise; then the failure probability~(\ref{Eq:Union_Bound_1})~is~at~most:
\begin{eqnarray}
& & \sum_{\hat{S} \in \mathcal{K}}\prod_{j=1}^{\ell} \frac{1-\varepsilon' \cdot \mathbb{E}[\phi_j(\hat{S})]}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}} \label{eq:first_term} \\
&= & \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} \frac{1-\varepsilon' \cdot \phi_j(\hat{S})}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}}\right) \cdot \left(\frac{1-\varepsilon' \cdot \mathbb{E}[\phi_{s+1}(\hat{S})]}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}}\right) \cdot \left(\frac{1-\varepsilon' \cdot \mathbb{E}[\phi_j(\hat{S})]}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}}\right)^{\ell - s - 1} \label{eq:second_term} \\
&\leq& \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} \frac{1-\varepsilon' \cdot \phi_j(\hat{S})}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}}\right) \cdot \left(\frac{1-\varepsilon' \cdot \mathbb{E}[\phi_{s+1}(\hat{S})]}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}}\right) \cdot \left(\frac{1-\varepsilon' \cdot \rho_k}{(1-\varepsilon')^{(1-\varepsilon')\rho_k}}\right)^{\ell - s - 1} \nonumber \\
&=& \, \Phi(\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)) \label{Eq:Pessimistic_Est} \, ,
\end{eqnarray}
where equality (\ref{eq:second_term}) is the conditional expectation given: the (fixed) permutations $\pi_1,\ldots,\pi_s$ for some $s \in [\ell-1]$, the (semi-random) permutation $\pi_{s+1}$ currently being chosen, and the (fully random) permutations $\pi_{s+2},\ldots,\pi_{\ell}$. The first term (\ref{eq:first_term}) is less than $|\mathcal{K}| / \exp((\varepsilon')^2 \ell \rho_k/2)$, which is strictly smaller than $1$ for large $\ell$.
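The mechanics of this scheme -- fix one coordinate at a time so that a pessimistic estimator of the failure probability never increases -- are easiest to see on a textbook instance. The sketch below applies the same method of conditional expectations to MAX-E3SAT (a standard example, not the permutation setting of this section); there the exact conditional expectation plays the role of the estimator $\Phi$:

```python
import random

def expected_sat(clauses, assignment):
    """Exact expected number of satisfied clauses when every unfixed variable
    is an independent fair coin; `assignment` maps var -> bool for fixed vars."""
    exp = 0.0
    for clause in clauses:                  # clause: tuple of nonzero ints, -v = negated v
        unfixed, satisfied = 0, False
        for lit in clause:
            v = abs(lit)
            if v in assignment:
                if assignment[v] == (lit > 0):
                    satisfied = True
            else:
                unfixed += 1
        exp += 1.0 if satisfied else 1.0 - 0.5 ** unfixed
    return exp

def derandomize(clauses, n):
    """Fix variables one by one, always taking the branch with the larger
    conditional expectation, so the estimator never decreases."""
    assignment = {}
    for v in range(1, n + 1):
        assignment[v] = max((True, False),
                            key=lambda b: expected_sat(clauses, {**assignment, v: b}))
    return assignment

random.seed(0)
n, m = 30, 200
clauses = [tuple(random.choice([-1, 1]) * v
                 for v in random.sample(range(1, n + 1), 3)) for _ in range(m)]
a = derandomize(clauses, n)
sat = sum(any(a[abs(l)] == (l > 0) for l in c) for c in clauses)
assert sat >= (7 / 8) * m   # guaranteed: the initial expectation is 7m/8
```

Once all variables are fixed, the estimator equals the actual number of satisfied clauses, so the greedy choice certifies at least $\tfrac{7}{8}m$ deterministically, mirroring how $\Phi_1$ is minimized position by position above.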
Let us denote $\mathbb{E}[\phi_{s+1}(\hat{S})] = \mathbb{E}[\phi_{s+1}(\hat{S}) \, | \, \pi_{s+1}(r) = \tau] = \mathbb{P}\mathrm{r}[X^{\hat{S}}_{s+1} = 1 \, | \, \pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau]$, where the positions $\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r)$ were fixed in the semi-random permutation $\pi_{s+1}$, with $\pi_{s+1}(r)$ fixed in particular to $\tau \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$; this quantity can be computed by using the algorithm from Theorem \ref{theorem:semi-random-conditional}. This gives our pessimistic estimator $\Phi$. Because $s$ is fixed for all steps in which the semi-random permutation is being decided, $\Phi$ is uniformly proportional to $\Phi_1$:
\vspace*{-3ex}
\begin{eqnarray}
\Phi_1 = \sum_{\hat{S} \in \mathcal{K}} \left(\prod_{j=1}^{s} (1-\varepsilon' \cdot \phi_j(\hat{S}))\right) \cdot (1-\varepsilon' \cdot \mathbb{E}[\phi_{s+1}(\hat{S})]), \nonumber \\
\Phi_2 = \sum_{\hat{S} \in \mathcal{K}}\left(\prod_{j=1}^{s} (1-\varepsilon' \cdot \phi_j(\hat{S}))\right) \cdot \mathbb{E}[\phi_{s+1}(\hat{S})] \, . \label{Eq:Pessimistic_Est_Obj}
\end{eqnarray}
\vspace*{-1ex}
\noindent Recall that $\pi_{s+1}(r)$ in the semi-random permutation was fixed but not final. To make it final, we choose $\pi_{s+1}(r) \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$ to minimize $\Phi_1$, which is equivalent to maximizing $\Phi_2$. The proof of Lemma \ref{lem:potential_correct} can be found in Appendix \ref{sec:Pess_Est_proofs}.

\begin{lemma}\label{lem:potential_correct}
$\Phi(\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r))$ is a pessimistic estimator of the failure probability (\ref{Eq:Union_Bound_1}) if $\ell \geq \frac{2 k \ln(n)}{\rho_k (\varepsilon')^2}$.
\end{lemma}

\begin{proof}
(of Theorem \ref{Thm:Derandomization}) See Appendix \ref{sec:Pess_Est_proofs}.
\end{proof}

\begin{algorithm}[t!]
\SetAlgoVlined
\DontPrintSemicolon
\KwIn{Positive integers $n \geq 2$, $2 \leq k \leq n$, $\ell \geq 2$.}
\KwOut{A multi-set $\mathcal{L} \subseteq \Pi_n$ of $\ell$ permutations.}
/* This algorithm uses Function ${\sf Prob}(E_i,\hat{S})$ from Algorithm \ref{algo:Cond_prob} in Appendix \ref{sec:Cond_Prob_Thm_4}. */

$\pi_1 \leftarrow (1,2,\ldots,n)$ /* Identity permutation */

$\mathcal{L} \leftarrow \{\pi_1\}$

Let $\mathcal{K}$ be the set of all ordered $k$-element subsets of $[n]$.

\For{$\hat{S} \in \mathcal{K}$}{$w(\hat{S}) \leftarrow 1-\varepsilon' \cdot \phi_1(\hat{S})$ \label{Alg:Weight_Init}}

\For{$s = 1 \ldots \ell - 1$}{
\For{$r = 1 \ldots n$}{
\For{$\hat{S} \in \mathcal{K}$}{
\For{$\tau \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$}{
$\mathbb{P}\mathrm{r}[E_i \, | \, \pi_{s+1}(1), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau] \leftarrow {\sf Prob}(E_i,\hat{S})$, for $i=2 \ldots k$.

$\mathbb{E}[\phi_{s+1}(\hat{S}) \, | \, \pi_{s+1}(r) = \tau] \leftarrow \sum_{i=2}^k \mathbb{P}\mathrm{r}[E_i \, | \, \pi_{s+1}(1), \ldots, \pi_{s+1}(r-1), \pi_{s+1}(r) = \tau]$
}
}
Choose $\pi_{s+1}(r) = \tau$ for $\tau \in [n] \setminus \{\pi_{s+1}(1),\pi_{s+1}(2), \ldots, \pi_{s+1}(r-1)\}$ to maximize $\sum_{\hat{S} \in \mathcal{K}} w(\hat{S}) \cdot \mathbb{E}[\phi_{s+1}(\hat{S}) \, | \, \pi_{s+1}(r) = \tau]$.
}
$\mathcal{L} \leftarrow \mathcal{L} \cup \{\pi_{s+1}\}$

\For{$\hat{S} \in \mathcal{K}$}{
$w(\hat{S}) \leftarrow w(\hat{S}) \cdot (1-\varepsilon' \cdot \phi_{s+1}(\hat{S}))$ \label{Alg:Weight_Update}
}
}
\Return $\mathcal{L}$
\caption{Find permutations distribution ($1$-secretary)}
\label{algo:Find_perm}
\end{algorithm}

\newpage

\bibliographystyle{plainurl}
\section*{Introduction} Much interest has been generated recently in lattice models for euclidean quantum gravity based on dynamical triangulations \cite{mig3,amb3,gross,mig4,amb4,brug4,us4,smit4}. The study of these models was prompted by the success of the same approach in the case of two dimensions, see for example \cite{david}. The primary input to these models is the ansatz that the partition function describing the fluctuations of a continuum geometry can be approximated by performing a weighted sum over all simplicial manifolds or triangulations $T$. \begin{equation} Z=\sum_{T}\rho\left(T\right) \label{eqn1} \end{equation} In all the work conducted so far the topology of the lattice has been restricted to the sphere $S^d$. The weight function $\rho\left(T\right)$ is taken to be of the form \begin{equation} \rho\left(T\right)=e^{-\kappa_d N_d +\kappa_0 N_0} \end{equation} The coupling $\kappa_d$ represents a bare lattice cosmological constant conjugate to the total volume (number of $d$-simplices $N_d$) whilst $\kappa_0$ plays the role of a bare Newton constant coupled to the total number of nodes $N_0$. We can rewrite eqn. \ref{eqn1} by introducing the entropy function $\Omega_d\left(N_d, \kappa_0\right)$ which counts the number of triangulations with volume $N_d$ weighted by the node term. This the primary object of interest in this note. \begin{equation} Z=\sum_{N_d} \Omega_d\left(N_d, \kappa_0\right) e^{-\kappa_d N_d} \end{equation} For this partition sum to exist it is crucial that the entropy function $\Omega_d$ increase no faster than exponentially with volume. For two dimensions this is known \cite{2dbound} but until recently the only evidence for this in higher dimensions came from numerical simulation. Indeed for four dimensions there is some uncertainty in the status of this bound \cite{usbound,ambbound,smit4}. 
With this in mind we have conducted a high statistics study of the three dimensional model at $\kappa_0=0$, extending the simulations reported in \cite{varsted} by an order of magnitude in lattice volume. In the course of preparing this manuscript we received a paper \cite{boulatov} in which an argument for the bound in three dimensions is given. Whilst we observe a rather slow approach to the asymptotic, large volume limit, our results are entirely consistent with the existence of such a bound. However, the predicted variation of the mean node number with volume is not seen; rather, the data support a slow power law approach to the infinite volume limit. If we write $\Omega_3\left(N_3\right)$ as \begin{equation} \Omega_3\left(N_3\right)=ae^{\kappa_3^c\left(N_3\right) N_3} \end{equation} the effective critical cosmological constant $\kappa_3^c$ is taken to depend on the volume, and a bound implies that $\kappa_3^c\to {\rm const} < \infty$ as $N_3\to\infty$. In contrast, for a model where the entropy grew more rapidly than exponentially $\kappa_3^c$ would diverge in the thermodynamic limit. To control the volume fluctuations we add a further term to the action of the form $\delta S=\gamma\left(N_3-V\right)^2$. Lattices with $N_3\sim V$ are distributed according to the correct Boltzmann weight up to correction terms of order $O\left(\frac{1}{\sqrt{\gamma}\,V}\right)$, where we use $\gamma=0.005$ in all our runs. This error is much smaller than our statistical errors and can hence be neglected. Likewise, as a first approximation, we can set $\kappa_3^c$ equal to its value at the mean of the volume distribution $V$, which allows us to compute the expectation value of the volume exactly since the resultant integral is now a simple gaussian. 
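The gaussian step can be made explicit by completing the square in the sampled weight (a short check using the definitions above): $$ e^{\kappa_3^c\left(V\right) N_3}\, e^{-\kappa_3 N_3-\gamma\left(N_3-V\right)^2} = \exp\left[-\gamma\left(N_3-V-{\kappa_3^c\left(V\right)-\kappa_3\over 2\gamma}\right)^2\right]\times{\rm const}, $$ so the volume distribution is a gaussian centred at $V+{1\over 2\gamma}\left(\kappa_3^c\left(V\right)-\kappa_3\right)$.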
We obtain \begin{equation} \left\langle N_3\right\rangle = {1\over 2\gamma}\left(\kappa_3^c\left(V\right) -\kappa_3\right)+V \label{eqn2} \end{equation} Equally, by measuring the mean volume $\left\langle N_3\right\rangle$ for a given input value of the coupling $\kappa_3$ we can estimate $\kappa_3^c\left(V\right)$ for a set of mean volumes $V$. The algorithm we use to generate a Monte Carlo sample of three dimensional lattices is described in \cite{simon}. We have simulated systems with volumes up to $128000$ 3-simplices and using up to $400000$ MC sweeps (a sweep is defined as $V$ {\it attempted} elementary updates of the triangulation, where $V$ is the average volume). Our results for $\kappa_3^c\left(V\right)$, computed this way, are shown in fig. \ref{fig1} as a function of $\ln V$. The choice of the latter scale is particularly apt as the presence of a factorial growth in $\Omega_3$ would be signaled by a logarithmic component to the effective $\kappa_3^c\left(V\right)$. As the plot indicates, there is no evidence for this. Indeed, the best fit we could make corresponds to a {\it convergent} power law \begin{equation} \kappa_3^c\left(V\right)=\kappa_3^c\left(\infty\right) + a V^{-\delta} \end{equation} If we fit all of our data we obtain best fit parameters $\kappa_3^c\left(\infty\right)=2.087(5)$, $a=-3.29(8)$ and $\delta=0.290(5)$ with a corresponding $\chi^2$ per degree of freedom $\chi^2=1.3$ at $22\%$ confidence (solid line shown). Leaving off the smallest lattice $V=500$ yields a statistically consistent fit with an even better $\chi^2=1.1$ at $38\%$ confidence. We have further tested the stability of this fit by dropping either the small volume data ($V=500-2000$ inclusive), the large volume data ($V=64000-128000$ inclusive) or intermediate volumes ($V=8000-24000$). In each of these cases the fits were good and yielded fit parameters consistent with our quoted best fit to all the data. 
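As an illustration of the fitting procedure, a minimal sketch (assuming NumPy and SciPy are available; the synthetic ``measurements'' below merely stand in for the Monte Carlo estimates of $\kappa_3^c\left(V\right)$ and are generated from the quoted best-fit parameters):

```python
import numpy as np
from scipy.optimize import curve_fit

# Convergent power law ansatz: kappa_3^c(V) = kappa_3^c(inf) + a * V**(-delta).
def power_law(V, kappa_inf, a, delta):
    return kappa_inf + a * V ** (-delta)

# Lattice volumes used in the study; the "data" are synthetic stand-ins
# built from the quoted best-fit parameters plus small gaussian noise.
V = np.array([500, 1000, 2000, 4000, 8000, 16000, 32000, 64000, 128000], float)
rng = np.random.default_rng(0)
kappa_data = power_law(V, 2.087, -3.29, 0.290) + rng.normal(0.0, 5e-4, V.size)

popt, pcov = curve_fit(power_law, V, kappa_data, p0=(2.0, -3.0, 0.3))
kappa_inf, a, delta = popt  # fitted parameters; their covariance is in pcov
```

A factorial entropy would instead show up as a term linear in $\ln V$, which such a fit would fail to absorb.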
Furthermore, these numbers are consistent with the earlier study \cite{varsted}. We are thus confident that this power law is empirically a very reasonable parameterisation of the approach to the thermodynamic limit. Certainly, our conclusions must be that the numerical data {\it strongly} favour the existence of a bound. One might object that the formula used to compute $\kappa_3^c$ is only approximate (we have neglected the variation of the critical coupling over the range of fluctuation of the volumes). This, in turn, might yield finite volume corrections which are misleading. To check for this we have extracted $\kappa_3^c$ directly from the measured distribution of 3-volumes $Q\left(N_3\right)$. To do this we computed a new histogram $P\left(N_3\right)$ \begin{equation} P\left(N_3\right)=Q\left(N_3\right)e^{\kappa_3 N_3+\gamma\left(N_3-V\right)^2} \end{equation} As an example we show in fig. \ref{fig2} the logarithm of this quantity as a function of volume for $V=64000$. The gradient of the straight line fit shown is an unbiased estimator of the critical coupling $\kappa_3^c\left(64000\right)$. The value of $1.9516(10)$ compares very favourably with the value $\kappa_3^c\left( 64000\right)=1.9522(12)$ obtained using eqn. \ref{eqn2}. Indeed, this might have been anticipated since we might expect corrections to eqn. \ref{eqn2} to be of magnitude $O\left(V^{-\left(1+\delta\right)}\right)$, which even for the smallest volumes used in this study is again much smaller than our statistical errors. In addition to supplying a proof of the exponential bound, Boulatov \cite{boulatov} also conjectures a relation between the mean node number and volume in the crumpled phase of the model (which includes our node coupling $\kappa_0=0$). This has the form \begin{equation} \left\langle N_0/V\right\rangle=c_1 + c_2 {\ln(V)\over V} \label{eqn3} \end{equation} Our data for this quantity are shown in fig. \ref{fig3}. 
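The reweighting that produces $P\left(N_3\right)$ and the straight-line estimate of $\kappa_3^c$ can be sketched as follows (a hypothetical example, working throughout in logarithms to avoid overflow; the input histogram is synthetic, generated from an assumed $\kappa_3^c=1.9516$, whereas the real input would be the measured Monte Carlo histogram):

```python
import numpy as np

# Illustrative stand-in for the measured distribution of 3-volumes Q(N3),
# sampled with weight exp(kappa3c*N3 - kappa3*N3 - gamma*(N3-V)^2).
gamma, V, kappa3 = 0.005, 64000.0, 2.0
kappa3c_true = 1.9516  # value quoted in the text, used here to fake the data

N3 = np.arange(V - 150, V + 151)  # volumes visited around the target V
logQ = kappa3c_true * N3 - kappa3 * N3 - gamma * (N3 - V) ** 2
logQ -= logQ.max()  # overall normalization is irrelevant

# Reweight: log P(N3) = log Q(N3) + kappa3*N3 + gamma*(N3-V)^2.
# The gradient of a straight-line fit to log P estimates kappa_3^c(V).
logP = logQ + kappa3 * N3 + gamma * (N3 - V) ** 2
slope, intercept = np.polyfit(N3, logP, 1)
```

By construction the reweighted histogram is exactly linear in $N_3$ here, so the fitted gradient returns the input coupling.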
Whilst it appears that the mean coordination may indeed plateau for large volumes, the approach to this limit seems not to be governed by the corrections envisaged in eqn. \ref{eqn3} -- it is simply impossible to fit the results of the simulation with this functional form. Indeed, the best fit we could obtain corresponds again to a simple converging power with small exponent $\left\langle N_0/V\right\rangle \sim b + cV^{-d}$. The fit shown corresponds to using all lattices with volume $V\ge 8000$ and yields $b=0.0045(1)$, $c=1.14(2)$ and power $d=0.380(3)$ ($\chi^2=1.6$). Fits to subsets of the large volume data yield consistent results. Finally, we show in fig. \ref{fig4} a plot of the mean intrinsic size of the ensemble of simplicial graphs versus their volume. This quantity is just the average geodesic distance (in units where the edge lengths are all unity) between two randomly picked sites. The solid line is an empirical fit of the form \begin{equation} L_3=e + f\left(\ln{V}\right)^{g} \end{equation} Clearly, the behaviour is close to logarithmic (as appears also to be the case in four dimensions \cite{us4}); the exponent is $g=1.047(3)$ from fitting all the data ($\chi^2=1.7$ per degree of freedom). This is indicative of the crumpled nature of the typical simplicial manifolds dominating the partition function at this node coupling. It is tempting to speculate that the true behaviour is simply logarithmic and the deviation we are seeing is due to residual finite volume effects. Alternatively, we can fit the data as a linear combination of the form \begin{equation} L_3=e + f\ln{V} +g\ln{\ln{V}} \end{equation} This gives a competitive fit with $e=-1.45(4)$, $f=1.438(4)$ and $g=-0.55(3)$ with $\chi^2=1.6$. One might be tempted to favour this fit on the grounds that it avoids the problem of a power close to but distinct from unity. However, the situation must remain ambiguous without further theoretical insight. 
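The competition between the two parameterisations of $L_3$ can be reproduced on synthetic data (an illustrative sketch: the data are generated from the first form with the quoted exponent $g=1.047$, while the offset, amplitude and noise level are invented for the example):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "mean intrinsic size" data over the simulated volume range,
# generated from L3 = e + f*(ln V)^g with the quoted exponent g = 1.047.
lnV = np.log(np.array([500, 1000, 2000, 4000, 8000, 16000, 32000,
                       64000, 128000], dtype=float))
rng = np.random.default_rng(1)
L3 = 0.5 + 1.3 * lnV ** 1.047 + rng.normal(0.0, 0.01, lnV.size)

# Model 1: L3 = e + f*(ln V)^g  (nonlinear in g).
popt1, _ = curve_fit(lambda x, e, f, g: e + f * x ** g, lnV, L3,
                     p0=(0.0, 1.0, 1.0))
res1 = L3 - (popt1[0] + popt1[1] * lnV ** popt1[2])

# Model 2: L3 = e + f*ln V + g*ln ln V  (linear least squares).
X = np.column_stack([np.ones_like(lnV), lnV, np.log(lnV)])
coef2, *_ = np.linalg.lstsq(X, L3, rcond=None)
res2 = L3 - X @ coef2
```

Over this narrow range of $\ln V$ both forms reproduce the data to within the noise, which is precisely why the fits cannot be distinguished empirically.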
To summarise this brief note, we have obtained numerical results consistent with the existence of an exponential bound in a dynamical triangulation model of three dimensional quantum gravity. Thus, practical numerical studies can reveal the bound argued for in \cite{boulatov}. Our results also favour the existence of a finite (albeit large, $\sim 200$) mean coordination number in the infinite volume limit in the crumpled phase. However, the nature of the finite volume corrections to the latter appears very different from those proposed in \cite{boulatov}. Indeed, both for the critical coupling and mean coordination number we observe large power law corrections with small exponent. Finally, we show data for the scaling of the mean intrinsic extent with volume which suggests a very large (possibly infinite) fractal dimension for the typical simplicial manifolds studied. This work was supported, in part, by NSF grant PHY 92-00148. Some calculations were performed on the Florida State University Cray YMP. \vfill \newpage
\section{Introduction} The degenerate parabolic equation \begin{equation}\label{0.0} h_t + (h^n h_{xxx} )_x = 0 \end{equation} arises in the description of the evolution of the height $y = h(t,x)$ of a liquid film which spreads over a solid surface ($y = 0$) under the action of surface tension and viscosity in the lubrication approximation (see \cite{4,6}). Lubrication models have been shown to be extremely useful approximations to the full Navier-Stokes equations for investigating the dynamics of thin liquid films, including the motion and instabilities of their contact lines. For thicknesses in the range of a few micrometers and larger, the choice of the boundary condition at the solid substrate does not influence the eventual appearance of instabilities, such as formation of fingers at the three-phase contact line (see \cite{7,8}). For other applications, such as the dewetting of a nano-scale thin polymer film on a hydrophobic substrate, the boundary condition at the substrate appears to have a crucial impact on the dynamics and morphology of the film. The exponent $n \in \mathbb{R}^{+}$ is related to the condition imposed at the liquid-solid interface, for example, $n = 3$ for the no-slip condition, and $n \in (0,3)$ for a slip condition in the form \begin{equation}\label{form} v^{x} = \mu\,h^{n - 2} (v^{x})_{y} \text{ at } y = 0. \end{equation} Here, $v^x$ is the horizontal component of the velocity field, $\mu$ is a non-negative slip parameter, and $\mu\,h^{n - 2}$ is the weighted slip length. Two cases are distinguished: $\mu = 0$ and $n = 2$. The first corresponds to the assumption of a no-slip condition, the second to the assumption of a Navier slip condition at the liquid-solid interface. The wetted region $\{h > 0\}$ is unknown; hence the system is modelled as a free-boundary problem, the free boundary being given by $\partial \{h > 0\}$, i.\,e. the triple junctions where liquid, solid and air meet. 
The main difficulty in studying equation (\ref{0.0}) is its singular behaviour for $h = 0$. The mathematical study of equation~(\ref{0.0}) was initiated by F.~Bernis and A.~Friedman \cite{B8}. They showed the positivity property of solutions to (\ref{0.0}) and proved the existence of nonnegative generalized solutions of the initial--boundary value problem with an arbitrary nonnegative initial function from $H^1$. More regular (\emph{strong} or \emph{entropy}) solutions have been constructed in \cite{B2,B14}. One outstanding question is whether zeros develop in finite time, starting with regular initial data. What is known is that with periodic boundary conditions, for $n \geqslant 3.5$ this does not occur \cite{B2,B8}, while for $n < 3/2$ the solution develops zeroes in a finite time \cite{10}. One way of looking at the problem (\ref{0.1}) has been to study similarity solutions to (\ref{0.0}) in the form $h(x,t) = t^{-\alpha} H(x\,t^{- \beta})$, where $n\alpha + 4\beta =1$ (see \cite{9}). In the paper \cite{1}, the authors also proved the existence of dipole solutions and found their asymptotic behaviour. We note that solutions of this type do not exist in the case $n \geqslant 2$; however, there exists a traveling wave solution (see \cite{4}). In the present paper, we concentrate on a traveling wave solution to (\ref{0.0}) at $n = 2$, namely, we consider the following problem with a regular initial function: \begin{equation}\label{0.1} \left\{ \begin{array}{l} h_t + (h^2 h_{xxx} )_x = 0, \\ h(s_1 ) = h(s_2 ) = 0, \\ h_x (s_1 ) = \theta > 0,\ h^2 h_{xxx} = 0 \text{ at } x = s_1.\\ \end{array} \right. \end{equation} System (\ref{0.1}) describes the growth of dewetted regions in the film. Fluid transported out of the growing dry regions collects in a ridge profile which advances into the undisturbed fluid (see Figure~1). Under ideal conditions, it could be imagined that dry spots could grow indefinitely large. 
By conservation of mass, the growing holes would shift fluid into the ever-growing rims. In our situation, a large length scale limiting the sizes of these structures is absent, and we might expect the motion and growth of the ridges to approach a scale-invariant self-similar form. At the same time, the ridge profiles have a pronounced asymmetry (see \cite{3}). \begin{figure}[t] \includegraphics[width= \textwidth, keepaspectratio]{Fig1.eps} \caption[1]{Sketch of the cross-section of a dewetting film after rupture, showing the expanding dewetted residual layer (also called a ``hole'' or ``dry spot'') in the middle and the adjacent ``dewetting ridges'' moving into the surrounding undisturbed uniform film \cite{3}.} \end{figure} In the problem at hand, $x = s_1(t)$ is the position of the former moving interface, i.\,e. the contact line, while the position of the latter interface gives an effective measure of the width of the ridge, $x = s_2(t) = s_1(t) + w(t)$. The ridge is assumed to be moving forward, $\dot{s}_1(t) > 0$, corresponding to an expanding hole. The arbitrary positive parameter $\theta$ corresponds to the contact angle of the liquid-solid interface. Thus, we can control a dewetted region in the film by the contact line and obtain asymmetric profiles of solutions. The paper \cite{3} has shown that the axisymmetric profile can be analyzed within a one-dimensional thin-film model. The authors found a matched asymptotic expansion and the speed and structure of the profile; in particular, they obtained that \begin{equation}\label{as} h(x,t) \sim A\, \dot{s_1}^{1/2} (s_2 - x)^{3/2} \text{ at } x = s_2, \end{equation} where the asymptotic constant $A = 2(2/3)^{1/2}$. Hereinafter, we assume that the contact line moves with a constant velocity (${\rm{v}}$) and that the width of the ridge ($w$) is constant. 
As in \cite{3}, we are going to look for a solution to (\ref{0.1}) in the following form $$ h(x,t) = \hat{h}(\xi ), \text{ where } \xi = x - {\rm{v}}t,\ s_1 = {\rm{v}}t,\ s_2 = {\rm{v}}t + w. $$ We can remove ${\rm{v}}$ and $\theta$ from the resulting problem by rescaling appropriately, $$ \hat{h} = \tfrac{\theta ^3}{\rm{v}}\varphi ,\ \xi = \tfrac{\theta ^2 }{{\rm{v}}}\eta ,\ w = \tfrac{\theta ^2 }{{\rm{v}}}d. $$ As a result, we obtain the following problem for the traveling wave \begin{equation}\label{0.2} \left\{ \begin{array}{l} \varphi (\eta) \varphi'''(\eta) = 1,\ \varphi (\eta ) \geqslant 0, \\ \varphi (0) = \varphi (d) = 0,\ \varphi' (0) = 1,\ 0 \leqslant \eta \leqslant d. \\ \end{array} \right. \end{equation} Boatto et al. \cite{4} reduced this problem to the problem of finding a co-dimension one orbit of a second-order ODE system connecting equilibria. Hence, generically, solutions will exist only for isolated values of the free parameter $d$. The parameter $d$ was found in \cite{3} by integrating (\ref{0.2}), and \begin{equation}\label{d} d=1/2. \end{equation} Our paper is organized as follows. In Section~2 we prove the existence and uniqueness of a traveling wave solution to the problem (\ref{0.2}) (Theorem~\ref{Th1}). Lower and upper bounds for the traveling wave solution are contained in Section~3 (Theorem~\ref{Th2}). We note that the bounds assert that the constant $A$ of (\ref{as}) must be from the interval $[4\sqrt{2}/3, 4 \sqrt{6}/3]$ (Corollary~3.1). In Section~4 we find an absolute error estimate for the approximation of a solution to (\ref{0.1}) by the traveling-wave solution (Theorem~\ref{Th3}). \section{Existence of the traveling wave solution} Below we prove the existence and uniqueness of the traveling wave solution to the problem (\ref{0.2}). Our proof is based on a modification of the proof of the existence and uniqueness of dipole solutions from \cite{1}. 
\begin{theorem}\label{Th1} There exists a unique solution $\varphi (\eta )$ to the problem (\ref{0.2}) such that $\varphi (\eta ) \in C^3 (0,d) \cap C^1 [0,d]$ and $\varphi (\eta ) > 0$ for $0 < \eta < d$. \end{theorem} \noindent First, we prove the following auxiliary lemma: \begin{lemma}\label{l1.2} Assume that $\varphi \in C^3 (0,d) \cap C^1 [0,d]$, $ \varphi (0) = \varphi (d) = 0,\ \varphi' (0) = 1$, $\varphi > 0$ and $\varphi''' (\eta )\geqslant 0$ in $(0,d)$. Then $\varphi$ has a unique maximum and $\varphi' (d) \leqslant 0$. \end{lemma} \begin{proof} Since $\varphi''' (\eta ) \geqslant 0$ we have that $ \varphi'(\eta )$ is convex and $\varphi''(\eta )$ is increasing. By Rolle's theorem $\varphi '(\eta )$ has at least one zero in $(0,d)$, and $\varphi'(\eta )$ has no more than two zeroes in $(0,d)$ by convexity. Let $d_1, d_2 \in (0,d)$ be such that $\varphi' (d_1 ) = \varphi' (d_2 ) = 0$. Then, by Rolle's theorem, there exists $d_3 \in (d_1 ,d_2 ): \varphi'' (d_3 ) = 0$ whence $\varphi (d) \geqslant 0$. In view of $\varphi > 0$ in $ (0,d)$ and $\varphi (d) = 0$, we obtain $d_2 = d$ and $\varphi' (d) = 0$. This proves that $\varphi'(\eta )$ has exactly one zero in $(0,d)$ and hence $\varphi (\eta )$ has a unique maximum. Now $ \varphi' (d) \leqslant 0$ follows easily. \end{proof} \begin{proof}[Proof of Theorem~\ref{Th1}] \emph{Green's function.} We define a Green's function $G(\eta ,t)$ by \begin{equation}\label{1.1} \left\{ \begin{array}{l} G'''(\eta ,t) = \delta (\eta - t),\ 0 \leqslant \eta \leqslant d,\ 0 \leqslant t \leqslant d, \\ G(0,t) = G(d,t) = G'(0,t) = 0,\ 0 \leqslant t \leqslant d,\\ \end{array} \right. \end{equation} where $d = 1/2$. By explicit computation, we find that \begin{equation}\label{1.2} 0 \leqslant G(\eta ,t) = \left\{ \begin{array}{l} 2(t - d)^2 \eta ^2 \text{ if } 0 \leqslant \eta \leqslant t \leqslant d, \\ 2(t - d)^2 \eta ^2 - d(\eta - t)^2 \text{ if } 0 \leqslant t \leqslant \eta \leqslant d, \\ \end{array} \right. 
\end{equation} whence \begin{equation}\label{1.2*} \int\limits_0^\eta {G(\eta ,t)\,dt} = \tfrac{2}{3}\eta ^3 (d - \eta )(1 - \eta ),\ \int\limits_\eta ^d {G(\eta ,t)\,dt} = \tfrac{2}{3}\eta ^2 (d - \eta )^3 \end{equation} if $0 < \eta < d$, and \begin{equation}\label{1.3} G(\eta ,t) \leqslant C\,t^2 (d - t),\quad |G'(\eta ,t)| \leqslant C\,t(d - t)\ \forall\, \eta ,\,t \in [0,d]. \end{equation} \medskip \noindent \emph{Approximating problems.} For each positive integer $k$ we consider the problem \begin{equation}\label{1.4} \left\{ \begin{array}{l} \varphi_{k}''' (\eta ) = \varphi _k^{ - 1} \text{ for } 0 < \eta < d, \\ \varphi _k (0) = \varphi _k (d) = \frac{1}{k},\ \varphi' _{k} (0) = 1, \\ \varphi _k (\eta ) \in C^3 [0,d],\ \varphi _k (\eta ) > 0 \text{ for } 0 \leqslant \eta \leqslant d. \\ \end{array} \right. \end{equation} Consider the closed convex set $$ S = \{ v \in C[0,d]:v \geqslant 1/k \text{ in } (0,d)\} $$ and the nonlinear operator $\Phi _k$ defined by $$ \Phi _k v(\eta ) = \tfrac{1}{k} + 2\eta (d - \eta ) + \int\limits_0^d {G(\eta ,t)v^{ - 1} (t)\,dt}, $$ where $G(\eta ,t)$ is from (\ref{1.2}). The operator $\Phi _k$ maps $S$ into $S$ and is continuous. Moreover, $\Phi _k (S)$ is (for each $k$) a bounded subset of $C^3 [0,d]$ and hence a relatively compact subset of $S$. By Schauder's fixed-point theorem, there exists $\varphi _k \in \Phi _k (S)$ such that $\Phi _k \varphi _k = \varphi _k$. This is the desired solution of the problem~(\ref{1.4}). Note that $\varphi _k$ satisfies \begin{equation}\label{1.5} \begin{gathered} \varphi _k (\eta ) = \tfrac{1}{k} + 2\eta (d - \eta ) + \int\limits_0^d {G(\eta ,t)\varphi''' _{k} (t)\,dt} ,\hfill\\ \varphi' _{k} (\eta ) = 2(d - 2\eta ) + \int\limits_0^d {G'(\eta ,t)\varphi''' _{k} (t)\,dt}. \hfill \end{gathered} \end{equation} In view of Lemma~\ref{l1.2} (applied to $\varphi _k - 1/k$), there exists a unique point $m_k$ at which the maximum of $\varphi _k$ is attained. 
Therefore, $$ \varphi _k (\eta ) \nearrow \text{ in } (0,m_k )\ \text{and}\ \varphi _k (\eta ) \searrow \text{ in } (m_k ,d). $$ \medskip \noindent \emph{Estimates.} Since $G \geqslant 0$ we get $$ \begin{array}{r} \varphi _k (\eta ) - \frac{1}{k} = 2\eta (d - \eta ) + \int\limits_0^d {G(\eta ,t)\varphi _k^{ - 1} (t)\,dt} \geqslant \varphi _k^{ - 1} (\eta )\int\limits_0^\eta {G(\eta ,t)\,dt} \\ = \tfrac{2}{3} \eta ^3 (d - \eta )(1 - \eta ) \varphi _k^{ - 1} (\eta ) \geqslant \frac{2}{3} \eta ^3 (d - \eta )^2\varphi _k^{ - 1} (\eta ), \\ \end{array} $$ whence $$ \varphi _k (\eta ) \geqslant \sqrt {\tfrac{2}{3}} \eta ^{3/2} (d - \eta ) \geqslant C\,\eta (d - \eta )^{3/2} \quad {\rm{if}}\quad 0 < \eta < m_k, $$ and $$ \begin{array}{r} \varphi _k (\eta ) - \frac{1}{k} = 2\eta (d - \eta ) + \int\limits_0^d {G(\eta ,t)\varphi _k^{ - 1} (t)\,dt} \geqslant \varphi _k^{ - 1} (\eta )\int\limits_\eta ^d {G(\eta ,t)\,dt} \\ = \tfrac{2}{3} \eta ^2 (d - \eta )^3 \varphi _k^{ - 1} (\eta ), \\ \end{array} $$ whence $$ \varphi _k (\eta ) \geqslant \sqrt {\tfrac{2}{3}} \eta (d - \eta )^{3/2} \quad {\rm{if}}\quad m_k < \eta < d. $$ Hence, \begin{equation}\label{1.6} \varphi _k (\eta ) \geqslant C\,\eta (d - \eta )^{3/2} \quad \forall\, \eta \in (0,d). \end{equation} Next we deduce from the differential equation that, for all $\eta \in (0,d)$, \begin{equation}\label{1.7} \eta (d - \eta )\varphi''' _{k} (\eta ) = \eta (d - \eta )\varphi _k^{ - 1} (\eta ) \mathop \leqslant \limits^{(\ref{1.6})} C\,(d - \eta )^{ - 1/2} , \end{equation} where the right-hand side is an integrable function. \medskip \noindent \emph{Passing to the limit.} From (\ref{1.5}), (\ref{1.3}) and (\ref{1.7}) it follows that $\varphi _k$ is bounded in $C^1 [0,d]$. Therefore, there exists a subsequence, again denoted by $\varphi _k$, and a function $\varphi$ such that $\varphi _k \mathop \to \varphi$ uniformly on $[0,d]$ as $ k \to \infty $. 
Thus $\varphi (0) = \varphi (d) = 0$ and, by (\ref{1.6}), $\varphi > 0$ in $(0,d)$. Hence, for each compact subset $I$ of $(0,d)$ we have $$ \varphi''' _{k} (\eta ) = \varphi _k^{ - 1} (\eta )\mathop \to \limits_{k \to \infty } \varphi ^{ - 1} (\eta ) \text{ in } C(I), $$ and, by (\ref{1.7}) and Lebesgue's dominated convergence theorem, \begin{equation}\label{1.8} \eta (d - \eta )\varphi''' _{k} (\eta )\mathop \to \limits_{k \to \infty } \eta (d - \eta )\varphi ^{ - 1} (\eta )\text{ in } L^1 (0,d). \end{equation} Since $\varphi''' _{k} \mathop \to \limits_{k \to \infty } \varphi''' $ in the distribution sense, it follows that \begin{equation}\label{1.9} \varphi ''' (\eta ) = \varphi ^{ - 1} (\eta )\text{ in } (0,d), \end{equation} i.\,e. $\varphi$ satisfies the differential equation. Moreover, from (\ref{1.8}), (\ref{1.9}), (\ref{1.5}) and (\ref{1.3}) we deduce that $\varphi _k \mathop \to \varphi$ in $C^1 [0,d]$ as $k \to \infty $, and hence $\varphi$ also satisfies $\varphi' (0) = 1$. This completes the proof of the existence. \medskip \noindent \emph{Uniqueness}. Let $\varphi_1$ and $\varphi_2$ be two solutions of the problem (\ref{0.2}) and set $v = \varphi_1 - \varphi_2$. Since $v''' = \varphi_1^{-1} - \varphi_2^{-1}$ and the function $s \mapsto s^{-1}$ is decreasing, we deduce that $v\,v ''' \leqslant 0$. Since $v\,v''' \leqslant 0$ and $v(0) = v(d) = v '(0) = 0$, we conclude that $v \equiv 0$. This completes the proof of Theorem~\ref{Th1}. \end{proof} \section{Lower and upper bounds for the traveling wave solution} Integrating (\ref{0.2}) with respect to $\eta$, we arrive at the following problem \begin{equation}\label{2.1} \left\{ \begin{array}{l} \varphi(\eta) \varphi''(\eta) = \tfrac{1}{2}(\varphi'(\eta) )^2 + \eta - d,\ \varphi (\eta ) \geqslant 0, \\ \varphi (0) = 0,\,\varphi' (0) = 1,\ 0 \leqslant \eta \leqslant d, \\ \end{array} \right. \end{equation} where $d$ is from (\ref{d}). 
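For completeness, the integration step leading to (\ref{2.1}) can be verified directly: since $$ \frac{d}{d\eta}\Bigl[\varphi \varphi'' - \tfrac{1}{2}(\varphi')^2\Bigr] = \varphi'\varphi'' + \varphi\varphi''' - \varphi'\varphi'' = \varphi\varphi''' = 1, $$ we have $\varphi \varphi'' - \tfrac{1}{2}(\varphi')^2 = \eta + C$. Letting $\eta \to 0$ and using $\varphi(0) = 0$, $\varphi'(0) = 1$ together with $\varphi\varphi'' \to 0$ (near the origin $\varphi \sim \eta$, so $\varphi''' \sim \eta^{-1}$ and $\varphi\varphi'' = O(\eta\ln\eta)$), we find $C = -\tfrac{1}{2} = -d$.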
Analyzing the behaviour of a solution to (\ref{2.1}), we find explicit lower and upper bounds for the solution. \begin{theorem}\label{Th2} Let $\varphi (\eta )$ be a solution from Theorem~\ref{Th1}. Then the following estimates are valid \begin{equation}\label{2.5} \varphi _{\min} (\eta ) \leqslant \varphi (\eta ) \leqslant \varphi _{\max} (\eta ) \Leftrightarrow \tfrac{4\sqrt{2}}{3}\eta (d - \eta )^{3/2}\leqslant \varphi (\eta ) \leqslant \tfrac{{4\sqrt 6 }}{3}\,\eta (d - \eta )^{3/2} \end{equation} for all $\eta \in [0,d]$ $($see Figure~2$)$. \end{theorem} \begin{corollary} In particular, from (\ref{2.5}) it follows that \begin{equation}\label{2.5'} \tfrac{4\sqrt{2}}{3} \dot{s}_1^{1/2} \tfrac{x - s_1}{s_2 - s_1} (s_2 - x)^{3/2} \leqslant h(x,t) \leqslant \tfrac{4\sqrt 6}{3} \dot{s}_1^{1/2} \tfrac{x - s_1}{s_2 - s_1} (s_2 - x)^{3/2} \end{equation} for all $x \in [s_1, s_2]$. \end{corollary} \begin{figure}[t] \includegraphics[width= \textwidth, keepaspectratio]{Fig2.eps} \caption[2]{Lower and upper bounds for the traveling wave solution} \end{figure} \begin{lemma}\label{l2.1} The function $ \varphi_0 (\eta ) = A_0\,\eta (d - \eta )^{3/2}$ $(A_0 > 0)$ satisfies the inequalities \begin{equation}\label{2.2} \varphi \varphi'' \geqslant \tfrac{1}{2}(\varphi' )^2 + \eta - d \ \forall\, \eta \in [0,d] \text{ if } A_0^2 d^2 \leqslant 5/3, \end{equation} \begin{equation}\label{2.3} \varphi \varphi'' \leqslant \tfrac{1}{2}(\varphi')^2 + \eta - d \ \forall\, \eta \in [0,d] \text{ if } A_0^2 d^2 \geqslant 8/3. \end{equation} \end{lemma} \begin{proof} Indeed, the function $\varphi_0 (\eta )$ satisfies the equation $$ \varphi _0 \varphi'' _{0} = \tfrac{1}{2}(\varphi'_{0} )^2 + f (\eta ), $$ where $$ f(\eta ): = \tfrac{5}{8}A_0^2 (d - \eta )\left( {\eta - \tfrac{2d(1 - \sqrt 6 )}{5}} \right)\left( {\eta - \tfrac{2d(1 + \sqrt 6 )}{5}} \right) \leqslant 0 \ \forall\, \eta \in [0,d]. 
$$ From $$ f(\eta ) \geqslant \eta - d \Leftrightarrow 5\eta ^2 - 4d\eta - 4d^2 + \tfrac{8}{A_0^2} \geqslant 0 \ \forall \,\eta \in [0,d], $$ $$ D = 4d^2 + 20d^2 - \tfrac{40}{A_0^2} = \tfrac{24}{A_0^2} \left( {A_0^2 d^2 - \tfrac{5}{3}} \right) \leqslant 0 \text{ if } A_0^2 d^2 \leqslant 5/3 $$ we obtain (\ref{2.2}). In a similar way, we obtain (\ref{2.3}) for $f(\eta ) \leqslant \eta - d$ $\forall\, \eta \in [0,d]$ if $A_0^2 d^2 \geqslant 8/3$. \end{proof} \begin{lemma}\label{l2.2} The function $\varphi_{\min} (\eta ) = A_1 \eta (d - \eta )^{3/2}$ $(A_1 > 0)$ with $A_1^2 d^2 \leqslant 8/9$ is a lower bound for the solution $\varphi (\eta )$ of (\ref{2.1})$:$ $$ \varphi _{\min}(\eta ) \leqslant \varphi (\eta ) \ (\text{i.\,e. } \varphi _{\min} (\eta ) - \varphi (\eta ) \leqslant 0) \ \forall\, \eta \in [0,d]. $$ \end{lemma} \begin{proof} ``Contraction Principle''. Let us define $v(\eta):= \varphi _{\min}(\eta) - \varphi (\eta)$. Suppose that there exists a point $\eta _0 \in [0,d]$ such that $ v(\eta_0) > 0$; then $\eta _0$ is a point of maximum for $v(\eta)$, i.\,e. $v'(\eta _0 ) = 0 \Leftrightarrow \varphi '_{\min} (\eta _0 ) = \varphi '(\eta _0 ) = M$ and $v''(\eta _0) < 0$. From (\ref{2.1}) and (\ref{2.2}) we deduce that $$ \left\{ \begin{array}{l} \varphi \varphi'' = \tfrac{1}{2}(\varphi')^2 + \eta - d \\ \varphi _{\min} \varphi'' _{\min} \geqslant \tfrac{1}{2}(\varphi' _{\min})^2 + \eta - d \\ \end{array} \right.\Rightarrow $$ $$ v''(\eta) \geqslant \tfrac{1}{2}\left( {\tfrac{(\varphi _{\min,\eta } )^2 }{\varphi _{\min}} - \tfrac{(\varphi _\eta )^2 }{\varphi }} \right) + (\eta - d)\left({\tfrac{1}{\varphi _{\min}} - \tfrac{1}{\varphi }} \right), $$ whence $$ \underbrace {v''(\eta_0)}_{ < 0} \geqslant \underbrace {\tfrac{-v(\eta _0)}{\varphi _{\min} (\eta _0 )\varphi (\eta _0 )}}_{ < 0}\underbrace {\left( {\tfrac{1}{2}M^2 + \eta _0 - d} \right)}_{?}. 
$$ Using $M = A_1 (d - \eta _0 )^{1/2} (d - 5\eta _0 /2)$, we find $$ \tfrac{1}{2}M^2 + \eta _0 - d = (d - \eta _0 )\bigl[ \tfrac{1}{2} A_1^2 (d - 5\eta _0 /2)^2 - 1 \bigr] \leqslant 0 \ \forall \,\eta _0 \in [0,d] \text{ if } A_1^2 d^2 \leqslant 8/9. $$ Thus we obtain a contradiction with our assumption, which proves the assertion of Lemma~\ref{l2.2}. \end{proof} \begin{lemma}\label{l2.3} The function $ \varphi _{\max} (\eta ) = A_2\, \eta (d - \eta )^{3/2}$ $(A_2 > 0)$ with $A_2^2 d^2 \geqslant 8/3$ is an upper bound for the solution $\varphi (\eta )$ of (\ref{2.1})$:$ $$ \varphi (\eta ) \leqslant \varphi _{\max} (\eta ) \ (\text{i.\,e. } \varphi (\eta ) - \varphi _{\max} (\eta ) \leqslant 0) \ \forall \eta \in [0,d]. $$ \end{lemma} \begin{proof} ``Contraction Principle''. Let us define $v(\eta):= \varphi (\eta) - \varphi _{\max}(\eta)$. Suppose that there exists a point $\eta _0 \in [0,d]$ such that $ v(\eta_0) > 0$; then $\eta _0$ is a point of maximum for $v(\eta)$, i.\,e. $v'(\eta _0 ) = 0 \Leftrightarrow \varphi '_{\max} (\eta _0 ) = \varphi '(\eta _0 )$ and $v''(\eta _0) < 0$. Moreover, from $\varphi ''_{\max} (\eta ) = 3A_2 (d - \eta )^{ - 1/2} [5\eta /4 - d]$ we find that $\varphi ''_{\max} (\eta ) \leqslant 0$ for all $\eta \in [0,0.4]$ and $\varphi ''_{\max} (\eta ) > 0$ for all $\eta \in (0.4,d]$. From (\ref{2.1}) and (\ref{2.3}) we deduce that \begin{equation}\label{2.4} \left\{ \begin{array}{l} \varphi \varphi'' = \tfrac{1}{2}(\varphi' )^2 + \eta - d \\ \varphi _{\max} \varphi'' _{\max} \leqslant \tfrac{1}{2}(\varphi' _{\max})^2 + \eta - d\\ \end{array} \right. 
\Rightarrow \varphi(\eta _0 ) \varphi'' (\eta _0 ) - \varphi _{\max}(\eta _0 ) \varphi'' _{\max}(\eta _0 ) \geqslant 0, \end{equation} whence $$ \underbrace{\varphi (\eta _0 )\varphi''(\eta _0 ) - \varphi _{\max} (\eta _0 )\,\varphi'' _{\max} (\eta _0 )}_{\geqslant 0} = \underbrace {\varphi (\eta _0 ) v''(\eta _0 )}_{ \leqslant 0} + \underbrace {\varphi''_{\max} (\eta _0 )\, v(\eta _0 )}_{ \leqslant 0}. $$ This contradicts our assumption if $\eta _0 \in [0,0.4]$. Now, let $\eta _0 \in (0.4,d]$. In this case, if $\varphi ''(\eta _0 ) \leqslant 0$, we obtain a contradiction immediately from (\ref{2.4}). If $\varphi ''(\eta _0 ) > 0$ then we rewrite (\ref{2.4}) in equivalent form: \begin{equation}\label{2.04} \left\{ \begin{array}{l} \tfrac{1}{2}(\varphi^2 )'' = \tfrac{3}{2}(\varphi')^2 + \eta - d \\ \tfrac{1}{2}(\varphi_{\max}^2)'' \leqslant \tfrac{3}{2}(\varphi' _{\max} )^2 + \eta - d \\ \end{array} \right. \Rightarrow (\varphi^2(\eta _0 ) - \varphi_{\max}^2(\eta _0 ))'' \geqslant 0, \end{equation} whence $$ \underbrace{(\varphi^2(\eta _0 ) - \varphi_{\max}^2(\eta _0 ))''}_{\geqslant 0} = \underbrace{(\varphi''(\eta _0 ) + \varphi''_{\max}(\eta _0))}_{>0} \underbrace{v''(\eta _0 )}_{<0} $$ and we arrive at a contradiction. Thus, Lemma~\ref{l2.3} is proved. \end{proof} As a result of Lemmata~\ref{l2.2} and \ref{l2.3}, we obtain lower (more precise than (\ref{1.6})) and upper bounds for the solution of (\ref{2.1}), and consequently for the solution of (\ref{0.2}). \section{Absolute error estimate of approximation of a solution by a traveling-wave solution} The next theorem contains an absolute error estimate for the approximation of a solution (e.g., a generalized solution) by the traveling-wave solution. \begin{theorem}\label{Th3} Let $h(x,t)$ be a solution and $\hat{h}(\xi )$ be the traveling-wave solution to the problem (\ref{0.1}). 
Then the following estimates hold \begin{equation}\label{3.1} \begin{array}{l} \mathop {\sup }\limits_{x \in [s_1 ,s_2 ]} | {h(x,t) - \hat{h}(\xi )} | \leqslant \tfrac{\sqrt{6}}{6}\theta (s_2 - s_1)^{1/2} \text{ if } |h_x (s_2 )| > \theta , \\ \mathop {\sup }\limits_{x \in [s_1 ,s_2 ]} \!\!\! \bigl|h(x,t) - \hat{h}(\xi ) - 2 \theta s_1^{1/2} \!(s_2 - s_1)^{1/2} \bigr| \!\leqslant \!\tfrac{\sqrt{6}}{6}\theta (s_2 - s_1)^{1/2} \text{ if } |h_x (s_2 )| \leqslant \theta, \\ \end{array} \end{equation} where $\xi = x - {\rm{v}}t,\ s_1 = {\rm{v}}t$ and $s_2 = s_1 + w$. \end{theorem} \begin{proof}[Proof of Theorem~\ref{Th3}] We make the following change of variables in (\ref{0.1}) $$ h(x,t) \mapsto f(\xi ,t), \text{ where } \xi = x - {\rm{v}}t. $$ As a result, we obtain the following problem \begin{equation}\label{3.2} \left\{ \begin{array}{l} f_t - {\rm{v}}\,f_\xi + (f^2 f_{\xi \xi \xi } )_\xi = 0, \\ f(0,t) = f(w,t) = 0,\quad f^2 f_{\xi \xi \xi } = 0 \text{ at }\xi = 0, \\ f_\xi (0,t) = \theta > 0. \\ \end{array} \right. 
\end{equation} Multiplying (\ref{3.2}$_1$) by $- f_{\xi \xi } (\xi ,t)$ and integrating with respect to $\xi$, we get $$ \begin{array}{c} \tfrac{1}{2} \tfrac{d}{dt} \int\limits_0^w {f_\xi ^2 (\xi ,t)\,d\xi } = - {\rm{v}}\int\limits_0^w {f_\xi f_{\xi \xi }\, d\xi } + \int\limits_0^w {(f^2 f_{\xi \xi \xi } )_\xi f_{\xi \xi }\, d\xi } = - \tfrac{ {\rm{v}}}{2} \int\limits_0^w { \tfrac{\partial}{\partial \xi}(f_\xi ^2 )\,d\xi } + \\ + \int\limits_0^w {(f^2 f_{\xi \xi \xi } )_\xi f_{\xi \xi } \,d\xi } \mathop = \limits^{(\ref{3.2}_2),(\ref{3.2}_3)} \tfrac{\rm{v}}{2} (\theta ^2 - f_\xi ^2 (w,t)) - \int\limits_0^w {f^2 f_{\xi \xi \xi }^2\, d\xi }, \end{array} $$ whence \begin{multline}\label{3.3} \tfrac{1}{2} \tfrac{d}{dt}\int\limits_0^w {f_\xi ^2 (\xi ,t)\,d\xi } + \int\limits_0^w {f^2 f_{\xi \xi \xi }^2 \,d\xi } = \tfrac{\rm{v}}{2}(\theta ^2 - f_\xi ^2 (w,t)) \Rightarrow \\ \tfrac{d}{dt} \int\limits_0^w {f_\xi ^2 (\xi ,t)\,d\xi } \leqslant {\rm{v}}(\theta ^2 - f_\xi ^2 (w,t)). \end{multline} Integrating (\ref{3.3}) with respect to time, we find \begin{equation}\label{3.4} \| {f_\xi (\xi ,t)} \|_{L^2 (0,w)}^2 \leqslant \| {f_\xi (\xi ,0)} \|_{L^2 (0,w)}^2 + {\rm{v}}\int\limits_0^t {(\theta ^2 - f_\xi ^2 (w,t))\,dt} . \end{equation} From (\ref{3.4}) it follows that \begin{equation}\label{3.4*} \left\| {f_\xi (\xi ,t)} \right\|_{L^2 (0,w)}^2 \leqslant \left\{ \begin{array}{c} \left\| {f_\xi (\xi ,0)} \right\|_{L^2 (0,w)}^2 \text{ if } |f_\xi (w,t)| > \theta , \\ \left\| {f_\xi (\xi ,0)} \right\|_{L^2 (0,w)}^2 + {\rm{ v}}\theta ^2 t \text{ if }|f_\xi (w,t)| \leqslant \theta . \\ \end{array} \right. 
\end{equation} From (\ref{3.4*}), by virtue of the uniqueness of the traveling-wave solution $\hat{h}(\xi)$ (see Theorem~\ref{Th1}) and the fact that $f(\xi ,0) = \hat{h}(\xi )$, we deduce that $$ \begin{array}{l} \| {f_\xi (\xi ,t) - \hat{h}_\xi (\xi )} \|_{L^2 (0,w)} \leqslant 2\| {\hat{h}_\xi (\xi )} \|_{L^2 (0,w)} \text{ if } |f_\xi (w,t)| > \theta , \\ \| {f_\xi (\xi ,t) - \hat{h}_\xi (\xi )} \|_{L^2 (0,w)} \leqslant 2\| {\hat{h}_\xi (\xi )} \|_{L^2 (0,w)} + 2\theta \sqrt {{\rm{v}}\,t} \text{ if } |f_\xi (w,t)| \leqslant \theta . \\ \end{array} $$ Therefore, taking into account the embedding $\mathop {H^1} \limits^{o}(0,w) \subset C[0,w]$, we find that \begin{equation}\label{3.06} \begin{gathered} \mathop {\sup }\limits_{\xi \in [0,w]} | {f(\xi ,t) - \hat{h}(\xi )} | \leqslant 2 w^{1/2}\| {\hat{h}_\xi (\xi )} \|_{L^2 (0,w)} \text{ if } |f_\xi (w,t)| > \theta , \hfill\\ \mathop {\sup }\limits_{\xi \in [0,w]} |f(\xi ,t) - \hat{h}(\xi ) | \leqslant 2 w^{1/2} (\| {\hat{h}_\xi (\xi )} \|_{L^2 (0,w)}+ \theta \sqrt {{\rm{v}}\,t}) \text{ if } |f_\xi (w,t)| \leqslant \theta .\hfill \end{gathered} \end{equation} Since $\hat{h}_\xi (\xi ) = \theta \varphi'_{\eta}(\eta)$ ($\varphi(\eta)$ is from Theorem~\ref{Th1}) and $\varphi(\eta)$ has a unique maximum in $(0,d)$, due to (\ref{2.5}) we conclude that \begin{equation}\label{3.7} \| {\hat{h}_\xi (\xi )} \|_{L^2 (0,w)} \leqslant \tfrac{ \sqrt{6}}{12}\theta. \end{equation} Thus, from (\ref{3.06}) and (\ref{3.7}) we arrive at \begin{equation}\label{3.6} \begin{gathered} \mathop {\sup }\limits_{\xi \in [0,w]} | {f(\xi ,t) - \hat{h}(\xi )} | \leqslant \tfrac{\sqrt{6}}{6}\theta w^{1/2} \text{ if } |f_\xi (w,t)| > \theta , \hfill\\ \mathop {\sup }\limits_{\xi \in [0,w]} |f(\xi ,t) - \hat{h}(\xi ) - 2 \theta w^{1/2}\sqrt {{\rm{v}}\,t} | \leqslant \tfrac{\sqrt{6}}{6}\theta w^{1/2} \text{ if } |f_\xi (w,t)| \leqslant \theta ,\hfill \end{gathered} \end{equation} which completes the proof of Theorem~\ref{Th3}.
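For completeness, the constant $w^{1/2}$ appearing in (\ref{3.06}) can be written out: since $f(0,t) = \hat{h}(0) = 0$, the function $g = f(\cdot ,t) - \hat{h}$ vanishes at $\xi = 0$, and the Newton--Leibniz formula together with the Cauchy--Schwarz inequality yields $$ \mathop {\sup }\limits_{\xi \in [0,w]} |g(\xi )| = \mathop {\sup }\limits_{\xi \in [0,w]} \Bigl| \int\limits_0^{\xi} g'(s)\,ds \Bigr| \leqslant \int\limits_0^{w} |g'(s)|\,ds \leqslant w^{1/2} \| g' \|_{L^2 (0,w)}. $$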
\end{proof} {\footnotesize {\bf Acknowledgement.} The author would like to thank Andreas M\"{u}nch for his valuable comments and remarks. This research is partially supported by the INTAS project Ref. No: 05-1000008-7921.}
\section{Introduction} Crystal plasticity involves important features on multiple spatial and temporal scales ranging from atomistic processes to, e.g.,\ the grain structure of a specimen. Understanding and modeling such a rich phenomenon requires a multiscale approach, where models at different levels rely on inputs from lower scales. One such step that has been studied intensively is bridging discrete dislocation plasticity with a higher-scale continuum description \cite{groma2003spatial, groma2015scale, hochrainer2014continuum, poh2013homogenization, levkovitch2006large, sedlavcek2007continuum}. The main motivation behind these activities is that dislocations interact with long-range stress fields, so in a discrete model all mutual pair interactions between dislocations have to be taken into account, leading to a time complexity that makes calculations intractable already for small samples. This restriction could be lifted by an appropriate continuum model, in which dislocations are smeared out in terms of continuous density fields, and one considers the dynamics of these fields in the form of continuity equations. These descriptions filter out spatial and temporal fluctuations appearing in the form of intermittent strain bursts caused by dislocation avalanches \cite{dimiduk2006scale, zaiser2008strain}. Such fluctuations, however, often represent important physics that one may intend to take into account. For instance, in the case of micron-scale specimens the stochastic strain bursts prohibit predictable forming, and thus represent a major challenge for material design \cite{csikor2007dislocation}. They also play an important role in size effects, that is, the increase of material strength when specimen dimensions are reduced down to the micron regime or below \cite{Uchic_2004, dimiduk2005size, uchic2009plasticity}.
Thus, from a technical point of view it seems desirable to extend the continuum dislocation models by a stochastic component to account for avalanche dynamics. Such a stochastic crystal plasticity model (SCPM) was proposed by Zaiser \emph{et al.}\ in two dimensions (2D), which extended the continuum models with a random term in the local yield stress of the material \cite{Zaiser2005}. This feature is meant to account for the inherent inhomogeneity of the dislocation microstructure that often appears in the form of distinctive patterns. The model is in fact a cellular automaton (CA) representation of the plastic strain field evolution. The elementary event is the local slip of a cell (achieved in practice by the motion of nearby dislocations), which induces a long-range internal stress redistribution that may trigger further events. The local yield threshold is updated after each event to account for microstructural rearrangements that take place during plastic slip. The resulting model recovers the stochastic nature of plasticity and yields a power-law distribution for the random steps appearing on the stress-strain curves \cite{zaiser2007slip}. It also proved successful in modeling the quasi-periodic oscillatory behavior observed at slow deformation of micron-scale single crystalline pillars \cite{papanikolaou2012quasi}. Interestingly, very similar mesoscopic stochastic models were introduced earlier with a different aim, namely, to study plasticity in amorphous materials, where the dislocation-mediated deformation mechanism is absent \cite{baret2002extremal, talamali2011avalanches, budrikis2013avalanche, sandfeld2015avalanches, PhysRevLett.116.065501}. In these models the basic assumptions are identical to those of the SCPM: (i) that plastic strain accumulates in local shear transformations that generate long-range internal stress redistribution and (ii) that the material exhibits internal disorder represented by a fluctuating local yield stress.
In fact, the 2D model of Roux \emph{et al.}\cite{baret2002extremal},\ apart from a few minor technical differences, is identical to the SCPM of Zaiser \emph{et al.}\cite{Zaiser2005} The reason for this is, on the one hand, that although dislocation slip is characterized by the quantum of the Burgers vector, a local strain increment is (roughly speaking) the product of the Burgers vector and the total distance traveled by the dislocations, and is, therefore, not restricted to discrete values. On the other hand, the elastic stress induced by a local plastic slip event is independent of the underlying deformation mechanism, and is described by the Eshelby solution of the corresponding eigenstrain problem \cite{Eshelby1957}. In the amorphous model of Roux \emph{et al.}\cite{baret2002extremal}\ and the SCPM of Zaiser \emph{et al.}\cite{Zaiser2005}\ the free parameters of the model are the distribution of the local yield stress and the magnitude (or distribution) of a local slip event. These parameters represent the microstructural features of the actual material and are, of course, expected to differ for amorphous and crystalline materials. In the present paper, we demonstrate how these parameters can be calibrated in the case of crystalline plasticity. At the lower scale we use conventional 2D discrete dislocation dynamics (DDD) models, which have been studied extensively in the literature \cite{miguel2001intermittent, ispanovity2010submicron, tsekenis2011dislocations, ovaska2015quenched, kapetanou2015stress}. Load-controlled quasi-static plastic deformation is simulated, where individual avalanches can be identified \cite{ispanovity2014avalanches}. We find that the stress value corresponding to the first avalanche follows a Weibull distribution, and that the mean stress at the $i$th avalanche follows a weakest-link sequence drawn from the same distribution.
This implies that plastic events are local and the subsequent avalanches are weakly correlated, confirming the main assumption of the SCPM. We also provide an in-depth statistical analysis of the stress-strain curves and show by scaling relations that at not too large strains both DDD and SCPM exhibit a smooth plastic response if the system size tends to infinity. This tendency is characterized by a size exponent that, with an appropriate choice for the local strain increment in the SCPM, coincides for the different models. The SCPM configured in this way thus provides stress-strain curves statistically equivalent to those obtained by DDD. The paper is organized as follows. Dimensionless units used in the paper are introduced in Sec.~\ref{sec:units}, followed by the description of the plasticity models in Sec.~\ref{sec:sim_methods} and the summary of the numerical results in Sec.~\ref{sec:results}. Section \ref{sec:model} presents a plasticity theory that correctly describes the numerical findings in the microplastic regime. The paper concludes with a Discussion and a Summary section. \section{Dimensionless units} \label{sec:units} Infinite dislocation systems, apart from the core region not considered here, are invariant under the following rescaling (see, e.g.,\ Ref.~\onlinecite{zaiser2014scaling}): \begin{equation} \bm r \to \bm r/c\text{, } \gamma \to c\gamma \text{, and } \tau \to c\tau, \end{equation} where $\bm r$, $\gamma$, and $\tau$ denote the spatial coordinate, the plastic shear strain, and the shear stress, respectively, and $c>0$ is an arbitrary constant. This universal feature is a simple result of the $1/r$ type (scale-free) decay of the dislocation stress fields. This property also means that in an infinite dislocation system the only length scale is the average dislocation spacing $\rho^{-1/2}$, where $\rho$ is the total dislocation density, so, naturally, this value is chosen as $c$.
In the case of strain $\gamma$ and stress $\tau$ the Burgers vector $b$, shear modulus $\mu$, and the Poisson ratio $\nu$ are also required to arrive at dimensionless units denoted by $(\cdot)'$: \begin{equation} \bm r' = \rho^{1/2} \bm r\text{, } \gamma' = \gamma/(b\rho^{1/2}) \text{, } \tau' = \tau/ \left( \frac{\mu b}{2\pi(1-\nu)}\rho^{1/2} \right). \end{equation} Throughout the paper these dimensionless units will be used, and the distinguishing $(\cdot)'$ symbol will be omitted. \section{Simulation methods} \label{sec:sim_methods} In this paper, for simplicity, two-dimensional (2D) models are applied. All three models introduced below have been used extensively in the literature; therefore, only their main features are summarized. \subsection{Stochastic continuum plasticity model (SCPM)} \label{sec:scpm} The model is based on a crystal plasticity model introduced by Zaiser and Moretti \cite{Zaiser2005} and considers a plane strain problem with a local plastic shear strain field ${\gamma ^{{\text{pl}}}}\left( {\bm r} \right)$ and a local shear stress $\tau^\text{loc} \left( {\bm r} \right)$. In an infinite system one can write the stress at an arbitrary position $\bm r$ as \begin{equation} {\tau ^{{\text{loc}}}}\left( \bm{r} \right) = {\tau ^{{\text{ext}}}} + \left( {{G^E} * {\gamma ^{{\text{pl}}}}} \right)\left( \bm{r} \right), \label{eq:tau_loc} \end{equation} i.e.,\ it consists of two parts: an external load and an internal part generated by the inhomogeneous ${\gamma ^{{\text{pl}}}}\left( {\bm r} \right)$ field via ${G^E}\left( \bm{r} \right)$, the elastic Green's function specified by the corresponding Eshelby inclusion problem.\cite{Eshelby1957} The stress and strain fields are discretized on a square lattice with cell size $d$ (measured in the dimensionless units introduced in Sec.~\ref{sec:units}) and global size $L\cdot d \times L\cdot d$, with the edges parallel to the $x$ and $y$ directions and $L = 8, 16, ..., 8192$.
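As a quick illustration of the nondimensionalization above, the sketch below applies it with roughly copper-like material constants; the numerical values are assumptions of this example, not parameters used in the paper.

```python
import math

# Illustrative nondimensionalization; all material constants below are
# assumed, copper-like example values (not taken from the paper).
MU = 48e9     # shear modulus [Pa] (assumed)
B = 2.56e-10  # Burgers vector magnitude [m] (assumed)
NU = 0.33     # Poisson ratio (assumed)
RHO = 1e12    # total dislocation density [1/m^2] (assumed)

LENGTH_SCALE = RHO ** -0.5                                     # [m]
STRAIN_SCALE = B * RHO ** 0.5                                  # dimensionless
STRESS_SCALE = MU * B / (2 * math.pi * (1 - NU)) * RHO ** 0.5  # [Pa]

def to_dimensionless(r, gamma, tau):
    """r' = rho^(1/2) r,  gamma' = gamma/(b rho^(1/2)),  tau' = tau/stress scale."""
    return r / LENGTH_SCALE, gamma / STRAIN_SCALE, tau / STRESS_SCALE
```

With these example values the stress scale comes out at a few MPa, so dimensionless stresses of order unity correspond to MPa-range resolved shear stresses.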
The discretized Green's function $G^E_{ij}$ is proportional to the stress field of a local slip at the origin ($\gamma^\text{pl}_{ij} = \delta_{ij} \Delta \gamma^\text{pl}$) and is, therefore, calculated as the stress field of four edge dislocations with Burgers vectors $b{{\bm{e}}_x}$, $b{{\bm{e}}_y}$, $-b{{\bm{e}}_x}$ and $-b{{\bm{e}}_y}$ at the right, top, left and bottom sides of the cell, respectively. This corresponds to a local plastic shear of $\Delta {\gamma ^{{\text{pl}}}} = 2/d$. The stress values are evaluated at the center-points of the cells and, for example, at the origin it gives $G_{0,0}^E\Delta {\gamma ^{{\text{pl}}}} = - 4 \Delta \gamma^\text{pl} = - 8/d$. For the rest of the cells the numerical values of $G_{ij}^E$ can be seen in the units of $\left| {G_{0,0}^E} \right|$ in Fig.~\ref{fig:stress_kernel}. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=1, angle=0]{Figures/SCPM/periodic_dipole_stress_field128x128symm_bin_dat_tilized_multiplot2.pdf} \caption{\label{fig:stress_kernel} The center part of the stress field of an elementary slip event $\Delta {\gamma ^{{\text{pl}}}}$ for the case $L=128$ in the units of $\left| {G_{0,0}^E} \right|\Delta {\gamma ^{{\text{pl}}}}$. In the upper right corner a magnification of the cells $\left[ {0,2} \right] \times \left[ {0,2} \right]$ is shown. Note the 4-fold symmetry. } \end{center} \end{figure} The internal structural disorder is taken into account via the fluctuating local threshold value $\tau^\text{th}$. This means that if for a given cell \begin{equation} \label{eq:flow_rule} {\tau^r}\left( \bm{r},t \right): = {\tau ^{{\text{th}}}}\left( \bm{r},t \right) - \left| {\tau ^\text{loc}\left( {\bm{r},t} \right)} \right| \leqslant 0 \end{equation} holds, then it is active, that is, it yields; otherwise it is in equilibrium. We assume that cells are large enough to neglect the correlation between the threshold values of neighboring cells, making them independent random variables.
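The kernel construction described above can be sketched as follows. The snippet superposes the dimensionless edge-dislocation shear stress field $x(x^2-y^2)/(x^2+y^2)^2$ for the four dislocations on the cell faces. It is a simplified, non-periodic version (the paper uses periodic images), and the rotated form used for the $y$-oriented Burgers vectors is an assumption of this sketch, chosen to reproduce the quoted center value $G^E_{0,0}\Delta\gamma^{\text{pl}} = -8/d$.

```python
import numpy as np

def tau_edge_x(x, y):
    """Dimensionless shear stress of an edge dislocation with Burgers +b e_x."""
    r2 = x * x + y * y
    return x * (x * x - y * y) / r2 ** 2

def tau_edge_y(x, y):
    """Rotated field assumed for Burgers +b e_y (see lead-in)."""
    r2 = x * x + y * y
    return y * (y * y - x * x) / r2 ** 2

def slip_stress_field(L, d=1.0):
    """Stress of one slip event, G^E_ij * dgamma^pl, at the L x L cell centers.

    Four dislocations sit on the faces of the central cell:
    +b e_x at (d/2, 0), -b e_x at (-d/2, 0), +b e_y at (0, d/2),
    -b e_y at (0, -d/2); their fields are superposed.
    """
    idx = np.arange(L) - L // 2
    x, y = np.meshgrid(idx * d, idx * d, indexing="ij")
    return (tau_edge_x(x - d / 2, y) - tau_edge_x(x + d / 2, y)
            + tau_edge_y(x, y - d / 2) - tau_edge_y(x, y + d / 2))
```

Because the dislocations sit on cell faces and the field is sampled at cell centers, no evaluation point ever coincides with a singularity.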
The values are chosen from a Weibull distribution with shape parameter $\nu = 1$, $1.4$, and $2$ and scale parameter $\tau_w$. At the beginning of a simulation the initial local threshold values are assigned and the strain (and, thus, the local stress, too) is set to zero everywhere. Then a stress-controlled loading procedure is implemented as follows. The external stress is increased until Eq.~(\ref{eq:flow_rule}) is first fulfilled in a single cell, which then becomes active. At this cell the local strain is increased by $\Delta {\gamma ^{{\text{pl}}}}$ and a new local threshold value is assigned from the threshold distribution. The internal stress is recalculated according to Eq.~(\ref{eq:tau_loc}) and the newly activated cells are determined using Eq.~(\ref{eq:flow_rule}). As long as ${\tau^r} \leqslant 0$ holds for at least one cell the system is in an avalanche, and the local strain is increased by $\Delta \gamma^\text{pl}$ in the cell where $\tau^r$ is the smallest (extremal dynamics). When ${\tau^r} > 0$ holds for all the cells the avalanche ceases and the external stress is further increased by the smallest ${\tau^r}$ to trigger the next avalanche. At every state of the system the total strain is the spatial average of the local strain: $\gamma := \langle \gamma^\text{pl} \rangle$. \subsection{Discrete dislocation dynamics} \subsubsection{Continuous representation (TCDDD)} \label{sec:taddd} The model called time-continuous discrete dislocation dynamics (TCDDD) considered here consists of $N$ straight parallel edge dislocations, all of which lie in the same slip system. The slip direction was chosen to be parallel to the $x$ side of the square-shaped simulation area, so the Burgers vectors of the dislocations, assuming that their magnitude is $b$, may point in two directions described by their \emph{sign} $s$: $\bm b_i = s_i(b, 0)$, where $i=1,\dots,N$.
Since there is only one slip direction present, the interaction of the dislocations that influences glide can be described in terms of the shear stress field $\tau_{\mathrm{ind}}$ induced by each dislocation. Its form in the dimensionless units introduced above is \begin{equation} \label{eq:tau-ind} \tau_{\mathrm{ind}}(\bm r)= x(x^2-y^2) / (x^2+y^2)^{2} = r^{-1}\cos\varphi\cos 2 \varphi, \end{equation} where $\bm r=(x,y)$ is the relative displacement from the dislocation and $(r, \varphi)$ are the corresponding polar coordinates. To model an infinite crystal, periodic boundary conditions were applied and the periodic form of Eq.~(\ref{eq:tau-ind}) was used (for details see, e.g.,~Ref.~\onlinecite{Bako2006}). This model aims to describe the easy slip regime, where dislocation glide is dominant; therefore, climb and cross-slip are neglected. The system is driven by a homogeneous external shear stress field $\tau_{\mathrm{ext}}$, so the equation of motion of the $i$th dislocation is: \begin{equation} \dot{x}_i =s_i \Bigg[\sum_{j=1; j\neq i}^N s_j\tau_{\mathrm{ind}}(\bm r_i - \bm r_j) +\tau_{\mathrm{ext}}\Bigg], \quad \dot{y}_i =0, \label{eq:eqn_of_motion} \end{equation} where $\bm r_i = (x_i, y_i)$ denotes its position, and the dislocation mobility was absorbed into the time scale. Here it is assumed that due to the strong phonon drag the motion is overdamped and, thus, inertial terms can be neglected. The simulations were started from a random arrangement of an equal number of positive and negative sign dislocations. First, Eq.~(\ref{eq:eqn_of_motion}) was solved at $\tau_\text{ext}=0$ until the system reached equilibrium. Then a quasistatic load-controlled procedure was applied, i.e., the stress was increased at a fixed rate between avalanches, and was kept constant during the active periods (for details see Ref.~\onlinecite{Bako2006}). The plastic strain at time $t$ is obtained using $\gamma = \sum_{i=1}^N s_i (x_i(t) - x_i(0))$.
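A minimal explicit-Euler sketch of the overdamped dynamics of Eq.~(\ref{eq:eqn_of_motion}) may look as follows; non-periodic interactions are used for brevity (the actual simulations employ the periodic form of Eq.~(\ref{eq:tau-ind})), and the adaptive time-step control of a real TCDDD code is omitted.

```python
import numpy as np

def tau_ind(dx, dy):
    """Dimensionless shear stress of Eq. (tau-ind); the self-term is set to zero."""
    r2 = dx * dx + dy * dy
    with np.errstate(divide="ignore", invalid="ignore"):
        t = dx * (dx * dx - dy * dy) / r2 ** 2
    return np.where(r2 == 0.0, 0.0, t)

def ddd_step(x, y, s, tau_ext, dt):
    """One explicit Euler update of x_i; y_i and signs s_i stay fixed (glide only)."""
    dx = x[:, None] - x[None, :]          # pairwise x_i - x_j
    dy = y[:, None] - y[None, :]
    tau = (s[None, :] * tau_ind(dx, dy)).sum(axis=1)  # stress from all j != i
    return x + dt * s * (tau + tau_ext)   # overdamped: velocity = force
```

A basic sanity check is that two opposite-sign dislocations on the same glide plane attract, so a single step moves them toward each other.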
In the dimensionless units introduced above the linear system size is related only to the number of dislocations as $L=N^{1/2}$. The simulations were repeated for different system sizes $L=8, 11.31, 16, 22.63, 32$ on a large ensemble of statistically equivalent realizations in each case (consisting of $3000, 2000, 800, 300$, and $180$ individual runs, respectively). Very narrow dislocation dipoles were annihilated since they practically do not affect the dynamics but, for numerical reasons, slow down the simulations considerably. \subsubsection{Cellular automaton representation (CADDD)} \label{sec:caddd} The cellular automaton discrete dislocation dynamics (CADDD) is very similar to the continuous method introduced above except for two important differences: \begin{enumerate} \item The space is discretized, meaning that dislocations move on a regular equidistant grid, and only one dislocation may be present in a cell at the same time. In the simulations performed the cell size $\delta$ was 128 times smaller than the average dislocation spacing, meaning that only every $128\times128$th cell was populated. \item The time is also discretized, i.e.,\ the dynamics is defined by a rule that controls how to move dislocations from one cell to a neighbor cell. Here we use extremal dynamics (ED), meaning that the stress $\tau$ induced by the other dislocations [i.e., the RHS of Eq.~(\ref{eq:eqn_of_motion})] is evaluated at the left (right) border of the cell containing the dislocation. If the force $s_i \tau >0$ $(<0)$ then a step in the right (left) direction is energetically favorable and the change in the stored elastic energy $\Delta E$ is proportional to $-|\tau| \delta$. In every timestep the single dislocation with the largest energy drop is moved, then the interaction stresses are recomputed. If there is no dislocation eligible to move (that is, $\Delta E > 0$ for each) then the external stress is increased until a dislocation starts to move.
If two dislocations of opposite sign occupy the same cell, they are annihilated. \end{enumerate} As seen, the driving is similar to the quasistatic load-control of the TCDDD. The simulations are also started from a random dislocation configuration, and different system sizes are considered. It is noted that, due to the absence of a real time scale in this model, there is no straightforward way to define a strain burst. The advantage of this model lies in its faster computational speed compared to TCDDD, which allows much larger systems to be studied. In addition, it makes it possible to test the role of the chosen dynamics (overdamped or extremal) in the results obtained. \section{Numerical Results} \label{sec:results} In this section simulation results are provided using the plasticity models introduced in Sec.~\ref{sec:sim_methods}. In every model, the obtained stress-strain curves are step-like and different for all realizations. In the following, the statistical properties of the stress-strain curves will be examined, followed by the analysis of the stress and strain sequences corresponding to individual strain bursts. The latter (denoted by $\tau^{(i)}$ and $\gamma^{(i)}$, respectively) are defined by the sketch of Fig.~\ref{fig:stress_strain_sketch}. In the rest of this paper, for simplicity, the external stress $\tau^\text{ext}$ will be denoted by $\tau$. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5, angle=0]{Figures/stress_strain_sketch.pdf} \caption{\label{fig:stress_strain_sketch} Sketch of a stress-strain curve obtained by the models of Sec.~\ref{sec:sim_methods}. The curve can be fully characterized by the stress and strain sequences $\tau^{(i)}$ and $\gamma^{(i)}$, respectively.
} \end{center} \end{figure} \subsection{The SCPM model} As noted above, there are three scalar parameters of the SCPM: (i) the shape parameter $\nu$ of the Weibull distribution describing the local yield stress distribution, (ii) $\Delta \gamma^\text{pl}$ determining the local increment of the strain during the activation of a cell, and (iii) $\tau_w$ characterizing the average strength (threshold stress) of the cells. In the following we present simulations with different parameters; if not stated otherwise, $\nu = 1.4$, $\Delta \gamma^\text{pl} = 1/4$ and $\tau_w = 1$ are used. \subsubsection{Average stress-strain curve} \label{sec:avg_stress_strain_scpm} Figure \ref{fig:avg_stress_strain_SCPM}(a) plots the average stress-strain curves obtained by the SCPM simulations at different system sizes and for different values of the exponent $\nu$. The curves were obtained by the following method: for a given strain value $\gamma$ the assigned stress value $\langle \tau \rangle$ is the average of the stress values measured in individual simulations at $\gamma$. This procedure was repeated for different $\gamma$ values to obtain the whole average stress-strain curve. It is seen that the microplastic regime is described by a power-law which follows \begin{equation} \langle \tau \rangle(\gamma) = \tau_1 \gamma^\alpha \label{eq:avg_stress_strain} \end{equation} for several decades, with $\tau_1$ being a constant prefactor and the exponent $\alpha$ being dependent on the value of the Weibull parameter $\nu$. The fitted values of $\alpha$ are summarized in Table \ref{tab:example}. It is interesting to note that the curves do not exhibit a clear sign of size effects: they practically overlap for every $\nu$ and $L$; therefore, the $L$-dependence was neglected in Eq.~(\ref{eq:avg_stress_strain}). For plastic strains $\gamma \gtrsim 1$ the stress saturates and the system enters the continuously flowing state.
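The averaging method described above can be sketched as follows. One convention, assumed here for definiteness, is that the stress assigned to a prescribed strain on a staircase curve is the stress of the first event reaching that strain.

```python
import bisect

# A realization is stored as its event sequences (gamma_i, tau_i); the
# averaging convention below is an assumption of this sketch.

def stress_at(gammas, taus, gamma):
    """Stress of one staircase curve at strain `gamma` (first event reaching it)."""
    k = bisect.bisect_left(gammas, gamma)
    return taus[min(k, len(taus) - 1)]

def average_curve(realizations, gamma_grid):
    """<tau>(gamma): ensemble average of the stress on a common strain grid."""
    n = len(realizations)
    return [sum(stress_at(g, t, x) for g, t in realizations) / n
            for x in gamma_grid]
```

Repeating the evaluation over a grid of strain values reproduces the whole average stress-strain curve of the procedure quoted above.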
\begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{Figures/SCPM/stress_strain_curve_scaled.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(a)}} \end{picture} \includegraphics[scale=0.5]{Figures/SCPM/stress_strain_scaling_SCPM.pdf} \begin{picture}(0,0) \put(-240,144){\textsf{(b)}} \end{picture} \caption{\label{fig:avg_stress_strain_SCPM} The average stress-plastic strain curves obtained by the SCPM with different choices of the parameters. In every case for small strains they follow a power-law, then saturate. (a) The effect of the shape parameter $\nu$. The power-law region is consistent with the $\langle \tau \rangle = \gamma^{1/(\nu \zeta)}$ relation predicted by Eq.~(\ref{eq:tau}) with $\zeta=1$. (b) The effect of $\tau_w$ and $\Delta \gamma^\text{pl}$ at $\nu=1.4$ and $L=128$. According to the scaling collapse of the inset the average stress-strain curves obey $\langle \tau \rangle = \tau_w^{1.32} (\Delta \gamma^\text{pl})^{-0.32} f(\gamma \tau_w^{-0.5} (\Delta \gamma^\text{pl})^{-0.5})$ with a suitable function $f$.} \end{center} \end{figure} Figure \ref{fig:avg_stress_strain_SCPM}(b) shows the role of the two other parameters $\Delta \gamma^\text{pl}$ and $\tau_w$. As seen, the average stress-strain curves do depend on the specific choice, but, according to the inset, a scaling collapse can be obtained if both stresses and strains are rescaled using specific powers of $\Delta \gamma^\text{pl}$ and $\tau_w$. This means that the shape of the curves is only affected by the parameter $\nu$, whereas $\Delta \gamma^\text{pl}$ and $\tau_w$ calibrate the scale of stress and strain.
\subsubsection{Fluctuation in the plastic response} \begin{figure*}[!htbp] \begin{center} \hspace*{-0.5cm} \includegraphics[scale=0.425]{{Figures/SCPM/scaled_cumulative_external_stress_probability_at_0-007813_def}.pdf} \begin{picture}(0,0) \put(-174,120){\textsf{(a)}} \end{picture} \hspace*{-0.4cm} \includegraphics[scale=0.425]{{Figures/SCPM/scaled_cumulative_external_stress_probability_at_0-01563_def}.pdf} \begin{picture}(0,0) \put(-174,120){\textsf{(b)}} \end{picture} \hspace*{-0.4cm} \includegraphics[scale=0.425]{{Figures/SCPM/scaled_cumulative_external_stress_probability_at_0-03125_def}.pdf} \begin{picture}(0,0) \put(-174,120){\textsf{(c)}} \end{picture} \hspace*{-0.5cm} \caption{\label{fig:stress_distrib_scpm} Cumulative stress distribution at different deformation levels for the SCPM case. As the system size increases the distributions tend to a step function, that is, stress fluctuations disappear for large samples. By multiplying the external stress by a power of the system size the curves can be fitted with a normal distribution (dashed line), as seen in the insets. (a) $\gamma = 0.008$, (b) $\gamma = 0.016$, (c) $\gamma = 0.032$.} \end{center} \end{figure*} Although the average stress-strain curve discussed above is smooth, the stress-strain curves corresponding to individual realizations are staircase-like and differ from each other. Here we investigate the cumulative distribution of stresses $\Phi_\gamma(\tau)$ measured at a given strain $\gamma$ for different realizations. The wider this distribution, the more unpredictable the individual realizations are. Macroscopic bodies are characterized by a well-defined and smooth stress-strain curve, so for large systems one expects this distribution to shrink. Indeed, as seen in Fig.~\ref{fig:stress_distrib_scpm} the measured $\Phi_\gamma(\tau)$ curves tend to a step function as the system size increases at every strain $\gamma$.
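The empirical cumulative distribution $\Phi_\gamma(\tau)$ can be estimated from an ensemble simply by sorting the stresses measured at the fixed strain $\gamma$ in the individual realizations, e.g.:

```python
import numpy as np

def empirical_cdf(taus):
    """Sorted stress values and the corresponding empirical CDF estimates.

    `taus` holds one stress per realization, all measured at the same strain.
    """
    t = np.sort(np.asarray(taus, dtype=float))
    return t, np.arange(1, t.size + 1) / t.size
```

Plotting the returned pair for ensembles of increasing system size would show the curves steepening toward a step function, as described above.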
Since there was only a negligible size effect in the average stress-strain curve, the stress-strain response of an infinite system must be equal to $\langle \tau \rangle (\gamma)$; therefore, the limiting step function must be at $\langle \tau \rangle (\gamma)$. Interestingly, the $\Phi_\gamma(\tau)$ curves seem to intersect with each other at a single point which, therefore, must correspond to $\langle \tau \rangle (\gamma)$. According to the inset of Fig.~\ref{fig:stress_distrib_scpm} the curves can be collapsed by rescaling the stresses by the system size around $\langle \tau \rangle (\gamma)$. In addition, the curves can be fitted very well with a normal distribution, that is, \begin{equation} \Phi_\gamma(\tau) = \frac12 \left[ 1 + \text{erf} \left( \frac{\tau-\langle \tau \rangle (\gamma)}{c L^{-\beta}} \right) \right], \label{eq:stress_fluct} \end{equation} where $\langle \tau \rangle (\gamma)$ is the average stress-strain curve of Eq.~(\ref{eq:avg_stress_strain}), $\beta = 1 \pm 0.05$ is the exponent characterizing the system size dependence of the stress fluctuations, and $c$ is an appropriate constant. We note that a shifted Weibull distribution with shape parameter $\sim$3.5 was used previously to fit the same distribution for 2D and 3D DDD as well as for micro-pillar compression data.\cite{Ispanovity2013} Since a Weibull with this shape parameter is practically indistinguishable from a normal distribution, we use the latter here, as it contains only two fitting parameters. It is also noted that the theory to be proposed in Sec.~\ref{sec:model} predicts the normal distribution of Eq.~(\ref{eq:stress_fluct}). \subsubsection{The stress sequence} The following two subsections study the statistics of the stress and strain sequences $\tau^{(i)}$ and $\gamma^{(i)}$, because they will play a central role in the simple plasticity model described in Sec.~\ref{sec:model}.
First, the distribution $\Phi^{(1)}$ of the stress $\tau^{(1)}$ at which the first event takes place is considered. In the SCPM model the plastic strain field is initially zero; therefore, the local stress is everywhere equal to the applied stress until the occurrence of the first event. Consequently, the distribution $\Phi^{(1)}(\tau^{(1)})$ of the stress where the first plastic event sets on must be described by a Weibull distribution with shape parameter $\nu$ and scale parameter proportional to $L^{-2/\nu}$ (see Sec.~\ref{sec:stress_seq} for details). Indeed, according to Fig.~\ref{fig:tau_i_scpm}(a), $\Phi^{(1)}$ is perfectly fitted by the corresponding Weibull distribution, and the distributions overlap if stress values are rescaled by $L^{2/\nu}$. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{Figures/SCPM/plotting_cumulative_prob_at_ext_stress_scaled.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(a)}} \end{picture} \includegraphics[scale=0.5]{Figures/SCPM/{scaled_mean_stress_at_ith_avalanche_1_1.4_2}.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(b)}} \end{picture} \includegraphics[scale=0.5]{Figures/SCPM/{deviation_of_stress_at_ith_avalanche_1_1.4_2}.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(c)}} \end{picture} \caption{\label{fig:tau_i_scpm} (a) The cumulative distribution $\Phi^{(1)}$ of the first activation stress $\tau^{(1)}$ for SCPM simulations with $\nu=1, 1.4, 2$ and different system sizes. The scaling collapse for different system sizes is obtained by rescaling the stress by $L^{2/\nu}$. The corresponding Weibull distributions of Eq.~(\ref{eqn:weibull}) are also plotted (solid, dashed, and dotted lines). (b) The average stress sequence $\langle \tau^{(i)} \rangle$. The curves are proportional to $i^{1/\nu}$ and the ones corresponding to different system sizes collapse if stresses are rescaled by $L^{2/\nu}$, in accordance with Eq.~(\ref{eq:tau_average_scpm}). (c) STD of the stress sequence $\delta \tau^{(i)}$.
The curves are consistent with Eq.~(\ref{eq:tau_std_scpm}). } \end{center} \end{figure} Figures \ref{fig:tau_i_scpm}(b) and \ref{fig:tau_i_scpm}(c) plot the average stress sequence $\langle \tau^{(i)} \rangle$ and its standard deviation (STD) $\delta \tau^{(i)}$, respectively. The curves corresponding to a given $\nu$ parameter and to small $i$ values (that is, when $\tau^{(i)} \lesssim 0.1$) overlap if the stresses are rescaled by $L^{2/\nu}$ and are well described by the power laws \begin{align} \langle \tau^{(i)} \rangle = \tau_0 \left( \frac{i}{L^\eta} \right)^{1/\nu}, \label{eq:tau_average_scpm}\\ \delta \tau^{(i)} = \frac{\tau_0}{i^{1/2}} \left( \frac{i}{L^\eta} \right)^{1/\nu}, \label{eq:tau_std_scpm} \end{align} with $\eta=2.0\pm 0.05$ obtained by visual inspection. \subsubsection{Strain sequence} The average and the STD of the strain sequence $\gamma^{(i)}$ are shown in Fig.~\ref{fig:gamma_i_scpm}. It is clear that both $\langle \gamma^{(i)} \rangle$ and $\delta \gamma^{(i)}$ follow a power law for small $i$ values: \begin{align} \langle \gamma^{(i)} \rangle = s_0 \frac{i^\zeta}{L^\xi}, \label{eq:strain_sequence_average} \\ \delta \gamma^{(i)} = s_1 \frac{i^{\zeta-1/2}}{L^\xi}, \label{eq:strain_sequence_std} \end{align} with $\zeta=1.0\pm 0.05$ and $\xi = 2.0 \pm 0.1$. It is important to note that neither of the exponents $\zeta$ and $\xi$ is sensitive to the choice of the exponent $\nu$, but the actual level of $\gamma^{(i)}$ and its scatter (and, thus, the values $s_0$ and $s_1$) are significantly larger for smaller $\nu$ values, which means that individual avalanches become much larger in this case.
\begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{Figures/SCPM/{plotting_deformation_at_ith_aval_1_1_4_2}.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(a)}} \end{picture} \includegraphics[scale=0.5]{Figures/SCPM/{plotting_deformation_deviation_at_ith_aval_1_1_4_2}.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(b)}} \end{picture} \caption{\label{fig:gamma_i_scpm} The average [panel (a)] and STD [panel (b)] of the strain $\gamma^{(i)}$ measured at the $i$th strain burst for different system sizes and $\nu$ values. The curves follow Eqs.~(\ref{eq:strain_sequence_average}) and (\ref{eq:strain_sequence_std}) with $\zeta=1$ and $\xi=2$. } \end{center} \end{figure} \subsection{DDD models} \subsubsection{Average stress-strain curve} The average plastic response of the specimens was calculated in the same manner as for the SCPM, described in Sec.~\ref{sec:avg_stress_strain_scpm}. According to Fig.~\ref{fig:avg_stress_strain_DDD} the average stress-strain curves show features similar to those obtained by the SCPM: (i) the microplastic regime is characterized by a power law with an exponent $\alpha = 0.8 \pm 0.05$ [see Eq.~(\ref{eq:avg_stress_strain})] and only weak signs of size effects are seen, (ii) this regime breaks down at $\tau \approx 0.1$, and (iii) the stress-strain curves saturate for large ($\gamma \gtrsim 1$) strains. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{Figures/TCDDD/stress-average-at-gamma_v2.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(a)}} \end{picture} \includegraphics[scale=0.5]{Figures/CADDD/avg_stress_strain_ca.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(b)}} \end{picture} \caption{\label{fig:avg_stress_strain_DDD} The average stress-strain curves of the DDD simulations. They follow a power law until $\gamma \approx 0.1$, then saturate.
The panels correspond to (a) TCDDD, (b) CADDD simulations.} \end{center} \end{figure} The only significant difference in the average stress-strain curves between the different DDD simulations appears at large ($\gamma \gtrsim 1$) strains. Here the spatial discretization of the CA model leads to an increasing stress for large systems. In this part of the stress-strain curve mechanisms not included in these simple models (like dislocation creation or cross-slip) may also play an important role, so in this paper we focus on small to medium strains, where the two models yield quite similar behavior. \subsubsection{Fluctuation in the plastic response} The cumulative distribution $\Phi_\gamma$ of the stresses measured for different realizations at a given strain $\gamma$ is plotted in Fig.~\ref{fig:stress_distrib_ddd}. As in the case of the SCPM, (i) the curves tend to a step function for large systems, (ii) the curves for different system sizes intersect at a single point, (iii) a scaling collapse can be obtained by rescaling the stresses with a power of the system size, and (iv) the curves follow a normal distribution. This means that Eq.~(\ref{eq:stress_fluct}) remains valid, now with $\beta = 0.8 \pm 0.05$, somewhat smaller than the value obtained for the SCPM.
\begin{figure*}[!htbp] \begin{center} \hspace*{-0.4cm} \includegraphics[scale=0.425]{{Figures/TCDDD/stress-distrib-0.05}.pdf} \begin{picture}(0,0) \put(-174,120){\textsf{(a)}} \end{picture} \hspace*{-0.4cm} \includegraphics[scale=0.425]{{Figures/TCDDD/stress-distrib-0.1}.pdf} \begin{picture}(0,0) \put(-174,120){\textsf{(b)}} \end{picture} \hspace*{-0.4cm} \includegraphics[scale=0.425]{{Figures/TCDDD/stress-distrib-0.2}.pdf} \begin{picture}(0,0) \put(-174,120){\textsf{(c)}} \end{picture} \hspace*{-0.4cm} \includegraphics[scale=0.425]{{Figures/CADDD/stress_distrib_0.05}.pdf} \begin{picture}(0,0) \put(-174,120){\textsf{(d)}} \end{picture} \hspace*{-0.4cm} \includegraphics[scale=0.425]{{Figures/CADDD/stress_distrib_0.1}.pdf} \begin{picture}(0,0) \put(-174,120){\textsf{(e)}} \end{picture} \hspace*{-0.4cm} \includegraphics[scale=0.425]{{Figures/CADDD/stress_distrib_0.2}.pdf} \begin{picture}(0,0) \put(-174,120){\textsf{(f)}} \end{picture} \caption{\label{fig:stress_distrib_ddd} Cumulative stress distributions $\Phi_\gamma$ at different deformation levels $\gamma$ for the DDD cases. As in Fig.~\ref{fig:stress_distrib_scpm}, a scaling collapse can be obtained by multiplying the external stress by a power of the system size. The collapsed curves can then be fit by an appropriate normal distribution (dashed lines). Panels (a)-(c) and (d)-(f) correspond to TCDDD and CADDD, respectively. (a),(d): $\gamma = 0.05$, (b),(e): $\gamma=0.1$, (c),(f): $\gamma=0.2$.} \end{center} \end{figure*} \subsubsection{The stress sequence} The following two subsections investigate the statistics of the stress and strain sequences introduced above. As noted in Sec.~\ref{sec:caddd}, these sequences cannot be unambiguously defined for CADDD; we therefore restrict ourselves to the TCDDD simulations. First, as for the SCPM, the cumulative distribution $\Phi^{(1)}$ of $\tau^{(1)}$, i.e.,\ the stress at which the first plastic event sets on, is calculated.
According to Fig.~\ref{fig:tau_i_tcddd}(a), $\Phi^{(1)}$ can be fit perfectly by a Weibull distribution with shape parameter $\nu$, and the curves for different system sizes collapse when the stress is rescaled by $L^{\eta/\nu}$, with parameters \begin{align} \nu = 1.4 \pm 0.05,\\ \eta = 1.6 \pm 0.1. \end{align} Similarly to the SCPM case, the average $\langle \tau^{(i)} \rangle$ and STD $\delta \tau^{(i)}$ follow Eqs.~(\ref{eq:tau_average_scpm}) and (\ref{eq:tau_std_scpm}), respectively, with the same exponents $\nu$ and $\eta$ obtained from $\Phi^{(1)}$ above. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5, angle=0]{Figures/TCDDD/first-aval-stress_v3.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(a)}} \end{picture} \includegraphics[scale=0.5, angle=0]{Figures/TCDDD/i-th-aval-mean-stress.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(b)}} \end{picture} \includegraphics[scale=0.5, angle=0]{Figures/TCDDD/i-th-aval-deviation-stress.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(c)}} \end{picture} \caption{\label{fig:tau_i_tcddd} Statistics of the stress sequence $\tau^{(i)}$ for TCDDD simulations of different system sizes.\\ (a) The cumulative distribution $\Phi^{(1)}$ of the first activation stress $\tau^{(1)}$. Inset: data collapse is obtained by plotting $\Phi^{(1)}$ as a function of $\tau L^{1.15}$. The curves can be fitted with a Weibull distribution with shape parameter $\nu = 1.4 \pm 0.05$.\\ (b),(c) The average and STD of the external stress $\tau^{(i)}$ at the \textit{i}th avalanche for different system sizes.
The data are consistent with Eqs.~(\ref{eq:tau_average_scpm}) and (\ref{eq:tau_std_scpm}) (solid lines) if $i\gtrsim3$ and $\langle \tau^{(i)}\rangle \lesssim 0.2$, with $\nu = 1.4 \pm 0.05$ and $\eta = 1.6 \pm 0.1$.} \end{center} \end{figure} \subsubsection{Strain sequence} Figure \ref{fig:gamma_i_tcddd} plots the average ($\langle \gamma^{(i)} \rangle$) and STD ($\delta \gamma^{(i)}$) of the strain sequence obtained for different system sizes. The curves are consistent with Eqs.~(\ref{eq:strain_sequence_average}) and (\ref{eq:strain_sequence_std}) found for the SCPM, with exponents $\zeta = 0.9 \pm 0.05$ and $\xi=1.5 \pm 0.1$. \begin{figure}[!hbp] \begin{center} \includegraphics[scale=0.5, angle=0]{Figures/TCDDD/i-th_aval_mean_deformation.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(a)}} \end{picture} \includegraphics[scale=0.5, angle=0]{Figures/TCDDD/i-th_aval_deviation_gamma.pdf} \begin{picture}(0,0) \put(-205,144){\textsf{(b)}} \end{picture} \caption{\label{fig:gamma_i_tcddd} Statistics of the strain sequence $\gamma^{(i)}$ for TCDDD simulations of different system sizes. The average [panel (a)] and STD [panel (b)] of the plastic strain $\gamma^{(i)}$ at the \textit{i}th avalanche for different system sizes. The data are consistent with Eqs.~(\ref{eq:strain_sequence_average}) and (\ref{eq:strain_sequence_std}) (solid lines) if $i\gtrsim3$ and $\langle \gamma^{(i)}\rangle \lesssim 0.2$, with $\zeta = 0.9 \pm 0.05$ and $\xi = 1.5 \pm 0.1$.} \end{center} \end{figure} An overview of the introduced exponents and their measured values is given in Table \ref{tab:example}.
\renewcommand{\arraystretch}{1.4} \begin{table*} \caption{\label{tab:example}Summary of exponents used in the paper.} \begin{ruledtabular} \begin{tabular}{L{1.5cm} L{6.5cm} C{2.5cm} C{2.2cm} C{2.2cm} C{2.2cm}} Exponent & Description & Value predicted by theory & Value for SCPM & Value for TCDDD & Value for CADDD\\ \hline $\nu$ & Characterizes threshold stress distribution, see Eq.~(\ref{eqn:link_distrib}) & - & $1.0$, $1.4$, and $2.0$\footnotemark[1] & $1.4 \pm 0.05$ & - \\ $\eta$ & Describes the relation between the system size $L$ and the total number of links $M$, see Eq.~(\ref{eq:exponent_eta_def}) & - & $2.0 \pm 0.05$ & $1.6 \pm 0.1$ & - \\ $\zeta$ & Characterizes the strain sequence, see Eq.~(\ref{eq:gamma_i}) & - & $1.0 \pm 0.05$ & $0.9 \pm 0.05$ & - \\ $\xi$ & Characterizes the system size dependence of the average avalanche size, see Eq.~(\ref{eq:av_avalanche_size}) & $\eta \zeta$ & $2.0 \pm 0.1$ & $1.5 \pm 0.1$ & - \\ $\alpha$ & Exponent of the power-law characterizing the microplastic regime of the stress-strain curves, see Eq.~(\ref{eq:avg_stress_strain}) & $(\nu \zeta)^{-1}$ & $1.0\pm 0.05$, $0.7\pm 0.05$, and $0.5 \pm 0.05$ & $0.8\pm 0.05$ & $0.8 \pm 0.05$\\ $\beta$ & Exponent characterizing the system size dependence of the stress fluctuations, see Eq.~(\ref{eq:stress_fluct}) & $\eta/2$ & $1.0 \pm 0.05$ & $0.8 \pm 0.05$ & $0.8 \pm 0.05$ \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{This exponent is an input parameter for the SCPM model} \end{table*} \renewcommand{\arraystretch}{1.0} \section{Plasticity model based on extreme statistics} \label{sec:model} In this section a simple model for stochastic plasticity is introduced for stress-controlled loading. In this case the stress-strain curves are step-like and can be characterized by the stress and strain values $\tau^{(i)}$ and $\gamma^{(i)}$ corresponding to each step (see the sketch in Fig.~\ref{fig:stress_strain_sketch}). 
In the following, we propose assumptions for the stress and strain sequences and then combine them to obtain statistical predictions for the stress-strain curves. As we shall see, the proposed scaling forms will be identical to those obtained numerically in the previous section, so, for clarity, the same notation will be used for the exponents as before (see Table \ref{tab:example}). \subsection{Stress sequence} \label{sec:stress_seq} Recently Derlet and Maa\ss \ proposed a probabilistic approach to explain size effects observed in crystalline specimens \cite{derlet2015probabilistic}. They assumed, in accordance with the main idea of the SCPM, that plasticity occurs via irreversible structural excitations and that the material inhomogeneities are represented by a critical stress distribution $P(\tau) = \frac{\mathrm d}{\mathrm d \tau} \Phi(\tau)$, with $\Phi(\tau)$ being the cumulative distribution. Since during stress increase the weakest sites are activated first, only the $\tau \to 0$ asymptote of this distribution is important, for which they used the power-law form \begin{equation} \Phi(\tau) \approx \left(\frac{\tau}{\tau_0} \right)^{\nu},\quad\text{if }\tau \to 0 \label{eqn:link_distrib} \end{equation} with $\nu\ge1$. It was also assumed that subsequent events are independent, so the spatial correlations of stress and plastic strain present in the SCPM are completely neglected. In this case, if the sample consists of $M$ sites where plastic events may occur, then in the $M \to \infty$ limit the $i$th stress value $\tau^{(i)}$ follows Weibull order statistics \cite{rinne2008weibull}.
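The weakest-link argument just described can be illustrated with a minimal Monte Carlo sketch (illustrative parameter values, not fitted to the simulations): drawing $M$ independent thresholds from the power-law distribution of Eq.~(\ref{eqn:link_distrib}) and taking their minimum reproduces the predicted Weibull statistics with shape parameter $\nu$ and scale proportional to $M^{-1/\nu}$.

```python
import random
from math import log

random.seed(0)
nu, tau0, M, trials = 1.4, 1.0, 200, 5000

def threshold():
    # Draw a site threshold with CDF (tau/tau0)**nu on [0, tau0] by inversion.
    return tau0 * random.random() ** (1.0 / nu)

# Stress of the first event = weakest of the M site thresholds.
first = sorted(min(threshold() for _ in range(M)) for _ in range(trials))

def empirical_cdf(x):
    return sum(t <= x for t in first) / trials

# Compare with the weakest-link prediction
# Phi1(tau) = 1 - exp(-M * (tau/tau0)**nu) at a few quantiles.
for q in (0.25, 0.5, 0.75):
    tau_q = tau0 * (-log(1.0 - q) / M) ** (1.0 / nu)  # predicted quantile
    assert abs(empirical_cdf(tau_q) - q) < 0.03
```

The scale parameter of the resulting Weibull distribution is $\tau_0 M^{-1/\nu}$, so with $M \propto L^2$ it decreases as $L^{-2/\nu}$, matching the collapse observed for the SCPM data.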
In particular, the first event $\tau^{(1)}$ is Weibull distributed with shape parameter $\nu$: \begin{equation} \Phi^{(1)}(\tau^{(1)}) = 1 -\exp\left(-M \left(\frac{\tau^{(1)}}{\tau_0}\right)^\nu \right), \label{eqn:weibull} \end{equation} the expected value for $\tau^{(i)}$ is \begin{equation} \left \langle \tau^{(i)} \right \rangle = \frac{\tau_0}{M^{1/\nu}} \frac{\Gamma\left(i+\frac{1}{\nu}\right)}{\Gamma(i)} \approx \tau_0 \left( \frac{i}{M} \right)^{\frac{1}{\nu}}, \label{eq:tau_i} \end{equation} and for the standard deviation of $\tau^{(i)}$ one obtains \begin{equation} \begin{split} \delta \tau^{(i)} &= \sqrt{ \left( \frac{\tau_0}{M^{1/\nu}}\right)^2 \left[ \frac{\Gamma\left(i+\frac{2}{\nu}\right)}{\Gamma(i)} - \left( \frac{\Gamma\left(i+\frac{1}{\nu}\right)}{\Gamma(i)} \right)^2 \right] }\\ &\approx \frac{\tau_0}{i^{1/2}} \left( \frac{i}{M} \right)^{\frac{1}{\nu}}. \end{split} \label{eq:delta_tau_i} \end{equation} This means that the relative fluctuation decreases as $\delta \tau^{(i)}/\left \langle \tau^{(i)} \right \rangle \approx i^{-1/2}$, independent of the number of sites $M$. We also note that the error of these approximations is at the percent level already for $i \gtrsim 5$ and $M \gg i$ (and decreases further with growing $i$), and that the distribution of $\tau^{(i)}$ tends to a normal one for large $i$ values. Finally, one has to find the relation between $M$ and the linear system size $L$. It is natural to assume that the activation sites are homogeneously distributed and that their density does not depend on the sample size. This implies $M\propto L^d$, with $d$ the dimension of the system. As we shall see below, this hypothesis must be refined for DDD systems due to anomalous system size scaling.
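The quality of the asymptotic forms in Eqs.~(\ref{eq:tau_i}) and (\ref{eq:delta_tau_i}) is easy to check numerically, since $\Gamma(i+1/\nu)/\Gamma(i) \to i^{1/\nu}$ for large $i$. A short sketch, using $\nu = 1.4$ as an example value:

```python
from math import exp, lgamma

nu = 1.4
a = 1.0 / nu

def gamma_ratio(i):
    # Gamma(i + a) / Gamma(i), computed via log-gamma to avoid overflow.
    return exp(lgamma(i + a) - lgamma(i))

# Relative error of the approximation Gamma(i+a)/Gamma(i) ~ i**a.
errors = {i: abs(gamma_ratio(i) / i**a - 1.0) for i in (2, 5, 20, 100)}

assert errors[2] > errors[5] > errors[20] > errors[100]  # error shrinks with i
assert errors[20] < 0.01 and errors[100] < 0.002
```

For $\nu = 1$ the ratio equals $i$ exactly and the approximation error vanishes.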
Therefore, the exponent $\eta$ is introduced via \begin{equation} \label{eq:exponent_eta_def} M\propto L^\eta, \end{equation} leading to \begin{equation} \Phi^{(1)}(\tau^{(1)}) = 1 -\exp\left(-\left(\frac{\tau^{(1)} L^{\eta/\nu}}{\tau_0}\right)^\nu \right), \end{equation} \begin{equation} \left \langle \tau^{(i)} \right \rangle \approx \tau_0 L^{-\eta/\nu} i^{1/\nu}, \label{eq:tau_i_2} \end{equation} and \begin{equation} \delta \tau^{(i)} \approx \tau_0 L^{-\eta/\nu} i^{1/\nu-1/2}. \label{eq:delta_tau_i_2} \end{equation} \subsection{Strain sequence} \label{sec:strain_seq} According to numerous recent experimental and numerical studies, the plastic strain increments corresponding to the strain burst events exhibit a power-law distribution \cite{miguel2001intermittent, dimiduk2006scale, zaiser2008strain, ispanovity2014avalanches}. However, the scale-free behavior is observed only in a bounded region, since (i) at large strain jumps the distribution is cut off due to the finite system size, and (ii) for very small strain bursts a deviation from the power law is necessary, as otherwise the strain burst distribution could not be normalized. The physical origin of this lower cutoff is that here individual dislocation motion dominates over collective dislocation dynamics. In summary, the strain burst size ($\Delta \gamma$) distribution takes the form \begin{equation} P_\text{sb}(\Delta \gamma) = C \Delta \gamma^{-\tau_a} f(\Delta \gamma/\Delta \gamma_u), \text{ if }\Delta \gamma > \Delta \gamma_l, \end{equation} where $\Delta \gamma_l$ and $\Delta \gamma_u$ denote the lower and upper cutoffs, respectively, $\tau_a$ is the avalanche size exponent, $C$ is a normalization factor, and $f$ is the cutoff function that decays faster than algebraically for large arguments and $f(x) \to 1$ if $x \to 0$.
It follows that for finite system sizes and small applied stresses, due to the cutoffs, the distribution $P_\text{sb}(\Delta \gamma)$ has finite moments, in particular a finite mean and variance. The recent numerical study of 2D DDD systems by Ispánovity \emph{et al.}\ showed that in the microplastic regime $\tau_a \approx 1$, that the upper cutoff $\Delta \gamma_u$ depends only weakly on the applied stress, and that it exhibits an anomalous system size dependence.\cite{ispanovity2014avalanches} Consequently, the mean and the standard deviation of the strain increment can be written as \begin{eqnarray} \langle \Delta \gamma \rangle = \frac{s_0}{L^\xi}, \label{eq:av_avalanche_size}\\ \delta (\Delta \gamma) = \frac{s_1}{L^\xi}, \end{eqnarray} respectively, where $\xi$ is the exponent characterizing the system size dependence of the avalanche sizes, and $s_0$ and $s_1$ are appropriate constants that may depend on the applied stress. Here $\xi=2$ corresponds to normal scaling, where the total plastic slip during an avalanche is independent of the system size, whereas $\xi<2$ indicates anomalous scaling. In order to derive predictions for the strain sequence it is assumed that the sizes of subsequent strain bursts are uncorrelated. Then it follows from the central limit theorem that for $i \gg 1$ and small applied stresses $\gamma^{(i)}$ is distributed normally, and \begin{gather} \langle \gamma^{(i)} \rangle = i^\zeta \langle \Delta \gamma \rangle = i^\zeta \frac{s_0}{L^\xi}, \label{eq:gamma_i}\\ \delta \gamma^{(i)} = i^{\zeta-1/2} \delta (\Delta \gamma) = i^{\zeta-1/2} \frac{s_1}{L^\xi}, \label{eq:delta_gamma_i} \end{gather} with $\zeta = 1$. \subsection{Stress-strain curves} \label{sec:stress_strain_curves} Since the sequences $\tau^{(i)}$ and $\gamma^{(i)}$ give a full description of the stress-strain curve, in the following the expressions for $\tau^{(i)}$ and $\gamma^{(i)}$ derived above are combined to obtain statistical properties of the stress-strain curves.
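The random-sum argument behind the strain sequence can be sketched with a toy Monte Carlo. The burst sizes below are drawn from an exponential distribution, a stand-in (not the truncated power law of the simulations) chosen only because it has a finite mean and variance; the linear growth of the mean and the $\sqrt{i}$ growth of the STD follow for any such distribution.

```python
import random
from statistics import mean, stdev

random.seed(1)
mean_dg = 0.01           # assumed mean single-burst strain <dgamma>
i, trials = 100, 4000    # accumulate i bursts, repeated over many samples

# gamma^(i) = sum of i independent burst sizes (exponential stand-in,
# for which the STD of a single burst equals its mean).
samples = [sum(random.expovariate(1.0 / mean_dg) for _ in range(i))
           for _ in range(trials)]

# Central limit theorem:
#   <gamma^(i)>      = i * <dgamma>
#   delta gamma^(i)  = sqrt(i) * delta(dgamma)
assert abs(mean(samples) / (i * mean_dg) - 1.0) < 0.02
assert abs(stdev(samples) / (i**0.5 * mean_dg) - 1.0) < 0.1
```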
It was predicted above that both $\gamma^{(i)}$ and $\tau^{(i)}$ are distributed normally for $i\gg 1$ [see Eqs.~(\ref{eq:gamma_i}, \ref{eq:delta_gamma_i}) and Eqs.~(\ref{eq:tau_i},\ref{eq:delta_tau_i})]. By ``inverting'' $\gamma^{(i)}$ to express $i$ at a given plastic strain $\gamma$, and then inserting $i(\gamma)$ into $\tau^{(i)}$, one obtains that for $i\gg 1$ the stress $\tau$ is distributed normally with \begin{gather} \langle \tau \rangle = \frac{\tau_0}{s_0^{1/\nu \zeta}} L^{(\xi/\zeta-\eta)/\nu} \gamma^{1/\nu \zeta}, \label{eq:tau}\\ \delta \tau = \tau_2 L^{(\xi/\zeta-\eta)/\nu - \xi/2 \zeta} \gamma^{1/\nu\zeta - 1/2\zeta}, \label{eq:delta_tau} \end{gather} where $\tau_2$ is an appropriate constant. This means that the average stress-strain curve starts as a power law and, if $\xi/\zeta \ne \eta$, retains a system size dependence even for very large systems. To exclude this nonphysical situation one requires \begin{equation} \xi = \eta \zeta. \label{eq:size_eff} \end{equation} In this case the average and the fluctuation of the stress-strain curve behave as \begin{gather} \langle \tau \rangle \propto \gamma^{\alpha}, \label{eq:tau2} \\ \delta \tau \propto L^{-\beta}, \label{eq:delta_tau_2} \end{gather} with \begin{gather} \alpha = 1/\nu \zeta, \\ \beta = \eta/2. \end{gather} Since $\eta$ is positive, stress fluctuations decrease with increasing system size; that is, one obtains a smooth stress-strain curve for very large samples, as expected.
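The elimination of $i$ leading to Eq.~(\ref{eq:tau}) can be verified with a few lines of arithmetic (illustrative parameter values, not fitted to the simulations): inverting the strain sequence and inserting it into the stress sequence, the $L$ dependence cancels exactly when $\xi = \eta\zeta$, and the power-law exponent of $\langle\tau\rangle(\gamma)$ equals $1/\nu\zeta$.

```python
from math import isclose, log

# Illustrative exponents satisfying the consistency relation xi = eta * zeta.
nu, zeta, eta = 1.4, 1.0, 2.0
xi = eta * zeta
tau0, s0 = 1.0, 1.0

def mean_stress(gamma, L):
    i = (gamma * L**xi / s0) ** (1.0 / zeta)      # invert <gamma^(i)>
    return tau0 * L**(-eta / nu) * i**(1.0 / nu)  # insert into <tau^(i)>

# With xi = eta * zeta the L dependence cancels ...
assert isclose(mean_stress(0.05, 64), mean_stress(0.05, 256))

# ... and the power-law exponent of <tau>(gamma) equals alpha = 1/(nu*zeta).
slope = ((log(mean_stress(0.08, 64)) - log(mean_stress(0.02, 64)))
         / (log(0.08) - log(0.02)))
assert isclose(slope, 1.0 / (nu * zeta))
```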
To summarize this section, the weakest-link assumption proposed by Derlet and Maa\ss \ was adopted for the stress sequence [Eqs.~(\ref{eq:tau_i}, \ref{eq:delta_tau_i})],\cite{derlet2015probabilistic} and a straightforward rule was proposed for the strain sequence which is able to capture the anomalous system size dependence of 2D DDD systems [Eqs.~(\ref{eq:gamma_i}, \ref{eq:delta_gamma_i})].\cite{ispanovity2014avalanches} The combination of the two sequences has led to statistical predictions for the stress-strain curves [Eqs.~(\ref{eq:tau}, \ref{eq:delta_tau})], which, in fact, coincide with the numerical findings described in Sec.~\ref{sec:results}. \section{Discussion} In the preceding two sections numerical results and a theory were presented that yield identical scaling forms for the average and the fluctuation of the individual stress-strain curves and of the stress/strain sequences in the microplastic regime. The exponents introduced to describe these quantities and their measured/predicted values are summarized in Table \ref{tab:example}. In the following discussion we highlight the most important consequences of these findings. First, we consider the main idea of the SCPM, namely that the material can be decomposed into local units each characterized by a yield threshold. This non-trivial assumption implies that the distribution of $\tau^{(1)}$, i.e., the stress at the onset of the first event, must follow a weakest-link distribution; the fact that for DDD a Weibull distribution was found to describe $P(\tau^{(1)})$ thus supports this fundamental hypothesis. In addition, it provides access to the individual link distribution for dislocation structures, since the shape parameter $\nu$ of the Weibull distribution unambiguously determines the asymptote of the underlying link distribution, in this case the power law of Eq.~(\ref{eqn:link_distrib}).
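The agreement between theory and numerics can be checked directly against the measured exponents of Table~\ref{tab:example}. The snippet below copies the table values (for the SCPM, the $\nu=1.4$ case is used) and verifies the three scaling relations within a tolerance covering the quoted error bars:

```python
# Measured exponents from the summary table; SCPM values for nu = 1.4.
scpm  = dict(nu=1.4, eta=2.0, zeta=1.0, xi=2.0, alpha=0.7, beta=1.0)
tcddd = dict(nu=1.4, eta=1.6, zeta=0.9, xi=1.5, alpha=0.8, beta=0.8)

def check(e, tol=0.15):
    """Verify xi = eta*zeta, alpha = 1/(nu*zeta), beta = eta/2 within tol."""
    assert abs(e['xi'] - e['eta'] * e['zeta']) < tol
    assert abs(e['alpha'] - 1.0 / (e['nu'] * e['zeta'])) < tol
    assert abs(e['beta'] - e['eta'] / 2.0) < tol

check(scpm)
check(tcddd)
```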
As such, the exponent $\nu$ emerges as a central parameter that also determines the power-law exponent of the plastic stress-strain relation ($\alpha=1/\nu\zeta$), that is, the amount of plasticity in the microplastic regime (see Fig.~\ref{fig:avg_stress_strain_SCPM}). The origin of $\nu=1.4$ for DDD systems is not addressed in this paper; it may be influenced by the internal structure of the dislocation system (slip systems, patterns, etc.). It is noted, however, that a similar analysis of the average stress-strain curves performed earlier on 3D DDD simulations and micropillar compressions yielded $\alpha \approx 0.8$ in both cases,\cite{ispanovity2010submicron,Ispanovity2013} hinting at some generality in the value of $\nu$ (with the straightforward assumption of $\zeta \approx 1$). Although the behavior of the stress sequence shows strong similarity between DDD and the SCPM, the exponent $\eta$ characterizing the system size dependence of the number of ``links'' $M$ of the system differs considerably. For the SCPM $\eta \approx 2$ was found, which corresponds to $M$ being proportional to the area of the 2D system, whereas for DDD a significantly smaller value of $\eta \approx 1.5$ was obtained, hinting at a fractal-like structure of the weakest regions of the system. In order to quantify this conjecture we consider the correlation integral $C(r)$ of the initiation points of the events, defined as the probability that the distance between two such arbitrarily chosen points is smaller than $r$. A fractal dimension $d$ can be defined from the asymptotic behavior $C(r) \propto r^d$. Indeed, according to Fig.~\ref{fig:corr_int}, $C(r)\propto r^\eta$ is a very good approximation both for the SCPM and for DDD.
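The correlation integral is straightforward to compute for any set of event positions. The sketch below (synthetic point sets, not the simulation data) recovers $d\approx2$ for area-filling points and $d\approx1$ for points confined to a line:

```python
import random
from math import hypot, log

random.seed(2)
N = 600

def corr_integral(points, r):
    """C(r): fraction of point pairs closer than r."""
    close, total = 0, 0
    for j in range(len(points)):
        for k in range(j + 1, len(points)):
            total += 1
            if hypot(points[j][0] - points[k][0],
                     points[j][1] - points[k][1]) < r:
                close += 1
    return close / total

def dimension(points, r1=0.02, r2=0.1):
    # Local log-log slope of C(r): an estimate of the fractal dimension d.
    return ((log(corr_integral(points, r2)) - log(corr_integral(points, r1)))
            / (log(r2) - log(r1)))

plane = [(random.random(), random.random()) for _ in range(N)]  # d = 2
line  = [(random.random(), 0.5) for _ in range(N)]              # d = 1

assert 1.7 < dimension(plane) < 2.1
assert 0.85 < dimension(line) < 1.05
```

Boundary effects push the measured slope slightly below the true dimension, which is why the tolerances above are one-sided.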
Although the explanation of this difference is beyond the scope of the present paper, we mention that the accumulation of plasticity on a fractal-like sub-domain of the system may explain the recent findings on the long-range nature of dynamical correlations and the peculiar critical behavior of 2D DDD systems.\cite{ispanovity2014avalanches} In addition, it echoes the experimental findings of Weiss \emph{et al.}, who found that during creep deformation of an ice single crystal AE signals initiate from a fractal sub-volume of the specimen with dimension $\sim$2.5.\cite{weiss2003three} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{Figures/Comparison/corr_int.pdf} \caption{\label{fig:corr_int} Correlation integral of the avalanche positions for the SCPM and TCDDD models. The measured data are consistent with $C(r)\propto r^\eta$, the latter indicated by the solid lines.} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{Figures/Comparison/stress_strain_comparison.pdf} \caption{\label{fig:comparison} Stress-strain curves obtained by the three different plasticity models. For the SCPM $\nu=1.4$, $\Delta \gamma^\text{pl} = 6$, and $\tau_w = 2$ were chosen.} \end{center} \end{figure} The plastic response of micron-scale samples usually shows strong size effects; for instance, the strength of micro- and nano-pillars may increase by two orders of magnitude as their size is reduced. The models of this paper, on the other hand, exhibit only very weak size effects, with a maximum increase of $\sim$10\% in the measured average stress [see Figs.~\ref{fig:avg_stress_strain_SCPM}(a) and \ref{fig:avg_stress_strain_DDD}]. One thus concludes that the strong stress increase observed in experiments is not due to the finite dislocation content, but is the result of the free surfaces, through which a significant fraction of dislocations can leave the crystal.
In summary, the two models behave similarly in the microplastic regime, where the simple plasticity theory introduced above, which neglects internal correlations and the spatial extent of avalanches, is able to properly describe the fluctuating plastic response. More is true, however: the average and the scatter of the stress-strain curves obtained by the SCPM and DDD remain quite similar at moderately large strains, too, where the plasticity model is clearly not applicable due to strong internal correlations (that is, in the region $0.1\lesssim \gamma \lesssim 10$ in Fig.~\ref{fig:comparison}). It thus seems that the SCPM can correctly capture the internal stress and strain correlations developing upon increasing strain. At very large deformations ($\gamma \gtrsim10$), however, specific dislocation patterns develop in the DDD models with characteristic scales comparable to the system size,\cite{zhou2015dynamic,PhysRevB.91.054106} which the SCPM cannot account for. Therefore, similarity between the models is not expected in this regime. \section{Summary} In this paper we have demonstrated that the SCPM introduced earlier is able to quantitatively describe the stochastic properties of crystalline plasticity. Using a simple theoretical model for the microplastic regime, based on the subsequent activation of the weakest links in the sample, we derived a method to calibrate the parameters of the SCPM based on lower-level DDD simulations. The proposed methodology not only represents a bridge between micro- and meso-scales, but also gives insight into the nature of the stochastic processes characterizing plasticity. The current paper has focused on crystal plasticity and a simple 2D DDD representation, but the authors do not see any reason why the proposed plasticity model and the multi-scale methodology would not be applicable to more involved DDD models or to amorphous materials.
The verification of this conjecture is left to future work, and is expected to open new perspectives in the applicability of stochastic continuum plasticity models. \section*{Acknowledgments} PDI thanks Peter Derlet for fruitful discussions. Financial support from the Hungarian Scientific Research Fund (OTKA) under contract numbers K-105335 and PD-105256 and from the European Commission under grant agreement No.~CIG-321842 is also acknowledged. PDI is also supported by the J\'anos Bolyai Scholarship of the Hungarian Academy of Sciences.